\section*{Appendices}%
\refstepcounter{appendix}%
\setcounter{section}{0}%
\setcounter{lemma}{0}
\setcounter{theorem}{0}
\setcounter{definition}{0}
\setcounter{corollary}{0}
\setcounter{equation}{0}
\@addtoreset{equation}{section}
\renewcommand\thesection{\Alph{section}}%
\renewcommand\thesubsection{\Alph{section}.\arabic{subsection}}%
\renewcommand\theequation{\Alph{section}.\arabic{equation}}
\def\@seccntformat##1{{\upshape{\csname the##1\endcsname}.}\hskip .5em}
}%
\makeatother
\newcommand{\R}{\mathbb{R}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\N}{\mathbb{N}}
\DeclareMathAlphabet{\vecfont}{OT1}{cmr}{bx}{it}
\renewcommand*{\vec}[1]{\vecfont{#1}}
\DeclarePairedDelimiter\bra{\langle}{\rvert}
\DeclarePairedDelimiter\ket{\lvert}{\rangle}
\DeclarePairedDelimiterX\mel[3]{\langle}{\rangle}
{ #1 \delimsize\vert \mathopen{}#2 \delimsize\vert \mathopen{}#3 }
\DeclareMathOperator{\erf}{erf}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclarePairedDelimiterX\norm[1]{\lVert}{\rVert}{#1}
\DeclarePairedDelimiterX\abs[1]{\lvert}{\rvert}{#1}
\renewcommand{\thefootnote}{\ast}
\newcommand*{\Eqref}[1]{Eq.~\eqref{#1}}
\newcommand*{\D}{\mathrm{d}}
\makeatletter
\renewcommand{\ps@plain}{%
\renewcommand{\@oddhead}{\hfil\footnotesize%
\raisebox{30pt}[0pt][0pt]{\parbox{300pt}{\centering%
A contribution to the Proceedings of the\\{}%
Workshop on Density Functionals for Many-Particle Systems\\{}
2--29 September 2019, Singapore}}\hfil}%
\renewcommand{\@evenhead}{\@oddhead}%
\renewcommand{\@oddfoot}{\hfil\footnotesize%
\raisebox{-8pt}[0pt][0pt]{\thepage}\hfil}%
\renewcommand{\@evenfoot}{\@oddfoot}%
}
\makeatother
\renewcommand{\trimmarks}{\relax}
\begin{document}
\chapter{FLEIM:\\ A stable, accurate and robust extrapolation method at
infinity for computing the ground state of electronic Hamiltonians}
\author{\'Etienne Polack}
\address{Universit\'e
Bourgogne Franche-Comt\'e and CNRS,\\
Laboratoire de Math\'ematiques de Besan\c con, F-25030 Besan\c con,
France\\[0.5ex]
etienne.polack@math.cnrs.fr}
\author{Yvon Maday}
\address{Sorbonne Universit\'e, CNRS, Universit\'e de Paris,\\
Laboratoire Jacques-Louis Lions (LJLL), F-75005 Paris,\\
and Institut Universitaire de France\\[0.5ex]
yvon.maday@sorbonne-universite.fr}
\author{Andreas Savin}
\address{CNRS and Sorbonne Universit\'e,\\
Laboratoire de Chimie Th\'eorique (LCT),
F-75005 Paris, France\\[0.5ex]
andreas.savin@lct.jussieu.fr}
\markboth{\'Etienne~Polack, Yvon~Maday, and Andreas~Savin}%
{FLEIM for electronic Hamiltonians}
\begin{abstract}
The Kohn--Sham method uses a single model system, and corrects it by a density functional
whose exact, user-friendly expression is not known and is replaced by an
approximate, usable model. We propose to use instead more than one model system, and to
correct the results of the model systems with a greedy extrapolation method. Evidently,
there is a higher price to pay for it. However, there are also gains: within the same
paradigm, excited states and physical properties, for example, can be obtained.
\end{abstract}
\vspace*{12pt}
\section{Introduction}
\subsection{Motivation}
Density functional theory (DFT) has a weak point: its approximations (DFAs).
First, the Hohenberg--Kohn theorem tells us that there is a density functional for electronic
systems, $F[\rho]$, that is universal (that is, independent of the potential of the nuclei),
but does not give us a hint on how systematic approximations can be constructed.
In practice, models are produced to be fast in computations, typically by transferring
properties from other systems, like the uniform electron gas.
Second, the most successful approximations use the Kohn--Sham method (introducing a
fermionic wave function), which decomposes $F[\rho]$ into the kinetic energy,
the Hartree energy and an
exchange-correlation energy contribution, although the question of how and what part of
$F[\rho]$ should be approximated is, in principle, open.
In the present contribution we change this paradigm entirely, still guided by the
issue of universality.
Let us start with a physical consideration.
When electrons are close, the Coulomb repulsion is so strong that some of its features
dominate over the effect of the external potential.
This is also reflected mathematically in the short-range behavior of the wave function, as
present in the Kato cusp
condition~\cite{Kat-CPAM-57,FouHofHofOst-CMP-09,Yse-10,FlaFlaSch-20}, and in higher-order
terms~\cite{RasChi-JCP-96a,KurNakNak-AQC-16}.
We further note that approximating numerically the short-range part of the wave function
needs special care, due to the singularity of the Coulomb interaction when the electrons are
close.
The considerations above, and the independence of the interaction between electrons from
that between them and the external potential, provide a basis for constructing
approximations.
Thus, we propose to solve accurately a Schr\"odinger equation with a Hamiltonian that is
modified to eliminate the short-range part of the interaction between the electrons, which is
one of the difficult parts in numerical simulations.
The way to do it is not unique, and we try to turn this to our advantage: we use several
models, and from them we try to extrapolate to the physical system~\cite{Sav-JCP-11}.
In other words, we follow an ``adiabatic connection'' (see~\cite{HarJon-JPF-74}),
without ever constructing a density functional. This new paradigm thus explores the
possibility to replace the use of DFAs by mathematically controlled approximations: we make
density functional theory ``without density functionals.''
\enlargethispage{0.8\baselineskip}
Our approach has introduced an additional difficulty nonexistent in the Kohn--Sham method:
the long-range part of the interaction has to be treated accurately, and not only its
electrostatic component.
One may ask whether this additional effort is justified, and whether one gains anything with respect
to a calculation where the physical (Coulomb) interaction is used.
For a single calculation, the gain is due to the lack of singularity in the interaction
expressed by a weak interaction potential allowing for
simplified treatments, such as perturbation theory.
However, as the extrapolation to the physical system needs more than one point, it is
essential that the number of points stays very small, and the interaction weak.
\subsection{Objective and structure of the paper}
We first choose, in Sec.~\ref{sec:2.1}, a family of (parameter-dependent) model Hamiltonians that is more flexible
than using only the Kohn--Sham (noninteracting) Hamiltonian.\footnote{Note however that this
is at the price of working in $\R^{3N}$ instead of $\R^3$, and thus requiring accurate
many-body, e.g., configuration interaction, calculations.}
This is followed by a description of how universality is introduced, namely by analyzing how
a nonsingular interaction approaches the Coulomb one, and not by transfer from other
systems, as usually done in DFAs.
The physical system of interest is one among the parameter-dependent models, corresponding to
some precise value of the parameter;
in Sec.~\ref{sec:2.2} its solution is extrapolated from the solutions of the
models for other values of the parameter, with the expectation that those solutions are
simpler to approximate.
This extrapolation is efficiently handled in the general framework
of model reduction methods, more precisely by a variation of
the \emph{Empirical Interpolation Method}~\cite{barrault_empirical_2004}.
We believe that such an approach can not only clarify what DFAs are really doing, but can
also evolve toward use in applications.
Some arguments supporting this statement are given.
However, in this paper numerical examples (gathered in Sec.~\ref{sec:3}) are
only presented for two-electron
systems that are numerically (and sometimes even analytically) easily accessible: the
harmonium, the hydrogen anion, H\textsuperscript{--}, and the hydrogen molecule, H\textsubscript{2}{} in the ground
state, at the equilibrium distance.
As we do not use the Hohenberg--Kohn theorem, the technique can be applied without
modification also to excited states.
We provide in Sec.~\ref{sec:3.5}, as an example, the first excited state of
the same symmetry as the ground state.
Some conclusions and perspectives are presented in Sec.~\ref{sec:4}.
Finally, in order to facilitate reading the manuscript, various details are given in
Appendices \ref{app:DFT}--\ref{app:change-to-cm-12}
that follow Sec.~\ref{sec:4}.
\section{Approach}
\subsection{The model Schr\"odinger equation}\label{sec:2.1}
We study a family of Schr\"odinger equations,
\begin{equation}
\label{eq:schroedinger}
H(\mu) \Psi(\mu) = E(\mu) \Psi(\mu),
\end{equation}
where $\mu$ is some nonnegative parameter.
More precisely, in this paper, we use
\begin{equation}
\label{eq:H}
H(\mu) = T + V + W(\mu),
\end{equation}
where $T$ is the operator for the kinetic energy, $V$ is the external potential (in
particular that of the interaction between nuclei and electrons) and $W(\mu)$ represents the
interaction between electrons. Although not required by the general theory, in this paper we
introduce the dependence on $\mu$ only by modifying the interaction between electrons,
\begin{equation}
\label{eq:W}
W(\mu) = \sum_{i<j} w(r_{ij},\mu),
\end{equation}
choosing
\begin{equation}
\label{eq:werf}
w(r_{ij},\mu) = \frac{\erf(\mu r_{ij})}{r_{ij}}
\end{equation}
where $r_{ij}=\abs{\vecfont{r}_i - \vecfont{r}_j}$ is the distance between electron $i$ (at position
$\vecfont{r}_i$) and electron $j$ (at position $\vecfont{r}_j$). Finally, the external potential $V$ is
written like
\begin{equation}
V = \sum_{i=1}^N v(\vecfont{r}_i),
\end{equation}
where $v$ is a local one-particle operator. Note that the $N$-particle operators are
denoted by upper case letters, while the one-particle operators are denoted by lower case
letters.
Note also that for $\mu=0$ we have a trivial noninteracting system, while for $\mu=\infty$
we recover the Coulomb system.
The operator $w$ is long-ranged: as $\mu$ increases, the Coulomb interaction $1/r_{12}$
is recovered starting from large distances.
The first reason for this choice is that, as mentioned above, we expect a universal
character for short range (this is related to the difficulty of common DFAs to correctly
describe long-range contributions, cf.\ Appendix~\ref{app:DFT}).
The second reason is that the solution of~\Eqref{eq:schroedinger} converges more rapidly
with (conventional) basis set size when the interaction has no singularity at $r_{12}=0$.
In principle, introducing a dependence of the one-particle operators ($T$ and $V$) on $\mu$
makes the formulas a bit more clumsy, but does not introduce important difficulties in the
application of the method.
Using such a dependence might improve the results, but it is not discussed in this contribution.
In the following, in order to simplify notation, we drop the argument $\mu$, when
$\mu=\infty$, e.g., $E=E(\mu=\infty)$.
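As a minimal numerical sketch (an illustration, not part of the electronic-structure machinery), the interaction of \Eqref{eq:werf} is bounded, with the finite limit $2\mu/\sqrt{\pi}$ at $r_{12}=0$ following from the Taylor expansion of $\erf$, and it recovers $1/r_{12}$ once $\mu r_{12}$ is large:

```python
import math

def w(r, mu):
    """Regularized interaction erf(mu*r)/r of Eq. (werf); finite at r = 0."""
    if r == 0.0:
        return 2.0 * mu / math.sqrt(math.pi)  # Taylor limit of erf(mu*r)/r
    return math.erf(mu * r) / r

mu = 1.0
# Bounded at coalescence by the r -> 0 limit 2*mu/sqrt(pi) ...
assert abs(w(1e-8, mu) - 2.0 * mu / math.sqrt(math.pi)) < 1e-9
# ... and indistinguishable from Coulomb 1/r once mu*r is large:
assert abs(w(10.0, mu) - 1.0 / 10.0) < 1e-12
```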
\subsection{The correction to the model}\label{sec:2.2}
\subsubsection{Using a basis set}\label{Using_a_basis_set}
Of course, solving the Schr\"odinger equation for the model, \Eqref{eq:schroedinger} with
finite $\mu$s, does not provide the desired solution, i.e., the one that is obtained for
$\mu=\infty$.
We thus need to estimate the difference in eigenvalues:
\begin{equation}
\label{eq:ebar}
\bar{E}(\mu) = E - E(\mu).
\end{equation}
Since $\bar{E}(\mu)$ tends to zero at infinity, the idea is to first expand this difference
$\bar{E}(\mu)$ in a basis (of functions that tend to zero at infinity), retaining $M$ terms,
\begin{equation}
\label{eq:e-basis}
\bar{E}(\mu) \approx \bar{E}_M(\mu) = \sum_{j=1}^M c_j \chi^{\ }_j(\mu),
\end{equation}
leading to
\begin{equation*}
E(\mu) \approx E - \sum_{j=1}^M c_j \chi^{\ }_j(\mu),
\end{equation*}
or, more precisely, since $E$ is not known, we replace it by an approximation denoted as
$E_M$,
\begin{equation}
\label{eq:e-mu-basis}
E(\mu) \approx E_M - \sum_{j=1}^M c_j \chi^{\ }_j(\mu).
\end{equation}
The idea then proceeds by determining the unknown value $E_M$ and the coefficients $c_j$ from
$M+1$ values of $E(\mu_m)$, for~$m=0, \dotsc, M$, for an appropriate choice of the parameter
values $\mu_m$.
Finally, taking into account that the functions $\chi^{\ }_j$ tend to zero at infinity, the
proposed approximation for $E$ is $E_M$.
Of course, this extrapolation approach often fails if enough care is not taken in the choice
of the functions $\chi^{\ }_j$, $1\le j\le M$, and of the values $\mu_m$, for~$m=0, \dotsc, M$.
First, one has to decide about their form.
Second, one has to find a way to keep $M$ as small as possible to reduce computational cost
while preserving a good accuracy.
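The determination of $E_M$ and of the coefficients $c_j$ from $M+1$ sample values amounts to solving a small square linear system. A minimal sketch (in Python with NumPy, on synthetic data; the basis is the ansatz used later in the paper, and the sample values are hypothetical):

```python
import numpy as np

def chi(j, mu):
    """chi_0 = 1 (constant); for j >= 1 the ansatz of Eq. (basis),
    1 - j*mu*(1 + j^2 mu^2)^(-1/2), which vanishes at mu = infinity."""
    if j == 0:
        return 1.0
    return 1.0 - j * mu / np.sqrt(1.0 + (j * mu) ** 2)

def extrapolate(mus, energies):
    """Solve E(mu_m) = E_M - sum_{j=1..M} c_j chi_j(mu_m) for (E_M, c_1..c_M).

    With M+1 points the system is square; E_M is the proposed estimate of
    E(mu = infinity), since every chi_j with j >= 1 vanishes there."""
    M = len(mus) - 1
    A = np.array([[1.0] + [-chi(j, mu) for j in range(1, M + 1)] for mu in mus])
    return float(np.linalg.solve(A, np.asarray(energies, dtype=float))[0])

# Synthetic check: a curve lying exactly in the span of the basis
# (E_M = -1, c_1 = 0.3, c_2 = 0.1) must be extrapolated exactly.
mus = [0.5, 1.0, 2.0]
energies = [-1.0 - 0.3 * chi(1, m) - 0.1 * chi(2, m) for m in mus]
assert abs(extrapolate(mus, energies) - (-1.0)) < 1e-10
```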
\subsubsection{Approaching the Coulomb interaction}
As recalled above, the short-range behavior of the solutions of differential equations is
determined, to leading order, by the singularities; here, by the Coulomb interaction between
the electrons.
The interaction $w$ in \Eqref{eq:werf} has no singularity at $r_{12}=0$, for any finite $\mu$.
However, as the parameter $\mu$ increases, $w(\cdot, \mu)$ approaches the singular Coulomb
potential.
In order to see how this limit is approached, let us perturb the exact solution.
To first order, the perturbation correction to the energy is given by
\begin{equation}
\label{eq:pert}
\bar{E}(\mu) = \mel*{\Psi}{\bigl(W - W(\mu)\bigr)}{\Psi},
\quad \text{for \( \mu \rightarrow \infty \)}.
\end{equation}
By changing the integration variables $\vecfont{r}_i$ to $\mu \vecfont{r}_i$ we see that
\begin{equation}
\label{eq:approach}
\bar{E}(\mu) \propto \mu^{-2}
\quad \text{as \( \mu \rightarrow \infty \)},
\end{equation}
providing a leading behavior that we want the basis functions $\chi^{\ }_j$ to reflect.
It is possible to continue this analysis for higher order terms.
In fact, the next term (in $\mu^{-3}$) has a coefficient proportional to that of $\mu^{-2}$,
the proportionality coefficient being determined by the nature of the Coulomb
singularity~\cite{GorSav-PRA-06}.
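The origin of the $\mu^{-2}$ behavior can be sketched heuristically for a single electron pair (denoting, only here, by $P(r)$ the system-dependent distribution of the interelectronic distance in $\Psi$): writing $W - W(\mu) = \sum_{i<j} \operatorname{erfc}(\mu r_{ij})/r_{ij}$ and substituting $u = \mu r$ in \Eqref{eq:pert} gives
\begin{equation*}
\bar{E}(\mu) = \int_0^\infty \frac{\operatorname{erfc}(\mu r)}{r}\, P(r)\, 4\pi r^2 \,\D r
= \frac{4\pi}{\mu^{2}} \int_0^\infty u \operatorname{erfc}(u)\, P(u/\mu) \,\D u
\;\xrightarrow[\mu\to\infty]{}\; \frac{\pi P(0)}{\mu^{2}},
\end{equation*}
using $\int_0^\infty u \operatorname{erfc}(u) \,\D u = 1/4$: the coefficient of $\mu^{-2}$ is governed by the pair density at coalescence, consistent with the universality argument.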
\subsubsection{Choice of the basis functions}
In the main part of this contribution we use a simple ansatz,
\begin{equation}
\label{eq:basis}
\tilde \chi_j^{\ }(\mu)=1- j \; \mu (1+ j^2 \; \mu^2)^{-1/2},
\quad j=1, \dotsc, M,
\end{equation}
that indeed respects the condition of \Eqref{eq:approach}.
The motivation for this specific choice, which is arbitrary to a certain degree, as well as
some results obtained with other choices of basis functions, is given in
Appendix~\ref{app:basis}.
The first functions of this basis set are presented in Fig.~\ref{fig:basis}, together with
an example of a function they have to approximate. It illustrates that the function we want to
describe lies between basis function $\tilde\chi_2^{\ }$ --- for small $\mu$ ---
and basis function $\tilde\chi_3^{\ }$ --- for
large $\mu$.
However, a simple linear combination of these (only) two surrounding
basis functions from the family in
\Eqref{eq:basis} does not improve the accuracy much; of course, more (and more
appropriate) functions in the family can (and should) be called upon.
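One can check numerically that each $\tilde\chi_j$ decays as $1/(2j^2\mu^2)$ for large $\mu$, consistent with \Eqref{eq:approach}; a small sketch (the expansion follows from $1 - (1+x^{-2})^{-1/2} \approx x^{-2}/2$ with $x = j\mu$):

```python
import math

def chi(j, mu):
    """Ansatz of Eq. (basis): 1 - j*mu*(1 + j^2 mu^2)^(-1/2)."""
    return 1.0 - j * mu / math.sqrt(1.0 + (j * mu) ** 2)

# Each basis function decays as 1/(2 j^2 mu^2) at large mu, matching the
# mu^-2 leading behavior of Eq. (approach).
for j in (1, 2, 3):
    mu = 50.0
    assert abs(chi(j, mu) * (2 * j ** 2 * mu ** 2) - 1.0) < 1e-3
```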
\begin{figure}[htb]
\centering
\includegraphics{PMSfig1.pdf}
\caption{Basis functions $\tilde \chi^{\ }_j$ of \Eqref{eq:basis}, continuous
curves with the color corresponding
to $j$; and an (unknown) function to be approximated by linear combination
on this basis (dot-dashed, gray).
The unknown function in this figure is proportional to $\bar{E}(\mu)$
of harmonium.\label{fig:basis}}
\end{figure}
\subsubsection{Reducing the basis set}
Using a large set of $\chi_j^{\ }$ (a large $M$) can rapidly become computationally prohibitive
(because it requires a large number of evaluations of $E(\mu_m)$, for~$m=0,\dots, M$) and
numerically unstable (because it is classically much more difficult to stabilize
extrapolation than interpolation).
In order to reduce their number and increase the stability of the extrapolation, we use a
greedy (iterative) method, as in the \emph{Empirical Interpolation Method} (EIM) leading to
proper choices of $\mu_m$, for~$m=0,\dots, M$ known as ``magic points.''
In the $K$th iteration of EIM, one starts from a set of $K-1$ basis functions (for us,
$\tilde \chi^{\ }_j$) and $K-1$ points (for us, $\tilde \mu_j$, belonging to some (discretized) interval, say,
close to zero, to benefit most from the regularization of the $\erf$ function). One then
chooses the $K$th function $\tilde \chi^{\ }_{K-1}$ (among the remaining $M-K$ basis
functions) as the one that is most poorly approximated by the current interpolation
(based on the $K-1$ basis functions and the $K-1$ points), in a sense dedicated to the final
goal we want to achieve (which can be uniform error, error on some part of the domain, or
even error at some value), and the $K$th point $\tilde \mu_{K-1}$ as the one that, in the
admissible set, brings the most information.
In this contribution, as we are only interested in extrapolating to \( \mu = \infty \),
we choose as the final goal the absolute value of the difference between the $K$th
basis function and its interpolant at infinity.
Note that the procedure selecting the next point and function does not make any use of the
function to be approximated (here $\bar{E}$).
It is thus a cheap step compared with the calculation of $E(\tilde \mu_m)$ on the system of
interest.
To improve the results for the extrapolation, we have modified the EIM algorithm into what
we call the \emph{Forward Looking} EIM (FLEIM).
While EIM tries to get the maximal improvement through a sequential choice of, first the new
basis function, then the new point of interpolation, FLEIM tries to get the best pair for
improvement in the selected goal.
The method is explained in more detail in Appendix~\ref{app:EIM}.
In what follows, we present the results of FLEIM, as they are better and more stable than
those of EIM, as illustrated in Appendix~\ref{app:details-calc1}.
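A compact sketch of the forward-looking greedy selection may help fix ideas. This is our reading of the idea, not the authors' exact algorithm (which is given in their Appendix): here the look-ahead criterion is taken, as an assumption, to be the worst extrapolation error at infinity over the candidate basis functions not yet selected (whose exact value there is zero).

```python
import numpy as np

INF = np.inf

def chi(j, mu):
    """chi_0 = 1 (constant); for j >= 1 the ansatz of Eq. (basis),
    which vanishes at mu = infinity."""
    if j == 0:
        return 1.0
    if np.isinf(mu):
        return 0.0
    return 1.0 - j * mu / np.sqrt(1.0 + (j * mu) ** 2)

def extrap_inf(sel_j, sel_mu, values):
    """Value at mu = infinity of the interpolant, on the selected basis
    functions (indices sel_j), of the data `values` given at points sel_mu."""
    A = np.array([[chi(j, mu) for j in sel_j] for mu in sel_mu])
    c = np.linalg.solve(A, np.asarray(values, dtype=float))
    return float(sum(ci * chi(j, INF) for ci, j in zip(c, sel_j)))

def fleim_select(cand_j, cand_mu, n):
    """Forward-looking greedy pair selection (a sketch). Start from the
    constant function and the largest admissible mu; then repeatedly add
    the (function, point) pair minimizing the assumed look-ahead goal."""
    sel_j, sel_mu = [0], [max(cand_mu)]
    cand_j = [j for j in cand_j if j != 0]
    cand_mu = [m for m in cand_mu if m != sel_mu[0]]
    while len(sel_j) < n:
        best = None
        for j in cand_j:
            for mu in cand_mu:
                # Test functions: candidates left outside the enlarged set
                # (fall back to the next unused index if none remain).
                tests = [k for k in cand_j if k != j] or [max(cand_j) + 1]
                err = max(abs(extrap_inf(sel_j + [j], sel_mu + [mu],
                                         [chi(k, m) for m in sel_mu + [mu]]))
                          for k in tests)
                if best is None or err < best[0]:
                    best = (err, j, mu)
        _, j, mu = best
        sel_j.append(j)
        sel_mu.append(mu)
        cand_j.remove(j)
        cand_mu.remove(mu)
    return sel_j, sel_mu
```

Note that, as in the text, the selection never uses the function to be approximated; only the basis family itself.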
\subsection{Computing other physical properties}
FLEIM can be used to approximate other physical properties, i.e., to correct expectation values
of operators $A \ne H$ obtained with the model wave functions, $\Psi(\mu)$,
\begin{equation}
A(\mu) = \bra{\Psi(\mu)} A \ket{\Psi(\mu)}.
\end{equation}
This can be seen immediately by noting that the derivation in
Sec.~\ref{Using_a_basis_set} is not specific for correcting $E(\mu)$, but can
also be applied to $A(\mu)$.
For the choice of the basis functions, we point out that properties are obtained by
perturbing the Hamiltonian with the appropriate operator, say, $A$,
\begin{equation}
H \rightarrow H(\lambda) = H + \lambda A.
\end{equation}
The expectation value of $A$ can be obtained as the derivative of $E(\lambda)$ w.r.t.
$\lambda$, at $\lambda = 0$. Of course, this procedure can be applied to model Hamiltonians,
yielding $E(\lambda,\mu)$ and
\begin{equation}
\bra{\Psi(\mu)} A \ket{\Psi(\mu)} = \partial_\lambda E(\lambda,\mu)\bigr|_{\lambda=0}.
\end{equation}
Thus, in this contribution, we use the same type of basis functions for $A(\mu)$ as for $E(\mu)$;
see the results in Sec.~\ref{sec:expectation_values}.
Note that computing $\bra{\Psi(\mu)} A \ket{\Psi(\mu)}$ is not possible in
DFT, without having a property-specific density functional~\cite{Bau-PRB-83}.
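The derivative with respect to $\lambda$ above can be sketched by a central finite difference. A hedged toy illustration: the callable `E` stands for a hypothetical model energy $\lambda \mapsto E(\lambda,\mu)$ at fixed $\mu$, and the quadratic test function is synthetic.

```python
def expectation_from_energy(E, lam=1e-5):
    """Central finite difference for dE/d(lambda) at lambda = 0, i.e. the
    expectation value <A> via the Hellmann-Feynman relation, given a
    callable E(lam) for the perturbed Hamiltonian H + lam*A."""
    return (E(lam) - E(-lam)) / (2.0 * lam)

# For a quadratic toy model E(lam) = E0 + a*lam + b*lam^2, <A> = a exactly
# (the central difference is exact on quadratics, up to rounding):
assert abs(expectation_from_energy(lambda l: -1.0 + 0.7 * l + 3.0 * l ** 2)
           - 0.7) < 1e-6
```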
\section{Numerical results}\label{sec:3}
\subsection{Guidelines}
The quality of the corrections using Eqs.~\eqref{eq:e-basis} and~\eqref{eq:basis} is
explored numerically.
Technical details on the calculations are given in Appendix~\ref{app:details-calc}.
The plots show the errors made by the approximations in the estimate of the energy: we
choose a model, $\mu\mapsto E(\mu)$, and let the empirical interpolation method choose which
easier models (with weaker interactions) to use in order to extrapolate and get an
approximation for $E=E(\mu=\infty)$.
The plots show the error in the estimate of $E$ made when considering approximations that
use information only for $\tilde \mu_m \le \mu$.
From the plots, we read off how small $\mu$ can be and still have ``reasonable'' accuracy.
In thermochemistry, a commonly used unit is the \si{\kilo\calorie\per\mole}, and this is often
considered as ``chemical accuracy.''
For electronic excitations, one often uses \si{\eV} units, quoted with
one decimal place.
``Chemical accuracy'' is marked in the plots by horizontal dotted lines.
The plots show the errors in the range of
$\pm \SI{0.1}{\eV} \approx \SI{2.3}{\kilo\calorie\per\mole}$.
We consider approximations using up to four points (thus chosen in $[0, \mu]$).
The first point $\mu_0$ is always the chosen value $\mu_0 = \mu$ shown on the $x$-axis of
the plots, and the basis function associated with it is $\tilde \chi^{\ }_0$, the constant function; note that using only this pair $(\tilde\chi^{\ }_0, \mu_0)$ corresponds to choosing $E\simeq E(\mu_0)$,
the value provided by the model, i.e., no correction is applied.
When the number of points is increased, further values of $E(\tilde \mu_m)$, chosen by the
algorithm, are used with $\tilde \mu_m < \mu$.
The (maximal) parameter $\mu$ is considered between \num{0}~and~\SI{3}{\per\bohr}.
The model without correction (blue curve) reaches chemical accuracy for $\mu \approx \SI{3}{\per\bohr}$ for
H\textsuperscript{--}{} and harmonium, but only at $\mu \approx \SI{5}{\per\bohr}$ for H\textsubscript{2}{} in its
ground state.
\subsection{General behavior of errors}
\begin{figure}[p]
\centering
\includegraphics[viewport=140 165 450 635,clip=]{PMSfig2.pdf}
\caption{\label{fig:was2-4}%
Errors for harmonium (top), H\textsubscript{2}{} (middle), and H\textsuperscript{--}{} (bottom)
using FLEIM with one to four points (1: blue curve, 2: brown
curve, 3: green curve, 4: red curve).
The abscissa represents the biggest~\( \mu \)
allowed for use in the FLEIM algorithm.
The error of the model without correction (blue curve) does not show up
in the figure for the H\textsubscript{2}\ molecule because it is larger than the
domain covered by the plot.}
\end{figure}
The plots in Fig.~\ref{fig:was2-4} for harmonium, H\textsubscript{2}, and H\textsuperscript{--}{} have similar features and are discussed together.
As the number of points used increases, the smallest value of $\mu$ for which good
accuracy is reached decreases.
Note that FLEIM produces very small errors for values of $\mu$ larger than 2.
However, with the chosen basis set, the algorithm presented in this contribution has difficulties
correcting the errors for $\mu$ smaller than 1.
\subsection{Possibility of error estimates}
Some tests can be done to estimate the quality of the approximation.
For example, we can compare how the approximations change when increasing the
number of basis functions, $K$, in our approximation and consider
${\abs{E_{K} - E_{K-1}}}$ as an \emph{asymptotically} valid error estimate for
$E_{K-1}$.
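This successive-difference estimate is trivial to implement; a small sketch (the numerical sequence below is hypothetical):

```python
def error_estimate(approximations):
    """Successive differences |E_K - E_{K-1}|, each taken as an
    (asymptotically valid) error estimate for E_{K-1}."""
    return [abs(b - a) for a, b in zip(approximations, approximations[1:])]

# Hypothetical sequence of K-point approximations converging to some E:
est = error_estimate([-1.10, -1.02, -1.001])
assert abs(est[0] - 0.08) < 1e-12 and abs(est[1] - 0.019) < 1e-12
```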
One can notice in the above figures that, when the difference between, say,
the 2- and the 3-point approximation error is larger than ``chemical
accuracy,'' so is the error in the 2-point approximation.
\subsection{Expectation values with FLEIM: $\langle r_{12} ^{\ }\rangle$ and
$\langle r_{12} ^2\rangle$ for harmonium}\label{sec:expectation_values}
\begin{figure}[t]
\centering
\includegraphics[viewport=140 295 450 660,clip=]{PMSfig3.pdf}
\caption{\label{fig:was5-6}%
Errors made for the expectation value of the distance between electrons (top)
and the distance squared (bottom) for harmonium, by using a model wave
function, $\Psi(\mu)$, and after correcting with FLEIM (the different curves
correspond to the number of points used). The insets zoom in.}
\end{figure}
We look at the average distance between the electrons in harmonium.
Figure~\ref{fig:was5-6}(top) shows the error made by using $\Psi(\mu)$ instead
of $\Psi(\mu=\infty)$ in computing the expectation value of $r_{12}$, as well
as the correction that can be achieved with FLEIM, using the same basis set as
above (\ref{eq:basis}).
The inset in Figure~\ref{fig:was5-6}(top) concentrates on the errors made in
the region that could be considered chemically relevant
(\SI{1}{\pico\meter}~$\approx$~\SI{0.02}{\bohr}).
We note the similarity with the behavior observed in correcting $E(\mu)$.
Let us now examine the average square distance between the electrons,
$\langle r_{12}^2 \rangle$, in harmonium.
While for computing the energy we explored correcting the missing short-range part of the
interaction, we now ask whether it is possible to correct the error of using the model wave
function, $\Psi(\mu)$ for the expectation value of an operator that is important at long
range.
For $\omega=1/2$, we know the exact values of the expectation value of $r_{12}^2$ at $\mu=0$
and $\mu=\infty$; they are 6 and $ (42\sqrt{\pi}+64)/(5\sqrt{\pi}+8) \approx 8.21$,
respectively (see, e.g., Ref.~\cite{Kin-96}).
Note the large effect of the model wave function, $\Psi(\mu)$, in computing
$\langle r_{12}^2 \rangle$.
Figure~\ref{fig:was5-6}(bottom) shows the error made by using $\Psi(\mu)$ instead of
$\Psi(\mu=\infty)$ in computing the expectation value of $r_{12}^2$, as well as the effect
of the correction that can be achieved with FLEIM, using the same basis set (\ref{eq:basis})
as above.
We note again the similarity with the behavior observed in correcting $E(\mu)$ or the
expectation value of the distance between electrons.
The expectation value $\langle r_{12}^2 \rangle$ also illustrates another aspect: the effect
of a change of the external potential on the energy.
At first sight this may seem surprising, as the external potential is a one-particle
operator, while $r_{12}^2$ is a two-particle operator.
However, changing the one-particle operator also modifies the wave function
and this affects the value of $\langle r_{12}^2 \rangle$.
In the case of harmonium, this can be shown analytically.
Changing $\vecfont{r}_1$ and $\vecfont{r}_2$ to center-of-mass, $\vecfont{R}$, and inter-particle distance,
$\vecfont{r}_{12}$, cf.\ Appendix~\ref{app:change-to-cm-12}, allows us to see explicitly that a
modification of $\omega^2$, the parameter that specifies the external potential, affects the
Sch\"odinger equation in $\vecfont{r}_{12}$.
It introduces a term proportional to $\omega^2 r_{12}^2 $.
The first order change in the energy when we change the external potential ($\omega^2$) is
thus proportional to $\langle r_{12}^2 \rangle$.
Our results in Fig.~\ref{fig:was5-6}(bottom) show that our conclusions on
model corrections are not modified by small changes in the external potential.
Note that the center-of-mass Schr\"odinger equation also depends on
$\omega^2$, but it is independent of $\mu$ and thus does not affect our
discussion on model correction.
\subsection{Comparison with DFAs}\label{sec:3.5}
Instead of using extrapolation with FLEIM, one can use DFAs.
While up to now the external potential did not change with $\mu$, in DFA
calculations a one-particle potential depending on $\mu$ is added in order to correct the density.
We consider here two DFAs, the local density approximation,
LDA~\cite{Savin-1996,Paziani_Moroni_Gori-Giorgi_Bachelet_2006}, and one that reproduces that
of Perdew, Burke and Ernzerhof
(PBE)~\cite{Goll_Werner_Stoll_2005,Goll_Leininger_Manby_Mitrushchenkov_Werner_Stoll_2008} at
$\mu=0$.
Both approximations are modified to be $\mu$-dependent.
In particular, they vanish at $\mu=\infty$.
\begin{figure}[t]
\centering
\includegraphics[viewport=140 265 450 655,clip=]{PMSfig4.pdf}
\caption{\label{fig:was7-9}%
Absolute errors for harmonium (top), for the H\textsubscript{2}{} molecule at
equilibrium distance (middle), and for the H\textsubscript{2}{} molecule in the
first excited state of the same symmetry as the ground state (bottom):
a $\mu$-dependent local density approximation (LDA, black dashed curve), a
$\mu$-dependent Perdew--Burke--Ernzerhof approximation (PBE, gray dashed
curve), and FLEIM (3 points, green curve).
The abscissa represents the biggest~\( \mu \) allowed for use in the FLEIM
algorithm.
The insets zoom into the regions of small errors, the dotted line corresponding
to the value of ``chemical accuracy.''
Note the different ranges for ${\Delta E}$.}
\end{figure}
As shown in Fig.~\ref{fig:was7-9}(top) for harmonium, DFAs are clearly much
better at small $\mu$.
However, they are not good enough.
The figure suggests that the range of $\mu$ for which DFAs are within chemical
accuracy is similar to that obtained with the 3-point FLEIM\@.
This is confirmed when comparing the results of DFAs and of FLEIM for H\textsubscript{2}; see
Fig.~\ref{fig:was7-9}(middle).
Note that with FLEIM the errors at large $\mu$ are smaller.
Note also that the curves obtained with extrapolation are significantly \emph{flatter} at
large $\mu$ than those obtained with DFAs.
This should not be surprising: DFAs transfer the large $\mu$ behavior, while extrapolation
extracts it from information available for the system under study.
Furthermore, using ground-state DFAs for excited states does not only pose a problem of
principle (questioning their validity, as the Hohenberg--Kohn theorem is proven for the ground
state), but can also show a deterioration of quality.
However, there is no question of principle from the perspective of this contribution (of using a
model and correcting it by extrapolation).
Also, the error in the excited state seems comparable to that in the ground state, as seen
in the example of the H\textsubscript{2}{} molecule, in the first excited state of the same symmetry as
the ground state; see Fig.~\ref{fig:was7-9}(bottom).
\section{Conclusion and perspectives}\label{sec:4}
In this contribution we have illustrated with a few models how to simplify
Hamiltonians by smoothly removing the singularities in the system, and
thus obtain more numerically tractable problems.
This simplification is obtained by introducing a parameter that, when
equal to infinity, recovers the original, plain Hamiltonian.
After numerically solving a few simplified problems, the solution of interest
is obtained by extrapolation.
We present a method, new in this field, for extrapolating the quantities of
interest from a few finite (hence easier to solve) values of the parameter, by a
technique borrowed from the reduced basis paradigm: the empirical interpolation
method.
In contrast to DFAs, no parameters are fitted and no transfer from different
systems is made: only extrapolation is used.
Note also that, in contrast to DFAs, improvement can be envisaged by either adding
further points or using more appropriate basis functions, and error estimates
are asymptotically accessible.
| {'timestamp': '2021-12-28T02:14:42', 'yymm': '2112', 'arxiv_id': '2112.13139', 'language': 'en', 'url': 'https://arxiv.org/abs/2112.13139'} |
\section{Introduction}
Few-body systems of an antikaon and nucleons are of great interest in strangeness nuclear
physics~\cite{Hyodo:2011ur,Gal:2016boi}. The existence of the $\Lambda(1405)$ resonance below
the $\bar{K}N$ threshold implies that the $\bar{K}N$ interaction in the isospin $I=0$ channel
is strongly attractive, so that the antikaon could be bound in nuclei~\cite{PL7.288,Akaishi:2002bg}.
Recently, three-body $\bar{K}NN$ systems have been a matter of intensive investigation based on rigorous
few-body techniques~\cite{Shevchenko:2006xy,Shevchenko:2007ke,Ikeda:2007nz,Ikeda:2008ub,Yamazaki:2007cs,Dote:2008in,Dote:2008hw,Wycech:2008wf,Ikeda:2010tk,Barnea:2012qa,Ohnishi:2017uni,Revai:2016muw,Hoshino:2017mty}.
To maximize the $I=0$ contribution of the $\bar{K}N$ pair in the $s$-wave $\bar{K}NN$ system, the state
with the total spin $J=0$ and isospin $I=1/2$ is mainly studied. It is agreed among several different groups
that the $J=0$ system supports a quasi-bound state below the threshold, although the quantitative predictions
are not yet well converged. There are some studies of the state with $J=1$ and $I=1/2$, but the quasi-bound
state is not found below the $\Lambda^{*}N$ threshold in Refs.~\cite{Uchino:2011jt,Barnea:2012qa}, presumably
because of the small fraction of the $I=0$ component of the $\bar{K}N$ pair. In Ref.~\cite{Wycech:2008wf}, it
is shown that the $J=2$ and $I=3/2$ state is bound because of the $p$-wave $\bar{K}N$ interaction that generates
the $\Sigma(1385)$ state.
Currently, nuclei containing a heavy-flavor (charm or bottom) meson are a subject of increasing
attention~\cite{Hosaka:2016ypm}. This is triggered by the observation of many new heavy quarkonium-like
hadronic states, the so-called $XYZ$ states, above the open-charm or open-bottom
thresholds~\cite{Brambilla:2010cs,Hosaka:2016pey,Olive:2016xmw}. Because the near-threshold states are
often interpreted as {\it hadronic molecular states}, the existence of the $XYZ$ states indicates that
the heavy flavor meson could be a constituent in the formation of such exotic hadronic bound states.
To assess the possible existence of the bound heavy mesons in nuclei, two-body $DN$
interactions~\cite{Hofmann:2005sw,Mizutani:2006vq,GarciaRecio:2008dp,Haidenbauer:2010ch,Yamaguchi:2011xb,Yamaguchi:2013ty}
and three-body quasi-bound systems~\cite{Bayar:2012dd,Yamaguchi:2013hsa} involving $D$ mesons have been
studied extensively. With a plethora of high-precision data currently available from a host
of modern experimental facilities, such as BaBar, Belle, CDF, D0, BES, CMS, and LHCb, similar investigations
are also being extended to the open-bottom sector.
In general, the dynamics of a three-body problem reflect subtle details of two-body interactions.
In some circumstances, gross properties of three-body systems can be assessed in terms of a few parameters
which solely characterize the nature of two-body interactions. In fact, when the two-body scattering
length $a$ is much larger than the interaction range $r_0$, microscopic details of two-body interactions
become irrelevant as a consequence of low-energy {\it universality}~\cite{Braaten:2004rn,Naidon:2016dpf}.
Universal physics can manifest itself in sharply contrasting manners in the two- and three-body sectors.
In the two-body sector, universal predictions are quite simple, manifested, e.g., as a shallow two-body bound
state {\it dimer} for $a>0$ with the eigenenergy in the scaling limit given by $E_{\rm dimer}=-1/(2\mu_{\rm red} a^2)$,
where $\mu_{\rm red}$ is the generic two-body reduced mass. However, universal predictions can become more complex
in the three-body sector. The most striking phenomenon in a quantum mechanical three-body problem is the so-called
{\it Efimov effect}~\cite{Efimov:1970zz}, characterized by the emergence of infinitely many, arbitrarily
shallow, geometrically spaced bound states with an accumulation point at zero energy in $s$-wave three-body systems.
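For orientation, in the textbook case of three identical bosons exactly at unitarity, the Efimov trimer spectrum is geometric,
\begin{align}
E_{T}^{(n)} = e^{-2\pi n/s_{0}}\, E_{T}^{(0)} , \quad s_{0}\approx 1.00624 ,
\end{align}
so that successive trimers are shallower by the universal factor $e^{2\pi/s_{0}}\approx 515$; for a mass-imbalanced system such as those considered here, the value of $s_{0}$ depends on the mass ratio of the constituents.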
In the context of hadron physics, the Efimov effect and low-energy universality have been discussed in
three-nucleon systems~\cite{Braaten:2003eu,Konig:2016utl,Canham:2009zq}, charmed meson systems~\cite{Braaten:2003he},
three-pion systems~\cite{Hyodo:2013zxa}, and hyperon systems such as $nn\Lambda$~\cite{Ando:2015fsa}.
In this work we shed new light on meson-nucleus systems having a strange (charm) quark, from the
viewpoint of low-energy universality. We focus on the $\bar{K}NN$ ($DNN$) system with $J=0$, $I=3/2$,
and $I_{3}=-3/2$, or more specifically, the $K^{-}nn$ ($D^{0}nn$) system. In these systems, all the two-body
interactions occur in the $I=1$ combinations. In comparison to other quantum numbers, the quantum numbers
associated with the $K^{-}nn$ ($D^{0}nn$) channel are ideal for examining the Efimov effect due to the following
reasons. First, the absence of the Coulomb interaction guarantees that the low-energy behavior of this system
is governed only by the two-body scattering lengths of the strong interaction. Second, the absence of any nearby
coupled channels also suits our purpose, since the existence of such coupled channels is known
to generally reduce the effective attraction in the three-body system~\cite{Braaten:2003he,Hyodo:2013zxa},
diminishing the likelihood of the formation of Efimov bound states.
While the scattering length of the two-neutron system is much larger than the typical length scale of the strong
interaction, we should note that the magnitude of the physical $K^{-}n$ ($D^{0}n$) scattering length is not large enough for
the meson-neutron two-body system to become resonant. However, our analysis reveals that the meson-neutron scattering
length is expected to reach the unitary (resonant) limit through an extrapolation of the quark mass from the strange
sector to the charm sector, provided that the couplings to sub-threshold decay channels are eliminated. In such a
{\it zero coupling limit} (ZCL), the otherwise {\it complex} meson-neutron scattering lengths become {\it real valued}. This kind of
idealization obviously leads to a considerable simplification of the analysis of the three-body {\it Faddeev-like} integral
equations compared to a more involved ``realistic'' analysis with complicated coupled-channel dynamics involving also
hadrons other than the $K^-$ ($D^0$) meson and the neutron. Such a sophisticated approach is currently beyond the scope of
our {\it naive} low-energy effective field theoretical (EFT) framework used in this work. Our primary objective in this
paper is to present a simple leading order EFT analysis in an idealized limit to illuminate certain aspects of remnant two-
and three-body universal physics that may be associated with bound state ({\it dimer} and {\it trimer}) formation in physical
systems such as $K^{-}n$ ($D^{0}n$) and $K^{-}nn$ ($D^{0}nn$). In principle, the universal features of a system with
unphysical quark masses can be studied in lattice QCD simulations, as previously demonstrated in the context of
few-nucleon systems~\cite{Barnea:2013uqa}.
This paper is organized as follows. In Sec.~\ref{sec:twobody}, we study the two-body interaction of a flavored
meson ($K^{-}$ or $D^{0}$) with a neutron in the $I=1$ state. We introduce a coupled-channel scattering model
to perform an extrapolation from the strangeness sector to the charm sector. We evaluate the two-body scattering
length in this extrapolation, together with the ZCL where the couplings to sub-threshold decay channels are artificially
switched off. In Sec.~\ref{sec:threebody}, we formulate a {\it pionless} cluster effective field theory which
describes the low-energy properties and dynamics of a three-particle cluster state consisting of two neutrons and
a flavored meson ($K^-$ or $D^0$). In particular, we consider a possible scenario where the $s$-wave meson-neutron
scattering length is infinitely large, while the $s$-wave neutron-neutron scattering length is fixed at
its physical value. We perform an asymptotic analysis of the system of three-body integral equations to examine the
renormalization group (RG) {\it limit cycle} behavior and other associated features related to the Efimov effect.
Our numerical results for the full non-asymptotic analysis of the integral equations with and without including a
three-body interaction are then presented. The final section is devoted to a summary of this work. Furthermore,
based on our numerical results, a plausibility argument is presented at a qualitative level on the feasibility of
$K^-nn$ or $D^0nn$ bound trimer formation. A discussion on certain numerical methodologies adopted in this work has
been relegated to the appendices.
\section{Two-body meson-neutron interaction}\label{sec:twobody}
\subsection{$K^{-}n$ system, $D^{0}n$ system, and the unitary limit}
We first summarize the known properties of the meson-neutron interactions. In the $\bar{K}N$ system,
several experimental data constrain the meson-baryon scattering amplitude~\cite{Hyodo:2011ur}.
Among others, the recent measurement of the kaonic hydrogen by the SIDDHARTA
collaboration~\cite{Bazzi:2011zj,Bazzi:2012eq} gives a strong constraint on the low-energy $\bar{K}N$
interaction, because the result is directly related to the $K^{-}p$ scattering length through the
improved Deser-type formula~\cite{Meissner:2004jr}. The analysis of the meson-baryon scattering with a
complete {\it next-to-leading} order chiral SU(3) dynamics including the SIDDHARTA
constraint~\cite{Ikeda:2011pi,Ikeda:2012au} determines the scattering lengths of the $K^{-}n$ system
as\footnote{In this paper, the scattering length is defined as $a_{0}=-f(E=0)$ with the scattering amplitude
$f$. The sign convention of the scattering length is opposite to that used in Refs.~\cite{Ikeda:2011pi,Ikeda:2012au}.}
\begin{align}
a_{0,K^{-}n}
&=
-0.57^{+0.21}_{-0.04}-i0.72_{-0.26}^{+0.41}\text{ fm} .
\label{eq:Kmnslength}
\end{align}
The negative real part indicates that the $K^{-}n$ interaction is attractive, but not strong enough to
support a quasi-bound state. The imaginary part of the scattering length indicates possible decay into
sub-threshold $\pi\Sigma$ and $\pi\Lambda$ channels.
The $D^{0}n$ system is a counterpart of the $K^{-}n$ system in the charm sector. In contrast to the
strangeness sector, there is no experimental data for the $D^{0}n$ scattering process. Theoretically, the $D^{0}n$
interaction has been studied by generalizing the established models in the
strangeness sector~\cite{Hofmann:2005sw,Mizutani:2006vq,GarciaRecio:2008dp,Haidenbauer:2010ch,Yamaguchi:2011xb}
(see Ref.~\cite{Hosaka:2016ypm} for a recent review). Here we take an alternative strategy of using the
experimental information of the $\Sigma_{c}(2800)$ resonance. The mass and width of the neutral
$\Sigma_{c}^{0}(2800)$ state are given by the Particle Data Group (PDG)~\cite{Olive:2016xmw} as
\begin{align}
M_{\Sigma_{c}^{0}(2800)}
&=
2806^{+5}_{-7}\text{ MeV} ,\quad
\Gamma_{\Sigma_{c}^{0}(2800)}
=
72^{+22}_{-15}\text{ MeV} ,
\label{eq:Sigmac2800}
\end{align}
which lies very close to the $D^{0}n$ threshold at $\sim 2803.8$ MeV. Although the spin and parity of
$\Sigma_{c}(2800)$ are not yet determined experimentally, by assuming $J^{P}=1/2^{-}$, we can determine
the strength of the $s$-wave interaction in the $D^{0}n$ system. If the resonance pole of $\Sigma_{c}(2800)$
is found to lie below the $D^{0}n$ scattering threshold, then the $D^{0}n$ interaction is attractive enough
to generate a quasi-bound state below the threshold. As will be demonstrated below, such a $D^{0}n$ quasi-bound
picture of the $\Sigma_{c}(2800)$ resonance naturally arises in the SU(4) contact interaction model.
From these observations, we can draw the following conclusion regarding the meson-neutron interaction. On the
one hand, the $K^{-}n$ system has a weakly attractive scattering length, as indicated by recent analysis in the
strangeness sector. On the other hand, the $D^{0}n$ system can support a quasi-bound state below the threshold,
which is ostensibly identified with the observed $\Sigma_{c}(2800)$ state. If we perform an extrapolation of the
$K^{-}n$ interaction to the $D^{0}n$ interaction by changing the mass of the $s$-quark to that of the $c$-quark,
we can expect the existence of an unphysical region of the quark mass between $m_{s}$ and $m_{c}$ where a very
shallow quasi-bound state is formed when the magnitude of the scattering length becomes infinitely large.
This represents the universal region around the {\it unitary limit} of the meson-neutron interaction. The
situation is analogous to the two-nucleon~\cite{Braaten:2003eu}, the two-pion~\cite{Hyodo:2013zxa}, and the
$\Lambda\Lambda$~\cite{Yamaguchi:2016kxa} systems, where the unitary limit of the two-hadron scattering is
achieved by the quark mass extrapolation into the unphysical region. In much the same way we plan in this work
to explore the universal physics in proximity to meson-neutron unitarity using unphysical quark masses between
the strangeness and charm limits.
\subsection{Models of $\bar{K}N$ and $DN$ amplitudes}
To demonstrate that the unitary limit is indeed realized through the extrapolation, we employ a contact interaction
model with flavor symmetry. In the strangeness sector, the {\it Weinberg-Tomozawa model} in the chiral SU(3) dynamics
successfully describes the $\bar{K}N$ scattering~\cite{Kaiser:1995eg,Oset:1998it,Oller:2000fj,Hyodo:2011ur}. In the
charm sector, the SU(4) generalization of this approach is developed in Ref.~\cite{Mizutani:2006vq} in which the
$\Lambda_{c}(2595)$ resonance in the $DN$ scattering is dynamically generated in the $I=0$ sector, analogous to
$\Lambda(1405)$ in the strangeness sector. Evidently, there is one model in each sector which successfully describes the known
experimental data. Therefore, in this work we describe both $\bar{K}N$ and $DN$ systems within a unified framework of
a {\it dynamical coupled-channel model}. This enables us to perform the flavor extrapolation from strangeness to charm.
The coupled-channel scattering amplitude $T_{ij}(W)$ with the total energy $W$ is given by the scattering
equation
\begin{align}
T(W)=[V^{-1}(W)-G(W)]^{-1} ,
\end{align}
with the interaction kernel $V(W)$:
\begin{align}
V_{ij}(W)
=
-\frac{C_{ij}}{f_{i}f_{j}}(2W-M_{i}-M_{j})
\sqrt{\frac{M_{i}+E_{i}}{2M_{i}}}
\sqrt{\frac{M_{j}+E_{j}}{2M_{j}}} ,
\label{eq:contactterm}
\end{align}
where $C_{ij}$ is the coupling strength matrix specified below [see Eq.~\eqref{eq:Cmatrix}], $M_{i}$ ($E_{i}$) is the
mass (energy) of the baryon in channel $i$, and $f_{i}$ is the channel-dependent meson decay constant. The
loop function $G_{i}(W)$ has a logarithmic ultraviolet divergence which we tame employing dimensional regularization.
After removal of the pole term through regularization, the finite constant parts proportional to $\ln(4\pi)$, the Euler
constant $\gamma_E$, etc., appearing in the loop function are then replaced by the {\it subtraction constant} $a_{i}(\mu_{\rm reg})$
at the regularization scale $\mu_{\rm reg}$. Such a term acts similar to a counter-term in a renormalizable
theory (see, e.g., Refs.~\cite{Oller:2000fj,Hyodo:2008xr}).
Thus, we have
\begin{align}
G_{i}(W;a_{i})
&=
\frac{2M_{i}}{16\pi^{2}}
\Bigl\{
a_{i}(\mu_{\rm reg})+\ln\frac{m_{i}M_{i}}{\mu^{2}_{\rm reg}}
+\frac{M_{i}^{2}-m_{i}^{2}}{2W^{2}}\ln\frac{M_{i}^{2}}{m_{i}^{2}}
\nonumber \\
&\quad +\frac{\bar{q}_{i}}{W}
[\ln(W^{2}-(M_{i}^{2}-m_{i}^{2})+2W\bar{q}_{i}) \nonumber \\
&\quad +\ln(W^{2}+(M_{i}^{2}-m_{i}^{2})+2W\bar{q}_{i}) \nonumber \\
&\quad -\ln(-W^{2}+(M_{i}^{2}-m_{i}^{2})+2W\bar{q}_{i}) \nonumber \\
&\quad -\ln(-W^{2}-(M_{i}^{2}-m_{i}^{2})+2W\bar{q}_{i})]
\Bigr\} ,
\label{eq:loop}
\end{align}
where $m_{i}$ denotes the mass of the meson in the channel $i$, and the quantity
\begin{equation}
\bar{q}_{i}=\sqrt{[W^{2}-(M_{i}-m_{i})^{2}][W^{2}-(M_{i}+m_{i})^{2}]}/(2W)
\end{equation}
is the analytically continued three-momentum in the center-of-mass frame. In this model, the interaction strengths
$C_{ij}$ are basically determined by the flavor symmetry. The free parameters in the model are the subtraction constants
$a_{i}$. These real valued subtraction constants play the role of ultraviolet cutoffs, and effectively renormalize the
dynamics of the channels which are not explicitly included in the model space. In this convention, the (diagonal) scattering
length $a_{0,i}$ in channel $i$ is given by
\begin{align}
a_{0,i}
&=
\frac{M_{i}}{4\pi (M_{i}+m_{i})}T_{ii}(M_{i}+m_{i})\,.
\end{align}
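Numerically, the scattering equation above amounts to a matrix inversion at each energy $W$. The following is a minimal sketch of that step (the $2\times 2$ kernel and loop-function values are illustrative placeholders, not the parameters of the present model):

```python
import numpy as np

def t_matrix(V, G):
    """Coupled-channel amplitude T(W) = [V^{-1}(W) - G(W)]^{-1},
    with kernel matrix V and diagonal loop functions G_i at fixed W."""
    return np.linalg.inv(np.linalg.inv(V) - np.diag(G))

# Illustrative 2-channel input at some energy W (placeholder numbers):
V = np.array([[-2.0, 0.5],
              [ 0.5, -1.0]], dtype=complex)   # symmetric kernel V_ij(W)
G = np.array([-0.3 + 0.0j, -0.2 + 0.1j])      # loop functions G_i(W)

T = t_matrix(V, G)
# A symmetric kernel and a diagonal loop matrix give a symmetric amplitude:
print(np.allclose(T, T.T))   # True
```

The (diagonal) scattering length then follows from the threshold amplitude via $a_{0,i}=M_{i}T_{ii}/[4\pi(M_{i}+m_{i})]$, evaluated at $W=M_{i}+m_{i}$.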
Next we consider the model space of the scattering. Because we are interested in the energy region
near the $\bar{K}N/DN$ threshold, the channels with higher energies than the $\bar{K}N/DN$ thresholds
(such as $\eta\Sigma/\eta\Sigma_{c}$ and $K\Xi/K\Xi_{c}$) are not very relevant to our analysis, and we
exclude them.\footnote{
By fitting the subtraction constants to experimental data, the contribution from the higher energy channels
can be effectively renormalized. In fact, it is phenomenologically shown in Ref.~\cite{Hyodo:2007jq}
that the effect of the higher energy channels is not very strong around $\bar{K}N$ threshold energy region.}
On the other hand, we include all relevant channels lower than the $\bar{K}N$ and $DN$ thresholds, respectively,
namely,
\begin{align}
K^{-}n,\quad \pi^{-}\Lambda,\quad \pi^{0}\Sigma^{-},\quad \pi^{-}\Sigma^{0}
\label{eq:strangemodelspace}
\end{align}
in the strangeness sector, and the corresponding open channels in the charm sector:
\begin{align}
D^{0}n,\quad \pi^{-}\Lambda_{c}^{+},
\quad \pi^{0}\Sigma_{c}^{0},\quad \pi^{-}\Sigma_{c}^{+}\,.
\end{align}
Here the coupling matrix for these channels takes the form
\begin{align}
C
=
\begin{pmatrix}
1 & -\sqrt{\frac{3}{2}}\kappa
& -\sqrt{\frac{1}{2}}\kappa & \sqrt{\frac{1}{2}}\kappa \\
& 0 & 0 & 0 \\
& & 0 & -2 \\
& & & 0 \\
\end{pmatrix} .
\label{eq:Cmatrix}
\end{align}
A comment on the suppression factor $\kappa$ is in order. As discussed in Ref.~\cite{Mizutani:2006vq}, the microscopic
origin of the contact interaction, Eq.~\eqref{eq:contactterm}, can be understood in terms of vector meson exchange picture.
In this case, the interaction strength is proportional to $1/m_{V}^{2}$ with $m_{V}$ being the mass of the exchanged vector
meson. While SU(4) symmetry exactly requires $\kappa=1$, for the heavy flavor exchange processes, $C_{1i}$ ($i=2,3,4$) should
be significantly suppressed. Thus, as in Ref.~\cite{Mizutani:2006vq}, the suppression factor in the charm sector, $\kappa<1$,
is introduced to account for the SU(4) breaking effects of the underlying mechanism. The masses of hadrons relevant for
these channels are taken from the central values provided by the PDG~\cite{Olive:2016xmw}, and are for
convenience summarized in Table~\ref{tbl:hadronmass}. We use the physical meson decay constants, $f_{\pi}=92.4$ MeV and
$f_{K}=109.0$ MeV, and the decay constant of the $D$ meson is also chosen to be $f_{D}=92.4$ MeV, following
Ref.~\cite{Mizutani:2006vq}. The regularization scale is set at $\mu_{\rm reg}=1$ GeV, consistent with the typical scale
of the vector mesons not explicitly incorporated in this framework.\footnote{
It is important to note that the regularization scale can be chosen arbitrarily to suit the purpose of a given theory or
model, as long as the physical consequences are independent of this scale. This is, however, not to be confused with the hard
or {\it breakdown} scale of the same theory, which in our context may be chosen as the pion mass, i.e.,
${}^{\pi\!\!\!/}\Lambda\sim m_\pi$, commensurate with the {\it pionless} EFT analysis pursued in the three-body sector.}
\begin{table*}[tbp]
\caption{Masses of hadrons.}
\begin{ruledtabular}
\begin{tabular}{llllllllllll}
Hadron & $K^{-}$ & $\pi^{-}$ & $\pi^{0}$ & $n$ & $\Lambda$ & $\Sigma^{-}$ & $\Sigma^{0}$ & $D^{0}$ & $\Lambda_{c}^+$ & $\Sigma_{c}^{0}$ & $\Sigma_{c}^{+}$ \\
\hline
Mass [MeV] & $493.677$ & $139.57018$ & $134.9766$ & $939.565379$ & $1115.683$ & $1197.449$ & $1192.642$ & $1864.3$ & $2286.46$ & $2453.75$ & $2452.9$ \\
\end{tabular}
\end{ruledtabular}
\label{tbl:hadronmass}
\end{table*}%
The remaining parameters are the subtraction constant $a_{i}$ in each channel and the suppression factor
$\kappa$ in the coupling strength matrix, appearing in Eq.~\eqref{eq:Cmatrix}. To determine these parameters
in the strangeness sector ($a_{i}^{s}$ and $\kappa^{s}$), it is essential to take into account the SIDDHARTA data
of the kaonic hydrogen~\cite{Bazzi:2011zj,Bazzi:2012eq}. The data allows a direct extraction of the $K^{-}p$
scattering length as related through the improved Deser-type formula~\cite{Meissner:2004jr} and provides a strong
constraint on the low-energy $\bar{K}N$ interaction~\cite{Ikeda:2011pi,Ikeda:2012au}. In Ref.~\cite{Ikeda:2012au},
a simplified model (termed the ``ETW'' model) is constructed with the Weinberg-Tomozawa interaction ($\kappa^{s}=1$)
in the model space of Eq.~\eqref{eq:strangemodelspace}, which nevertheless quite reasonably reproduces the full next-to-leading
order amplitude constrained by the SIDDHARTA data. The explicit values of the subtraction constants $a^{s}_{i}$
in the ETW model are summarized in Table~\ref{tbl:subtraction}. The scattering length of the $K^{-}n$ system is
obtained as
\begin{align}
a_{0,K^{-}n}
&=
-0.135-i0.410\text{ fm} .
\label{eq:a_Kn_fullmodel}
\end{align}
Although the value deviates from the full result in Eq.~\eqref{eq:Kmnslength} due to the simplified assumptions of
this scattering model, both the order of magnitude and the qualitative features of the result (e.g., weak attraction)
remain unchanged. We think this is sufficient for the purpose of our present analysis.\footnote{
We have checked that the central value given in Eq.~\eqref{eq:Kmnslength} can be reproduced by adjusting
$a_{1}^{s}=-0.649$ and $a_{2}^{s}=-1.899$. The qualitative conclusion in Sec. II, i.e., the emergence of the unitary
limit between strangeness and charm sectors in the ZCL, remains unchanged with this parameter set. In this paper, we
use the original subtraction constants of the ETW model~\cite{Ikeda:2012au}, respecting the consistency with
the $K^{-}p$ sector.}
\begin{table}[tbp]
\caption{Subtraction constants $a_{i}^{s,c}(\mu_{\rm reg})$ at the regularization scale $\mu_{\rm reg}=1$ GeV.}
\begin{ruledtabular}
\begin{tabular}{lcccc}
channel $i$ & 1 & 2 & 3 & 4 \\
\hline
$a^{s}_{i}$ (strangeness) & $-1.283$ & $0.238$ & $-0.714$ & $-0.714$ \\
$a^{c}_{i}$ (charm) & $-1.663$ & $-2.060$ & $-2.060$ & $-2.060$ \\
\end{tabular}
\end{ruledtabular}
\label{tbl:subtraction}
\end{table}%
In the charm sector, the subtraction constants $a^{c}_{i}$ were determined in Ref.~\cite{Mizutani:2006vq} by
the argument of ``natural size''~\cite{Oller:2000fj,Hyodo:2008xr}. However, a slight modification of the constant
associated with the $DN$ channel was necessary to reproduce the observed $\Lambda^{+}_{c}(2595)$ resonance in the
$I=0$ channel amplitude. Likewise, the same strategy is pursued in the present work for the $I=1$ two-body
subsystem; the subtraction constants in channels $i=2$--$4$ (i.e., $\pi^{-}\Lambda_{c}^+$, $\pi^{0}\Sigma_{c}^{0}$,
and $\pi^{-}\Sigma_{c}^{+}$) are chosen to be $-2.060$~\cite{Bayar:2012dd}, consistent with the ``natural size''
$a_{\rm nat}\sim -2$~\cite{Oller:2000fj,Hyodo:2008xr},\footnote{
There are two arguments determining the natural size of the subtraction constants. In Ref.~\cite{Oller:2000fj},
through a matching of the loop function evaluated using dimensional and cut-off regularization methods, the natural
size of these constants was estimated. In Ref.~\cite{Hyodo:2008xr}, the determination of the natural values of these
constants was pursued utilizing the renormalization condition, $G_{i}(W=M_{i};a_{\rm nat})=0$, along with the constraints from
the low-energy behavior of the amplitudes and the physical requirements on the loop functions. In both cases, the
natural value is estimated as $a_{\rm nat}\sim -2$ at the regularization scale, $\mu_{\rm reg}=1$ GeV.}
while the subtraction constant in channel 1 (i.e., for $D^{0}n$) is slightly adjusted to reproduce the $\Sigma_{c}(2800)$
state near the $D^{0}n$ threshold. Such fine-tuning of the value of the subtraction constant reflects further SU(4) breaking
effects in our model. We also tune the value of $\kappa^{c}$ to control the width of the resonance. By choosing the subtraction
constants $a^{c}_{i}$ shown in Table~\ref{tbl:subtraction} with $\kappa^{c}=0.453$, we dynamically generate a resonance pole
corresponding to the $\Sigma_{c}(2800)$ state at
\begin{align}
M
&=
2800\text{ MeV} ,\quad
\Gamma
=
72\text{ MeV} .
\end{align}
In this model, the $D^{0}n$ scattering length is found to be
\begin{align}
a_{0,D^{0}n}
&=
0.764-i0.615\text{ fm} .
\label{eq:a_D0n_fullmodel}
\end{align}
The positive real part is in accordance with the existence of the quasi-bound state $\Sigma_{c}(2800)$
below the threshold. The origin of the $I=1$ quasi-bound state can be understood by the following argument.
Because of the SU(4) symmetry, the interaction of the $D^{0}n$ channel has the same sign as that of
the $K^{-}n$ channel, while the strength at the threshold is enhanced by the ratio $m_{D}/m_{K}$. Thus, we
can expect a stronger attractive interaction in the $D^{0}n$ channel. Moreover, the heavier reduced mass
of the $DN$ system leads to a suppression of the kinetic energy, which is reflected in the larger
value of the two-body scattering length. This is crucial for the manifestation of universality
in the three-body $D^0nn$ system, as demonstrated in the next section.
\subsection{Flavor extrapolation and zero coupling limit}\label{subsec:extrapolation}
Next we introduce a parameter $0\leq x\leq 1$ which controls the extrapolation from strangeness
to charm. Assuming a linear dependence of the following model parameters, we vary each as a
function of $x$:
\begin{align}
m_{i}(x)&=m_{i}^{s}(1-x)+m_{i}^{c} x , \\
M_{i}(x)&=M_{i}^{s}(1-x)+M_{i}^{c} x , \\
a_{i}(x)&=a_{i}^{s}(1-x)+a_{i}^{c} x , \\
f_{i}(x)&= f_{i}^{s}(1-x) + f_{i}^{c}x ,\\
\kappa(x)&= \kappa^{s}(1-x) + \kappa^{c}x ,
\end{align}
for the respective channels, $i=1,\dots,4$. Here, $x=0$ ($x=1$) corresponds to the
physical point of the model in the strangeness (charm) sector, while all other intermediate
values of $x$ represent the model in the unphysical domain. Thus, for instance, in the $i=1$
channel, by varying $x$ from 0 to 1, we can perform a linear extrapolation from the $K^{-}n$
to the $D^{0}n$ scattering sectors.
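As a concrete illustration (a sketch using only the channel-1 meson masses from Table~\ref{tbl:hadronmass}), the linear extrapolation reproduces the meson mass $m_{1}(0.615)=1336.61$ MeV quoted later in this subsection:

```python
# Linear flavor extrapolation p(x) = p^s (1-x) + p^c x between the
# strangeness (x=0) and charm (x=1) sectors, as defined in the text.
def extrapolate(p_s, p_c, x):
    return p_s * (1.0 - x) + p_c * x

m_K, m_D = 493.677, 1864.3   # channel-1 meson masses in MeV (Table I)

# Channel-1 meson mass at the unitary point x = 0.615 found in the ZCL:
m1 = extrapolate(m_K, m_D, 0.615)
print(f"m_1(0.615) = {m1:.2f} MeV")   # 1336.61 MeV
```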
Now we study the behavior of the complex scattering length in the coupled-channel contact interaction
model. The real and imaginary parts of the meson-neutron scattering length as functions of $x$ are shown
in Fig.~\ref{fig:slengthcomp}. The scattering length varies continuously from $a_{0,K^{-}n}$ to $a_{0,D^{0}n}$
with $x$. Owing to the effects of the sub-threshold decay channels, it is not immediately clear from
this figure how to identify the remnant of the universal features of meson-neutron
unitarity. We note that in the present case, the $S$-matrix pole corresponding to the quasi-bound state of
the $D^{0}n$ system moves to the higher positive energy region as we decrease $x$ from 1, eventually going
far above the $K^{-}n$ threshold at $x=0$ without yielding a quasi-bound $K^{-}n$ state. The pole trajectory
of the scattering amplitude is shown in Fig.~\ref{fig:trajectory_y1}. Note that the pole is on the Riemann
sheet with physical momentum with respect to channel 1 and unphysical momenta with respect to the
others.\footnote{
In an $n$-channel problem, the scattering amplitude is defined on a $2^{n}$-sheeted Riemann surface in the
complex energy plane with momentum $p_{i}$ and $-p_{i}$ in channel $i$ corresponding to the same energy. The
Riemann sheet is specified by choosing either the physical momentum ($\text{Im } p_{i}>0$, first sheet) or the
unphysical momentum ($\text{Im } p_{i}<0$, unphysical sheet) for each channel $i$. The most adjacent Riemann sheet
to the real scattering axis is obtained by choosing physical momenta in the open channels and unphysical
momenta in the closed channels.}
Thus, the corresponding energy pole, $E_h=W-M_{1}(x)-m_{1}(x)$ (measured with respect to the threshold of the
channel 1), directly influences the physical spectrum when $\text{Re } E_{h}<0$ (as in the charm sector, $x=1$), being
on the most adjacent Riemann sheet to the physical real axis. On the other hand, its effect becomes less
significant when $\text{Re } E_{h}>0$ (as in the strangeness sector, $x=0$), since the pole position then lies
far from the physical axis.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth,bb=0 0 362 218]{a0_quasi-1.eps}
\caption{\label{fig:slengthcomp}
Complex meson-neutron scattering length in the coupled-channel contact interaction model as
a function of the extrapolation parameter $x$. Solid (dashed) line represents the real
(imaginary) part.}
\end{figure}%
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth,bb=0 0 362 218]{trajectory_y1.ps}
\caption{\label{fig:trajectory_y1}
Trajectory of the pole of the scattering amplitude in the coupled-channel contact interaction
model. The energy is measured with respect to the threshold of channel 1, i.e.,
$E_{h}=W-M_{1}(x)-m_{1}(x)$.}
\end{figure}%
To elucidate a possible scenario to access the unitary limit of the meson-neutron interaction,
we consider a zero coupling limit (ZCL) in which the channel couplings are artificially switched
off~\cite{Eden:1964zz,Pearce:1988rk,Cieply:2016jby}, i.e.,
\begin{align}
C_{1i}&=C_{i1}=0 \quad \text{for } i=2,3,4 .
\end{align}
Under this assumption, the coupled-channel problem reduces to a single-channel scattering
of the $K^{-}n/D^{0}n$ system. The $K^{-}n$ system is again found to have no bound state,
while the previous quasi-bound state in the $D^{0}n$ channel now becomes a real bound state
with invariant mass of $W\sim 2802$ MeV. Correspondingly, the scattering length is real
and negative (positive) in the $K^{-}n$ ($D^{0}n$) channel:
\begin{align}
a_{0,K^{-}n}^{\rm ZCL}
&=
-0.394\text{ fm} ,\quad
a_{0,D^{0}n}^{\rm ZCL}
=
4.141\text{ fm} .
\label{eq:ZCL_ad}
\end{align}
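Since the ZCL bound state lies very close to threshold, it should approximately satisfy the universal dimer relation $E_{\rm dimer}=-1/(2\mu_{\rm red}a^{2})$ quoted in the Introduction. A quick numerical check (a sketch; masses from Table~\ref{tbl:hadronmass}, with $\hbar c=197.327$ MeV\,fm restored):

```python
# Universal dimer energy E = -(hbar c)^2 / (2 mu a^2) for the D^0 n
# bound state in the zero coupling limit.
hbar_c = 197.327                  # MeV fm
m_D, m_n = 1864.3, 939.565379     # D^0 and neutron masses in MeV (Table I)
a_zcl = 4.141                     # fm, ZCL D^0 n scattering length above

mu = m_D * m_n / (m_D + m_n)                    # two-body reduced mass
E_dimer = -hbar_c**2 / (2.0 * mu * a_zcl**2)    # binding energy in MeV
W = m_D + m_n + E_dimer                         # dimer invariant mass

print(f"E_dimer = {E_dimer:.2f} MeV, W = {W:.1f} MeV")
```

This gives $E_{\rm dimer}\approx -1.8$ MeV and $W\approx 2802$ MeV, consistent with the bound-state mass quoted above, as expected for a state this close to the unitary limit.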
The above numerical values of the scattering lengths in the ZCL strongly suggest that
the meson-neutron interactions become resonant at an intermediate point in the unphysical
domain ($0<x<1$). In Fig.~\ref{fig:slengthreal}, we display the result for the inverse
scattering length $1/a_{0}(x)$, expressed as a function of the extrapolation parameter $x$. We
indeed find that the unitary limit ($1/a_{0}= 0$) is achieved at
$x= 0.615$.
At this point,
the extrapolated mass of the flavored meson in channel 1 is
\begin{align}
m_{1}(0.615)&=1336.61 \text{ MeV} .
\end{align}
The corresponding behavior of the pole trajectory in the zero coupling limit is shown in
Fig.~\ref{fig:trajectory_y0}. The {\it real} bound state pole at $x=1$ that lies on the
{\it physical (first) Riemann sheet} turns into a {\it virtual} state pole at the unitary
limit moving onto the {\it unphysical Riemann sheet}. As we further decrease $x$,
the virtual state pole eventually meets with a second virtual state pole on the unphysical
sheet and then acquires a finite width~\cite{Hyodo:2014bda}. In particular, at $x=0$, the
virtual state pole still remains on the unphysical sheet with a finite imaginary part
below the threshold.
As seen earlier, the neglect of the decay channels in the ZCL yields real scattering lengths,
which may appear somewhat unrealistic in contrast to the full coupled-channel model, whose
scattering lengths have real and imaginary parts of roughly the same size. A natural question
therefore arises as to the utility of the ZCL approach. To address this, let us comment on
the significance of the ZCL analysis. One may think that the behavior of the scattering length
(Fig.~\ref{fig:slengthreal}) and the pole trajectory (Fig.~\ref{fig:trajectory_y0}) in the ZCL are
very much different from those in the full model (Figs.~\ref{fig:slengthcomp} and \ref{fig:trajectory_y1}).
However, the qualitative feature, namely, the existence of the pole remains unchanged by taking the ZCL.
In fact, it is guaranteed by a topological invariant of the scattering amplitude that the existence of a
pole is stable against the continuous change (deformation) of the model parameters, except for the special
case where a zero of the amplitude exists near the pole (vide Ref.~\cite{Kamiya:2017pcq} for details).
Because the ZCL can be achieved by gradually decreasing the off-diagonal couplings $C_{1i}=C_{i1}$, it is
{\it continuously connected} with the full model. Hence, it is worth studying the model in the ZCL to check
whether the system develops an eigenstate pole, although the exact position of the pole may show a large
deviation. Furthermore, as we will show in the next subsection, the relation between the scattering lengths
and the pole positions exhibits two-body universality, both in the full model and the model in the ZCL. Thus,
the ZCL model and the full model share common features of universal physics in the two-body sector.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth,bb=0 0 362 218]{a0_bound.eps}
\caption{\label{fig:slengthreal}
Meson-neutron inverse scattering length in the zero coupling limit (decay channels switched off)
as a function of the flavor extrapolation parameter $x$.
\end{figure}%
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth,bb=0 0 362 218]{trajectory_y0.ps}
\caption{\label{fig:trajectory_y0}
Trajectory of the pole of the scattering amplitude in the zero coupling limit. The energy is measured
with respect to the threshold of channel 1 [$E_{h}=W-M_{1}(x)-m_{1}(x)$]. Dashed (solid) line represents
the pole on the physical (unphysical) Riemann sheet.}
\end{figure}%
\subsection{Universality in the two-body sub-system}
It is instructive to take another look at the above results from the viewpoint of two-body
universality. When the magnitude of the scattering length $|a_{0}|$ becomes large and approaches
the unitary limit, universality suggests that there exists a two-body meson-neutron eigenstate
with the eigenmomentum
\begin{align}
k_{a}= i/a_{0} ,
\end{align}
up to corrections suppressed by effective-range terms. This relation holds even for a complex value
of $a_{0}$. Thus, if we calculate the eigenmomentum $k_{h}$, which corresponds to the pole position of
the scattering amplitude in a system with a large scattering length, the true eigenvalue $k_{h}$
should be well approximated by $k_{a}$ in the two-body universal region. The deviation of $k_{h}$ from
$k_{a}$ thus serves as a measure of the violation of universality.
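As a minimal numerical illustration (a sketch of our own, not part of the model code), the universal eigenmomentum $k_{a}=i/a_{0}$ can be evaluated directly from the ZCL scattering lengths of Eq.~(\ref{eq:ZCL_ad}); the $D^{0}n$ momentum scale comes out well below $m_{\pi}$, while the $K^{-}n$ scale does not, in line with the deviations discussed above:

```python
# Sketch: universal eigenmomentum k_a = i/a0 from the ZCL scattering lengths.
# Inputs taken from the text; hbar*c and m_pi are standard constants.
HBARC = 197.327  # MeV fm
M_PI = 139.57    # MeV, typical strong-interaction momentum scale

a0_zcl = {"K-n": -0.394, "D0n": 4.141}  # fm, Eq. (ZCL_ad)

def eigenmomentum(a0_fm):
    """|k_a| in MeV for a purely imaginary pole momentum k_a = i/a0.
    a0 > 0: bound state (pole on the physical sheet);
    a0 < 0: virtual state (pole on the unphysical sheet)."""
    kappa = HBARC / abs(a0_fm)  # MeV
    nature = "bound" if a0_fm > 0 else "virtual"
    return kappa, nature

for channel, a0 in a0_zcl.items():
    k, nature = eigenmomentum(a0)
    print(f"{channel}: |k_a| = {k:.1f} MeV = {k / M_PI:.2f} m_pi ({nature})")
```

The conversion $\hbar c=197.327$ MeV\,fm is assumed, and the bound/virtual assignment follows the sign of $a_{0}$ in the ZCL.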
We show in Fig.~\ref{fig:eigenmomentumcomp} the deviation of the eigenmomentum from the universal
prediction $|k_{h}-k_{a}|$ normalized by the typical momentum scale of the strong interaction, i.e.,
the pion mass $m_{\pi}$, as a function of $x$. Although there is a large deviation in the strangeness
sector ($x=0$), the universal value $k_{a}$ for $x\gtrsim 0.8$, including the $D^{0}n$ system at $x=1$,
gives a reasonable prediction of the true eigenvalue $k_{h}$. Since the deviation is much smaller than
the typical scale $m_\pi$, the result suggests that the full coupled-channel model, despite the influence
of the decay channels, should also reflect the relevance of low-energy universality in the meson-neutron
system close to the charm quark sector, $x\sim 1$.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth,bb=0 0 362 218]{kh-ka_y_1.eps}
\caption{\label{fig:eigenmomentumcomp}
Deviation of the eigenmomentum from the universal prediction $|k_{h}-k_{a}|/m_{\pi}$ as a function of
the flavor extrapolation parameter $x$ in the coupled-channel contact interaction model.}
\end{figure}%
The situation becomes much more conspicuous in the ZCL. As shown in Fig.~\ref{fig:eigenmomentumreal},
we find quite a good agreement of $k_{h}$ with $k_{a}$, not only in the vicinity of the unitary limit
($x\sim 0.6$) but also in the entire region $x\gtrsim 0.3$. We should note that the eigenstate
in the unbound region ($x\lesssim 0.6$) corresponds to a virtual state ($a_0<0$), i.e., an unphysical
bound state whose S-matrix pole lies on the unphysical sheet. A characteristic cusp-like structure
around $x\sim 0.15$ arises when the virtual state acquires a finite width (see also Fig.~\ref{fig:trajectory_y0}).
Thus, our simplistic model approach demonstrates that universality is a powerful tool to investigate the
meson-neutron two-body system, both in the full model and the ZCL.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth,bb=0 0 362 218]{kh-ka_y_0.eps}
\caption{\label{fig:eigenmomentumreal}
Deviation of the eigenmomentum from the universal prediction $|k_{h}-k_{a}|/m_{\pi}$ in the zero coupling limit as
a function of the flavor extrapolation parameter $x$.}
\end{figure}%
\section{Three-body system of two neutrons and one meson}\label{sec:threebody}
\subsection{Effective field theory}
In this section, we consider the universal features in the three-body system consisting of two neutrons
and one flavored meson with $J=0$, $I=3/2$, and $I_{3}=-3/2$, assuming that the two-body physics is {\it finely tuned}
and characterized only by the two-body scattering lengths. A low-energy EFT provides a simple but powerful
systematic tool for a quantitative analysis of such three-body systems in proximity to the scattering threshold.
Recently, a plethora of EFT-based analyses have been used to predict the formation of bound states in a variety of
three-body cluster states~\cite{Ando:2015fsa,Hammer:2001ng,AB10b,AYO13,AO14,Hammer:2017tjm}, which are especially
neutron-rich.
Primarily for the sake of a qualitative exploratory study of three-body universality, we henceforth neglect
the sub-threshold decay channels $i=2$--$4$, which means that our three-body analysis is consistent with the ZCL
idealization with real-valued meson-neutron scattering lengths [see Eq.~(\ref{eq:ZCL_ad})].
Our motivation in this part of the study is to determine under what circumstances the three-body system
is expected to become bound solely by virtue of the Efimov ``attraction''.
There is currently no consensus, either experimental or theoretical, as to whether the systems in the
physical limits, $x=0$ (i.e., $K^-nn$) and $x=1$ ($D^0nn$), are bound. Through universality-based
arguments we hope to gain general insights complementary to rigorous realistic calculations, which are beyond the
scope of the present work.
Here we employ a leading order cluster effective field theory with a flavored meson $K^-$ ($D^0$) and a neutron
$\psi_n$ as the elementary fields in the theory. We mainly focus on low-energy threshold states far below the pion
mass, i.e., the theory is {\it pionless}, with explicit pion degrees of freedom and their interaction effectively
integrated out. We note that the {\it breakdown scale} of the pionless EFT (i.e., ${}^{\pi\!\!\!/}\Lambda\sim m_\pi$)
is different from that in Sec.~\ref{sec:twobody}. To establish the flavor extrapolation of the meson-neutron
scattering lengths we have explicitly included pions in the SU(4) model with vector meson exchange. Once this is
settled, we concentrate on the states near the threshold by utilizing the pionless theory.
A power counting rule~\cite{Kaplan:1998tg,Kaplan:1998we,vanKolck:1998bw} for two-body contact interactions in
such finely-tuned systems, combined with the {\it power divergence subtraction} scheme, is most suitable for
renormalizing the strongly interacting two-body sector. In this section, we display the effective Lagrangian of
the three-body system, where we denote the generic flavored meson field as $K$ for brevity; it stands for
the flavor extrapolated meson field $K(x)$, implicitly dependent on the parameter $0\leq x\leq 1$.
Note here that the actual meson charge is irrelevant in our context without Coulomb interaction, so that the
change in the charge from the antikaon $K^-$ (for $x=0$) to the $D^0$ meson (for $x=1$) should not be a matter
of concern.
The non-relativistic effective Lagrangian for the three-body system, consistent with the usual low-energy symmetries,
like parity invariance, charge conjugation, time-reversal invariance, and small velocity Lorentz invariance, can
be constructed as
\begin{align}
\mathcal{L}
&=
\mathcal{L}_{K}
+\mathcal{L}_{n}
+\mathcal{L}_{\text{2-body}}
+\mathcal{L}_{\text{3-body}} .
\end{align}
As regards the fundamental fields, the neutron is represented by a two-component (spin-doublet) spinor $\psi_{n}$,
and the flavor extrapolated meson is represented by a spin-singlet field, $K(x)$. The standard forms of the single
``heavy'' particle Lagrangians, $\mathcal{L}_{K}$ and $\mathcal{L}_{n}$, at the leading order are well
known~\cite{Braaten:2004rn,BS01,AH04,Ando:2015fsa}, and their expressions will not be repeated here.
However, in the two-body sector, we explicitly spell out the relevant interaction parts of the leading order
Lagrangian $\mathcal{L}_{\text{2-body}}$, projecting the dominant $s$-wave terms. When the $nn$ and flavor extrapolated
$nK$ scattering lengths are much larger than the typical length scales of short-range interactions, the two-body
interactions can be expressed by means of contact terms with couplings $g_{nK}$ and $g_{nn}$, parametrizing the
short-distance UV physics:
\begin{align}
\mathcal{L}^{int}_{\text{2-body}}
&= -g_{nn}
\left(\psi_{n}^{T}P_{(nn)}^{({}^{1}{\rm S}_{0})}\psi_{n}
\right)^{\dag}
\left(\psi_{n}^{T}P_{(nn)}^{({}^{1}{\rm S}_{0})}\psi_{n}
\right) \nonumber \\
&\quad -
g_{nK}\phi_{K}^{\dag}\psi_{n}^{\dag}
\psi_{n}\phi_{K}+\dotsb ,
\label{eq:2body}
\end{align}
where the spin-singlet projection operator is defined as
\begin{align}
P_{(nn)}^{({}^{1}{\rm S}_{0})}
&=
-\frac{i}{2}\sigma_{2} .
\label{eq:nnprojection}
\end{align}
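As a quick cross-check of Eq.~(\ref{eq:nnprojection}) (a sketch of our own), one can verify numerically that $P_{(nn)}^{({}^{1}{\rm S}_{0})}$ is antisymmetric, as required for the bilinear $\psi_{n}^{T}P\psi_{n}$ of anticommuting fields to be nonvanishing, and that it projects onto the antisymmetric spin-singlet combination:

```python
import numpy as np

# Pauli matrix sigma_2 and the 1S0 projector P = -(i/2) sigma_2 of Eq. (nnprojection)
sigma2 = np.array([[0, -1j], [1j, 0]])
P = -0.5j * sigma2  # equals [[0, -1/2], [1/2, 0]]

# Antisymmetry: psi^T P psi survives for Grassmann fields only if P^T = -P
assert np.allclose(P.T, -P)

# In the bilinear psi^T P chi, P picks out the singlet (up,down - down,up)/2
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.isclose(up @ P @ down, -0.5)   # coefficient of psi_1 chi_2
assert np.isclose(down @ P @ up, +0.5)   # coefficient of psi_2 chi_1
print("P is antisymmetric and projects onto the spin-singlet combination")
```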
In this context, it is convenient to describe the three-body system in terms of auxiliary {\it diatom} or
{\it dihadron} fields~\cite{Braaten:2004rn,BS01,AH04}, namely, a spin-singlet $nn$-{\it dibaryon} field $s_{(nn)}$,
and a spin-doublet $nK$-{\it dihadron} field $d_{(nK)}$. Through a Gaussian integration followed by a redefinition of
the fields, the contact terms in the above two-body Lagrangian can be re-expressed as
\begin{align}
\mathcal{L}_{d(nK)}
&= \frac{1}{g_{nK}}d^{\dag}_{(nK)}d_{(nK)}
\nonumber \\
&\quad
-\left[d_{(nK)}^{\dag}\psi_{n}\phi_{K}+\text{h.c.}\right]+\dotsb\,,\\
\mathcal{L}_{s(nn)}
&= \frac{1}{g_{nn}}s_{(nn)}^{\dag}s_{(nn)}
\nonumber \\
&\quad
-\left[s_{(nn)}^{\dag}\left(\psi_{n}^{T}P_{(nn)}^{({}^{1}{\rm S}_{0})}\psi_{n}\right)+\text{h.c.}\right]+\dotsb \,,
\end{align}
where the ellipses in the above expressions denote the sub-leading interaction terms irrelevant
in the context of our analysis.
Our three-body system exhibits the Efimov effect in the ZCL, as a straightforward consequence of the unitarity of the
two-body amplitude, which is also demonstrated by the numerical analysis below. Following the basic EFT tenet, this
necessitates the additional inclusion of a leading order non-derivatively coupled three-body contact term in the
Lagrangian for renormalization (see, e.g., Refs.~\cite{Braaten:2004rn,BS01,AH04,BHV98} for details).
In our case, such a three-body term is given by
\begin{align}
\mathcal{L}_{\text{3-body}} &={\mathcal L}^{({}^{1}{\rm S}_{0})}_{nd(nK)}+\cdots\,,\nonumber \\
{\mathcal L}^{({}^{1}{\rm S}_{0})}_{nd(nK)} &=
-\frac{m_{K} g_{s}(\mu)}{\mu^{2}}
\left(d^{T}_{(nK)}P_{(nd)}^{({}^{1}{\rm S}_{0})}\psi_{n}\right)^{\dag} \nonumber \\
& \hspace{2cm} \!\! \times \left(d^{T}_{(nK)}P_{(nd)}^{({}^{1}{\rm S}_{0})}\psi_{n}\right)+\cdots ,
\label{eq:L3body}
\end{align}
with the spin-singlet projection operator
\begin{align}
P_{(nd)}^{({}^{1}{\rm S}_{0})}
&=
-\frac{i}{\sqrt{2}}\sigma_{2}\,,
\end{align}
and $g_{s}(\mu)$, an {\it a priori} unknown scale-dependent three-body coupling.
Here we choose to promote to leading order only the three-body interaction corresponding
to the spin-singlet elastic channel, $nd_{(nK)}\to nd_{(nK)}$. The ellipses denote other allowed three-body
interaction terms, to be treated generally as sub-leading. It is important to note that it is precisely the presence of
the asymptotic limit cycle in our case that gives us the freedom to choose any one of the channels and promote the
corresponding three-body term.
\subsection{Three-body integral equation}
We now consider the integral equations for the meson-neutron-neutron three-body system in the total
spin-singlet channel. Such a system of equations can be constructed either by combining the dibaryon field
$s_{(nn)}$ and the flavored meson field $K$ or by combining the dihadron field $d_{(nK)}$ and the neutron
field $n$. The system of equations consists of two kinds of Faddeev-like partitions: a direct (elastic) hadron
exchange channel $nd_{(nK)}\to nd_{(nK)}$ (denoted as $t_{a}$), and a hadron rearrangement (inelastic) channel
$nd_{(nK)}\to s_{(nn)}K$ (denoted as $t_{b}$). For concreteness, the coupled-channel three-body equations for
the {\it half-off-shell} amplitudes $t_{a}$ and $t_{b}$ are diagrammatically displayed (omitting the three-body
contact terms for brevity) in Fig.~\ref{fig:diagram}~\cite{STM1,STM2,Canham:2008jd,Ando:2015fsa}.
\begin{figure*}[tbp]
\centering
\includegraphics[width=17cm]{adiag.eps}
\includegraphics[width=12cm]{bdiag.eps}
\caption{\label{fig:diagram}
Feynman diagrams for the three-body coupled integral equations where the three-body contact terms are
omitted for brevity. The solid and dashed lines represent the bare propagators of $n$ and $K(x)$,
respectively, and the double and zigzag lines stand for the dressed dihadron propagators of $s_{(nn)}$
and $d_{(nK)}$, respectively.}
\end{figure*}%
It is straightforward to obtain these equations following the Feynman rules from the Lagrangian
presented in the previous subsection. After $s$-wave projections, we obtain the following expressions:
\begin{widetext}
\begin{align}\hspace{-1cm}
\quad t_{a}(p^{\prime},p;E) &=
m_{K}\left\{\frac{1}{2p^{\prime}p}
\ln\left[\frac{p^{\prime 2}+p^{2}
+ap^{\prime}p
-2\mu_{(nK)}E
}
{p^{\prime 2}+p^{2}
-ap^{\prime}p
-2\mu_{(nK)}E
}\right]-\frac{g_s(\Lambda)}{\Lambda^2}\right\} \nonumber \\*
&\quad -\frac{m_{K}}{\pi\mu_{(nK)}}
\int_{0}^{\Lambda} dl \,\,l^2
\left\{\frac{1}{2p^{\prime}l}
\ln\left[\frac{p^{\prime 2}+l^{2}
+ap^{\prime}l
-2\mu_{(nK)}E}
{p^{\prime 2}+l^{2}
-ap^{\prime}l
-2\mu_{(nK)}E}\right]-\frac{g_s(\Lambda)}{\Lambda^2}\right\}
\frac{t_{a}(l,p;E)}{
\frac{1}{a_{d(nK)}}
-
\sqrt{-2\mu_{(nK)}E
+
\frac{\mu_{(nK)}}{\mu_{n(nK)}}l^{2}
-i0^{+}}
}
\nonumber \\*
&\quad -
\frac{\sqrt{2}}{\pi}
\int_{0}^{\Lambda} dl \frac{l}{p^{\prime}}
\ln\left[\frac{p^{\prime 2}+bl^{2}
+p^{\prime}l
-M_{n}E}
{p^{\prime 2}+bl^{2}
-p^{\prime}l
-M_{n}E}\right]
\frac{t_{b}(l,p;E)}{
\frac{1}{a_{s(nn)}}
-
\sqrt{-M_{n}E
+
\frac{M_{n}}{2\mu_{K(nn)}}l^{2}
-i0^{+}}
}
\label{eq:ta}
\end{align}
and
\begin{align}
\quad t_{b}(p^{\prime},p;E) &=
\frac{M_{n}}{\sqrt{2}p^{\prime}p}
\ln
\left[\frac{bp^{\prime 2}+p^{2}
+p^{\prime}p-M_{n}E
}
{bp^{\prime 2}+p^{2}
-p^{\prime}p-M_{n}E
}\right] \nonumber \\
&\quad
-\frac{M_{n}}{\sqrt{2}\pi \mu_{(nK)}}
\int_{0}^{\Lambda} dl \frac{l}{p^{\prime}}
\ln
\left[\frac{bp^{\prime 2}+ l^{2}
+p^{\prime}l-M_{n}E
}
{bp^{\prime 2}+l^{2}
-p^{\prime}l-M_{n}E
}\right]
\frac{t_{a}(l,p;E)}{
\frac{1}{a_{d(nK)}}
-
\sqrt{-2\mu_{(nK)}E+\frac{\mu_{(nK)}}{\mu_{n(nK)}}l^{2}
-i0^{+}}
} \,.
\label{eq:tb}
\end{align}
\end{widetext}
Here $E$ is the total center-of-mass kinetic energy of the three-body system, $p$ and $p^{\prime}$
are the initial and final state momenta, and $\mu_{(nK)}(x)=M_{n}m_{K}(x)/\left[M_{n}+m_{K}(x)\right]$
is the meson-neutron reduced mass. The $s$-wave meson-neutron and neutron-neutron scattering lengths,
$a_{d(nK)}\equiv a_{d(nK)}(x)$ and $a_{s(nn)}$, respectively, are given as
\begin{equation}
\frac{1}{a_{d(nK)}(x)}
= \frac{2\pi }{\mu_{(nK)}(x)\,g_{nK}}+\mu_{p.d.s.}\,,
\end{equation}
\begin{equation}
\frac{1}{a_{s(nn)}}
= \frac{4\pi }{M_{n}g_{nn}}+\mu_{p.d.s.} ,
\end{equation}
when the power divergence subtraction scheme~\cite{Kaplan:1998tg,Kaplan:1998we,vanKolck:1998bw} is adopted
with subtraction scale $\mu_{p.d.s.}\sim m_\pi$. Furthermore, we introduce the neutron-dihadron ($n+nK$)-reduced
mass as $\mu_{n(nK)}(x)=M_{n}[M_{n}+m_{K}(x)]/\left[2M_{n}+m_{K}(x)\right]$, along with the two other mass dependent
parameters:
\begin{align}
a(x)
&=
\frac{2\mu_{(nK)}(x)}{m_{K}(x)}, \quad
b(x)
=
\frac{M_{n}}{2\mu_{(nK)}(x)}\,.
\end{align}
In the above expressions, we have re-emphasized the dependence on the extrapolation parameter $x$ at a
general unphysical point $x\neq 0,1$.
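For orientation, the kinematic inputs entering Eqs.~(\ref{eq:ta}) and (\ref{eq:tb}) can be sketched numerically at the endpoint $x=1$. The following snippet (illustrative only; $M_n=939.57$ MeV and $m_{D^0}=1864.84$ MeV are assumed PDG-like values, and the helper names are ours) evaluates the reduced masses and the parameters $a(x)$, $b(x)$, and checks that the dressed $d_{(nK)}$ propagator denominator vanishes at the dimer pole $E=-E_D=-1/[2\mu_{(nK)}a_{d(nK)}^{2}]$, $l=0$:

```python
import math

HBARC = 197.327             # MeV fm
M_N, M_D = 939.57, 1864.84  # MeV (assumed PDG-like values)
A_D = 4.141 / HBARC         # ZCL D0-n scattering length in MeV^-1

mu_nK  = M_N * M_D / (M_N + M_D)              # meson-neutron reduced mass
mu_nnK = M_N * (M_N + M_D) / (2 * M_N + M_D)  # neutron-(nK) reduced mass
a_par  = 2 * mu_nK / M_D                      # a(x=1)
b_par  = M_N / (2 * mu_nK)                    # b(x=1)

def dimer_denominator(E, l):
    """Denominator of the dressed d_(nK) propagator appearing in Eqs. (ta)-(tb)."""
    return 1.0 / A_D - math.sqrt(-2 * mu_nK * E + (mu_nK / mu_nnK) * l**2)

E_D = 1.0 / (2 * mu_nK * A_D**2)  # dimer binding energy
print(f"mu_nK = {mu_nK:.1f} MeV, a = {a_par:.3f}, b = {b_par:.3f}, E_D = {E_D:.2f} MeV")
print(f"propagator denominator at the pole: {dimer_denominator(-E_D, 0.0):.2e}")
```

The resulting $E_D\simeq 1.82$ MeV agrees with the dimer threshold value quoted later in the text.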
The coupled integral equations must be numerically solved to obtain the non-perturbative
solutions to the three-body problem. It is important to note that these integral equations must be
regularized, e.g., by introducing the sharp momentum cut-off $\Lambda$ to remove ambiguities
in the asymptotic phase when the cut-off is taken to infinity~\cite{BHV98,DL63}. The resulting cut-off
scale dependence is renormalized by introducing the leading order three-body contact interaction
term displayed in Eq.~\eqref{eq:L3body} with a running coupling $g_s(\mu=\Lambda)$ that is
periodic in $\ln\Lambda$, leading to an RG limit cycle.
\subsection{Asymptotic analysis}
In order to determine whether the three-body system exhibits the Efimov effect, it is instructive to examine the
asymptotic behavior (both in the {\it scaling} as well as unitary limits) of the integral equations with the
cut-off removed (i.e., $\Lambda\to \infty$) and the three-body contact interaction excluded (see
Refs.~\cite{Braaten:2004rn,AH04,Canham:2009zq} for details).
In this limit, the scale of the off-shell momenta can be considered very large in comparison with
the inverse two-body scattering lengths and the eigenenergies, $E=-B_d<0$ of the three-body bound
state\footnote{
Here we use the notation, $B_d=-E>0$, to denote the {\it absolute} value of the trimer binding energy, measured with
respect to three-particle break-up threshold. This is to distinguish $B_d$ from our notation for the {\it relative}
trimer energy, $B_T=B_d-E_D>0$, measured from the particle-dimer break-up threshold energy of $E_D$
[also, see Eq.~\eqref{eq:ED}].}, i.e., $p^\prime,l \gg 1/a_{d(nK)},1/a_{s(nn)},\sqrt{2\mu_{(nK)}B_d},\sqrt{M_{n}B_{d}}$. In
particular, the asymptotic solution to the amplitudes is expected to follow a power law: $t_{a,b}(p^\prime)\propto p^{\prime s-1}$.
It is then straightforward to show that the coupled integral equations in this regime reduce to a single
transcendental equation that may be solved for the exponent $s$.
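The explicit form of the coupled-channel transcendental equation is lengthy and is not reproduced here, but the procedure can be illustrated with a sketch under the simplifying assumption of three identical bosons, for which the equation reduces to the well-known form $s\cos(\pi s/2)=\tfrac{8}{\sqrt{3}}\sin(\pi s/6)$ with imaginary root $s=is_{0}$, $s_{0}\simeq 1.00624$~\cite{Braaten:2004rn}. A simple bisection search (the function names are our own) recovers this root:

```python
import math

def efimov_condition(s0):
    """Zero of this function encodes 1 = 8 sinh(pi s0/6) / (sqrt(3) s0 cosh(pi s0/2)),
    the three-identical-boson transcendental equation evaluated at s = i*s0."""
    return (8 * math.sinh(math.pi * s0 / 6)
            / (math.sqrt(3) * s0 * math.cosh(math.pi * s0 / 2)) - 1.0)

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection root finder; assumes f(lo) and f(hi) bracket a root."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

s0 = bisect(efimov_condition, 0.5, 2.0)
print(f"s0 = {s0:.5f}, discrete scaling factor exp(pi/s0) = {math.exp(math.pi / s0):.2f}")
```

The same root-finding step applies to our coupled-channel equation, with the bosonic condition replaced by the appropriate mass-dependent one.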
By using the meson masses in the physical limits, i.e., for the antikaon $K^{-}(x=0)$ and the charmed meson
$D^{0}(x=1)$, as given in Table~\ref{tbl:hadronmass}, we obtain in each case a pair of imaginary solutions,
$s=\pm is_{0}^{\infty}$, with the respective transcendental number $s^{\infty}_0$ given by
\begin{align}
s_{0}^{\infty}
&=
\begin{cases}
1.03069 & \text{for } K^{-}nn\,, \\
1.02387 & \text{for }D^{0}nn\,.
\end{cases}
\end{align}
This confirms the previously obtained result by Braaten and Hammer (see Fig.\,52 of Ref.~\cite{Braaten:2004rn}).
In fact, the imaginary solutions of $s$ in the unphysical domain, i.e., $0< x< 1$, are {\it continuously connected}
in the process of interpolating the flavored meson mass $m \equiv m_K(x)$ in between the physical limits $m_{K^-}(x=0)$
and $m_{D^0}(x=1)$, as depicted in Fig.~\ref{fig:asymptotic}. We thereby conclude that the coupled integral equations are
ill-defined in the asymptotic limit, which means that, {\it formally}, the Efimov effect must be manifest in the three-body
system, not only at the physical limits ($x=0,1$), but at {\it all} intermediate unphysical points. Of course, one
should bear in mind the formal nature of the above results, which follow from a leading order asymptotic analysis
corresponding to the scaling limit of all the two-body interactions. The more robust physical realization of the
Efimov spectrum, i.e., whether or not it lies within an {\it Efimov window}, evidently relies on the nature of possible
crucial range effects.
\begin{figure}[tbp]
\includegraphics[width=9.1cm, height=5.6cm]{asymptotic.eps}
\caption{\label{fig:asymptotic}
Dependence of the asymptotic parameter $s_{0}^{\infty}$ on the flavored meson mass $m$, for $m_{K^-}<m<m_{D^0}$. }
\end{figure}
\subsection{Numerical results}
As already mentioned, the numerical evaluations of the three-body integral equation require the meson-neutron
and neutron-neutron $s$-wave scattering lengths as the principal two-body input to our leading order EFT
analysis. First, our numerical results are displayed with the spin-singlet $s$-wave $nn$ scattering length
taken at its physical value, $a_{s(nn)}=-18.63$ fm~\cite{Chen2008}. Second, we try to demonstrate the remnant
universal features of the non-asymptotic results in the physical sectors; to this end we neglect the
{\it absorptive} contributions of the decay/coupled channels for simplicity. For the purpose of
extrapolating to the unphysical domain to probe the unitary limit, it is convenient to use our predicted
scattering lengths in the ZCL, as displayed in Eq.~(\ref{eq:ZCL_ad}) in the physical limits ($x=0,1$),
namely,
\begin{eqnarray}
a_{d(nK^{-})}&\equiv &a_{d(nK)}(x=0)=a_{0,K^{-}n}^{\rm ZCL}=-0.394\,\,\text{fm}\,,\nonumber \\
a_{d(nD^{0})}&\equiv &a_{d(nK)}(x=1)=a_{0,D^{0}n}^{\rm ZCL}=4.141\,\,\text{fm}.
\label{eq:ad_ZCL}
\end{eqnarray}
We thereby solve the non-asymptotic integral equations \eqref{eq:ta} and \eqref{eq:tb} to obtain the
three-body energy eigenvalues as a function of the sharp cut-off regulator $\Lambda$.
We first include the three-body force containing the coupling $g_s(\Lambda)$ to investigate its approximate
limit cycle behavior at non-asymptotic momenta/energies. Later, we focus on the results excluding the
three-body term to investigate the behavior of the three-body eigenenergies in the physical sectors (i.e., the
$x=0,1$ limits), generated exclusively by the two-body dynamics. In particular, the cut-off
($\Lambda$) dependence of the results is investigated, which can crucially determine whether three-body
bound state formation is formally supported in the low-energy EFT with the zero-range approximation. Furthermore,
in the context of the ZCL approach, a study of the first ``critical'' cut-off $\Lambda_c^{(n=0)}$ (defined
later in this subsection), excluding the three-body force, is presented, whose variation with the
extrapolation parameter $x$ reveals vital information on three-body universality. Finally, in association
with our findings in the two-body sector, the above results may be used to predict the likelihood of realistic
Efimov-like bound states. A brief description of our methodology in the numerical solution to the integral
equations is presented in Appendix~\ref{sec:AppendixA}.
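For orientation, the type of numerical procedure involved can be illustrated on the much simpler single-channel problem of three identical bosons at unitarity and zero energy (a sketch of our own; the actual coupled-channel equations are treated as described in Appendix~\ref{sec:AppendixA}). Discretizing the scale-invariant STM kernel with Gauss--Legendre quadrature in $\ln q$ and counting eigenvalues above unity reproduces the log-periodic appearance of zero-energy trimers, with successive critical cut-offs spaced by the Efimov factor $e^{\pi/s_0}\simeq 22.7$:

```python
import numpy as np

def n_trimers(log_cutoff, npts=200):
    """Count zero-energy trimers for the unitary three-boson STM kernel with
    momenta restricted to [1, exp(log_cutoff)] (IR scale set to 1): number of
    eigenvalues > 1 of the symmetrized kernel
    (4/(sqrt(3)*pi)) * (p*q)**(-1/2) * ln[(p^2+p*q+q^2)/(p^2-p*q+q^2)]."""
    x, w = np.polynomial.legendre.leggauss(npts)
    t = 0.5 * log_cutoff * (x + 1.0)   # Gauss-Legendre nodes in ln q
    w = 0.5 * log_cutoff * w
    q = np.exp(t)
    p, qq = np.meshgrid(q, q, indexing="ij")
    logker = np.log((p**2 + p * qq + qq**2) / (p**2 - p * qq + qq**2))
    s_mat = (4.0 / (np.sqrt(3.0) * np.pi)) * np.sqrt(np.outer(w, w)) * logker
    return int(np.sum(np.linalg.eigvalsh(s_mat) > 1.0))

def critical_log_cutoff(n, lo=0.05, hi=25.0, tol=1e-3):
    """Bisection for the smallest log-cutoff at which the n-th trimer appears."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if n_trimers(mid) >= n:
            hi = mid
        else:
            lo = mid
    return hi

L1, L2 = critical_log_cutoff(1), critical_log_cutoff(2)
print(f"successive critical cutoffs differ by exp({L2 - L1:.3f}) = {np.exp(L2 - L1):.1f}")
# The spacing should approach pi/s0 with s0 ~ 1.00624, i.e., a factor ~ 22.7.
```

Our coupled-channel kernel, Eqs.~(\ref{eq:ta}) and (\ref{eq:tb}), is handled by the same Nystr\"om-type discretization, but with the mass-dependent kernels and dressed dihadron propagators shown above.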
We now present our result for the RG limit cycle of the three-body contact interaction coupling $g_{s}(\Lambda)$
in the $K^-nn$ ($D^0nn$) system. A proper estimation of this coupling requires prior knowledge of a three-body observable,
such as the $s$-wave three-particle (particle-dimer) scattering length or the trimer binding energy,
neither of which is currently available. However, even in the absence of such a three-body datum, we may study
the RG behavior of this coupling by fixing any presumably small/near-threshold value of the trimer binding energy,
say, $B_d, B_T = 0.1$ MeV, and varying the cut-off scale $\Lambda$. Various qualitative features concerning three-body
universality can then be deduced.
Figure~\ref{fig:g-lambda_zcl} demonstrates the typical cyclic or quasi-log-periodic behavior of the coupling
$g_s(\Lambda)$ for the $K^{-}nn$ and $D^0nn$ systems, reflecting an approximate RG limit cycle in each case.
The periodic divergences of the coupling are associated with the successive formation of three-body
bound states. For the $D^0nn$ system the first three bound states are evident for $\Lambda<10^5$ MeV, while for the
$K^-nn$ system only the first two are visible in this range. The qualitative feature of the limit cycle plots remains
unchanged whether the binding energy, $B_d$ ($B_T$), is chosen approaching the three-particle (particle-dimer)
break-up threshold, as shown in the figure, or somewhat away from it; there is basically a slight
downward and rightward shift of each curve with increasing $B_d$ ($B_T$), as the interaction becomes increasingly
attractive.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth, height=5.5cm]{g-lambda_zcl.eps}
\caption{\label{fig:g-lambda_zcl}
RG limit cycle for the three-body coupling $g_s$ as a function of the sharp momentum cut-off $\Lambda$. The pertinent
integral equations are solved with the input two-body scattering lengths from Eq.~(\ref{eq:ad_ZCL}) in the zero coupling
limit for fixed trimer eigenenergies, namely, $B_d=0.1$ MeV for $K^-nn$, measured with respect to the
three-particle break-up threshold, and $B_T=B_d-E_D= 0.1$ MeV for $D^0nn$, measured with respect to the particle-dimer
break-up threshold energy of $E_D=1.82$ MeV. }
\end{figure}
For the running coupling $g_{s}(\Lambda)$, the characteristic periodicity at the discrete cut-off values
$\Lambda = \Lambda^{(N)}$, $N\in \mathbb{N}$, can be expressed as
\begin{equation}
g_s(\Lambda^{(1)})
=g_s(\Lambda^{(N+1)})
\,\, {\rm with} \,\,
\Lambda^{(N+1)}
=\Lambda^{(1)}\,{\rm exp}\left(\frac{N\pi}{s_0}\right)
\,\,,
\end{equation}
where $s_0$ is a real three-body parameter reminiscent of the corresponding asymptotic limit cycle
value of $s^\infty_0=1.03069$ ($s^\infty_0=1.02387$) that we found earlier for the $K^-nn$ ($D^0nn$) system.
Physically, $s_0$ gives a measure of the residual approximate discrete scale invariance surviving in the
non-asymptotic regime. In particular, the running coupling vanishes quasi-periodically at a discrete set of
cut-off values, expressed as $\Lambda^{(N+1)}_0=\Lambda^{(1)}_0\,{\rm exp}\left(\frac{N\pi}{s_0}\right)$.
It is to be noted that $s^\infty_0$ is a universal number, depending only on the gross features of the
three-particle system, like ratios of the respective masses of the three particles involved, or the
overall spin and isospin quantum numbers of the system. On the other hand, the non-universal number $s_0$
deviates from $s^\infty_0$ primarily due to cut-off dependent effects, implying an implicit dependence on
$N$ itself, i.e., $s_0\equiv s^{(N)}_0$. Additional non-asymptotic parametric dependence on the trimer
binding energy, three-body coupling and the two-body scattering lengths, can further influence the
numerical value of $s^{(N)}_0$.
In other words, $s^{(N)}_0$, being the outcome of a numerical RG analysis, is purely numerical in nature, and
it is therefore difficult to ascribe to it a unique definition in terms of an analytical expression. However, reasonably
good estimates are obtained by taking successive discrete cut-offs, say, $\Lambda^{(N)}_0$ and $\Lambda^{(N+1)}_0$,
at which the coupling vanishes, $g_s(\Lambda^{(N)}_0)=g_s(\Lambda^{(N+1)}_0)=0$, and using the closed-form definition:
\begin{equation}
s_0^{(N)}=\frac{\pi}{\ln\left(\frac{\Lambda^{(N+1)}_0}{\Lambda^{(N)}_0}\right)}\,.
\end{equation}
A crucial observation regarding this definition is that the sequence of non-asymptotic numbers
$s^{(N)}_0$, as obtained above, gradually converges to the asymptotic value $s^\infty_0$
for suitably large $N$. Stated mathematically:
{\it for every $\epsilon > 0$,\, there exists $N_0 \in \mathbb{N}$ such that}
\begin{equation}
\left| s_0^{(N)} - s_0^{\infty} \right| < \epsilon\,,\quad \forall \, N > N_0\,.
\end{equation}
An alternative method of extraction of the parameter $s_0$ is presented in Appendix~\ref{sec:AppendixB}.
In the following numerical study, we shall exclude the three-body force since there is no
three-body input datum available to fix the coupling $g_s$. In Fig.~\ref{fig:bd-lambda_zcl},
the results for the trimer energy of the physical systems, $K^-nn\, (x=0)$ and $D^0nn\, (x=1)$,
are summarized. In the figure, the ground $(n=0)$ and the first excited $(n=1)$ state
binding energies, $-E=B_d>0$, of the $K^-nn$ {\it Borromean} trimer~\cite{Naidon:2016dpf}
(with both $a_{d(nK^{-})},a_{s(nn)}<0$), measured with respect to the three-particle breakup threshold,
are plotted as a function of $\Lambda$. The same figure also displays the corresponding
eigenenergies, $B_T=B_d-E_D$, of the $D^0nn$ trimer state, measured with respect to the
particle-dimer ($n+D^0n$) break-up threshold energy $E_D$. Note that the
$D^0n$ dimer binding energy, which is given by
\begin{equation}
E_D=\frac{1}{2\mu_{(nD)}a^2_{d(nD)}},
\label{eq:ED}
\end{equation}
is obtained as $E=-E_D= -1.82$ MeV, using the $D^0n$ reduced mass $\mu_{(nD)}=\frac{M_nm_D}{M_n+m_D}$ and the
corresponding ZCL scattering length given in Eq.~(\ref{eq:ad_ZCL}). It is to be noted that the value $-E_D$
closely matches the $D^0n$ bound state scattering amplitude pole position, $E_h=-1.85$ MeV
(see Fig.~\ref{fig:trajectory_y1}).
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth, height=5.5cm]{bd_lambda_zcl.eps}
\caption{\label{fig:bd-lambda_zcl}
Binding energies for the ground ($n=0$) and first ($n=1$) excited level states as a function of the sharp
momentum cut-off $\Lambda$, excluding the three-body force. The pertinent integral equations are solved with the input
two-body scattering lengths from Eq.~(\ref{eq:ad_ZCL}) in the zero coupling limit. For the $K^-nn$ system, the trimer
binding energy $B_d$, is measured with respect to the three-particle break-up threshold, while for the $D^0nn$ system the
binding energy, $B_T=B_d-E_D$, is measured with respect to the particle-dimer break-up threshold energy, $E_D=1.82$ MeV.}
\end{figure}
Figure~\ref{fig:bd-lambda_zcl} clearly indicates the increasing magnitude of the binding energies as the cut-off scale
$\Lambda$ increases.
In this figure, the first branch refers to the ground state of the $D^0nn$ ($K^-nn$) system, which appears
as a shallow threshold state at the so-called ``critical'' cut-off value of $\Lambda^{\rm (n=0)}_{c}\simeq 38.4$ MeV
($\simeq 2.3$ GeV). The state then becomes increasingly deeply bound with increasing cut-off, while a second branch (first
excited state) appears at the critical value, $\Lambda^{\rm (n=1)}_c\simeq 1$ GeV ($\simeq 52.2$ GeV), which in its turn gets
progressively deeper. Continuing in this manner, an infinite tower of excited states emerges from the zero-energy threshold
as $\Lambda\to\infty$.\footnote{
Because here we exclude the three-body force, the critical point associated with the ground state, denoted here
as $\Lambda_c^{(n=0)}$, corresponds to the first zero at $\Lambda=\Lambda_0^{(N=1)}$ of the three-body contact interaction,
i.e., $g_s(\Lambda_0^{(N=1)})=0$ in Fig.~\ref{fig:g-lambda_zcl}, for fixed trimer binding energy.} Again, as pointed out
earlier in the context of Fig.~\ref{fig:g-lambda_zcl}, only the ground state $D^0nn$ trimer is likely to satisfy conditions
that may formally qualify it as an Efimov-like state.
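As a rough consistency check (our own estimate, not part of the original analysis), the asymptotic discrete scaling relation $\Lambda_c^{(n=1)}\simeq\Lambda_c^{(n=0)}\,e^{\pi/s_0^{\infty}}$ can be applied to the quoted ground-state critical cut-offs; the predicted first-excited-state values agree with the quoted ones at the $10$--$20\%$ level, the residual deviation reflecting non-asymptotic effects:

```python
import math

# Quoted ground-state critical cutoffs (MeV) and asymptotic Efimov parameters
systems = {
    "D0nn": {"lambda_c0": 38.4,  "s0_inf": 1.02387, "lambda_c1_quoted": 1.0e3},
    "K-nn": {"lambda_c0": 2.3e3, "s0_inf": 1.03069, "lambda_c1_quoted": 52.2e3},
}

for name, d in systems.items():
    factor = math.exp(math.pi / d["s0_inf"])  # asymptotic discrete scaling factor
    pred = d["lambda_c0"] * factor            # predicted Lambda_c^(n=1)
    dev = abs(pred - d["lambda_c1_quoted"]) / d["lambda_c1_quoted"]
    print(f"{name}: factor = {factor:.2f}, predicted = {pred:.0f} MeV "
          f"(quoted {d['lambda_c1_quoted']:.0f} MeV, {100 * dev:.0f}% off)")
```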
Let us finally study the flavor extrapolation from the $K^-nn$ system ($x=0$) to the $D^0nn$ system
($x=1$). As seen in Sec.~\ref{subsec:extrapolation}, a continuous extrapolation of the meson-neutron
system from $K^-n$ to $D^0n$ (see Fig.~\ref{fig:slengthreal}) in the ZCL yielded a unitary limit
of the meson-neutron interaction in the proximity of $x\sim 0.6$. Here, excluding the three-body force, we
investigate the behavior of the ``critical point'' $\Lambda^{(n=0)}_c$, corresponding to the ground state ($n=0$),
as a function of the extrapolation parameter $x$ for fixed (absolute or relative) trimer binding energy
($B_d$ or $B_T$). In other words, we perform an extrapolation in the interval $x \in [0,0.615)$, i.e., along
the three-particle break-up threshold with a fixed value of $B_d$, and continue further in the interval
$x \in (0.615,1]$, i.e., along the particle-dimer break-up threshold with a fixed value of $B_T=B_d-E_D$,
where $E_D\equiv E_D(x)$ in the latter interval is itself $x$ dependent. It may be noted that level energies
chosen too close to the thresholds can lead to numerical instabilities in the vicinity of the unitary limit.
We have, therefore, chosen non-zero but near-threshold binding energies, say, $B_d,B_T\gtrsim 0.001$ MeV,
to suit our purpose of demonstration.
Figure~\ref{fig:lambdac-x} displays our results where the meson mass and the $s$-wave meson-neutron scattering
length are simultaneously extrapolated from $x=0$ to $x=1$.
The large change ($\sim 3$ orders of magnitude) in the critical cut-off along $x \in [0,0.615)$,
compared to the rather nominal change along $x \in (0.615,1]$, suggests that the $D^0nn$ system is much more likely
than the $K^-nn$ system to lie in the domain of three-body universality and to yield an Efimov-like bound state.
This reflects the much larger scattering length, $a_{0,D^0n}^{\rm ZCL}\sim 4$ fm, compared with that in the
strangeness sector, $a_{0,K^-n}^{\rm ZCL}\sim -0.4$ fm. The horizontal dotted line in the above figure indicates the
{\it one-pion threshold}, the upper cut-off of the pionless EFT employed in this analysis.\footnote{
In fact, it may be noted that the cut-off scale of the short-distance two-body interaction should be taken
still larger since the {\it one-pion exchange} process is forbidden in the $K^{-}n$ or $D^{0}n$ system.} A direct
comparison with the behavior of the critical cut-off indicates that a $D^0nn$ ground state trimer is clearly supported
in the realm of the low-energy EFT framework, lying well below the hard scale ${}^{\pi\!\!\!/}\Lambda \sim m_\pi$, whereas
that for the $K^-nn$ system lies far beyond the applicability of the low-energy EFT and can thus be considered
effectively unbound by Efimov attraction.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth, height=5.5cm]{nnk_lambdac-x.eps}
\caption{\label{fig:lambdac-x}
Extrapolation of the critical cut-off $\Lambda^{(0)}_c$, associated with the ground state
($n=0$) for fixed binding energies ($B_d$ or $B_T$) of the meson-neutron-neutron bound state,
measured along the three-particle or particle-dimer break-up threshold. The double
dashed vertical line at $x = 0.615$ denotes the unitary limit. The plot corresponds to
the zero coupling limit with input scattering lengths, as given in Eq.~(\ref{eq:ad_ZCL}).
The three-body force is excluded in these results.}
\end{figure}
\section{Discussion and summary}\label{sec:summary}
In this work, we have explored the two- and three-body universal physics associated with a
three-particle cluster state of two neutrons and a flavored meson, i.e., an antikaon
$K^{-}$ or a $D^{0}$ meson. We have demonstrated that the meson-neutron scattering length
can become infinitely large in an idealized limit, around which universality governs the
physics of two-body and three-body systems. This offers an interesting prospect that a possible
remnant of this universal physics may be observed in the physical hadronic systems.
First, we have studied the two-body meson-neutron systems. Motivated by the experimental
evidence that the $K^{-}n$ ($D^{0}n$) system has no quasi-bound state (one quasi-bound state),
we constructed a theoretical coupled-channel model to describe both the $K^{-}n$ and $D^{0}n$ systems
with a single flavor extrapolation parameter $x$. This model enables the extrapolation to
unphysical quark masses between the strange $(x=0)$ and charm $(x=1)$ flavor limits, providing
a natural mechanism to tune the strength of the meson-neutron interaction. While the universal
features are not very prominent due to the coupled-channel effects, it was shown that the
meson-neutron interaction can reach resonant conditions with large two-body scattering length
around $x\sim 0.6$ in the zero coupling limit (ZCL) with the channel couplings switched off.
We further investigated two-body universality by comparing the eigenmomentum of the system
with that predicted solely by the two-body scattering lengths.
This led to the identification of the entire region $x\gtrsim 0.3$ as the two-body universal
window in the ZCL,
including the charm sector ($x=1$). Moreover, the universal prediction works reasonably well in the
full coupled-channel model with complex meson-neutron scattering lengths in the extrapolation region
above $x\gtrsim 0.8$. This indicates that universality governs not only the idealized ZCL scenario
but also the physical $D^0n$ system, both being continuously connected. One can then conclude that
the physical charm sector is likely to follow the predictions of universality, even with the couplings
to the decay channels included.
Next, we investigated three-body universality by employing a low-energy cluster EFT.
In a simplified approach, we neglected the absorptive influence of sub-threshold decay channels,
which amounts to taking the $s$-wave meson-neutron scattering lengths to be real-valued. A leading order
EFT analysis allowed us to investigate the meson-neutron-neutron system in the scaling limit (zero-range
approximation) where the universal physics was determined only by the two $s$-wave scattering lengths $a_{d(nK)}$
and $a_{s(nn)}$. In this work $a_{s(nn)}$ was kept fixed to the physical value, while $a_{d(nK)} \equiv a_{d(nK)}(x)$
was varied as a function of $x$.
The introduction of a sharp momentum cut-off $\Lambda$ in the integral equations led to the breaking of
continuous scale invariance resulting in the introduction of range-like effects. In this context, it may be
noted that effective range corrections were not explicitly considered in this qualitative leading order analysis.
We anticipate that the range correction would not be significant in the charm sector based on the two-body analysis
in Figs.~\ref{fig:eigenmomentumcomp} and \ref{fig:eigenmomentumreal}. For a definitive conclusion, however, it is important
to examine the range correction explicitly, which is left as a future prospect of this work.
An immediate consequence of the breaking of continuous scale invariance in the three-body sector is the
appearance of a discrete scaling symmetry in the solutions to the integral equations that can be associated
with an asymptotic RG limit cycle. In this context, Fig.~\ref{fig:asymptotic} suggests that the asymptotic limit
cycle between the $K^-nn$ and $D^0nn$ systems continuously exists for all intermediate values of $x$ so that the
respective asymptotic parameters $s^\infty_0=1.03069$ (strange sector) and $s^\infty_0=1.02387$ (charm sector) are
smoothly connected. This leads us to conclude that, when the influence of decay channels and range corrections is neglected,
the Efimov effect can always be manifested in the solutions to the three-body coupled integral equations. In fact,
from Fig.~\ref{fig:g-lambda_zcl}, we confirmed approximate RG limit cycle behavior (with quasi-log-periodicity)
of the three-body running coupling $g_s(\Lambda)$ in the non-asymptotic solutions for each system.
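For orientation (our illustration, not part of the original analysis), the quoted asymptotic parameters fix the discrete scaling factor of the Efimov spectrum through the standard relation $\lambda=e^{\pi/s_0^\infty}$, under which successive binding energies scale asymptotically as $B_{n+1}/B_n\to\lambda^{-2}$:

```python
import math

def efimov_scaling_factor(s0):
    """Discrete scaling factor lambda = exp(pi/s0); successive Efimov
    levels satisfy B_{n+1}/B_n -> 1/lambda^2 in the asymptotic regime."""
    return math.exp(math.pi / s0)

# Asymptotic parameters quoted in the text (ZCL, decay channels neglected):
for label, s0 in [("K-nn (strange)", 1.03069), ("D0nn (charm)", 1.02387)]:
    print(f"{label}: s0 = {s0}, lambda = {efimov_scaling_factor(s0):.2f}")
```

Both values give $\lambda\approx 21$, close to the canonical factor $\approx 22.7$ for three identical bosons ($s_0\simeq 1.00624$), so the two limit cycles are indeed nearly identical, consistent with the smooth connection seen in Fig.~\ref{fig:asymptotic}.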
Next, the binding energies of the three-body systems in the physical limits were obtained as a function of the cut-off
$\Lambda$. For the sake of simplicity, we excluded the dependence on the three-body force with its unknown coupling
$g_s$. We found that the binding energies increase with increasing $\Lambda$, and the various level states emerge in
order from the zero energy threshold, starting with the ground state, which appears at a certain critical value,
$\Lambda=\Lambda^{(0)}_c$. It may be noted that a larger value of the momentum cut-off leads to additional inclusion of
ultraviolet physics into the theory arising from high energy modes. This evidently means greater attraction in the system
leading to larger three-body binding energies and the emergence of new level states. It clearly emerges from our results
that the Efimov spectrum is much steeper on the negative side of the scattering lengths. With the much smaller critical
cut-offs for the $D^0nn$ level states than those of the $K^-nn$ system, it is straightforward to conclude that the $D^0nn$
trimer states are manifested more easily.
Finally, as shown in Fig.~\ref{fig:lambdac-x}, the behavior of the lowest critical cut-off $\Lambda_c^{(0)}$
associated with the emergence of near threshold (three-body) ground states was obtained as a function of $x$
using the extrapolated meson-neutron scattering length in the ZCL. Clearly with smaller and smaller chosen
values of $B_d$ or $B_T$, $\Lambda_c^{(0)}$ converged to the point
($x\simeq 0.6$,\,$\Lambda_c^{(0)}\simeq 0$), at which near threshold states were realized corresponding
to the vanishing scale of two-body unitarity. As $x$ is taken away from unitarity toward the physical limits,
$x=0,1$, $\Lambda_c^{(0)}$ becomes larger, indicating larger two-body interaction strengths needed to form
three-body bound states. Clearly, the $x=1$ physical limit ($D^0nn$ system) is located much closer to the
unitary limit at $x\simeq 0.6$ than the $x=0$ physical limit ($K^-nn$ system). This naturally indicates
that a $D^0nn$ ground state trimer can be realized in the ZCL within the pionless EFT framework, with $\Lambda_c^{(0)}$
lying well below the pion mass. In fact, this is a straightforward consequence of the large $D^0n$ scattering length
in the ZCL, i.e., $a_{0,D^0n}^{\rm ZCL}\sim 4$ fm, which traces back to the existence of $\Sigma_c(2800)$ near the $D^0n$
threshold. For the $K^-nn$ system, by contrast, with $\Lambda_c^{(0)}\gtrsim 2$ GeV, no physically realizable mechanism in the
context of a low-energy EFT can generate sufficient interaction strength to form bound states. In a sense, the much
steeper Efimov spectrum for the $K^-nn$ system is consistent with the fact that such Borromean Efimov trimers are by
nature extremely difficult to form.
In retrospect, through a simplistic qualitative analysis we have presented here a rather idealistic scenario
whereby universal Efimov physics may be realized in a meson-neutron-neutron system, despite the smallness
of the meson-neutron scattering lengths. Such a scenario may very well be unrealistic given the rather delicate
nature of the Efimov-like physics, which may easily become obscured or wiped out altogether by range and
coupled-channel effects with $C_{i1}=C_{1i}\neq 0$ ($i=2,3,4$) restored. Nevertheless, it may still be somewhat
interesting to pursue further studies to assess the exact nature of the $D^{0}nn$ system, which has not
previously been considered as a possible candidate for a hadronic molecule. Having said that, it must also be borne
in mind that the dynamics of the sub-threshold decay channels which were systematically neglected in this work
can play a vital role to ultimately decide whether the physical $D^0nn$ system supports a quasi-bound state.
With sizable imaginary parts in the meson-neutron scattering lengths, the dynamical effect of the decay channel
should be carefully checked in a more robust coupled channel framework. This may be useful to gain better insights
into the universal aspects of three-body dynamics, especially in the presence of two-body quasi-bound subsystems
with large couplings to open channels. Thus, before drawing definitive conclusions, the present leading order
EFT analysis, being rather simplistic, should be elaborated either by including decay channels (employing complex
scattering lengths) together with sub-leading order effective range corrections, or by employing rigorous few-body
techniques combined with realistic two-body potentials, e.g., as pursued in Ref.~\cite{Bayar:2012dd}.
\section{Introduction}
Since the very early years of quantum theory, theorists have considered the interaction of low-energy atoms and molecules with surfaces \cite{lj1,*lj2,*lj3}. In comparison to a classical particle, a quantum particle at low energy was predicted to have a reduced probability of adsorbing to a surface. The reason is that, despite the long-range attractive van der Waals interaction between a neutral particle and a surface, at sufficiently low energies quantum particles have little probability of coming near the surface \cite{dpc92}.
This effect is named ``quantum reflection,'' and it is a simple result of the wave-like nature of low-energy particles moving in a finite-ranged attractive potential. This reduction in the particle's probability density near the surface leads to a reduction in the transition probability of the particle to a state bound to the surface. In one of the earliest applications of quantum perturbation theory, Lennard-Jones and Devonshire concluded that the probability of a neutral particle with energy $E$ sticking to the surface should vanish as $\sqrt{E}$ as $E\to 0$.
In contrast, charged particles do not experience the effects of quantum reflection. Far from the surface, charged particles interact with the surface through a Coulomb potential. Due to the slow spatial variation of the Coulomb potential, incident particles behave semiclassically. As a result, Clougherty and Kohn \cite{dpc92} found that the sticking probability should tend to a non-vanishing constant as $E\to 0$.
The seemingly universal scaling law for neutral particles was shown to hold even within a non-perturbative model that includes arbitrarily strong quantum fluctuations of the surface \cite{dpc92, dpc03}. This model, however, was regularized
with the use of a low-frequency cutoff. Thus the effects of an infrared divergence involving low-frequency excitations were not included in the analysis.
In the eighties, experiments went to sub-millikelvin temperatures to look for this threshold-law scaling in a variety of physical systems, without success \cite{doyle91}. Theorists \cite{carraro92} realized that the experiments suffered from unwanted interactions with the substrate supporting the target, a superfluid helium film. By increasing the thickness of the film, the next generation of experiments \cite{yu93} produced data consistent with the $\sqrt{E}$ law, and the controversy subsided.
In recent years, with dramatic advances in producing and manipulating ultracold atoms, there is renewed interest in interactions between low-energy atoms and surfaces. New technologies have been proposed that rely on the quantum dynamics of ultracold atoms near surfaces; microfabricated devices called ``atom chips'' would store and manipulate cold atoms near surfaces for quantum information processing and precision metrology \cite{chip04}. Our understanding of device performance will depend in part on our understanding of ultracold atom-surface interactions. Experiment is now in a position to test detailed theoretical predictions on the behavior of low-energy sticking and scattering from surfaces.
In this Letter, we consider theoretically a non-perturbative model that focusses on the effects of low-frequency excitations on quantum reflection and sticking. We follow the mean-field variational method introduced by Silbey and Harris in their analysis of the quantum dynamics of the spin-boson model \cite{sb}. Using this method we analyze the effects of the infrared divergence on the sticking process. Our analysis reveals two distinct scaling regimes in the parameter space, in analogy with the localized and delocalized phases of the spin-boson model. In the delocalized regime, an infrared divergence in the bath is cut off by an energy scale that depends on the incident energy of the particle $E$. As a consequence, we find that the threshold laws for both neutral and charged particles are modified by the dissipative coupling strength $\alpha$. As a result of the low frequency fluctuations, the threshold law for neutral particles is no longer universal, and the threshold law for charged particles no longer precludes perfect reflection at ultralow energies.
We take for our model a particle coupled to a bath of oscillators
\begin{equation}
H=H_p+H_b+H_c
\end{equation}
where
\begin{eqnarray}
H_p&=&E c_k^\dagger c_k -E_b b^\dagger b,\\
H_b&=&\sum_q{\omega_q {a_q^\dagger} a_q},\\
H_c&=&-i(c_k^\dagger b+b^\dagger c_k)g_{1}\sum_q \sigma\left(\omega_{q}\right) \ ({a_q-a_q^\dagger})
-ic_k^\dagger c_k g_{2}\sum_q \sigma\left(\omega_{q}\right)\ ({a_q-a_q^\dagger}) \nonumber\\
&&-i b^\dagger b g_{3}\sum_q \sigma\left(\omega_{q}\right) \ ({a_q-a_q^\dagger})
\end{eqnarray}
where $g_{1}$, $g_{2}$ and $g_{3}$ are model coupling constants and $\sigma\left(\omega_{q}\right)$ depends on the specific particle-excitation coupling mechanism. $c_k^\dagger$ ($c_k$) creates (annihilates) a particle in the entrance channel $\left|k\right\rangle$ with energy
$E$; $b^\dagger$ ($b$) creates (annihilates) a particle in the bound state $\left|b\right\rangle$ with energy
$-E_b$.
$a_q^\dagger$ ($a_q$) creates (annihilates) a boson in the target bath with energy $\omega_q$. (We use natural units throughout where $\hbar=1$.) We work in the regime where $E\ll E_b$. We neglect the probability of ``prompt'' inelastic scattering, where bosons are created and the particle escapes to infinity with degraded energy, as the phase space available for these processes vanishes as $E\to 0$. Thus only the incoming and bound channels are retained for the particle.
We consider a model with ohmic dissipative spectral density. Such a model can be realized with a semi-infinite elastic solid where the incident particle couples to the surface strain. The spectral density function that characterizes the coupling to the excitation bath is given by
\begin{equation}
J(\omega)\equiv\sum_{q}g^{2}_{1}\sigma^{2}\left(\omega_{q}\right)\delta(\omega-\omega_{q})=g^{2}_{1}\rho\omega
\end{equation}
where $\rho$ is a frequency-independent constant.
This model differs in an important way from the model of Ref.~\cite{dpc92}, where low frequency modes were cut off to prevent an infrared divergence in the rms displacement of the surface atom. In this model, low frequency modes are included, and their effects on quantum reflection and sticking are the focus of this study.
We start with the variational approach used by Silbey and Harris \cite{sb} for the ohmic spin-boson model. A generalized unitary transformation $U=e^S$ is first performed on the Hamiltonian $H$, with
\begin{equation}
S=i b^\dagger b x
\end{equation}
and
\begin{equation}
x=\sum_q {{f_q\over\omega_q} (a_q+a_q^\dagger)}
\end{equation}
The variational parameters to be determined are denoted by $f_q$. The unitary transformation displaces the oscillators to new equilibrium positions in the presence of the particle bound to the surface and leaves the oscillators unshifted when the particle is in the continuum state.
The transformed Hamiltonian $\tilde H$ is given by
\begin{eqnarray}
{\tilde H}&=&e^S H e^{-S}\\
&=&{\tilde H_p}+{\tilde H_b}+{\tilde H_c}
\end{eqnarray}
where
\begin{eqnarray}
{\tilde H_p}&=&E c_k^\dagger c_k -{\tilde E_b}b^\dagger b,\\
{\tilde H_c}&=&-ic^{\dagger}_{k}b\sum_{q}g_{1q}(a_{q}-a_{q}^{\dagger})e^{-ix}-ib^{\dagger}c_{k}e^{ix}\sum_{q}g_{1q}(a_{q}-a_{q}^{\dagger})\nonumber\\
&&-ic_{k}^{\dagger}c_{k}\sum_{q}g_{2q}(a_{q}-a_{q}^{\dagger})-ib^{\dagger}b\sum_{q}(g_{3q}-f_{q})(a_{q}-a_{q}^{\dagger})\\
{\tilde H_b}&=&H_b\\
{\tilde E_{b}}&=&E_{b}+\sum_{q}\frac{2f_{q}g_{3q}-f_{q}^{2}}{\omega_{q}}
\end{eqnarray}
and where $g_{iq}\equiv g_{i}\sigma\left(\omega_{q}\right)$. We define a mean transitional matrix element $\Delta$
\begin{equation}
\Delta\equiv i\left\langle e^{ix}\sum_{q}g_{1q}(a_{q}-a_{q}^{\dagger})\right\rangle
\end{equation}
where $ \langle\cdots\rangle$ denotes the expectation over the bath modes.
The Hamiltonian is then separated into the following form
\begin{equation}
{\tilde H}=H_0+V
\end{equation}
where $V$ is chosen such that $\left\langle V\right\rangle=0$. Hence, we obtain
\begin{eqnarray}
H_0&=&E c_{k}^{\dagger}c_{k}-\tilde{E_{b}}b^{\dagger}b-\Delta^{*}c_{k}^{\dagger}b-\Delta b^{\dagger}c_{k}+\sum_{q}\omega_{q}a_{q}^{\dagger}a_{q}\\
V&=&-c^{\dagger}_{k}b\left(i\sum_{q}g_{1q}(a_{q}-a_{q}^{\dagger})e^{-ix}-\Delta^{*}\right)-b^{\dagger}c_{k}\left(ie^{ix}\sum_{q}g_{1q}(a_{q}-a_{q}^{\dagger})-\Delta\right)\nonumber\\
&&-ic_{k}^{\dagger}c_{k}\sum_{q}g_{2q}(a_{q}-a_{q}^{\dagger})-ib^{\dagger}b\sum_{q}(g_{3q}-f_{q})(a_{q}-a_{q}^{\dagger})
\end{eqnarray}
We calculate the ground state energy of ${H_0}$ in terms of the variational parameters $\left\{f_{q}\right\}$ and minimize to obtain the following condition
\begin{equation}
f_{q}\left(1+\frac{\epsilon+2\Delta^{2}\omega_{q}^{-1}}{\sqrt{\epsilon^{2}+4\Delta^{2}}}\right)=g_{3q}\left(1+\frac{\epsilon}{\sqrt{\epsilon^{2}+4\Delta^{2}}}\right)+\frac{2\Delta\sqrt{u}g_{1q}}{\sqrt{\epsilon^{2}+4\Delta^{2}}}
\label{selffq}
\end{equation}
which is an implicit equation for $f_{q}$. For convenience, in the above we have defined
\begin{equation}
\epsilon=E+\tilde{E_{b}}=E+E_{b}+\sum_{q}\frac{2f_{q}g_{3q}-f_{q}^{2}}{\omega_{q}}
\label{epsilon}
\end{equation}
and
\begin{eqnarray}
\Delta&=&\sqrt{u}\Omega_{1},
\label{delta}\\
u&\equiv&e^{-\sum_{q}\frac{f_{q}^{2}}{\omega_{q}^{2}}},
\label{u1}\\
\Omega_{1}&\equiv&\sum_{q}\frac{g_{1q}f_{q}}{\omega_{q}}
\label{omega1}
\end{eqnarray}
Under the condition ${\Delta}\ll{\epsilon}$,
Eq.(\ref{selffq}) can be simplified to
\begin{equation}
f_{q}=\frac{g_{3q}}{1+\frac{z}{\omega_{q}}}
\label{fq}
\end{equation}
where
\begin{equation}
z\equiv\frac{\Delta^{2}}{\epsilon}
\label{defz}
\end{equation}
Using Eq.(\ref{fq}), Eqs.(\ref{u1}), (\ref{omega1}) and (\ref{epsilon}) can be rewritten in terms of $z$
\begin{eqnarray}
u&=&(1+\frac{\omega_{c}}{z})^{-g_{3}^{2}\rho}e^{g_{3}^{2}\rho/(1+\frac{z}{\omega_{c}})}
\label{u2}\\
\Omega_{1}&=&g_{1}g_{3}\rho\omega_{c}-g_{1}g_{3}\rho z \ln\frac{\omega_{c}+z}{z}\\
\epsilon&=&E+E_{b}+g^{2}_{3}\rho\omega_{c}-g^{2}_{3}\rho\omega_{c}\frac{z}{\omega_{c}+z}
\end{eqnarray}
where $\omega_{c}$ is the upper cutoff frequency of the bath. According to Eq.(\ref{defz}) and the condition $\Delta\ll\epsilon$, ${z}\ll{\omega_{c}}$ must be satisfied. This leads to the following
\begin{eqnarray}
\Omega_{1}&\approx&g_{1}g_{3}\rho\omega_{c}
\label{solomega1}\\
\epsilon&\approx&E+E_{b}+g_{3}^{2}\rho\omega_{c}
\label{bias}
\end{eqnarray}
Substitution into Eq. (\ref{defz}) gives the self-consistent equation for $z$
\begin{equation}
z=K(1+\frac{\omega_{c}}{z})^{-g_{3}^{2}\rho}e^{g_{3}^{2}\rho/(1+\frac{z}{\omega_{c}})}
\label{selfz}
\end{equation}
where
\begin{equation}
K\approx\frac{(g_{1}g_{3}\rho\omega_{c})^{2}}{E+E_{b}+g_{3}^{2}\rho\omega_{c}}
\end{equation}
It is straightforward to find the following closed-form expression for $z$, valid for $z\ll \omega_c$
\begin{equation}
z\approx K(\frac{eK}{\omega_{c}})^{\frac{\alpha}{1-\alpha}}
\label{z}
\end{equation}
where $\alpha$, the dissipative coupling strength, is given by $\alpha\equiv g^{2}_{3}\rho$.
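As a numerical sanity check (our illustration, with hypothetical parameter values quoted in units of $\omega_c$ and chosen so that $z\ll\omega_c$ holds), one can compare the closed form of Eq.~(\ref{z}) against a direct fixed-point solution of Eq.~(\ref{selfz}):

```python
import math

def z_closed_form(K, alpha, omega_c):
    """Closed form z ~ K (eK/omega_c)^(alpha/(1-alpha)), valid for z << omega_c."""
    return K * (math.e * K / omega_c) ** (alpha / (1.0 - alpha))

def z_fixed_point(K, alpha, omega_c, tol=1e-12, max_iter=1000):
    """Iterate z -> K (1 + omega_c/z)^(-alpha) exp(alpha/(1 + z/omega_c));
    the map is contracting near the root for alpha < 1."""
    z = z_closed_form(K, alpha, omega_c)  # closed form as the starting guess
    for _ in range(max_iter):
        z_new = K * (1.0 + omega_c / z) ** (-alpha) * math.exp(alpha / (1.0 + z / omega_c))
        if abs(z_new - z) < tol * z:
            return z_new
        z = z_new
    return z

# Hypothetical illustrative values (units of omega_c):
omega_c, alpha, K = 1.0, 0.3, 1e-3
z_num = z_fixed_point(K, alpha, omega_c)
z_cf = z_closed_form(K, alpha, omega_c)
print(f"fixed point: z = {z_num:.6e}, closed form: z = {z_cf:.6e}")
```

The two agree closely here because the terms neglected in the closed form are of relative order $z/\omega_c$.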
Depending on the value of $\alpha$, there are two solutions for the variational parameters $f_{q}$. We see from Eq.~\ref{z} that as $\alpha\to 1$, $z\to 0$. Thus,
\begin{equation}
\label{eq:fq}
f_{q}\approx
\begin{cases}
g_{3q} & \text{$\alpha \ge 1$} \\
\frac{g_{3q}}{1+\frac{z}{\omega_{q}}} & \text{$\alpha < 1$}
\end{cases}
\end{equation}
In the regime where $\alpha <1$, we see that for excitations with frequency $\omega_q \ll z$, the parameter $f_q$ vanishes as $\omega_q\to 0$. It is this weakening of the coupling to non-adiabatic excitations that allows us to extract a finite mean transitional matrix element. In the process, the sticking rate is altered from the perturbative result.
We can now show that the condition ${\Delta}\ll{\epsilon}$ is satisfied so that our variational solution is self-consistent. According to Eq.~(\ref{defz}),
${\Delta}/{\epsilon}=\sqrt{{z}/{\epsilon}}$.
For $\alpha\geq1$, $z=0$, so ${\Delta}=0$ and ${\Delta}\ll{\epsilon}$ holds true. For $\alpha<1$, $z\sim g^{\frac{2}{1-\alpha}}_{1}$. The coupling constant $g_1$ has a dependence on the initial energy of the particle $E$. This can be seen from the transition matrix element
\begin{equation}
g_{1q}=-i \langle b, 1_q|H_c| k, 0\rangle
\label{g1}
\end{equation}
The amplitude of the initial state in the vicinity of the surface is suppressed by quantum reflection. It is a simple consequence of wave mechanics \cite{dpc92} that in the low energy regime, $g_{1q}\sim \sqrt{E}$ as $E\to 0$ for a neutral particle. For a charged particle, the coupling constant behaves as $g_{1q}\sim {E^{1/4}}$ as $E\to 0$, as it is not subject to the effects of quantum reflection.
Thus in either case, the mean-field amplitude $\Delta$ becomes arbitrarily small as $E$ tends to zero, while $\epsilon$ approaches a non-zero value. Consequently the conditions for our variational solution are always satisfied for sufficiently cold particles.
For ${\Delta}\ll{\epsilon}$, the rate of incoming atoms sticking to the surface can be calculated using Fermi's golden rule \cite{leggett, *leggett2}:
\begin{equation}
R=2\pi \sum_{q}\left|\left\langle b,1_{q}\left|\tilde{H}_{c}\right|k,0\right\rangle\right|^{2}\delta\left(-\tilde{E}_{b}-E+\omega_{q}\right)
\label{R1}
\end{equation}
where $|1_q\rangle$ denotes a state of one excitation with wave vector $q$.
After calculating the relevant matrix elements, the rate becomes
\begin{equation}
R=2\pi e^{-\sum_{q}\frac{f_{q}^{2}}{\omega_{q}^{2}}}\sum_{q}\left(g_{1q}-\frac{f_{q}}{\omega_{q}}\sum_{q^{'}}\frac{f_{q^{'}}g_{1q^{'}}}{\omega_{q^{'}}}\right)^{2}\delta\left(-\tilde{E}_{b}-E+\omega_{q}\right)
\label{r3}
\end{equation}
After some algebra, we find the leading order of the rate $R$ in the incident energy $E$ to be
\begin{equation}
R=2\pi(\frac{z}{\omega_{c}})^{\alpha}e^{\alpha}g^{2}_{1}\rho E_{b}\left(\frac{E_b}{E_{b}+\alpha\omega_{c}}\right)
\label{r2}
\end{equation}
where $z$, given in Eq.~\ref{z}, is a constant with a power-law dependence on $g_{1}$.
We compare this rate to that obtained by Fermi's golden rule on the untransformed Hamiltonian
\begin{eqnarray}
R&=&2\pi \sum_{q}\left|\left\langle b,1_{q}\left|H_{c}\right|k,0\right\rangle\right|^{2}\delta\left(-E_{b}-E+\omega_{q}\right)\nonumber\\
&=&2\pi g^{2}_{1}\rho E_{b}
\end{eqnarray}
The matrix elements of the transformed Hamiltonian $\tilde{H}_{c}$ are reduced by a Franck-Condon factor, which gives the non-perturbative rate an additional dependence on $z$.
The coupling constant $g_{1}$ can be expressed in terms of a matrix element of the unperturbed states using Eq.~\ref{g1}. We take $H_{c}$ to have the general form in coordinate space
\begin{equation}
H_{c}=-i U(x)\sum_{q}\sigma\left(\omega_{q}\right)\left(a_{q}-a^{\dagger}_{q}\right)
\end{equation}
The coupling constant $g_{1}$ is given by
\begin{equation}
g_{1}=\left\langle k\left|U\right|b\right\rangle=\int^{\infty}_{0}{\phi^{*}_{k}(x)U(x)\phi_{b}(x)dx}
\label{g1k}
\end{equation}
(We have assumed the case of normal incidence, however results for the more general case follow from decomposing the wave vector into normal and transverse components \cite{berlinsky}.)
The continuum wave functions have the asymptotic form for a neutral particle
\begin{equation}
\phi_{k}(x)\stackrel{k\rightarrow0}{\sim} {k\ h_1(x)}
\label{phikn}
\end{equation}
and for a charged particle \cite{dpc92},
\begin{equation}
\phi_{k}(x)\stackrel{k\rightarrow0}{\sim} {\sqrt{k}\ h_2(x)}
\label{phikc}
\end{equation}
where $k=\sqrt{2mE}$, and $h_i(x)$ are functions, independent of $E$.
The probability of sticking to the surface $\it s$ is the sticking rate per surface area per unit incoming particle flux. Hence,
${\it s}(E)=\sqrt{2\pi^2 m\over { E}}R$. (We use delta-function normalization for the continuum wave functions.) From Eq.~\ref{r2} we conclude that with $\alpha<1$ for a neutral particle,
\begin{equation}
{\it s}(E)\sim C_1\, E^{(1+\alpha)/(2(1-\alpha))},\ \ \ E\to 0
\end{equation}
and for a charged particle,
\begin{equation}
{\it s}(E)\sim C_2\, E^{\alpha/(2(1-\alpha))},\ \ \ E\to 0
\end{equation}
where $C_i$ are energy-independent constants.
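These exponents follow from straightforward power counting: with $g_1\sim E^{p}$ ($p=1/2$ for a neutral particle, $p=1/4$ for a charged one), Eq.~(\ref{z}) gives $z\sim g_1^{2/(1-\alpha)}$, Eq.~(\ref{r2}) gives $R\sim z^{\alpha}g_1^{2}$, and ${\it s}\sim E^{-1/2}R$. A short sketch (our illustration) checks this bookkeeping in exact rational arithmetic:

```python
from fractions import Fraction

def sticking_exponent(p, alpha):
    """Exponent of E in s(E) ~ E^(-1/2) R, given g1 ~ E^p.

    Bookkeeping in the alpha < 1 regime:
      z ~ g1^(2/(1-alpha))   -> exponent 2p/(1-alpha)
      R ~ z^alpha * g1^2     -> exponent alpha*2p/(1-alpha) + 2p
      s ~ E^(-1/2) * R       -> subtract 1/2
    """
    z_exp = 2 * p / (1 - alpha)
    r_exp = alpha * z_exp + 2 * p
    return r_exp - Fraction(1, 2)

for alpha in [Fraction(1, 10), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]:
    neutral = sticking_exponent(Fraction(1, 2), alpha)  # quantum-reflected neutral particle
    charged = sticking_exponent(Fraction(1, 4), alpha)  # semiclassical charged particle
    # Compare with the closed-form exponents quoted in the text:
    assert neutral == (1 + alpha) / (2 * (1 - alpha))
    assert charged == alpha / (2 * (1 - alpha))
    print(f"alpha = {alpha}: neutral exponent {neutral}, charged exponent {charged}")
```

In the limit $\alpha\to 0$ the exponents reduce to $1/2$ and $0$, recovering the Lennard-Jones $\sqrt{E}$ law and the constant charged-particle sticking probability of the perturbative treatment.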
In summary, we have considered the effects of the infrared singularity resulting from interaction with an ohmic bath on surface sticking. Using a variational mean-field method, we calculated the sticking rate as a function of the incident energy in the low-energy asymptotic regime. We have shown that for an ohmic excitation bath the threshold rate for neutral particles decreases more rapidly with decreasing energy $E$ than the perturbative rate. We predict a new threshold law for neutral-particle surface sticking, whose energy dependence depends on the dissipative coupling $\alpha$.
The new threshold law is a result of a bosonic orthogonality catastrophe \cite{mahan}; the ground states of the bath with different particle states are orthogonal. The sticking transition amplitude acquires a Franck-Condon factor whose infrared singularity is cutoff by $z$. As with the x-ray absorption edge \cite{mahan}, a new power law results at threshold. The low-frequency fluctuations alter the power law to a bath-dependent non-universal exponent.
For the case of charged particles, we find that dissipative coupling causes the sticking probability to vanish as $E\to 0$, in contrast to the perturbative result \cite{dpc92}. Thus, ``quantum mirrors'' --surfaces that become perfectly reflective to particles with incident energies asymptotically approaching zero-- can also exist for charged particles.
\begin{acknowledgments}
DPC thanks Daniel Fisher, Walter Kohn and James Langer for stimulating discussions on various aspects of this problem. We gratefully acknowledge support by the National Science Foundation (DMR-0814377).
\end{acknowledgments}
\section{Introduction}
\label{sec:0}
White dwarf binaries are thought to be the most common binaries in the
Universe, and in our Galaxy their number is estimated to be as high as 10$^8$.
In addition, most stars are known to be part of binary systems, roughly half
of which have orbital periods short enough that the evolution of the two stars
is strongly influenced by the presence of a companion. Furthermore, it has
become clear from observed close binaries that a large fraction of binaries
that interacted in the past must have lost considerable amounts of angular
momentum, thus forming compact binaries, with compact stellar components. The
details of the evolution leading to the loss of
angular momentum are uncertain, but generally this is interpreted in the
framework of the so called ``common-envelope evolution'': the picture that in a
mass-transfer phase between a giant and a more compact companion the
companion quickly ends up inside the giant's envelope, after which
frictional processes slow down the companion and the core of the
giant, causing the ``common envelope'' to be expelled, as well as
the orbital separation to shrink dramatically \cite{Taam and Sandquist (2000)}.
Among the most compact binaries known, often called ultra-compact or ultra-short
binaries, are those hosting two white dwarfs; these are classified into two
types:
\emph{detached} binaries, in which the two components are relatively widely
separated, and \emph{interacting} binaries, in which mass is transferred from one
component to the other. In the latter class a white dwarf is accreting from a
white-dwarf-like object (we often refer to these as AM CVn systems, after the
prototype of the class, the variable star AM CVn; \cite{warn95,Nelemans
(2005)}).
\begin{figure}
\includegraphics[height=7.5cm,angle=0]{P_Mtot_SPYnew1}
\caption{Period versus total mass of double white dwarfs. The points and arrows
are observed systems \cite{Nelemans et al. (2005)}, the
grey shade a model for the
Galactic
population. Systems to the left of the dashed line will merge within a Hubble
time, systems above the dotted line have a combined mass above the
Chandrasekhar mass. The top left corner shows the region of possible type Ia
supernova progenitors, where the grey shade has been darkened for better
visibility (adapted from \cite{Nelemans (2007)}). }
\label{fig:P_Mtot}
\end{figure}
In the past many authors have emphasised the importance of studying white dwarfs
in DDBs. Indeed, the study of ultra-short
white dwarf binaries is relevant to several important astrophysical questions,
which have been outlined by various authors. Recently, \cite{Nelemans
(2007)} listed the
following ones:
\begin{itemize}
\item {\em Binary evolution} Double white dwarfs are excellent tests of
binary evolution. In particular the orbital shrinkage during the
common-envelope phase can be tested using double white dwarfs. The
reason is that for giants there is a direct relation between
the mass of the core (which becomes a white dwarf and so its mass is
still measurable today) and the radius of the giant. The latter
carries information about the (minimal) separation between the two
components in the binary before the common envelope, while the
separation after the common envelope can be estimated from the
current orbital period. This enables a detailed reconstruction of
the evolution leading from a binary consisting of two main sequence
stars to a close double white dwarf \cite{Nelemans et al.(2000)}.
The interesting
conclusion of this exercise is that the standard schematic
description of the common envelope -- in which the
envelope is expelled at the expense of the orbital energy -- cannot
be correct. An alternative scheme, based on the angular momentum,
for the moment seems to be able to explain all the observations
\cite{Nelemans and Tout (2005)}.
\item {\em Type Ia supernovae} Type Ia supernovae have peak
brightnesses that are well correlated with the shape of their light
curve \cite{Phillips (1993)}, making them ideal standard candles to determine
distances. The measurement of the apparent brightness of far away
supernovae as a function of redshift has led to the conclusion that
the expansion of the universe is accelerating
\cite{Perlmutter et al. (1998),Riess et al.(2004)}. This depends on the
assumption
that these
far-away (and thus old) supernovae behave the same as their local
cousins, which is a quite reasonable assumption. However, one of the
problems is that we do not know what exactly explodes and why, so
the likelihood of this assumption is difficult to assess
\cite{Podsiadlowski et al. (2006)}. One of the proposed progenitor models
for type Ia supernovae is a massive close double white
dwarf that explodes when the two stars merge \cite{Iben and Tutukov
(1984)}.
In Fig.~\ref{fig:P_Mtot} the observed double
white dwarfs are compared to a model for the Galactic population of
double white dwarfs \cite{Nelemans et al. (2001)}, in
which the merger rate of
massive double white dwarfs is similar to the type Ia supernova
rate. The grey shade in the relevant corner of the diagram is
enhanced for visibility. The discovery of at least one system in
this box confirms the viability of this model (in terms of event
rates).
\item {\em Accretion physics} The fact that in AM CVn systems the mass-losing
star is an evolved, hydrogen-deficient star
gives rise to a unique astrophysical laboratory, with accretion
discs made of almost pure helium \cite{Marsh et al. (1991),Schulz et
al.(2001),Groot et al. (2001),Roelofs et al. (2006),Werner et al. (2006)}.
This opens the possibility of testing the behaviour of accretion discs of
different chemical composition.
\item {\em Gravitational wave emission}
Until recently, DDBs hosting two neutron stars were considered among the best
sources of detectable gravitational wave emission, mainly due to the relatively
high chirp mass expected for such systems. Indeed, the strength of the expected
gravitational wave amplitude can be inferred from \cite{Evans et al. (1987)}
\begin{equation}
h = \left[ \frac{16 \pi G L_{GW}} {c^3 \omega^2_g 4 \pi d^2} \right] ^{1/2} =
10^{-21} \left( \frac{{\cal{M}}}{{\it M}_{\odot}}
\right)^{5/3} \left ( \frac{P_{orb}}{\rm 1
hr} \right)^{-2/3} \left ( \frac{d}{\rm 1 kpc} \right)^{-1}
\end{equation}
where
\begin{equation}
L_{GW} = \frac{32}{5}\frac{G^4}{c^5}\frac{M^2 m^2 (m+M)}{a^5} ;
\end{equation}
\begin{equation}
{\cal{M}}=\frac{(Mm)^{3/5}}{(M+m)^{1/5}}
\end{equation}
and the frequency of the wave is given by $f = 2/P_{orb}$. It is evident that
the strain signal $h$ from DDBs hosting neutron stars is a factor of 5--20
higher than for DDBs with white dwarfs, as long as the orbital period is
larger than approximately 10--20 minutes. In recent years, AM CVns have
received great
attention as they represent a large population of guaranteed sources for the
forthcoming \textit{Laser Interferometer Space Antenna}
\cite{2006astro.ph..5722N,2005ApJ...633L..33S}. Double WD
binaries enter the \textit{LISA}
observational window (0.1 $\div$ 100 mHz) at an orbital period $\sim$ 5 hrs
and, as they evolve secularly through GW emission, they cross the whole
\textit{LISA} band. They are expected to be so numerous ($\sim 10^3 \div 10^4$
expected), close on average, and luminous in GWs as to create a stochastic
foreground that dominates the \textit{LISA} observational window up to
$\approx$ 3 mHz \cite{2005ApJ...633L..33S}. Detailed knowledge of the
characteristics of their background signal would thus be needed to model it and
study weaker background GW signals of cosmological origin.
\end{itemize}
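The scaling above can be made concrete with a short numerical sketch. Only the
normalization quoted in the equation is assumed ($h \simeq 10^{-21}$ for
${\cal{M}} = 1\,M_{\odot}$, $P_{orb} = 1$ hr, $d = 1$ kpc); the component
masses, period and distance below are purely illustrative choices.

```python
def chirp_mass(m1, m2):
    """Chirp mass in solar masses for component masses in solar masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def strain(m1, m2, porb_hr, d_kpc):
    """GW strain h from the scaling relation quoted in the text."""
    mc = chirp_mass(m1, m2)
    return 1e-21 * mc ** (5.0 / 3.0) * porb_hr ** (-2.0 / 3.0) / d_kpc

porb_hr, d_kpc = 0.5, 1.0               # illustrative 30 min binary at 1 kpc
h_wd = strain(0.5, 0.5, porb_hr, d_kpc)  # double white dwarf
h_ns = strain(1.4, 1.4, porb_hr, d_kpc)  # double neutron star, same orbit
print(h_wd, h_ns, h_ns / h_wd)
```

At equal orbital period and distance the ratio depends only on the chirp
masses, $(1.22/0.44)^{5/3} \approx 5.6$, at the low end of the factor 5--20
quoted above.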
\begin{figure}
\includegraphics[height=12cm,angle=-90]{Galactic_GWR1}
\caption{Expected signals of ultra-compact binaries; the ones with error bars
are the observed systems (adapted from \cite{Roelofs et al.
(2006),Nelemans (2007)}).}
\label{fig:fh_HST}
\end{figure}
A relatively small number of ultracompact DDB systems is presently known.
According to \cite{2006MNRAS.367L..62R} there exist 17
confirmed objects with
orbital periods in the $10 \div 70$ min range in which a hydrogen-deficient mass
donor, either a semi-degenerate star or a WD itself, is present. These are
called AM CVn systems and are roughly characterized by optical emission
modulated at the orbital period, X-ray emission showing no evidence for a
significant modulation (from which a moderately magnetic primary is suggested,
\cite{2006astro.ph.10357R}) and, in the few cases where
timing analyses could be
carried out, orbital spin-down consistent with GW emission-driven mass
transfer.
In addition there exist two peculiar objects, sharing a number of observational
properties that partially match those of the ``standard'' AM CVn's. They
are
RX\,J0806.3+1527\ and RX\,J1914.4+2456, whose X-ray emission is $\sim$ 100\% pulsed, with
on-phase and off-phase of approximately equal duration. The single modulations
found in their lightcurves, both in the optical and in X-rays, correspond to
periods of, respectively, 321.5 and 569 s
(\cite{2004MSAIS...5..148I,2002ApJ...581..577S}) and were
first interpreted as orbital periods. If so, these two objects
are the binary systems with the shortest orbital period known and could belong
to the AM CVn class. However, in addition to peculiar emission properties with
respect to other AM CVn's, timing analyses carried out by the above cited
authors demonstrate that, in this interpretation, these two objects have
shrinking orbits. This is contrary to what is expected in mass-transferring
double white dwarf systems (including AM CVn systems) and suggests the
possibility that the binary is detached, with the orbit shrinking because of GW
emission. The electromagnetic emission would have in turn to be caused by some
other kind of interaction.
Nonetheless, there are a number of alternative models to account for the
observed properties, all of them based upon binary systems. The intermediate
polar (IP) model (\cite{Motch et al.(1996),io99,Norton et al. (2004)}) is
the only one in
which the pulsation periods are not assumed to be orbital. In this model, the
pulsations are likely due to the spin of a white dwarf accreting from a
non-degenerate secondary star. Moreover, due to geometrical constraints, the
orbital period is not expected to be detectable. The other two models assume a
double white dwarf binary in which the pulsation period is the orbital period.
Each of them invokes a semi-detached, accreting double white dwarf: one is
magnetic, the double degenerate polar model
(\cite{crop98,ram02a,ram02b,io02a}), while the
other is non-magnetic, the
direct impact model (\cite{Nelemans et al. (2001),Marsh and
Steeghs(2002),ram02a}), in which,
due
to the compact dimensions of these systems, the mass transfer stream is forced
to hit directly onto the accreting white dwarf rather than forming an
accretion disk.
\begin{table}[!ht]
\caption{Overview of observational properties of AM CVn stars (adapted from
\cite{Nelemans (2005)})}
\label{tab:overview}
\smallskip
\begin{center}
\hspace*{-0.5cm}
{\small
\begin{tabular}{lllllllcc}\hline
Name & $P_{\rm orb}^a$ & & $P_{\rm sh}^a$ & Spectrum & Phot. var$^b$ & dist &
X-ray$^c$ &
UV$^d$ \\
& (s) & & (s) & & & (pc) & & \\
\hline \hline
ES Cet & 621 &(p/s) & & Em & orb & 350& C$^3$X & GI \\
AM CVn & 1029 &(s/p) & 1051 & Abs & orb & 606$^{+135}_{-95}$ & RX & HI \\
HP Lib & 1103 &(p) & 1119 & Abs & orb & 197$^{+13}_{-12}$ & X & HI \\
CR Boo & 1471 &(p) & 1487 & Abs/Em? & OB/orb & 337$^{+43}_{-35}$& ARX & I \\
KL Dra & 1500 &(p) & 1530 & Abs/Em? & OB/orb & & & \\
V803 Cen & 1612 &(p) & 1618 & Abs/Em? & OB/orb & & Rx & FHI \\
SDSSJ0926+36 & 1698.6& (p) & & & orb & & & \\
CP Eri & 1701 &(p) & 1716 & Abs/Em & OB/orb & & & H \\
2003aw & ? & & 2042 & Em/Abs? & OB/orb & & & \\
SDSSJ1240-01 & 2242 &(s) & & Em & n & & & \\
GP Com & 2794 &(s) & & Em & n & 75$\pm2$ & ARX & HI \\
CE315 & 3906 &(s) & & Em & n & 77? & R(?)X & H \\
& & & & & & & & \\
Candidates & & & & & & & & \\\hline\hline
RXJ0806+15 & 321 &(X/p) & & He/H?$^{11}$ & ``orb'' & &
CRX & \\
V407 Vul & 569 &(X/p) & & K-star$^{16}$ & ``orb'' & &
ARCRxX & \\ \hline
\end{tabular}
}
\end{center}
{\small
$a$ orb = orbital, sh = superhump; periods from \cite{ww03}, see
references therein; (p)/(s)/(X) for photometric, spectroscopic, X-ray period.\\
$b$ orb = orbital, OB = outburst\\
$c$ A = ASCA, C = Chandra, R = ROSAT, Rx = RXTE, X = XMM-Newton \cite{kns+04}\\
$d$ F = FUSE, G = GALEX, H = HST, I = IUE
}
\end{table}
After a brief presentation of the two X-ray selected double degenerate binary
systems, we discuss the main scenario proposed for them, the
Unipolar Inductor Model (UIM), introduced by \cite{2002MNRAS.331..221W} and
further developed by \cite{2006A&A...447..785D,2006astro.ph..3795D}, and compare
its predictions with the salient observed
properties of these two sources.
\subsection{RX\,J0806.3+1527}
\label{0:j0806}
RX\,J0806.3+1527\ was discovered in 1990 with the {\em ROSAT}\ satellite during the All-Sky Survey
(RASS; \cite{beu99}). However, it was only in 1999 that a
periodic
signal at 321\,s was detected in its soft X-ray flux with the {\em ROSAT}\ HRI
(\cite{io99,bur01}).
Subsequent deeper optical studies made it possible to unambiguously identify
the optical counterpart of RX\,J0806.3+1527, a blue $V=21.1$ ($B=20.7$) star
(\cite{io02a,io02b}). $B$, $V$ and $R$ time-resolved photometry revealed the
presence of a $\sim 15$\% modulation at the $\sim 321$\,s X-ray period
(\cite{io02b,ram02a}).
\begin{figure*}[htb]
\resizebox{16pc}{!}{\rotatebox[]{-90}{\includegraphics{new_spec_norm.ps}}}
\caption{VLT FORS1 medium (6\AA; 3900--6000\AA) and low (30\AA; above
6000\AA) resolution spectra obtained for the optical counterpart of
RX\,J0806.3+1527. Numerous faint emission lines of HeI and HeII (blended
with H) are labeled (adapted from \cite{io02b}).}
\label{spec}
\end{figure*}
\begin{figure*}[hbt]
\centering
\resizebox{20pc}{!}{\includegraphics{israel_f1.eps}
}
\caption{Left panel: Results of the phase fitting technique used
to infer the P-\.P coherent solution for RX\,J0806.3+1527: the linear term (P
component) has been corrected, while the quadratic term (the \.P
component) has been kept for clarity. The best \.P solution inferred
for the optical band is marked by the solid fit line. Right panel:
2001-2004 optical flux measurements at different wavelengths.}
\label{timing}
\end{figure*}
The VLT spectral study revealed a blue continuum with no intrinsic
absorption lines \cite{io02b}. Broad ($\rm FWHM\sim 1500~\rm
km~s^{-1}$), low equivalent width ($EW\sim -2\div-6$ \AA) emission
lines from the He~II Pickering series (plus additional emission lines
likely associated with He~I, C~III, N~III, etc.; for a different interpretation
see \cite{rei04}) were instead
detected \cite{io02b}. These findings, together with the period stability and
absence of any additional modulation in the 1\,min--5\,hr period
range, were interpreted in terms of a double degenerate He-rich binary
(a subset of the AM CVn class; see \cite{warn95}) with
an orbital period of 321\,s, the shortest ever recorded. Moreover,
RX\,J0806.3+1527\ was noticed to have optical/X-ray properties similar to
those of RX\,J1914.4+2456, a 569\,s modulated soft X-ray source proposed
as a double degenerate system (\cite{crop98,ram00,ram02b}).
In past years, the detection of spin-up for the 321\,s orbital modulation was
reported, at a rate of $\sim$6.2$\times$10$^{-11}$\,s~s$^{-1}$,
based on optical data taken with the
Nordic Optical Telescope (NOT) and from the VLT archive, using
incoherent timing techniques \cite{hak03,hak04}.
Similar results were
also reported for the X-ray data (ROSAT and Chandra; \cite{stro03})
of RX\,J0806.3+1527,
spanning more than 10 years of sparse observations and based on the NOT
results \cite{hak03}.
A Telescopio Nazionale Galileo (TNG) long-term project (started in 2000),
devoted to the study of the long-term timing properties of RX\,J0806.3+1527, found a
slightly energy-dependent pulse shape, with the pulsed fraction increasing
toward longer wavelengths, from $\sim$12\% in the B-band to
nearly 14\% in the I-band (see lower right panel of Figure~\ref{QU};
\cite{2004MSAIS...5..148I}). An additional variability of the optical pulse
shape as a function of time, at a level of 4\%, was
detected (see upper right panel of Figure~\ref{QU}). The first coherent
timing solution was also
inferred for this source, firmly establishing that the source is spinning up:
P=321.53033(2)\,s and \.P=-3.67(1)$\times$10$^{-11}$\,s~s$^{-1}$ (90\%
uncertainties are reported; \cite{2004MSAIS...5..148I}).
Reference \cite{2005ApJ...627..920S} independently obtained a
phase-coherent timing solution for
the orbital period of this source over a similar baseline, fully
consistent with that of \cite{2004MSAIS...5..148I}. See
\cite{2007MNRAS.374.1334B} for a similar coherent timing solution
that also includes the covariance
terms of the fitted parameters.
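The phase-fitting technique behind these coherent solutions can be sketched as
follows: pulse phases accumulated over a multi-year baseline are fitted with a
quadratic, whose linear and quadratic coefficients yield $P$ and \.P. The
sketch below uses synthetic phases generated near the published solution; the
epoch sampling and noise level are illustrative assumptions.

```python
import numpy as np

# Phase-coherent timing sketch: model the pulse phase (in cycles) as
# phi(t) = nu * t + 0.5 * nudot * t**2, fit a quadratic, read off P and Pdot.
# P_true and Pdot_true are set near the published RX J0806.3+1527 solution;
# the 40 epochs over ~3 yr and the 0.01-cycle noise are illustrative.
P_true, Pdot_true = 321.53033, -3.67e-11          # s, s/s
nu, nudot = 1.0 / P_true, -Pdot_true / P_true**2  # Hz, Hz/s

t = np.linspace(0.0, 3.0 * 3.156e7, 40)           # observing epochs (s)
rng = np.random.default_rng(0)
phi = nu * t + 0.5 * nudot * t**2 + rng.normal(0.0, 0.01, t.size)

c2, c1, c0 = np.polyfit(t, phi, 2)                # c1 ~ nu, c2 ~ nudot / 2
P_fit = 1.0 / c1
Pdot_fit = -2.0 * c2 * P_fit**2
print(P_fit, Pdot_fit)
```

The quadratic term accumulates only $\sim$1--2 cycles over the baseline, which
is why a phase-connected solution is far more sensitive to \.P than incoherent
period measurements.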
\begin{figure}[hbt]
\centering
\resizebox{33pc}{!}{\rotatebox[]{0}{\includegraphics{israel_f2.eps}}}
\caption{Left Panel: The 1994--2002 phase coherently connected X--ray folded
light curves (filled squares; 100\% pulsed fraction) of RX\,J0806.3+1527, together with the
VLT-TNG 2001-2004 phase connected folded optical light curves (filled circles).
Two orbital cycles are reported for clarity. A near anti-correlation was
found.
Right panels: Analysis of the phase variations induced by pulse shape changes
in the optical band (upper panel), and the pulsed fraction as a function of
optical wavelengths (lower panel). }
\label{QU}%
\end{figure}
The relatively high accuracy obtained for the optical phase coherent
P-\.P solution (in the January 2001 - May 2004 interval) was used to
extend its validity backward to the ROSAT observations without losing
phase coherency, i.e. with only one possible period cycle count consistent with
our P-\.P solution. The best X--ray phase coherent
solution is P=321.53038(2)\,s, \.P=-3.661(5)$\times$10$^{-11}$\,s~s$^{-1}$ (for
more details see \cite{2004MSAIS...5..148I}). Figure~\ref{QU}
(left panel) shows the
optical (2001-2004) and X--ray (1994-2002) light curves folded by using the
above reported P-\.P coherent solution, confirming the amazing
stability of the X--ray/optical anti-correlation first noted by
(\cite{2003ApJ...598..492I}; see inset of left panel of
Figure\,\ref{QU}).
\begin{figure}
\centering
\resizebox{20pc}{!}{\rotatebox[]{-90}{\includegraphics{israel_f3.eps}}}
\caption{The results of the {\em XMM--Newton}\ phase-resolved spectroscopy
(PRS)
analysis for the absorbed blackbody spectral parameters: absorption,
blackbody temperature, blackbody radius (assuming a distance of
500\,pc), and absorbed (triangles) and unabsorbed (asterisks)
flux. Superposed is the folded X-ray light curve. }
\label{xmm}%
\end{figure}
\begin{figure}
\centering
\resizebox{16pc}{!}{\rotatebox[]{-90}{\includegraphics{israel_f4.eps}}}
\caption{ Broad-band energy
spectrum of
RX\,J0806.3+1527\ as inferred from the {\em Chandra}, {\em XMM--Newton}, VLT and TNG
measurements and {\it EUVE\/} upper limits. The dotted line represents one of
the possible
fitting blackbody models for the IR/optical/UV bands.}
\label{xmm}%
\end{figure}
In 2001, a Chandra observation of RX\,J0806.3+1527, carried out simultaneously with
time-resolved optical observations at the VLT, allowed the details of the X-ray
emission and the phase shift between the X-ray and optical bands to be studied
for the first time. The X-ray spectrum is consistent with a blackbody with a
temperature of $\sim$60\,eV, occulted as a function of the modulation phase
\cite{2003ApJ...598..492I}. A 0.5 phase shift between the X-rays
and the optical band was reported \cite{2003ApJ...598..492I}. More recently, a
0.2 phase shift was
found by analysing the whole historical X-ray and optical dataset; this
latter result is considered the correct one \cite{2007MNRAS.374.1334B}.
On 2002 November 1$^{\rm st}$, a second deep X-ray observation was obtained with
the {\em XMM--Newton}\ instruments, lasting about 26000\,s and providing increased
spectral accuracy (see left panel of
Figure~\ref{xmm}). The {\em XMM--Newton}\ data show a lower value of the absorption column,
a relatively constant blackbody temperature, a
smaller blackbody size and, correspondingly, a slightly lower
flux. All these differences may be ascribed to the pile-up effect in
the Chandra data, even though we cannot completely rule out the
presence of real spectral variations as a function of time. In any case,
we note that this result is in agreement with the idea of a
self-eclipsing (due only to a geometrical effect) small, hot,
X-ray emitting region on the primary star. Timing analysis did not
show any additional significant signal at periods longer or shorter
than 321.5\,s (in the 5\,hr--200\,ms interval). By using the {\em XMM--Newton}\ OM, a
first look at the source in the UV band was obtained (see right panel of
Figure~\ref{xmm}),
confirming the presence of the blackbody component inferred from the IR/optical
bands.
Reference \cite{2003ApJ...598..492I} measured an on-phase
X-ray luminosity (in the range
0.1-2.5 keV) $L_X = 8 \times 10^{31} (d/200~\mbox{pc})^2$ erg s$^{-1}$ for
this source. These authors suggested that the bolometric luminosity might even
be dominated by the (unseen) value of the UV flux, and reach values up to
5-6 times higher.
The optical flux is only $\sim$ 15\% pulsed, indicating that most of it might
not be associated with the same mechanism producing the pulsed X-ray emission
(possibly the cooling luminosity of the WD plays a role). Given these
uncertainties and, mainly, the uncertainty in the distance to the source, a
luminosity $W\simeq 10^{32} (d/200~\mbox{pc})^2$ erg s$^{-1}$ will be assumed
as a reference value.\\
\subsection{RX\,J1914.4+2456\ }
\label{0:j1914}
The luminosity and distance of this source have been subject to much debate
in recent years. Reference \cite{2002MNRAS.331..221W} refers
to earlier ASCA measurements that,
for a distance of 200-500 pc, corresponded to a luminosity in the range
($4\times 10^{33} \div 2.5 \times 10^{34}$) erg s$^{-1}$. Reference
\cite{2005MNRAS.357...49R}, based on more recent XMM-Newton
observations and a standard blackbody
fit to the X-ray spectrum, derived an X-ray luminosity of $\simeq 10^{35}
d^2_{\mbox{\tiny{kpc}}}$ erg s$^{-1}$, where $d_{\mbox{\tiny{kpc}}}$ is the
distance in kpc. The larger distance of $\sim$ 1 kpc was based on a work by
\cite{2006ApJ...649..382S}. Still more recently,
\cite{2006MNRAS.367L..62R} find that an optically thin
thermal emission spectrum,
with an edge at 0.83 keV attributed to O VIII, gives a significantly better
fit to the data than a blackbody model. The optically thin thermal plasma
model implies a much lower bolometric luminosity of L$_{\mbox{\tiny{bol}}}
\simeq 10^{33}$ d$^2_{\mbox{\tiny{kpc}}}$ erg s$^{-1}$.
\\
Reference \cite{2006MNRAS.367L..62R} also notes that the
determination of a 1 kpc distance
is not free of uncertainties and that a minimum distance of $\sim 200$ pc
might still be possible: the latter leads to a minimum luminosity of $\sim 3
\times 10^{31}$ erg s$^{-1}$. \\
Given these large discrepancies, interpretation of this source's properties
remains ambiguous and dependent on assumptions. In the following, we refer to
the more recent assessment by \cite{2006MNRAS.367L..62R}
of a luminosity $L =
10^{33}$ erg s$^{-1}$ for a 1 kpc distance.\\
Reference \cite{2006MNRAS.367L..62R} also finds possible
evidence, at least in a few
observations, of two secondary peaks in power spectra. These are very close to
($\Delta \nu \simeq 5\times 10^{-5}$ Hz) and symmetrically distributed around
the strongest peak at $\sim 1.76 \times 10^{-3}$ Hz. References
\cite{2006MNRAS.367L..62R} and \cite{2006ApJ...649L..99D}
discuss the implications of this possible finding.
\section{The Unipolar Inductor Model}
\label{sec:1}
The Unipolar Inductor Model (UIM) was originally proposed to
explain the origin of bursts of
decametric radiation received from Jupiter, whose properties appear to be
strongly influenced by the orbital location of Jupiter's satellite Io
\cite{1969ApJ...156...59G,1977Moon...17..373P}.\\
The model relies on Jupiter's spin being different from the system orbital
period (Io's spin is tidally locked to the orbit). Jupiter has a surface
magnetic field $\sim$ 10 G so that, given
Io's good electrical conductivity ($\sigma$), the satellite experiences an
e.m.f. as it moves across the planet's field lines along the orbit. The e.m.f.
accelerates free charges in the ambient medium, giving rise to a flow of
current along the sides of the flux tube connecting the bodies. Flowing
charges are accelerated to mildly relativistic energies and emit coherent
cyclotron radiation through a loss cone instability (cfr. \cite{Willes and
Wu(2004)} and references therein):
this is the basic framework in which Jupiter decametric radiation and its
modulation by Io's position are explained.
Among the several confirmations of the UIM in this system, HST UV observations
revealed localized emission on Jupiter's surface due
to flowing particles hitting the planet, the so-called Io
footprint \cite{Clarke et al. (1996)}.
In recent years, the complex interaction between Io-related free charges
(forming the Io torus) and Jupiter's magnetosphere has been understood in much
greater detail \cite{Russ1998P&SS...47..133R,Russ2004AdSpR..34.2242R}. Despite
these significant
complications, the above scenario maintains its general validity, particularly
in view of astrophysical applications.\\
Reference \cite{2002MNRAS.331..221W} considered the UIM in the
case of close white dwarf binaries.
They assumed a moderately magnetized primary, whose spin is not synchronous
with the orbit and a non-magnetic companion, whose spin is tidally locked.
They particularly highlight the role of ohmic dissipation of currents flowing
through the two WDs and show that this occurs essentially in the primary
atmosphere.
A small bundle of field lines leaving the primary surface threads the whole
secondary. The orbital position of the latter is thus ``mapped'' to a small
region onto the primary's surface; it is in this small region that ohmic
dissipation - and the associated heating - mainly takes place. The resulting
geometry, illustrated in Fig. \ref{fig:1}, naturally leads to mainly thermal,
strongly pulsed X-ray emission, as the secondary moves along the orbit.
\begin{figure}
\centering
\includegraphics[height=6.18cm]{Wu2004.eps}
\caption{Electric coupling between the asynchronous, magnetic primary star
and the non-magnetic secondary in the UIM (adapted from
\cite{2002MNRAS.331..221W}).}
\label{fig:1}
\end{figure}
The source of the X-ray emission is ultimately represented by the relative
motion between primary spin and orbit, that powers the electric circuit.
Because of resistive dissipation of currents, the relative motion is
eventually expected to be cancelled. This in turn requires angular momentum to
be redistributed between spin and orbit in order to synchronize them. The
necessary torque is provided by the Lorentz force on cross-field currents
within the two stars.\\
Reference \cite{2002MNRAS.331..221W} derived synchronization
timescales $\tau_{\alpha} \sim$ a few
10$^3$ yrs for both RX\,J1914.4+2456\ and RX\,J0806.3+1527, less than 1\% of their
orbital evolutionary timescales. This would imply a much larger Galactic
population of such systems than predicted by population-synthesis models, a
major difficulty of this version of the UIM. However,
\cite{2006A&A...447..785D,2006astro.ph..3795D} have shown that
the electrically active phase is actually
long-lived because perfect synchronism is never reached.
In a perfectly synchronous system the electric circuit would be turned off,
while GWs would still cause orbital spin-up. Orbital motion and primary spin
would thus go out of synchronism, which in turn would switch the circuit on.
The synchronizing (magnetic coupling) and de-synchronizing (GWs) torques are
thus expected to reach an equilibrium state at a sufficiently small degree of
asynchronism.\\
We discuss in detail how the model works and how the major observed properties
of RX\,J0806.3+1527\ and RX\,J1914.4+2456\ can be interpreted in the UIM framework. We
refer to \cite{2005MNRAS.357.1306B} for a possible
criticism of the model based on
the shape of the pulsed profiles of the two sources.
Finally, we refer to \cite{2006ApJ...653.1429D,2006ApJ...649L..99D}, who
have recently proposed alternative mass transfer models that can also account
for long-lasting episodes of spin-up in Double White Dwarf systems.
\section{UIM in Double Degenerate Binaries}
\label{sec:1.1}
Following \cite{2002MNRAS.331..221W}, we define the primary's
asynchronism parameter
$\alpha \equiv \omega_1/ \omega_o$, where $\omega_1$ and $\omega_o$ are the
primary's spin and orbital frequencies. In a system with orbital separation
$a$, the secondary star will move with the velocity
$ v = a (\omega_o - \omega_1) = [G M_1 (1+q)]^{1/3}~\omega^{1/3}_o (1-\alpha)$
relative to the field lines, where $G$ is the gravitational constant, $M_1$ the
primary mass and $q = M_2/M_1$ the system mass ratio. The electric field induced
through the secondary is thus {\boldmath$E$} = $\frac{\mbox{{\boldmath$v
\times B_2$}}}{c}$, with an associated e.m.f. $\Phi = 2R_2 E$, $R_2$ being the
secondary's radius and {\boldmath$B$$_2$} the primary magnetic field at the
secondary's location.
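For orientation, the induced field and e.m.f. can be evaluated for fiducial
numbers. All parameters below (a 321.5\,s orbit, a $10^{30}$\,G\,cm$^3$ primary
magnetic moment, $0.5+0.25\,M_\odot$ components) are illustrative placeholders,
not measured values for either source.

```python
import math

# Induced e.m.f. across the secondary, Phi = 2 R2 v B2 / c (cgs), with
# v = [G M1 (1+q)]^(1/3) omega_o^(1/3) (1 - alpha) and B2 = mu1 / a^3
# (dipole field at the secondary's location). Placeholder parameters.
G, c = 6.674e-8, 2.998e10
Msun = 1.989e33
M1, q = 0.5 * Msun, 0.5
mu1, R2 = 1e30, 1.5e9            # magnetic moment (G cm^3), secondary radius (cm)

omega = 2.0 * math.pi / 321.5    # orbital frequency of a 321.5 s binary (rad/s)
alpha = 0.99                     # 1% asynchronism
a = (G * M1 * (1.0 + q)) ** (1.0 / 3.0) * omega ** (-2.0 / 3.0)
v = (G * M1 * (1.0 + q)) ** (1.0 / 3.0) * omega ** (1.0 / 3.0) * (1.0 - alpha)
B2 = mu1 / a ** 3
emf_cgs = 2.0 * R2 * v * B2 / c  # statvolt; 1 statvolt ~ 300 V
print(v, B2, emf_cgs * 300.0)
```

Even a percent-level lag across a few-gauss field yields an e.m.f. of order
$10^8$ V for these placeholder numbers, which is what drives the circuit.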
The internal (Lorentz) torque redistributes angular momentum between spin and
orbit conserving their sum (see below), while GW-emission causes a net
loss of orbital angular momentum. Therefore, as long as the primary spin is
not efficiently affected by other forces, \textit{i.e.} tidal forces (cfr.
App.A in \cite{2006A&A...447..785D}), it will lag
behind the evolving orbital
frequency, thus keeping electric coupling continuously active.\\
Since most of the power dissipation occurs at the primary atmosphere (cfr.
\cite{2002MNRAS.331..221W}),
we slightly simplify the treatment by assuming no dissipation at all in the
secondary. In this case, the binary system is wholly analogous to the
elementary circuit of Fig. \ref{fig:2}.
\begin{figure}
\centering
\includegraphics[width=5.8cm]{CircuitComplete.eps}
\caption{Sketch of the elementary circuit envisaged in the UIM. The secondary
star acts as the battery, the primary star represents a resistance connected
to the battery by conducting ``wires'' (field lines). Inclusion of the effect
of GWs corresponds to connecting the battery to a plug, so that it is
recharged at some given rate. Once the battery initial energy reservoir is
consumed, the bulb will be powered just by the energy fed through the plug.
This corresponds to the ``steady-state'' solution.}
\label{fig:2}
\end{figure}
Given the e.m.f. ($\Phi$) across the secondary star and the system's effective
resistivity ${\cal{R}} \approx(2\sigma R_2)^{-1}~(a/R_1)^{3/2}$, the
dissipation rate of electric current ($W$) in the primary atmosphere is:
\begin{equation}
\label{dissipation}
W = I^2 {\cal{R}} = I \Phi = k \omega^{17/3}_o (1-\alpha)^2
\end{equation}
where $k = 2 (\mu_1/c)^2 \sigma R^{3/2}_1 R^3_2 / [GM_1(1+q)]^{11/6}$ is a
constant of the system.\\
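As a rough numerical illustration of Eq.~(\ref{dissipation}), the sketch below
evaluates $k$ and $W$ in cgs units. The magnetic moment, conductivity, radii
and masses are order-of-magnitude placeholders, not measured parameters of
either source.

```python
# Ohmic dissipation rate W = k * omega_o**(17/3) * (1 - alpha)**2, with
# k = 2 (mu1/c)^2 sigma R1^(3/2) R2^3 / [G M1 (1+q)]^(11/6) as in the text.
G = 6.674e-8                     # cgs units throughout
c = 2.998e10
Msun = 1.989e33

def k_const(mu1, sigma, R1, R2, M1, q):
    """System constant k (cgs) from the definition in the text."""
    return (2.0 * (mu1 / c) ** 2 * sigma * R1 ** 1.5 * R2 ** 3
            / (G * M1 * (1.0 + q)) ** (11.0 / 6.0))

def dissipation(k, omega_o, alpha):
    """Dissipation rate W (erg/s) for asynchronism parameter alpha."""
    return k * omega_o ** (17.0 / 3.0) * (1.0 - alpha) ** 2

# Illustrative system: 0.5 + 0.25 Msun pair; mu1 (~1 kG surface field) and
# the conductivity sigma are order-of-magnitude guesses.
k = k_const(mu1=1e30, sigma=1e13, R1=1e9, R2=1.5e9, M1=0.5 * Msun, q=0.5)
w = 2.0 * 3.141592653589793 / 321.5     # orbital frequency of a 321.5 s binary
print(dissipation(k, w, alpha=0.996))   # erg/s
```

By construction $W$ scales as $\omega_o^{17/3}$, so halving the orbital period
raises the dissipation by a factor $2^{17/3} \approx 51$.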
The Lorentz torque ($N_L$) has the following properties: \textit{i}) it acts
with the same magnitude and opposite signs on the primary star and the orbit,
$N_L = N^{(1)}_L = - N^{(\mbox{\tiny{orb}})}_L$; \textit{ii}) it therefore
conserves the total angular momentum of the system, transferring all
the angular momentum extracted from one component to the other; \textit{iii}) $N_L$ is
simply related to the energy dissipation rate: $W = N_L \omega_o (1-\alpha)$.\\
From the above, the evolution equation for $\omega_1$ is:
\begin{equation}
\label{omega1}
\dot{\omega}_1 = (N_L/I_1) = \frac{W}{I_1 \omega_o (1-\alpha)}
\end{equation}
The orbital angular momentum is $L_o = I_o \omega_o$, so that the orbital
evolution equation is:
\begin{equation}
\label{omegaevolve}
\dot{\omega}_o = - 3 (N_{\mbox{\tiny{gw}}}+ N^{(\mbox{\tiny{orb}})}_L)/I_o =
- 3 (N_{\mbox{\tiny{gw}}} - N_L)/I_o =
-\frac{3}{I_o\omega_o}\left(\dot{E}_{\mbox{\tiny{gw}}} -\frac{W}{1-\alpha}
\right)
\end{equation}
where $I_o = q (1+q)^{-1/3} G^{2/3} M^{5/3}_1 \omega^{-4/3}_o$ is the orbital
moment of inertia and $N_{\mbox{\tiny{gw}}} = \dot{E}_{\mbox{\tiny{gw}}}/
\omega_o$ is the GW torque.
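Equations~(\ref{omega1}) and~(\ref{omegaevolve}) can be integrated numerically.
In the sketch below $\dot{E}_{\mbox{\tiny{gw}}}$ is taken as $-L_{GW}$ (the
orbital energy decreases, so a shrinking orbit has $\dot{\omega}_o > 0$), and
$I_1$, $k$ and the stellar masses are illustrative placeholders; a simple
Euler step suffices to show $\alpha$ being driven toward an equilibrium just
below unity.

```python
import math

# Illustrative integration of the coupled spin/orbit equations. Edot_gw is
# taken as -L_GW (sign convention: orbital energy decreases under GW emission).
# I1, k and the masses are placeholders, not fitted source parameters.
G, c = 6.674e-8, 2.998e10
Msun = 1.989e33
M1, M2 = 0.5 * Msun, 0.25 * Msun
q = M2 / M1
I1, k = 3e50, 5e45               # g cm^2; W = k omega^(17/3) (1 - alpha)^2

def derivs(om1, omo):
    alpha = om1 / omo
    W = k * omo ** (17.0 / 3.0) * (1.0 - alpha) ** 2
    a = (G * (M1 + M2)) ** (1.0 / 3.0) * omo ** (-2.0 / 3.0)
    L_gw = 32.0 / 5.0 * G ** 4 / c ** 5 * M1 ** 2 * M2 ** 2 * (M1 + M2) / a ** 5
    Io = q * (1.0 + q) ** (-1.0 / 3.0) * G ** (2.0 / 3.0) * M1 ** (5.0 / 3.0) * omo ** (-4.0 / 3.0)
    Edot_gw = -L_gw                                            # orbital energy change
    dom1 = W / (I1 * omo * (1.0 - alpha))                      # Eq. (omega1)
    domo = -(3.0 / (Io * omo)) * (Edot_gw - W / (1.0 - alpha)) # Eq. (omegaevolve)
    return dom1, domo

omo = 2.0 * math.pi / 321.5      # start at a 321.5 s orbit
om1 = 0.9 * omo                  # noticeably asynchronous primary
dt = 1e9                         # ~30 yr Euler steps
for _ in range(1000):
    d1, do = derivs(om1, omo)
    om1 += d1 * dt
    omo += do * dt
print(om1 / omo)                 # alpha settles just below 1
```

For these placeholder values the synchronizing phase lasts of order a few
$10^3$ yr, consistent with the timescale quoted above, after which the lag
$1-\alpha$ remains at the percent level while the orbit keeps shrinking.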
\subsection{Energetics of the electric circuit}
\label{sec:2}
Let us focus on
how energy is transferred and consumed by the electric circuit.
We begin considering the rate of work done by $N_L$ on the orbit
\begin{equation}
\label{Eorb}
\dot{E}^{(orb)}_L = N^{(orb)}_L \omega_o = -N_L \omega_o = - \frac{W}
{1-\alpha},
\end{equation}
and that done on the primary:
\begin{equation}
\label{espin}
\dot{E}_{spin} = N_L \omega_1 = \frac{\alpha}{1-\alpha} W = -\alpha
\dot{E}^{(orb)}_L.
\end{equation}
The sum $\dot{E}_{spin} + \dot{E}^{(orb)}_L = -W$. Clearly, not all of the
energy extracted from one component is transferred to the other one. The
energy lost to ohmic dissipation represents the energetic cost of spin-orbit
coupling.\\
The above formulae allow us to draw some further conclusions concerning the
relation between $\alpha$ and the energetics of the electric circuit.
When $\alpha>1$, the circuit is powered at the expense of the primary's
spin energy. A fraction $\alpha^{-1}$ of this energy is transferred to the
orbit, the rest being lost to ohmic dissipation. When $\alpha <1$, the circuit
is powered at the expense of the orbital energy, and a fraction $\alpha$ of
this energy is transferred to the primary spin.
Therefore, \textit{the parameter $\alpha$ represents a measure of the
energy transfer efficiency of spin-orbit coupling}: the more
asynchronous a system is, the less efficiently energy is transferred, most of
it being dissipated as heat.
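This bookkeeping can be verified with a couple of lines; the value of $W$ is
arbitrary, since only the algebra of Eqs.~(\ref{Eorb}) and~(\ref{espin}) is
exercised.

```python
def energy_rates(W, alpha):
    """Rates of work done by the Lorentz torque on the orbit and primary spin."""
    e_orb = -W / (1.0 - alpha)           # Eq. (Eorb)
    e_spin = alpha / (1.0 - alpha) * W   # Eq. (espin)
    return e_orb, e_spin

for alpha in (0.9, 0.99, 1.1):
    e_orb, e_spin = energy_rates(1.0, alpha)
    # whatever is not transferred between the components is lost as heat:
    assert abs(e_orb + e_spin + 1.0) < 1e-12
# e.g. for alpha = 1.1 the spin loses 11 units of energy while the orbit
# gains 10: a fraction 1/alpha is transferred, the remainder is dissipated.
```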
\subsection{Stationary state: General solution}
\label{sec:2.1}
As long as the asynchronism parameter is sufficiently far from unity, its
evolution will be essentially determined by the strength of the synchronizing
(Lorentz) torque, the GW torque being of minor relevance. The evolution in
this case depends on the initial values of $\alpha$ and $\omega_o$, and on
stellar parameters.
This evolutionary phase drives $\alpha$ towards unity, \textit{i.e.} spin and
orbit are driven towards synchronism. It is in this regime that the GW torque
becomes important in determining the subsequent evolution of the system. \\
Once the condition $\alpha =1$ is reached, indeed, GW emission drives a small
angular momentum disequilibrium. The Lorentz torque is in turn switched on to
transfer to the primary spin the amount of angular momentum required for it to
keep up with the evolving orbital frequency.
This translates to the requirement that $\dot{\omega}_1 = \dot{\omega}_o$. By
use of expressions (\ref{omega1}) and (\ref{omegaevolve}), it is found
that this condition implies the following equilibrium value for $\alpha$
(we call it $\alpha_{\infty})$:
\begin{equation}
\label{alfainf}
1 - \alpha_{\infty} = \frac{I_1}{k} \frac{\dot{\omega}_o/\omega_o}
{\omega^{11/3}_o}.
This is greater than zero if the orbit is shrinking ($\dot{\omega}_o >0$),
which implies that $\alpha_{\infty} <1$. For a widening orbit, on the other
hand, $\alpha_{\infty} > 1$. However, this latter case does not correspond to
a long-lived configuration. Indeed,
define the electric energy reservoir as $E_{UIM} \equiv (1/2) I_1 (\omega^2_1
-\omega^2_o)$, which is negative when $\alpha <1$ and positive when $\alpha
>1$. Substituting eq. (\ref{alfainf}) into this definition:
\begin{equation}
\label{stationary}
\dot{E}_{UIM} = - W.
\end{equation}
If $\alpha = \alpha_{\infty} >1$, energy is consumed at the
rate W: the circuit will eventually switch off ($\alpha_{\infty}=1$). At later
times, the case $\alpha_{\infty}<1$ applies.\\
If $\alpha = \alpha_{\infty} <1$, condition (\ref{stationary}) means that the
battery recharges at the rate $W$ at which electric currents dissipate
energy: the electric energy reservoir is conserved as the binary evolves.\\
The latter conclusion can be reversed (cf. Fig. \ref{fig:2}): in steady-state,
the rate of energy dissipation ($W$) is fixed by the rate at which power is
fed to the circuit by the plug ($\dot{E}_{UIM}$). The latter is determined by
GW emission and the Lorentz torque and, therefore, by component masses,
$\omega_o$ and $\mu_1$.\\
Therefore the steady-state degree of asynchronism of a given binary system is
uniquely determined, given $\omega_o$. Since both $\omega_o$ and
$\dot{\omega}_o$ evolve secularly, the equilibrium state will be
``quasi-steady'', $\alpha_{\infty}$ evolving secularly as well.
\subsection{Model application: equations of practical use}
\label{application}
We have discussed in previous sections the existence of an asymptotic regime
in the evolution of binaries in the UIM framework.
Given the definitions of $\alpha_\infty$ and $W$ (eqs.
\ref{alfainf} and \ref{dissipation}, respectively), we have:
\begin{equation}
\label{luminosity}
W = I_1 \omega_o \dot{\omega}_o~\frac{(1-\alpha)^2}{1-\alpha_{\infty}}.
\end{equation}
The quantity $(1-\alpha_{\infty})$ represents the \textit{actual} degree of
asynchronism only for those systems that had enough time to evolve towards
steady-state, \textit{i.e.} with sufficiently short orbital period.
In this case, the steady-state source luminosity can thus be written as:
\begin{equation}
\label{luminsteady}
W_{\infty} = I_1 \dot{\omega}_o \omega_o (1-\alpha_{\infty})
\end{equation}
Therefore, under the assumption that a source is in steady-state, the
quantity $\alpha_{\infty}$ can be determined from the
measured values of $W$, $\omega_o$, $\dot{\omega}_o$. Given its definition
(eq. \ref{alfainf}), this gives an estimate of $k$ and, thus, $\mu_1$.\\
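As a numerical sanity check of this procedure (a sketch: $\dot{\omega}_o$ is not given here directly, so it is back-derived, as an assumption, from the value $I_1\omega_o\dot{\omega}_o \simeq 1.3\times 10^{34}$ erg s$^{-1}$ used for RX\,J0806.3+1527\ in the next section):

```python
import math

# Sketch: steady-state asynchronism from measured timing quantities,
# inverting eq. (luminsteady): (1 - alpha_inf) = W / (I1 omega_o omega_o_dot).
# Illustrative numbers of the order used for RX J0806.3+1527; omega_o_dot
# is back-derived (assumed) from I1*omega_o*omega_o_dot ~ 1.3e34 erg/s.
I1 = 3.0e50                        # primary moment of inertia, g cm^2
omega_o = 2.0 * math.pi / 321.5    # orbital angular frequency, rad/s
omega_o_dot = 1.3e34 / (I1 * omega_o)   # rad/s^2 (assumed)
W = 1.0e32                         # dissipation rate, erg/s

one_minus_alpha_inf = W / (I1 * omega_o * omega_o_dot)
# gives ~8e-3, i.e. a spin within ~1% of synchronism
```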
The equation for the orbital evolution (\ref{omegaevolve}) provides a further
relation between the three measured quantities, component masses and degree of
asynchronism.
This can be written as:
\begin{equation}
\label{useful}
\dot{E}_{\mbox{\tiny{gr}}} - \frac{1}{3} I_o \omega^2_o (\dot{\omega}_o /
\omega_o) + \frac{W} {(1-\alpha)} = 0
\end{equation}
that becomes, inserting the appropriate expressions for $\dot{E}_
{\mbox{\tiny{gr}}}$ and $I_o$:
\begin{equation}
\label{extended}
\frac{32}{5}\frac{G^{7/3}}{c^5}~\omega^{10/3}_o X^2 -\frac{1}{3}~
G^{2/3} \frac{\dot{\omega}_o}{\omega^{1/3}_o} X + \frac{W}{1-\alpha} = 0~,
\end{equation}
where $X \equiv M^{5/3}_1 q/(1+q)^{1/3} = {\cal{M}}^{5/3}$, $\cal{M}$ being the
system's chirp mass.
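Eq. (\ref{extended}) is a quadratic in $X$ and can be solved directly; a minimal sketch in cgs units (the inputs are assumptions of the order of the RX\,J0806.3+1527\ values discussed in the next section, and the physically relevant root must still be checked against the mass constraints of Fig. \ref{fig3}):

```python
import math

G, c0 = 6.674e-8, 2.998e10   # Newton's constant and speed of light, cgs

def solve_extended(omega_o, omega_o_dot, W_over_1ma):
    """Roots in X = Mc^(5/3) of eq. (extended):
    (32/5)(G^(7/3)/c^5) w^(10/3) X^2 - (1/3) G^(2/3) (wdot/w^(1/3)) X
      + W/(1-alpha) = 0.
    """
    a = (32.0 / 5.0) * G**(7.0 / 3.0) / c0**5 * omega_o**(10.0 / 3.0)
    b = -(1.0 / 3.0) * G**(2.0 / 3.0) * omega_o_dot / omega_o**(1.0 / 3.0)
    c = W_over_1ma
    disc = math.sqrt(b * b - 4.0 * a * c)
    return (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)

# Assumed inputs of the order quoted for RX J0806.3+1527:
omega_o = 2.0 * math.pi / 321.5
omega_o_dot = 1.3e34 / (3.0e50 * omega_o)   # from I1*w*wdot ~ 1.3e34 erg/s
X_hi, X_lo = solve_extended(omega_o, omega_o_dot, 1.25e34)
Mc_sun = X_hi**0.6 / 1.989e33    # chirp mass in solar masses
```

For these inputs the larger root gives $X \approx 4.4\times 10^{54}$ g$^{5/3}$, i.e. ${\cal{M}} \approx 0.3$ M$_{\odot}$.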
\section{RX\,J0806.3+1527\ }
\label{rxj08}
We assume here the values of $\omega_o$, $\dot{\omega}_o$ and of the
bolometric luminosity reported in \ref{sec:0}, and refer to
\cite{2006astro.ph..3795D} for a complete
discussion of how our conclusions
depend on these assumptions.\\
In Fig. \ref{fig3} (see caption for further details), the dashed line
represents the locus of points in the $M_2$ vs. $M_1$ plane, for which the
measured $\omega_o$ and $\dot{\omega}_o$ are consistent with being due to GW
emission only, \textit{i.e.} if spin-orbit coupling was absent ($\alpha =1$).
This corresponds to a chirp mass ${\cal{M}} \simeq$ 0.3 M$_{\odot}$. \\
Inserting the measured quantities in eq. (\ref{luminosity}) and assuming a
reference value of
$I_1 = 3 \times 10^{50}$ g cm$^2$, we obtain:
\begin{equation}
\label{j08}
\frac{(1-\alpha)^2} {1-\alpha_{\infty}} \simeq \frac{10^{32}d^2_{200}}
{1.3 \times 10^{34}} \simeq 8 \times 10^{-3} d^2_{200}.
\end{equation}
In principle, the source may be in any regime, but our aim is to check whether
it can be in steady-state, so as to avoid the short timescale problem mentioned
in \ref{sec:1}. Indeed, the short orbital period strongly suggests it may
have reached the asymptotic regime (cf. \cite{2006astro.ph..3795D}).
If we assume $\alpha =\alpha_{\infty}$, eq. (\ref{j08}) implies
$(1-\alpha_{\infty}) \simeq 8 \times 10^{-3}$.\\
Once UIM and spin-orbit coupling are introduced, the locus of allowed points
in the M$_2$ vs. M$_1$ plane is somewhat sensitive to the exact value of
$\alpha$: the solid curve of Fig. \ref{fig3} was obtained, from eq.
(\ref{extended}), for $\alpha = \alpha_{\infty} = 0.992$. \\
From this we conclude that, if RX\,J0806.3+1527\ is interpreted as being in the UIM
steady-state, $M_1$ must be smaller than 1.1 $M_{\odot}$ in order for the
secondary not to fill its Roche lobe, thus avoiding mass transfer.
\begin{figure}[h]
\centering
\includegraphics[height=10.18cm,angle=-90]{Massej08.ps}
\caption{M$_2$ vs. M$_1$ plot based on the measured timing properties of
RX\,J0806.3+1527\ . The dashed curve is the locus expected if orbital decay is driven
by GW alone, with no spin-orbit
coupling. The solid line describes the locus expected if the
system is in a steady-state, with $(1-\alpha) = (1-\alpha_{\infty}) \simeq 8
\times 10^{-3}$. The horizontal dot-dashed line represents the minimum mass
for a degenerate secondary not to fill its Roche-lobe at an orbital period of
321.5 s. Dotted lines are the loci of fixed mass ratio.}
\label{fig3}
\end{figure}
From $(1-\alpha_{\infty}) = 8\times 10^{-3}$ and from eq. (\ref{dissipation}),
$k \simeq 7.7 \times 10^{45}$ (c.g.s.): from this, component masses and
primary magnetic moment can be constrained.
Indeed, $k = \hat{k}(\mu_1, M_1, q; \overline{\sigma})$ (eq.
\ref{dissipation}) and a further constraint derives from the fact that $M_1$
and $q$ must lie along the solid curve of Fig. \ref{fig3}. Given the value of
$\overline{\sigma}$, $\mu_1$ is obtained for each point along the solid
curve. We assume an electrical conductivity of
$\overline{\sigma} = 3\times 10^{13}$ e.s.u.
\cite{2002MNRAS.331..221W,2006astro.ph..3795D}. \\
The values of $\mu_1$ obtained in this way, and the
corresponding field at the primary's surface, are plotted in Fig. \ref{fig2},
which yields $\mu_1 \sim$ a few $\times 10^{30}$ G cm$^3$, somewhat
sensitive to the primary mass.\\
We note further that, along the solid curve of Fig. \ref{fig3}, the chirp mass
varies slightly: $X \simeq (3.4 \div 4.5) \times 10^{54}$
g$^{5/3}$, which implies ${\cal{M}} \simeq (0.26 \div 0.31)$ M$_{\odot}$.
More importantly, $\dot{E}_{\mbox{\tiny{gr}}} \simeq (1.1 \div 1.9) \times
10^{35}$ erg s$^{-1}$ and, since $W/(1-\alpha_{\infty}) = \dot{E}^{(orb)}_L
\simeq 1.25 \times 10^{34}$ erg s$^{-1}$, we have
$\dot{E}_{\mbox{\tiny{gr}}} \simeq (9\div 15)~\dot{E}^{(orb)}_L $.
Orbital spin-up is essentially driven by GW alone; indeed, the dashed and solid
curves are very close in the M$_2$ vs. M$_1$ plane. \\
\begin{figure}[h]
\centering
\includegraphics[height=10.18cm,angle=-90]{Muj08.ps}
\caption{The value of the primary magnetic moment $\mu_1$, and the
corresponding surface B-field, as a function of the primary mass M$_1$, for
$(1-\alpha) = (1-\alpha_{\infty}) = 8\times 10^{-3}$.}
\label{fig2}
\end{figure}
Summarizing, the observational properties of RX\,J0806.3+1527\ can be well
interpreted in the UIM framework, assuming it is in steady-state.
This requires the primary to have $\mu_1 \sim 10^{30}$ G cm$^3$ and a spin
just slightly slower than the orbital motion (the difference being less
than $\sim 1$\%). \\
The expected value of $\mu_1$ can in principle be tested by future
observations, through studies of polarized emission at optical and/or radio
wavelengths \cite{Willes and Wu(2004)}.
\section{RX\,J1914.4+2456\ }
\label{rxj19}
As for the case of RX\,J0806.3+1527\ , we adopt the values discussed in
\ref{sec:0} and refer to \cite{2006astro.ph..3795D} for
a discussion of all
the uncertainties on these values and their implications for the model.\\
Application of the scheme used for RX\,J0806.3+1527\ to this source is not as
straightforward. The inferred luminosity of this source seems inconsistent
with steady-state.
With the measured values of\footnote{again assuming $I_1= 3 \times 10^{50}$ g
cm$^2$} $\omega_o$ and $\dot{\omega}_o$,
the system steady-state luminosity should be $ < 2 \times 10^{32}$ erg
s$^{-1}$ (eq. \ref{luminsteady}). This is hardly consistent even with
the smallest possible luminosity referred to in \ref{sec:0}, unless
a large value of $(1-\alpha_{\infty}) \geq 0.15$ is allowed. \\
From eq. (\ref{luminosity}) a relatively high ratio between the actual
asynchronism parameter and its steady-state value appears unavoidable:
\begin{equation}
\label{j19}
|1-\alpha| \simeq 2.2 (1-\alpha_{\infty})^{1/2}
\end{equation}
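The coefficient in eq. (\ref{j19}) follows from eq. (\ref{luminosity}), which gives $(1-\alpha)^2 = [W/(I_1\omega_o\dot{\omega}_o)]\,(1-\alpha_{\infty})$, so that $2.2 \simeq [W/(I_1\omega_o\dot{\omega}_o)]^{1/2}$. A sketch with assumed illustrative values ($W \sim 10^{33}$ erg s$^{-1}$ and $I_1\omega_o\dot{\omega}_o \sim 2\times 10^{32}$ erg s$^{-1}$, consistent with the steady-state bound above):

```python
import math

# |1 - alpha| as a function of the steady-state asynchronism, eq. (j19).
# The prefactor sqrt(W / (I1*omega_o*omega_o_dot)) ~ 2.2 uses the assumed
# values W ~ 1e33 erg/s and I1*omega_o*omega_o_dot ~ 2.07e32 erg/s.
W = 1.0e33
I1_w_wdot = 2.07e32

def abs_one_minus_alpha(one_minus_alpha_inf):
    return math.sqrt(W / I1_w_wdot * one_minus_alpha_inf)

coeff = math.sqrt(W / I1_w_wdot)   # ~2.2
```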
\subsubsection{The case for $\alpha>1$}
\label{thecase}
The low rate of orbital shrinking measured for this source and its relatively
high X-ray luminosity put interesting constraints on the primary spin. Indeed,
a high value of $N_L$ is associated with $W\sim 10^{33}$ erg s$^{-1}$. \\
If $\alpha<1$, this torque sums to the GW torque: the resulting orbital
evolution would thus be faster than if driven by GW alone. In fact, for
$\alpha < 1$, the smallest possible value of $N_L$ is obtained with $\alpha = 0$,
from which $N^{(\mbox{\tiny{min}})}_L = 9 \times 10^{34}$ erg. This implies an
absolute minimum to the rate of orbital shrinking (eq. \ref{omegaevolve}),
$3~N^{(\mbox{\tiny{min}})}_L / I_o$, so close to the measured one
that implausibly small component masses would be required for
$\dot{E}_{\mbox{\tiny{gr}}}$ to be negligible.
We conclude that $\alpha <1$ is essentially ruled out in the UIM discussed
here. \\
If $\alpha > 1$ the primary spin is faster than the orbital motion and the
situation is different.
Spin-orbit coupling has an opposite sign with respect to the GW torque.
The small torque on the orbit implied by the measured $\dot{\omega}_o$ could
result from two larger torques of opposite signs partially cancelling each
other.\\
This point has been overlooked by \cite{Marsh and Nelemans (2005)} who
estimated the
GW luminosity of the source from its measured timing parameters and, based on
this estimate, claimed the failure of the UIM for RX\,J1914.4+2456\ .
In discussing this and other misinterpretations of the UIM in the literature,
\cite{2006astro.ph..3795D} show that the argument by
\cite{Marsh and Nelemans (2005)}
actually leads to the same conclusion as ours: in the UIM framework, the orbital
evolution of this source must be affected significantly by spin-orbit
coupling, being slowed down by the transfer of angular momentum and energy
from the primary spin to the orbit. The source GW luminosity must accordingly
be larger than indicated by its timing parameters.
\subsection{Constraining the asynchronous system}
\label{lifetime}
Given that the source is not compatible with steady-state, we constrain system
parameters in order to match the measured values of $W, \omega_o$ and
$\dot{\omega}_o$ and meet the requirement that the resulting state has a
sufficiently long lifetime.\\
\begin{figure}[h]
\centering
\includegraphics[height=10.18cm,angle=-90]{solnew.ps}
\caption{M$_2$ vs. M$_1$ plot based on measured timing properties of
RX\,J1914.4+2456\ . The dot-dashed line corresponds to the minimum mass for a
degenerate secondary not to fill its Roche-lobe. The dashed curve represents
the locus expected if orbital decay was driven by GW alone, with no spin-orbit
coupling. This curve is consistent with a detached system only for extremely
low masses. The solid lines describe the loci expected if spin-orbit coupling
is present (the secondary spin being always tidally locked) and gives a
\textit{negative} contribution to $\dot{\omega}_o$. The four curves are
obtained for $W = 10^{33}$ erg s$^{-1}$ and four different values of $\alpha =
1.025, 1.05, 1.075$, $1.1$, respectively, from top to bottom, as reported in
the plot.}
\label{fig5}
\end{figure}
Since system parameters cannot all be determined uniquely, we adopt the
following scheme: given a value of $\alpha$, eq. (\ref{extended}) allows one to
determine, for each value of M$_1$, the corresponding value of M$_2$ (or $q$)
that is compatible with
the measured $W, \omega_o$ and $\dot{\omega}_o$. This yields the solid curves
of Fig. \ref{fig5}.\\
As these curves show, the larger $\alpha$ is, the smaller the upward shift
of the corresponding locus. This is not surprising, since these curves are
obtained at fixed luminosity $W$ and $\dot{\omega}_o$. Recalling
that $(1/\alpha)$ gives the efficiency of energy transfer in systems with
$\alpha >1$ (cf. \ref{sec:2}), a higher $\alpha$ at a given
luminosity implies that less energy is being transferred to the orbit.
Accordingly, GWs need to be emitted at a smaller rate to match the measured
$\dot{\omega}_o$.\\
The values of $\alpha$ in Fig. \ref{fig5} were chosen arbitrarily and are
just illustrative: note that the resulting curves are similar to those
obtained for RX\,J0806.3+1527\ .
Given $\alpha$, one can also estimate $k$ from the definition of $W$ (eq.
\ref{dissipation}). The information given by the curves of
Fig. \ref{fig5} determines all quantities contained in $k$, apart from
$\mu_1$. Therefore, proceeding as in the previous section,
we can determine the value of $\mu_1$ along each of the four curves of Fig.
\ref{fig5}. As in the case of RX\,J0806.3+1527\ , derived values are in the $\sim
10^{30}$ G cm$^3$ range. Plots and discussion of these results are reported by
\cite{2006astro.ph..3795D}.\\
We finally note that the curves of Fig. \ref{fig5} define the value of $X$ for
each (M$_1$,M$_2$), from which the system GW luminosity $\dot{E}_
{\mbox{\tiny{gr}}}$ and its ratio to the spin-orbit coupling power can be calculated.
According to the above curves, the expected GW luminosity of this source is
in the range $(4.6 \div 1.4) \times 10^{34}$ erg s$^{-1}$. The corresponding
ratios $\dot{E}_{\mbox{\tiny{gr}}}/ \dot{E}^{(orb)}_L$ are $1.15, 1.21, 1.29$
and $1.4$, respectively, for $\alpha =1.025, 1.05, 1.075$ and $1.1$.\\
Since the system cannot be in steady-state, the question of the duration
of this transient phase naturally arises.
\begin{figure}[h]
\centering
\includegraphics[height=10.18cm,angle=-90]{tauj19.ps}
\caption{The evolution timescale $\tau_{\alpha}$ as a function of the primary
mass for the same values of $\alpha$ used previously, reported on the curves.
Given the luminosity $W\sim 10^{33}$ erg s$^{-1}$ and a value of $\alpha$,
$\tau_{\alpha}$ is calculated as a function of M$_1$.}
\label{fig8}
\end{figure}
The synchronization timescale $\tau_{\alpha} = \alpha / \dot{\alpha}$ can be
estimated combining eq. (\ref{omega1}) and (\ref{omegaevolve}). With the
measured values of $W$, $\omega_o$ and $\dot{\omega}_o$, $\tau_{\alpha}$ can
be calculated as a function of $I_1$ and, thus, of M$_1$, given a particular
value of $\alpha$. Fig. \ref{fig8} shows results obtained for the same four
values of $\alpha$ assumed previously. The resulting timescales range from a
few $\times 10^4$ yrs to a few $\times 10^5$ yrs, tens to hundreds of times
longer than previously obtained and compatible with constraints derived from
the expected population of such objects in the Galaxy.
Reference \cite{2006astro.ph..3795D} discusses this point
and its implications in more
detail.
\section{Conclusions}
\label{conclusions}
The observational properties of the two DDBs with the shortest orbital period
known to date have been discussed in relation to their physical nature. \\
The Unipolar Inductor Model and its coupling to GW emission have been
introduced to explain a number of puzzling features that these two sources
have in common and that are difficult to reconcile with most, if not all,
models of mass transfer in such systems.\\
Emphasis was put on the relevant new physical features that characterize the
model. In particular, the role of spin-orbit coupling through the Lorentz
torque and the role of GW emission in keeping the electric interaction active
at all times have been thoroughly discussed in all their implications.
It has been shown that the model does work over arbitrarily long timescales.\\
Application of the model to both RX\,J0806.3+1527\ and RX\,J1914.4+2456\ accounts in a
natural way for their main observational properties. Constraints on physical
parameters are derived in order for the model to work, and can be verified by
future observations.\\
It is concluded that the components in these two binaries may be much more
similar than may appear from their timing properties and luminosities.
The significant observational differences could essentially be due to the two
systems being caught in different evolutionary stages. RX\,J1914.4+2456\ would be in
a luminous, transient phase that precedes its settling into the dimmer
steady-state, a regime already reached by the shorter period RX\,J0806.3+1527\ .
Although the more luminous phase is transient, its lifetime can be as long as
$ \sim 10^5$ yrs, one or two orders of magnitude longer than previously
estimated.\\
The GW luminosity of RX\,J1914.4+2456\ could be much larger than previously expected
since its orbital evolution could be largely slowed down by an additional
torque, apart from GW emission.\\
Finally, we stress that further developments and refinements of the model are
required to address more specific observational issues and to assess the
consequences that this new scenario might have on evolutionary scenarios and
population synthesis models.
\section{Introduction}\label{sec:Introduction}
Toward the sixth generation of wireless networks (6G), a number of exciting applications will benefit from sensing services provided by future perceptive networks, where sensing capabilities are integrated into the communication network. Since the communication network infrastructure is already deployed with multiple interconnected nodes, a multi-static sensing mesh can be enabled and exploited to improve the performance of the network itself~\cite{9296833}. Therefore, the joint communications and sensing (JCAS) concept has emerged as an enabler for an efficient use of radio resources for both communications and sensing purposes, where the high frequency bands expected to be available in 6G can favor very accurate sensing based on radar-like technology~\cite{art:Wild_Nokia}.
Relying on the coordination of the network and distributed processing, sensing signals can be transmitted from one node, and the reflections off the environment can be received at multiple nodes in a coordinated manner~\cite{art:Wild_Nokia}. Thus, distributed multi-static sensing approaches can improve sensing accuracy while alleviating the need for full-duplex operation at sensing nodes. In this context, the high-gain directional beams provided by beamforming in multiple-input multiple-output (MIMO) and massive MIMO systems, which are essential for the operation of communication systems at higher frequencies, can also be exploited for improving sensing in distributed implementations~\cite{8764485,art:multistatic_Merlano}. In multi-static MIMO radar settings, synchronization among sensing nodes is crucial, and this issue has motivated studies of the feasibility of synchronization. For instance, a synchronization loop using in-band full duplex (IBFD) was demonstrated for a system with two MIMO satellites sensing two ground targets in~\cite{art:multistatic_Merlano}.
Additionally, multicarrier signals such as orthogonal frequency-division multiplexing (OFDM) waveforms have proven to provide several advantages for use in JCAS systems, including independence from the transmitted user data, high dynamic range, the possibility to estimate relative velocity, and efficient implementation based on fast Fourier transforms~\cite{5776640}.
For instance, uplink OFDM 5G New Radio (NR) waveforms have been effectively used for indoor environment mapping in \cite{art:Baquero_mmWaveAlg}. Therein, a prototype full-duplex transceiver was used to perform range-angle chart estimation and dynamic tracking via extended Kalman Filter.
Moreover, the capabilities of distributed sensing systems can be further extended by relying on the advantages of flexible nodes such as unmanned aerial vehicles (UAVs), which have already attracted significant attention for their applicability in numerous scenarios, even in harsh environments~\cite{8877114}. Therefore, UAVs have already been considered for sensing purposes~\cite{art:Wei_UAV_Safe,art:UAVs_Guerra,Chen_JSACUAVSystem}.
For instance, in~\cite{art:Wei_UAV_Safe}, UAVs are used to perform simultaneous jamming and sensing of UAVs acting as eavesdroppers, exploiting the jamming signals for sensing purposes. Therein, sensing information is used to perform optimal online resource allocation to maximise the number of securely served users, constrained by the requirements on the information leakage to the eavesdropper and the data rate to the legitimate users. Moreover, in~\cite{art:UAVs_Guerra}, a UAV-based distributed radar is proposed to perform distributed sensing to locate and track malicious UAVs using frequency modulated continuous wave (FMCW) waveforms. It was shown that the mobility and distributed nature of the UAV-based radar benefit the accuracy of tracking mobile nodes compared with a fixed radar. However, that approach does not make complete use of its distributed nature, as each UAV performs local sensing accounting for only the sensing information of its neighbouring UAVs, and there is no consideration of communication tasks.
In the context of JCAS, in~\cite{Chen_JSACUAVSystem}, a general framework for a full-duplex JCAS UAV network is proposed, where area-based metrics are developed considering sensing and communication parameters of the system and sensing requirements. That work uses full-duplex operation for local sensing at each UAV while considering reflections from other UAVs as interference.
Different from previous works, and considering the complexity of full-duplex systems, this work focuses on half-duplex operation and proposes a framework for grid-based distributed sensing relying on the coordination of multiple UAVs to sense a ground target located in an area of interest. It is considered that MIMO UAVs employ OFDM waveforms and digital beamforming is implemented at the receiver side. A periodogram is used for the estimation of the radar cross-section (RCS) of each cell in the grid, leveraging the knowledge of the geometry of the system. The RCS estimation is performed by all of the non-transmitting UAVs simultaneously, while one UAV is illuminating a certain sub-area of the grid. This process is repeated until all UAVs have illuminated their respective sub-areas; then all UAVs report the measured RCS of each cell on the grid to a UAV acting as a fusion center (FC), which performs information-level fusion. This process allows half-duplex operation in a distributed sensing setting.
\section{System Model}\label{sec:SysModel}
\begin{figure}[bt]
\centering
\includegraphics[width=0.73\linewidth]{sysmod4.pdf}
\caption{System model.\vspace{-1em}}
\label{fig:sysModel}
\end{figure}
Consider the system depicted in Fig.~\ref{fig:sysModel}, where a single point-like target of RCS $\sigma_{\mathrm{T}}$ is positioned on a square area $S$ of $\ell$ meters of side length. $U$ UAVs are deployed (for simplicity and insightfulness) at a common altitude $h$ and are coordinated to perform distributed sensing to locate the ground target. Each UAV $u$ in the set of all UAVs $\mathcal{U}$, with $u\in\mathcal{U}$, is positioned at coordinates $\mathbf{r}_u = (x_u,y_u,h)$, with $|\mathcal{U}|=U$. Also, the RCS of a ground cell is denoted by $\sigma_{\mathrm{G}}$.
Similar to~\cite{Chen_JSACUAVSystem}, it is assumed that each UAV has two antenna arrays, namely a square uniform planar array (UPA) (mounted facing downward) for sensing and a uniform linear array (ULA) (mounted horizontally) to communicate with the FC for information fusion and coordination tasks. The square UPA consists of $n\times n$ isotropic antenna elements spaced $\lambda/2$ from each other, where $\lambda=c_0/f_0$ is the wavelength of the signal, $f_0$ is the frequency of the signal, and $c_0$ is the speed of light.
\if\mycmd1
To perform sensing, the UAV $u \in \mathcal{U}$ estimates the RCS of a certain point on the ground, denoted as $p$, located at the coordinates $\mathbf{r}_p=(x_p,y_p,0)$. For this purpose, $u$ utilizes a digital receive beamformer $\mathbf{w}_p\in\mathbb{C}^{n^2\times 1}$. The reflection from point $p$ arriving at UAV $u$ has an angle-of-arrival (AoA) of $\varphi_{p,u} = (\theta_{p,u},\phi_{p,u})$, where $\theta_{p,u}$ corresponds to the elevation angle and $\phi_{p,u}$ to the azimuth. The corresponding beam-steering vector $\mathbf{g}(\varphi_{p,u})$ has elements $g_{ij}(\varphi_{p,u})$ for $i,j = 0,...,n-1$, where $i$ is the index of the antenna element along the $x$ axis and $j$ along the $y$ axis of the UPA, defined as~\cite{book:BalanisAntennas}
\begin{align}\nonumber
g_{ij}(\varphi_{p,u}) = &e^{-j\pi i\sin(\theta_{p,u})\sin(\phi_{p,u})} \times \\
&e^{-j\pi j\sin(\theta_{p,u})\cos(\phi_{p,u})}.
\end{align}
The steering matrix $\mathbf{G}_u \in \mathbb{C}^{n^2\times H}$ contains the steering vectors corresponding to the $H$ reflections captured at UAV $u$ as
\begin{equation}
\mathbf{G}_u = [\mathbf{g}(\varphi_{1,u}),..., \mathbf{g}(\varphi_{H,u})]_{n^2\times H},
\end{equation}
where $n^2$ is the total number of antennas at UAV $u$.
After applying the receive beamformer $\mathbf{w}_{p}$ at reception, the beam pattern from the reflections captured at $u$ is given by
\begin{align}
\pmb{\chi} = \mathbf{G}_u^T\mathbf{w}_{p} = [\chi(\varphi_{1,u}),...,\chi(\varphi_{H,u})]^T,
\end{align}
where $\chi(\varphi_{p,u})$ is the gain of the reflection coming from $p$ and $\pmb{\chi}$ is the beam pattern vector of size $H\times 1$ at every AoA by applying beamformer $\mathbf{w}_{p}$ at UAV $u$.
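As a concrete numerical sketch (assuming the 0-based element indexing above and half-wavelength spacing), the steering vectors and the beam pattern $\pmb{\chi} = \mathbf{G}_u^T\mathbf{w}_{p}$ can be computed as:

```python
import numpy as np

# UPA steering vector for AoA (theta, phi), flattened to length n^2
# (element (i, j) at flat index i*n + j), and the beam pattern
# chi = G^T w over a set of captured AoAs.
def upa_steering(theta, phi, n):
    idx = np.arange(n)   # 0-based element indices along each axis
    px = np.exp(-1j * np.pi * idx * np.sin(theta) * np.sin(phi))
    py = np.exp(-1j * np.pi * idx * np.sin(theta) * np.cos(phi))
    return np.kron(px, py)            # shape (n^2,)

def beam_pattern(aoas, w, n):
    """aoas: list of (theta, phi) pairs; w: beamformer, shape (n^2,)."""
    G = np.stack([upa_steering(t, p, n) for t, p in aoas], axis=1)
    return G.T @ w                    # chi(varphi) for each AoA
```

With $\mathbf{w}_{p}$ chosen as the conjugated, normalized steering vector of the intended cell, the pattern peaks (with unit gain) at the matching AoA.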
\fi
\section{Distributed Sensing Protocol}\label{sec:Protocol}
For the sensing process, it is considered that the total area $S$ is sectioned into a grid composed of $L\times L$ square cells with dimensions $ d \times d $ with $d = \ell/L$. Each cell is characterised by its middle point $p$ of position $\mathbf{r}_p =(x_p,y_p,0)$ such that $p\in\mathcal{P}$, where $\mathcal{P}$ is the set of all cells. For notational simplicity we will refer a certain cell by its middle point $p$. The point $p^*$ represents the target, which is located in the position $\mathbf{r}_{p^*} =(x_{p^*},y_{p^*},0)$.
It is also considered that, at a certain time, a UAV $u\in\mathcal{U}$ illuminates straight down with its UPA working as a phased array,
thus the half power beam width (HPBW) projection forms a circle on the ground. In this sense, it is assumed that the cells that are completely inside the largest inscribed square of the HPBW projection are the ones intended to be sensed by the reflections produced from the illumination of UAV $u$, and are characterised as the cell set $\mathcal{P}_{u}$, while the set of non-intended illuminated cells $\mathcal{P}_{u}'$ contains the illuminated cells that are not inside the largest inscribed square, which are treated as clutter, as illustrated in Fig.~\ref{fig:gridCells}.
\begin{figure}[bt]
\centering
\includegraphics[width=0.95\linewidth]{gridFig_Cellsu.pdf}
\caption{Illumination grid.}
\label{fig:gridCells}
\vspace{-1em}
\end{figure}
In total, the set of illuminated cells
is given as $\mathcal{Q}_{u} = \mathcal{P}_{u}\cup\mathcal{P}_{u}'$.
The distributed sensing framework is summarized as follows:
\begin{itemize}[label={},leftmargin=*]
\item \textbf{Step 1:} The UAVs coordinate and assume their positions to cover the whole area of interest $S$, such that every cell in $\mathcal{P}$ is contained in a single $\mathcal{P}_u$, $u\in\mathcal{U}$.
\item \textbf{Step 2:} UAV $u\in\mathcal{U}$ illuminates the ground directly below acting as a phased array, illuminating the elements of $\mathcal{Q}_u$, and potentially, the target $p^*$.
\item \textbf{Step 3:} Every other UAV $u'\in\mathcal{U}\setminus\{u\}$ processes the incoming reflections by choosing a cell $p\in\mathcal{P}_u$ and for that cell
\begin{itemize}
\item computes and applies a digital receive beamformer as described in Section~\ref{sec:BF}, and
\item computes the periodogram corresponding to $p$, and estimates its RCS as described in Section~\ref{sec:Periodogram}.
\end{itemize}
\item \textbf{Step 4:} Repeat Step 3 for all cells $p\in\mathcal{P}_u$.
\item \textbf{Step 5:} Repeat Steps 2-4 for all UAVs $u\in\mathcal{U}$. After this, each UAV $u$ has an estimated RCS map of the grid, $\mathbf{\Hat{\Gamma}}_u$, which is a matrix of RCS estimates of all cells in $\mathcal{P} \setminus \mathcal{P}_u$. This occurs because the UAV $u$ does not estimate the RCS of the cells in $\mathcal{P}_u$, thus avoiding the need for a full-duplex system at the UAVs.
\item \textbf{Step 6:} All UAVs $u\in\mathcal{U}$ send their RCS estimation maps $\mathbf{\Hat{\Gamma}}_u$ to the FC for information-level fusion.
\item \textbf{Step 7:} The FC fuses the estimates together into the fused RCS map $\mathbf{\Hat{\Gamma}}$, and, by assuming a non-reflective ground such that the RCS of the ground is smaller than that of the target ($\sigma_{\mathrm{G}} < \sigma_{\mathrm{T}}$), the target is estimated to be located in the cell of highest estimated RCS, i.e. in $\argmax \mathbf{\Hat{\Gamma}}$, as described in Section~\ref{sec:Fuse}.
\end{itemize}
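The protocol above can be sketched end-to-end in a few lines (Python; \texttt{estimate\_rcs} is a placeholder for the beamforming-plus-periodogram processing of the following sections, and the round-robin grid partition is an assumption for illustration only):

```python
import numpy as np

# High-level sketch of the grid-based distributed sensing protocol
# (Steps 1-7). estimate_rcs() is a placeholder for the beamforming +
# periodogram processing; the round-robin partition is illustrative.
rng = np.random.default_rng(0)
L, U = 8, 4                      # grid side (cells) and number of UAVs
target = (5, 2)                  # cell containing the target
sigma_T, sigma_G = 1.0, 0.1      # target and ground RCS (sigma_G < sigma_T)

def estimate_rcs(cell):
    # placeholder: true RCS plus small measurement noise
    true = sigma_T if cell == target else sigma_G
    return true + 0.01 * rng.standard_normal()

cells = [(i, j) for i in range(L) for j in range(L)]
# Step 1: partition the grid among UAVs (round-robin here for brevity)
P = {u: cells[u::U] for u in range(U)}

maps = {u: np.full((L, L), np.nan) for u in range(U)}
for u in range(U):               # Step 2: UAV u illuminates its cells P[u]
    for up in range(U):          # Step 3: every other UAV receives
        if up == u:
            continue             # half-duplex: u does not sense its own cells
        for cell in P[u]:        # Steps 3-4: per-cell RCS estimates
            maps[up][cell] = estimate_rcs(cell)

# Steps 6-7: fusion at the FC (average of available estimates) and
# target declaration at the cell of highest fused RCS
fused = np.nanmean(np.stack(list(maps.values())), axis=0)
est_target = np.unravel_index(np.nanargmax(fused), fused.shape)
```

In this toy run the fused map recovers the target cell, since the ground RCS is well below the target RCS.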
\section{Beamformer Design}\label{sec:BF}
The receive beamformer is designed to have the main lobe of the resulting beam pattern steered towards the intended cell $p$ in order to estimate its RCS. To this end, two different approaches are considered for the design of the receive beamformer, namely a least-squares (LS) heuristic formulation and the minimum-variance beamformer based on the Capon method. These approaches are described in the following.
\subsection{Least-Squares heuristic approach}
For this approach, the beamformer is obtained by solving the following constrained LS optimisation problem \cite{art:Shi_ILS,art:Zhang_DBR}
\begin{subequations}
\begin{alignat}{3}
\mathrm{\textbf{P1:}}\;\;\;\;&\mathrm{minimise}&\qquad&|| \mathbf{G}^T\mathbf{w}_{p} - \mathbf{v} ||_2^2\\
&\mathrm{subject~to}&\qquad&|| \mathbf{w}_{p} ||_2^2 = 1,
\end{alignat}
\end{subequations}
where $\mathbf{v}$ is the desired response vector over the $H'$ AoAs in the beam-steering matrix $\mathbf{G}\in\mathbb{C}^{n\times H'}$.
In this approach, a heuristic is employed in which the AoAs in $\mathbf{G}$ are chosen to span the elevation and azimuth ranges evenly, centred around the intended AoA $\varphi_{p,u}$. The AoAs are taken as a mesh of $n$ elevation angles and $4n$ azimuth angles respectively given by
\begin{alignat}{3}
\theta_i =&\!\mod{\left( \theta_{p,u} + \frac{i\pi}{2(n-1)}, \frac{\pi}{2} \right)}, \; & i=0,...,n-1\\
\phi_j =&\!\mod{\left( \phi_{p,u} + \frac{j2\pi}{4n-1}, 2\pi \right)}, \; & j=0,...,4n-1,
\end{alignat}
such that $H' = 4n^2$.
The solution of \textbf{P1} is well known to be $\mathbf{w}_{p} = (\mathbf{G}^T)^\dagger \mathbf{v}$ \cite{art:Zhang_DBR}, where $(\cdot)^\dagger$ denotes the pseudo-inverse; since $\mathbf{G}^T$ has more rows than columns, the problem can be solved efficiently by applying Cholesky factorization. Therefore, the iterative LS algorithm proposed in \cite{art:Shi_ILS} can be employed to solve \textbf{P1}.
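As an illustration, the closed-form pseudo-inverse solution of \textbf{P1} can be sketched as follows. This is a minimal numerical example with hypothetical dimensions and a random stand-in for the steering matrix; the unit-norm constraint is imposed by rescaling the unconstrained LS solution, which is a common heuristic rather than the exact constrained optimum.

```python
import numpy as np

def ls_beamformer(G, v):
    """Heuristic LS solution of P1: w = (G^T)^+ v, rescaled to ||w||_2 = 1.

    G : (n, H') complex beam-steering matrix (columns = steering vectors).
    v : (H',) desired response over the H' angles of arrival.
    """
    w = np.linalg.pinv(G.T) @ v        # minimises ||G^T w - v||_2 (unconstrained)
    return w / np.linalg.norm(w)       # heuristic projection onto ||w||_2 = 1

# Toy example: n = 4 antennas, H' = 16 directions; the desired response
# vector concentrates the beam on the first angle (all values hypothetical).
rng = np.random.default_rng(0)
n, H = 4, 16
G = (rng.standard_normal((n, H)) + 1j * rng.standard_normal((n, H))) / np.sqrt(2)
v = np.zeros(H)
v[0] = 1.0
w = ls_beamformer(G, v)
```

In practice the columns of `G` would be the UPA steering vectors over the angle mesh defined above; the random matrix here only exercises the algebra.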
\subsection{Capon method}
The Capon method provides minimum-variance distortionless
response beamforming and can be formulated as a quadratic program (QP) convex optimisation problem \cite{art:Stoica_Capon}
\begin{subequations}
\begin{alignat}{3}
\mathrm{\textbf{P2:}}\;\;\;\;&\mathrm{minimise}&\qquad&\mathbf{w}_{p}^H\mathbf{R}\mathbf{w}_{p}\\
&\mathrm{subject~to}&\qquad& \mathbf{w}_{p}^H\mathbf{g}(\varphi_{p,u'}) = 1,
\end{alignat}
\end{subequations}
where $\mathbf{R}\in\mathbb{C}^{n\times n}$ is the covariance matrix of the received signal over the desired direction, which can be defined as $\mathbf{R} = \mathbf{g}(\varphi_{p,u'})\mathbf{g}(\varphi_{p,u'})^H + \alpha \mathbf{I}$~\cite{art:Shi_ILS}, with $\mathbf{I}\in\mathbb{R}^{n\times n}$ the identity matrix and $\alpha$ a small positive real number. The solution of \textbf{P2} is obtained as in~\cite{art:Stoica_Capon} and given by
\begin{equation}
\mathbf{w_{p}} = \frac{\mathbf{R}^{-1}\mathbf{g}(\varphi_{p,u'})}{\mathbf{g}(\varphi_{p,u'})^H\mathbf{R}^{-1}\mathbf{g}(\varphi_{p,u'})}.
\end{equation}
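The closed-form Capon solution can be sketched directly from the expressions above. The steering vector below is a toy uniform-linear-array response chosen only as a hypothetical stand-in (the paper's UAVs carry UPAs); the distortionless constraint $\mathbf{w}^H\mathbf{g}=1$ holds by construction.

```python
import numpy as np

def capon_beamformer(g, alpha=1e-3):
    """Closed-form Capon solution of P2 with R = g g^H + alpha I.

    g     : (n,) steering vector of the intended direction.
    alpha : small positive diagonal-loading factor.
    """
    n = g.size
    R = np.outer(g, g.conj()) + alpha * np.eye(n)   # rank-1 model plus loading
    Rinv_g = np.linalg.solve(R, g)                  # R^{-1} g without explicit inverse
    return Rinv_g / (g.conj() @ Rinv_g)             # enforces w^H g = 1

# Toy 8-element ULA steering vector at a hypothetical angle.
g = np.exp(1j * np.pi * np.arange(8) * np.sin(0.3))
w = capon_beamformer(g)
```

Since $\mathbf{R}$ is Hermitian positive definite, the denominator $\mathbf{g}^H\mathbf{R}^{-1}\mathbf{g}$ is real and positive, so the normalisation is well defined.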
\section{Periodogram}\label{sec:Periodogram}
\if\mycmdPeriodogram0
\fi
\if\mycmdPeriodogram1
For performing sensing, the UAVs illuminating the ground transmit frames consisting of $N$ OFDM symbols, each comprising $M$ orthogonal subcarriers. The transmitted OFDM frame can be expressed as a matrix $\mathbf{F_{TX}}=[c^{TX}_{k,l}]\in\mathcal{A}^{N\times M}$, with $k=0,...,N-1$, $l=0,...,M-1$, where $\mathcal{A}$ is the modulated symbol alphabet. At the sensing UAVs, the received frame matrix is denoted by $\mathbf{F_{RX}}=[c^{RX}_{k,l}]$ and is composed of the received baseband symbols corresponding to all reflections from $\mathcal{Q}_{\mathrm{u}}$ at UAV $u'$. The elements of the received frame matrix have the form~\cite{art:Baquero_OFDM,art:OFDM_Samir}
\begin{align}\label{eq:bPointTarg_Mult} \nonumber
&c^{RX}_{k,l} = ~~b_{p}c^{TX}_{k,l}\chi(\varphi_{p,u'}) e^{j2\pi f_{D,p}T_o l}e^{-j2\pi \tau_{p} \Delta f k}e^{-j\zeta_{p}}\\ \nonumber
&+\sum_{p'\in\mathcal{Q}_{\mathrm{u}}\setminus\{p\}} b_{p'}c^{TX}_{k,l}\chi(\varphi_{p',u'}) e^{j2\pi f_{D,p'}T_o l}e^{-j2\pi \tau_{p'} \Delta f k}e^{-j\zeta_{p'}}\\
&+ \delta_{u}b_{p^*}c^{TX}_{k,l}\chi(\varphi_{p^*,u'}) e^{j2\pi f_{D,p^*}T_o l}e^{-j2\pi \tau_{p^*} \Delta f k}e^{-j\zeta_{p^*}}+ z_{k,l},
\end{align}
where $f_{D,p}$ is the Doppler experienced by the reflection from $p$ (assumed constant through the frame), $T_o$ is the OFDM symbol duration (including the cyclic prefix time $T_{CP}$), $\tau_p$ is the delay of the reflection from $p$, $\Delta f$ is the inter-carrier spacing, $\zeta_p$ is a random phase shift of the reflection from $p$, $z_{k,l}$ is additive white Gaussian noise (AWGN) of spectral density $N_0$, and $b_p$ is a term embedding the propagation of the wave and the reflecting characteristics of the reflector in $p$. In this expression, the first term corresponds to the intended cell $p$; the second term corresponds to the interference from the other cells, $p'$, in $\mathcal{Q}_u$; and the third term corresponds to the target reflection, with $\delta_{u}=1$ if the target has been illuminated by UAV $u$, and $\delta_{u}=0$ otherwise. Considering a point-source model, $b_p$ is the amplitude attenuation of the signal, given by~\cite{art:OFDM_Samir}
\begin{equation}\label{eq:b_val}
b_p = \sqrt{\frac{P_T G_T \sigma_p \lambda^2}{(4\pi)^3 d_{p,1}^2d_{p,2}^2}},
\end{equation}
where $\sigma_p\in\{\sigma_{\mathrm{G}},\sigma_{\mathrm{T}}\}$, $P_T$ is the transmit power, $G_T$ is the transmit antenna gain, $d_{p,1}$ is the distance from $u$ to $p$ and $d_{p,2}$ is the distance from $p$ to $u'$.
The received complex symbols $c^{RX}_{k,l}$ contain the transmitted symbols $c^{TX}_{k,l}$ and are therefore data-dependent. To process $\mathbf{F_{RX}}$, this data dependency is removed by the element-wise division $\mathbf{F}=\mathbf{F_{RX}}\oslash \mathbf{F_{TX}}$, yielding processed samples with elements $c_{k,l} = c^{RX}_{k,l}/c^{TX}_{k,l}$.
To estimate the delay and Doppler from $\mathbf{F}$, a common approach for OFDM signals is to use the periodogram, which provides the maximum likelihood (ML) estimator~\cite{art:Baquero_OFDM}. The periodogram takes the fast Fourier transform (FFT) of $\mathbf{F}$ over OFDM symbols, followed by the inverse FFT (IFFT) over subcarriers at a given delay-Doppler pair $(n,m)$ as~\cite{art:Baquero_OFDM}
\begin{align}\label{eq:periodogram}
\nonumber P&_{\mathbf{F}}(n,m) = \\
&\frac{1}{NM} \left| \sum_{k=0}^{N'-1}\left( \sum_{l=0}^{M'-1} c_{k,l} e^{-j2\pi l\frac{m}{M'}} \right) e^{-j2\pi k\frac{n}{N'}} \right|^2,
\end{align}
where $M'\geq M$ and $N'\geq N$ are the lengths of the FFT and IFFT operations respectively, $n=0,...,N'-1$ and $m=0,...,M'-1$\footnote{If $M' > M$ or $N' > N$ is needed in order to obtain more $m$ or $n$ values, zero-padding is applied.}.
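The double sum in \eqref{eq:periodogram} can be evaluated for all $(n,m)$ at once with zero-padded transforms. Note that, as written, both exponents in \eqref{eq:periodogram} are negative, so two forward FFTs (which use the $e^{-j2\pi\cdot}$ convention) realise the formula directly; the toy reflection parameters below are hypothetical and chosen so the peak location is known in advance.

```python
import numpy as np

def periodogram(F, Np, Mp):
    """Evaluate the periodogram P_F(n, m) for all (n, m) via zero-padded FFTs.

    F      : (N, M) matrix of processed samples c_{k,l}.
    Np, Mp : transform lengths N' >= N and M' >= M (zero padding).
    """
    N, M = F.shape
    inner = np.fft.fft(F, n=Mp, axis=1)      # sum over l (subcarriers)
    outer = np.fft.fft(inner, n=Np, axis=0)  # sum over k (OFDM symbols)
    return np.abs(outer) ** 2 / (N * M)

# Toy check: a single noiseless "reflection" whose phase progressions are
# matched to the periodogram exponents, so the peak lands at (n, m) = (5, 3).
N, M = 16, 64
k = np.arange(N)[:, None]
l = np.arange(M)[None, :]
F = np.exp(1j * 2 * np.pi * (3 * l / M + 5 * k / N))
P = periodogram(F, N, M)
```

For this unit-amplitude reflection the peak value is $|NM|^2/(NM) = NM$, consistent with the normalisation in \eqref{eq:periodogram}.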
It has been proven that the ML estimator of the delay and Doppler for a single target coincides with the maximum point in the periodogram as $(\hat{n}, \hat{m}) = \argmax_{n,m} P_{\mathbf{F}}(n,m)$~\cite{art:Baquero_OFDM}, which is maximised when
\begin{align}\label{eq:periodogramMax}
\frac{f_D}{\Delta f} -\frac{\hat{m}}{M} = 0 \;\;\;\land\;\;\; \frac{\tau}{T_o} -\frac{\hat{n}}{N} =0.
\end{align}
Then, from \eqref{eq:bPointTarg_Mult}, \eqref{eq:b_val} and \eqref{eq:periodogram}, $\sigma_p$ can be estimated as
\begin{equation}\label{eq:rcsestcalc}
\Hat{\sigma}_p = \left(\frac{1}{NM}\right) \frac{P_{\mathbf{F}}(\hat{n}, \hat{m})(4\pi)^3d_{p,1}^2d_{p,2}^2}{P_TG_T\lambda^2}.
\end{equation}
Then, considering the geometry and protocol of the system, each UAV can set $\hat{m}$, $M$, $\hat{n}$ and $N$ so that \eqref{eq:periodogramMax} is met exactly for each cell $p$ to be sensed, and the corresponding RCS estimate is obtained by computing \eqref{eq:rcsestcalc}.
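The inversion in \eqref{eq:rcsestcalc} can be sanity-checked numerically: for a noiseless single reflection of amplitude $b_p$ from \eqref{eq:b_val}, the periodogram peak equals $NM\,b_p^2$, so the true $\sigma_p$ is recovered exactly. The distances and carrier below are hypothetical example values (the carrier matches the paper's 24 GHz).

```python
import numpy as np

def rcs_estimate(P_peak, N, M, d1, d2, Pt, Gt, lam):
    """Invert eq. (rcsestcalc): RCS estimate from the periodogram peak value."""
    return (P_peak / (N * M)) * (4 * np.pi) ** 3 * d1 ** 2 * d2 ** 2 / (Pt * Gt * lam ** 2)

# Round-trip check with hypothetical geometry: build b_p from a known sigma,
# form the ideal peak N*M*b_p^2, and recover sigma.
N, M, Pt, Gt = 16, 64, 1.0, 1.0
lam = 3e8 / 24e9                               # wavelength at f0 = 24 GHz
d1, d2 = 120.0, 130.0                          # hypothetical UAV-cell distances [m]
sigma = 10.0 ** (10 / 10)                      # 10 dBsm target RCS
b = np.sqrt(Pt * Gt * sigma * lam ** 2 / ((4 * np.pi) ** 3 * d1 ** 2 * d2 ** 2))
sigma_hat = rcs_estimate(N * M * b ** 2, N, M, d1, d2, Pt, Gt, lam)
```

In the full system the peak value also carries beamforming gain and noise, so the estimate is only exact in this idealised setting.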
\fi
\section{Information-Level Fusion}\label{sec:Fuse}
After all UAVs $u\in\mathcal{U}$ have sent their local RCS estimate maps $\mathbf{\Hat{\Gamma}}_{u}$ to the FC, the FC performs information-level fusion of the local estimates to obtain a global estimate $\mathbf{\Hat{\Gamma}}$. Then, the following hypothesis test is performed for all cells of the grid, $p\in\mathcal{P}$
\begin{subequations}\label{eq:hypothesis}
\begin{alignat}{3}
\mathcal{H}_0:\;\;\;\;&|| \mathbf{r}_{p^*} - \mathbf{r}_{p} ||_\infty > \frac{d}{2} \\
\mathcal{H}_1:\;\;\;\;&|| \mathbf{r}_{p^*} - \mathbf{r}_{p} ||_\infty \leq \frac{d}{2},
\end{alignat}
\end{subequations}
where $||\cdot||_\infty$ denotes the $L^\infty$ norm. Hypothesis $\mathcal{H}_1$ corresponds to the target $p^*$ being located in the corresponding cell $p$, and is declared at the cell $p$ with the maximum value estimate $\Hat{\sigma}=\max \mathbf{\Hat{\Gamma}}$. Conversely, $\mathcal{H}_0$ corresponds to the target not being located at $p$, and is declared at every other cell $p$ whose estimate satisfies $\Hat{\sigma} < \max \mathbf{\Hat{\Gamma}}$.
The information-level fusion will be carried out using two techniques, namely averaging and pre-normalising the local estimates before averaging.
\textbf{Average: }
The FC averages the values of the cells over the local maps from all UAVs in $\mathcal{U}$ such that $\mathbf{\Hat{\Gamma}} = \frac{1}{U}\sum_{u\in\mathcal{U}} \mathbf{\Hat{\Gamma}}_{u}$.
\textbf{Pre-normalised average: } An average of the pre-normalised local maps is obtained, in which each local map $\mathbf{\Hat{\Gamma}}_{u}$ is normalised between 0 and 1 as
\begin{equation}
\Bar{\sigma} = \frac{\Hat{\sigma} - \min{(\mathbf{\Hat{\Gamma}}_{u})}}{\max{(\mathbf{\Hat{\Gamma}}_{u})} - \min{(\mathbf{\Hat{\Gamma}}_{u})}}\enskip,\enskip \forall \Hat{\sigma}\in\mathbf{\Hat{\Gamma}}_{u} \enskip,\enskip \forall u \in\mathcal{U}.
\end{equation}
The resulting normalised local maps $\Bar{\Gamma}_{u}$ are then averaged as in the previous approach.
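Both fusion rules can be sketched in a few lines. The grid size, number of UAVs and map values below are hypothetical; the pre-normalisation assumes each local map is non-constant (otherwise the min-max scaling would divide by zero).

```python
import numpy as np

def fuse(maps, normalise=False):
    """Fuse local RCS maps by averaging; optionally min-max scale each map to [0, 1].

    Returns the fused map and the cell (row, col) where H1 is declared (argmax).
    """
    maps = [np.asarray(m, dtype=float) for m in maps]
    if normalise:
        maps = [(m - m.min()) / (m.max() - m.min()) for m in maps]
    fused = sum(maps) / len(maps)
    return fused, np.unravel_index(fused.argmax(), fused.shape)

# Toy example: 3 local maps over a 4x4 grid with low "ground" estimates and
# a dominant target RCS estimate at cell (2, 1).
rng = np.random.default_rng(1)
maps = [rng.uniform(0.0, 0.1, (4, 4)) for _ in range(3)]
for m in maps:
    m[2, 1] = 1.0
fused, cell = fuse(maps, normalise=True)
```

The pre-normalised variant equalises the dynamic range of the local maps before averaging, which is why it behaves differently from plain averaging when the UAVs see very different path losses.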
\section{Numerical Results}\label{sec:Results}
In this section, the performance of the proposed sensing protocol is evaluated in terms of the probability of detection, where detection is considered to occur whenever $\mathcal{H}_1$ is met for the cell that contains the target. For this purpose, Monte Carlo simulations were performed, where the target is randomly located at each iteration; the simulation parameters are presented in Table~\ref{tab:commonPar}, unless stated otherwise. The value of $\sigma_\mathrm{T}$ is assumed to be 10 dBsm\footnote{$\sigma$[dBsm] = 10$\log_{10}(\sigma$[$m^2$]$/1 m^2)$}, which is a reasonable value for large vehicles~\cite{art:RCSvals01}, and $\sigma_\mathrm{G}$ is set to -30 dBsm, which is reasonable for grass or dirt~\cite{art:RCSvalsGND}. The OFDM parameters are taken from \cite{Chen_JSACUAVSystem}. The UAVs are uniformly distributed across $S$ so as to cover the whole grid, each illuminating $L/\sqrt{U}\times L/\sqrt{U}$ cells while avoiding intersections between the $\mathcal{P}_{u}$ sets, unless stated otherwise.
\begin{table}[h!]\vspace{-1em}\centering
\caption{Common simulation parameters}\label{tab:commonPar}
\begin{tabular}{|c|c||c|c|}
\hline
\textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} \\ \hline
$P_T$ & 1 [W] & $M$ & 64 \\ \hline
$G_T$ & 1 & $n$ & 8 \\ \hline
$\ell$ & 100 [m] & $f_0$ & 24 [GHz] \\ \hline
$U$ & 16 & $BW$ & 200 [MHz] \\ \hline
$N_0$ & -174 [dBm/Hz] & $T_{CP}$ & 2.3 [$\mu$s] \\ \hline
$\sigma_{\mathrm{G}}$ & -30 [dBsm] & $L$ & 20 \\ \hline
$\sigma_{\mathrm{T}}$ & 10 [dBsm] & $f_D$ & 0 [Hz] \\ \hline
$N$ & 16 & & \\ \hline
\end{tabular}
\vspace{-1em}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.73\linewidth]{result_d_ALL_N_1_PDF.pdf}
\caption{Detection probability of the target for different cell lengths $d$, beamforming techniques, fusion techniques and $\sigma_{\mathrm{G}}$ values. The number of UAVs and the number of cells illuminated per UAV are kept constant, so larger $d$ values imply a larger total area and higher UAV altitude.\vspace{-1em}}
\label{fig:resultNew_Pd_d_BF}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.73\linewidth]{result_Lx_ALL_N_0_PDF.pdf}
\caption{Detection probability of the target for different cell lengths $d$, beamforming techniques and fusion techniques. The total area and UAV altitude are kept constant, so larger $d$ values imply fewer cells illuminated per UAV.\vspace{-2em}}
\label{fig:resultNew_Pd_d_VAR}
\end{figure}
In Fig.~\ref{fig:resultNew_Pd_d_BF} the detection probability is shown as a function of the cell side length $d$, for different $\sigma_{\mathrm{G}}$ values. The number of intended cells per UAV is kept constant, thus the size of the cells determines the total size of the area $S$ and the altitude of the UAVs $h$, which increases as $d$ increases to accommodate the same number of larger cells in its HPBW. Note that, as $d$ and hence $h$ increase, the $b_p$ value from \eqref{eq:b_val} moves closer to the noise level, so the probability of detection decreases.
There exists a maximum point around $d=2$m, where the best probability of detection is achieved.
As expected, as $\sigma_{\mathrm{G}}$ increases, the difference between the RCS of the ground and the target decreases, so that the probability of detection also decreases. By comparing beamforming techniques, both show a similar behaviour. However, when comparing fusion techniques, pre-normalising the local estimates performs better only for larger $d$ and $\sigma_{\mathrm{G}}$ values.
Conversely, in Fig.~\ref{fig:resultNew_Pd_d_VAR} the total size of the area $S$ and the altitude of the UAVs $h$ is kept constant, while varying $d$. This is accomplished by adjusting the number of cells in the grid $L$.
In this case, note that higher $d$ values lead to a better probability of detection, as each cell covers a larger area. Moreover, a local optimum can be appreciated around $d=4$m, which offers more precise detection.
\begin{figure}
\centering
\includegraphics[width=0.73\linewidth]{result_Deltad2_ALL_N_0_PDF.pdf}
\caption{Detection probability of the target at a $\Delta$ cells distance for different $\sigma_{\mathrm{G}}$ values and different cell size $d$.\vspace{-1em}}
\label{fig:resultNew_Pd_Dd}
\end{figure}
Furthermore, in Fig.~\ref{fig:resultNew_Pd_Dd} the detection probability is plotted for different values of $\sigma_{\mathrm{G}}$, different values of a modified threshold $d(\frac{1}{2} + \Delta)$ in the hypothesis test \eqref{eq:hypothesis}, and different values of $d$. The curves show the probability of detecting the target at $\Delta$ cells away from the cell with the maximum value in $\Bar{\Gamma}$. Note that values of $\sigma_{\mathrm{G}}\leq -10$dBsm yield a probability of detection close to 100\% for $\Delta \geq 1$, i.e. within a distance of one cell. This suggests a probability of detection above 99.89\% within $5$cm ($d=0.01$m, $\Delta=2$, $\sigma_{\mathrm{G}}\leq -10$dBsm), which is more accurate than state-of-the-art works utilising passive bistatic radar in a multi-user MIMO OFDM setting, such as~\cite{art:Lyu_MIMOOFDMEurasip}, where an accuracy of $3$m is achieved. The results show that, for small $\sigma_{\mathrm{G}}$ values, most misdetections occur in an adjacent cell.
\begin{figure}
\centering
\includegraphics[width=0.73\linewidth]{result_n_ALL_N_0_PDF.pdf}
\caption{Detection probability of the target for different total number of antennas for the UAV UPA $n^2$ for different beamforming techniques, fusion techniques and $\sigma_{\mathrm{G}}$ values.\vspace{-2em}}
\label{fig:resultNew_Pd_n_BF}
\end{figure}
Fig.~\ref{fig:resultNew_Pd_n_BF} illustrates the detection probability as a function of the number of antennas in the UPAs of the UAVs, for different $\sigma_{\mathrm{G}}$ values. Therein, the number of UAVs and the number of illuminated cells per UAV are kept constant, so that narrower beams imply that the UAVs increase their altitudes to accommodate the same intended cells. It is worth noting that increasing the number of antennas yields a narrower main beam, and as the beam becomes narrower (higher $n$ values), an improvement in the probability of detection is observed due to the increased directionality and precision towards the intended sensed cells. However, when the beam becomes too narrow, small beam misalignments have a bigger impact on the detection of the target, and the increased UAV altitude causes a stronger pathloss, bringing the received signal closer to the noise level, and thus the probability of detection decreases.
For larger $\sigma_{\mathrm{G}}$ values, the probability of detection decreases even further, as expected. It is also noticed that both fusion techniques show similar detection probability results, as do both beamforming techniques. However, the Capon method shows slightly better performance for a high number of antennas and small $\sigma_{\mathrm{G}}$ values. Moreover, for smaller $\sigma_{\mathrm{G}}$ values, fusion by averaging slightly outperforms the pre-normalised averaging approach, while for higher $\sigma_{\mathrm{G}}$ values the opposite is true.
\begin{figure}
\centering
\includegraphics[width=0.73\linewidth]{result_z_ALL_N_0_PDF.pdf}
\caption{Detection probability of the target at a $\Delta$ cells distance for different $\sigma_{\mathrm{G}}$ values for different UAV altitude $h$ values.\vspace{-2em}}
\label{fig:result_Pd_zUAV}
\end{figure}
Fig.~\ref{fig:result_Pd_zUAV} illustrates the detection probability as a function of the common UAV altitude $h$ for varying $\sigma_{\mathrm{G}}$ and $\Delta$ values. The UAVs are positioned in a similar configuration to the previous figure; thus, fewer cells are covered by the main beam of the transmitting UAVs at smaller altitudes, resulting in cells not being illuminated by any UAV. The maximum altitude is considered to be the one where all cells are illuminated once (no overlapping). As $h$ increases, each $\mathcal{P}_{u}$ set grows from $1\times 1$ cell, to $3\times 3$ cells and finally to $5\times 5$ cells, such that all cells are illuminated once. This behaviour can be seen in the $\Delta=0$ curve, where a sudden increase in the probability of detection is observed at altitudes where more cells are allocated to $\mathcal{P}_{u}$; the same tendency is observed for higher $\sigma_{\mathrm{G}}$ values, with worse performance. For higher $\Delta$ values, the probability of detection is higher and increases smoothly with $h$, as a higher $\Delta$ implies that detection can be considered successful on non-illuminated cells that are adjacent to illuminated cells. This is particularly observed for $\Delta=2$, where every cell in the grid is considered for detection.
\vspace{-1em}
\section{Conclusions}\label{sec:Conclusions}
\vspace{-0.2em}
In this paper, a half-duplex distributed sensing framework for UAV-assisted networks was proposed
in which the area of interest is sectioned into a grid, and the RCS of each cell is estimated by employing receive digital beamforming and a periodogram-based approach, and later sent to an FC for information-level fusion. Results show that the detection probability of the system increases for ground cells with smaller RCS values and that high accuracy can be achieved within a one-cell distance. Increasing the number of antennas at the UAVs improves the detection probability of the target; however, the resulting increase in UAV altitude can deteriorate it. Moreover, it was found that the detection probability is higher for larger cell sizes $d$ if the UAV altitude is kept constant, although there is a local maximum at a small $d$ value. Future works can consider the effect of Doppler and position control of the UAVs to increase the sensing performance of the framework.
\iffalse
\fi
\section*{Acknowledgement}
This research has been supported by the Academy of Finland, 6G Flagship program under Grant 346208 and project FAITH under Grant 334280.
\bibliographystyle{unsrt}
\section{Introduction}
By assuming the metric in the form
$ds^2 = {a(t)}^2 ds_3^2 - dt^2$,
Friedmann (1922) built up his relativistic
universe on two
assumptions: the universe looks identical in whichever direction we look,
and this would also be true if we were observing it anywhere else. By
the two differential equations imposed by the
Einstein equation upon the function
$a(t)$, he
showed then that, instead of being static, the universe is expanding.
Without knowing Friedmann's theoretical prediction,
this phenomenon was actually
discovered by Edwin Hubble (1929) several years later.
This phenomenon is pointed out in
this paper in quite a different situation.
Independently from Friedmann's theory, this one has been evolved
from those works of the author where isospectral manifolds with
different local geometries were constructed on two different types of
manifolds, called Z-torus bundles (alias Z-crystals)
and Z-ball bundles, respectively \cite{sz1}-\cite{sz4}.
These bundles are constructed by means of
nilpotent Lie groups as well as their solvable
extensions such that one considers tori resp. balls in the center
(called also
Z-space) of the nilpotent group. Both the nilpotent and extended groups
are endowed with appropriate natural invariant metrics.
Surprisingly enough, the Laplacians on the nilpotent groups
(endowed always with invariant positive definite Riemann metrics)
are nothing but
the familiar Hamilton operators corresponding to
particle-antiparticle systems. On the two types of manifolds, the
represented particles can be distinguished as follows. On Z-crystals, the
Laplacian represents particles having no interior, where it
actually appears as a Ginsburg-Landau-Zeeman operator of a system
of electrons, positrons, and electron-positron-neutrinos.
The particles represented by Z-ball bundles do have
interior and the Laplacian decomposes into an exterior
Ginsburg-Landau-Zeeman operator and an
interior spin operator by which the weak-force and the strong-force
interactions can be described, respectively. These nuclear forces are
very different from the electromagnetic force emerging in the Laplacian
of Z-crystals. The weak nuclear force
explains the beta decay, while the strong force keeps the
parts of atomic nuclei together. Yet, the Z-crystal-Laplacian and the
weak-force-Laplacian can be led back to the very same radial
Ginsburg-Landau-Zeeman operator. This phenomenon is consistent with
the Weinberg-Salam theory of beta decays, which unified the weak force
with the electromagnetic force.
There are two ways to introduce relativistic time on these models.
The static model is constructed by the Cartesian product of the nilpotent
group with $\mathbb R$. The latter component becomes the time axis
with respect to the natural Lorentz-indefinite metric.
According to the type of model being extended,
the Laplacian is the sum of Schr\"odinger and electron-positron-neutrino,
resp., weak-nuclear and strong-nuclear
wave operators. The last two wave operators appear for particles having
interior. They are further decomposed into W- and Z-operators
which are analogous to the electron-positron and
electron-positron-neutrino wave operators.
Relativistic time can be introduced also by solvable extension,
which also increases the
dimension of the nilpotent group by $1$. The new axis,
which is just a half-line $\mathbb R_+$, can also be used
as time-axis for introducing a natural
relativistic metric on these extensions.
Contrary to the static case, in this way
one defines expanding models
obeying Hubble's law, furthermore, the Laplacian decomposes into expanding
Schr\"odinger and electron-positron-neutrino, resp., weak and strong wave
operators and the corresponding W- and Z-operators.
It is a well known experimental fact that, even though the universe is
expanding, there is no expansion measured on small scale level.
Thus the question arises whether the expanding solvable
models describe a really existing microscopic
world. Fortunately, this question can be answered positively. In fact,
despite the expansion, the particles do not expand.
The reason explaining this paradoxical phenomenon is that,
via the angular momentum and spin operators,
these mathematical models also associate constant magnetic fields with the
particles, and, due to the expansion, the change of these fields induces
electromagnetic fields, which are completely radiated out of the system,
keeping both the magnetic fields and the spectra of the
particle systems constant.
It is also established mathematically in this paper that this
radiation must be isotropic, meaning that it is the
same in whichever direction it is measured. Thus the size and
several other constants of the particles do not change even according
to the expanding solvable model. Actually, this model
gives a new explanation
for the presence of an isotropic radiation in space.
Expanding on large scale but being stationary on small
scale is a well known phenomenon, which, without the
above explanation, could have been a major argument against
the physical reality of the solvable extensions.
According to physical experiments, although far
distant clusters of galaxies move very rapidly away from us,
the solar system is not expanding, nor is our galaxy or the cluster
of galaxies to which it belongs. This stagnancy is even more apparent
on microscopic level, where, for instance, the spectroscopic
investigations
of the light arriving from far distant galaxies confirm that
the spectrum of the hydrogen atom is the same today as it was many
billions of years ago.
The existence of isotropic background radiation is also well known which
was measured, first, by Arno Penzias and Robert Wilson, in 1965.
It is believed, today, that these radiations are travelling to us
across most of the observable universe, thus the radiation isotropy
proves that the universe must be the same in every direction, if only
on a large scale. This phenomenon is considered as a remarkably
accurate confirmation of Friedmann's first assumption. Our expanding
model provides a much more subtle conclusion, however: This background
radiation must be isotropic even if it arrives to us from
very near distances. Moreover, it holds true also on non-isotropic spaces.
This exact mathematical model is not derived from the standard
model of elementary particles,
which theory is based on a non-Abelian gauge theory where
the basic objects are
Yang-Mills connections
defined on principal fibre bundles having structure group $SU(3)$.
Contrary to these gauge theories, in our case
all physical quantities are defined by invariant Riemann metrics
living on
nilpotent resp. solvable groups. It will also be pointed out that
no regular gauge group exists on these models with respect to which these
objects are gauge invariant.
Yet, there is a bridge built up between
the two theories, which explains why the particles introduced by the
two distinct models exhibit the very same physical features. This bridge
can be regarded as a correspondence principle associating certain Riemann
metrics to the Yang-Mills models of elementary particles.
The key point about the new exact mathematical model is that
the center of the nilpotent group
makes room to describe also the rich ``inner life" of particles, which is
known both experimentally and by theories
explaining these experimental facts.
This ``inner life" is displayed by the de Broglie waves
which appear in a new form in this new situation such that they
are written up in terms of the Fourier transforms
performed only on the center of the nilpotent group. For this reason,
they are called Z-Fourier transforms, which are defined on the two types
of models accordingly. On Z-crystals, where there is no ``inner life",
it is nothing but the discrete
Z-Fourier transform defined by the Z-lattice
by which the Z-crystals are introduced. On Z-ball bundles, however,
in order to
obey the boundary conditions, more complicated, so-called twisted
Z-Fourier transforms are introduced.
The action of the very same Laplacian
appears quite differently on these different
function spaces. On Z-crystals, where there are no Z-boundary conditions
involved, the strong nuclear forces do not appear either. In this case,
the eigenfunctions arise as eigenfunctions of
Ginsburg-Landau-Zeeman operators. For this reason, they are called
electromagnetic eigenfunctions. On Z-crystals, the theory corresponds to
quantum electrodynamics (QED), while on Z-ball bundles
it relates to quantum chromodynamics (QCD).
In fact, on Z-ball bundles, due to the Z-boundary conditions,
the Laplacian appears in a much more complex form exhibiting both the
weak and strong forces. More precisely, the weak
force eigenfunctions satisfying a
given boundary condition are defined by the
eigenfunctions of the exterior Ginsburg-Landau-Zeeman operator
introduced above
for particles having interior. Although there are numerous
differences between this exterior Ginsburg-Landau-Zeeman operator
and the original
GLZ-operator defined on Z-crystals, they are both reduced to
the very same radial operator acting on radial functions. As a result, from
the point of view of the elements of the spectrum,
they are the same operators.
This is the mathematical certification of
the Weinberg-Salam theory which unified the weak interaction with the
electromagnetic force. The strong force eigenfunctions are defined
by the eigenfunctions of the inner spin operator. All these forces
reveal the very same strange properties which are described in QCD.
This is how a unified theory for the three:
1.) electromagnetic, 2.) weak, and 3.) strong nuclear forces is
established in this paper. The only
elementary force missing from this list is the gravitation,
which is not discussed in this paper.
This very complex physical-mathematical theory can clearly be evolved
just gradually. In order
to understand the physical contents of the basic objects
appearing in new forms in this new approach, first, those parts
of the classical quantum theory are reviewed which are necessary
to grasp these renewed versions of these basic concepts. Then, after
introducing the basic mathematical objects on 2-step nilpotent Lie groups,
several versions of the Z-Fourier transform will be studied. They are the
basic tools both for introducing the de Broglie waves in a new explicit
form and developing the theory unifying the three fundamental forces.
Besides explicit eigenfunction computations,
it is pointed out in this part that the Laplacian on Z-crystals
is nothing but the Ginsburg-Landau-Zeeman operator of a system of
electrons, positrons, and electron-positron-neutrinos. Furthermore,
on the Z-ball models, it is the sum of the exterior
Ginsburg-Landau-Zeeman operator
and the interior spin operator by which the strong force interaction
can be established.
Then, relativistic time is introduced and
both static and expanding models are established. The Laplacian on these
space-time manifolds appears as the sum of wave operators belonging to
the particles the system consists of.
The paper is concluded by pointing out the spectral
isotropy in the most general situations. Since the Riemann metrics
attached to the particle systems are
not isotropic in general, this statement points out a major
difference between our
model and Friedmann's cosmological model where the isotropy of the
space is one of his two assumptions. Our statement says that radiation
isotropy holds true also on non-isotropic spaces and the two
isotropy-concepts are by no means equivalent.
\section{Basics of classical quantum theory.}
In this section, three topics of classical
quantum theory are reviewed. The first one
describes the elements of de Broglie's theory associating waves to
particles. The second and third survey meson theory and the
Ginsburg-Landau-Zeeman and Schr\"odinger operators of charged particles, respectively.
\subsection{Wave-particle association.}
In quantum theory, a particle with energy $E$ and momentum $\mathbf p$
is associated with a wave,
$Ae^{\mathbf i(K\cdot Z-\omega t)}$, where
$K=(2\pi/\lambda )\mathbf n$ is the wave vector and
$\mathbf n$ is the wave normal. These quantities yield the following
relativistically invariant relations.
For light quanta the most familiar relations are
\begin{equation}
\label{e=hnu}
E=\hbar\omega \quad ,\quad \mathbf p=\hbar K,
\end{equation}
where the length of the wave vector yields also the following equations:
\begin{equation}
\label{p=e/c}
\mathbf k=|K|=\frac{\omega}{c},\quad \mathbf k^2=\frac{\omega^2}{c^2},
\quad {\rm and}\quad
|\mathbf p|=p=\frac{E}{c},\quad \mathbf p^2=\frac{E^2}{c^2}.
\end{equation}
For a material particle of rest mass $m$, the fundamental relation is
\begin{equation}
\label{e(m)}
\frac{E}{c}=\sqrt{ \mathbf p^2+m^2c^2},
\end{equation}
which can be established by the well known equations
\begin{equation}
E=\frac{mc^2}{\sqrt{1-v^2/c^2}},\quad\quad \mathbf p=
\frac{m\mathbf v}{\sqrt{1-v^2/c^2}}
\end{equation}
of relativistic particle mechanics.
The idea of de Broglie was that
(\ref{e=hnu}) should also be valid for a material particle such that
(\ref{p=e/c}) must be replaced by
\begin{equation}
\label{p(m)=e/c}
\sqrt{\mathbf k^2+\frac{m^2c^2}{\hbar^2}}=\frac{\omega}{c},\quad\quad
{\mathbf k^2+\frac{m^2c^2}{\hbar^2}}=\frac{\omega^2}{c^2}.
\end{equation}
In this general setting, $m=0$ corresponds to light.
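The consistency of these relations is easy to check numerically. The following Python sketch is an illustrative aid only, not part of the mathematical development; it assumes modern SI values of the constants and an arbitrary test wave number. It verifies that the dispersion relation (\ref{p(m)=e/c}) reduces to the light-quantum relations for $m=0$ and, via $E=\hbar\omega$ and $\mathbf p=\hbar K$, reproduces (\ref{e(m)}).

```python
import math

# Physical constants (modern SI values, rounded; illustrative only)
hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s
m_e = 9.1093837015e-31  # kg, electron rest mass

def omega(k, m):
    """Angular frequency from the de Broglie dispersion relation
    omega/c = sqrt(k^2 + m^2 c^2 / hbar^2)."""
    return c * math.sqrt(k**2 + (m * c / hbar) ** 2)

k = 1.0e10  # wave number in 1/m (arbitrary test value)

# m = 0 recovers the light-quantum relation omega = c*k
assert math.isclose(omega(k, 0.0), c * k, rel_tol=1e-12)

# For a massive particle, E = hbar*omega and p = hbar*k reproduce
# the fundamental relation E/c = sqrt(p^2 + m^2 c^2)
E = hbar * omega(k, m_e)
p = hbar * k
assert math.isclose(E / c, math.sqrt(p**2 + (m_e * c) ** 2), rel_tol=1e-9)
print("dispersion relations consistent")
```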
In wave mechanics, de Broglie's most general wave packets are
represented by the Fourier integral formula:
\begin{equation}
\label{wavepack}
\psi (Z,t)=\int\int\int A(K_1,K_2,K_3)
e^{\mathbf i(\langle K,Z\rangle-\omega t)}dK_1dK_2dK_3,
\end{equation}
where $\omega$ is given by (\ref{p(m)=e/c}).
In other words, a general wave appears as a superposition of
the above plane waves. Instead of the familiar $X$, the vectors of the
3-space, $\mathbb R^3$, are denoted here by $Z$, indicating that the
reformulated de Broglie waves will be introduced in the new theory
in terms of the so-called
twisted Z-Fourier transform, which is performed on the
center (alias Z-space) of the nilpotent group. The X-space of a
nilpotent group is complementary to the Z-space, and the integration
in the twisted Z-Fourier transform applies only to the Z-variable,
not to the X-variable. This notation is
intended to make the renewed de Broglie waves easier to understand.
The above wave function, $\psi$,
satisfies the relativistic scalar wave equation:
\begin{equation}
\label{waveeq}
\big(\nabla^2-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\big)
\psi (Z,t)=\frac{m^2c^2}{\hbar^2}\psi (Z,t),
\end{equation}
as can be seen by substituting (\ref{wavepack})
into this equation; indeed, (\ref{p(m)=e/c}) then implies (\ref{waveeq}).
According to this
equation, the wave function is an eigenfunction of the wave operator with
eigenvalue ${m^2c^2}/{\hbar^2}$. By this observation we get that
the spectrum of the
wave operator is continuous and each eigenvalue has
infinite multiplicity.
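The eigenfunction property can also be confirmed symbolically. The following sketch, using the computer algebra package sympy (an illustrative aid outside the paper's formalism), checks that a single plane wave with $\omega$ given by (\ref{p(m)=e/c}) satisfies (\ref{waveeq}) with eigenvalue $m^2c^2/\hbar^2$.

```python
import sympy as sp

Z1, Z2, Z3, t = sp.symbols('Z1 Z2 Z3 t', real=True)
K1, K2, K3 = sp.symbols('K1 K2 K3', real=True)
m, c, hbar = sp.symbols('m c hbar', positive=True)

# omega from the dispersion relation omega/c = sqrt(k^2 + m^2 c^2/hbar^2)
omega = c * sp.sqrt(K1**2 + K2**2 + K3**2 + m**2 * c**2 / hbar**2)

# a single de Broglie plane wave
psi = sp.exp(sp.I * (K1*Z1 + K2*Z2 + K3*Z3 - omega * t))

# the relativistic wave operator applied to psi
box_psi = (sp.diff(psi, Z1, 2) + sp.diff(psi, Z2, 2) + sp.diff(psi, Z3, 2)
           - sp.diff(psi, t, 2) / c**2)

# eigenfunction property: box psi = (m^2 c^2 / hbar^2) psi
assert sp.simplify(sp.expand(box_psi - (m**2 * c**2 / hbar**2) * psi)) == 0
print("plane wave satisfies the relativistic wave equation")
```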
The Fourier integral formula
(\ref{wavepack}) converts differential operators to multiplication
operators. Namely we have:
\begin{equation}
\frac{\partial}{\partial Z_j}\sim \mathbf iK_j\quad ,\quad
\frac{\partial}{\partial t}\sim \mathbf i\omega .
\end{equation}
These correspondences together with (\ref{e=hnu}) yield the
translational key:
\begin{equation}
\label{correl}
-\mathbf i\hbar\frac{\partial}{\partial Z_j}\sim p_j\quad ,\quad
\mathbf i\hbar \frac{\partial}{\partial t}\sim E
\end{equation}
between the classical quantities $\mathbf p$ and $E$ of classical mechanics
and the operators of wave mechanics.
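The translational key can likewise be checked on a one-dimensional plane wave. The short sympy sketch below (illustrative only) confirms that $-\mathbf i\hbar\partial_Z$ and $\mathbf i\hbar\partial_t$ act as multiplication by $p=\hbar K$ and $E=\hbar\omega$, respectively.

```python
import sympy as sp

Z, t = sp.symbols('Z t', real=True)
K, omega, hbar = sp.symbols('K omega hbar', positive=True)

# one-dimensional plane wave
psi = sp.exp(sp.I * (K * Z - omega * t))

# -i*hbar d/dZ acts as multiplication by p = hbar*K ...
assert sp.simplify(-sp.I * hbar * sp.diff(psi, Z) - hbar * K * psi) == 0
# ... and i*hbar d/dt acts as multiplication by E = hbar*omega
assert sp.simplify(sp.I * hbar * sp.diff(psi, t) - hbar * omega * psi) == 0
print("momentum and energy operators act as multiplication")
```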
In his lectures on physics \cite{p} (Vol. 5, Wave mechanics, pages 3-4),
Pauli describes the
transition from the above relativistic theory to the non-relativistic
approximation as follows. In mechanics, for $v\ll c$ and $p\ll mc$,
we have
\begin{equation}
\label{pnonrel_1}
\frac{E}{c}=\sqrt{ \mathbf p^2+m^2c^2}\sim mc(1+\frac{1}{2}
\frac{p^2}{m^2c^2}+\dots )=\frac{1}{c}(mc^2+
\frac{1}{2}\frac{p^2}{m}+\dots ).
\end{equation}
From (\ref{p(m)=e/c}) we also obtain
\begin{equation}
\label{pnonrel_2}
\omega =\frac{E}{\hbar}=\frac{mc^2}{\hbar}+\frac{\hbar}{2m}k^2+\dots ,
\end{equation}
where $E=mc^2+E_{kin}$ and $E_{kin}=p^2/2m$. The non-relativistic wave
\begin{equation}
\label{nonrel_wavepack}
\tilde \psi (Z,t)=\int\int\int A(K_1,K_2,K_3)
e^{\mathbf i(\langle K,Z\rangle -\tilde \omega t)}dK_1dK_2dK_3,
\end{equation}
is defined in terms of
\begin{equation}
\label{pnonrel_3}
\tilde\omega =\frac{\hbar}{2m}\mathbf k^2=\omega -\frac{mc^2}{\hbar},
\end{equation}
which relates to the relativistic wave function by the formula:
\begin{equation}
\label{pnonrel_4}
\psi (Z,t)=e^{-\frac{\mathbf imc^2}
{\hbar}t} \tilde \psi (Z,t).
\end{equation}
Substitution into (\ref{waveeq}) yields then:
\begin{equation}
\label{pnonrel_5}
\nabla^2 \tilde \psi +\frac{m^2c^2}{\hbar^2}
\tilde \psi + 2\frac{\mathbf im}{\hbar}
\frac{\partial\tilde \psi}{\partial t}
-\frac{1}{c^2}\frac{\partial^2\tilde\psi}{\partial t^2}
=\frac{m^2c^2}{\hbar^2}\tilde\psi ,
\end{equation}
which is nothing but the non-relativistic wave equation:
\begin{equation}
\label{nonrel_waveeq}
\nabla^2 \tilde\psi + \mathbf i\frac{2m}{\hbar}
\frac{\partial\tilde\psi}{\partial t}
-\frac{1}{c^2}\frac{\partial^2\tilde\psi }{\partial t^2}=0.
\end{equation}
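Pauli's substitution can be verified symbolically as well. The following sympy sketch (in one space dimension, illustrative only) substitutes $\psi=e^{-\mathbf imc^2t/\hbar}\tilde\psi$ into the relativistic wave equation and confirms that the non-relativistic wave equation (\ref{nonrel_waveeq}) results.

```python
import sympy as sp

Z, t = sp.symbols('Z t', real=True)
m, c, hbar = sp.symbols('m c hbar', positive=True)

# \tilde\psi as a generic (undetermined) smooth function of Z and t
psit = sp.Function('psit')(Z, t)

# Pauli's substitution psi = exp(-i m c^2 t / hbar) * \tilde\psi
phase = sp.exp(-sp.I * m * c**2 * t / hbar)
psi = phase * psit

# relativistic wave equation, written as (LHS - RHS), in one space dimension
wave_eq = sp.diff(psi, Z, 2) - sp.diff(psi, t, 2)/c**2 - (m*c/hbar)**2 * psi

# non-relativistic wave equation applied to \tilde\psi
nonrel = (sp.diff(psit, Z, 2) + sp.I*(2*m/hbar)*sp.diff(psit, t)
          - sp.diff(psit, t, 2)/c**2)

# the two expressions agree identically (up to the common phase factor)
assert sp.simplify(sp.expand(wave_eq - phase * nonrel)) == 0
print("substitution reproduces the non-relativistic wave equation")
```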
The imaginary coefficient
ensures that there is no special direction in time. This equation is
invariant under the transformations
$t\to -t$ and $\tilde \psi\to\tilde \psi^{*}$,
where $*$ denotes complex conjugation
(this conjugation will be denoted by $\overline\psi$ later on),
whereby $\tilde \psi\tilde \psi^*$ remains unchanged.
According to quantum theory,
the physically measurable quantities are not the wave
functions $\psi$ or $\tilde \psi$, but the probability densities
$\psi\psi^*$ resp. $\tilde \psi\tilde \psi^*$.
\subsection{Meson theory.}
The above solutions of the wave equation strongly relate to the
theory of nuclear forces and mesons.
To review this field, we literally quote Hideki Yukawa's Nobel Lecture
"Meson theory in its developments",
delivered on December 12, 1949. Despite
the fact that elementary particle physics has gone through enormous
developments since 1949, this review has been chosen
because it is highly suggestive regarding
the physical interpretations of the
mathematical models developed in this paper:
"The meson theory started from the extension of the concept of the
field of force so as to include the nuclear forces in addition
to the gravitational and electromagnetic forces. The necessity of
introduction of specific nuclear forces, which could not be reduced
to electromagnetic interactions between charged particles, was
realized soon after the discovery of the neutron, which was to be
bound strongly to the protons and other neutrons in the atomic nucleus.
As pointed out by Wigner$^1$, specific nuclear forces between two
nucleons, each of which can be either in the neutron state or
the proton state, must have a very short range of the order of
$10^{-13}$ cm, in order to account for the rapid increase of the
binding energy from the deuteron to the alpha-particle. The binding
energies of nuclei heavier than the alpha-particle do not increase
as rapidly as if they were proportional to the square of the mass
number A, i.e. the number of nucleons in each nucleus,
but they are in fact approximately proportional to A. This
indicates that nuclear forces are
saturated for some reason. Heisenberg$^2$
suggested that this could be accounted for, if we assumed a force
between a neutron and a proton, for instance, due
to the exchange of the electron or, more generally, due to the exchange
of the electric charge, as in the case of the chemical
bond between a hydrogen atom and a proton. Soon afterwards,
Fermi$^3$ developed a theory of beta-decay based on the hypothesis
by Pauli, according to which a neutron,
for instance, could decay into a proton,
an electron, and a neutrino, which
was supposed to be a very penetrating neutral particle with a
very small mass.
This gave rise, in turn, to the expectation
that nuclear forces could be reduced to the exchange of a pair of
an electron and a neutrino between two nucleons, just as
electromagnetic forces were regarded as due to the exchange
of photons between charged particles. It turned out, however, that
the nuclear forces thus obtained were much too small$^4$,
because the beta-decay was a very slow process compared with
the supposed rapid exchange of the electric charge responsible
for the actual nuclear forces. The idea of the meson field was
introduced in 1935 in order to make up this gap. Original assumptions
of the meson theory were as follows:
I. The nuclear forces are described by a scalar field U,
which satisfies the wave equation
\begin{equation}
\label{y1}
\big(\frac{\partial^2}{\partial Z_1^2}+\frac{\partial^2}{\partial Z_2^2}
+\frac{\partial^2}{\partial Z_3^2}-\frac{1}{ c^2}
\frac{\partial^2}{\partial t^2}
-\kappa^2\big )U=0
\end{equation}
in vacuum, where $\kappa$ is a constant
with the dimension of reciprocal length. Thus, the static potential
between two nucleons at a distance $r$ is proportional
to $\exp (-\kappa r)/r$, the range of forces being given by $1/\kappa$.
II. According to the general principle of quantum theory, the field U is
inevitably accompanied by new particles or quanta, which have the mass
\begin{equation}
\mu =\frac{\kappa\hbar}{ c}
\end{equation}
and the spin $0$, obeying Bose-Einstein statistics. The mass of
these particles can be inferred from the range of nuclear forces. If
we assume, for instance, $\kappa = 5 \times 10^{12}\ {\rm cm}^{-1}$, we obtain
$\mu\sim 200 m_e$, where $m_e$ is the mass of the electron.
III. In order to obtain exchange forces, we must assume that these mesons
have the electric charge $+ e$ or $- e$, and that a positive (negative)
meson is emitted (absorbed) when the nucleon jumps from the proton state
to the neutron state, whereas a negative (positive) meson is emitted
(absorbed) when the nucleon jumps from the neutron to the proton.
Thus a neutron and a proton can interact with each other by
exchanging mesons just as two charged particles interact by
exchanging photons. In fact, we obtain an exchange force of Heisenberg
type between the neutron and the proton of the correct magnitude,
if we assume that the coupling constant g between the nucleon and
the meson field, which has the same dimension as the elementary
charge $e$, is a few times larger than $e$.
However, the above simple theory was incomplete in various respects. For
one thing, the exchange force thus obtained was repulsive for triplet
S-state of the deuteron in contradiction to the experiment, and
moreover we could
not deduce the exchange force of Majorana type, which was necessary in
order to account for the saturation of nuclear forces just at
the alpha-particle. In order to remove these defects,
more general types of meson fields including vector, pseudoscalar
and pseudovector fields in addition to the scalar
fields, were considered by various authors$^6$. In particular, the
vector field was investigated in detail, because it could give a
combination of exchange
forces of Heisenberg and Majorana types with correct signs
and could further account for the anomalous magnetic moments
of the neutron and the proton qualitatively. Furthermore, the
vector theory predicted the existence of noncentral forces between
a neutron and a proton, so that the deuteron might have the electric
quadripole moment. However, the actual electric quadripole
moment turned out to be positive in sign, whereas the vector theory
anticipated the sign to be negative. The only meson field, which gives the
correct signs both for nuclear forces and for the electric quadripole
moment of the deuteron, was the pseudoscalar field$^7$.
There was, however, another feature of nuclear forces,
which was to be accounted for as a consequence of
the meson theory. Namely, the results of experiments on the scattering of
protons by protons indicated that the type and magnitude
of interaction between two protons were, at least approximately,
the same as those between a neutron and a proton, apart from
the Coulomb force. Now the interaction between two protons or
two neutrons was obtained only if we took into account the
terms proportional to $g^4$, whereas that between a neutron and a
proton was proportional to $g^2$, as long as we were considering charged
mesons alone. Thus it seemed necessary to assume further:
IV. In addition to charged mesons, there are neutral mesons with the mass
either exactly or approximately equal to that of charged mesons. They must
also have the integer spin, obey
Bose-Einstein statistics and interact with
nucleons as strongly as charged mesons. This assumption obviously
increased the number of arbitrary constants in meson theory,
which could be so adjusted as to agree with a variety of experimental
facts. These experimental facts could not be restricted to those of
nuclear physics in the narrow sense, but were to include
those related to cosmic rays, because we expected that mesons
could be created and annihilated due to the interaction of cosmic
ray particles with very high energies. The discovery of particles of
intermediate mass in cosmic rays in 1937$^8$
was a great encouragement to further developments of
meson theory. At that time, we came naturally to the conclusion that the
mesons which constituted the main part of the hard component of cosmic
rays at sea level was to be identified with the mesons
which were responsible for nuclear force$^9$. Indeed, cosmic
ray mesons had the mass around $200 m_e$
as predicted and moreover, there was the definite evidence
for the spontaneous decay, which was the consequence of the
following assumption of the original meson theory:
V. Mesons interact also with light particles, i.e. electrons
and neutrinos, just as they interact with nucleons, the only
difference being the smallness of the coupling constant $g^\prime$ in
this case compared with $g$. Thus a positive (negative)
meson can change spontaneously into a positive (negative) electron and a
neutrino, as pointed out first by Bhabha$^{10}$.
The proper lifetime, i.e. the mean
lifetime at rest, of the charged scalar meson, for example, is given by
\begin{equation}
\tau_0=2\big (\frac{\hbar c}{ (g^\prime)^2}\big )
\big (\frac{\hbar}{ \mu c^2}\big )
\end{equation}
For the meson moving with velocity $v$, the lifetime increases
by a factor $1/\sqrt{1-(v/c)^2}$
due to the well-known relativistic delay of the moving clock.
Although the spontaneous decay and the velocity dependence of the lifetime
of cosmic ray mesons were remarkably confirmed by various
experiments$^{11}$,
there was an undeniable discrepancy between theoretical and experimental
values for the lifetime. The original intention of meson
theory was to account
for the beta-decay by combining the assumptions III and V together.
However, the coupling constant $g^\prime$, which was so adjusted as to give the
correct result for the beta-decay, turned out to be too large
in that it gave the lifetime $\tau_0$ of mesons of the order of
$10^{-8}$ sec, which was much smaller
than the observed lifetime $2\times 10^{-6}$ sec.
Moreover, there were indications,
which were by no means in favour of the expectation that cosmic-ray mesons
interacted strongly with nucleons. For example, the observed cross-section
of scattering of cosmic-ray mesons by nuclei was much smaller than
that obtained theoretically. Thus, already in 1941,
the identification of the
cosmic-ray meson with the meson, which was supposed to be responsible
for nuclear forces, became doubtful. In fact, Tanikawa and Sakata$^{12}$
proposed in 1942 a new hypothesis as follows:
The mesons which constitute the hard
component of cosmic rays at sea level are not
directly connected with nuclear
forces, but are produced by the decay of heavier mesons which interacted
strongly with nucleons.
However, we had to wait for a few years before this two-meson hypothesis
was confirmed, until 1947, when two very important facts were discovered.
First, it was discovered by Italian physicists$^{13}$ that the negative mesons
in cosmic rays, which were captured by lighter atoms,
did not disappear instantly,
but very often decayed into electrons in a mean time interval of the
order of $10^{-6}$ sec. This could be understood
only if we supposed that ordinary
mesons in cosmic rays interacted very weakly with nucleons.
Soon afterwards, Powell and others$^{14}$ discovered
two types of mesons in cosmic rays,
the heavier mesons decaying in a very short time into lighter mesons. Just
before the latter discovery, the two-meson hypothesis was proposed by
Marshak and Bethe$^{15}$ independent of the Japanese
physicists above mentioned.
In 1948, mesons were created artificially in Berkeley$^{16}$ and
subsequent experiments confirmed the general picture of two-meson theory.
The fundamental assumptions are now$^{17}$:
(i) The heavier mesons, i.e. $\pi$-mesons with the mass $m_\pi$
about $280 m_e$, interact
strongly with nucleons and can decay into lighter mesons, i.e.
$\mu$-mesons and neutrinos, with a lifetime of the order of
$10^{-8}$ sec; $\pi$-mesons have integer spin
(very probably spin $0$) and obey Bose-Einstein statistics.
They are responsible for, at least, a part of nuclear forces.
In fact, the shape of nuclear potential
at a distance of the order of $\hbar/m_\pi c$ or larger could
be accounted for as due to
the exchange of $\pi$-mesons between nucleons.
(ii) The lighter mesons, i.e. $\mu$-mesons with the mass about $210 m_e$
are the main constituent of the hard component of cosmic rays
at sea level and can decay into electrons and neutrinos with the
lifetime $2\times 10^{-6}$ sec. They have
very probably spin ${1\over 2}$ and obey Fermi-Dirac statistics.
As they interact only
weakly with nucleons, they have nothing to do with nuclear forces.
Now, if we accept the view that $\pi$-mesons are the mesons that have been
anticipated from the beginning, then we may expect the existence of neutral
$\pi$-mesons in addition to charged $\pi$-mesons. Such neutral mesons,
which have integer spin and interact as strongly as charged
mesons with nucleons, must be very unstable, because each
of them can decay into two or three photons$^{18}$.
In particular, a neutral meson with spin $0$ can decay into two photons and
the lifetime is of the order of $10^{-14}$ sec or even less than that.
Very recently, it became clear that some of the experimental
results obtained in Berkeley could be accounted for consistently
by considering that, in addition to
charged $\pi$-mesons, neutral $\pi$-mesons
with the mass approximately equal to
that of charged $\pi$-mesons were created
by collisions of high-energy protons
with atomic nuclei and that each of these neutral
mesons decayed into two
photons with the lifetime of the order of
$10^{-13}$ sec or less$^{19}$.
Thus, the neutral mesons must have spin $0$.
In this way, meson theory has changed a great deal during these fifteen
years. Nevertheless, there remain still many questions unanswered. Among
other things, we know very little about mesons heavier than $\pi$-mesons.
We do not know yet whether some of the heavier mesons are responsible for
nuclear forces at very short distances. The present form of meson theory is
not free from the divergence difficulties, although recent development of
relativistic field theory has succeeded in removing some of them. We do not
yet know whether the remaining divergence
difficulties are due to our ignorance
of the structure of elementary particles themselves$^{20}$.
We shall probably
have to go through another change of the theory, before we shall be
able to arrive at the complete understanding
of the nuclear structure and of
various phenomena, which will occur in high energy regions.
1. E. Wigner, Phys. Rev., 43 (1933) 252.
2. W. Heisenberg, Z. Physik, 77 (1932) 1; 78 (1932) 156; 80 (1933) 587.
3. E. Fermi, Z. Physik, 88 (1934) 161.
4. I. Tamm, Nature, 133 (1934) 981; D. Ivanenko, Nature, 133 (1934) 981.
5. H. Yukawa, Proc. Phys.-Math. Soc. Japan, 17 (1935) 48;
H. Yukawa and S. Sakata, ibid., 19 (1937) 1084.
6. N. Kemmer, Proc. Roy. Soc. London, A 166 (1938) 127;
H. Fröhlich, W. Heitler, and N. Kemmer, ibid., 166 (1938) 154;
H. J. Bhabha, ibid., 166 (1938) 501;
E. C. G. Stueckelberg, Helv. Phys. Acta, 11 (1938) 299;
H. Yukawa, S. Sakata, and M. Taketani,
Proc. Phys.-Math. Soc. Japan, 20 (1938) 319;
H. Yukawa, S. Sakata, M. Kobayasi, and M. Taketani, ibid., 20 (1938) 720.
7. W. Rarita and J. Schwinger, Phys. Rev., 59 (1941) 436, 556.
8. C. D. Anderson and S. H. Neddermeyer, Phys. Rev., 51 (1937) 884;
J. C. Street and E. C. Stevenson, ibid., 51 (1937) 1005;
Y. Nishina, M. Takeuchi, and T. Ichimiya, ibid., 52 (1937) 1193.
9. H. Yukawa, Proc. Phys.-Math. Soc. Japan, 19 (1937) 712;
J. R. Oppenheimer and R. Serber, Phys. Rev., 51 (1937) 1113;
E. C. G. Stueckelberg, ibid., 53 (1937) 41.
10. H. J. Bhabha, Nature, 141 (1938) 117.
11. H. Euler and W. Heisenberg, Ergeb. Exakt. Naturw., 17 (1938) 1;
P. M. S. Blackett, Nature, 142 (1938) 992;
B. Rossi, Nature, 142 (1938) 993;
P. Ehrenfest, Jr. and A. Freon, Compt. Rend., 207 (1938) 853;
E. J. Williams and G. E. Roberts, Nature, 145 (1940) 102.
12. Y. Tanikawa, Progr. Theoret. Phys. Kyoto, 2 (1947) 220;
S. Sakata and K. Inouye, ibid., 1 (1946) 143.
13. M. Conversi, E. Pancini, and O. Piccioni, Phys. Rev., 71 (1947) 209.
14. C. M. G. Lattes, H. Muirhead, G. P. S. Occhialini,
and C. F. Powell, Nature, 159 (1947) 694; C. M. G. Lattes,
G. P. S. Occhialini, and C. F. Powell, Nature, 160 (1947) 453, 486.
15. R. E. Marshak and H. A. Bethe, Phys. Rev., 72 (1947) 506.
16. E. Gardner and C. M. G. Lattes, Science, 107 (1948) 270;
W. H. Barkas, E. Gardner, and C. M. G. Lattes, Phys. Rev., 74 (1948) 1558.
17. As for further details, see H. Yukawa, Rev. Mod. Phys., 21 (1949) 474.
18. S. Sakata and Y. Tanikawa, Phys. Rev., 57 (1940) 548;
R. J. Finkelstein, ibid., 72 (1947) 415.
19. H. F. York, B. J. Moyer, and R. Bjorklund, Phys. Rev., 76 (1949) 187.
20. H. Yukawa, Phys. Rev., 77 (1950) 219."
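Yukawa's mass estimate in assumption II above can be reproduced with a few lines of arithmetic. The Python sketch below (using modern SI values for $\hbar$, $c$, and $m_e$, which differ slightly from the 1949 figures) computes $\mu=\kappa\hbar/c$ for $\kappa=5\times 10^{12}\ {\rm cm}^{-1}$ and recovers a mass of roughly $200\,m_e$, together with the force range $1/\kappa$.

```python
import math

# Physical constants (modern SI values; illustrative only)
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m_e = 9.1093837015e-31   # kg, electron rest mass

kappa = 5.0e14           # 1/m, i.e. Yukawa's 5 x 10^12 cm^-1

mu = kappa * hbar / c    # meson mass mu = kappa * hbar / c
ratio = mu / m_e

print(f"range of the force: {1/kappa:.2e} m")      # ~ 2e-15 m
print(f"meson mass: {ratio:.0f} electron masses")  # ~ 193, i.e. ~ 200 m_e

# consistency with Yukawa's estimate mu ~ 200 m_e
assert 150 < ratio < 250
```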
\subsection{Zeeman and Schr\"odinger operators of electrons.}
In physics, the classical Zeeman operator of a charged particle
is:
\begin{equation}
\label{land}
-{\hbar^2\over 2m}\Delta_{(x,y)} -
{\hbar eB\over 2m c\mathbf i}
D_z\bullet
+{e^2B^2\over 8m c^2}(x^2+y^2) +eV .
\end{equation}
Originally, this operator is considered
on the 3-space, expressed in terms of the
3D Euclidean Laplacian and 3D magnetic dipole momentum operators.
The latter are the first operators
in the history of physics in which a preliminary version of the spin
concept appears. This is the so-called exterior or orbiting spin
associated with the 3D angular momentum
\begin{eqnarray}
\label{3Dang_mom}
\mathbf P=(P_1,P_2,P_3)=\frac{1}{\hbar}Z\times\mathbf p,\quad\quad
{\rm where}\\
\label{3Dangmom}
P_1=\frac{1}{\hbar}(Z_2p_3-Z_3p_2)=\frac{1}{\mathbf i}
\big(Z_2\frac{\partial}{\partial Z_3}-
Z_3\frac{\partial}{\partial Z_2}\big),\quad\dots
\end{eqnarray}
Note that the above 2D operator keeps only
the component $P_3$ of this angular momentum.
The components of the complete 3D angular momentum
obey the commutation relations:
\begin{equation}
[P_1,P_2]=\mathbf iP_3,\quad [P_1,P_3]=-\mathbf iP_2,\quad
[P_2,P_3]=\mathbf iP_1.
\end{equation}
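These commutation relations can be verified directly from the differential-operator form of the $P_j$. The sympy sketch below (illustrative only) applies both sides of each relation to a generic smooth function.

```python
import sympy as sp

Z1, Z2, Z3 = sp.symbols('Z1 Z2 Z3', real=True)
f = sp.Function('f')(Z1, Z2, Z3)  # a generic smooth test function

# angular momentum components P_j = (1/i)(Z_k d/dZ_l - Z_l d/dZ_k), cyclic
def P1(g): return (Z2 * sp.diff(g, Z3) - Z3 * sp.diff(g, Z2)) / sp.I
def P2(g): return (Z3 * sp.diff(g, Z1) - Z1 * sp.diff(g, Z3)) / sp.I
def P3(g): return (Z1 * sp.diff(g, Z2) - Z2 * sp.diff(g, Z1)) / sp.I

# [P1,P2] = i P3, [P1,P3] = -i P2, [P2,P3] = i P1, applied to f
assert sp.simplify(sp.expand(P1(P2(f)) - P2(P1(f)) - sp.I * P3(f))) == 0
assert sp.simplify(sp.expand(P1(P3(f)) - P3(P1(f)) + sp.I * P2(f))) == 0
assert sp.simplify(sp.expand(P2(P3(f)) - P3(P2(f)) - sp.I * P1(f))) == 0
print("commutation relations verified")
```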
In the mathematical models, the particles orbit in complex
planes determined by the complex structures associated with the component
angular momenta $P_j$; thus the 2D version plays a more important role
in this paper than the 3D version.
The 2D operator is obtained from the 3D operator
by omitting $P_1$ and $P_2$ and by restricting the remaining
part to the $(x,y)$-plane.
If the Coulomb potential $V$ is omitted, it is nothing but
the {\it Ginsburg-Landau-Zeeman operator} of a charged
particle orbiting in the $(x,y)$-plane in a constant magnetic field
directed along the $z$-axis.
The {\it magnetic dipole momentum operator}, which is the term involving
$D_z\bullet :=x\partial_y-y\partial_x$ and which is associated with the
{\it angular momentum operator} $\hbar D_z$,
commutes with the
remaining part, $\mathbf O$, of the operator, thereby splitting the
spectral lines of $\mathbf O$. The {\it Zeeman effect} is explained
by this fine structure of the Zeeman operator.
The Hamilton operator represents the total energy of a given physical
system. More precisely, the eigenvalues of this operator are the discrete
(quantized) energy values which can be assumed by the system. Thus,
correspondence (\ref{correl}) implies Schr\"odinger's wave equation
\begin{equation}
\label{schrod}
\big(-{\hbar^2\over 2m}\Delta_{(x,y)} -
{\hbar eB\over 2m c\mathbf i}
D_z\bullet
+{e^2B^2\over 8m c^2}(x^2+y^2) +eV\big)\psi =\mathbf i\hbar
\frac{\partial \psi}{\partial t}
\end{equation}
of an electron orbiting in the $(x,y)$-plane.
As is well known, Schr\"odinger first discovered the
relativistic equation, which is a second-order differential operator
in the $t$-variable. By the time Schr\"odinger came to
publish this equation, it had already been independently
rediscovered by O. Klein and W. Gordon. This is why it is usually
called the Klein-Gordon equation. Numerous problems arose regarding
this equation. Schr\"odinger became discouraged because it gave the
wrong fine structure for hydrogen. Some months later he realized, however,
that the non-relativistic approximation to his relativistic equation
was of value even if the relativistic equation was incorrect. This
non-relativistic approximation is the familiar Schr\"odinger equation.
Dirac also had great concerns about the Klein-Gordon equation.
His main objection
was that the probabilistic quantum theory based on this equation produced
negative probabilities. Actually, the elimination of this problem led
Dirac to the discovery of his relativistic electron equation. By this
theory, however, a proper probabilistic theory can be developed only
on the relativistic space-time. This feature was strongly
criticized by
Pauli, according to whom such a theory makes sense only on the space.
The journey toward an understanding of the nature of spin and its
relationship to statistics
has been taking place on one of the most difficult and exciting routes
\cite{b,tom}.
Although the Schr\"odinger wave equation
gives excellent agreement with experiment in predicting the frequencies
of spectral lines, small discrepancies are found, which can be explained
only by attributing to the electron, in addition to its usual orbital
angular momentum, an intrinsic angular momentum that acts as if it came
from a spinning solid body. The pioneers in developing this concept
were Sommerfeld, Land\'e, and Pauli. They found that agreement
with the Stern-Gerlach experiment
proving the existence of Zeeman effect
can be obtained by assuming that the
magnitude of this additional angular momentum was $\hbar /2$. The
magnetic moment needed to obtain agreement was, however, $e\hbar /2mc$,
which is exactly the same as that arising from an orbital angular moment
of $\hbar$. The gyromagnetic ratio, that is, the ratio of magnetic moment
to angular momentum is therefore twice as great for electron spin as it is
for orbital motion.
Many efforts were made to connect this intrinsic angular momentum to an
actual spin of the electron, considering it as a rigid body. In fact,
the gyromagnetic ratio needed is exactly that which would be obtained
if the electron consisted of a uniform spherical shell spinning about
a definite axis. The systematic
development of such a theory met, however,
with such great difficulties that no one was able to carry it through
to a definite conclusion. Somewhat later, Dirac derived his above
mentioned relativistic
wave equation for the electron, in which the spin and charge were shown
to be bound up in a way that can be understood only in connection
with the requirements of relativistic invariance. In the non-relativistic
limit, however, the electron still acts as if it had an intrinsic angular
momentum of $\hbar /2$. Prior to the Dirac equation, this non-relativistic
theory of spin was originally developed by Pauli.
Finally, we explain why the Coulomb operator does not appear in
this paper and how it can be incorporated into further investigations.
Its present absence is mainly due to the fact
that the Hamilton operator (Laplacian) on nilpotent
groups involves no Coulomb potential. There appear, instead,
nuclear potentials like those Yukawa described in meson theory.
Another major distinguishing feature is that this Laplacian
(Hamilton operator) also includes terms corresponding
to the electron-positron-neutrino, which, by the standard model, is always
something of a silent partner in an electron-positron system, because,
being electrically neutral, it ignores not only the nuclear force but
also the electromagnetic force. Although the operator corresponding
to this silent partner will be established
by computations evolved by Pauli to determine the non-relativistic
approximation, all these operators appear in the new theory
as relativistic operators complying with Einstein's equation of general
relativity.
The Coulomb force can be considered only later, after developing certain
explicit spectral computations. This spectral theory
also includes a spectral decomposition of the corresponding $L^2$-function
spaces such that the subspaces appearing in this decomposition are
invariant with respect to the actions of both the Hamilton operator and
the complex Heisenberg group representation. The latter
representation is also naturally inbuilt into these mathematical models.
The complications about the Coulomb operator
are due to the fact that these invariant subspaces
(also called zones) are not
invariant with respect to Coulomb's multiplicative operator
\cite{sz5}--\cite{sz7}.
In order to extend the theory also to electric fields, the Coulomb
operator must be modified so that it, too, leaves the zones
invariant. Such a natural zonal Coulomb operator can be defined for a
particular zone such that,
for a given function $\psi$ from the zone, the function $V\psi$ is
projected back to the zone. This modified Coulomb operator is the correct
one which must be added to the Laplace operator on the nilpotent Lie group
in order to obtain a relevant unified electromagnetic particle theory.
However, this modified Coulomb force is externally
added and not naturally inbuilt into the Laplacian (Hamiltonian) of
the nilpotent Lie group. In order to construct Riemann manifolds
whose Laplacian unifies the Ginsburg-Landau-Zeeman, neutrino, and
nuclear operators also
with an appropriate Coulomb operator, the mathematical models must be
further developed such that,
instead of nilpotent Lie groups, one considers general nilpotent-type
Riemann manifolds and their solvable-type extensions. This generalization
of the theory,
which is similar to passing from special relativity to general
relativity, will be the third step in developing this theory.
\section{Launching the mathematical particle theory.}
The physical features of elementary particles
are most conspicuously exhibited also by certain Riemann manifolds.
A demonstration of this apparent physical content present
in these abstract mathematical structures is, for instance,
that the classical Hamilton and Schr\"odinger operators
of elementary particle systems appear as Laplace-Beltrami
operators defined on these manifolds. Thus, these
abstract structures are really deeply inbuilt into the very fabric
of the physical world and can also serve as fundamental tools for
building up a comprehensive unified quantum theory.
Yet, this physical content of these particular mathematical
structures has never been recognized in the
literature so far. This is probably due to the fact
that this new theory is not in a direct genetic relationship
with fundamental theories such as the Gell-Mann--Ne'eman
theory of quarks, by which the standard model of elementary particles
has been established. Neither is there a direct genetic connection to
the intensively studied superstring
theory, which, by many experts, is thought to be
the first viable candidate ever for a
unified quantum field theory of all of the elementary particles and their
interactions which had been provisionally described by the standard model.
Actually, the relationship between the two
approaches to elementary particle physics is more precisely described
by saying that there are both strong connections and
substantial differences between the standard and our new models.
In order to clearly explain the new features of
the new model, we start with a brief review of the standard model.
A review of string theory is, however, beyond the scope of this article.
\subsection{A rudimentary review of the standard model.}
The Gell-Mann$\sim$Ne'eman
theory \cite{gn} of quantum chromodynamics (QCD) grew, in the 60's,
out of Yang-Mills' \cite{ym}
non-Abelian gauge theory where the
gauge group was taken to be the $SU(2)$ group of isotopic spin
rotations, and the vector fields analogous to the photon field were
interpreted as the fields of strongly-interacting vector mesons of
isotopic spin unity. QCD is also a non-Abelian gauge theory
where, instead of $SU(2)$, the symmetry group is
$SU(3)$. This model adequately described the rapidly growing
number of elementary particles by grouping the known baryons and
mesons in various irreducible representations of this gauge group.
The most important difference between this and the new mathematical
model is that the Yang-Mills
theories do not introduce a (definite or indefinite) Riemann metric
with the help of which all other physical objects are defined. They
rather describe the interactions by Lagrange functions which contain more
than a dozen arbitrary constants, including those yielding the various
masses of the different kinds of particles. What makes this situation even
worse is that all these important numbers are incalculable in principle.
Under such circumstances the natural Hamilton and wave operators
of particles cannot emerge as Laplacians on Riemann manifolds.
That they do so emerge is the main attraction of the new models.
To speak more mathematically, the fundamental structures for
non-Abelian gauge theories are principal fibre bundles over
Minkowski space with compact non-Abelian structure group $SU(n)$ on which
a potential is defined by a connection $A$, with components $A_\mu$,
in the Lie algebra $su(n)$. The field is the curvature whose
components are
$F_{\mu\nu}=\partial_\mu A_\nu -\partial_\nu A_\mu +[A_\mu ,A_\nu ]$.
The most straightforward generalization of Maxwell's equations are
the Yang-Mills equations $dF=0$ and $d^*F=0$, where $d$ and $d^*$ are
covariant derivatives. Gauge theories possess an infinite-dimensional
symmetry group given by functions $g: M\to SU(n)$ and all physical
or geometric properties are gauge invariant.
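The non-Abelian character of the curvature can be illustrated numerically. The following Python sketch (an illustration only; the matrices and helper names are chosen for this example) verifies that the $su(2)$ generators $t_a=\sigma_a /2$ have non-vanishing commutators, so the term $[A_\mu ,A_\nu ]$ in $F$ does not vanish as it does in the Abelian Maxwell case:

```python
import numpy as np

# The su(2) generators t_a = sigma_a / 2 built from the Pauli matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
t = [s / 2 for s in (s1, s2, s3)]

def comm(a, b):
    return a @ b - b @ a

# [t_1, t_2] = i t_3: nonzero structure constants, so for Lie-algebra valued
# potentials the extra term [A_mu, A_nu] in F does not vanish, in contrast
# to the Abelian (Maxwell) case.
print(np.allclose(comm(t[0], t[1]), 1j * t[2]))  # True
```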
To specify a physical theory the usual procedure is to define a
Lagrangian. In QCD a Lagrangian has to be
chosen which is capable of portraying the elementary particles in
the following very rich complexity:
The neutron and
proton are composites made of quarks. There are quarks of six
types, or ``flavors'': the $u, c$, and $t$ quarks having charge $2/3$,
and the $d, s$, and $b$ quarks having charge $-1/3$ (these letters
stand for up, charm, top, down, strange,
and bottom). Quarks of each
flavor come in three ``colors'' which furnish the defining representation
$\mathbf 3$ of the $SU(3)$ gauge group.
Quarks have the remarkable property
of being permanently trapped inside ``white'' particles such
as neutrons, protons, and, more generally, baryons and mesons. Baryons resp.
mesons are color-neutral
bound states of three quarks resp. of quarks and antiquarks. The neutron resp.
the proton is a baryon consisting of one up- and two down- resp. one down- and
two up-quarks. Thus the neutron has no charge while, in the same units
in which the electron has an electric charge of $-1$, the
proton has a charge of $+1$. The total charge of a white particle
is always an integer number. Only the quarks confined inside of them
can have non-integer charges.
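This charge bookkeeping is simple arithmetic; the following Python fragment (an illustrative sketch with hypothetical helper names) confirms that the listed quark charges give total charge $0$ for the neutron and $+1$ for the proton, and an integer for every three-quark bound state:

```python
from fractions import Fraction

# Quark charges in units of the elementary charge, as listed above.
CHARGE = {q: Fraction(2, 3) for q in "uct"}
CHARGE.update({q: Fraction(-1, 3) for q in "dsb"})

def total_charge(quarks):
    """Total electric charge of a bound state given its quark content."""
    return sum(CHARGE[q] for q in quarks)

neutron = total_charge("udd")   # one up, two down
proton  = total_charge("uud")   # two up, one down
print(neutron, proton)          # prints "0 1"
```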
QCD can be regarded as the
modern theory of strong nuclear forces holding the quarks together.
With no scalar fields,
the most general renormalizable Lagrangian describing also these strong
interactions can be put in the form \cite{w2}
\begin{equation}
\label{QCDLagrange}
\mathcal L=-\frac 1{4}F^{\alpha \mu\nu}{F^\alpha}_{\mu\nu}
-\sum_n\overline{\psi}_n[\gamma^\mu(\partial_\mu-
\mathbf igA^\alpha_\mu t_\alpha )+m_n]\psi_n,
\end{equation}
where $\psi_n(x)$ is a matter field, $g$ is the strong coupling constant,
$t_\alpha$ are a complete set of generators of color $SU(3)$ in the
$\mathbf 3$-representation (that is,
Hermitian traceless $3\times 3$ matrices
with rows and columns labelled by the three quark colors), normalized so
that $Tr(t_\alpha t_\beta )=\delta_{\alpha\beta}/2$, and the subscript
$n$ labels quark flavors, with quark color indices suppressed.
The first term is the gauge-field Lagrangian density, the second one is
the matter term.
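The normalization $Tr(t_\alpha t_\beta )=\delta_{\alpha\beta}/2$ can be checked numerically for the standard Gell-Mann basis of $su(3)$; the following sketch (the basis is the conventional one, supplied here for illustration) verifies it:

```python
import numpy as np

i = 1j
# The eight Gell-Mann matrices, a standard basis of traceless Hermitian
# 3x3 matrices; t_a = lambda_a / 2 generate color SU(3) in the 3-representation.
lam = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -i, 0], [i, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -i], [0, 0, 0], [i, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -i], [0, i, 0]],
]]
lam.append(np.diag([1, 1, -2]) / np.sqrt(3))
t = [m / 2 for m in lam]

# Normalization Tr(t_a t_b) = delta_ab / 2, as required in the Lagrangian.
gram = np.array([[np.trace(a @ b).real for b in t] for a in t])
print(np.allclose(gram, np.eye(8) / 2))  # True
```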
Just as the electromagnetic force between electrons is generated by
the virtual exchange of photons, so the quarks are bound to one another
by a force that comes from the exchange of other quanta, called gluons
because they glue the quarks together to make observable white objects.
The gluons are flavor blind, paying no attention to flavor; however,
they are very sensitive to color. They interact with color much as the
photon interacts with electric charge.
\subsection{More specifics about the new abstract model.}
The above sketchily described objects are the most fundamental
concepts in QCD.
Their properties are established by the Lagrangians introduced there.
In order to compare them, we review some more details
about the new theory. The concepts introduced here will rigorously
be established in the following sections.
The mathematical structures on which the new theory is built
are 2-step nilpotent Lie groups and their
solvable extensions. Both types of manifolds are endowed
with natural left invariant metrics. In this scheme, the
nilpotent group plays the role of space, on which always a
positive definite metric is considered. This choice is dictated also by
the fact that the Hamilton operators of elementary particle
systems emerge
as the Laplacians of these invariant Riemann metrics. In order to
ensure that these systems have positive energies,
only positive definite metrics can be chosen on these manifolds;
for indefinite metrics the Laplacian never
appears as the Hamilton operator of a particle system.
Time can be introduced by adding a
new dimension to these nilpotent manifolds.
This can be implemented either by a simple Cartesian product with the
real line $\mathbb R$, or
by the solvable extensions of nilpotent groups. Both
processes increase the dimension of the nilpotent groups
by $1$ and in both cases invariant indefinite metrics are
defined such that the time-lines intersect the nilpotent subgroup
perpendicularly, furthermore, also
$\langle\partial_t,\partial_t\rangle <0$ holds.
On these extended manifolds the Laplacian appears as the natural
wave operator (Schr\"odinger operator) attached to the particle
systems. The difference between the two constructions
is that the first one
provides a static model, while the second one is an
expanding model which yields the Hubble law of cosmology.
Although both are relativistic, these space-time concepts
are not quite the same as those developed in general relativity.
Actually, Einstein's 4D space-time concept
has no room for exhibiting the rich ``inner life'' of particles which
is attributed to them by meson theory, or, by the general
standard model of elementary particle physics. This ``inner life'' cannot
be explained by the properties of space-time alone.
For instance, the symmetries underlying the electroweak theory are
called internal symmetries, because one thinks of them as
having to do with the intrinsic nature of the particles, rather
than their position or motion.
The abstract mathematical models, however,
do make room for both the rich ``inner life''
and ``exterior life'' of particles.
The main tool for exhibiting
the inner physics is the center of the
nilpotent Lie algebra, while the stage for the ``exterior life''
(that is, for the motion
of particles) is the so-called X-space denoted by $\mathcal X$.
This space is a complement
of the center $\mathcal Z$, which is called also Z-space.
The space-like Z-space exhibits, actually,
dualistic features. The primary meaning of vectors lying in the center
is that they are the axes of angular momenta defined
for the charged
particles which are orbiting in complex planes in
constant magnetic
fields standing perpendicular to these complex planes. Actually, this
axis-interpretation of vectors, $Z$, is developed in
the following more subtle way: to any unit vector $Z$ there corresponds
a complex structure, $J_Z$, acting on the X-space, such that
the particles are orbiting in the complex planes defined by $J_Z$
along the integral curves of the vector fields defined by $X\to J_Z(X)$.
As it is pointed out in the next section,
this 2-step nilpotent Lie group is uniquely determined by the linear space,
$J_{\mathcal Z}$, of skew endomorphisms $J_Z$. Bijection $Z\to J_Z$
provides a natural identification between $\mathcal Z$ and
$J_{\mathcal Z}$.
More precisely, the group can be
considered as defined by a linear space of
skew angular momentum
endomorphisms acting on a Euclidean space, $\mathcal X$, where the Z-space
is considered, primarily,
as an abstract space $\mathcal Z$ which is identified with the
endomorphism space, $J_{\mathcal Z}$, by the natural bijection
$Z\to J_Z$.
Note that in this interpretation, the
axis, $Z$, of the angular momentum, $J_Z$, is separated from
the complex plane where the actual orbiting is taking place. Anyhow,
from this point of view,
the Z-vectors exhibit space-like features.
On the other hand,
the constant magnetic field defined by the structure pins
down a unique inertial system in which the relations
$B=\mathrm{constant}$ and $E=0$ hold. Thus
a naturally defined individualistic inner time is given
for each of these particles. This time can be synchronized, allowing
one to define also a common time, $T$, which defines the time
both on the static models and the solvable extensions.
From this point of view, the center exhibits time-like features.
This argument clarifies the contradiction between the
angular-momentum-axis- and the customary time-axis-interpretation
of the center of the Heisenberg groups.
Although the concepts of relativity and quantum theory
appear in new forms, they
should be considered as refined versions
of the original classical objects. Besides the above one, another example
for this claim is the new form by which
de Broglie's waves are introduced
on these groups. The most important new feature
is that the Fourier transform
is defined only on the center, $\mathcal Z=\mathbb R^l$,
by the following formula:
\begin{equation}
\label{newDiracwave}
\int_{\mathbb R^l}A(|X|,K)\prod z_i^{p_i}(K_u,X)\overline z_i^{q_i}(K_u,X)
e^{\mathbf i(\langle K,Z\rangle-\omega t)}dK,
\end{equation}
where, for a fixed complex basis $\mathbf B$, a complex coordinate system
$\{ z_i(K_u,X)\}$ on the X-space is defined with respect to the complex
structure $J_{K_u}$, for all unit Z-vectors $K_u$. This so-called
twisted Z-Fourier transform binds the Z-space and the X-space together
by the polynomials $\prod z_i^{p_i}\overline z_i^{q_i}$ which depend
both on the X- and K-variables. It appears also in several other
alternative forms. Due to this complexity of the wave functions,
the three main forces: the electromagnetic; the weak; and
the strong forces of particle theory can be introduced in a
unified way such that each of them
can be expressed in terms of the weak forces.
The main objects on these abstract structures are the Laplace operators
considered both on nilpotent and solvable groups.
They turn out to be the Hamilton resp. Schr\"odinger operators of the
particle-systems represented by these metric groups.
Due to the fact that these
operators are Laplace operators on Riemann manifolds,
the conservation of energy is
automatically satisfied. In classical quantum theory
the Hamilton function,
which accounts for the total energy of a system, is replaced
by the Hamilton operator whose discrete eigenvalues are the quantized
energy levels on which the system can exist.
On the relativistic mathematical models the total energy is encoded
into the Einstein tensor (stress-energy tensor) of the indefinite
Riemannian metrics. In the quantum theory developed on these manifolds,
this stress-energy tensor is replaced by the Laplacians of
these manifolds, which are, actually, the Hamilton resp.
Schr\"odinger operators of the
particle-systems represented by these models. In other words, this is a
correspondence principle associating the Laplacian resp. the
eigenfunction-equations to the Einstein tensor resp.
Einstein equation defined on these indefinite Riemann manifolds.
Let it be emphasized again that this theory will be evolved
gradually without adding any new objects to those defined mathematically
on these abstract structures.
The main focus will be to rediscover the most important
physical features which are known by the standard model.
Before starting this exploration,
we describe, yet, a more definite bond between the two models.
\subsection{Correspondence principle bridging the two models.}
A correspondence principle associating 2-step nilpotent groups
to $SU(n)$-mo\-dels can also be introduced.
It associates, to the Lagrange functions defined
on the $SU(n)$-models, 2-step metric nilpotent Lie groups.
The combination of this correspondence principle with the
above one associates the Laplacian of the metric group
to the Lagrange function defined on the Yang-Mills model.
This association explains why
the conclusions about the nature of
electromagnetic, strong, and weak
forces are so similar on the two models. This bridge
can be built up as follows.
As it is explained above,
the invariant Riemann metrics defined on 2-step nilpotent groups,
modelling the particle systems in the new
theory, can be defined for any linear space, $J_{\mathcal Z}$,
of skew endomorphisms acting on the X-space. Thus, for a
faithful representation, $\rho$,
of $su(n)\subset so(2n)$ in the Lie algebra of real orthogonal
transformations acting on a Euclidean space $\mathcal X$, one can define
a natural 2-step nilpotent metric group by the endomorphism space
$J_{\mathcal Z}=\rho (su(n))\subset so(\mathcal X)$ whose X-space
is $\mathcal X$ and the Z-space is the abstract linear space
$\mathcal Z=\rho (su(n))$.
Actually, this is the maximal 2-step nilpotent metric
group which can be corresponded to a Yang-Mills principal fibre bundle
having structure group $SU(n)$, and whose fibres consist of orthonormal
frames of $\mathcal X$ on which the action of $\rho (SU(n))$ is
simply transitive. Note that this group is still independent of
the Yang-Mills connection $A_\mu$ by which the Yang-Mills field
(curvature), $F_{\mu\nu}$, is defined.
By the holonomy group
$Hol_p(A)\subset\rho (SU(n))_p$,
defined at a fixed
point $p$ of the Minkowski space,
groups depending on
Yang-Mills fields can also be introduced.
Even gauge-dependent groups can be
constructed, by sections $\sigma :M\to\tilde M$, where
$\tilde M$ denotes the total space of the fibre bundle
and $\sigma (p)$ is lying in the fibre
over the point $p$. In this case the endomorphism space is
spanned by the skew
endomorphisms $A_\mu (\sigma (p))$ considered for all $p\in M$ and indices
$\mu$. This construction depends on sections $\sigma (p)$. Since the
gauge group is transitive on the set of these sections, this
correspondence is not gauge invariant.
These correspondences
associate 2-step nilpotent metric groups also to the
representations of the particular Lie algebras
$su(2)\subset so(4)$ resp. $su(3)\subset so(6)$ by which
the Yang-Mills- resp. Gell-Mann$\sim$Ne'eman-models are introduced.
This association does not mean, however, the equivalence of the two
theories. The X-space (exterior world) of the associated group is
the Euclidean space $\mathcal X$ where the skew endomorphisms
from $\rho (su(n))$ are acting. The Z-space is the abstract
space $\mathcal Z=\rho (\mathit g)$ where $\mathit g$ can be any
of the subspaces of $su(n)$ which were introduced above.
In other words, for a Yang-Mills model, the X-space $\mathcal X$ is already
given and the angular momentum
endomorphism space is picked from $\rho (su(n))$ in order to define
the Z-spaces of the corresponded nilpotent groups. The space-time,
including both the exterior and interior worlds, is defined
by these spaces. Note that this construction completely ignores the
Minkowski space which is the base-space for the Yang-Mills principal
fibre bundle. These arguments also show,
that the nilpotent groups corresponded to a fixed
Yang-Mills model are not uniquely determined; they depend on the Z-space
chosen on the YM-model.
Whereas, on a YM-model, the exterior world is the Minkowski space over
which the principal fibre bundle is defined and the interior world is
defined there by $\rho (su(n))$. Another fundamental difference is
that YM-models are based on submersion-theory, which is not the
case with the new models. For instance, the Hamilton operators of
particles never appear as sub-Laplacians defined on the X-space.
Quite to the contrary, these Hamilton operators are acting
on the total $(X,Z)$-space, binding the exterior and interior worlds
together into an unbroken unity not characteristic of submersions.
These are the most important roots explaining
both the differences and similarities between the two models.
These arguments also show
that the new theory deals with much more general situations
than those considered in Yang-Mills' resp. Gell-Mann$\sim$Ne'e\-man's
$SU(2)$- resp. $SU(3)$-theories. It goes far beyond the
$SU(n)\times\dots\times SU(n)$-theories.
Since the Laplacians are natural Hamilton operators of
elementary particles also in these
most general situations, there is no reason to deny their involvement
in particle theory.
Differences arise also regarding the Maxwell theory of electromagnetism.
The $SU(n)$-theories are non-Abelian gauge theories where
the field is the curvature, $F$, of a Yang-Mills connection (potential)
and the properties are gauge invariant regarding an infinite dimensional
gauge group whose Lie algebra consists of $su(n)$-valued 1-forms,
$\omega$, satisfying $d\omega =F$. The
customary reference to this phenomenon
is that only $F$ is the physical object and the 1-forms $\omega$ do not have
any physical significance; they are the results of mere mathematical
constructions. This interpretation strongly contrasts with the Aharonov-Bohm
theory where these vector potentials do have physical meanings by which
the effect bearing their names can be established \cite{ab}, \cite{t}.
Whereas, on the nilpotent groups and
their solvable extensions
the fundamental fields are the natural invariant Riemann metrics, $g$,
by which all the other
objects are defined. The classical Hamilton resp. Schr\"odinger operators
emerge as Laplacians of metrics $g$.
Curvature $F$ corresponds to the field
$g(J_Z(X),Y)$ in this interpretation.
Since the basic objects are not invariant under their actions,
the gauge-symmetries are not involved in these investigations. One-form
$\omega_Z(Y)=(1/2)g(J_Z(X),Y)$, defined over the points $X\in\mathcal X$,
is the only element of the Lie algebra of the gauge-symmetry group
which is admitted to these considerations. It defines the constant
magnetic field and vanishing electric field which are
associated with the orbiting spin and inner time $T$.
This interpretation
shows that this model breaks off, at some point,
from the Maxwell theory whose greatest achievement was that it unified
the theories of magnetism and electricity, which had been handled
separately until 1865. The Yang-Mills gauge theory is a generalization of this
unified theory to vector valued fields and potentials. The nilpotent
Lie group model approaches the electromagnetic
phenomena from a different
angle. Since the potentials do have significance there,
it stands closer to the Aharonov-Bohm theory
than to the Maxwell-Yang-Mills gauge theory. Actually the full recovery
of electromagnetism under these new
circumstances will not be provided in this
paper. Note that only the magnetic field has appeared so far,
which is not associated with a non-trivial electric field. In other
words, the magnetism and electricity emerge as being separated
in this paper.
The reunion of this temporarily separated couple can be established just
at a later point after further developing this new theory.
This bridge explains the great number
of properties which manifest similarly on these two models; however,
turning from one model to the other one is not simple and the
complete exploration of overlapping phenomena requires further extended
investigations.
In this paper the first-order task is to firmly
establish the point about the new model.
Therefore, when several properties are named by the same name
used in QCD, their definitions strictly refer to our
setting. Nevertheless, these deliberately chosen names indicate
the similarities between the two theories.
\section{Two-step nilpotent Lie groups.}
\subsection{Definitions and interpretations.}
A 2-step nilpotent metric Lie algebra,
$\{\mathcal N ,\langle ,\rangle\}$, is defined on a real vector space
endowed with a positive definite inner product.
The name indicates that the
center, $\mathcal Z$, can be reached
by a single application of the Lie bracket, thus its second application
always results in zero.
The orthogonal complement of the center is denoted by $\mathcal X$.
Then the Lie bracket operates among these subspaces according to
the following formulas:
\begin{equation}
[\mathcal N,\mathcal N]=\mathcal Z\quad ,\quad
[\mathcal N,\mathcal Z]=0\quad ,\quad
\mathcal N=\mathcal X\oplus\mathcal Z=\mathbb R^{k}\times\mathbb R^l.
\end{equation}
Spaces $\mathcal Z$ and $\mathcal X$ are called also Z- and
X-space, respectively.
Up to isometric isomorphisms, such a Lie algebra is uniquely
determined by the linear space, $J_{\mathcal Z}$, of skew endomorphisms
$J_Z:\mathcal X\to\mathcal X$ defined for any
$Z\in\mathcal Z$ by the formula
\begin{equation}
\label{brack}
\langle [X,Y],Z\rangle =\langle J_Z(X),Y\rangle ,
\forall Z\in\mathcal Z.
\end{equation}
This statement means that for an orthogonal direct sum,
$\mathcal N=\mathcal X\oplus\mathcal Z=\mathbb R^{k}\times\mathbb R^l$,
of Euclidean spaces a non-degenerate linear map,
$\mathbb J:\mathcal Z\to SE(\mathcal X)\, ,\, Z\to J_Z$,
from the Z-space into the space of skew endomorphisms acting on the
X-space, defines a 2-step nilpotent metric Lie algebra on $\mathcal N$ by
(\ref{brack}). Furthermore, another non-degenerate linear map
$\tilde{\mathbb J}$ having the same range
$\tilde J_{\mathcal Z}=J_{\mathcal Z}$ as
$\mathbb J$ defines an isometrically isomorphic Lie algebra.
By means of the exponential map, the group itself can also
be considered as defined on $\mathcal N$.
That is, a point is denoted by $(X,Z)$ on
the group as well. Then, the group multiplication is given
by the formula
$(X,Z) (X^\prime ,Z^\prime)=(X+X^\prime ,Z+Z^\prime
+{1\over 2}[X,X^\prime ])$.
Metric tensor, $g$, is defined by the left invariant extension of
$\langle ,\rangle$ onto the group $N$.
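The group law above can be checked numerically in the simplest case. The following Python sketch (an illustration, assuming the 3-dimensional Heisenberg group with the standard complex structure as $J$) verifies that the multiplication is associative:

```python
import numpy as np

# A minimal sketch of the 3-dimensional Heisenberg group: X-space = R^2,
# Z-space = R, J the standard complex structure, and [X, X'] = <J X, X'>
# (a scalar, since the center is 1-dimensional here).
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def bracket(X, Xp):
    return float((J @ X) @ Xp)

def mul(p, q):
    """Group law (X,Z)(X',Z') = (X+X', Z+Z' + (1/2)[X,X'])."""
    (X, z), (Xp, zp) = p, q
    return (X + Xp, z + zp + 0.5 * bracket(X, Xp))

rng = np.random.default_rng(1)
a, b, c = [(rng.standard_normal(2), float(rng.standard_normal())) for _ in range(3)]
lhs, rhs = mul(mul(a, b), c), mul(a, mul(b, c))
print(np.allclose(lhs[0], rhs[0]) and abs(lhs[1] - rhs[1]) < 1e-12)  # True
```

Associativity holds exactly here because the bracket terms produced by the two groupings agree by the antisymmetry of $\langle JX,X^\prime\rangle$.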
Endomorphisms $J_Z(.)$ will be associated with angular momenta. It must
be pointed out, however, that there is
a major conceptual difference between the classical
3D angular momentum, introduced in (\ref{3Dangmom}), and this new sort
of angular momentum.
For a fixed axis $Z\in \mathbb R^3$
the endomorphism associated with the classical 3D angular momentum
is defined with the help of the cross product $\times$ by
the formula $J_Z: X\to Z\times X$. That is,
axis $Z$ is lying in the same
space, $\mathbb R^3$, where the endomorphism itself is acting.
Linear map $\mathbb J:\mathcal Z\to SE(\mathcal X)$ on a 2-step nilpotent
Lie group, however, separates
axis $Z\in\mathcal Z$ from the X-space where the
endomorphism $J_Z(.)$ is acting.
In other words, the latter endomorphism defines just orbiting of
a position vector $X$ in the plane spanned
by $X$ and $J_Z(X)$, but the axis of
orbiting is not in the X- but in the Z-space.
In this respect, the Z-space
is the abstract space of the axes associated with the angular momentum
endomorphisms. According to this interpretation,
for a fixed axis $Z$ in the Z-space, a particle occupies a
complex plane in the complex space
defined by the complex structure $J_Z$ on the X-space.
The abstract axis, $Z$, is considered
as an ``inner dial'' associated with the particles, which is represented
separately in the Z-space. This Z-space contributes new dimensions to
the X-space; it is considered as the inner-world supplementing the
exterior-world in order to have a natural stage for describing the inner
physics of elementary particles.
The above definition of 2-step nilpotent Lie groups
by their endomorphism spaces $J_{\mathcal Z}$ shows the large variety
of these groups. For instance, if $\mathcal Z$ is an $l$-dimensional
Lie algebra of a compact group and
$\mathbb J:\mathcal Z\to so(k)$ (which assigns $J_Z\in so(k)$
to $Z\in \mathcal Z$) is its representation
in a real orthogonal Lie algebra (that is, in the Lie algebra of
skew-symmetric matrices) defined for $\mathcal X=\mathbb R^k$,
then the system
$\{\mathcal N=\mathcal X\oplus \mathcal Z, J_{\mathcal Z}\}$
defined by orthogonal direct sum determines a unique 2-step nilpotent
metric Lie algebra where the inner product on $\mathcal Z$ is
defined by $\langle Z,V\rangle =-Tr(J_{Z}\circ J_V)$.
Thus,
any faithful representation $\mathbb J:\mathcal Z\to so(k)$
determines a unique
two-step nilpotent metric Lie algebra. Since Lie algebras
$su(k/2)\subset so(k)$ used in non-Abelian gauge theories
are of
compact type, therefore, to any of their representations in orthogonal
Lie algebras, one can associate a natural two-step nilpotent metric
Lie algebra. This association is the natural bridge between a non-Abelian
$SU(n)$-theory and the new theory developed in this paper.
The 2-step nilpotent Lie groups form even a much
larger class than those constructed above
by orthogonal Lie algebra representations. In fact, in the most general
situation, endomorphism space $J_{\mathcal Z}$ is just
a linear space defined by the range of a non-degenerate linear map
$\mathbb J:\mathcal Z\to SE(\mathcal X)$
which may not bear any kind of Lie algebra structure.
However, such general groups
can be embedded into those constructed by the
compact Lie algebras, $J_{\tilde{\mathcal Z}}$. The smallest such
Lie algebra is generated by the endomorphism space
$J_{\mathcal Z}$ by the Lie brackets.
Very important particular 2-step nilpotent Lie groups are the
Heisenberg-type groups, introduced by Kaplan \cite{k}, which are
defined by endomorphism spaces $J_{\mathcal Z}$
satisfying the Clifford condition $J^2_Z=-\mathbf z^2id$, where
$\mathbf z=|Z|$ denotes the length of the corresponding vector.
These groups are attached to Clifford modules
(representations of Clifford algebras).
The well known classification
of these modules provides
classification also for the Heisenberg-type groups.
According to this classification,
the X-space and the endomorphisms
appear in the following form:
\begin{equation}
\mathcal X=
(\mathbb R^{r(l)})^a\times (\mathbb{R}^{r(l)})^b\, ,\,
J_{Z} =
(j_{Z} \times\dots\times j_Z) \times (-j_Z\times\dots\times -j_Z),
\end{equation}
where $l=dim(\mathcal Z)$ and the endomorphisms $j_Z$ act on the
corresponding component,
$\mathbb R^{r(l)}$, of this Cartesian product. The
groups and the corresponding natural metrics are denoted by
$H^{(a,b)}_l$ and $g^{(a,b)}_l$ respectively.
Particularly important examples are the
H-type groups $H^{(a,b)}_3$, where the
3-dimensional Z-space, $\mathbb R^3$,
is considered as the space of imaginary
quaternions, furthermore, the action of $j_Z$ on the space
$\mathbb R^{r(3)}=\mathbb H=\mathbb R^4$
of quaternionic numbers is defined by left multiplication with $Z$.
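This quaternionic construction can be made concrete with a small numerical sketch (illustrative only; the matrix below is the standard left-multiplication matrix in the basis $(1,i,j,k)$), which confirms that $j_Z$ is skew and satisfies the Clifford condition $J^2_Z=-\mathbf z^2id$:

```python
import numpy as np

def left_mult(x, y, z):
    """Matrix of left multiplication by Z = x i + y j + z k on the
    quaternions H = R^4 with basis (1, i, j, k)."""
    return np.array([[0, -x, -y, -z],
                     [x,  0, -z,  y],
                     [y,  z,  0, -x],
                     [z, -y,  x,  0]], dtype=float)

Z = (0.3, -1.2, 2.0)
jZ = left_mult(*Z)
norm2 = sum(c * c for c in Z)

# j_Z is skew-symmetric and satisfies the Clifford (H-type) condition
# J_Z^2 = -|Z|^2 id, since Z^2 = -|Z|^2 for an imaginary quaternion.
print(np.allclose(jZ, -jZ.T), np.allclose(jZ @ jZ, -norm2 * np.eye(4)))  # True True
```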
A brief account of the classification of Heisenberg
type groups is as follows.
If $l=dim(J_{\mathcal Z})\not =3\mod 4$, then, up to equivalence,
there exists exactly one
irreducible H-type endomorphism space acting
on a Euclidean space $\mathbb R^{n_l}$,
where the dimensions $n_l$, which depend just on $l$,
are described below. These endomorphism spaces
are denoted by $J_l^{(1)}$. If $l=3\mod 4$, then, up to equivalence,
there exist exactly
two non-equivalent irreducible H-type endomorphism spaces acting on
$\mathbb R^{n_l}$. They are denoted by
$J_l^{(1,0)}$ and
$J_l^{(0,1)}$
respectively. They relate to each other by the relation
$J_l^{(1,0)}\simeq -J_l^{(0,1)}$.
The values $n_l$ corresponding to
$
l=8p,8p+1,\dots ,8p+7
$
are
\begin{eqnarray}
n_l=2^{4p}\, ,\, 2^{4p+1}\, , \, 2^{4p+2}\, , \,
2^{4p+2}\, ,
\, 2^{4p+3}\, ,\, 2^{4p+3}\, , \, 2^{4p+3}\, , \,
2^{4p+3}.
\label{cliff}
\end{eqnarray}
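The dimension pattern (\ref{cliff}) can be packaged in a short function; the following Python sketch (the function name is chosen for this illustration) reproduces the table:

```python
def n(l):
    """Dimension n_l of the irreducible H-type module for an l-dimensional
    Z-space, following the table above (write l = 8p + r)."""
    p, r = divmod(l, 8)
    return 2 ** (4 * p + (0, 1, 2, 2, 3, 3, 3, 3)[r])

# l = 3 recovers the quaternionic module R^4, l = 7 the octonionic R^8,
# and the dimensions repeat with the periodicity factor 2^4 = 16.
print([n(l) for l in range(1, 9)])  # [2, 4, 4, 8, 8, 8, 8, 16]
```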
The reducible Clifford endomorphism spaces can be built up by these
irreducible ones. They are denoted by
$J_l^{(a)}$ resp. $J_l^{(a,b)}$.
The corresponding Lie algebras are denoted by
$\mathcal H^{(a)}_r$ and
$\mathcal H^{(a,b)}_l$ respectively,
which define the groups
$H^{(a)}_r$ resp.
$H^{(a,b)}_l$.
In the latter case, the X-space
is defined by the $(a+b)$-times product
$\mathbb R^{n_l}\times\dots\times\mathbb R^{n_l}$
such that, on the last $b$ components, the action of a $J_Z$ is defined by
$J_Z^{(0,1)}\simeq
-J_Z^{(1,0)}$,
and, on the first $a$ components, the action is defined by
$J^{(1,0)}_Z$. In the first case this process should be applied only on
the corresponding $a$-times product.
One of the fundamental statements in this theory is that, in case of
$l=3\mod 4$, two groups $H^{(a,b)}_l$ and $H^{(a^\prime ,b^\prime )}_l$
are isometrically isomorphic if and only if $(a,b)=(a^\prime ,b^\prime )$
up to an order. By a general statement, two metric
2-step nilpotent Lie groups with Lie algebras
$\mathcal N=\mathcal X\oplus\mathcal Z$ and
$\mathcal N^\prime =\mathcal X^\prime\oplus\mathcal Z^\prime$
are isometrically isomorphic if and only if there exist orthogonal
transformations $A :\mathcal X\to\mathcal X^\prime$ and
$B :\mathcal Z\to\mathcal Z^\prime$
such that $J_{B (Z)}=A\circ J_Z\circ A^{-1}$
holds, for all $Z\in\mathcal Z$. The isometric isomorphism between
$H^{(a,b)}_l$ and $H^{(b,a)}_l$ is defined by $A =id$ and
$B =-id$. If
$(a,b)\not =(a^\prime ,b^\prime )$ (up to an order) then the
corresponding groups are not isometrically isomorphic.
In order to unify the two cases, the notations
$J_l^{(1,0)}=J_l^{(1)}$ and
$J_l^{(0,1)}=-J_l^{(1)}$ are used also in cases
$l\not =3\mod 4$. One should keep in mind, however, that these
endomorphism spaces are equivalent, implying that
two groups $H^{(a,b)}_l$ and $H^{(a^\prime ,b^\prime )}_l$
defined by them
are isometrically isomorphic if and only if
$a+b=a^\prime +b^\prime$ holds.
H-type groups can be characterized as those particular 2-step metric
nilpotent Lie groups on which
the skew endomorphisms, $J_Z$, for any fixed $Z\in\mathcal Z$,
have the same eigenvalues $\pm \mathbf z\mathbf i$.
By polarization we have:
\begin{equation}
\frac{1}{2}(J_{Z_1}J_{Z_2}+ J_{Z_2}J_{Z_1})=-\langle Z_1,Z_2\rangle Id,
\end{equation}
which is called Dirac's anticommutation equation. It implies
that two endomorphisms, $J_{Z_1}$ and $J_{Z_2}$, with
perpendicular axes, $Z_1\perp Z_2$,
anticommute with each other. Endomorphism
spaces satisfying this weaker property define more general, so called
totally anticommutative 2-step nilpotent Lie groups on which the
endomorphisms can also have properly distinct eigenvalues.
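Dirac's anticommutation equation can be tested on the quaternionic example above; the following Python sketch (with randomly chosen, not necessarily perpendicular, axes) checks the full polarized identity:

```python
import numpy as np

def left_mult(x, y, z):
    # Left multiplication by the imaginary quaternion x i + y j + z k
    # on H = R^4 with basis (1, i, j, k).
    return np.array([[0, -x, -y, -z],
                     [x,  0, -z,  y],
                     [y,  z,  0, -x],
                     [z, -y,  x,  0]], dtype=float)

rng = np.random.default_rng(7)
Z1, Z2 = rng.standard_normal(3), rng.standard_normal(3)
J1, J2 = left_mult(*Z1), left_mult(*Z2)

# Polarizing the Clifford condition J_Z^2 = -|Z|^2 id gives Dirac's
# anticommutation equation; for perpendicular axes the right side is 0,
# so the two endomorphisms anticommute.
print(np.allclose(J1 @ J2 + J2 @ J1, -2 * float(Z1 @ Z2) * np.eye(4)))  # True
```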
The classification
of these more general groups is unknown in the literature.
Let it be mentioned, yet, that groups constructed above by
$su(2)$-repre\-sen\-tations
are exactly the groups $H^{(a,b)}_3$, while those constructed
by $su(3)$-rep\-re\-sen\-ta\-tions are
not even totally anticommutative spaces.
Thus, they are
not H-type groups either. It is also noteworthy, that the Cliffordian
endomorphism spaces $J_l^{(a,b)}$ do not form a Lie algebra in general.
In fact, only the endomorphism spaces defined for $l=1,3,7$ can
form a Lie algebra. In the first two cases, pair $(a,b)$ can be arbitrary,
while in case $l=7$ only cases $a=1,b=0$, or, $a=0,b=1$ yield Lie algebras.
It is an interesting question that which orthogonal Lie algebras can be
generated by Cliffordian endomorphism spaces $J_l{(a,b)}$.
Let it be mentioned, yet, that Lie algebra
$su(3)$ does not belong even to this category.
\subsection{Laplacian and curvature.}
Although most of the results of this paper extend to the most general
2-step metric nilpotent Lie groups, in what follows only H-type groups
will be considered.
On these groups, the Laplacians appear in the following form:
\begin{equation}
\label{Delta}
\Delta=\Delta_X+(1+\frac 1{4}\mathbf x^2)\Delta_Z
+\sum_{\alpha =1}^r\partial_\alpha D_\alpha \bullet,
\end{equation}
where $D_\alpha\bullet$ denotes directional derivatives along
the vector fields
$X\to J_\alpha (X)=J_{e_\alpha}(X)$, furthermore, $\mathbf x=|X|$
denotes the length of X-vectors.
This formula can be established by the following explicit formulas.
Consider orthonormal bases $\big\{E_1;\dots;E_k\big\}$ and
$\big\{e_1;\dots;e_l\big\}$ on the X- and Z-space respectively.
The coordinate systems defined by them are
denoted by
$\big\{x^1;\dots;x^k\big\}$ and $\big\{z^1;\dots;z^l\big\}$ respectively.
Vectors $E_i;e_{\alpha}$ extend into the left-invariant
vector fields
\begin{eqnarray}
\label{invar-vect}
\mathbf
X_i=\partial_i + \frac 1 {2} \sum_{\alpha =1}^l
\langle [X,E_i],e_{\alpha}\rangle \partial_\alpha =
\partial_i + \frac 1 {2} \sum_{{\alpha} =1}^l \langle
J_\alpha\big(X\big),E_i\rangle
\partial_\alpha
\end{eqnarray}
and $\mathbf Z_{\alpha}=\partial_\alpha$, respectively,
where $\partial_i=\partial /\partial x^i$, $\partial_\alpha
=\partial/\partial z^\alpha$
and $J_\alpha = J_{e_\alpha}$.
The covariant derivative acts on these
invariant vector fields according to the following formulas.
\begin{equation}
\label{nabla}
\nabla_XX^*=\frac 1 {2} [X,X^*]\quad ,
\quad\nabla_XZ=\nabla_ZX=-\frac 1 {2}
J_Z\big (X\big)\quad ,\quad\nabla_ZZ^*=0.
\end{equation}
The Laplacian, $\Delta$, acting on functions can
be established explicitly by substituting (\ref{invar-vect}) and
(\ref{nabla}) into
the following well-known formula
\begin{equation}
\label{inv-delta}
\Delta=\sum_{i=1}^k\big(\mathbf X_i^2-
\nabla_{\mathbf X_i}\mathbf X_i\big)
+\sum_{\alpha =1}^l\big (\mathbf Z_{\alpha}^2-\nabla_{\mathbf Z_{\alpha}}
\mathbf Z_{\alpha}\big ).
\end{equation}
These formulas also allow one to compute the Riemannian curvature, $R$, on
$N$ explicitly. Then we find:
\begin{eqnarray}
R(X,Y)X^*=\frac 1 {2} J_{[X,Y]}(X^*) -
\frac 1 {4} J_{[Y,X^*]}
(X) + \frac 1 {4} J_{[X,X^*]}(Y); \\
R(X,Y)Z=-\frac 1 {4} [X,J_Z(Y)]+
\frac 1 {4} [Y,J_Z(X)] ;
\quad R(Z_1,Z_2)Z_3=0; \\
R(X,Z)Y=-\frac 1 {4} [X,J_Z(Y)] ; \quad
R(X,Z)Z^*=-\frac 1 {4}
J_ZJ_{Z^*}(X); \\
R(Z,Z^*)X=-\frac 1 {4} J_{Z^*}J_Z(X) + \frac 1 {4}J_Z
J_{Z^*}(X),
\end{eqnarray}
where $X;X^*;Y \in \mathcal X$ and $Z;Z^*;Z_1;Z_2;Z_3 \in \mathcal Z$
are considered as the elements of the Lie algebra $\mathcal N$.
The components of this tensor field on coordinate systems
$\big\{x^1;\dots;x^k;z^1;\dots;z^l\big\}$
can be computed by formulas (\ref{invar-vect}).
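On the lowest-dimensional example, the 3-dimensional Heisenberg group ($k=2$, $l=1$), the curvature formulas can be checked by direct computation, since for left-invariant fields the covariant derivatives are given purely algebraically by (\ref{nabla}) and $R(A,B)=\nabla_A\nabla_B-\nabla_B\nabla_A-\nabla_{[A,B]}$. The following plain-Python sketch (an illustration, not part of the text; $J$ is realized as the rotation by $90$ degrees, and vectors are triples $(x_1,x_2,z)$) verifies two of the identities at random vectors:

```python
# Check two curvature identities on the 3-dimensional Heisenberg group
# (k = 2, l = 1).  Vectors are triples (x1, x2, z); the covariant
# derivative of left-invariant fields is given by the algebraic rules
# above, and R(A,B)C = nab_A nab_B C - nab_B nab_A C - nab_{[A,B]} C.
import random

def J(z, v):                       # J_{z e}(X) = z * (90-degree rotation)
    return (-z * v[1], z * v[0])

def lie(a, b):                     # [A, B]: only the X-parts contribute
    return (0.0, 0.0, -a[1]*b[0] + a[0]*b[1])

def nabla(a, b):
    zp = 0.5 * (-a[1]*b[0] + a[0]*b[1])          # (1/2)[X, X*]
    v1, v2 = J(b[2], a), J(a[2], b)              # J_{Z_b}(X_a), J_{Z_a}(X_b)
    return (-0.5*(v1[0] + v2[0]), -0.5*(v1[1] + v2[1]), zp)

def curv(a, b, c):
    t1 = nabla(a, nabla(b, c))
    t2 = nabla(b, nabla(a, c))
    t3 = nabla(lie(a, b), c)
    return tuple(t1[i] - t2[i] - t3[i] for i in range(3))

rnd = lambda: random.uniform(-1, 1)
X, Y, Xs = (rnd(), rnd(), 0.0), (rnd(), rnd(), 0.0), (rnd(), rnd(), 0.0)
Z, Zs = (0.0, 0.0, rnd()), (0.0, 0.0, rnd())

# R(X,Z)Z* = -(1/4) J_Z J_{Z*}(X)
w = J(Zs[2], X)
exp1 = (-0.25 * J(Z[2], w)[0], -0.25 * J(Z[2], w)[1], 0.0)
assert all(abs(g - e) < 1e-12 for g, e in zip(curv(X, Z, Zs), exp1))

# R(X,Y)X* = (1/2)J_{[X,Y]}(X*) - (1/4)J_{[Y,X*]}(X) + (1/4)J_{[X,X*]}(Y)
e1, e2, e3 = J(lie(X, Y)[2], Xs), J(lie(Y, Xs)[2], X), J(lie(X, Xs)[2], Y)
exp2 = (0.5*e1[0] - 0.25*e2[0] + 0.25*e3[0],
        0.5*e1[1] - 0.25*e2[1] + 0.25*e3[1], 0.0)
assert all(abs(g - e) < 1e-12 for g, e in zip(curv(X, Y, Xs), exp2))
```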
\section{Particles without interior.}
These Riemannian manifolds were used originally \cite{sz1}-\cite{sz4}
for isospectrality constructions
in two completely different situations. In the first
one, the Z-space is factorized by a Z-lattice, $\Gamma_Z$, defined on the
Z-space, a process which results in a torus bundle over the X-space. In the
second case, Z-ball resp. Z-sphere bundles are considered
by picking Z-balls resp. Z-spheres in the Z-space over the points of the
X-space. It turns out that, apart from a constant term,
the Laplacian on the
Z-torus bundles, called also Z-crystals,
is the same as the classical Ginsburg-Landau-Zeeman operator
of an electron-positron system
whose orbital angular momentum is expressed in terms of the endomorphisms
$J_{Z_\alpha}$ defined by the lattice points $Z_\alpha\in \Gamma_Z$.
More precisely, the Z-lattice defines a natural decomposition
$\sum_\alpha W_\alpha$ of the $L^2$-function space
such that the components $W_\alpha$ are invariant under the action of
the Laplacian, which, after restricting it onto a fixed $W_\alpha$,
appears as the Ginsburg-Landau-Zeeman operator whose
orbital angular momentum
is associated with the fixed endomorphism $J_{Z_\alpha}$.
It turns out, in the next chapters, that the constant
term corresponds to neutrinos (massless particles with no charge)
accompanying an electron-positron system. Thus,
altogether, the Laplacian appears as the Hamilton operator of a system
of electrons, positrons, and
electron-positron-neutrinos, which is formally the sum of
a Ginsburg-Landau-Zeeman operator and a constant term. The names
Ginsburg-Landau
indicate that no Coulomb potential or any other kind of electric force is
involved in this operator. Thus the forces which manifest themselves
in the eigenfunctions of the Laplacian
are not yet the complete electromagnetic
forces. However, when these forces are also introduced, the
eigenfunctions remain the same; the electric force contributes
only to the magnitude of the eigenvalues. For this reason,
the forces associated with these
models are called electromagnetic forces. Since the Z-lattices consist
of points and the intrinsic physics is exhibited on the Z-space, particles
represented by Z-crystals are considered as point-like particles
having no insides. The theory developed for them is most
strongly connected with quantum electrodynamics (QED).
In the second case both the Laplacian
and the angular
momentum operator appear in much more complex forms. Contrary
to the first case, the angular
momentum operator is not associated with a fixed Z-vector; rather, it
represents spinning about each Z-vector.
Besides the orbital spin, natural
inner operators also emerge which can be
associated both with weak and strong
nuclear forces. The particles represented by these models do have
an interior on which the intrinsic physics described by QCD is exhibited
on a full scale. An openly admitted purpose
of the next sections is to give a unified theory for the three forces:
(1) electromagnetic, (2) weak nuclear, and (3) strong nuclear forces.
However, some concepts, such as Dirac's spin
operator, will be introduced into this theory in a
subsequent paper.
\subsection{Z-crystals modelling Ginsburg-Landau-Zeeman operators.}
The Z-torus bundles are defined by a factorization,
$\Gamma\backslash H$, of the nilpotent group $H$ by a Z-lattice,
$\Gamma=\{Z_\gamma\}$, which is defined
only on $\mathcal Z$ and not on the whole $(X,Z)$-space.
Such a factorization defines a Z-torus bundle over the X-space.
The natural Z-Fourier decomposition,
$L^2_{\mathbb C}:=\sum_\gamma W_\gamma$, of the $L^2$ function space
belonging to this bundle is defined such that subspace
$W_\gamma$ is spanned by functions of the form
\begin{equation}
\label{diskFour}
\Psi_\gamma (X,Z)=\psi (X)e^{2\pi\mathbf i\langle Z_\gamma ,Z\rangle}.
\end{equation}
Each $W_\gamma$ is invariant under the action of $\Delta$, more precisely
we have:
\begin{eqnarray}
\Delta \Psi_{\gamma }(X,Z)=(\lhd_{\gamma}\psi )(X)
e^{2\pi\mathbf i\langle Z_\gamma ,Z\rangle},\quad
{\rm where}
\\
\lhd_\gamma
=\Delta_X + 2\pi\mathbf i D_{\gamma }\bullet
-4\pi^2\mathbf z_\gamma ^2(1 + \frac 1 {4} \mathbf x^2).
\end{eqnarray}
In terms of parameter $\mu =\pi \mathbf z_\gamma $,
this operator is written
in the form
$
\lhd_{\mu}=
\Delta_X +2 \mathbf i D_{\mu }\bullet -\mu^2\mathbf x^2-4\mu^2.
$
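The invariance of $W_\gamma$ and the explicit form of the reduced operator can be tested numerically on the 3-dimensional Heisenberg group, where $\Delta=\partial_x^2+\partial_y^2+(1+\frac14(x^2+y^2))\partial_z^2+\partial_zD\bullet$, with $D\bullet$ the derivative along $X\to J(X)$. The sketch below (plain Python, finite differences; the sample function $\psi$, the sample point, and the orientation $J(x,y)=(-y,x)$ are assumptions of the illustration) compares $\Delta\Psi_\gamma$ with $(\lhd_\gamma\psi)\,e^{2\pi\mathbf i\gamma z}$:

```python
# Check Delta Psi = (lhd_gamma psi) e^{2 pi i gamma z} on the
# 3-dimensional Heisenberg group (k = 2, l = 1), where
#   Delta = d_xx + d_yy + (1 + (x^2+y^2)/4) d_zz + d_z (D .),
# and (D f) is the derivative of f along the field J(X) = (-y, x).
import cmath, math

g = 0.3                                     # sample lattice parameter
psi = lambda x, y: (x + 2*y) * math.exp(-(x*x + y*y))   # sample psi
Psi = lambda x, y, z: psi(x, y) * cmath.exp(2j*math.pi*g*z)

h = 1e-3
def dxx(f, x, y, z): return (f(x+h, y, z) - 2*f(x, y, z) + f(x-h, y, z))/h**2
def dyy(f, x, y, z): return (f(x, y+h, z) - 2*f(x, y, z) + f(x, y-h, z))/h**2
def dzz(f, x, y, z): return (f(x, y, z+h) - 2*f(x, y, z) + f(x, y, z-h))/h**2
def D(f, x, y, z):                          # derivative along (-y, x)
    fx = (f(x+h, y, z) - f(x-h, y, z)) / (2*h)
    fy = (f(x, y+h, z) - f(x, y-h, z)) / (2*h)
    return -y*fx + x*fy

x, y, z = 0.4, -0.7, 0.2
lhs = (dxx(Psi, x, y, z) + dyy(Psi, x, y, z)
       + (1 + (x*x + y*y)/4) * dzz(Psi, x, y, z)
       + (D(Psi, x, y, z+h) - D(Psi, x, y, z-h)) / (2*h))   # d_z D Psi

p3 = lambda x, y, z: psi(x, y) + 0j         # psi viewed as a 3-variable map
box = (dxx(p3, x, y, z) + dyy(p3, x, y, z)
       + 2j*math.pi*g * D(p3, x, y, z)
       - 4*math.pi**2*g*g * (1 + (x*x + y*y)/4) * psi(x, y))
rhs = box * cmath.exp(2j*math.pi*g*z)
assert abs(lhs - rhs) < 1e-4
```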
Although it is defined in terms of the X-variable,
this operator is not a sub-Laplacian resulting from a submersion.
Rather, it is the restriction of the total Laplacian onto
the invariant subspace $W_\gamma$.
Actually, the Z-space is represented by the constant $\mu$
and operator $D_\mu\bullet$. A characteristic feature of this
restricted operator is that it involves only a single endomorphism,
$J_{Z_\gamma}$.
In the 2D-case, such an operator can be transformed
into the Ginsburg-Landau-Zeeman
operator (\ref{land}) by choosing $\mu ={eB/2\hbar c}$ and
multiplying the whole operator by
$-{\hbar^2/2m}$.
In general dimensions,
the number $\kappa =k/2$ means the {\it number of particles}, and the
endomorphisms $j_Z$ and $-j_Z$ in the above formulas are attached to
systems of electrons resp. positrons. More precisely, by the classification
of H-type groups, these endomorphisms act on the irreducible
subspaces $\mathbb R^{n_l}$, and the system is interpreted such that
there are $n_l/2$ particles of the same charge orbiting on complex planes
determined by the complex structures $j_{Z_u}$ resp. $-j_{Z_u}$,
in constant magnetic
fields whose directions are perpendicular to the complex planes
on which the particles are orbiting.
The actuality of the
complex planes where the orbiting takes place can be determined just
probabilistically by the probability amplitudes defined for such systems.
The total number of particles is $\kappa =(a+b)n_l/2$. The probability
amplitudes must refer to $\kappa$ particles, that is, they
are defined on the complex X-space $\mathbb C^\kappa$ defined by the
complex structure $J_{Z_u}$. This theory
can be established only after developing an adequate spectral theory.
Above, the adjective "perpendicular" is meant to be just symbolic,
for the axis $Z$ is actually separated
from the orbiting. That is, it is not the actual axis of orbiting,
but a vector in the Z-space which is attributed to the orbiting
by the linear map $\mathbb J:Z\to J_Z$.
Thus, it would be more appropriate
to say that this constant perpendicular
magnetic field, $B$, is just "felt"
by the particle orbiting on a
given complex plane. Anyhow, $B$ defines
a unique inertia system on the complex
plane in which $E=0$, that is, the
associated electric field vanishes. In all other inertia systems,
a non-zero $E$ must also be associated with $B$. The relativistic time
$T$ defined on this unique inertia system is the inner time defined
for the particle orbiting on a complex plane.
This time can be synchronized, meaning that
common time $T$ can be introduced which defines time on
each complex plane.
Note that this operator contains
also an extra constant term, $4\mu^2$, which is explained later
as the total energy of neutrinos accompanying
the electron-positron system.
This energy term is neglected in the original
Ginsburg-Landau-Zeeman Hamiltonian.
Thus the higher dimensional mathematical model
really represents a system of particles and antiparticles
which are orbiting in constant
magnetic fields. The operators $D_\mu\bullet$, associated with magnetic
dipole resp. angular momentum operators,
are defined separately for the lattice points.
Therefore this model can be viewed as being
associated with
{\it magnetic-dipole-moment-crystals}, or
{\it angular-moment-crystals}. In short, they are called Z-crystals.
They are particularly interesting
on a group $H^{(a,b)}_3$ where the Z-space is $\mathbb R^3$. On this
Euclidean space all
possible crystals are well known by classifications.
It would be interesting to know what this mathematical
classification means from a physical point of view, and whether these
Z-crystals really exist in nature.
\subsection{Explicit spectra of Ginsburg-Landau-Zeeman operators.}
On Z-crystals, the spectral
investigation of the total operator (\ref{Delta}) can be
reduced to the operators $\lhd_\gamma$, induced by $\Delta$
on the invariant
subspaces $W_\gamma$.
On Heisenberg-type groups this operator involves only a single parameter
$\mu >0$, where $\mu^2$ is the single eigenvalue of
$-J_\gamma^2$. For this reason, it is denoted by $\lhd_\mu$.
This problem
is traced back to an ordinary differential operator
acting on radial functions, which can be found by seeking
the eigenfunctions in the form
$F(X)=f(\langle X,X\rangle )\mathtt H^{(n,m )}
(X)$,
where $f$ is an even function
defined on $\mathbb R$ and
$\mathtt H^{(n,m)}(X)$ is a complex valued
homogeneous harmonic polynomial of order
$n$, and, simultaneously, it is also an eigenfunction
of operator
$\mathbf iD_\mu\bullet$
with eigenvalue $m\mu$.
Such polynomials can be constructed as follows.
Consider a complex orthonormal basis,
$\mathbf B=\{B_1,\dots ,B_{\kappa}\}$,
on the complex space defined by the complex structure
$J=(1/\mu)J_\mu$. The corresponding
complex coordinate system is denoted by
$\{z_1,\dots ,z_{\kappa}\}$.
Functions
$
P=z^{p_1}_1\dots z^{p_{\kappa}}_{\kappa}
\overline z^{q_1}_1\dots \overline z^{q_{\kappa}}_{\kappa}
$
satisfying
$p_1+\dots +p_{\kappa}=p,\,
q_1+\dots +q_{\kappa}=n-p$
are $n^{th}$-order homogeneous polynomials which are eigenfunctions
of $\mathbf iD\bullet$ with eigenvalue $m =2p-n$.
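This eigenvalue claim is easy to test on $\mathbb C^1$; in the sketch below (plain Python; the orientation $J(x,y)=(y,-x)$ is chosen so that the eigenvalue comes out as $m=2p-n$, the opposite orientation flips its sign) the directional derivative is approximated by a central difference:

```python
# On C^1 the monomial P = z^p zbar^q (n = p + q) satisfies
# i D P = m P with m = 2p - n, where D differentiates along the
# field X -> J(X) = (y, -x); checked by a central finite difference.
def P(x, y, p, q):
    return (complex(x, y)**p) * (complex(x, -y)**q)

def iD(f, x, y, h=1e-6):
    jx, jy = y, -x                    # the direction J(X) at the point
    d = (f(x + h*jx, y + h*jy) - f(x - h*jx, y - h*jy)) / (2*h)
    return 1j * d

p, q = 3, 1
m = 2*p - (p + q)                     # m = p - q
x, y = 0.8, -0.5
val = iD(lambda a, b: P(a, b, p, q), x, y)
assert abs(val - m * P(x, y, p, q)) < 1e-6
```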
However, these polynomials are not harmonic.
In order to get the harmonic
eigenfunctions, they must be exchanged for the
polynomials
$
\Pi^{(n)}_X (P)
$,
defined by projections, $\Pi^{(n)}_X$, onto the space of
$n^{th}$-order
homogeneous harmonic
polynomials of the X-variable. By their explicit description (\ref{proj}),
these projections are of the form
$\Pi^{(n)}_X =\Delta_X^0+B^{(n)}_1\mathbf x^2\Delta_X +
B^{(n)}_2\mathbf x^4\Delta_X^2+\dots$,
where $\Delta_X^0=id$. By this formula,
the harmonic polynomial obtained by this
projection is also an eigenfunction of $\mathbf iD_\mu\bullet$
with the same eigenvalue $m\mu$.
When operator $\lhd_\mu$ is acting on
$F(X)=f(\langle X,X\rangle )\mathtt H^{(n,m)}(X)$,
it defines an ordinary differential operator acting on $f$. Indeed,
by $D_{\mu }\bullet f=0$, we have:
\begin{eqnarray}
(\lhd_{\mu }F)(X)=\big(4\langle X,X\rangle f^{\prime\prime}
(\langle X,X\rangle )
+(2k+4n)f^\prime (\langle X,X\rangle )\\
-(2m\mu +4\mu^2(1 +{1\over 4}\langle X,X\rangle ))
f(\langle X,X\rangle )\big)\mathtt H^{(n,m)} (X).
\nonumber
\end{eqnarray}
The eigenvalue problem can, therefore, be reduced to an ordinary
differential operator. More precisely, we get:
\begin{theorem}
On a Z-crystal, $B_R\times T^l$, under a given
boundary condition $Af^\prime (R^2)+Bf(R^2)=0$ defined by constants
$A,B\in\mathbb R$,
the eigenfunctions
of $\lhd_{\mu }$ can be represented
in the form $f(\langle X,X\rangle )\mathtt H^{(n,m)} (X)$, where the radial
function $f$ is an eigenfunction of the ordinary differential operator
\begin{equation}
\label{Lf_lambda}
(\large{\Diamond}_{\mu,\tilde t}f)(\tilde t)=
4\tilde tf^{\prime\prime}(\tilde t)
+(2k+4n)f^\prime (\tilde t)
-(2m\mu +4\mu^2(1 +
{1\over 4}\tilde t))f(\tilde t).
\end{equation}
For fixed degrees $n$ and $m$, the multiplicity
of such an eigenvalue is the
dimension of the space formed by the spherical harmonics
$\mathtt H^{(n,m)} (X)$.
In the non-compact case,
when the torus bundle is considered over the whole X-space,
the $L^2$-spectrum can explicitly be computed.
Then, the above functions are sought
in the form
$f_\mu (\tilde t)=u(\tilde t\mu)e^{-\tilde t\mu /2}$
where $u(\tilde t)$ is a uniquely determined $r^{th}$-order polynomial
computed for $\mu =1$.
In terms of these parameters, the elements of the spectrum are
$\nu_{(\mu,r,n,m)}=-((4r+4p+k)\mu+4\mu^2)$. This
spectrum depends just on $p=(m+n)/2$, and the same
spectral element appears
for distinct degrees $n$. Therefore, the multiplicity of each
eigenvalue is infinite.
\end{theorem}
\begin{proof}
Only the last statement is to be established. We proceed, first, with the
assumption $\mu =1$.
Then, function
$e^{-{1\over 2}\tilde t}$
is an eigenfunction of this radial
operator with eigenvalue $-(4p+k+4)$. The general
eigenfunctions are sought in the form
\begin{equation}
f(\tilde t)=u(\tilde t)e^{-{1\over 2}\tilde t}.
\end{equation}
Such a function is an eigenfunction of
$\Diamond$
if and only if $u(\tilde t)$ is an eigenfunction of the differential
operator
\begin{equation}
\label{Pop}
(P_{(\mu =1,n,m )}u)(\tilde t)=
4\tilde tu^{\prime\prime}(\tilde t)
+(2k+4n-4\tilde t)u^\prime (\tilde t)
-(4p+k+4)u(\tilde t).
\end{equation}
Because of differentiability conditions,
we impose $u^\prime (0)=0$ on the
eigenfunctions. Since in this case $u(0)\not =0$ holds for any non-zero
eigenfunction, the condition $u(0)=1$ is also imposed.
In the compact case, corresponding to the ball$\times$torus-type manifolds
defined over a ball $B_R$, the spectrum of this Laguerre-type differential
operator cannot be explicitly computed. For a given boundary condition
(which can be Dirichlet, $u (R)=0$, or Neumann,
$u^\prime (R)=0$) the spectrum
consists of a real sequence $0\geq \mu_1>\mu_2>\dots \to -\infty$.
The multiplicity of each of these Laguerre-eigenvalues is $1$ and the
multiplicity corresponding to the Ginsburg-Landau-Zeeman operator
is the dimension
of the space of spherical harmonics $\mathtt H^{(n,m)}$.
The elements of the Laguerre-spectrum are zeros
of a holomorphic function expressed by an integral formula \cite{coh}.
Contrary to the compact case, the spectrum
can be explicitly computed for the non-compact torus bundle
$\Gamma\backslash H$, defined over the whole X-space.
An elementary argument shows that for any $r\in \mathbb N$, operator
(\ref{Pop}) has a
uniquely determined polynomial eigenfunction
\begin{equation}
\label{lag}
u_{(\mu =1,r,n,m )}(\tilde t)
=\tilde t^r+a_1\tilde t^{r-1}+a_2\tilde t^{r-2}+\dots +
a_{r-1}\tilde t+a_r
\end{equation}
with coefficients satisfying the recursion formulas
\begin{equation}
a_0=1\quad ,\quad a_i=-a_{i-1}{(r-i+1)(r-i+n+{1\over 2}k)\over i}.
\end{equation}
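The recursion can be validated by substituting $u(\tilde t)=\tilde t^r+a_1\tilde t^{r-1}+\dots+a_r$ back into the operator: with $\alpha={1\over 2}k+n-1$, every coefficient of the residual of $\tilde tu^{\prime\prime}+(\alpha+1-\tilde t)u^\prime+ru$ must vanish, which forces $a_i=-a_{i-1}(r-i+1)(r-i+n+{1\over 2}k)/i$. The following sketch (plain Python) checks this in exact rational arithmetic:

```python
# Verify that u(t) = t^r + a_1 t^{r-1} + ... + a_r, with
#     a_0 = 1,  a_i = -a_{i-1} (r-i+1)(r-i+n+k/2) / i,
# satisfies t u'' + (alpha+1-t) u' + r u = 0 for alpha = k/2 + n - 1,
# i.e. that u is the r-th order monic Laguerre eigenfunction.
from fractions import Fraction

def coeffs(r, n, khalf):
    a = [Fraction(1)]                       # a[i] multiplies t^(r-i)
    for i in range(1, r + 1):
        a.append(-a[-1] * (r - i + 1) * (r - i + n + khalf) / i)
    return a

def check(r, n, k):
    khalf = Fraction(k, 2)
    alpha = khalf + n - 1
    u = {r - i: c for i, c in enumerate(coeffs(r, n, khalf))}
    res = {}                                # residual, degree -> coefficient
    for d, c in u.items():
        if d >= 1:                          # t u'' and (alpha+1) u' terms
            res[d-1] = res.get(d-1, 0) + c*d*(d-1) + (alpha + 1)*d*c
            res[d] = res.get(d, 0) - d*c    # the -t u' term
        res[d] = res.get(d, 0) + r*c        # the +r u term
    assert all(v == 0 for v in res.values())

for r in range(6):
    for n in range(4):
        for k in (2, 4, 6):
            check(r, n, k)
```

For example, for $r=2$, $n=0$, $k=2$ the recursion produces $u(\tilde t)=\tilde t^2-4\tilde t+2$, the monic Laguerre polynomial of order two.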
Actually, this argument can be avoided and these polynomials can
explicitly be established by observing
that they are nothing but
the Laguerre polynomials defined as the $r^{th}$-order polynomial
eigenfunctions
of operator
\begin{equation}
\label{Lop}
\Lambda_{\alpha} (u)(\tilde t)=\tilde tu^{\prime\prime}+
(\alpha +1-\tilde t)u^\prime ,
\end{equation}
with eigenvalues $-r$.
This statement follows from identity
\begin{equation}
P_{(\mu =1,n,m )}=4\Lambda_{({1\over 2}k+
n-1)}-(4p+k+4),
\end{equation}
implying that
the eigenfunctions of operators
(\ref{Pop}) and (\ref{Lop})
are the same and
the eigenvalue corresponding to (\ref{lag}) is
\begin{equation}
\label{leigv}
\nu_{(\mu =1,r,n,m )}=-(4r+4p+k+4)\quad ,
\quad p={1\over 2}(m +n).
\end{equation}
We also get that, for fixed values of $k,n,m$
(which fix the value also for $p$), the functions
$
u_{(\mu =1,r,n,m )}\, ,\, r=0,1,\dots
$ form a basis in $L^2([0,\infty ))$.
In case of a single $\mu$,
the eigenfunctions are sought in the form
\begin{equation}
\label{eigfunc}
u_{\mu r nm}(\langle X,X\rangle )e^{-{1\over 2}\mu
\langle X,X\rangle}\mathtt H^{(n,m)}(X).
\end{equation}
It turns out that
\begin{equation}
u_{(\mu ,r,n,m)}(\tilde t)=
u_{(\mu =1 ,r,n,m)}(\mu \tilde t)
\end{equation}
and the corresponding eigenvalue is
\begin{equation}
\label{eigval}
\nu_{(\mu,r,n,m)}=-((4r+4p+k)\mu+4\mu^2).
\end{equation}
This statement can be explained as follows. For a general $\mu$,
the action of (\ref{Lf_lambda}) on a function
$f(\tilde t)=u(\mu \tilde t)e^{-{1\over 2}\mu \tilde t}$ can be described
in terms of $\tau =\mu \tilde t$ as follows:
\begin{equation}
(L_{(\mu ,n,m )}f)(\tau )=\mu (4\tau f_{\tau\tau}+(2k+4n)f_{\tau}
-(2m+\tau )f)-4\mu^2f,
\end{equation}
from which the statement follows.
\end{proof}
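The spectral statements of the theorem can be illustrated by a small numerical sketch (plain Python; the values of $\mu ,k,r,p$ are arbitrary samples): the eigenvalue $\nu_{(\mu ,r,n,m)}$ depends on $(n,m)$ only through $p$, so running over the pairs $(n,2p-n)$ produces a single value, and at $\mu =1$ the formula reduces to (\ref{leigv}):

```python
# The L^2-eigenvalues nu = -((4r + 4p + k) mu + 4 mu^2) depend on the
# degrees (n, m) only through p = (m + n)/2; sample values below.
def nu(mu, r, p, k):
    return -((4*r + 4*p + k) * mu + 4 * mu**2)

mu, k, r, p = 0.5, 4, 2, 3                  # arbitrary sample parameters
pairs = [(n, 2*p - n) for n in range(p, p + 10)]    # all have the same p
vals = {nu(mu, r, (n + m) // 2, k) for n, m in pairs}
assert vals == {nu(mu, r, p, k)}            # a single eigenvalue recurs
assert nu(1, r, p, k) == -(4*r + 4*p + k + 4)       # the mu = 1 formula
```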
This technique extends to general 2-step nilpotent Lie groups, where
the endomorphisms may have distinct eigenvalues,
$\{\mu_i\}$. In this case
the eigenfunctions are represented
as products of functions of the form
\begin{equation}
F_{(i)}(X)=f_{(i)}(\langle X,X\rangle)
\mathtt H^{(n_i,m_i )}(X),
\end{equation}
where the functions in the formula are defined on the maximal
eigensubspace corresponding to the parameter $\mu_i$. When the
spectrum is computed on the whole X-space, this method works out for the
most general Ginsburg-Landau-Zeeman operators. In case of a single $\mu$,
this method applies also to computing the spectra on torus
bundles over balls and spheres.
In case of multiple $\mu$'s, it applies to torus bundles over
the Cartesian product
of balls resp. spheres defined on the above $X_i$-spaces.
\section{Particles having interior.}
In order to sketch up a clear map for this rather complex section,
we start with a review of the main mathematical and physical ideas
this exposition is based on. These ideas are rigorously established
in the subsequent subsections.
\subsection{A preliminary review of the main ideas.}
Systems of particles having insides can be attached to
ball$\times$ball- and ball$\times$\-sphe\-re-type
manifolds. Originally they emerged in the second type of spectral
investigations performed in \cite{sz2}-\cite{sz4}. These manifolds
are defined by appropriate smooth fields of Z-balls resp. Z-spheres
of radius $R_Z(\mathbf x)$
over the points of a fixed X-ball $B_X$ whose radius is denoted
by $R_X$. Note that radius $R_Z(\mathbf x)$ depends just on the
length, $\mathbf x:=|X|$, of vector $X\in B_X$ over which the
Z-balls resp. Z-spheres are considered. The centers of all
of the balls resp. spheres which show up in this definition are
always at the origin of the corresponding spaces. The boundaries
of these manifolds are the so called
sphere$\times$ball- resp. sphere$\times$sphere-type manifolds,
which are trivial Z-ball-
resp. Z-sphere-bundles defined
over fixed X-spheres of radius $R_X$.
In short, one considers Z-balls resp. Z-spheres instead of the
Z-tori used in the previous constructions of Z-crystals. In the
isospectrality investigations, the compact domains
corresponding to $R_X <\infty$ are of
primary interest.
In physics, however, the non-compact bundles corresponding to
$R_X =\infty$ (that is, which are defined over the whole X-space)
become the most important cases. In what follows, both the
compact and non-compact cases will be investigated.
Contrary to the Z-crystal models,
the computations in this case cannot be reduced to a single
endomorphism. Instead, they always have to be established for the complete
operator $\mathbf M=\sum\partial_\alpha D_\alpha\bullet$ which includes
the angular momentum endomorphisms $J_Z$ with respect to any Z-directions.
This operator strongly relates both to the
3D angular momentum
$\mathbf P=(P_1,P_2,P_3)=\frac{1}{\hbar}Z\times\mathbf p$,
defined in (\ref{3Dang_mom}), and the strong interaction
term $\mathbf igx^\mu A^\alpha_\mu t_\alpha$
of the QCD Lagrangian (\ref{QCDLagrange}). Actually, it has a rather
apparent formal identity with Pauli's intrinsic spin Hamiltonian
$\sum B_iP_i$ defined by magnetic fields $\mathbf B=(B_1,B_2,B_3)$
(cf. \cite{p}, Volume 5, pages 152-159).
In $SU(3)$-theory the term corresponding to the 3D angular momentum
is exactly the above mentioned strong interaction term.
Due to the new form
(\ref{newDiracwave}) of the de Broglie waves,
where the angular-momentum-axes are separated from the planes on which
the particles are orbiting, also the angular momentum operator
has to appear in a new form.
In order to make the analogy with Pauli's
intrinsic spin resp. the strong interaction term clearer, note
that the Lie algebras determined by the 3D angular momenta
resp. Gell-Mann's matrices $t_\alpha$ are $su(2)$ resp. $su(3)$. Thus, the
nilpotent-group-models corresponding to these classical Yang-Mills
models are $H^{(a,b)}_3$ resp. the
nilpotent group constructed by the representations of $su(3)$.
When operator $\mathbf M$ acts on wave function
(\ref{newDiracwave}), it appears behind the integral in the form
$\mathbf iD_K\bullet$. Consider, first, group $H^{(a,b)}_3$ and suppose
that $K=e_1$, where $\{e_1,e_2,e_3\}$ is the natural basis on
$\mathbb R^3$. Then, on the $\{e_2,e_3\}$-plane, which is a complex plane
regarding the complex structure $J_{e_1}$,
operator $\mathbf iD_{e_1}\bullet$ is nothing but the first component,
$-P_1=\mathbf i\big(Z_2\frac{\partial}{\partial Z_3}
-Z_3\frac{\partial}{\partial Z_2}\big)$,
of Pauli's angular momentum operator. That is, the analogy
between $\mathbf M$ and the classical angular momentum operator
becomes apparent after letting $\mathbf M$
act on wave functions (\ref{newDiracwave}). This action is exactly
how $P$ acts on the original de Broglie waves. Thus this
new form
of action, which can be described by axis-separation and
placing the orbiting particles onto the complex planes, can really
be considered as an adjustment to the
new forms of the wave functions. These arguments
work out also for groups constructed by $su(3)$-representations. Thus
it is really justified to consider $\mathbf M$ as a spin operator
appearing in a new
situation. The greatest advantage of this new form is
that it describes also the strong nuclear forces.
This complication gives rise to a
much more complex mathematical and physical situation
where both the
exterior and the interior life of particle systems exhibit themselves
on a full scale. First, let the physical role of the Fourier
transforms appearing in the formulas be clarified. If the term involving
time is omitted from the formula of the wave functions, the rest is called
the timeless probability amplitude. Both these amplitudes and wave
functions could have been defined also by means of the inverse function
$e^{-\mathbf i\langle Z,K\rangle}$. Wave functions (or amplitudes)
obtained from
the very same function by using $e^{\mathbf i\langle Z,K\rangle}$ resp.
$e^{-\mathbf i\langle Z,K\rangle}$ in their
Fourier transforms are said to be wave functions (or amplitudes)
defined for particle- resp. antiparticle-systems. In other words,
the definition of particles and antiparticles is possible because
of these two choices. Calling one object a particle and its
counterpart an antiparticle is very similar to naming one of the poles
of a magnet the north pole and the other one the south pole.
Since the Laplace operators on 2-step nilpotent Lie groups are defined
by means of constant magnetic
fields, this is actually the right physical explanation for choosing
$e^{\mathbf i\langle Z,K\rangle}$ or
$e^{-\mathbf i\langle Z,K\rangle}$ to introduce probability amplitudes.
For this reason, $\mathbf M$ is called the unpolarized magnetic dipole
moment or angular momentum operator. The polarized operators appear behind
the integral sign of the Fourier integral formula when $\mathbf M$ is
acting on the formula.
The complexity of this operator is fascinating. For instance, it
is the sum of extrinsic, $\mathbf{L}$, and intrinsic, $\mathbf{S}$,
operators which do not commute with each other. Furthermore, operator
$\OE =\Delta_X+(1+\frac 1{4}\mathbf x^2)\Delta_Z+\mathbf{L}$
is a Ginsburg-Landau-Zeeman operator which exhibits just orbital spin. The
intrinsic life of particles is encoded into $\mathbf{S}$.
The strong nuclear forces, which keep the
particles having an interior together, can also be explained by this
operator.
In order to make this
complicated situation as clear as possible, we describe, in advance,
how certain
eigenfunctions of $\Delta$ can explicitly be computed. These
computations provide a great opportunity also for a preliminary review
of the general eigenfunction computations, which will be connected
to these particular computations as follows.
Since these particular functions
do not satisfy the boundary conditions, they do not
provide the final solutions for finding the eigenfunctions
satisfying also given boundary conditions. These conditions can be imposed
just after certain projections performed on the center (that is, in
the insides of the particles). But then, these projected functions
will not be eigenfunctions of the complete $\Delta$ any more.
They are eigenfunctions
just of the exterior operator $\OE$. For this reason,
they are called weak force eigenfunctions. The strong force
eigenfunctions, defined by the eigenfunctions of the complete $\Delta$
satisfying also a given boundary condition, can be expressed just
by complicated combinations of the weak force eigenfunctions,
meaning that the strong forces are piled up by weak forces. Since the
weak force eigenfunctions are Ginsburg-Landau-Zeeman eigenfunctions, by the
combinations of which the strong force eigenfunctions can be expressed,
this theory really unifies the electromagnetic, the weak, and the strong
nuclear forces. Let it be mentioned that the only force-category
missing from this list is the gravitational force. At this early
point of the development, we do not comment on the question whether this
unification can be extended also to this force.
Now we return to establishing an explicit formula describing
certain eigenfunctions of $\Delta$ on general
Heisenberg-type Lie groups $H^{(a,b)}_l$. More details about the
general eigenfunction computations will also be provided.
Although this construction can be
implemented also on general 2-step nilpotent Lie groups,
this more complicated case is omitted in
this paper.
First, the eigenfunctions of a single
angular momentum operator $\mathbf D_K\bullet$, defined for a
Z-vector $K$, are described as follows. For a fixed X-vector $Q$
and unit Z-vector $K_u={1\over \mathbf k}K$, consider the X-function
$\Theta_{Q}(X,K_u)=\langle Q+\mathbf iJ_{K_u}(Q),X\rangle$ and its
conjugate $\overline{\Theta}_{Q}(X,K_u)$. For a vector $K=\mathbf kK_u$
of length $\mathbf k$,
these functions are
eigenfunctions of $D_{K}\bullet$ with eigenvalue
$-\mathbf k\mathbf i$ resp.
$\mathbf k\mathbf i$. The higher order eigenfunctions are of the form
$\Theta_{Q}^p\overline\Theta^q_{Q}$
with eigenvalue $(q-p)\mathbf k\mathbf i$.
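On the quaternionic group $H^{(1,0)}_3$ these eigenfunctions can be written down and tested explicitly. The sketch below (plain Python; with the orientation conventions chosen here the eigenvalue of $\Theta_Q$ comes out as $\mathbf k\mathbf i$ and that of the conjugate as $-\mathbf k\mathbf i$, and the opposite orientation of $J$ interchanges the two, matching the signs in the text) approximates $D_K\bullet$ by a central difference along the field $X\to J_K(X)$:

```python
# Theta_Q(X) = <Q + i J_{K_u} Q, X> on the quaternionic group H^{(1,0)}_3:
# D_K Theta = i k Theta and D_K conj(Theta) = -i k conj(Theta), where D_K
# differentiates along the field X -> J_K(X).  J is realized by left
# quaternion multiplication (basis 1, i, j, k).
Ji = [[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]]
Jj = [[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]]
Jk = [[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]]

def matvec(M, v):
    return [sum(M[r][c]*v[c] for c in range(4)) for r in range(4)]

def J(K, v):                 # J_K(v) for K = (k1, k2, k3) in the Z-space
    a, b, c = matvec(Ji, v), matvec(Jj, v), matvec(Jk, v)
    return [K[0]*x + K[1]*y + K[2]*z for x, y, z in zip(a, b, c)]

Ku = [2/3, -2/3, 1/3]        # a sample unit Z-vector
kk = 1.7                     # the length of K
K = [kk*t for t in Ku]
Q = [0.3, -1.1, 0.8, 0.5]    # a sample X-vector
V = [q + 1j*w for q, w in zip(Q, J(Ku, Q))]      # Q + i J_{K_u}(Q)

Theta = lambda X: sum(v*x for v, x in zip(V, X))

def D_K(f, X, h=1e-6):       # derivative along J_K(X) at the point X
    d = J(K, X)
    up = [x + h*t for x, t in zip(X, d)]
    dn = [x - h*t for x, t in zip(X, d)]
    return (f(up) - f(dn)) / (2*h)

X = [0.9, 0.2, -0.4, 1.3]
assert abs(D_K(Theta, X) - 1j*kk*Theta(X)) < 1e-6
Tbar = lambda X: Theta(X).conjugate()
assert abs(D_K(Tbar, X) - (-1j)*kk*Tbar(X)) < 1e-6
```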
In order to find the eigenfunctions of the compound operator $\mathbf M_Z$,
consider a Z-sphere bundle $S_{R_Z}(\mathbf x)$ over the X-space
whose radius function $R_Z(\mathbf x)$ depends just on
$|X|=\mathbf x$. For an appropriate function
$\phi (\mathbf x,K)$
(depending on $\mathbf x$ and $K\in S_{R_Z}$, and making
the following integral formula well defined), consider
\begin{equation}
\label{fourR_Z}
\mathcal F_{QpqR_Z}(\phi )(X,Z)
=\oint_{S_{R_Z}}e^{\mathbf i
\langle Z,K\rangle}\phi (\mathbf x,K)
(\Theta_{Q}^p\overline\Theta^q_{Q})(X,K_u)dK_{no},
\end{equation}
where $dK_{no}$ is the normalized measure on $S_{R_Z}(\mathbf x)$.
By $\mathbf M_Z\oint =\oint \mathbf iD_K\bullet$,
this function restricted to the Z-space over an arbitrarily fixed
X-vector is an eigenfunction
of $\mathbf M_Z$ with the real eigenvalue $(p-q)R_Z(\mathbf x)$.
These functions are eigenfunctions also of $\Delta_Z$ with eigenvalue
$-R_Z^2(\mathbf x)$. Also note
that these eigenvalues do not change by varying $Q$.
The function space spanned by functions (\ref{fourR_Z}) which are
defined by all possible $\phi$'s is not invariant with respect to
the action of $\Delta_X$, thus the eigenfunctions of the complete
operator $\Delta$ do not appear in this form. In order to find the
common eigenfunctions,
the homogeneous but non-harmonic polynomials
$
\Theta_{Q}^p\overline\Theta^q_{Q}
$
of the X-variable should be exchanged for the
polynomials
$
\Pi^{(n)}_X (\Theta_{Q}^p\overline\Theta^q_{Q})
$,
defined by projections, $\Pi_X$, onto the space of $n=(p+q)$-order
homogeneous harmonic
polynomials of the X-variable. Formula
$\Pi_X =\Delta_X^0+B_1\mathbf x^2\Delta_X +B_2\mathbf x^4\Delta_X^2+\dots$,
established in (\ref{proj}), implies that, over each X-vector, also
\begin{equation}
\label{HfourR_Z}
\mathcal {HF}_{QpqR_Z}(\phi )(X,Z)
=\oint_{S_{R_Z}}e^{\mathbf i
\langle Z,K\rangle}\phi (\mathbf x,K)
\Pi^{(n)}_X(\Theta_{Q}^p\overline\Theta^q_{Q})(X,K_u)dK_{no}
\end{equation}
are eigenfunctions
of $\mathbf M$ and $\Delta_Z$ with the same eigenvalues
as those defined for (\ref{fourR_Z}).
The action of the complete Laplacian is a combination of
X-radial differentiation, $\partial_{\mathbf x}$,
and multiplications with functions depending
just on $\mathbf x$. Due to the normalized measure $dK_{no}$, these
operations can be considered as acting directly,
inside the integral sign, on the function
$\phi (\mathbf x,K)$ in terms of the $\mathbf x$-variable only.
That is, the action is completely reduced to X-radial
functions and the eigenfunctions of $\Delta$ can be found in the form
\begin{eqnarray}
f(\mathbf x^2)\oint_{S_{R_Z}}e^{\mathbf i
\langle Z,K\rangle}
F_{Qpq}(X,K_u)dK_{no},
\quad
{\rm where}
\\
F_{Qpq}(X,K_u)=\varphi (K)\Pi^{(n)}_X
(\Theta_{Q}^p(X,K_u)\overline\Theta^q_{Q}(X,K_u)).
\end{eqnarray}
The same computations developed for the Z-crystals
yield that this reduced operator appears in the following form:
\begin{equation}
\label{Lf_mu(x)}
({\Diamond}_{\mu (\tilde t),\tilde t}f)(\tilde t)=
4\tilde tf^{\prime\prime}(\tilde t)
+(2k+4n)f^\prime (\tilde t)
-(2m\mu (\tilde t) +4\mu^2(\tilde t)(1 +
{1\over 4}\tilde t))f(\tilde t),
\end{equation}
where $\tilde t=\mathbf x^2$, and
$\mu (\tilde t)=R_Z(\sqrt{\tilde t})=R_Z(\mathbf x)$. Note that
this is exactly the same operator as was obtained for
the radial Ginsburg-Landau-Zeeman operator on
Z-crystals. Since the function
$\mu (\tilde t)$ may depend also on $\tilde t=\mathbf x^2$,
it actually appears here in a more general form.
For constant radius functions $R_Z$, however,
it becomes the very same operator. That is, this
eigenfunction problem, too, is reduced to finding the eigenfunctions of
the Ginsburg-Landau-Zeeman operator reduced to X-radial functions.
This reduced
operator remains the same as $Q$ varies; thus the spectrum
on the invariant spaces considered for fixed $Q$'s is also unchanged
under these variations. This phenomenon reveals the
spectral isotropy of these models, which is discussed later.
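As a quick numerical sanity check of the reduced operator (\ref{Lf_mu(x)}), one may verify that, for a constant radius function $\mu$, the X-radial function $f(\tilde t)=e^{-\mu\tilde t/2}$ is an eigenfunction with eigenvalue $-\mu(k+2n+2m+4\mu)$; the $\tilde t$-dependent terms cancel exactly. The sketch below uses illustrative parameter values (not taken from the paper) and finite-difference derivatives.

```python
import math

# Illustrative parameter values (assumptions, not values from the paper):
kk, nn, mm, mu = 4, 2, 1, 0.5

def diamond(f, t, h=1e-4):
    """(Diamond f)(t) = 4t f'' + (2k+4n) f' - (2m*mu + 4*mu^2*(1 + t/4)) f,
    with the derivatives approximated by central differences."""
    f1 = (f(t + h) - f(t - h)) / (2 * h)
    f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    return 4*t*f2 + (2*kk + 4*nn)*f1 - (2*mm*mu + 4*mu**2*(1 + t/4))*f(t)

f = lambda t: math.exp(-mu * t / 2)       # candidate eigenfunction
lam = -mu * (kk + 2*nn + 2*mm + 4*mu)     # its eigenvalue (here -6.0)
for t in (0.3, 1.0, 2.7):
    assert abs(diamond(f, t) - lam * f(t)) < 1e-5
```

Note that the eigenvalue involves $Q$ only through $n=p+q$, in accordance with the spectral isotropy described above.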
Note that this construction is carried out for a fixed X-vector $Q$,
but it extends to general polynomials as follows.
Consider an orthonormal system
$\mathbf B=\{B_{1},\dots ,B_{\kappa}\}$ of vectors
in the X-space. They form a complex, but generically
non-orthonormal, basis with respect to the complex structures $J_{K_u}$,
where the unit vectors $K_u$ yielding this property form
an everywhere dense open subset of the unit Z-sphere,
the complement of a set of measure $0$.
The corresponding complex coordinate systems on the X-space
are denoted by
$\{z_{K_u1}=\Theta_{B_{K_u1}},\dots ,z_{K_u\kappa}
=\Theta_{B_{K_u\kappa}}\}$. For given values
$p_1,q_1,\dots ,p_{\kappa},q_{\kappa}$,
consider the polynomial
$
\prod_{i=1}^{\kappa}
z_{K_ui}^{p_i}
\overline z_{K_ui}^{q_i}.
$
Then functions
\begin{eqnarray}
\label{fourprod}
\oint_{S_{R_Z}}e^{\mathbf i
\langle Z,K\rangle}f(\mathbf x^2)\varphi (K)
\Pi^{(n)}_X\prod_{i=1}^{\kappa}
z_{K_ui}^{p_i}
\overline z_{K_ui}^{q_i} dK_{no}=
\\
=f(\mathbf x^2)\oint_{S_{R_Z}}e^{\mathbf i
\langle Z,K\rangle}F_{\mathbf Bp_iq_i}(X,K_u) dK_{no}
= \mathcal {HF}_{\mathbf Bp_iq_iR_Z}(f )(X,Z)
\nonumber
\end{eqnarray}
are eigenfunctions of $\Delta$ if and only if the function
$f(\mathbf x^2)=f(\tilde t)$ is an eigenfunction of the
radial operator (\ref{Lf_mu(x)}), where $p=\sum p_i$, $q=\sum q_i$, $n=p+q$.
Consider a Z-ball bundle with radius function $R_Z(\mathbf x)$ defining
a compact or non-compact ball$\times$ball-type domain. Then the
eigenfunctions satisfying the Dirichlet or Z-Neumann conditions on this
domain cannot be sought among the above eigenfunctions, because the
functions $F_{Qpq}(X,K_u)$ resp. $F_{\mathbf Bp_iq_i}(X,K_u)$ are not
spherical harmonics in the $K_u$-variable but rather combinations
of several spherical harmonics belonging to different eigenvalues of
the Z-spherical Laplacian. It turns out, however, that one can construct
the complete function space satisfying a given boundary condition by the
above formulas if the functions $F_{\dots}$ are replaced by their
projections $\Pi^{(s)}_{K_u}(F_{\dots})$ onto the space of $s^{th}$-order
spherical harmonics in the variable $K_u$.
Actually, this projection appears in a more subtle form,
$\Pi^{(vas)}_{K_u}=\Pi^{(\alpha)}_{K_u}$,
which first projects $F_{\dots}$ into the space of
$(v+a)^{th}$-order homogeneous polynomials
(where $v$ resp. $a$ refer to the degrees
of the functions to which $\varphi (K)$ resp. the $(p+q)^{th}$-order
polynomials are projected).
The projection to the $s^{th}$-order function
space then applies to these homogeneous functions. The most important
mathematical tool applied in these investigations is the Hankel transform,
developed later.
Although these new functions satisfy the boundary conditions,
they no longer remain eigenfunctions
of the complete Laplacian. However, they are still
eigenfunctions of the
partial operator $\OE$. They are called weak-force-eigenfunctions, which
can be considered electromagnetic-force-eigenfunctions because
both $\OE$ and the Ginsburg-Landau-Zeeman operators
can be reduced to the same radial
operator. The old ones, from which the new functions are derived,
are called linkage-eigenfunctions of $\Delta$;
they bridge the electromagnetic interactions with the weak interactions.
Since the extrinsic, $\mathbf{L}$, and intrinsic, $\mathbf{S}$,
operators do not
commute, the weak-force-eigenfunctions cannot be eigenfunctions
of the complete operator $\Delta =\OE +\mathbf{S}$. In other words, the
weak-force-eigenfunctions cannot be equal to the
strong-force-eigenfunctions, defined as the eigenfunctions of $\Delta$
satisfying a given boundary condition.
To construct these functions, the Fourier integrals must be considered
on the whole Z-space $\mathbb R^l$, by seeking them in the form
$
\int_{\mathbb R^l}e^{\mathbf i\langle Z,K\rangle}
\phi_\alpha (\mathbf x,\mathbf z)\Pi_{K}^{(\alpha)}(F_{\dots}(X,K))dK.
$
The action of $\Delta$ on these functions can be described in the form
$
\int_{\mathbb R^l}e^{\mathbf i\langle Z,K\rangle}
\bigcirc_\alpha (\phi_1,\dots ,\phi_d) \Pi_{K}^{(\alpha)}(F_{\dots}
(X,K))dK,
$
where the operator $\bigcirc_\alpha (\phi_1,\dots ,\phi_d)$, which maps
d-tuples of $(X,Z)$-radial functions to one another, is defined in terms
of Hankel transforms combined with radial derivatives of the
functions appearing in the arguments. Finding the eigenfunctions of
$\Delta$
means finding the eigen-d-tuples, $(\phi_1,\dots ,\phi_d)$, of the
radial, so-called roulette operator $\bigcirc_\alpha$.
As will be pointed out, these eigenfunctions exhibit properties
characteristic of strong-force eigenfunctions.
These arguments unify the three fundamental forces of particle
theory. The unification is exhibited by the common unpolarized operator
$\Delta$. After polarization, the forces separate into three categories,
corresponding to the function spaces on which the polarized
operators act. The details are as follows.
\subsection{Twisted Z-Fourier transforms.}
This is the main mathematical tool which incorporates de Broglie's
wave theory into the new models in a novel, more general form.
The name indicates that the Fourier transform is performed,
over each X-vector,
only on the Z-space in the same manner as if one would like to consider
the de Broglie waves
only in the center of the Lie group. However, an important new
feature is that this transform applies to products of functions,
where one factor depends only on
the center variable, $K$, while the other is a complex polynomial
of the X-variable defined in terms of the complex structures $J_{K_u}$,
where $K_u=K/\mathbf k$ and $\mathbf k=|K|$. This Z-Fourier transform
is said to be twisted by the latter polynomials.
Thus the transform affects also
the X-variable. This simple idea establishes the necessary connection
between the abstract axes, $K_u$, and the particles placed onto the
complex planes of the complex structures $J_{K_u}$.
This transform is defined in several alternative forms corresponding
to those introduced in the previous review. The difference is
that, over each X-vector, the following functions and integrals
are defined on the whole Z-space, contrary to the previous section,
where the integrals are defined just on Z-spheres.
Since the eigenfunctions constructed in the
review do not satisfy any of the boundary conditions, this
reformulation of the Z-Fourier transform is really necessary for
the complete solution of the considered problems.
In the first case, consider a
fixed X-vector $Q$, and define the same functions
\begin{equation}
\Theta_{Q}(X,K_u)=\langle Q+\mathbf iJ_{K_u}(Q),X\rangle\, ,\,
\overline{\Theta}_{Q}(X,K_u),
\end{equation}
as above.
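As an illustration of why these functions serve as complex coordinates, one can check numerically that $\Theta_Q$ is complex-linear with respect to $J_{K_u}$, i.e., $\Theta_Q(J_{K_u}X)=\mathbf i\,\Theta_Q(X)$. The sketch below uses a standard complex structure on $\mathbb R^4$ as a stand-in for $J_{K_u}$ (an illustrative choice; only orthogonality, skew-symmetry, and $J^2=-\mathrm{id}$ are used).

```python
import numpy as np

# Standard complex structure on R^4 (orthogonal, skew-symmetric, J^2 = -id);
# an illustrative stand-in for J_{K_u}:
J = np.array([[0., -1., 0.,  0.],
              [1.,  0., 0.,  0.],
              [0.,  0., 0., -1.],
              [0.,  0., 1.,  0.]])
assert np.allclose(J @ J, -np.eye(4))

rng = np.random.default_rng(0)
Q, X = rng.standard_normal(4), rng.standard_normal(4)

def theta(Q, X):
    # Theta_Q(X) = <Q + i J(Q), X>
    return (Q + 1j * (J @ Q)) @ X

# Theta_Q is holomorphic: evaluating on J(X) multiplies the value by i
assert np.isclose(theta(Q, J @ X), 1j * theta(Q, X))
```

The same computation with the conjugate shows that $\overline\Theta_Q$ is anti-holomorphic.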
For fixed integers $p,q\geq 0$ and $L^2$-function
$
\phi (\mathbf x,K),
$
consider the Z-Fourier transform
\begin{equation}
\mathcal F_{Qpq}(\phi )(X,Z)=
\int_{\mathbf z} e^{\mathbf i
\langle Z,K\rangle}
\phi (\mathbf x,K)
\Theta_{Q}^p(X,K_u)\overline\Theta^q_{Q}(X,K_u)dK,
\end{equation}
which is said to be twisted by the polynomial
$
\Theta_{Q}^p(X,K_u)\overline\Theta^q_{Q}(X,K_u).
$
The function $\phi$ is also considered in the form
$\phi (\mathbf x,\mathbf k)\varphi (K_u)$, where $\varphi$ is
a homogeneous polynomial of the K-variable.
The $L^2$-space spanned by these functions is denoted by
$\mathbf \Phi^{(n)}_{Qpq}$, where $n=p+q$ indicates that these functions
are $n^{th}$-order polynomials regarding the $X$-variable.
The space spanned by
the twisted functions
$\phi (\mathbf x,K)\Theta_{Q}^p(X,K_u)\overline\Theta^q_{Q}(X,K_u)$
is denoted by
$\mathbf {P\Phi}_{Qpq}^n$. This is the pre-space to which the Fourier
transform is applied.
Instead of a single vector $Q$, the second alternative form is defined
with respect to $\kappa =k/2$ independent vectors,
$\mathbf B=\{B_1,\dots ,B_{\kappa}\}$, of the X-space. Such a system
forms a complex basis for almost all complex structures $J_{K_u}$,
where $K_u=K/\mathbf k$. Now the twisting functions are polynomials
of the complex coordinate functions
\begin{equation}
\{z_{K_u1}(X)=\Theta_{B_1}(X,K_u),\dots
,z_{K_u\kappa}(X)=\Theta_{B_{\kappa}}(X,K_u)\},
\end{equation}
where these formulas indicate how the coordinate functions
can be expressed in terms of the above $\Theta$-functions.
For appropriate functions $\phi (\mathbf x,K)$ and polynomial exponents
$(p_i,q_i)$ (where $i=1,\dots ,k/2$) transform
$\mathcal F_{\mathbf B(p_iq_i)}(\phi\varphi )(X,Z)$ is defined by:
\begin{eqnarray}
\int_{\mathbb R^l} e^{\mathbf i\langle Z,K\rangle}
\phi (\mathbf x,\mathbf k)\varphi (K_u)
\prod_{i=1}^{\kappa}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X)dK,
\end{eqnarray}
where $\varphi (K_u)$ is the restriction of an $m^{th}$-order homogeneous
polynomial, $\varphi (K)$, onto the unit sphere of the K-space.
If $\sum_i(p_i+q_i)=n$, then the twisting functions are $n^{th}$-order
complex valued polynomials in the X-variable and, for any fixed $X$,
the whole function is of class $L^2_K$ in the K-variable. These
properties are inherited also by the transformed functions.
When $\phi (\mathbf x,K)$
runs through all functions which are of class $L^2_K$,
for any fixed $|X|$, the transformed functions span the function space
$\mathbf \Phi_{\mathbf B p_iq_i}^{(n)}$.
All these function spaces, defined for index sets
satisfying $n=\sum (p_i+q_i)$, span the function space denoted by
$\mathbf \Phi_{\mathbf B}^{(n)}=
\sum_{\{(p_iq_i)\}}\mathbf \Phi_{\mathbf Bp_iq_i}^{(n)}$.
The corresponding pre-spaces are denoted by
$\mathbf{P\Phi}_{\mathbf B p_iq_i}^{(n)}$
and
$\mathbf{P\Phi}_{\mathbf B}^{(n)}$
respectively.
The twisted Z-Fourier transforms (\ref{fourR_Z})-(\ref{fourprod}),
defined in the previous section by
considering Dirac-type functions concentrated on spheres $S_{R}$ of
radius $R(\mathbf x)$ (that is, the radius depends just on $\mathbf x$),
can be generated by the familiar $L^\infty$
approximation of these Dirac-type functions by $L^2$ functions.
The function spaces
$\mathbf{\Phi}_{\mathbf B p_iq_i R}^{(n)}$,
$\mathbf{\Phi}_{\mathbf B R}^{(n)}$, and their pre-spaces
$\mathbf{P\Phi}_{\mathbf B p_iq_iR}^{(n)}$
resp.
$\mathbf{P\Phi}_{\mathbf BR}^{(n)}$ are defined similarly as for $L^2$
functions. They are defined also for one-pole functions; in these cases
the notation is the same with $\mathbf B$ replaced by $Q$.
In many cases the
theorems hold true for each version of these function spaces. For this
reason we introduce the unified notation
$\mathbf{\Phi}_{...R}^{(n)}$ and $\mathbf{P\Phi}_{...R}^{(n)}$,
where the dots represent the symbols
introduced above in the indicated places. A unified notation for
the total spaces is
$\mathbf{\Phi}_{.R}^{(n)}$ and $\mathbf{P\Phi}_{.R}^{(n)}$,
indicating that the symbols $p_iq_i$ do not show up in these formulas.
If the letter $R$ is omitted, the formulas concern the previous cases
where the functions are of class $L^2$ in the $K$-variable.
The elements of the latter total function spaces are
complex valued functions defined on the $(X,Z)$-space such that,
up to multiplicative X-radial functions, they are
$n^{th}$-order polynomials in the X-variable and, for any fixed $X$,
they are $L^2$ functions in the Z-variable. It is very
important to clarify the relations between the above twisted spaces
and the latter complex valued function spaces. By considering an
arbitrary real basis $\mathbf Q=\{Q_1,\dots ,Q_k\}$ on the X-space,
the complex valued functions appear in the form
$\sum_{\{a_1,\dots ,a_k\}}\phi_{a_1,\dots ,a_k}(|X|,Z)
\prod_{i=1}^k\langle Q_i,X\rangle^{a_i}$,
where the sum is considered for all sets $\{a_1,\dots ,a_k\}$ of
non-negative integers satisfying $\sum a_i=n$ and functions
$\phi$ are $L^2_Z$-functions for any fixed $|X|$. The function space
spanned by these functions is called
{\it straight $L^2_Z$ space of complex valued
$(X,Z)$-functions}. The Z-Fourier transform
performed on such functions is called {\it straight Z-Fourier transform}.
The explanations below show that the twisted functions
$\phi (|X|,K)
\prod_{i=1}^{k/2}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X)$
can be converted to straightly represented functions, in terms of which
the twisted Z-Fourier transform becomes a straight Z-Fourier transform.
This conversion works also in the opposite (from the straight to the
twisted) direction. The precise details below
show that the straightly defined
function spaces are complete regarding the $L^2_Z$ norm, and that both
$\mathbf \Phi_{\mathbf B}^n$ and the pre-space
$\mathbf {P\Phi}_{\mathbf B}^n$
are everywhere dense subspaces of them. For this reason, the
corresponding straight
spaces are denoted by
$\overline{\mathbf \Phi}_{\mathbf B}^n$ resp.
$\overline{\mathbf {P\Phi}}_{\mathbf B}^n$.
These spaces are, however, equal.
Ultimately, both versions of the Z-Fourier
transforms define automorphisms
(one-to-one and onto maps) of this very same ambient function space.
This situation can be illuminated by the real $k\times k$ matrix field
$A_{ij}(K_u)$ defined on the unit Z-vectors
which transforms the real basis
$\mathbf Q$ to the vector system
$\mathbf B_{\mathbb R}=\{B_1,\dots ,B_{k/2},B_{(k/2)+1}=J_{K_u}(B_1),
\dots , B_k=J_{K_u}(B_{k/2})\}$.
That is, this field is uniquely determined by the
formula $B_i=\sum_{j=1}^kA_{ij}Q_j$,
where $i=1,\dots ,k$. The entries are
polynomials of $K_u$. By plugging these formulas
into the twisted Z-Fourier
transform formula, one gets the straight
representations both of the twisted
functions and their Z-Fourier transforms.
Conversion from the straight to the twisted functions is more complicated.
In this case vectors $Q_i$ should be exchanged for vectors $B_j$
according to the formula $Q_i=\sum_{j=1}^kA^{-1}_{ij}B_j$.
Then vectors
$B_j$ resp. $B_{(k/2)+j}$, where $j\leq k/2$, should be expressed
in the form
\begin{eqnarray}
B_j={1\over 2}(B_j+J_{K_u}(B_j))+{1\over 2}(B_j-J_{K_u}(B_j))\quad
{\rm resp.}
\\
B_{(k/2)+j}=J_{K_u}(B_j)=
{1\over 2}(B_j+J_{K_u}(B_j))-{1\over 2}(B_j-J_{K_u}(B_j)),
\end{eqnarray}
which, after raising to powers and appropriate rearranging,
provide the desired twisted formulas. Due to the
degeneracy of the matrix field $A_{ij}$ on $S_{\mathbf B}$, some entries
of the inverse matrix field $A^{-1}_{ij}$ have limits $+\infty$ or
$-\infty$ of order at most $k/2$ on this singularity set. Therefore,
the Z-Fourier transform of a term of
the twisted function involving such entries may not be defined, despite
the fact that the Z-Fourier transform of the whole function exists and is
equal to the transform of the straightly represented function. In other
words, the infinities appearing in the separate terms cancel each other
out in the complete function.
This contradictory situation can be resolved as follows. For a given
$\epsilon >0$, let $S_{\mathbf B\epsilon}$ be the $\epsilon$-neighborhood
of the singularity set on the unit Z-sphere and let
$\mathbb RS_{\mathbf B\epsilon}$ be the conic set covered by the rays
emanating from the origin which are spanned by unit Z-vectors pointing
to the points of $S_{\mathbf B\epsilon}$. For an $L^2_Z$-function
$\phi (\mathbf x,K)$ discussed above let
$\phi_\epsilon(\mathbf x,K)$ be the function
which is the same as $\phi$ on the outside of
$\mathbb RS_{\mathbf B\epsilon}$ and it is equal to zero in the inside of
this set. Then, regarding the $L^2_Z$-norm,
$\lim_{\epsilon\to 0}\phi_\epsilon =\phi$ holds. For a function $F(X,K)$,
expressed straightly, define $F_\epsilon (X,K)$ by substituting each
function $\phi$ by $\phi_\epsilon$. If function $\psi_\epsilon (K)$ is
defined by $1$ outside of $\mathbb RS_{\mathbf B\epsilon}$ and by $0$
inside, then $F_\epsilon (X,K)=\psi_\epsilon (K)F(X,K)$ holds. Convert
$F_\epsilon$ into the twisted form. Then the twisted Z-Fourier transform
is well defined for each
term of the twisted expression, providing the same
transformed function as what is defined by the straight Z-Fourier
transform. Thus each straightly represented $L^2_Z$ function is an
$L^2_Z$-limit of functions which can be converted to twisted functions
in which each twisted term is an $L^2_Z$ function having well defined
twisted Z-Fourier transform.
In this process, the function $\psi_\epsilon (K)$, which is constant in radial
directions, can be chosen to be of
class $C^\infty$, satisfying $0\leq\psi_\epsilon (K)\leq 1$; furthermore,
it vanishes
on $\mathbb RS_{\mathbf B\epsilon /2}$ and is equal to $1$ outside of
$\mathbb RS_{\mathbf B\epsilon}$. Thus we have:
\begin{theorem}
\label{welldef1}
The twisted functions, defined in each of
their terms by $L^2_Z$ functions,
form an everywhere dense subspace ${\mathbf {P\Phi}}_{\mathbf B}^n$
in the complete space $\overline{\mathbf {P\Phi}}_{\mathbf B}^n$
of straightly defined
$L^2_Z$ functions. Although the first space depends on $\mathbf B$, the
second one is a uniquely determined function space which depends
neither on $\mathbf B$ nor on $\mathbf Q$.
A general function from the ambient
space becomes an appropriate twisted function belonging
to the dense subspace
after multiplying it with a function $\psi_\epsilon (K)$
which is zero on the above described set
$\mathbb RS_{\mathbf B\epsilon}$ and equal to $1$ on the complement of
this set. This function, $\psi_\epsilon (K)$, can be chosen
to be of class $C^\infty$
such that
it is constant in radial directions
satisfying $0\leq\psi_\epsilon (K)\leq 1$, furthermore,
it vanishes
on $\mathbb RS_{\mathbf B\epsilon /2}$ and is equal to $1$ outside of
$\mathbb RS_{\mathbf B\epsilon}$.
Then the $L^2_Z$-approximation is defined
by the limiting $\epsilon\to 0$.
The twisted Z-Fourier transform continuously extends
to the straight Z-Fourier transform defined on the ambient space.
On the ambient
space this Fourier transform is an automorphism.
The Z-Fourier transform is a bijection between the everywhere
dense subspaces
${\mathbf {P\Phi}}_{\mathbf B}^n$
and ${\mathbf \Phi}_{\mathbf B}^n$. In particular, the relation
$\overline{\mathbf {P\Phi}}_{\mathbf B}^n=
\overline{\mathbf {\Phi}}_{\mathbf B}^n$ holds.
The same statements hold true also for Q-pole function spaces. Total
spaces
$\mathbf{\Phi}_{\mathbf B}^{(n)}$ resp. $\mathbf{P\Phi}_{\mathbf B}^{(n)}$
are spanned by the corresponding Q-pole functions whose poles, $Q$,
are in the real span of vectors belonging to $\mathbf B$.
\end{theorem}
In order to show the one-to-one property,
suppose that the Z-Fourier transform of a
twisted function vanishes. That is:
\begin{eqnarray}
\label{fou=0}
\int_{\mathbb R^l} e^{\mathbf i\langle Z,K\rangle}\sum_{\{(p_i,q_i)\}}
\phi_{(p_i,q_i)} (|X|,K)
\prod_{i=1}^{k/2}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X)dK=
\\
=\int_{\mathbb R^l} e^{\mathbf i\langle Z,K\rangle}\sum_{\{(p_i,q_i)\}}
\Phi_{(p_i,q_i)} (X,K)dK=0.
\nonumber
\end{eqnarray}
Then, for any fixed $X$, the Z-Fourier transform of
$\sum_{\{(p_i,q_i)\}}\Phi_{(p_i,q_i)} (X,K)$ vanishes.
Therefore, this function
must be zero for all $X$ and for almost all $K$. For vectors $K_u$
not lying in the singularity set $S_{\mathbf B}$, the system $\mathbf B$ is a
complex basis and the complex polynomials
$\prod_{i=1}^{k/2}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X)$
of the X-variable are linearly
independent for distinct sets $\{(p_i,q_i)\}$
of exponents. Thus all component functions $\phi_{(p_i,q_i)} (|X|,K)$
of the $K$ variable must vanish almost everywhere.
This proves that the twisted
Z-Fourier transform is a one-to-one map of the everywhere
dense twisted subspace onto the everywhere dense twisted range space.
The same proof, applied to straight
functions, yields the statement on the
straight space, where the Z-Fourier transform is obviously an onto map by
the well-known theorem on Fourier transforms.
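The smooth cutoff $\psi_\epsilon$ appearing in Theorem \ref{welldef1} can be sketched in one variable, with $t$ standing for the angular distance from the singularity set. This is the standard $e^{-1/x}$-construction; the concrete profile is an illustrative choice.

```python
import math

def psi(t, eps):
    """C^infinity step: 0 for t <= eps/2, 1 for t >= eps, smooth in between.
    Here t stands for the (angular) distance from the singularity set."""
    def g(x):  # smooth, vanishing for x <= 0
        return math.exp(-1.0 / x) if x > 0 else 0.0
    a, b = g(t - eps / 2), g(eps - t)
    return a / (a + b) if a + b > 0 else 0.0

eps = 0.2
assert psi(0.05, eps) == 0.0          # inside the eps/2-neighborhood
assert psi(0.5, eps) == 1.0           # outside the eps-neighborhood
assert 0.0 < psi(0.15, eps) < 1.0     # smooth transition in between
```

Extended radially (constant along rays from the origin), this yields a cutoff with the properties stated in the theorem.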
This section is concluded by introducing several invariant functions
by which physical objects such as charge and volume will be precisely
defined. Suppose that system $\mathbf B$ consists of orthonormal
vectors and let $\mathbb B^\perp$ be the orthogonal complement of
the real span, $\mathbb B=Span_{\mathbb R}(\mathbf B)$, of vectors
belonging to $\mathbf B$. Basis $\mathbf B$ defines an orientation
on $\mathbb B$. Let $\mathbb B_+^\perp$ resp. $\mathbb B_-^\perp$
be the two possible orientations which can be chosen on $\mathbb B^\perp$
such that, together with the orientation of $\mathbb B$, they define
positive resp. negative orientation on the X-space, $\mathbb R^k$.
Charging a particle system represented by $\mathbf B$ means choosing
one of these two orientations. Once the system is charged, choose an
orthonormal basis also in $\mathbb B^\perp$, complying with the chosen
orientation. Together with $\mathbf B$, this system defines an
orthonormal basis, $\mathbf Q$, on the X-space by which the matrix
field $A_{ij}(Z_u)$ defined above can be introduced. The invariants
of this matrix field are independent of the basis chosen on
$\mathbb B^\perp$. The invariant functions
$\mathit{ch}(Z_u)=Tr(A_{ij}(Z_u))-(k/2)$ and
$\mathit{v}(Z_u)=det(A_{ij}(Z_u))$ are called {\it charger} and
{\it volumer} respectively. Their integrals regarding a proper probability
density define the charge resp. mass of the particle being in a given
proper state (these concepts are precisely
described later in this paper).
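To make the charger and the volumer concrete, consider a toy computation in the lowest-dimensional setting with X-space $\mathbb R^2$ and Z-space $\mathbb R^1$ (an illustrative model, not a case singled out in the paper). For the two unit Z-vectors $K_u=\pm 1$, with $J_{K_u}$ a $\pm 90^\circ$ rotation, both invariants take the values $\pm 1$, so their sign records the chosen orientation:

```python
import numpy as np

# Toy model: X-space R^2, Z-space R^1 (k = 2, l = 1), so kappa = k/2 = 1.
# For the unit Z-vectors Ku = +1 and Ku = -1 the complex structure is a
# rotation by +90 or -90 degrees (an illustrative choice):
def J(Ku):
    return Ku * np.array([[0.0, -1.0], [1.0, 0.0]])

Q = np.eye(2)                  # orthonormal basis Q = {Q1, Q2} of the X-space
B1 = np.array([1.0, 0.0])      # the system B = {B1}

def A(Ku):
    # rows express B_R = {B1, J_Ku(B1)} in the basis Q:  B_i = sum_j A_ij Q_j
    B_R = np.vstack([B1, J(Ku) @ B1])
    return B_R @ np.linalg.inv(Q)

for Ku in (1.0, -1.0):
    a = A(Ku)
    charger = np.trace(a) - 2 / 2      # ch = Tr(A) - k/2
    volumer = np.linalg.det(a)         # v  = det(A)
    assert abs(charger - Ku) < 1e-12 and abs(volumer - Ku) < 1e-12
```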
\subsection{Hankel transform.}
This twisted Fourier transform is investigated by means of the
Hankel transform. The statement regarding this transform
asserts:
\begin{theorem} The Fourier
transform considered on
$\mathbb R^l$
transforms a product,
$f(r)F^{(\nu )}(\theta )$,
of a radial function
and a spherical harmonic
into a product,
$H^{(l)}_\nu (f)(r)F^{(\nu )}(\theta )$, of the same form, i.e.,
for any fixed degree $\nu$ of the spherical harmonics, it
induces a map,
$f\mapsto H^{(l)}_\nu (f)$,
on the radial functions, the so-called Hankel transform, which is uniquely
determined for any fixed indices $l$ and $\nu$.
\end{theorem}
This is actually a weak form of the original Hankel theorem,
which can be directly settled by the following
mean value theorem for the spherical harmonics, $F^{(\nu )}(\theta )$,
defined on the unit sphere, $S$, about the origin
of $\mathbb R^l$. These functions are eigenfunctions of the Laplacian
$\Delta_S$ with eigenvalue $\lambda_\nu$; moreover, there exists a
uniquely determined radial eigenfunction $\varphi_{\lambda_\nu}(\rho )$
on $S$,
where $0\leq \rho\leq\pi$ and $\varphi_{\lambda_\nu}(0)=1$, which
has the same eigenvalue $\lambda_\nu$, such that,
on a hypersphere $\sigma_\rho(\theta)\subset S$
of radius
$\rho$ and center $\theta$ on the ambient sphere $S$, the identity
$\oint_\sigma F^{(\nu )}d\sigma_{no} =F^{(\nu )}(\theta )
\varphi_{\lambda_\nu}(\rho )$ holds, where $d\sigma_{no}$
is the normalized measure measuring $\sigma$ by $1$.
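In the lowest case $l=2$ the mean value theorem can be checked by hand: on the circle $S^1$ the $\nu^{th}$-order harmonic is $\cos (\nu\theta )$, the ``sphere'' of radius $\rho$ about $\theta$ consists of the two points $\theta\pm\rho$, and the radial eigenfunction is $\varphi_{\lambda_\nu}(\rho )=\cos (\nu\rho )$. A numerical sketch with illustrative values of $\nu$, $\theta$, $\rho$:

```python
import math

nu, theta, rho = 3, 0.7, 0.4                  # illustrative values
F = lambda t: math.cos(nu * t)                # spherical harmonic on S^1
phi = lambda r: math.cos(nu * r)              # radial eigenfunction, phi(0) = 1
# normalized integral over the "sphere" {theta - rho, theta + rho}:
mean = (F(theta + rho) + F(theta - rho)) / 2
assert abs(mean - F(theta) * phi(rho)) < 1e-12
assert phi(0.0) == 1.0
```

This is just the identity $\cos (\nu (\theta +\rho ))+\cos (\nu (\theta -\rho ))=2\cos (\nu\theta )\cos (\nu\rho )$.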
This mean value theorem can be used
for computing the Fourier transform
$\int_{\mathbb R^l}e^{\mathbf i\langle Z,K\rangle}f(|K|)F^{(\nu )}
(\theta_K)dK$ at a point $Z=(|Z|,\theta_Z)$.
This integral is computed by Fubini's theorem: one considers
the line $l_Z(t)$ spanned by $\theta_Z$, parameterized by arc-length
$t$ such that $Z$ corresponds to a positive parameter value, and one
computes the integrals first in
the hyperplanes intersecting $l_Z$ at $t$ perpendicularly and then along
$l_Z$ by $dt$.
On the hyperplanes, write the integral in polar coordinates defined
around the intersection point with $l_Z$, where the radial Euclidean
distance from this origin is denoted by $\tau$. Consider polar
coordinates also
on $S$ around $\theta_Z$, where the radial spherical distance from
this origin is denoted by $\rho$. Then $\tau =|t||\tan \rho |$ holds;
furthermore, a straightforward computation yields:
\begin{eqnarray}
\int_{\mathbb R^l}e^{\mathbf i\langle Z,K\rangle}f(|K|)F^{(\nu )}
(\theta_K)dK=
H^{(l)}_\nu (f)(r)F^{(\nu )}(\theta_Z),\quad {\rm where}
\\
H^{(l)}_\nu (f)(r)=\Omega_{l-2}\int_{-\infty}^\infty
e^{\mathbf irt} |t|^{l-1}
\int_0^\pi f(|t\tan\rho |)\varphi_{\lambda_\nu}(\rho )
{\sin^{l-2}\rho\over |\cos\rho |^l}
d\rho dt,
\nonumber
\end{eqnarray}
where $\Omega_{l-2}$ denotes the volume of an $(l-2)$-dimensional
Euclidean unit sphere.
These formulas prove the above statement completely.
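The factorized form can also be probed numerically. The sketch below (for $l=2$, $\nu =1$, with an illustrative Gaussian radial profile $f$) evaluates the Fourier integral on a polar grid and checks the rotational equivariance $F(R_\alpha Z)=e^{\mathbf i\nu\alpha}F(Z)$, which is precisely the statement that the angular factor $F^{(\nu )}$ passes through the transform unchanged:

```python
import numpy as np

nu = 1
f = lambda r: np.exp(-r**2)                 # illustrative radial profile

# polar grid on the K-plane (truncated at |K| = 6, where f is negligible)
k = np.linspace(1e-6, 6.0, 400)
th = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
K, TH = np.meshgrid(k, th, indexing="ij")
dk, dth = k[1] - k[0], th[1] - th[0]
KX, KY = K*np.cos(TH), K*np.sin(TH)

def four(z):
    """F(Z) = int e^{i<Z,K>} f(|K|) e^{i nu theta_K} dK, as a Riemann sum."""
    zx, zy = z
    integrand = np.exp(1j*(zx*KX + zy*KY)) * f(K) * np.exp(1j*nu*TH) * K
    return integrand.sum() * dk * dth

r, a = 1.3, 0.9
F0 = four((r, 0.0))
Fa = four((r*np.cos(a), r*np.sin(a)))
# rotating Z by the angle a multiplies the transform by e^{i*nu*a}:
assert abs(Fa - np.exp(1j*nu*a) * F0) < 1e-8
```

Equivariance together with linearity forces the product form $H^{(l)}_\nu (f)(r)F^{(\nu )}(\theta_Z)$.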
\subsection{Projecting to spherical harmonics.}
Among the other mathematical tools by which twisted Z-Fourier
transforms are investigated are the projections
$
\Pi_{K_u}^{(r,s)} (\varphi (K_u)
\prod_{i=1}^{\kappa}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X)),
$
assigning $s^{th}$-order polynomials to
$r^{th}$-order polynomials of the $K_u$-variable.
Although the functions they are applied to may
depend also on the X-variable, they refer strictly to the $K_u$-variable,
meaning that they are performed, over each X-vector, in the Z-space.
These characteristics are exhibited also by the fact that these
projections appear as certain polynomials of the Laplacian
$\Delta_{K_u}$ defined on the unit K-sphere. It is much more convenient
to describe them in terms of homogeneous functions, which are projected
by them to harmonic homogeneous polynomials. In this version
these polynomials depend on $K$, and the projections can be described
in terms of the Laplacian $\Delta_{K}$ defined on the ambient space.
By
restriction onto the unit K-spheres, one can then easily find the
desired formulas in terms of $\Pi_{K_u}^{(r,s)}$.
For an $n^{th}$ order homogeneous polynomial, $P_n(K)$,
projection $\Pi^{(n)}_K:=\Pi^{(n,n)}_K$ onto the space of $n^{th}$ order
harmonic polynomials can be computed by the formula
\begin{equation}
\label{proj}
\Pi_K^{(n)}(P_n(K))
=\sum_s C^{(n)}_s \langle K,K\rangle^s\Delta^s_K(P_n(K)),
\end{equation}
where $C^{(n)}_0=1$ and the other coefficients can be determined
by the recursive formula
$
2s(2(n-s)+l-2)C^{(n)}_s+C^{(n)}_{s-1}=0,
$
where $l$ denotes the dimension of the K-space.
In fact, it is exactly for these coefficients that the right side,
for an arbitrary $P_n(K)$, defines a homogeneous
harmonic polynomial.
These formulas can be easily established for polynomials
$P_n(K)=\langle W,K\rangle^n$, defined by a fixed Z-vector $W$.
Since they span
the space of $n^{th}$ order homogeneous polynomials, the statement
follows also for general complex valued polynomials.
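For a concrete check of the projection, consider the quadratic $P_2(K)=\langle W,K\rangle^2$ in $\mathbb R^3$. In the sketch below the coefficient of the correction term is solved directly from the harmonicity requirement $\Delta_K\Pi =0$ (it equals $-1/6$ in dimension $3$), and the result is verified with a finite-difference Laplacian:

```python
import math

w = (1.0, 2.0, -0.5)                    # the fixed Z-vector W (illustrative)
w2 = sum(c * c for c in w)
dot = lambda K: sum(wc * kc for wc, kc in zip(w, K))

P = lambda K: dot(K)**2                 # P_2(K) = <W,K>^2
def Pi(K):                              # harmonic projection in R^3
    k2 = sum(c * c for c in K)
    return P(K) - w2 * k2 / 3.0         # Delta P = 2|W|^2; coefficient -1/6

def laplacian(F, K, h=1e-4):            # finite-difference Laplacian in R^3
    s = 0.0
    for i in range(3):
        Kp, Km = list(K), list(K)
        Kp[i] += h; Km[i] -= h
        s += (F(tuple(Kp)) - 2 * F(K) + F(tuple(Km))) / h**2
    return s

K0 = (0.3, -1.1, 0.7)
assert abs(laplacian(Pi, K0)) < 1e-4            # the projection is harmonic
assert abs(laplacian(P, K0) - 2 * w2) < 1e-4    # the raw polynomial is not
```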
This projection is a surjective map of the $n^{th}$-order
homogeneous polynomial space
$\mathcal P^{(n)}(K)$ onto the space, $\mathcal H^{(n)}(K)$,
of the $n^{th}$-order homogeneous harmonic polynomials
whose kernel
is formed by polynomials of the form $\langle K,K\rangle P_{n-2}(K)$.
Subspace
$\mathcal H^{(n)}\subset \mathcal P^{(n)}$ is a complement
to this kernel.
The complete decomposition of $P_n(K)$ appears in the
form $P_n(K)=\sum_i\langle K,K\rangle^i HP_{n-2i}(K)$, where $i$
starts with $0$ and runs through the integer part, $[n/2]$, of
$n/2$, and $HP_{n-2i}(K)$ are harmonic
polynomials of order $(n-2i)$. It can
be established by successive application of the above computations. In the
second step of this process, one considers the functions
$P_n(K)-\Pi_K^{(n)}(P_n(K))$, which appear in the form
$\langle K,K\rangle P_{n-2}(K)$, and obtains $HP_{n-2}(K)$ by the
projection $\Pi_K^{(n-2)}(P_{n-2}(K))$. Then this second term
is also removed from $P_n(K)$, in order to get ready for the third step,
where the same computations are repeated. This process can be completed
in at most $[n/2]$ steps. It is clear that the
projections $\Pi_K^{(n,n-2i)}$ producing the functions $HP_{n-2i}(K)$
from $P_n(K)$ are of the form
$\Pi_K^{(n,n-2i)}=D_{(n,n-2i)}\Pi_K^{(n-2i,n-2i)}\Delta_K^i$,
where the indices $(n,n-2i)$ indicate that the projection maps
$n^{th}$-order polynomials
to $(n-2i)^{th}$-order harmonic polynomials (in this respect, projection
$\Pi_K^{(n)}$ is the same as $\Pi_K^{(n,n)}$). The technical calculation
of the constants $D_{(n,n-2i)}$
is omitted. The corresponding projections $\Pi_{K_u}^{(r,s)}$ defined
on the unit spheres can immediately be established by these projections
defined for homogeneous functions. They appear
as polynomials of the Laplacian $\Delta_{K_u}$ defined on the unit
$K_u$-sphere.
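The successive peeling process can be illustrated on a cubic in $\mathbb R^3$ (an illustrative example with directly solved coefficients): $P_3(K)=\langle w,K\rangle^3$ decomposes as $HP_3+\langle K,K\rangle HP_1$, with both components harmonic.

```python
import math

w = (1.0, 0.5, -2.0)                    # illustrative vector
w2 = sum(c * c for c in w)
dot = lambda K: sum(wc * kc for wc, kc in zip(w, K))

P   = lambda K: dot(K)**3               # P_3(K) = <w,K>^3
HP3 = lambda K: dot(K)**3 - 0.6 * w2 * sum(c*c for c in K) * dot(K)
HP1 = lambda K: 0.6 * w2 * dot(K)       # so that P = HP3 + |K|^2 * HP1

def lap(F, K, h=1e-4):                  # finite-difference Laplacian in R^3
    s = 0.0
    for i in range(3):
        Kp, Km = list(K), list(K)
        Kp[i] += h; Km[i] -= h
        s += (F(tuple(Kp)) - 2 * F(K) + F(tuple(Km))) / h**2
    return s

K0 = (0.4, -0.9, 1.2)
k2 = sum(c * c for c in K0)
assert abs(P(K0) - (HP3(K0) + k2 * HP1(K0))) < 1e-10       # the decomposition
assert abs(lap(HP3, K0)) < 1e-4 and abs(lap(HP1, K0)) < 1e-4  # both harmonic
```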
Since these projections depend just on the degrees $r$ and $s$,
they apply also to twisted functions which
depend on the X-variable as well.
In order to make the Hankel transform applicable, they are used for
decomposing functions in the form
\begin{eqnarray}
\sum_{(r;s)} f_{(r;s)}(\mathbf x,\mathbf k)
\Pi_{K_u}^{(r,s)} (\varphi (K_u)
\prod_{i=1}^{\kappa}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X))\\
=f_{\alpha}(\mathbf x,\mathbf k)
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u)),
\nonumber
\end{eqnarray}
where the right side is just a short way
to describe the sum appearing on the left side in terms of compound
indices $\alpha =(r,s)$ and functions $F^{(p_i,q_i)}$.
To be more precise, the functions
$\phi_n(\mathbf x,K)P^{(n)}(X,K_u)$
appearing in the pre-spaces
must be brought to appropriate forms
before these projections can be directly applied to them. First of all,
the function $\phi_n(\mathbf x,K)$ should be considered in the
form $\phi_n(\mathbf x,K)=\sum_v\phi_{n,v}
(\mathbf x,\mathbf k)\varphi^{(v)}(K)$,
where
$\varphi^{(v)}(K)$ is a $v^{th}$-order homogeneous harmonic polynomial.
Then, after implementing all term-by-term multiplications in the
products of
$\Theta_{B_i}=\langle B_i,X\rangle+\langle\mathbf iJ_{K_u}(B_i),X\rangle$
and
$\overline\Theta_{B_i}=\langle B_i,X\rangle-
\langle\mathbf iJ_{K_u}(B_i),X\rangle$, the above polynomials
have to be
brought to the form $P^{(n)}(X,K_u)=\sum_{a=0}^nP^{(n,a)}(X,K_u)$,
where the polynomial $P^{(n,a)}$ involves exactly $a$
linear factors of the form $\langle J_{K_u}(Q_i),X\rangle$. The
above projections defined in terms of $r$ directly act on the functions
$\varphi^{(v)}(K)P^{(n,a)}(X,K)$ satisfying $r=v+a$, which are
obtained by term-by-term multiplication of the sums given above for
$\phi_n(\mathbf x,K)$ and $P^{(n)}(X,K_u)$.
This complicated process can be considerably simplified by
considering only one-pole functions defined for single
$Q$'s which are in the real span of the vector system $\mathbf B$.
In fact, the 1-pole total spaces
$\mathbf\Phi_{Q}^{(n)}=\sum_{p,q}\mathbf\Phi_{Qpq}$, where $n=p+q$,
also span the total space
$\sum_{(p_i,q_i)}\mathbf\Phi^{(n)}_{\mathbf B}$; thus it is
really enough to
establish the theorems just for these simpler
one-pole functions. In this case, the function $P^{(n,a)}(X,K_u)$
is nothing but a constant multiple of the function
\begin{equation}
R_Q^{(n,a)}(X,K_u)=\langle Q,X\rangle^{n-a}
\langle J_{K_u}(Q),X\rangle^{a}=
\langle Q,X\rangle^{n-a} \langle [Q,X],K_u\rangle^{a}.
\end{equation}
Note that, depending on $p$ and $q$,
the component
of a particular $P^{(n)}(X,K_u)$ corresponding to a
given $0\leq a\leq n$ may vanish. However, there exist pairs $(p,q)$
for which this $a$-component is non-zero. The space
$\mathbf P^{(n,a)}(X,K)$ is defined as the span of the functions
$R_Q^{(n,a)}(X,K)$. Note that $K_u$ has been changed to $K$ in the above
formula. These functions are $n^{th}$- resp. $a^{th}$-order homogeneous
polynomials in the X- resp. K-variables. Keep
in mind that these functions can be derived from the functions
$\Theta_Q^p\overline\Theta_Q^q$,
where $p+q=n$, by linear combinations; therefore, they belong to the
above twisted function spaces. More
precisely, for a given $n$, there exists an invertible matrix
$M^{(n,a)}_{pq}$
such that
$R_Q^{(n,a)}(X,K_u)=\sum_{p,q}M^{(n,a)}_{pq}\Theta_Q^p\overline\Theta_Q^q$
holds, where $p+q=n$.
The projections
$\Pi_X^{(n)}=\sum_s C^{(n)}_s \langle X,X\rangle^s\Delta^s_X$
acting, regarding the X-variable, on the functions
$P(X,K_u)=\Theta^p_{Q}(X,K_u)\overline
\Theta^q_{Q}(X,K_u)$ resp.
$P(X,K_u)=\prod_{i=1}^{\kappa}z^{p_i}_{K_ui}(X)
\overline z^{q_i}_{K_ui}(X)$,
where $p+q=n$ resp. $\sum_i(p_i+q_i)=n$, are also
involved in these investigations.
Since these are $n^{th}$-order homogeneous functions
of $X$, the projection $\Pi_X^{(n)}$ applies to them
directly.
Then, for any fixed $K_u$,
function $\Pi_X^{(n)}(P(X,K_u))$ is an $n^{th}$-order homogeneous
harmonic polynomial regarding the X-variable.
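As a concrete illustration (a hypothetical numerical sketch, not part of the construction above): for $n=2$ in dimension $d$, the projection reduces to $P\mapsto P-\langle X,X\rangle\Delta_X P/(2d)$, and harmonicity of the projected function can be checked numerically, since central second differences are exact on quadratic polynomials.

```python
# Hypothetical numerical sketch: the degree-2 harmonic projection
#   H = P - <X,X> * (Delta_X P) / (2d)
# of P(X) = <Q,X>^2 in R^3 (d = 3, Q = e_1) is harmonic.

def P(x, y, z):
    return x * x  # <Q,X>^2 with Q = (1, 0, 0)

def H(x, y, z):
    # Delta_X P = 2, so H = P - (x^2 + y^2 + z^2) * 2 / (2 * 3)
    return P(x, y, z) - (x * x + y * y + z * z) / 3.0

def laplacian(f, x, y, z, h=1e-3):
    # Central second differences: exact for polynomials of degree <= 3.
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / (h * h)

print(laplacian(P, 0.4, -0.7, 1.2))  # ~2: P itself is not harmonic
print(laplacian(H, 0.4, -0.7, 1.2))  # ~0: the projected part is
```

For higher $n$ the projection needs further correction terms with the constants $C^{(n)}_s$; for $n=2$ the single term shown already removes the whole non-harmonic part.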
The twisted Z-Fourier transforms,
$\mathcal{HF}_{\mathbf B(p_iq_i)}(\phi )(X,Z)$,
involving these projections are defined by
\begin{eqnarray}
\label{projio1}
\int_{\mathbb R^l} e^{\mathbf i\langle Z,K\rangle}
\phi (|X|,K)\Pi_X^{(n)}
(\prod_{i=1}^{k/2}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X))dK.
\end{eqnarray}
The corresponding $L^2_Z$ function spaces spanned by the transformed
functions and the pre-space are denoted by
$\mathbf \Xi_{\mathbf Bp_iq_i}^{(n)}$ and
$\mathbf{P\Xi}_{\mathbf Bp_iq_i}^{(n)}$ respectively. These function spaces
are well defined also for one-pole functions and also for the third type
of Z-Fourier transforms defined for Dirac type generalized functions
concentrated on Z-sphere bundles.
Although the following results are not used in the rest of this paper,
because of their importance, we describe some mathematical
processes by which these
projections can be computed explicitly.
From here on, the formulas concern a fixed $n$, even if
this is not indicated explicitly.
Decomposition into
K-harmonic polynomials will be carried out for the K-homogeneous functions
$\varphi^{(v)}(K)R_Q^{(n,a)}(X,K)$ of order $r=v+a$.
According to two different representations of the first function,
these projections
will be described in two different ways.
The first description is more or less technical, yet very useful
in proving the independence theorems stated below.
In the second
description, the projected functions are directly constructed.
In both cases we consider Q-pole functions defined by a unit
vector $Q\in Span_{\mathbb R}{\mathbf B}=\mathbb B$. However,
the multipole cases
referring to vector systems $\mathbf B$ are also discussed
in the theorem established below.
According to the formula (\ref{projio1}), a pure harmonic
one-pole function
with pole $\zeta$ in the K-space is of the form
$\varphi_{\zeta}^{(v)}(K)=
\sum_s D_{2s}\mathbf k^{2s}\langle\zeta ,K\rangle^{v-2s}$,
where $D_0=|\zeta |^{v}$ and the other coefficients,
depending on $|\zeta |^{v-2s}$ and the constants $C_s$,
are uniquely determined by the
harmonicity assumption. It is well known that these functions span the
space of homogeneous harmonic polynomials. The functions
$\partial^c_{\tilde K}\varphi_{\zeta}^{(v)}$ obtained by
directional derivatives regarding a fixed vector $\tilde K$
are also homogeneous harmonic $\zeta$-pole functions, of order $(v-c)$.
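For instance (a hypothetical numerical sketch), for $v=3$, a unit pole $\zeta$, and $l=3$, the harmonicity assumption forces $\varphi_\zeta^{(3)}(K)=\langle\zeta,K\rangle^3-\tfrac{3}{l+2}\mathbf k^2\langle\zeta,K\rangle$, which can be verified numerically:

```python
# Hypothetical numerical sketch: for v = 3, l = 3 and the unit pole
# zeta = e_3, harmonicity forces
#   phi(K) = <zeta,K>^3 - (3/(l+2)) |K|^2 <zeta,K>.

L = 3  # dimension l of the K-space

def phi(kx, ky, kz):
    z = kz                            # <zeta, K> with zeta = (0, 0, 1)
    k2 = kx * kx + ky * ky + kz * kz  # |K|^2
    return z ** 3 - (3.0 / (L + 2)) * k2 * z

def laplacian(f, x, y, z, h=1e-3):
    # Central second differences: exact for polynomials of degree <= 3.
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / (h * h)

print(laplacian(phi, 0.3, -0.5, 0.9))  # ~0: phi is harmonic
```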
The action of the operator $\Delta^b_{K}$ on
$\varphi_\zeta^{(v)}(K)R_Q^{(n,a)}(X,K)$ results in the function:
\begin{equation}
\sum_{c=0}^{[{a\over 2}]}D_c\langle Q,X\rangle^{n-a}
(\partial_\zeta^{b-c}\varphi_\zeta^{(v)})(K)
\langle J_\zeta (Q),X\rangle^{b-c} \langle J_{K} (Q),X\rangle^{a-b-c}
|[Q,X]|^{2c}.
\nonumber
\end{equation}
The terms of this sum are obtained such that $\Delta_K^c$ acts
on $R_Q^{(n,a)}(X,K)$, resulting in the very last factor, while the others
are due to the action of $\Delta^{b-c}$ on the product, according to
the formula
$
\partial_\zeta^{b-c}\varphi_\zeta^{(v)}\,
\partial_\zeta^{b-c}R_Q^{(n,a)}(X,K)
$.
Note that, because of the harmonicity, the action of $\Delta_K$ on
$\varphi_\zeta^{(v)}$ is trivial.
When the complete projection $\Pi^{(s)}_K$ is computed,
the $b$'s involved in the formula are denoted by $b_j$.
For a given $c$, factor out
$|[Q,X]|^{2c}$ from the corresponding terms. Thus the final projection
formula appears in the form
$\sum_c|K|^{4c}P_c^{(s)}(X,K)|[Q,X]|^{2c}$,
where the term $P_c^{(s)}$ is equal to
\begin{equation}
\label{P^s_q}
\sum_{j}D_{cj}\langle Q,X\rangle^{n-a}
(\partial_\zeta^{b_j-c}\varphi_\zeta^{(v)})(K)
\langle J_\zeta (Q),X\rangle^{b_j-c} \langle J_{K} (Q),X\rangle^{a-b_j-c}.
\end{equation}
This is a rather formal description of the projected functions.
A more concrete construction is as follows.
The linear map $\mathcal X\to\mathcal Z$
defined by $X\to [Q,X]$ is surjective,
with a $(k-l)$-dimensional
kernel. Next, the projections $\Pi_K$ are investigated in
the Z-space over an X-vector $\tilde X$ that is not in this kernel
and for which the unit vector
$\zeta_Q(\tilde X)$, defined by $[Q,\tilde X]=
|[Q,\tilde X]|\zeta_Q(\tilde X)$, is not in the
singularity set $S_{\mathbf B}$.
If $X_Q$ denotes the orthogonal projection of
$X$ onto the $l$-dimensional subspace spanned by $Q$ and vectors
$J_{K_u}(Q)$ considered for all unit vectors $K_u$, then
$|[Q,X]|^{2}=|X_Q|^2- \langle Q,X\rangle^{2}$.
The direct representation of $\varphi^{(v)}$ is the product
of two harmonic one-pole functions having perpendicular poles.
One of the poles is $\tilde\zeta=\zeta_Q(\tilde X)$ while the other
is an arbitrary perpendicular unit vector
$\tilde\zeta^\perp \not\in S_{\mathbf B}$. Then,
for all $0\leq c\leq v$, consider the product
$\varphi^{(v)}=\varphi_{\tilde\zeta^\perp}^{(c)}
\varphi_{\tilde\zeta}^{(v-c)}$
of harmonic homogeneous one-pole functions. Because of the perpendicular
poles, these products are also harmonic functions of the K-variable;
furthermore, considered for all $0\leq c\leq v$ and all
$\tilde\zeta^\perp$, they span the whole space of $v^{th}$-order
homogeneous harmonic polynomials
of the K-variable, at each point $X$. However, $\tilde\zeta$ does not
point in the direction of $[Q,X]$ in general.
When such a pure K-function is multiplied
by $R_Q^{(n,a)}(X,K)$, over $\tilde X$, only the decomposition
of $\varphi_{\tilde\zeta}^{(v-c)}\langle \tilde\zeta ,K\rangle^a$
needs to be determined for the required projections.
A simple calculation shows:
\begin{equation}
\varphi_{\tilde\zeta}^{(v-c)}\langle \tilde\zeta ,
K\rangle^a=\sum_{i=0}^{a}
D_i\langle K,K\rangle^i \varphi_{\tilde\zeta}^{(v-c+a-2i)}.
\end{equation}
This function, multiplied by
$\langle Q,X\rangle^{n-a}|[Q,\tilde X]|^a
\varphi_{\tilde\zeta^\perp}^{(c)}$,
provides the desired decomposition and projections.
When this projected function is considered over an arbitrary $X$, neither
$\varphi_{\tilde\zeta^\perp}^{(c)}$ nor
$\varphi_{\tilde\zeta}^{(v-c+a-2i)}$ appears as a pure K-function.
One can state only that, for all $i$, this product is an
$a^{th}$-order homogeneous
polynomial also in the X-variable.
This complication is due to the fact that
$\zeta_Q(\tilde X)$ and $\zeta_Q(X)$ are not parallel in general,
so the projection operator
involves both functions in the computations.
According to these arguments we have:
\begin{theorem}
\label{projth}
(A)
For given non-negative integers $r$ and $a$, and for
$(v-a)\leq s\leq (v+a)$,
where $s$ has the same parity as $(v+a)$ (equivalently, $(v-a)$),
the functions
$\varphi^{(v)}(K)R_Q^{(n,a)}(X,K)$ project to $(v+a)^{th}$-order
K-homogeneous functions which, over $\tilde X$, appear in the form
\begin{eqnarray}
\label{proj_s}
D_{(v+a-s)/2}\langle K,K\rangle^{(v+a-s)/2}
\langle Q,X\rangle^{n-a}|[Q,\tilde X]|^a
\varphi_{\tilde\zeta^\perp}^{(c)}
\varphi_{\tilde\zeta}^{(s-c)},
\end{eqnarray}
where $0\leq c\leq v$, which, omitting the factor
$\langle K,K\rangle^{(v+a-s)/2}$,
are $s^{th}$-order homogeneous K-harmonic
polynomials. For given $v$ and $a$, this projection is trivial
(that is, it maps to zero) for all values of $s$ which do not
satisfy the above
conditions. Above, exactly the non-trivial terms are determined.
(B) The functions
$\langle K,K\rangle^{(v+a-s)/2}$ and $\langle Q,X\rangle^{n-a}$
of degrees $(v+a-s)$ resp. $(n-a)$
are independent of $c$; furthermore, the function
$\varphi_{\tilde\zeta^\perp}^{(c)}\varphi_{\tilde\zeta}^{(s-c)}$
is an $a^{th}$-order homogeneous polynomial of the X-variable.
It follows that, for a given $s$,
the subspaces, $\mathbf {PHo}_Q^{(v,a)}$, spanned by projected functions
considered for distinct pairs $(v,a)$ are independent, more precisely,
$\sum_{(v,a)}\mathbf {PHo}_Q^{(v,a)}=\mathbf {PHo}_Q^{(s)}$ is a finite
direct sum decomposition of the corresponding space of complex valued
homogeneous functions which are $n^{th}$-order regarding the X-variable
and $s^{th}$-order harmonic functions regarding the Z-variable.
This decomposition is further graded by the subspaces,
$\mathbf {PHo}_Q^{(v,a,c)}$ defined for distinct $c$'s introduced above.
(C) These statements remain true for the total spaces
$\mathbf {PHo}_{\mathbf B}^{(v,a)}$ and
$\mathbf {PHo}_{\mathbf B}^{(s)}$ obtained by summing up all the
corresponding previous spaces defined for $Q$'s which are in the
real span of independent vector-system $\mathbf B$. Since the projections
$\Pi_X$ and $\Pi_K$
commute, they remain true also for spaces
$\mathbf {PXo}_{\mathbf B}^{(v,a)}$ and
$\mathbf {PXo}_{\mathbf B}^{(s)}$ (as well as for versions defined
for fixed $Q$'s) obtained by applying $\Pi_X$ to the corresponding
spaces $\mathbf {PHo}$.
\end{theorem}
\begin{proof}
The independence statement in (B) follows also from (\ref{P^s_q}): the
functions $|[Q,X]|^{2c}$ with distinct powers $2c$ are independent
and, for a fixed $\tilde K_u$, the functions $P_c^{(s)}(X,\tilde K)$
of the X-variable, expressed by means of the complex structure
$J_{\tilde K_u}$
as complex valued functions, have distinct real and imaginary degrees
with respect to distinct $a$'s. That is, the functions defined for a
fixed $a$ cannot be linear combinations of those defined for other
$a$'s.
Statement (C) can be established by an appropriate generalization of
(\ref{P^s_q}).
Instead of Q-pole functions, now the functions belonging to
$\mathbf{PHo}_{\mathbf B}^{(v,a_i)}$ should be projected, where
$a=\sum a_i$ and the degree $a_i$ regards $B_i$. In this
situation, one gets functions
$\langle [B_i,X],[B_j,X]\rangle^{v_{ij}}$
multiplied by the corresponding functions $P^{(s)}_{c_{ij}}(X,K)$,
which, for a fixed $\tilde K_u$ and system $c_{ij}$ of exponents,
have distinct real and imaginary
degrees regarding $J_{\tilde K_u}$ with respect to distinct
$\sum_i a_i=a\not =a^\prime =\sum_ia^\prime_i$.
\end{proof}
\subsection{Twisted Hankel decomposition.}
In order to construct the complete pre-spaces
$\mathbf{P\Phi}^{(n)}_Q$, $\mathbf{P\Phi}^{(n)}_{\mathbf B}$,
$\overline{\mathbf{P\Phi}}^{(n)}_Q$, and
$\overline{\mathbf{P\Phi}}^{(n)}_{\mathbf B}$
by means of functions in
$\mathbf{PHo}^{(n,v,a)}_Q$ resp. $\mathbf{PHo}^{(n,v,a)}_{\mathbf B}$,
the latter must be multiplied by functions of the form $\phi (|X|,|K|)$
which, together with the factor $|K|^{v+a}$ incorporated into the
$\mathbf{PHo}$-functions,
provide K-radial $L^2_K$-functions for any fixed $|X|=\mathbf x$.
These functions,
summed over $(v,a)$, span the corresponding $n^{th}$-order twisted
spaces whose $L^2_K$-closures provide the complete space, which can
also be introduced by straightly defined functions.
Since the twisted spaces are everywhere dense in the straightly defined
function spaces, they
must be complete regarding the $\mathbf{PHo}$-spaces.
Thus the completion of the twisted
spaces is ultimately implemented on the space of K-radial functions.
Projections $\Pi^{(s)}_K$ should first be defined for functions belonging to
$\mathbf{P\Phi}^{(n,v,a)}_Q$ resp. $\mathbf{P\Phi}^{(n,v,a)}_{\mathbf B}$.
Note that this operation has no effect on radial functions. Its
action is restricted to the spherical harmonics defined by restricting
the above homogeneous K-harmonic polynomials to the unit K-sphere over
each point $X$. By this interpretation, these projection operators
can be expressed as polynomials of the Laplacian $\Delta_{K_u}$ defined
on this unit sphere. The function spaces obtained by projecting the
whole corresponding ambient spaces are denoted by
$\mathbf{P\Phi}^{(n,s)}_Q$ resp. $\mathbf{P\Phi}^{(n,s)}_{\mathbf B}$.
The previous theorem describes which functions labeled
by $(v,a)$ provide non-trivial $s$-components in these operations. They
provide direct sum decompositions of the ambient spaces, which are
also called {\it pre-Hankel decompositions}. When $\Pi^{(n)}_X$ (which
commutes with $\Pi^{(s)}_K$
for all $s$) is also applied, the obtained
pre-Hankel spaces are denoted by
$\mathbf{P\Xi}^{(n,s)}_Q$ resp. $\mathbf{P\Xi}^{(n,s)}_{\mathbf B}$, where
$n$ still indicates the degree of the involved homogeneous polynomials
regarding the X-variable.
The Z-Fourier transforms
$\int e^{\mathbf i\langle Z,K\rangle}\mathbf{P\Phi}^{(n,s)}_QdK$ resp.
$\int e^{\mathbf i\langle Z,K\rangle}\mathbf{P\Phi}^{(n,s)}_{\mathbf B}dK$
define the {\it $s^{th}$-order twisted Hankel spaces}
$\mathbf{H\Phi}^{(n,s)}_Q$ resp. $\mathbf{H\Phi}^{(n,s)}_{\mathbf B}$,
which are projected to
$\mathbf{H\Xi}^{(n,s)}_Q$ resp. $\mathbf{H\Xi}^{(n,s)}_{\mathbf B}$
by $\Pi^{(n)}_X$.
The direct sums of these non-complete subspaces define the corresponding
{\it total twisted Hankel spaces}.
They are different from the pre-spaces,
but they are also everywhere
dense subspaces of the corresponding straightly defined function spaces.
Thus the closure of these twisted spaces again yields the whole
straightly defined spaces.
\subsection{Twisted Dirichlet and Z-Neumann functions.}
All the above constructions are implemented by using the whole center
$\mathbb R^l$.
In this section twisted functions
satisfying the Dirichlet or Z-Neumann
condition on the boundary, $\partial M$, of
ball$\times$ball-type domains, $M$, are explicitly constructed
by the method described in the review of this section. That is, they are
represented by twisted Z-Fourier transforms of Dirac type generalized
functions concentrated on $\partial M$. Due to this representation,
the eigenfunctions of the exterior operator $\OE$ satisfying given
boundary conditions can be computed explicitly.
In the first step of this process consider a sphere,
$S_R$, of radius $R$ around the origin of the Euclidean space
$\mathbb R^l$. As is well known,
the Dirichlet or Neumann eigenfunctions of the Euclidean
Laplacian $-\Delta_{\mathbb R^l}$
on the ball $B_R$ bounded by $S_R$ appear as products of
$s^{th}$-order spherical harmonics $\varphi^{(s)}(K_u)$ with radial
functions $y_i^{(s)}(\mathbf z)$. For $s=0$, these eigenfunctions
are radial, take the value $1$ at the origin, and have multiplicity
$1$. For
$s>0$, the radial functions vanish at the origin and the multiplicity,
for fixed $s$ and $i$, is equal to the dimension of the space of
$s^{th}$-order spherical harmonics $\varphi^{(s)}(K_u)$.
Corresponding to the Dirichlet or Neumann conditions,
these eigenvalues are denoted by $\lambda_{Di}^{(s)}$
and $\lambda_{Ni}^{(s)}$ respectively. For any fixed $s$ and condition
$D$ or $N$, these infinite sequences satisfy
$\lambda_i^{(s)}\uparrow\infty$ and,
except for $0=\lambda_{N1}^{(0)}$, also
the relation $0<\lambda_{i}^{(s)}$ holds.
Eigenfunctions corresponding to Dirichlet or Neumann eigenvalues
$\lambda_i^{(s)}=\lambda$ can be represented by the integral formula
\begin{equation}
y_i^{(s)}(\mathbf z)\varphi^{(s)}(Z_u)=\oint_{S_{\sqrt{\lambda} }}
e^{\mathbf i\langle Z,K\rangle}
\varphi^{(s)}(K/\mathbf k)dK_{no},
\end{equation}
where $dK_{no}$ is the normalized
integral density on the sphere of radius
$\sqrt{\lambda}$. Applying $-\Delta_{\mathbb R^l}$ to the right
side shows that this function is an eigenfunction of this operator
with eigenvalue $\lambda_i^{(s)}:=\lambda$, which, because of the
Hankel transform, must agree with the function on the left side.
This formula is closely related to the third version of
the twisted Z-Fourier transforms, which is also introduced in
the review of this section. However, it can be applied directly
there only after decomposing the functions under the integral sign by
the projections $\Pi^{(r,s)}_{K_u}$.
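For $l=3$ and $s=0$, the integral formula above can be checked in closed form: the normalized spherical mean of $e^{\mathbf i\langle Z,K\rangle}$ over the sphere of radius $\sqrt\lambda$ is $\sin(\sqrt\lambda\,\mathbf z)/(\sqrt\lambda\,\mathbf z)$, a radial eigenfunction of $-\Delta_{\mathbb R^3}$ with eigenvalue $\lambda$. A minimal numerical sketch (hypothetical helper names, pure Python):

```python
import math

# Hypothetical sketch for l = 3, s = 0: the normalized mean of
# e^{i<Z,K>} over the sphere |K| = sqrt(lambda) equals
#   sin(sqrt(lambda)|Z|) / (sqrt(lambda)|Z|),
# the radial eigenfunction with eigenvalue lambda.

def spherical_mean(r, z, n=20000):
    # Reduce the surface integral to the polar angle; only the real
    # part survives by symmetry: (1/2) * int_0^pi cos(r z cos t) sin t dt.
    total = 0.0
    for j in range(n):
        t = (j + 0.5) * math.pi / n
        total += math.cos(r * z * math.cos(t)) * math.sin(t)
    return 0.5 * total * math.pi / n

lam, z = 4.0, 1.3
lhs = spherical_mean(math.sqrt(lam), z)
rhs = math.sin(math.sqrt(lam) * z) / (math.sqrt(lam) * z)
print(lhs, rhs)  # the two values agree to high accuracy
```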
Before this application, the functions
$y_i^{(s)}(\mathbf z):=y(t)$, belonging to an eigenvalue $\lambda$,
are determined more explicitly as follows. By the formulas
\begin{equation}
\Delta_Z=\partial_{t}\partial_{t}+{l-1\over t}\partial_{t}+
{\Delta_S\over t^2},\quad \Delta_S\varphi^{(s)}=-{s(s+l-2)}\varphi^{(s)},
\end{equation}
it satisfies the differential equation
\begin{equation}
y^{\prime\prime}+{l-1\over t}y^\prime +
\{\lambda -{{s(s+l-2)}\over t^2}\}y=0,
\end{equation}
which, after the substitutions $\tau =\sqrt\lambda t$ and $y(t)=z(\tau )$,
becomes
\begin{equation}
z^{\prime\prime}+{l-1\over \tau}z^\prime +
\{1 -{{s(s+l-2)}\over \tau^2}\}z=0.
\end{equation}
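The reduction to Bessel form can be checked directly: substituting $z(\tau)=\tau^{1-l/2}J(\tau)$, the first-derivative coefficient becomes $-2(l/2-1)+(l-1)=1$, while the singular coefficient collects as

```latex
% Collecting the \tau^{-2} coefficient after the substitution
% z(\tau) = \tau^{1-l/2} J(\tau):
\[
\Big(\frac{l}{2}-1\Big)^{2} + s(s+l-2)
  = \Big(s+\frac{l}{2}-1\Big)^{2}
  = \frac{(2s+l-2)^{2}}{4}.
\]
```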
That is, function $J(\tau )=\tau^{l/2-1}z(\tau )$ satisfies
the ordinary differential equation
\begin{equation}
J^{\prime\prime}+{1\over \tau}J^\prime +
\{1 -{{(2s+l-2)^2}\over 4\tau^2}\}J=0,
\end{equation}
therefore, it is a bounded Bessel function of order $(s+l/2-1)$. Thus,
up to a complex multiplicative constant,
the equation $J=J_{s+l/2-1}$ must hold.
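Concretely, since $y(t)\propto t^{1-l/2}J_\nu(\sqrt\lambda\,t)$ with $\nu=s+l/2-1$, the Dirichlet condition $y(R)=0$ forces $J_\nu(\sqrt\lambda\,R)=0$, so $\lambda_i^{(s)}=(j_{\nu,i}/R)^2$, where $j_{\nu,i}$ is the $i^{th}$ positive zero of $J_\nu$. A minimal sketch (hypothetical, pure Python) for $l=2$, $s=0$, $R=1$, locating the first zero of $J_0$ from its power series by bisection:

```python
# Hypothetical sketch: the Dirichlet condition y(R) = 0 gives
# J_nu(sqrt(lambda) R) = 0 with nu = s + l/2 - 1.  For l = 2, s = 0,
# R = 1, locate the first zero of J_0 from its power series.

def j0(x):
    # J_0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x) / (4.0 * k * k)
        total += term
    return total

# The first positive zero lies in (2, 3): J_0(2) > 0 > J_0(3).
a, b = 2.0, 3.0
for _ in range(60):
    m = 0.5 * (a + b)
    if j0(a) * j0(m) <= 0.0:
        b = m
    else:
        a = m

lam1 = a * a  # first s = 0 Dirichlet eigenvalue of the unit disk
print(a, lam1)  # a ~= 2.404826, lam1 ~= 5.783186
```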
In order to find the functions satisfying the boundary conditions,
consider the spheres $S_{\lambda (\mathbf x^2)}$ of radius
$\sqrt{\lambda (\mathbf x^2)}$
around the origin of the Z-space. For appropriate functions
$\phi (\mathbf x^2)$ and $\varphi (K_u)$, the twisted Z-Fourier
transform on the sphere bundle
$S_\lambda$ is defined by:
\begin{equation}
\label{FourLamb_Z}
\mathcal F_{Qpq\lambda}(\phi\varphi )(X,Z)
=\oint_{S_{\sqrt\lambda}}e^{\mathbf i
\langle Z,K\rangle}\phi (\mathbf x^2)\varphi (K_u)
(\Theta_{Q}^p\overline\Theta^q_{Q})(X,K_u)dK_{no},
\end{equation}
where $dK_{no}=dK/Vol(S_{\sqrt\lambda})$ is
the normalized measure on the sphere,
and the function $\varphi (K/\mathbf k)\Theta_{Q}^p
\overline\Theta^q_{Q}(X,K/\mathbf k)$
is defined on $S_\lambda$.
Since no Hankel projections are involved,
these functions do not yet satisfy the required
boundary conditions.
However, by the above arguments we have:
\begin{theorem}
Consider a ball$\times$ball-type domain defined by the Z-balls
$B_{R(\mathbf x^2)}$ and let $\lambda_i^{(s)}(\mathbf x^2)
:=\lambda(\mathbf x^2)$
be a smooth function defined by the $i^{th}$ eigenvalue in
the $s^{th}$-order Dirichlet or
Neumann spectra of the Euclidean balls $B_{R(\mathbf x^2)}$.
Then the functions (\ref{FourLamb_Z}) defined for
$\lambda =\lambda_i^{(s)}(\mathbf x^2)$, or
\begin{equation}
\label{XFourLamb_Z}
\oint_{S_{\sqrt{\lambda}}}e^{\mathbf i
\langle Z,K\rangle}\phi (|X|^2)\pi_K^{(s)}(\beta^{(m)}
\Pi^{(n)}_X(\Theta_{Q}^p\overline\Theta^q_{Q}))(X,K_u)dK_{no}
\end{equation}
satisfy the Dirichlet resp. Z-Neumann condition on the domain $M$.
The restrictions of these functions onto $M$ span
$\mathbf\Phi^{(n,s)}_Q(M)$
resp. $\mathbf\Xi^{(n,s)}_Q(M)$,
for any fixed boundary condition.
If functions $\Theta_{Q}^p\overline\Theta^q_{Q}$ are exchanged for the
polynomials
$\prod_{i=1}^{\kappa}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X)$,
then the above construction provides the Dirichlet-, Z-Neumann-, resp.
mixed-condition-functions spanning
$\mathbf\Phi^{(n,s)}_{\mathbf B}(M)$
resp. $\mathbf\Xi^{(n,s)}_{\mathbf B}(M)$.
\end{theorem}
\subsection{Constructing the orbital and inner force operators.}
The complicated action of the compound angular momentum operator
$\mathbf M_{Z}$ on twisted Hankel functions is due to the fact that
its Fourier transform, $\mathbf iD_K\bullet$, does not commute
with the Hankel projections; that is, the commutator on the
right side of
$D_K\bullet \Pi_{K_u}^{\alpha} =\Pi_{K_u}^{\alpha} D_K\bullet +
[D_K\bullet ,\Pi_{K_u}^{\alpha}]$ is non-vanishing in general.
Thus there are two non-trivial terms on the right side of the equation
\begin{eqnarray}
\mathbf M_{Z}\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k)
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK=
\\
=\mathbf i\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k)D_K\bullet
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK=
\nonumber
\\
=\mathbf i\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k) (\Pi_{K_u}^{\alpha} D_K\bullet +
[D_K\bullet ,\Pi_{K_u}^{\alpha}]) (F^{(p_i,q_i)}(X,K_u))dK,
\nonumber
\end{eqnarray}
by which the orbital operator:
\begin{eqnarray}
\mathbf{L}_{Z}\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k)
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK=
\\
=\mathbf i\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k) \Pi_{K_u}^{\alpha} D_K\bullet
(F^{(p_i,q_i)}(X,K_u))dK
\nonumber
\end{eqnarray}
and the intrinsic spin operator:
\begin{eqnarray}
\mathbf{S}_{Z}\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k)
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK=
\\
=\mathbf i\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k) [D_K\bullet ,\Pi_{K_u}^{\alpha}]
(F^{(p_i,q_i)}(X,K_u))dK
\nonumber
\end{eqnarray}
are defined, respectively.
The commutator appears in the following explicit form
\begin{equation}
\label{[DPI]}
[D_K\bullet ,\Pi_{K_u}^{\alpha}]=
S_{\beta}^\alpha\Pi_{K_u}^{\beta}\mathbf M_{K^\perp_u}=
\mathbf M_{K^\perp_u}S_{\beta}^\alpha\Pi_{K_u}^{\beta},
\end{equation}
where
$\mathbf M_{K^\perp_u}=\mathbf M_{K}-\partial_{\mathbf k}D_{K_u}\bullet$
is the Z-spherical angular momentum operator defined on Z-spheres. (For
a fixed unit vector $K_u$ and orthonormal basis $\{e_\alpha ,K_u\}$
(where $\alpha =1,2,\dots ,l-1$) of K-vectors, this operator is of the
form $\mathbf M_{K^\perp_u}=\sum_\alpha\partial_{\alpha}D_{\alpha}\bullet$.
)
Formula (\ref{[DPI]}) follows by applying relations
$D_{K}\bullet\Delta_{K_u}=\Delta_{K_u}D_{K}\bullet -2\mathbf M_{K^\perp_u}$
and the commutativity of $\Delta_{K_u}$ with operators
$\partial_{\alpha}, \partial_{K_u}, D_{\alpha}\bullet$, and
$D_{K_u}\bullet$ in order to evaluate
\[
D_{K}\bullet\Pi_{K_u}^{(r,r-2i)}=D_{K}\bullet\tilde
D_{(r,r-2i)}\Pi_K^{(r-2i,r-2i)}\Delta_{K_u}^i,
\]
where, according to the notations introduced above, $r=v+a$ holds.
This computation results in the equation
$[D_{K}\bullet,\Pi_{K_u}^{(r,r-2i)}]=P_{i+1}(\Delta_{K_u})
\mathbf M_{K^\perp_u}$,
where the lowest exponent of $\Delta_{K_u}$ in the polynomial
$P_{i+1}(\Delta_{K_u})$ is $i+1$.
This term defines a uniquely
determined constant multiple of the projection $\Pi_{K_u}^{(r,r-2(i+1))}$
such that
$P_{i+1}(\Delta_{K_u})=A_{i+1}\Pi_{K_u}^{(r,r-2(i+1))}+
P_{i+2}(\Delta_{K_u})$ holds,
where the lowest exponent of $\Delta_{K_u}$ in the polynomial
$P_{i+2}(\Delta_{K_u})$ is $i+2$.
In the next step, the above arguments are repeated for $P_{i+2}$
to obtain $P_{i+3}$. This process terminates in finitely many steps,
which establishes the above formula completely.
Formula (\ref{[DPI]}) allows one to define an {\it inner algorithm} in
which these
computations are iterated infinitely many times.
In the second step it is repeated for
$f^{(2)}_\beta :=S_{\beta}^\alpha f_\alpha$ as follows.
First note that operator
$
f_\alpha S_{\beta}^\alpha\Pi_{K_u}^{\beta}\mathbf M_{K^\perp_u}=
\mathbf M_{K^\perp_u} \Pi_{K_u}^{\beta} S_{\beta}^\alpha f_\alpha
$
can be decomposed in the following form:
\begin{eqnarray}
\label{inop}
f_{\beta}^{(2)}\Pi_{K_u}^{\beta}\mathbf M_{K^\perp_u}=
-\partial_{K_u}(f_\beta^{(2)})\Pi_{K_u}^{\beta}
D_{K_u}\bullet +
\mathbf M_{K}\Pi_{K_u}^{\beta}f_{\beta}^{(2)} .
\end{eqnarray}
The action of the first operator on functions
\begin{eqnarray}
F^{(p,q)}(X,K_u)=\varphi_\zeta^{(r)}(K_u)
\Theta_{Q}^p(X,K_u)\overline\Theta^q_{Q}(X,K_u)
\\
{\rm resp.}\quad
F^{(p,q)}(X,K_u)=\varphi_\zeta^{(r)}(K_u)
\prod_{i=1}^{k/2}z^{p_i}_{K_ui}\overline z^{q_i}_{K_ui},
\end{eqnarray}
where $\varphi_\zeta^{(r)}(K_u)$ denotes an $r^{th}$-order homogeneous
harmonic polynomial introduced in the explicit description of the Hankel
projections, results in
\[
-(p-q)\mathbf i \partial_{K_u}(f_\alpha S_{\beta}^\alpha )
\Pi_{K_u}^{\beta}(F^{(p,q)}).
\]
This term is already in final form and does not change
during further computations. Together with the orbiting operator, they
define, in terms of the radial operator
\begin{equation}
\bigcirc^{(1)}_\alpha (f_{\beta_1},\dots ,f_{\beta_d})=
-(p-q)\mathbf i({\mathbf k}f_\alpha +
\partial_{K_u}(f_\beta S_{\alpha}^\beta )),
\end{equation}
the one-turn operator
\begin{eqnarray}
\mathbf M^{(1)}_{Z}\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k)
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK=
\\
=\int e^{\mathbf i\langle Z,K\rangle}\bigcirc^{(1)}_\alpha
(f_{\beta_1},\dots ,f_{\beta_d})
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK,
\nonumber
\end{eqnarray}
where the name indicates that it is expressed in terms of the first
power of the inner spin operator $S_{\alpha}^\beta$ permuting the Hankel
radial functions.
Be aware of the novelty of this spin concept emerging in
these formulas! It defines just a permutation of the radial Hankel
functions, to which no actual spinning of the particles corresponds.
This concept certainly does not lead to a dead-end theory like those
pursued in
classical quantum theory, where one tried to explain the inner spin of
the electron by actual spinning.
This abstract merry-go-round does not stop after making one turn.
It is actually the second operator,
$\mathbf M_{K}\Pi_{K_u}^{\beta}f_\beta^{(2)}$,
in (\ref{inop}) which generates the indicated process
where the above arguments are repeated for functions
$f^{(2)}_\beta :=S_{\beta}^\alpha f_\alpha$.
The index $2$ indicates that these
functions are obtained from the starting functions
$f^{(1)}_\alpha :=f_\alpha$ in the next step. The details are as follows.
In these computations the second operator is derived by the Hankel
transform turning functions defined on the $\tilde K$-space into functions
defined on the $K$-space. The operator $H^{(-s_\beta)}$ denotes the inverse
of the Hankel transform $H_{s_\beta}^{(l)}$,
where $s_\beta$ denotes the third index in $\beta =(v,a,s)$. Then we have:
\begin{eqnarray}
\mathbf M_{K}f_\beta^{(2)}
\Pi_{K_u}^{\beta}F^{(p,q)}(X,K_u)=
\\
=\mathbf M_{K}\int e^{\mathbf i\langle K,\tilde K\rangle}
H^{(-s_\beta)}(f^{(2)}_\beta )(\mathbf x,\tilde{\mathbf k})
\Pi_{\tilde K_u}^{\beta}
F^{(p,q)}(X,\tilde K_u)d\tilde K=
\nonumber
\\
=\mathbf i\int e^{\mathbf i\langle K,\tilde K\rangle}
\tilde f^{(2)}_{\beta} (\mathbf x,
\tilde{\mathbf k})D_{\tilde K}\bullet
\Pi_{\tilde K_u}^{\beta}F^{(p,q)}(X,\tilde K_u)d\tilde K,
\nonumber
\end{eqnarray}
where $\tilde f^{(2)}_{\beta} =H^{(-s_\beta)}(f^{(2)}_\beta ) $.
At this step the commutator $[D_{\tilde K}\bullet ,\Pi_{\tilde K_u}^{\beta}]$
can be calculated by (\ref{[DPI]}), resulting in the functions
$f^{(3)}_\beta :=S_{\beta}^\alpha\tilde f^{(2)}_{\alpha}$,
which are subjected to the operations performed in the following step 3.
These steps must be iterated infinitely many times.
Regarding the radial functions $\tilde f^{(2)}_\alpha$,
which are defined on the $\tilde K$-space,
the orbiting spin and the one-turn operator can be defined in the same way
as they are defined on the $K$-space.
After performing the Z-Fourier and the associated Hankel transforms
on these finalized functions, they
become functions defined on the $K$-space. By adding these
terms to those obtained in the first step, one defines the two-turn
operator $\mathbf M^{(2)}_{Z}$ associated with
$\bigcirc^{(2)}_\alpha (f_{\beta_1},\dots ,f_{\beta_d})$,
called the two-turn merry-go-round and roulette operators, respectively.
The sum of the orbiting spin operators defines $\mathbf L^{(2)}$, which
is called the one-turn orbiting spin operator, involving just
$S^\alpha_\beta$ in its definition.
One should keep in mind that these operators also involve the operators
defined in the previous step. These computations work out for an arbitrary
$u^{th}$ step, defining the operators
$\mathbf M^{(u)}_{Z}$,
$\bigcirc^{(u)}_\alpha (f_{\beta_1},\dots ,f_{\beta_d})$, and
$\mathbf L^{(u)}$, which are called the $u$-turn merry-go-round, roulette,
and $(u-1)$-turn orbiting spin operators, respectively.
In the end, the action of
$\mathbf M_{Z}=\mathbf M^{(\infty )}_{Z}$ can be described
in the form:
\begin{eqnarray}
\mathbf M_{Z}\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k)
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK=
\\
=\int e^{\mathbf i\langle Z,K\rangle}\bigcirc_\alpha
(f_{\beta_1},\dots ,f_{\beta_d})
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK,
\nonumber
\end{eqnarray}
where the operator
$\bigcirc_\alpha (f_{\beta_1},\dots ,f_{\beta_d})=
\bigcirc^{(\infty )}_\alpha (f_{\beta_1},\dots ,f_{\beta_d})$,
called the high roulette operator, is defined
by the infinite series $\lim_{u\to\infty}\bigcirc^{(u)}_\alpha$.
In this sense, the complete angular momentum operator $\mathbf M_{Z}$
can be called the high merry-go-round operator.
The point in this formula is that this action can be described
in terms of d-tuples, $(f_{\beta_1},\dots ,f_{\beta_d})$, of radial
functions which cannot be reduced to a single function defined by a
fixed $\alpha$.
This statement holds also for the total Laplacian $\Delta$, thus the
eigenfunctions satisfying the Dirichlet or Z-Neumann
conditions on the considered manifolds can also be completely described
in terms of radial functions. But the corresponding equations involve
all the radial functions $(f_{\beta_1},\dots ,f_{\beta_d})$ defined for
all indices $\beta_i$. For this reason, the operator $\bigcirc_\alpha$
plays the role of a confiner, giving rise to the effect that the
eigenfunctions satisfying a given boundary condition
are expressed in terms of radial functions which do not
satisfy these conditions individually but exhibit them
together, confined in a complicated combination. Neither
can these eigenfunctions be observed as single $\OE$-force potential
functions. On the other hand, for a given boundary condition,
the $\OE$-eigenfunctions span the whole $L^2$-space, therefore,
$\Delta$-eigenfunctions satisfying the same boundary condition
can be expressed as infinite linear combinations of $\OE$-eigenfunctions.
This observation further supports the idea that the rather
strong $\Delta$-forces are built up by the much weaker $\OE$-forces in
the way Hawking describes the action of the Weinberg-Salam weak force
in \cite{h}, page 79:
``The Weinberg-Salam theory exhibits a property known as spontaneous
symmetry breaking.
This means that what appear to be a number of completely different
particles at low energies are in fact found to be all the same type of
particle, only in different states. At high energies all these
particles behave similarly. The effect is rather like the behavior
of a roulette ball on a roulette wheel. At high energies (when the wheel
is spun quickly) the ball behaves in essentially only one way -- it rolls
round and round. But as the wheel slows, the energy of the ball decreases,
and eventually the ball drops into one of the thirty-seven slots
in the wheel. In other words, at low energies there are thirty-seven
different states in which the ball can exist. If, for some reason,
we could only observe the ball at low energies, we would then think
that there were thirty-seven different types of ball!"
This quotation explains the origin of the name given to the
roulette operators. These polarized operators are derived from the
non-polarized merry-go-round operators, whose name was suggested to me
by Weinberg's book \cite{w1}, where the name ``merry-go-round" appears
in a different situation not discussed here.
All these arguments clearly suggest
that the $\Delta$-eigenfunctions must correspond to the strong forces
keeping the quarks together. A formula expressing these eigenfunctions
as linear combinations of weak force potential functions would shed light
on the magnitude of the strong force accumulated by the roulette operator
while building up these eigenfunctions.
The explicit eigenfunction computations give
rise to difficult mathematical problems which are completely open
at this point of the developments. In this paper we explicitly describe
only the eigenfunctions of the exterior orbiting operator $\OE$, defined
by omitting the interior spin operator from $\Delta$. This operator is
a scalar operator whose action can be reduced to a single radial function
$f$. By the arguments developed in the introduction and also at several
points in the body of this paper, this operator
is associated with the weak force interaction.
This section is concluded by saying more about the recovery of several
concepts of the standard model within this new theory.
Suppose that the function
\begin{eqnarray}
\psi (X,Z)=\int e^{\mathbf i\langle Z,K\rangle}
f_{\alpha}(\mathbf x,\mathbf k)
\Pi_{K_u}^{\alpha} (F^{(p_i,q_i)}(X,K_u))dK
\end{eqnarray}
is an eigenfunction of the complete Laplacian $\Delta$. It is also called
the probability amplitude of the particle system.
Also in this formula the Einstein
convention indicates summation. A fixed index $\alpha =(v,a,s)$
is called a slot-index, and the function
\begin{eqnarray}
\mathcal Q_{vas} (X,Z)=\int e^{\mathbf i\langle Z,K\rangle}
f_{vas}(\mathbf x,\mathbf k)
\Pi_{K_u}^{(vas)} (F^{(p_i,q_i)}(X,K_u))dK
\end{eqnarray}
is the so-called high slot probability amplitude. These exact
mathematical objects correspond
to quarks, whose flavor is associated with index $v$ and whose color is
associated with index $a$. Index $s$ is called the azimuthal
index. Note that also these indices have mathematical meanings; namely,
they refer to the degrees of the corresponding polynomials by which
the formula of a slot-amplitude is built up.
According to these definitions,
a particle-system-amplitude is the sum of the high slot probability
amplitudes
which are considered to be the mathematical manifestations of quarks.
Slot amplitudes defined in the same way by the $\OE$-eigenfunctions are
called retired or laid-off slot probability amplitudes.
It is very interesting to see how these
constituents of a high energy particle are held together by the
strong force interaction.
When $D_K\bullet$ acts on functions $\langle Q,X\rangle$ resp.
$\langle J_K(Q),X\rangle$ of a quark, the first one becomes of the
second type and the second one becomes of the first type. In either
case the color index, $a$, changes by $1$, and a slot-amplitude of odd
color index becomes an amplitude of even color index. This process can be
interpreted as gluon exchange in the following way.
The actions of $D_K\bullet$ on $\langle Q,X\rangle$ resp.
$\langle J_K(Q),X\rangle$ are interpreted
as gluon absorption resp. emission. More precisely, some of the
slot-particles (quarks)
of odd color index emit a gluon which is absorbed by some of the
slot-particles (quarks)
which also have odd color index. As a result they become quarks
of even color index. The same process applies also to quarks having
even color index, which, after gluon exchange, become quarks of odd
color index. This is a rather clear explanation for the flavor-blindness
and color sensitivity of gluons.
It also explains why a high slot-particle (quark) can
never retire
to become a laid-off $\OE$-particle. Indeed, in a high
slot-state defined for a fixed slot index $\alpha =(v,a,s)$, the
corresponding Hankel function does not satisfy the boundary conditions;
however, it can be expanded by the $\OE$-eigenfunctions.
At this point nothing
is known about the number of $\OE$-eigenfunctions by which a
high slot-amplitude can be expressed. This number may very well be
infinite, but it is always greater than one.
Then, instead of consisting of a single term, the high slot
probability density is a multi-term sum of weak
densities determined by these laid-off probability amplitudes.
A real positive function $\psi\overline\psi$ whose integral over the whole
ball$\times$ball-type domain is $1$ is called a probability density. Protons
are defined by functions $\mathit{ch}(Z_u)(\psi_c\overline\psi_c)(Z,X)$,
where $\psi_c$ is a constant multiple of $\psi$ whose integral over the
ball$\times$ball-type domain is $1$. If this integral is $0$ or $-1$,
the function is called a neutron or an antiproton, respectively. (The mass
can be defined
by means of $|\det (A_{ij}(Z_u))|$, but we do not go into these details
here.) This argument shows
that in a complete eigensubspace decomposition of the $L^2$ function
space of a two-step nilpotent Lie group representing a particle system,
the eigenfunctions actually represent all kinds of particles and not just
particular ones. This phenomenon can be considered, also in the new
theory, as a clear
manifestation of the bootstrap principle, from which super string
theory grew out. In super string theory
the notion was (cf. \cite{g}, page 128)
that ``a set of elementary particles could be treated as if
composed in a self-consistent manner of combinations of those same
particles. All the particles would serve as constituents, all the
particles (even the fermions in a certain sense) would serve as quanta
for force fields binding the constituents together, and all the particles
would appear as bound states of the constituents".
\subsection{$\OE$-forces in the union of 3 fundamental forces.}
The eigenfunction computations of $\OE$ on the twisted function space
$\Xi_{.R}^{(n)}$ satisfying
a given boundary condition can be reduced to the same radial
differential operator as the one obtained
for the standard Ginsburg-Landau-Zeeman
operator on Z-crystals. To see this,
let the complete Laplacian (\ref{Delta}) act on (\ref{XFourLamb_Z}).
By the commutativity relation $\mathbf M_Z\oint =\oint \mathbf iD_K\bullet$
and $\Delta_Z\oint =-\oint \mathbf k^2$,
this action is a combination of
X-radial differentiation, $\partial_{\mathbf x}$,
and multiplications with functions depending
just on $\mathbf x$, that is, it is completely reduced to X-radial
functions. More precisely,
\begin{equation}
\Delta (\mathcal {HF}_{QpqR}(\phi\beta ))
=\oint_{S_{R}}e^{\mathbf i
\langle Z,K\rangle}\Diamond_{R,\mathbf x^2}(\phi )\beta
\Pi^{(n)}_X(\Theta_{Q}^p\overline\Theta^q_{Q})dK_{no}
\end{equation}
holds, where
\begin{eqnarray}
(\Diamond_{R,\mathbf x^2}\phi ) (\mathbf x^2)
=4\mathbf x^2\phi^{\prime\prime}(\mathbf x^2)
+(2k+4(p+q))\phi^{\prime}(\mathbf x^2)\\
-(R^2(1+{1\over 4}\mathbf x^2) +(p-q)R)
\phi (\mathbf x^2),
\nonumber
\end{eqnarray}
and $\phi^\prime\,\, , \phi^{\prime\prime}$ mean the corresponding
derivatives of $\phi (\tilde t)$ with respect to the $\tilde t$ variable.
The eigenfunctions of $\Delta$ can be found by seeking
the eigenfunctions of the reduced operator
$\Diamond_{R,\mathbf x^2}$ among the X-radial functions.
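In fact, the reduced operator has explicit eigenfunctions of exponential type: for $\phi (\tilde t)=e^{-a\tilde t}$ the coefficient of $\tilde t$ in $\Diamond_{R,\mathbf x^2}\phi$ vanishes exactly when $4a^2=R^2/4$, that is, $a=R/4$, leaving the constant eigenvalue $-R^2-(k/2+2p)R$. A minimal numerical sketch of this observation (the parameter values are arbitrary illustrative choices, and $p$, $q$ here denote the polynomial degrees):

```python
import math

# Illustrative parameters of the reduced operator Diamond_{R, x^2}.
R, k, p, q = 2.0, 4, 3, 1

def phi(t):
    # Candidate eigenfunction exp(-R*t/4): plugging it into the operator
    # cancels the coefficient of t, leaving a constant eigenvalue.
    return math.exp(-R*t/4)

def diamond(f, t, h=1e-4):
    # Central-difference evaluation of
    # 4t f'' + (2k+4(p+q)) f' - (R^2(1+t/4) + (p-q)R) f.
    d1 = (f(t + h) - f(t - h))/(2*h)
    d2 = (f(t + h) - 2*f(t) + f(t - h))/h**2
    return 4*t*d2 + (2*k + 4*(p + q))*d1 - (R**2*(1 + t/4) + (p - q)*R)*f(t)

# Predicted constant eigenvalue -R^2 - (k/2 + 2p)R; here -20.
mu = -R**2 - (k/2 + 2*p)*R
```

Since no boundary condition is imposed at this stage, such an eigenfunction need not belong to the spectrum singled out by the projections discussed below.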
Note that no projections $\Pi^{(s)}_K$ are applied in the above
integral formula, thus these eigenfunctions do not satisfy the
boundary conditions in general. These projections, however, do
not commute with $D_K\bullet$, and the eigenfunctions of the
complete operator $\Delta$ cannot be expressed in terms of a single
function $\phi (\mathbf x^2)$. This simple
reduction applies just to the exterior operator
$\OE$, defined by omitting the anomalous intrinsic momentum operator
$\mathbf{S}_Z$ from $\Delta$ and keeping just the orbital
(alias exterior)
spin operator $\mathbf{L}_Z$. Regarding this operator we have:
\begin{theorem}
The exterior operator $\OE$
on constant radius Z-ball bundles
reduces to a radial operator expressed in terms of the
Dirichlet-, Neumann-, resp. mixed-condition eigenvalues
$\lambda_i^{(s)}$ of the
Z-ball $B_Z(R)$, in the form
\begin{equation}
\label{BLf_mu}
(\Diamond_{\lambda ,\tilde t}f)(\tilde t)=
4\tilde tf^{\prime\prime}(\tilde t)
+(2k+4n)f^\prime (\tilde t)
-(2m\sqrt{\lambda_i^{(s)}\over 4} +4{\lambda_i^{(s)}\over 4} (1 +
{1\over 4}\tilde t))f(\tilde t).
\end{equation}
By the substitution
$\lambda=\sqrt{\lambda_i^{(s)}/4}$, this is exactly
the radial Ginsburg-Landau-Zeeman operator
(\ref{Lf_lambda}) obtained on Z-crystal models.
Despite this formal identity with electromagnetic forces, the nuclear
forces represented by $\OE$ manifest themselves quite differently.
As for the electromagnetic forces, one can introduce both charged
and neutral particles also regarding $\OE$. A major
difference between the two particle-systems is that the particles
represented by $\OE$ are extended ones.
This property can be seen, for instance, from the eigenfunctions of
$\Delta$, which involve also Z-spherical harmonics.
Due to these harmonics, the multiplicities of the eigenvalues regarding
$\OE$ are higher than those corresponding to the same eigenvalue
regarding the Ginsburg-Landau-Zeeman operator on Z-crystals.
Another consequence of the extension is that the
$\OE$-neutrinos always have positive mass, which is zero for neutrinos
associated with electromagnetic forces.
Actually, the nuclear forces represented by $\OE$ are weaker than the
electromagnetic forces. By the extension, this phenomenon can be
explained by the very same argument of classical electrodynamics
asserting that the electromagnetic self-mass for a surface
distribution of
charge with radius $a$ is $e^2/6\pi ac^2$, which therefore blows up
as $a\to 0$. Note that, in the history of quantum theory,
this argument provided the first warning that a point
electron will have infinite electromagnetic self-mass. It is well known
that this problem appeared with even greater severity in
the infinities invading quantum field theory. The tool
by which these severe problems were handled is renormalization, which
turned QED into a renormalizable theory. The above argument
clearly suggests that the $\OE$-theory must also be renormalizable.
The major difference between the $\OE$- and the complete $\Delta$-theory
is that the first one is a scalar theory, which can be reduced to a radial
operator acting on a single radial function, while the reduced operator
obtained in the $\Delta$-theory acts on d-tuples of radial functions.
As described above, this action also defines a new type of
nuclear inner spin of the extended particles, to which new particles
such as quarks and gluons can be associated and by which the strong
nuclear forces keeping the parts of the nucleus together can be
introduced.
Partial operator $\OE$ is the maximal scalar operator in $\Delta$.
Except for $\OE=\OE^{(1)}$, partial operators $\OE^{(u)}$
(defined by replacing $\mathbf M_Z$
by $\mathbf L_Z^{(u)}$) and $\Delta^{(u)}$
(defined by replacing $\mathbf M_Z$ by $\mathbf M_Z^{(u)}$) can
be reduced just to operators which irreducibly act on d-tuples of radial
functions.
The main unifying principle among the three fundamental forces is
that they are derived from the very same Hamilton operator, $\Delta$.
More precisely, the Hamilton operators of the individual elementary
particles emerge on the corresponding invariant subspaces of $\Delta$,
namely, by restricting $\Delta$ to the subspace corresponding to the
given elementary particles. The corresponding forces are defined as those
acting among these particles. The electromagnetic forces manifest
themselves on function spaces consisting of functions which are periodic
regarding a Z-lattice $\Gamma_Z$. The systems of particles defined on
these function spaces are without interior. They
consist of particles such as electrons, positrons, and
electron-positron-neutrinos.
The various nuclear forces appear on function spaces defined on Z-ball
and Z-sphere bundles by fixed boundary conditions.
The attached particles, which do have interior, and the forces acting
among them are discussed above.
The function spaces corresponding to
particular particles are constructed with the corresponding twisted
Z-Fourier transforms. Since the eigenfunctions of the Hamilton operators
can be sought in this form, this transform seems to be the only natural
tool for assigning the invariant function spaces corresponding to
elementary particles. It is also remarkable that the twisted Z-Fourier
transforms emerge also in the natural generalization of the de Broglie
waves fitting the new theory.
A particle-system defined by a fixed
function space consists of all kinds of particles which can be defined
by the characteristic property of the given function space. For
instance, all particles without interior appear
on $\Gamma_Z$-periodic function spaces. In order to avoid the
annihilation of particles by antiparticles, the particles belonging
to the same system are considered not to interact with each other.
According to this argument, the particles in a system defined by an
invariant subspace are gregarious: they are always accompanied by
other particles. For instance, an electron is always partying with an
electron-neutrino. This complexity of the particle-systems is reflected
by the Laplacian (Hamiltonian), which appears as the sum of the
Hamiltonians of the particles partying in a system. This phenomenon is
a relative of those described by the bootstrap principle of super string
theory.
Let it also be mentioned that the distinct
function spaces belonging to distinct types of forces are not
independent, thus there is the possibility to work out an interaction
theory between particles of distinct types. The existence of such a
viable theory is a completely open question in this field.
\end{theorem}
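The eigenvalues $\lambda_i^{(s)}$ entering the theorem are classical boundary eigenvalues of the Z-ball. As a minimal illustration, assume a one-dimensional Z-space, so that $B_Z(R)$ is the interval $[-R,R]$ and the exact Dirichlet eigenvalues are $(i\pi /2R)^2$; the following sketch (an illustrative computation, not part of the theory) recovers them by a shooting method with bisection:

```python
import math

def shoot(lam, R):
    # RK4-integrate u'' = -lam*u on [-R, R] with u(-R)=0, u'(-R)=1;
    # lam is a Dirichlet eigenvalue of the interval iff u(R) = 0.
    n = 2000
    h = 2.0*R/n
    u, v = 0.0, 1.0
    for _ in range(n):
        k1u, k1v = v, -lam*u
        k2u, k2v = v + h/2*k1v, -lam*(u + h/2*k1u)
        k3u, k3v = v + h/2*k2v, -lam*(u + h/2*k2u)
        k4u, k4v = v + h*k3v, -lam*(u + h*k3u)
        u += h/6*(k1u + 2*k2u + 2*k3u + k4u)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return u

def dirichlet_eigenvalue(i, R=1.0):
    # Bisection between brackets enclosing exactly one sign change of u(R);
    # the exact answer is (i*pi/(2R))**2.
    lo = ((i - 0.5)*math.pi/(2*R))**2
    hi = ((i + 0.5)*math.pi/(2*R))**2
    flo = shoot(lo, R)
    for _ in range(60):
        mid = 0.5*(lo + hi)
        fm = shoot(mid, R)
        if flo*fm <= 0:
            hi = mid
        else:
            lo, flo = mid, fm
    return 0.5*(lo + hi)
```

For the genuinely higher-dimensional Z-balls of the theory the radial eigenvalue problems are of Bessel type, but the same shooting idea applies.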
The $\OE$-forces are very similar to the weak nuclear forces
described in the Weinberg-Salam theory. Weinberg introduces these
forces on pages 116-120 of his popular book \cite{w1} as follows:
``The weak nuclear force first turned up in the discovery of
radioactivity by Henri Becquerel in 1896. In the 1930's it became
understood that in the particular kind of radioactivity that was
discovered by Becquerel, known as beta decay, the weak nuclear force
causes a neutron inside the nucleus to turn into a proton, at the same
time creating an electron and another particle known today as
antineutrino, and spitting them out of the nucleus. This is something
that is not allowed to happen through any other kind of force. The
strong nuclear force that holds the protons and neutrons together
in the nucleus and the electromagnetic force that tries to push the
protons in the nucleus apart cannot change the identities of these
particles, and the gravitational force certainly does not do anything
of the sort, so the observation of neutrons changing into protons
or protons into neutrons provided evidence of a new kind of force
in the nature. As its name implies, the weak nuclear force is weaker
than the electromagnetic or the strong nuclear forces. This is shown
for instance by the fact that nuclear beta decay is so slow; the
fastest nuclear beta decays take on the average about a hundredth
of a second; languorously slow compared with the typical time scale of
processes caused by the strong nuclear force, which is roughly a
millionth millionth millionth millionth of a second.
In 1933 Enrico Fermi took the first significant step toward a theory
of this new force. ... There followed a quarter century of experimental
effort aimed at tying up the loose ends of the Fermi theory.
... In 1957 this [problem] was settled and the Fermi theory of the
weak nuclear force was put into its final form. ... Nevertheless, even
though we had a theory that was capable of accounting for everything
that was known experimentally about the weak force, physicists in general
found the theory highly unsatisfactory.... The things that were wrong
with the Fermi theory were not experimental but theoretical. ...
when the theory was applied to more exotic processes it gave nonsensical
results....when they did the calculations the answer would turn out
to be infinite.... Infinities like these had been encountered in the
theory of electromagnetic forces by Oppenheimer and others in the early
1930's, but in the late 1940's theorists had found that all these
infinities in quantum electrodynamics would cancel when the mass and
electric charge of the electron are properly defined, or ``renormalized".
As more and more became known about the weak forces it became increasingly
clear that the infinities in Fermi's theory of the weak forces
would not cancel in this way; the theory was not renormalizable. The
other thing that was wrong with the theory of weak forces was that it
has a large number of arbitrary elements....
I had worked on the theory of weak forces off and on since graduate
school, but in 1967 I was working instead on the strong nuclear forces,
the forces that hold neutrons and protons together inside atomic nuclei.
I was trying to develop a theory of the strong nuclear forces based on
an analogy with quantum electrodynamics. I thought that the difference
between the strong nuclear forces and electromagnetism might be explained
by a phenomenon known as broken symmetry, which I explain later. It did
not work. I found myself developing a theory that did not look at all
like the strong forces as they were known to us experimentally. Then it suddenly
occurred to me that these ideas, although they had turned out to be
completely useless as far as the strong forces were concerned, provided
a mathematical basis for a theory of weak nuclear forces that would do
anything that one might want. I could see the possibility of a theory
of the weak force analogous to quantum electrodynamics. Just as the
electromagnetic force between distant charged particles is caused by
the exchange of photons, a weak force would not act all at once at a
single point in space (as in the Fermi theory) but it would be caused
by the exchange of photonlike particles between particles at different
positions. These new photonlike particles could not be massless like the
photon (for one thing, if massless they would have been discovered long
before), but they were introduced into the theory in a way that was so
similar to the way that the photon appears in quantum electrodynamics
that I thought that the theory might be renormalizable in the same sense
as quantum electrodynamics--that is, that the infinities in the theory
could be canceled by a redefinition of the masses and other quantities
in the theory. Furthermore, the theory would be highly constrained by
its underlying principles and would thus avoid a large part of
arbitrariness of previous theories.
I worked out a particular concrete realization of this theory, that is,
a particular set of equations that govern the way the particles interacted
and that would have the Fermi theory as a low energy approximation.
I found in doing this, although it had not been my idea at all to start
with, that it turned out to be a theory not only of the weak forces,
based on an analogy with electromagnetism; it turned out to be a unified
theory of the weak and electromagnetic forces that showed that they were
both just different aspects of what subsequently became called an
electroweak force. The photon, the fundamental particle whose emission
and absorption causes electromagnetic forces, was joined in a tight-knit
family group with the other photonlike particles predicted by the theory:
electrically charged W particles whose exchange produces the weak force
of beta radioactivity, and a neutral particle I called the ``Z",
about which more later. (W particles were an old story in speculations
about the weak forces; the W stands for ``weak". I picked the letter Z
for their new sibling because the particle has zero electric charge
and also because Z is the last letter of the alphabet, and I hoped that
this would be the last member of the family.) Essentially the same
theory was worked out independently in 1968 by the Pakistani physicist
Abdus Salam, working in Trieste....
Both Salam and I had stated our
opinion that this theory would eliminate the problem of infinities
in the weak forces. But we were not clever enough to prove this.
In 1971 I received a preprint from a young graduate student at the
University of Utrecht named Gerard 't Hooft, in which he claimed
to show that this theory actually had solved the problem of the
infinities: the infinities in calculations of observable quantities
would in fact all cancel just as in quantum electrodynamics...."
\section{Unified wave mechanics.}
There are two natural ways to introduce time on nilpotent groups.
One of them is defined by solvable extensions,
while in the other case the
time-axis is introduced by
a Cartesian product with the real line $\mathbb R$.
These two extensions are called expanding and static models,
respectively.
The metric on the nilpotent group is positive definite,
and the Laplacian
turns out to be the natural physical Hamilton operator corresponding
to elementary particle systems. Concrete systems are represented by the
corresponding invariant subspaces of the Laplacian, and one obtains the
Hamilton operator of a given system by restricting the Laplacian onto
these subspaces. In order to establish the wave equations regarding
these Hamiltonians, indefinite
metrics must be introduced on both extensions. That is, an adequate
wave mechanics associated with these Hamiltonians can be introduced
only relativistically, by assuming appropriate
Lorentz-indefinite metrics on both extensions.
As it turns out, these metrics really provide the familiar wave operators
of wave mechanics.
\subsection{Solvable extensions.}
Any 2-step nilpotent Lie group, $N$, extends to a
solvable group,
$SN$, which is defined on the half-space $\mathcal N\times \mathbb R_+$ by
the group multiplication
\begin{equation}
(X,Z,t)(X^\prime ,Z^\prime ,t^\prime )
=(X+t^{\frac{1} {2}}X^\prime ,Z+tZ^\prime +\frac {1}{2}t^{\frac{1}{2}}
[X,X^\prime ],tt^\prime ).
\end{equation}
This formula provides the
multiplication also
on the nilpotent group $N$, which appears as a subgroup
on the level set $t=1$.
The Lie algebra of this
solvable group is $\mathcal S=\mathcal N\oplus \mathcal T$.
The Lie
bracket is completely determined by the formulas
\begin{equation}
[\partial_t,X]=\frac{1}{2}X\quad ;\quad [\partial_t,Z]=Z\quad;\quad
[\mathcal N,\mathcal N]_{/SN}
=[\mathcal N,\mathcal N]_{/N},
\end{equation}
where $X \in \mathcal X$ and $Z \in \mathcal Z$.
The indefinite metric tensor is defined
by the left-invariant extension
of the indefinite inner product,
$\langle\, ,\,\rangle$, defined
on the solvable Lie algebra $\mathcal S$ by
$\langle\partial_t ,\partial_t\rangle =-1$,
$\langle\partial_t ,\mathcal N\rangle =0$, and
$\langle\mathcal N ,\mathcal N\rangle =
\langle\mathcal N ,\mathcal N\rangle_{\mathcal N}$.
The last formula indicates
that the original inner product is kept on the subalgebra $\mathcal N$.
Lie algebra $\mathcal S$ is considered as the tangent space at the origin
$(0,0,1)\in SN$ of the solvable group and
$\langle\, ,\,\rangle$ is extended
to a left-invariant metric $g$ onto $SN$ by the group multiplication
described above.
The scaled inner product $\langle\, ,\,\rangle_{q}$
with scaling factor $q > 0$ is defined by
$\langle \partial_t, \partial_t\rangle_{q}=-1/q^{2}$,
keeping both the inner product
on $\mathcal N$
and the relation $\partial_t\perp \mathcal N$ intact.
That is, the scaling affects only the $\partial_t$-direction.
The left-invariant
extensions of these inner products are denoted by $g_{q}$.
For precise explanations we need some explicit formulas
on these groups as well.
The left-invariant extensions, $\mathbf Y_i, \mathbf V_\alpha ,\mathbf T$,
of the unit vectors
\begin{equation}
E_i=\partial_i\quad ,\quad e_\alpha =\partial_\alpha
\quad ,\quad\epsilon =q\partial_t
\end{equation}
picked up at the
origin $(0,0,1)$ are the vector fields:
\begin{equation}
\mathbf Y_i=t^{\frac 1{2}}\mathbf X_i\quad ;\quad \mathbf V_\alpha
=t\mathbf Z_\alpha
\quad ;\quad \mathbf T=qt\partial_t ,
\end{equation}
where $\mathbf X_i$ and $\mathbf Z_\alpha$ are the
invariant vector fields introduced on $N$ previously.
One can establish these
formulas by the following standard computation. Consider the
vectors $\partial_i ,\partial_{\alpha}$, and $\partial_t$ at the origin
$(0,0,1)$ as the
tangent vectors of the curves $c_A (s)=(0,0,1)+s\partial_A$,
where $A=i,\alpha ,t$, and
transform these curves to an arbitrary point by left
multiplications.
The tangent of the transformed
curve then gives the desired left-invariant vector at that point.
According to these
formulas, not $t$ but $T$, defined by $\partial_T=\mathbf T$, is the
correct physical time parameterization on the $t$-parameter lines,
which, by the arguments below, are geodesics on $SN$. The transformation
law $\partial_T=(dt/dT)\partial_t$ yields the relations $dt/dT=qt$,
$\ln t=qT$, and $t=e^{qT}$. Thus a $t$-level set is the same as the
$T=(\ln t)/q$-level set, and the subgroup $N$ corresponds both to $t=1$
and $T=0$. The reversed time $-T$ is denoted by $\tau$.
Let $c_x(s)$ and $c_z(s)$ be integral curves, of finite length
$||c_x||$ resp. $||c_z||$,
of the invariant vector fields $\mathbf X$ and $\mathbf Z$ on the
subgroup $N$. Then the flow generated by $\partial_\tau$ moves these
curves to $c^\tau_x(s)$ resp. $c^\tau_z(s)$ of length
$||c_x^\tau ||=||c_x||e^{q\tau /2}$ resp.
$||c_z^\tau ||=||c_z||e^{q\tau}$.
That is, considering them as functions
of the time-variable $\tau$, the lengths increase
at a rate (derivative with respect to $\tau$)
proportional to the length of the curves. In other words,
this mathematical space-time model represents an expanding micro-universe
where the distance between particles grows exactly in the same way
as the growing distance Hubble measured between galaxies
in 1929 \cite{h}.
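This exponential stretching is a Hubble-type law: the rate of change of each length is proportional to the length itself, with ``Hubble constant" $q/2$ in the X-directions and $q$ in the Z-directions. A minimal numerical sketch (the values of $q$ and of the initial length are arbitrary choices):

```python
import math

q = 0.3  # illustrative scaling factor of the solvable extension

def x_length(tau, L0=1.0):
    # Length of an X-curve carried by the time-tau flow: L0*exp(q*tau/2).
    return L0*math.exp(q*tau/2)

def z_length(tau, L0=1.0):
    # Length of a Z-curve carried by the time-tau flow: L0*exp(q*tau).
    return L0*math.exp(q*tau)

def hubble_rate(length, tau, h=1e-6):
    # (dL/dtau)/L: recession rate per unit length, the analogue of
    # Hubble's constant, evaluated by a central difference.
    return (length(tau + h) - length(tau - h))/(2*h)/length(tau)
```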
Edwin Hubble came to this conclusion after
observing red-shifts in the spectra of the galaxies he surveyed for
cataloguing their distances from the Earth. It was quite a surprise
that, contrary to expectation, the
red- and blue-shifted galaxies did not occur with
equal likelihood; most of them appeared red-shifted. That is,
most of them are moving away from us. Even more surprising was the
finding that the size of a galaxy's red shift is not random either, but
is directly proportional to the galaxy's distance from us. This means
that the farther the galaxy is, the faster it is moving away. This is
the familiar Hubble's law, which was actually predicted by the Friedmann
cosmological model in 1922.
The tendency to expand must be rooted
in the very same tendency built into the microscopic universe.
This argument, however, contradicts the experimental fact
(explained in the introduction) according to which
the particles are not expanding. This paradox can be resolved
by recognizing that, due to the expansion, the change of the constant
magnetic fields, which are present in the particles
also in the new model, induces electromagnetic waves which are
immediately radiated out into space. This explanation not only clarifies
this paradox but also casts new light on the presence of the
constant radiation experimentally known in space. These
arguments greatly enhance the importance of the exact mathematical
models introduced in this paper.
The covariant derivative can be computed by the well known formula
\begin{equation}
\langle\nabla_PQ,R\rangle =
\frac 1{2}\{\langle P,[R,Q]\rangle +\langle Q,[R,P]\rangle +
\langle [P,Q],R\rangle\},
\end{equation}
where $P,Q,R$ are invariant vector fields.
Then we get
\begin{eqnarray}
\label{solvnabla}
\nabla_{X+Z}(X^*+Z^*)=\nabla^N_{X+Z}(X^*+Z^*)-q(\frac 1{2}
\langle X,X^*\rangle+\langle Z,Z^*\rangle )\mathbf T ;\nonumber
\\
\nabla_X\mathbf T=\frac q{2}X\quad ;
\quad\nabla_Z\mathbf T=qZ \quad ;\quad
\nabla_TX=\nabla_TZ=\nabla_TT=0,
\nonumber
\end{eqnarray}
where $\nabla^N$ denotes the covariant derivative on $N$,
and $X,X^*\in\mathcal X;Z,Z^*\in\mathcal Z;T\in\mathcal T.$
The Laplacian on these solvable groups can be established by the same
computation performed on $N$. Then we get
\begin{eqnarray}
\Delta=
t^{2}\Delta_Z-q^2(t^2\partial^2_{tt}+t\partial_t)+\\
t\big(\Delta_X+
\frac 1 {4}\sum_{\alpha ;\beta =1}^l
\langle J_\alpha (X),J_\beta (X)\rangle\partial^2_{\alpha \beta}
+\sum_{\alpha =1}^l\partial_\alpha D_\alpha\bullet \big)
+q^2(\frac k {2}+l)t\partial_t=
\nonumber\\
=e^{2qT}\Delta_Z-\partial^2_{TT}+\nonumber
\\
e^{qT}\big(\Delta_X+
\frac 1 {4}\sum_{\alpha ;\beta =1}^l
\langle J_\alpha (X),J_\beta (X)\rangle\partial^2_{\alpha \beta}
+\sum_{\alpha =1}^l\partial_\alpha D_\alpha\bullet \big)
+q(\frac k {2}+l)\partial_T.
\nonumber
\end{eqnarray}
This is the Laplacian on the solvable extension of a general 2-step
nilpotent Lie group. On the extension of a H-type group it appears in the
following simpler form:
\begin{equation}
\Delta =\{e^{2qT}\Delta_Z-\partial^2_{TT}\}+
\big\{e^{qT}\big(\Delta_X+
\frac 1 {4}
\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet \big)
+q(\frac k {2}+l)\partial_T\big\}
\end{equation}
This operator is expressed regarding the collapsing time direction.
The substitution $T=-\tau$ provides the operator in terms of the
expanding time direction. Note that the first operator,
$e^{-2q\tau}\Delta_Z-\partial^2_{\tau\tau}$, looks like an ``expanding
meson operator", while the second one is similar to
the Schr\"odinger operator of charged particles.
This question is further investigated in the next section.
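The passage between the two displayed forms of the Laplacian rests on the identity $q^2(t^2\partial^2_{tt}+t\partial_t)=\partial^2_{TT}$ under $t=e^{qT}$, which can be checked numerically on a sample function. A minimal sketch (the profile and the value of $q$ are arbitrary choices):

```python
import math

q = 0.7  # illustrative scaling factor

def f(t):
    # Sample radial profile along the t-parameter line.
    return math.sin(t)

def radial_part_t(t, h=1e-4):
    # q^2*(t^2 f'' + t f'), the t-form of the time part of the Laplacian,
    # evaluated by central differences.
    d1 = (f(t + h) - f(t - h))/(2*h)
    d2 = (f(t + h) - 2*f(t) + f(t - h))/h**2
    return q**2*(t**2*d2 + t*d1)

def radial_part_T(T, h=1e-4):
    # d^2/dT^2 of F(T) = f(exp(q*T)), the T-form of the same operator.
    F = lambda s: f(math.exp(q*s))
    return (F(T + h) - 2*F(T) + F(T - h))/h**2
```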
In order to understand the deeper connections to general relativity,
the Riemannian curvature should also be computed explicitly. This
calculation can be straightforwardly implemented by substituting
formulas (\ref{solvnabla}) into the standard
formula of the Riemannian curvature. Then we get
\begin{eqnarray}
R_q(X^*\wedge X)=R(X^*\wedge X)+ \frac q{2}[X^*,X]\wedge\mathbf T+
\frac {q^2}{4}X^*\wedge X ;
\\
R_q(X\wedge Z)=R(X\wedge Z)+\frac q{4}J_Z(X)\wedge\mathbf T+\frac{q^2}{2}
X\wedge Z ;
\\
R_q(Z^*\wedge Z)=R(Z^*\wedge Z)+q^2Z^*\wedge Z;\\
R_q((X+Z),\mathbf T)(.)=q\nabla_{\frac 1{2}X+Z} (.); \quad
R_q((X+Z)\wedge\mathbf T)=
\\
=\frac{1}{2}q(\sum_\alpha
J_\alpha(X)\wedge e_\alpha
-J^*_Z)-q^2(\frac{1}{4}X+Z)\wedge\mathbf T ,
\end{eqnarray}
where $J^*_Z$ is the 2-vector dual to the
2-form $\langle J_Z(X_1),X_2\rangle$ and
$R$ is the Riemann curvature on
$N$. The vectors in these formulas are elements of the Lie algebra.
By introducing $H(X,X^*,Z,Z^*):=\langle J_Z(X),J_{Z^*}(X^*)\rangle$,
for the Ricci curvature
we have
\begin{eqnarray}
Ri_q(X)=Ri(X)-q^2(\frac k{4}+\frac l{2})X;\\
Ri_q(Z)=Ri(Z)-q^2(\frac k{2}+l)Z\quad ;\quad
Ri_q(T)=+q^2(\frac k{4}+l)T,
\end{eqnarray}
where the Ricci tensor $Ri$ on $N$ is described
by formulas
\begin{eqnarray}
Ri(X,X^*)=-\frac 1 {2} \sum_{\alpha =1}^lH(X,X^*,e_\alpha ,e_\alpha )=
-\frac 1{2}H_{\mathcal X}(X,X^*)=-\frac l{2}\langle X,X^*\rangle ;
\nonumber
\\
Ri(Z,Z^*)=\frac 1 {4} \sum_{i=1}^k
H(E_i,E_i,Z,Z^*)=\frac 1{4}H_{\mathcal Z}(Z,Z^*)=\frac k {4}
\langle Z,Z^*\rangle ,
\nonumber
\end{eqnarray}
and by
$Ri(X,Z)=0$. By assuming $q=1$, we have
\begin{eqnarray}
Ri_1(X)=-(\frac k{4}+l)X\, ;\,
Ri_1(Z)=-(\frac k{4}+l)Z;\\
Ri_1(T)=(\frac k{4}+l)T, \quad
\mathcal R=-(\frac k{4}+l)(k+l+1),
\\
Ri_1(X+Z,X^*+Z^*)-\frac 1{2}\mathcal R\langle X+Z,X^*+Z^*\rangle =\\
=(\frac k{4}+l)(k+l-\frac 1{2})\langle X+Z,X^*+Z^*\rangle ,
\\
Ri_1(X+Z,T)-\frac 1{2}\mathcal R\langle X+Z,T\rangle =0,
\\
Ri_1(T,T)-\frac 1{2}\mathcal R\langle T,T\rangle =(\frac k{4}+l)
(k+l+\frac 3{2})\langle T,T\rangle .
\end{eqnarray}
These tensors are defined in terms of the elements of the Lie algebra.
In order to compute them in local coordinates, first
the metric tensor
$
g_{ij}=g(\partial_i,
\partial_j)\, ,\, g_{i\alpha}=g ( \partial_i,
\partial_\alpha )\, ,\, g_{\alpha\beta}=g(\partial_\alpha
,\partial_\beta )
$
and its inverse, $g^{ij},g^{i\alpha},g^{\alpha\beta}$, on $N$,
need to be calculated. By the explicit
form of the invariant vector fields we have:
\begin{eqnarray}
g_{ij}=\delta_{ij} + \frac 1 {4}\sum_{\alpha =1}^l
\langle J_\alpha (X),\partial_i\rangle\langle J_\alpha
(X),\partial_j\rangle ,\,\, g_{\alpha\beta}=\delta_{\alpha \beta} ,
\\
g_{i\alpha}=-\frac 1 {2} \langle J_\alpha (X),\partial_i\rangle ,
\,\,
g^{ij}=\delta_{ij} , \,\, g^{i\alpha}=\frac 1 {2} \langle\partial_i,
J_\alpha (X)\rangle ,
\\
g^{\alpha \beta}=
\delta_{\alpha \beta} + \frac 1 {4} \langle J_\alpha (X),
J_\beta (X)\rangle =(1+\frac 1 {4}\mathbf x^2) \delta_{\alpha \beta}.
\end{eqnarray}
These components determine the metric tensor on $SN$
by the formulas
$
tg_{ij}\, ,\, t^{3/2}g_{i\alpha}\, ,\, t^2g_{\alpha\beta}
$.
\subsection{Static Schr\"odinger and neutrino equations.}
The static model is defined by the Cartesian product, $N\times\mathbb R$,
of metrics, where $\mathbb R$,
parameterized by $t$, is endowed with the indefinite inner
product $\langle \partial_t, \partial_t\rangle_{q}=-1/q^{2}$.
Various objects,
such as the Riemann curvature, can be computed by the laws governing
Cartesian products; thus they are non-trivial only in the
nilpotent direction. The explicit formulas can easily be
recovered from the previous ones.
In what follows
we utilize Pauli's computation (\ref{pnonrel_1})-(\ref{nonrel_waveeq})
regarding the non-relativistic approximation of the relativistic
wave equation. By choosing $q=1/c$,
the Laplacian appears in the following form:
\begin{eqnarray}
\label{lapl}
\Delta =
(\Delta_Z-\frac 1{c^2}\partial^2_{tt})+
\big(\Delta_X+\frac 1 {4}\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet \big)
=\\
=
(\Delta_Z+\frac{2m\mathbf i}{\hbar}\partial_t-\frac 1{c^2}\partial^2_{tt})
+\big(\Delta_X+\frac 1 {4}\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet -\frac{2m\mathbf i}{\hbar}
\partial_t \big).
\nonumber
\end{eqnarray}
On the Z-space, operator $\Delta_Z-\frac 1{c^2}\partial^2_{tt}$ is nothing
but the wave operator (\ref{y1}). According to Yukawa's exposition,
the eigenfunctions, $U$, of this operator describe
the eigenstates of nuclear forces.
Due to (\ref{waveeq}), the general solutions of this wave equation
are de Broglie's wave packets (\ref{wavepack}). On the mathematical model,
however, these wave packets are
represented by twisted functions of the form
\begin{eqnarray}
\psi_{\mathbf Bp_iq_i} (X,Z,t)=
\\
=\int_{\mathbb R^l} e^{\mathbf i(\langle Z,K\rangle -\omega t)}
\phi (\mathbf x,\mathbf k)\varphi (K_u)
\prod_{i=1}^{k/2}z^{p_i}_{K_ui}(X)\overline z^{q_i}_{K_ui}(X)dK=
\\
=\int_{\mathbb R^l} e^{\mathbf i(\langle Z,K\rangle -\omega t)}
\phi (\mathbf x,\mathbf k)F_{\mathbf Bp_iq_i} (X,K_u)dK,
\\
{\rm where}\quad\quad
\sqrt{\mathbf k^2+\frac{m^2c^2}{\hbar^2}}=\frac{\omega}{c}.
\end{eqnarray}
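Note that the last relation is just the relativistic energy-momentum
relation written for the de Broglie quantities $E=\hbar\omega$ and
$\mathbf p=\hbar\mathbf k$; squaring both sides and multiplying
by $\hbar^2c^2$ gives
\begin{eqnarray}
E^2=\hbar^2\omega^2=c^2\hbar^2\mathbf k^2+m^2c^4=c^2\mathbf p^2+m^2c^4.
\nonumber
\end{eqnarray}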
Wave packets $\psi_{Qpq} (X,Z,t)$ are defined by means of the functions
\begin{equation}
F_{Qpq} (X,K_u)=\varphi (K_u) (\Theta_{Q}^p\overline\Theta^q_{Q})(X,K_u).
\end{equation}
They are also defined for Z-sphere bundles
$S_R(\mathbf x)$; these versions are
indicated by the notations
$\psi_{\mathbf Bp_iq_iS_R}$ and $\psi_{QpqS_R}$.
In this notation, $\psi_{\mathbf Bp_iq_i\mathbb R^l}$ and
$\psi_{Qpq\mathbb R^l}$ correspond to the wave packets
introduced above.
For a fixed Z-vector, $Z_\gamma$, the corresponding notations
are
$\psi_{\mathbf Bp_iq_iZ_\gamma}$ and $\psi_{QpqZ_\gamma}$,
where $\mathbf B$
is an orthonormal basis regarding $J_{Z{\gamma u}}$. In this case the
integral is taken with respect to the Dirac delta measure concentrated
at $Z_\gamma$, thus these formulas can be written up without indicating
this integral or constant $\varphi (K_u)$. By projections $\Pi_X^{(n)}$,
one defines
\begin{equation}
\Psi_{....}(X,Z,t)=
\int_{\circ} e^{\mathbf i(\langle Z,K\rangle -\omega t)}
\phi (\mathbf x,\mathbf k)\Pi_X^{(n)}F_{...} (X,K_u)dK,
\end{equation}
where the dots $....$ can be substituted by any of the symbols
$\mathbf Bp_iq_i\mathbb R^l,\,Qpq\mathbb R^l$, etc., and the circle,
$\circ$, can symbolize any of the integral domains
$\mathbb R^l,\, S_R,\, Z_\gamma$.
If also projections $\Pi_K^{(r,s)}$ are applied to $F_{...}$,
the corresponding functions are $\psi^{(r,s)}_{....}$ resp.
$\Psi^{(r,s)}_{....}$. This operation makes sense only for
integral domains $\mathbb R^l$ or $S_R(\mathbf x)$
but it is not defined for the
singular Dirac delta domain $Z_\gamma$.
Anti de Broglie wave packets are defined by replacing $-\omega$ with
$+\omega$ in the above formulas. The corresponding functions are denoted
by $\psi^{anti}_{....}$ and $\Psi^{anti}_{....}$. The associated particles
are called antiparticles. These objects can be introduced also by keeping
$-\omega$ and replacing $\mathbf i$ by $-\mathbf i$.
As indicated, the right side of (\ref{lapl}) is computed by adding
$\frac{2m\mathbf i}{\hbar}\partial_t-
\frac{2m\mathbf i}{\hbar}\partial_t=0$
to the left side.
Then the wave operator
associated with nuclear forces becomes
\begin{equation}
\label{N}
\mathit N=\Delta_Z+\frac{2m\mathbf i}{\hbar}
\partial_t-\frac 1{c^2}\partial^2_{tt},
\end{equation}
which is the non-relativistic wave operator established in
(\ref{nonrel_waveeq}).
As it is explained in (\ref{nonrel_wavepack})-(\ref{nonrel_waveeq}),
the $\mathit N$-harmonic waves, defined by
$\mathit N (\tilde \psi )(Z,t)=0$,
relate to the relativistic wave by the formula
\begin{equation}
\psi (Z,t)=e^{-\frac{\mathbf imc^2}{\hbar}t} \tilde \psi (Z,t).
\end{equation}
Also remember that frequency $\tilde\omega$ is derived from
\begin{equation}
\omega =\frac{E}{\hbar}
=\frac{mc^2}{\hbar}\sqrt{1+ \frac{\hbar^2\mathbf k^2}{m^2c^2}}=
\frac{mc^2}{\hbar}+\tilde\omega
=\frac{mc^2}{\hbar}+\frac{\hbar}{2m}\mathbf k^2+\dots ,
\end{equation}
by the Taylor expansion of the function $\sqrt{1+x}$. Thus the third term
depends on $\hbar^3$ and, stepping further,
this exponent is increased by $2$ at each step. For low-speed
particles, the value $\tilde\omega =\frac{\hbar}{2m}\mathbf k^2$ is a good
approximation of the frequency; thus $E=\hbar\tilde\omega$ is also a good
approximation for the
energy of the particle associated with this non-relativistic wave.
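For definiteness, the first terms of this expansion can be spelled out
(by $\sqrt{1+x}=1+\frac 1{2}x-\frac 1{8}x^2+\dots$ with
$x=\hbar^2\mathbf k^2/m^2c^2$):
\begin{eqnarray}
\omega =\frac{mc^2}{\hbar}\big(1+\frac{\hbar^2\mathbf k^2}{2m^2c^2}
-\frac{\hbar^4\mathbf k^4}{8m^4c^4}+\dots\big)=
\frac{mc^2}{\hbar}+\frac{\hbar}{2m}\mathbf k^2
-\frac{\hbar^3\mathbf k^4}{8m^3c^2}+\dots ,
\nonumber
\end{eqnarray}
in which the third term indeed depends on $\hbar^3$.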
Note that $E=E_{kin}$ is nothing but the kinetic energy owned by the
material particle. For this reason, the particle associated with the wave
operator $\mathit N$
can be considered as one of the residues of a decaying material particle:
it has neither mass nor charge, and the only source of its energy is
the kinetic energy of the decaying material particle. Such particles are
the neutrinos; thus $\mathit N$ is called the neutrino operator
accompanying the electron-positron system.
The energy $mc^2$ of the material particle is completely attributed to the
other particle associated with the second operator
\begin{equation}
\label{S}
\mathit S=\Delta_X+\frac 1 {4}\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet -\frac{2m\mathbf i}{\hbar}
\partial_t ,
\end{equation}
incorporated into the Laplacian (\ref{lapl}). In order to understand
the particle represented by this operator, we first introduce the
de Broglie wave packets $\tilde \Psi_{....}(X,Z,t)$ and
$\tilde \Psi^{anti}_{....}(X,Z,t)$ in the same way as before, but now
$\omega$ is replaced by $\tilde\omega$, which can take values such as
$\frac{\hbar}{2m}\mathbf k^2,\frac{\hbar}{2m}(4r+4p+k)\mu ,
\frac{\hbar}{2m}((4r+4p+k)\mu +4\mu^2)$.
By (\ref{land}), the Schr\"odinger equation for an electron is:
\begin{eqnarray}
\label{land_schr}
-\big(\Delta_{(x,y)} -
{ eB\over \hbar c\mathbf i}
D_z\bullet
+{e^2B^2\over 4\hbar^2 c^2}(x^2+y^2) \big)\psi =
\frac{2m\mathbf i}{\hbar}\frac{\partial \psi}{\partial t}.
\end{eqnarray}
On the Z-crystal model, operator $\lhd_\mu$ is defined by the
action of the Laplacian $\Delta$ of the nilpotent group on functions
of the form
$\psi (X)e^{2\pi\mathbf i\langle Z_\gamma ,Z\rangle}$.
In terms of $\lambda =\pi \mathbf z_\gamma $, this action can
be described as acting only on $\psi$ by the operator
$
\lhd_{\mu}=
\Delta_X +2 \mathbf i D_{\mu }\bullet -\mu^2\mathbf x^2-4\mu^2.
$
If the last constant term is omitted, the operator left is denoted by
$\sqcup_\mu$.
Then, in the 2D case, after the substitution $\mu ={eB/2\hbar c}$,
the negative of this reduced operator becomes nothing but the
Hamilton operator standing on the left-hand side of the
above Schr\"odinger equation.
If $\tilde K=2\pi Z_\gamma\, ,\, \mu =\tilde{\mathbf k}/2\, ,\,
m=\kappa m_e$, and
$f_\mu (\mathbf x^2)$ is a function such that
$f_\mu (\tilde t)$ is an eigenfunction
of the radial Landau-Zeeman operator $\Diamond_{\tilde t}+4\mu^2$,
defined in (\ref{Lf_lambda}), with eigenvalue
$-\tilde\omega =-(4r+4p+k)\mu$, then for
$ \mathit S\big(\tilde\Psi^{anti}_{...\tilde K}(X,Z,t)\big)$ we have:
\begin{eqnarray}
\mathit S\big(
e^{\mathbf i(\langle Z,\tilde K\rangle +\frac{\hbar}{2m}\tilde\omega t)}
f_\mu (\mathbf x^2)\Pi_X^{(n)}F_{...} (X,\tilde K_u)\big)=
\\
=e^{\mathbf i\langle Z,\tilde K\rangle}(\sqcup_\mu -
\frac{2m\mathbf i}{\hbar}\frac{\partial}{\partial t})\big(
e^{\frac{\hbar\mathbf i}{2m}\tilde\omega t}
f_\mu (\mathbf x^2)\Pi_X^{(n)}F_{...} (X,\tilde K_u)\big)=0.
\nonumber
\end{eqnarray}
Thus on Z-crystals, operator $\mathit S$ is nothing but
Schr\"odinger's classical wave operator of an electron-positron system.
Note that no
non-relativistic limiting was used to obtain this operator; it is
naturally incorporated into the complete
Laplacian $\Delta$. Although it is the same as the non-relativistic wave
operator obtained earlier by non-relativistic limiting, not even the
neutrino operator, $\mathit N$, is the result of a non-relativistic
limiting.
The Laplacian $\Delta$ is the sum of these two natural operators, meaning
that it actually represents a system consisting of electrons, positrons,
and electron-positron-neutrinos.
The above arguments also suggest that this system can be regarded as the
result of a sort of nucleus-decay. A rigorous theory describing this
process is yet to be established. It is clear, however, that
the basic mathematical tool underlying this physical theory must be
the decomposition of the Laplacian into operators corresponding to the
constituents of a given particle system. Dealing with the Laplacian means
that one does not violate the principle of energy conservation. Moreover,
this tool also provides the exact operators associated with the particles,
which is the most attractive new feature of these exact mathematical
models.
Actually, the elementary particles discovered in classical quantum
theory were introduced by the very same idea.
For instance, the neutrino was first postulated in 1930 by
Wolfgang Pauli to preserve conservation of energy, conservation of
momentum, and conservation of angular momentum in beta decay, that is,
the decay of a neutron into a proton, an electron and an antineutrino.
Pauli theorized that an undetected particle was carrying away
the observed difference between the energy, momentum,
and angular momentum of the initial and final particles. The only
difference between the two ways of introducing the neutrinos is that
Pauli did not have at hand a Riemannian manifold in whose Laplacian
he could have separated the neutrino from the other particles
resulting from the decay.
The only term in the Laplacian containing
second order derivatives regarding
the time variable $t$ is the neutrino operator.
This term is of first order in the Schr\"odinger operator. Because
of this, waves $\tilde\Psi$ are not solutions of the neutrino
operator and waves $\tilde\psi$ obtained above by the Taylor expansion are
not solutions of the Schr\"odinger equation. In order to cope with this
difficulty, the non-relativistic approximation
can be implemented such that one attributes the
kinetic energy represented by $\Delta_Z$ in the neutrino operator
to the Hamilton
operator associated with $\mathit S$ by considering the total
Schr\"odinger operator
\begin{equation}
\label{TS}
{\mathbb S}=\Delta_X+(1+\frac 1 {4}\mathbf x^2)\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet -\frac{2m\mathbf i}{\hbar}
\partial_t ,
\end{equation}
which is the sum of the Schr\"odinger operator and $\Delta_Z$.
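Indeed, adding $\Delta_Z$ to (\ref{S}) term by term gives
\begin{eqnarray}
\mathit S+\Delta_Z=\Delta_X+\frac 1 {4}\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet -\frac{2m\mathbf i}{\hbar}
\partial_t+\Delta_Z
\nonumber
\\
=\Delta_X+(1+\frac 1 {4}\mathbf x^2)\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet -\frac{2m\mathbf i}{\hbar}
\partial_t={\mathbb S}.
\nonumber
\end{eqnarray}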
In this step, the two operators are pulled together to form an operator
which is of first order in the time variable.
This scheme is completely analogous to the one applied by Schr\"odinger
when he introduced his equation in place of the Klein-Gordon equation.
A major
difference, however, is that the above operator also accounts for the
energy of the neutrinos accompanying the electron-positron system;
moreover,
the non-relativistic approximation is applied to the neutrino operator
and not to the electron-positron operator. The neglected Taylor terms
in this approximation depend on $\hbar^s$, where $s\geq 3$.
The wave functions regarding this pulled-together operator
are defined by the eigenvalues
$-\tilde\omega =-((4r+4p+k)\mu +4\mu^2)$. Then,
in terms of
$\mathbf F^{(n)}_{...} (X,\tilde K_u)=\Pi_X^{(n)}F_{...} (X,\tilde K_u)$,
we have:
\begin{eqnarray}
\mathbb S\big(\tilde\Psi^{anti}_{...\tilde K}(X,Z,t)\big)=\mathbb S\big(
e^{\mathbf i(\langle Z,\tilde K\rangle +\frac{\hbar}{2m}\tilde\omega t)}
f_\mu (\mathbf x^2)\mathbf F^{(n)}_{...} (X,\tilde K_u)\big)=
\\
=e^{\mathbf i\langle Z,\tilde K\rangle}(\lhd_\lambda -
\frac{2m\mathbf i}{\hbar}\frac{\partial}{\partial t})\big(
e^{\frac{\hbar\mathbf i}{2m}\tilde\omega t}
f_\mu (\mathbf x^2)\mathbf F^{(n)}_{...} (X,\tilde K_u)\big)=0.
\nonumber
\end{eqnarray}
\begin{eqnarray}
\mathbb S\big(\tilde\Psi^{anti}_{...S_R}\big)=\mathbb S\big(\oint_{S_R}
e^{\mathbf i(\langle Z,K\rangle +\frac{\hbar}{2m}\tilde\omega t)}
f_{\frac 1{2}\mathbf k} (\mathbf x^2)\mathbf F^{(n)}_{...} (X,K)dK_n\big)=
\\
=\oint_{S_R}e^{\mathbf i\langle Z,K\rangle}(\lhd_{\frac 1{2}\mathbf k} -
\frac{2m\mathbf i}{\hbar}\frac{\partial}{\partial t})\big(
e^{\frac{\hbar\mathbf i}{2m}\tilde\omega t}
f_{\frac 1{2}\mathbf k}
(\mathbf x^2)\mathbf F^{(n)}_{...} (X,K)dK_n\big)=0.
\nonumber
\end{eqnarray}
\begin{eqnarray}
\mathbb S\big(\tilde\Psi^{anti}_{...\mathbb R^l}\big)=
\mathbb S\big(\int_{\mathbb R^l}
e^{\mathbf i(\langle Z,K\rangle +\frac{\hbar}{2m}\tilde\omega t)}
f_{\frac 1{2}\mathbf k} (\mathbf x^2)\mathbf F^{(n)}_{...} (X,K_u)dK\big)=
\\
\int e^{\mathbf i\langle Z,K\rangle}
(\lhd_{\frac 1{2}\mathbf k} -
\frac{2m\mathbf i}{\hbar}\frac{\partial}{\partial t})\big(
e^{\frac{\hbar\mathbf i}{2m}\tilde\omega t}
f_{\frac 1{2}\mathbf k}
(\mathbf x^2)\mathbf F^{(n)}_{...}
(X,K_u)\big)\mathbf k^{l-1}dK_ud\mathbf k=0.
\nonumber
\end{eqnarray}
The same formulas hold for operator $\mathit S$ which appears
as the classical Schr\"odinger operator
$\sqcup_{\frac 1{2}\mathbf k} -\frac{2m\mathbf i}{\hbar}
\frac{\partial}{\partial t}$ behind the integral sign.
Similar arguments work out also for operator $\OE$, in which case
the Schr\"odinger operators act on wave functions
$\tilde\Psi^{anti(r,s)}_{...S_R}(X,Z,t)$. This is still a scalar operator
which can be reduced to a radial operator acting on a single radial
function.
The radial operator to which the complete operator $\Delta$ can be reduced
acts on d-tuples of radial functions; therefore, integral formulas
regarding
these cases must be built up in terms of the functions
$f_\beta\Pi^{(\beta )}_{K_u}\mathbf F^{(n)}$, where the d-tuple
$(f_1,\dots ,f_d)$ is an eigen-d-tuple of the reduced radial operator.
The particles defined by these operators are denoted by
$W_{\OE} =W_{\OE^{(1)}}$ resp.
$W_\Delta =W_\Delta^{(\infty)}$. They are called
clean-weak and clean-high W-particles respectively, while the other
particles $W^{(u)}_{\OE} =W_{\OE^{(u)}}$ resp.
$W^{(u)}_\Delta =W_{\Delta^{(u)}}$ are the so-called dirty W-particles.
The neutrino operator is the same in all of these cases, thus
the associated particles are denoted $Z_{\OE}$. These notations
are suggested by the theory of weak nuclear forces. They indicate that
W-type particles can analogously be defined also regarding strong forces.
However, the beta decay can be explained just by the clean weak nuclear
forces.
\subsection{Expanding Schr\"odinger and neutrino equations.}
For the sake of simplicity the following formulas are established regarding
the collapsing (shrinking) time-direction $T$ under the condition
$q=1$. Formulas regarding the expanding
time-direction $\tau$ can be obtained
by the substitution $T=-\tau$. Instead of $t$, the expanding wave functions
are introduced in terms of $e^T$.
That is, the shrinking twisted wave packets are of the form
\begin{eqnarray}
\Psi_{....}(X,Z,T)=
\int_{\circ} e^{\mathbf i(\langle Z,K\rangle -\omega e^T)}
\phi (\mathbf x,\mathbf k)\Pi_X^{(n)}F_{...} (X,K_u)dK
\\
=\int_{\circ} e^{\mathbf i(\langle Z,K\rangle -\omega e^T)}
\phi (\mathbf x,\mathbf k)\mathbf F^{(n)}_{...} (X,K_u)dK,
\nonumber
\end{eqnarray}
where
$
\sqrt{\mathbf k^2+\frac{m^2c^2}{\hbar^2}}=\frac{\omega}{c},
$
and, as above, the dots $....$ can be substituted by any of the symbols
$\mathbf Bp_iq_i\mathbb R^l,\,Qpq\mathbb R^l$, etc., and the circle,
$\circ$, can symbolize any of the integral domains
$\mathbb R^l,\, S_R,\, Z_\gamma$.
De Broglie's wave packets $\tilde \Psi_{....}(X,Z,T)$ and
$\tilde \Psi^{anti}_{....}(X,Z,T)$ are introduced by the same modification,
that is, the $\omega$ is replaced by $\tilde\omega$ in the latter formula,
which can take values such as
$\frac{\hbar}{2m}\mathbf k^2,\frac{\hbar}{2m}(4r+4p+k)\mu,
\frac{\hbar}{2m}((4r+4p+k)\mu +4\mu^2)$.
The meson operator appears now in the form:
\begin{equation}
\label{M}
\mathit M=e^{2T}\Delta_Z+\partial_T-\partial^2_{TT}.
\end{equation}
The same computation as implemented on the static model shows that the
shrinking matter waves $\Psi_{....}(X,Z,T)$, defined in terms of $\omega$,
are really harmonic regarding this operator,
meaning $\mathit M \Psi_{....}=0$. Moreover, wave packet
$\hat \Psi (X,Z,T)$ defined by
\begin{equation}
\Psi (X,Z,T)=e^{-\frac{\mathbf imc^2}{\hbar}e^T} \hat \Psi (X,Z,T)
\end{equation}
is harmonic regarding the shrinking neutrino operator
\begin{equation}
\label{NtrnoS}
\mathit N=
e^{2T}\Delta_Z+(1+\frac{2m\mathbf i}{\hbar}e^T)\partial_T-\partial^2_{TT}.
\end{equation}
According to this computation, the corresponding decomposition
of the Laplacian into non-polarized neutrino and Schr\"odinger operator
of a particle system is as follows
\begin{eqnarray}
\Delta =\{e^{2T}\Delta_Z-\partial^2_{TT}\}+
\\
+\big\{e^{T}\big(\Delta_X+\frac 1 {4}\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet \big)
+(\frac k {2}+l)\partial_T\big\}=
\nonumber
\\
=
\big\{ e^{2T}\Delta_Z+(1+\frac{2m\mathbf i}{\hbar}e^T)\partial_T-
\partial^2_{TT}\big\} +
\\
+\big\{e^{T}\big(\Delta_X+\frac 1 {4}\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet \big)
+(\frac k {2}+l-1-\frac{2m\mathbf i}{\hbar}e^T)\partial_T\big\}
\nonumber
\\
=
\big( e^{2T}\Delta_Z+(1+\frac{2m\mathbf i}{\hbar}e^T)\partial_T-
\partial^2_{TT}\big) +
\\
+e^{T}\big(\Delta_X+\frac 1 {4}\mathbf x^2\Delta_Z
+\sum\partial_\alpha D_\alpha\bullet
-\frac{2m\mathbf i}{\hbar}\partial_T \big)
+\big(\frac k {2}+l-1\big)\partial_T.
\nonumber
\end{eqnarray}
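The bookkeeping behind the last step can be checked directly: the second
decomposition is obtained from the first by transferring the term
$(1+\frac{2m\mathbf i}{\hbar}e^T)\partial_T$ from the second bracket to
the first; indeed, the $\partial_T$-coefficients add up correctly:
\begin{eqnarray}
\big(1+\frac{2m\mathbf i}{\hbar}e^T\big)
+\big(\frac k {2}+l-1-\frac{2m\mathbf i}{\hbar}e^T\big)=\frac k {2}+l.
\nonumber
\end{eqnarray}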
In terms of $\tau =-T$, these operators define the expanding
non-polarized neutrino, Schr\"odinger, and tractor operators respectively.
The force associated with the third operator supplies the energy
needed to maintain the expansion. Let it also be pointed out that,
according to these models, the particles are not just moving away from
each other but this movement is also accelerating. This acceleration
can be computed by taking the second derivatives of the distance
function introduced when explaining the expansion, and it
can be explained just by this new force represented
by the third operator.
It is important to keep in mind that these operators are non-polarized.
The polarized operators appear behind the integral sign after the
non-polarized operators act on the de Broglie waves expressed
by means of twisted Z-Fourier transforms.
\section{Spectral isotropy.}
Spectral isotropy means that, on an arbitrary ball$\times$ball-
or sphere$\times$ball-type manifold with a fixed boundary condition,
for any two unit X-vectors
$Q$ and $\tilde Q$, the Laplacian is isospectral on the invariant
function spaces
$
\mathbf \Xi_{Q}=\sum_n\mathbf \Xi_{Q}^{(n)}
$
and
$
\mathbf \Xi_{\tilde Q}=\sum_n\mathbf \Xi_{\tilde Q}^{(n)}
$
satisfying the given boundary condition. Recall that total space
$\mathbf \Xi_{Q}$ is everywhere dense in the straight space spanned
by functions of the form $f(|X|,Z)\langle Q,X\rangle$, furthermore,
the boundary conditions can totally be controlled by $(X,Z)$-radial
functions, therefore, this total function space is the same as the one
defined in terms of the straight functions.
Next we prove that any of the Heisenberg type groups is spectrally
isotropic. On general 2-step nilpotent Lie
groups, where the endomorphisms $J_Z$ can have distinct eigenvalues,
this statement can be established just in a much weaker form not discussed
in this paper. Contrary to these general cases, the H-type groups
have the distinguishing characteristic that they represent systems
consisting of identical
particles and their anti-particles. Also note that on the expanding model
this spectral
isotropy explains why the radiation induced by the change of
the constant magnetic field attached to the spin operator is the
same whichever direction it is measured from. This radiation isotropy,
which has been measured with great accuracy, actually indicates that
the Heisenberg type groups are enough to describe the elementary particles
and there is no need to involve more general 2-step nilpotent Lie groups
in this new theory.
This spectral isotropy is established by the intertwining operator
$
\omega_{Q\tilde Qpq\bullet}:
\mathbf \Xi_{Qpq.}\to
\mathbf \Xi_{\tilde Qpq\bullet}
$,
defined by
\begin{eqnarray}
\label{intertw1}
\mathcal{HF}_{Qpq\bullet}(\phi )=
\int_{\bullet} e^{\mathbf i\langle Z,K\rangle}
\phi (\mathbf x,K)\Pi_X^{(n)}(\Theta_{Q}^p\overline\Theta^q_{Q})(X,K_u)
dK_\bullet\to
\\
\to \mathcal{HF}_{\tilde Qpq}(\phi )=
\int_{\bullet} e^{\mathbf i\langle Z,K\rangle}
\phi (\mathbf x,K)\Pi_X^{(n)}(\Theta_{\tilde Q}^p
\overline\Theta_{\tilde Q}^q)(X,K_u)dK_{\bullet} ,
\nonumber
\end{eqnarray}
where heavy dot $\bullet$ may represent $R_Z(\mathbf x)$ or $\mathbb R^l$.
It indicates the function spaces on which this operator is defined.
The very same operators can also be defined by corresponding to each
other the Hankel functions obtained by the Hankel decomposition of the
above functions. This statement immediately follows from the fact that
the sums of the Hankel components restore the original functions in
the above formulas. Also note that the $(X,Z)$-radial Hankel functions
are the same for the two corresponding functions; they differ
from each other just by the Hankel polynomials obtained by the
projections. Thus the operator defined by Hankel decomposition
must really be the same as the above operator. It follows that operator
(\ref{intertw1}) preserves the Hankel decompositions.
Another remarkable property of this transform is
that it can be induced by
point transformations
of the form $(O_{\tilde QQ},id_Z)$,
where $id_Z$ is the identity map on the Z-space and $O_{\tilde QQ}$
is orthogonal transformation on the X-space, transforming
subspace $S_{\tilde Q}$, spanned by $\tilde Q$ and all
$J_{Z_u}(\tilde Q)$, onto the similarly defined
$S_{Q}$. This part of the
map is uniquely determined, whereas, between the complement
X-spaces, it can be an arbitrary orthogonal transformation. One should keep in
mind that such a point transformation pulls back a function just from
$\Xi_{Q\bullet}$ into the function space $\Xi_{\tilde Q\bullet}$ and it is
not defined for the whole $L^2$ function spaces.
Next we prove that this operator intertwines the restrictions of
the Laplacian to these invariant subspaces, term by term. Since X-radial
functions are mapped to the same X-radial functions, furthermore,
also X-spherical harmonics of the same degree are intertwined with
each other, the statement holds for $\Delta_X$. This part of the statement
can be settled also by the formula
$
\Delta_X=\sum\partial_{z_{K_ui}}\partial_{\overline z_{K_ui}}
$
written up in a coordinate system established by an orthonormal
complex basis where
$Q$ is the first element of this basis
$\{Q_{K_ui}\}$, for all $K_u$. That is,
$\Theta_Q=z_1$ holds and the statement really follows by the above formula.
A third proof can be derived from the fact that this operator is induced
by the above described point transformation.
Due to the relations
\begin{eqnarray}
\label{MF}
\mathbf M\mathcal F_{Qpq}(\phi )=
\mathcal F_{Qpq}((q-p)\mathbf k\phi )\, ,\,
\Delta_Z\mathcal F_{Qpq}(\phi )=
\mathcal F_{Qpq}(-\mathbf k^2\phi ),\\
\label{HMF}
\mathbf M\mathcal{HF}_{Qpq}(\phi )
=\mathcal{HF}_{Qpq}((q-p)\mathbf k\phi )\, ,\,
\Delta_Z\mathcal {HF}_{Qpq}(\phi )=
\mathcal {HF}_{Qpq}(-\mathbf k^2\phi ),
\nonumber
\end{eqnarray}
the other parts of the Laplacians are also
obviously intertwined. In these formulas, the second line follows
from the first
one by the commutativity of operator $D_K\bullet$ with the
projection $\Pi_X$.
The intertwining property regarding the Dirichlet or Z-Neumann conditions
on ball$\times$ball- resp. sphere$\times$ball-type domains can also
be easily established either by the above point transformations,
or with the help of the Hankel transform implying that functions of
the form $f(|X|,|Z|)$ appearing in the transform are intertwined
with themselves. Since the
boundary conditions are expressed in terms of these double radial
functions, this argument provides a second proof for the statement.
A third proof can be obtained by the explicit formulas established for
twisted functions satisfying these boundary conditions.
The most interesting new feature of this spectral isotropy is that
it holds even in cases when the space is not spatially isotropic.
Let it be recalled that spatial-isotropy is the first assumption of
the Friedmann model, and the overwhelming evidence supporting this
assumption was exactly the isotropic radiation measured by Penzias
and Wilson, in 1965.
The mathematical models demonstrate, however, that the spectral
isotropy manifests itself even in much more general situations when the
space is not spatially isotropic. In order to explain
this situation more clearly, we describe the isometries of H-type groups
in more detail.
Generically speaking, these groups are not isotropic regarding the
X-space. They satisfy this property just on very rare occasions.
Starting with Heisenberg-type groups $H^{(a,b)}_3$, there is a subgroup,
$\mathbf {Sp}(a)\times \mathbf{Sp}(b)$,
of isometries which acts as the identity
on the Z-space and which acts transitively just on the X-sphere
of $H_3^{(a+b,0)}$. In this case the intertwining property for operators
$\omega_{Q\tilde Q,\bullet}$ also follows from the existence of
isometries transforming $Q$ to $\tilde Q$. But the isometries
are not transitive on the X-spheres of the other spaces satisfying
$ab\not =0$.
This statement follows from the fact that the complete group
of isometries is
$(\mathbf {Sp}(a)\times \mathbf{Sp}(b))\cdot SO(3)$,
where the action of $SO(3)$, described in terms of unit quaternions
$q$ by
\[
\alpha_q(X_1,\dots ,X_{a+b},Z)=
(qX_1\overline q,\dots ,qX_{a+b}\overline q,qZ\overline q),
\]
is transitive on the Z-sphere.
Thus the above tool is not available to prove the spectral
isotropy in these cases. Yet, by the above arguments, the
$\omega_{Q\tilde Q\bullet}$ is an intertwining operator in its own right,
without the help of the isometries.
Note that the members of a family defined
by a fixed number $a+b$ have the same X-space but non-isomorphic
isometry groups having
different dimensions in general. More precisely, two members,
$H^{(a,b)}_3$ and $H^{(a^\prime ,b^\prime)}_3$,
are isometrically isomorphic if and only if $(a,b)=(a^\prime ,b^\prime)$
holds up to an order. Furthermore,
the sphere$\times$sphere-type manifolds are homogeneous just on
$H_3^{(a+b,0)}\simeq H_3^{(0,a+b)}$,
while they are locally inhomogeneous, even on the X-spheres,
on the other members of the family.
Let it be emphasized again that this homogeneity concerns not just
the homogeneity of the
X-spheres but the whole sphere$\times$sphere-type manifold.
The isometries are well known also for all H-type groups $H^{(a,b)}_l$.
The X-space-isotropy is obviously true also on the Heisenberg groups
$H^{(a,b)}_1$ which can be defined as H-type groups satisfying $l=1$.
Besides this and the above quaternionic examples, it yet holds just on
$H_7^{(1,0)}\simeq H_7^{(0,1)}$. Thus the X-isotropy regarding isometries
is a rare property, indeed. This is why the spectral isotropy,
yielded by any of the H-type groups,
is a very surprising phenomenon indeed. It sheds completely
new light on the radiation isotropy evidencing the spatial-isotropy
assumed in Friedmann's model. According to the above theorem,
the radiation isotropy seems to
be evidencing all the new relativistic models of elementary
particle systems which are built up in this paper by nilpotent
Lie groups and their solvable extensions. These models are far beyond those
satisfying the spatial-isotropy assumption.
By these arguments, all the isospectrality examples established in
\cite{sz1}-\cite{sz4} for a family $H^{(a,b)}_l$ defined by the same
$a+b$ and $l$ can be reestablished almost in the same way.
Note that such a family is defined
on the same $(X,Z)$-space and two members defined for
$(a,b)$ resp. $(a^\prime ,b^\prime)$ are not locally isometric, unless
$(a,b)=(a^\prime ,b^\prime)$ up to an order.
The above intertwining operator
proving the spectral isotropy appears now in the following modified form
$
\Omega_{Qpq\bullet}:
\mathbf \Xi_{Qpq\bullet}\to
\mathbf \Xi^\prime_{Qpq\bullet}
$,
that is, it corresponds to each other one-pole functions which have the
same pole but are defined by the distinct complex structures $J_{K_u}$
resp. $J^\prime_{K_u}$. The precise correspondence is then
\begin{eqnarray}
\mathcal{HF}_{Qpq\bullet}(\phi )=
\int_{\bullet} e^{\mathbf i\langle Z,K\rangle}
\phi (\mathbf x,K)\Pi_X^{(n)}(\Theta_{Q}^p\overline\Theta^q_{Q})(X,K_u)
dK_\bullet\to
\\
\to \mathcal{HF}^\prime_{Qpq}(\phi )=
\int_{\bullet} e^{\mathbf i\langle Z,K\rangle}
\phi (\mathbf x,K)\Pi_X^{(n)}(\Theta^{\prime p}_{Q}
\overline\Theta^{\prime q}_{Q})(X,K_u)dK_{\bullet} ,
\nonumber
\end{eqnarray}
which, by the same argument used for proving spectral isotropy,
intertwines both the Laplacian and the boundary conditions on
any of the ball$\times$ball- resp.
sphere$\times$ball-type domains.
In order to establish the complete isospectrality,
pick the same system $\mathbf B$ of independent vectors
for both of these manifolds and,
by implementing the obvious alterations in the previous formula, define
$
\Omega_{\mathbf Bp_iq_i\bullet}:
\mathbf \Xi_{\mathbf Bp_iq_i\bullet}\to
\mathbf \Xi^\prime_{\mathbf Bp_iq_i\bullet}.
$
This is a well-defined operator between the complete $L^2$ function spaces,
which follows from the theorem asserting that the
twisted Z-Fourier transforms are $L^2$ bijections mapping
$\mathbf{P\Phi}_{\mathbf B}$ onto an everywhere dense subspace of
the complete straightly defined space
$\overline{\mathbf{\Phi}}_{\mathbf B}$. It
can be defined also by all those maps $\Omega_{Qpq\bullet}$ where
pole $Q$ is in the real span of the vector system $\mathbf B$. Thus, by
the above argument, operators $\Omega_{\mathbf Bp_iq_i\bullet}$ intertwine
the complete $L^2$ function spaces along with the boundary conditions.
Interestingly enough,
the isospectrality can be established also in a new way,
by using only the intertwining operators $\omega_{Q\tilde Qpq\bullet}$.
Indeed, suppose that the elements,
$\{\nu_{p,q,i}\}$,
of the spectrum appear on a total one-pole space
$
\mathbf \Xi_{Qpq}
$ with multiplicity, say $m_{pq,i}$. By the one-pole intertwining
operators this spectrum is uniquely determined, and each eigenvalue
regarding the whole $L^2$-space must appear on this list; furthermore,
the multiplicity regarding the whole $L^2$-space is the product of these
one-pole multiplicities with
the dimension of $\Xi_{\mathbf B pq}$.
On the other hand, for
$Q\in\mathbb R^{r(l)a}$, the isospectrality
obviously follows from $\mathbf\Xi_{Qpq}=\mathbf\Xi^{\prime}_{Qpq}$,
thus both the elements of the spectra and the regarding multiplicities
must be the same on these two manifolds.
This proof clearly demonstrates how the spectrum can
``ignore'' the isometries. It explains also the striking examples
where one of the members of the isospectrality family is a homogeneous
space while the others are locally inhomogeneous. It also
demonstrates that spectral isotropy implies the isospectrality. Recall
that this isospectrality is established above
by an intertwining operator which most conspicuously
exhibits the following so called C-symmetry principle of physics:
``The laws remain the
same if, in a system of particles, some of them are exchanged for their
anti-particles." This intertwining operator really operates such that
some of the particles are exchanged for their anti-particles.
Thus this proof demonstrates that spectral isotropy
implies the C-symmetry. Moreover, the isospectrality is the manifestation
of the physical C-symmetry. This is a physical verification of the
isospectrality of the examples established
in \cite{sz1}-\cite{sz4}. Actually
these examples provide a rigorous mathematical proof for the C-symmetry
which is not a theorem but one of the principles in physics.
Let us also mention that the
isospectrality proof provided in this paper is completely new: the
Hankel transform appears here for the very first time in these investigations.
(All the other proofs are established by
different integral transformations.)
The isospectrality theorem naturally extends to
the solvable extensions endowed with positive definite invariant
metrics.
One need only exchange the functions $\phi(\mathbf x,K)$ for
$\phi(\mathbf x,t,K)$ in the above formulas; that is, the
intertwining is led back to the nilpotent group.
It is important to keep in mind that the metrics are positive definite
in the spectral investigations of the
solvable extensions. The group
of isometries acting on the sphere$\times$sphere-type manifolds of
$SH^{(a,b)}_3$, where $ab\not =0$, is
$(\mathbf {Sp}(a)\times \mathbf{Sp}(b))\cdot SO(3)$,
while
it is $\mathbf{Sp}(a+b)\cdot\mathbf{Sp}(1)$
on $SH^{(a+b,0)}_3$.
By these formulas, the
same statement proved for the nilpotent groups can be generalized
to the solvable isospectrality family.
These solvable examples also provide
new striking examples of isospectral
metrics where one of them is homogeneous
while the other is locally inhomogeneous.
Summing up, we have
\begin{theorem}
Operators $\Omega_{Qpq}$ and
$\omega_{Q\tilde Qpq}$ defined for combined spaces
intertwine the Laplacians, moreover,
they can be induced by
point transformations
of the form
$(K_Q,id_Z)$ resp. $(O_{Q\tilde Q},id_Z)$,
where $id_Z$ is the identity map on the Z-space and the first ones
are orthogonal transformations on the X-space, transforming
subspace $S_Q$, spanned by $Q$ and all $J_{Z_u}(Q)$,
to $S^\prime_Q$ resp. $S_{\tilde Q}$. This part of the
map is uniquely determined, whereas between the complement
spaces it can be an arbitrary orthogonal transformation.
By this induced map interpretation, both the Dirichlet and Z-Neumann
conditions are also intertwined by these operators. This statement
also follows from the fact that the very same operators are defined
by corresponding to each other the Hankel
functions obtained by the Hankel decomposition of the above functions.
That is, these maps also intertwine the corresponding Hankel
subspaces along with the exterior operator $\OE$ and the interior
strong force operator $\mathbf S$.
So far the isospectrality has been established for one-pole functions. For a
global statement, consider a system $\mathbf B$ of $k/2$ independent
vectors on the X-space, described earlier. Then the global operator
$\Omega_{\mathbf Bp_iq_i}:
\mathcal{HF}_{\mathbf Bp_iq_i}(\phi )
\to \mathcal{HF}^\prime_{\mathbf Bp_iq_i}(\phi )$ can also be defined
by $Q$-pole functions satisfying $Q\in Span_{\mathbb R}(\mathbf B)$.
This proves that $\kappa_{\mathbf Bp_iq_i}$ defines a
global intertwining operator.
\end{theorem}
\noindent{\bf Acknowledgement} I am indebted to the Max Planck Institute
for Mathematics in the Sciences, Leipzig, for the excellent working
conditions provided for me during my visit in the 07-08 academic year.
My particular gratitude goes to Prof. E. Zeidler for the interesting
conversations.
\bibliographystyle{my-h-elsevier}
\section{\label{sec:Intro}Introduction}
The laser-induced dynamics of pure and doped helium (He) nanodroplets is currently attracting considerable attention~\cite{Mudrich:2008,Gruner:2011,KrishnanPRL:2011,Kornilov:2011,PentlehnerPRL:2013,Ovcharenko:2014,Mudrich:2014}. While the superfluidity of He nanodroplets has been tested in numerous key experiments by probing stationary properties~\cite{Hartmann2:1996,Grebenev:1998,Brauer:2013}, the impact of the quantum nature of the droplets on their dynamic response to impulsive excitation or ionization is much less well established. As a prominent recent example, the rotational dynamics of various molecules embedded in He droplets induced by impulsive alignment was found to be significantly slowed down and rotational recurrences were completely absent~\cite{PentlehnerPRL:2013}. This indicates that substantial transient system-bath interactions are present during the laser pulse. In contrast, the vibrational dynamics of rubidium (Rb) molecules Rb$_2$ attached to the surface of He nanodroplets revealed only slow relaxation and dephasing proceeding on a nanosecond time scale~\cite{Mudrich:2009,Gruner:2011}.
Various recent experimental and theoretical studies have addressed the dynamics of solvation and desolvation of ionized or excited metal atoms off the surface of He nanodroplets~\cite{Loginov:2007,Loginov:2012,Fechner:2012,Zhang:2012,Vangerow:2014,Loginov:2014,Mudrich:2014,TheisenImmersion:2011,Theisen:2010,Mateo:2013,Leal:2014}. So far, these studies have concentrated on measuring the total yield and the final velocity of the ejected atoms as a function of the atomic species and the electronic state of excitation. In this paper we present the first time-resolved characterization of the desorption process of Rb atoms off the surface of He nanodroplets upon excitation to the droplet-perturbed states correlating to the 6p atomic orbital. The experimental scheme we apply is femtosecond (fs) pump-probe photoionization in combination with time-of-flight mass-spectrometry. We find that the yield of detected Rb$^+$ photoions as a function of delay time $\tau$ between the exciting pump and the ionizing probe pulses is determined by the interplay of the repulsive interaction of excited Rb$^\ast$ with respect to the He surface and the attractive interaction of the Rb$^+$ ion with the He surface induced by photoionization.
The Rb$^\ast$-He droplet repulsion initiates the desorption of the Rb$^\ast$ atom off the He droplet surface. Except for the lowest excited state of Rb, 5p$_{1/2}$, all excited states up to high Rydberg levels experience strong repulsion from He droplets~\cite{Aubock:2008,Callegari:2011}. In contrast, the Rb$^+$-He droplet attraction causes the Rb$^+$ ion to fall back into the He droplet when created near the He droplet surface at short delay times~\cite{Theisen:2010,Leal:2014}. Atomic cations are known to form stable ``snowball'' structures consisting of a cationic core which is surrounded by a high density shell of He atoms. As a result, free Rb$^+$ ions appear in the mass spectrum only after a characteristic pump-probe delay time $\tau_D$, which depends on the state the Rb atom is initially excited to.
In addition to neat Rb$^+$ atomic ions, the photoionization mass spectra contain Rb$^+$He molecular ions in the full range of laser wavelengths correlating to the droplet-perturbed Rb 6p-state. The occurrence of such molecular ions has previously been interpreted by the formation of metastable `exciplex' molecules~\cite{Droppelmann:2004,Mudrich:2008,Giese:2012,Fechner:2012,Vangerow:2014,Mudrich:2014}. These bound states of excited metal atoms and one or few He atoms can be populated either by a tunneling process~\cite{Reho:2000,Reho2:2000,Loginov:2007,Loginov:2015} or by direct laser-excitation of bound states in the metal atom-He pair potential~\cite{Pascale:1983,Fechner:2012,Vangerow:2014,Loginov:2014}. In the former case, exciplex formation times $\gtrsim 50$~ps are expected~\cite{Reho2:2000,Droppelmann:2004}, whereas in the latter case, exciplexes are created instantaneously. Thus, previous pump-probe measurements revealing exciplex formation times of $8.5$ and $11.6$~ps for Rb$^4$He and Rb$^3$He, respectively, upon excitation into the droplet-perturbed 5p$_{3/2}$-state could not be consistently interpreted~\cite{Droppelmann:2004}.
In the present study we observe a time-delayed increase of the Rb$^+$He signal as for Rb$^+$ indicating that the pump-probe dynamics is primarily determined by the competition between desorption of the Rb$^\ast$He exciplex off the He droplet surface and the Rb$^+$He cation falling back into the He droplet interior.
Moreover, a pronounced maximum in the Rb$^+$He signal transients indicates that an additional Rb$^+$He formation channel besides photoionization of Rb$^+$He exciplexes is active -- photoassociative ionization (PAI) of the desorbing Rb atom and a He atom out of the droplet surface. PAI is a well-known process where a bound cationic molecule or complex is formed by photoionization or photoexcitation into autoionizing states of an atom or molecule of a collision complex~\cite{Shaffer:1999}. PAI is a special case of traditional associative ionization where a bound molecular cation is formed in a binary collision of an electronically excited atom~\cite{Weiner:1990}. In either case the binding energy is taken away by the electron emitted in the process.
\section{Experimental setup}
The experimental setup is similar to the previously used arrangement~\cite{Mudrich:2009,Fechner:2012} except for the ionization and detection schemes. He droplets are produced by a continuous supersonic expansion of He~6.0 through a 5~$\mu$m nozzle at a pressure of $50$~bar. The transversal velocity spread of the beam is reduced by placing a 400~$\mu$m skimmer 13~mm behind the nozzle. Unless otherwise stated, the nozzle temperature is kept at 17~K. This results in a log-normal distribution of the He droplet size with a mean size of $1.1\times 10^4$ He atoms. Subsequently, the droplet beam passes a mechanical chopper and a Rb-filled cell of length 1~cm, stabilized at a temperature of $85~\degree$C. At the corresponding vapor pressure, most droplets pick up on average one Rb atom following Poissonian statistics. By overlapping the droplet beam with the output of the fs laser, we resonantly excite and ionize the dopant atom.
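The Poissonian pick-up statistics can be checked with a few lines of code (a minimal sketch; the mean of one picked-up atom per droplet is taken from the text, the function name is ours):

```python
from math import exp, factorial

def pickup_probability(k, mean=1.0):
    """Poissonian probability that a droplet picks up exactly k dopant atoms."""
    return mean**k * exp(-mean) / factorial(k)

# With on average one pick-up event per droplet:
p0 = pickup_probability(0)  # undoped droplets
p1 = pickup_probability(1)  # singly doped droplets
p2 = pickup_probability(2)  # doubly doped droplets
```

At a mean of one, $p_0=p_1\approx 0.37$, so singly doped droplets form the largest doped fraction.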
In contrast to previous studies, we use amplified fs laser pulses generated by a regenerative amplifier operated at a pulse repetition rate of 5 kHz. At this repetition rate, multiple excitations of Rb atoms by subsequent pulses from the pulse train are safely excluded. The pulses are frequency-doubled in a BBO crystal resulting in a pulse duration of $t_p=120$~fs with a variation for different laser center wavelengths of 20~fs. Two identical, time-delayed pump and probe pulses are generated by means of a mechanical delay line. The laser beam is focused into the vacuum chamber using a 30~cm lens which leads to a peak intensity in the range of $5\times 10^{12}$ Wcm$^{-2}$.
Photoions are detected by a time-of-flight (TOF) mass spectrometer in Wiley-McLaren configuration mounted in-line with the He droplet beam~\cite{Wiley:1955}. At the end of the drift tube a high negative potential is applied to further accelerate the arriving ions which boosts the efficiency of detecting large cluster masses in the $10^4$~amu range using a Daly-type detector~\cite{Daly:1960}. The latter consists of a Faraday cup, a scintillator with an optical bandpass interference filter and a photomultiplier tube. In case of electron detection, a simple electrode setup consisting of a repeller, an extractor grid and a channeltron detector with positive entrance potential is used. For both detectors, the resulting pulses are amplified, threshold-discriminated and acquired by a fast digitizer. When detecting heavy masses a counting unit is used.
\section{R\MakeLowercase{b} desorption dynamics}
In the present paper we concentrate on the fs pump-probe dynamics of Rb atoms attached to He nanodroplets which are excited to droplet-perturbed states correlating to the atomic 6p-state. These states have previously been studied using nanosecond pulsed excitation and velocity-map imaging of photoions and electrons~\cite{Fechner:2012,Vangerow:2014}. Due to the interaction of the excited Rb atom with the He droplet surface, the 6p-state splits up into the two states 6p$\Sigma$ and 6p$\Pi$ according to the pseudo-diatomic model which treats the whole He droplet, He$_N$, as one constituent atom of the RbHe$_N$ complex~\cite{Stienkemeier:1996,LoginovPRL:2011,Callegari:2011}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig1_Potentials.pdf}
\caption{Potential energy diagram of the Rb-He nanodroplet complex. Vertical arrows depict the photo-excitation and ionization processes. The potential curves of the neutral Rb-He$_{2000}$ complex are taken from~\cite{Callegari:2011}, the one of the Rb$^+$-He$_{2000}$ complex is obtained from the Rb$^+$-He pair potential~\cite{Koutselos:1990} on the basis of the He density distribution of the groundstate RbHe$_{2000}$ complex~\cite{Pi}. The two peaks plotted vertically on the left-hand scale show the expected excitation spectrum based on these potentials.}
\label{fig:potentials}
\end{figure}
Using the RbHe$_N$ pseudo-diatomic potential curves for the 5s$\Sigma$ electronic groundstate and the 6p$\Sigma,\,\Pi$ excited states we compute the Franck-Condon profiles for the expected vertical excitation probability using R. LeRoy's program BCONT~\cite{bcont}. The corresponding transition probability profile is depicted on the left-hand side of Fig.~\ref{fig:potentials}. The experimental excitation spectrum is in good agreement with the calculated one apart from the fact that the experimental peaks are somewhat broader~\cite{Fechner:2012}. Since both 6p$\Sigma$ and 6p$\Pi$ pseudo-diatomic potentials are shifted up in energy by up to 1200 cm$^{-1}$ with respect to the atomic 6p level energy, we expect strong repulsion and therefore fast desorption of the Rb atom off the He droplet surface to occur following the excitation.
However, upon ionization of the excited Rb atom by absorption of a second photon (vertical arrow on the right-hand side of Fig.~\ref{fig:potentials}), the interaction potential suddenly turns weakly attractive. Thus, the Rb$^+$ ion may be expected to turn around and to fall back into the He droplet provided ionization occurs at short delay times after excitation such that the desorbing Rb$^\ast$ picks up only little kinetic energy
\begin{equation}
E_{kin,\,\mathrm{Rb}^\ast}(R)<E_{pot,\,\mathrm{Rb}^+}(R).
\label{eq:ineq}
\end{equation}
Here, $E_{pot,\,\mathrm{Rb}^+}(R)$ denotes the lowering of the potential energy of the Rb$^+$ ion due to the attractive interaction with the He droplet at the distance $R$ from the droplet surface. Eq.~\ref{eq:ineq} holds for short distances $R<R_{c}$ falling below a critical value $R_{c}$. When assuming classical motion, we can infer from Eq.~\ref{eq:ineq} the critical distance $R_{c}$ for the turn-over. From simulating the classical trajectory $R(t)$ we can then obtain the delay time $\tau_c$ at which the turn-over occurs. In the following we refer to $\tau_c$ as `fall-back time'. Thus, when measuring the number of free Rb$^+$ ions emitted from the He droplets by pump-probe photoionization we may expect vanishing count rates at short delays $\tau <\tau_c$ due to the Rb$^+$ ions falling back into the droplets, followed by a steep increase and subsequent constant level of the Rb$^+$ signal at delays $\tau>\tau_c$.
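The energy balance behind the fall-back criterion can be sketched numerically. The sketch below assumes, purely for illustration, exponential model potentials for the repulsive excited state and the attractive ionic state; all parameter values are hypothetical, and only the logic of inequality~(\ref{eq:ineq}) follows the text:

```python
import numpy as np

# Hypothetical model potentials (energies in cm^-1, distances in angstrom);
# only the fall-back energy-balance logic is taken from the text.
def V_excited(R, A=1200.0, beta=0.5, R0=6.4):
    """Repulsive pseudo-diatomic excited-state potential."""
    return A * np.exp(-beta * (R - R0))

def V_ion(R, D=300.0, beta=0.4, R0=6.4):
    """Weakly attractive Rb+ -- droplet potential (negative values = bound)."""
    return -D * np.exp(-beta * (R - R0))

def critical_distance(R0=6.4, Rmax=40.0, n=4000):
    """Largest distance R at which E_kin(R) < |V_ion(R)|, i.e. at which
    ionization still makes the Rb+ ion turn over and fall back."""
    R = np.linspace(R0, Rmax, n)
    E_kin = V_excited(R0) - V_excited(R)   # kinetic energy gained while desorbing
    falls_back = E_kin < -V_ion(R)         # the fall-back inequality
    return R[falls_back][-1] if falls_back.any() else R0

Rc = critical_distance()
```

Combined with a simulated trajectory $R(t)$, the critical distance $R_c$ translates directly into the fall-back time $\tau_c$.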
\subsection{Experimental results}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig2_RbPP.pdf}
\caption{Pump-probe transient Rb$^+$ ion count rates recorded for various wavelengths $\lambda$ of the fs laser. At $\lambda\gtrsim 409$ nm, excitation occurs predominantly to the 6p$\Pi$-state, at $\lambda\lesssim 409$ nm predominantly the 6p$\Sigma$-state is excited. The thin smooth lines are fits to the data.}
\label{fig:RbTransients}
\end{figure}
Fig.~\ref{fig:RbTransients} shows the transient Rb$^+$ ion signals measured by integrating over the $^{85}$Rb and $^{87}$Rb mass peaks in the time-of-flight mass spectra recorded for each value of the pump-probe delay. The shown data are obtained by subtracting from the measured ion signals the sum of ion counts for pump and probe laser pulses only. The error bars stem from error propagation taking into account the uncertainties associated with the different signal contributions. By tuning the wavelength of the fs laser $\lambda$ we can excite predominantly the 6p$\Pi$ ($\lambda\gtrsim 409$ nm) or the 6p$\Sigma$-states ($\lambda\lesssim 409$ nm) of the RbHe$_N$ complex. As expected, we observe a step-like increase of the Rb$^+$-yield at delays ranging from 600 fs ($\lambda =401$ nm) up to about 1500 fs ($\lambda =415$ nm). The signal increase occurs at shorter delays when exciting into the more repulsive 6p$\Sigma$-state because the Rb atom moves away from the He droplet surface faster than when it is excited into the shallower 6p$\Pi$-state. The rising edge of the signal jump is extended over a delay period of about 400~fs, partly due to the finite length and bandwidth of the laser pulses. Desorption along the 6p$\Pi$-potential appears as an even smoother signal rise, indicating that a purely classical model is not suitable for reproducing the observed dynamics. For laser wavelengths $\lambda<409$ nm we observe a weakly pronounced double-hump structure with maxima around 800 and 1800~fs, respectively, which we discuss in section~\ref{sec:simulations}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Fig3_MassSpec.pdf}
\caption{(a) Typical mass spectra recorded for Rb-doped He nanodroplets by fs photoionization taken at a center wavelength $\lambda=415$ nm and a 5~ps pump probe delay. In addition to the atomic isotopes $^{85}$Rb$^+$ and $^{87}$Rb$^+$ the mass spectra contain Rb$^+$He and Rb$^+$He$_2$ molecular ions. (b) An extended view of mass spectra taken at various nozzle temperatures using single fs pulses at $\lambda=415$ nm reveals the presence of large masses of unfragmented ion-doped He droplets Rb$^+$He$_N$.}
\label{fig:massspec}
\end{figure}
Before discussing our model calculations for these transients, let us first examine the measured time-of-flight mass spectra in more detail. Fig.~\ref{fig:massspec} (a) depicts a representative mass spectrum in the mass range around 100~amu at a pump-probe delay of 5~ps and a center wavelength $\lambda=415$ nm. The spectrum is averaged over 5000 laser shots. Clearly, the dominant fragments in this mass range are neat Rb$^+$ ions at 85 and 87 amu, where the different peak heights reflect the natural abundances of isotopes (72 and 28 \%, respectively). Even when ionizing with single laser pulses the mass spectra contain bare Rb$^+$ ions at a low level. We attribute this to a fraction of the Rb atoms desorbing off the droplets and subsequently ionizing within the laser pulse.
A contribution to the Rb$^+$ signal may come from free Rb atoms accompanying the droplet beam as a consequence of the detachment of the Rb atom from the droplet during the pick-up process. Aside from neat Rb$^+$ atomic ions, the pump-probe mass spectra feature peaks at 89, 91, and 95 amu, which evidence the formation of Rb$^+$He and Rb$^+$He$_2$ molecular ions. These masses are usually attributed to photoionization of bound metastable Rb$^\ast$He exciplexes~\cite{Droppelmann:2004,Mudrich:2008,Fechner:2012,Loginov:2014}.
In addition to these discrete mass peaks, we measure extended mass distributions reaching up to 64,000 amu using our time-of-flight mass spectrometer which is optimized for detecting cluster ions. These distributions are in good agreement with the size distributions of pure He nanodroplets generated in a sub-critical expansion~\cite{Lewerenz:1993,Toennies:2004}. From comparing the peak areas of the light masses Rb$^+$, Rb$^+$He$_n$, $n=1,2$ with those of the heavy droplet ions Rb$^+$He$_N$ we deduce that by ionizing with single pulses a fraction of $\lesssim 10$\% of the doped He droplets fragments into free atomic or molecular ions. The larger part of the ionized Rb-doped He droplets generates unfragmented Rb$^+$He$_N$ due to the sinking of the Rb$^+$ ion into the He droplet and the formation of a stable snowball complex~\cite{Theisen:2010}. When adding an additional time-delayed probe pulse we may expect to alter this ratio by depleting the unfragmented Rb$^+$He$_N$ fraction in favor of creating free Rb$^+$ and Rb$^+$He$_{1,2}$ ions after desorption.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig4_triple.pdf}
\caption{Transient ion and electron signals measured at $\lambda=400$ nm. The ion signal traces (a and b) are obtained from integrating over the free atomic Rb$^+$ ion peaks and over the charged He droplet mass distribution, respectively. The total photoelectron signal (c) is measured using a simple electron detector. The thin smooth lines are fits to the data.}
\label{fig:triple}
\end{figure}
Indeed, the delay-dependent peak integrals of the measured mass peaks at $\lambda=400$ nm confirm this picture, see Fig.~\ref{fig:triple} (a) and (b). While the atomic Rb$^+$ ion signal sharply increases around 600 fs and remains largely constant for longer delays, the Rb$^+$He$_N$ signal displays the opposite behavior. The maximum signal level at zero delay significantly drops around $\tau =600$ fs and remains low for long delay times.
In addition to the mass-resolved ion signals we have measured the total yield of photoelectrons, depicted in Fig.~\ref{fig:triple} (c). From comparing the electron counts with and without blocking the He droplet beam we find that for pump-probe ionization $>$79\% of photoelectrons correlate with the Rb-doped He droplet beam, while $<21$\% are attributed to ionization of Rb and other species in the background gas.
The observation that the electron count rate remains constant within the experimental scatter in the entire range of pump-probe delays indicates that the photoionization efficiency (cross-section) of a Rb atom is largely independent of its position with respect to the He droplet surface. These observations further support our interpretation of the step-like increase of Rb$^+$ counts in terms of the competition between desorption of excited Rb atoms and solvation of Rb$^+$ cations into the He droplets.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig5_Fallbacktime.pdf}
\caption{Simulated (a) and experimental (b) fall-back times as a function of the laser wavelength, derived from the rising edges of the pump-probe transients. The curves in (a) are obtained for various effective masses $m_{\mathrm{He}_n}=N_\mathrm{eff}m_\mathrm{He}$ of the He droplet in units of the He atomic mass $m_\mathrm{He}$. The different symbols in (b) denote the experimental fit results for Rb$^+$, Rb$^+$+Rb$^+$He and Rb$^+$He$_N$ signals. Panel (c) shows the exponential decay constants from fits of the Rb$^+$He ion transients with Eq.~(\ref{Fitfunction}).}
\label{fig:tcrit}
\end{figure}
Fig.~\ref{fig:tcrit} (b) displays a compilation of the critical delays for all measured laser wavelengths which we obtain by fitting the experimental data with an error function,
\begin{equation}
f_{Rb^+}(t)=A\cdot\{\mathrm{erf}\left[(t-\tau_c)/\sigma\right]+1\}
\label{Rb_Fitfunction}
\end{equation}
of variable amplitude $A$, width $\sigma$ and position $\tau_c$. Shown are the results for the raw Rb$^+$ and Rb$^+$He$_N$ transients as well as those obtained by fitting the sum of the transients of Rb$^+$ atomic and Rb$^+$He molecular ions. In particular for the 6p$\Pi$ state, the latter signal more accurately reflects the dynamics of the fall-back process than the individual Rb$^+$ and Rb$^+$He transients since additional transient redistribution of population between Rb$^+$ and Rb$^+$He channels, which we discuss below, cancels out. Correspondingly, the fitted time constants of the summed Rb$^+$ and Rb$^+$He transients and those of Rb$^+$He$_N$ are in good agreement. This confirms our picture that the light ions fall back to produce heavy cluster ions at short delays. Fig.~\ref{fig:tcrit} (c) will be discussed in section~\ref{sec:RbHeDynamics}.
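The fitting step can be reproduced with standard tools. The sketch below generates a synthetic step-like transient (all numbers are hypothetical, chosen only to mimic a step near 600~fs) and recovers the parameters of Eq.~(\ref{Rb_Fitfunction}) with a least-squares fit:

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def f_rb(t, A, tau_c, sigma):
    """Error-function step of the fit model: A * (erf((t - tau_c)/sigma) + 1)."""
    return A * (erf((t - tau_c) / sigma) + 1.0)

# Synthetic transient with hypothetical parameters: a step near tau_c = 600 fs.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3000.0, 120)                      # pump-probe delay (fs)
counts = f_rb(t, 50.0, 600.0, 200.0) + rng.normal(0.0, 2.0, t.size)

popt, pcov = curve_fit(f_rb, t, counts, p0=(40.0, 500.0, 150.0))
A_fit, tau_c_fit, sigma_fit = popt
```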
\subsection{Simulations}
\label{sec:simulations}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig6_Rb_des_fallback.pdf}
\caption{Classical trajectories of the excited and ionized Rb atom initially located in a dimple near the He droplet surface. (a) At 6p$\Pi$-excitation ($\lambda=415$ nm) and long pump-probe delay $\tau =500$~fs the Rb atom fully desorbs off the He droplet and propagates as a free Rb$^+$ cation after ionization. (b) At shorter $\tau =400$~fs the Rb$^+$ ion turns over and falls back into the He droplet. The schemes at the bottom visualize the dynamics for increasing time from left to right.}
\label{fig:RbTrajectories}
\end{figure}
Further support for our interpretation of the experimental findings is provided by classical trajectory simulations of the dynamics of the pump-probe process. In this model, the Rb atom and the He droplet surface are taken as two point-like particles which propagate classically according to the pseudo-diatomic model potentials~\cite{Callegari:2011,Pi}. Note that these potentials were calculated based on the minimum-energy configuration of a droplet consisting of $N = 2000$ He atoms subjected to the external potential of an Rb atom in the electronic ground state.
The classical equation of motion
\begin{equation}
\mu \ddot{R} = -\frac{dV(R)}{dR},
\label{eq:Newton}
\end{equation}
is solved numerically. Here, $V=V_{\Sigma ,\,\Pi ,\,\mathrm{Rb}^+ }(R)$ denotes the potential curves of the excited and ionic states, and $R(t)$
is the distance between the Rb atom and the He dimple at the droplet surface. The initial value of the Rb-He droplet distance is the position of the minimum of the groundstate potential well (6.4~\AA). Eq.~\ref{eq:Newton} is first solved for the neutral excited state potential $V_{\Sigma}$ or $V_\Pi$ up to the pump-probe delay time $\tau$. Subsequently, the Rb atom is considered to be ionized and the particle is propagated further using the ionic Rb$^+$-He$_N$ potential $V_{\mathrm{Rb}^+ }$. The reduced mass $\mu=m_\mathrm{Rb}m_{\mathrm{He}_n}/(m_\mathrm{Rb} + m_{\mathrm{He}_n})$ is given by the mass of the Rb atom or ion, $m_\mathrm{Rb}$, and the effective mass of the He droplet, $m_{\mathrm{He}_n}$. We set $m_{\mathrm{He}_n} = 40$~amu for the propagation of the excited Rb atom as well as for the subsequent propagation of the Rb$^+$ ion with respect to the He droplet. This value is based on previous experimental as well as theoretical findings~\cite{Vangerow:2014}.
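The propagation scheme can be summarized in a short numerical sketch. The potentials below are hypothetical exponential stand-ins for the pseudo-diatomic curves of~\cite{Callegari:2011,Pi}, which are not reproduced here; the reduced mass with $m_{\mathrm{He}_n}=40$~amu follows the text, everything else is illustrative:

```python
import numpy as np

CM1_TO_EV = 1.2398e-4          # 1 cm^-1 in eV
EV_PER_AMU_A2_FS2 = 103.64     # 1 amu*(angstrom/fs)^2 in eV

M_RB, M_EFF = 85.0, 40.0       # amu; effective droplet mass of 40 amu (see text)
MU = M_RB * M_EFF / (M_RB + M_EFF)

# Hypothetical stand-ins for the pseudo-diatomic potential curves (eV).
def V_exc(R):                  # repulsive droplet-perturbed 6p-like state
    return 1200.0 * CM1_TO_EV * np.exp(-0.5 * (R - 6.4))

def V_ion(R):                  # weakly attractive Rb+ -- droplet state
    return -300.0 * CM1_TO_EV * np.exp(-0.4 * (R - 6.4))

def force(V, R, h=1e-4):
    """Numerical force -dV/dR in eV/angstrom."""
    return -(V(R + h) - V(R - h)) / (2.0 * h)

def trajectory(tau, t_end=3000.0, dt=0.5, R0=6.4):
    """Velocity-Verlet solution of mu*R'' = -dV/dR with a switch from the
    excited-state to the ionic potential at the pump-probe delay tau (fs)."""
    R, v, t = R0, 0.0, 0.0
    while t < t_end:
        V = V_exc if t < tau else V_ion
        a = force(V, R) / (MU * EV_PER_AMU_A2_FS2)        # angstrom/fs^2
        R_new = R + v * dt + 0.5 * a * dt * dt
        V_next = V_exc if (t + dt) < tau else V_ion
        a_new = force(V_next, R_new) / (MU * EV_PER_AMU_A2_FS2)
        v += 0.5 * (a + a_new) * dt
        R, t = R_new, t + dt
        if R < 2.0:                                       # fell back into droplet
            return "fall-back"
    return "desorbed" if v > 0.0 else "fall-back"
```

With these illustrative parameters, short delays yield a fall-back trajectory and long delays a desorbed one, reproducing the qualitative behaviour of Fig.~\ref{fig:RbTrajectories}.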
The motion of the excited and subsequently ionized Rb atom with respect to the He droplet surface is illustrated in Fig.~\ref{fig:RbTrajectories} for different initial conditions. The time-dependent positions of the Rb atom and the He surface are depicted as red and blue lines in the upper parts. The lower parts are graphical visualizations of the dynamics.
Fig.~\ref{fig:RbTrajectories} (a) depicts the case when the excitation of the Rb atom, which is initially located in the groundstate equilibrium configuration of the RbHe$_N$ complex, occurs at $t=0$ and ionization is delayed to $\tau = 500$ fs. The laser wavelength is set to $\lambda=415$ nm where the motion follows the 6p$\Pi$-potential. In this case the excited Rb atom fully desorbs off the He droplet and continues to move away from the droplet after its conversion into an ion. In the case of shorter delay $\tau = 400$~fs between excitation and ionization, shown in Fig.~\ref{fig:RbTrajectories} (b), the Rb atom turns over upon ionization as a result of Rb$^+$-He$_N$ attraction and falls back into the He droplet.
For assessing the effect of an initial spread of Rb-He$_N$ droplet distances $R$ due to the broad laser bandwidth and of the finite length of the laser pulses $t_p$ we extend the classical trajectory calculation to a mixed quantum-classical simulation which includes an approximate description of the quantum wave packet dynamics of the system.
The initial wave packet is obtained by transforming the spectral profile of the laser into a distribution as a function of $R$ using the potential energy difference between the initial 5s$\Sigma$ and the final 6p$\Sigma,\,\Pi$ pseudo-diatomic states. We use a Gaussian-shaped laser profile with a full width at half maximum, $\Delta\nu$, inferred from measured spectra. Typically $\Delta\nu\approx$ 2~nm, depending on the center wavelength of the laser. This corresponds to the instantaneous creation of a wave packet in the excited state centered around the minimum of the groundstate potential.
For simulating the dynamics the wave packet is approximated by 25
segments $i$ and each segment is propagated individually according to Eq.~\ref{eq:Newton} where $R(t)$ is replaced by $R_i(t)$ representing
the Rb-He$_N$ distance for the $i$-th segment. Convergence of the final results with respect to the number of segments has been checked.
This simplified description of the wave packet dynamics is justified because no quantum interference effects are expected for this simple dissociation reaction. Comparison with the full quantum simulation of the desorption process yields excellent agreement within the propagation time range relevant to the experiment.
Simulated transient yield curves as a function of the pump-probe delay $\tau$ are obtained by taking the weighted sum of the segments which have propagated outwards up to very large distances after long propagation times. This sum we identify with the fraction of desorbed atoms. Those segments which have turned over towards short distances are considered to contribute to the Rb$^+$ ions falling back into the droplet. For those segments the condition formulated initially (inequality~(\ref{eq:ineq})) is fulfilled implicitly. The finite duration of the excitation process is taken into account by convolving the resulting yield curves with the autocorrelation function of the two laser pulses.
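The construction of the initial wave packet can likewise be sketched. Below, a Gaussian spectral profile is mapped onto a distribution of Rb-He$_N$ distances through a monotonic difference potential (a reflection-approximation step); the exponential difference potential and all numerical parameters are hypothetical stand-ins for the actual pseudo-diatomic curves:

```python
import numpy as np

def delta_V(R):
    """Hypothetical excitation energy (cm^-1) versus distance R (angstrom),
    decreasing monotonically with R as for a repulsive excited state."""
    return 24000.0 + 1200.0 * np.exp(-0.5 * (R - 6.4))

def initial_segments(nu0, fwhm, n_seg=25, R_lo=5.0, R_hi=9.0):
    """Split the excited wave packet into n_seg segments whose weights follow
    the Gaussian laser profile evaluated at the local excitation energy,
    including the Jacobian |d(delta_V)/dR| of the energy-to-distance mapping."""
    edges = np.linspace(R_lo, R_hi, n_seg + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    jac = np.abs(np.gradient(delta_V(centers), centers))
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> std. deviation
    weights = np.exp(-0.5 * ((delta_V(centers) - nu0) / sigma) ** 2) * jac
    return centers, weights / weights.sum()

# Laser centered 600 cm^-1 above the asymptote, 150 cm^-1 FWHM (hypothetical).
R_i, w_i = initial_segments(nu0=24600.0, fwhm=150.0)
```

Each segment $R_i$ is then propagated individually with weight $w_i$; the weighted sum over the segments that escape to large distances gives the simulated desorption yield.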
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig7_Rb_Sim.pdf}
\caption{Semiclassical simulations of the yield of free Rb$^+$ ions created by excitation and time-delayed ionization for various center wavelengths $\lambda$ of the laser pulses. See text for details.}
\label{fig:RbPPsimulation}
\end{figure}
The resulting simulated yields of free Rb$^+$ ions as a function of $\tau$ are depicted in Fig.~\ref{fig:RbPPsimulation} for various center wavelengths $\lambda$ of the laser pulses. The obtained curves qualitatively resemble the experimental ones in many respects. Excitation at long wavelengths $\lambda > 409$ nm, at which predominantly the more weakly repulsive 6p$\Pi$ state is populated, induces a smooth signal increase at about $\tau = 400$ fs. At $\lambda < 409$ nm, where predominantly the 6p$\Sigma$-state is excited, the signal rise occurs around $\tau = 210$ fs, considerably earlier than for 6p$\Pi$ excitation. This result qualitatively agrees with the experimental finding, see Fig.~\ref{fig:tcrit} (b). Moreover, the superposition of the two rising edges at intermediate wavelengths $\lambda\sim 409$ nm may provide an explanation for the double-hump structure observed in the experimental Rb$^+$ transients at $\lambda < 409$ nm. However, the simulated rising edges occur at significantly shorter delay times than in the experiment, roughly by a factor of 2 for excitations to the 6p$\Sigma$-state and by up to a factor of 4 for the 6p$\Pi$-state.
The discrepancy between the experimental results and those of the simulations, shown in Fig.~\ref{fig:tcrit} (b), is present even when assuming very large effective masses of the He droplet $m_{\mathrm{He}_n}>1000$ amu.
We attribute this discrepancy to the limited validity of our model assumptions. In particular the interaction potentials we use were obtained on the basis of the frozen He density distribution for the RbHe$_N$ groundstate equilibrium configuration~\cite{Callegari:2011,Pi}. However, transient deformations of the He droplet surface in the course of the dynamics are likely to significantly modify the effective Rb$^\ast$-He$_N$ interactions. Recent time-dependent density functional simulations show a complex ultrafast response of the He droplet to the presence of a Rb$^+$ ion near the surface~\cite{Leal:2014}. In particular when the desorption dynamics is slow ($\Pi$-state) a complex reorganization of the He droplet surface during the Rb desorption process may be expected~\cite{Vangerow:2014}. A clear manifestation of the break-down of the simple pseudo-diatomic model is the formation of Rb$^\ast$He exciplexes which we discuss in the following section. Recently, M. Drabbels and coworkers suggested that the pseudo-diatomic potentials of the excited Na$^\ast$He$_N$ complex may be transiently shifted and even intersect~\cite{Loginov:2014,Loginov:2015}. Detailed three-dimensional simulations including the full spectrum of properties of He droplets are needed to provide an accurate description of this kind of dynamics~\cite{Hernando:2012,Mateo:2013,Vangerow:2014,Leal:2014}. Experimentally, the time evolution of the interaction potential energies will be visualized by means of fs time-resolved photoelectron spectroscopy in the near future.
\section{R\MakeLowercase{b}H\MakeLowercase{e}$^+$ dynamics}
\label{sec:RbHeDynamics}
Aside from free Rb$^+$ ions, fs photoionization of Rb-doped He nanodroplets generates Rb$^+$He$_n$, $n=1,2$ molecular ions. Relative abundances reach up to 31\% and 1.5\%, respectively, measured at $\lambda =415$ nm corresponding to the 6p$\Pi$ excitation, see Fig.~\ref{fig:massspec}. At $\lambda =399$ nm (6p$\Sigma$-excitation), abundances are 4\% and 1\%, respectively. Free Rb$^+$He ions are associated with bound states in the Rb$^\ast$He excited-state pair potentials, so-called exciplexes. Both the 6p$\Sigma$ and the 6p$\Pi$-states of the RbHe diatom feature potential wells which sustain bound vibrational states that can be directly populated by laser excitation out of the groundstate of the RbHe$_N$ complex~\cite{Pascale:1983,Fechner:2012}. Thus, exciplexes are directly created in a process akin to photoassociation, in contrast to previously observed Na$^\ast$He and K$^\ast$He exciplexes which were formed by an indirect tunneling process upon excitation of the lowest p$\Pi$-states~\cite{Reho2:2000,Loginov:2015}.
Exciplex formation is the only route to producing Rb$^+$He ions by photoionization using continuous-wave or nanosecond lasers, where ionization takes place at long delay times when the dynamics of exciplex formation and desorption off the droplets is long since complete. In fs experiments, however, ionization can be triggered before or during the process of desorption of the excited atom or exciplex off the droplet surface. In this case, due to the attractive Rb$^+$-He potential a bound Rb$^+$He molecular ion can be formed upon ionization, even if the excited Rb$^\ast$-He interaction does not sustain bound states of the neutral diatom. The process of inducing a molecular bond between two initially unbound neutral species by photoionization is known as photoassociative ionization (PAI)~\cite{Weiner:1990}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig8_RbHe_PP.pdf}
\caption{Experimental yields of Rb$^+$He molecular ions as a function of pump-probe delay for various center wavelengths of the laser pulses. The thin smooth lines are fits to the data.}
\label{fig:RbHeTransients}
\end{figure}
\subsection{Experimental results}
The transient yield of Rb$^+$He for various laser wavelengths is displayed in Fig.~\ref{fig:RbHeTransients}. Similarly to the Rb$^+$ transients, we measure vanishing Rb$^+$He pump-probe signal contrast around zero delay. For increasing laser wavelength from $\lambda =399$ up to 418~nm, which corresponds to the crossover from the 6p$\Sigma$ to the 6p$\Pi$ excited pseudo-diatomic states, a step-like increase of the Rb$^+$He ion signal occurs at delays ranging from $\tau =500$~fs up to about $2000$~fs. Besides, at $\lambda\lesssim 415$ nm we measure a transient overshoot of the Rb$^+$He signal by up to about 100\% of the signal level at long delays. The transient yield of Rb$^+$He is fitted using the model function
\begin{equation}
f_{Rb^+He}(t)=f_{Rb^+}(t)(Ee^{-t/\tau_E}+1).
\label{Fitfunction}
\end{equation}
As for the Rb$^+$ case, $f_{Rb^+}(t)$ models the fall-back dynamics by Eq.~\ref{Rb_Fitfunction}. Additionally, the exponential function with amplitude $E$ and time constant $\tau_E$ takes the transient overshoot into account, whereas the additive constant accounts for a $\tau$-independent Rb$^+$He formation channel. The exponential time constants $\tau_E$ are plotted as black circles in Fig.~\ref{fig:tcrit} (c). To obtain these values, the parameters $\tau_c$ and $\sigma$ are taken as constants from the fit of the sum of Rb$^+$ and Rb$^+$He signals with Eq.~\ref{Rb_Fitfunction}. Here we make the assumption that the fall-back dynamics is only weakly perturbed by the attachment of a He atom to the Rb atom or ion, which is confirmed by our simulations.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig9_RbHe_des_fallback.pdf}
\caption{Classical trajectories of the Rb-He-He$_N$ three-body system at $\lambda=409$ nm. The schemes at the bottom visualize the various dynamics for increasing time from left to right. (a) The excited Rb atom departing from the He droplet surface suddenly experiences Rb$^+$He pair attraction upon ionization at $\tau =400$~fs. Consequently, a He atom attaches to the Rb atom while it leaves the droplet. (b) For a short delay $\tau=350$~fs a Rb$^+$He molecule forms as in (a), but the attraction towards the He droplet makes it turn over and fall back.}
\label{fig:RbHeTrajectories}
\end{figure}
\subsection{Simulation of photoassociative ionization}
For a more quantitative interpretation of the Rb$^+$He transients we extend our classical and mixed quantum-classical models to a one-dimensional three-body problem: in the second stage of the calculation, after ionization has occurred, one individual He atom out of the surface layer is included. The
classical trajectories are now obtained by solving three individual coupled equations of motion for the three individual particles Rb$^+$, He, and He$_n$. The Rb$^\ast$-He$_N$ interaction leading to desorption is represented by the pseudodiatomic potentials as before. The Rb$^+$-He dynamics is
initialized by the velocity and distance of the dissociating Rb$^\ast$He$_N$ complex at the moment of ionization. The Rb$^+$-He pair interaction is given by the Rb$^+$-He pair potential~\cite{Koutselos:1990} augmented by a 16.7~cm$^{-1}$ deep potential step to account for the He-He$_N$ extraction energy as suggested by Reho et al.~\cite{Reho2:2000,Droppelmann:2004,Fechner:2012}.\\
Exemplary trajectories are shown in Fig.~\ref{fig:RbHeTrajectories} for two cases at $\lambda=409$ nm. For long pump-probe delays the Rb$^+$ ion leaves the He droplet without attaching a He-atom, as shown in Fig.~\ref{fig:RbTrajectories}. However, there is a range of delays in which the desorbing Rb atom is far enough away from the droplet so that it will not fall back upon ionization, but it is still close enough to attract a He atom out of the droplet surface so as to form a bound molecular ion by PAI (Fig.~\ref{fig:RbHeTrajectories} (a)). Fig.~\ref{fig:RbHeTrajectories} (b) illustrates the dynamics at short delay when the attractive forces acting between the Rb$^+$ ion and the droplet surface prevent the full desorption and Rb$^+$-He pairwise attraction leads to the formation of Rb$^+$He.\\
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig10_RbHe_SimOverlap.pdf}
\caption{Simulations of the yield of free Rb$^+$He-molecules created by excitation and time-delayed ionization for various center wavelengths of the laser pulses. See text for details.}
\label{fig:RbHePPsimulation}
\end{figure}
For simulating the transient Rb$^+$He yields to compare with the experimental data shown in Fig.~\ref{fig:RbHeTransients} we extend the mixed quantum-classical model for the desorption dynamics of bare Rb described in Sec.~\ref{sec:simulations}. It is augmented by computing the probability of populating bound vibrational states of the Rb$^+$He molecule for each segment of the Rb wave packet upon ionization as the sum of spatial overlap integrals
\begin{align}
p_i^{PAI}(\tau )=\sum_v\left|\int_{-\infty}^{\infty}
\phi_v (R) \cdot \psi_i(R,\tau ) \; dR\right|^2.
\end{align}
Here $\psi_i$ denotes the $i$-th wave packet segment and $\phi_v$ stands for the vibrational wave functions of Rb$^+$He calculated using R. J. LeRoy's LEVEL program~\cite{level} for the 6p$\Sigma ,\,\Pi$ pair potentials of Rb$^+$He~\cite{Pascale:1983}. The identification of bound and free Rb$^+$He ions in the simulation is based on analyzing the final Rb$^+$-He and Rb$^+$He-He$_N$ distances, respectively, after long delays $\tau >10$~ps. The final probability $P$ of detecting a Rb$^+$He molecule is obtained by summing up the detection probabilities for every segment,
\begin{equation}
P(\tau ) = \sum_i p_i^D(\tau)\cdot (p_i^{PAI}(\tau ) + p_{ex}).
\label{eq:Ptau}
\end{equation}
In analogy with Eq.~(\ref{Fitfunction}), $p_i^D$ denotes the desorption probability and $p_{ex}$ is the probability of creating a bound neutral Rb$^\ast$He exciplex, which is assumed to occur instantaneously upon laser excitation and thus does not depend on $\tau$. Since the relative contributions of PAI and direct exciplex formation are not precisely known we refrain from quantitatively modeling the relative efficiencies of the two pathways leading to free Rb$^+$He. Instead we adjust them to the experimental transients by taking $p_{ex}$ as a free fit parameter. The transient signal $P(\tau )$ is finally convolved with the intensity autocorrelation function of the laser pulses, as for the Rb$^+$ transients.
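The projection of a single wave-packet segment onto bound vibrational states can be illustrated numerically. The sketch below is a toy version only: the harmonic-oscillator functions stand in for the actual Rb$^+$He vibrational wave functions (obtained from LEVEL in our calculation), and the grid, width, and displacement are invented solely to exercise the overlap-integral bookkeeping.

```python
# Toy numerical version of the bound-state projection p_i^PAI:
# sum over v of |integral phi_v(R) psi_i(R) dR|^2, with invented
# harmonic-oscillator stand-ins for the true wave functions.
import math

def trapz(y, x):
    """Trapezoidal rule for sampled values y on grid x."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1))

def ho_state(v, x, mu, sigma):
    """Normalized harmonic-oscillator eigenfunction (v <= 2) centred at mu."""
    xi = (x - mu) / sigma
    herm = [1.0, 2.0 * xi, 4.0 * xi * xi - 2.0][v]
    norm = 1.0 / math.sqrt(2.0**v * math.factorial(v) * sigma * math.sqrt(math.pi))
    return norm * herm * math.exp(-0.5 * xi * xi)

R = [3.0 + 0.005 * k for k in range(1201)]       # radial grid (arbitrary units)
psi = [ho_state(0, r, 6.5, 0.5) for r in R]      # one displaced wave-packet segment
p_pai = sum(trapz([ho_state(v, r, 6.0, 0.5) * s for r, s in zip(R, psi)], R) ** 2
            for v in range(3))                   # sum over bound states v
print(round(p_pai, 3))                           # bounded above by 1
```

Because the stand-in vibrational states are orthonormal, the summed squared overlaps stay below unity; the remainder corresponds to continuum (unbound) components.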
The resulting simulated yields of free Rb$^+$He molecular ions are depicted in Fig.~\ref{fig:RbHePPsimulation}. Clearly, the same general trends as for neat Rb$^+$ ions are recovered: (i) at short delay times $\tau < 200$ fs the appearance of Rb$^+$He is suppressed due to the falling back of the ion into the He droplet; (ii) longer laser wavelengths $\lambda\gtrsim 409$ nm (6p$\Pi$-excitation) lead to weaker repulsion and therefore to the delayed appearance of free ions as compared to 6p$\Sigma$-excitation at $\lambda\lesssim 409$ nm. These results again qualitatively agree with the experimental findings but the simulated appearance times are shorter by a factor of 2--4, as shown in Fig.~\ref{fig:tcrit}. As in the Rb$^+$ case we attribute these deviations to the use of pseudo-diatomic potentials calculated for the frozen RbHe$_N$ groundstate complex.
Moreover, the simulation reproduces a signal overshoot around $\tau=300$ fs at short wavelengths, which is due to the contribution of the photoassociative ionization channel. Association of a bound Rb$^+$He ion is possible only at sufficiently short Rb-He distances given at delays $\tau\lesssim 600$ fs for 6p$\Sigma$-excitation and $\tau\lesssim 900$ fs for 6p$\Pi$-excitation, respectively. At these delay times, the PAI signal adds to the signal due to ionization of Rb$^\ast$He exciplexes formed directly by the pump pulse. Note that we have adjusted to the experimental curves the relative contributions to the total Rb$^+$He signal arising from PAI and from exciplex ionization. Therefore our model does not permit a quantitative comparison with the experimentally measured signal amplitudes. A more detailed three-dimensional simulation of the dynamics is needed for a fully quantitative interpretation~\cite{Mateo:2013}. Nevertheless, we take the simulation result as a clear indication that PAI is an additional channel producing He-containing ionic complexes which needs to be considered in experiments involving photoionization of dopants attached to He droplets when using ultrashort pulses.
We note that in the particular case of exciting into the 6p$\Sigma$-state of the RbHe$_N$ complex, unusual Rb$^+$He signal transients may arise from the peculiar shape of the RbHe pair potential curve which features a local potential maximum at intermediate Rb-He distance~\cite{Pascale:1983,Fechner:2012}. This potential barrier causes the highest-lying Rb$^\ast$He vibrational states to be predissociative. Semi-classical estimates yield predissociation time constants for the two highest vibrational levels $v=5$ and $6$ of 3.5 ns and 2.2 ps, respectively. However, these values significantly exceed the exponential decay times inferred from the measured transients (see Fig.~\ref{fig:tcrit} (c)). Moreover, we may expect that not only the highest vibrational levels are populated. Note that for the case of Na$^\ast$He and K$^\ast$He formed in the lowest 3p$\Pi$ and 4p$\Pi$ states, respectively, all vibrational levels including the lowest ones were found to be populated to varying extents depending on the laser wavelength~\cite{Reho:2000}. Therefore we tend to discard predissociation of Rb$^\ast$He exciplexes as the origin of the peculiar shape of the Rb$^+$He transients, although we cannot strictly rule it out. More insight into the Rb$^+$He dynamics may be provided by further measurements using electron and ion imaging detection.
\section{Summary}
This experimental fs pump-probe study discloses the competing dynamics of desolvation and solvation of excited and ionized states, respectively, of Rb atoms which are initially located at the surface of He nanodroplets. The generic feature of the pump-probe transients -- the time-delayed appearance of photoions -- is shown to result from the falling back of ions into the droplets when the ionization occurs at an early stage of the desorption process. This interpretation is backed by the experimental observation of the opposing trend when measuring the yield of unfragmented He nanodroplets containing a Rb$^+$ ion. Furthermore, mixed quantum-classical model calculations based on one-dimensional pseudo-diatomic potentials confirm this picture qualitatively. The limited quantitative agreement with the experimental results is attributed to the use of model potentials in the calculations, which do not account for the transient response of the He density upon excitation and ionization of the Rb dopant atom. Much better agreement may be expected from three-dimensional time-dependent density functional simulations~\cite{Vangerow:2014,Leal:2014} of the full pump-probe sequence which are currently in preparation.
Pump-probe dynamics similar to the Rb$^+$ case is observed when detecting Rb$^+$He molecular ions which primarily result from photoionization of Rb$^\ast$He exciplexes. The peculiar structure of the Rb$^+$He transients as well as extended model calculations indicate that photoassociative ionization is an additional mechanism of forming He-containing ionic complexes in fs experiments. However, the dynamics resulting from the additional photoassociative ionization channel cannot unambiguously be distinguished from predissociation of Rb$^\ast$He exciplexes in high-lying vibrational levels of the 6p$\Sigma$-state.
These results shed new light on the interpretation of the Rb$^+$He pump-probe transients measured previously by ionizing via the lowest 5p$\Pi$ excited state~\cite{Droppelmann:2004,Mudrich:2008}. The signal increase at short delays was interpreted as a manifestation of the formation dynamics of the Rb$^\ast$He exciplex by a tunnelling process. Possibly, the competing desorption of the excited neutral atom and the fall-back of the photoion in fact determine the rise time of the Rb$^+$He signal in those transients more crucially. This issue will be elucidated in future experiments using two-color pump-probe ionization via the 5p droplet states and at low laser repetition rate so as to exclude concurrent effects of subsequent laser pulses.
Furthermore, we will investigate the photodynamics of metal atom-doped He nanodroplets in more detail by applying refined detection schemes such as ion and electron imaging~\cite{Fechner:2012,Vangerow:2014} and coincidence detection~\cite{Buchta:2013,BuchtaJCP:2013}.
\begin{acknowledgments}
The authors gratefully acknowledge fruitful discussions with W. Strunz. We thank M. Pi and M. Barranco for providing us with the Rb$^+$-He$_{2000}$ pseudodiatomic potential curve. Furthermore, we thank the Deutsche Forschungsgemeinschaft for financial support.
\end{acknowledgments}
\section{}
Recently Grumiller \cite{Grumiller}, starting from a simple set of assumptions, proposed the metric\footnote{We take $\Lambda=0$ without losing generality of our arguments.}
\begin{subequations}{\label{one}}
\begin{eqnarray}
ds^{2} &=& -K^{2}dt^{2} + \frac{dr^{2}}{K^{2}}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right),\\
K^{2} &=& 1-2 \frac{MG}{r}+ 2 b r,
\end{eqnarray}
\end{subequations}
\noindent as a somewhat general framework to approach various systems with anomalous accelerations such as the rotation curves of spiral galaxies and the Pioneer anomaly.
It is stated that $b$ comes in as an arbitrary constant depending on the system under study and that for $b>0$ and of the order of the inverse Hubble length, a qualitative understanding of the mentioned anomalies is possible. It is also stated that the effective energy-momentum tensor resulting from Eqs.(\ref{one}) is that of an anisotropic fluid obeying the equation of state $p_{r}=-\rho$ and $p_{\theta}=p_{\phi}=p_{r}/2$ with
\begin{equation}{\label{onunki}}
\rho=\frac{4 b}{\kappa r}\;,
\end{equation}
where $\kappa$ is the (positive) gravitational coupling constant, i.e. the constant in the Einstein equation $G_{\mu\nu} =\kappa T_{\mu\nu}$.
While we agree on the equation of state we disagree on the sign of $\rho$; the metric in Eqs.(\ref{one}) yields\footnote{We use MTW \cite{MTW} sign conventions, but of course the signs of $\rho$ and $p$'s are the same in all commonly used conventions.}
\begin{equation}{\label{bizimki}}
\rho=-\frac{4 b}{\kappa r}\;.
\end{equation}
The effective potential formalism for the geodesic equation of a test particle is given by
\begin{equation}
\left(\frac{dr}{d\lambda}\right)^{2}+2V_{\rm eff}(r)={E^{2}-\epsilon},
\end{equation}
with
\begin{equation}{\label{veff}}
V_{\rm eff}(r)=-\epsilon\frac{MG}{r}+\frac{L^{2}}{2r^{2}}-MG\frac{L^{2}}{r^{3}}+\epsilon br+b\frac{L^{2}}{r},
\end{equation}
where units are chosen such that $c=1$, $\lambda$ is the affine parameter along the geodesic and the constants of motion are $E=K^{2}dt/d\lambda$ and $L=r^{2}d\phi/d\lambda$. Also, for massive and massless particles we have $\epsilon=1$ and $\epsilon=0$ respectively.
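As a brief consistency check (ours, not part of \cite{Grumiller}), differentiating the energy equation along the geodesic gives $d^{2}r/d\lambda^{2}=-dV_{\rm eff}'(r)\,$; for a massive particle on a radial orbit Eq.(\ref{veff}) then yields
\begin{equation*}
\frac{d^{2}r}{d\lambda^{2}}=-\frac{dV_{\rm eff}}{dr}=-\frac{MG}{r^{2}}-b \qquad (\epsilon=1,\ L=0),
\end{equation*}
i.e.\ the $br$ term contributes an $r$-independent acceleration directed towards the center for $b>0$.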
It is still true that for $b>0$ the effect of $br$ is a constant anomalous acceleration towards the center for objects moving non-relativistically, {\em despite the negative energy density}. This follows because the effective potential is derived from the metric directly;
it can also be seen from the Raychaudhuri equation specialized to a
collection of test particles initially at rest in a small volume of
space \cite{Carroll}:
\begin{equation}\label{raycha1}
\frac{d\theta}{d\tau}=-4\pi G(\rho+p_{r}+p_{\theta}+p_{\phi})=4\pi G\rho\;,
\end{equation}
where $\theta$ is the quantity called {\em expansion} and $\tau$ is the proper time along geodesics followed by the test particles\footnote{The LHS of Eq.~(\ref{raycha1}) can also be written as $\ddot{V}/V$ where $V$ denotes the volume occupied by the test particles. See \cite{baezbunn} for a nice introduction to general relativity where Raychaudhuri's equation is placed at the center of exposition.}. The second equality follows from the peculiar equation of state described in \cite{Grumiller} and confirmed here. A negative derivative of the expansion along geodesics shows that gravity is attractive for the given fluid; by Eq.(\ref{bizimki}), this is the case here for positive $b$.
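Written out explicitly (a small check of ours), the equation of state $p_{r}=-\rho$, $p_{\theta}=p_{\phi}=p_{r}/2$ together with Eq.(\ref{bizimki}) gives
\begin{equation*}
\rho+p_{r}+p_{\theta}+p_{\phi}=\rho-\rho-\frac{\rho}{2}-\frac{\rho}{2}=-\rho,
\qquad
\frac{d\theta}{d\tau}=4\pi G\rho=-\frac{16\pi Gb}{\kappa r}<0\quad\text{for }b>0.
\end{equation*}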
The negative energy density naturally leads us to question if the fluid violates any of the so-called {\em energy conditions}\footnote{We use the definitions in \cite{Carroll} for the energy conditions.}, the compatibility with which is generally taken as a measure of physical reasonableness. The weak energy condition (WEC) requires $\rho \geq 0$ and $\rho+p_{i} \geq 0$; the strong energy condition (SEC), $\rho+\sum{p_{i}} \geq 0$ and $\rho+p_{i} \geq 0$; and the dominant energy condition (DEC), $\rho \geq |p_{i}|$; it is easily seen that our fluid violates all three\footnote{One can easily find that the null energy condition (NEC) and the null dominant energy condition (NDEC) are also violated.}.
On one hand, one might say that the violation of energy conditions means that the model is not very physically reasonable; but on the other hand, we are talking about an {\em effective} fluid, not necessarily a real one. Also, the attractive nature of the fluid (as confirmed by application of Raychaudhuri's equation) in the face of these violations serves as an example of a delicate fact about SEC: while SEC ensures that gravity is attractive, it does not encompass all {\em attractive} gravities.
Finally, we would like to point out a possibility for the relation between $b$ and the system under consideration: the fluid is attractive; in fact, the $1/r$ dependence of the density and pressures shows that it clusters around the central mass. Though speculative at this point, it seems reasonable that bigger masses will accumulate more fluid, i.e., $b$ will be a monotonically increasing function of $M$. On the other hand, the very meaning of $M$ next to $b$ is questionable: since one has to match the metric in Eqs.(\ref{one}) to the metric of the interior system (star or galaxy), the matching conditions will undoubtedly yield a relation between the integral of the energy density inside the interior system, which we may call $M_{s}$, and the parameters of the metric outside, $M$ and $b$. We leave the quantitative
analysis of these points for future work.
\section{Introduction}
\IEEEPARstart{I}ce has an unusual property called recrystallization. When water starts to freeze, it forms many small crystals. Some of the small crystals soon dominate and continue to grow by stealing water molecules from the surrounding small crystals \cite{yu2001winter}. This phenomenon can be particularly lethal for living organisms in extreme cold weather due to the intracellular formation of ice \cite{griffith1997antifreeze}. Antifreeze proteins (AFPs) neutralize this recrystallization effect by binding to the surface of the small ice crystals and retarding their growth into larger, dangerous crystals \cite{davies2002structure},\cite{fletcher2001antifreeze}. They are therefore also called 'ice-structuring proteins' (ISPs). The AFPs lower the freezing point of water without altering the melting point; this interesting property of the AFPs is called 'thermal hysteresis' \cite{urrutia1992plant}.
The AFPs are critical for the survival of living organisms in extremely cold environments. They are found in various insects, fish, bacteria, fungi and overwintering plants such as gymnosperms, ferns, monocotyledons, and angiosperms \cite{yu2001winter},\cite{davies2002structure},\cite{urrutia1992plant},\cite{scholander1957supercooling}, \cite{moriyama1995seasonal},\cite{logsdon1997origin},\cite{ewart1999structure},\cite{cheng1998evolution},\cite{davies1997antifreeze}. Several studies on various AFPs have shown that there is little structural and sequence similarity in the ice-binding domain \cite{davies2002structure}. This inconsistency relates to the lack of common features among different AFPs, and a reliable prediction of AFPs is therefore considered an arduous task.
The recent success of machine learning algorithms in the area of protein classification has encouraged several researchers to develop automated approaches for the identification of AFPs. AFP-Pred \cite{kandaswamy} is considered to be the earliest work in this direction. The work is essentially based on a random forest approach making use of sequence information such as functional groups, physicochemical properties, short peptides and secondary structural elements. In AFP\_PSSM \cite{xiaowei}, evolutionary information is used with support vector machine (SVM) classification. In iAFP \cite{yu}, n-peptide composition is used with limited experimental results; in particular, amino acid, di-peptide and tri-peptide compositions were used. We argue that the tri-peptide composition is computationally expensive (requiring the calculation of $20^3$ combinations), resulting in redundant information. Consequently, the selection of the most significant features using genetic algorithms (GA) has shown limited results \cite{yu}. It is also worth noting that the n-peptide compositions were derived for the whole sequence. The latest work in this regard is AFP-PseAAC \cite{mondal}, where the pseudo amino acid composition is used with an SVM classifier to achieve good prediction accuracy.
In machine learning, difficult manifold learning problems can be addressed more effectively using a localized processing approach than with holistic counterparts \cite{kovnatsky2015madmm}. Considering the diversified structures of AFPs, it is intriguing to explore localized processing of the protein sequences. We therefore propose a segmentation approach in which each protein sequence is segmented into two sub-sequences. The amino acid and di-peptide compositions are derived for each sub-sequence, from which we extract the relevant features. The most significant features are further selected using the concept of information gain, and the random forest approach is used for classification. To the best of our knowledge, this is the first time that localized processing is proposed to deal with the challenging problem of learning the diversified structures of the AFPs. The proposed method is shown to comprehensively outperform all existing approaches on standard datasets (Section~\ref{secresult}).
The paper is organized as follows: the details and mathematical framework of the proposed approach are presented in Section \ref{proposed}, followed by the description of the data sets and our experimental results in Section \ref{secresult}. Our conclusions are provided in Section \ref{conclusion}.
\section{Proposed Approach}\label{proposed}
The reliable prediction of proteins can only be achieved by robustly encoding the protein sequences into mathematical expressions. This ensures that the underlying structures of the protein sequences have been truly learned. In the absence of robust learning of the protein sequences, the predictor is unlikely to perform well for unseen test samples. From the machine learning perspective, difficult manifold learning problems are effectively tackled using the localized processing approach \cite{kovnatsky2015madmm,yang2008prediction}. While holistic methods deal with the training samples in a global sense, localized learning focuses on various segments of the samples. Typically, features extracted from confined segments are efficiently fused. For challenging manifold learning problems, localized learning has been shown to outperform its counterparts in various applications of machine learning \cite{kandaswamy2013ecmpred,dehzangi2015gram,zhang2008using}. We therefore propose a local analysis approach of AFPs for feature extraction.
\subsection{Features}\label{proposed1}
Since the structures of various AFPs are uncorrelated and lack similarity, the automated prediction of AFPs is considered to be a challenging task. Motivated by the robustness of localized learning approaches, we propose an approach that processes localized segments of the AFP sequences. In particular, each protein sequence is segmented into two sub-sequences, and each sub-sequence is individually analyzed for amino acid and di-peptide compositions.
Consider a protein chain of $L$ amino acid residues:
\begin{eqnarray}
\mathbf{P}&=&R_1R_2R_3\ldots R_L
\end{eqnarray}
where $R_i$ represents the $i^{th}$ residue of protein $\mathbf{P}$ \cite{chou2011some}. According to the amino acid composition protein $\mathbf{P}$ can be expressed as an array of occurrence frequency of the twenty native amino acids:
\begin{eqnarray}
\mathbf{P}&=&[f_1f_2f_3\ldots f_{20}]^{T}
\end{eqnarray}
where $f_j;\ j=1,2,3,\ldots,20$ is the normalized occurrence frequency of the $j^{th}$ native amino acid in $\mathbf{P}$, and $T$ is the vector transpose operator. Accordingly, the amino acid composition of a protein can readily be derived once the protein sequencing information is known. This simple, but effective, amino acid composition (AAC) model has been widely used in a number of statistical methods to predict protein structures \cite{horton2007wolf}, \cite{du2012pseaac}.
Dipeptide compositions are computed using 400 ($20\times20$) dipeptides, i.e. AA, AC, AD,$\ldots$, YV, YW, YY. Each component is calculated using the following equation:
\begin{multline}
\mbox{fraction of the }\ k^{th}\ \mbox{dipeptide} \\
= \frac{\mbox{total number of the}\ k^{th}\ \mbox{dipeptide}}{\mbox{total number of all possible dipeptides}}
\end{multline}
\begin{table}
\caption{List of features}
\begin{tabular}{ll}
\hline
\textbf{Features} & \textbf{Number of attributes} \\
\hline
Segment 1 & \\
\hline
Amino Acid Composition features & 20 \\
\hline
Dipeptide Composition features & 400 \\
\hline
Segment 2 & \\
\hline
Amino Acid Composition features & 20 \\
\hline
Dipeptide Composition features & 400 \\
\hline
Total & 840 \\
\hline
\end{tabular}
\label{features}
\end{table}
The 20 AACs and 400 dipeptide compositions are combined to form 420 attributes for each segment of the AFP sequence. Finally the 420 attributes of individual sub-sequences are fused to form a single representative feature vector consisting of 840 attributes. Table \ref{features} shows a list of derived features.
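The two-segment feature extraction of Table \ref{features} can be sketched as follows. This is an illustrative sketch, not our production code; in particular, the split at the sequence midpoint is an assumption made here for the example, and the test sequence is invented.

```python
# Sketch of the two-segment feature extraction: per segment, 20 amino-acid
# frequencies plus 400 dipeptide fractions, concatenated into 840 attributes.
AMINO = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = [a + b for a in AMINO for b in AMINO]       # 400 ordered pairs

def composition(segment):
    """20 AAC values followed by 400 dipeptide fractions for one segment."""
    aac = [segment.count(a) / len(segment) for a in AMINO]
    pairs = [segment[i:i + 2] for i in range(len(segment) - 1)]
    dpc = [pairs.count(d) / len(pairs) for d in DIPEPTIDES]
    return aac + dpc                                      # 420 features

def segment_features(seq):
    mid = len(seq) // 2          # split point assumed to be the midpoint
    return composition(seq[:mid]) + composition(seq[mid:])

vec = segment_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(len(vec))   # 840
```

By construction, the 20 AAC values and the 400 dipeptide fractions of each segment each sum to one for sequences containing only the 20 standard residues.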
It is well established that redundant information tends to degrade classification results \cite{ding2005minimum}. It is therefore customary to select the most relevant features for the purpose of classification \cite{koller1996toward}, \cite{langley1994selection}. Information gain (IG) or Info-Gain is considered to be an important criterion for the selection of the most significant features \cite{kandaswamy2010spred}. Given a training set $S$ and an attribute $A$, the information gain with respect to the attribute $A$ can be defined as the reduction in entropy of the training set once the attribute $A$ is observed \cite{mitchell1997machine}, mathematically:
\begin{eqnarray}
IG(S,A)=H(S)-H(S/A)
\end{eqnarray}
where $H(S)$ is the entropy of $S$ and $H(S/A)$ is the entropy of $S$ conditioned to the observation of attribute $A$. For the classical case of a dichotomizer:
\begin{eqnarray}
H(S)=-\sum_{l=1}^{2}p_{l}\log_{2}p_{l}
\end{eqnarray}
and
\begin{eqnarray}
H(S/A)=\sum_{v\in Values(A)}\frac{|S_v|}{|S|}H(S_v)
\end{eqnarray}
where $Values(A)$ is a set of all possible values of the attribute $A$, $S_v$ is the partition of the training set characterizing the value $v$ of attribute $A$, $H(S_v)$ is the entropy of $S_v$ and $|.|$ is the cardinality operator \cite{mitchell1997machine}.
We propose to use the concept of the Info-Gain for the selection of the most significant features from a pool of 840 features (discussed in Section \ref{proposed1}). The features are ranked using the above formulation of IG in a descending order such that the attribute with the highest IG is given the top priority.
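The Info-Gain computation above can be sketched in a few lines. The attribute values and labels below are invented for illustration; a perfectly separating attribute recovers the full class entropy of one bit.

```python
# Minimal Info-Gain computation: IG(S, A) = H(S) - H(S/A)
# for binary class labels (AFP / non-AFP).
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(attr_values, labels):
    gain = entropy(labels)                       # H(S)
    n = len(labels)
    for v in set(attr_values):                   # partitions S_v of the training set
        subset = [l for a, l in zip(attr_values, labels) if a == v]
        gain -= (len(subset) / n) * entropy(subset)   # weighted H(S_v)
    return gain

print(info_gain(["hi", "hi", "lo", "lo"], [1, 1, 0, 0]))   # 1.0
```

Features would then be ranked by this score in descending order, as described above.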
\subsection{Classification}
The Random forest approach has been shown to produce excellent results for various prediction problems in proteomics \cite{kandaswamy2010spred,kandaswamy2013ecmpred,kandaswamy,wu2003comparison,lee2005extensive,diaz2006gene,kumar2009dna,masso2010knowledge}. Random forest is an ensemble classification protocol which combines several weak classifiers (decision trees) into a single strong classifier. The decision trees generated by the random forest approach are combined using a weighted average scheme \cite{breiman2001random}. The approach harnesses the power of many decision trees, rational randomization, and ensemble learning to develop accurate classification models \cite{breiman2001random}.
Random forest is a supervised learning approach consisting of two steps: (1) bagging, and (2) random partitioning. In bagging, several decision trees are grown by drawing multiple samples (with replacement) from the original training data set. Although an indefinite number of such trees can be grown, typically 200-500 trees are considered to be enough \cite{mitchell1997machine}. The Random forest approach introduces randomness in tree-growing by first randomly selecting a subset of prospective predictors and then producing the split by selecting the best available splitter. The approach is robust to overfitting and quite efficient on large datasets \cite{breiman2001random}. A Random forest classifier was implemented using the WEKA tool \cite{frank2004data}, with the following controlling parameters: (1) maximum depth of tree = 10, (2) number of features = 100, (3) number of trees = 50, and (4) number of seeds = 1. The work-flow of the proposed RAFP-Pred is shown in Figure \ref{WorkFlow1}.
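The two steps (bagging and random feature sub-selection, followed by majority voting) can be sketched in pure Python with depth-1 trees. This is an illustration of the idea only, not the WEKA model used in the paper, which grows deeper trees; the toy data and parameter values below are assumptions:

```python
import random
from collections import Counter

def grow_stump(X, y, feat_subset):
    """Best single-feature threshold split chosen from a random feature subset."""
    best = None
    for f in feat_subset:
        for t in sorted({row[f] for row in X}):
            pred = [1 if row[f] > t else 0 for row in X]
            hits = sum(p == yi for p, yi in zip(pred, y))
            acc, flip = max((hits, False), (len(y) - hits, True))  # try both orientations
            if best is None or acc > best[0]:
                best = (acc, f, t, flip)
    _, f, t, flip = best
    if flip:
        return lambda row, f=f, t=t: 0 if row[f] > t else 1
    return lambda row, f=f, t=t: 1 if row[f] > t else 0

def random_forest(X, y, n_trees=50, n_feats=2, seed=1):
    """Bagging + random feature subsets + majority vote over the weak learners."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap sample (bagging)
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        feats = rng.sample(range(len(X[0])), n_feats)          # random predictor subset
        trees.append(grow_stump(Xb, yb, feats))
    def predict(row):
        return Counter(t(row) for t in trees).most_common(1)[0][0]
    return predict
```

Each tree sees a different bootstrap sample and a random subset of predictors, so individual trees are weak and diverse; the ensemble vote is what yields a strong classifier.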
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8cm]{workflow.png}
\end{center}
\caption{Work-flow of the proposed RAFP-Pred approach.}
\label{WorkFlow1}
\end{figure}
\section{Experimental Results} \label{secresult}
\subsection{Evaluation Parameters}
For any prediction framework, the Receiver Operating Characteristic (ROC) is considered to be the most comprehensive performance criterion. The proposed algorithm was therefore extensively evaluated for true positive rate (sensitivity), true negative rate (specificity), prediction accuracy and the area under the curve (AUC). The proposed algorithm was also evaluated for the Matthews Correlation Coefficient (MCC). MCC ranges from -1 to 1, with MCC = 1 and MCC = -1 indicating the best and the worst predictions respectively; MCC = 0 corresponds to a random guess. Youden's index (or Youden's J statistic) is an interesting way of summarizing the results of a diagnostic experiment \cite{youden1950index}. It ranges from 0 to 1, where 0 indicates the worst performance and 1 indicates perfect prediction with no false positives and no false negatives. Youden's index is particularly useful for the evaluation of highly imbalanced test data.
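All of these quantities follow directly from the confusion matrix; a minimal sketch (illustrative only, with made-up counts in the usage example):

```python
def metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy, MCC and Youden's J from a confusion matrix."""
    sens = tp / (tp + fn)                      # true positive rate
    spec = tn / (tn + fp)                      # true negative rate
    acc = (tp + tn) / (tp + fn + tn + fp)
    mcc_den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    youden = sens + spec - 1                   # J = sensitivity + specificity - 1
    return sens, spec, acc, mcc, youden

# Hypothetical counts: 8 true positives, 2 false negatives, 9 true negatives, 1 false positive
sens, spec, acc, mcc, j = metrics(tp=8, fn=2, tn=9, fp=1)
```

On a highly imbalanced test set, accuracy alone can look excellent while J stays near 0, which is exactly the effect discussed for iAFP below.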
\subsection{Experimental Results}
Extensive experiments were conducted on a number of state-of-the-art datasets reported frequently in the literature \cite{kandaswamy}, \cite{yu}.
\subsubsection{Dataset 1} \label{dataset1} Dataset 1 consists of 481 AFPs and 9493 non-AFPs reported in \cite{kandaswamy}. The dataset is further partitioned into training and testing sets. The training set comprises 300 AFPs and 300 non-AFPs selected randomly from the pools of 481 AFPs and 9493 non-AFPs respectively. The remaining 181 AFPs and 9193 non-AFPs constitute the testing set. Training accuracy was computed by evaluating the proposed algorithm on the 600 training samples. This dataset is obtained from \cite{kandaswamy}, in which the protein sequences were collected from the Pfam database \cite{sonnhammer1997pfam}. For the redundancy check, a PSI-BLAST search was performed for each sequence against a non-redundant sequence database with a stringent threshold (E-value 0.001), followed by manual inspection to retain only antifreeze proteins. The final dataset contains only protein sequences with $\leq 40\%$ sequence similarity; all other similar proteins were removed from the dataset using CD-HIT \cite{li2001clustering}.
The proposed approach attained 100\% accuracy on a randomly selected training set which outperforms the AFP-Pred method by a margin of 18.67\% \cite{kandaswamy} and the AFP\_PSSM method by a margin of 17.33\% \cite{xiaowei}. The average accuracy of three randomly selected training sets, for the proposed method, was found to be 99.91\% with a standard deviation of 0.16\%. This prediction performance is 10.22\% better compared to the AFP-PseAAC approach (standard deviation of 0.706\%)\cite{mondal}.
\begin{table*}
\begin{center}
\caption{Performance of the proposed RAFP-Pred on test dataset containing 181 AFPs and 9193
non-AFPs using different feature subsets.}
\begin{tabular}{cccccc}
\hline
{\bf Feature subset} & {\bf Sensitivity (\%)} & {\bf Specificity (\%)} & {\bf MCC} & {\bf Accuracy (\%)} & {\bf Youden's index} \\
\hline
\\
25 features & 79.01\% & 89.24\% & 0.288 & 89.04\% & 0.68 \\
\hline
50 features & 82.32\% & 90.03\% & 0.314 & 89.88\% & 0.72 \\
\hline
75 features & 81.77\% & 89.83\% & 0.308 & 89.67\% & 0.72 \\
\hline
{\bf 100 features} & {\bf 83.98\%} & {\bf 91.07\%} & {\bf 0.339} & {\bf 90.93\%} & {\bf 0.75} \\
\hline
200 features & 79.01\% & 90.10\% & 0.301 & 89.88\% & 0.69\\
\hline
400 features & 80.11\% & 90.93\% & 0.320 & 90.72\% & 0.71 \\
\hline
600 features & 82.87\% & 90.20\% & 0.319 & 90.06\% & 0.73 \\
\hline
800 features & 82.87\% & 89.67\% & 0.310 & 89.54\% & 0.72 \\
\hline
All features & 83.43\% & 89.22\% & 0.306 & 89.11\% & 0.73 \\
\\
\hline
\end{tabular}
\label{testdata1}
\end{center}
\end{table*}
The results for the test data set, using different feature subsets, are shown in Table \ref{testdata1}. The proposed RAFP-Pred achieves the best accuracy of 90.93\% utilizing the 100 most significant features. For a comprehensive evaluation, the proposed approach was also compared to the state-of-the-art methods reported in the literature (refer to Table \ref{testdata1_1}). Note that for a fair comparison we implemented and evaluated all approaches using the same training and testing examples. We were however unable to generate results for AFP\_PSSM \cite{xiaowei} as the data was unavailable during our experiments. Instead we compared our results directly with those reported in \cite{xiaowei}.
\begin{table*}
\begin{center}
\caption{Comparison of the proposed RAFP-Pred with different machine learning approaches on Dataset 1.}
\begin{tabular}{cccccc}
\hline
{\bf Predictor} & {\bf Sensitivity (\%)} & {\bf Specificity (\%)} & {\bf Accuracy (\%)} & {\bf Youden's index} & {\bf AUC} \\
\hline
\\
iAFP & 9.94\% & 97.23\% & 95.55\% & 0.07 & NA \\
\hline
AFP-Pred & 82.32\% & 79.02\% & 79.08\% & 0.61 & 0.89\\
\hline
AFP\_PSSM \cite{xiaowei} & 75.89\% & 93.28\% & 93.01\% & 0.69 & 0.93\\
\hline
AFP-PseAAC & 82.87\% & 87.61\% & 87.52\% & 0.70 & NA\\
\hline
{\bf RAFP-Pred} & {\bf 83.98\%} & {\bf 91.07\%} & {\bf 90.93\%} & {\bf 0.75} & 0.95 \\
\\
\hline
\end{tabular}
\label{testdata1_1}
\end{center}
\end{table*}
The test data set is highly imbalanced with 181 (AFPs) positive and 9193 (non-AFPs) negative examples. For such a highly imbalanced test data, there is a natural tendency for the predictor to be biased in favor of the class which has more samples. In such scenarios, the evaluation parameters such as the AUC and Youden's index are more representative of the predictor's performance than the conventional sensitivity, specificity and accuracy measures.
For instance in Table \ref{testdata1_1} iAFP achieves a very high specificity of 97.23\% but a poor sensitivity of 9.94\%. Therefore, although the overall accuracy of 95.55\% appears to be the best reported accuracy, the predictor has a low Youden's index of 0.07 and therefore cannot be regarded as competitive. The proposed approach achieved a Youden's index of 0.75 which is better than all reported results in the literature. The receiver operating characteristics (ROC) are shown in Figure \ref{roc} where the highest AUC of 0.95 verifies the excellent performance of the proposed RAFP-Pred approach.
The 100 most significant features obtained using the training samples of dataset 1 are available online at https://goo.gl/3i7gQD. These 100 features were used for all the datasets.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8cm]{roc_1.png}
\end{center}
\caption{ROC curves for the proposed RAFP-Pred approach.}
\label{roc}
\end{figure}
It is interesting to compare the proposed approach with the latest and most successful method reported in the literature, i.e., AFP-PseAAC. The proposed RAFP-Pred approach has been shown to comprehensively outperform the AFP-PseAAC method. AFP-PseAAC achieved a sensitivity and specificity of 82.87\% and 87.61\% respectively, which lag the proposed approach by margins of 1.11\% and 3.46\%. The Youden's index of AFP-PseAAC was also found to be 0.70, which is inferior to that of the proposed approach.
\subsubsection{Dataset 2} Dataset 2 consists of 44 AFPs and 3762 non-AFPs collected from the Protein Data Bank (PDB) \cite{berman2000protein} and the PISCES server \cite{wang2003pisces} respectively (reported in \cite{yu}). The non-AFPs in the dataset had 25\% pairwise sequence identity (SI), R-factors of 0.25 and a crystallographic resolution of at least 2~\AA. Only those AFPs that had known 3D structures were included in this dataset. Dataset 2 is also a highly imbalanced dataset, with 44 positive and 3762 negative examples. In the literature, the only results reported on this dataset are for the iAFP method \cite{yu}. In particular, the iAFP method attained an accuracy of 99.32\% on 7-fold cross validation. The proposed RAFP-Pred approach attained a comparable accuracy of 99.71\% using the 100 most significant features obtained in Section \ref{dataset1}. The MCC value of 0.87 found for the proposed RAFP-Pred also compares favorably to the 0.79 reported for the iAFP method. Note that the proposed RAFP-Pred approach was trained using the samples of dataset 2 only; no other training samples were used. For each iteration of 7-fold cross validation, the redundancy between the training and testing samples was explicitly checked and all samples were found to be unique.
The state-of-the-art AFP-PseAAC approach achieved an accuracy of 99.74\%, which is quite comparable to the 99.71\% of the proposed approach. The MCC value of 0.88 achieved by AFP-PseAAC is also comparable to the 0.87 attained by the proposed RAFP-Pred approach.
\subsubsection{Dataset 3} Dataset 3 is an independent dataset representing an evolutionarily divergent group of organisms, consisting of 369 AFPs obtained from the UniProtKB database by searching for the phrase ``antifreeze'' \cite{bairoch2000swiss}, \cite{uniprot2010universal}. Any redundancies, i.e., duplicate or partial sequences, were removed during the search. To further filter the dataset, all sequences labeled as ``predicted'' or ``putative'' in the protein name field were also removed, followed by a manual check against the literature. To avoid any confusion, proteins belonging to the ``antifreeze-like proteins'' were also excluded. The results on this dataset are reported only for the iAFP method \cite{yu} in \cite{mondal}. The proposed RAFP-Pred was trained using the training data in \cite{mondal} (i.e., dataset 2), where the 100 most significant features were used. The sequences of the training and testing sets were scanned for similar sequences and no identical sequences were found. The proposed RAFP-Pred approach attained the highest verification rate of 83.19\%, which is substantially better than the 57.18\% reported for iAFP.
The AFP-PseAAC approach was also evaluated using the same training and testing samples achieving a verification rate of 40.17\% which is 43.02\% inferior to the proposed RAFP-Pred approach.
\section{Biological Justification of the Most Significant Features Selected by the Proposed Approach}
It is well known that biological proteins usually have hydrophobic amino acids in the core (away from the water molecules of the solvent). Interestingly, some AFPs have many hydrophobic amino acids on their surfaces \cite{AFPstructure1}, \cite{AFPstructure2}, \cite{AFPstructure3}. On the other hand, $\alpha$-helices are most commonly found at the surface of protein cores (in some fish AFPs, for instance), where they provide an interface with the aqueous environment.
Regions which tend to form an $\alpha$-helix are: (1) richer in alanine (A), glutamic acid (E), leucine (L), and methionine (M), and (2) poorer in proline (P), glycine (G), tyrosine (Y), and serine (S). Careful analysis of the localized segments shows that:
\begin{itemize}
\item Segment 1 contains high proline, high serine, high tyrosine and low alanine content, which indicates a low likelihood of an $\alpha$-helix in segment 1.
\item Segment 2 contains low tyrosine, high glutamic acid, high alanine and moderate methionine content, which indicates a high probability of an $\alpha$-helix in segment 2.
\end{itemize}
The above discussion shows that segment 2 has a high probability of containing an $\alpha$-helix region. Biologically, we can therefore expect AFPs to have more hydrophobic amino acids in segment 2 compared to the non-AFPs. This can serve as a biologically justified point of discrimination.
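The composition heuristic above can be expressed as a toy score, shown below for illustration only; real secondary-structure prediction uses far richer models, and the residue sets and example segments are assumptions taken from the rule just stated:

```python
HELIX_FAVORING = set("AELM")   # Ala, Glu, Leu, Met: enriched in helix-forming regions
HELIX_BREAKING = set("PGYS")   # Pro, Gly, Tyr, Ser: depleted in helix-forming regions

def helix_bias(segment):
    """Crude score: fraction of helix-favoring minus helix-breaking residues.
    A positive value suggests the segment is more likely to form an alpha-helix."""
    n = len(segment)
    fav = sum(segment.count(a) for a in HELIX_FAVORING) / n
    brk = sum(segment.count(a) for a in HELIX_BREAKING) / n
    return fav - brk
```

A Pro/Gly/Ser/Tyr-rich stretch such as "PPGSYSGP" scores negative (helix unlikely), while an Ala/Glu/Leu/Met-rich stretch such as "AEALMEAL" scores positive, matching the segment-1 versus segment-2 contrast described above.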
About 68\% of the features selected by the proposed RAFP-Pred come from segment 2, and 58\% of the segment-2 features are related to hydrophobic amino acids. It therefore follows that the proposed approach selected the most relevant and biologically justified features for AFP prediction.
The structural and sequential diversity in AFPs demands a feature-set encompassing a broad range of features, catering for most types of AFPs. For instance, the cysteine composition may vary across organisms: conserved cysteines form disulfide bonds in beta-helix insect AFPs, but the same is not true for type 1 fish AFPs. A broad range of features is therefore required to predict AFPs across organisms. A thorough investigation shows that the optimal feature-set obtained by the proposed RAFP-Pred approach indeed contains a broad spectrum of these significant features.
For instance, type 1 AFPs are rich in alanine \cite{Type1_AFP}, type 2 and type 5 AFPs are rich in cysteine \cite{Type2_AFP}, \cite{Type5_AFP}, and type 4 AFPs are rich in glutamine \cite{Type4_AFP1}, \cite{Type4_AFP2}. Interestingly, the optimal feature set obtained by the proposed RAFP-Pred approach contains all of these features. This explains the better performance of the proposed approach compared to the contemporary predictors.
In our experiments, the training data of dataset 1 (300 AFPs and 300 Non-AFPs) was used to identify the top 100 significant features. The details are provided in the supplementary material. Here we discuss the top three features selected by the proposed approach.
The most relevant feature selected by the proposed approach is the frequency of the tryptophan amino acid in segment 2. A careful exploration of the training data shows that segment 2 of the AFPs contains 40.97\% more tryptophan compared to the non-AFPs. It is therefore safe to assume that the frequency of tryptophan in segment 2 is a discriminating feature. The second most relevant feature selected by the proposed approach is the frequency of the leucine amino acid in segment 2. The non-AFPs of the training data set contain 20.82\% more leucine than their AFP counterparts; leucine can therefore be regarded as another discriminating feature. The frequency of occurrence of the amino acid cysteine is the third most relevant feature selected by the proposed approach. Analysis of the training data shows that it is found in abundance in both segments of the AFPs compared to the non-AFPs. In particular, the training data contained 36.30\% more cysteine in the AFPs than in the non-AFPs, and therefore cysteine is regarded as an important discriminating feature. This finding is supported by other researchers, who highlight the significance of cysteine in the prediction of AFPs \cite{Type2_AFP}, \cite{Type5_AFP}. In fact, 19 out of the 100 features selected by the proposed approach are cysteine related.
For further details on all the selected features, the reader is referred to the supplementary material.
\section{Conclusion}\label{conclusion}
The structural and sequential dissimilarity of protein sequences makes the prediction of AFPs a difficult task. Previous sequence-based AFP predictors make use of the whole protein sequence. In this work, we proposed a novel concept of localized analysis of AFP sequences. Extensive experiments on a number of standard datasets have been conducted. The proposed RAFP-Pred approach has been shown to perform better than previous predictors such as AFP-PseAAC, AFP\_PSSM, AFP-Pred and iAFP. The Weka model of the proposed approach has been made publicly available for benchmarking purposes (https://goo.gl/3i7gQD). Our favorable results suggest further exploration in this direction. For instance, a more extensive segmentation could be a possible area of future research.
\section{Acknowledgement}
The authors would like to thank the University of Western Australia (UWA), the Pakistan Air Force - Karachi Institute of Economics and Technology (PAF-KIET), and Iqra University (IU) for providing the necessary support towards conducting this research, and the anonymous reviewers for their important comments.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
In this paper, we analyse the long time behavior of random walks taking place in {\em an evolving field of traps}. A starting motivation is
to consider a {\em dynamical environment} version of Bouchaud's trap model on $\mathbb{Z}^d$. In the (simplest version of the) latter model, we
have a continuous time random walk (whose embedded chain is an ordinary random walk) on $\mathbb{Z}^d$ with spatially inhomogeneous jump rates,
given by a field of iid random variables representing traps. The case of greatest interest is the one where the inverses of the rates
are heavy tailed, leading to subdiffusivity of the particle (performing the random walk), and to the appearance of the phenomenon of aging.
See~\cite{FIN02} and~\cite{BAC}.
In the present paper, we have again a continuous time random walk whose embedded chain is an ordinary random walk
(with various hypotheses on its jump distribution, depending on the result), but now the rates are
spatially {\em as well as temporally} inhomogeneous: the rate at a given site and time is given by a (fixed) function, which we denote by $\varphi$,
of the state of a birth-and-death chain (in continuous time, with time homogeneous jump rates) at that site and time; the birth-and-death chains at different sites are iid and ergodic.
We should not expect subdiffusivity if $\varphi$ is bounded away from $0$, so we make the opposite assumption for our first main result, which is
nevertheless a Central Limit Theorem for the position of the particle (so, no subdiffusivity there, either), as well as a corresponding
Law of Large Numbers.
CLT's for random walks in dynamical random environments have been, from a more general point of view, or under different motivations,
previously established in a variety of situations; we mention~\cite{BMP97},
\cite{BZ06},~\cite{DKL08},~\cite{RV13} for a few cases with fairly general environments, and ~\cite{dHdS},~\cite{MV},~\cite{HKT}
in the case of environments given by specific interacting particle systems;~\cite{BMPZ} and~\cite{BPZ} deal with a case where the jump times of the particle are iid.
There is a relatively large literature establishing strong LLN's for the position of the particle in random walks in space-time random environments;
besides most of the references given above, which also establish it, we mention~\cite{AdHR} and~\cite{BHT}. \cite{Y09} derives large deviations for
the particle in the case of an iid space-time environment.
These papers assume (or have it naturally) in their environments an ellipticity condition, from which our environment crucially departs,
in the sense of our jump rates not being bounded away from $0$.
Jumps are generally also taken to be bounded, a possibly merely technical assumption in many respects,
which we in any case forgo. It should also be said that in many other respects these models are considerably more general, or more correlated,
than ours\footnote{This is perhaps a good point to remark that even though our environment is constituted by iid birth-and-death processes,
and the embedded chain of the particle is independent of them, the continuous time motion of the particle brings about a correlation between
the particle and the environment.}.
So, we seem to need a different approach, and that is what we develop here. Our argument requires monotonicity of $\varphi$,
and ``strong enough'' ergodicity of the environmental chains (translating into something like a second moment condition on their equilibrium
distributions).
The main building block for arguing our CLT, in the case where the initial environment is identically 0, is a Law of Large Numbers for the time
that the particle takes to make $n$ jumps; this in turn relies on a subadditivity argument, resorting to the Subadditive Ergodic Theorem;
in order to obtain the control the latter theorem requires on expected values, we rely on a domination of the environment left by the particle
at jump times (when starting from equilibrium); this is a {\em stochastic} domination, rather than a strong domination, which would be provided
by the infimum of $\varphi$, were it positive. We extend to more general, product initial environments with a uniform exponentially decaying
tail (restricting in this case to spatially homogeneous environments) by means of coupling arguments.
We expect to be able to establish various forms of subdiffusivity in this model when the environment either is not ergodic or not ``strongly ergodic''
(with, say, heavy tailed equilibrium measures). This is under current investigation. \cite{BPZ} has results in this direction in the
case where the jump times of the particle are iid.
Another object of analysis in this paper is the long time behavior of the environment seen by the particle at jump times. We show convergence in distribution under different hypotheses (but always with spatially homogeneous environments, again in this case), and also that the limiting distribution is absolutely continuous with respect to the product of environmental equilibria. We could not bring the domination property mentioned above to bear for this result in the most involved instances
of a recurrent embedded chain, so we could not avoid the assumption of a $\varphi$ bounded away from $0$ (in which case monotonicity can be dropped),
and the ``brute force'', strong tightness control this allows.
This puts us back under the ellipticity restriction on the rates\footnote{But the state space of the environment remains non compact.},
adopted in many results of the same nature that have been previously obtained,
as in many of the above mentioned references, to which we add~\cite{BMP94}.
\begin{center}
------------------------------------------
\end{center}
The remainder of this paper is organized as follows. In Section~\ref{mod} we define our model in detail, and discuss some of its properties.
Section~\ref{conv} is devoted to the formulations and proofs of the LLN and CLT under an environment started from the identically 0 configuration.
The main ingredient, as mentioned above, a LLN for the time that the particle takes to make $n$ jumps, is developed in Subsection~\ref{sec:3.1}, and the remaining subsections are devoted to the conclusion of the argument. In Section~\ref{ext} we extend the CLT to more general (product) initial configurations of the
environment (with a uniform exponential moment). In Section~\ref{env} we formulate and prove our result concerning the environment seen from the
particle (at jump times). Three appendices are devoted to auxiliary results concerning birth-and-death processes and ordinary (discrete time) random walks.
\section{The model}
\label{mod}
\setcounter{equation}{0}
\noindent
For $d\in {\mathbb N}_*:={\mathbb N}\setminus \{0\}$ and $S\subset {\mathbb R}^d$, let ${\cal D} \left({\mathbb R}_+,S\right)$ denote the set of
c\`adl\`ag trajectories from ${\mathbb R}_+$ to $S$.
We represent by $\mathbf 0\in E$ and $\mathbf{1}\in E$, $E={\mathbb N}^d, \mathbb{Z}^d,
{\mathbb N}^{\mathbb{Z}^d}$, respectively, the null element, and the element with all coordinates identically equal to 1.
We will use the notation $M\sim BDP(\mathbf p,\mathbf q)$ to indicate that $M$ is a birth-and-death process on ${\mathbb N}$
with birth rates $\mathbf p=(p_n)_{n\in{\mathbb N}}$ and death rates $\mathbf q=(q_n)_{n\in{\mathbb N}_*}$.
Below we will consider independent copies of such a process, and we will assume that
$p_n,q_n\in(0,1)$ for all $n$, $p_n+q_n\equiv1$ and
\begin{equation}\label{erg}
\sum_{n\geq1}\prod_{i=1}^n\frac{p_{i-1}}{q_i}<\infty.
\end{equation}
This condition is well known to be equivalent to ergodicity of such a process.
We will also assume that $p_n\leq q_n$ for all $n$ and $\inf_np_n>0$. See Remark~\ref{relax} at the end of Section~\ref{conv}.
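For constant rates $p_n\equiv p$ and $q_n\equiv q=1-p$ with $p<q$, the sum in~\eqref{erg} is a geometric series with ratio $p/q<1$, so the chain is ergodic. A quick numerical sketch of the stationary weights (an illustration only; the rate choice $p=0.3$, $q=0.7$ is an assumed example satisfying $p_n\leq q_n$, $p_n+q_n=1$ and $\inf_n p_n>0$):

```python
def stationary_weights(p, q, nmax):
    """Unnormalized stationary weights w_0 = 1, w_n = prod_{i=1}^n p(i-1)/q(i)
    of a birth-and-death chain; the ergodicity condition asks sum_n w_n < infinity."""
    w = [1.0]
    for n in range(1, nmax + 1):
        w.append(w[-1] * p(n - 1) / q(n))
    return w

# Constant rates p_n = 0.3, q_n = 0.7: the weights are (3/7)^n, a geometric series.
w = stationary_weights(lambda n: 0.3, lambda n: 0.7, 200)
total = sum(w)                      # truncated series, close to 1/(1 - 3/7) = 7/4
pi = [x / total for x in w]         # normalized equilibrium distribution on {0,...,200}
```

In this constant-rate case the equilibrium distribution is geometric and, in particular, has all moments, comfortably within the second-moment-type condition mentioned in the introduction.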
We now make an explicit construction of our process, namely, the random walk in a birth-and-death (BD) environment.
Let $\omega=\left(\omega_{\mathbf{x}} \right)_{\mathbf{x}\in \mathbb{Z}^d}$ be an independent family of BDP's as prescribed in the
paragraph of~\eqref{erg} above, each started from its respective initial distribution $\mu_{\mathbf{x},0}$, independently of each other;
we will denote by $\mu_{\mathbf{x},t}$ the distribution of $\omega_{\mathbf{x}}(t)$,
$t\in{\mathbb R}_+$, $\mathbf{x}\in\mathbb{Z}^d$;
$\omega$ plays the role of random dynamical environment of our random walk, which we may view as a
stochastic process $\left(\omega(t)\right)_{t\in{\mathbb R}_+}$ on $\Lambda:={\mathbb N}^{\mathbb{Z}^d}$ with initial distribution
$\hat{\mu}_0:=\bigotimes\limits_{\mathbf{x}\in\mathbb{Z}^d}\mu_{\mathbf{x},0}$ and trajectories living on $A:={\cal D}\left({\mathbb R}_+,{\mathbb N}\right)^{\mathbb{Z}^d}$.
Let ${\mathrm P}_{\hat{\mu}_{{0}}}$ denote the law of $\omega$.
Let now $\pi$ be a probability on $\mathbb{Z}^d\setminus\left\lbrace \mathbf 0\right\rbrace$,
and let $\xi:=\left\lbrace \xi_n\right\rbrace_{n\in{\mathbb N}_*}$ be an iid sequence of random vectors taking values in $\mathbb{Z}^d\setminus\left\lbrace \mathbf 0\right\rbrace$, each distributed as $\pi$; $\xi$ is assumed independent of $\omega$.
Next, let ${\mathcal M}$ be a Poisson point process of rate $1$ in ${\mathbb R}^d\times{\mathbb R}_+$,
independent of $\omega$ and $\xi$. For each $\mathbf{x}=(x_1,\ldots,x_d)\in\mathbb{Z}^d$, let
\begin{equation}
{\mathcal M}_{\mathbf{x}} = {\mathcal M}\cap \left(C_{\mathbf{x}}\times {\mathbb R}_+\right),
\label{2.2}
\end{equation}
where $C_{\mathbf{x}}=\bigtimes\limits_{{i=1}}^{{d}}\left[c_{x_i},%
c_{x_i}+1\right)$, with $c_{x_i}:=x_i-1/2$, $1\leq i\leq d$. It is quite clear that
\begin{equation}
{\mathcal M}=\bigcup\limits_{\mathbf{x}\in\mathbb{Z}^d}{\mathcal M}_{\mathbf{x}}
\label{2.3}
\end{equation}
and that, by well known properties of Poisson point processes, $\left\{{\mathcal M}_{\mathbf{x}}:\mathbf{x}\in\mathbb{Z}^d\right\}$ is an independent collection, with ${\mathcal M}_{\mathbf{x}}$
a Poisson point process of rate $1$ in $C_{\mathbf{x}}\times [0,+\infty)$.
Given $\omega\in A$ and
$\varphi:{\mathbb N}\to(0,1]$,
set
\begin{equation}
{\mathcal N}_{\mathbf{x}}=\left\{(y_1,\ldots,y_d,r)\in{\mathcal M}_{\mathbf{x}}:y_d\in\left[\,c_{x_d},c_{x_d}+\varphi(\omega_{\mathbf{x}}(r))\,\right)\right\}, \quad \mathbf{x}\in\mathbb{Z}^d.
\label{2.4}
\end{equation}
\noindent Note that the projection of ${\mathcal N}_{\mathbf{x}}$ on $\{\mathbf{x}\}\times {\mathbb R}_+$ is an
inhomogeneous Poisson point process on $\{\mathbf{x}\}\times {\mathbb R}_+$ with intensity function given by
\begin{equation}
\lambda_{\mathbf{x}}(r)=\varphi(\omega_{\mathbf{x}}(r)),\quad \mathbf{x}\in\mathbb{Z}^d,~r\geq 0.
\label{2.5}
\end{equation}
Let us fix $X(0)=\mathbf{x}_{{0}}$, $\mathbf{x}_{{0}}\in \mathbb{Z}^d$, and define
$X(t)$, $t\in {\mathbb R}_+$, as follows.
Let $\tau_{{0}}=0$, and set
\begin{equation}
\tau_{1}=\inf\left\{r>0:{\mathcal N}_{\mathbf{x}_0}\cap\left(C_{\mathbf{x}_0}\times \left(0,r\right]\right)\neq \emptyset\right\},
\label{2.6}
\end{equation}
\noindent where by convention $\inf\emptyset=\infty$. For $t\in (0,\tau_{{1}})$, $X(t)=X(0)$, and, if $\tau_{1}<\infty$, then
\begin{equation}
X\left(\tau_{{1}}\right)=X(0)+\xi_1.
\label{2.7}
\end{equation}
\noindent For $n\geq 2$, we inductively define
\begin{equation}
\tau_{n}=\inf\left\{r>\tau_{n-1}:{\mathcal N}_{X\left(\tau_{n-1}\right)}
\cap\left(C_{X\left(\tau_{n-1}\right)}\times \left(\tau_{n-1},
r\right]\right)\neq \emptyset\right\}.
\label{2.8}
\end{equation}
\noindent For $t\in\left(\tau_{n-1},\tau_{n}\right)$, we set $X(t)=
X\left(\tau_{n-1}\right)$, and, if $\tau_{n}<\infty$, then
\begin{equation}
X\left(\tau_{{n}}\right)=X\left(\tau_{n-1}\right)+\xi_n.
\label{2.9}
\end{equation}
In words, $\left(\tau_n\right)_{n\in {\mathbb N}}$ are the jump times of the process $X:=\left(X(t)\right)_{t\in{\mathbb R}_+}$, which in turn, given $\omega\in A$, is a continuous time random walk on $\mathbb{Z}^d$ starting from $\mathbf{x}_{{0}}$ with jump rate at $\mathbf{x}$ at time $t$ given by $\varphi(\omega_{\mathbf{x}}(t))$,
$\mathbf{x}\in\mathbb{Z}^d$. Moreover, when at $\mathbf{x}$, the next site to be visited is given by $\mathbf{x}+\mathbf{y}$, with $\mathbf{y}$ generated from $\pi$,
$\mathbf{x},\mathbf{y}\in\mathbb{Z}^d$.
We adopt ${\cal D}\left({\mathbb R}_+,\mathbb{Z}^d\right)$ as sample space for $X$.
Let us denote by $P_{\mathbf{x}_0}^{^{\omega}}$ the conditional law of $X$ given $\omega\in A$.
We remark that, since ${\mathcal N}_{\mathbf{x}}\subset {\mathcal M}_{\mathbf{x}}$ for all $\mathbf{x}\in\mathbb{Z}^d$, it follows from the memorylessness of Poisson processes that,
for each $n\in {\mathbb N}_*$, given that $\tau_{n-1}<\infty$, $P_{\mathbf{x}_0}^{^{\omega}}$-almost surely ($P_{\mathbf{x}_0}^{^{\omega}}$-a.s.), $\tau_n-\tau_{n-1}\geq Z_n$, with
$Z_n$ a standard exponential random variable. Thus $\tau_n\to\infty$ $P_{\mathbf{x}_0}^{^{\omega}}$-a.s.~as $n\to\infty$, i.e., $X$ is non-explosive, and, given $\omega\in A$, the inductive construction of $X$ proposed above is well defined for all $t\in{\mathbb R}_+$. We also notice that, given the ergodicity assumption we made on $\omega$, $X$ makes $P_{\mathbf{x}_0}^{^{\omega}}$-a.s.~infinitely many jumps along its history for almost every realization of $\omega$.
Let us denote by $\mathsf x=\left(\mathsf{x}_n\right)_{n\in{\mathbb N}}$
the embedded (discrete time) chain of $X$. We will henceforth at times make reference to a {\em particle} which moves in continuous time on $\mathbb{Z}^d$,
starting from $\mathbf{x}_0$, and whose trajectory is given by $X$; in this context, $X(t)$ is of course the position of the particle at time $t\geq0$.
For simplicity, we assume $\mathsf x$ {\em irreducible}.
\begin{observacao}
At this point it is worth pointing out that, given $\omega$, $X$ is a time inhomogeneous Markov jump process; we also have that the joint process
$\left(X(t),\omega(t)\right)_{t\in{\mathbb R}_+}$ is Markovian.
\end{observacao}
We may then realize our joint process in the triple
$(\Omega,{\cal F},{\mathbf P}_{\hat{\mu}_{{0}},\mathbf{x}_0})$, with $\hat{\mu}_{{0}},\mathbf{x}_0$ as above, where
$\Omega={\cal D}\left({\mathbb R}_+,{\mathbb N}\right)^{\mathbb{Z}^d}\times{\cal D}\left({\mathbb R}_+,\mathbb{Z}^d\right)$,
${\cal F}$ is the appropriate product $\sigma$-algebra on $\Omega$, and
\begin{equation}
{\mathbf P}_{\hat{\mu}_{{0}},\mathbf{x}_0}\left(M\times N\right)=\int_{M}
\dif{\mathrm P}_{\hat{\mu}_{{0}}}(\omega)P_{\mathbf{x}_0}^{\omega}(N),
\label{2.13}
\end{equation}
where $M$ and $N$ are measurable subsets of $A$ and ${\cal D}\left({\mathbb R}_+,\mathbb{Z}^d
\right)$, respectively. We will call
$P_{\mathbf{x}_0}^{^{\omega}}$ the \textit{quenched} law of $X$ (given $\omega$), and ${\mathbf P}_{\hat{\mu}_{{0}},\mathbf{x}_0}$ the \textit{annealed} law of $X$.
We will say that a claim about $X$ holds ${\mathbf P}_{\hat{\mu}_{{0}},\mathbf{x}_0}$-a.s.~if for ${\mathrm P}_{\hat{\mu}_{{0}}}$-almost every~$\omega$
(for ${\mathrm P}_{\hat{\mu}_{{0}}}$-a.e.~$\omega$), the claim holds $P^{\omega}_{\mathbf{x}_0}$-a.s.
We will also denote by $\mathrm{E}_{\hat{\mu}_{{0}}}$, $E_{\mathbf{x}_0}^{^{\omega}}$ and $\mathbf{E}_{\hat{\mu}_{{0}},\mathbf{x}_0}$
the expectations with respect to ${\mathrm P}_{\hat{\mu}_{{0}}}$, $P_{\mathbf{x}_0}^{^{\omega}}$ and ${\mathbf P}_{\hat{\mu}_{{0}},\mathbf{x}_0}$, respectively.
We reserve the notation ${\mathbb P}_\mu$ (resp., ${\mathbb P}_n$) and ${\mathbb E}_\mu$ (resp., ${\mathbb E}_n$) for the probability and its expectation underlying
a single birth-and-death process (as specified above) starting from an initial distribution $\mu$ on ${\mathbb N}$ (resp., starting from $n\in{\mathbb N}$).
Furthermore, in what follows, without loss of generality, we will adopt $\mathbf{x}_0\equiv\mathbf 0$, and omit that subscript, i.e.,
\begin{equation}
P^{^{\omega}}:=P_{\mathbf 0}^{^{\omega}} \quad \text{and} \quad
{\mathbf P}_{\hat{\mu}_{{0}}}:={\mathbf P}_{\hat{\mu}_{{0}},\mathbf 0}.
\end{equation}
\noindent We will also omit the subscript $\hat{\mu}_{{0}}$ when it is
irrelevant. From now on, we will denote by
\begin{equation}
{\mathbf P}_{\mathbf w}, \quad \mathbf w\in \Lambda,
\label{eq:2.15}
\end{equation}
\noindent the law of the joint process starting from $\omega(0)=\mathbf w$ and $\mathbf{x}_0\equiv \mathbf 0$.
Let now $\Delta_n:=\tau_{\ms{n}}-\tau_{{n-1}}$, $n\in {\mathbb N}_*$. We observe that
\begin{equation}
{\mathbf P}_{\hat{\mu}_{{0}}} \left(\tau_{\ms{1}}>t\right)=
{\mathrm E}_{\hat{\mu}_{{0}}}\left[\exp{\left(-\int_0^t
\varphi(\omega_{\mathbf 0}(s))\,ds\right)}\right], \quad t\in{\mathbb R}_+,
\label{2.15}
\end{equation}
\noindent and, for $n\in{\mathbb N}$,
\begin{equation}
{\mathbf P}_{\hat{\mu}_{{0}}} \left(\Delta_{n+1}>t\right)=
{\mathrm E}_{\hat{\mu}_{{0}}}\left[\exp{\left(-\int_{\tau_{\ms{n}}}^{\tau_{\ms{n}}+t}
\varphi(\omega_{\mathsf{x}_{\ms{n}}}(s))\,ds\right)}\right], \quad t\in{\mathbb R}_+,
\label{2.16}
\end{equation}
\noindent recalling that $\left(\mathsf{x}_{\ms{n}}\right)_{n\in{\mathbb N}}$ denotes the jump chain of $\left(X(t)\right)_{t\in{\mathbb R}_+}$. For $n\in{\mathbb N}$, let us set
\begin{equation}
I_n(t):=\int_{\tau_{\ms{n}}}^{\tau_{\ms{n}}+t} \varphi(\omega_{\mathsf{x}_{\ms{n}}}(s))\,ds,\quad t\in {\mathbb R}_+,
\label{2.18}
\end{equation}
\noindent
Each $I_n:{\mathbb R}_+\to {\mathbb R}_+$, $n\in{\mathbb N}$, is well defined and invertible ${\mathbf P}$-a.s.~under our conditions on the parameters of $\omega$ (which ensure its recurrence).
We may thus write
\begin{equation}
{\mathbf P}_{\hat{\mu}_{{0}}} \left(\tau_1>t\right)=
{\mathrm E}_{\hat{\mu}_{{0}}}\left[e^{-I_{0}(t)}\right], \quad t\in{\mathbb R}_+,
\label{2.19}
\end{equation}
\noindent and
\begin{equation}
{\mathbf P}_{\hat{\mu}_{{0}}} \left(\Delta_{n+1}>t\right)=
{\mathrm E}_{\hat{\mu}_{{0}}}\left[e^{-I_n\left(t\right)}\right],
\quad t\in{\mathbb R}_+.
\label{2.20}
\end{equation}
\subsection{Alternative construction}
\label{sec:2.2}
\noindent We finish this section with an alternative construction of $X$, based on the following simple remark, which will be used further on.
\noindent Let $\omega$ and $\xi$, as above, be fixed, and set $\mathsf{T}_0=0$ and, for $n\in{\mathbb N}_*$, $\mathsf{T}_n=\sum\limits_{{k=0}}^{{n-1}} I_{k}\left(\Delta_{k+1}\right)$.
\begin{lema}
Under the conditions on the parameters of $\omega$ assumed in the paragraph of~\eqref{erg}, we have that
$\left\lbrace \mathsf{T}_n:n\in{\mathbb N}_*\right\rbrace$ is a rate 1 Poisson point process on ${\mathbb R}_+$, independent
of $\omega$ and $\xi$.
\label{lema:2.2}
\end{lema}
\begin{proof}
It is enough to check that, given $\omega$ and $\xi$, the increments $I_n(\Delta_{n+1})$, $n\in{\mathbb N}$, are independent standard exponential
random variables; independence follows from the lack of memory of the Poisson marks, and the conclusion follows readily from the fact that
%
\begin{equation*}
{\mathbf P}\big(I_n(\Delta_{n+1})>t\big)=
{\mathbf P}\big(\Delta_{n+1}>I_n^{-1}(t)\big)=
{\mathrm E}\left[e^{-I_n\left(I_n^{-1}(t)\right)}\right]
=e^{-t},~ t\in{\mathbb R}_+.
\label{2.26}
\end{equation*}
\end{proof}
We thus have an alternative construction of $X$, as follows.
Let $\omega$, $\xi$ be as described at the beginning of the section. Let also $\mathsf V=\left(\mathsf V_n\right)_{n\in{\mathbb N}}$
be an independent family of standard exponential random variables. Then, given $\omega$, set $X(0)=\mathsf{x}_{\ms{0}}\equiv \mathbf 0$ and $\tau_{{0}}=0$,
and define
\begin{equation}
\tau_{1}=I^{-1}_0(\mathsf V_1).
\label{2.30}
\end{equation}
\noindent For all $t\in (0,\tau_{{1}})$, $X(t)=X(0)$ and
\begin{equation}
X(\tau_{{1}})=X(0)+\xi_1=\mathsf{x}_{\ms{1}}.
\label{2.31}
\end{equation}
\noindent For $n\geq 2$, set, inductively,
\begin{equation}
\tau_{n}=\tau_{n-1}+I_{n-1}^{-1}(\mathsf V_n),
\label{2.32}
\end{equation}
\noindent and for $t\in\left(\tau_{n{-}{1}},\tau_{n}\right)$, $X(t)=
X({\tau_{n{-}{1}}})$ and
\begin{equation}
X(\tau_{{n}})=X(\tau_{n-1})+\xi_n=\mathsf{x}_{\ms{n}}.
\label{2.33}
\end{equation}
\noindent
We have thus completed the alternative construction of $X$. Notice that we have made use of $\omega$ and $\xi$, as in the original construction,
but replaced ${\mathcal M}$ of the latter construction by $\mathsf V$ as the remaining ingredient.
The alternative construction comes in handy in a coupling argument we develop in order to prove a law of large numbers for the jump times of $X$.
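As an aside (and not part of the formal development), the alternative construction lends itself to a direct numerical sketch: $I_n$ is piecewise linear and increasing for a piecewise-constant environment trajectory, so its inverse can be computed segment by segment. The helper `I_inverse` and the chosen rates below are ours, for illustration only; in the special case of an environment frozen at level $k$, the sketch recovers the fact that the holding time $I^{-1}(\mathsf V)$ is exponential with rate $\varphi(k)$.

```python
import random

def I_inverse(phi_vals, seg_lens, v):
    """Invert t -> I(t) = integral of phi(omega(s)) ds over [0, t] for a
    piecewise-constant environment: phi_vals[i] is the value of phi on a
    segment of length seg_lens[i]; the last value is extended indefinitely."""
    t, acc = 0.0, 0.0
    for rate, length in zip(phi_vals, seg_lens):
        if acc + rate * length >= v:
            return t + (v - acc) / rate
        t += length
        acc += rate * length
    # last rate extended to infinity
    return t + (v - acc) / phi_vals[-1]

# Environment frozen at level k: I(t) = phi_k * t, so the holding time
# V / phi_k is exponential with rate phi_k (mean 1/phi_k).
phi_k = 0.25
random.seed(0)
samples = [I_inverse([phi_k], [float("inf")], random.expovariate(1.0))
           for _ in range(200_000)]
mean = sum(samples) / len(samples)  # should be close to 1/phi_k = 4
```

A two-segment call such as `I_inverse([1.0, 0.5], [2.0, 10.0], 3.0)` spends the first two time units accumulating at rate 1 and the remainder at rate 1/2.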
\section{Limit theorems under ${\mathbf P}_{\mathbf 0}$}
\label{conv}
\setcounter{equation}{0}
\noindent
In this section we state and prove two of our main results, namely a Law of Large Numbers and a Central Limit Theorem for $X$ under ${\mathbf P}_{\mathbf 0}$\footnote{We recall that ${\mathbf P}_{\mathbf w}$ represents the law of $X$ starting from $\omega(0)=\mathbf w$ and $\mathbf{x}_0=\mathbf 0$.} and under the following extra conditions on $\varphi$:
\begin{equation}\label{vf}
\varphi \mbox { is nonincreasing},\, \varphi(0)=1, \mbox{ and } \lim_{n\to\infty}\varphi(n)=0.
\end{equation}
The statements are provided shortly, and the proofs are presented in the second and third subsections below, respectively. The main ingredient for these results is a Law of Large Numbers for the jump times of $X$, which in turn uses a stochastic domination result for the distribution of the environment seen by the particle at jump times; both results, along with other preliminary material, are developed in the first subsection below.
In order to state the main results of this section, we need the following preliminaries and further conditions on $\mathbf p,\mathbf q$. Let $\nu$ denote the invariant distribution of $\omega_\mathbf 0$; as is well known, $\nu_n=\mbox{const}\prod_{i=1}^n\frac{p_{i-1}}{q_i}$, $n\in{\mathbb N}$, where the latter product is by convention equal to 1 for $n=0$. Next set $\rho_n=\frac{p_n}{q_n}$, $R_n=\prod_{i=1}^n\rho_i$ and $S_n=\sum_{i\geq n} R_i$, $n\geq1$,
and let $R_0=1$. These quantities are well defined and, in particular, it follows from~\eqref{erg} that the latter sum is finite for all $n\geq1$.
We will require the following extra condition on $\mathbf p,\mathbf q$, in addition to those imposed in the paragraph of~\eqref{erg} above: we will assume
\begin{equation}\label{extracon}
\sum_{n\geq1}\frac{S_n^2}{R_n}<\infty.
\end{equation}
We note that it follows from our previous assumptions on $\mathbf p,\mathbf q$ that~\eqref{extracon} is stronger than~\eqref{erg}, since $S_n\geq R_n$ for all $n$.
The relevance of this condition is that it implies
the two conditions to be introduced next.
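As a simple illustration (with homogeneous rates $p_n\equiv p$, $q_n\equiv q$, a choice of ours that is not imposed in the text), condition~\eqref{extracon} can be checked numerically in the geometric case, where $\rho_n\equiv\rho=p/q$, $R_n=\rho^n$ and $S_n=\rho^n/(1-\rho)$, so that the series reduces to $\sum_{n\geq1}\rho^n/(1-\rho)^2=\rho/(1-\rho)^3$:

```python
# Illustrative geometric case: p_n = p, q_n = q with p < q, so rho < 1,
# R_n = rho**n and S_n = rho**n / (1 - rho).
p, q = 0.3, 0.7
rho = p / q
N = 200  # truncation level for the numerical check

R = [rho**n for n in range(1, N + 1)]
S = [rho**n / (1 - rho) for n in range(1, N + 1)]

# Partial sum of S_n**2 / R_n versus its closed form rho / (1 - rho)**3
partial = sum(s * s / r for s, r in zip(S, R))
closed_form = rho / (1 - rho) ** 3
```

The geometric tails converge fast, so the truncated sum already agrees with the closed form to high precision; the check `S_n >= R_n` reflects the remark that \eqref{extracon} is stronger than \eqref{erg}.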
Let $\mathsf w$ denote the embedded chain of $\omega_\mathbf 0$, and, for $n\geq0$, let $T_n$ denote the first passage time of $\mathsf w$ by $n$, namely, $T_n=\inf\{i\geq0:\,\mathsf w_i=n\}$, with the usual convention that $\inf\emptyset=\infty$.
Condition~\eqref{extracon} is equivalent, as will be argued in Appendix~\ref{app}, to either
\begin{equation}\label{extracor}
{\mathbb E}_\nu(T_0)<\infty \mbox{ or } {\mathbb E}_1(T_0^2)<\infty. \footnotemark
\end{equation}
\footnotetext{We note that $\mathsf w$ has the same invariant distribution as $\omega_\mathbf 0$, namely $\nu$.}
It may readily be shown to be stronger than requiring that $\nu$ have a finite first moment, while a finite second moment of $\nu$ implies it,
under our conditions on $\mathbf p,\mathbf q$\footnote{It looks as though a finite second moment of $\nu$ may be a necessary condition for it as well.}.
Conditions~\eqref{extracor} will be required in our arguments for the following main results of this section ---
they are what we meant by ``strongly ergodic'' in the abstract.
See Remark~\ref{relax} at the end of this section.
\begin{teorema}[Law of Large Numbers for $X$]
Assume the above conditions and that ${\mathbf E}(\|\xi_1\|)<\infty$. Then there exists $\mu\in(0,\infty)$ such that
\begin{equation}
\frac{X(t)}{t}\to \frac{{\mathbf E}(\xi_1)}{\mu} ~~{\mathbf P}_{\mathbf 0}\mbox{-a.s.}~\text{as}~t\to\infty.
\label{eq:63}
\end{equation}
\label{teo:3.1}
\end{teorema}
Here and below, $\|\cdot\|$ denotes the sup norm on $\mathbb{Z}^d$.
\begin{teorema} [Central Limit Theorem for $X$]
Assume the above conditions and that ${\mathbf E}(\|\xi_1\|^2)<\infty$ and ${\mathbf E}(\xi_1)=\mathbf 0$. Then, for ${\mathrm P}_{\mathbf 0}$-a.e. $\omega$, we have that
\begin{equation}
\frac{X(t)}{\sqrt{ t/\mu}}\Rightarrow N_d(\mathbf 0,\Sigma) \,\mbox{ under }\, P^{^{\omega}},
\label{eq:75}
\end{equation}
\label{teo:3.2}
%
\noindent where $\Sigma$ is the covariance matrix of $\xi_1$, and $\mu$ is as in Theorem~\ref{teo:3.1}.
\end{teorema}
In the next section we will state a CLT under more general initial environment conditions (but restricting to homogeneous cases of the environmental BD dynamics).
As for the mean zero assumption in Theorem~\ref{teo:3.2}, going beyond it would require substantially more work than we present here, under our approach; see Remark~\ref{ext_clt} at the end of this section.
\begin{subsection}{Law of large numbers for the jump times of $X$}
\label{sec:3.1}
\noindent In this subsection, we prove a Law of Large Numbers for $(\tau_{\ms{n}})_{n\in{\mathbb N}}$ under ${\mathbf P}_\mathbf 0$; this is the key ingredient in our arguments for the main results of this section; see Proposition~\ref{prop:3.1} below.
Our strategy for proving the latter result is to establish suitable stochastic domination of the environment by a modified environment, leading to a corresponding domination for jump times; we develop this program next.
\medskip
We start by recalling some well known definitions. Given two probabilities on ${\mathbb N}$, $\upsilon_1$ and $\upsilon_2$,
we indicate by $\upsilon_1\preceq \upsilon_2$ that $\upsilon_1$ is stochastically dominated by $\upsilon_2$, i.e.,
%
\begin{equation}
\upsilon_1\left({\mathbb N}~\setminus~ \mathsf{A}_k \right)\leq
\upsilon_2\left({\mathbb N}~\setminus~ \mathsf{A}_k\right),
\quad \mathsf{A}_k:=\left\lbrace 0,\ldots, k\right\rbrace, ~ \forall~k\in{\mathbb N}.
\end{equation}
In this situation we equivalently write $X_1\preceq \upsilon_2$ when $X_1$ is a random variable with distribution $\upsilon_1$.
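For concreteness, for finitely supported distributions the definition above amounts to a comparison of upper tails, which can be checked directly; the following sketch (with made-up example distributions, for illustration only) implements it.

```python
def stoch_leq(v1, v2, tol=1e-12):
    """Return True iff v1 is stochastically dominated by v2, for finitely
    supported probabilities on N encoded as lists of weights: every upper
    tail of v1 must be at most the corresponding tail of v2."""
    n = max(len(v1), len(v2))
    a = list(v1) + [0.0] * (n - len(v1))
    b = list(v2) + [0.0] * (n - len(v2))
    tail1 = tail2 = 0.0
    for k in range(n - 1, -1, -1):  # accumulate tails from the top down
        tail1 += a[k]
        tail2 += b[k]
        if tail1 > tail2 + tol:
            return False
    return True
```

For instance, `stoch_leq([0.5, 0.3, 0.2], [0.2, 0.3, 0.5])` holds, while the reverse comparison fails.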
%
Now let $\mathsf Q$ denote the generator of $\omega_\mathbf 0$ (which is a {\em Q-matrix}), and consider the following matrix
%
\begin{equation}
\mathsf Q^\psi=D\mathsf Q,~\textrm{with}~D=\mathrm{diag}\{\psi(n)\}_{n\in{\mathbb N}},
\label{eq:3.3}
\end{equation}
where $\psi:{\mathbb N}\to[1,\infty)$ is such that $\psi(n)=1/\varphi(n)$ for all $n$, with $\varphi$ as defined in the paragraph of~\eqref{2.4} above.
%
\noindent
Notice that $\mathsf Q^\psi$ is also a Q-matrix, and that it generates a birth-and-death process on ${\mathbb N}$, say $\check\omega_\mathbf 0$, with transition rates given by
\begin{equation}
\mathsf Q^\psi(n,n+1)=\psi_np_n=:p^\psi_n,\,n\in{\mathbb N}\, ; \quad \mathsf Q^\psi(n,n-1)=\psi_nq_n=:q^\psi_n,\, n\in{\mathbb N}_*; \footnotemark
\label{3.16}
\end{equation}
\footnotetext{We occasionally use $g_n$ to denote $g(n)$, for $g:{\mathbb N}\to{\mathbb R}$.}
this is a positive recurrent process, with invariant distribution $\nu^\psi$ on ${\mathbb N}$ such that
\begin{equation*}
\nu^\psi_ n=\mbox{const }\prod_{i=1}^n\frac{p^\psi_{i-1}}{q^\psi_i},\,n\in{\mathbb N},
\end{equation*}
with a similar convention for the product as for $\nu$.
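As an illustration (with hypothetical homogeneous rates and the choice $\varphi(n)=1/(n+1)$, i.e.~$\psi(n)=n+1$, which are ours and not imposed in the text), one can compute truncated versions of $\nu$ and $\nu^\psi$ and compare their upper tails numerically:

```python
# Truncated computation of nu and nu^psi under illustrative choices
# p_n = 0.3, q_n = 0.7 and psi(n) = n + 1 (so phi(n) = 1/(n+1)).
N = 60
p = [0.3] * (N + 1)
q = [0.7] * (N + 1)
psi = [n + 1 for n in range(N + 1)]

def normalize(w):
    z = sum(w)
    return [x / z for x in w]

prod = [1.0]
for n in range(1, N + 1):
    prod.append(prod[-1] * p[n - 1] / q[n])
nu = normalize(prod)

prod_psi = [1.0]
for n in range(1, N + 1):
    prod_psi.append(prod_psi[-1] * (psi[n - 1] * p[n - 1]) / (psi[n] * q[n]))
nu_psi = normalize(prod_psi)

# Tail comparison: nu^psi stochastically dominated by nu
tails_ok = all(sum(nu_psi[k:]) <= sum(nu[k:]) + 1e-12 for k in range(N + 1))
```

Here the products defining $\nu^\psi$ differ from those defining $\nu$ by the factor $\psi_0/\psi_n$, which tilts mass toward small states, in line with the domination noted next.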
One may readily check that $\nu^\psi\preceq\nu$, since $\psi$ is nondecreasing.
The relevance of $\check\omega_\mathbf 0$ in the present study stems from the following straightforward result. Recall~\eqref{2.18}.
\begin{lema}
Suppose $\omega_\mathbf 0(0)\sim\check\omega_\mathbf 0(0)$. Then
\begin{equation}\label{coup1}
(\omega_\mathbf 0(t),\,t\in{\mathbb R}_+)\sim(\check\omega_\mathbf 0(I_0(t)),\,t\in{\mathbb R}_+).
\end{equation}
%
\label{coup}
\end{lema}
We have the following immediate consequence from this and Lemma~\ref{lema:2.2}.
\begin{corolario}
Let $\mathsf V_1$ be a standard exponential random variable, independent of $\check\omega_\mathbf 0$. Then
\begin{equation}
\omega_\mathbf 0(\tau_1)\sim\check\omega_\mathbf 0(\mathsf V_1).
\label{coup3}
\end{equation}
\label{coup2}
\end{corolario}
Figure~\ref{fig:3.1} illustrates a coupling behind~(\ref{coup1},\ref{coup3}).
\begin{figure}[!htb]
\centering
\definecolor{qqtttt}{rgb}{0,0.2,0.2}
\definecolor{xdxdff}{rgb}{0.49,0.49,1}
\definecolor{qqwwcc}{rgb}{0,0.4,0.8}
\definecolor{ffttww}{rgb}{1,0.2,0.4}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.8cm,y=0.7cm]
\clip(-2.10,-2.27) rectangle (15.25,6.93);
\draw (0.76,0)-- (3,0);
\draw (3,0)-- (7,0);
\draw (7,0)-- (7.9,0);
\draw (0.76,0.63)-- (3,1.88);
\draw (3,1.88)-- (7,2.88);
\draw (7,2.88)-- (7.9,3.33);
\draw [dash pattern=on 2pt off 2pt] (0,0)-- (0.76,0);
\draw [dash pattern=on 2pt off 2pt] (0.76,0)-- (0.76,0.63);
\draw [dash pattern=on 2pt off 2pt] (0.76,0.63)-- (0,0.63);
\draw [dash pattern=on 2pt off 2pt] (0,0.63)-- (0,0);
\draw [dash pattern=on 2pt off 2pt] (0,0)-- (3,0);
\draw [dash pattern=on 2pt off 2pt] (3,0)-- (3,1.88);
\draw [dash pattern=on 2pt off 2pt] (3,1.88)-- (0,1.88);
\draw [dash pattern=on 2pt off 2pt] (0,1.88)-- (0,0);
\draw [dash pattern=on 2pt off 2pt] (0,0)-- (7,0);
\draw [dash pattern=on 2pt off 2pt] (7,0)-- (7,2.88);
\draw [dash pattern=on 2pt off 2pt] (7,2.88)-- (0,2.88);
\draw [dash pattern=on 2pt off 2pt] (0,2.88)-- (0,0);
\draw [dash pattern=on 2pt off 2pt] (0,0)-- (7.9,0);
\draw [dash pattern=on 2pt off 2pt] (7.9,0)-- (7.9,3.33);
\draw [dash pattern=on 2pt off 2pt] (7.9,3.33)-- (0,3.33);
\draw [dash pattern=on 2pt off 2pt] (0,3.33)-- (0,0);
\draw [dotted,color=qqtttt] (13,0)-- (0,0);
\draw [dotted,color=qqtttt] (0,0)-- (0,5);
\draw [dotted,color=qqtttt] (0,5)-- (13,5);
\draw [dotted,color=qqtttt] (13,5)-- (13,0);
\draw (-2.1,5.35) node[anchor=north west] {$\mathsmaller{\mathsf V_1:=I_0\left(\tau_1\right)}$};
\draw (12.65,0.0) node[anchor=north west] {$\mathsmaller{\tau_1}$};
\draw (13.85,0.0) node[anchor=north west] {$\mathsmaller{t}$};
\draw (0.07,0.0) node[anchor=north west] {$\mathsmaller{{\mathcal E}_1}$};
\draw (1.55,0.0) node[anchor=north west] {$\mathsmaller{{\mathcal E}_2}$};
\draw (4.7,0.0) node[anchor=north west] {$\mathsmaller{{\mathcal E}_3}$};
\draw (7.15,0.0) node[anchor=north west] {$\mathsmaller{{\mathcal E}_4}$};
\draw (-1.2,0.67) node[anchor=north west] {$\mathsmaller{\varphi_k{\mathcal E}_1}$};
\draw (-1.47,1.60) node[anchor=north west] {$\mathsmaller{\varphi_{k+1}{\mathcal E}_2}$};
\draw (-1.47,2.69) node[anchor=north west] {$\mathsmaller{\varphi_{k+2}{\mathcal E}_3}$};
\draw (-1.47,3.45) node[anchor=north west] {$\mathsmaller{\varphi_{k+1}{\mathcal E}_4}$};
\draw [->] (0,0) -- (14.2,0);
\draw (-0.55,6.7) node[anchor=north west] {$\mathsmaller{I_0(t)}$};
\draw (7.9,3.33)-- (8.3,3.73);
\draw (0,0)-- (0.76,0.63);
\draw [->] (0,0) -- (-0.01,6.05);
\begin{scriptsize}
\fill [color=black] (0,0) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\fill [color=ffttww] (0.76,0) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\fill [color=ffttww] (3,0) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\fill [color=ffttww] (7,0) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\fill [color=ffttww] (7.9,0) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\fill [color=qqwwcc,shift={(0,0.63)},rotate=90] (0,0) ++(0 pt,2.25pt) -- ++(1.95pt,-3.375pt)--++(-3.9pt,0 pt) -- ++(1.95pt,3.375pt);
\fill [color=qqwwcc,shift={(0,1.88)},rotate=90] (0,0) ++(0 pt,2.25pt) -- ++(1.95pt,-3.375pt)--++(-3.9pt,0 pt) -- ++(1.95pt,3.375pt);
\fill [color=qqwwcc,shift={(0,2.88)},rotate=90] (0,0) ++(0 pt,2.25pt) -- ++(1.95pt,-3.375pt)--++(-3.9pt,0 pt) -- ++(1.95pt,3.375pt);
\fill [color=qqwwcc,shift={(0,3.33)},rotate=90] (0,0) ++(0 pt,2.25pt) -- ++(1.95pt,-3.375pt)--++(-3.9pt,0 pt) -- ++(1.95pt,3.375pt);
\fill [color=qqwwcc,shift={(0,5)},rotate=90] (0,0) ++(0 pt,2.25pt) -- ++(1.95pt,-3.375pt)--++(-3.9pt,0 pt) -- ++(1.95pt,3.375pt);
\fill [color=xdxdff] (13,0) circle (1.5pt);
\fill [color=black,shift={(8.3,-0.2)},rotate=270] (0,0) ++(0 pt,1.5pt) -- ++(1.3pt,-2.25pt)--++(-2.6pt,0 pt) -- ++(1.3pt,2.25pt);
\fill [color=black,shift={(8.5,-0.2)},rotate=270] (0,0) ++(0 pt,1.5pt) -- ++(1.3pt,-2.25pt)--++(-2.6pt,0 pt) -- ++(1.3pt,2.25pt);
\fill [color=black,shift={(8.7,-0.2)},rotate=270] (0,0) ++(0 pt,1.5pt) -- ++(1.3pt,-2.25pt)--++(-2.6pt,0 pt) -- ++(1.3pt,2.25pt);
\end{scriptsize}
\end{tikzpicture}
\caption{${\mathcal E}_1,{\mathcal E}_2,\ldots$ are iid standard exponentials; the $x$-axis (resp., $y$-axis) indicates constancy intervals of $\omega_\mathbf 0$ (resp., $\check\omega_\mathbf 0$) in a realization with $\omega_{\mathbf 0}(0)=\check\omega_{\mathbf 0}(0)= k\in{\mathbb N}$.}
\label{fig:3.1}
\end{figure}
The following result is most certainly well known, and may be argued by a straightforward coupling argument.
\begin{lema}
Let $\mu$ and $\mu'$ denote two probabilities on ${\mathbb N}$ such that $\mu\preceq \mu'$. Then,
for all $t\in{\mathbb R}_+$,
%
\begin{equation}
\mu e^{t\mathsf Q}\preceq \mu' e^{t\mathsf Q}.
\label{3.1}
\end{equation}
%
\label{lema:3.1}
\end{lema}
Here and below $e^{t\mathsf Q'}$ denotes the semigroup associated to an irreducible and recurrent Q-matrix $\mathsf Q'$ on ${\mathbb N}$.
We have an immediate consequence of Lemma~\ref{lema:3.1}, as follows.
\begin{corolario}
If $\mu$ is a probability on ${\mathbb N}$ such that $\mu\preceq \nu$, then, for all $t\in{\mathbb R}_+$,
\begin{equation}
\mu e^{t\mathsf Q}\preceq \nu.
\label{3.0}
\end{equation}
\label{coro:3.0}
\end{corolario}
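The straightforward coupling argument behind Lemma~\ref{lema:3.1} can be sketched by a synchronous simulation: two copies of the embedded birth-and-death dynamics driven by the same uniform variables preserve their initial ordering. The sketch below assumes, for simplicity, spatially homogeneous jump probabilities (a choice of ours; the general state-dependent case is handled similarly).

```python
import random

def coupled_step(x, y, U, p):
    """One synchronous step of the monotone coupling: both copies read the
    same uniform U, stepping up when U < p and down (reflected at 0)
    otherwise, so the order x >= y is preserved."""
    def step(z):
        return z + 1 if U < p else max(z - 1, 0)
    return step(x), step(y)

random.seed(1)
x, y = 10, 3
order_kept = True
for _ in range(10_000):
    x, y = coupled_step(x, y, random.random(), 0.3)
    order_kept = order_kept and (x >= y)
```

An up-step raises both copies and a down-step lowers both (with reflection at 0 respecting the order), which is the monotonicity used in the lemma.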
We present now a few more substantial domination lemmas,
leading to a key ingredient for justifying
the main result of this subsection.
\begin{lema}\label{domi1}
Let $\mathsf Q^\psi$ be as in (\ref{eq:3.3},\ref{3.16}). Then, for all $t\in{\mathbb R}_+$,
%
\begin{equation}
\nu e^{t\mathsf Q^\psi}\preceq \nu.
\label{3.17}
\end{equation}
%
\label{lema:3.2}
\end{lema}
\begin{proof}
Let $\mathsf Y=\left(\mathsf Y_t\right)_{t\in{\mathbb R}_+}$ denote the birth-and-death process generated by
$\mathsf Q^\psi$ started from $\nu$. Set $P_{n,j}(t):={\mathbb P}(\mathsf Y_t=j~\mathbin{\vert{}}~ \mathsf Y_0=n)$, $t\in{\mathbb R}_+$, $n,j\in{\mathbb N}$.
For $l\in{\mathbb N}$,
\begin{equation}
{\mathbb P}(\mathsf Y_t\leq l)=\sum_{j\leq l}{\mathbb P}(\mathsf Y_t=j)=
\sum_{j\leq l}\sum_{n\geq 0}\nu_n P_{n,j}(t).
\label{3.18}
\end{equation}
\noindent By Tonelli,
\begin{equation}
{\mathbb P}(\mathsf Y_t\leq l)=\sum_{n\geq 0} \sum_{j\leq l} \nu_n P_{n,j}(t).
\label{3.19}
\end{equation}
Consider now Kolmogorov's forward equations for $\mathsf Y$, given by
%
\begin{align}
P'_{n,0}(t)&=-p^\psi_0P_{n,0}(t)+q^\psi_1P_{n,1}(t);\\
P'_{n,j}(t)&=p^\psi_{j-1}P_{n,j-1}(t)-\psi_jP_{n,j}(t)+q^\psi_{j+1}P_{n,j+1}(t),~~j\geq 1;
\label{3.20-3.21}
\end{align}
$n\geq 0$. It follows that
\begin{equation}
\left\lvert \sum_{{j\leq l}} \nu_nP'_{n,j}(t)\right\rvert=
\nu_n\Big| q^\psi_{l+1}P_{n,l+1}(t)-p^\psi_{l}P_{n,l}(t)\Big|\leq \nu_n\psi_{l+1},
\label{3.22}
\end{equation}
for all $t$; since $\nu$ is summable, we have that
\begin{equation}
{\mathbb P}'(\mathsf Y_t\leq l)=\sum_{n\geq 0} \sum_{j\leq l}\nu_nP'_{n,j}(t).
\label{3.23}
\end{equation}
We now make use in \eqref{3.23} of Kolmogorov's backward equations for $\mathsf Y$, given by
%
\begin{align}
P'_{0,j}(t)&=-p^\psi_0P_{0,j}(t)+p^\psi_0P_{1,j}(t)=p^\psi_0(P_{1,j}(t)-P_{0,j}(t));\\
P'_{n,j}(t)&=q^\psi_{n}P_{n-1,j}(t)-\psi_nP_{n,j}(t)+p^\psi_{n}P_{n+1,j}(t),\nonumber\\
&=q^\psi_{n}(P_{n-1,j}(t)-P_{n,j}(t))-p^\psi_{n}(P_{n,j}(t)-P_{n+1,j}(t)),\quad n\geq 1,
\label{3.24,3.25}
\end{align}
$j\geq 0$. Setting $d_n:={\mathbb P}_n(\mathsf Y_t\leq l)-{\mathbb P}_{n+1}(\mathsf Y_t\leq l)$, $n\in\mathbb N$,
we find that
\begin{align}
{\mathbb P}'(\mathsf Y_t\leq l)&=\sum_{j\leq l}\nu_0P'_{0,j}(t)+
\sum_{n\geq 1}\sum_{j\leq l}\nu_nP'_{n,j}(t)\nonumber\\
&=-\nu_0p^\psi_0d_0+\sum_{n {\geq} 1}\nu_n\big(q^\psi_nd_{n-1}-p^\psi_nd_n\big)\nonumber\\
&=\sum_{n {\geq} 0}\nu_{n+1}q^\psi_{n+1}d_{n}-\sum_{n {\geq} 0}\nu_np^\psi_nd_n,
\label{3.26}
\end{align}
provided
\begin{equation}
\sum_{n {\geq} 1}\nu_n\psi_n(d_{n-1}\vee d_{n})<\infty,
\label{3.27}
\end{equation}
which we claim to hold; see justification below. We note that $d_n\geq0$ for all $n,l$ and $t$, as can be justified by a straightforward coupling argument.
It follows that
\begin{align}
{\mathbb P}'(\mathsf Y_t\leq l)&=\sum_{n {\geq} 0}(\nu_{n+1}q^\psi_{n+1}-\nu_{n}p^\psi_{n})d_{n}\nonumber\\
&=\sum_{n {\geq} 0}(\psi_{n+1}\nu_{n+1}q_{n+1}-\psi_{n}\nu_{n}p_n)d_{n}\nonumber\\
&=\sum_{n {\geq} 0}(\psi_{n+1}\nu_{n}p_{n}-\psi_{n}\nu_{n}p_n)d_{n}\nonumber\\
&=\sum_{n {\geq} 0}(\psi_{n+1}-\psi_{n})\nu_{n}p_nd_{n}\geq0
\label{3.27a}
\end{align}
since $\psi$ is nondecreasing, where the third equality follows by reversibility of $\mathsf Y$.
We thus have that ${\mathbb P}(\mathsf Y_t\leq l)$ is nondecreasing in $t$ for every $l$; hence
\begin{equation}
\nu(\mathsf{A}_l)={\mathbb P}(\mathsf Y_0\leq l)\leq{\mathbb P}(\mathsf Y_t\leq l)
\label{3.27b}
\end{equation}
for all $l$, and~\eqref{3.17} is established.
\smallskip
It remains to argue~\eqref{3.27}. Let $\mathsf H_n:=\inf\{s\geq 0: \mathsf Y_s=n\}$, $n\in\mathbb N$, be the hitting time of $n$ by $\mathsf Y$. For $n\geq l$,
we have that
\begin{align}
d_n=&{\mathbb P}_n(\mathsf Y_t\leq l)-\int_0^t{\mathbb P}_{n+1}(\mathsf H_n\in \mathrm d s)
{\mathbb P}_{n}(\mathsf Y_{t-s}\leq l)\nonumber\\
=&\int_0^t{\mathbb P}_{n+1}(\mathsf H_n\in \mathrm d s)\Big[{\mathbb P}_n(\mathsf Y_t\leq l)-
{\mathbb P}_{n}(\mathsf Y_{t-s}\leq l)\Big]\nonumber\\&+
{\mathbb P}_n(\mathsf Y_t\leq l)\int_t^\infty {\mathbb P}_{n+1}(\mathsf H_n\in \mathrm d s)\nonumber\\
=&\int_0^t{\mathbb P}_{n+1}(\mathsf H_n\in \mathrm d s)\Big[{\mathbb P}_n(\mathsf Y_t\leq l,\mathsf Y_{t-s}> l)-
{\mathbb P}_n(\mathsf Y_t> l,\mathsf Y_{t-s}\leq l)\Big]\nonumber\\&+ {\mathbb P}_n(\mathsf Y_t\leq l){\mathbb P}_{n+1}(\mathsf H_n>t)\nonumber\\
=&:d_n'+d_n''.
\label{3.28}
\end{align}
Let now $V=\left(V_i\right)_{i\in {\mathbb N}_*}$ be a sequence of independent standard exponential random variables,
and consider the embedded chain
$\tilde{\mathsf Y}=\big(\tilde{\mathsf Y}_k\big)_{k\geq 0}$ of $\left(\mathsf Y_t\right)_{t\in{\mathbb R}_+}$, and $\tilde{\mathsf H}_n=\inf\{k\geq 0: \tilde{\mathsf Y}_k=n\}$.
Notice that $\tilde{\mathsf Y}$ is distributed as $\mathsf w$, and $\tilde{\mathsf H}_n$ is distributed as $T_n$, introduced at the beginning of the section.
Let $V$ and $\tilde{\mathsf Y}$ be independent.
%
Let us now introduce an auxiliary random variable $\mathsf H'_n=\sum\limits_{i=1}^{\tilde{\mathsf H}_n}V_i$, and note that, given that $\mathsf Y_0=n+1$,
$\mathsf H_n\stackrel{{st}}{\preceq} \varphi_{n+1}\mathsf H'_n$; it follows from this and the Markov inequality that
\begin{equation}
{\mathbb P}_{n+1}(\mathsf H_n>t)\leq {\mathbb P}_{n+1}(\mathsf H'_n>\psi_{n+1}t)\leq \frac{\varphi_{n+1}{\mathcal T}_{n+1}}t\leq\mbox{ const }\varphi_{n+1}\frac{S_n}{R_n}
\label{3.29}
\end{equation}
(see Appendix~\ref{app}). It follows that
\begin{equation}
\sum_{n {>} l}\nu_n\psi_nd''_{n-1}\leq\mbox{ const }\sum_{n {\geq} 1}\frac{\nu_n}{R_n}S_n\leq\mbox{ const }\sum_{n {\geq} 1}S_n<\infty
\label{3.27c}
\end{equation}
by the ergodicity assumption on $\omega_\mathbf 0$, and similarly $\sum_{n {\geq} 1}\nu_n\psi_nd''_{n}<\infty$.
Now, by the Markov property
%
\begin{align}
{\mathbb P}_n(\mathsf Y_t\leq l,\mathsf Y_{t-s}> l)
&=\sum_{j\geq l+1}{\mathbb P}_n(\mathsf Y_{t-s}=j){\mathbb P}_j(\mathsf Y_{s}\leq l)\nonumber\\
&\leq \sum_{j\geq l+1}{\mathbb P}_n(\mathsf Y_{t-s}=j){\mathbb P}_{l+1}(\mathsf Y_{s}\leq l)\nonumber\\
&\leq \sum_{j\geq l+1}{\mathbb P}_n(\mathsf Y_{t-s}=j)\left( 1-e^{-\psi_{l+1}s}\right)
\leq 1-e^{-\psi_{l+1}s}.
\label{3.31}
\end{align}
\noindent Thus,
%
\begin{align}
d'_n&\leq \int_0^t{\mathbb P}_{n+1}(\mathsf H_n\in \mathrm d s)(1-e^{-\psi_{l+1}s}) \leq
{\mathbb E}_{n+1}\left(1-e^{-\psi_{l+1}\mathsf H_n}\right)\nonumber\\
&\leq \psi_{l+1}{\mathbb E}_{n+1}\left(\mathsf H_n\right) \leq \psi_{l+1}\varphi_{n+1}{\mathcal T}_{n+1},
\label{3.32}
\end{align}
and, similarly as above, we find that $\sum_{n {\geq} 1}\nu_n\psi_n (d'_{n-1}+ d'_{n})<\infty$, and~\eqref{3.27} is established.
\end{proof}
In other words, if $\omega_\mathbf 0(0)\sim\nu$, then
\begin{equation}\label{dom1}
\omega_\mathbf 0(\tau_1)\preceq\nu.
\end{equation}
Let us now assume that $\hat\mu_{\mathbf{x},0}\preceq\nu$ for every $\mathbf{x}\in\mathbb{Z}^d$.
Based on the above domination results, we next construct a modification of the joint process $(X,\omega)$, to be denoted $(\breve X,\breve\omega)$,
in a coupled way to $(X,\omega)$, so that $\breve\omega$ has {\em less} spatial dependence than, and at the same time suitably dominates, $\omega$.
The idea is to let $\breve X$ have the same embedded chain as $X$, and jump according to
$\breve\omega$ as $X$ jumps according to $\omega$; we let $\breve\omega$ evolve with the same law as $\omega$ between its jump times, and at jump times
we replace $\breve\omega$ at the site where $\breve X$ jumped from by a suitable dominating random variable distributed as $\nu$.
Details follow.
We first construct a sequence of environments between jumps of $\breve X$, as follows.
Let $(X,\omega)$ be as above, starting from $X(0)=\mathbf 0$ and $\omega(0)\sim\hat\mu_0$; then, enlarging the original probability space if necessary,
we can find iid random variables $\omega^0_\mathbf{x}(0)$, $\mathbf{x}\in\mathbb{Z}^d$, distributed according to $\nu$, such that $\omega^0_\mathbf{x}(0)\geq\omega_\mathbf{x}(0)$, $\mathbf{x}\in\mathbb{Z}^d$.
We let now $\omega^0$ evolve for $t\geq0$ in a coupled way with $\omega$ in such a way that $\omega^0_\mathbf{x}(t)\geq\omega_\mathbf{x}(t)$, $\mathbf{x}\in\mathbb{Z}^d$.
Let now $\breve\tau_1$ be obtained from $\omega^0$ in the same way as $\tau_1$ was obtained from $\omega$, using the same ${\mathcal M}$ for $\omega^0$ as for $\omega$
(recall definition from paragraph of~\eqref{2.2}); $\breve\tau_{1}$ is the time of the first jump of $\breve X$, and set $\breve X(\breve\tau_1)=\mathsf x_1$.
Notice that $\breve\tau_1\geq\tau_1$.
Noticing as well that $\omega^0_\mathbf{x}(\breve\tau_1)$, $\mathbf{x}\ne0$, are independent with common distribution $\nu$, and independent of $\omega^0_\mathbf 0(\breve\tau_1)$,
and using~\eqref{dom1}, again enlarging the probability space if necessary, we find ${\mathcal W}_1$ distributed as $\nu$ such that
${\mathcal W}_1\geq\omega^0_\mathbf 0(\breve\tau_1)$, with ${\mathcal W}_1$ is independent of $\omega^0_\mathbf{x}(\breve\tau_1)$, $\mathbf{x}\ne0$;
and we make $\omega^1_\mathbf 0(\breve\tau_1)={\mathcal W}_1$, and $\omega^1_\mathbf{x}(\breve\tau_1)=\omega^0_\mathbf{x}(\breve\tau_1)$, $\mathbf{x}\ne0$.
Notice that $\omega^1_\mathbf{x}(\breve\tau_1)$, $\mathbf{x}\in\mathbb{Z}^d$ are iid with marginals distributed as $\nu$.
We now iterate this construction, inductively: given $\xi$, let us fix $n\geq1$, and suppose that for each $0\leq j\leq n-1$, we have constructed $\breve\tau_j$, and $\omega^{j}(t),\,t\geq\breve\tau_{j}$, with $\{\omega^j_\mathbf{x}(\breve\tau_j), \,\mathbf{x}\in\mathbb{Z}^d\}$ iid with marginals distributed as $\nu$. We then define $\breve\tau_n$ from $\omega^{n-1}(\breve\tau_{n-1})$ in the same way as $\tau_1$ was defined from $\omega^0(0)$, but with the random walk originating in $\mathsf x_{n-1}$, and with the
marks of ${\mathcal M}$ in the upper half space from $\breve\tau_{n-1}$; $\breve\tau_{n}$ is the time of the $n$-th jump of $\breve X$, and we set $\breve X(\breve\tau_{n})=\mathsf x_{n}$.
Next, from~\eqref{dom1}, we obtain ${\mathcal W}_n\geq\omega^{n-1}_{\mathbf{x}_{n-1}}(\breve\tau_{n})$ such that $\{{\mathcal W}_n; \omega^{n-1}_{\mathbf{x}}(\breve\tau_{n}),\,\mathbf{x}\ne\mathsf x_{n-1}\}$ is an iid family of random variables with marginals distributed as $\nu$, and define a $BDP(\mathbf p,\mathbf q)$ $(\omega^n(t))_{t\geq\breve\tau_n}$ starting from
$\{\omega^n_\mathbf{x}(\breve\tau_n)=\omega^{n-1}_\mathbf{x}(\breve\tau_n),\,\mathbf{x}\ne\mathsf x_{n-1}; \,\omega^n_{\mathbf{x}_{n-1}}(\breve\tau_n)={\mathcal W}_n\}$ so that
$\omega^n_{\mathsf x_{n-1}}(t)\geq\omega^{n-1}_{\mathsf x_{n-1}}(t)$,
$\omega^n_\mathbf{x}(t)=\omega^{n-1}_\mathbf{x}(t)$, $\mathbf{x}\ne\mathsf x_{n-1}$, $t\geq\breve\tau_n$.
We finally define $\breve\omega(t)=\omega^n(t)$ for $t\in[\breve\tau_{n},\breve\tau_{n+1})$, $n\geq0$. This coupled construction of $(\omega,\breve\omega)$ has the following properties.
\begin{lema}\label{domult}
\mbox{}
\begin{enumerate}
\item \begin{equation}\label{comp1}
\breve\omega_\mathbf{x}(t)\geq\omega_\mathbf{x}(t) \text{ for all }\,\mathbf{x}\in\mathbb{Z}^d \text{ and }\, t\geq0;
\end{equation}
\item for each $n\geq0$,
\begin{equation}\label{comp2}
\breve\omega_\mathbf{x}(\breve\tau_n), \, \mathbf{x}\in\mathbb{Z}^d,\text{ are iid random variables with marginals distributed as }\nu;
\end{equation}
\item for all $n\geq0$, we have that
\begin{equation}\label{comp3}
\tau_n\leq\breve\tau_n.
\end{equation}
\end{enumerate}
\end{lema}
\begin{proof}
The first two items are clear from the construction, so we argue only the third, which is clear for $n=0$ and $1$
(the latter case was already pointed out in the description of the construction above); for the remaining cases, let $n\geq1$,
and suppose, inductively, that $\tau_n\leq\breve\tau_n$; there are two
possibilities for $\tau_{n+1}$: either $\tau_{n+1}\leq\breve\tau_n$, in which case, clearly, $\tau_{n+1}\leq\breve\tau_{n+1}$, or $\tau_{n+1}>\breve\tau_n$;
in this latter case, $\tau_{n+1}$ (resp., $\breve\tau_{n+1}$)
will correspond to the earliest Poisson point (of ${\mathcal M}$) in ${\mathcal Q}_n:=\{(u,r):\,u\in[c_{\mathsf x_n(d)},c_{\mathsf x_n(d)}+\varphi(\omega_{\mathsf x_n}(r))],\,r\geq\breve\tau_n\}$
(resp., $\breve{\mathcal Q}_n:=\{(u,r):\,u\in[c_{\mathsf x_n(d)},c_{\mathsf x_n(d)}+\varphi(\breve\omega_{\mathsf x_n}(r))],\,r\geq\breve\tau_n\}$). By~\eqref{comp1} and the monotonicity of $\varphi$, we have that
$\breve{\mathcal Q}_n\subset{\mathcal Q}_n$, and it follows that $\tau_{n+1}\leq\breve\tau_{n+1}$.
\end{proof}
The next result follows immediately.
\begin{corolario}\label{domesp}
For $n\geq1$
\begin{equation}\label{dom2}
{\mathbf E}_{\hat\mu_0}\left(\tau_n\right)\leq {\mathbf E}_{\hat\nu}\left(\breve\tau_n\right)=n{\mathbf E}_{\hat\nu}\left(\breve\tau_1\right)=n{\mathbf E}_{\hat\nu}\left(\tau_1\right).
\end{equation}
\end{corolario}
The following result, together with~\eqref{dom2}, is a key ingredient in the justification of the main result of this subsection.
\begin{lema}\label{fin}
\begin{equation}\label{fin1}
{\mathbf E}_{\hat{\nu}}\left(\tau_{\ms{1}}\right)<\infty.
\end{equation}
\end{lema}
\begin{proof}
Let us write
\begin{align}
{\mathbf E}_{\hat{\nu}}\left(\tau_1\right)&=\int_{0}^{\infty}{\mathbf P}_{\hat{\nu}}\left(\tau_1>t
\right)dt\nonumber\\
&=\int_{0}^{\infty}{\mathrm E}_{\hat{\nu}}\left(e^{-I_{\mathbf 0}(t)}\right)dt\nonumber\\
&=\int_{0}^{+\infty}{\mathrm E}_{\hat{\nu}}\left(e^{-I_{\mathbf 0}(t)};I_{\mathbf 0}(t)\geq \epsilon t\right)dt
+\int_{0}^{+\infty}{\mathrm E}_{\hat{\nu}}\left(e^{-I_{\mathbf 0}(t)};I_{\mathbf 0}(t)<\epsilon t\right)dt\nonumber\\
&\leq \epsilon^{-1}+\int_{0}^{+\infty}{\mathbf P}_{\hat{\nu}}\left(I_{\mathbf 0}(t)<\epsilon t\right)dt\nonumber \\
&\leq \epsilon^{-1}+\int_{0}^{+\infty}{\mathbf P}_{\hat{\nu}}\left(\int_{0}^t
\mathds{1}\left\lbrace\omega_{\ms{\ms{\zero}}}(s)=0\right\rbrace ds<\epsilon t\right)dt.
\label{eq:51}
\end{align}
For $k\in{\mathbb N}$, set $\mathbf{k}=k\times \mathbf 1$. Conditioning on the initial state of the environment at the origin,
we have, for each $\delta>0$ and each $t\in{\mathbb R}_+$,
%
\begin{eqnarray}
{\mathbf P}_{\hat{\nu}}\left(\int_{0}^t\mathds{1}\left\lbrace \omega_{\ms{\ms{\zero}}}(s)=0\right\rbrace ds<\epsilon t\right)&=&
\sum_{k=0}^{\left\lfloor \delta t\right\rfloor}\nu_k\,{\mathbf P}_{\mathbf{k}}
\left(\int_{0}^t \mathds{1}\{\omega_{\ms{\ms{\zero}}}(s)=0\}ds<\epsilon t
\right)\nonumber \\
&& +\sum_{k=\left\lceil \delta t\right\rceil}^{\infty}\nu_k\,
{\mathbf P}_{\mathbf{k}}\left(\int_{0}^t\mathds{1}\{\omega_{\ms{\ms{\zero}}}(s)=0\}ds<
\epsilon t\right)\nonumber\\
&\leq& \sum_{k=0}^{\left\lfloor \delta t\right\rfloor}\nu_k\,
{\mathbf P}_{\mathbf{k}}\left(\int_{0}^t\mathds{1}\{\omega_{\ms{\ms{\zero}}}(s)=0\}ds<\epsilon t\right)\nonumber\\
&& +\nu([\delta t,\infty)).
\label{eq:52}
\end{eqnarray}
%
Thus,
%
\begin{equation}
{\mathbf E}_{\hat{\nu}}\left(\tau_{\ms{1}}\right) \leq \epsilon^{-1} + \delta^{-1} {\mathbb E}({\mathcal W})
+\int_{0}^{+\infty} \sum_{k=0}^{\left\lfloor \delta t\right\rfloor}\nu_k\,{\mathbf P}_{\mathbf{k}}
\left(\int_{0}^t\mathds{1}\{\omega_{\ms{\ms{\zero}}}(s)=0\}ds<\epsilon
t\right)dt,
\label{3.83}
\end{equation}
where ${\mathcal W}$ is a $\nu$-distributed random variable; one readily checks that~\eqref{extracon} implies that ${\mathcal W}$ has a first moment.
It remains to consider the last term in~\eqref{3.83}.
%
For that, let us start by setting $W_0=\inf\{s>0:\omega_{\ms{\ms{\zero}}}(s)= 0\}$, and defining
%
\begin{gather}
Z_1=\inf\left\lbrace s>W_0:\omega_{\ms{\ms{\zero}}}(s)\neq 0\right\rbrace-W_0,\\
W_1=\inf\left\lbrace s>W_0+Z_1:\omega_{\ms{\ms{\zero}}}(s)=0\right\rbrace-\left(W_0+Z_1\right),
\label{eq:53-34}
\end{gather}
%
and setting $Y_1=Z_1+W_1$. Note that $Z_1$ is an exponential random variable with rate $p_0$, and $W_1$
is the hitting time of the origin by a $BDP(\mathbf p,\mathbf q)$ on ${\mathbb N}$ starting from 1; under ${\mathbf P}_{\ms{\ms{\zero}}}$, $W_0=0$, clearly.
For $i\geq 1$, let us suppose defined $Y_1,\ldots,Y_{i-1}$, and let us further define
%
\begin{gather}
Z_i=\inf\left\lbrace s>W_0+\sum_{j=1}^{i-1}Y_j:\omega_{\ms{\ms{\zero}}}(s)\neq 0\right\rbrace-\left(W_0+\sum_{j=1}^{i-1} Y_j\right),\\
W_i=\inf\left\lbrace s>W_0+\sum_{j=1}^{i-1} Y_j+Z_i:\omega_{\ms{\ms{\zero}}}(s)=0\right\rbrace-\left(W_0+\sum_{j=1}^{i-1} Y_j+Z_i\right),
\label{eq:55-56}
\end{gather}
%
and $Y_i=Z_i+W_i$. By the strong Markov property, it follows that $Z_i$ and $W_i$ are distributed as $Z_1$ and $W_1$, respectively,
and $Z_i,W_i$, $i\geq1$, are independent, and thus $\left(Y_i\right)_{i\geq 1}$ is iid.
Now set $T_0=W_0$ and, for $n\geq 1$,
$T_n=T_{n-1}+Y_n$. Moreover, for $t\in{\mathbb R}_+$, let us define $\mathsf C_t=\sum\limits_{n=1}^{\infty}\mathds{1}\left\lbrace
T_n\leq t\right\rbrace$. Note that for $k\in{\mathbb N}$ and $a>0$, we have
%
\begin{eqnarray}
{\mathbf P}_{\mathbf{k}}\left(\int_{0}^t\mathds{1}\{\omega_{\ms{\ms{\zero}}}(s)=0\}ds<\epsilon t\right)&=&
{\mathbf P}_{\mathbf{k}}\left(\int_{0}^t\mathds{1}\{\omega_{\ms{\ms{\zero}}}(s)=0\}ds<\epsilon t, \mathsf C_t<\left\lfloor at \right\rfloor\right)\nonumber \\
&& +~{\mathbf P}_{\mathbf{k}}\left(\int_{0}^t\mathds{1}\{\omega_{\ms{\ms{\zero}}}(s)=0\}ds<\epsilon t,\mathsf C_t\geq \left\lfloor at\right\rfloor\right)\nonumber\\
&\leq& {\mathbf P}_{\mathbf{k}}\left(\mathsf C_t<\left\lfloor at \right\rfloor\right)+ {\mathbf P}\left(\sum_{j=1}^{\left\lfloor at\right\rfloor}Z_j<\epsilon t\right)
\label{eq:57}
\end{eqnarray}
%
and, given $\alpha\in(0,1)$,
%
\begin{align}
{\mathbf P}_{\mathbf{k}}\left(\mathsf C_t<\lfloor at\rfloor \right)&={\mathbf P}_{\mathbf{k}}\left(\mathsf C_t<\lfloor at\rfloor,T_0<\alpha t \right)+
{\mathbf P}_{\mathbf{k}}\left(\mathsf C_t<\lfloor at\rfloor,T_0\geq \alpha t \right)\nonumber\\
&\leq {\mathbf P}_{\mathbf 0}\left(\mathsf C_{(1-\alpha)t}<\lfloor at\rfloor \right)+{\mathbf P}_{\mathbf{k}}\left(T_0\geq \alpha t \right)\nonumber\\
&={\mathbf P}\left(\sum_{j=1}^{\left\lfloor at\right\rfloor}Y_j>(1-\alpha)t \right)+{\mathbf P}_{\mathbf{k}}\left(T_0\geq \alpha t \right).
\label{eq:58}
\end{align}
%
By well known elementary large deviation estimates, we have that
\begin{equation}
\int_0^\infty dt\,{\mathbf P}\!\left(\sum_{j=1}^{\left\lfloor at\right\rfloor}Z_j<\epsilon t\right)<\infty
\label{eq:59}
\end{equation}
as soon as $a>p_0\epsilon$, which we assume from now on. To conclude, it then suffices to show that
%
\begin{equation}
\int_0^\infty dt\,{\mathbf P}\left(\sum_{j=1}^{\left\lfloor at\right\rfloor}Y_j>(1-\alpha)t \right)<\infty\,\mbox{ and }\,
\int_0^\infty dt\,\sum_{k\geq0}\nu_k\,{\mathbf P}_{\mathbf{k}}\left(T_0\geq \alpha t \right)<\infty.
\label{eq:61}
\end{equation}
The latter integral is readily seen to be bounded above by $\alpha^{-1} {\mathbb E}_\nu(T_0)$, and the first condition in~\eqref{extracor} implies
the second assertion in~\eqref{eq:61}. The first integral in~\eqref{eq:61} can be written as
\begin{equation}
\int_0^\infty dt\,{\mathbf P}\left(\frac1{at}\sum_{j=1}^{\left\lfloor at\right\rfloor}\bar Y_j>\zeta \right),
\label{cconv}
\end{equation}
where $\bar Y_j=Y_j-b$, $b={\mathbf E} Y_1={\mathbf E} Y_j$, $j\geq1$, $\zeta=(1-\alpha-ab)/a$. Now we have that the expression in~\eqref{cconv} is finite
by the Complete Convergence Theorem of Hsu and Robbins (see Theorem 1 in~\cite{HR}), as soon as $a,\alpha>0$ are close enough
to 0 (so that $\zeta>0$), and $W_1$ has a second moment (and thus so does $Y_1$), but this follows immediately from the first condition in~\eqref{extracor}.
\end{proof}
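As an aside, the occupation-time estimate behind~\eqref{eq:51} may be checked numerically in the homogeneous case: the fraction of time a single-site $BDP(p,q)$ spends at the origin concentrates around the stationary mass $1-p/q$, so $I_{\mathbf 0}(t)\geq \epsilon t$ indeed holds with overwhelming probability for small $\epsilon$. A minimal Gillespie-style sketch (Python; the function name and parameters are ours, for illustration only):

```python
import random

def occupation_at_zero(p, q, horizon, rng):
    """Simulate one site of a BDP(p, q) started at 0 (leave 0 at rate p;
    at n >= 1, jump at rate p + q, up with probability p / (p + q));
    return the fraction of [0, horizon] spent in state 0."""
    t, state, time_at_zero = 0.0, 0, 0.0
    while t < horizon:
        rate = p if state == 0 else p + q   # only births are possible at 0
        hold = min(rng.expovariate(rate), horizon - t)
        if state == 0:
            time_at_zero += hold
        t += hold
        if t >= horizon:
            break
        if state == 0 or rng.random() < p / (p + q):
            state += 1
        else:
            state -= 1
    return time_at_zero / horizon

rng = random.Random(1)
frac = occupation_at_zero(0.3, 0.7, 5000.0, rng)
# detailed balance gives stationary mass 1 - p/q = 4/7 at the origin
assert 0.45 < frac < 0.70
```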
We are now ready to state and prove the main result of this subsection.
\begin{proposicao} There exists a constant $\mu\in[0,\infty)$ such that
\label{prop:3.1}
%
\begin{equation}
\frac{\tau_{\ms{n}}}{n}\to \mu \quad {\mathbf P}_{\mathbf 0}\text{-a.s.} \quad \textrm{as }
n\to\infty.
\label{3.50}
\end{equation}
Furthermore,
\begin{equation}\label{3.50a}
\mu>0.
\end{equation}
%
\end{proposicao}
\begin{proof}
\noindent We divide the argument into two parts. We first construct a superadditive triangular array of random variables
$\lbrace \mathsf L_{m,n}:m,n\in{\mathbb N}, m\leq n\rbrace$ so that $\mathsf L_{0,n}$ equals $\tau_n$ under ${\mathbf P}_{\mathbf 0}$.
Secondly, we verify that $\{-\mathsf L_{m,n}:m,n\in{\mathbb N}, m\leq n\}$ satisfies the conditions of Liggett's version of Kingman's
Subadditive Ergodic Theorem, an application of which yields the result.
\paragraph{A triangular array of jump times} \mbox{}
\smallskip
Somewhat similarly as in the construction leading to Lemma~\ref{domult} (see description preceding the statement of that result),
we construct a sequence of environments $\mathring\omega^m$, $m\geq0$, coupled to $\omega$, in a {\em dominated} way (rather than {\em dominating},
as in the previous case), as follows.
Let $\omega(0)=\mathbf 0$, and set $\mathring\omega^0=\omega$. Consider now $\tau_1,\tau_2,\ldots$, the jump times of $X$, as defined above.
For $m\geq1$, we define $(\mathring\omega^m(t))_{t\geq\tau_m}$ as a $BDP(\mathbf p,\mathbf q)$ starting from $\mathring\omega^m(\tau_m)=\mathbf 0$, coupled to $\omega$ in
$[\tau_m,\infty)$ so that
\begin{equation}\label{comp4}
\mathring\omega_\mathbf{x}^m(t)\leq\omega_\mathbf{x}(t)
\end{equation}
for all $t\geq\tau_m$ and all $\mathbf{x}\in\mathbb{Z}^d$.
Let $\mathring X^m$ be a random walk in environment $\mathring\omega^m$ starting at time $\tau_m$ from $\mathsf x_m$, with jump times determined by
$\mathring\omega^m$ and the Poisson marks of ${\mathcal M}$ in the upper half space from $\tau_m$, in the same way as the jump times of $X$ after $\tau_m$
are determined by $(\omega(t))_{t\geq\tau_m}$ and the Poisson marks of ${\mathcal M}$ in the upper half space from $\tau_m$, and having subsequent
jump destinations given by $\mathsf x_j$, $j\geq m$.
Now set $\mathring\tau^m_0=\tau_m$ and let $\mathring\tau^m_1,\mathring\tau^m_2,\ldots$ be the successive jump times of $\mathring X^m$.
Finally, for $n\geq m$, set $\mathsf L_{m,n}=\mathring\tau^m_{n-m}-\tau_m$; $\mathsf L_{m,n}$ is the time $\mathring X^m$ takes to make $n-m$ jumps.
Notice that $\mathsf L_{0,n}=\tau_n$.
\paragraph{Properties of $\{\mathsf L_{m,n},\,0\leq m\leq n<\infty\}$}\mbox{}
\smallskip
We claim that the following assertions hold.
%
\begin{eqnarray}
&\mathsf L_{0,n}\geq \mathsf L_{0,m}+\mathsf L_{m,n}\,\,{\mathbf P}_{\mathbf 0}\text{-a.s.};&\label{sad1}\\
&\left\{\mathsf L_{nk,(n+1)k}, n\in {\mathbb N}\right\} \text{ is ergodic for each }\, k\in{\mathbb N};&\label{sad2}\\
&\text{ the distribution of } \left\{\mathsf L_{n,n+k}: k\geq 1\right\} \text{ under ${\mathbf P}_{\mathbf 0}$ does not depend on }\, n\in{\mathbb N};&\label{sad3}\\
& \text{there exists }\, \gamma_0<\infty \,\text{ such that }\,{\mathbf E}_\mathbf 0(\mathsf L_{0,n})\leq \gamma_0 n. &\label{sad4}
\end{eqnarray}
\eqref{3.50} then follows from an application of Liggett's version of Kingman's Subadditive Ergodic Theorem to $(-\mathsf L_{m,n})_{0\leq m\leq n<\infty}$
(see~\cite{Lig}, Chapter VI, Theorem $2.6$).
\eqref{sad3} is quite clear; \eqref{sad2} follows immediately upon remarking that $\mathsf L_{nk,(n+1)k}$, $n\in {\mathbb N}$, are independent random variables,
and~\eqref{sad4} follows readily from~\eqref{dom2} and~\eqref{fin1}. So it remains to argue~\eqref{sad1}, which is equivalent to
\begin{equation}\label{sub}
\mathring\tau^m_{n-m}\leq\tau_n,\,0\leq m\leq n<\infty.
\end{equation}
We argue this point much as we did for~\eqref{comp3}, above.
\eqref{sub} is immediate for $m=0$. Let us fix $m\geq1$. Then~\eqref{sub} is immediate for $n=m$, and for $n=m+1$ it follows readily from the
fact that $\mathring\omega_{\mathsf x_m}(t)\leq\omega_{\mathsf x_m}(t)$, $t\geq\tau_m$.
For the remaining cases, let $n\geq m+1$, and suppose, inductively, that $\mathring\tau^m_{n-m}\leq\tau_n$; there are two
possibilities for $\mathring\tau^m_{n+1-m}$: either $\mathring\tau^m_{n+1-m}\leq\tau_n$, in which case, clearly, $\mathring\tau^m_{n+1-m}\leq\tau_{n+1}$, or $\mathring\tau^m_{n+1-m}>\tau_n$;
in this latter case, $\tau_{n+1}$ (resp., $\mathring\tau^m_{n+1-m}$)
will correspond to the earliest Poisson point (of ${\mathcal M}$) in ${\mathcal Q}'_n:=\big[c_{\mathsf x_n(d)},c_{\mathsf x_n(d)}+\varphi(\omega_{\mathsf x_n}(r))\big)_{r\geq\tau_n}$
(resp., $\mathring{\mathcal Q}_n:=\big[c_{\mathsf x_n(d)},c_{\mathsf x_n(d)}+\varphi(\mathring\omega^m_{\mathsf x_n}(r))\big)_{r\geq\tau_n}$). By~\eqref{comp4} and the monotonicity of $\varphi$, we have that
$\mathring{\mathcal Q}_n\supset{\mathcal Q}'_n$, and it follows that $\mathring\tau^m_{n+1-m}\leq\tau_{n+1}$.
Finally, one readily checks from~\eqref{sad1} that $\mu\geq{\mathbf E}_{\mathbf 0}(\tau_1)$; the latter expectation can be readily checked to be strictly positive,
and the argument is complete.
\end{proof}
\end{subsection}
\begin{subsection}{Proof of the Law of Large Numbers for $X$ under ${\mathbf P}_{\mathbf 0}$}
\label{sec:3.3/2}
We may now prove Theorem~\ref{teo:3.1}.
For $t\in{\mathbb R}_+$, let $\mathsf{N}_t=\sup\left\lbrace n\geq 0: \tau_{\ms{n}}\leq t\right\rbrace$.
%
It follows readily from Proposition~\ref{prop:3.1} that
\begin{equation}
\frac{\mathsf{N}_t}{t}\to \frac{1}{\mu} ~{\mathbf P}_{\mathbf 0}\text{-a.s.}
~\textrm{as $t\to \infty$}.
\label{eq:65}
\end{equation}
It follows from~\eqref{eq:65} and the Strong Law of Large Numbers for $(\mathsf{x}_n)$ that
\begin{equation}
\frac{X(t)}{t}=\frac{\mathsf{x}_{\mathsf{N}_t}}{t}=\frac{\mathsf{x}_{\mathsf{N}_t}}{\mathsf{N}_t}
\times\frac{\mathsf{N}_t}{t}\to \frac{{\mathbf E}(\xi_1)}{\mu}~{\mathbf P}_{\mathbf 0}\text{-a.s.}
~ \text{as $t\to\infty$}.
\label{eq:66}
\end{equation}
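The renewal inversion yielding~\eqref{eq:65} from Proposition~\ref{prop:3.1} can be illustrated by a toy simulation; here, purely for illustration, iid exponential inter-jump times stand in for the (dependent) increments of $(\tau_n)$, and all names are ours:

```python
import random

def renewal_count(mean_gap, horizon, rng):
    """N_t: number of renewal epochs tau_n in [0, horizon], for a renewal
    process with iid exponential gaps of the given mean."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mean_gap)
        if t > horizon:
            return n
        n += 1

rng = random.Random(2)
mu = 2.0                       # stands in for the limit in (3.50)
n_t = renewal_count(mu, 10000.0, rng)
# N_t / t -> 1 / mu, as in (eq:65)
assert abs(n_t / 10000.0 - 1.0 / mu) < 0.05
```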
\end{subsection}
\begin{subsection}{Proof of the Central Limit Theorem for $X$ under ${\mathbf P}_{\mathbf 0}$}
\label{sec:3.2}
\noindent We now prove Theorem~\ref{teo:3.2}. Let $\gamma=1/\mu$, and write
\begin{equation}\label{subs}
\frac{X(t)}{\sqrt{\gamma t}}=
\frac{\mathsf x_{\mathsf N_t}-\mathsf x_{\lfloor \gamma t\rfloor}}{\sqrt{\gamma t}}+ \frac{\mathsf x_{\lfloor \gamma t\rfloor}}{\sqrt{\gamma t}}.
\end{equation}
%
By the Central Limit Theorem obeyed by $(\mathsf x_n)$, we have that, under ${\mathbf P}$, as $t\to\infty$,
%
\begin{equation}\label{clt}
\frac{\mathsf x_{\lfloor \gamma t\rfloor}}{\sqrt{\lfloor \gamma t\rfloor}}\Rightarrow N_d(\mathbf 0,\Sigma).
\end{equation}
%
We now claim that the first term on the right hand side of~\eqref{subs} (after multiplication by $\sqrt\gamma$) vanishes in probability as $t\to\infty$ under ${\mathbf P}_{\mathbf 0}$.
%
Indeed, let us write $\xi_k=\left(\xi_{k,1},\ldots,\xi_{k,d}\right)$, $k\in{\mathbb N}$. Given $\epsilon>0$, let us set $\delta=\epsilon^3$; we have that
%
\begin{equation}\label{decomp}
{\mathbf P}_{\mathbf 0}\left(\left\lVert \frac{\mathsf x_{\mathsf N_t}-\mathsf x_{\lfloor \gamma t\rfloor}}{\sqrt{t}}\right\rVert>\epsilon\right)\leq
{\mathbf P}_{\mathbf 0}\left(\left\lVert \mathsf x_{\mathsf N_t}-\mathsf x_{\lfloor \gamma t\rfloor}\right\rVert>\epsilon\sqrt{t},\,\left|{\mathsf N_t}-\gamma t\right|<{\delta t}\right)+
{\mathbf P}_{\mathbf 0}\left(\left|{\mathsf N_t}-\gamma t\right|\geq{\delta t}\right).
\end{equation}
%
By~\eqref{eq:65}, it then suffices to consider the first term on the right hand side of~\eqref{decomp}, which may be readily seen to be bounded above by
%
\begin{equation}
\sum_{i=1}^d\left\{{\mathbf P}_{\mathbf 0}\left(\max_{0\leq\ell\leq\delta t}\left| \sum_{k=\lfloor\gamma t\rfloor-\ell}^{\lfloor\gamma t\rfloor}\xi_{k,i}\right|>\epsilon\sqrt{t}\right)
+
{\mathbf P}_{\mathbf 0}\left(\max_{0\leq\ell\leq\delta t}\left| \sum_{k=\lfloor\gamma t\rfloor}^{\lfloor\gamma t\rfloor+\ell}\xi_{k,i}\right|>\epsilon\sqrt{t}\right)\right\}
\leq 3\,\mathrm{Tr}\!\left(\Sigma\right)\epsilon,
\label{eq:3.97}
\end{equation}
%
\noindent where we have used Kolmogorov's Maximal Inequality in the latter passage; the claim follows since $\epsilon$ is arbitrary.
The CLT then follows readily from the claim and~\eqref{clt}.
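The use of Kolmogorov's Maximal Inequality in~\eqref{eq:3.97} may also be checked numerically: for a mean zero walk with iid steps, the tail of the running maximum of the partial sums is controlled by the variance of the final sum. A minimal Monte Carlo sketch (Rademacher steps; names ours):

```python
import random

def maximal_tail(n_steps, lam, trials, rng):
    """Empirical P(max_{k <= n} |S_k| > lam), S_k a sum of k Rademacher
    (+-1, fair) steps, to be compared with Var(S_n) / lam^2."""
    hits = 0
    for _ in range(trials):
        s, peak = 0, 0
        for _ in range(n_steps):
            s += 1 if rng.random() < 0.5 else -1
            peak = max(peak, abs(s))
        if peak > lam:
            hits += 1
    return hits / trials

rng = random.Random(3)
n, lam = 100, 25.0
p_emp = maximal_tail(n, lam, 2000, rng)
# Kolmogorov's Maximal Inequality: P(max |S_k| > lam) <= Var(S_n)/lam^2
assert p_emp <= n / lam**2
```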
\medskip
\begin{observacao}\label{ext_clt}
A meaningful extension of our arguments for the above CLT to the non-mean-zero case would require understanding the fluctuations of $(\mathsf N_t)$, and their dependence on those of a centered $X$, issues that we did not pursue for the present article, even though they are most probably treatable by a regeneration argument (possibly dispensing with the domination requirements of our argument for the mean zero case, in particular that $\varphi$ be decreasing).
Another extension is to prove a functional CLT; for the mean zero case treated above, that, we believe, requires no new ideas, and thus we refrained from presenting a standard argument to that effect (having already gone through standard steps in our justifications for the LLN and CLT for $X$).
\end{observacao}
\begin{observacao}\label{relax}
It is quite clear from our arguments that all that we needed to have from our conditions on $\mathbf p,\mathbf q$ is the validity of both conditions in~\eqref{extracor}, and
thus we may possibly relax to some extent~\eqref{extracon}, and certainly other conditions imposed on $\mathbf p,\mathbf q$ (in the paragraph of~\eqref{erg}), with the same approach, but we have opted for simplicity and cleanness, within a measure of generality.
\end{observacao}
\begin{observacao}\label{rever}
For the proof of Lemma~\ref{3.17}, a mainstay of our approach, we relied on
the reversibility of the birth-and-death process, the positivity of $d_n$, and the increasing monotonicity of $\psi$; see the upshot of the paragraph of~\eqref{3.27a}. It is natural to think of extending the argument to other reversible ergodic Markov processes on ${\mathbb N}$; one issue for longer range cases is the positivity of $d_n$. There should be examples of long range reversible ergodic Markov processes on ${\mathbb N}$ where positivity of $d_n$ may be ascertained by a coupling argument, and we believe we have worked out such an example, but it looked too specific to warrant a more general formulation of our results (and the extra work involved in such an attempt), so again we felt content in presenting our approach in the present setting.
\end{observacao}
\begin{observacao}\label{alt-lln}
Going back to the construction leading to Lemma~\ref{domult}, for $0\leq m\leq n$, let $\sL'_{m,n}$ denote the time $\omega^m$ takes to make $n-m$ jumps.
Then it follows from the properties of $\omega,\omega^m$, $m\geq0$, as discussed in the paragraphs preceding the statement of Lemma~\ref{domult}, that
$\{\sL'_{m,n},\,0\leq m\leq n<\infty\}$ is a {\em subadditive} triangular array, and a Law of Large Numbers for $\tau_n$ under ${\mathbf P}_{\hat\nu}$ would follow, once we establish ergodicity of $\{\sL'_{nk,(n+1)k}, n\in {\mathbb N}\}$, other conditions for the application of the Subadditive Ergodic Theorem being readily seen to hold.
This would require a more substantial argument than for the corresponding result for $\{\mathsf L_{nk,(n+1)k}, n\in {\mathbb N}\}$, given briefly above (in the second paragraph
below~\eqref{sad4}), since independence is lost. Perhaps a promising strategy would be one similar to that which we undertake in the next section, to the same effect; see Remark~\ref{rem:erg}. For this reason, if for nothing else, we refrained from pursuing this specific point in this paper.
\end{observacao}
\begin{observacao}\label{vfzero}
The restriction of positivity of $\varphi$, made at the beginning, is not really crucial in our approach. It perhaps makes parts of the arguments clearer, but our approach works if we allow for $\varphi(n)=0$ for $n\geq n_0$ for any given $n_0\geq1$ --- in this case, we note, the auxiliary process $\mathsf Y$ introduced in the proof of Lemma~\ref{lema:3.2} is a birth-and-death process on $\{0,\ldots,n_0-1\}$.
\end{observacao}
\end{subsection}
\section{Other initial conditions}
\label{ext}
\setcounter{equation}{0}
In this section we extend Theorem~\ref{teo:3.2} to other (product) initial conditions. In this and in the next section, we will assume for simplicity that the BD process environments are homogeneous, i.e., $p_n\equiv p$, with $p\in(0,1/2)$. In this context, we use the notation $BDP(p,q)$ for the process, where $q=1-p$. We hope that the arguments developed for the inhomogeneous case, as well as subsequent ones, are sufficiently convincing that this may be relaxed --- although we do not pretend to be able to propose optimal or near optimal conditions for the validity of any of the subsequent results.
As we will see below, our argument for this extension does not go through a LLN for the position of the particle, as it did in the previous section;
we thus do not discuss an extension of the LLN, focusing rather on the CLT.\footnote{But the same line of argumentation below may be readily
seen to yield a LLN, under the same conditions.}
We will as before assume that the initial condition for the environment is product, given by $\hat\mu_0=\bigotimes\limits_{\mathbf{x}\in\mathbb{Z}^d}\mu_{\mathbf{x},0}$,
and we will further assume that $\mu_{\mathbf{x},0}\preceq\bar\mu$, with $\bar\mu$ a probability measure on ${\mathbb N}$ with an exponentially decaying tail, i.e.,
there exists a constant $\beta>0$ such that
\begin{equation}\label{expdec}
\bar\mu([n,\infty))\leq\text{const } e^{-\beta n}
\end{equation}
for all $n\geq0$. Notice that this includes $\hat\nu$, in the present homogeneous BDP case. Again, it should hopefully be quite clear from our arguments that these conditions can be relaxed, both in terms of the homogeneity of $\bar\mu$ and the decay of its tail, but we do not seek to do that presently, or to suggest optimal or near optimal conditions.
Our strategy is to first couple the environment starting from $\hat\mu_0$ to the one starting from $\mathbf 0$, so that for each $\mathbf{x}\in\mathbb{Z}^d$, each respective BD process evolves independently one from the other until they first meet, after which time they coalesce forever.
One natural second step is to couple two versions of the random walks, one starting from each of the two coupled environments in question, so that they jump together when they are at the same point at the same time, and see the same environment. One quite natural way to try and implement such a strategy is to have both walks have the same embedded chains, and show that they will (with high probability) eventually meet at a time at and after which they only see the same environments. Even though this looks like it should be true, we did not find a way to control the distribution of the environments seen by both walks in their evolution (in what might be seen as a {\em game of pursuit}) in an effective way.
So we turned to our actual subsequent strategy, which depends on the dimension (and requires different further conditions on $\pi$, the distribution of $\xi$, in $d\geq2$). In $d\leq2$, we modify the strategy proposed in the previous paragraph by letting the two walks evolve independently when separated, and relying on recurrence to ensure that they will
meet in the aforementioned conditions; there is a technical issue arising in the latter point for general $\pi$ (within the conditions of
Theorem~\ref{teo:3.2}), which we resolve by invoking a result in the literature, which is stated for $d=1$ only, so for $d=2$ we need to restrict $\pi$ to be symmetric. See Remark~\ref{symm} below.
In $d\geq3$, we of course do not have recurrence, but, rather, transience, and so we rely on this, instead, to show that our random walk will eventually find itself in a cut point of its trajectory such that the environment along its subsequent trajectory is coalesced with a suitably coupled environment starting from $\mathbf 0$; this allows for a comparison to the situation of Theorem~\ref{teo:3.2}.
The argument requires the a.s.~existence of infinitely many cut points of $(\mathsf{x}_{\ms{n}})$, and, to ascertain that, we rely on the literature, which gives boundedness of the support of $\pi$ as a sufficient condition (with no symmetry requirement).
\begin{teorema} [Central Limit Theorem for $X$] \label{gclt}\mbox{}
Under the same conditions as in Theorem~\ref{teo:3.2}, and assuming that the conditions on $\hat\mu_0$ stipulated in the paragraph of~\eqref{expdec} above hold, we have that for ${\mathrm P}_{\hat{\mu}_{{0}}}$-a.e. $\omega$
\begin{equation}
\frac{X(t)}{\sqrt{ t/\mu}}\Rightarrow N_d(\mathbf 0,\Sigma) \,\mbox{ under }\, P^{^{\omega}},
\label{eq:75ext}
\end{equation}
\label{teo:3.2ext}
%
provided the following extra conditions on $\pi$ hold, depending on $d$: in $d=1$, no extra condition; in $d=2$, $\pi$ is symmetric;
in $d\geq3$, $\pi$ has bounded support.
\end{teorema}
We present the proof of Theorem~\ref{teo:3.2ext}, spelling out the above broad descriptions, in two subsequent subsections, one for $d\leq2$, and another one for $d\geq3$. We first state and prove a lemma which enters
both arguments, concerning successive coalescence of coupled versions of the environments, one started from $\mathbf 0$, and the other from $\hat\mu_0$, over certain times related to displacements of $(\mathsf{x}_{\ms{n}})$.
Consider two coalescing versions of the environment, $\mathring\omega$ and $\omega$, the former one starting from $\mathbf 0$, and the latter starting from $\hat\mu_0$ as above, such that $\mathring\omega_\mathbf{x}(t)\leq\omega_\mathbf{x}(t)$ for all $\mathbf{x}$ and $t$, and for $\mathbf{x}\in\mathbb{Z}^d$, let $\mathsf T_{\mathbf{x}}$ denote the coalescence time of $\mathring\omega_\mathbf{x}$ and $\omega_\mathbf{x}$, i.e.,
\begin{equation}
\mathsf T_{\mathbf{x}}=\inf\left\lbrace s>0:\mathring\omega_{\mathbf{x}}(s)=\omega_{\mathbf{x}}(s)\right\rbrace.
\end{equation}
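The coalescing coupling just described can be mimicked by a small simulation: two single-site $BDP(p,q)$'s, one started from $0$ and one from above, run with independent clocks until they first agree, after which they would be glued. The Python sketch below (a caricature with names of our own; not the construction used in the proofs) also checks that the domination $\mathring\omega_\mathbf{x}(t)\leq\omega_\mathbf{x}(t)$ is never violated before coalescence:

```python
import random

def coalescence_time(p, q, y0, rng, max_events=100000):
    """Lower chain x starts at 0, upper chain y at y0; both are BDP(p, q)
    sites with independent clocks (up rate p; down rate q when positive).
    Returns the first time the chains agree (the coalescence time T_x)."""
    t, x, y = 0.0, 0, y0
    for _ in range(max_events):
        if x == y:
            return t
        rx = p + (q if x > 0 else 0.0)
        ry = p + (q if y > 0 else 0.0)
        t += rng.expovariate(rx + ry)
        if rng.random() < rx / (rx + ry):   # lower chain jumps
            x += 1 if rng.random() < p / rx else -1
        else:                               # upper chain jumps
            y += 1 if rng.random() < p / ry else -1
        assert x <= y                       # order is preserved until they meet
    raise RuntimeError("no coalescence within event budget")

rng = random.Random(4)
times = [coalescence_time(0.3, 0.7, 5, rng) for _ in range(200)]
assert sum(times) / len(times) < 50.0       # coalescence happens quickly
```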
Now let $\mathring X$ and $X$ be versions of the random walks on $\mathbb{Z}^d$ in the respective environments, both starting from $\mathbf 0(\in\mathbb{Z}^d)$.
Let us suppose, for simplicity, that they have the same embedded chain $(\mathsf{x}_{\ms{n}})$.
For $n\in{\mathbb N}$, let ${\mathcal B}_n$ denote $\{-2^n,-2^n+1,\ldots,2^n-1,2^n\}^d$, let $\mathring\cH_n$ (resp., ${\mathcal H}_n$) denote the hitting time of $\mathbb{Z}^d\setminus{\mathcal B}_n$ by $\mathring X$ (resp., $X$), and consider the event $\mathring{\mathsf A}_n$ (resp., $\mathsf A_n$) that $\mathsf T_{\mathbf{x}}\leq\mathring\cH_n$ (resp., $\mathsf T_{\mathbf{x}}\leq{\mathcal H}_n$) for all $\mathbf{x}\in{\mathcal B}_{n+1}$.
Let also $\mathsf h_n$ denote the hitting time of $\mathbb{Z}^d\setminus{\mathcal B}_n$ by $(\mathsf{x}_{\ms{n}})$.
\begin{lema} \label{coupenv}
%
\begin{equation}\label{coupenv1}
{\mathbf P}_{\mathbf 0}(\mathring{\mathsf A}_n^c\text{ infinitely often})={\mathbf P}_{\hat{\mu}_{{0}}}(\mathsf A_n^c\text{ infinitely often})=0.
\end{equation}
\end{lema}
\begin{proof}
Under our conditions, the argument is quite elementary, and for this reason we will be rather concise.
Let us first point out that both $\mathring\cH_n$ and ${\mathcal H}_n$ are readily seen to be stochastically bounded from below by
$\bar\cH_n:=\sum_{i=1}^{\mathsf h_n}{\mathcal E}_i$, where ${\mathcal E}_1,{\mathcal E}_2,\ldots$ are iid standard exponential random variables,
which are independent of $\mathsf h_n$ and of $\mathring\omega$ and $\omega$.
It follows readily from Kolmogorov's Maximal Inequality that for all $n\in{\mathbb N}$
\begin{equation}\label{kmi}
{\mathbf P}(\mathsf h_n\leq 2^n)={\mathbf P}\Big(\max_{1\leq i\leq 2^n}\|\mathsf x_i\|> 2^n\Big)\leq\text{const } 2^{-n},
\end{equation}
and by the above mentioned domination, \eqref{kmi}, and well known elementary large deviation estimates, we find that
\begin{equation}\label{coupenv2}
{\mathbf P}_{\mathbf 0}(\mathring\cH_n\leq 2^{n-1})\vee{\mathbf P}_{\hat{\mu}_{{0}}}({\mathcal H}_n\leq 2^{n-1})\leq{\mathbf P}(\bar\cH_n\leq 2^{n-1})\leq\text{const } 2^{-n}.
\end{equation}
We henceforth treat only the first probability in~\eqref{coupenv1}; the argument for the second one is identical.
The probability of the event that $\mathring\cH_n> 2^{n-1}$ and $\mathsf T_{\mathbf{x}}>\mathring\cH_n$ for some $\mathbf{x}\in{\mathcal B}_{n+1}$ is bounded above by
\begin{equation}\label{coupenv3}
\text{const }2^{dn}\,{\mathbb P}(\mathsf T_{\mathbf 0}> 2^{n-1}).
\end{equation}
It may now be readily checked that $\mathsf T_{\mathbf 0}$ is stochastically dominated by the hitting time of the origin by a simple random walk on ${\mathbb Z}$
in continuous time with homogeneous jump rates equal to 1, with probability $p$ to jump to the right, initially distributed as $\bar\mu$. Thus, given $\delta>0$,
\begin{equation}\label{coupenv4}
{\mathbb P}(\mathsf T_{\mathbf 0}> 2^{n-1})\leq\bar\mu([\delta 2^n,\infty))+{\mathbb P}\Big(\sum_{i=1}^{\lfloor\delta 2^n\rfloor}H_i>2^{n-1}\Big),
\end{equation}
where $H_1,H_2,\ldots$ are iid random variables distributed as the hitting time of the origin by a simple random walk on ${\mathbb Z}$
in continuous time with homogeneous jump rates equal to 1, with probability $p$ to jump to the right, starting from 1.
$H_1$ is well known to have a positive exponential moment; it follows from elementary large deviation estimates that we may choose
$\delta>0$ such that the latter term on the right hand side of~\eqref{coupenv4} is bounded above by const $e^{-b2^n}$ for some constant
$b>0$ and all $n$. Using this bound, and substituting~\eqref{expdec} in~\eqref{coupenv4}, we find that
\begin{equation}\label{coupenv5}
{\mathbb P}(\mathsf T_{\mathbf 0}> 2^{n-1})\leq\text{const }e^{-b'2^n}
\end{equation}
for some $b'>0$ and all $n$, and~\eqref{coupenv1} follows upon a suitable use of the Borel--Cantelli Lemma.
\end{proof}
\begin{observacao}\label{rem:erg}
As vaguely mentioned in Remark~\ref{alt-lln} at the end of the previous section,
a seemingly promising strategy for establishing the ergodicity of $\{\sL'_{nk,(n+1)k}, n\in {\mathbb N}\}$
would be to approximate an event of ${\cal F}^+_{m'}$, the $\sigma$-field generated by $\{\sL'_{nk,(n+1)k}, n\geq m'\}$, by one generated by a version of
an environment starting from $\mathbf 0$ at time $\sL'_{0,mk}$, coupled to the original environment in a coalescing way as above, with suitable
couplings of the jump times and destinations, with fixed $m\in{\mathbb N}_*$ and $m'\gg m$. Ergodicity would follow by the independence of the latter
$\sigma$-field and ${\cal F}^-_{m}$, the $\sigma$-field generated by $\{\sL'_{(n-1)k,nk}, 1\leq n\leq m\}$.
We have not attempted to work this idea out in detail; if we did, it looks as though we might face the same issues arising in the extension of the CLT,
as treated in the present section, thus possibly not yielding a better result than Theorem~\ref{teo:3.2ext}.
\end{observacao}
\begin{subsection}{Proof of Theorem~\ref{teo:3.2ext} for $d\leq2$}
\label{12d}
We start by fixing the coalescing environments $\mathring\omega$ and $\omega$, as above, and considering two independent random walks, denoted
$\mathring X$ and $X'$ in the respective environments $\mathring\omega$ and $\omega$. The jump times of $\mathring X$ and $X'$ are obtained from $\mathring\M$ and
${\mathcal M}'$, as in the original construction of our model, where $\mathring\M$ and ${\mathcal M}'$ are independent versions of ${\mathcal M}$.
For the jump destinations of $\mathring X$ and $X'$, we will change things a little, and consider independent families $\mathring \xi=\{\mathring \xi_\mathsf z,\,\mathsf z\in\mathring\M\}$ and $\xi'=\{\xi'_\mathsf z,\,\mathsf z\in{\mathcal M}'\}$
of independent versions of $\xi_1$. The jump destination of $\mathring X$ at the time corresponding to an a.s.~unique point $\mathsf z$ of $\mathring\M$ is then
given by $\mathring \xi_\mathsf z$, and correspondingly for $X'$.
Let $\mathsf D=\big(\mathsf D(s):=\mathring X(s)-X'(s), s\geq0\big)$, which is clearly a continuous time jump process, and consider the embedded chain of $\mathsf D$, denoted
$\mathsf d=\left(\mathsf d_n\right)_{n\in {\mathbb N}}$. We claim that under the conditions of Theorem~\ref{teo:3.2ext} for $d\leq2$, $\mathsf d$ is recurrent, that is,
it a.s.~returns to the origin infinitely often.
Before justifying the claim, let us indicate how to reach the conclusion of the proof of Theorem~\ref{teo:3.2ext} for $d\leq2$ from this. We consider the
sequence of return times of $\mathsf D$ to the origin, i.e., $\tilde\s_0=0$, and for $n\geq1$,
\begin{equation}\label{meet}
\tilde\s_n=\inf\big\{s>\tilde\s_{n-1}:\, \mathsf D(s)=0\text{ and } \mathsf D(s-)\ne0\big\}.
\end{equation}
It may be readily checked, in particular using the recurrence claim, that this is an infinite sequence of a.s.~finite stopping times given $\mathring\omega,\omega$, such that
$\tilde\s_n\to\infty$ as $n\to\infty$.
Then, for each $n\in{\mathbb N}$, we define a version of $X'$, denoted $X_n$, coupled to $\mathring X$ and $X'$ as follows: $X_n(s)=X'(s)$ for $s\leq\tilde\s_n$, and
for $s>\tilde\s_n$, the jump times and destinations of $X_n$ are defined from $\omega$ as before, except that we replace the Poisson marks of ${\mathcal M}'$ in the half space above $\tilde\s_n$ by the corresponding marks of $\mathring\M$, and we use the corresponding jump destinations of $\mathring \xi$. It may be readily checked that $X_n$ is a version of $X'$, and that starting at $\tilde\s_n$, and as long as $X_n$ and $\mathring X$ see the same respective environments, they remain together.
It then follows from Lemma~\ref{coupenv} that there exists a finite random time $N$ such that $\mathring X(t)$ and $X'(t)$ each sees only coupled environments
for $t>N$, and thus so do $\mathring X(t)$ and $X_n(t)$ for $t>\tilde\s_n>N$. It then follows from the considerations above that given $\mathring\omega,\omega$, $n\in{\mathbb N}$ and $x\in{\mathbb R}$
\begin{eqnarray}\nonumber
&\Big|P\Big(\frac{X'(t)}{\sqrt t}<x\Big)-P\Big(\frac{\mathring X(t)}{\sqrt t}<x\Big)\Big|
=\Big|P\Big(\frac{X_n(t)}{\sqrt t}<x\Big)-P\Big(\frac{\mathring X(t)}{\sqrt t}<x\Big)\Big|&\\
&\leq P\big(\{t>\tilde\s_n>N\}^c\big)\leq P(\tilde\s_n\geq t)+P(N\geq \tilde\s_n),&
\label{clt1}
\end{eqnarray}
and it follows, since $\tilde\s_n$ is a.s.~finite, that the limsup as $t\to\infty$ of the left hand side of~\eqref{clt1} is bounded above by $P(N\geq\tilde\s_n)$.
The result for $X'$ then follows, since it holds for $\mathring X$ by Theorem~\ref{teo:3.2}, $n$ is arbitrary, and $\tilde\s_n\to\infty$ a.s.~as $n\to\infty$.
In order to check the recurrence claim, notice that if $\pi$, the distribution of $\xi_1$, is symmetric, then $\mathsf d$ is readily seen to be a discrete time random walk on $\mathbb{Z}^d$ with jump distribution given by $\pi$, and the claim follows from well known facts about mean zero random walks with finite second moments for $d\leq2$.
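For the reader's convenience, we recall the fact being invoked (a consequence of the Chung--Fuchs theorem): a random walk on $\mathbb{Z}^d$, $d\leq2$, whose jump distribution satisfies
\begin{equation*}
{\mathbf E}(\xi_1)=0\quad\text{and}\quad{\mathbf E}\big(\|\xi_1\|^2\big)<\infty
\end{equation*}
is recurrent; in particular, it a.s.~returns to the origin infinitely often.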
This completes the argument for Theorem~\ref{teo:3.2ext} for $d=2$.
For $d=1$ and asymmetric $\pi$, $\mathsf d$ is no longer Markovian, but we may resort to Theorem 1 of~\cite{RHG91} to justify the claim as follows.
Let us fix a realization of $\mathring\omega$, $\omega$, $\mathring\M$ and ${\mathcal M}'$ (such that no two marks in $\mathring\M\cup{\mathcal M}'$ have the same time coordinate, which is of course an event of full probability).
Let us now dress $\mathsf d$ up as a {\em controlled random walk (crw)} (conditioned on $\mathring\omega$, $\omega$, $\mathring\M$ and ${\mathcal M}'$), in the language of~\cite{RHG91}; see paragraph before the statement of Theorem 1 therein.
There are two kinds of jump distributions for $\mathsf d$ ($p=2$, in the notation of~\cite{RHG91}):
$F_1$ denotes the distribution of $\xi_1$, and $F_2$ denotes the distribution of $-\xi_1$.
In order to conform to the set up of~\cite{RHG91}, we will also introduce two independent families of (jump) iid random variables
(which will in the end not be used), namely, $\breve \xi=\{\breve \xi_\mathsf z,\,\mathsf z\in\mathring\M\}$ and $\xi''=\{\xi''_\mathsf z,\,\mathsf z\in{\mathcal M}'\}$, independent of, but having the same marginal distributions as, $\mathring \xi$ (and $\xi'$).
Let us see how the choice between each of the two distributions is made at each step of $\mathsf d$. This is done using the indicator functions
$\psi_n$, introduced and termed in~\cite{RHG91} the {\em choice of game at time} $n\geq1$, inductively, as follows.
Given $\mathring\omega$, $\omega$, $\mathring\M$ and ${\mathcal M}'$, let $\zeta_1$ denote the earliest point of $\mathring\cN_\mathbf 0 \cup{\mathcal N}'_\mathbf 0$, where $\mathring\cN_\mathbf{x}$, ${\mathcal N}'_\mathbf{x}$, $\mathbf{x}\in\mathbb{Z}^d$, are defined from $(\mathring\omega,\mathring\M)$ and $(\omega,{\mathcal M}')$, respectively, as ${\mathcal N}_{\mathbf{x}}$ was defined from $(\omega,{\mathcal M})$ at the beginning of Section~\ref{mod},
and let $\eta_1$ denote the time coordinate of $\zeta_1$, and set $\psi_1= 1+\mathbb 1\{\zeta_1\in{\mathcal N}'_\mathbf 0\}$, and
\begin{equation}\label{psi1}
X_1^i:=
\begin{cases}
\,\,\,\,\mathring \xi_{\zeta_1},&\text{ if } \psi_1=1\text{ and } i=1,\\
-\breve \xi_{\zeta_1},&\text{ if } \psi_1=1\text{ and } i=2,\\
\,\,\,\,\xi''_{\zeta_1},&\text{ if } \psi_1=2\text{ and } i=1,\\
-\xi'_{\zeta_1},&\text{ if } \psi_1=2\text{ and } i=2.
\end{cases}
\end{equation}
Notice that $X_1^1$ and $X_1^2$ are independent and distributed as $F_1$ and $F_2$, respectively, and that a.s.
\begin{eqnarray}\label{x121}
\mathring X(\eta_1)&=&X_1^{\psi_1}\mathbb 1\{\psi_1=1\}+\mathring X(0)\mathbb 1\{\psi_1=2\},\\
X'(\eta_1)&=&X_1^{\psi_1}\mathbb 1\{\psi_1=2\}+X'(0)\mathbb 1\{\psi_1=1\}.
\end{eqnarray}
For $n\geq2$, having defined $\zeta_j$, $\eta_j$, $\psi_j$, $X_j^i$, $j<n$, $i=1,2$, let
$\zeta_n$ denote the earliest point of $\mathring\cN_{\mathring X(\eta_{n-1})}(\eta_{n-1})\cup {\mathcal N}'_{X'(\eta_{n-1})}(\eta_{n-1})$,
where for $\mathbf{x}\in\mathbb{Z}^d$ and $t\geq0$, $\mathring\cN_\mathbf{x}(t)$, ${\mathcal N}'_\mathbf{x}(t)$ denote the points of $\mathring\cN_\mathbf{x}$, ${\mathcal N}'_\mathbf{x}$ with time coordinates above $t$,
respectively.
Let now $\eta_n$ denote the time coordinate of $\zeta_n$, and set
$\psi_n= 1+\mathbb 1\{\zeta_n\in{\mathcal N}'_{X'(\eta_{n-1})}(\eta_{n-1})\}$, and
\begin{equation}\label{psin}
X_n^i:=
\begin{cases}
\,\,\,\,\mathring \xi_{\zeta_n},&\text{ if } \psi_n=1\text{ and } i=1,\\
-\breve \xi_{\zeta_n},&\text{ if } \psi_n=1\text{ and } i=2,\\
\,\,\,\,\xi''_{\zeta_n},&\text{ if } \psi_n=2\text{ and } i=1,\\
-\xi'_{\zeta_n},&\text{ if } \psi_n=2\text{ and } i=2.
\end{cases}
\end{equation}
Notice that $\{X_j^i;\,1\leq j\leq n,\,i=1,2\}$ are independent and $X_j^1$ and $X_j^2$ are distributed as $F_1$ and $F_2$, respectively, for all $j$. Moreover, a.s.
\begin{eqnarray}\label{x12n}
\mathring X(\eta_n)&=&X_n^{\psi_n}\mathbb 1\{\psi_n=1\}+\mathring X(\eta_{n-1})\mathbb 1\{\psi_n=2\},\\
X'(\eta_n)&=&X_n^{\psi_n}\mathbb 1\{\psi_n=2\}+X'(\eta_{n-1})\mathbb 1\{\psi_n=1\}.
\end{eqnarray}
We then have that for $n\geq1$, $\mathsf d_n=\sum_{j=1}^nX^{\psi_j}_j$.
One may readily check that (given $\mathring\omega$, $\omega$, $\mathring\M$ and ${\mathcal M}'$) $\mathsf d$ is a crw in the set up of Theorem 1 of~\cite{RHG91},
an application of which readily yields the claim, and the proof of Theorem~\ref{teo:3.2ext} for $d\leq2$ is complete.
\end{subsection}
\begin{observacao}\label{symm}
We did not find an extension of the above-mentioned theorem of~\cite{RHG91} to $d=2$,
or any other way to show recurrence of $\left(\mathsf d_n\right)$ for general asymmetric $\pi$
within the conditions of Theorem~\ref{teo:3.2ext}.
\end{observacao}
\begin{subsection}{Proof of Theorem~\ref{teo:3.2ext} for $d\geq3$}
\label{3+d}
We can no longer expect recurrence of $\mathsf d$; quite the contrary. But transience suggests that we may have enough of a regeneration scheme, and we pursue precisely this idea. In order to implement it, we resort to {\em cut times} of the trajectory of $(\mathsf{x}_{\ms{n}})$; to ensure the existence of infinitely many of them, we need to restrict to boundedly supported $\pi$'s.
We will be rather sketchy in this subsection, since the ideas are all quite simple and/or have appeared before in a similar guise.
We now discuss a key concept and ingredient of our argument: cut times for $\mathsf x=(\mathsf{x}_{\ms{n}})$. First some notation:
for $i,j\in{\mathbb N}$, $i\leq j$, let $\mathsf x[i,j]:=\bigcup\limits_{k=i}^j\{\mathsf x_k\}$, and $\mathsf x[i,\infty):=\bigcup\limits_{l=i}^\infty\mathsf x[i,l]$, and set
\begin{equation}
\mathsf K_1=\inf{\lbrace n\in{\mathbb N}:\mathsf x[0,n]\cap \mathsf x[n+1,\infty)=\emptyset\rbrace},
\end{equation}
\noindent and, recursively, for $\ell\geq 2$,
\begin{equation}
\mathsf K_\ell:=\inf{\lbrace n>\mathsf K_{\ell-1}:\mathsf x[0,n]\cap \mathsf x[n+1,\infty)=\emptyset\rbrace}.
\end{equation}
\noindent $(\mathsf K_\ell)_{\ell\in{\mathbb N}_*}$ is a sequence of {\em cut times} for $(\mathsf{x}_{\ms{n}})$; under our conditions, it is ensured to be an a.s.~well defined infinite sequence of finite entries, according to Theorem 1.2 of~\cite{JP97}.
We will have three versions of the environment coupled in a coalescent way, as above, with different initial conditions: $\mathring\omega$, starting from $\mathbf 0$;
$\omega$, starting from $\hat\mu_0$; and $\tilde\omega$, starting from $\hat\nu$; in particular, we have that $\mathring\omega_\mathbf{x}(t)\leq\omega_\mathbf{x}(t),\tilde\omega_\mathbf{x}(t)$ for all $\mathbf{x}\in\mathbb{Z}^d$ and $t\geq0$. We may suppose that the initial conditions of $\omega$ and $\tilde\omega$ are independent.
We now consider several coupled versions of our random walk, starting with two: $X$, in the environment $\omega$, as in the statement of Theorem~\ref{teo:3.2ext}; and $\mathring X$, in the environment $\mathring\omega$. $X$ and $\mathring X$ are constructed from
the same $\mathsf x$ and $\mathsf V$, following the alternative construction of Subsection~\ref{sec:2.2}. Let $\varsigma_\ell$ and $\mathring \varsigma_\ell$ be the times $X$ and $\mathring X$ take to make $\mathsf K_\ell$ jumps, respectively. It may be readily checked, similarly as in Section~\ref{conv} --- see~(\ref{comp3}, \ref{sub}) ---, from the environmental monotonicity pointed to in the above paragraph and the present construction of $X$ and $\mathring X$, that $\mathring \varsigma_\ell\leq\varsigma_\ell$ for all $\ell\in{\mathbb N}_*$.
Finally, for each $\ell\in{\mathbb N}_*$, we consider three modifications of $\mathring X$ and $X$, namely, $\mathring X_\ell$, $X_\ell$ and $X'_\ell$, defined as follows:
\begin{equation}\label{chex}
\mathring X_\ell(t)=
\begin{cases}
\hspace{2cm}\mathring X(t),&\text{ for } t\leq\mathring \varsigma_\ell,\\
\text{evolves in the environment } \tilde\omega,&\text{ for } t>\mathring \varsigma_\ell;
\end{cases}
\end{equation}
%
\begin{equation}\label{xp}
X_\ell(t)=
\begin{cases}
\hspace{2cm}X(t),&\text{ for } t\leq\varsigma_\ell,\\
\text{evolves in the environment } \tilde\omega,&\text{ for } t>\varsigma_\ell;
\end{cases}
\end{equation}
%
\begin{equation}\label{xpp}
X'_\ell(t)=
\begin{cases}
\hspace{2cm}X(t),&\text{ for } t\leq\varsigma_\ell,\\
\text{evolves in the environment } \tilde\omega(\cdot-\varsigma_\ell+\mathring \varsigma_\ell),&\text{ for } t>\varsigma_\ell.
\end{cases}
\end{equation}
Let $U$ denote the first time after which $\mathring X$ and $X$ see the same environments $\mathring\omega,\omega,\tilde\omega$ (from where they stand at each subsequent time).
Lemma~\ref{coupenv} ensures that $U$ is a.s.~finite. Let us consider the event $A_{\ell,t}:=\{t>\varsigma_\ell>U\}$.
It readily follows that in $A_{\ell,t}$
\begin{equation}\label{eqs1}
\mathring X(t)=\mathring X_\ell(t)=X'_\ell(t+\varsigma_\ell-\mathring \varsigma_\ell)\text{ and }X(t)=X_\ell(t).
\end{equation}
Given $\mathring\omega,\omega,\tilde\omega$, let $P^{\,\mathring\omega,\omega,\tilde\omega}$ denote the probability measure underlying our coupled random walks. Since $\hat\nu$ is invariant for the environmental BD processes, it follows readily from our construction that $P^{\,\mathring\omega,\omega,\tilde\omega}(X_\ell\in\cdot)$ and $P^{\,\mathring\omega,\omega,\tilde\omega}(X'_\ell\in\cdot)$
have the same distribution (as random probability measures).
For $R=(-\infty,r_1)\times\cdots\times(-\infty,r_d)$ a semi-infinite open hyperrectangle of ${\mathbb R}^d$, we have that
\begin{eqnarray}\nonumber
&\big|P^{\,\mathring\omega,\omega,\tilde\omega}\big(X(t)\in R\sqrt{\gamma t}\,\big)-P^{\,\mathring\omega,\omega,\tilde\omega}\big(X_\ell(t)\in R\sqrt{\gamma t}\,\big)\big|&\\\label{eqs2}
&\leq P^{\,\mathring\omega,\omega,\tilde\omega}(A_{\ell,t}^c)
\leq P^{\,\mathring\omega,\omega,\tilde\omega}(\varsigma_\ell\geq t)+P^{\,\mathring\omega,\omega,\tilde\omega}(U\geq\varsigma_\ell)&
\end{eqnarray}
--- as before, $\gamma=1/\mu$; see statement of Theorem~\ref{teo:3.2ext} ---, and it follows that
\begin{equation}\label{eqs2a}
\limsup_{\ell\to\infty}\limsup_{t\to\infty}
\big|P^{\,\mathring\omega,\omega,\tilde\omega}\big(X(t)\in R\sqrt{\gamma t}\,\big)-P^{\,\mathring\omega,\omega,\tilde\omega}\big(X_\ell(t)\in R\sqrt{\gamma t}\,\big)\big|=0
\end{equation}
for a.e.~$\mathring\omega,\omega,\tilde\omega$.
Similarly, we find that for a.e.~$\mathring\omega,\omega,\tilde\omega$,
\begin{equation}\label{eqs3}
\limsup_{\ell\to\infty}\limsup_{t\to\infty}
\big|P^{\,\mathring\omega,\omega,\tilde\omega}\big(X'_\ell(t)\in R\sqrt{\gamma t}\,\big)-P^{\,\mathring\omega,\omega,\tilde\omega}\big(\mathring X((t-\delta_\ell)^+)\in R\sqrt{\gamma t}\,\big)\big|=0,
\end{equation}
where $\delta_\ell=\varsigma_\ell-\mathring \varsigma_\ell$.
Now letting $B_{\ell,t,\epsilon}$ denote the event $\big\{\|\mathring X((t-\delta_\ell)^+)-\mathring X(t)\|\leq\epsilon\sqrt{\gamma t}\big\}$, where $\epsilon>0$, we have that
\begin{eqnarray}\nonumber
&\big|P^{\,\mathring\omega,\omega,\tilde\omega}\big(\mathring X((t-\delta_\ell)^+)\in R\sqrt{\gamma t}\,\big)-P^{\,\mathring\omega,\omega,\tilde\omega}\big(\mathring X(t)\in R\sqrt{\gamma t}\,\big)\big|&\\\label{eqs4}
&\leq P^{\,\mathring\omega,\omega,\tilde\omega}\big(\mathring X(t)\in(R^+_\epsilon\setminus R^-_\epsilon)\sqrt{\gamma t}\,\big)+ P^{\,\mathring\omega,\omega,\tilde\omega}\big(B_{\ell,t,\epsilon}^c\big),&
\end{eqnarray}
where $R^\pm_\epsilon=(-\infty,r_1\pm\epsilon)\times\cdots\times(-\infty,r_d\pm\epsilon)$.
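For completeness, we note how~\eqref{eqs4} follows: on $B_{\ell,t,\epsilon}$, we have the inclusions
\begin{equation*}
\big\{\mathring X((t-\delta_\ell)^+)\in R\sqrt{\gamma t}\,\big\}\subset\big\{\mathring X(t)\in R^+_\epsilon\sqrt{\gamma t}\,\big\}
\,\text{ and }\,
\big\{\mathring X(t)\in R^-_\epsilon\sqrt{\gamma t}\,\big\}\subset\big\{\mathring X((t-\delta_\ell)^+)\in R\sqrt{\gamma t}\,\big\},
\end{equation*}
so the difference of the two probabilities on the left hand side of~\eqref{eqs4} is bounded by the probability that $\mathring X(t)$ falls in $(R^+_\epsilon\setminus R^-_\epsilon)\sqrt{\gamma t}$, plus $P^{\,\mathring\omega,\omega,\tilde\omega}\big(B_{\ell,t,\epsilon}^c\big)$.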
We now claim that for all $\ell\in{\mathbb N}_*$ and $\epsilon>0$
\begin{equation}\label{eqs5}
\limsup_{t\to\infty}P^{\,\mathring\omega,\omega,\tilde\omega}\big(B_{\ell,t,\epsilon}^c\big)=0
\end{equation}
for a.e.~$\mathring\omega,\omega,\tilde\omega$.
It then follows from~(\ref{eqs3}, \ref{eqs4}, \ref{eqs5}) and Theorem~\ref{teo:3.2} that for $\epsilon>0$
\begin{equation}\label{eqs6}
\limsup_{\ell\to\infty}\limsup_{t\to\infty}
\big|P^{\,\mathring\omega,\omega,\tilde\omega}\big(X'_\ell(t)\in R\sqrt{\gamma t}\,\big)-\Phi(R)\big|\leq\Phi(R^+_\epsilon\setminus R^-_\epsilon)
\end{equation}
for a.e.~$\mathring\omega,\omega,\tilde\omega$, where $\Phi$ is the $d$-dimensional centered Gaussian probability measure with covariance matrix $\Sigma$.
Since $\epsilon$ is arbitrary, and the left hand side of~\eqref{eqs6} does not depend on $\epsilon$, we find that it vanishes for a.e.~$\mathring\omega,\omega,\tilde\omega$.
From the remark in the paragraph right below~\eqref{eqs1}, we have that $P^{\,\mathring\omega,\omega,\tilde\omega}(X_\ell(t)\in R\sqrt{\gamma t})$ is distributed as
$P^{\,\mathring\omega,\omega,\tilde\omega}\big(X'_\ell(t)\in R\sqrt{\gamma t}\,\big)$; it follows that
\begin{equation}\label{eqs7}
\limsup_{\ell\to\infty}\limsup_{t\to\infty}\big|P^{\,\mathring\omega,\omega,\tilde\omega}\big(X_\ell(t)\in R\sqrt{\gamma t}\,\big)-\Phi(R)\big|=0
\end{equation}
for a.e.~$\mathring\omega,\omega,\tilde\omega$, and it follows from~\eqref{eqs2a} that
\begin{equation}\label{eqs8}
\limsup_{t\to\infty}\big|P^{\,\mathring\omega,\omega,\tilde\omega}\big(X(t)\in R\sqrt{\gamma t}\,\big)-\Phi(R)\big|=0
\end{equation}
for a.e.~$\mathring\omega,\omega,\tilde\omega$, which is the claim of Theorem~\ref{teo:3.2ext}.
In order to complete the proof, it remains to establish~\eqref{eqs5}. For that, we first note that
\begin{equation}\label{bd1}
\|\mathring X((t-\delta_\ell)^+)-\mathring X(t)\|=\Big\|\sum_{i=\mathsf N_{(t-\delta_\ell)^+}+1}^{\mathsf N_t}\xi_i\Big\|\leq K \big(\mathsf N_t-\mathsf N_{(t-\delta_\ell)^+}\big),
\end{equation}
where $K$ is the radius of the support of $\pi$, and $\mathsf N_t$, we recall from Subsection~\ref{sec:3.3/2},
counts the jumps of $\mathring X$ up to time $t$. Thus, the probability on the left hand side of~\eqref{eqs5} is bounded above by
\begin{equation}\label{bd2}
P^{\,\mathring\omega,\omega,\tilde\omega}\big(\mathsf N_t-\mathsf N_{(t-u)^+}>\epsilon K^{-1}t\big)+P^{\,\mathring\omega,\omega,\tilde\omega}\big(\delta_\ell>u\big),
\end{equation}
where $u>0$ is arbitrary.
One may readily check from our conditions on $\varphi$ that $\mathsf N_t-\mathsf N_{(t-u)^+}$ is stochastically dominated
by a Poisson random variable of mean $u$ for each $t$, and it follows that the first term in~\eqref{bd2} vanishes as $t\to\infty$ for a.e.~$\mathring\omega,\omega,\tilde\omega$;
\eqref{eqs5} then follows since $u$ is arbitrary and $\delta_\ell$ is finite a.s.
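For concreteness, the vanishing of the first term in~\eqref{bd2} may be seen via a standard Chernoff bound for the Poisson distribution: if $Y$ is Poisson with mean $u$, then, for $a>u$,
\begin{equation*}
P(Y\geq a)\leq e^{-u}\Big(\frac{eu}{a}\Big)^{a};
\end{equation*}
taking $a=\epsilon K^{-1}t$, with $u$ and $\epsilon$ fixed, the bound vanishes as $t\to\infty$.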
\end{subsection}
\section{Environment seen from the particle}
\label{env}
\setcounter{equation}{0}
\noindent
We finally turn, in the last section of this paper, to the behavior of the environment seen from the particle {\em at jump times}.
Our aim is to derive the convergence of its distribution as time/the number of jumps diverges, and to compare the limiting
distribution with the product of invariant distributions of the marginal BD processes.
The main result of this section, stated next, addresses these issues under different subsets of the following set of conditions
on the parameters of our process.
\begin{enumerate}
\item\begin{equation}\label{cond1}{\mathbf E}(\xi_1)\ne0;\end{equation}
\item\begin{equation}\label{cond2}{\mathbf E}(\|\xi_1\|^{2+\varepsilon})<\infty\ \text{ for some }\varepsilon>0; \end{equation}
\item \begin{equation}\label{cond3}\mathsf x \mbox{ is transient, and } \pi \mbox{ has bounded support}; \end{equation}
\item \begin{equation}\label{cond4}\inf_{n\geq0}\varphi(n)>0,\end{equation}
\end{enumerate}
out of which we compose the following conditions:
\begin{itemize}
\item[$1'.$] Conditions~\eqref{cond1} and~\eqref{cond4} hold;
\item[$2'.$] Conditions~\eqref{cond2} and~\eqref{cond4} hold;
\item[$3'.$] Conditions~\eqref{cond3} and~\eqref{cond4} hold.
\end{itemize}
We note that in neither case do we require monotonicity of $\varphi$\footnote{which is bounded above (by 1), as elsewhere in this paper}.
As anticipated, we focus on the homogeneously distributed case of the environment (i.e., we assume, as in the previous section, that
$p_n\equiv p\in(0,1/2)$) starting from a product of identical distributions on ${\mathbb N}$ with a positive exponential moment,
and we will additionally assume that $\pi$ either has nonzero mean, or has a finite moment of order larger than 2.
Let $\omega$ be a family of iid homogeneous ergodic BD processes on ${\mathbb N}$ indexed by $\mathbb{Z}^d$, starting from $\hat\mu_0$ as in the
paragraph of~\eqref{expdec} of Section~\ref{ext}, and let $X$ be a time inhomogeneous random walk on $\mathbb{Z}^d$ starting from
$\mathbf 0$ in the environment $\omega$, as in the previous sections. Let us recall that $\tau_n$ denotes the time of the $n$-th jump of $X$,
$n\geq1$, and consider
\begin{equation}\label{envs}
\varpi_\mathbf{x}(n)=\omega_{X(\tau_n-)+\mathbf{x}}(\tau_n),\,\mathbf{x}\in\mathbb{Z}^d.
\end{equation}
$\varpi(n):=\{\varpi_\mathbf{x}(n),\,\mathbf{x}\in\mathbb{Z}^d\}$ represents the environment seen by the particle {\em right before} its $n$-th jump.
\begin{teorema} \label{conv_env} \mbox{}
%
Assume the condition stipulated on $\hat\mu_0$ in Section~\ref{ext}\footnote{see paragraph of~\eqref{expdec}},
and suppose that ${\mathbf E}(\|\xi_1\|)<\infty$, and that, of the conditions listed at the beginning of this section, either $1$, $2'$ or $3$ holds.
Then
\begin{itemize}
\item[$1.$] $\varpi(n)$ converges in ${\mathbf P}_{\hat{\mu}_{{0}}}$-distribution (in the product topology on ${\mathbb N}^{\mathbb{Z}^d}$) to $\varpi:=\{\varpi_\mathbf{x},\,\mathbf{x}\in\mathbb{Z}^d\}$,
whose distribution does not depend on the particulars of the initial distribution\footnotemark.
\end{itemize}
Moreover, if either $1'$, $2'$ or $3'$ holds, then
\begin{itemize}
\item[$2.$] $\varpi$ is absolutely continuous with respect to $\hat\nu$.
\end{itemize}
\end{teorema}
\footnotetext{i.e., it equals the one for the case where $\hat\mu_0=\hat\nu$}
\begin{observacao}\label{mono}
There may be a way to adapt our approach in this section to relax/modify Condition 4 to some extent, by, say, requiring a slow decay of $\varphi$ at $\infty$, perhaps adding monotonicity, as in the previous sections. But we do not feel that a full relaxation of that condition, even if imposing monotonicity, is within the present approach, at least not without substantial new ideas (to control tightness to a sufficient extent).
\end{observacao}
\begin{observacao}\label{short}
As with results in previous sections, we do not expect our conditions for the above results to be close to optimal; again, our aim is to give reasonably natural conditions under which we are able to present an argument in a reasonably simple way. A glaring gap in our conditions is the case of mean zero $\xi_1$ with $\varphi$ not bounded away from zero, even assuming monotonicity of $\varphi$, as we did for the results in previous sections;
notice that the domination implied
by~(\ref{comp1}, \ref{comp2}) holds for jump times of $\breve X$, not of $X$, and is thus not directly applicable; nor did we find an indirect application of it, or another way to obtain enough tightness for the environment at jump times of $X$ to get our argument going in that case.
\end{observacao}
As a preliminary for the proof of Theorem~\ref{conv_env}, we consider the (prolonged) {\em backwards in time}
random walk starting from $X(\tau_n-)$,
$\mathsf y=\{\mathsf y_\ell,\,\ell\in{\mathbb N}\}$, such that $\mathsf y_0=0$ and, for $\ell\in{\mathbb N}_*$,
\begin{equation}\label{back}
\mathsf y_\ell=\sum_{i=n-\ell}^{n-1}\xi'_i,
\end{equation}
where $\xi'_i=-\xi_i$, $i\geq1$, and we have prolonged $\xi$ to non positive integer indices in an iid way.
Notice that, for all $n\in{\mathbb N}_*$, $\mathsf y$ is a random walk starting from $\mathbf 0$ whose (iid) jumps are distributed as $-\xi_1$
(and thus its distribution does not depend on $n$);
notice also that $\mathsf y_\ell=\mathsf x_{n-1-\ell}-\mathsf x_{n-1}$ for $0\leq\ell\leq n-1$.
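Indeed, assuming (as in the construction of $\mathsf x$) that $\mathsf x_k=\sum_{i=1}^k\xi_i$, we have, for $0\leq\ell\leq n-1$,
\begin{equation*}
\mathsf x_{n-1-\ell}-\mathsf x_{n-1}=-\sum_{i=n-\ell}^{n-1}\xi_i=\sum_{i=n-\ell}^{n-1}\xi'_i=\mathsf y_\ell.
\end{equation*}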
It is indeed convenient to use a single backward random walk $\mathsf z$ (with the same distribution as $\mathsf y$) for all $n$.
So in many arguments below, we condition on the trajectory of $\mathsf z$ (which appears as a superscript in (conditional) probabilities below).
\medskip
\noindent{\em Proof of Theorem~\ref{conv_env}.}
We devote the remainder of this section to this proof.
Let $M$ be an arbitrary positive integer, and consider the random vector
\begin{equation}\label{vpm}
\varpi^M(n):=\{\varpi_\mathbf{x}(n),\,\|\mathbf{x}\|\leq M\}.
\end{equation}
To establish the first assertion of Theorem~\ref{conv_env}, it is enough to show that $\varpi^M(n)$ converges in distribution as $n\to\infty$.
We start by outlining a fairly straightforward argument for the first assertion of Theorem~\ref{conv_env} under Condition 3. In this case, we are again (as in the argument for the case of $d\geq3$ of Theorem~\ref{gclt} above) under the conditions for which we have cut times for the trajectory of $\mathsf z$.
It follows that there a.s.~exists a finite cut time $T_M$ such that the trajectory of $\mathsf z$ after $T_M$ never visits $\{\mathbf{x},\,\|\mathbf{x}\|\leq M\}$.
Then, assuming that the environment is started from $\hat\nu$, we have that, as soon as $n>T_M$, the conditional distribution of $\varpi^M(n)$ given $\mathsf z$ equals that of $\check\varpi^{M,\mathsf z}$, which is defined to be the $\{\mathbf{x},\,\|\mathbf{x}\|\leq M\}$-marginal of the environment of a process $(X,\omega)$, with $\omega$ started from $\hat\nu$ and $\mathsf x$ started from $\mathsf z_{T_M}$, seen at the time of the $T_M$-th jump of $X$ around the position it occupied immediately before that jump, with $\mathsf x_\ell=\mathsf z_{T_M-\ell}$, $0\leq\ell\leq T_M$. Notice that the result of the integration of the distribution of $\check\varpi^{M,\mathsf z}$ with respect to the distribution of $\mathsf z$ does not depend on $n$; we may denote by $\check\varpi^M$ the random vector having such (integrated) distribution. It is thus quite clear that $\varpi^M(n)$ converges to $\check\varpi^M$ in distribution as $n\to\infty$. That this also holds under the more general assumption on the initial environment stated in Theorem~\ref{conv_env} can be readily argued via a coupling argument between $\hat\mu_0$ and $\hat\nu$,
as done in Section~\ref{ext}. This concludes the proof of Theorem~\ref{conv_env} under Condition 3.
Below, a similar argument, not however using cut times, will be outlined for the case where Condition 1 holds --- see Subsubsection~\ref{mune0}.
In order to obtain convergence of $\varpi^M(n)$ when
${\mathbf E}(\xi_1)=0$, and either $d\leq2$ or $\pi$ has unbounded support,
we require a bound on the tails of the single-site marginal distributions of the environment at appropriate jump times of $X$, to be specified below. (We also need Condition 2.)
In order to find such a bound, we felt the need to further impose Condition 4.
A bound of the same kind will also enter our argument for the second assertion of Theorem~\ref{conv_env}.
We devote the next subsection to obtaining this bound, and the two subsequent subsections to the conclusion of the proof of Theorem~\ref{conv_env}.
\subsection{Bound on the tail of the marginal distribution of the environment}
\begin{lema}\label{benv}
Let $\mathbf{x}\in\mathbb{Z}^d,\,m\geq1$ and suppose ${\mathcal R}$ is a stopping time of $\mathsf z$ such that ${\mathcal R}\geq m$ a.s., and on $\{m\leq{\mathcal R}<\infty\}$ we have that
$\mathsf z_{\mathcal R}=\mathbf{x}$ and $\mathsf z_{{\mathcal R}-i}\ne\mathbf{x}$, $i=1,\ldots,m$.
Then, assuming that ${\mathbf E}(\xi_1)=0$, ${\mathbf E}(\|\xi_1\|^2)<\infty$ and that~\eqref{cond4} holds, there exist a constant $\alpha>0$ and $m_0\geq1$
such that for all $m\geq m_0$, outside of an event involving $\mathsf z$ alone of probability exponentially decaying in $m$, we have that
%
\begin{equation}\label{benv1}
%
{\mathbf P}^\mathsf z_{\hat\mu_0}\big(\omega_{\mathbf{x}-\mathsf z_{n-1}}(\tau_{(n-{\mathcal R}+\frac m2)^+})> (\log m)^2\big)\leq e^{-\alpha (\log m)^2},
%
\end{equation}
for all $n\geq0$, where ${\mathbf P}^\mathsf z_{\hat\mu_0}$ denotes the conditional probability ${\mathbf P}_{\hat\mu_0}(\cdot|\mathsf z)$.
%
\end{lema}
\begin{proof}
Given $n\geq0$, let us denote $\omega_{\mathbf{x}-\mathsf z_{n-1}}$ by $\omega'_{\mathbf{x}}$.
For $n\leq{\mathcal R}$, the bound follows from Lemma~\ref{fbe}.
For this reason, we may restrict to the event where $n\geq{\mathcal R}$.
Let ${\mathcal R}=\theta_0, \theta_1, \theta_2,\ldots$ be the successive visits of $\mathsf z$ to $\mathbf{x}$ starting at ${\mathcal R}$;
this may be a finite set of times;
set $I_0=0$,
%
and, for $k>0$, set $I_k=\inf\{i>I_{k-1}:\theta_{i}-\theta_{i-1}> km\}$, ${\mathcal I}_k=[\theta_{I_{k-1}},\ldots,\theta_{I_k-1}]\cap{\mathbb Z}$,
${\mathcal I}'_k=(\theta_{I_{k}-1},\ldots,\theta_{I_k})\cap{\mathbb Z}$;
and let ${\mathcal I}'=\cup_{k\geq1}{\mathcal I}'_k$. Notice that
\begin{equation}\label{jk1}
\mathsf z\ne\mathbf{x} \text{ on }{\mathcal I}', \text{ and }\, |{\mathcal I}'_k|\geq km, \, k\geq1.
\end{equation}
Now let us consider $|{\mathcal I}_k|$. We will bound the upper tail of its distribution. This will be based on a bound to the upper tail of the distribution
of $I_k-I_{k-1}$. A moment's thought reveals that, over all the cases of $\pi$ comprised in our assumptions, the worst case is the one dimensional,
recurrent case.
%
For this case, and thus for all cases, Proposition 32.3 of~\cite{Spi76} yields that
\begin{equation}\label{ik1}
{\mathbf P}\big(I_k-I_{k-1}>(km)^2\big)=\big[{\mathbf P}(\theta_1-\theta_0\leq km)\big]^{(km)^2}\leq \Big(1-\frac c{\sqrt{km}}\Big)^{(km)^2}\leq e^{-ckm}
\end{equation}
as soon as $m$ is large enough, where (here and below) $c$ is a positive real constant, not necessarily the same in each appearance.
(Here and below, we omit the subscript in the probability symbol when it may be restricted to the distribution of $\mathsf z$ only.)
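For the last inequality in~\eqref{ik1}, one may use $1-x\leq e^{-x}$, $x\geq0$:
\begin{equation*}
\Big(1-\frac c{\sqrt{km}}\Big)^{(km)^2}\leq\exp\Big(-\frac c{\sqrt{km}}\,(km)^2\Big)=e^{-c(km)^{3/2}}\leq e^{-ckm},
\end{equation*}
the last step holding for $km\geq1$ (recall that $c$ may change from one appearance to the next).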
We now note that $|{\mathcal I}_k|\preceq km(I_k-I_{k-1})$, and it follows that
\begin{equation}\label{ik2}
{\mathbf P}\big(|{\mathcal I}_k|>(km)^{3}\big)\leq e^{-ckm}.
\end{equation}
It readily follows that, setting ${\mathcal J}={\mathcal J}(\mathbf{x},m)=\cap_{k\geq1}\big\{ |{\mathcal I}_k|\leq(km)^{3}\big\}$, we have that
\begin{equation}\label{k1}
{\mathbf P}\big({\mathcal J}^c\big)\leq e^{-cm},
\end{equation}
as soon as $m$ is large enough.
Now, given $\mathsf z$, let $K$ be such that $0\in{\mathcal I}_K\cup{\mathcal I}'_K$. We will assume that $\mathsf z\in{\mathcal J}$ and $n\geq{\mathcal R}$, and bound the conditional distribution of $\omega'_{\mathbf{x}}(\tau_{n-{\mathcal R}})$ given such $\mathsf z$, via coupling, as follows.
It is quite clear from the characteristics of $X$ that, given the boundedness assumptions on $\varphi$, its inter-jump times can be stochastically bounded from below and above by exponential random variables with rates $1$ and $\delta:=\inf\varphi$, respectively, independent of $\omega$.
Let us for now
consider the successive continuous time intervals ${\mathfrak I}_k,{\mathfrak I}_k'$, $1\leq k\leq K$, in the timeline of $X$,
during which $\mathsf z$ jumps in ${\mathcal I}_k$, ${\mathcal I}'_k$, $1\leq k\leq K$, respectively, if there is any such interval\footnote{${\mathfrak I}_K'$ may be empty.}.
%
We recall that time for $X$ and $\mathsf z$ moves in different directions.
Let ${\mathfrak T}_k=|{\mathfrak I}_k|$, ${\mathfrak T}'_k=|{\mathfrak I}_k'|$ denote the respective interval lengths.
\begin{observacao}\label{frakbds}
We note that, given our assumed bounds on $\varphi$, whenever $1\leq k<K$, we have that ${\mathfrak T}'_k$ may be bounded from below by the sum of $km$ independent standard exponential random variables. If $\mathsf z\in{\mathcal J}$, then, for $1\leq k\leq K$, we have that ${\mathfrak T}_k$ may be bounded from above by the sum of $(km)^3$ iid exponential random variables of rate $\delta$.
\end{observacao}
Whenever $K\geq2$, we introduce, enlarging the probability space if necessary, for each $1\leq k<K$,
versions of
$\omega'_\mathbf{x}$ evolving at ${\mathfrak I}'_k$, namely $\omega^{\text{eq}}_k$ and $\omega^+_k$, coupled to $\omega'_\mathbf{x}$
so that, at $\tau_{n-\theta_{I_k}}$, $\omega^{\text{eq}}_k$ is in equilibrium, and $\omega^+_k$ equals the maximum of $\omega$ in ${\mathfrak I}_{k+1}$,
and $\omega^{\text{eq}}_k$, $\omega^+_k$ and $\omega'_\mathbf{x}$ evolve independently on ${\mathfrak I}'_k$ until any two of them meet, after which time they coalesce.
Notice that it follows, in this case, that $\omega^+_k\geq\omega'_\mathbf{x}(\tau_{n-\theta_{I_k}})$.
We now need an upper bound for the distribution of $\omega^+_k$ at time $\tau_{n-\theta_{I_k}}$ assuming it starts from equilibrium at time $\tau_{n-\theta_{I_k+1}}$. From the considerations in Remark~\ref{frakbds}, we readily find that it is bounded by the max of a BDP starting from
equilibrium during a time of length given by the sum of $(km)^3$ iid exponential random variables of rate $\delta$. In Appendix~\ref{frakbd}
we give an upper bound for the tail of the latter random variable --- see Lemma~\ref{fbd} ---, which implies, from the above reasoning, that
\begin{equation}\label{ub1}
\sum_{w\geq0}{\mathbf P}^\mathsf z\big(\omega^+_k(\tau_{n-\theta_{I_k}})>(\log(km))^2|\omega'_\mathbf{x}(\tau_{(n-\theta_{I_{k+1}-1})^+})=w\big)\nu(w)\leq e^{-c(\log(km))^2},
\end{equation}
$1\leq k<K$, where $\nu$, we recall, is the equilibrium distribution of the underlying environmental BDP. As follows from the proof of
Lemma~\ref{fbd} --- see Remark~\ref{gen_init} ---,~\eqref{ub1} also holds when we replace $\nu$ by any distribution on ${\mathbb N}$ with an
exponentially decaying tail; so, in particular, it holds for $k=K-1$ if we replace $\nu$ by the distribution of
$\omega'_\mathbf{x}\big(\tau_{(n-\theta_{I_{K}-1})^+}\big)$, which may be checked to have such a tail --- see Lemma~\ref{fbe}.
We now consider the events $A_k=\{\omega^+_k(\tau_{n-\theta_{I_k}})\leq(\log(km))^2\}$, $k=1,\ldots, K$, and also the events
$A'_k=\{$during ${\mathfrak I}'_k$, both $\omega^{\text{eq}}_k$ and $\omega^+_k$ visit the origin\}, $k=1,\dots,K-1$.
\begin{observacao}\label{couple}
In $A'_k$, $\omega'_\mathbf{x}$ and $\omega^{\text{eq}}_k$ (and $\omega^+_k$) coincide at time $\tau_{n-\theta_{I_k-1}}$.
\end{observacao}
Given the drift of the BDP towards the origin, the fact that $|{\mathcal I}'_k|\geq km$, and also the lower bound on $\varphi$, by a standard large deviation estimate,
we have that
\begin{equation}\label{ub2}
{\mathbf P}^\mathsf z\big((A'_k)^c|A_k\big)\leq e^{-ckm},\,k=1,\ldots,K-1.
\end{equation}
Let us set $B_K=\cap_{k=1}^KA_k\cap\cap_{k=1}^{K-1}A'_k$, if $K\geq2$, and $B_K\big|_{K=1}=A_1$.
From the reasoning in the last two paragraphs
(given $\mathsf z\in{\mathcal J}$, and minding Remark~\ref{couple}), we readily find that
\begin{equation}\label{ub3}
{\mathbf P}^\mathsf z_{\hat\mu_0}\big((B_K)^c\big)\leq e^{-c(\log m)^2},\,K\geq2,
\end{equation}
%
and the same may be readily seen to hold also for $K=1$. Notice that the above bound is uniform in $K\geq1$.
Combining this estimate with~\eqref{k1}, using the fact that in $B_K$ we have $\omega'_{\mathbf{x}}(\tau_{n-{\mathcal R}})\leq(\log m)^2$ and that
from ${\mathcal R}-\frac m2$ to ${\mathcal R}-1$ the walk $\mathsf z$ does not visit $\mathbf{x}$, and resorting again to a coupling of $\omega'_\mathbf{x}$ to suitable processes $\omega^{\text{eq}}_0$ and $\omega^+_0$ on the time interval $\big[\tau_{(n-{\mathcal R})^+},\tau_{(n-{\mathcal R}+\frac m2)^+}\big]$, similar to the couplings of $\omega'_\mathbf{x}$ to $\omega^{\text{eq}}_k$ and $\omega^+_k$ on ${\mathfrak I}'_k$, as well as to the lower bound on $\varphi$ and a standard large deviation estimate, as in the argument for~\eqref{ub2} above, the result for the
case where $\omega'_\mathbf{x}$ starts from equilibrium follows. If the initial distribution of $\omega'_\mathbf{x}$ is not necessarily the equilibrium one, but
%
satisfies the conditions of Section~\ref{ext} --- see paragraph of~\eqref{expdec} ---, then, by the considerations at the end of the paragraph of~\eqref{ub1} above, the ensuing arguments are readily seen to apply.
\end{proof}
\subsection{Conclusion of the proof of the first assertion of Theorem~\ref{conv_env}}
\subsubsection{First case: ${\mathbf E}(\xi_1)=0$ and $d=1$}\label{m01}
\noindent
In this case we also assume, according to the conditions of Theorem~\ref{conv_env}, that ${\mathbf E}(|\xi_1|^{2+\varepsilon})<\infty$,
for some $\varepsilon>0$; we may assume, for simplicity, that $\varepsilon\leq2$.
Given $\mathsf z$ and $M\in{\mathbb N}$, consider the following (discrete) stopping times of $\mathsf z$:
$\vec{\vartheta}_1=\inf\{n>0: |\mathsf z_n|>\Upsilon_1\}$, with $\Upsilon_1=M$, and for $\ell\geq1$,
\begin{eqnarray}\label{vt1}
\cev{\vartheta}_{\ell}&=&
\begin{cases}
\inf\big\{n>\vec{\vartheta}_{\ell}: \mathsf z_n<\mathsf z_{\vec{\vartheta}_{\ell}}\big\},&\text{ if } \mathsf z_{\vec{\vartheta}_{\ell}}>0;\\
\inf\big\{n>\vec{\vartheta}_{\ell}: \mathsf z_n>\mathsf z_{\vec{\vartheta}_{\ell}}\big\},&\text{ if } \mathsf z_{\vec{\vartheta}_{\ell}}<0,
\end{cases}\\\label{vt2}
\vec{\vartheta}_{\ell+1}&=& \inf\big\{n\geq\cev{\vartheta}_{\ell}: |\mathsf z_n|>\Upsilon_{\ell+1}\big\},
\end{eqnarray}
where $\Upsilon_{\ell+1}=\max\big\{|\mathsf z_{n}|,\,n<\cev{\vartheta}_{\ell}\big\}$.
For $\ell\geq1$, $\vec{\vartheta}_\ell$ indicates the times at which $|\mathsf z|$ exceeds the previous maximum (above $M$), say at respective values $x_\ell$, and $\cev{\vartheta}_\ell$ the subsequent return times to either a value below $x_\ell$, if $x_\ell>0$, or a value above $-x_\ell$, if $x_\ell<0$. Notice that we may have
$\vec{\vartheta}_{\ell+1}=\cev{\vartheta}_{\ell}$ for some $\ell$
\footnote{The first moment condition on $\xi_1$ can be readily shown to imply that the set of such $\ell$ is a.s.~finite; we do not, however, take advantage of this remark in the sequel.}.
Figure~\ref{fig:cross} illustrates realizations of these random variables.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.4,trim=0 55 0 0,clip]{crorr.pdf}
\caption{Illustration of occurrences of random variables introduced in~(\ref{vt1},\ref{vt2}).
In (b) and (c), edges in red represent single jumps.
In (c) we have $\vec{\vartheta}_{\ell+1}=\cev{\vartheta}_{\ell}$, but not in (b).}
\label{fig:cross}
\end{figure}
Now set, for $\ell\geq1$
\begin{equation}\label{vt3}
\varrho_\ell=\cev{\vartheta}_{\ell}-\vec{\vartheta}_{\ell}\quad\text{and}\quad \chi_\ell=|\mathsf z_{\vec{\vartheta}_\ell}|-\Upsilon_\ell.
\end{equation}
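The construction in~(\ref{vt1},\ref{vt2},\ref{vt3}) can be illustrated numerically. The following Python sketch is an illustration only: it uses a simple $\pm1$ mean-zero walk, and the function name and the level $M$ are our own choices, not taken from the text. It computes the crossing times $\vec{\vartheta}_\ell$, the return times $\cev{\vartheta}_\ell$, and the quantities $\varrho_\ell$ and $\chi_\ell$ along a simulated trajectory.

```python
import random

def crossing_times(z, M):
    """Hypothetical helper: compute the record-crossing data of |z| above
    level M, i.e. the times and quantities of (vt1)-(vt3).
    `z` is a list of walk positions z_0, z_1, ..."""
    records = []   # entries (vec_theta, cev_theta, varrho, chi)
    upsilon = M    # running level Upsilon_l
    n = 1
    while n < len(z):
        # vec{theta}_l: first time |z| exceeds the running level
        while n < len(z) and abs(z[n]) <= upsilon:
            n += 1
        if n >= len(z):
            break
        fwd = n
        x = z[fwd]
        # cev{theta}_l: return below x if x > 0, or above x if x < 0
        m = fwd + 1
        while m < len(z) and ((x > 0 and z[m] >= x) or (x < 0 and z[m] <= x)):
            m += 1
        if m >= len(z):
            break
        bwd = m
        records.append((fwd, bwd, bwd - fwd, abs(x) - upsilon))
        # Upsilon_{l+1}: max of |z_n| for n < cev{theta}_l
        upsilon = max(abs(v) for v in z[:bwd])
        n = bwd  # note vec{theta}_{l+1} may equal cev{theta}_l
    return records

random.seed(0)
z = [0]
for _ in range(10000):
    z.append(z[-1] + random.choice((-1, 1)))
recs = crossing_times(z, M=5)
assert all(rho >= 1 and chi >= 1 for _, _, rho, chi in recs)
```

For the $\pm1$ walk the overshoot $\chi_\ell$ is always $1$; under a step distribution with heavier tails it can be larger, which is the point of the tail bound~\eqref{u2} below.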
We will argue in Appendix~\ref{rwrs}
that for $m\geq1$
\begin{eqnarray}\label{u1}
&\inf_{\ell}{\mathbf P}(\varrho_\ell>m)\geq\frac{\text{const}}{\sqrt m};&\\\label{u2}
&\sup_{\ell}{\mathbf P}(\chi_\ell>m)\leq\frac{\text{const}}{m^\varepsilon}.&
\end{eqnarray}
For $m\geq1$ fixed, let ${\mathcal L}_m=\inf\{\ell\geq1:\varrho_\ell\geq m\}$. It readily follows from~\eqref{u1} that
\begin{equation}\label{vt4}
{\mathbf P}({\mathcal L}_m>m)\leq\Big(1-\frac{\text{const}}{\sqrt m}\Big)^m\leq e^{-c\sqrt m},
\end{equation}
for some constant $c>0$, and it follows from~\eqref{u2} that for $b>0$
\begin{equation}\label{vt5}
{\mathbf P}\big(\max_{1\leq\ell\leq m}\chi_\ell>m^b\big)\leq \text{const } m^{1-b\varepsilon},
\end{equation}
so that the latter probability vanishes as $m\to\infty$ for $b>1/\varepsilon$.
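The elementary inequality behind~\eqref{vt4} is $(1-x)^m\leq e^{-xm}$ with $x=\text{const}/\sqrt m$. A quick numerical sanity check in Python (the constant $0.5$ below is an illustrative choice; the value of the constant in~\eqref{u1} is unspecified):

```python
import math

c = 0.5  # illustrative constant; the "const" of (u1) is unspecified
for m in (10, 100, 1000, 10000):
    lhs = (1 - c / math.sqrt(m)) ** m   # the bound on P(L_m > m)
    rhs = math.exp(-c * math.sqrt(m))   # via 1 - x <= exp(-x)
    assert lhs <= rhs
```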
We now notice, referring to Appendix~\ref{rwrs} for the definitions of $\mathbf T^-_0$ and $\mathbf T^+_0$, that\break
$\big(\chi'_\ell:=\Upsilon_{\ell+1}-|\mathsf z_{\vec{\vartheta}_\ell}|,\varrho_\ell\big)$ is distributed as
$\big(\!\max_{0\leq i<\mathbf T^-_0}\mathsf z_i,\mathbf T^-_0\big)$, if $\mathsf z_{\vec{\vartheta}_\ell}>0$,
and as $\big(\!-\min_{0\leq i<\mathbf T^+_0}\mathsf z_i,\mathbf T^+_0\big)$, if $\mathsf z_{\vec{\vartheta}_\ell}<0$. In the former case, we have for $k\geq1$
\begin{equation}\label{vt6}
{\mathbf P}(\chi'_\ell>k;\,\varrho_\ell\leq m)\leq {\mathbf P}\big(\max_{0\leq i\leq m}\mathsf z_i>k\big)\leq\text{const }\frac{m}{k^2},
\end{equation}
where the latter passage follows from Kolmogorov's Maximal Inequality, and the same bound holds similarly in the latter case.
We thus have that, for $b>0$,
\begin{equation}\label{vt7}
{\mathbf P}\big(\max_{1\leq\ell<{\mathcal L}_m}\chi'_\ell>m^b;\,{\mathcal L}_m\leq m\big)\leq\text{const }m^{1-2b}\leq\text{const }m^{1-b\varepsilon},
\end{equation}
and, since $\Upsilon_{\ell}=M+\sum_{i=1}^{\ell-1}(\chi_i+\chi'_i)$, it follows that
\begin{equation}\label{vt8}
{\mathbf P}\big(\Upsilon_{{\mathcal L}_m}>m^{1+b}\big)\leq\text{const }m^{1-b\varepsilon},
\end{equation}
which thus vanishes as $m\to\infty$ as soon as $b>1/\varepsilon$.
\smallskip
We may now proceed directly to showing that
\begin{equation}\label{cau1}
\{\varpi^M(n),\,n\geq1\}\,\text{ is a Cauchy sequence in distribution.}
\end{equation}
Let us fix $m$ as above and $\mathbf{x}\in\{-m^{b+1},\dots,m^{b+1}\}$; we assume that $m>M^{1/(b+1)}$; we next set
\begin{equation}\label{R}
{\mathcal R}=\inf\{n\geq\cev{\vartheta}_{{\mathcal L}_m}:\,\mathsf z_n=\mathbf{x}\},
\end{equation}
which satisfies the conditions of Lemma~\ref{benv}, and we thus conclude from that lemma
that, if $\mathsf z\in{\mathcal J}$, then
\begin{equation}\label{benv2}
\limsup_{n\to\infty}{\mathbf P}^\mathsf z_{\hat\mu_0}\big(\omega'_{\mathbf{x}}(\tau_{(n-{\mathcal R}+\frac m2)^+})>(\log m)^2\big)\leq e^{-\alpha (\log m)^2},
\end{equation}
and thus that
\begin{equation}\label{benv3}
\limsup_{n\to\infty}{\mathbf P}^\mathsf z_{\hat\mu_0}\Big(\max_{\mathbf{x}\in\{-m^{b+1},\dots,m^{b+1}\}}\omega'_{\mathbf{x}}(\tau_{(n-{\mathcal R}+\frac m2)^+})>(\log m)^2\Big)\leq
e^{-c (\log m)^2},
\end{equation}
where $c$ is a positive number, depending on $\alpha$ and $b$ only. It readily follows from the arguments in the proof of Lemma~\ref{benv}
(namely, the coupling of $\omega'_\mathbf{x}$ and $\omega^{\text{eq}}$) that
\begin{equation}\label{benv4}
\limsup_{n\to\infty}{\mathbf P}^\mathsf z_{\hat\mu_0}\Big(\max_{\mathbf{x}\in\{-m^{b+1},\dots,m^{b+1}\}}\omega'_{\mathbf{x}}(\tau_{(n-\cev{\vartheta}_{{\mathcal L}_m}+\frac m2)^+})>(\log m)^2\Big)\leq
e^{-c (\log m)^2}.
\end{equation}
Now let $\mathsf z\in{\mathcal J}\cap\{\Upsilon_{{\mathcal L}_m}\leq m^{1+b}\}$, and choose $n_0\geq \cev{\vartheta}_{{\mathcal L}_m}$ so large that for $n\geq n_0$ we have
\begin{equation}\label{benv5}
{\mathbf P}^\mathsf z_{\hat\mu_0}\Big(\max_{\mathbf{x}\in\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}}\omega'_{\mathbf{x}}(\tau_{n-\cev{\vartheta}_{{\mathcal L}_m}+\frac m2})>(\log m)^2\Big)\leq
e^{-c (\log m)^2}.
\end{equation}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.48,trim=80 30 180 70,clip]{ilu44.pdf}
\caption{Schematic depiction of a stretch of the (backward) trajectory of $\mathsf z\in\{\Upsilon_{{\mathcal L}_m}\leq m^{1+b}\}$.}
\label{fig:ilu42}
\end{figure}
Now let us consider a family of BDPs on the timelines of $\mathbf{x}\in\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}$, which we
denote by $(\omega^{\text{eq}}_\mathbf{x})_{\mathbf{x}\in\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}}$,
starting at time $\tau_{n-\cev{\vartheta}_{{\mathcal L}_m}+\frac m2}$ in the product of equilibrium distributions
$\hat\nu_m:=\!\!\!\bigotimes\limits_{\mathbf{x}\in\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}}\!\!\!\ \nu$,
independently of $\omega\big(\tau_{n-\cev{\vartheta}_{{\mathcal L}_m}+\frac m2}\big)$,
with $\omega^{\text{eq}}_\mathbf{x}$ coupled to $\omega_\mathbf{x}$ so that they move independently till first meeting, after which time they coalesce.
\begin{observacao}\label{eq}
We have that $\big(\omega^{\text{eq}}_\mathbf{x}(\tau_{n-\vec{\vartheta}_{{\mathcal L}_m}+1})\big)_{\mathbf{x}\in\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}}\sim\hat\nu_m$.
This follows from the fact that in the period from $\tau_{n-\cev{\vartheta}_{{\mathcal L}_m}+\frac m2}$ to $\tau_{n-\vec{\vartheta}_{{\mathcal L}_m}+1}$ the jump time lengths
of $X$ depend solely on $\big(\omega_\mathbf{x}(\tau_{n-\cev{\vartheta}_{{\mathcal L}_m}+\frac m2})\big)_{\mathbf{x}\notin\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}}$,
and on the birth-and-death processes
evolving on timelines of ${\mathbb Z}\setminus\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}$.
\end{observacao}
Arguing similarly as in the proof of Lemma~\ref{benv} --- see Remark~\ref{couple} ---, we find that
\begin{equation}\label{benv6}
{\mathbf P}^\mathsf z_{\hat\mu_0}\Big(\omega_{\mathbf{x}}(\tau_{n-\vec{\vartheta}_{{\mathcal L}_m}+1})\ne\omega^{\text{eq}}_{\mathbf{x}}(\tau_{n-\vec{\vartheta}_{{\mathcal L}_m}+1})\,
\text{for some }\mathbf{x}\in\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}\Big)\leq
e^{-c m}.
\end{equation}
Now, given $\mathsf z\in{\mathcal J}\cap\{\Upsilon_{{\mathcal L}_m}\leq m^{1+b}\}$, let $\varpi^{\text{eq}}(n):=(\varpi^{\text{eq}}_\mathbf{x}(n))_{\mathbf{x}\in{\mathbb Z}}$ represent the environment at time $n$ of a time
inhomogeneous Markov jump process $(X(t),\omega(t))$ starting at time $\tau_{n-\vec{\vartheta}_{{\mathcal L}_m}}$ from $\hat\nu$
(with $X$ starting at that time from $\mathsf z_{\vec{\vartheta}_{{\mathcal L}_m}-1}$, and jumping, forwards in time, along the backward trajectory of $\mathsf z$).
\begin{observacao}\label{eq2}
Notice that the distribution of $\varpi^{\text{eq}}(n)$ does not depend on $n$.
\end{observacao}
Since, given $\mathsf z$, the distribution of $\varpi^M(n)$ depends only on environments at timelines of sites in
$\{-\Upsilon_{{\mathcal L}_m},\dots,\Upsilon_{{\mathcal L}_m}\}-\mathsf z_{n-1}$
from time $\tau_{n-\vec{\vartheta}_{{\mathcal L}_m}}$ to $\tau_n$, we have, for $\mathsf z\in{\mathcal J}\cap\{\Upsilon_{{\mathcal L}_m}\leq m^{1+b}\}$, and in the complement
of the event under the probability sign on the left hand side of~\eqref{benv6}, and resorting to an obvious coupling, that
$\varpi^M_\mathbf{x}(n)=\varpi^{\text{eq}}_\mathbf{x}(n)$ for $|\mathbf{x}|\leq M$.
Now, finally, given $\varepsilon>0$, we may choose $m$ large enough, and then $n_0$ large enough so that, for
$\mathsf z\in{\mathcal J}\cap\{\Upsilon_{{\mathcal L}_m}\leq m^{1+b}\}$, and by~\eqref{benv6}, we have that the distance\footnote{associated to the usual product topology}
of the conditional distribution of $\varpi^M(n)$ given $\mathsf z$ to that of
the conditional distribution of $\big(\varpi^{\text{eq}}_\mathbf{x}(n)\big)_{|\mathbf{x}|\leq M}$ given $\mathsf z$ is smaller than $\varepsilon$.
We conclude from Remark~\ref{eq2} that the sequence in $n$ of conditional distributions of $\varpi^M(n)$ given $\mathsf z$ is Cauchy, and
thus the same holds for the sequence of unconditional distributions. It readily follows from the above arguments that the limit
is the same regardless of the details of $\hat\mu_0$ satisfying the conditions in the paragraph of~\eqref{expdec}.
\subsubsection{Second case: ${\mathbf E}(\xi_1)=0$ and $d\geq2$}\label{m02+}
\noindent
Let us first note that each coordinate of $\mathsf z$ performs a mean zero walk with the same moment condition as in the $d=1$ case,
so the arguments for that case apply to, say, the first coordinate, and we get control of the location of that coordinate at (backward)
time $\cev{\vartheta}_{{\mathcal L}_m}$: it is with high probability in $\{-m^{1+b},\ldots,m^{1+b}\}$ if $m$ is large, according to~\eqref{vt8}.
But now we need to control the location of the other coordinates, and we naturally seek a similar polynomial control
as for the first coordinate.
In order to achieve that, we will simply show that, with high probability, we have polynomial control on the size of $\cev{\vartheta}_{{\mathcal L}_m}$,
and that follows from standard arguments once we condition on the event that $\Upsilon_{{\mathcal L}_m}\leq m^{1+b}$ --- which
has high probability according to~\eqref{vt8}. Indeed, on that event $\cev{\vartheta}_{{\mathcal L}_m}$ is stochastically dominated by
$\vartheta^*=\inf\{n>0: |\mathsf z_n|>m^{1+b}\}$, the hitting time by $\mathsf z$ of the complement of $\{-m^{1+b},\ldots,m^{1+b}\}$.
It is well known that under our conditions we have that $\vartheta^*\leq m^{2(1+b+\delta)}$ with high probability for all $\delta>0$
--- see Theorem 23.2 in~\cite{Spi76} ---, and thus, again recalling a well known result (see Theorem 23.3 in~\cite{Spi76}), we have that
the maximum over $j=2,\dots,d$, and over times from 0 to $\cev{\vartheta}_{{\mathcal L}_m}$, of the absolute value of the $j$-th coordinate of $\mathsf z$ is bounded
by $m^{1+b+\delta}$ with high probability for any $\delta>0$.
With this control over the maximum dislocation of $|\mathsf z|$ from time 0 to $\cev{\vartheta}_{{\mathcal L}_m}$, we may repeat essentially the same argument
as for $d=1$ (with minor and obvious changes).
\subsubsection{Last case: ${\mathbf E}(\xi_1)\ne0$}\label{mune0}
\noindent
An approach similar to, but simpler than, that of the previous cases works here.
Let us assume without loss of generality that ${\mathbf E}(\xi_1(1))<0$ --- so that ${\mathbf E}(\xi'_1(1))>0$, where the `1' within parentheses indicates the coordinate.
We consider the quantities introduced in Subsubsection~\ref{m01} for $\mathsf z(1)$ instead of $\mathsf z$,
and let ${\mathcal L}_\infty=\inf\{\ell\geq1:\varrho_\ell=\infty\}$. It is quite clear that this is an a.s.~finite random variable.
Now, given a typical $\mathsf z$, as soon as $n\geq{\mathcal L}_\infty$, we have that $\varpi^M(n)$ is again distributed as $\varpi^{\text{eq}}(n)$ given above; see paragraph
right below~\eqref{benv6}; by Remark~\ref{eq2}, this distribution does not depend on $n$; notice that the latter definition and property make
sense and hold in higher dimensions as well.
The result follows for $\varpi^M(n)$ conditioned on $\mathsf z$, and thus also for the unconditional distribution.
Notice that we did not need a positive lower bound for $\varphi$ (nor a finite upper bound).
\begin{observacao}\label{cond}
We note that the above proof established, in every case, the convergence of the conditional distribution of $\varpi(n)$ given $\mathsf z$ as $n\to\infty$
to, say, $\varpi^\mathsf z$.
\end{observacao}
\begin{observacao}
It is natural to ask about the asymptotic environment seen by the particle at large deterministic times.
A strategy based on looking at the environment seen at the most recent jump time,
which might perhaps allow for an approach like the above one,
seems to run into a sampling paradox-type issue, which may pose considerable difficulties in the inhomogeneous setting.
We chose not to pursue the matter here.
\end{observacao}
\subsection{Proof of the second assertion of Theorem~\ref{conv_env}}
It is enough, taking into account Remark~\ref{cond}, to show the result for the limit of the conditional distribution of $\varpi(n)$ given $\mathsf z$, which we denote by $\varpi^\mathsf z$, for $\mathsf z$ in an event of arbitrarily large probability, as follows.
For $N\geq0$, let ${\mathfrak Q}_N=\{\mathbf{x}\in{\mathbb Z}^d:\,\|\mathbf{x}\|\leq N\}$ and $T_N=\inf\{k\geq0:\,\|\mathsf z_k\|\geq N\}$.
One may check that, by our conditions on the tail of $\pi$ and the Law of Large Numbers, we have that for some $a>0$
there a.s.~exists $N_0$ such that for all $N\geq N_0$ we have that $T_N>aN$.
For $\mathbf{x}\in\mathbb{Z}^d$, let ${\mathcal R}_\mathbf{x}=\inf\{k\geq0:\,\mathsf z_k=\mathbf{x}\}$, and let ${\mathfrak R}=\{\mathbf{x}\in\mathbb{Z}^d:\,{\mathcal R}_\mathbf{x}<\infty\}$.
Consider now the event $\tilde{\mathcal J}_N:=\cap_{\mathbf{x}\in{\mathfrak R},\|\mathbf{x}\|\geq N}{\mathcal J}(\mathbf{x},a\|\mathbf{x}\|)$, with ${\mathcal J}(\cdot,\cdot)$ as in the paragraph of~\eqref{k1}.
It follows from~\eqref{k1} that
\begin{equation}\label{jm}
{\mathbf P}\big(\tilde{\mathcal J}_N^c\big)\leq e^{-cN},
\end{equation}
for some positive constant $c$ (again, not the same in every appearance).
Lemma~\ref{benv} and the remark in the above paragraph then ensure that for $\mathbf{x}\in{\mathfrak R}$ such that
$\|\mathbf{x}\|\geq N\geq a^{-1}m_0\vee N_0$ and a.e.~$\mathsf z\in\tilde{\mathcal J}_N$, we have that
\begin{equation}\label{abc1}
%
{\mathbf P}^\mathsf z_{\hat\mu_0}\big(\omega_{\mathbf{x}-\mathsf z_{n-1}}(\tau_{(n-{\mathcal R}_\mathbf{x}+\frac{a\|\mathbf{x}\|}2)^+})>(\log a\|\mathbf{x}\|)^2\big)\leq e^{-c(\log a\|\mathbf{x}\|)^2};
%
\end{equation}
it readily follows from Lemma~\ref{fbe} that the same bound holds for $\mathbf{x}\notin{\mathfrak R}$.
For $\|\mathbf{x}\|\geq N\geq a^{-1}m_0\vee N_0$, let us couple $\omega_\mathbf{x}$ from $\tau_{(n-{\mathcal R}_\mathbf{x}+\frac{a\|\mathbf{x}\|}2)^+}$ onwards, in a coalescing way,
as done multiple times above, to $\omega^{\text{eq}}_\mathbf{x}$, a BDP starting at $\tau_{(n-{\mathcal R}_\mathbf{x}+\frac{a\|\mathbf{x}\|}2)^+}$ from $\nu$, its equilibrium distribution.
We assume that $\omega^{\text{eq}}_\mathbf{x}$, $\|\mathbf{x}\|\geq N$, are independent.
It readily follows, from arguments already used above, that, setting
${\mathfrak C}_N=\cap_{\|\mathbf{x}\|\geq N}\{\omega_{\mathbf{x}-\mathsf z_{n-1}}(\tau_n)=\omega^{\text{eq}}_{\mathbf{x}-\mathsf z_{n-1}}(\tau_n)\}$,
we have that
\begin{equation}\label{abc2}
{\mathbf P}^\mathsf z_{\hat\mu_0}\big({\mathfrak C}_N^c\big)\leq e^{-cN}.
\end{equation}
For $N\geq0$ and $n>T_N$, let us consider $\omega^\sz_{\text{eq}}(n)=({\hat\omega}^\sz_{\text{eq}}(n),{\check\omega}^\sz_{\text{eq}})$, where ${\hat\omega}^\sz_{\text{eq}}(n)$ is $\varpi^\mathsf z(n)$ restricted to ${\mathfrak Q}_N$,
and ${\check\omega}^\sz_{\text{eq}}$ is distributed as the product of $\nu$ over ${\mathbb N}^{\mathbb{Z}^d\setminus{\mathfrak Q}_N}$, independently of ${\hat\omega}^\sz_{\text{eq}}(n)$.
The considerations of the previous paragraph imply that we may couple $\varpi^\mathsf z(n)$ and $\omega^\sz_{\text{eq}}(n)$ so that they coincide outside a probability
which vanishes as $N\to\infty$ {\em uniformly} in $n>T_N$.
Now set $\omega^\sz_{\text{eq}}:=({\hat\omega}^\sz_{\text{eq}},{\check\omega}^\sz_{\text{eq}})$, where ${\hat\omega}^\sz_{\text{eq}}$ is $\varpi^\mathsf z$ restricted to ${\mathfrak Q}_N$,
and ${\check\omega}^\sz_{\text{eq}}$ is distributed as the product of $\nu$ over ${\mathbb N}^{\mathbb{Z}^d\setminus{\mathfrak Q}_N}$, independently of ${\hat\omega}^\sz_{\text{eq}}$.
From the first item of Theorem~\ref{conv_env}, we have that ${\hat\omega}^\sz_{\text{eq}}(n)$ converges in distribution to ${\hat\omega}^\sz_{\text{eq}}$ as $n\to\infty$.
It follows from this and the considerations in the previous paragraph that we may couple $\varpi^\mathsf z=({\hat\omega}^\sz_{\text{eq}},{\check\omega}^\sz)$ and $\omega^\sz_{\text{eq}}$ so that
${\check\omega}^\sz={\check\omega}^\sz_{\text{eq}}$ outside an event of vanishing probability as $N\to\infty$.
In order to conclude, let us take an event $A$ of the product $\sigma$-algebra generated by the cylinders of ${\mathbb N}^{\mathbb{Z}^d}$ such that
$\hat\nu(A)=0$. Given $\eta\in{\mathbb N}^{{\mathfrak Q}_N}$, let $A_\eta=\{\zeta\in{\mathbb N}^{\mathbb{Z}^d\setminus{\mathfrak Q}_N} : (\eta,\zeta)\in A\}$.
Since $\hat\nu$ assigns positive probability to every cylindrical configuration of ${\mathbb N}^{\mathbb{Z}^d}$, it follows that $\check\nu(A_\eta)=0$
for every $\eta\in{\mathbb N}^{{\mathfrak Q}_N}$, where $\check\nu$ is the product of $\nu$'s over ${\mathbb N}^{\mathbb{Z}^d\setminus{\mathfrak Q}_N}$.
Thus, resorting to the coupling in the previous paragraph, we find that
\begin{equation}\label{abc3}
{\mathbf P}^\mathsf z_{\hat\mu_0}\big(\varpi^\mathsf z\in A\big)\leq {\mathbf P}^\mathsf z_{\hat\mu_0}\big(\omega^\sz_{\text{eq}}\in A\big)+{\mathbf P}^\mathsf z_{\hat\mu_0}\big({\check\omega}^\sz\ne{\check\omega}^\sz_{\text{eq}}\big)
={\mathbf P}^\mathsf z_{\hat\mu_0}\big({\check\omega}^\sz\ne{\check\omega}^\sz_{\text{eq}}\big),
\end{equation}
since the first probability on the right hand side equals
\begin{equation}\label{abc4}
\sum_{\eta\in{\mathbb N}^{{\mathfrak Q}_N}}\check\nu\big(A_\eta\big){\mathbf P}^\mathsf z_{\hat\mu_0}\big(\hat\varpi^\mathsf z=\eta\big)=0,
\end{equation}
as follows from what was pointed out above in this paragraph. Since the right hand side of~\eqref{abc3} vanishes as $N\to\infty$,
as pointed out at the end of the paragraph before last, and the left hand side does not depend on $N$, we have that
${\mathbf P}^\mathsf z_{\hat\mu_0}\big(\varpi^\mathsf z\in A\big)=0$, as we set out to prove.
\section{Introduction}
Since many light scalar mesons such as $\sigma (600), \kappa (800), f_0(980)$ and $a_0(980)$
were found in experiments \cite{PDG}, a lot of theoretical studies have been devoted to
the investigation of their structure.
For the structure of light scalar mesons, in addition to the conventional two-quark state from the quark model,
several possibilities have been proposed \cite{Jaffe1}: four-quark states, molecular states and scattering states.
Because the sigma meson is considered a chiral partner of the pion in the mechanism of hadron mass generation,
it would be interesting to investigate the role of four-quark states in this mechanism.
The study of four-quark states in light scalar mesons gives us insight into important features of QCD.
Since Alford and Jaffe showed the possibility
that scalar mesons exist as four-quark states on the lattice \cite{Jaffe},
tetraquark searches on the lattice have been actively pursued.
The pioneering work on the sigma meson, which is one of the candidates for a four-quark state,
was done with full QCD by the SCALAR collaboration \cite{Scalar}.
They found that disconnected diagrams, which effectively contain four-quark states,
glueballs and so on, make the sigma meson lighter.
Recently, using tetraquark interpolators,
not only ground states but also resonance states of scalar mesons on the lattice were reported \cite{BGR}.
Following the procedure proposed by Alford and Jaffe \cite{Jaffe},
we explore the existence of four-quark states in the scalar channel on a larger lattice with a finer lattice spacing.
We also investigate the dependence of bound energies for four-quark states
on the ratio of pion mass and rho meson mass ($m_{\pi}/m_{\rho}$) and on the pion mass ($m_{\pi}$).
\begin{figure}[b]
\vspace{0.35cm}
\centering
\begin{picture}(250,50)
\qbezier(-90,32)(-40,82)(10,32)
\qbezier(-90,32)(-40,52)(10,32)
\qbezier(-90,28)(-40,8)(10,28)
\qbezier(-90,28)(-40,-22)(10,28)
\put(-38,57){\vector(1,0){0.1}}
\put(-42,42){\vector(-1,0){0.1}}
\put(-38,18){\vector(1,0){0.1}}
\put(-42,3){\vector(-1,0){0.1}}
\qbezier(20,32)(70,82)(120,32)
\qbezier(20,32)(33,38)(51,41)
\qbezier(89,41)(107,38)(120,32)
\qbezier(20,28)(33,22)(51,19)
\qbezier(89,19)(107,22)(120,28)
\qbezier(20,28)(70,-22)(120,28)
\put(51.6,41){\line(5,-3){37}}
\put(51.6,19){\line(5,3){17}}
\put(88.6,41){\line(-5,-3){17}}
\put(72,57){\vector(1,0){0.1}}
\put(76,33.2){\vector(-4,-3){0.1}}
\put(58,22.8){\vector(-4,-3){0.1}}
\put(76,26.4){\vector(-4,3){0.1}}
\put(58,37.2){\vector(-4,3){0.1}}
\put(72,3){\vector(1,0){0.1}}
\qbezier(130,32)(180,82)(230,32)
\qbezier(130,28)(180,-22)(230,28)
\put(182,57){\vector(1,0){0.1}}
\put(178,3){\vector(-1,0){0.1}}
\qbezier(130,32)(158,50)(160,30)
\qbezier(130,28)(158,10)(160,30)
\qbezier(200,30)(202,50)(230,32)
\qbezier(200,30)(202,10)(230,28)
\put(160,32){\vector(0,1){0.1}}
\put(200,28){\vector(0,-1){0.1}}
\qbezier(240,32)(281,72)(285,30)
\qbezier(240,28)(281,-12)(285,30)
\qbezier(295,30)(299,72)(340,32)
\qbezier(295,30)(299,-12)(340,28)
\put(285,28){\vector(0,-1){0.1}}
\put(295,32){\vector(0,1){0.1}}
\qbezier(240,32)(268,50)(270,30)
\qbezier(240,28)(268,10)(270,30)
\qbezier(310,30)(312,50)(340,32)
\qbezier(310,30)(312,10)(340,28)
\put(270,32){\vector(0,1){0.1}}
\put(310,28){\vector(0,-1){0.1}}
\put(-91,52){$D(t)$}
\put(20,52){$C(t)$}
\put(130,52){$A(t)$}
\put(238,52){$G(t)$}
\end{picture}
\caption{The diagrams for four-quark correlators.}
\label{DCAG}
\end{figure}
\section{Four-quark states from Lattice QCD}
We calculate four-quark correlators in two-flavor ($N_f=2$) lattice QCD
under the assumption that the four-quark states of scalar mesons consist of bound states of two pions.
We employ the procedure which was proposed by Alford and Jaffe \cite{Jaffe}.
The four-quark correlators of isospin zero ($I=0$) and two ($I=2$) channels are given by,
\begin{eqnarray}
J_{I=0}(t) &=& D(t)+\frac{1}{2}C(t) -3A(t) + \frac{3}{2}G(t) \ , \\
J_{I=2}(t) &=& D(t)-C(t) \ ,
\end{eqnarray}
where $D(t)$, $C(t)$, $A(t)$ and $G(t)$ correspond to the diagrams in Fig.\ \ref{DCAG}.
To evaluate the four-quark states clearly,
we carry out the calculation in the quenched approximation
where two-quark, multi-quark and glueball states do not mix with four-quark states as intermediate states.
We drop the contribution of the diagrams for $A(t)$ and $G(t)$,
ignoring the two-quark annihilation diagrams and the vacuum channels.
To obtain the bound energy $\delta E_{I}$ for the four-quark states,
we construct the ratio of the four-quark correlator $J_I(t)$ and the pion correlator $P(t)$
and fit it to an exponential at large $t$,
\begin{eqnarray}
R_{I}(t) \ =\ \frac{J_{I}(t)}{\left(P(t)\right)^2}
&\longrightarrow& \frac{Z_{I}}{Z_{\pi}^2} \exp{(-\delta E_{I}\: t)} + \cdots \ . \\[-0.45cm]
&{\scriptstyle {t \to \infty}}& \nonumber
\end{eqnarray}
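As an illustration of this extraction, the following Python sketch builds synthetic single-exponential correlators and recovers $\delta E_I$ from the ratio $R_I(t)$ via an effective-mass-style logarithm of neighboring time slices. All numerical values below (masses, overlap factors, the energy shift) are illustrative assumptions, not lattice data.

```python
import math

m_pi, dE = 0.25, -0.01   # illustrative pion mass and energy shift (lattice units)
Z_pi, Z_I = 1.0, 0.8     # illustrative overlap factors

def P(t):                # pion correlator, single-state dominance assumed
    return Z_pi * math.exp(-m_pi * t)

def J(t):                # four-quark correlator with total energy 2*m_pi + dE
    return Z_I * math.exp(-(2 * m_pi + dE) * t)

def R(t):                # the ratio J(t) / P(t)^2 of the display above
    return J(t) / P(t) ** 2

# effective energy shift from neighboring time slices
t = 20
dE_eff = math.log(R(t) / R(t + 1))
assert abs(dE_eff - dE) < 1e-12
```

In practice one fits $R_I(t)$ over a plateau in $t$ rather than using a single pair of time slices, since excited-state contamination (the $\cdots$ above) spoils small $t$.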
We discuss the possibility that the four-quark states exist as bound states
from the $N_L$ dependence of the bound energies.
If a four-quark state is a bound state,
$\delta E_{I}$ is negative and
would approach a negative constant in the large $N_L$ region.
On the other hand
if a four-quark state is a scattering state,
it is expected that $\delta E_{I}$ obeys the scattering formula for a two-particle state
in a cubic box of size $L$ with periodic boundary conditions, given in Ref.\ \cite{Luscher},
\begin{eqnarray}
\delta E_{I}
= E_{I} - 2m_{\pi}
= \frac{T_{I}}{L^3} \left[ 1 + 2.8373 \left(\frac{m_{\pi}T_{I}}{4\pi L} \right)
+ 6.3752 \left(\frac{m_{\pi}T_{I}}{4\pi L} \right)^2 \right] + {\cal{O}} \left(L^{-6}\right) \ ,
\label{Luscher}
\end{eqnarray}
where $E_{I}$ is the total energy, and $T_{I}$ is the scattering amplitude
which can be written by the scattering length $a_I$ as
\begin{eqnarray}
T_{I} = - \frac{4\pi a_I}{m_{\pi}} \ .
\label{amp}
\end{eqnarray}
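Equations~(\ref{Luscher}) and~(\ref{amp}) can be evaluated directly; the following Python sketch (with illustrative values of $a_I$ and $m_\pi$ in lattice units, which are assumptions and not the paper's measured data) shows the leading $L^{-3}$ behavior of the energy shift.

```python
import math

def delta_E(a_I, m_pi, L):
    """Energy shift from the finite-volume formula above,
    truncated at the order shown there."""
    T = -4 * math.pi * a_I / m_pi          # scattering amplitude from a_I
    x = m_pi * T / (4 * math.pi * L)
    return (T / L**3) * (1 + 2.8373 * x + 6.3752 * x**2)

# illustrative values in lattice units (assumptions, not the paper's data);
# with this sign convention a positive a_I gives T < 0, hence a negative shift
a_I, m_pi = 0.4, 0.25
assert delta_E(a_I, m_pi, 16) < 0
# the leading behavior is L^-3: doubling L shrinks |delta E| by roughly 2^3 = 8,
# up to the higher-order corrections in the bracket
r = delta_E(a_I, m_pi, 16) / delta_E(a_I, m_pi, 32)
assert 7 < r < 8
```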
From Eq.(\ref{Luscher}), we find that
$\delta E_{I}$ is proportional to $L^{-3}$ in the region
where the physical spatial lattice size $L=N_L a$ ($a$ is the lattice spacing) is large enough.
\begin{figure}
\begin{center}
\includegraphics[scale=0.455]{delta_EE_mm.eps} \ \ \ \ \ \ \ \ \ \ \ \
\includegraphics[scale=0.455]{delta_EE_m.eps}
\caption{The physical lattice size $L(=N_L a)$ dependence of the energy shifts $\delta E_{I=2}$.
In the left figure (a) the values of $m_{\pi}/m_{\rho}$ are fixed, while in the right figure (b) the values of $m_{\pi}$ are fixed.
In both figures the lines are proportional to $L^{-3}$.
In the left figure (a), the three symbols lie on different $L^{-3}$ lines,
though they have the same values of $m_{\pi}/m_{\rho}$.
In the right figure (b), the two symbols which have the same values of $m_{\pi}$ lie almost on the same $L^{-3}$ line,
though they have different values of $m_{\pi}/m_{\rho}$. }
\label{EE_mm_m}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.455]{delta_EN_mm.eps} \ \ \ \ \ \ \ \ \ \ \ \
\includegraphics[scale=0.455]{delta_EN_m.eps}
\caption{The physical lattice size $L(=N_L a)$ dependence of the energy shifts $\delta E_{I=0}$.
In the left figure (a) the values of $m_{\pi}/m_{\rho}$ are fixed, while in the right figure (b) the values of $m_{\pi}$ are fixed.
In both figures the lines are proportional to $L^{-3}$.
In the left figure (a), the three symbols lie on different $L^{-3}$ lines,
though they have the same values of $m_{\pi}/m_{\rho}$.
In the right figure (b), the two symbols which have the same values of $m_{\pi}$ lie almost on the same $L^{-3}$ line,
though they have different values of $m_{\pi}/m_{\rho}$. }
\label{mm_m}
\end{center}
\end{figure}
\section{Results}
In the work by Alford and Jaffe \cite{Jaffe},
the $I=2$ channel was clearly observed to be a scattering state,
but the possibility was found that the $I=0$ channel is a four-quark bound state.
However, to obtain a conclusive result for the four-quark bound state in the $I=0$ channel,
further calculations on a larger lattice with a finer lattice spacing are required.
We calculate the four-quark correlators on a larger lattice with $N_f=2$ Wilson fermion
using configurations which are produced with both plaquette and Iwasaki gauge actions.
The lattice parameters are shown in Tables \ref{pa1} and \ref{pa2}.
We impose the Dirichlet boundary condition in the temporal direction on the quark fields.
The results for $I=2$ channel are shown in Fig.\ \ref{EE_mm_m}.
In Fig.\ \ref{EE_mm_m} (a),
the quark masses are set so that the values of the ratio of pion mass to rho meson mass, $m_{\pi}/m_{\rho} \sim 0.74$,
become close to those in Ref.\ \cite{Jaffe} (see Table \ref{para}).
In spite of the same values of $m_{\pi}/m_{\rho}$, the values of $m_{\pi}$ differ among them,
which comes from lattice artifacts due to the large lattice spacings.
In Fig.\ \ref{EE_mm_m} (b),
the bound energies are obtained under the same values of $m_{\pi} \sim 370$ MeV.
We can see that in all cases of Figs.\ \ref{EE_mm_m} (a) and (b) the symbols lie on the $L^{-3}$ lines,
which is the same as that in Ref.\ \cite{Jaffe}.
On the other hand the results for $I=0$ channel are shown in Fig.\ \ref{mm_m}.
In Fig.\ \ref{mm_m} (a), the quark masses are set so that the values of the ratio $m_{\pi}/m_{\rho}$
become close to those in Ref.\ \cite{Jaffe} (see Table \ref{para}).
In Fig.\ \ref{mm_m} (b),
the bound energies are obtained under the same values of $m_{\pi} \sim 370$ MeV.
Again we find that in all cases of Figs.\ \ref{mm_m} (a) and (b) the symbols lie on the $L^{-3}$ lines,
which contradicts the result of Ref.\ \cite{Jaffe}.
We do not observe any indication of the existence of a bound state in the $I=0$ channel.
In Fig.\ \ref{mm_m} (a)
the three symbols which have the same values of $m_{\pi}/m_{\rho}$ are on the different $L^{-3}$ lines,
but in Fig.\ \ref{mm_m} (b)
the two symbols which have the same values of $m_{\pi}$ are almost on the same $L^{-3}$ line.
This implies that $\delta E_{I}$ depends strongly on $m_{\pi}$ rather than on $m_{\pi}/m_{\rho}$,
as can be deduced from Eq.(\ref{amp}).
Since the scattering amplitude is inversely proportional to $m_{\pi}$,
the absolute value of $\delta E_{I}$ decreases as $m_{\pi}$ increases.
In Ref.\ \cite{Jaffe} it was claimed that
a deviation from the $L^{-3}$ line (the black dashed line in Fig.\ \ref{pri}) was found
in the physical lattice size dependence of the energy shifts in the $I=0$ channel.
However, we can give another explanation for it.
Because of the $m_{\pi}$ differences among the data points,
we can draw the three different $L^{-3}$ lines (the green, red and blue solid lines)
instead of one $L^{-3}$ line (the black dashed line) in Fig.\ \ref{pri}.
It does not suggest that the symbols deviate from the $L^{-3}$ line (the black dashed line),
but suggests that the symbols are on the three different $L^{-3}$ lines, respectively.
It means that there are no bound states in the $I=0$ channel.
\begin{figure}
\begin{center}
\includegraphics[scale=0.455]{delta_EN_pri.eps}
\caption{The physical lattice size $L(=N_L a)$ dependence of the energy shifts $\delta E_{I=0}$.
The lines are proportional to $L^{-3}$.
The color difference corresponds to $m_{\pi}$ difference.
The green, red and blue solid lines stand for $m_{\pi} = 940, 840$ and $790$ MeV, respectively.
The black dashed line was drawn in Fig.\ 5 of Ref.\ \cite{Jaffe}.}
\label{pri}
\end{center}
\end{figure}
\section{Summary}
We have investigated whether a four-quark state forms a bound state
from the spatial lattice size dependence of the bound energies for the four-quark state.
We found that
the four-quark states in both the $I=0$ and $I=2$ channels do not form bound states; i.e.,
all bound energies for the four-quark states as a function of the spatial lattice size $L$ lie on the $L^{-3}$ lines,
which contradicts the previous work \cite{Jaffe}.
The symbols which have the same value of $m_{\pi}/m_{\rho}$ lie on different $L^{-3}$ lines;
on the other hand, the symbols which have the same value of $m_{\pi}$ lie almost on the same $L^{-3}$ line.
These results suggest that
the absolute value of the bound energy depends directly on $m_{\pi}$ rather than on $m_{\pi}/m_{\rho}$.
This work is a starting point for the study of four-quark states in the light scalar mesons.
To reach conclusive results for the light scalar meson states,
further investigation is needed:
full QCD calculations with light quark masses on larger lattice sizes,
the application of four-quark interpolators, state-of-the-art actions, and so on.
\begin{table}[b]
\caption{Lattice parameters.
$\kappa$ is the hopping parameter.
We adjust the parameters $\beta$ and $\kappa$ to fix $m_{\pi}/m_{\rho} \sim 0.74$.
See Table \protect \ref{para}. }
\begin{center}
\begin{tabular}{c|c|c|c|c|c}\hline
\ \ \ Gauge action \ \ \ & $\beta$ & $a \ \scriptstyle{\mathrm{[fm]}}$ & $\kappa$ &$N_{L}^3\times N_{T}$ & \# Conf. \\\hline
Iwasaki & 2.100 & 0.278(3) \cite{Iwa} & 0.1666 & $12^3 \times 20\ $ & 150 \\
& & & & $16^3 \times 20\ $ & 120 \\
& & & & $18^3 \times 20\ $ & 78 \\
& & & & $20^3 \times 20\ $ & 120 \\
& & & & $24^3 \times 20\ $ & 78 \\\hline
Iwasaki & 2.300 & 0.172(4) \cite{Iwa} & 0.1565 & $12^3 \times 20\ $ & 120 \\
& & & & $16^3 \times 20\ $ & 60 \\
& & & & $20^3 \times 20\ $ & 60 \\
& & & & $24^3 \times 20\ $ & 54 \\\hline
Plaquette & 5.700 & 0.140(4) \cite{Fuku} & 0.1640 & $ \ 8^3 \times 20$ & 1008 \\
& & & & $12^3 \times 20\ $ & 240 \\
& & & & $16^3 \times 20\ $ & 102 \\
& & & & $18^3 \times 20\ $ & 210 \\
& & & & $20^3 \times 20\ $ & 192 \\
& & & & $24^3 \times 20\ $ & 300 \\
& & & & $36^3 \times 20\ $ & 66 \\\hline
\end{tabular}
\end{center}
\label{pa1}
\end{table}
\begin{table}[b]
\caption{Lattice parameters.
We adjust the parameters $\beta$ and $\kappa$ to fix $m_{\pi} \sim 370$ MeV.
See Table \protect \ref{para}.}
\begin{center}
\begin{tabular}{c|c|c|c|c|c}\hline
\ \ \ Gauge action \ \ \ & $\beta$ & $a \ \scriptstyle{\mathrm{[fm]}}$ & $\kappa$ &$N_{L}^3\times N_{T}$ & \# Conf. \\\hline
Iwasaki & 2.100 & 0.278(3) \cite{Iwa} & 0.1690 & $12^3 \times 20\ $ & 150 \\
& & & & $16^3 \times 20\ $ & 120 \\
& & & & $20^3 \times 20\ $ & 120 \\
& & & & $24^3 \times 20\ $ & 60 \\\hline
Iwasaki & 2.300 & 0.172(4) \cite{Iwa} & 0.1594 & $12^3 \times 20\ $ & 300 \\
& & & & $16^3 \times 20\ $ & 180 \\
& & & & $20^3 \times 20\ $ & 180 \\
& & & & $24^3 \times 20\ $ & 60 \\\hline
\end{tabular}
\end{center}
\label{pa2}
\end{table}
\begin{table}
\caption{Lattice parameters in Ref.\ \cite{Jaffe}.
They adjust the parameters $\beta$ and $\kappa$ to fix $m_{\pi}/m_{\rho} \sim 0.74$.
See Table \protect \ref{para}. }
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c}\hline
Group & Gauge action &$\beta$ & $a \ \scriptstyle{\mathrm{[fm]}}$ & $\kappa$ &$N_{L}^3\times N_{T}$ & \# Conf. \\\hline
Gupta et al. \cite{Gupta} &Plaquette & 6.000 & 0.0762(8)$\!\!$ \cite{Davies} & 0.1540 & $16^3 \times 80\ $ & 35 \\ \hline
Fukugita et al. \cite{Fuku} &Plaquette & 5.700 & 0.162(6) \ \cite{Davies} & 0.1640 & $12^3 \times 20\ $ & 70 \\ \hline
Alford et al. &L\"{u}scher \& Weisz & 1.719 &0.249(5) & ----- & $\ 6^3 \times 40$ & ----- \\
& & & & & $\ 8^3 \times 40$ & ----- \\\hline
Alford et al. &L\"{u}scher \& Weisz & 1.157 & 0.400(4) & ----- & $\ 5^3 \times 40$ & ----- \\
& & & & & $\ 6^3 \times 40$ & ----- \\
& & & & & $\ 8^3 \times 40$ & ----- \\\hline
\end{tabular}
\end{center}
\label{pa0}
\end{table}
\begin{table}
\caption{The values of $m_{\pi}/m_{\rho}$ and $m_{\pi}$.
In the upper table results of Ref.\ \cite{Jaffe} are shown.
In the lower table results of our calculation are shown.}
\begin{center}
\begin{tabular}{c|c|c}\hline
data & $m_{\pi}/m_{\rho}$ &$m_{\pi} \ \scriptstyle{\mathrm{[MeV]}}$ \\\hline
Gupta & ----- & 940(30) \\
Fukugita & 0.740(8) & 620(90) \\
Alford ($a=0.25$ [fm]) & 0.756(5) & 840(11) \\
Alford ($a=0.40$ [fm]) & 0.756(4) & 790(6) \ \ \\\hline\hline
Iwasaki ($\beta=2.1, \kappa=0.1666$) & 0.736(3) & 451(6) \ \ \\
Iwasaki ($\beta=2.3, \kappa=0.1565$) & 0.746(6) & 593(16) \\
Plaquette ($\beta=5.7, \kappa=0.1640$) \ \ \ & 0.748(4) & 754(12) \\\hline
Iwasaki ($\beta=2.1, \kappa=0.1690$) & 0.646(4) & 368(5) \ \ \\
Iwasaki ($\beta=2.3, \kappa=0.1594$) & 0.545(7) & 374(11) \\\hline
\end{tabular}
\end{center}
\label{para}
\end{table}
\section*{Acknowledgments}
This work is supported in part by
the Nagoya University Global COE Program (G07),
a Grant-in-Aid for Young Scientists (B) (22740156),
a Grant-in-Aid for Scientific Research (S) (22224003), and the Kurata Memorial Hitachi
Science and Technology Foundation.
Numerical calculations were performed on
the cluster system ``$\varphi$'' at KMI, Nagoya University.
\section{Automunge}
Automunge is an open-source Python library, available now for pip install, built on top of Pandas \citep{McKinney:10}, scikit-learn \citep{Pedregosa:11}, SciPy \citep{2020SciPy-NMeth}, and NumPy \citep{Walt:11}. It takes as input tabular data received in a tidy form \citep{Wickham:14}, meaning one column per feature and one row per observation, and returns numerically encoded sets with infill to missing points, thus providing a push-button means to feed raw tabular data directly to machine learning algorithms. The complexity of numerical encodings may be minimal, such as automated normalization of numerical sets and encoding of categorical sets, or may include more elaborate feature engineering transformations applied to distinct columns. Generally speaking, the transformations are performed based on a ``fit'' to properties of a column in a designated train set (e.g.\ based on a set's mean, standard deviation, or categorical entries), and that same basis is then used to consistently and efficiently apply transformations to subsequent designated test sets, such as those intended for use in inference or for additional training data preparation.
The library consists of two master functions, automunge(.) and postmunge(.). The automunge(.) function receives a raw train set and, if available, a consistently formatted test set, and returns a collection of encoded sets intended for training, validation, and inference. The function also returns a populated Python dictionary, which we call the postprocess\_dict, capturing all of the steps and parameters of the transformations. This dictionary may then be passed along with subsequent test data to the postmunge(.) function for consistent processing on the train set basis, such as may be applied sequentially to streams of data for inference. Because it makes use of train set properties evaluated during a corresponding automunge(.) call instead of directly evaluating properties of the test data, processing of subsequent test data in the postmunge(.) function is very efficient.
Included in the platform is a library of feature engineering methods, which in some cases may be aggregated into sets to be applied to distinct columns. For such sets of transformations, as may include generations and branches of derivations, the order of implementation is designated by passing transformation categories as entries to a set of ``family tree'' primitives described further below.
\section{Categoric Encodings}
In tabular data applications, such as are the intended use for Automunge data preparations, a common tactic for practitioners is to treat categoric sets in a coarse-grained representation for presentation to a training operation. Such aggregations transform each unique categoric entry with distinct numeric encoding constructs, such as one-hot encoding, in which unique entries are each represented with their own column for boolean activations, or ordinal encoding as single-column integer representations. The Automunge library offers some further variations on categoric encodings [Fig. 1]. The default ordinal encoding `ord3', activated when the number of unique entries exceeds a heuristic threshold, sorts the encoding integers first by frequency of occurrence and then alphabetically, where the frequency ordering may in general be more useful to a training operation. For sets with a number of unique entries below this threshold, the categoric defaults instead make use of `1010' binary encodings, meaning multi-column boolean activations in which representations of distinct categoric entries may be achieved with multiple simultaneous activations, which reduces memory bandwidth relative to one-hot encoding. Special cases are made for categoric sets with 2 unique entries, which are converted by `bnry' to a single-column boolean representation, and sets with 3 unique entries, which are (somewhat arbitrarily) encoded with `text' one-hot encoding. Note that each transform has a default convention for infill to missing values, which may be reconfigured for distinct columns.
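As an illustration of these defaults, frequency-sorted ordinal and multi-column binary encodings can be sketched in a few lines of pandas (the helper names are hypothetical; this is not Automunge's internal implementation):

```python
import pandas as pd

def ord3_sketch(column):
    """Ordinal encoding with integers sorted by frequency of occurrence,
    ties broken alphabetically (an 'ord3'-style sketch)."""
    counts = column.value_counts()
    # descending frequency first, then alphabetical for ties
    ordering = sorted(counts.index, key=lambda e: (-counts[e], e))
    mapping = {entry: i for i, entry in enumerate(ordering)}
    return column.map(mapping), mapping

def binary_1010_sketch(column):
    """Multi-column boolean ('1010'-style) encoding: k unique entries
    need only ceil(log2(k)) activation columns instead of k."""
    entries = sorted(column.unique())
    width = max(1, (len(entries) - 1).bit_length())
    mapping = {e: format(i, '0{}b'.format(width))
               for i, e in enumerate(entries)}
    encoded = pd.DataFrame(
        {'bit_{}'.format(j): [int(mapping[v][j]) for v in column]
         for j in range(width)})
    return encoded, mapping

df = pd.Series(['circle', 'square', 'circle', 'triangle', 'circle', 'square'])
ordinal, _ = ord3_sketch(df)        # most frequent entry encodes to 0
binary, _ = binary_1010_sketch(df)  # 3 entries fit in 2 boolean columns
```

For three unique entries the binary form needs two columns where one-hot would need three; the savings grow with cardinality.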
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{Figure_1_052820.png}
\caption{Categoric encoding examples}
\end{figure}
The categoric encoding defaults are generally based on assumptions of training models in the decision tree paradigms (e.g.\ Random Forest, Gradient Boosting, etc.), particularly considering that a binary representation may sacrifice compatibility with an entity embedding of categorical variables [4] as may be applied in the context of a neural network (for which we recommend a seeded representation of `ord3' for ordinal encoding by frequency). The defaults for automation are all configurable, and alternate root categories of transformations may also be specified for distinct columns to overwrite the automation defaults.
The characterization of these encodings as a coarse graining is meant to allude to the full discarding of the structure of the received representations. When we convert a set of unique values \{`circle', `square', `triangle'\} to \{1, 2, 3\}, we are hiding from the training operation any information that might be inferred from grammatical structure, such as the recognition of the prefix ``tri'' meaning three. Consider the word ``Automunge''. You can probably infer from the common prefix ``auto'' that there might be some automation involved, or if you are a data scientist you might recognize the word ``munge'' as referring to data transformations. Naturally we may then ask how we might incorporate practices from NLP into our tabular encodings, such as vectorized representations in a model like Word2Vec \citep{Mikolov:13}. While this is a worthy line for further investigation, some caution is deserved, as the validity of such representations rests on a few assumptions, most notably the consistency of vocabulary interpretations between any pre-trained model and the target tabular application. In NLP practice it is common to fine-tune \citep{Howard:16} a pre-trained model to accommodate variation in a target domain. In tabular applications, obstacles to this type of approach may arise from the limited context surrounding the categoric entries: in practice it is not uncommon to find entries that are single words, or character sets that are not even words, e.g.\ serial numbers or addresses. We may thus be left without the surrounding corpus vocabulary necessary to fine-tune an NLP model for some tabular target, and the only context available to extract domain properties may be the other entries shared in a categoric set or otherwise the surrounding tabular features.
\section{String Parsing}
Automunge offers a novel means to extract some additional grammatical context from categoric entries prior to encoding by way of comparisons between the unique values of a feature set. The operation is conducted by a kind of string parsing, in which the set of unique entries in a column is parsed to identify character subset overlaps between entries. In its current form the implementation is probably not at an optimal computational efficiency, for which we estimate complexity scaling on the order of $\mathcal{O}((LN)^{2})$, where $N$ is the number of unique entries and $L$ is the average character length of those entries. Thus the intended use case is for categoric sets with a bounded range of unique entries. In general, the application of comparable transformations to test data in the postmunge(.) function is materially more computationally efficient than the initial fitting of the transformations to the train set in the automunge(.) function, particularly in variations with added assumptions for test set composition in relation to the corresponding train data.
The implementation identifies character subset overlaps by first inspecting the set of unique entries from the train set and determining the longest string length minus one. For each entry, every subset of that length is then compared to every equivalent-length subset of the other entries; where composition overlaps are identified, the results are populated in a data structure matching identified overlaps to their corresponding source column unique entries. After each overlap inspection cycle the inspection length is decremented until a configurable minimum length overlap detection threshold is reached. To keep things manageable, in the base configuration overlaps are limited to a single identification per unique entry, prioritized by longest length. Note also that transformation function parameters may be activated to exclude from overlap detections any subsets with space, punctuation, or other designated character sets, such as to promote single word activations.
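The parsing loop just described can be sketched as follows (a simplified illustration with a hypothetical helper name, not the library's internal code; the nested loops over entries and substring lengths show the source of the $\mathcal{O}((LN)^{2})$-order scaling):

```python
def detect_overlaps_sketch(entries, min_length=5):
    """For each unique entry, find a longest character-subset overlap
    shared with at least one other entry, scanning from the longest
    string length minus one down to a minimum threshold, with one
    identification per entry prioritized by length."""
    found = {}
    max_len = max(len(e) for e in entries) - 1
    for length in range(max_len, min_length - 1, -1):
        for entry in entries:
            if entry in found or len(entry) < length:
                continue
            subsets = {entry[i:i + length]
                       for i in range(len(entry) - length + 1)}
            for other in entries:
                if other == entry:
                    continue
                other_subsets = {other[i:i + length]
                                 for i in range(len(other) - length + 1)}
                match = subsets & other_subsets
                if match:
                    found[entry] = match.pop()
                    break
    return found
```

For example, the entries `Mac OS X 10\_11\_6' and `Mac OS X 10\_7\_5' share the twelve-character partition `Mac OS X 10\_', which is detected before any shorter overlap.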
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{Figure_2_053120.png}
\caption{String parsing for bounded sets}
\end{figure}
There are a few options of these methods to choose from [Fig. 2]. In the first version, `splt', any identified substring overlap partitions are given unique columns for boolean activations. In a second version, `spl2', the full entries with identified overlaps are replaced with the overlap partitions, resulting in a reduced number of unique entries, such as may be intended as a reduced-information-content supplement to the original set, which we speculate could be beneficial in the context of a curriculum learning regime \citep{Bengio:09}. A third version, `spl5', is similar to the second, with the distinction that entries not replaced with identified overlap partitions are instead replaced with an infill plug value, such as to avoid too much redundancy between different configurations derived from the same set. The fourth shown version, `sp15', is comparable to `splt' but allows multiple concurrent activations per entry, demonstrating a tradeoff between information retention and the dimensionality of the returned data. Each of these versions has a corresponding variant with improved computational efficiency for test set processing in the postmunge(.) function, based on incorporating the assumption that the set of unique entries in the test set will be the same as or a subset of those found in the train set.
Some further variations include `sp19' in which concurrent activation sets are collectively consolidated with a binary transform to reduce dimensionality. Another variation is available as `sbst' which instead of comparing string character subsets of entries to string character subsets of other entries, only compares string character subsets of entries to complete character sets of other entries.
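Given a dictionary of detected overlaps (such as might come from the parsing step; the dictionary and helper names here are hypothetical), the `spl2' and `spl5' style replacements reduce to simple lookups:

```python
def spl2_style(values, overlaps):
    """Replace entries that have an identified overlap partition with
    that partition; other entries pass through unchanged, yielding a
    reduced number of unique entries ('spl2'-style sketch)."""
    return [overlaps.get(v, v) for v in values]

def spl5_style(values, overlaps, plug='zzzinfill'):
    """As spl2_style, but entries without an identified overlap are
    replaced with an infill plug value, reducing redundancy with the
    source column ('spl5'-style sketch)."""
    return [overlaps.get(v, plug) for v in values]

# hypothetical overlaps, e.g. as detected from a train set
overlaps = {'chrome 62.0': 'chrome ', 'chrome 49.0': 'chrome '}
values = ['chrome 62.0', 'safari 11.0', 'chrome 49.0']
```

Both variants return a set with fewer unique entries than the source, suitable for downstream categoric encoding.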
\section{Parsing Unbounded Sets}
The transformations of the preceding section were somewhat constrained toward use against categoric sets with a bounded range of unique entries in the training data, due to the complexity scaling of the train set implementation. For cases where categoric sets may be presented with an unbounded range of unique entries, such as sets consisting of all-unique entries, Automunge offers a few more string parsing transformations to extract grammatical structure prior to categoric encodings.
As a first example, categoric entries may be passed to a search function, `srch', in which entries are string parsed to identify the presence of user-specified substring partitions, which are then presented to machine learning by way of recorded activations in returned boolean columns specific to each search term, or alternatively with the set of activations collected into a single ordinal-encoded column. Some variations on the search functions include the allowance to aggregate multiple search terms into common activations, or variations for improved processing efficiency based on added assumptions of whether the target set has a narrow range of entries. Note that these methods are supported by the Automunge infrastructure, which allows a user to pass transformation-function-specific parameters to those applied to distinct columns, or to reset transformation function parameter defaults in the context of an automunge(.) call.
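The boolean-column form of the search operation can be sketched with pandas (a hypothetical helper, not the library's internal implementation):

```python
import pandas as pd

def srch_sketch(column, search_terms):
    """'srch'-style sketch: one boolean activation column per
    user-specified substring search term."""
    activations = {}
    for term in search_terms:
        name = '{}_srch_{}'.format(column.name, term)
        activations[name] = column.astype(str).str.contains(
            term, regex=False).astype(int)
    return pd.DataFrame(activations)
```

An ordinal variant would instead collect the per-term activations into a single integer-encoded column.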
Another string parsing option, suitable for application to both bounded and unbounded categoric sets, is intended for detecting numeric portions of categoric string entries, which are extracted and returned in a dedicated numeric column. Several numeric formats are supported, including entries with commas via `nmcm', and using the family tree primitives the returned sets may in the same operation be normalized, such as with z-score or min-max scaling. As with the other string parsing methods, priority of extracts is given to identified partitions with the longest character length. A comparable operation may instead be performed to extract string partitions from majority-numeric entries, although we suggest caution in applying these methods to numerical sets which include units of measurement, for instance, as we recommend reserving engineering domain evaluations for oversight by human intelligence.
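A minimal sketch of such a numeric extraction, supporting comma-delimited thousands and prioritizing the longest identified partition (the helper name and regex are illustrative assumptions, not Automunge's internal code):

```python
import re
import pandas as pd

def nmcm_sketch(column):
    """Extract the longest numeric partition from each categoric string
    entry, supporting comma-delimited thousands ('nmcm'-style sketch)."""
    pattern = re.compile(r'\d{1,3}(?:,\d{3})+(?:\.\d+)?|\d+(?:\.\d+)?')
    def extract(value):
        matches = pattern.findall(str(value))
        if not matches:
            return None  # left as an infill target for downstream handling
        longest = max(matches, key=len)
        return float(longest.replace(',', ''))
    return column.map(extract)
```

A downstream normalization (e.g.\ z-score) could then be applied to the returned numeric column, as the family tree aggregations allow.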
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{Figure_3_052820.png}
\caption{Numeric extraction and search for unbounded sets}
\end{figure}
\section{Family Tree Aggregations}
An example composition of string parsing transformation aggregations, including generations and branches of derivations by way of entries to the family tree primitives, is now demonstrated for the root transformation category `or19' [Fig. 4], which is available for assignment to source columns in the context of an automunge(.) call. This transformation set is intended to automatically extract grammatical context from tabular data categoric features with a bounded range of unique entries.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{or19_and_or20_treesandcolumns_rev052520.jpg}
\caption{Example family tree aggregations for bounded categoric sets}
\end{figure}
The sequence of four character keys represent transformation functions based on the transformation categories applied to a column. Note that each key represents a set of functions which may include one for application to train/test set(s) in an automunge(.) call for initial fitting of transformations to properties of a train set or a corresponding function for processing of a comparably formatted test set in postmunge(.) on the basis of properties from the train set. The steps of transformations for each returned column are logged by way of transformation function specific suffix appenders to the original column headers. Note also that some of the intermediate steps of transformations may not be retained in the returned set based on presence of downstream replacement primitive entries to family tree primitives as described further below.
The upstream application of an `UPCS' transform serves the purpose of converting categoric entry strings to all uppercase characters, thus consolidating entries with different case configurations; e.g.\ the unique entry strings \{`usa', `Usa', `USA'\} in a received set would all be consolidated into an equivalent entry (there may be some domains where this convention is not preferred, in which case a user may deactivate a parameter to exclude this step). Adjacent to the `UPCS' transform is a `NArw' transform, which returns boolean activations for those rows corresponding to infill, based on the root category's ``processdict'' defined type of source column values that will be targets for infill (processdict is a data structure discussed further below, whose entries may either be a default or user specified). The included `1010' transform for binary encoding distinguishes all distinct entries prior to any string parsing aggregations, ensuring full information retention after the `UPCS' transform, both for purposes of ML training and to support any later inversion operation to recover the original format of the data pre-transforms, as is supported in the postmunge(.) function. An alternate configuration could replace `1010' with another categoric transform such as one-hot or ordinal encodings. The `nmc7' transformation function is similar to the `nmcm' discussed earlier, but only string parses those unique entries which were not found in the train set, for a more efficient application in the postmunge(.) function. These numeric extractions are followed by a `nmbr' function for z-score normalization. Note that in some cases a numerical extract, such as those derived here from the zip codes of passed addresses, may in fact be more suitable to a categoric encoding instead of numeric normalization. Such alternate configurations may easily be defined with the family tree primitives.
The remaining branches downstream of the `UPCS' start with a `spl9' function performing string parsing replacement operations comparable to the `spl2' demonstrated above, but with the assumption that the set of unique entries in the test data will be the same or a subset of the train set for efficiency considerations. The `spl9' parses through unique entry strings to identify character subset overlaps and replaces entries with the longest identified overlap, which results in returned columns with a fewer number of unique entries by aggregating entries with shared overlaps into a common representation, which may then be numerically encoded such as shown here with an `ord3' transform. The `or19' family tree also incorporates a second tier of string parsing with overlap replacement by use of the `sp10' transform (comparable to `spl5' with similar added assumptions for efficiency considerations as `spl9'). `sp10' differs from `spl9' in that unique entries without overlap are replaced in aggregate with an infill plug value to avoid unnecessary redundancy between encodings, again resulting in a reduced number of unique entries which may then be numerically encoded such as with `ord3'. Note that Fig. 4 also demonstrates an alternate root category configuration as `or20' in which an additional tier of `spl9' is incorporated prior to the `sp10'.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{Figure_5_092020.png}
\caption{Demonstration of `or19' returned data}
\end{figure}
Fig. 5 demonstrates the numerical encodings as would be returned from the application of the `or19' root category to a small example feature set of categoric strings. It might be worth restating that due to the complexity scaling of the string parsing operation this type of operation is intended preferably for categoric sets with a bounded range of unique entries in the train set. The composition of returned sets are derived based on properties of the source column received in a designated train set, and these same bases are applied to consistently prepare data for test sets, such as sets that may be intended for an inference operation. In other words, when preparing corresponding test data, the same type and order of columns are returned, with equivalent encodings for corresponding entries and equivalent activations for specific string subset overlap partitions that were found in the train set.
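This train-basis consistency can be illustrated with a minimal fit/apply pair (hypothetical helpers; in the library the recorded basis lives in the postprocess\_dict rather than a bare mapping):

```python
import pandas as pd

def fit_ordinal(train_column):
    """Fit an ordinal mapping on the train set's unique entries only."""
    uniques = sorted(train_column.dropna().unique())
    # reserve 0 as the infill encoding for missing or unseen entries
    return {entry: i + 1 for i, entry in enumerate(uniques)}

def apply_ordinal(column, mapping):
    """Apply a previously fit mapping; entries not seen in the train
    set receive the infill encoding, so test data is always prepared
    on the train set basis."""
    return column.map(mapping).fillna(0).astype(int)
```

Because the mapping is fit once and then only looked up, processing subsequent test data is cheap and always consistent with the train set encodings.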
\section{Specification}
The specification of transformation set compositions for these methods is conducted by way of transformation category entries to a set of family tree primitives. For each transformation category, these primitives distinguish the upstream transformation categories applied when that category serves as a root category for a source column, and the downstream transformation categories applied when that category is found as an entry in an upstream primitive with offspring. Downstream primitive entries are treated as upstream primitive entries for the successive generation, and the primitives further distinguish source column retention and the generation of offspring.
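The recursion over these primitives can be illustrated with a toy model using only two simplified upstream primitives ('parents' with offspring, 'cousins' without) and one downstream primitive ('children'); the category names are hypothetical and the library itself defines a richer set of primitives [Fig. 6]:

```python
def process_family(tree, category, column, depth=0, applied=None):
    """Toy recursion over a family tree: apply each upstream entry of
    the root category, and for 'parents' (primitives with offspring)
    recurse into that entry's downstream 'children', which act as root
    categories for the next generation."""
    if applied is None:
        applied = []
    entry = tree[category]
    for upstream in entry.get('parents', []) + entry.get('cousins', []):
        applied.append((column, upstream, depth))
        if upstream in entry.get('parents', []):
            for downstream in tree.get(upstream, {}).get('children', []):
                process_family(tree, downstream,
                               column + '_' + upstream, depth + 1, applied)
    return applied

# hypothetical categories: 'case' consolidates case, then spawns an
# ordinal and a binary encoding; 'miss' marks infill rows
tree = {
    'rt19': {'parents': ['case'], 'cousins': ['miss']},
    'case': {'children': ['ordl', 'bnr2']},
    'ordl': {'cousins': ['ordl']},
    'bnr2': {'cousins': ['bnr2']},
    'miss': {},
}
```

Walking this tree from the root records one generation of `case' and `miss' on the source column, followed by second-generation `ordl' and `bnr2' derivations on the `case' output.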
\begin{figure}[h]
\centering
\includegraphics[width=0.56\linewidth]{family_tree_rev_101719.png}
\caption{Family tree primitives}
\end{figure}
Transformation category family tree sets may be passed to an automunge(.) call by way of a ``transformdict'' data structure, which is complemented by a second ``processdict'' data structure populated for each transformation category containing entries for the associated transformation functions and data properties. The transformdict with transformation category entries to family tree primitives and corresponding processdict transformation function entries associated with various columns returned from the `or19' root category set are demonstrated here [Fig. 7]. Here the single processdict entry of the transformation function associated with a transformation category is an abstraction for the set of corresponding transformation functions to be directed at train and/or test set feature sets.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{String_Thery_Fig_7.png}
\caption{`or19' specifications}
\end{figure}
\section{Experiments}
Some experiments were run to compare standard tabular categoric representation techniques against parsed categoric encodings. The data set from the IEEE-CIS Kaggle competition \citep{IEEE:19} was selected based on known instances of feature sets containing serial number entries, which were expected to be good candidates for string parsing.
The experiments were supported by the Automunge library's feature importance evaluation methods, in which a model is trained on the full feature set to determine a base accuracy on a partitioned validation set, and metrics are calculated by shuffle permutation \citep{Parr:18}, where the target feature has its entries shuffled between rows to measure the damping effect on the resulting accuracy, the delta of which may serve as a metric for feature importance. Automunge actually aggregates two metrics for feature importance: the first metric is specific to a source feature, obtained by shuffling all columns derived from the same feature, in which a higher metric represents more importance; the second metric is specific to each derived column, obtained by shuffling all but the target of the columns derived from the same feature, in which a lower metric represents more relative importance among the columns derived from the same feature. The experiment findings discussed below are based on the first metric. Although other auto ML options are supported in the library, this experiment used the base configuration of Random Forest \citep{Breiman:01} for the model.
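The first (source-feature) metric can be sketched as follows, an illustrative reimplementation with a hypothetical helper name rather than the library's internal code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def shuffle_importance_sketch(X, y, feature_groups, seed=0):
    """Train once, then shuffle each source feature's derived columns
    between rows of the validation set and measure the damping of
    accuracy; a larger delta indicates a more important feature."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=seed)
    model = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
    base = model.score(X_va, y_va)
    rng = np.random.default_rng(seed)
    importances = {}
    for source, cols in feature_groups.items():
        X_shuffled = X_va.copy()
        # shuffle all columns derived from this source feature together
        X_shuffled[:, cols] = rng.permutation(X_shuffled[:, cols], axis=0)
        importances[source] = base - model.score(X_shuffled, y_va)
    return base, importances
```

Shuffling only the validation copy leaves the trained model untouched, so a single fit supports importance estimates for every source feature.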
For the experiment, the training data was pared down to the top ten features based on feature importance, in addition to two features selected as targets for the experiments, identified in the data set by `id\_30' and `id\_31'. These two features contained entries corresponding to operating system serial numbers and browser serial numbers associated with the origination of financial transactions. By inspection, there were many cases where serial numbers shared portions of grammatical structure, as for example the entries \{`Mac OS X 10\_11\_6', `Mac OS X 10\_7\_5'\} or \{`chrome 62.0', `chrome 49.0'\}. Scenarios were run in which both of these features were treated to different types of encodings, including `text' one-hot encoding, `ord3' ordinal, `1010' binary [Fig 1], and two string parse scenarios: the first with `or19' [Fig 4, 5, 7] and the second with an aggregation of `sp19' (string parse with concurrent activations consolidated by binary encoding) supplemented by `nmcm' (numeric extraction) [Fig 3] and `ord3' (ordinal encoding). The feature importance was then evaluated corresponding to each of these encoding scenarios [Table 1].
\begin{table}[h]
\caption{Feature Importance Metric Results}
\label{sample-table}
\begin{center}
\begin{tabular}{lllll}
\multicolumn{1}{c}{\bf Encoding} &\multicolumn{1}{c}{\bf Category} &\multicolumn{1}{c}{\bf Accuracy} &\multicolumn{1}{c}{\bf `id\_30'} &\multicolumn{1}{c}{\bf `id\_31'}
\\ \hline \\
one-hot & `text' & 0.98029 & 0.00135 & 0.00490 \\
ordinal & `ord3' & 0.98040 & 0.00193 & 0.00581 \\
binary & `1010' & 0.98045 & 0.00245 & 0.00699 \\
string parse & `or19' & 0.98082 & 0.00295 & 0.00914 \\
string parse & `sp19' & 0.98081 & 0.00279 & 0.00924 \\
\end{tabular}
\end{center}
\end{table}
The experiments illustrate some important points about the impact of categoric encodings even outside of string parsing. Here we see that binary encoding materially outperforms one-hot encoding and ordinal encoding. We believe that one-hot encoding is best suited for labels or otherwise for interpretability purposes. We believe ordinal encoding is best suited for sets of very high cardinality, with a large number of unique entries. We believe binary is the best default for categoric encodings outside of vectorization, and it thus serves as our categoric default under automation.
The string parsing was found to have a material benefit to both of our target features. It appears the `or19' version of string parsing was more beneficial to the `id\_30' feature and the `sp19' version to the `id\_31' feature.
Part of the challenge of benchmarking parsed categoric encodings is the nature of the application, in that performance impact of string parsing is highly dependent on data set properties, and not necessarily generalizable to a metric that would be relevant for comparison between different features or data sets. We believe this experiment has successfully demonstrated that string parsing has the potential to train better performing models in tabular applications based on improved model accuracy and feature importance in comparisons for these specific features.
\section{Discussion}
To be clear, we believe the family tree primitives [Fig 6] represent a scientifically novel framework, serving as a fundamental reframing of command line specification for multi-transform sets as may include generations and branches of derivations applied by recursion. They are built on assumptions of tidy data and that derivations are all downstream of a specific target feature set, and are well suited for the final data preprocessing steps applied prior to the application of machine learning in tabular applications. We consider these primitives a universal language for univariate data transformation specification and an improvement on mainstream practice.
Although this paper is being submitted under the subject of NLP, it should be noted that the string parsing methods demonstrated here represent a compromise relative to vocabulary vectorization. They are intended for tabular applications in esoteric domains with limited context or surrounding language, such as would be insufficient to fine-tune a pre-trained model, and are thus not suitable as input to mainstream NLP models like BERT. We have attempted in this work a comprehensive overview of various permutations of string parsing that may be applied in scenarios excluding vectorization. That is not to say that a vectorization may not still be achievable - for instance, each of the returned categoric encodings of varying information content returned from `or19' could be fed as input to an entity embedding layer [4] when the returned sets are used to train a model.
Further, this paper is not just intended to propose theory and methods. Automunge is a downloadable software toolkit, and the methods demonstrated here are available now in the library for push-button operation. It really is just as simple as passing a dataframe and designating a root category of `or19' to a target column. We believe the automation of string parsing for categoric encodings is a novel invention that will prove very useful for machine learning researchers and practitioners alike.
The value of the library extends well beyond string parsing. For instance, Automunge is an automated solution to missing data infill. In addition to the infill defaults for each transformation, a user can select for each column other infill options from the library, including ``ML infill'' in which column specific Random Forest models \citep{Breiman:01} are trained from partitioned subsets of the training data to predict infill to train and test sets. For example, when ML infill is applied to the `or19' set, each of the returned subsets will have their own trained infill model.
An important point of value is not just the transformations themselves, but the means by which they are applied between train and test sets. In a traditional numerical set normalization operation, for instance, it is not uncommon that each of these sets is evaluated individually for a mean and standard deviation, which risks inconsistency of transformations between sets; measuring before validation set extraction instead risks data leakage between training and validation operations. In a postmunge(.) test set application, all of the transformation parameters are derived from corresponding columns in the train set passed to automunge(.) after partitioning validation sets. In addition to solving these problems of inconsistency and data leakage, we speculate that at data center scale this could materially benefit the computational overhead and associated carbon intensity of inference, perhaps also relevant to edge device battery constraints.
Another key point of value for this platform is simply put the reproducibility of data preprocessing. If a researcher wants to share their results for exact duplication to the same data or similar application to comparable data, all they have to do is publish the simple python dictionary returned from an automunge(.) call, and other researchers can then exactly duplicate the preprocessing. The same goes for archival of preprocessing experiments - a source data set need only be archived once, and every performed experiment remains accessible and reproducible with this simple python dictionary.
Beyond the core points of feature engineering and infill, the Automunge library contains several other push-button methods. The goal is to automate the full workflow for tabular data for the steps between receipt of tidy data and returned sets suitable for machine learning application. Some of the options include feature importance evaluation (by shuffle permutation \citep{Breiman:01}), dimensionality reduction (including by means of PCA \citep{Jolliffe:16}, feature importance, and binary encodings), preparation for oversampling in cases of label set class imbalance \citep{Buda:18}, evaluation of data distribution drift between initial train sets and subsequent test sets, and perhaps most importantly the simplest means for consistently and efficiently processing subsequent data with postmunge(.).
Oh, and once you try it out, please let us know.
\subsubsection*{Acknowledgments}
A thank you is owed to the Kaggle IEEE-CIS competition, which helped me recognize the potential for string parsing, and to Mark Ryan, who shared a comment in Deep Learning with Structured Data that was the inspiration for the `UPCS' transform. Thanks to Stack Overflow, Python, PyPI, GitHub, Colaboratory, Anaconda, and Jupyter. Special thanks to Scikit-Learn, Numpy, Scipy Stats, and Pandas.
\section{Introduction}
A {\em $\lambda$-fold} triple system of order $v$, denoted by TS$_{\lambda}(v)$, is a pair $(V,\cal{B})$ where
$V$ is a $v$-set of points and $\cal{B}$ is a set of $3$-subsets {\em (blocks)} such
that any $2$-subset of $V$ appears in exactly $\lambda$ blocks. An {\em automorphism} of a TS$_{\lambda}(v)$ is a permutation on $V$ leaving $\cal{B}$ invariant; the automorphisms form the {\em automorphism group}. A TS$_{\lambda}(v)$ is {\em cyclic} if its automorphism group contains a $v$-cycle. If $\lambda=1$, a TS$_{\lambda}(v)$ is called a {\em Steiner triple system} and is denoted by STS$(v)$. A cyclic STS$(v)$ is denoted by CSTS$(v)$.
A TS$_\lambda(v)$ is {\em simple} if it contains no repeated blocks. A TS$_\lambda(v)$ is called {\em indecomposable} if its block set $\cal{B}$ cannot be partitioned into sets $\mathcal{B}_1$, $\mathcal{B}_2$ forming a TS$_{\lambda_1}(v)$ and a TS$_{\lambda_2}(v)$, where $\lambda_1+\lambda_2=\lambda$ with $\lambda_1,\lambda_2\geq 1$. A cyclic TS$_\lambda(v)$ is called {\em cyclically indecomposable} if its block set $\cal{B}$ cannot be partitioned into sets $\mathcal{B}_1$, $\mathcal{B}_2$ forming a cyclic TS$_{\lambda_1}(v)$ and a cyclic TS$_{\lambda_2}(v)$, where $\lambda_1+\lambda_2=\lambda$ with $\lambda_1,\lambda_2\geq 1$.
The construction of triple systems that are cyclic, simple, or indecomposable has been studied by many researchers, often one property at a time; for example,
cyclic triple systems for all $\lambda$ were constructed in \cite{colbourn2, silvesan}, simple for $\lambda=2$
in \cite{stinson} and simple for every
$v$ and $\lambda$ satisfying the necessary conditions in \cite{dehon}. Also, some of the properties were combined in studies. For example, in \cite{wang},
cyclic and simple two-fold triple systems for all admissible orders were constructed, while in \cite{archdeacon, colbournrosa, dinitz, kramer, milici, zhang2}, simple and
indecomposable designs for $\lambda=2,3,4,5,6$ and all admissible $v$ were constructed. In \cite{zhang}, simple and indecomposable designs
were constructed for all $v\geq 24\lambda -5$ satisfying the necessary conditions. For the general case of $\lambda > 6$, Colbourn and Colbourn \cite{colbourn}
constructed a single indecomposable TS$_{\lambda}(v)$ for each odd $\lambda$. Shen \cite{shen} used Colbourn and Colbourn result and some recursive constructions
to prove the necessary conditions are asymptotically sufficient. Specifically, if $\lambda$ is odd, then there exists a constant $v_0$ depending on $\lambda$ such that an
indecomposable simple TS$_\lambda(v)$ exists for all $v\geq v_0$ satisfying the necessary conditions. In \cite{gruttmuller}, the authors constructed two-fold
cyclically indecomposable triple systems for all admissible orders. The authors also exhaustively checked the cyclic triple systems TS$_\lambda(v)$ for
$\lambda=2,\;v\leq 33$ and $\lambda=3,\;v\leq21$ that are cyclically indecomposable and determined whether they are decomposable (into non-cyclic systems) or not.
In $2000$, Rees and Shalaby \cite{reesshalaby} constructed simple indecomposable two-fold cyclic triple systems for all $v\equiv 0,~1,~3,~4,~7,~\textup{and}~9~(\textup{mod}~12)$
where $v=4$ or $v\geq12$ using Skolem-type sequences. They acknowledged that the analogous problem for $\lambda >2$ is more difficult.
In $1974$, Kramer \cite{kramer} constructed indecomposable three-fold triple systems for all admissible orders. We noticed that Kramer's construction for
$v\equiv 1\;\textup{or}\;5\,(\textup{mod}\;6)$ gives also cyclic and simple designs.
In this paper, we construct three-fold triple systems having the properties of being cyclic, simple, and indecomposable for all admissible orders
$v\equiv 3~(\textup{mod}~6)$, except for $v=9$ and $v=24c+57$, $c\geq 2$.
\section{Preliminaries}
Let $D$ be a multiset of positive integers with $|D|=n$. A {\em Skolem-type sequence of order $n$} is a sequence $(s_1,\ldots,s_t)$, $t\geq 2n$, of integers $i\in D$ such that
for each $i\in D$ (counted with multiplicity) there is exactly one $j\in \{1,\ldots,t-i\}$ such that $s_j=s_{j+i}=i$. Positions in the sequence not occupied by integers $i\in D$ contain null
elements. The null elements in the sequence are also called {\em hooks}, {\em zeros} or {\em holes}. As examples, $(1,1,6,2,5,2,1,1,6,5)$ is a Skolem-type sequence
of order $5$ and $(7,5,2,0,2,0,5,7,1,1)$ is a Skolem-type sequence of order $4$.
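As an informal check (not part of the formal development), the defining property can be verified mechanically. The following Python sketch validates the two examples above; since the leftmost unmatched occurrence of a value $i$ can only pair with the position $i$ places to its right, a greedy left-to-right matching suffices. The helper name is hypothetical:

```python
def is_skolem_type(seq):
    """Verify the Skolem-type property: every nonzero value i pairs up
    into positions exactly i apart; zeros are hooks and are ignored.
    Returns the multiset D (sorted) on success, None on failure."""
    matched = [False] * len(seq)
    D = []
    for p, i in enumerate(seq):
        if i == 0 or matched[p]:
            continue
        q = p + i                  # forced partner of the leftmost copy
        if q >= len(seq) or seq[q] != i or matched[q]:
            return None
        matched[p] = matched[q] = True
        D.append(i)
    return sorted(D)

print(is_skolem_type([1, 1, 6, 2, 5, 2, 1, 1, 6, 5]))   # [1, 1, 2, 5, 6]
print(is_skolem_type([7, 5, 2, 0, 2, 0, 5, 7, 1, 1]))   # [1, 2, 5, 7]
```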
Some special Skolem-type sequences are described below.
A {\it Skolem sequence of order $n$} is a sequence $S_n=(s_{1},s_{2},\ldots, s_{2n})$ of $2n$ integers which satisfies the conditions:
\begin{enumerate}
\item for every $k\in \{1,2,\ldots,n\}$ there are exactly two elements $s_{i},s_{j}\in S_n$ such that $s_{i}=s_{j}=k$, and
\item if $s_{i}=s_{j}=k,\;i<j$, then $j-i=k.$
\end{enumerate}
Skolem sequences are also written as collections of ordered pairs $\{(a_i,b_i):1\leq i\leq n,\;b_i-a_i=i\}$ with $\cup_{i=1}^{n}\{a_i,b_i\}=\{1,2,\ldots,2n\}$.
For example, $S_5=(1,1,3,4,5,3,2,4,2,5)$ is a Skolem sequence of order $5$ or, equivalently, the collection $\{(1,2),(7,9),(3,6),(4,8),(5,10)\}$.
Equivalently, a {\it Skolem sequence of order $n$} is a Skolem-type sequence with $t=2n$ and $D=\{1,\ldots,n\}$.
A {\em hooked Skolem sequence of order $n$} is a sequence $hS_n=(s_{1},\ldots, s_{2n-1}, s_{2n+1})$ of $2n+1$ integers which satisfies the above definition, as well as $s_{2n}=0.$
As an example, $hS_6=(1,1,2,5,2,4,6,3,5,4,3,0,6)$ is a hooked Skolem sequence of order $6$ or, equivalently, the collection $\{(1,2),(3,5),(8,11),
(6,10), (4,9),(7,13)\}$.
A {\em (hooked) Langford sequence of length $n$ and defect $d$, $n>d$} is a sequence $L_d^n=(l_{i})$ of $2n$ $(2n+1)$ integers which satisfies:
\begin{enumerate}
\item for every $k\in \{d,d+1,\ldots,d+n-1\}$, there exist exactly two elements $l_{i},l_{j}\in L$ such that $l_{i}=l_{j}=k$,
\item if $l_{i}=l_{j}=k$ with $i<j$, then $j-i=k$,
\item in a hooked sequence $l_{2n}=0$.
\end{enumerate}
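The Langford variant can be checked the same way; a simplified Python sketch (hypothetical helper name; zeros are treated as hooks without checking their position) validates, for instance, the hooked Langford sequence $hL_3^6$ used later in the paper:

```python
def is_langford(seq, d, n):
    """Check a (hooked) Langford sequence of defect d and length n:
    each k in {d, ..., d+n-1} occupies exactly one pair of positions
    k apart; zeros are treated as hooks."""
    positions = {}
    for p, v in enumerate(seq, start=1):
        if v:
            positions.setdefault(v, []).append(p)
    if sorted(positions) != list(range(d, d + n)):
        return False
    return all(len(ps) == 2 and ps[1] - ps[0] == k
               for k, ps in positions.items())

# the hooked Langford sequence hL_3^6 (defect 3, length 6)
print(is_langford([8, 3, 5, 7, 3, 4, 6, 5, 8, 4, 7, 0, 6], 3, 6))  # True
```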
We noticed that Kramer's construction \cite{kramer} can be obtained using the canonical starter
$v-2,v-4,\ldots,3,1,1,3,\ldots,v-4,v-2$ and taking the base blocks $\{\{0,i,b_i\}\;(\textup{mod}\;v)\mid i=1,2,\ldots,\frac{1}{2}(v-1)\}$. So, Kramer's construction can
be obtained using Skolem-type sequences.
We prove next that Kramer's construction for indecomposable triple systems produces simple designs.
\begin{theorem}\cite{kramer}{\label{kramer}}
The blocks $\{\{0,\alpha,-\alpha\}(\textup{mod}\;v)|\alpha=1,\ldots,\frac{1}{2}(v-1)\}$ for $v\equiv 1\;\textup{or}\;5\,(\textup{mod}\;6)$ form a cyclic, simple, and indecomposable
three-fold triple system of order $v$.
\end{theorem}
\begin{proof}
Let $v=6n+1$. The design is cyclic and indecomposable \cite{kramer}. We prove that the cyclic three-fold triple system produced by
$\{\{0,\alpha,-\alpha\}(\textup{mod}\;v)|\alpha=1,\ldots,\frac{1}{2}(v-1)\}$ is also simple.
Suppose that the construction above produces $\{x,y,z\}$ as a repeated block. Any block $\{x,y,z\}$ is of the form $\{0,i,6n+1-i\}+k$ for some
$i=1,2,\ldots,\frac{1}{2}(v-1)$ and $k \in \mathbb {Z}_{6n+1}$. Hence, if $\{x,y,z\}$ is a repeated block we have
$$\{0,i_1,6n+1-i_1\}+k_1=\{0,i_2,6n+1-i_2\}+k_2$$
whence,
$$\{0,i_2,6n+1-i_2\}=\{0,i_1,6n+1-i_1\}+k$$
for some $i_1,i_2\in\{1,2,\ldots,\frac{1}{2}(v-1)\}$ and some $k\in \mathbb{Z}_{6n+1}$.
If $k=0$, we have $i_2=6n+1-i_1$ and $i_1=6n+1-i_2$, which is impossible since $6n+1-i_1> i_2$ and $6n+1-i_2> i_1$ by definition (i.e., $i_1,i_2\in\{1,2,\ldots,3n\}$
while $6n+1-i_1,\,6n+1-i_2\in\{3n+1,\ldots,6n\}$).
If $k=i_2$, we have $\begin{cases} i_1+i_2=6n+1 \\ 6n+1-i_1+i_2=6n+1-i_2 \end{cases}$ or
$\begin{cases} i_1+i_2=6n+1-i_2 \\ 6n+1-i_1+i_2=6n+1. \end{cases}$
Since both $i_1$ and $i_2$ are at most $3n$, it is impossible to have $i_1+i_2=6n+1$. For the second system, $6n+1-i_1+i_2=6n+1$ forces $i_1=i_2$, and then $i_1+i_2=6n+1-i_2$ gives $3i_2=6n+1$, which has no integer solution.
If $k=6n+1-i_2$ we have $\begin{cases} i_1+6n+1-i_2=6n+1 \\ 6n+1-i_1+6n+1-i_2=i_2+6n+1 \end{cases} \Leftrightarrow \linebreak \begin{cases} i_1+i_2=6n+1-i_1 \\ 6n+1-i_2+i_1=6n+1 \end{cases}$ or $\begin{cases} i_1+6n+1-i_2=i_2 \\ 6n+1-i_1+6n+1-i_2=6n+1. \end{cases}$
Since $6n+1-i_2 > i_2$, it is impossible to have $i_1+6n+1-i_2=i_2$. For the remaining system, $6n+1-i_2+i_1=6n+1$ forces $i_1=i_2$, and then $i_1+i_2=6n+1-i_1$ gives $3i_1=6n+1$, again impossible.
It follows that our design is simple. The case for $v=6n+5$ is similar. \end{proof}
In order to completely solve the case $\lambda=3$, we have new constructions that give cyclic, simple, and indecomposable three-fold triple systems for $v\equiv 3~(\textup{mod}\;6)$, $v\neq 9$ and
$v\neq 24c+57$, $c\geq 2$.
\section{Simple Three-fold Cyclic Triple Systems}
\begin{lemma} \label{lem1} For every $n\equiv 0 ~ \textup{or} ~1 ~(\textup{mod}\,4)$, $n\geq 8$, there is a Skolem sequence of order $n$ in
which $s_1=s_2=1$ and $s_{2n-2}=s_{2n}=2$.
\end{lemma}
\begin{proof}
To get a Skolem sequence of order $n$ for $n\equiv 0 ~ \textup{or} ~1 ~(\textup{mod}\,4)$, $n\geq 8$, take $(1,1,hL_3^{n-2})$, replace the hook with
a $2$ and add the other $2$ at the end of the sequence.
For $n=8$, take $hL_3^6=(8,3,5,7,3,4,6,5,8,4,7,0,6)$, for $n=12$ take $hL_3^{10}=(9,11,3,12,\linebreak 4, 3,7, 10,4,9,8,5,11,7,6,12,5,10,8,0,6)$ and for the
remaining $hL_3^{n-2}$ hook a $hL_4^{n-3}$ (see \cite{simpson}, Theorem 2, Case 1) to $(3,0,0,3)$.
For $n\equiv 1~(\textup{mod}~4)$, $n\geq9$, take $hL_3^{n-2}$ (see \cite{simpson}, Theorem 2, Case 1).
\end{proof}
\begin{ex}
From the above lemma we have $S_8=(1,1,8,3,5, 7,3,4,6,5,8,4,7,2,6,2)$, $S_{12}=(1,1,9,11,3,12,4,3,7, 10,4,9,8,5,11,7,6, 12,5,10,8,2,6,
2)$ and $S_{16}=(1,1,9,6,4,14,\linebreak 15,11,4,6,13,9,16,7,12,10,8, 5,11,14,7,15,5,13,8,10,12,3,16,2,3,2)$.
\end{ex}
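The sequences listed above can be checked against the conditions of Lemma \ref{lem1} mechanically; a short Python sketch (illustrative only, hypothetical helper name) does so:

```python
def satisfies_lemma1(seq):
    """Check: seq is a Skolem sequence of order n = len(seq)//2 with
    s_1 = s_2 = 1 and s_{2n-2} = s_{2n} = 2 (1-based indexing)."""
    n = len(seq) // 2
    if sorted(set(seq)) != list(range(1, n + 1)):
        return False
    pairs = {k: [p for p, v in enumerate(seq, 1) if v == k]
             for k in range(1, n + 1)}
    skolem = all(len(ps) == 2 and ps[1] - ps[0] == k
                 for k, ps in pairs.items())
    return (skolem and seq[0] == seq[1] == 1
            and seq[2 * n - 3] == seq[2 * n - 1] == 2)

S8 = [1, 1, 8, 3, 5, 7, 3, 4, 6, 5, 8, 4, 7, 2, 6, 2]
print(satisfies_lemma1(S8))  # True
```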
We use the following construction to get cyclic TS$_3(2n+1)$ for $n\equiv 0$ or $1$ (mod $4$):
\begin{construction} \cite{silvesan} {\label {c5}}
Let $S_n=(s_1,s_2,\ldots,s_{2n})$ be a Skolem sequence of order $n$ and let $\{(a_i,b_i)|1\leq i\leq n\}$ be the pairs of positions
in $S_n$ for which $b_i-a_i=i$. Then the set $\cal{F}$=$\{\{0,i,b_i\}|1\leq i\leq n\}(\textup{mod}\; 2n+1)$ is a $(2n+1,3,3)$-DF. Hence, the triples in $\cal{F}$ form the base blocks of a cyclic TS$_3(2n+1)$.
\end{construction}
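Construction \ref{c5} can be illustrated computationally: the sketch below (illustrative, hypothetical helper names) builds the base blocks from the Skolem sequence $S_5$ of the preliminaries and confirms that every nonzero difference mod $2n+1$ is covered exactly three times, i.e., the $\lambda=3$ difference-family property:

```python
from collections import Counter

def base_blocks(skolem):
    """From a Skolem sequence of order n, form {0, i, b_i} for each i,
    where b_i is the larger (1-based) position of the pair for i."""
    n = len(skolem) // 2
    b = {k: max(p for p, v in enumerate(skolem, 1) if v == k)
         for k in range(1, n + 1)}
    return [(0, i, b[i]) for i in range(1, n + 1)]

def covers_three_fold(blocks, v):
    """Every nonzero difference mod v must occur exactly 3 times."""
    diffs = Counter((x - y) % v for blk in blocks
                    for x in blk for y in blk if x != y)
    return all(diffs[d] == 3 for d in range(1, v))

S5 = [1, 1, 3, 4, 5, 3, 2, 4, 2, 5]
blocks = base_blocks(S5)
print(blocks)                         # [(0, 1, 2), (0, 2, 9), ...]
print(covers_three_fold(blocks, 11))  # True
```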
Then, we apply Construction \ref{c5} to the Skolem sequences given by Lemma \ref{lem1} to get cyclic three-fold triple systems that are simple and indecomposable.
\begin{construction}{\label{c7}}
Let $S_n=(s_1,s_2,\ldots,s_{2n})$ be a Skolem sequence of order $n$ given by Lemma \ref{lem1}, and let $\{(a_i,b_i)|1\leq i\leq n\}$ be the pairs of positions
in $S_n$ for which $b_i-a_i=i$. Then the set $\cal{F}$=$\{\{0,i,b_i\}|1\leq i\leq n\}(\textup{mod}\; 2n+1)$ form the base blocks of a cyclic, simple, and
indecomposable TS$_3(2n+1)$.
\end{construction}
\begin{ex}
If we apply Construction \ref{c7} to the Skolem sequence of order $8$: $(1,1,8,3,5,7,\linebreak 3,4,6,5,8,4,7,2,6,2)$ we get the base blocks
$\{\{0,1,2\}, \{0,2,16\},
\{0,3,7\},\{0,4,12\},\{0,5,10\},\linebreak \{0,6,15\},\{0,7,13\},\{0,8,11\}\} (\textup{mod}~17)$. These base blocks form a cyclic TS$_3(17)$ by Construction \ref{c5}.
We are going to prove next that this design is also indecomposable and simple.
\end{ex}
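Before the formal argument, the simplicity of this particular TS$_3(17)$ can also be confirmed by brute force; the sketch below (illustrative, not part of the proof) develops every base block through the full cyclic group and checks for repeated blocks:

```python
base = [(0, 1, 2), (0, 2, 16), (0, 3, 7), (0, 4, 12),
        (0, 5, 10), (0, 6, 15), (0, 7, 13), (0, 8, 11)]
v = 17

# develop every base block through the full cyclic group Z_17
blocks = [frozenset((x + k) % v for x in blk) for blk in base for k in range(v)]

# a TS_3(17) has 3*17*16/6 = 136 blocks; simplicity means no repeats
print(len(blocks), len(set(blocks)))  # 136 136
```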
\begin{theorem} \label{th1}
The TS$_3(6n+3)$, $n\geq2$, produced by applying Construction \ref{c7} are simple, except for $v=24c+57$, $c\geq 2$.
\end{theorem}
\begin{proof}
Let $v=2n+1$, $n\equiv 0 ~ \textup{or} ~1\,(\textup{mod}\,4)$, $n\geq 8$.
Suppose that the construction above produces $\{x,y,z\}$ as a repeated block. With regards to Construction \ref{c7}, any block $\{x,y,z\}$ is of the
form $\{0,i,b_i\}+k$ for some $i=1,2,\ldots,n$ and $k \in \mathbb {Z}_{2n+1}$. Hence, if $\{x,y,z\}$ is a repeated block we have
$$\{0,i_1,b_{i_1}\}+k_1=\{0,i_2,b_{i_2}\}+k_2$$
whence,
$$\{0,i_2,b_{i_2}\}=\{0,i_1,b_{i_1}\}+k$$
for some $i_1,i_2\in\{1,2,\ldots,n\}$ and some $k\in \mathbb{Z}_{2n+1}$.
If $k=0$, we have $i_2=b_{i_1}$ and $i_1=b_{i_2}$ which is impossible since $b_{i_1}\geq i_1+1$ and $b_{i_2}\geq i_2+1$ from the definition of a Skolem sequence.
If $k=i_2$, we have $\begin{cases} i_1+i_2=2n+1 \\ b_{i_1}+i_2=b_{i_2} \end{cases}$ or $\begin{cases} i_1+i_2=b_{i_2} \\ b_{i_1}+i_2=2n+1. \end{cases}$
Since both $i_1$ and $i_2$ are at most $n$, it is impossible to have $i_1+i_2=2n+1$.
If $k=b_{i_2}$, we have $\begin{cases} i_1+b_{i_2}=2n+1 \\ b_{i_1}+b_{i_2}=i_2+2n+1 \end{cases} \Leftrightarrow \begin{cases} i_1+i_2=b_{i_1} \\ b_{i_2}+i_1=2n+1 \end{cases}$ or $\begin{cases} i_1+b_{i_2}=i_2 \\ b_{i_1}+b_{i_2}=2n+1. \end{cases}$
Since $b_{i_2} > i_2$ , it is impossible to have $i_1+b_{i_2}=i_2$.
So, to prove that a system has no repeated blocks it is enough to show that neither $\begin{cases} i_1+i_2=b_{i_2}
\\ b_{i_1}+i_2=2n+1 \end{cases}$ nor $\begin{cases} i_1+i_2=b_{i_1} \\ b_{i_2}+i_1=2n+1 \end{cases}$ is satisfied.
Also, we show that $i=\frac{v}{3}$ and $b_{i}=\frac{2v}{3}$ cannot occur, which means that our system has no short orbits.
For $n=8$ and $n=12$, it is easy to see that the Skolem sequences of order $n$ given by Lemma \ref{lem1} produce simple designs.
For $n\equiv 0(\textup{mod}~4),~n\geq 16$, let $S_n$ be the Skolem sequence given by Lemma \ref{lem1}. This Skolem sequence is constructed using
the hooked Langford sequence $hL_4^{n-3}$ from \cite{simpson}, Theorem 2, Case 1. Since $d=4$, we will use only lines $(1)-(7),(14),(8*),(10*)$ and
$(11*)$ in Simpson's Table. Note that $n-3=9+4r$ in Simpson's Table, so $n=12+4r$ and $v=25+8r$, $r\geq 1$ in this case. Because we add the pair
$(1,1)$ at the beginning of the Langford sequence $hL_4^{n-3}$, $a_i$ and $b_i$ will be shifted to the right by two positions. To make it easier
for the reader, we give in Table \ref{hL4n-3} the $hL_4^{n-3}$ taken from Simpson's Table and adapted for our case.
\begin{table}[!h]
\begin{center}
\begin{tabular}{ccccc}
\hline
& $a_i+2$ & $b_i+2$ & $i=b_i-a_i$ & $0\leq j\leq $\\
\hline
$(1)$ & $2r+3-j$ & $2r+7+j$ & $4+2j$ & $r$\\
$(2)$ & $r+2-j$ & $3r+9+j$ & $2r+7+2j$ & $r-1$\\
$(3)$ & $6r+12-j$ & $6r+17+j$ & $5+2j$ & $r-1$\\
$(4)$ & $5r+12-j$ & $7r+18+j$ & $2r+6+2j$ &$r$\\
$(5)$ & $3r+8$ & $7r+17$ & $4r+9$ &-\\
$(6)$ & $4r+9$ & $8r+21$ & $4r+12$ &-\\
$(7)$ & $2r+6$ & $6r+13$ & $4r+7$ & -\\
$(14)$ & $2r+5$ & $6r+16$ & $4r+11$ & -\\
$(8*)$ & $4r+11$ & $8r+19$ & $4r+8$ & -\\
$(10*)$ & $4r+10$ & $6r+15$ & $2r+5$ & -\\
$(11*)$ & $2r+4$ & $6r+14$ & $4r+10$ & -\\
\hline
\end{tabular}
\caption{$hL_4^{n-3}$} \label{hL4n-3}
\end{center}
\end{table}
So, the base blocks of the cyclic designs
produced by Construction \ref{c7} are $\{0,1,2\},\{0,2,v-1\},\{0,3,v-2\}$ and $\{0,i,b_i+2\}$ for $i=4,\ldots,n$ and $i=b_i-a_i$.
We show first that $i=\frac{v}{3}$ and $b_{i}+2=\frac{2v}{3}$ cannot occur in the above system. In the first three base blocks it is obvious
that $i\neq \frac{v}{3}$. For the remaining base blocks we check lines $(1)-(7),(14),(8*),(10*)$ and $(11*)$ in Table \ref{hL4n-3}.
Line $(1)$ $\begin{cases} 4+2j=\frac{25+8r}{3} \\ 2r+7+j=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(2)$ $\begin{cases} 2r+7+2j=\frac{25+8r}{3} \\ 3r+9+j=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(3)$ $\begin{cases} 5+2j=\frac{25+8r}{3} \\ 6r+17+j=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(4)$ $\begin{cases} 2r+6+2j=\frac{25+8r}{3} \\ 7r+18+j=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(5)$ $\begin{cases} 4r+9=\frac{25+8r}{3} \\ 7r+17=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(6)$ $\begin{cases} 4r+12=\frac{25+8r}{3} \\ 8r+21=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(7)$ $\begin{cases} 4r+7=\frac{25+8r}{3} \\ 6r+13=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(14)$ $\begin{cases} 4r+11=\frac{25+8r}{3} \\ 6r+16=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(8*)$ $\begin{cases} 4r+8=\frac{25+8r}{3} \\ 8r+19=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(10*)$ $\begin{cases} 2r+5=\frac{25+8r}{3} \\ 6r+15=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(11*)$ $\begin{cases} 4r+10=\frac{25+8r}{3} \\ 6r+14=\frac{2(25+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Therefore, this system has no short orbits.
Next, we have to check that $\begin{cases} i_1+i_2=b_{i_2}
\\ b_{i_1}+i_2=2n+1 \end{cases}$ or $\begin{cases} i_1+i_2=b_{i_1} \\ b_{i_2}+i_1=2n+1 \end{cases}$ are not satisfied.
Lines $(1)-(1)$: $\begin{cases} 4+2j_1+4+2j_2=2r+7+j_2 \\ 4+2j_2+2r+7+j_1=25+8r \end{cases} \Leftrightarrow \begin{cases} j_1=\frac{-2r-15}{3} \\
j_2=\frac{10r+27}{3} \end{cases}$ which is impossible since $j_1\geq 0$ and also integer.
Lines $(1)-(2)$: $\begin{cases} 4+2j_1+2r+7+2j_2=3r+9+j_2 \\ 2r+7+2j_2+2r+7+j_1=25+8r \end{cases} \Leftrightarrow \begin{cases} j_1=\frac{-2r-15}{3} \\ j_2=r-2-2j_1
\end{cases}$
which is impossible since $j_1\geq 0$ and also integer.
Lines $(1)-(3)$: $\begin{cases} 4+2j_1+5+2j_2=6r+17+j_2 \\ 5+2j_2+2r+7+j_1=25+8r \end{cases} \Leftrightarrow \begin{cases} j_1=j_2-5 \\ j_2=2r+9\end{cases}$
which is impossible since $j_2\leq r-1$.
Lines $(1)-(4)$: $\begin{cases} 4+2j_1+2r+6+2j_2=7r+18+j_2 \\ 2r+6+2j_2+2r+7+j_1=25+8r \end{cases} \Leftrightarrow \begin{cases} j_1=j_2+r-4 \\ j_2=\frac{3r+16}{3} \end{cases}$
which is impossible since $j_2\leq r$.
Lines $(1)-(5)$: $\begin{cases} 4r+2j+13=7r+17 \\ j+6r+16=25+8r \end{cases} \Leftrightarrow \emptyset$.
Lines $(1)-(6)$: $\begin{cases} 4r+2j+16=8r+21 \\ j+6r+19=25+8r \end{cases} \Leftrightarrow \emptyset$.
Lines $(1)-(7)$: $\begin{cases} 4r+2j+11=6r+13 \\ j+6r+14=25+8r \end{cases} \Leftrightarrow \emptyset$.
Lines $(1)-(14)$: $\begin{cases} 4r+2j+15=6r+16 \\ j+6r+18=25+8r \end{cases} \Leftrightarrow \emptyset$.
Lines $(1)-(8*)$: $\begin{cases} 4r+2j+12=8r+19 \\ j+6r+15=25+8r \end{cases} \Leftrightarrow \emptyset$.
Lines $(1)-(10*)$: $\begin{cases} 2r+2j+9=6r+15 \\ j+4r+12=25+8r \end{cases} \Leftrightarrow \emptyset$.
Lines $(1)-(11*)$: $\begin{cases} 4r+2j+14=6r+14 \\j+6r+17=25+8r \end{cases} \Leftrightarrow \emptyset$.
We implemented a program in Mathematica that checks all the pairs of rows in Simpson's table using the above approach. The code for the program and the results can be found in the Appendix.
From the results, we can easily see that for almost all combinations of two lines in Simpson's Table the conditions are not satisfied. There are
two cases where the conditions are satisfied. The first case is when we check line $3$ with line $1$: for $r=4+3c$, $j_1=2c$, and $j_2=6+2c$, $c\geq 2$, the system is
not simple. This implies that our system is not simple when $v=24c+57$, $c\geq2$. The second case is when we check line $3$ with line $2$. Here, we get $r=5$ and therefore $v=59$.
But $v=59$ is not congruent to $3$ (mod $6$). A TS$_3(59)$ is simple, cyclic, and indecomposable by Theorem \ref{kramer}.
For $n\equiv 1(\textup{mod}~4), n\geq 9$, let $S_n$ be the Skolem sequence given by Lemma \ref{lem1}. This Skolem sequence is constructed
using a $hL_3^{n-2}$ (\cite{simpson}, Theorem 2, Case 1). Since $d=3$, we will use only lines
$(1)-(6),(14),(7'),(8')$ and $(10')$ in Simpson's Table. Note that $n-2=7+4r$ in Simpson's Table, so $n=9+4r$ and $v=19+8r$ in this case.
Because we add the pair $(1,1)$ at the beginning of the Langford sequence $hL_3^{n-2}$, $a_i$ and $b_i$ will be shifted to the right
by two positions.
Table \ref{hL3n-2} gives the $hL_3^{n-2}$ from Simpson's Table adapted to our case.
\begin{table}[!h]
\begin{center}
\begin{tabular}{ccccc}
\hline
& $a_i+2$ & $b_i+2$ & $i=b_i-a_i$ & $0\leq j\leq $\\
\hline
$(1)$ & $2r+3-j$ & $2r+6+j$ & $3+2j$ & $r$\\
$(2)$ & $r+2-j$ & $3r+8+j$ & $2r+6+2j$ & $r-1$\\
$(3)$ & $6r+10-j$ & $6r+14+j$ & $4+2j$ & $r-1$\\
$(4)$ & $5r+10-j$ & $7r+15+j$ & $2r+5+2j$ &$r$\\
$(5)$ & $3r+7$ & $7r+14$ & $4r+7$ &-\\
$(6)$ & $4r+8$ & $8r+17$ & $4r+9$ &-\\
$(14)$ & $2r+4$ & $6r+12$ & $4r+8$ & -\\
$(7')$ & $2r+5$ & $6r+11$ & $4r+6$ & -\\
$(10')$ & $4r+9$ & $6r+13$ & $2r+4$ & -\\
\hline
\end{tabular}
\caption{$hL_3^{n-2}$} \label{hL3n-2}
\end{center}
\end{table}
So, the base blocks of the cyclic designs produced by Construction \ref{c7} are $\{0,1,2\},\{0,2,v-1\}$ and $\{0,i,b_i+2\}$ for
$i=3,\ldots,n$. Using the same argument as before, we show that these designs are simple.
First, we show that $i=\frac{v}{3}$ and $b_{i}+2=\frac{2v}{3}$ cannot occur in the above system. In the first two base blocks it is obvious
that $i\neq \frac{v}{3}$. For the remaining base blocks we check lines $(1)-(6),(14),(7')$ and $(10')$ in Table \ref{hL3n-2}.
Line $(1)$ $\begin{cases} 3+2j=\frac{19+8r}{3} \\ 2r+6+j=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(2)$ $\begin{cases} 2r+6+2j=\frac{19+8r}{3} \\ 3r+8+j=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(3)$ $\begin{cases} 4+2j=\frac{19+8r}{3} \\ 6r+14+j=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(4)$ $\begin{cases} 2r+5+2j=\frac{19+8r}{3} \\ 7r+15+j=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(5)$ $\begin{cases} 4r+7=\frac{19+8r}{3} \\ 7r+14=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(6)$ $\begin{cases} 4r+9=\frac{19+8r}{3} \\ 8r+17=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(14)$ $\begin{cases} 4r+8=\frac{19+8r}{3} \\ 6r+12=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(7')$ $\begin{cases} 4r+6=\frac{19+8r}{3} \\ 6r+11=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(10')$ $\begin{cases} 2r+4=\frac{19+8r}{3} \\ 6r+13=\frac{2(19+8r)}{3} \end{cases} \Leftrightarrow \emptyset$.
Next, we have to check that $\begin{cases} i_1+i_2=b_{i_2}
\\ b_{i_1}+i_2=2n+1 \end{cases}$ or $\begin{cases} i_1+i_2=b_{i_1} \\ b_{i_2}+i_1=2n+1 \end{cases}$ are not satisfied. As with the previous case, the results can be found in the Appendix. As before, when we check line $3$ with line $1$ the conditions are satisfied. But, in this case $v=24c+35$, $c\geq1$, which is not congruent to $3$ (mod $6$).
So, a TS$_3(24c+35)$ for $c\geq 1$ is cyclic, simple, and indecomposable by Theorem \ref{kramer}.\end{proof}
\begin{lemma} \label{lem2} For every $n\equiv 2 ~ \textup{or} ~3 ~(\textup{mod}\,4)$, $n\geq 7$, there is a hooked Skolem sequence of order $n$ in which $s_1=s_2=1$ and
$s_{2n-1}=s_{2n+1}=2$.
\end{lemma}
\begin{proof}
For $n\equiv 2 ~ \textup{or} ~3 ~(\textup{mod}\,4)$, $n\geq 7$, take $hS_n=(1,1,L_3^{n-2},2,0,2)$.
When $n\equiv 2~(\textup{mod}~4)$, take $L_3^{n-2}$ (\cite{simpson}, Theorem 1, Case 3). When $n\equiv 3~(\textup{mod}~4)$, take
$L_3^5=(6,7,3,4,5,3,6,4,7,5) $ and for $n\geq 11$ take $L_3^{n-2}$ (see \cite{bermond}, Theorem 2).
\end{proof}
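For $n=7$ the recipe of Lemma \ref{lem2} can be checked directly; the sketch below (illustrative, hypothetical helper name) assembles $hS_7=(1,1,L_3^5,2,0,2)$ and verifies both the hooked Skolem property and the required endpoint pattern:

```python
def is_hooked_skolem(seq):
    """Hooked Skolem sequence of order n: length 2n+1, a single zero in
    position 2n, each k in 1..n at exactly one pair of positions k apart."""
    n = (len(seq) - 1) // 2
    if seq[2 * n - 1] != 0 or seq.count(0) != 1:
        return False
    pairs = {k: [p for p, s in enumerate(seq, 1) if s == k]
             for k in range(1, n + 1)}
    return all(len(ps) == 2 and ps[1] - ps[0] == k for k, ps in pairs.items())

L35 = [6, 7, 3, 4, 5, 3, 6, 4, 7, 5]      # L_3^5 from the proof
hS7 = [1, 1] + L35 + [2, 0, 2]
print(is_hooked_skolem(hS7))                              # True
print(hS7[0] == hS7[1] == 1 and hS7[-3] == hS7[-1] == 2)  # True
```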
We are going to use the following construction to get cyclic TS$_3(2n+1)$ for $n\equiv 2$ or $3$ (mod $4$):
\begin{construction} \cite{silvesan} {\label{c6}}
Let $hS_n=(s_1,s_2,\ldots,s_{2n-1},s_{2n+1})$ be a hooked Skolem sequence of order $n$ and let $\{(a_i,b_i)|1\leq i\leq n\}$ be
the pairs of positions in $hS_n$ for which $b_i-a_i=i$. Then the set $\cal{F}$=$\{\{0,i,b_i+1\}|1\leq i\leq n\}(\textup{mod}\;2n+1)$ is a $(2n+1,3,3)$-DF. Hence, the triples in $\cal{F}$ form the base blocks of a cyclic TS$_3(2n+1)$.
\end{construction}
Then, we apply Construction \ref{c6} to the hooked Skolem sequences given by Lemma \ref{lem2} to get cyclic TS$_3(2n+1)$ for $n\equiv 2$ or $3$ (mod $4$) that are simple and indecomposable.
\begin{construction} {\label{c8}}
Let $hS_n=(s_1,s_2,\ldots,s_{2n-1},s_{2n+1})$ be a hooked Skolem sequence of order $n$ given by Lemma \ref{lem2}, and let $\{(a_i,b_i)|1\leq i\leq n\}$ be
the pairs of positions in $hS_n$ for which $b_i-a_i=i$. Then, the set $\cal{F}$=$\{\{0,i,b_i+1\}|1\leq i\leq n\}(\textup{mod}\;2n+1)$ form the base
blocks of a cyclic, simple, and indecomposable TS$_3(2n+1)$.
\end{construction}
\begin{ex}
If we apply Construction \ref{c8} to the hooked Skolem sequence of order $7$: $(1,1,6,7,3,4,5,3,6,4,7,5,2,0,2)$ we get the base blocks $\{\{0,1,3\},\linebreak \{0,2,1\},
\{0,3,9\},\{0,4,11\}, \{0,5,13\},\{0,6,10\},\{0,7,12\}\} (\textup{mod}~15)$. These base blocks form a cyclic TS$_3(15)$ by Construction
\ref{c6}. We are going to prove next that this design is indecomposable and simple.
\end{ex}
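The claim of the example can be verified numerically; the sketch below (illustrative only) rebuilds the base blocks $\{0,i,b_i+1\}$ from $hS_7$ and confirms that every nonzero difference mod $15$ occurs exactly three times:

```python
from collections import Counter

v, n = 15, 7
hS7 = [1, 1, 6, 7, 3, 4, 5, 3, 6, 4, 7, 5, 2, 0, 2]

# b_i: the larger (1-based) position of the pair for i; blocks {0, i, b_i+1}
b = {k: max(p for p, s in enumerate(hS7, 1) if s == k) for k in range(1, n + 1)}
blocks = [(0, i, (b[i] + 1) % v) for i in range(1, n + 1)]
diffs = Counter((x - y) % v for blk in blocks for x in blk for y in blk if x != y)
print(sorted(blocks))                           # includes (0, 2, 1) since 16 = 1 mod 15
print(all(diffs[d] == 3 for d in range(1, v)))  # True
```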
\begin{theorem} \label{th2}
The TS$_3(6n+3)$, $n\geq 2$, produced by applying Construction \ref{c8}, are simple.
\end{theorem}
\begin{proof}
The proof is similar to Theorem \ref{th1}. Let $v=2n+1$, $n\equiv 2 ~ \textup{or} ~3\,(\textup{mod}\,4)$, $n\geq 10$.
For $n\equiv 2(\textup{mod}~4),~n\geq 10$, let $hS_n$ be the hooked Skolem sequence given by Lemma \ref{lem2}. This hooked Skolem sequence is
constructed using the Langford sequence $L_3^{n-2}$ from \cite{simpson}, Theorem 1, Case 3. Since $d=3$, we will use only lines $(1)-(4),(6),(9),(11)$ and
$(13)$ in Simpson's Table. Note that $m=n-2=4r$ in Simpson's Table, so $n=4r+2$, $v=8r+5$, $r\geq 2$, $d=3$, $s=1$, in this case. Because we add the pair $(1,1)$ at the
beginning of the Langford sequence $L_3^{n-2}$, $a_i$ and $b_i$ will be shifted to the right by two positions. To make it easier for the reader, we
give in Table \ref{L3n-2}, the $L_3^{n-2}$ taken from Simpson's Table and adapted for our case (omit row $(1)$ when $r=2$).
\begin{table}[!h]
\begin{center}
\begin{tabular}{ccccc}
\hline
& $a_i+2$ & $b_i+2$ & $i=b_i-a_i$ & $0\leq j\leq $\\
\hline
$(1)$ & $2r-j$ & $2r+4+j$ & $4+2j$ & $r-3$\\
$(2)$ & $r+2-j$ & $3r+3+j$ & $2r+1+2j$ & $r-1$\\
$(3)$ & $6r+1-j$ & $6r+4+j$ & $3+2j$ & $r-2$\\
$(4)$ & $5r+2-j$ & $7r+4+j$ & $2r+2+2j$ &$r-2$\\
$(6)$ & $2r+3$ & $4r+3$ & $2r$ &-\\
$(9)$ & $3r+2$ & $7r+3$ & $4r+1$ &-\\
$(11)$ & $2r+1$ & $6r+3$ & $4r+2$ & -\\
$(13)$ & $2r+2$ & $6r+2$ & $4r$ & -\\
\hline
\end{tabular}
\caption{$L_3^{n-2}$} \label{L3n-2}
\end{center}
\end{table}
So, the base blocks of the cyclic designs
produced by Construction \ref{c8} are $\{0,1,3\},\{0,2,1\}$ and $\{0,i,b_i+2+1\}$ for $i=3,\ldots,n$ and $i=b_i-a_i$.
First, we show that $i=\frac{v}{3}$ and $b_{i}+2+1=\frac{2v}{3}$ cannot occur in the above system. In the first two base blocks it is obvious
that $i\neq \frac{v}{3}$. For the remaining base blocks we check lines $(1)-(4),(6),(9),(11)$ and $(13)$ in Table \ref{L3n-2}.
Line $(1)$ $\begin{cases} 4+2j=\frac{8r+5}{3} \\ 2r+5+j=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow r=\frac{1}{4}$ which is impossible since
$r\geq 2$ and also integer.
Line $(2)$: $\begin{cases} 2r+1+2j=\frac{8r+5}{3} \\ 3r+4+j=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow r=\frac{1}{2}$, which is impossible since
$r$ is an integer with $r\geq 2$.
Line $(3)$: $\begin{cases} 3+2j=\frac{8r+5}{3} \\ 6r+5+j=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow r=-\frac{1}{2}$, which is impossible since
$r$ is an integer with $r\geq 2$.
Line $(4)$: $\begin{cases} 2r+2+2j=\frac{8r+5}{3} \\ 7r+5+j=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow r=-\frac{3}{4}$, which is impossible since
$r$ is an integer with $r\geq 2$.
Line $(6)$: $\begin{cases} 2r=\frac{8r+5}{3} \\ 4r+4=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow \emptyset$. Line $(9)$: $\begin{cases} 4r+1=\frac{8r+5}{3} \\ 7r+4=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(11)$: $\begin{cases} 4r+2=\frac{8r+5}{3} \\ 6r+4=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow \emptyset$. Line $(13)$: $\begin{cases} 4r=\frac{8r+5}{3} \\ 6r+3=\frac{2(8r+5)}{3} \end{cases} \Leftrightarrow \emptyset$.
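For the lines without the parameter $j$, the two equations determine conflicting values of $r$, which we indicate by $\emptyset$. For instance, for line $(6)$,
\[
2r=\frac{8r+5}{3} \Leftrightarrow r=-\frac{5}{2}, \qquad\text{while}\qquad 4r+4=\frac{2(8r+5)}{3} \Leftrightarrow r=\frac{1}{2},
\]
so the system has no solution.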
Next, we have to show that $\begin{cases} i_1+i_2=b_{i_2}
\\ b_{i_1}+i_2=2n+1 \end{cases}$ or $\begin{cases} i_1+i_2=b_{i_1} \\ b_{i_2}+i_1=2n+1 \end{cases}$ are not satisfied. The results for this can be found in the Appendix. As before, when we check line $(3)$ against line $(1)$, the conditions are satisfied. But then $v=24c+5$, $c\geq2$, which is not congruent to $3$ (mod $6$). So,
by Theorem \ref{kramer}, there exists a cyclic, simple, and indecomposable TS$_3(24c+5)$ for $c\geq 2$.
For $n\equiv 3(\textup{mod}~4), n\geq 11$, let $hS_n$ be the hooked Skolem sequence given by Lemma \ref{lem2}. This hooked Skolem sequence is constructed
using an $L_3^{n-2}$ (\cite{bermond}, Theorem 2). Since $d=3$, we will use only lines
$(1)-(4),(6)-(10)$ in \cite{bermond}. Note that $m=n-2=4r+1, r\geq 2$, $e=4$ in \cite{bermond}, so $n=4r+3$ and $v=8r+7$ in this case.
Because we add the pair $(1,1)$ at the beginning of the Langford sequence $L_3^{n-2}$, $a_i$ and $b_i$ will be shifted to the right
by two positions.
Table \ref{L3n-2germa} gives the $L_3^{n-2}$ from \cite{bermond}, adapted to our case.
\begin{table}[!h]
\begin{center}
\begin{tabular}{ccccc}
\hline
& $a_i+2$ & $b_i+2$ & $i=b_i-a_i$ & $0\leq j\leq $\\
\hline
$(1)$ & $2r+2-j$ & $2r+6+j$ & $4+2j$ & $r-2$\\
$(2)$ & $r+2-j$ & $3r+5+j$ & $2r+3+2j$ & $r-2$\\
$(3)$ & $3$ & $4r+4$ & $4r+1$ & -\\
$(4)$ & $2r+4$ & $4r+5$ & $2r+1$ &-\\
$(6)$ & $r+3$ & $5r+5$ & $4r+2$ &-\\
$(7)$ & $2r+5$ & $6r+5$ & $4r$ &-\\
$(8)$ & $2r+3$ & $6r+6$ & $4r+3$ & -\\
$(9)$ & $6r+4-j$ & $6r+7+j$ & $3+2j$ & $r-2$\\
$(10)$ & $5r+4-j$ & $7r+6+j$ & $2r+2+2j$ & $r-2$\\
\hline
\end{tabular}
\caption{$L_3^{n-2}$} \label{L3n-2germa}
\end{center}
\end{table}
So, the base blocks of the cyclic designs produced by Construction \ref{c8} are $\{0,1,3\},\{0,2,1\}$ and $\{0,i,b_i+2+1\}$ for
$i=3,\ldots,n$. Using the same argument as before, we show that these designs are simple.
First, we show that $i=\frac{v}{3}$ and $b_{i}+2+1=\frac{2v}{3}$ cannot hold simultaneously in the above system. For the first two base blocks it is obvious
that $i\neq \frac{v}{3}$. For the remaining base blocks we check lines $(1)-(4),(6)-(10)$ in Table \ref{L3n-2germa}.
Line $(1)$: $\begin{cases} 4+2j=\frac{8r+7}{3} \\ 2r+7+j=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow r=\frac{3}{4}$, which is impossible since
$r$ is an integer with $r\geq 2$.
Line $(2)$: $\begin{cases} 2r+3+2j=\frac{8r+7}{3} \\ 3r+6+j=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow r=\frac{1}{2}$, which is impossible since
$r$ is an integer with $r\geq 2$.
Line $(3)$: $\begin{cases} 4r+1=\frac{8r+7}{3} \\ 4r+5=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow \emptyset$. Line $(4)$: $\begin{cases} 2r+1=\frac{8r+7}{3} \\ 4r+6=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(6)$: $\begin{cases} 4r+2=\frac{8r+7}{3} \\ 5r+6=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow \emptyset$. Line $(7)$: $\begin{cases} 4r=\frac{8r+7}{3} \\ 6r+6=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow \emptyset$.
Line $(8)$: $\begin{cases} 4r+3=\frac{8r+7}{3} \\ 6r+7=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow \emptyset$. Line $(9)$: $\begin{cases} 3+2j=\frac{8r+7}{3} \\ 6r+8+j=
\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow r=-\frac{3}{2}$, which is impossible since $r$ is an integer with $r\geq 2$.
Line $(10)$: $\begin{cases} 2r+2+2j=\frac{8r+7}{3} \\ 7r+7+j=\frac{2(8r+7)}{3} \end{cases} \Leftrightarrow r=-\frac{5}{4}$, which is impossible since $r$ is an integer with $r\geq 2$.
Next, we have to check that $\begin{cases} i_1+i_2=b_{i_2}
\\ b_{i_1}+i_2=2n+1 \end{cases}$ or $\begin{cases} i_1+i_2=b_{i_1} \\ b_{i_2}+i_1=2n+1 \end{cases}$ are not satisfied. The results for this can be found in the Appendix. Here, for $v=3c-1$, $c\geq 4$, and for $v=55$ the conditions are satisfied, but these orders are not congruent to $3$ (mod $6$). Therefore, by Theorem \ref{kramer},
there exist cyclic, simple, and indecomposable TS$_3(3c-1)$ for $c\geq 4$ and a cyclic, simple, and indecomposable TS$_3(55)$.
\end{proof}
\section{Indecomposable Three-fold Cyclic Triple Systems}
In this section, we prove that Constructions \ref{c7} and \ref{c8} produce indecomposable three-fold triple systems for $v\equiv~3~(\textup{mod}~6)$,
$v\geq 15$.
\begin{theorem}\label{ind1}
The TS$_3(v)$ produced by Constructions \ref{c7} and \ref{c8} are indecomposable for every $v\equiv~3~(\textup{mod}~6)$, $v\geq 15$.
\end{theorem}
\begin{proof}
Suppose that $v\equiv~3~(\textup{mod}~6)$ and write $v=2n+1$, $n\equiv 0 ~ \textup{or} ~1\,(\textup{mod}\,4)$, $n\geq 8$.
Now, for a TS$_3(2n+1)$ to be decomposable, there must be a Steiner triple system STS($2n+1$) inside the TS$_3(2n+1)$.
If $2n+1\equiv ~3~(\textup{mod}\,6)$, let $\{x_i,x_j,x_k\}$ be a triple using symbols from N$_{2n+1}=\{0,1,\ldots,2n\}$. Let
$d_{ij}=\textup{min}~\{|x_i-x_j|,2n+1-|x_i-x_j|\}$ be the difference between $x_i$ and $x_j$. An STS($2n+1$) on N$_{2n+1}$ must
have a set of triples with the property that each difference $d$, $1\leq d\leq n$, occurs exactly $2n+1$ times. Assume there is an
STS($2n+1$) inside our TS$_3(2n+1)$ and let $f_{\alpha}$ be the number of triples inside the STS($2n+1$) which are a cyclic shift of $\{0,\alpha,b_{\alpha}\}$.
It is enough to look at the first two base blocks of our TS$_3(2n+1)$. These are $\{0,1,2\}$ $(\textup{mod} ~ 2n+1)$ and $\{0,2,2n\}~(\textup{mod} ~ 2n+1)$.
Then the existence of an STS($2n+1$) inside our TS$_3(2n+1)$ requires that the equation $2f_1+f_2=2n+1$ have a solution in nonnegative integers (we need
the difference $1$ to occur exactly $2n+1$ times).
{\bf Case \mbox {\boldmath $1$}: \mbox {\boldmath $f_1=1$}}
Suppose we choose one block from the orbit $\{0,1,2\}(\textup{mod}\;2n+1)$. Since this orbit uses the difference $1$ twice and the difference $2$ once, and the
orbit $\{0,2,2n\}(\textup{mod} ~ 2n+1)$ uses the differences $1,\,2$ and $3$, whenever we pick one block from the first orbit we cannot choose three blocks from
the second orbit (i.e., those blocks where the pairs $(0,1)$, $(0,2)$ and $(1,2)$ are included). So, we just have $2n-2$ blocks in the second orbit to choose from.
But we need $2n-1$ blocks from the second orbit in order to cover difference $1$ exactly $2n+1$ times.
Therefore, we have no solution in this case.
{\bf Case \mbox {\boldmath$2$}: \mbox {\boldmath $f_1=2$}}
Since $f_2=\frac{2n-3}{2}$ is not an integer, we have no solution in this case.
{\bf Case \mbox {\boldmath$3$}: \mbox {\boldmath $f_1=3,5,\ldots, n\,(\textup{or}\,n-1)$}}
Similar to Case $1$. So, there is no solution in this case.
{\bf Case \mbox {\boldmath$4$}: \mbox {\boldmath $f_1=4,6,\ldots, n\,(\textup{or}\,n-1)$}}
Similar to Case $2$. So, there is no solution in this case.
{\bf Case \mbox {\boldmath$5$}: \mbox {\boldmath $f_1=0$}}
Note that our cyclic TS$_3(v)$ has no short orbits (Theorem \ref{th1}), while a cyclic STS$(v)$ will have a short orbit. Therefore, if a design
exists inside our TS$_3(v)$, that design is not cyclic.
Now, we choose no block from the first orbit and all the blocks in the second orbit (i.e., $f_1=0,\;f_2=2n+1$). Therefore, the differences $1$, $2$ and $3$ are each
covered exactly $2n+1$ times in the STS$(v)$. Among the remaining $n-2$ orbits $\{0,i,b_i\},\,i\geq 3$, there will be two or three orbits which use differences $2$ and $3$.
Since differences $1$, $2$ and $3$ are already covered, we cannot choose any block from those orbits that uses these three differences. So, we are left with $n-4$
or $n-5$ orbits to choose from.
We need to cover differences $4,5,\ldots,n$ ($n-3$ differences) each exactly $v=2n+1$ times.
We form a system of $n-3$ equations with $n-4$ or $n-5$ unknowns in the following way: when a difference appears in different orbits,
the sum of the blocks that we choose from each orbit has to equal $v$, i.e., if difference $4$ appears in $\{0,5,b_5\}$, $\{0,7,b_7\}$ and $\{0,10,b_{10}\}$ we have
$f_5+f_7+f_{10}=v$ or
if difference $4$ appears in $\{0,7,b_7\}$ twice and in $\{0,9,b_9\}$ once, we have $2f_7+f_9=v$. The system that we form has two or three non-zero entries in each row,
while all the other entries equal zero. The rows of the system can be rearranged so that we get an upper triangular matrix. Therefore, the system of equations is non-singular and it has the unique solution
$f_{i_1}=f_{i_2}=\ldots =f_{i_k}=v$ for some $4\leq i_1,i_2,\ldots,i_k\leq n$ and $f_{j_1}=f_{j_2}=\ldots =f_{j_k}=0$ for some $4\leq j_1,j_2,\ldots,j_k\leq n$. But this
implies that the STS$(v)$ inside our TS$_3(v)$ is cyclic, which is impossible.
Therefore, we have no solution in this case. It follows that our TS$_3(2n+1)$ is indecomposable.
Now, suppose that $v=2n+1$, $n\equiv 2 ~ \textup{or} ~3 ~(\textup{mod} ~4)$, $n\geq 7$. Let $f_{\alpha}$ be the number of triples inside the STS($2n+1$) which are a cyclic shift of $\{0,\alpha,b_{\alpha}+1\}$.
Using the same argument as before, it is easy to show that the equation $2f_2+f_1=2n+1$ has no solution. Therefore our TS$_3(2n+1)$ is indecomposable.
\end{proof}
\section{Cyclic, Simple, and Indecomposable Three-fold Triple Systems}
\begin{theorem}
There exist cyclic, simple, and indecomposable three-fold triple systems TS$_3(v)$ for every $v\equiv 1~(\textup{mod}~2)$, $v\geq 5$, $v\neq 9$, and $v\neq 24c+57$, $c\geq 2$.
\end{theorem}
\begin{proof}
Let $v\equiv 1~\textup{or}~5~(\textup{mod}~6)$ and take the base blocks $\{0,\alpha,-\alpha\}~(\textup{mod}\;v)$, $\alpha=1,\ldots,\frac{1}{2}(v-1)$. By Theorem
\ref{kramer}, these will be the base blocks of a cyclic, simple, and indecomposable three-fold triple system.
Let $v\equiv 3~(\textup{mod}~6)$, and write $v=2n+1$, $n\equiv 0 ~ \textup{or} ~1\,(\textup{mod}\,4)$, $n\geq 8$. Apply Construction \ref{c7} to the Skolem sequence
of order $n$ given by Lemma \ref{lem1}. These designs are cyclic by Construction \ref{c5}, simple for all $v$ except $v=24c+57$, $c\geq2$ by Theorem \ref{th1} and indecomposable by Theorem \ref{ind1}.
Let $v\equiv 3~(\textup{mod}~6)$, and write $v=2n+1$, $n\equiv 2 ~ \textup{or} ~3\,(\textup{mod}\,4)$, $n\geq 7$. Apply Construction \ref{c8} to the hooked Skolem
sequence of order $n$ given by Lemma \ref{lem2}. These designs are cyclic by Construction \ref{c6}, simple by Theorem \ref{th2} and indecomposable by
Theorem \ref{ind1}.
\end{proof}
\section{Conclusion and Open Problems}
We constructed, using Skolem-type sequences, three-fold triple systems having all the properties of being cyclic, simple, and indecomposable,
for $v\equiv~3~(\textup{mod}~6)$, except for $v=9$ and $v=24c+57$, $c\geq2$. Our results, together with Kramer's results \cite{kramer}, completely solve the problem of finding
three-fold triple systems with the three properties of being cyclic, simple, and indecomposable, with possible exceptions $v=9$ and $v=24c+57$, $c\geq2$.
In our approach to finding cyclic, simple, and indecomposable three-fold triple systems, proving the simplicity of the designs was a
tedious and long task. Another approach that we tried was to construct three disjoint (i.e., no two pairs in the same
positions) sequences of order $n$ and take the
base blocks $\{0,i,b_i+n\}, i=1,2,\ldots,n$. These base blocks form a cyclic TS$_3(6n+1)$. Alternatively, one can take three disjoint
hooked sequences of order $n$ and take the base blocks
$\{0,i,b_i+n\}, i=1,2,\ldots,n$, together with the short orbit $\{0,\frac{v}{3},\frac{2v}{3}\}$. These base blocks form a cyclic
TS$_3(6n+3)$.
\begin{ex}
For $n=5$, take the three disjoint hooked sequences of order $n$:
\begin{center}
$(1,1,4,1,1,0,4,2,3,2,0,3)$
$(2,3,2,3,3,0,3,4,1,1,0,4)$
$(4,5,5,5,4,0,5,5,5,2,0,2)$
\end{center}
Then the base blocks $\{0,1,7\},\{0,1,10\},\{0,1,15\},\{0,2,8\},\{0,2,15\},\linebreak \{0,2,17\},\{0,3,10\},
\{0,3,12\},\{0,3,17\},\{0,4,10\},\{0,4,17\},\{0,4,12\},\linebreak \{0,5,12\},\{0,5,13\},\{0,5,14\},\{0,11,22\}$ are the base blocks of a cyclic, simple, and indecomposable TS$_3(33)$.
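For the reader's convenience, note that each difference $d\in\{1,\ldots,16\}$ with $d\neq 11$ occurs exactly three times among the differences of the fifteen full-orbit base blocks (for instance, difference $6$ occurs in $\{0,1,7\}$, $\{0,2,8\}$ and $\{0,4,10\}$), while the difference $11$ is covered by developing the block $\{0,11,22\}$ modulo $33$.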
\end{ex}
The simplicity is easy to prove here, since the three hooked sequences that we used share no pairs in the same positions. On the other
hand, at first inspection, proving the indecomposability of such designs appears to be more difficult. Also, one needs to find
three disjoint such sequences for all admissible orders $n$.
\begin{problem}
Can the above approach of finding cyclic, simple, and indecomposable three-fold triple systems of order $v$ be generalized for all admissible orders?
\end{problem}
\begin{problem}
Are there cyclic, simple, and indecomposable TS$_3(24c+57)$, $c\geq 2$?
\end{problem}
\begin{problem}
Are there cyclic, simple, and indecomposable designs for $\lambda\geq 4$ and all admissible orders?
\end{problem}
\begin{problem}
For $\lambda\geq 3$ what is the spectrum of those $v$ for which there exists a cyclically indecomposable but decomposable cyclic TS$_\lambda(v)$?
\end{problem}
\begin{ex}
Let $V=\{0,1,2,3,4,5,6,7,8\}$. Then the blocks of a cyclic TS$_3(9)$ are:
$\{0,1,2\}, \{0,2,7\}, \{0,3,6\}, \{0,4,8\}$
$\{1,2,3\}, \{1,3,8\}, \{1,4,7\}, \{1,5,0\}$
$\{2,3,4\}, \{2,4,0\}, \{2,5,8\}, \{2,6,1\}$
$\{3,4,5\}, \{3,5,1\}, \{3,6,0\}, \{3,7,2\}$
$\{4,5,6\}, \{4,6,2\}, \{4,7,1\}, \{4,8,3\}$
$\{5,6,7\}, \{5,7,3\}, \{5,8,2\}, \{5,0,4\}$
$\{6,7,8\}, \{6,8,4\}, \{6,0,3\}, \{6,1,5\}$
$\{7,8,0\}, \{7,0,5\}, \{7,1,4\}, \{7,2,6\}$
$\{8,0,1\}, \{8,1,6\}, \{8,2,5\}, \{8,3,7\}$
This design is cyclic, simple, and decomposable. The blocks $\{0,1,2\},\linebreak \{3,4,5\},\{6,7,8\}, \{1,3,8\},\{4,6,2\},\{7,0,5\},\{0,3,6\},\{1,4,7\},\{2,5,8\},\linebreak \{0,4,8\},
\{3,7,2\},\{6,1,5\}$ form an STS$(9)$. The design is cyclically indecomposable since no CSTS$(9)$ exists.
\end{ex}
\section{Introduction and statement of results}
\label{sect:intro}
\subsection{The Milnor fibration}
\label{subsec:mf}
In his seminal book on complex hypersurface singularities,
Milnor \cite{Mi} introduced a fibration that soon became the
central object of study in the field, and now bears his name.
In its simplest manifestation, Milnor's construction associates
to each homogeneous polynomial $Q\in \mathbb{C}[z_1,\dots, z_{\ell}]$
a smooth fibration over $\mathbb{C}^*$, by restricting the polynomial map
$Q\colon \mathbb{C}^{\ell} \to \mathbb{C}$ to the complement of the zero-set of $Q$.
The Milnor fiber of the polynomial, $F=Q^{-1}(1)$, is a Stein manifold,
and thus has the homotopy type of a finite CW-complex of
dimension $\ell-1$. The monodromy of the fibration
is the map $h\colon F\to F$, $z\mapsto e^{2\pi \mathrm{i}/n} z$,
where $n=\deg Q$. The induced homomorphisms
in homology, $h_q\colon H_q(F,\mathbb{C})\to H_q(F,\mathbb{C})$, are all
diagonalizable, with $n$-th roots of unity as eigenvalues.
A key question, then, is to compute the characteristic polynomials
of these operators in terms of available data. We will only address here
the case $q=1$, which is already far from solved if the
polynomial $Q$ has a non-isolated singularity at $0$.
\subsection{Hyperplane arrangements}
\label{subsec:hyp}
Arguably the simplest situation is when the polynomial $Q$ completely
factors into distinct linear forms. This situation is neatly described
by a hyperplane arrangement, that is, a finite collection ${\pazocal{A}}$ of
codi\-mension-$1$ linear subspaces in $\mathbb{C}^{\ell}$. Choosing
a linear form $f_H$ with kernel $H$ for each hyperplane $H\in {\pazocal{A}}$,
we obtain a homogeneous polynomial, $Q=\prod_{H\in {\pazocal{A}}} f_H$,
which in turn defines the Milnor fibration of the arrangement.
To analyze this fibration, we turn to the rich combinatorial structure
encoded in the intersection lattice of the arrangement, $L({\pazocal{A}})$,
that is, the poset of all intersections of hyperplanes in ${\pazocal{A}}$
(also known as flats), ordered by reverse inclusion, and ranked
by codimension. We then have the following, much studied problem,
which was raised in \cite[Problem 9A]{HR} and \cite[Problem 4.145]{Kb},
and still remains open.
\begin{problem}
\label{prob:mfa}
Given a hyperplane arrangement ${\pazocal{A}}$, is the characteristic
polynomial of the algebraic monodromy of the Milnor fibration,
$\Delta_{{\pazocal{A}}}(t)=\det(tI-h_1)$, determined by the intersection lattice
$L({\pazocal{A}})$? If so, give an explicit combinatorial formula to compute it.
\end{problem}
Without essential loss of generality, we may assume that the
ambient dimension is $\ell=3$, in which case the projectivization
$\bar{\pazocal{A}}$ is an arrangement of lines in $\mathbb{CP}^2$. In
Theorem \ref{thm:main0}, we give a complete (positive)
answer to Problem \ref{prob:mfa} in the case
(already analyzed in \cite{Di, DIM, Li})
when those lines intersect only in double or triple points.
As the multiplicities of those intersection points increase,
we still get some answers, albeit not complete ones.
For instance, in Theorem \ref{thm:main1} we identify
in combinatorial terms the number of times the cyclotomic
factor $\Phi_3(t)$ appears in $\Delta_{{\pazocal{A}}}(t)$, under the
assumption that $\bar{\pazocal{A}}$ has no intersection points of
multiplicity $3r$, with $r>1$, while in Theorem \ref{thm:2main1}
we treat the analogous problem for the cyclotomic factors
$\Phi_2(t)$ and $\Phi_4(t)$.
\subsection{Combinatorics and the algebraic monodromy}
\label{subsec:char poly}
In order to describe our results in more detail, we need to introduce
some notation. Let $M({\pazocal{A}})$
be the complement of the arrangement, and let $Q\colon M({\pazocal{A}}) \to \mathbb{C}^*$
be the Milnor fibration, with Milnor fiber $F({\pazocal{A}})$, and write $n=\abs{{\pazocal{A}}}=\deg Q$. Set ${e}_d({\pazocal{A}})=0$ if $d\not\mid n$.
The polynomial $\Delta_{{\pazocal{A}}}(t) = (t-1)^{n-1} \cdot \prod_{1<d\mid n} \Phi_d(t)^{{e}_d({\pazocal{A}})}$
encodes the structure of the vector space $H_1(F({\pazocal{A}}),\mathbb{C})$,
viewed as a module over the group algebra $\mathbb{C}[\mathbb{Z}_n]$ via the action of
the monodromy operator $h_1$. More precisely,
\begin{equation}
\label{eq:h1f}
H_1(F({\pazocal{A}}),\mathbb{C}) = (\mathbb{C}[t]/(t-1))^{n-1} \oplus \bigoplus_{1<d\mid n}
(\mathbb{C}[t]/\Phi_d(t))^{{e}_d({\pazocal{A}})}.
\end{equation}
Therefore, Problem \ref{prob:mfa} amounts to deciding whether
the integers ${e}_d({\pazocal{A}})$ are combinatorially determined, and, if so,
computing them explicitly.
Let $L_s({\pazocal{A}})$ be the set of codimension $s$ flats in $L({\pazocal{A}})$. For
each such flat $X$, let ${\pazocal{A}}_X$ be the subarrangement consisting of
all hyperplanes that contain $X$. Finally, let $\mult({\pazocal{A}})$ be the set
of integers $q\ge 3$ for which there is a flat $X\in L_2({\pazocal{A}})$ such that
$X$ has multiplicity $q$, i.e., $\abs{{\pazocal{A}}_X}=q$. Not all divisors of $n$
appear in the above formulas. Indeed, as shown in \cite{Li02, MP}, if $d$
does not divide one of the integers comprising $\mult ({\pazocal{A}})$, the exponent
${e}_d({\pazocal{A}})$ vanishes. In particular, if $\mult({\pazocal{A}}) \subseteq \{3\}$, then only
$e_3({\pazocal{A}})$ may be non-zero. Our first main result computes this integer
under this assumption.
\begin{theorem}
\label{thm:main0}
Suppose $L_2({\pazocal{A}})$ has only flats of multiplicity $2$ and $3$. Then
\begin{equation}
\label{eq:delta23}
\Delta_{{\pazocal{A}}}(t)=(t-1)^{\abs{{\pazocal{A}}}-1}\cdot (t^2+t+1)^{\beta_3({\pazocal{A}})},
\end{equation}
where $\beta_3({\pazocal{A}})$ is an integer between $0$ and $2$ that
depends only on $L_{\le 2}({\pazocal{A}})$.
\end{theorem}
As we shall explain below, the combinatorial invariant $\beta_3({\pazocal{A}})$ is
constructed from the mod $3$ cohomology ring of $M({\pazocal{A}})$.
Formula \eqref{eq:delta23} was previously established
in \cite{MP} in the case when ${\pazocal{A}}$ is a subarrangement
of rank at least $3$ in a non-exceptional Coxeter arrangement.
In this wider generality, our theorem recovers,
in stronger form, the following result of Libgober \cite{Li}:
under the above assumption on multiplicities, the question
whether $\Delta_{{\pazocal{A}}}(t)=(t-1)^{\abs{{\pazocal{A}}}-1}$ can be
decided combinatorially.
\subsection{Resonance varieties and multinets}
\label{subsec:intro-res}
Fix a commutative Noetherian ring $\k$. A celebrated theorem
of Orlik and Solomon \cite{OS} asserts that the cohomology ring
$H^*(M({\pazocal{A}}),\k)$ is isomorphic to the OS--algebra
of the underlying matroid, $A^*({\pazocal{A}})\otimes \k$,
and thus is determined by the (ranked) intersection poset $L({\pazocal{A}})$.
Key to our approach are the resonance varieties of ${\pazocal{A}}$, which
keep track in a subtle way of the vanishing cup products in this ring.
For our purposes here, we will only be interested in resonance
in degree $1$.
For an element $\tau\in A^1({\pazocal{A}})\otimes \k= \k^{{\pazocal{A}}}$, left-multiplication by $\tau$ in the
cohomology ring gives rise to a $\k$-cochain complex, $(A^*({\pazocal{A}})\otimes \k, \tau \cdot)$.
The (first) resonance variety of ${\pazocal{A}}$ over $\k$, denoted ${\mathcal R}_1({\pazocal{A}},\k)$, is the locus of those
elements $\tau$ for which the homology of the above complex in degree $1$ is non-zero.
When $\k$ is a field, this set is a homogeneous subvariety of the affine space $\k^{{\pazocal{A}}}$.
When $\k=\mathbb{C}$, all the irreducible components of
${\mathcal R}_1({\pazocal{A}},\mathbb{C})$ are linear subspaces, intersecting
transversely at $0$, see \cite{CS99, LY}. In positive characteristic,
the components of ${\mathcal R}_1({\pazocal{A}},\k)$ may be non-linear, or with
non-transverse intersection, see \cite{Fa07}.
Very useful to us will be a result of Falk and Yuzvinsky \cite{FY},
which describes all components of ${\mathcal R}_1({\pazocal{A}},\mathbb{C})$ in terms of
multinets on ${\pazocal{A}}$ and its subarrangements.
A {\em $k$-multinet}\/ on ${\pazocal{A}}$ is a partition of the arrangement into
$k\ge 3$ subsets ${\pazocal{A}}_{\alpha}$, together with an assignment
of multiplicities $m_H$ to each $H\in {\pazocal{A}}$, and a choice of rank $2$
flats, called the base locus. All these data must satisfy certain
compatibility conditions. For instance, any two hyperplanes from
different parts of the partition intersect in the base locus, while the sum
of the multiplicities over each part is constant. Furthermore, if
$X$ is a flat in the base locus, then the sum
$n_{X}=\sum_{H\in{\pazocal{A}}_\alpha\cap {\pazocal{A}}_X} m_H$ is independent
of $\alpha$.
A multinet as above is {\em reduced}\/ if all the multiplicities
$m_H$ are equal to $1$. If, moreover, all the numbers
$n_X$ are equal to $1$, the multinet is, in fact, a
{\em net}---a classical notion from combinatorial geometry.
Hyperplane arrangements may be viewed as simple matroids
realizable over $\mathbb{C}$. For an arbitrary simple matroid ${\mathcal{M}}$, one may speak
about (reduced) multinets and nets, as well as resonance varieties ${\mathcal R}_1({\mathcal{M}},\k)$
with arbitrary coefficients. Let $B_{\k}({\mathcal{M}}) \subseteq \k^{{\mathcal{M}}}$ be the
constant functions, and let $\sigma \in B_{\k}({\mathcal{M}})$ be the function
taking the constant value $1$. The {\em cocycle space}\/
$Z_{\k}({\mathcal{M}}) \subseteq \k^{{\mathcal{M}}}$ is defined by the linear condition
$\sigma\cdot\tau=0$. Plainly, $\sigma\in {\mathcal R}_1({\mathcal{M}},\k)$ if and
only if $Z_{\k}({\mathcal{M}}) \ne B_{\k}({\mathcal{M}})$. When $\k$ is a field, the
{\em Aomoto--Betti number}\/ of the matroid is defined as
\begin{equation}
\label{eq:betak}
\beta_{\k}({\mathcal{M}})= \dim_{\k} Z_{\k}({\mathcal{M}}) / B_{\k}({\mathcal{M}}).
\end{equation}
Clearly, this integer depends only on $p:=\ch (\k)$, and so will often be
denoted simply by $\beta_{p}({\mathcal{M}})$.
Having multinets in mind, let us consider a finite set $\k$ with $k\ge 3$ elements, and
define $B_{\k}({\mathcal{M}}) \subseteq \k^{{\mathcal{M}}}$ as before. The subset of `special'
$\k$-cocycles, $Z'_{\k}({\mathcal{M}}) \subseteq \k^{{\mathcal{M}}}$, consists of those functions $\tau$ with the
property that their restriction to an arbitrary flat from $L_2({\mathcal{M}})$ is either constant or bijective.
Given a partition, ${\mathcal{M}} =\coprod_{\alpha \in \k} {\mathcal{M}}_{\alpha}$, we associate to it the element
$\tau \in \k^{{\mathcal{M}}}$ defined by $\tau_u=\alpha$, for $u\in {\mathcal{M}}_{\alpha}$.
Our starting point is the following result, which relates nets to modular resonance,
and which will be proved in \S\ref{subsec:special}.
\begin{theorem}
\label{teo=lambdaintro}
Let ${\mathcal{M}}$ be a simple matroid, and let $k\ge 3$ be an integer. Then:
\begin{romenum}
\item \label{li1}
For any $k$-element set $\k$, the above construction induces a bijection,
\[
\xymatrix{\lambda_{\k} \colon \{\text{$k$-nets on ${\mathcal{M}}$}\} \ar^(.52){\simeq}[r]&
Z'_{\k}({\mathcal{M}})\setminus B_{\k}({\mathcal{M}})} .
\]
\item \label{li2}
If $k \not\equiv 2\bmod 4$, there is a commutative ring $\k$ of cardinality
$k$ such that $Z'_{\k}({\mathcal{M}}) \subseteq Z_{\k}({\mathcal{M}})$.
If, in fact, $k =p^s$, for some prime $p$, then $\k$ can be chosen to be the
Galois field $\k=\mathbb{F}_{p^s}$.
\end{romenum}
\end{theorem}
\subsection{Matroid realizability and essential components}
\label{subsec:intr-mat}
It is well-known that non-trivial $k$-nets on simple matroids exist, for all $k\ge 3$.
For realizable matroids, the picture looks completely different:
by a result of Yuzvinsky \cite{Yu09}, non-trivial $k$-nets exist only
for $k=3$ or $4$; many examples of $3$-nets appear naturally, while
the only known $4$-net comes from the famous Hessian configuration \cite{Yu04}.
The difference between realizable and non-realizable matroids
comes to the fore in \S\ref{sec:matr}. Using a delicate analysis
of $3$-nets supported by a family of matroids ${\mathcal{M}}(m)$ with ground
set $\mathbb{F}_3^m$, and a result of Yuzvinsky \cite{Yu04} in
projective geometry, we establish in Corollary \ref{cor=t16gral} the
following non-realizability criterion.
\begin{theorem}
\label{thm:intro-mat}
Let ${\mathcal{M}}$ be a simple matroid, and suppose there are $3$-nets
${\pazocal{N}}$, ${\pazocal{N}}'$, and ${\pazocal{N}}''$ on ${\mathcal{M}}$ such that $[\lambda_{\mathbb{F}_3}({\pazocal{N}})]$,
$[\lambda_{\mathbb{F}_3}({\pazocal{N}}')]$, and $[\lambda_{\mathbb{F}_3}({\pazocal{N}}'')]$ are independent in
$Z_{\mathbb{F}_3}({\mathcal{M}})/ B_{\mathbb{F}_3}({\mathcal{M}})$.
Then ${\mathcal{M}}$ is not realizable over $\mathbb{C}$.
\end{theorem}
For an arrangement ${\pazocal{A}}$, the irreducible components of ${\mathcal R}_1({\pazocal{A}},\mathbb{C})$
corresponding to multinets on ${\pazocal{A}}$ are called {\em essential}. We denote
those components arising from $k$-nets by ${\rm Ess}_k ({\pazocal{A}})$.
By the above discussion, ${\rm Ess}_k({\pazocal{A}})=\emptyset$ for $k\ge 5$.
In \S\ref{ssec=55}, we use Theorem \ref{teo=lambdaintro}
to obtain a good estimate on the size of these sets in the
remaining cases.
\begin{theorem}
\label{thm:essintro}
Let ${\pazocal{A}}$ be an arrangement. For $k=3$ or $4$,
\begin{equation}
\label{eq=essboundintro}
\abs{{\rm Ess}_k({\pazocal{A}})} \le \frac{k^{\beta_{\k}({\pazocal{A}})}-1}{(k-1)!},
\end{equation}
where $\k =\mathbb{F}_k$.
Moreover, the sets
${\rm Ess}_3({\pazocal{A}})$ and ${\rm Ess}_4({\pazocal{A}})$ cannot be simultaneously non-empty.
\end{theorem}
\subsection{Modular bounds}
\label{subsec:intro-bound}
Work of Cohen and Orlik \cite[Theorem 1.3]{CO},
as sharpened by Papadima and Suciu \cite[Theorem 11.3]{PS-tams},
gives the following inequalities:
\begin{equation}
\label{eq:bound}
\text{${e}_{p^s} ({\pazocal{A}}) \le \beta_p({\pazocal{A}})$, for all $s\ge 1$}.
\end{equation}
In other words, the exponents of prime-power order $p^s$ are bounded above by the
(combinatorially defined) $\beta_p$-invariants of the
arrangement. As shown in \cite{PS-tams}, these bounds
are of a topological nature: they are valid for spaces much more
general than arrangement complements, but they are far from being
sharp in complete generality. The modular bounds were first used
in \cite{MP} to study the algebraic monodromy of the Milnor fibration,
especially in the context of (signed) graphic arrangements.
We are now ready to state our next main result, which in particular
shows that, under certain combinatorial conditions, the
above modular bounds are sharp, at least for the prime
$p=3$ and for $s=1$.
\begin{theorem}
\label{thm:main1}
Let ${\mathcal{M}}$ be a simple matroid.
Suppose $L_2({\mathcal{M}})$ has no flats of multiplicity $3r$, for any $r>1$.
Then, the following conditions are equivalent:
\begin{romenum}
\item \label{a1}
$L_{\le 2}({\mathcal{M}})$ admits a reduced $3$-multinet.
\item \label{a2}
$L_{\le 2}({\mathcal{M}})$ admits a $3$-net.
\item \label{a3}
$\beta_3({\mathcal{M}}) \ne 0$.
\suspend{romenum}
Moreover, if ${\mathcal{M}}$ is realized by an arrangement ${\pazocal{A}}$, the following hold:
\resume{romenum}
\item \label{a5}
$\beta_3({\pazocal{A}})\le 2$.
\item \label{a6}
$e_3({\pazocal{A}})=\beta_3({\pazocal{A}})$.
\item \label{a7}
$\abs{{\rm Ess}_3({\pazocal{A}})} = (3^{\beta_{3}({\pazocal{A}})}-1)/2$.
\end{romenum}
\end{theorem}
In the matroidal part of the above result, the key equivalence,
\eqref{a2}$\Leftrightarrow$\eqref{a3}, uses Theorem \ref{teo=lambdaintro}.
For arrangements ${\pazocal{A}}$, we establish inequality \eqref{a5}
with the aid of Theorem \ref{thm:intro-mat}.
In the particular case when $\mult({\pazocal{A}})\subseteq \{3\}$, parts \eqref{a5} and
\eqref{a6} together imply Theorem \ref{thm:main0}.
Our assumption on multiplicities is definitely needed. This is illustrated in
Example \ref{ex:B3 bis}, where we produce a family of
arrangements $\{{\pazocal{A}}_{3d+1}\}_{d\ge 1}$ having rank-$2$
flats of multiplicity $3(d+1)$: these arrangements support no
reduced $3$-multinets, yet satisfy $e_3({\pazocal{A}}_{3d+1})=\beta_3({\pazocal{A}}_{3d+1})=1$;
in particular, property \eqref{a7} fails.
Nevertheless, both \eqref{a5} and \eqref{a6} hold for this
family of arrangements, as well as for the related
family of monomial arrangements from Example \ref{ex:cevad},
which also violate our hypothesis.
Our approach also allows us to characterize $4$-nets in terms of
mod $2$ resonance.
\begin{theorem}
\label{thm:2main1}
For a simple matroid ${\mathcal{M}}$, the following are equivalent:
\begin{romenum}
\item \label{2m1}
$Z'_{\mathbb{F}_4}({\mathcal{M}}) \ne B_{\mathbb{F}_4}({\mathcal{M}})$.
\item \label{2m3}
$L_{\le 2}({\mathcal{M}})$ supports a $4$-net.
\end{romenum}
If ${\mathcal{M}}$ is realized by an arrangement ${\pazocal{A}}$ with $\beta_2({\pazocal{A}})\le 2$
and the above conditions hold, then
${e}_2({\pazocal{A}})= {e}_4({\pazocal{A}})=\beta_2({\pazocal{A}})=2$.
\end{theorem}
Thus, the modular bounds \eqref{eq:bound} are again sharp in this case,
for $p=2$ and $s\le 2$.
\subsection{Flat connections}
\label{ssec=flatintro}
Foundational results due to Goldman and Millson \cite{GM}, together with
related work from \cite{KM, DPS, DP}, imply that the local geometry of representation
varieties of fundamental groups of arrangement complements in linear algebraic groups,
near the trivial representation, is determined by the global geometry of varieties of flat connections
on Orlik--Solomon algebras with values in the corresponding Lie algebras.
In this paper, we establish a link between the information on modular resonance encoded by
non-constant special $\k$-cocycles on an arrangement ${\pazocal{A}}$, and ${\mathfrak{g}}$-valued flat connections on
$A({\pazocal{A}})\otimes \mathbb{C}$, for an arbitrary finite-dimensional complex Lie algebra ${\mathfrak{g}}$. More precisely,
we denote by ${\pazocal H}^{\k}({\mathfrak{g}}) \subseteq {\mathfrak{g}}^{\k}$ the subspace of vectors with zero sum of coordinates,
and declare a vector in ${\pazocal H}^{\k}({\mathfrak{g}})$ to be regular if the span of its coordinates has dimension
at least $2$. Inside the variety of flat connections, ${\mathcal{F}} (A({\pazocal{A}})\otimes \mathbb{C}, {\mathfrak{g}})$, the elements which
do not come from Lie subalgebras of ${\mathfrak{g}}$ of dimension at most $1$ are also called regular.
In Proposition \ref{prop=liftev}, we associate to every special cocycle
$\tau \in Z'_{\k}({\pazocal{A}})\setminus B_{\k}({\pazocal{A}})$ an embedding
$\ev_{\tau}\colon {\pazocal H}^{\k}({\mathfrak{g}}) \hookrightarrow {\mathcal{F}} (A({\pazocal{A}})\otimes \mathbb{C}, {\mathfrak{g}})$
which preserves the regular parts. Building on recent work
from \cite{DP, MPPS}, we then exploit this construction in two ways,
for ${\mathfrak{g}}=\sl_2 (\mathbb{C})$. On one hand, as noted in Remark \ref{rem=lambdainv},
the construction gives the inverse of the map $\lambda_{\k}$ from
Theorem \ref{teo=lambdaintro}\eqref{li1}. On the other hand,
we use a version of this construction, involving a subarrangement of ${\pazocal{A}}$
as a second input, to arrive at the following result.
\begin{theorem}
\label{teo=modtoflatintro}
Suppose that, for every subarrangement ${\pazocal{B}} \subseteq {\pazocal{A}}$, all essential components
of ${\mathcal R}_1({\pazocal{B}},\mathbb{C})$ arise from nets on ${\pazocal{B}}$. Then
\[
{\mathcal{F}}_{\reg}(A({\pazocal{A}})\otimes \mathbb{C}, \sl_2 (\mathbb{C}))= \bigcup_{{\pazocal{B}}, \tau} \;
\ev^{{\pazocal{B}}}_{\tau} ({\pazocal H}^{\k}_{\reg} (\sl_2 (\mathbb{C})))\, ,
\]
where the union is taken over all ${\pazocal{B}} \subseteq {\pazocal{A}}$
and all non-constant special $\k$-cocycles $\tau\in Z'_{\k}({\pazocal{B}})\setminus B_{\k}({\pazocal{B}})$.
\end{theorem}
When ${\pazocal{A}}$ satisfies the above combinatorial condition (for instance, when ${\pazocal{A}}$ is an
unsigned graphic arrangement), it follows that the variety of $\sl_2 (\mathbb{C})$-valued
flat connections has an interesting property: it can be reconstructed in an explicit way
from information on modular resonance.
\subsection{Discussion}
\label{subsec:disc}
We return now to Problem \ref{prob:mfa}, and discuss the literature
surrounding it, as well as our approach to solving it
in some notable special cases.
Nearly half the papers in our bibliography are directly related to this
problem. This (non-exhaustive) list of papers may give the reader an idea about the
intense activity devoted to this topic, and the variety of tools used
to tackle it.
In \cite{BDS, CL, CDO, Di, D12, DIM, DL, Li02, Li}, mostly geometric
methods (such as superabundance of linear systems of polynomials,
logarithmic forms, and mixed Hodge theory) have been used.
It seems worth mentioning that our approach also provides
answers to rather subtle geometric questions. For instance,
a superabundance problem raised by Dimca in \cite{Di}
is settled in Remark \ref{rem:fourth}.
The topological approach to Problem \ref{prob:mfa} traces its origins
to the work of Cohen and Suciu \cite{CS95, CS99} on Milnor fibrations
and characteristic varieties of arrangements. A crucial ingredient
in our approach is the idea to connect the Orlik--Solomon algebra in positive
characteristic to the monodromy of the Milnor fibration. This idea, which
appeared in \cite{CO, De02}, was developed and generalized in \cite{PS-tams}.
The modular bounds from \eqref{eq:bound}, first exploited in a systematic
way by M\u{a}cinic and Papadima in \cite{MP}, have since been put to
use in \cite{BY, DIM}.
On the combinatorial side, multinets and their relationship with complex
resonance varieties, found by Falk and Yuzvinsky in \cite{FY} and further
developed in \cite{PY, Yu09}, play an important role in \cite{DS13, DIM, DP11, Su},
and are key to our approach. Here, the novelty in our viewpoint is to relate
(multi)nets to modular resonance and varieties of flat connections.
\subsection{Conclusion}
\label{subsec:conclude}
The many examples we discuss in this paper show a strikingly
similar pattern, whereby the only interesting primes, as far as
the algebraic monodromy of the Milnor fibration goes, are $p=2$
and $p=3$. Furthermore, all rank-$3$ simplicial arrangements examined
by Yoshinaga in \cite{Yo} satisfy $e_3({\pazocal{A}})=0$ or $1$, and $e_d({\pazocal{A}})=0$
for all other $d$. Finally, we know of no arrangement ${\pazocal{A}}$
of rank at least $3$ with $\beta_p({\pazocal{A}}) \ne 0$ for some prime
$p>3$. By \cite{MP}, no such example may be found among
subarrangements of non-exceptional Coxeter arrangements.
Theorems \ref{thm:main1} and \ref{thm:2main1}, together
with these and other considerations, lead us to formulate the following
conjecture.
\begin{conjecture}
\label{conj:mf}
Let ${\pazocal{A}}$ be an arrangement of rank at least $3$. Then
${e}_{p^s}({\pazocal{A}})=0$ for all primes $p$ and integers $s\ge 1$, with two
possible exceptions:
\begin{equation}
\label{eq:e2e3}
{e}_2({\pazocal{A}})= {e}_4({\pazocal{A}})=\beta_2({\pazocal{A}}) \:\text{ and }\: {e}_3({\pazocal{A}})=\beta_3({\pazocal{A}}).
\end{equation}
\end{conjecture}
If, in addition, ${e}_d({\pazocal{A}})=0$ for all divisors $d$ of $\abs{{\pazocal{A}}}$ that
are not prime powers, this conjecture would give the following
complete answer to Problem \ref{prob:mfa}:
\begin{equation}
\label{eq:delta arr}
\Delta_{{\pazocal{A}}}(t)=(t-1)^{\abs{{\pazocal{A}}}-1} ((t+1)(t^2+1))^{\beta_2({\pazocal{A}})} (t^2+t+1)^{\beta_3({\pazocal{A}})}.
\end{equation}
\section{Matroids and multinets}
\label{sect:nets}
The combinatorics of a hyperplane arrangement is encoded
in its intersection lattice, which in turn can be viewed as a
lattice of flats of a realizable matroid. In this section, we
discuss multinet structures on matroids, with special emphasis on nets.
\subsection{Matroids}
\label{subsec:matroids}
We start by reviewing the notion of matroid. There are many ways to
axiomatize this notion, which unifies several concepts in linear algebra,
graph theory, discrete geometry, and the theory of hyperplane arrangements,
see for instance Wilson's survey \cite{W}. We mention here only the
ones that will be needed in the sequel.
A {\em matroid}\/ is a finite set ${\mathcal{M}}$, together with a collection of subsets,
called the {\em independent sets}, which satisfy the following axioms:
(1) the empty set is independent; (2) any proper subset of
an independent set is independent; and (3) if $I$ and $J$
are independent sets and $\abs{I} > \abs{J}$, then there exists
$u \in I \setminus J$ such that $J \cup \{u\}$ is independent.
A maximal independent set is called a {\em basis}, while a
minimal dependent set is called a {\em circuit}.
The {\em rank}\/ of a subset $S\subset {\mathcal{M}}$ is the size of the largest
independent subset of $S$. A subset is {\em closed}\/ if it is maximal
for its rank; the closure $\overline{S}$ of a subset $S\subset {\mathcal{M}}$ is
the intersection of all closed sets containing $S$. Closed sets are
also called {\em flats}.
We will consider only {\em simple} matroids, defined by the condition that
all subsets of size at most two are independent.
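These notions can be illustrated on a standard example; the facts below about uniform matroids are classical, and are recorded here only for orientation.

```latex
\begin{example}
In the uniform matroid $U_{3,n}$ on $n\ge 3$ points, the independent sets
are the subsets of size at most $3$. Thus, the bases are the $3$-element
subsets, the circuits are the $4$-element subsets, and the rank of a subset
$S$ equals $\min\{\abs{S},3\}$. The rank-$2$ flats are precisely the
$2$-element subsets, so $U_{3,n}$ is simple, and every $2$-flat has
multiplicity $2$.
\end{example}
```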
The set of flats of ${\mathcal{M}}$, ordered by inclusion, forms a
geometric lattice, $L({\mathcal{M}})$, whose atoms are the elements
of ${\mathcal{M}}$. We will denote by $L_s({\mathcal{M}})$ the set of rank-$s$ flats,
and by $L_{\le s}({\mathcal{M}})$ the sub-poset of flats of rank
at most $s$.
We say that a flat $X$ has multiplicity $q$ if $\abs{X}=q$.
The join of two flats $X$ and $Y$ is given by
$X\vee Y=\overline{X\cup Y}$,
while the meet is given by $X\wedge Y=X\cap Y$.
\subsection{Hyperplane arrangements}
\label{subsec:arrs}
An arrangement of hyperplanes is a finite set ${\pazocal{A}}$ of
codi\-mension-$1$ linear subspaces in a finite-dimensional,
complex vector space $\mathbb{C}^{\ell}$. We will assume throughout
that the arrangement is central, that is, all the hyperplanes pass
through the origin. Projectivizing, we obtain an arrangement
${\bar{\A}}=\{\bar{H}\mid H\in {\pazocal{A}}\}$ of projective, codimension-$1$
subspaces in $\mathbb{CP}^{\ell-1}$, from which ${\pazocal{A}}$ can be reconstructed
via a coning construction.
The combinatorics of the arrangement is encoded in its
{\em intersection lattice}, $L({\pazocal{A}})$. This is the poset of all
intersections of hyperplanes in ${\pazocal{A}}$ (also known as {\em flats}),
ordered by reverse inclusion, and ranked by codimension.
The join of two flats $X,Y\in L({\pazocal{A}})$ is given by $X\vee Y=X\cap Y$, while
the meet is given by $X\wedge Y=\bigcap \{Z\in L({\pazocal{A}}) \mid X+Y \subseteq Z\}$.
Given a flat $X$, we will denote by ${\pazocal{A}}_X$ the subarrangement
$\{H\in {\pazocal{A}}\mid H\supset X\}$.
We may view ${\pazocal{A}}$ as a simple matroid, whose points correspond to the
hyperplanes in ${\pazocal{A}}$, with dependent subsets given by linear algebra,
in terms of the defining equations of the hyperplanes. In this way, the
lattice of flats of the underlying matroid is identified with $L({\pazocal{A}})$.
Under this dictionary, the two notions of rank coincide.
A matroid ${\mathcal{M}}$ is said to be {\em realizable}\/ (over $\mathbb{C}$) if
there is an arrangement ${\pazocal{A}}$ such that $L({\mathcal{M}})=L({\pazocal{A}})$.
The simplest situation is when ${\mathcal{M}}$ has rank $2$, in
which case ${\mathcal{M}}$ can always be realized by a pencil
of lines through the origin of $\mathbb{C}^2$.
For most of our purposes here, it will be enough to assume that
the arrangement ${\pazocal{A}}$ lives in $\mathbb{C}^3$, in which case ${\bar{\A}}$ is an
arrangement of (projective) lines in $\mathbb{CP}^2$. This is clear when the
rank of ${\pazocal{A}}$ is at most $2$, and may be achieved otherwise
by taking a generic $3$-slice. This operation does not
change the poset $L_{\le 2}({\pazocal{A}})$, or derived invariants
such as $\beta_p({\pazocal{A}})$, nor does it change the monodromy
action on $H_1(F({\pazocal{A}}),\mathbb{C})$.
For a rank-$3$ arrangement, the set $L_1({\pazocal{A}})$ is in $1$-to-$1$ correspondence
with the lines of ${\bar{\A}}$, while $L_2({\pazocal{A}})$ is in $1$-to-$1$ correspondence
with the intersection points of ${\bar{\A}}$. The poset structure of $L_{\le 2}({\pazocal{A}})$
corresponds then to the incidence structure of the point-line configuration ${\bar{\A}}$.
This correspondence is illustrated in Figure \ref{fig:braid}.
We will say that a flat $X \in L_2({\pazocal{A}})$ has multiplicity $q$ if
$\abs{{\pazocal{A}}_X}=q$, or, equivalently, if the point $\bar{X}$ has
exactly $q$ lines from $\bar{{\pazocal{A}}}$ passing through it.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.68]
\hspace*{-0.45in}
\draw[style=thick,densely dashed,color=blue] (-0.5,3) -- (2.5,-3);
\draw[style=thick,densely dotted,color=red] (0.5,3) -- (-2.5,-3);
\draw[style=thick,color=dkgreen] (-3,-2) -- (3,-2);
\draw[style=thick,densely dotted,color=red] (3,-2.68) -- (-2,0.68);
\draw[style=thick,densely dashed,color=blue] (-3,-2.68) -- (2,0.68);
\draw[style=thick,color=dkgreen] (0,-3.1) -- (0,3.1);
\node at (-0.3,-2) {$\bullet$};
\node at (3.7,-2) {$\bullet$};
\node at (1.7,2) {$\bullet$};
\node at (1.7,-0.7) {$\bullet$};
\node at (-0.7,0.75) {$4$};
\node at (4.1,0.75) {$6$};
\node at (1.7,-3.5) {$2$};
\node at (-1.7,-2) {$1$};
\node at (-0.85,-3.4) {$3$};
\node at (4.25,-3.4) {$5$};
\end{tikzpicture}
\hspace*{0.65in}
\begin{tikzpicture}[scale=0.6]
\path[use as bounding box] (1,-1.5) rectangle (6,6.8);
\path (0,0) node[draw, ultra thick, shape=circle,
inner sep=1.3pt, outer sep=0.7pt, color=blue] (v1) {\small{5}};
\path (2,2.5) node[draw, ultra thick, shape=circle,
inner sep=1.3pt, outer sep=0.7pt, color=red!90!white] (v2) {\small{3}};
\path (4,5) node[draw, ultra thick, shape=circle,
inner sep=1.3pt, outer sep=0.7pt, color=dkgreen!80!white] (v3) {\small{1}};
\path (6,2.5) node[draw, ultra thick, shape=circle,
inner sep=1.3pt, outer sep=0.7pt, color=red!90!white] (v4) {\small{4}};
\path (8,0) node[draw, ultra thick, shape=circle,
inner sep=1.3pt, outer sep=0.7pt, color=blue] (v5) {\small{6}};
\path (4,1.666) node[draw, ultra thick, shape=circle,
inner sep=1.3pt, outer sep=0.7pt, color=dkgreen!80!white] (v6) {\small{2}};
\draw (v1) -- (v2) -- (v3) -- (v4) -- (v5);
\draw (v2) -- (v6) -- (v5);
\draw (v1) -- (v6) -- (v4);
\end{tikzpicture}
\caption{A $(3,2)$-net on the ${\rm A}_3$ arrangement,
and on the corresponding matroid}
\label{fig:braid}
\end{figure}
\setcounter{figure}{0}
\subsection{Multinets on matroids}
\label{subsec:multinets}
Guided by the work of Falk and Yuzvinsky \cite{FY}, we
define the following structure on a matroid ${\mathcal{M}}$---in
fact, on the poset $L_{\le 2}({\mathcal{M}})$.
A {\em multinet}\/ on ${\mathcal{M}}$ is a partition into $k\ge 3$ subsets
${\mathcal{M}}_1, \dots, {\mathcal{M}}_k$, together with an assignment of multiplicities,
$m\colon {\mathcal{M}}\to \mathbb{N}$, and a subset ${\pazocal X}\subseteq L_2({\mathcal{M}})$
with $\abs{X}>2$ for each $X\in {\pazocal X}$, called the base locus, such that:
\begin{enumerate}
\item \label{mu1}
There is an integer $d$ such that $\sum_{u\in {\mathcal{M}}_{\alpha}} m_u=d$,
for all $\alpha\in [k]$.
\item \label{mu2}
For any two points $u, v\in {\mathcal{M}}$ in different classes, the flat spanned by
$\set{u,v}$ belongs to ${\pazocal X}$.
\item \label{mu3}
For each $X\in{\pazocal X}$, the integer
$n_X:=\sum_{u\in {\mathcal{M}}_\alpha \cap X} m_u$
is independent of $\alpha$.
\item \label{mu4}
For each $1\leq \alpha \leq k$ and $u,v\in {\mathcal{M}}_{\alpha}$, there is a
sequence $u=u_0,\ldots, u_r=v$ such that $u_{i-1}\vee u_i\not\in{\pazocal X}$ for
$1\leq i\leq r$.
\end{enumerate}
We say that a multinet ${\pazocal{N}}$ as above has $k$ classes
and weight $d$, and refer to it as a $(k,d)$-multinet, or simply as
a $k$-multinet. Without essential loss of generality, we may assume that
$\gcd \{m_u\}_{u\in {\mathcal{M}}}=1$.
If all the multiplicities are equal to $1$, the multinet is said to
be {\em reduced}. If $n_X=1$, for all $X\in {\pazocal X}$, the multinet
is called a {\em $(k,d)$-net}; in this case, the multinet is reduced,
and every flat in the base locus contains precisely one element
from each class. A $(k,d)$-net ${\pazocal{N}}$ is {\em non-trivial}\/ if $d>1$, or,
equivalently, if the matroid ${\mathcal{M}}$ has rank at least $3$.
The symmetric group $\Sigma_k$ acts freely on the set of $(k,d)$-multinets
on ${\mathcal{M}}$, by permuting the $k$ classes. Note that the $\Sigma_k$-action
on $k$-multinets preserves reduced $k$-multinets as well as $k$-nets.
\begin{figure}
\centering
\subfigure{%
\begin{minipage}{0.47\textwidth}
\centering
\begin{tikzpicture}[scale=0.7]
\draw[style=thick,color=dkgreen] (0,0) circle (3.1);
\node at (-2.4,0.3){2};
\node at (0,-2.6){2};
\node at (3.35,0.5){2};
\clip (0,0) circle (2.9);
\draw[style=thick,densely dashed,color=blue] (-1,-2.1) -- (-1,2.5);
\draw[style=thick,densely dotted,color=red] (0,-2.2) -- (0,2.5);
\draw[style=thick,densely dashed,color=blue] (1,-2.1) -- (1,2.5);
\draw[style=thick,densely dotted,color=red] (-2.5,-1) -- (2.5,-1);
\draw[style=thick,densely dashed,color=blue] (-2.5,0) -- (2.5,0);
\draw[style=thick,densely dotted,color=red] (-2.5,1) -- (2.5,1);
\draw[style=thick,color=dkgreen] (-2,-2) -- (2,2);
\draw[style=thick,color=dkgreen](-2,2) -- (2,-2);
\end{tikzpicture}
\caption{\!A $(3,4)$-multinet}
\label{fig:b3 arr}
\end{minipage}
}
\subfigure{%
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[style=thick,densely dotted,color=red] (0,0) circle (3.1);
\clip (0,0) circle (2.9);
\draw[style=thick,densely dotted,color=red] (0,-2.8) -- (0,2.8);
\draw[style=thick,densely dotted,color=red] (-2.6,-1) -- (2.6,-1);
\draw[style=thick,densely dotted,color=red] (-2.6,1) -- (2.6,1);
\draw[style=thick,densely dashed,color=blue] (-0.5,-2.7) -- (-0.5,2.7);
\draw[style=thick,densely dashed,color=blue] (1.5,-2.4) -- (1.5,2.4);
\draw[style=thick,densely dashed,color=blue] (-2.5,-2) -- (2.2,2.7);
\draw[style=thick,densely dashed,color=blue](-2.2,1.7) -- (2.2,-2.7);
\draw[style=thick,color=dkgreen] (-1.5,-2.4) -- (-1.5,2.4);
\draw[style=thick,color=dkgreen] (0.5,-2.7) -- (0.5,2.7);
\draw[style=thick,color=dkgreen] (-1.7,-2.2) -- (2.2,1.7);
\draw[style=thick,color=dkgreen](-2,2.5) -- (2.2,-1.7);
\end{tikzpicture}
\caption{A reduced $(3,4)$-multi\-net, but not a $3$-net}
\label{fig:non-net}%
\end{minipage}
}
\end{figure}
We will say that an arrangement ${\pazocal{A}}$ admits a multinet if the matroid
realized by ${\pazocal{A}}$ does.
The various possibilities are illustrated in the above figures.
Figure \ref{fig:braid} shows a $(3,2)$-net on a planar slice
of the reflection arrangement of type ${\rm A}_3$.
Figure \ref{fig:b3 arr} shows a non-reduced $(3,4)$-multinet
on a planar slice of the reflection arrangement of type ${\rm B}_3$.
Finally, Figure \ref{fig:non-net} shows a simplicial arrangement of $12$
lines in $\mathbb{CP}^2$ supporting a reduced $(3,4)$-multinet which is not a $3$-net.
For more examples, we refer to \cite{FY, Yo, Yu04}.
\subsection{Reduced multinets and nets}
\label{subsec:red multi}
Let ${\pazocal{N}}$ be a multinet on a matroid ${\mathcal{M}}$, with associated
classes $\{{\mathcal{M}}_1,\dots, {\mathcal{M}}_k\}$. For each flat $X\in L_2({\mathcal{M}})$, let us write
\begin{equation}
\label{eq:supp}
\supp_{{\pazocal{N}}}(X)=\{\alpha \in [k] \mid {\mathcal{M}}_{\alpha}\cap X \ne \emptyset\}.
\end{equation}
Evidently, $\abs{\supp_{{\pazocal{N}}}(X)} \le \abs{X}$.
Notice also that $\abs{\supp_{{\pazocal{N}}}(X)}$ is either $1$ (in which case
we say $X$ is mono-colored), or $k$ (in which case we say
$X$ is multi-colored). Here is an elementary lemma.
\begin{lemma}
\label{lem:rednet}
Suppose a matroid ${\mathcal{M}}$ has no $2$-flats of multiplicity $kr$,
for any $r>1$.
Then every reduced $k$-multinet on ${\mathcal{M}}$ is a $k$-net.
\end{lemma}
\begin{proof}
Let $X$ be a flat in the base locus of a $k$-multinet ${\pazocal{N}}$;
then $\abs{\supp_{{\pazocal{N}}}(X)}=k$. If the multinet is reduced, we
have that $\abs{X}=k n_X$. Since, by assumption,
$L_2({\mathcal{M}})$ has no flats of multiplicity $kr$, with $r>1$,
we must have $n_X=1$. Thus, the multinet is a net.
\end{proof}
In the case when $k=3$, Lemma \ref{lem:rednet} proves implication
\eqref{a1} $\Rightarrow$ \eqref{a2} from Theorem \ref{thm:main1}.
(The implication \eqref{a2} $\Rightarrow$ \eqref{a1} from that theorem
is of course trivial.)
Work of Yuzvinsky \cite{Yu04, Yu09} and Pereira--Yuzvinsky \cite{PY}
shows that, if ${\pazocal{N}}$ is a $k$-multinet on a realizable matroid,
with base locus of size greater than $1$, then $k=3$
or $4$; furthermore, if ${\pazocal{N}}$ is not reduced, then $k$
must equal $3$.
Work of Kawahara from \cite[\S3]{Ka} shows that nets on matroids abound.
In particular, this work shows that, for any $k\ge 3$, there is a simple matroid
${\mathcal{M}}$ supporting a non-trivial $k$-net. By the above, if ${\mathcal{M}}$ is realizable,
then $k$ can only be equal to $3$ or $4$.
Let us look in more detail at the structure of nets on matroids.
\begin{lemma}
\label{lem:net props}
Assume a matroid ${\mathcal{M}}$ supports a $k$-net with parts
${\mathcal{M}}_{\alpha}$. Then:
\begin{enumerate}
\item \label{n1}
Each submatroid ${\mathcal{M}}_{\alpha}$ has the same cardinality,
equal to $d:=\abs{{\mathcal{M}}}/k$.
\item \label{n3}
If $X\in L_2({\mathcal{M}}_{\alpha})$, then $X$ is also closed in ${\mathcal{M}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part \eqref{n1} follows from the definitions. To
prove \eqref{n3}, let $X$ be a flat in $L_2({\mathcal{M}}_{\alpha})$, and
suppose there is a point $u\in ({\mathcal{M}}\setminus {\mathcal{M}}_{\alpha})\cap \overline{X}$,
where $\overline{X}$ denotes the closure in ${\mathcal{M}}$.
Since $X\in L_2({\mathcal{M}}_{\alpha})$, there exist distinct points
$v, w\in {\mathcal{M}}_{\alpha}\cap X$. The rank-$2$ flat $\overline{X}$ contains
the points $u$ and $v$, which lie in different classes; hence
$\overline{X}=u\vee v$ must belong to the base locus. On the other hand,
since ${\pazocal{N}}$ is a net, $\overline{X}$ contains a single point from
${\mathcal{M}}_{\alpha}$, contradicting $v\ne w$.
\end{proof}
In view of the above lemma, we make the following definition. Given
a matroid ${\mathcal{M}}$ and a subset $S \subseteq {\mathcal{M}}$,
we say that $S$ is {\em line-closed}\/ (in ${\mathcal{M}}$) if $X$ is closed in ${\mathcal{M}}$,
for all $X\in L_2(S)$. Clearly, this property is stable under intersection.
\subsection{Nets and Latin squares}
\label{subsec:nets}
The next lemma provides an alternative definition of nets.
The lemma, which was motivated by \cite[Definition 1.1]{Yu04}
in the realizable case, will prove useful in the sequel.
\begin{lemma}
\label{lem:lsq}
A $k$-net ${\pazocal{N}}$ on a matroid ${\mathcal{M}}$ is a partition with non-empty blocks,
${\mathcal{M}} =\coprod_{\alpha \in [k]} {\mathcal{M}}_{\alpha}$, with the property that,
for every $u\in {\mathcal{M}}_{\alpha}$ and $v\in {\mathcal{M}}_{\beta}$ with $\alpha \ne \beta$
and every $\gamma\in [k]$,
\begin{equation}
\label{eq=altnet}
\abs{(u \vee v) \cap {\mathcal{M}}_{\gamma}}=1.
\end{equation}
\end{lemma}
\begin{proof}
Plainly, the net axioms \eqref{mu2} and \eqref{mu3} from \S\ref{subsec:multinets}
imply the following dichotomy, for an arbitrary flat $X\in L_2({\mathcal{M}})$: either $X$ is mono-colored,
or $X$ belongs to the base locus ${\pazocal X}$ and $\abs{X}=\abs{\supp_{{\pazocal{N}}}(X)}=k>2$.
Hence, we may replace ${\pazocal X}$ by the subset of multi-colored, rank-$2$ flats,
i.e., the set of flats of the form $u\vee v$, where $u\in {\mathcal{M}}_{\alpha}$,
$v\in {\mathcal{M}}_{\beta}$, and $\alpha \ne \beta$. In this way, axiom \eqref{mu2}
may be eliminated, and axiom \eqref{mu3} reduces to property \eqref{eq=altnet}.
In turn, this property clearly implies axiom \eqref{mu4}, by taking $r=1$,
$u_0=u$, and $u_1=v$.
In order to complete the proof, we are left with showing that
property \eqref{eq=altnet} implies axiom \eqref{mu1}. To this end,
let us fix $v\in {\mathcal{M}}_{\gamma}$, and define for each $\alpha \ne \gamma$
a function
\begin{equation}
\label{eq:fv}
f_v \colon {\mathcal{M}}_{\alpha} \to \{ X\in {\pazocal X} \mid v\in X \}
\end{equation}
by setting $f_v (u)=u\vee v$. By property \eqref{eq=altnet}, the function
$f_v$ is a bijection. Finally, given $\alpha \ne \beta$ in $[k]$, pick a third
element $\gamma \in [k]$ and a point $v\in {\mathcal{M}}_{\gamma}$ to infer that
$\abs{{\mathcal{M}}_{\alpha}}= \abs{{\mathcal{M}}_{\beta}}$. This verifies axiom \eqref{mu1},
and we are done.
\end{proof}
A {\em Latin square}\/ of size $d$ is a matrix corresponding
to the multiplication table of a quasi-group of order $d$; that is to say,
a $d\times d$ matrix $\Lambda$, with each row and column a
permutation of the set $[d]=\{1,\dots, d\}$.
In the sequel, we will make extensive use of $3$-nets. In view of
Lemma \ref{lem:lsq}, a $3$-net on a matroid ${\mathcal{M}}$ is a partition into
three non-empty subsets ${\mathcal{M}}_1, {\mathcal{M}}_2, {\mathcal{M}}_3$ with the
property that, for each pair of points $u,v\in {\mathcal{M}}$ in
different classes, we have $u\vee v=\set{ u,v,w }$, for some
point $w$ in the third class.
Three-nets are intimately related to Latin squares. If
${\mathcal{M}}$ admits a $(3,d)$-net with parts ${\mathcal{M}}_1,{\mathcal{M}}_2,{\mathcal{M}}_3$, then
the multi-colored $2$-flats define a Latin square $\Lambda$:
if we label the points of ${\mathcal{M}}_{\alpha}$ as
$u^{\alpha}_{1},\dots, u^{\alpha}_{d}$,
then the $(p,q)$-entry of this matrix is the integer $r$
given by the condition that
$\{ u^1_p, u^2_q, u^3_{r} \} \in L_2({\mathcal{M}})$.
A similar procedure shows that a $k$-net is
encoded by a $(k-2)$-tuple of orthogonal Latin squares.
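To make the correspondence concrete, here is the smallest non-trivial instance, written out under the labeling conventions above.

```latex
\begin{example}
The multiplication table of $\mathbb{Z}_3$ yields the Latin square
\[
\Lambda=\begin{pmatrix}
1 & 2 & 3\\
2 & 3 & 1\\
3 & 1 & 2
\end{pmatrix}.
\]
For a $(3,3)$-net realizing $\Lambda$, the base locus consists of the nine
multi-colored flats $\{u^1_p,\, u^2_q,\, u^3_{\Lambda_{pq}}\}$, one for each
pair $(p,q)\in [3]^2$; these flats are pairwise distinct, since a flat in the
base locus contains exactly one point from each class.
\end{example}
```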
The realizability of $3$-nets by line arrangements in $\mathbb{CP}^2$
has been studied by several authors, including Yuzvinsky \cite{Yu04},
Urz\'{u}a \cite{Ur}, and Dimca, Ibadula, and M\u{a}cinic \cite{DIM}.
\begin{example}
\label{ex:kawa}
Particularly simple is the following construction, due to
Kawahara \cite{Ka}: given any Latin square, there is a
matroid with a $3$-net realizing it, such that each submatroid
obtained by restricting to the parts of the $3$-net is a uniform matroid.
In turn, some of these matroids may be realized by line arrangements
in $\mathbb{CP}^2$. For instance, suppose $\Lambda$ is the multiplication table
of one of the groups $\mathbb{Z}_2$, $\mathbb{Z}_3$, $\mathbb{Z}_4$, or $\mathbb{Z}_2\times \mathbb{Z}_2$.
Then the corresponding realization is the braid arrangement,
the Pappus $(9_3)_1$ configuration, the Kirkman configuration,
and the Steiner configuration, respectively.
\end{example}
In general, though, there are many other realizations of Latin
squares. For example, the group $\mathbb{Z}_3$ admits two more
realizations, see \cite[Theorem 2.2]{DIM} and Examples \ref{ex:second},
\ref{ex:third}.
\section{Modular Aomoto--Betti numbers and resonance varieties}
\label{sect:res}
We now study two inter-related matroidal invariants:
the Ao\-moto--Betti numbers $\beta_p$ and the resonance
varieties in characteristic $p>0$. In the process, we explore
some of the constraints imposed on the $\beta_p$-invariants
by the existence of nets on the matroid.
\subsection{The Orlik--Solomon algebra}
\label{subsec:os alg}
As before, let ${\pazocal{A}}$ be an arrangement of hyperplanes in $\mathbb{C}^{\ell}$.
The main topological invariant associated to such an arrangement
is its complement, $M({\pazocal{A}})=\mathbb{C}^{\ell}\setminus \bigcup_{H\in {\pazocal{A}}} H$.
This is a smooth, quasi-projective variety, with the homotopy type
of a connected, finite CW-complex of dimension $\ell$.
Building on work of Brieskorn, Orlik and Solomon \cite{OS} described
the cohomology ring $A({\pazocal{A}})=H^*(M({\pazocal{A}}),\mathbb{Z})$ as the quotient
of the exterior algebra on degree-one classes dual to the
meridians around the hyperplanes of ${\pazocal{A}}$, modulo a certain
ideal determined by the intersection lattice.
Based on this combinatorial description, one may associate an
Orlik--Solomon algebra $A({\mathcal{M}})$ to any (simple) matroid ${\mathcal{M}}$, as follows.
Let $E=\bigwedge({\mathcal{M}})$ be the exterior algebra on degree
$1$ elements $e_u$ corresponding to the points of the matroid,
and define a graded derivation $\partial\colon E\to E$ of degree $-1$
by setting $\partial(1)=0$ and $\partial (e_u)=1$, for all
$u\in {\mathcal{M}}$. Then
\begin{equation}
\label{eq:os matroid}
A({\mathcal{M}}) = E / \text{ideal}\:\{\partial (e_S) \mid \text{$S$ a circuit in ${\mathcal{M}}$}\},
\end{equation}
where $e_S = \prod_{u\in S} e_u$. As is well-known, this graded ring
is torsion-free, and the ranks of its graded pieces are determined
by the M\"obius function of the matroid. In particular, $A^1({\mathcal{M}})=\mathbb{Z}^{{\mathcal{M}}}$
(this is one instance where the simplicity assumption on ${\mathcal{M}}$ is needed).
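For orientation, the generators of the Orlik--Solomon ideal can be written out explicitly in the smallest interesting case.

```latex
\begin{example}
Let $S=\{u,v,w\}$ be a $3$-element circuit. Since $\partial$ is a degree
$-1$ derivation with $\partial(e_u)=1$, we have
\[
\partial(e_u e_v e_w)= e_v e_w - e_u e_w + e_u e_v ,
\]
so the products $e_u e_v$, $e_u e_w$, $e_v e_w$ satisfy a single linear
relation in $A^2({\mathcal{M}})$.
\end{example}
```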
This construction enjoys the following naturality property:
if ${\mathcal{M}}' \subseteq {\mathcal{M}}$ is a submatroid, then the canonical
embedding $\bigwedge({\mathcal{M}}')\hookrightarrow \bigwedge({\mathcal{M}})$ induces
a monomorphism of graded rings, $A({\mathcal{M}}')\hookrightarrow A({\mathcal{M}})$.
\subsection{Resonance varieties}
\label{subsec:res}
Let $A$ be a graded, graded-commutative algebra over a
commutative Noetherian ring $\k$. We will assume that each
graded piece $A^q$ is free and finitely generated over $\k$,
and $A^0=\k$. Furthermore, we will assume that $a^2=0$, for all $a\in A^1$,
a condition which is automatically satisfied if $\k$ is a field with $\ch(\k)\ne 2$,
by the graded-commutativity of multiplication in $A$.
For each element $a \in A^1$, we turn the algebra $A$ into a
cochain complex,
\begin{equation}
\label{eq:aomoto}
\xymatrix{(A , \delta_a)\colon \
A^0\ar^(.66){ \delta_{a}}[r] & A^1\ar^{ \delta_{a}}[r] & A^2 \ar[r]& \cdots},
\end{equation}
using as differentials the maps $ \delta_{a}(b)=ab$. For a finitely generated $\k$-module $H$,
we denote by $\rank_{\k} H$ the minimal number of $\k$-generators for $H$.
The (degree $q$, depth $r$) {\em resonance varieties}\/ of $A$
are then defined as the jump loci for the cohomology of this complex,
\begin{equation}
\label{eq:resa}
{\mathcal R}^q_r(A)= \{a \in A^1 \mid \rank_{\k} H^q(A , \delta_{a}) \ge r\}.
\end{equation}
When $\k$ is a field, it is readily seen that these sets are Zariski-closed, homogeneous
subsets of the affine space $A^1$.
For our purposes here, we will only consider the degree $1$ resonance
varieties, ${\mathcal R}_r(A)={\mathcal R}^1_r(A)$. Clearly, these varieties depend
only on the degree $2$ truncation of $A$, denoted $A^{\le 2}$. Over a field $\k$,
${\mathcal R}_r(A)$ consists of $0$, together with all elements
$a \in A^1$ for which there exist $b_1,\dots ,b_r \in A^1$
such that $\dim_{\k}\spn\{a,b_1,\dots,b_r\}=r+1$ and $ab_i=0$ in $A^2$.
The degree $1$ resonance varieties over a field enjoy the following naturality
property: if $\varphi\colon A\to A'$ is a morphism of commutative
graded algebras, and $\varphi$ is injective in degree $1$, then
$\varphi^1$ embeds ${\mathcal R}_r(A)$ into ${\mathcal R}_r(A')$, for each $r\ge 1$.
\subsection{The Aomoto--Betti numbers}
\label{subsec:aomoto betti}
Consider now the algebra $A=A({\mathcal{M}})\otimes \k$, i.e., the Orlik--Solomon
algebra of the matroid ${\mathcal{M}}$ with coefficients in a commutative Noetherian
ring $\k$. Since $A$ is a quotient of an exterior algebra, we have that
$a^2=0$ for all $a\in A^1$. Thus, we may define the resonance varieties
of the matroid ${\mathcal{M}}$ as
\begin{equation}
\label{eq:res mat}
{\mathcal R}_r({\mathcal{M}},\k) := {\mathcal R}_r(A({\mathcal{M}})\otimes \k).
\end{equation}
If the coefficient ring is a field, these varieties essentially depend only on
the characteristic of the field. Indeed, if $\k\subset \mathbb{K}$ is a
field extension, then ${\mathcal R}_r({\mathcal{M}},\k) = {\mathcal R}_r({\mathcal{M}},\mathbb{K}) \cap \k^{{\mathcal{M}}}$.
The resonance varieties of a realizable matroid were first
defined and studied by Falk \cite{Fa97} for $\k=\mathbb{C}$, by
Matei and Suciu \cite{MS00} for arbitrary fields, and then
by Falk \cite{Farx, Fa07} in the general case.
Over an arbitrary Noetherian ring $\k$, the $\k$-module $A^1=\k^{{\mathcal{M}}}$
comes endowed with a preferred basis, which we will also write as
$\{e_u\}_{u\in {\mathcal{M}}}$. Consider the ``diagonal'' element
\begin{equation}
\label{eq:omega1}
\sigma=\sum_{u\in {\mathcal{M}}} e_u \in A^1,
\end{equation}
and define the {\em cocycle space}\/ of the matroid (with respect to $\sigma$)
to be
\begin{equation}
\label{eq:z1}
Z_{\k}({\mathcal{M}}) = \{ \tau\in A^1 \mid \sigma \cup \tau=0\}.
\end{equation}
Over a field, the dimension of $Z_{\k}({\mathcal{M}})$
depends only on the characteristic of $\k$, and not on $\k$ itself.
The following lemma gives a convenient system of linear
equations for the cocycle space $Z_{\k}({\mathcal{M}})$, in general.
\begin{lemma}
\label{lem:eqs}
Let ${\mathcal{M}}$ be a matroid, and let $\k$ be a commutative Noetherian ring.
A vector $\tau=\sum_{u\in {\mathcal{M}}}\tau_u e_u \in \k^{{\mathcal{M}}}$
belongs to $Z_\k({\mathcal{M}})$ if and only if, for each flat $X\in L_2({\mathcal{M}})$ and $v\in X$,
the following equation holds:
\begin{equation}
\label{eq=zfalk}
\sum_{u\in X} \tau_u =\abs{X} \cdot \tau_v .
\end{equation}
Furthermore, if $\k$ is a field of characteristic $p>0$, the above equations
are equivalent to the system
\begin{equation}
\label{eq:zsigma}
\begin{cases}
\:\sum_{u\in X} \tau_u = 0 & \text{if $p\mid \abs{X}$},\\[3pt]
\:\tau_{u}=\tau_{v},\ \text{for all $u,v\in X$} & \text{if $p\nmid \abs{X}$}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
The first assertion follows from \cite[Theorem 3.5]{Farx}, while
the second assertion is a direct consequence of the first one.
\end{proof}
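The reduction from \eqref{eq=zfalk} to \eqref{eq:zsigma} may also be spelled out in one step, as follows.

```latex
\begin{remark}
Fix a flat $X\in L_2({\mathcal{M}})$ and points $v,w\in X$. Subtracting the two
corresponding instances of \eqref{eq=zfalk} gives
$\abs{X}\,(\tau_v-\tau_w)=0$ in $\k$. If $p\nmid \abs{X}$, then $\abs{X}$
is invertible, so $\tau$ is constant on $X$, and \eqref{eq=zfalk} holds
automatically. If $p\mid \abs{X}$, the right-hand side of \eqref{eq=zfalk}
vanishes, and the equation becomes $\sum_{u\in X}\tau_u=0$.
\end{remark}
```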
Note that $\sigma \in Z_\k({\mathcal{M}})$. Define the {\em coboundary space}\/
of the matroid to be the submodule $B_\k({\mathcal{M}})\subseteq Z_\k({\mathcal{M}})$
spanned by $\sigma$. Clearly, a vector $\tau$ as above belongs to
$B_\k({\mathcal{M}})$ if and only if all its components $\tau_u$ are equal.
Let us define the {\em Aomoto--Betti number}\/ over $\k$ of the matroid ${\mathcal{M}}$ as
\begin{equation}
\label{eq=betafalk}
\beta_{\k}({\mathcal{M}}):=\rank_{\k} Z_\k({\mathcal{M}})/B_\k({\mathcal{M}}) .
\end{equation}
Clearly, $\beta_{\k}({\mathcal{M}})=0$ if and only if $Z_\k({\mathcal{M}})= B_\k({\mathcal{M}})$.
If $\k$ is a field of positive characteristic, the Aomoto--Betti number
of ${\mathcal{M}}$ depends only on $p=\ch \k$, and so we will write it simply as
$\beta_p({\mathcal{M}})$. We then have
\begin{equation}
\label{eq:betap bis}
\beta_p({\mathcal{M}})= \dim_{\k} Z_\k({\mathcal{M}}) -1 .
\end{equation}
Note that $Z_\k({\mathcal{M}})/B_\k({\mathcal{M}}) = H^1(A({\mathcal{M}})\otimes \k,\delta_{\sigma})$.
Thus, the above $\bmod$-$p$ matroid invariant may be reinterpreted as
\begin{equation}
\label{eq:betap again}
\beta_p({\mathcal{M}}) = \max\{r \mid \sigma\in {\mathcal R}_r({\mathcal{M}},\k)\}.
\end{equation}
\subsection{Constraints on the Aomoto--Betti numbers}
\label{subsec:constraints}
We now use Lemma \ref{lem:eqs} to derive useful
information on the matroidal invariants defined above.
We start with a vanishing criterion, which is an immediate
consequence of that lemma.
\begin{corollary}
\label{cor:beta zero}
If $p\nmid \abs{X}$ for all $X\in L_2({\mathcal{M}})$ with $\abs{X}>2$,
then $\beta_p({\mathcal{M}})=0$.
\end{corollary}
For instance, if ${\mathcal{M}}$ is a rank~$3$ uniform matroid, then $\beta_p({\mathcal{M}})=0$,
for all $p$. At the other extreme, if all the points of ${\mathcal{M}}$ are collinear,
and $p$ divides $\abs{{\mathcal{M}}}$, then $\beta_p({\mathcal{M}})=\abs{{\mathcal{M}}}-2$.
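These values can be confirmed by a short computation: the system \eqref{eq:zsigma} is linear over $\mathbb{F}_p$, so $\dim_{\k} Z_{\k}({\mathcal{M}})$ is computable by Gaussian elimination. A minimal Python sketch (illustrative only; the function name is ours):

```python
def cocycle_dim(n, flats, p):
    """dim over F_p of the cocycle space Z(M), for a matroid on points
    0..n-1 whose rank-2 flats are given as lists of points (p prime)."""
    rows = []
    for X in flats:
        if len(X) % p == 0:          # p | |X|: one equation, sum over X = 0
            row = [0] * n
            for u in X:
                row[u] = 1
            rows.append(row)
        else:                        # p does not divide |X|: tau constant on X
            for u, v in zip(X, X[1:]):
                row = [0] * n
                row[u], row[v] = 1, p - 1
                rows.append(row)
    # Gaussian elimination mod p to find the rank of the system
    rank = 0
    for col in range(n):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)   # inverse mod p, since p is prime
        rows[rank] = [(a * inv) % p for a in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return n - rank

# Six collinear points, p = 3: dim Z = 5, so beta_3 = 5 - 1 = 4 = |M| - 2.
print(cocycle_dim(6, [list(range(6))], 3))             # -> 5
# The uniform matroid U_{3,4}: all rank-2 flats have two points, beta_p = 0.
pairs = [[i, j] for i in range(4) for j in range(i + 1, 4)]
print(cocycle_dim(4, pairs, 2))                        # -> 1
```

For $p=5$ and six collinear points, the same routine returns $1$, confirming that a nontrivial cocycle space requires $p$ to divide the size of the flat.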
The next application provides constraints on the Aomoto--Betti
numbers in the presence of nets.
\begin{prop}
\label{prop:betapknet}
Assume that ${\mathcal{M}}$ supports a $k$-net. Then $\beta_p ({\mathcal{M}})=0$ if $p\nmid k$,
and $\beta_p ({\mathcal{M}}) \ge k-2$, otherwise.
\end{prop}
\begin{proof}
Set $\k=\mathbb{F}_p$.
First suppose that $p\nmid k$. Let $\tau\in Z_{\k}({\mathcal{M}})$.
Pick $u,v\in {\mathcal{M}}_{\alpha}$, and then $w\in {\mathcal{M}}_{\beta}$, with $\beta\ne \alpha$.
Since $\abs{u\vee w}= \abs{v\vee w}=k$, by the net property, we infer from
Lemma \ref{lem:eqs} that $\tau_u=\tau_w=\tau_v$. Hence, $\tau$ is constant
on each part ${\mathcal{M}}_{\alpha}$. Applying once again Lemma \ref{lem:eqs} to a multi-colored
flat, we deduce that $\tau\in B_{\k}({\mathcal{M}})$.
Now suppose $p\mid k$, and consider the $k$-dimensional subspace of $\k^{{\mathcal{M}}}$
consisting of those elements $\tau$ that are constant (say, equal to $c_{\alpha}$)
on each ${\mathcal{M}}_{\alpha}$. By Lemma \ref{lem:eqs} and the net property,
the subspace of this $k$-dimensional space cut out by the equation
$\sum_{\alpha\in [k]} c_{\alpha}=0$ is contained in
$Z_{\k}({\mathcal{M}})$. Thus, $\dim_{\k} Z_{\k}({\mathcal{M}}) \ge k-1$,
and the desired inequality follows at once.
\end{proof}
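Both cases of the proposition can be checked directly on a small example. The braid matroid supports a $3$-net; under an illustrative labeling of its six points by the transpositions $(12),(13),(14),(23),(24),(34)$ (our choice), a brute-force count of cocycles over $\mathbb{F}_p$ gives $\beta_3 = 1 = k-2$ and $\beta_2 = 0$:

```python
from itertools import product

# Rank-2 flats of the braid matroid on points 0..5: four triple points
# and three double points (labeling ours, for illustration).
flats = [(0, 1, 3), (0, 2, 4), (1, 2, 5), (3, 4, 5),
         (0, 5), (1, 4), (2, 3)]

def in_Z(tau, p):
    """Membership in the cocycle space over F_p, via the system (eq:zsigma)."""
    for X in flats:
        if len(X) % p == 0:
            if sum(tau[u] for u in X) % p:
                return False
        elif len({tau[u] for u in X}) > 1:
            return False
    return True

def Z_size(p):
    return sum(1 for tau in product(range(p), repeat=6) if in_Z(tau, p))

print(Z_size(3))   # 9 = 3^2, so beta_3 = 2 - 1 = 1 = k - 2, since 3 | k
print(Z_size(2))   # 2 = 2^1, so beta_2 = 0, consistent with 2 not dividing 3
```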
Finally, let us record a construction which relates the cocycle space
of a matroid to the cocycle spaces of the parts of a net supported
by the matroid.
\begin{lemma}
\label{lem:proj}
Let $\k$ be a field.
Suppose a matroid ${\mathcal{M}}$ supports a net. For each part ${\mathcal{M}}_{\alpha}$,
the natural projection $\k^{{\mathcal{M}}}\to \k^{{\mathcal{M}}_{\alpha}}$ restricts to a
homomorphism $h_{\alpha}\colon Z_\k({\mathcal{M}}) \to Z_\k({\mathcal{M}}_{\alpha})$,
which in turn induces a homomorphism
$\bar{h}_{\alpha}\colon Z_\k({\mathcal{M}})/B_\k({\mathcal{M}}) \to
Z_\k({\mathcal{M}}_{\alpha})/B_\k({\mathcal{M}}_{\alpha})$.
\end{lemma}
\begin{proof}
In view of Lemmas \ref{lem:net props}\eqref{n3} and \ref{lem:eqs},
the equations defining $Z_\k({\mathcal{M}}_{\alpha})$ form a subset of the set of
equations defining $Z_\k({\mathcal{M}})$. Thus, the projection $\k^{{\mathcal{M}}}\to \k^{{\mathcal{M}}_{\alpha}}$
restricts to a homomorphism $Z_\k({\mathcal{M}}) \to Z_\k({\mathcal{M}}_{\alpha})$. Clearly, this
homomorphism takes $\sigma$ to $\sigma_{\alpha}$, and the second
assertion follows.
\end{proof}
\subsection{More on $3$-nets and the $\beta_3$ numbers}
\label{eq:beta3}
In the case when the net has $3$ parts and the ground field
has $3$ elements, the previous lemma can be made more precise.
\begin{prop}
\label{prop:ker1}
Let ${\pazocal{N}}$ be a $3$-net on a matroid ${\mathcal{M}}$, and let $\k=\mathbb{F}_3$.
For each part ${\mathcal{M}}_{\alpha}$, we have an exact sequence of
$\k$-vector spaces,
\begin{equation}
\label{eq:ex seq}
\xymatrix{0\ar[r] & \k \ar[r] & Z_\k({\mathcal{M}})/B_\k({\mathcal{M}})
\ar^(.48){\bar{h}_{\alpha}}[r] & Z_\k({\mathcal{M}}_{\alpha})/B_\k({\mathcal{M}}_{\alpha})}.
\end{equation}
\end{prop}
\begin{proof}
Let $\tau_{\alpha}\in \k^{{\mathcal{M}}}$ be the vector whose components
are equal to $0$ on ${\mathcal{M}}_{\alpha}$, and are equal to $1$,
respectively $-1$ on the other two parts of the net.
It follows from Lemmas \ref{lem:net props}\eqref{n3}
and \ref{lem:eqs} that $\tau_{\alpha}\in Z_\k({\mathcal{M}})$.
Clearly, $\tau_{\alpha}\in \ker(h_{\alpha})$. In fact,
as we shall see next, $\tau_{\alpha}$ generates the
kernel of $h_{\alpha}$.
Suppose $h_{\alpha}(\eta)=0$, for some $\eta \in Z_\k({\mathcal{M}})$.
We first claim that $\eta$ must be constant on the other two
parts, ${\mathcal{M}}_{\beta}$ and ${\mathcal{M}}_{\gamma}$. To verify this claim,
fix a point $u\in {\mathcal{M}}_{\gamma}$ and pick $v,w \in {\mathcal{M}}_{\beta}$.
By the net property and \eqref{eq:zsigma}, $\eta_{u} + \eta_{v} +
\eta_{v'}=0$ and $\eta_{u} + \eta_{w} + \eta_{w'}=0$,
for some $v',w'\in {\mathcal{M}}_{\alpha} $. But $\eta_{v'}=\eta_{w'}=0$,
by assumption. Hence, $\eta_{v}=\eta_{w}$; interchanging the roles
of $\beta$ and $\gamma$, our claim follows.
Writing now condition
\eqref{eq:zsigma} for $\eta$ on a multi-colored flat of ${\pazocal{N}}$,
we conclude that $\eta \in \k \cdot \tau_{\alpha}$.
It follows that sequence \eqref{eq:ex seq} is exact in the middle.
Exactness at $\k$ is obvious, and so we are done.
\end{proof}
\begin{corollary}
\label{cor:betabounds}
If a matroid ${\mathcal{M}}$ supports a $3$-net with parts ${\mathcal{M}}_{\alpha}$, then
$1\le \beta_3({\mathcal{M}})\le \beta_3({\mathcal{M}}_{\alpha})+1$, for all $\alpha$.
\end{corollary}
If $\beta_3({\mathcal{M}}_{\alpha})= 0$, for some $\alpha$, then $\beta_3({\mathcal{M}})=1$, while
if $\beta_3({\mathcal{M}}_{\alpha})=1$, for some $\alpha$, then $\beta_3({\mathcal{M}})=1$ or $2$.
Furthermore, as the next batch of examples shows, all three possibilities
do occur, even among realizable matroids.
\begin{figure}
\centering
\subfigure{%
\begin{minipage}{0.46\textwidth}
\centering
\begin{tikzpicture}[scale=0.58]
\path[use as bounding box] (-2,-0.5) rectangle (6,7);
\path (0,4) node[fill, draw, inner sep=2pt, shape=circle, color=blue] (v1) {};
\path (2,4) node[fill, draw, inner sep=2pt, shape=circle, color=red!90!white] (v2) {};
\path (4,4) node[fill, draw, inner sep=2pt, shape=circle, color=dkgreen!80!white] (v3) {};
\path (0,2) node[fill, draw, inner sep=2pt, shape=circle, color=blue] (v4) {};
\path (2,2) node[fill, draw, inner sep=2pt, shape=circle, color=red!90!white] (v5) {};
\path (4,2) node[fill, draw, inner sep=2pt, shape=circle, color=dkgreen!80!white] (v6) {};
\path (0,0) node[fill, draw, inner sep=2pt, shape=circle, color=blue] (v7) {};
\path (2,0) node[fill, draw, inner sep=2pt, shape=circle, color=red!90!white] (v8) {};
\path (4,0) node[fill, draw, inner sep=2pt, shape=circle, color=dkgreen!80!white] (v9) {};
\draw (v1) -- (v2) -- (v3); \draw (v4) -- (v5) -- (v6); \draw (v7) -- (v8) -- (v9);
\draw (v1) -- (v4) -- (v7); \draw (v2) -- (v5) -- (v8); \draw (v3) -- (v6) -- (v9);
\draw (v2) -- (v4) -- (v8) -- (v6) -- (v2);
\draw (v1) -- (v5) -- (v9); \draw (v3) -- (v5) -- (v7);
\draw (v7) .. controls (-5,6) and (-1,6.8) .. (v2);
\draw (v4) .. controls (-4,7) and (0,7.8) .. (v3);
\draw (v1) .. controls (5,7.8) and (8,7) .. (v6);
\draw (v2) .. controls (6,6.8) and (9,6) .. (v9);
\end{tikzpicture}
\caption{A $(3,3)$-net on the Ceva matroid}
\label{fig:ceva}%
\end{minipage}
}
\subfigure{%
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale=0.5]
\path[use as bounding box] (-1.5,-3.2) rectangle (6,6.8);
\path (0,4) node[fill, draw, inner sep=2pt, shape=circle, color=blue] (v1) {};
\path (2,4) node[fill, draw, inner sep=2pt, shape=circle, color=red!90!white] (v2) {};
\path (4,4) node[fill, draw, inner sep=2pt, shape=circle, color=dkgreen!80!white] (v3) {};
\path (0,2) node[fill, draw, inner sep=2pt, shape=circle, color=red!90!white] (v4) {};
\path (2,2) node[fill, draw, inner sep=2pt, shape=circle, color=dkgreen!80!white] (v5) {};
\path (4,2) node[fill, draw, inner sep=2pt, shape=circle, color=blue] (v6) {};
\path (0,0) node[fill, draw, inner sep=2pt, shape=circle, color=dkgreen!80!white] (v7) {};
\path (2,0) node[fill, draw, inner sep=2pt, shape=circle, color=blue] (v8) {};
\path (4,0) node[fill, draw, inner sep=2pt, shape=circle, color=red!90!white] (v9) {};
\draw (v1) -- (v2) -- (v3); \draw (v4) -- (v5) -- (v6); \draw (v7) -- (v8) -- (v9);
\draw (v1) -- (v4) -- (v7); \draw (v2) -- (v5) -- (v8); \draw (v3) -- (v6) -- (v9);
\draw (v4) -- (v8); \draw (v2) -- (v6);
\draw (v1) -- (v5) -- (v9);
\draw (v7) .. controls (-5,6) and (-1,6.8) .. (v2);
\draw (v4) .. controls (-4,7) and (0,7.8) .. (v3);
\path (2,-3) node[fill, draw, inner sep=2pt, shape=circle, color=dkbrown!90!white] (v10) {};
\path (6.12,-2.12) node[fill, draw, inner sep=2pt, shape=circle, color=dkbrown!90!white] (v11) {};
\path (7,2) node[fill, draw, inner sep=2pt, shape=circle, color=dkbrown!90!white] (v12) {};
\draw (v8) .. controls (2.65,-0.6) and (3,-1.5) .. (v11);
\draw (v6) .. controls (4.6,1.35) and (5.5,1) .. (v11);
\draw (v7) .. controls (0.4,-2.2) and (1.2,-2.6) .. (v10);
\draw (v9) .. controls (3.6,-2.2) and (2.8,-2.6) .. (v10);
\draw (v8) -- (v10); \draw (v9) -- (v11); \draw (v6) -- (v12);
\draw (v3) .. controls (5,3.8) and (6,3.6) .. (v12);
\draw (v9) .. controls (5,0.2) and (6,0.4) .. (v12);
\end{tikzpicture}
\caption{A $(4,3)$-net on the Hessian matroid}
\label{fig:hessian}%
\end{minipage}
}
\end{figure}
\begin{example}
\label{ex:first}
First, let ${\pazocal{A}}$ be the braid arrangement from Figure \ref{fig:braid}.
Then ${\pazocal{A}}$ admits a $(3,2)$-net with all parts ${\pazocal{A}}_{\alpha}$ in general
position. Hence, $\beta_3({\pazocal{A}}_{\alpha})=0$
for each $\alpha$, and thus $\beta_3({\pazocal{A}})=1$.
\end{example}
\begin{example}
\label{ex:second}
Next, let ${\pazocal{A}}$ be a realization of the configuration described by
Pappus's hexagon theorem. As noted in \cite[Example 2.3]{DIM},
${\pazocal{A}}$ admits a $(3,3)$-net with two parts in general position and
one not. Hence, $\beta_3({\pazocal{A}}_1)=\beta_3({\pazocal{A}}_2)=0$ while
$\beta_3({\pazocal{A}}_3)=1$. Therefore, $\beta_3({\pazocal{A}})=1$.
\end{example}
\begin{example}
\label{ex:third}
Finally, let ${\pazocal{A}}$ be the Ceva arrangement, defined by the polynomial
$Q=(z_1^3-z_2^3)(z_1^3-z_3^3)(z_2^3-z_3^3)$. As can be seen
in Figure \ref{fig:ceva}, this arrangement admits a $(3,3)$-net with
no parts in general position. Hence, $\beta_3({\pazocal{A}}_{\alpha})=1$
for each $\alpha$. Moreover, $\beta_3({\pazocal{A}})=2$, by direct computation,
or by Proposition \ref{prop=mb3} below.
The classification results from \cite{DIM} and the above considerations
imply that the only rank $3$ arrangement ${\pazocal{A}}$ of at most $9$ planes
that supports a $3$-net and has $\beta_3({\pazocal{A}}) \ge 2$ is the Ceva arrangement.
\end{example}
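The direct computation of $\beta_3({\pazocal{A}})$ for the Ceva arrangement can be mechanized. In a labeling of our own choosing, points $0$--$2$, $3$--$5$, $6$--$8$ correspond to the lines $z_1=\omega^i z_2$, $z_1=\omega^j z_3$, $z_2=\omega^k z_3$ (with $\omega^3=1$); the matroid then has three mono-colored triple points and nine multi-colored ones:

```python
from itertools import product

# Twelve 3-point flats of the Ceva matroid (labeling ours; any
# projectively equivalent labeling gives the same dimension).
flats = [(0, 1, 2), (3, 4, 5), (6, 7, 8)] + \
        [(i, 3 + j, 6 + (j - i) % 3) for i in range(3) for j in range(3)]

# Over F_3 every flat has 3 points, so (eq:zsigma) reads: the sum of the
# coordinates of tau over each flat vanishes.
Z = [tau for tau in product(range(3), repeat=9)
     if all(sum(tau[u] for u in X) % 3 == 0 for X in flats)]

print(len(Z))   # 27 = 3^3, hence beta_3 = 3 - 1 = 2
```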
\section{From modular resonance to nets}
\label{sect:small}
In this section, we construct suitable parameter sets for nets
on matroids, and relate these parameter sets to modular resonance.
The approach we take leads to a proof of Theorem \ref{teo=lambdaintro}
and the combinatorial parts of Theorems \ref{thm:main1} and
\ref{thm:2main1} from the Introduction.
\subsection{A parameter set for nets on a matroid}
\label{ssec=34}
As before, let ${\mathcal{M}}$ be a simple matroid. Generalizing the
previous setup, let $\k$ be a finite set of size $k\ge 3$.
Inside the set $\k^{{\mathcal{M}}}$ of all functions $\tau\colon {\mathcal{M}}\to \k$,
we isolate two subsets.
The first subset, $Z'_{\k}({\mathcal{M}})$, consists of all functions $\tau$
with the property that, for every $X\in L_2({\mathcal{M}})$, the restriction
$\tau\colon X\to \k$ is either constant or bijective. The second
subset, $B_{\k}({\mathcal{M}})$, consists of all constant functions.
Plainly, $B_{\k}({\mathcal{M}})\subseteq Z'_{\k}({\mathcal{M}})$. In view of
Lemma \ref{lema=sigmadich} below, we will call the
elements of $Z'_{\k}({\mathcal{M}})$ {\em special}\/ $\k$-cocycles.
Now define a function
\begin{equation}
\label{eq=lambda}
\xymatrix{\lambda_{\k} \colon \{\text{$k$-nets on ${\mathcal{M}}$}\} \ar[r]& \k^{{\mathcal{M}}}},
\end{equation}
by associating to a $k$-net ${\pazocal{N}}$, with partition ${\mathcal{M}}= \coprod_{\alpha \in \k} {\mathcal{M}}_{\alpha}$,
the element $\tau:= \lambda_{\k}({\pazocal{N}})$ which takes the value $\alpha$ on ${\mathcal{M}}_{\alpha}$.
\begin{lemma}
\label{lema=lambdabij}
The above construction induces a bijection,
\[
\xymatrix{\lambda_{\k} \colon \{\text{$k$-nets on ${\mathcal{M}}$}\}
\ar^(.52){\simeq}[r]& Z'_{\k}({\mathcal{M}})\setminus B_{\k}({\mathcal{M}})}.
\]
\end{lemma}
\begin{proof}
Plainly, $\lambda_{\k}$ is injective, with image disjoint from $B_{\k}({\mathcal{M}})$.
To show that $\lambda_{\k}({\pazocal{N}})$ belongs to $Z'_{\k}({\mathcal{M}})$, for any $k$-net ${\pazocal{N}}$,
pick $X\in L_2({\mathcal{M}})$. If the flat $X$ is mono-colored with respect to ${\pazocal{N}}$,
the restriction $\tau\colon X\to \k$ is constant, by construction.
If $X$ is multi-colored, this restriction is a bijection, according
to Lemma \ref{lem:lsq}, and we are done.
Finally, let $\tau$ be an element in $Z'_{\k}({\mathcal{M}})\setminus B_{\k}({\mathcal{M}})$.
Define a partition ${\mathcal{M}}= \coprod_{\alpha \in \k} {\mathcal{M}}_{\alpha}$ by setting
${\mathcal{M}}_{\alpha}= \{ u\in {\mathcal{M}} \mid \tau_u= \alpha \}$. Since $\tau$ is
non-constant on ${\mathcal{M}}$, there must be a flat $X\in L_2({\mathcal{M}})$ with
$\tau\colon X\to \k$ bijective, which shows that all blocks of the
partition are non-empty. For $u\in {\mathcal{M}}_{\alpha}$ and $v\in {\mathcal{M}}_{\beta}$
with $\alpha \ne \beta$, we infer that $\tau\colon u \vee v\to \k$ must
be bijective, since $\tau\in Z'_{\k}({\mathcal{M}})$. It follows from Lemma \ref{lem:lsq}
that the partition defines a $k$-net ${\pazocal{N}}$ on ${\mathcal{M}}$. By construction,
$\lambda_{\k}({\pazocal{N}})=\tau$, and this completes the proof.
\end{proof}
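In computational terms, Lemma \ref{lema=lambdabij} reduces the search for $k$-nets on ${\mathcal{M}}$ to a simple combinatorial test: list the non-constant functions $\tau$ whose restriction to every rank-$2$ flat is constant or bijective. A short sketch (with an illustrative labeling of the braid matroid, chosen by us):

```python
def is_special(tau, flats, k):
    """Is tau: points -> {0..k-1} in Z'(M), i.e., constant or bijective
    on each rank-2 flat X? (Bijectivity forces |X| = k.)"""
    for X in flats:
        vals = {tau[u] for u in X}
        if len(vals) != 1 and not (len(X) == k and len(vals) == k):
            return False
    return True

# Braid matroid, points 0..5 <-> (12),(13),(14),(23),(24),(34) (our labeling).
flats = [(0, 1, 3), (0, 2, 4), (1, 2, 5), (3, 4, 5),
         (0, 5), (1, 4), (2, 3)]
net = [0, 1, 2, 2, 1, 0]          # parts {0,5}, {1,4}, {2,3}: a 3-net
print(is_special(net, flats, 3))                  # True: a special 3-cocycle
print(is_special([0, 1, 2, 1, 2, 0], flats, 3))   # False: not a net partition
```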
\begin{corollary}
\label{coro=realobs}
If $\abs{\k}>4$, then $Z'_{\k}({\mathcal{M}})= B_{\k}({\mathcal{M}})$, for any realizable matroid of
rank at least $3$.
\end{corollary}
\subsection{Sums in finite abelian groups}
\label{subsec:finite abelian}
Before proceeding with our main theme, let us
consider a general, purely algebraic situation. Given a finite abelian
group $\k$, define
\begin{equation}
\label{eq:sigmak}
\Sigma (\k)= \sum_{\alpha\in \k} \alpha
\end{equation}
to be the sum of the elements of the group.
The following formula is then easily checked:
\begin{equation}
\label{eq=sigmaprod}
\Sigma (\k \times \k')= (\abs{\k'} \cdot \Sigma (\k), \abs{\k} \cdot \Sigma (\k')) .
\end{equation}
Clearly, $2 \Sigma (\k)=0$. Moreover, if $\k$ is cyclic,
then $\Sigma (\k)=0$ if and only if the order of $\k$ is odd.
These observations readily imply the following elementary lemma.
\begin{lemma}
\label{lema=sigmazero}
Let $\k=\prod_{\text{$p$ prime}} \prod_{s\ge 1} (\mathbb{Z}/p^s \mathbb{Z})^{e(p,s)}$
be the primary decomposition of $\k$.
Then $\Sigma (\k)=0$ if and only if\/ $\sum_s e(2,s) \ne 1$.
\end{lemma}
Next, we examine the conditions under which $\Sigma(\k)$ vanishes
when the group $\k$ is, in fact, a (finite) commutative ring.
\begin{lemma}
\label{lema=2mod4}
Let $k$ be a positive integer. There is a finite commutative ring $\k$
with $k$ elements and satisfying $\Sigma (\k)=0$
if and only if $k \not \equiv 2$ mod $4$. Moreover, if $2\ne k=p^s$, we
may take $\k=\mathbb{F}_{p^s}$.
\end{lemma}
\begin{proof}
Let $k=\prod_p p^{v_p(k)}$ be the prime decomposition. If $\abs{\k}=k$, then
$\sum_s se(2,s)= v_2(k)$, and $k \equiv 2$ mod $4$ if and only if $v_2(k)=1$.
If $v_2(k)\ne 1$, we may take $\k= \prod_p \mathbb{F}_p^{v_p(k)}$, and infer from
Lemma \ref{lema=sigmazero} that $\Sigma (\k)=0$. If $v_2(k)=1$ and $\abs{\k}=k$,
then clearly $\sum_s e(2,s)= 1$. Again by Lemma \ref{lema=sigmazero}, this implies that
$\Sigma (\k)\ne 0$. For the last claim, note that $\mathbb{F}_{p^s}\cong \mathbb{F}_{p}^{s}$ as additive groups.
\end{proof}
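The last two lemmas are easy to confirm experimentally; for instance (a brute-force sketch, with a function name of our choosing):

```python
from itertools import product

def sigma(ns):
    """Componentwise sum of all elements of Z/n1 x ... x Z/nr."""
    total = [0] * len(ns)
    for el in product(*(range(n) for n in ns)):
        total = [(t + e) % n for t, e, n in zip(total, el, ns)]
    return tuple(total)

print(sigma([3]))        # (0,): odd cyclic groups have Sigma = 0
print(sigma([4]))        # (2,): a single 2-primary factor gives Sigma != 0
print(sigma([2, 2]))     # (0, 0): two 2-primary factors, Sigma = 0
print(sigma([4, 3]))     # (2, 0): order 12, but sum_s e(2,s) = 1
print(sigma([2, 2, 3]))  # (0, 0, 0): F_2 x F_2 x F_3, a ring of order 12
```

The last two lines illustrate Lemma \ref{lema=2mod4} for $k=12$: some groups of order $12$ have $\Sigma \ne 0$, but a suitable product of prime fields does have $\Sigma = 0$.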
\subsection{Modular resonance and multinets}
\label{subsec:special}
For the rest of this section, we will assume $\k$ is a finite commutative ring.
In this case, $B_{\k}({\mathcal{M}})$ coincides with the coboundary space defined in
\S\ref{subsec:aomoto betti}.
The next lemma establishes a relationship between the subset $Z'_{\k}({\mathcal{M}}) \subseteq \k^{{\mathcal{M}}}$
and the modular cocycle space $Z_{\k}({\mathcal{M}})$.
\begin{lemma}
\label{lema=sigmadich}
Let ${\mathcal{M}}$ be a matroid and let $\k$ be a finite commutative ring. If $\Sigma (\k)=0$,
then $Z'_{\k}({\mathcal{M}})\subseteq Z_{\k}({\mathcal{M}})$. Otherwise, $Z'_{\k}({\mathcal{M}})\cap Z_{\k}({\mathcal{M}})= B_{\k}({\mathcal{M}})$.
\end{lemma}
\begin{proof}
For $\tau\in Z'_{\k}({\mathcal{M}})$ and $X\in L_2({\mathcal{M}})$ with $\tau \equiv \alpha$ on $X$,
equations \eqref{eq=zfalk} reduce to $\abs{X}\cdot \alpha=\abs{X}\cdot \alpha$.
When $\tau\colon X\to \k$ is a bijection, these equations take the form
$\Sigma (\k)= \abs{\k}\cdot \alpha$, for every $\alpha\in \k$, or, equivalently,
$\Sigma (\k)=0$. The desired conclusions follow.
\end{proof}
Lemmas \ref{lema=lambdabij}, \ref{lema=2mod4}, and \ref{lema=sigmadich}
together prove Theorem \ref{teo=lambdaintro} from the Introduction.
In turn, Theorem \ref{teo=lambdaintro} applied to the case when $\k=\mathbb{F}_4$
proves the combinatorial part of Theorem \ref{thm:2main1}, i.e.,
the equivalence \eqref{2m1}$\Leftrightarrow$\eqref{2m3} from there.
\begin{remark}
\label{rem:zz}
When ${\mathcal{M}}$ supports a $4$-net, the inclusion
$Z'_{\mathbb{F}_4}({\mathcal{M}})\subseteq Z_{\mathbb{F}_4}({\mathcal{M}})$ from Theorem \ref{teo=lambdaintro}\eqref{li2}
is always strict. Indeed, let ${\mathcal{M}} =\coprod_{\alpha \in [4]} {\mathcal{M}}_{\alpha}$ be a $4$-net
partition. Define $\tau\in \mathbb{F}_4^{{\mathcal{M}}}= (\mathbb{F}_2 \times \mathbb{F}_2)^{{\mathcal{M}}}$ to be
equal to $(0,0)$ on ${\mathcal{M}}_1$ and ${\mathcal{M}}_2$, and to $(1,0)$
on ${\mathcal{M}}_3$ and ${\mathcal{M}}_4$. Using \eqref{eq:zsigma}, we easily see that
$\tau \in Z_{\mathbb{F}_4}({\mathcal{M}})\setminus Z'_{\mathbb{F}_4}({\mathcal{M}})$.
In the case of $3$-nets, this phenomenon no longer occurs. For instance,
if ${\pazocal{A}}$ is the Ceva arrangement from Example \ref{ex:third}, then
${\pazocal{A}}$ admits a $3$-net, while $Z'_{\mathbb{F}_3}({\pazocal{A}})= Z_{\mathbb{F}_3}({\pazocal{A}})$,
by Lemma \ref{lem:lambda}.
\end{remark}
Next, we provide an extension of Lemma \ref{lema=lambdabij},
from nets to multinets.
\begin{lemma}
\label{lem:special}
Let $\k$ be a finite commutative ring with $\Sigma (\k)=0$. Then the
function $\lambda_{\k} \colon \{\text{$k$-nets on
${\mathcal{M}}$}\} \hookrightarrow Z_{\k}({\mathcal{M}})\setminus B_{\k}({\mathcal{M}})$ has an injective extension,
\[
\lambda_{\k}\colon \{\text{reduced $k$-multinets on ${\mathcal{M}}$}\}
\hookrightarrow Z_{\k}({\mathcal{M}})\setminus B_{\k}({\mathcal{M}}).
\]
\end{lemma}
\begin{proof}
Let ${\pazocal{N}}$ be a reduced $k$-multinet on ${\mathcal{M}}$. Define $\lambda_{\k} ({\pazocal{N}})\in \k^{{\mathcal{M}}}$
by using the underlying partition, ${\mathcal{M}}= \coprod_{\alpha \in \k} {\mathcal{M}}_{\alpha}$, exactly as in
\eqref{eq=lambda}. Clearly, $\lambda_{\k} ({\pazocal{N}})$ determines ${\pazocal{N}}$. By the multinet axiom
\S\ref{subsec:multinets}\eqref{mu3}, the map $\lambda_{\k} ({\pazocal{N}})\colon {\mathcal{M}} \to \k$
is surjective; hence
$\lambda_{\k} ({\pazocal{N}}) \not\in B_\k ({\mathcal{M}})$.
Now, if $X\in L_2({\mathcal{M}})$ is mono-colored, i.e.,
$X\subseteq {\mathcal{M}}_{\alpha}$ for some $\alpha\in \k$, then the system of
equations \eqref{eq=zfalk} reduces to
$\abs{X}\cdot \alpha=\abs{X}\cdot \alpha$, which is trivially satisfied.
Otherwise, those equations take the form
\[
\sum_{\alpha\in \k} \abs{X\cap {\mathcal{M}}_{\alpha}} \cdot \alpha= \abs{X} \cdot \beta,
\]
for all $\beta\in \k$, or, equivalently, $n_X \cdot \Sigma (\k)=0$ and
$\abs{X}\cdot \beta=0$. Both conditions hold, since $\Sigma(\k)=0$ and
$\abs{X}=k\cdot n_X$ is a multiple of $\abs{\k}$, which annihilates the
additive group of $\k$; thus, we are done.
\end{proof}
\subsection{A multiplicity assumption}
\label{ssec=41}
Finally, let us consider the case when $k=3$ (and $\k=\mathbb{F}_3$) in
Theorem \ref{teo=lambdaintro}. Under a natural multiplicity
assumption, we are then able to say more about the cocycle space
of our matroid ${\mathcal{M}}$.
\begin{lemma}
\label{lem:lambda}
Suppose $L_2({\mathcal{M}})$ has no flats of multiplicity properly divisible by $3$.
Then $Z'_{\mathbb{F}_3}({\mathcal{M}})=Z_{\mathbb{F}_3}({\mathcal{M}})$.
\end{lemma}
\begin{proof}
By \eqref{eq:zsigma}, an element $\tau=\sum_{u\in {\mathcal{M}}} \tau_u e_u \in \mathbb{F}_3^{{\mathcal{M}}}$
belongs to $Z_{\mathbb{F}_3}({\mathcal{M}})$ if and only if, for each
$X\in L_2({\mathcal{M}})$, either $3$ divides $\abs{X}$
and $\sum_{u\in X} \tau_u =0$, or else $\tau$
is constant on $X$.
In view of our multiplicity hypothesis, the first possibility
only occurs when $X$ has size $3$, in which case the equation
$\sum_{u\in X} \tau_u =0$ implies that the restriction
$\tau\colon X \to \mathbb{F}_3$ is either constant or bijective.
Hence, the element $\tau$ belongs to $Z'_{\mathbb{F}_3}({\mathcal{M}})$, and we are done.
\end{proof}
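For example, the equality $Z'_{\mathbb{F}_3}= Z_{\mathbb{F}_3}$ for the Ceva matroid, invoked in Remark \ref{rem:zz}, can be confirmed by exhausting all $3^9$ vectors (a sketch, using an illustrative labeling of our own for the twelve flats):

```python
from itertools import product

# Ceva matroid: points 0-2, 3-5, 6-8 form three families of three lines;
# there are twelve 3-point flats (labeling ours, for illustration).
flats = [(0, 1, 2), (3, 4, 5), (6, 7, 8)] + \
        [(i, 3 + j, 6 + (j - i) % 3) for i in range(3) for j in range(3)]

def in_Z(tau):       # (eq:zsigma): every flat has 3 points and p = 3
    return all(sum(tau[u] for u in X) % 3 == 0 for X in flats)

def in_Zprime(tau):  # constant or bijective on every flat
    return all(len({tau[u] for u in X}) in (1, 3) for X in flats)

assert all(in_Z(t) == in_Zprime(t) for t in product(range(3), repeat=9))
print("Z = Z' over F_3 for the Ceva matroid")
```

Indeed, for a $3$-element flat $X$ over $\mathbb{F}_3$, the condition $\sum_{u\in X}\tau_u=0$ holds exactly when the three values are all equal or all distinct, which is the per-flat content of Lemma \ref{lem:lambda}.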
Putting now together Theorem \ref{teo=lambdaintro}\eqref{li1}
with Lemma \ref{lem:lambda} establishes equivalence
\eqref{a2}$\Leftrightarrow$\eqref{a3} from Theorem \ref{thm:main1}
in the Introduction. The remaining combinatorial part of Theorem \ref{thm:main1},
i.e., equivalence \eqref{a1}$\Leftrightarrow$\eqref{a2}, follows from
Lemma \ref{lem:rednet}.
\section{Flat connections and holonomy Lie algebras}
\label{sect:flat}
In this section, we study the space of ${\mathfrak{g}}$-valued flat connections
on the Orlik--Solomon algebra of a simple matroid ${\mathcal{M}}$, and the closely related
holonomy Lie algebra ${\mathfrak{h}}({\mathcal{M}})$. We construct flat connections from non-constant
special cocycles on ${\mathcal{M}}$, and we characterize the key axiom for multinets
on ${\mathcal{M}}$ by using ${\mathfrak{h}}({\mathcal{M}})$.
\subsection{Flat, ${\mathfrak{g}}$-valued connections}
\label{subsec:flat}
We start by reviewing some standard material on flat connections,
following the approach from \cite{DP, DPS, MPPS}.
Let $(A,d)$ be a commutative, differential graded
algebra over $\mathbb{C}$, for short, a \ensuremath{\mathsf{cdga}}.
We will assume that $A$ is connected (i.e., $A^0=\mathbb{C}$)
and of finite $q$-type, for some $q\ge 1$ (i.e., $A^i$ is
finite-dimensional, for all $i\le q$).
The cohomology groups $H^i(A)$ are $\mathbb{C}$-vector spaces,
of finite dimension if $i\le q$.
Since $A$ is connected, we may view $H^1(A)$ as a
linear subspace of $A^1$.
Now let ${\mathfrak{g}}$ be a finite-dimensional Lie algebra over $\mathbb{C}$.
On the graded vector space $A\otimes {\mathfrak{g}}$, we may define
a bracket by $[a\otimes x, b\otimes y]= ab\otimes [x,y],\
\text{for $a,b\in A$ and $x,y\in {\mathfrak{g}}$}$.
This functorial construction produces a differential
graded Lie algebra $A\otimes {\mathfrak{g}}$, with grading inherited
from $A$, and differential $d (a\otimes x) = da \otimes x$.
An element $\omega\in A^1\otimes {\mathfrak{g}}$ is called an {\em infinitesimal,
${\mathfrak{g}}$-valued flat connection}\/ on $(A,d)$ if $\omega$ satisfies the
Maurer--Cartan equation,
\begin{equation}
\label{eq:flat}
d\omega + \tfrac{1}{2} [\omega,\omega] = 0 .
\end{equation}
We will denote by ${\mathcal{F}}(A,{\mathfrak{g}})$ the subset of $A^1\otimes {\mathfrak{g}}$
consisting of all flat connections. This set has a
natural affine structure, and depends functorially
on both $A$ and ${\mathfrak{g}}$. Notice that ${\mathcal{F}}(A,{\mathfrak{g}})$ depends
only on the degree $2$ truncation $A^{\le 2}=A/\bigoplus_{i>2}A^i$
of our \ensuremath{\mathsf{cdga}}.
Consider the algebraic map
$\pi\colon A^1\times {\mathfrak{g}}\to A^1\otimes {\mathfrak{g}}$ given by
$(a,x)\mapsto a\otimes x$. Notice that $\pi$ restricts
to a map $\pi\colon H^1(A)\times {\mathfrak{g}} \to {\mathcal{F}}(A,{\mathfrak{g}})$.
The set ${\mathcal{F}}^{(1)}(A,{\mathfrak{g}}):=\pi(H^1(A)\times {\mathfrak{g}})$ is an irreducible,
Zariski-closed subset of ${\mathcal{F}}(A,{\mathfrak{g}})$, which is equal to either $\{0\}$,
or to the cone on $\mathbb{P}(H^1(A)) \times \mathbb{P}({\mathfrak{g}})$. We call its
complement the {\em regular}\/ part of ${\mathcal{F}}(A,{\mathfrak{g}})$.
\subsection{Holonomy Lie algebra}
\label{subsec:holo}
An alternate view of the parameter space of flat connections
involves only Lie algebras. Let us briefly review this approach,
following the detailed study done in \cite{MPPS}.
Let $A_i=\Hom_{\mathbb{C}} (A^i, \mathbb{C})$ be the dual vector space.
Let $\nabla \colon A_2 \to A_1\wedge A_1$ be the dual
of the multiplication map
$A^1\wedge A^1\to A^2$, and let $d_1\colon A_2\to A_1$ be
the dual of the differential $d^1\colon A^1\to A^2$.
By definition, the {\em holonomy Lie algebra}\/ of $(A,d)$ is the
quotient of the free Lie algebra on the $\mathbb{C}$-vector space
$A_1$ by the ideal generated by the image of $d_1 + \nabla$:
\begin{equation}
\label{eq:holo}
{\mathfrak{h}}(A) = \Lie(A_1) / (\im(d_1 + \nabla)).
\end{equation}
This construction is functorial. Indeed, if $\varphi\colon A\to A'$
is a \ensuremath{\mathsf{cdga}}~map, then the linear map $\varphi_1=(\varphi^1)^*\colon A'_1\to A_1$
extends to a Lie algebra map $\Lie(\varphi_1)\colon \Lie(A'_1)\to \Lie(A_1)$,
which in turn induces a Lie algebra map ${\mathfrak{h}}(\varphi)\colon {\mathfrak{h}}(A')\to {\mathfrak{h}}(A)$.
When $d=0$, the holonomy Lie algebra ${\mathfrak{h}}(A)$ inherits a natural
grading from the free Lie algebra, compatible with the Lie bracket.
Thus, ${\mathfrak{h}}(A)$ is a finitely presented, graded Lie algebra,
with generators in degree $1$, and relations in degree $2$.
In the particular case when $A$ is the cohomology algebra
of a path-connected space $X$ with finite first Betti number,
${\mathfrak{h}}(A)$ coincides with the classical holonomy Lie algebra ${\mathfrak{h}}(X)$
of K.T.~Chen.
Given a finite set $\k=\{c_1,\dots ,c_k\}$, let us define the
{\em reduced}\/ free Lie algebra on $\k$ as
\begin{equation}
\label{eq:holo set}
\bLie(\k)=\Lie(c_1,\dots ,c_k)/\Big(\sum\nolimits_{\alpha=1}^{k} c_\alpha =0 \Big).
\end{equation}
Clearly, $\bLie(\k)$ is a graded Lie algebra, isomorphic to the free Lie
algebra of rank $k-1$.
\begin{example}
\label{ex:holo surf}
Consider the $k$-times punctured sphere, $S=\mathbb{CP}^1\setminus \{\text{$k$ points}\}$.
Letting $\k=\{c_1,\dots,c_k\}$
be the set of homology classes in $H_1(S,\mathbb{C})$ represented by standardly
oriented loops around the punctures, we readily see that ${\mathfrak{h}}(S)=\bLie(\k)$.
\end{example}
As before, let ${\mathfrak{g}}$ be a finite-dimensional Lie algebra.
As noted in \cite{MPPS}, the canonical isomorphism
$\iota\colon A^1\otimes {\mathfrak{g}} \xrightarrow{\,\simeq\,} \Hom_{\mathbb{C}} (A_1,{\mathfrak{g}})$
restricts to a functorial isomorphism
\begin{equation}
\label{eq:hom holo}
\iota\colon {\mathcal{F}}(A,{\mathfrak{g}}) \xrightarrow{\,\simeq\,} \Hom_{\Lie} ({\mathfrak{h}}(A), {\mathfrak{g}}) .
\end{equation}
Under this isomorphism, the subset ${\mathcal{F}}^{(1)}(A,{\mathfrak{g}})$ corresponds
to the set $\Hom^1_{\Lie} ({\mathfrak{h}}(A), {\mathfrak{g}})$ of Lie algebra morphisms
whose image is at most $1$-dimensional.
If $\varphi\colon A\to A'$ is a \ensuremath{\mathsf{cdga}}~map, we will let
$\varphi^{!}\colon \Hom_{\Lie} ({\mathfrak{h}}(A), {\mathfrak{g}}) \to \Hom_{\Lie} ({\mathfrak{h}}(A'), {\mathfrak{g}})$
denote the morphism of algebraic varieties induced by ${\mathfrak{h}}(\varphi)$.
\subsection{The holonomy Lie algebra of a matroid}
\label{subsec:holo matroid}
Let ${\mathcal{M}}$ be a simple matroid, and let $A=A({\mathcal{M}})\otimes \mathbb{C}$ be the
Orlik--Solomon algebra of ${\mathcal{M}}$ with coefficients in $\mathbb{C}$.
As noted before, the $\mathbb{C}$-vector space $A^1$ has basis
$\{e_u\}_{u\in {\mathcal{M}}}$. Let $A_1$ be the dual vector space,
with dual basis $\{a_u\}_{u\in {\mathcal{M}}}$.
By definition, the holonomy Lie algebra of the matroid,
${\mathfrak{h}}({\mathcal{M}}):={\mathfrak{h}}(A)$, is the quotient of the free Lie algebra on $A_1$
by the ideal generated by the image of the dual of the multiplication map,
$A^1\wedge A^1\to A^2$. Using the presentation \eqref{eq:os matroid}
for the algebra $A({\mathcal{M}})$, it is proved in \cite[\S 11]{PS04} that ${\mathfrak{h}}({\mathcal{M}})$
has the following quadratic presentation:
\begin{equation}
\label{eq:holo rel}
{\mathfrak{h}}({\mathcal{M}})= \Lie(a_u,\: u\in {\mathcal{M}})\Big/
\Big( \sum_{v\in X}\, [a_u, a_v ],\:\: X\in L_2({\mathcal{M}}),\ u\in X \Big).
\end{equation}
Now let ${\mathfrak{g}}$ be a finite-dimensional Lie algebra over $\mathbb{C}$.
Once we identify $A^1\otimes {\mathfrak{g}} \cong {\mathfrak{g}}^{{\mathcal{M}}}$, a ${\mathfrak{g}}$-valued $1$-form
$\omega$ may be viewed as a vector with components
$\omega_u \in {\mathfrak{g}}$ indexed by the points $u\in {\mathcal{M}}$.
By \eqref{eq:hom holo}, $\omega\in {\mathcal{F}}(A,{\mathfrak{g}})$ if and only if
\begin{equation}
\label{eq:flat omega}
\sum_{v\in X}\, [\omega_u, \omega_v ]=0,\: \text{for all $X\in L_2({\mathcal{M}})$
and $u\in X$}.
\end{equation}
Let $\spn(\omega)$ be the linear subspace of ${\mathfrak{g}}$ spanned
by the set $\{\omega_u\}_ {u\in {\mathcal{M}}}$. Clearly,
if $\dim \spn(\omega)\le 1$, then $\omega$
is a solution to the system of equations \eqref{eq:flat omega};
the set of such solutions is precisely ${\mathcal{F}}^{(1)}(A,{\mathfrak{g}})$.
We call a solution $\omega$
{\em regular}\/ if $\dim \spn(\omega) \ge 2$.
Noteworthy is the case when ${\mathfrak{g}}=\mathfrak{sl}_2$, a case studied in a more
general context in \cite{MPPS}. In this setting, an $\mathfrak{sl}_2$-valued $1$-form
$\omega=(\omega_u)_{u\in {\mathcal{M}}}$ is a solution to the system of
equations \eqref{eq:flat omega} if and only if, for each $X\in L_2({\mathcal{M}})$,
\begin{equation}
\label{eq:flat sl2}
\text{either}\ \sum_{v\in X} \omega_v=0, \text{ or }
\dim\spn \{\omega_v\}_{v\in X} \le 1.
\end{equation}
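To illustrate, a regular $\mathfrak{sl}_2$-valued flat connection can be assembled from a $3$-net by taking $\omega$ constant on each part, with the three values summing to zero. A short sketch, using an illustrative labeling of the braid matroid (our own choice):

```python
from functools import reduce

def mul(a, b):   # 2x2 integer matrix product
    return [[sum(a[i][t] * b[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def comm(a, b):  # commutator [a, b] = ab - ba
    ab, ba = mul(a, b), mul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

ZERO = [[0, 0], [0, 0]]
e = [[0, 1], [0, 0]]        # trace-zero matrices, i.e., elements of sl_2
f = [[0, 0], [1, 0]]
g = [[0, -1], [-1, 0]]      # chosen so that e + f + g = 0

# Braid matroid on points 0..5 (illustrative labeling), with 3-net
# parts {0,5}, {1,4}, {2,3}; omega is constant on each part.
flats = [(0, 1, 3), (0, 2, 4), (1, 2, 5), (3, 4, 5),
         (0, 5), (1, 4), (2, 3)]
omega = [e, f, g, g, f, e]

# Flatness, as in (eq:flat omega): sum_{v in X} [omega_u, omega_v] = 0.
ok = all(reduce(madd, (comm(omega[u], omega[v]) for v in X)) == ZERO
         for X in flats for u in X)
print(ok)   # True; since e and f are not proportional, omega is regular
```

Each multi-colored flat meets all three parts, so $\sum_{v\in X}\omega_v = e+f+g = 0$ there, while on a mono-colored flat the components of $\omega$ coincide; either way the commutator sums vanish, matching \eqref{eq:flat sl2}.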
\subsection{Holonomy Lie algebra and multinets}
\label{subsec:holo multinet}
We may now characterize the key multinet axiom \eqref{mu3}
from \S\ref{subsec:multinets} in terms of certain Lie algebra
morphisms defined on the holonomy Lie algebra, as follows.
Let ${\mathcal{M}}$ be a matroid, endowed with a partition into non-empty blocks,
${\mathcal{M}}= \coprod_{\alpha \in \k} {\mathcal{M}}_{\alpha}$, and a multiplicity function,
$m\colon {\mathcal{M}} \to \mathbb{N}$. For each flat $X\in L_2({\mathcal{M}})$, define $\supp (X)$
as in \eqref{eq:supp}, and call the flat mono-colored if $\abs{\supp (X)}=1$,
and multi-colored, otherwise.
Write $\k=\{c_1,\dots, c_k\}$, and let $\bLie(\k)$ be the reduced
free Lie algebra from \eqref{eq:holo set}.
To these data, we associate a graded epimorphism of free Lie algebras,
\begin{equation}
\label{eq:phimap}
\xymatrix{\varphi \colon \Lie (a_u, u\in {\mathcal{M}}) \ar@{->>}[r]& \bLie(\k)}
\end{equation}
by sending $a_u$ to $m_u \cdot c_{\alpha}$, for each $u\in {\mathcal{M}}_{\alpha}$.
\begin{prop}
\label{prop=multihol}
Given a matroid partition ${\mathcal{M}}= \coprod_{\alpha \in \k} {\mathcal{M}}_{\alpha}$
and a multiplicity function $m\colon {\mathcal{M}} \to \mathbb{N}$, the following conditions
are equivalent:
\begin{enumerate}
\item \label{lie1}
The map $\varphi$ defined above factors through a graded
Lie algebra epimorphism, $\varphi \colon {\mathfrak{h}}({\mathcal{M}}) \twoheadrightarrow \bLie(\k)$.
\item \label{lie2}
The integer $n_X:=\sum_{u\in {\mathcal{M}}_\alpha \cap X} m_u$
is independent of $\alpha$, for each multi-colored flat $X\in L_2({\mathcal{M}})$.
\end{enumerate}
\end{prop}
\begin{proof}
The morphism $\varphi$ factors through ${\mathfrak{h}}({\mathcal{M}})$ if and only if
equations \eqref{eq:flat omega} are satisfied by $\omega_u= \varphi (a_u)$.
In turn, these equations are equivalent to
\begin{equation}
\label{eq:lie alg eqs}
\Big[ \sum_{\alpha\in \k} \Big(\sum_{u\in X\cap {\mathcal{M}}_{\alpha}}m_u \Big)
c_{\alpha}, c_{\beta} \Big]=0,
\end{equation}
for all $X\in L_2({\mathcal{M}})$ and $\beta \in \supp (X)$. Clearly, equations \eqref{eq:lie alg eqs}
are always satisfied if $X$ is a mono-colored flat.
Now assume condition \eqref{lie1} holds. As is well-known, commuting elements in a
free Lie algebra must be dependent; see for instance \cite{MKS}. It follows that
$\sum_{\alpha\in \k} (\sum_{u\in X\cap {\mathcal{M}}_{\alpha}}m_u) c_{\alpha}$ belongs to
$\mathbb{C} \cdot c_{\beta} + \mathbb{C} \cdot (\sum_{\alpha \in \k} c_{\alpha})$,
for all $\beta \in \supp (X)$. When $X$ is multi-colored, this constraint implies that
$\sum_{u\in X\cap {\mathcal{M}}_{\alpha}}m_u$ is independent of $\alpha$.
Conversely, assume \eqref{lie2} holds. Equations \eqref{eq:lie alg eqs} for
a multi-colored flat $X$ reduce then to $n_X \cdot [\sum_{\alpha \in \k}
c_{\alpha}, c_{\beta}]=0$, and these are satisfied since
$\sum_{\alpha \in \k} c_{\alpha}=0$ in $\bLie(\k)$.
\end{proof}
\subsection{An evaluation map}
\label{subsec:ev}
Let $V$ be a finite-dimensional $\mathbb{C}$-vector space, and
let $\k$ be a finite set with $k\ge 3$ elements. Inside the vector space
$V^{\k}$, consider the linear subspace
\begin{equation}
\label{eq:hyper}
{\pazocal H}^{\k} (V)=\big\{ x=(x_{\alpha}) \in V^{\k} \mid
\sum_{\alpha \in \k} x_{\alpha}=0\big\}.
\end{equation}
Given a family of elements of a vector space, we may speak about
its {\em rank}, that is, the dimension of the vector subspace generated
by that family. Inside ${\pazocal H}^{\k} (V)$, we define the {\em regular part}\/
to be the set
\begin{equation}
\label{eq:hyper-reg}
{\pazocal H}^{\k}_{\reg}(V)=\{ x \in {\pazocal H}^{\k} (V) \mid \rank (x)>1 \}.
\end{equation}
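For instance, if $\dim V\ge 2$, pick linearly independent elements $v,w \in V$
and distinct indices $\alpha_1,\alpha_2,\alpha_3\in \k$, and set
\[
x_{\alpha_1}=v, \quad x_{\alpha_2}=w, \quad x_{\alpha_3}=-v-w, \quad
x_{\alpha}=0 \ \text{otherwise}.
\]
Then $x\in {\pazocal H}^{\k}(V)$ and $\rank (x)=2$; thus, ${\pazocal H}^{\k}_{\reg}(V)$
is non-empty whenever $\dim V\ge 2$ and $k\ge 3$.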
Let us view an element $x \in V^{\k}$ as a map, $x\colon \k \to V$.
Given a matroid ${\mathcal{M}}$, let us denote the induced map, $\k^{{\mathcal{M}}} \to V^{{\mathcal{M}}}$,
by $\ev_{\hdot}(x)$. For a fixed element $\tau \in \k^{{\mathcal{M}}}$, we obtain in
this way a linear ``evaluation'' map
\begin{equation}
\label{eq:ev}
\ev_{\tau} \colon V^{\k} \to V^{{\mathcal{M}}}, \quad \ev_{\tau}(x)_u = x_{\tau (u)},
\ \text{for $u\in {\mathcal{M}}$}.
\end{equation}
We will use this simple construction in the case when $V$ is a Lie algebra ${\mathfrak{g}}$ and
$\tau$ is a non-constant, special $\k$-cocycle on ${\mathcal{M}}$.
\begin{prop}
\label{prop=liftev}
Let ${\mathcal{M}}$ be a matroid and let ${\mathfrak{g}}$ be a finite-dimensional, complex Lie algebra.
For every $\tau \in Z'_{\k}({\mathcal{M}})\setminus B_{\k}({\mathcal{M}})$, the map $\ev_{\tau}$ induces a
linear embedding,
\[
\ev_{\tau}\colon {\pazocal H}^{\k}({\mathfrak{g}}) \hookrightarrow {\mathcal{F}} (A({\mathcal{M}})\otimes \mathbb{C}, {\mathfrak{g}}).
\]
Moreover, $\ev_{\tau}$ is rank-preserving, and so the regular parts are preserved.
\end{prop}
\begin{proof}
We first check that $\ev_{\tau}(x)\in {\mathfrak{g}}^{{\mathcal{M}}}$ satisfies the flatness conditions
\eqref{eq:flat omega}, for $x\in {\pazocal H}^{\k}({\mathfrak{g}})$, where $\omega_u= x_{\tau (u)}$,
for $u\in {\mathcal{M}}$. If $\tau$ is constant on $X\in L_2({\mathcal{M}})$, this is clear. Otherwise,
$\tau \colon X \to \k$ is a bijection, hence the system \eqref{eq:flat omega} becomes
$[\sum_{\alpha \in \k} x_{\alpha}, x_{\tau (u)}]=0$, for all $u\in X$, and
we are done, since $x\in {\pazocal H}^{\k}({\mathfrak{g}})$.
We also know that $\tau\colon {\mathcal{M}} \to \k$ is surjective, since $\tau \not\in B_{\k}({\mathcal{M}})$.
This implies that $\rank (\ev_{\tau}(x))= \rank (x)$ for all $x\in {\mathfrak{g}}^{\k}$.
In particular, $\ev_{\tau}$ is injective.
\end{proof}
In the setup from Theorem \ref{teo=lambdaintro}\eqref{li2}, the above result may be
interpreted as an explicit way of lifting information on modular resonance to $\mathbb{C}$,
via flat connections.
\section{Complex resonance varieties and pencils}
\label{sect:res vars}
We now narrow our focus to realizable matroids, and recall the description
of the (degree $1$, depth $1$) complex resonance variety of an
arrangement ${\pazocal{A}}$ in terms of multinets on subarrangements of ${\pazocal{A}}$.
As an application of our techniques, we prove
Theorem \ref{thm:essintro} and Theorem \ref{thm:main1}\eqref{a7}
from the Introduction.
\subsection{Resonance varieties of arrangements}
\label{subsec:res arr}
Let ${\pazocal{A}}$ be a hyperplane arrangement in $\mathbb{C}^{\ell}$, and let
$A=H^*(M({\pazocal{A}}),\mathbb{C})$ be its Orlik--Solomon algebra over $\mathbb{C}$.
The (first) resonance variety of the arrangement,
${\mathcal R}_1({\pazocal{A}}):={\mathcal R}_1(A)$, is a closed algebraic subset
of the affine space $H^1(M({\pazocal{A}}),\mathbb{C})=\mathbb{C}^{{\pazocal{A}}}$.
Since the slicing operation described in \S\ref{subsec:arrs}
does not change ${\mathcal R}_1({\pazocal{A}})$, we may assume without loss
of generality that $\ell=3$.
The basic structure of the (complex) resonance varieties of
arrangements is explained in the following theorem, which
summarizes work of Cohen--Suciu \cite{CS99} and
Libgober--Yuzvinsky \cite{LY}. (We refer to \cite{DPS}
for a more general context where such a statement holds.)
\begin{theorem}
\label{thm:tcone}
All irreducible components
of the resonance variety ${\mathcal R}_1({\pazocal{A}})$ are linear subspaces,
intersecting pairwise only at $0$. Moreover, the positive-dimensional
components have dimension at least two, and the cup-product map
$A^1\wedge A^1 \to A^2$ vanishes identically on each such component.
\end{theorem}
We will also need a basic result from Arapura
theory \cite{Ar} (see also \cite{DPS}), a result which adds
geometric meaning to the aforementioned properties of
${\mathcal R}_1({\pazocal{A}})$. Let $S$ denote $\mathbb{CP}^1$ with at least $3$ points removed.
A map $f\colon M({\pazocal{A}}) \to S$ is said to be {\em admissible}\/
if $f$ is a regular, non-constant map with connected generic fiber.
\begin{theorem}
\label{thm:arapura}
The correspondence $f\leadsto f^*(H^1(S,\mathbb{C}))$ gives a bijection between
the set of admissible maps (up to reparametrization at the target) and
the set of positive-dimensional components of ${\mathcal R}_1({\pazocal{A}})$.
\end{theorem}
\subsection{Pencils and multinets}
\label{subsec:pen multi}
Most important for our purposes are the {\em essential}\/ components
of ${\mathcal R}_1({\pazocal{A}})$, i.e., those irreducible components which do not lie in
any coordinate subspace of $\mathbb{C}^{{\pazocal{A}}}$. As shown by Falk and Yuzvinsky
in \cite{FY}, such components can be described in terms of pencils
arising from multinets on $L_{\le 2}({\pazocal{A}})$, as follows.
Suppose we have a $k$-multinet ${\pazocal{N}}$ on ${\pazocal{A}}$,
with parts ${\pazocal{A}}_1,\dots, {\pazocal{A}}_k$ and multiplicity vector $m$.
Let $Q({\pazocal{A}})=\prod_{H\in {\pazocal{A}}} f_H$ be a defining polynomial
for ${\pazocal{A}}$, and set
\begin{equation}
\label{eq:qalpha}
Q_{\alpha}=\prod_{H\in{\pazocal{A}}_{\alpha}}f_H^{m_H}.
\end{equation}
The polynomials $Q_1,\dots,Q_k$ all have the same degree,
$d=\sum_{H\in {\pazocal{A}}_{\alpha}} m_H$, and define a pencil of degree~$d$ curves,
parametrized by $\mathbb{CP}^1$ and having $k$ completely reducible fibers that correspond
to ${\pazocal{A}}_1,\dots, {\pazocal{A}}_k$. For each $\alpha>2$, we may write
$Q_{\alpha}$ as a linear combination $a_{\alpha} Q_1+b_{\alpha}Q_2$.
In this way, we obtain a $k$-element subset
\begin{equation}
\label{eq:d set}
D=\set{(0:-1), (1:0), (b_3:-a_3), \dots , (b_k:-a_k)} \subset \mathbb{CP}^1.
\end{equation}
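The points of $D$ are simply the images of the completely reducible fibers under
the pencil map $x\mapsto (Q_1(x):Q_2(x))$: for $\alpha>2$,
\[
Q_{\alpha}(x)= a_{\alpha}Q_1(x)+b_{\alpha}Q_2(x)=0
\iff
(Q_1(x):Q_2(x))=(b_{\alpha}:-a_{\alpha}),
\]
while the fibers $Q_1=0$ and $Q_2=0$ correspond to the points $(0:-1)$ and
$(1:0)$, respectively.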
Consider now the arrangement ${\pazocal{A}}'$ in $\mathbb{C}^2$ defined by the
polynomial $Q({\pazocal{A}}')=g_1\cdots g_k$, where
$g_{\alpha}(z_1,z_2)= a_{\alpha} z_1+b_{\alpha}z_2$. Projectivizing
gives a canonical projection $\pi\colon M({\pazocal{A}}') \to S:=\mathbb{CP}^1\setminus D$.
Setting $\psi(x)=(Q_1(x), Q_2(x))$ gives a regular map $\psi\colon M({\pazocal{A}}) \to M({\pazocal{A}}')$.
It is now readily verified that the regular map
\begin{equation}
\label{eq:pen}
f_{{\pazocal{N}}}= \pi \circ \psi\colon M({\pazocal{A}}) \to S
\end{equation}
is admissible. Hence, the linear subspace
$f_{{\pazocal{N}}}^*(H^1(S,\mathbb{C}))\subset H^1(M({\pazocal{A}}),\mathbb{C})$
is a component of ${\mathcal R}_1({\pazocal{A}})$. Moreover, this subspace has
dimension $k-1$, and does not lie in any coordinate subspace.
Conversely, as shown in \cite[Theorem 2.5]{FY}, every essential
component of ${\mathcal R}_1({\pazocal{A}})$ can be realized as $f_{{\pazocal{N}}}^*(H^1(S,\mathbb{C}))$,
for some multinet ${\pazocal{N}}$ on ${\pazocal{A}}$.
\subsection{An induced homomorphism}
\label{subsec:induced}
To describe the above subspace explicitly, and for further purposes,
we need to compute the homomorphism induced in homology
by the map $f_{{\pazocal{N}}}$. To that end, let
$\gamma_1,\dots ,\gamma_k$ be compatibly oriented,
simple closed curves on $S=\mathbb{CP}^1\setminus D$, going around
the points of $D$, so that $H_1(S,\mathbb{Z})$ is generated by the homology
classes $c_{\alpha}=[\gamma_{\alpha}]$, subject to the single relation
$\sum_{\alpha=1}^k c_{\alpha}=0$.
Recall that the cohomology ring $H^{\bullet}(M({\pazocal{A}}),\mathbb{Z})$
is generated by the degree $1$ classes $\{e_H\}_{H\in {\pazocal{A}}}$
dual to the meridians about the hyperplanes of ${\pazocal{A}}$.
We shall abuse notation, and denote by the same symbol
the image of $e_H$ in $H^1(M({\pazocal{A}}),\mathbb{C})$. As is well-known,
$e_H$ is the de~Rham cohomology class of the logarithmic $1$-form
$\frac{1}{2\pi \mathrm{i}}\, d\log f_H$ on $M({\pazocal{A}})$.
For each index $\alpha\in [k]$, set
\begin{equation}
\label{eq=defu}
u_{\alpha} :=\sum_{H\in {\pazocal{A}}_{\alpha}} m_H e_H \in H^1(M({\pazocal{A}}),\mathbb{C}) .
\end{equation}
\begin{lemma}
\label{lem:pen h1}
The induced homomorphism $(f_{{\pazocal{N}}})_* \colon H_1(M({\pazocal{A}}),\mathbb{Z}) \to H_1(S,\mathbb{Z})$
is given by
\begin{equation*}
\label{eq:multi hom}
(f_{{\pazocal{N}}})_*(a_H) = m_H c_{\alpha}, \quad\text{for $H\in {\pazocal{A}}_{\alpha}$}.
\end{equation*}
In other words, $(f_{{\pazocal{N}}})_*$ is the $\mathbb{Z}$-form of the homomorphism $\varphi$
associated to ${\pazocal{N}}$ that appears in Proposition \ref{prop=multihol}.
\end{lemma}
\begin{proof}
Given the construction of $f_{{\pazocal{N}}}$ from \eqref{eq:pen},
it is plainly enough to check that the dual homomorphism,
$\psi^* \colon H^1(M({\pazocal{A}}'),\mathbb{C}) \to H^1(M({\pazocal{A}}),\mathbb{C})$, sends the de~Rham
cohomology class of $d\log g_{\alpha}$ to $u_{\alpha}$. An easy
calculation shows that
\begin{equation}
\label{eq:dlog}
\psi^*(d\log g_{\alpha})=d\log Q_{\alpha}=
\sum_{H\in {\pazocal{A}}_{\alpha}} m_H d\log f_H,
\end{equation}
and the claim follows.
\end{proof}
Taking the transpose of $(f_{{\pazocal{N}}})_*$ and using linear algebra, we obtain
the following immediate corollary.
\begin{corollary}[\cite{FY}]
\label{cor:res comp}
Let ${\pazocal{N}}$ be a $k$-multinet on an arrangement ${\pazocal{A}}$, and let
$f_{{\pazocal{N}}}\colon M({\pazocal{A}})\to S=\mathbb{CP}^1 \setminus \{\text{$k$ points}\}$
be the associated admissible map. Then the pull-back
$f_{{\pazocal{N}}}^*(H^1(S,\mathbb{C}))$ is the linear subspace of
$H^1(M({\pazocal{A}}),\mathbb{C})$ spanned by the vectors $u_2-u_1,\dots , u_k-u_1$,
where $u_{\alpha} =\sum_{H\in {\pazocal{A}}_{\alpha}} m_H e_H$.
\end{corollary}
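The linear algebra behind Corollary \ref{cor:res comp} is worth recording.
Since $c_1=-\sum_{\alpha\ge 2} c_{\alpha}$ in $H_1(S,\mathbb{Z})$, the classes
$c_2,\dots, c_k$ form a basis; let $\phi_2,\dots,\phi_k$ be the dual basis of
$H^1(S,\mathbb{C})$, so that $\phi_{\beta}(c_1)=-1$. Lemma \ref{lem:pen h1}
then gives
\[
f_{{\pazocal{N}}}^*(\phi_{\beta})=\sum_{H\in {\pazocal{A}}}
\phi_{\beta}\big( (f_{{\pazocal{N}}})_* (a_H)\big)\, e_H
= u_{\beta}-u_1, \quad \text{for $2\le \beta \le k$},
\]
and these vectors span $f_{{\pazocal{N}}}^*(H^1(S,\mathbb{C}))$.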
\subsection{Mapping multinets to resonance components}
\label{subsec:multinet map}
Let ${\rm Ess}({\pazocal{A}})$ be the set of essential components of the resonance
variety ${\mathcal R}_1({\pazocal{A}})$. The preceding discussion allows us to define a map
\begin{equation}
\label{eq:multi map}
\Psi \colon \{ \text{multinets on ${\pazocal{A}}$} \} \to {\rm Ess}({\pazocal{A}}), \quad
{\pazocal{N}}\mapsto f_{{\pazocal{N}}}^*(H^1(S,\mathbb{C})).
\end{equation}
This map sends $k$-multinets to essential, $(k-1)$-dimensional
components of ${\mathcal R}_1({\pazocal{A}})$.
By the above-mentioned result of Falk and Yuzvinsky, the map
$\Psi$ is surjective. The next lemma describes the fibers of this map.
\begin{lemma}
\label{lem=fibres}
The surjective map $\Psi$ defined in \eqref{eq:multi map} is constant
on the orbits of the natural $\Sigma_k$-action on $k$-multinets.
Moreover, the fibers of $\Psi$ coincide with those orbits.
\end{lemma}
\begin{proof}
The first claim is
an immediate consequence of the description of the action of $\Sigma_k$
on $k$-multinets, given in \S\ref{subsec:multinets}, coupled with the
construction of $\Psi ({\pazocal{N}})$.
Suppose now that ${\pazocal{N}}$ is a $k$-multinet, and $\Psi ({\pazocal{N}})=\Psi ({\pazocal{N}}')$,
for some multinet ${\pazocal{N}}'$. As noted before, $\dim \Psi ({\pazocal{N}})=k-1$;
hence, ${\pazocal{N}}'$ is also a $k$-multinet. Let $f_{{\pazocal{N}}}$ and $f_{{\pazocal{N}}'}$
be the corresponding admissible maps from $M({\pazocal{A}})$ to
$S=\mathbb{CP}^1 \setminus \{\text{$k$ points}\}$. Since
$f_{{\pazocal{N}}}^*(H^1(S,\mathbb{C}))=f_{{\pazocal{N}}'}^*(H^1(S,\mathbb{C}))$, Arapura theory
implies that $f_{{\pazocal{N}}}$ and $f_{{\pazocal{N}}'}$ differ by an automorphism
of the curve $S$.
In turn, this automorphism extends to an automorphism of $\mathbb{CP}^1$,
inducing a permutation
$g\in \Sigma_k$ of the $k$ points. Hence, the automorphism induced on
$H_1(S, \mathbb{Z})$ sends $c_{\alpha}$ to $c_{g\alpha}$, for each $\alpha \in [k]$.
Using Lemma \ref{lem:pen h1}, we conclude that ${\pazocal{N}}$ and
${\pazocal{N}}'$ are conjugate under the action of $g$.
\end{proof}
More generally, every positive-dimensional component
$P$ of ${\mathcal R}_1({\pazocal{A}})$ may be described in terms of multinets.
To see how this works, denote by $\proj_H \colon \mathbb{C}^{{\pazocal{A}}} \to \mathbb{C}$
the coordinate projections, and consider the subarrangement ${\pazocal{B}}\subseteq {\pazocal{A}}$
consisting of those hyperplanes $H$ for which $\proj_H \colon P \to \mathbb{C}$
is non-zero. It is easy to check that $P\subseteq \mathbb{C}^{{\pazocal{B}}}$ belongs to ${\rm Ess}({\pazocal{B}})$.
Hence, there is a multinet ${\pazocal{N}}$ on ${\pazocal{B}}$ such that $P=\Psi ({\pazocal{N}})$,
and this multinet is unique up to the natural permutation action
described in Lemma \ref{lem=fibres}.
Conversely, suppose there is a subarrangement
${\pazocal{B}}\subseteq {\pazocal{A}}$ supporting a multinet ${\pazocal{N}}$.
The inclusion $M({\pazocal{A}}) \hookrightarrow M({\pazocal{B}})$ induces a
monomorphism $H^1(M({\pazocal{B}}),\mathbb{C}) \hookrightarrow H^1(M({\pazocal{A}}),\mathbb{C})$,
which restricts to an embedding ${\mathcal R}_1({\pazocal{B}}) \hookrightarrow {\mathcal R}_1({\pazocal{A}})$.
The linear space $\Psi ({\pazocal{N}})$, then,
lies inside ${\mathcal R}_1({\pazocal{B}})$, and thus, inside ${\mathcal R}_1({\pazocal{A}})$.
\subsection{Counting essential components}
\label{ssec=55}
Recall that ${\rm Ess}_k({\pazocal{A}})$ denotes the set of essential
components of ${\mathcal R}_1({\pazocal{A}})$ arising from $k$-nets on ${\pazocal{A}}$.
As mentioned previously, this set is empty for $k\ge 5$.
\begin{proof}[Proof of Theorem \ref{thm:essintro}]
Let $k=3$ or $4$, and let $\k$ be the corresponding Galois field, $\mathbb{F}_k$.
By Lemma \ref{lem=fibres} and Theorem \ref{teo=lambdaintro}, we have that
\begin{equation}
\label{eq:eka}
\abs{{\rm Ess}_k({\pazocal{A}})}= \frac{1}{k!} \abs{Z'_{\k} ({\pazocal{A}}) \setminus B_{\k} ({\pazocal{A}})} \le
\frac{1}{k!} \abs{Z_{\k} ({\pazocal{A}}) \setminus B_{\k} ({\pazocal{A}})} .
\end{equation}
Clearly,
\begin{equation}
\label{eq:zka}
\abs{Z_{\k} ({\pazocal{A}}) \setminus B_{\k} ({\pazocal{A}})}= \abs{\k} \cdot
\abs{Z_{\k} ({\pazocal{A}})/ B_{\k} ({\pazocal{A}}) \setminus \{ 0\} }
= k\cdot (k^{\beta_{\k}({\pazocal{A}})}-1) .
\end{equation}
Inequality \eqref{eq=essboundintro} now follows at once.
Next, assume that both ${\rm Ess}_3({\pazocal{A}})$
and ${\rm Ess}_4({\pazocal{A}})$ are non-empty.
From Proposition \ref{prop:betapknet} we then infer that
$\beta_2 ({\pazocal{A}})=0$ and $\beta_2 ({\pazocal{A}}) \ge 2$, a contradiction.
This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main1}\eqref{a7}]
Suppose $L_2({\pazocal{A}})$ has no flats of multiplicity properly divisible by $3$.
By Lemma \ref{lem:lambda}, $Z'_{\mathbb{F}_3} ({\pazocal{A}}) = Z_{\mathbb{F}_3} ({\pazocal{A}})$.
The above proof then shows that $\abs{{\rm Ess}_3({\pazocal{A}})}= (3^{\beta_{3}({\pazocal{A}})}-1)/2$,
and we are done.
\end{proof}
\section{Evaluation maps and multinets}
\label{sect:flat res net}
In this section, we extend our construction of evaluation maps, from nets to
multinets. For realizable matroids, we exploit evaluation maps in two directions.
First, we construct the inverse of the bijection $\lambda_{\k}$ from Lemma
\ref{lema=lambdabij} by using the variety of $\sl_2 (\mathbb{C})$-valued flat
connections on the Orlik--Solomon algebra. Second, we find a combinatorial
condition insuring that this variety can be reconstructed explicitly from information
on modular resonance.
\subsection{Flat connections coming from multinets}
\label{ssec=60}
We first extend the construction from Proposition \ref{prop=liftev} to
a broader context. Let $\k$ be a finite set with $k\ge 3$ elements
and let ${\mathcal{M}}$ be a simple matroid. For a $k$-multinet ${\pazocal{N}}$ on ${\mathcal{M}}$,
denote by $\varphi_{{\pazocal{N}}} \colon {\mathfrak{h}} ({\mathcal{M}}) \twoheadrightarrow \bLie(\k)$
the epimorphism of graded Lie algebras constructed in
Proposition \ref{prop=multihol}.
Let ${\mathfrak{g}}$ be a finite-dimensional complex Lie algebra, and let
\begin{equation}
\label{eq:shriek}
\xymatrix{\varphi_{{\pazocal{N}}}^! \colon \Hom_{\Lie}(\bLie (\k), {\mathfrak{g}}) \ar[r]&
\Hom_{\Lie}({\mathfrak{h}}({\mathcal{M}}), {\mathfrak{g}})}
\end{equation}
be the induced map on $\Hom$-sets. Using \eqref{eq:holo set}
and \eqref{eq:hom holo}, we may identify $\Hom_{\Lie}(\bLie (\k), {\mathfrak{g}})$
with ${\pazocal H}^{\k}({\mathfrak{g}})$ and $ \Hom_{\Lie}({\mathfrak{h}}({\mathcal{M}}), {\mathfrak{g}})$ with ${\mathcal{F}} (A({\mathcal{M}})\otimes \mathbb{C}, {\mathfrak{g}})$.
Let
\begin{equation}
\label{eq:bigev}
\xymatrix{\Ev_{{\pazocal{N}}} \colon {\pazocal H}^{\k}({\mathfrak{g}}) \ar[r]&{\mathcal{F}} (A({\mathcal{M}})\otimes \mathbb{C}, {\mathfrak{g}})}
\end{equation}
be the map corresponding to $\varphi_{{\pazocal{N}}}^!$ under these identifications.
Finally, for each $\alpha\in \k$, let
$\proj_{\alpha} \colon {\pazocal H}^{\k}({\mathfrak{g}}) \to {\mathfrak{g}}$ be the restriction to ${\pazocal H}^{\k}({\mathfrak{g}})$ of the
$\alpha$-coordinate projection ${\mathfrak{g}}^{\k}\to {\mathfrak{g}}$.
\begin{prop}
\label{prop=multiflat}
With notation as above, the following hold.
\begin{enumerate}
\item \label{pmf1}
The evaluation map $\Ev_{{\pazocal{N}}}$ is a rank-preserving, linear embedding.
\item \label{pmf2}
For any $u\in {\mathcal{M}}$, there is $\alpha\in \k$ such that the restriction of
$\proj_u \otimes \id_{{\mathfrak{g}}} \colon A^1 ({\mathcal{M}}) \otimes {\mathfrak{g}} \to {\mathfrak{g}}$ to ${\pazocal H}^{\k}({\mathfrak{g}})$
via $\Ev_{{\pazocal{N}}}$ belongs to $\mathbb{C}^* \cdot \proj_{\alpha}$.
\item \label{pmf3}
If ${\pazocal{N}}$ is a $k$-net, then $\Ev_{{\pazocal{N}}}= \ev_{\tau}$, where $\tau= \lambda_{\k} ({\pazocal{N}})$.
\end{enumerate}
\end{prop}
\begin{proof}
\eqref{pmf1} By construction, the map $\Ev_{{\pazocal{N}}}$ is linear.
Since $\varphi_{{\pazocal{N}}}$ is surjective, the map $\varphi_{{\pazocal{N}}}^!$ is rank-preserving;
hence, $\Ev_{{\pazocal{N}}}$ is also rank-preserving, and thus, injective.
\eqref{pmf2} Using the underlying partition of ${\pazocal{N}}$, we find that $u\in {\mathcal{M}}_{\alpha}$,
for a unique $\alpha \in \k$. By construction of $\varphi_{{\pazocal{N}}}$, we have that
$\Ev_{{\pazocal{N}}}^* (\proj_u \otimes \id_{{\mathfrak{g}}})= m_u \cdot \proj_{\alpha}$.
\eqref{pmf3} Let ${\pazocal{N}}$ be a $k$-net. For $x\in {\pazocal H}^{\k}({\mathfrak{g}})$ and $u\in {\mathcal{M}}_{\alpha}$,
we have that $\Ev_{{\pazocal{N}}}(x)_u= m_u x_{\alpha}$, by construction. Moreover, $m_u=1$,
since ${\pazocal{N}}$ is a reduced multinet. On the other hand,
$\ev_{\tau}(x)_u= x_{\tau (u)}$, by \eqref{eq:ev}, and $\tau (u)=\alpha$,
by \eqref{eq=lambda}. This completes the proof.
\end{proof}
\subsection{Flat connections and complex resonance varieties}
\label{subsec:flat res}
In the case of realizable matroids, a crucial ingredient in our approach
is a general result relating resonance and flat connections, based
on the detailed study done in \cite{MPPS}.
To start with, let $A$ be a graded, graded-commutative algebra over $\mathbb{C}$.
Recall that we assume $A$ is connected and that $A^1$ is finite-dimensional.
Given a linear subspace $P\subset A^1$, define a connected
sub-algebra $A_{P} \subset A^{\le 2}$ by setting $A^1_{P}=P$ and
$A^2_{P}=A^2$, and then restricting the multiplication map accordingly.
Now let ${\mathfrak{g}}$ be a complex Lie algebra. The following equality is then easily verified:
\begin{equation}
\label{eq:fap}
{\mathcal{F}}(A_{P},{\mathfrak{g}}) = {\mathcal{F}}(A, {\mathfrak{g}})\cap (P\otimes {\mathfrak{g}}).
\end{equation}
Thus, if ${\mathfrak{g}}$ is finite-dimensional, then ${\mathcal{F}}(A_P, {\mathfrak{g}})$ is a Zariski-closed
subset of ${\mathcal{F}}(A, {\mathfrak{g}})$.
\begin{theorem}
\label{lem:disjoint}
Suppose ${\mathcal R}_1(A)= \bigcup_{P\in {\mathcal{P}}} P$, where ${\mathcal{P}}$ is a finite
collection of linear subspaces of $A^1$, intersecting pairwise only
at $0$. Then, for any finite-dimensional Lie algebra ${\mathfrak{g}}$, the
following hold:
\begin{enumerate}
\item \label{r2}
${\mathcal{F}}(A_{P},{\mathfrak{g}})\cap {\mathcal{F}}(A_{P'},{\mathfrak{g}})=\{0\}$, for all distinct subspaces $P,P'\in {\mathcal{P}}$.
\\[-9pt]
\item \label{r1}
${\mathcal{F}}(A,{\mathfrak{g}}) \supseteq {\mathcal{F}}^{(1)}(A,{\mathfrak{g}}) \cup \bigcup_{P\in {\mathcal{P}}} {\mathcal{F}}(A_{P},{\mathfrak{g}})$.
\\[-9pt]
\item \label{r3}
If ${\mathfrak{g}}=\sl_2$, then the above inclusion holds as an equality.
\item \label{r4}
If ${\mathfrak{g}}= \sl_2$ and all subspaces from ${\mathcal{P}}$ are isotropic, then
${\mathcal{F}}(A_{P},{\mathfrak{g}}) = P\otimes {\mathfrak{g}}$, for every $P \in {\mathcal{P}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Claim \eqref{r2} follows from our transversality hypothesis, while
claim \eqref{r1} is obvious, by the naturality property of flat connections.
Claim \eqref{r3} is proved in \cite[Proposition 5.3]{MPPS}.
Here, the assumption that ${\mathfrak{g}}=\sl_2$ is crucial. In the proof of
Proposition 5.3 from \cite{MPPS} it is also shown that $P\otimes {\mathfrak{g}} \subseteq {\mathcal{F}}(A,{\mathfrak{g}})$,
when $P$ is isotropic. Claim \eqref{r4} follows then from \eqref{eq:fap}.
\end{proof}
\subsection{From evaluation maps to multinets}
\label{subsec:pencil}
We now return to the situation when $A=H^*(M({\pazocal{A}}),\mathbb{C})$ is
the Orlik--Solomon algebra of an arrangement ${\pazocal{A}}$.
In view of Theorem \ref{thm:tcone},
all the hypotheses of Theorem \ref{lem:disjoint} are satisfied
in this case.
Guided by Proposition \ref{prop=multiflat}, we take ${\mathfrak{g}}=\sl_2 (\mathbb{C})$ and
define the evaluation space ${\mathcal{E}}_k ({\pazocal{A}})$ to be the set of all maps,
$e\colon {\pazocal H}^{\k}({\mathfrak{g}}) \to {\mathcal{F}} (A({\pazocal{A}})\otimes \mathbb{C}, {\mathfrak{g}})$, satisfying properties
\eqref{pmf1} and \eqref{pmf2} from that proposition, and having the
property that $\im (e)$ is an irreducible component of ${\mathcal{F}} (A({\pazocal{A}})\otimes \mathbb{C}, {\mathfrak{g}})$.
\begin{lemma}
\label{lem:submat}
Let $A$ be the complex Orlik--Solomon algebra of an arrangement ${\pazocal{A}}$,
and let ${\mathcal R}_1({\pazocal{A}}) = \bigcup_{P\in {\mathcal{P}}} P$ be the irreducible decomposition of its
resonance variety. The following then hold.
\begin{enumerate}
\item \label{731}
The irreducible decomposition of the variety ${\mathcal{F}}(A,\sl_2)$
is given by
\[
{\mathcal{F}}(A,\sl_2) = {\mathcal{F}}^{(1)}(A,\sl_2) \cup \bigcup_{P\in {\mathcal{P}}} {\mathcal{F}}(A_{P},\sl_2).
\]
\item \label{732}
For every $k$-multinet ${\pazocal{N}}$ on ${\pazocal{A}}$,
\[
\im (\Ev_{{\pazocal{N}}})= P\otimes \sl_2= {\mathcal{F}} (A_P, \sl_2),
\]
where $P=\Psi ({\pazocal{N}})$.
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{731} By Theorem \ref{thm:arapura}, every non-zero
subspace $P\in {\mathcal{P}}$ is of the form $P=f^*(H^1(S, \mathbb{C}))$, for some admissible map
$f \colon M({\pazocal{A}}) \to S:= \mathbb{CP}^1 \setminus \{\text{$k$ points}\}$. Theorem 7.4 from \cite{MPPS}
gives the irreducible decomposition of ${\mathcal{F}} (A, \sl_2)$, with ${\mathcal{F}} (A_P, \sl_2)$
replaced by $ f^! ({\mathcal{F}} (H^{\hdot}(S, \mathbb{C}), \sl_2))$.
But ${\mathcal{F}} (H^{\hdot}(S, \mathbb{C}), \sl_2)= H^{1}(S, \mathbb{C})\otimes \sl_2$, since $H^{2}(S, \mathbb{C})=0$.
Hence, by Theorem \ref{lem:disjoint}\eqref{r4},
\begin{equation}
\label{eq:fhdot}
f^! ({\mathcal{F}} (H^{\hdot}(S, \mathbb{C}), \sl_2))= P\otimes \sl_2= {\mathcal{F}} (A_P, \sl_2),
\end{equation}
and this proves our claim.
\eqref{732} Let $f_{{\pazocal{N}}}\colon M({\pazocal{A}}) \to S$
be the admissible map associated to the multinet ${\pazocal{N}}$. By Lemma \ref{lem:pen h1},
the map $\varphi^1_{{\pazocal{N}}} \colon {\mathfrak{h}}^1 ({\pazocal{A}}) \to \bLie^1(\k)$ may be identified with
$(f_{{\pazocal{N}}})_* \otimes \mathbb{C}$. It follows that $\im (\Ev_{{\pazocal{N}}})= \Psi ({\pazocal{N}})\otimes \sl_2$,
by the construction \eqref{eq:multi map} of $\Psi$.
In view of \eqref{eq:fhdot}, we are done.
\end{proof}
In view of the above lemma,
the construction from Proposition \ref{prop=multiflat}
gives a correspondence,
\begin{equation}
\label{eq:bigev corr}
\xymatrix{\Ev \colon \{ \text{$k$-multinets on ${\pazocal{A}}$}\} \ar[r]& {\mathcal{E}}_k({\pazocal{A}})} .
\end{equation}
We now define another function,
\begin{equation}
\label{eq:nmap}
\xymatrix{\Net \colon {\mathcal{E}}_k({\pazocal{A}}) \ar[r]& \{ \text{$k$-multinets on ${\pazocal{A}}$}\}/ \Sigma_k} ,
\end{equation}
as follows. Given a map $e\colon {\pazocal H}^{\k}(\sl_2) \to {\mathcal{F}} (A({\pazocal{A}})\otimes \mathbb{C}, \sl_2)$
belonging to ${\mathcal{E}}_k({\pazocal{A}})$, the variety $\im (e)$ cannot be the irreducible component
${\mathcal{F}}^{(1)} (A, \sl_2)$. Indeed, ${\pazocal H}^{\k}(\sl_2)$ contains a regular element $x$,
since $\dim \sl_2 \ge 2$. Since $e$ is a rank-preserving linear map,
we must have $\rank (e(x))\ge 2$, and therefore $e(x)$ is a regular
flat connection. Hence, by Lemma \ref{lem:submat}\eqref{731}
and Theorem \ref{lem:disjoint}\eqref{r4}, $\im (e)= P\otimes \sl_2$,
for a unique $0\ne P\in {\mathcal{P}}$.
We claim that $P$ must be an essential component of ${\mathcal R}_1({\pazocal{A}})$.
Otherwise, we could find a hyperplane $H\in {\pazocal{A}}$ such that
$\proj_H \colon A^1 \to \mathbb{C}$ vanishes on $P$. Recall, however, that $e$
satisfies property \eqref{pmf2} from Proposition \ref{prop=multiflat}; hence,
$\proj_{\alpha}$ must vanish on $ {\pazocal H}^{\k}(\sl_2)$, for some $\alpha \in \k$,
a contradiction. By Lemma \ref{lem=fibres}, then, $P=\Psi ({\pazocal{N}})$, for some $k'$-multinet
${\pazocal{N}}$ on ${\pazocal{A}}$, uniquely determined up to the natural $\Sigma_{k'}$-action.
Moreover,
\[
3(k'-1)= \dim P\otimes \sl_2= \dim {\pazocal H}^{\k}(\sl_2)= 3(k-1),
\]
and thus $k'=k$. We then define $\Net(e)$ to be the class modulo $\Sigma_k$ of the
$k$-multinet ${\pazocal{N}}$.
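In the dimension count above, note that $\dim P=k'-1$, since $\Psi$ sends
$k'$-multinets to $(k'-1)$-dimensional components, while the summation map
$\sigma\colon \sl_2^{\k} \to \sl_2$ is a surjective linear map, whence
\[
\dim {\pazocal H}^{\k}(\sl_2)=\dim \ker \sigma = (k-1)\dim \sl_2 = 3(k-1) .
\]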
\begin{corollary}
\label{coro=lambdasurj}
The composition $\Net\circ \Ev$ is the canonical projection that associates to a
$k$-multinet ${\pazocal{N}}$ on ${\pazocal{A}}$ its $\Sigma_k$-orbit.
\end{corollary}
\begin{proof}
By Lemma \ref{lem:submat}, we have that $\im (\Ev_{{\pazocal{N}}})= \Psi ({\pazocal{N}})\otimes \sl_2$.
By construction, $\Net(\Ev_{{\pazocal{N}}})$ is the $\Sigma_k$-orbit of ${\pazocal{N}}$.
\end{proof}
\begin{remark}
\label{rem=lambdainv}
Returning to the bijection from Lemma \ref{lema=lambdabij},
let us take a special $\k$-cocycle $\tau \in Z'_{\k}({\pazocal{A}})\setminus B_{\k}({\pazocal{A}})$,
and set ${\pazocal{N}}= \lambda_{\k}^{-1} (\tau)$. By Proposition \ref{prop=multiflat}\eqref{pmf3}
and Corollary \ref{coro=lambdasurj}, $\ev_{\tau}\in {\mathcal{E}}_k ({\pazocal{A}})$ and $\Net(\ev_{\tau})$
is the $\Sigma_k$-orbit of the $k$-net ${\pazocal{N}}$.
This shows that, for realizable matroids, the inverse of the modular construction
$\lambda_{\k}$ may be described in terms of $\sl_2 (\mathbb{C})$-valued flat
connections on the Orlik--Solomon algebra.
\end{remark}
\subsection{Flat connections from special cocycles}
\label{subsect:flat 3net}
Let ${\pazocal{B}} \subseteq {\pazocal{A}}$ be a subarrangement. Denote the associated monomorphism
between complex Orlik--Solomon algebras by $\psi \colon B\hookrightarrow A$. This map
in turn induces a rank-preserving inclusion,
$\psi\otimes \id_{{\mathfrak{g}}} \colon {\mathcal{F}} (B, {\mathfrak{g}}) \hookrightarrow {\mathcal{F}} (A, {\mathfrak{g}})$, for
any finite-dimensional complex Lie algebra ${\mathfrak{g}}$. For
$\tau\in Z'_{\k}({\pazocal{B}})\setminus B_{\k}({\pazocal{B}})$, denote by
$\ev^{{\pazocal{B}}}_{\tau} \colon {\pazocal H}^{\k} ({\mathfrak{g}}) \to {\mathcal{F}} (A, {\mathfrak{g}})$ the linear, rank-preserving embedding
$(\psi\otimes \id_{{\mathfrak{g}}}) \circ \ev_{\tau}$.
\begin{theorem}
\label{thm=freg3}
Assume that all essential components of ${\mathcal R}_1({\pazocal{B}})$ arise from nets on ${\pazocal{B}}$,
for every subarrangement ${\pazocal{B}} \subseteq {\pazocal{A}}$. Then
\begin{equation}
\label{eq:freg}
{\mathcal{F}}_{\reg}(A({\pazocal{A}})\otimes \mathbb{C}, \sl_2 (\mathbb{C}))= \bigcup_{{\pazocal{B}}, \tau} \;
\ev^{{\pazocal{B}}}_{\tau} ({\pazocal H}^{\k}_{\reg} (\sl_2 (\mathbb{C})))\, ,
\end{equation}
where the union is taken over all ${\pazocal{B}} \subseteq {\pazocal{A}}$
and all $\tau\in Z'_{\k}({\pazocal{B}})\setminus B_{\k}({\pazocal{B}})$.
\end{theorem}
\begin{proof}
Let ${\mathcal R}_1(A)= \bigcup_{P \in {\mathcal{P}}} P$ be the decomposition
of the complex resonance variety of ${\pazocal{A}}$ into (linear) irreducible
components. For simplicity, write ${\pazocal H}={\pazocal H}^{\k}(\sl_2)$.
In view of Theorem \ref{lem:disjoint}, we only have to show that, for
every non-zero component $P$ of ${\mathcal R}_1(A)$, the set
$(P\otimes \sl_2)_{\reg}$ is contained in the right-hand side
of \eqref{eq:freg}. As explained in \S\ref{subsec:multinet map},
the subspace $P$ belongs to ${\rm Ess}({\pazocal{B}})$, for some subarrangement
${\pazocal{B}} \subseteq {\pazocal{A}}$. Thus, we may replace ${\pazocal{A}}$ by ${\pazocal{B}}$, and
reduce our proof to showing that, for any $P\in {\rm Ess} ({\pazocal{B}})$, the set
$(P\otimes \sl_2)_{\reg}$ is contained in $\ev_{\tau}({\pazocal H}_{\reg})$,
for some $\tau\in Z'_{\k}({\pazocal{B}})\setminus B_{\k}({\pazocal{B}})$.
Using our hypothesis, we infer that $P=\Psi ({\pazocal{N}})$,
for some $k$-net ${\pazocal{N}}$ on ${\pazocal{B}}$. Set
$\tau= \lambda_{\k} ({\pazocal{N}}) \in Z'_{\k}({\pazocal{B}})\setminus B_{\k}({\pazocal{B}})$.
It follows from Proposition \ref{prop=multiflat}\eqref{pmf3} and
Lemma \ref{lem:submat} that $\ev_{\tau}({\pazocal H}) =P\otimes \sl_2$.
By construction, $\rank (\ev_{\tau}(x))= \rank (x)$, for all
$x\in {\pazocal H}$. Hence, $\ev_{\tau}({\pazocal H}_{\reg})=
(P\otimes \sl_2)_{\reg}$.
\end{proof}
\subsection{Examples}
\label{subsec:ess disc}
We conclude this section with a couple of extended examples.
\begin{example}
\label{ex=flatsharp}
Let ${\pazocal{A}}$ be the reflection arrangement of type ${\rm B}_3$,
defined by the polynomial
$Q=z_1z_2z_3(z_1^2-z_2^2)(z_1^2-z_3^2)(z_2^2-z_3^2)$.
As shown in \cite{MP}, we have that $\beta_p({\pazocal{A}})=0$, for all $p$.
In particular, ${\pazocal{A}}$ supports no net, by \eqref{eq=essboundintro}.
On the other hand, this arrangement admits the multinet from Figure \ref{fig:b3 arr}.
Thus, the hypothesis of Theorem \ref{thm=freg3} is violated in this example.
We claim that the conclusion of Theorem \ref{thm=freg3} fails as well
for this arrangement. To verify this claim, pick
$x=(x_0,x_1,x_2)\in {\pazocal H}^{\mathbb{F}_3}_{\reg}(\sl_2)$ and define
$\omega= \sum_{H\in {\pazocal{A}}} e_H \otimes \omega_H \in A^1\otimes \sl_2$ by
\[
\omega_{z_1}=2 x_1,\, \omega_{z_2}=2 x_2, \, \omega_{z_3}=2 x_0,\,
\omega_{z_1 \pm z_2}=x_0, \,
\omega_{z_2 \pm z_3}=x_1, \, \omega_{z_1 \pm z_3}=x_2.
\]
It is easy to check that $\omega \in {\mathcal{F}}_{\reg}(A, \sl_2)$.
Since clearly $x_i\ne 0$ for all $i$, we infer that
$\omega$ is supported on the whole arrangement ${\pazocal{A}}$.
On the other hand, all elements from the
right-hand side of \eqref{eq:freg} are supported on proper
subarrangements of ${\pazocal{A}}$, by Lemma \ref{lema=lambdabij}.
Thus, equality does not hold in \eqref{eq:freg} in this case.
\end{example}
\begin{example}
\label{ex:graphic}
Let $\Gamma$ be a finite simplicial graph, with vertex set
$[\ell]$ and edge set $E$. The corresponding
(unsigned) graphic arrangement, ${\pazocal{A}}_{\Gamma}$,
is the arrangement in $\mathbb{C}^{\ell}$ defined by the polynomial
$Q=\prod_{(i,j)\in E} (z_i-z_j)$.
For instance, if $\Gamma=K_{\ell}$ is the complete graph on
$\ell$ vertices, then ${\pazocal{A}}_{\Gamma}$ is the reflection arrangement
of type $\operatorname{A}_{\ell-1}$.
The Milnor fibrations of graphic arrangements were studied in \cite{MP}.
Clearly, $\mult({\pazocal{A}}_{\Gamma}) \subseteq \{3\}$, and so $\beta_p({\pazocal{A}}_{\Gamma})=0$,
unless $p=3$. It turns out that $\beta_3({\pazocal{A}}_{\Gamma})=0$ for all graphs
$\Gamma$ except $\Gamma=K_3$ and $K_4$, in which case
$\beta_3({\pazocal{A}}_{\Gamma})=1$.
Theorem \ref{thm:main0} was proved in \cite{MP} for the class
of graphic arrangements.
For such arrangements, the inequalities \eqref{eq=essboundintro} are sharp.
Indeed, ${\mathcal R}_1({\pazocal{A}}_{\Gamma})$
has an essential component if and only if $\Gamma=K_3$
or $K_4$, in which case $\abs{{\rm Ess}({\pazocal{A}}_{\Gamma})}= \abs{{\rm Ess}_3({\pazocal{A}}_{\Gamma})}=1$;
see \cite{SS, CS99}. Moreover, it follows that Theorem \ref{thm=freg3}
holds for all graphic arrangements.
Finally, by \cite[Theorem A]{MP}, Conjecture \ref{conj:mf} holds in the strong form
\eqref{eq:delta arr}, for all (not necessarily unsigned) graphic arrangements.
\end{example}
\section{Characteristic varieties and the Milnor fibration}
\label{sect:cjl milnor}
In this section, topology comes to the fore. Using the jump loci
for homology in rank $1$ local systems, we prove implication
\eqref{a5}$\Rightarrow$\eqref{a6} from Theorem \ref{thm:main1},
and we finish the proof of Theorem \ref{thm:2main1}.
\subsection{Characteristic varieties and finite abelian covers}
\label{subsec:cv}
Let $X$ be a connected, finite-type CW-complex. Without loss
of generality, we may assume $X$ has a single $0$-cell.
Let $\pi=\pi_1(X,x_0)$ be the fundamental group of $X$,
based at this $0$-cell.
Let $\Hom(\pi,\mathbb{C}^*)$ be the affine
algebraic group of $\mathbb{C}$-valued, multiplicative characters on $\pi$,
which we will identify with $H^1(\pi,\mathbb{C}^*)=H^1(X,\mathbb{C}^*)$.
The (degree $q$, depth $r$) {\em characteristic varieties}\/ of
$X$ are the jump loci for homology with coefficients in rank-$1$
local systems on $X$:
\begin{equation}
\label{eq:cvs}
{\mathcal V}^q_r(X)=\{\xi\in\Hom(\pi,\mathbb{C}^*) \mid
\dim_{\mathbb{C}} H_q(X,\mathbb{C}_\xi)\ge r\}.
\end{equation}
By construction, these loci are Zariski-closed subsets of
the character group. Here is a simple example that we will need
later on.
\begin{example}
\label{ex:cv surf}
Let $S=\mathbb{CP}^1\setminus \{\text{$k$ points}\}$. Then
${\mathcal V}^1_r(S)$ equals $H^1(S,\mathbb{C}^*)=(\mathbb{C}^*)^{k-1}$ if
$1\le r \le k-2$, it equals $\{1\}$ if $r = k-1$, and it is empty
if $r \ge k$.
\end{example}
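For the reader's convenience, we record the elementary count behind this example.
Since $S$ is homotopy equivalent to a wedge of $k-1$ circles, any character
$\xi\ne 1$ has $H_0(S,\mathbb{C}_{\xi})=0$, and so
\[
\dim_{\mathbb{C}} H_1(S,\mathbb{C}_{\xi}) = -\chi(S) = k-2,
\]
while the trivial character contributes $\dim_{\mathbb{C}} H_1(S,\mathbb{C})=k-1$.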
As is well-known, the geometry of the characteristic varieties
controls the Betti numbers of regular, finite abelian covers of $X$.
For instance, suppose that the deck-trans\-formation group is
cyclic of order $n$, and fix the inclusion $\iota \colon \mathbb{Z}_n \hookrightarrow \mathbb{C}^*$
sending $1 \mapsto e^{2\pi \mathrm{i}/n}$. With this choice, the
epimorphism $\nu \colon \pi\twoheadrightarrow \mathbb{Z}_n$ defining the cyclic cover $X^{\nu}$
yields a (torsion) character, $\rho=\iota\circ \nu\colon \pi \to \mathbb{C}^*$.
We then have an isomorphism of $\mathbb{C}[\mathbb{Z}_n]$-modules,
\begin{equation}
\label{eq:equiv}
H_q(X^{\nu},\mathbb{C}) \cong H_q(X,\mathbb{C}) \oplus
\bigoplus_{1<d\mid n} (\mathbb{C}[t]/\Phi_d(t))^{\depth(\rho^{n/d})},
\end{equation}
where $\depth(\xi):=\dim_{\mathbb{C}} H_q(X, \mathbb{C}_{\xi}) =
\max\{r \mid \xi\in {\mathcal V}^q_r(X)\}$. For a quick proof of this classical
formula (originally due to A.~Libgober, M.~Sakuma, and E.~Hironaka),
we refer to \cite[Theorem 2.5]{DS13} or \cite[Theorem B.1]{Su}.
As shown in \cite{PS-tams}, the exponents in formula \eqref{eq:equiv}
coming from prime-power divisors can be estimated in terms of the
corresponding Aomoto--Betti numbers. More precisely, suppose
$n$ is divisible by $d=p^s$, for some prime $p$. Composing the
canonical projection $\mathbb{Z}_n \twoheadrightarrow \mathbb{Z}_p$ with $\nu$ defines a
cohomology class $\bar\nu\in H^1(X,\mathbb{F}_p)$.
\begin{theorem}[\cite{PS-tams}]
\label{thm:mod bound}
With notation as above, assume $H_*(X,\mathbb{Z})$ is
torsion-free. Then
\[
\dim_{\mathbb{C}} H_q(X, \mathbb{C}_{\rho^{n/d}}) \le
\dim_{\mathbb{F}_p} H^q( H^{\hdot}(X, \mathbb{F}_p), \delta_{\bar\nu}).
\]
\end{theorem}
\subsection{Characteristic varieties of arrangements}
\label{subsec:cv arr}
Let ${\pazocal{A}}$ be a hyperplane arrangement in $\mathbb{C}^{\ell}$. Since the
slicing operation described in \S\ref{subsec:arrs} does not affect the
character torus, $H^1(M({\pazocal{A}}),\mathbb{C}^*)=(\mathbb{C}^*)^{{\pazocal{A}}}$, or the degree $1$
characteristic varieties of the arrangement, ${\mathcal V}_r({\pazocal{A}}):={\mathcal V}^1_r(M({\pazocal{A}}))$,
we will assume from now on that $\ell=3$.
The varieties ${\mathcal V}_r({\pazocal{A}})$ are closed algebraic
subsets of the character torus.
Since $M({\pazocal{A}})$ is a smooth, quasi-projective variety,
a general result of Arapura \cite{Ar} ensures that
${\mathcal V}_r({\pazocal{A}})$ is, in fact, a finite union of translated
subtori. Moreover, as shown in \cite{CS99, LY} and,
in a broader context, in \cite{DPS}, the tangent cone
at the origin to ${\mathcal V}_1({\pazocal{A}})$ coincides with the resonance
variety ${\mathcal R}_1({\pazocal{A}})$.
More explicitly, consider the exponential map
$\mathbb{C}\to \mathbb{C}^*$, and the coefficient homomorphism
$\exp\colon H^1(M({\pazocal{A}}),\mathbb{C})\to H^1(M({\pazocal{A}}),\mathbb{C}^*)$.
Then, if $P\subset H^1(M({\pazocal{A}}),\mathbb{C})$ is one of the linear
subspaces comprising ${\mathcal R}_1({\pazocal{A}})$, its image under
the exponential map, $\exp(P)\subset H^1(M({\pazocal{A}}),\mathbb{C}^*)$,
is one of the subtori comprising ${\mathcal V}_1({\pazocal{A}})$. Moreover,
this correspondence gives a bijection between the components
of ${\mathcal R}_1({\pazocal{A}})$ and the components of ${\mathcal V}_1({\pazocal{A}})$ passing through
the origin.
Now recall from Arapura theory (\cite{Ar, DPS})
that each positive-dimensional component of ${\mathcal R}_1({\pazocal{A}})$
is obtained by pullback along
an admissible map $f\colon M({\pazocal{A}})\to S$, where $S=\mathbb{CP}^{1}\setminus
\{\text{$k$ points}\}$ and $k\ge 3$. Thus, each positive-dimensional
component of ${\mathcal V}_1({\pazocal{A}})$ containing the origin is of the form
$\exp(P)=f^*(H^1(S,\mathbb{C}^*))$, with $f$ admissible.
In view of Example \ref{ex:cv surf},
the subtorus $f^*(H^1(S,\mathbb{C}^*))$ is a positive-dimensional
component of ${\mathcal V}_1({\pazocal{A}})$ through the origin that
lies inside ${\mathcal V}_{k-2}({\pazocal{A}})$, for any admissible map $f$ as above.
Next, let $\bar{{\pazocal{A}}}$ be the projectivized line arrangement in
$\mathbb{CP}^{2}$, and let $U({\pazocal{A}})$ be its complement. The Hopf
fibration, $\pi\colon \mathbb{C}^{3} \setminus \set{0} \to \mathbb{CP}^{2}$,
restricts to a trivializable bundle map,
$\pi\colon M({\pazocal{A}})\to U({\pazocal{A}})$, with fiber $\mathbb{C}^*$.
Therefore, $M({\pazocal{A}})\cong U({\pazocal{A}})\times \mathbb{C}^*$,
and the character torus $H^1(M({\pazocal{A}}),\mathbb{C}^*)$ splits as
$H^1(U({\pazocal{A}}),\mathbb{C}^*)\times \mathbb{C}^*$. Under this splitting,
the characteristic varieties ${\mathcal V}^1_r(M({\pazocal{A}}))$ get
identified with the varieties ${\mathcal V}^1_r(U({\pazocal{A}}))$ lying
in the first factor.
\subsection{The homology of the Milnor fiber}
\label{subsec:milnor}
Let $Q=Q({\pazocal{A}})$ be a defining polynomial for our arrangement.
The restriction of $Q$ to the complement defines
the Milnor fibration, $Q\colon M({\pazocal{A}}) \to \mathbb{C}^*$, whose
typical fiber, $F({\pazocal{A}})=Q^{-1}(1)$, is the Milnor fiber of
the arrangement.
The map $\pi\colon M({\pazocal{A}})\to U({\pazocal{A}})$
restricts to a regular, $\mathbb{Z}_n$-cover
$\pi \colon F({\pazocal{A}}) \to U({\pazocal{A}})$, where $n=\abs{{\pazocal{A}}}$.
As shown in \cite{CS95} (see \cite[Theorem 4.10]{Su} for
full details), this cover is classified by the ``diagonal'' epimorphism
\begin{equation}
\label{eq:nu milnor}
\nu\colon H_1(U({\pazocal{A}}),\mathbb{Z})\twoheadrightarrow \mathbb{Z}_n, \quad
\nu(\pi_*(a_H) )= 1 \bmod n.
\end{equation}
For each divisor $d$ of $n$, let $\rho_d\colon H_1(M({\pazocal{A}}),\mathbb{Z})\to \mathbb{C}^*$
be the character defined by $\rho_d(a_H)= e^{2\pi \mathrm{i} /d}$.
Using formula \eqref{eq:equiv}, we conclude that
\begin{equation}
\label{eq:h1milnor}
H_1(F({\pazocal{A}}),\mathbb{C}) = \mathbb{C}^{n-1} \oplus \bigoplus_{1<d\mid n}
(\mathbb{C}[t]/\Phi_d(t))^{{e}_d({\pazocal{A}})},
\end{equation}
as modules over $\mathbb{C} [\mathbb{Z}_n]$, where $e_d({\pazocal{A}})=\depth (\rho_d)$. Furthermore,
applying Theorem \ref{thm:mod bound}, we obtain the ``modular
upper bound''
\begin{equation}
\label{eq:bound bis}
{e}_{p^s} ({\pazocal{A}}) \le \beta_p({\pazocal{A}}),
\end{equation}
valid for all primes $p$ and integers $s\ge 1$.
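In terms of the characteristic polynomial $\Delta_{{\pazocal{A}}}(t)$ of the algebraic
monodromy acting on $H_1(F({\pazocal{A}}),\mathbb{C})$, formula \eqref{eq:h1milnor} may be rewritten as
\[
\Delta_{{\pazocal{A}}}(t)= (t-1)^{n-1} \prod_{1<d\mid n} \Phi_d(t)^{e_d({\pazocal{A}})},
\]
since each summand $\mathbb{C}[t]/\Phi_d(t)$ contributes a factor of $\Phi_d(t)$.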
\subsection{Milnor fibration and multinets}
\label{subsec:milnor multi}
A key task now is to find suitable lower bounds for the exponents
appearing in formula \eqref{eq:h1milnor}. The next result provides
such bounds, in the presence of reduced multinets on the arrangement.
\begin{theorem}
\label{thm:be3}
Suppose that an arrangement ${\pazocal{A}}$ admits a reduced $k$-multinet. Letting
$f\colon M({\pazocal{A}})\to S$ denote the associated admissible map, the following hold.
\begin{enumerate}
\item \label{bb1}
The character $\rho_k$ belongs to $f^*(H^1(S,\mathbb{C}^*))$, and
${e}_k({\pazocal{A}})\ge k-2$.
\item \label{bb2}
If $k=p^s$, then $\rho_{p^r}\in f^*(H^1(S,\mathbb{C}^*))$
and ${e}_{p^r}({\pazocal{A}})\ge k-2$, for all $1\le r\le s$.
\end{enumerate}
\end{theorem}
\begin{proof}
First let ${\pazocal{N}}$ be an arbitrary $k$-multinet on ${\pazocal{A}}$, with
parts ${\pazocal{A}}_{\alpha}$ and multiplicity function $m$, and let
$f=f_{{\pazocal{N}}}\colon M({\pazocal{A}})\to S$. It follows from
Lemma \ref{lem:pen h1} that the induced morphism between
character groups, $f^*\colon H^1(S, \mathbb{C}^*) \to H^1(M({\pazocal{A}}), \mathbb{C}^*)$,
takes the character $\rho$ given by
$\rho(c_{\alpha})=\zeta_{\alpha}$, where $\zeta_1\cdots \zeta_k=1$,
to the character given by
\begin{equation}
\label{eq:char hom}
f^*(\rho)(a_H) = \zeta_{\alpha}^{m_H}, \quad\text{for $H\in {\pazocal{A}}_{\alpha}$}.
\end{equation}
Now assume ${\pazocal{N}}$ is reduced. Taking $\zeta_{\alpha}=e^{2\pi \mathrm{i}/k}$
in the above, we see that $\rho_k=f^*(\rho)$ belongs to the
subtorus $T=f^*(H^1(S, \mathbb{C}^*))$. Since $T$ lies inside ${\mathcal V}_{k-2}({\pazocal{A}})$,
formula \eqref{eq:h1milnor} shows that $e_k({\pazocal{A}})=\depth (\rho_k)\ge k-2$.
Finally, suppose $k=p^s$. Then $\rho_{p^r}=f^*(\rho)^{p^{s-r}}$, which
again belongs to the subtorus $T$, for $1\le r\le s$.
The inequality ${e}_{p^r}({\pazocal{A}})\ge k-2$ now follows as above.
\end{proof}
\begin{remark}
\label{rem:multik}
An alternative way of proving part \eqref{bb1} of Theorem \ref{thm:be3} is by
putting together \cite[Theorem 3.11]{FY} and \cite[Theorem 3.1(i)]{DP11}.
The proof we give here, though, is more direct; moreover, it
will be needed in the proof of Theorem \ref{thm:b3e3} below.
Furthermore, the additional part \eqref{bb2} will be used
in proving Theorem \ref{thm:2main1}.
\end{remark}
\subsection{From $\beta_3({\pazocal{A}})$ to $e_3({\pazocal{A}})$}
\label{subsec:e3beta3}
Before proceeding, we need to recall a result of Artal Bartolo,
Cogolludo, and Matei, \cite[Proposition 6.9]{ACM}.
\begin{theorem}[\cite{ACM}]
\label{thm:acm}
Let $X$ be a smooth, quasi-projective variety.
Suppose $V$ and $W$ are two distinct, positive-dimensional
irreducible components of ${\mathcal V}_r(X)$ and ${\mathcal V}_s(X)$,
respectively. If $\xi\in V\cap W$ is a torsion character,
then $\xi \in {\mathcal V}_{r+s} (X)$.
\end{theorem}
The next theorem establishes implication \eqref{a5} $\Rightarrow$ \eqref{a6}
from Theorem \ref{thm:main1} in the Introduction.
\begin{theorem}
\label{thm:b3e3}
Let ${\pazocal{A}}$ be an arrangement that has no rank-$2$ flats
of multiplicity properly divisible by $3$. If\, $\beta_3({\pazocal{A}}) \le 2$,
then ${e}_3({\pazocal{A}}) =\beta_3({\pazocal{A}})$.
\end{theorem}
\begin{proof}
From the modular bound \eqref{eq:bound bis}, we know that
${e}_3({\pazocal{A}}) \le \beta_3({\pazocal{A}})$. Since we are assuming
that $\beta_3({\pazocal{A}}) \le 2$, there are only three cases to consider.
First, if $\beta_3({\pazocal{A}})=0$, then clearly $e_3({\pazocal{A}})=0$.
Second, if $\beta_3({\pazocal{A}})=1$, then $e_3({\pazocal{A}})=1$, by
implication \eqref{a3} $\Rightarrow$ \eqref{a1} from
Theorem \ref{thm:main1} and Theorem \ref{thm:be3}\eqref{bb1}.
Finally, suppose $\beta_3({\pazocal{A}})=2$.
We then know from Theorem \ref{thm:main1}\eqref{a7} that the resonance variety
${\mathcal R}_1({\pazocal{A}})$ has at least $4$ essential components, all corresponding
to $3$-nets on ${\pazocal{A}}$. Pick two of them, given by $3$-nets ${\pazocal{N}}$ and ${\pazocal{N}}'$,
constructed as in Corollary \ref{cor:res comp}.
By Theorem \ref{thm:be3}, the characteristic variety ${\mathcal V}_1({\pazocal{A}})$
has two positive-dimensional components, $f_{{\pazocal{N}}}^*(H^1(S, \mathbb{C}^*))$
and $f_{{\pazocal{N}}'}^*(H^1(S, \mathbb{C}^*))$, both passing through the torsion
character $\rho_3$. These components must be distinct, since the
corresponding components of the resonance variety, $f_{{\pazocal{N}}}^*(H^1(S, \mathbb{C}))$
and $f_{{\pazocal{N}}'}^*(H^1(S, \mathbb{C}))$, are distinct. By Theorem \ref{thm:acm},
then, $\rho_3$ belongs to ${\mathcal V}_2({\pazocal{A}})$. Formula \eqref{eq:h1milnor}
now gives $e_3({\pazocal{A}})\ge 2$. By the modular bound, $e_3({\pazocal{A}})= 2$,
and the proof is complete.
\end{proof}
\subsection{From $\beta_2({\pazocal{A}})$ to $e_2({\pazocal{A}})$ and $e_4({\pazocal{A}})$}
\label{subsec:e2beta2}
We are now ready to complete the proof of Theorem \ref{thm:2main1}
from the Introduction.
\begin{proof}[Proof of Theorem \ref{thm:2main1}]
The equivalence \eqref{2m1}$\Leftrightarrow$\eqref{2m3} is a direct consequence
of Theorem \ref{teo=lambdaintro}, case $\k=\mathbb{F}_4$.
Suppose ${\pazocal{A}}$ admits a $4$-net.
By Theorem \ref{thm:be3}\eqref{bb2}, then, both $e_2({\pazocal{A}})$
and $e_4({\pazocal{A}})$ are at least $2$. On the other hand,
the modular bound \eqref{eq:bound bis} gives that both $e_2({\pazocal{A}})$ and
$e_4({\pazocal{A}})$ are at most $\beta_2({\pazocal{A}})$. The further assumption that
$\beta_2({\pazocal{A}})\le 2$ implies that this bound is sharp, and we are done.
\end{proof}
\begin{example}[cf.~\cite{DP11, FY, Yu04}]
\label{ex:hesse}
In Theorem \ref{thm:2main1}, we were guided
by the properties of the Hessian arrangement.
This is the arrangement ${\pazocal{A}}$ of $12$ lines in $\mathbb{CP}^2$ which
consists of the $4$ completely reducible fibers of the cubic pencil
generated by $z_1^3+z_2^3+z_3^3$ and $z_1z_2z_3$.
Each of these fibers is a union of $3$ lines in general position.
The resulting partition defines a $(4,3)$-net
on $L_{\le 2}({\pazocal{A}})$, depicted in Figure \ref{fig:hessian}.
Clearly, $\mult ({\pazocal{A}})= \{ 4\}$.
From the above information, we find that $\beta_2({\pazocal{A}})=2$.
Using Theorem \ref{thm:2main1},
we recover the known result that $\Delta_{{\pazocal{A}}}(t)= (t-1)^{11}[(t+1)(t^2+1)]^2$.
The Hessian arrangement shows that the hypothesis
on multiplicities is needed in Theorem \ref{thm:main0}. Indeed,
$\beta_p({\pazocal{A}})=0$ for all primes $p\ne 2$, by Corollary \ref{cor:beta zero};
consequently, $\Delta_{{\pazocal{A}}}(t)\ne (t-1)^{11}(t^2+t+1)^{\beta_3({\pazocal{A}})}$.
Finally, we infer from the above discussion that Conjecture \ref{conj:mf}
holds for the Hessian arrangement, in the strong form \eqref{eq:delta arr}.
\end{example}
\subsection{More examples}
\label{subsec:examples}
We conclude this section with several applications to
some concrete classes of examples. To start with,
Theorem \ref{thm:main1} provides a partial answer
to the following question, raised by Dimca, Ibadula, and
M\u{a}cinic in \cite{DIM}: If $\rho_d \in {\mathcal V}_{1}({\pazocal{A}})$,
must $\rho_d$ actually belong to a component of ${\mathcal V}_{1}({\pazocal{A}})$
passing through the origin?
\begin{corollary}
\label{cor:dimq}
Suppose $L_2({\pazocal{A}})$ has no flats of multiplicity $3r$, for any $r>1$.
If $\rho_3 \in {\mathcal V}_{1}({\pazocal{A}})$, then $\rho_3$ belongs to a
$2$-dimensional component of ${\mathcal V}_{1}({\pazocal{A}})$, passing
through $1$.
\end{corollary}
\begin{proof}
Since $\rho_3 \in {\mathcal V}_{1}({\pazocal{A}})$, formulas \eqref{eq:h1milnor}--\eqref{eq:bound bis}
imply that $\beta_3({\pazocal{A}})\neq 0$. By Theorem \ref{thm:main1}, then,
${\pazocal{A}}$ supports a reduced $3$-multinet ${\pazocal{N}}$. By Theorem \ref{thm:be3},
the character $\rho_3$ belongs to the $2$-dimensional subtorus
$f_{{\pazocal{N}}}^*(H^1(S, \mathbb{C}^*))\subset {\mathcal V}_{1}({\pazocal{A}})$.
\end{proof}
\begin{remark}
\label{rem:fourth}
In \cite{Di}, A. Dimca used superabundance methods to analyze the
algebraic monodromy action on $H_1(F({\pazocal{A}}),\mathbb{C})$, for an arrangement ${\pazocal{A}}$
which has at most triple points, and which admits a reduced $3$-multinet.
In the case when $\abs{{\pazocal{A}}}=18$, he discovered an interesting type of
combinatorics, for which he proved the following dichotomy result:
there are two possibilities for ${\pazocal{A}}$, defined in superabundance terms,
leading to different values for $e_3({\pazocal{A}})$.
An example due to M. Yoshinaga and recorded in \cite{Di} shows
that one of these two cases actually occurs. Our Theorem \ref{thm:main0}
then shows that the other case is impossible, thereby answering the
subtle question raised in \cite[Remark 1.2]{Di}. This indicates that
our topological approach may also be used to solve difficult
superabundance problems.
\end{remark}
\begin{example}
\label{ex:cevad}
Let $\{{\pazocal{A}}_m\}_{m\ge 1}$ be the family of monomial arrangements,
corresponding to the complex reflection groups of type $G(m,m,3)$, and
defined by the polynomials $Q_m=(z_1^m-z_2^m)(z_1^m-z_3^m)(z_2^m-z_3^m)$.
As noted in \cite{FY}, each arrangement ${\pazocal{A}}_m$ supports a $(3,m)$-net
with partition given by the factors of $Q_m$, and has Latin square
corresponding to $\mathbb{Z}_m$. There are $3$ mono-colored flats of multiplicity $m$,
and all the other flats in $L_2({\pazocal{A}}_m)$ have multiplicity $3$.
With this information at hand, Lemma \ref{lem:eqs} easily implies that
$\beta_3({\pazocal{A}}_m)=1$ if $3 \nmid m$, and $\beta_3({\pazocal{A}}_m)=2$, otherwise.
In the first case, we infer from Theorem \ref{thm:main1}
that $e_3({\pazocal{A}}_m)=1$. If $m=3$, then ${\pazocal{A}}_3$ is the Ceva arrangement from
Example \ref{ex:third}; in this case, Theorem \ref{thm:main1} shows that
$e_3({\pazocal{A}}_3)=2$. Finally, if $m=3d$, with $d>1$, the multiplicity assumption
from Theorem \ref{thm:main1} no longer holds; nevertheless, the
methods used here can be adapted to show that $e_3({\pazocal{A}}_{3d})=2$,
for all $d$. In fact, it can be shown that $e_p({\pazocal{A}}_m)=\beta_p({\pazocal{A}}_m)$, for
all $m\ge 1$ and all primes $p$.
\end{example}
Finally, the next example shows that implication \eqref{a3} $\Rightarrow$ \eqref{a1}
and property \eqref{a7} from Theorem \ref{thm:main1}
fail without our multiplicity restrictions.
\begin{example}
\label{ex:B3 bis}
Let $\{{\pazocal{A}}_m\}_{m\ge 1}$ be the family of full monomial arrangements,
corresponding to the complex reflection groups of type $G(m,1,3)$,
and defined by the polynomials
$Q_m=z_1z_2z_3(z_1^m-z_2^m)(z_1^m-z_3^m)(z_2^m-z_3^m)$.
It is easy to see that $\mult ({\pazocal{A}}_m)= \{ 3,m+2 \}$.
Using Lemma \ref{lem:eqs}, we infer that $\beta_3({\pazocal{A}}_m)=1$
if $m=3d+1$, and $\beta_3({\pazocal{A}}_m)=0$, otherwise.
As noted in \cite{FY}, each arrangement ${\pazocal{A}}_m$ supports a
$3$-multinet ${\pazocal{N}}_m$ (non-reduced for $m>1$), with multiplicity
function equal to $m$ on the hyperplanes $z_1=0$, $z_2=0$, $z_3=0$,
and equal to $1$, otherwise. Let $f\colon M({\pazocal{A}}_m)\to
S= \mathbb{CP}^1\setminus \{\text{$3$ points}\}$ be the associated
admissible map, and let $\rho\in H^1(S, \mathbb{C}^*)$ be the
diagonal character used in the proof of Theorem \ref{thm:be3} for $k=3$.
If $m=1$, then ${\pazocal{A}}_1$ is the reflection arrangement of type
$\operatorname{A}_{3}$ from Example \ref{ex:graphic}, and
${\pazocal{N}}_1$ is the $3$-net from Figure \ref{fig:braid}. In this
case, Theorem \ref{thm:main1} applies, giving $e_3({\pazocal{A}}_1)=\beta_3({\pazocal{A}}_1)=1$.
Now suppose $m=3d+1$, with $d>0$. In this case, even though ${\pazocal{N}}_m$
is not reduced, the equality $f^*(\rho)=\rho_3$ still holds,
since $m\equiv 1 \bmod 3$. Consequently, $\rho_3$
belongs to the component $T=f^*(H^1(S,\mathbb{C}^*))$ of ${\mathcal V}_1({\pazocal{A}}_m)$.
Hence, by formula \eqref{eq:h1milnor}, $e_3({\pazocal{A}}_m)\ge 1$.
Although $\beta_3({\pazocal{A}}_m)=1$, we claim that ${\pazocal{A}}_m$
supports no reduced $3$-multinet. Indeed,
suppose there were such a multinet ${\pazocal{N}}'_m$, with
corresponding admissible map $f' \colon M({\pazocal{A}}_m)\to S$.
The same argument as above shows that $\rho_3$
lies in the component $T'=f'^*(H^1(S,\mathbb{C}^*))$ of ${\mathcal V}_1({\pazocal{A}}_m)$.
If $T= T'$, then $f^*(H^1(S,\mathbb{C}))=f'^*(H^1(S,\mathbb{C}))$, forcing ${\pazocal{N}}_m$
and ${\pazocal{N}}'_m$ to be conjugate under the natural $\Sigma_3$-action,
by Lemma \ref{lem=fibres}. This is clearly impossible, since
${\pazocal{N}}'_m$ is reduced and ${\pazocal{N}}_m$ is not reduced. Hence,
the components $T$ and $T'$ are distinct, and so
Theorem \ref{thm:acm} implies that $\rho_3\in {\mathcal V}_2({\pazocal{A}}_m)$.
By formula \eqref{eq:h1milnor}, we must then have ${e}_3({\pazocal{A}}_m)\ge 2$,
thereby contradicting inequality \eqref{eq:bound bis}.
This example also shows that the inclusion $Z'_{\mathbb{F}_3}({\mathcal{M}}) \subseteq Z_{\mathbb{F}_3}({\mathcal{M}})$
can well be strict, even for the realizable matroids ${\pazocal{A}}_{3d+1}$ with $d>0$.
Indeed, the equality $Z'_{\mathbb{F}_3}({\pazocal{A}}_{3d+1})= Z_{\mathbb{F}_3}({\pazocal{A}}_{3d+1})$
would imply that $Z'_{\mathbb{F}_3}({\pazocal{A}}_{3d+1})\setminus B_{\mathbb{F}_3}({\pazocal{A}}_{3d+1}) \ne \emptyset$,
since $\beta_3 ({\pazocal{A}}_{3d+1})\ne 0$. By Lemma \ref{lema=lambdabij},
${\pazocal{A}}_{3d+1}$ would then support a $3$-net; since a $3$-net is, in particular,
a reduced $3$-multinet, this contradicts the claim established above.
At the same time, we see that the inequality \eqref{eq=essboundintro}
is strict for ${\pazocal{A}}_{3d+1}$ and $k=3$ when $d>0$, since
${\rm Ess}_3 ({\pazocal{A}}_{3d+1})= \emptyset$ and $\beta_3 ({\pazocal{A}}_{3d+1})=1$.
We conclude that $e_3({\pazocal{A}}_m)=1$, and thus \eqref{eq:bound bis}
holds as an equality if $3\mid m+2$. Clearly, equality also holds
($e_3({\pazocal{A}}_m)=\beta_3({\pazocal{A}}_m)=0$) if $3\nmid m+2$.
In fact, it can be checked that Conjecture \ref{conj:mf} holds in
the strong form \eqref{eq:delta arr}, for all full monomial arrangements.
\end{example}
\section{A family of matroids}
\label{sec:matr}
We conclude by constructing an infinite family of matroids ${\mathcal{M}}(m)$
which are realizable over $\mathbb{C}$ if and only if $m\le 2$, and with the
property that $\beta_3({\mathcal{M}}(m))=m$. As an application, we
establish part \eqref{a5} of Theorem \ref{thm:main1} from
the Introduction, thereby completing the proof of that theorem.
\subsection{Matroids coming from groups}
\label{ss71}
Given a finite group $G$ and an integer $m\ge 1$, there is a simple matroid ${\mathcal{M}}_G(m)$
of rank at most $3$ on the product group $G^m$. The size-$3$ dependent subsets
of this matroid are the $3$-element subsets $\{ v,v',v'' \}$ for which $v\cdot v' \cdot v''=1$.
In this section, we take $G=\mathbb{F}_3$ and omit $G$ from the notation.
A useful preliminary remark is that $v+v'+v''=0$ in $\mathbb{F}_3$ if and
only if either $v=v'=v''$ or $v$, $v'$, and $v''$ are all distinct.
Note also that $\GL_m(\mathbb{F}_3)$ acts naturally on the matroid ${\mathcal{M}}(m)$.
Clearly, ${\mathcal{M}}(1)$ has rank $2$, and is realized in $\mathbb{CP}^2$ by an arrangement of $3$
concurrent lines. It is equally clear that ${\mathcal{M}}(m)$ has rank $3$, for all $m>1$. Indeed,
if $v_1,v_2\in \mathbb{F}_3^m$ are the first two standard basis vectors, then
$\{ 0, v_1,v_2 \}$ is not a dependent subset. It is not hard to check that
${\mathcal{M}}(2)$ is realized over $\mathbb{C}$ by the Ceva arrangement from
Example \ref{ex:third}. Our first goal is to show that ${\mathcal{M}}(m)$
cannot be realized over $\mathbb{C}$, for any $m>2$.
To this end, we start with a couple of simple lemmas, to be used later
on in this section. For each $m\ge 1$, define a map
\begin{equation}
\label{eq=defq}
q\colon {\mathcal{M}}(m) \times {\mathcal{M}}(m) \rightarrow {\mathcal{M}}(m)
\end{equation}
by setting $q(v,v')=-v-v'$ if $v\neq v'$, and $q(v,v)=v$.
\begin{lemma}
\label{lem=mflats}
For any $v\neq v'$, the set $\{ v,v',q(v,v') \}$ belongs to $ L_2({\mathcal{M}}(m))$.
Conversely, every rank $2$ flat of ${\mathcal{M}}(m)$ is of this form. In particular,
all flats in $L_2({\mathcal{M}}(m))$ have multiplicity $3$.
\end{lemma}
\begin{proof}
Note that $v,v'$ and $q(v,v')$ are distinct. Indeed, if, say, $v=-v-v'$, then
$v'=-2v=v$ in $\mathbb{F}_3^m$. By construction, $q(v,v')$ is the unique point
in $v\vee v'$ (the flat generated by $v,v'$) different from $v$ and $v'$.
The desired conclusions follow at once.
\end{proof}
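For instance, for $m=2$ this recovers a familiar picture. Three distinct points of
$\mathbb{F}_3^2$ sum to zero precisely when they are collinear, since the points of any
affine line sum to
\[
\sum_{t\in \mathbb{F}_3} (u+t\alpha) = 3u + (0+1+2)\alpha = 0 .
\]
Thus, the rank-$2$ flats of ${\mathcal{M}}(2)$ are the $12$ lines of the affine plane
$\mathrm{AG}(2,3)$, matching the $12$ triple points of the Ceva arrangement.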
The matroids ${\mathcal{M}}(m)$ have a lot of $3$-nets. More precisely,
fix $a\in [m]$ and write
\begin{equation}
\label{eq=mpart}
{\mathcal{M}}(m)= \coprod_{i\in \mathbb{F}_3} {\mathcal{M}}_i(m),
\end{equation}
where ${\mathcal{M}}_i(m)= \{ v=(v_1,\dots,v_m) \in \mathbb{F}_3^m \mid v_a=i \}$.
We then have the following lemma.
\begin{lemma}
\label{lem=mnets}
For each $m\ge 2$ and $a\in [m]$, the partition \eqref{eq=mpart}
defines a $3$-net on ${\mathcal{M}}(m)$, with all submatroids ${\mathcal{M}}_i(m)$
being line-closed and isomorphic to ${\mathcal{M}}(m-1)$.
\end{lemma}
\begin{proof}
Clearly, $\mathbb{F}_3^{m-1}$ embeds in $\mathbb{F}_3^{m}$, by inserting the value $i\in \mathbb{F}_3$
in position $a\in [m]$; the image of this embedding is ${\mathcal{M}}_i(m)$. In particular,
$\abs{{\mathcal{M}}_i(m)}=3^{m-1}$, for all $i$. To verify that the partition
defines a $3$-net (as explained in \S\ref{subsec:nets}), pick points
$v\in {\mathcal{M}}_i(m)$ and $v'\in {\mathcal{M}}_j(m)$, with $i\neq j$. By
Lemma \ref{lem=mflats}, we have that $v\vee v'= \{ v,v',v''\}$, where $v''=-v-v'$.
We need to check that $\proj_a(v'')=k$, where $k$ is the third
element of $\mathbb{F}_3$. This in turn is clear, since
$\proj_a(v)+ \proj_a(v')+\proj_a(v'')=0\in \mathbb{F}_3$
and $i\neq j$. The fact that ${\mathcal{M}}_i(m)$ is line-closed in ${\mathcal{M}}(m)$
for all $i$ then follows from Lemma \ref{lem:net props}\eqref{n3}.
Finally, consider the embedding
$\iota_i\colon {\mathcal{M}}(m-1) \xrightarrow{\,\simeq\,} {\mathcal{M}}_i(m) \subseteq {\mathcal{M}}(m)$. The asserted
matroid isomorphism follows from the easily checked fact that $v+v'+v''=0$
if and only if $\iota_i(v)+\iota_i(v')+\iota_i(v'')=0$, for any distinct elements
$v,v',v'' \in \mathbb{F}_3^{m-1}$.
\end{proof}
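For instance, when $m=2$ and $a=1$, the partition \eqref{eq=mpart} splits the $9$
points of ${\mathcal{M}}(2)$ into the three parallel lines
\[
{\mathcal{M}}_i(2)= \{ (i,j) \mid j \in \mathbb{F}_3 \}, \quad i\in \mathbb{F}_3,
\]
of $\mathrm{AG}(2,3)$, each a copy of ${\mathcal{M}}(1)$; any two points lying in
different classes span a rank-$2$ flat which meets the third class.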
\subsection{The $\beta$-invariants and matroid realizability}
\label{subsec:beta realize}
Next, we show that our family of matroids is universal for
$\beta_3$-computations, in the following sense.
\begin{prop}
\label{prop=mb3}
For all $m\ge 1$, we have that $\beta_3({\mathcal{M}}(m))=m$.
\end{prop}
\begin{proof}
For $m=1$, this is clear. The inequality $\beta_3({\mathcal{M}}(m))\le m$ follows
by induction on $m$, using Corollary \ref{cor:betabounds} and
Lemma \ref{lem=mnets}.
For each $a\in [m]$, define a vector $\eta_a\in \mathbb{F}_3^{{\mathcal{M}}(m)}$
by setting $\eta_a(v)=v_a$. We claim that $\eta_a\in Z_{\mathbb{F}_3}({\mathcal{M}}(m))$.
Indeed, let $ \{ v,v',v'' \}$ be a flat in $L_2({\mathcal{M}}(m))$, so that $v+v'+v''=0$.
Then $\eta_a(v)+\eta_a(v')+\eta_a(v'')=0$, and the claim follows
from Lemma \ref{lem:eqs}.
To prove that $\beta_3({\mathcal{M}}(m))\ge m$, it is enough to show
that $\eta_1,\dots,\eta_m$ and $\sigma$
are linearly independent, where $\sigma \in \mathbb{F}_3^{{\mathcal{M}}(m)}$ is the standard
diagonal vector (i.e., $\sigma_v = 1$, for all $v\in {\mathcal{M}}(m)$).
To that end, suppose $\sum_a c_a \eta_a + c\sigma =0$. Evaluating at $v=0$,
we find that $c=0$. Finally, evaluating at the standard basis vectors
of $\mathbb{F}_3^m$ gives $c_a=0$ for all $a$, as needed.
\end{proof}
A straightforward modification of the above construction
works for an arbitrary Galois field $\k=\mathbb{F}_{p^s}$ with at least $3$ elements.
Let ${\mathcal{M}}_{\k}(m)$ be the simple matroid of rank at most $3$ on $\k^m$,
whose size-$3$ dependent subsets are given by all collinearity relations.
(Clearly, ${\mathcal{M}}(m)= {\mathcal{M}}_{\mathbb{F}_3} (m)$, for all $m$.) As before, ${\mathcal{M}}_{\k} (m)$
has rank $2$ for $m=1$, and rank $3$ for $m>1$; moreover, all
$2$-flats have multiplicity $p^s$.
\begin{prop}
\label{prop:mpm}
Let $\k=\mathbb{F}_{p^s}$ be a finite field different from $\mathbb{F}_2$. Then,
\begin{enumerate}
\item \label{mat1}
$\beta_p({\mathcal{M}}_{\k} (m))\ge m$, for all $m\ge 1$.
\item \label{mat2}
If $\k \ne \mathbb{F}_3$, the matroids ${\mathcal{M}}_{\k} (m)$ are non-realizable over $\mathbb{C}$,
for all $m\ge 2$.
\end{enumerate}
\end{prop}
\begin{proof}
From our hypothesis on $\k$, we have that $\Sigma(\k)=0$,
by Lemma \ref{lema=sigmazero}.
In view of Lemma \ref{lem:eqs}, any affine function $\tau \in \k^{{\mathcal{M}}_{\k} (m)}$
belongs to $Z_{\k}({\mathcal{M}}_{\k} (m))$. Indeed, let
$X= \{ \alpha u + (1-\alpha)v \mid \alpha \in \k \}$ be a rank-$2$ flat of ${\mathcal{M}}_{\k} (m)$.
Then
\[
\sum_{w\in X} \tau_w= \sum_{\alpha \in \k} \alpha \tau_u + (1-\alpha) \tau_v=
p^s \cdot \tau_v + \Sigma(\k) (\tau_u -\tau_v)=0,
\]
as needed. An argument as in the proof of Proposition \ref{prop=mb3}
now shows that assertion \eqref{mat1} holds.
To prove assertion \eqref{mat2}, we use the Hirzebruch--Miyaoka--Yau
inequality from \cite{Hi}. This inequality involves the numbers
\begin{equation}
\label{eq:ti}
t_i = \abs{\{ X\in L_2({\mathcal{M}}) \mid \abs{X}=i \}}
\end{equation}
associated to a matroid ${\mathcal{M}}$ for each $1<i \le \abs{{\mathcal{M}}}$. When there are no rank-$2$
flats of multiplicity $\abs{{\mathcal{M}}}$ or $\abs{{\mathcal{M}}}-1$, and ${\mathcal{M}}$ is realized by a line arrangement
in $\mathbb{CP}^2$, then
\begin{equation}
\label{eq=hmy}
t_2+\frac{3}{4} t_3 \ge \abs{{\mathcal{M}}}+ \sum_{i\ge 4} (i-4)t_i .
\end{equation}
In our case, $\abs{{\mathcal{M}}}=p^{sm}$, and the only non-zero number
$t_i$ occurs when $i=p^s>3$. This clearly violates \eqref{eq=hmy},
thus showing that ${\mathcal{M}}$ is not realizable. (Note that this argument breaks down
for $\k=\mathbb{F}_3$.)
\end{proof}
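The counting behind this violation is easy to verify by brute force in a small case. The sketch below (Python; the choice $\k=\mathbb{F}_5$, $m=2$ and all helper names are ours) enumerates the rank-$2$ flats of ${\mathcal{M}}_{\mathbb{F}_5}(2)$, recovers $t_{5}=30$ with all other $t_i=0$, and confirms that \eqref{eq=hmy} fails:

```python
from itertools import product, combinations

p = 5  # a small test case: k = F_5, m = 2, so the points of M_k(2) are k^2
pts = list(product(range(p), repeat=2))

def line(u, v):
    # rank-2 flat through u != v: the affine line { a*u + (1-a)*v : a in k }
    return frozenset(tuple((a * ui + (1 - a) * vi) % p for ui, vi in zip(u, v))
                     for a in range(p))

flats = {line(u, v) for u, v in combinations(pts, 2)}
t = {}
for X in flats:
    t[len(X)] = t.get(len(X), 0) + 1
# every 2-flat has p points, and there are p^2 (p^2 - 1) / (p (p - 1)) = 30 of them
assert t == {p: p**2 * (p**2 - 1) // (p * (p - 1))}

# the inequality t_2 + (3/4) t_3 >= |M| + sum_{i>=4} (i-4) t_i fails: 0 < 55
lhs = t.get(2, 0) + 0.75 * t.get(3, 0)
rhs = len(pts) + sum((i - 4) * ti for i, ti in t.items() if i >= 4)
assert lhs < rhs
```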
\subsection{Realizability of the ${\mathcal{M}}(m)$ matroids}
\label{subsec:realize}
The case $\k=\mathbb{F}_3$ is more subtle. In order to proceed
with this case, we need to recall a result of
Yuzvinsky (Corollary 3.5 from \cite{Yu04}). Let ${\pazocal{A}}$ be an
arrangement in $\mathbb{C}^3$.
\begin{lemma}[\cite{Yu04}]
\label{lem:yuz}
Let ${\pazocal{N}}$ be a $(3,mn)$-net ($m\ge 3$) on ${\pazocal{A}}$, such
that each class ${\pazocal{A}}_i$ can be partitioned into $n$ blocks of size $m$,
denoted ${\pazocal{A}}_{ij}$, and for every pair $i,j$, there is a $k$ such that
${\pazocal{A}}_{1i}$, ${\pazocal{A}}_{2j}$, and ${\pazocal{A}}_{3k}$ are the three classes of a $(3,m)$-subnet
${\pazocal{N}}_{ij}$ of ${\pazocal{N}}$. If, moreover, each class of ${\pazocal{N}}_{ij}$ is a pencil,
then every class of ${\pazocal{N}}$ is also a pencil.
\end{lemma}
Realizability in the family of matroids $\{{\mathcal{M}}(m)\}_{m\ge 1}$ is
settled by the next result.
\begin{theorem}
\label{thm=mnotr}
For any $m\ge 3$, the sub-lattice $L_{\le 2}({\mathcal{M}}(m))$ is not realizable over $\mathbb{C}$,
i.e., there is no arrangement ${\pazocal{A}}$ in $\mathbb{C}^{\ell}$ such that $L_{\le 2}({\mathcal{M}}(m)) \cong
L_{\le 2}({\pazocal{A}})$, as lattices.
\end{theorem}
\begin{proof}
By Lemma \ref{lem=mnets}, the matroid ${\mathcal{M}}(m-1)$ embeds in ${\mathcal{M}}(m)$;
thus, we may assume $m=3$. Clearly, it is enough to show that
$L_{\le 2}({\mathcal{M}}(3))$ cannot be realized by any arrangement in $\mathbb{C}^3$.
Assuming the contrary, we will use Lemma \ref{lem:yuz}
to derive a contradiction. Take $a=1$ in Lemma \ref{lem=mnets},
and denote by ${\pazocal{N}}$ the associated $(3,9)$-net on ${\mathcal{M}}(3)$.
Write each class in the form
\begin{equation}
\label{eq:mi3}
{\mathcal{M}}_i(3)= \{ i \} \times \mathbb{F}_3 \times \mathbb{F}_3 = \coprod_{j\in \mathbb{F}_3} {\mathcal{M}}_{ij}(3),
\end{equation}
where ${\mathcal{M}}_{ij}(3)= \{ i \} \times\{ j \} \times \mathbb{F}_3$. For $j,j'\in \mathbb{F}_3$,
define $j''\in \mathbb{F}_3$ by $j+j'+j''=0$. To check the first assumption from
Lemma \ref{lem:yuz}, we have to show that the partition
${\mathcal{M}}_{0j}(3) \coprod {\mathcal{M}}_{1j'}(3) \coprod {\mathcal{M}}_{2j''}(3)$ is a $3$-subnet of ${\pazocal{N}}$.
Pick $v=(i,j,k)$ and $v'=(i',j',k')$ in two different classes of this partition. Note
that necessarily $i\neq i'$, by construction. By Lemma \ref{lem=mflats},
$v\vee v'= \{ v,v',v''\}$, where $v''=-v-v' $. Thus, we must have $v''=(i'',j'',k'')$,
where $i+i'+i''=j+j'+j''=0$. This implies that $v''$ belongs to the third class of
the partition, as required for the $3$-net property. Clearly, the $3$-net defined
by this partition is a $3$-subnet of ${\pazocal{N}}$.
As noted before, each class of the partition has rank $2$, being
isomorphic to the matroid ${\mathcal{M}}(1)$. Hence, Lemma \ref{lem:yuz}
applies, and implies that all classes of ${\pazocal{N}}$ have rank $2$.
On the other hand, Lemma \ref{lem=mnets} ensures that
these classes are isomorphic to ${\mathcal{M}}(2)$. This is a
contradiction, and so the proof is complete.
\end{proof}
\subsection{Collections of $3$-nets}
\label{ss72}
For the rest of this section, ${\pazocal{A}}$ will denote an arrangement in $\mathbb{C}^3$.
Our goal is to prove the following theorem, which verifies assertion
\eqref{a5} from Theorem \ref{thm:main1} in the Introduction.
\begin{theorem}
\label{thm:main4}
Suppose $L_2({\pazocal{A}})$ has no flats of multiplicity properly divisible by $3$.
Then $\beta_3({\pazocal{A}}) \le 2$.
\end{theorem}
Our strategy is based on the map $\lambda_{\mathbb{F}_3} \colon \{\text{$3$-nets on ${\pazocal{A}}$}\} \to
Z_{\mathbb{F}_3}({\pazocal{A}})\setminus B_{\mathbb{F}_3}({\pazocal{A}})$ from Theorem \ref{teo=lambdaintro}.
Recall that the map $\lambda_{\mathbb{F}_3}$ is always injective. Moreover, as shown
in Lemma \ref{lem:lambda}, this map is also surjective
when the above assumption on multiplicities is satisfied.
In this case, $\beta_3({\pazocal{A}})\ge m$ if and only if there is a collection
${\pazocal{N}}^1,\dots,{\pazocal{N}}^m$ of $3$-nets on ${\pazocal{A}}$ such that the classes
$[\lambda_{\mathbb{F}_3}({\pazocal{N}}^1)],\dots,[\lambda_{\mathbb{F}_3}({\pazocal{N}}^m)]$ are independent in
$Z_{\mathbb{F}_3}({\pazocal{A}})/B_{\mathbb{F}_3}({\pazocal{A}})$.
Let ${\pazocal{A}}$ be an arbitrary arrangement. When the above property holds, we call the nets
$\{ {\pazocal{N}}^a \}_{a\in [m]}$ {\em independent}. For $v=(v^1,\dots,v^m)\in {\mathcal{M}}(m)$,
we set ${\pazocal{A}}_v= \bigcap_{a\in [m]} {\pazocal{N}}^a_{v^a}$, where ${\pazocal{A}}= \coprod_{i\in \mathbb{F}_3} {\pazocal{N}}^a_i$
is the partition associated to ${\pazocal{N}}^a$. We say that the family $\{ {\pazocal{N}}^a \}_{a\in [m]}$
has the {\em intersection property}\/ if ${\pazocal{A}}_v \neq \emptyset$ for all $v$,
and the {\em strong intersection property}\/ if there is an integer $d>0$
such that $\abs{{\pazocal{A}}_v} =d$ for all $v$.
Clearly, we have a partition, ${\pazocal{A}}_v \coprod {\pazocal{A}}_{v'} \coprod {\pazocal{A}}_{v''}$, for
any flat $\{v,v',v'' \} \in L_2({\mathcal{M}}(m))$. If all these partitions define $3$-nets,
we say that $\{ {\pazocal{N}}^a \}_{a\in [m]}$ has the {\em net property}.
Our starting point towards the proof of Theorem \ref{thm:main4} is
the following theorem.
\begin{theorem}
\label{thm=3mnets}
Suppose there is an arrangement ${\pazocal{A}}$ in $\mathbb{C}^3$, and a collection
of $3$-nets on ${\pazocal{A}}$, $\{ {\pazocal{N}}, {\pazocal{N}}', {\pazocal{N}}'' \}$, that has both the strong
intersection property and the net property. Then the sub-lattice
$L_{\le 2}({\mathcal{M}}(3))$ is realizable over $\mathbb{C}$.
\end{theorem}
\begin{proof}
Let $S^d(3)\subseteq \Sym^{\hdot}(3)$ be the vector space of degree $d$
polynomials in $3$ variables. We will realize $L_{\le 2}({\mathcal{M}}(3))$ in the
dual space, $V=S^d(3)^*$.
For each hyperplane $H\in {\pazocal{A}}$, choose a linear form $f_H$ in $3$
variables such that $H=\ker(f_H)$. Next, we associate to a point $v\in {\mathcal{M}}(3)$
the vector $Q_v=\prod_{H\in {\pazocal{A}}_v} f_H \in V^* \setminus \{ 0\}$,
using the strong intersection property. Note that $\{ f_H \}_{H\in {\pazocal{A}}}$
are distinct primes in the ring $\Sym (3)$. In particular,
$\{ \overline{Q}_v \}_{v\in {\mathcal{M}}(3)}$ are distinct elements of $\mathbb{P} (V^*)$,
since ${\pazocal{A}}_v \cap {\pazocal{A}}_{v'}= \emptyset$ for $v\neq v'$.
We have to show that $\{ v_1,v_2,v_3 \}$ is a dependent set in
${\mathcal{M}}(3)$ if and only if the set $\{ Q_{v_1}, Q_{v_2}, Q_{v_3} \}$
has rank $2$. If $\{ v_1,v_2,v_3 \}$ is a flat in $L_2({\mathcal{M}}(3))$,
the rank property for $\{ Q_{v_1}, Q_{v_2}, Q_{v_3} \}$ follows
from the fact that the partition ${\pazocal{A}}_{v_1} \coprod {\pazocal{A}}_{v_2} \coprod {\pazocal{A}}_{v_3}$
defines a $3$-net, according to \cite[Theorem 3.11]{FY}.
Conversely, let $\{ v_1,v_2,v'_3 \}$ be a size $3$ independent
subset of ${\mathcal{M}}(3)$, so that $v_1+v_2+v'_3 \neq 0$. Consider the flat
$\{ v_1,v_2,v_3 \} \in L_2({\mathcal{M}}(3))$, where $v_1+v_2+v_3 = 0$;
in particular, $v_3 \neq v'_3$. Assume that
$Q_{v'_3}= c_1 Q_{v_1} + c_2 Q_{v_2}$. Pick hyperplanes $H_1 \in {\pazocal{A}}_{v_1}$
and $H_2 \in {\pazocal{A}}_{v_2}$, and let $X= H_1 \cap H_2$.
By the net property, there is a hyperplane $H_3 \in {\pazocal{A}}_{v_3}$ such that
$\{ H_1, H_2, H_3 \} \subseteq {\pazocal{A}}_X$.
Let $x=\overline{X}$ be the intersection point of the projective lines $\overline{H_1}$
and $\overline{H_2}$.
We then have $Q_{v_1}(x)=Q_{v_2}(x)=0$, and thus $Q_{v'_3}(x)=0$.
Therefore, there is a hyperplane $H'_3 \in {\pazocal{A}}_{v'_3} \cap {\pazocal{A}}_X$.
Consequently, the arrangement ${\pazocal{A}}_X$ contains the four distinct hyperplanes
$\{ H_1, H_2, H_3 , H'_3\}$. Hence, the flat $X$ must be monocolor with
respect to the collection $\{ {\pazocal{N}}, {\pazocal{N}}', {\pazocal{N}}'' \}$, that is,
${\pazocal{A}}_X \subseteq {\pazocal{N}}_i \cap {\pazocal{N}}'_j \cap {\pazocal{N}}''_k ={\pazocal{A}}_v$, where
$v=(i,j,k)\in {\mathcal{M}}(3)$. Since $H_i \in {\pazocal{A}}_{v_i}$ for $i=1,2$, we infer
that $v_1=v_2=v$, a contradiction. Our realizability claim is thus verified.
\end{proof}
In view of Theorem \ref{thm=mnotr}, Theorem \ref{thm:main4} will be proved
once we are able to show that the independence property for $\{ {\pazocal{N}}, {\pazocal{N}}', {\pazocal{N}}'' \}$
forces both the strong intersection property and the net property.
\begin{lemma}
\label{lem=upgr1}
For a collection $\{ {\pazocal{N}}^1, \dots, {\pazocal{N}}^m\}$ of $3$-nets on ${\pazocal{A}}$, the intersection
property implies both the strong intersection property and the net property.
\end{lemma}
\begin{proof}
We start with the net property. Let $X=\{ v,v',v'' \}$ be a flat in $L_2({\mathcal{M}}(m))$.
We know that the first two classes of the partition
$({\pazocal{A}}_v , {\pazocal{A}}_{v'} , {\pazocal{A}}_{v''})$ are non-empty. Write $v=(v_a)\in \mathbb{F}_3^m$,
and similarly for $v',v''$. By construction of the matroid ${\mathcal{M}}(m)$,
the elements $\{ v_a,v'_a,v''_a \}$ are either all equal or all distinct,
for any $a\in [m]$.
Pick $H\in {\pazocal{A}}_v$ and $H'\in {\pazocal{A}}_{v'}$. Then
$H\in {\pazocal{N}}^a_{v_a}$ and $H' \in {\pazocal{N}}^a_{v'_a}$, for some $a$ with
$v_a\neq v'_a$, since $v\neq v'$. Hence $\{ H,H',H'' \} \in L_2({\pazocal{A}})$,
for a unique hyperplane $H''$, distinct from $H$ and $H'$, and
which belongs to ${\pazocal{N}}^a_{v''_a}$, by the net property for ${\pazocal{N}}^a$.
We are left with checking that $H''\in {\pazocal{A}}_{v''}$, that is, $H''\in {\pazocal{N}}^b_{v''_b}$
for all $b\in [m]$. If $v_b\neq v'_b$, we may use the previous argument.
If $v_b =v'_b$, then $v''_b =v_b =v'_b$. Note that $X\in L_2({\pazocal{N}}^b_{v_b})$
and therefore $H'' \in {\pazocal{N}}^b_{v_b}$, which is line-closed in ${\pazocal{A}}$,
by Lemma \ref{lem:net props}\eqref{n3}. By the intersection property,
${\pazocal{A}}_{v''}\neq \emptyset$. According to Lemma \ref{lem:lsq}, then,
$\{ {\pazocal{N}}^a\}$ has the net property.
It will be useful later on to extract from the preceding argument
the following implication:
\begin{equation}
\label{eq=impli}
{\pazocal{A}}_v\, ,{\pazocal{A}}_{v'}\neq \emptyset \Rightarrow {\pazocal{A}}_{q(v,v')}\neq \emptyset \, ,
\end{equation}
where $q$ is defined in \eqref{eq=defq}.
We deduce from the net property that
$\abs{{\pazocal{A}}_v}=\abs{{\pazocal{A}}_{v'}}>0$, for any $v\neq v' \in {\mathcal{M}}(m)$, by
using the flat $X=\{ v,v',q(v,v') \}$. The strong intersection property
follows.
\end{proof}
\subsection{A closure operation}
\label{ss73}
We have to analyze the relationship between the independence
and the intersection properties. In one direction, things are easy.
\begin{lemma}
\label{lem=intind}
If a collection $\{ {\pazocal{N}}^1, \dots, {\pazocal{N}}^m\}$ of $3$-nets on ${\pazocal{A}}$ has the
intersection property, the nets are independent.
\end{lemma}
\begin{proof}
We have to show that $\lambda_{\mathbb{F}_3}({\pazocal{N}}^1),\dots,\lambda_{\mathbb{F}_3}({\pazocal{N}}^m)$ and
$\sigma$ are independent in $\mathbb{F}_3^{{\pazocal{A}}}$. Note that
$\lambda_{\mathbb{F}_3}({\pazocal{N}}^a)_H= v^a$, by construction, for any
$v=(v^1,\dots,v^m)\in \mathbb{F}_3^m$ and $H\in {\pazocal{A}}_v$. Independence
then follows exactly as in the proof of Proposition \ref{prop=mb3}.
\end{proof}
For the converse, we will need certain natural closure operators
on the matroids ${\mathcal{M}}(m)$, defined as follows. For a subset
${\mathcal{M}} \subseteq {\mathcal{M}}(m)$, set $C{\mathcal{M}}= \{ q(u,v) \mid u,v\in {\mathcal{M}}\}$.
Then iterate and define $\overline{C}{\mathcal{M}}= \bigcup_{s\ge 1} C^s {\mathcal{M}}$.
Note that the operators $C$, $C^s$, and $\overline{C}$ depend
only on the matroid structure of ${\mathcal{M}}(m)$, as follows from
Lemma \ref{lem=mflats}.
\begin{lemma}
\label{lem=cprop}
For each $m\ge 1$, the following hold.
\begin{enumerate}
\item \label{cp1}
For any ${\mathcal{M}} \subseteq {\mathcal{M}}(m)$, ${\mathcal{M}} \subseteq C{\mathcal{M}} \subseteq \overline{C}{\mathcal{M}}$.
\item \label{cp2}
If ${\mathcal{M}} \subseteq {\mathcal{M}}'$, then $C^s{\mathcal{M}} \subseteq C^s{\mathcal{M}}'$ for all $s$,
and $\overline{C}{\mathcal{M}} \subseteq \overline{C}{\mathcal{M}}'$.
\item \label{cp3}
If ${\mathcal{M}} \subseteq {\mathcal{M}}'$ and the submatroid ${\mathcal{M}}'$ is line-closed in ${\mathcal{M}}(m)$, then
$C^s{\mathcal{M}}$ and $\overline{C}{\mathcal{M}}$ coincide, when computed in ${\mathcal{M}}(m)$ and ${\mathcal{M}}'$.
\end{enumerate}
\end{lemma}
\begin{proof}
Follows by easy induction on $s$, using the definitions.
\end{proof}
Given a collection $\{ {\pazocal{N}}^a \}_{a\in [m]}$ of $3$-nets on ${\pazocal{A}}$,
set ${\mathcal{M}}= \{ v\in {\mathcal{M}}(m) \mid {\pazocal{A}}_v \neq \emptyset \}$. We deduce from
\eqref{eq=impli} the following characterization of the intersection property.
\begin{corollary}
\label{cor=cprop}
The family $\{ {\pazocal{N}}^1,\dots, {\pazocal{N}}^m \}$ has the intersection property
if and only if $\overline{C}{\mathcal{M}}= {\mathcal{M}}(m)$.
\end{corollary}
For $m=1$, it is clear that independence implies the intersection property.
We need to establish this implication for $m=3$, starting with the case $m=2$.
In order to minimize the amount of subcase analysis, it is useful to make a couple of
elementary remarks on matroid symmetry in the family $\{ {\mathcal{M}}(m)\}_{m\ge 1}$.
Clearly, $\Aut ({\mathcal{M}}(1))= \Sigma_3$. It is equally clear that a partition $[m]=[n] \coprod [n']$
induces a natural morphism, $\Aut ({\mathcal{M}}(n)) \times \Aut ({\mathcal{M}}(n')) \to \Aut ({\mathcal{M}}(m))$.
We will need more details for $m=2$.
Let $X= \{v,v',v''\}$ be a size $3$ subset of $\mathbb{F}_3^2$. It is easy to see that
$X$ is dependent if and only if $X= \{ (i,0), (i,1), (i,2) \}$, or $X= \{ (0,j), (1,j), (2,j) \}$,
or $X= \{ (i,gi) \mid i\in \mathbb{F}_3 \}$, for some $g\in \Sigma_3$.
Now assume that $X= \{ (i,j), (i',j'), (i'',j'') \}$ is independent. Modulo
$\Sigma_2 \subseteq \GL_2$, we may assume that $\abs{\{ i,i',i'' \}}=2$.
By $(\Sigma_3 \times \id)$-symmetry, we may normalize this
to $i=i'=0$ and $i''=1$, hence $j\neq j'$. If $\abs{\{ j,j',j'' \}}=3$, the flat
$X$ is normalized to $\{ (0,0), (0,1), (1,2) \}$, by $(\id \times \Sigma_3)$-symmetry.
Otherwise, $X= \{ (0,0), (0,1), v'' \}$, with $v''=(1,0)$ or $v''=(1,1)$, and these
two cases are $\GL_2$-conjugate, as well as $\{ (0,0), (0,1), (1,0) \}$ and
$\{ (0,0), (0,1), (1,2) \}$. To sum up, any independent subset of size $3$
can be put in the normal form $\{ (0,0), (0,1), (1,0) \}$, modulo $\Aut ({\mathcal{M}}(2))$.
\begin{lemma}
\label{lem=prelm2}
Let ${\mathcal{M}} \subseteq {\mathcal{M}}(2)$ be a submatroid with at least $3$ elements.
\begin{enumerate}
\item \label{prm1}
If $\abs{{\mathcal{M}}}=3$ and ${\mathcal{M}}$ is independent, then $\overline{C}{\mathcal{M}}= {\mathcal{M}}(2)$.
\item \label{prm2}
If $\abs{{\mathcal{M}}}\ge 4$, then $\overline{C}{\mathcal{M}}= {\mathcal{M}}(2)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part \eqref{prm1}. First, put ${\mathcal{M}}$ in normal form, as explained above.
Then compute
\begin{align*}
&q((0,0), (0,1))= (0,2), & q((0,0), (1,0))= (2,0), && q((0,2), (2,0))= (1,1),\\[-2pt]
& q((1,0), (1,1))= (1,2), & q((0,1), (1,1))= (2,1), && q((0,0), (1,1))= (2,2),
\end{align*}
and note that all the resulting values of $q$ belong to $\overline{C}{\mathcal{M}}$.
Part \eqref{prm2}. Pick a size $4$ subset $\{ v_1,v_2,v_3,v_4 \} \subseteq {\mathcal{M}}$.
Then $\{ v_1,v_2,v_3\}$ and $\{ v_1,v_2,v_4\}$ cannot be both dependent.
Our claim follows from part \eqref{prm1} and Lemma \ref{lem=cprop}\eqref{cp2}.
\end{proof}
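The closure computation in part \eqref{prm1} is small enough to check mechanically. A short sketch (Python; the function names are ours) iterates the operator $C$, with $q(u,v)=-u-v$ as in Lemma \ref{lem=mflats}, starting from the normal form $\{(0,0),(0,1),(1,0)\}$:

```python
from itertools import product

def q(u, v):
    # third point on the line through u and v in F_3^2: u + v + q(u, v) = 0
    return tuple((-a - b) % 3 for a, b in zip(u, v))

def closure(M):
    # iterate the operator C(M) = { q(u, v) : u, v in M } until it stabilizes;
    # note q(u, u) = u over F_3, so M is contained in C(M) automatically
    M = set(M)
    while True:
        new = {q(u, v) for u in M for v in M}
        if new == M:
            return M
        M = new

# part (1): the normal form of an independent 3-subset generates all of M(2)
assert closure({(0, 0), (0, 1), (1, 0)}) == set(product(range(3), repeat=2))
```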
We will need to know the behavior of the map
$\lambda_{\mathbb{F}_3} \colon \{\text{$3$-nets on ${\pazocal{A}}$}\} \to Z_{\mathbb{F}_3}({\pazocal{A}})\setminus B_{\mathbb{F}_3}({\pazocal{A}})$
with respect to the natural $\Sigma_3$-action on $3$-nets. The description below does not require
the realizability of the matroid ${\pazocal{A}}$.
Denote by $\sigma \in B_{\mathbb{F}_3}({\pazocal{A}})$ the constant cocycle equal to $1$ on ${\pazocal{A}}$, as usual.
Let $g\in \Sigma_3$ be the $3$-cycle
$(1,2,0)$ and let $h\in \Sigma_3$ be the transposition $(1,2)$, both acting on $\mathbb{F}_3$.
It is readily checked that, for any $3$-net ${\pazocal{N}}$ on ${\pazocal{A}}$,
\begin{equation}
\label{eq=actions}
\lambda_{\mathbb{F}_3} (g\cdot {\pazocal{N}})= \sigma+ \lambda_{\mathbb{F}_3} ({\pazocal{N}}) \quad\text{and}\quad
\lambda_{\mathbb{F}_3} (h\cdot {\pazocal{N}})= - \lambda_{\mathbb{F}_3} ({\pazocal{N}}).
\end{equation}
We may now settle the case $m=2$.
\begin{prop}
\label{prop=indint2}
If ${\pazocal{N}}$ and ${\pazocal{N}}'$ are independent $3$-nets on ${\pazocal{A}}$, then the pair
$\{ {\pazocal{N}},{\pazocal{N}}'\}$ has the intersection property.
\end{prop}
\begin{proof}
Plainly, for any $i\in \mathbb{F}_3$ there is a $j\in \mathbb{F}_3$ such that
${\pazocal{A}}_{(i,j)}= {\pazocal{N}}_i \cap {\pazocal{N}}'_j \neq \emptyset$ and similarly,
for any $j\in \mathbb{F}_3$ there is an $i\in \mathbb{F}_3$ such that
${\pazocal{A}}_{(i,j)} \neq \emptyset$. In particular, $\abs{{\mathcal{M}}}\ge 3$. If
either $\abs{{\mathcal{M}}}\ge 4$, or $\abs{{\mathcal{M}}}= 3$ and ${\mathcal{M}}$ is independent,
we are done, in view of Lemma \ref{lem=prelm2} and Corollary \ref{cor=cprop}.
Assume then that $\abs{{\mathcal{M}}}= 3$ and ${\mathcal{M}}$ is dependent. According to
a previous remark, ${\mathcal{M}}= \{ (i,gi) \mid i\in \mathbb{F}_3 \}$, for some $g\in \Sigma_3$.
For any $i\in \mathbb{F}_3$, we infer that ${\pazocal{N}}_i= \coprod_j {\pazocal{N}}_i \cap {\pazocal{N}}'_j \subseteq {\pazocal{N}}'_{gi}$.
Therefore, ${\pazocal{N}}= g \cdot {\pazocal{N}}'$. It follows from \eqref{eq=actions}
that $[\lambda_{\mathbb{F}_3}({\pazocal{N}})]= \pm [\lambda_{\mathbb{F}_3}({\pazocal{N}}')]$,
in contradiction with our independence assumption.
\end{proof}
\begin{corollary}
\label{cor=indint2}
Assume $L_2({\pazocal{A}})$ contains no flats of multiplicity $3r$, with $r>1$. Then
$\beta_3({\pazocal{A}})\ge 2$ if and only if there exist two $3$-nets on ${\pazocal{A}}$,
with parts $\{ {\pazocal{N}}_i \}$ and $\{ {\pazocal{N}}'_j \}$, respectively,
such that ${\pazocal{N}}_i \cap {\pazocal{N}}'_j \neq \emptyset$, for all $i,j\in \mathbb{F}_3$.
\end{corollary}
\begin{proof}
Follows from Lemma \ref{lema=lambdabij}, Lemma \ref{lem:lambda}, Lemma \ref{lem=intind}
and Proposition \ref{prop=indint2}.
\end{proof}
\subsection{A bound on $\beta_3({\pazocal{A}})$}
\label{ss74}
In this last subsection, we complete the proof of Theorem \ref{thm:main1}
from the Introduction. We start by analyzing the critical case, $m=3$.
Let $\{ {\pazocal{N}},{\pazocal{N}}', {\pazocal{N}}''\}$ be a triple of $3$-nets on an arrangement ${\pazocal{A}}$,
with parts $\{ {\pazocal{N}}_i \}$, $\{ {\pazocal{N}}'_j \}$, and $\{ {\pazocal{N}}''_k \}$.
For a fixed $k\in \mathbb{F}_3$, set ${\mathcal{M}}_k= \{ u\in {\mathcal{M}}(2) \mid (u,k)\in {\mathcal{M}} \}$. Hence,
${\mathcal{M}} = \coprod_{k\in \mathbb{F}_3} {\mathcal{M}}_k \times \{k \}$, where ${\mathcal{M}}_k$
is identified with ${\mathcal{M}} \cap ({\mathcal{M}} (2) \times \{k \})$, and all submatroids
${\mathcal{M}} (2) \times \{k \}$ are line-closed in ${\mathcal{M}}(3)$ and isomorphic to ${\mathcal{M}}(2)$,
as follows from Lemma \ref{lem=mnets}.
We first exploit the independence property.
\begin{lemma}
\label{lem=ind3}
If ${\pazocal{N}}$, ${\pazocal{N}}'$, and ${\pazocal{N}}''$ are independent $3$-nets, then there is no
$k\in \mathbb{F}_3$ such that ${\mathcal{M}}_k$ is a size $3$ dependent subset of ${\mathcal{M}}(2)$.
\end{lemma}
\begin{proof}
Assuming the contrary, let $h''_k \colon Z_{\mathbb{F}_3}({\pazocal{A}}) \to Z_{\mathbb{F}_3}({\pazocal{A}}''_k)$
be the canonical homomorphism associated to the net ${\pazocal{N}}''$,
as in Lemma \ref{lem:proj}. We know from Proposition \ref{prop:ker1}
that $\ker(h''_k)$ is $1$-dimensional. Let $h$ be
the restriction of $h''_k$ to the $4$-dimensional subspace of $Z_{\mathbb{F}_3}({\pazocal{A}})$
spanned by $\lambda_{\mathbb{F}_3}({\pazocal{N}})$, $\lambda_{\mathbb{F}_3}({\pazocal{N}}')$, $\lambda_{\mathbb{F}_3}({\pazocal{N}}'')$
and $\sigma$, a subspace we shall denote by $Z$.
Note that all these $4$ elements of $\mathbb{F}_3^{{\pazocal{A}}}$ are constant on ${\pazocal{A}}_v$,
for any $v\in {\mathcal{M}}$, by construction. We deduce from our assumption on
${\mathcal{M}}_k$ that ${\pazocal{A}}''_k$ is of the form
\[
{\pazocal{A}}''_k= {\pazocal{A}}_{(u_1,k)} \coprod {\pazocal{A}}_{(u_2,k)} \coprod {\pazocal{A}}_{(u_3,k)},
\]
with $u_1+u_2+u_3=0 \in \mathbb{F}_3^2$. As noted before, composing $h$ with
the restriction maps from ${\pazocal{A}}''_k$ to ${\pazocal{A}}_{(u_i,k)}$ gives three linear maps,
denoted $r_i \colon Z\to \mathbb{F}_3$. Moreover, $\ker (h)=\ker (r)$, where
$r=(r_1\: r_2\: r_3)\colon Z \to \mathbb{F}_3^3$.
Since $\lambda_{\mathbb{F}_3} ({\pazocal{N}})\equiv i$ on ${\pazocal{N}}_i$, and similarly for ${\pazocal{N}}'$ and ${\pazocal{N}}''$,
we infer that the matrix of $r$ is $\left( \begin{smallmatrix}
u_1 & u_2 & u_3 \\ k & k & k\\ 1 & 1 & 1 \end{smallmatrix}\right)$.
The fact that $u_1+u_2+u_3=0$ implies that $\dim \ker (r)\ge 2$.
Therefore, $\dim \ker (h''_k)\ge 2$, a contradiction.
\end{proof}
Here is the analog of Proposition \ref{prop=indint2}.
\begin{prop}
\label{prop=indint3}
If ${\pazocal{N}}$, ${\pazocal{N}}'$, ${\pazocal{N}}''$ are independent $3$-nets on ${\pazocal{A}}$, then this triple
of nets has the intersection property.
\end{prop}
\begin{proof}
By Corollary \ref{cor=cprop}, we have to show that $\overline{C}{\mathcal{M}}= {\mathcal{M}}(3)$.
Due to Proposition \ref{prop=indint2}, we know that
${\pazocal{N}}_i \cap {\pazocal{N}}'_j \neq \emptyset$, for all $i,j\in \mathbb{F}_3$. We deduce that,
for any $u\in {\mathcal{M}}(2)$, there is $k\in \mathbb{F}_3$ such that $(u,k)\in {\mathcal{M}}$.
In particular, $\abs{{\mathcal{M}}}\ge 9$.
Similarly, for any $u\in {\mathcal{M}}(2)$, there is a $k\in \mathbb{F}_3$ such that $(k,u)\in {\mathcal{M}}$,
by using ${\pazocal{N}}'$ and ${\pazocal{N}}''$. This shows that we cannot have
${\mathcal{M}} \subseteq {\mathcal{M}}(2) \times \{ k\}$, for some $k$, by taking $u=(j',k')$,
with $k'\neq k$.
We claim that if ${\mathcal{M}}(2) \times \{ k\} \subseteq \overline{C}{\mathcal{M}}$,
for some $k\in \mathbb{F}_3$, then we are done. Indeed, pick $(u',k')\in {\mathcal{M}}$
with $k'\neq k$, take an arbitrary element $u\in {\mathcal{M}}(2)$, and compute
$q((u,k),(u',k'))= (-u-u', k'')\in \overline{C}{\mathcal{M}}$, where $k''$ is the
third element of $\mathbb{F}_3$. This shows that
${\mathcal{M}}(2) \times \{ k''\} \subseteq \overline{C}{\mathcal{M}}$. Again,
$q((u,k''),(u',k))= (-u-u', k')\in \overline{C}{\mathcal{M}}$, for all $u,u' \in {\mathcal{M}}(2)$.
Hence, ${\mathcal{M}}(2) \times \{ k'\} \subseteq \overline{C}{\mathcal{M}}$, and consequently
$\overline{C}{\mathcal{M}}= {\mathcal{M}}(3)$, as claimed.
If $\abs{{\mathcal{M}}_k}\ge 4$ for some $k\in \mathbb{F}_3$, then ${\mathcal{M}}(2) \times \{ k\}
\subseteq \overline{C}{\mathcal{M}}$, by Lemma \ref{lem=prelm2}\eqref{prm2}
and Lemma \ref{lem=cprop}\eqref{cp2}--\eqref{cp3}.
Otherwise, $\abs{{\mathcal{M}}_k}=3$, for all $k$. If ${\mathcal{M}}_k$ is
independent in ${\mathcal{M}}(2)$ for some $k$,
Lemma \ref{lem=prelm2}\eqref{prm1} implies as before that
${\mathcal{M}}(2) \times \{ k\} \subseteq \overline{C}{\mathcal{M}}$. The case when each
${\mathcal{M}}_k$ is dependent in ${\mathcal{M}}(2)$ is ruled out by Lemma \ref{lem=ind3}.
Our proof is now complete.
\end{proof}
Putting together Proposition \ref{prop=indint3}, Lemma \ref{lem=upgr1},
Theorem \ref{thm=3mnets}, and Theorem \ref{thm=mnotr}, we obtain
the following corollary.
\begin{corollary}
\label{cor=t16gral}
No arrangement ${\pazocal{A}}$ supports a triple of $3$-nets ${\pazocal{N}},{\pazocal{N}}',{\pazocal{N}}''$ such that
$[\lambda_{\mathbb{F}_3}({\pazocal{N}})]$, $[\lambda_{\mathbb{F}_3}({\pazocal{N}}')]$, and $[\lambda_{\mathbb{F}_3}({\pazocal{N}}'')]$
are independent in $Z_{\mathbb{F}_3}({\pazocal{A}})/B_{\mathbb{F}_3}({\pazocal{A}})$.
\end{corollary}
We are finally in a position to prove Theorem \ref{thm:main4}, and
thus complete the proof of Theorem \ref{thm:main1} in the Introduction.
\begin{proof}[Proof of Theorem \ref{thm:main4}]
By assumption, $L_2({\pazocal{A}})$ has no flats of multiplicity properly
divisible by $3$. Hence, by Lemma \ref{lema=lambdabij} and
Lemma \ref{lem:lambda}, the image of the map
$\lambda_{\mathbb{F}_3}$ is $Z_{\mathbb{F}_3}({\pazocal{A}}) \setminus B_{\mathbb{F}_3}({\pazocal{A}})$.
Therefore, by Corollary \ref{cor=t16gral},
we must have $\beta_3({\pazocal{A}})\le 2$.
\end{proof}
\begin{ack}
The foundational work for this paper was started while the two authors
visited the Max Planck Institute for Mathematics in Bonn in April--May 2012.
The work was pursued while the second author visited the Institute of
Mathematics of the Romanian Academy in June, 2012 and
June, 2013, and MPIM Bonn in September--October 2013.
Thanks are due to both institutions for their hospitality,
support, and excellent research atmosphere.
A preliminary version of some of the results in this paper was presented
by the first author in an invited address, titled {\em Geometry of homology
jump loci and topology}, and delivered at the Joint International Meeting of
the American Mathematical Society and the Romanian Mathematical
Society, held in Alba Iulia, Romania, in June 2013. He thanks the
organizers for the opportunity.
The construction of the matroid family from Section \ref{sec:matr}
emerged from conversations with Anca M\u{a}cinic, to whom we are grateful.
We also thank Masahiko Yoshinaga, for a useful discussion related to
Lemma \ref{lem:lambda}, which inspired us to obtain Theorem \ref{teo=lambdaintro}.
\end{ack}
\newcommand{\arxiv}[1]
{\texttt{\href{http://arxiv.org/abs/#1}{arxiv:#1}}}
\newcommand{\arx}[1]
{\texttt{\href{http://arxiv.org/abs/#1}{arXiv:}}
\texttt{\href{http://arxiv.org/abs/#1}{#1}}}
\newcommand{\doi}[1]
{\texttt{\href{http://dx.doi.org/#1}{doi:#1}}}
\renewcommand{\MR}[1]
{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{MR#1}}
\section{Introduction}
Stencil computations are widely used in numerical algorithms that involve structured grids. In acoustic field simulation based on two- or three-dimensional transient wave equations, the finite difference time domain method is a standard approach leading to stencil operations \cite{Hamilton2015}. In an explicit time-stepping finite difference scheme, the solution value at each point of a time-space grid is calculated as a linear combination of values at its spatial neighbors from previous time steps. Among such schemes, the main attention in the literature has been given to two-step schemes, which operate over three time levels $t_{k+1}=(k+1)\tau$, $t_{k}=k\tau$ and $t_{k-1}=(k-1)\tau$, where $\tau$ is a fixed time increment. They have been intensively studied and reviewed in many articles and books (see, e.g., $^{2-7}$).
Explicit two-step numerical schemes for the scalar wave equation can also be devised based on spherical means representations, as is done in \cite{Alpert2000}, where an integral evolution formula with three time levels is derived. For the 2D case, the evolution formula has been implemented in \cite{Li2004} and \cite{Hagstrom2015} using piecewise polynomial interpolation in 2D mesh cells and numerical integration.
The two-step schemes pose some challenges when imposing the initial conditions. To calculate the value of a sought solution $u$ at the first time level $t_1$, one needs values of $u$ from the time levels $t_0$ and $t_{-1}$. The initial condition for $u$ provides the required values at $t_0$; however, $u({\bf x},t_{-1})$ must be inferred from the other initial condition. The two-step scheme therefore needs to be transformed into a one-step form at the first time step, as discussed, e.g., in \cite{Strikwerda2004} and \cite{Langtangen2017}. A conventional approach for the 2D case (described, e.g., in \cite{Langtangen2017}) uses the central difference in time to approximate the initial condition for $v=\partial u / \partial t$: for any point ${\bf x}$ in $\mathbb{R}^2$, the value of $u({\bf x},t_{-1})$ is inferred as $u({\bf x},t_1)-2 \tau v({\bf x},0)$. Although this approach is attractive due to its simplicity, it is worthwhile to derive the numerical schemes for both the first time step and the subsequent steps using a unified approach.
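As a concrete illustration of this conventional approach (a minimal Python sketch under our own naming, not one of the schemes derived below): substituting $u({\bf x},t_{-1})=u({\bf x},t_1)-2\tau v({\bf x},0)$ into the standard five-point two-step update yields the one-step formula $u^1=u^0+\tau v^0+\tfrac12 C^2 L[u^0]$, where $C=c\tau/h$ is the Courant number and $L$ is the undivided five-point Laplacian.

```python
import numpy as np

def five_point_lap(u):
    # undivided five-point Laplacian L[u] on a uniform periodic grid
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def first_step(u0, v0, tau, h, c):
    # one-step start-up: u^1 = u^0 + tau*v^0 + (C^2/2) L[u^0]
    C2 = (c * tau / h) ** 2
    return u0 + tau * v0 + 0.5 * C2 * five_point_lap(u0)

def step(u_prev, u_curr, tau, h, c):
    # two-step leapfrog update: u^{n+1} = 2u^n - u^{n-1} + C^2 L[u^n]
    C2 = (c * tau / h) ** 2
    return 2.0 * u_curr - u_prev + C2 * five_point_lap(u_curr)
```

For the standing wave $u_0=\cos x_1 \cos x_2$, $v_0=0$ on a periodic grid (exact solution $u_0\cos(\sqrt{2}\,ct)$), the start-up step reproduces the exact solution to the expected second order of accuracy.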
The present work focuses on building explicit time-stepping stencil computation schemes for the transient 2D acoustic (scalar) wave equation using spherical means formulas, including Poisson's formula \cite{Evans1998} and a similar integral formula involving three time levels \cite{Alpert2000}, both described in section 2. A general form of the implemented time-stepping algorithm is presented in section 3, where two different integral expressions are given: one for the first time step and one for the subsequent steps. Sections 4 and 5 review known results on polynomial interpolation on stencils and on exact integration that are needed for deriving explicit stencil computation schemes from these integral expressions. Particular numerical schemes for five-, nine- and 13-point stencils are obtained in section 6. For each scheme, two separate expressions are derived: 1) a one-step expression for the first time step, based on the stencil-interpolated initial conditions; 2) a two-step expression for the second and subsequent time steps. None of the obtained stencil expressions for the first time step has previously been presented in the literature, and the derived two-step expression for the nine-point stencil is also new. In contrast, the obtained two-step expressions for the five- and 13-point stencils coincide with the corresponding finite-difference stencil expressions. Simulations show that the derived numerical schemes can significantly improve the accuracy of stencil computations in comparison with conventional approaches.
\section{Representation Formulas}
Consider a Cauchy problem for the transient 2D scalar wave equation
\begin{gather} \label{we1}
\frac{\partial^2 u}{\partial t^2} -
c^2 \left( \frac{\partial^2 u}{\partial x_1^2}
+ \frac{\partial^2 u}{\partial x_2^2}
\right)=0,
\quad
u=u({\bf x},t), \quad {\bf x}=(x_1,x_2) \mbox{ in } \mathbb{R}^2,
\\
u|_{t=0}=u_0({\bf x}), \quad \frac{\partial u}{\partial t}|_{t=0} =v_0({\bf x}).
\end{gather}
Its solution is given by the representation formula, often called Poisson's formula for the 2D wave equation \cite{Evans1998}:
\begin{equation}
u({\bf x},t)=\frac{1}{2\pi c t^2} \int_{|{\bf y}-{\bf x}|^2<c^2 t^2}
\frac{t u_0({\bf y})+t \ \nabla u_0({\bf y}) \cdot ({\bf y} - {\bf x}) +t^2 v_0({\bf y}) }{\sqrt{c^2 t^2-|{\bf y}-{\bf x}|^2}} d y_1 d y_2
\label{RF1}
\end{equation}
where $\nabla$ denotes the gradient operator and ${\bf y}=(y_1,y_2) \in \mathbb{R}^2$ is a variable of integration.
Rewriting the previous integral on the unit disk, we get
\begin{equation} \label{RF2}
u({\bf x},t)=\frac{1}{2\pi} \int_{|{\bf z}|<1}
\frac{u_0({\bf x}+c t {\bf z})+c t \nabla u_0({\bf x}+c t {\bf z}) \cdot {\bf z} }{\sqrt{1-|{\bf z}|^2}} d z_1 d z_2
+\frac{t}{2\pi} \int_{|{\bf z}|<1}
\frac{v_0({\bf x}+c t {\bf z})}{\sqrt{1-|{\bf z}|^2}} d z_1 d z_2
\end{equation}
where ${\bf z}=(z_1,z_2) \in \mathbb{R}^2$ is a new variable of integration.
Formulas \eqref{RF1} and \eqref{RF2} are also valid for negative $t$, which can be proved using the time-reversal property of the wave equation.
By substituting $-t$ for $t$ in formula \eqref{RF2} and changing ${\bf z}$ to $-{\bf z}$ inside its integrals, we obtain the following analog of that formula for negative time:
\begin{equation} \label{RF3}
u({\bf x},-t)=\frac{1}{2\pi} \int_{|{\bf z}|<1}
\frac{u_0({\bf x}+c t {\bf z})+c t \nabla u_0({\bf x}+c t {\bf z}) \cdot {\bf z} }{\sqrt{1-|{\bf z}|^2}} d z_1 d z_2
-\frac{t}{2\pi} \int_{|{\bf z}|<1}
\frac{v_0({\bf x}+c t {\bf z})}{\sqrt{1-|{\bf z}|^2}} d z_1 d z_2.
\end{equation}
The only difference between the right-hand sides of \eqref{RF2} and \eqref{RF3} is the opposite signs of the second term. So, one can eliminate this term by summing \eqref{RF2} and \eqref{RF3}. Shifting the initial moment in the resulting formula from $t=0$ to $t=t_*$, we obtain the following expression involving three time points with a time increment $\tau$:
\begin{equation}
u({\bf x},t_* + \tau)+u({\bf x},t_* - \tau)=\frac{1}{\pi} \int_{|{\bf z}|<1}
\frac{u({\bf x}+c \tau {\bf z},t_*)+c \tau \nabla u({\bf x}+c \tau {\bf z},t_*) \cdot {\bf z} }{\sqrt{1-|{\bf z}|^2}} d z_1 d z_2
\label{RF4}
\end{equation}
where the right-hand side does not include the time derivative $v$. The same representation formula (in a different form) has been derived in \cite{Alpert2000} without using Poisson's formula.
Both formulas \eqref{RF2} and \eqref{RF4} will be used below to build a time-marching stencil computation algorithm for the wave equation.
\section{An Integral Time-Stepping Algorithm}
Consider a uniform time grid $\{t_0=0, t_1=\tau,\ldots, t_k=k\tau,\ldots\}$ where $\tau$ is a fixed time-step. Denote by $u_k({\bf x})$ the restriction of $u({\bf x},t)$ to a moment $t=t_k$.
Next, denote by $A({\bf x},\tau)$ and $B({\bf x},\tau)$ the following integral operators acting on continuous functions defined in $\mathbb{R}^2$:
\begin{equation}
A({\bf x},\tau)f(\cdot)=\frac{1}{2 \pi} \int_{|{\bf z}|<1}
\frac{f({\bf x}+c \tau {\bf z})+c \tau \nabla f({\bf x}+c \tau {\bf z}) \cdot {\bf z} }{\sqrt{1-|{\bf z}|^2}} d z_1 d z_2,
\label{Op1}
\end{equation}
\begin{equation}
B({\bf x},\tau)f (\cdot)=\frac{\tau}{2 \pi} \int_{|{\bf z}|<1}
\frac{f({\bf x}+c \tau {\bf z})}{\sqrt{1-|{\bf z}|^2}} d z_1 d z_2.
\label{Op2}
\end{equation}
The time-stepping algorithm proposed here consists of two procedures based on the representation formulas \eqref{RF2} and \eqref{RF4}:
1) \textit{The procedure for the first time-step} which, according to \eqref{RF2}, calculates $u_1({\bf x})$ as
\begin{equation}
u_1({\bf x})=A({\bf x},\tau)u_0(\cdot)+B({\bf x},\tau)v_0(\cdot), \quad {\bf x} \in \mathbb{R}^2;
\label{A1}
\end{equation}
2) \textit{The procedure for the second and next time-steps} which, using \eqref{RF4} for $t_*=t_k$, calculates $u_{k+1}({\bf x})$ as
\begin{equation}
u_{k+1}({\bf x})=2 A({\bf x},\tau)u_k(\cdot)-u_{k-1}({\bf x}), \quad {\bf x} \in \mathbb{R}^2, \quad k=1,2,\ldots.
\label{A2}
\end{equation}
While formula \eqref{A2} involves three time levels (two time-steps), formula \eqref{A1} for the first time-step involves only two time levels (one time-step), without using any finite-difference approximation of the time derivative.
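In outline, the two procedures \eqref{A1} and \eqref{A2} define a simple time-marching skeleton. The Python sketch below illustrates this structure; \verb|apply_A| and \verb|apply_B| are hypothetical callables standing in for discretizations of the integral operators \eqref{Op1} and \eqref{Op2}, and all names are ours.

```python
import math  # used in the consistency check below

def march(apply_A, apply_B, u0, v0, n_steps):
    """Two-procedure time-marching skeleton: the first step computes
    u_1 = A u_0 + B v_0, and every later step uses the two-step
    recurrence u_{k+1} = 2 A u_k - u_{k-1}.  apply_A and apply_B are
    hypothetical callables acting on a flat list of grid values."""
    u_prev = list(u0)
    u_curr = [a + b for a, b in zip(apply_A(u0), apply_B(v0))]  # first step
    for _ in range(1, n_steps):                                 # next steps
        u_next = [2.0 * a - p for a, p in zip(apply_A(u_curr), u_prev)]
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

As a consistency check: for a single time-harmonic mode $u=\sin(\omega t)\varphi({\bf x})$ of the exact wave equation, the three-level identity shows that $A$ acts as multiplication by $\cos(\omega\tau)$, while $B$ maps $v_0=\omega\varphi$ to $\sin(\omega\tau)\varphi$, so the skeleton reproduces $\sin(k\omega\tau)\varphi$ exactly.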
\section{Using Polynomial Interpolation on Stencils}
Consider a two-dimensional uniform Cartesian grid $\{(i_1h,i_2h)\}$ where $i_1$ and $i_2$ are integers, and $h$ is the grid spacing in both directions $x_1$ and $x_2$. Suppose that the evaluation point ${\bf x}$ in formulas \eqref{A1} and \eqref{A2} is a grid point ${\bf x}_{ij}=(ih,jh)$. Our intention is to choose a stencil in the Cartesian grid and reduce the integral formulas \eqref{A1} and \eqref{A2} to linear combinations of the stencil node values of $u_k({\bf x})\, (k=0,1,2,\ldots)$ and $v_0({\bf x})$. Such a reduction will be done by using polynomial interpolation.
Assume that a particular stencil with $m$ nodes is chosen for polynomial interpolation. The corresponding index set $\{(q_1,q_2)\}$ is denoted by $Q_m$. The stencil index components $q_1$ and $q_2$ are numbered relative to the reference point located at the evaluation point ${\bf x}_{ij}$. So, polynomial interpolation in a neighborhood of the evaluation point ${\bf x}_{ij}$ will be carried out using the interpolation points
\begin{equation}
{\bf x}_{i,j}+{\bf x}_{q_1,q_2}={\bf x}_{i+q_1,j+q_2}, \quad (q_1,q_2) \in Q_m.
\end{equation}
Following \cite{McKinney1972} we associate with $Q_m$ a set of $m$ distinct bivariate monomials
\begin{equation} \label{McKinney}
\mathcal{M}_m= \{x_1^{\alpha(q_1)} x_2^{\alpha(q_2)}, (q_1,q_2) \in Q_m \}
\end{equation}
where
\begin{equation} \label{alpha}
\alpha(q)=
\begin{dcases}
2|q|-1 & \text{if } q < 0 \\
2q & \text{if } q\geq 0.
\end{dcases}
\end{equation}
The above function has a unique inverse function
\begin{equation} \label{invMcKinney}
q(\alpha)=(-1)^\alpha \left[\frac{\alpha+1}{2}\right]
\end{equation}
where $[\cdot]$ denotes the integer part.
So, for each index value $(q_1,q_2) \in Q_m$ there exists a unique monomial from $\mathcal{M}_m$ and vice versa.
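The correspondence \eqref{alpha}--\eqref{invMcKinney} is straightforward to verify computationally; a minimal Python sketch (the function names are ours):

```python
def alpha(q):
    """Map a stencil index component q to a monomial exponent:
    negative indices go to odd exponents, non-negative ones to even."""
    return 2 * abs(q) - 1 if q < 0 else 2 * q

def q_of(a):
    """Inverse map; // computes the integer part of (a + 1)/2."""
    return (-1) ** a * ((a + 1) // 2)
```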
Consider a polynomial space $\mathcal{P}_m$ spanned by $\mathcal{M}_m$. We will use only those stencils for which the Lagrange interpolation problem is unisolvent in $\mathcal{P}_m$ (see \cite{Gasca2012}). In this case, there exists a Lagrange basis for $\mathcal{P}_m$ that can be built as described below.
Suppose that there is an ordering imposed on the monomials in $\mathcal{M}_m$
\begin{equation} \label{monomials}
\{\mu_1({\bf x}), \ldots, \mu_m({\bf x})\}.
\end{equation}
The corresponding stencil nodes are numbered accordingly using \eqref{McKinney}-\eqref{invMcKinney}:
\begin{equation} \label{nodes}
\{{\bf x}_{(1)}, ..., {\bf x}_{(m)}\}.
\end{equation}
Thus, one can compute the following matrix \cite{Gasca2012}:
\begin{equation} \label{matrix1}
D=\left[\mu_s({\bf x}_{(r)})\right]_{m \times m}
\end{equation}
If this matrix is non-singular, i.e., $\det(D) \ne 0$, which means that the Lagrange basis exists, then the inverse matrix
\begin{equation} \label{matrix2}
C=\left[c_{sr}\right]_{m \times m}=D^{-1}
\end{equation}
can be calculated. Its components are instrumental in expressing the Lagrange basis functions through the chosen set of monomials:
\begin{equation} \label{Lagrange}
L_s({\bf x})=\sum_{r=1}^m c_{sr} \mu_r ({\bf x}), \quad s=1,\ldots,m.
\end{equation}
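The construction \eqref{matrix1}--\eqref{Lagrange} is easy to mechanize. The Python sketch below inverts $D$ exactly with rational arithmetic via Gauss-Jordan elimination (the function name is ours; the grid spacing is normalized to $h=1$, and a singular $D$ raises an exception):

```python
from fractions import Fraction

def lagrange_coeffs(monomials, nodes):
    """Build D with rows indexed by monomials and columns by nodes, and
    return C = D^{-1}; row s of C lists the coefficients c_{sr} of the
    Lagrange function L_s in the monomial basis.  Exact rationals are
    used so the Lagrange property can be checked with equality."""
    m = len(nodes)
    D = [[Fraction(mu(*x)) for x in nodes] for mu in monomials]
    # Gauss-Jordan: augment with the identity and fully reduce
    aug = [row + [Fraction(int(i == j)) for j in range(m)]
           for i, row in enumerate(D)]
    for col in range(m):
        piv = next(r for r in range(col, m) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(m):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[m:] for row in aug]
```

For the six-node set used in subsection 6.1 below, the first row of the result reproduces the coefficients of $L_1$ in the monomial basis.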
Once the Lagrange basis is obtained, we again need two indices to denote the Lagrange basis functions, in accordance with the two-index notation for grid points. The sequence \eqref{monomials} defines a relationship $s=g(\alpha_1,\alpha_2)$ between the ordinal number $s$ and the monomial exponents $(\alpha_1,\alpha_2)$. Then, the resulting relationship between an index pair $(q_1,q_2)$ and the corresponding ordinal number $s$ is given by the following expression:
\begin{equation} \label{Lagrange3}
s=\gamma(q_1, q_2)=g(\alpha(q_1),\alpha(q_2))
\end{equation}
where $\alpha(\cdot)$ is specified in \eqref{alpha}.
Therefore, by introducing a new (two index) notation for the Lagrange basis functions
\begin{equation} \label{Lagrange4}
\phi_{q_1,q_2}({\bf x})=L_{\gamma(q_1,q_2)}({\bf x}),
\end{equation}
we get the interpolation formula for a continuous function $f({\bf x})$ in the form
\begin{equation} \label{Lagrange5}
\tilde{f}({\bf x})=\sum_{(q_1, \, q_2) \in Q_m} f_{i+q_1,j+q_2} \phi_{q_1,q_2} ({\bf x}).
\end{equation}
Even though different sets of monomials and different orderings can be employed for building Lagrange bases, we will use the particular method of monomial ordering described below.
Denote by $\mathcal{M}$ the set of all monomials $x_1^{\alpha_1} x_2^{\alpha_2}$ where $\alpha_1$ and $\alpha_2$ are non-negative integers. For each monomial $x_1^{\alpha_1} x_2^{\alpha_2}$, the corresponding ordinal number $s$ will be assigned using the following function:
\begin{equation} \label{g}
s=g(\alpha_1,\alpha_2 )=\frac{(\alpha_1+\alpha_2)(\alpha_1+\alpha_2+1)}{2}
+\begin{dcases}
\alpha_1 - \alpha_2 & \text{if } \alpha_2 < \alpha_1, \\
\alpha_2 - \alpha_1 + 1 & \text{if } \alpha_2 \geq \alpha_1.
\end{dcases}
\end{equation}
It is easy to prove that the function \eqref{g} provides a one-to-one correspondence between $\mathcal{M}$ and the set of all positive integers with the usual ordering. The order induced by \eqref{g} uses the total degree as the first sorting key (similarly to the more common graded lexicographic order), while the difference between the individual degrees serves as the next sorting key within the same total degree.
The set $\mathcal{M}$ endowed with the order induced by \eqref{g} will be denoted by $\mathcal{M}^*$. The initial segment of $\mathcal{M}^*$ with $m$ members is denoted below by $\mathcal{M}_{\le m}^*$. It will be shown in section 6 that such ordered monomial sets play a useful role in building particular numerical schemes.
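The ordering \eqref{g} is easy to implement and check; a minimal Python sketch (the function name is ours), whose first six values reproduce the segment $\mathcal{M}_{\le 6}^*$ used in subsection 6.1:

```python
def g(a1, a2):
    """Ordinal number of the monomial x1^a1 x2^a2 in the ordered set M*:
    sort by total degree first, then by the signed difference of the
    individual degrees within the same total degree."""
    base = (a1 + a2) * (a1 + a2 + 1) // 2
    return base + (a1 - a2 if a2 < a1 else a2 - a1 + 1)
```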
\section{Calculating the Integrals}
Let ${\bf x}={\bf x}_{ij}$ in the integral operators \eqref{Op1} and \eqref{Op2}. Without loss of generality, assume that the origin of the 2D coordinate system is located at ${\bf x}_{ij}$ which can be achieved by a parallel translation of coordinates. Let $f({\bf y})$ appearing in \eqref{Op1} and \eqref{Op2} be a monomial in local scaled variables $y_1/h$ and $y_2/h$:
\begin{equation} \label{monomial2}
\mu({\bf y})=\left(\frac{y_1}{h}\right)^{\alpha_1} \left(\frac{y_2}{h}\right)^{\alpha_2} , \quad \alpha_1,\alpha_2 = 0,1, \ldots.
\end{equation}
Then the integrals in \eqref{Op1} and \eqref{Op2} can be exactly calculated and expressed through the Courant number
\begin{equation} \label{Courant}
\lambda=\frac{c\tau}{h}.
\end{equation}
Indeed, using the table of integrals of Gradshteyn \& Ryzhik \cite{Gradshteyn1965}, expressions \eqref{Op1} and \eqref{Op2} are reduced to the following exact values:
\begin{equation} \label{exact1}
A({\bf x}_{ij},\tau)\mu(\cdot)=0, \quad B({\bf x}_{ij},\tau) \mu(\cdot)=0 \quad \mbox{ if $\alpha_1$ or $\alpha_2$ is odd},
\end{equation}
\begin{equation} \label{exact2}
\begin{aligned}
A({\bf x}_{ij},\tau) \mu(\cdot)=\frac{(\alpha_1-1)!! (\alpha_2-1)!!}{(\alpha_1+\alpha_2-1)!!} \lambda^{\alpha_1+\alpha_2}, \\
B({\bf x}_{ij},\tau) \mu(\cdot)=\frac{\tau}{\alpha_1+\alpha_2+1} A({\bf x}_{ij},\tau) \mu(\cdot) \\
\mbox{if $\alpha_1$ and $\alpha_2$ are both non-negative even integers},
\end{aligned}
\end{equation}
where $(\cdot)!!$ denotes the double factorial, with the conventions $(-1)!!=1$ and $0!!=1$.
The above formulas allow one to exactly calculate integrals \eqref{Op1} and \eqref{Op2} when $f(\cdot)$ is a polynomial from the Lagrange basis (see section 4).
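The closed-form values \eqref{exact1}--\eqref{exact2} are convenient to tabulate programmatically. The Python sketch below (the function names are ours) implements them via the double factorial; as a sanity check, the value for $\mu=(y_1/h)^2$ can be compared against direct numerical quadrature of \eqref{Op1} in polar coordinates.

```python
import math  # used in the quadrature cross-check below

def dfact(n):
    """Double factorial with the conventions (-1)!! = 0!! = 1."""
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

def A_monomial(a1, a2, lam):
    """Exact value of A applied to the scaled monomial (y1/h)^a1 (y2/h)^a2,
    expressed through the Courant number lam = c*tau/h."""
    if a1 % 2 or a2 % 2:        # an odd exponent makes the integral vanish
        return 0.0
    return dfact(a1 - 1) * dfact(a2 - 1) / dfact(a1 + a2 - 1) * lam ** (a1 + a2)

def B_monomial(a1, a2, lam, tau):
    """Exact value of B on the same monomial."""
    return tau / (a1 + a2 + 1) * A_monomial(a1, a2, lam)
```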
\section{Particular Explicit Two-Step Schemes}
Now we can build particular numerical schemes by transforming the procedures \eqref{A1} and \eqref{A2} into algebraic expressions. All the functions $u_0({\bf x})$, $v_0({\bf x})$ and $u_k({\bf x})$, $k=1,2,\ldots$, involved in these procedures will be interpolated in a neighborhood of the stencil center using the same stencil nodes. The following standard notation for the grid values of the solution and the initial conditions will be used in the computation schemes:
\begin{equation} \label{grid0}
u_{ij}^0=u({\bf x}_{ij},0), \quad v_{ij}^0 =v({\bf x}_{ij},0),
\end{equation}
\begin{equation} \label{grid1}
u_{ij}^k = u({\bf x}_{ij},k\tau), \quad k=1,2,\ldots.
\end{equation}
\subsection{The five-point stencil}
Consider building the Lagrange basis for the space of complete second degree polynomials. In this case, the monomial basis sequence ordered according to \eqref{g} is as follows:
\begin{equation} \label{mono6}
\mathcal{M}_{\le 6}^*=\lbrace 1,x_1,x_2,x_1 x_2,x_1^2,x_2^2 \rbrace
\end{equation}
The corresponding interpolation stencil node sequence will be written according to \eqref{invMcKinney} as
\begin{equation*} \label{index6}
\lbrace(0,0),(-h,0),(0,-h),(-h,-h),(h,0),(0,h)\rbrace
\end{equation*}
The matrix $D$ for this stencil is non-singular with $\det(D)=4 h^8$. As a result, we get the Lagrange basis as
\begin{equation} \label{LB6}
\begin{split}
\phi_{0,0}=L_1 &= 1+\frac{x_1 x_2}{h^2}-\frac{x_2^2}{h^2}-\frac{x_1^2}{h^2}, \quad
\phi_{-1,0}=L_2=-\frac{x_1}{2h}-\frac{x_1 x_2}{h^2}+\frac{x_1^2}{2h^2}, \\
\phi_{0,-1}&=L_3 =-\frac{x_2}{2h}-\frac{x_1 x_2}{h^2}+\frac{x_2^2}{2h^2}, \quad
\phi_{-1,-1}=L_4 = \frac{x_1 x_2}{h^2}, \\
&\phi_{1,0}=L_5 =\frac{x_1}{2 h}+\frac{x_1^2}{2 h^2}, \quad
\phi_{0,1}=L_6 =\frac{x_2}{2 h}+\frac{x_2^2}{2 h^2}.
\end{split}
\end{equation}
One can see that the Lagrange basis \eqref{LB6} includes monomials in the scaled variables $x_1/h$ and $x_2/h$ with coefficients independent of $h$.
Using formulas from section 5 and notation \eqref{Lagrange4} we get
\begin{eqnarray} \label{A6}
A({\bf x}_{ij},\tau)\phi_{0,0}(\cdot)=1-2 \lambda ^2, \quad A({\bf x}_{ij},\tau)\phi_{-1,-1}(\cdot)=0, \nonumber \\
A({\bf x}_{ij},\tau)\phi_{\pm 1,0}(\cdot)=A({\bf x}_{ij},\tau)\phi_{0, \pm 1}(\cdot)=\frac{1}{2} \lambda ^2,
\end{eqnarray}
\begin{eqnarray} \label{B6}
B({\bf x}_{ij},\tau)\phi_{0,0}(\cdot)=\tau (1-\frac{2}{3}\lambda^2), \quad B({\bf x}_{ij},\tau)\phi_{-1,-1}(\cdot)=0, \nonumber \\
B({\bf x}_{ij},\tau)\phi_{\pm 1,0}(\cdot)=B({\bf x}_{ij},\tau)\phi_{0, \pm 1}(\cdot)=\frac{\tau}{6} \lambda ^2.
\end{eqnarray}
So, all coefficients for the node $(-h,-h)$ vanish, and the corresponding computational scheme contains only five spatial points, as shown in Figure 1.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\begin{axis}[xmin=-2, xmax=2,
ymin=-2, ymax=2,
extra x ticks={-2,-1,0,1,2},
extra y ticks={-2,-1,0,1,2},
extra tick style={grid=major}]
\addplot [only marks,mark=*, mark size=3pt]
coordinates
{(0,0) (-1,0) (1,0) (0,-1) (0,1)};
\end{axis}
\end{tikzpicture}
\caption{Index set for the five-point numerical scheme}
\label{fig:Q5}
\end{figure}
\subsubsection{A new first time-step expression}
As a result, the proposed five-point numerical scheme is as follows:
1) \textit{for the first time-step}
\begin{multline} \label{comp5.1}
u_{ij}^1 = u_{ij}^0+\tau v_{ij}^0
+\frac{\lambda^2}{2} \left( u_{i-1,j}^0+u_{i+1,j}^0+u_{i,j-1}^0+u_{i,j+1}^0-4 u_{ij}^0 \right) \\
+\frac{\tau\lambda^2}{6} \left(v_{i-1,j}^0+v_{i+1,j}^0+v_{i,j-1}^0
+v_{i,j+1}^0-4 v_{ij}^0 \right);
\end{multline}
2) \textit{for the second and next time-steps}
\begin{equation} \label{comp5.2}
u_{ij}^{k+1} = 2 u_{ij}^k - u_{ij}^{k-1}
+\lambda^2 \left( u_{i-1,j}^k+u_{i+1,j}^k+u_{i,j-1}^k+u_{i,j+1}^k-4 u_{ij}^k \right) ,\quad k=1,2,\ldots.
\end{equation}
A conventional approach for the five-point stencil (see, e.g., \cite{Langtangen2017}) uses the same procedure \eqref{comp5.2} for the second and next time-steps. For the first time-step, however, instead of \eqref{comp5.1} it uses a formula obtained by combining the central-difference-in-time approximation of the initial condition for $v$ with \eqref{comp5.2} for $k=0$:
\begin{equation} \label{comp5.1a}
u_{ij}^1 = u_{ij}^0+\tau v_{ij}^0
+\frac{\lambda^2}{2} \left( u_{i-1,j}^0+u_{i+1,j}^0+u_{i,j-1}^0+u_{i,j+1}^0-4 u_{ij}^0 \right).
\end{equation}
Comparing \eqref{comp5.1} and \eqref{comp5.1a}, one can see that the first three terms on the right-hand sides of these equations coincide, but the fourth term present in the right-hand side of \eqref{comp5.1} is absent from \eqref{comp5.1a}. Thus, the difference between these two first time-step expressions depends on the properties of $v_0({\bf x})$. If, for example, $v_0({\bf x})$ is a linear function of the spatial coordinates, then there is no difference between \eqref{comp5.1} and \eqref{comp5.1a}. However, in more general cases encountered in practice, the difference can be substantial from the accuracy point of view, as shown in subsection 6.1.2.
The stability condition for numerical schemes utilizing \eqref{comp5.2} is well known from von Neumann stability analysis \cite{Strikwerda2004}:
\begin{equation} \label{stability5}
\lambda \le \lambda_{\max}=\frac{\sqrt 2}{2}.
\end{equation}
\subsubsection{Numerical simulation}
To make a numerical comparison of both approaches, we use the initial and boundary conditions corresponding to a standing wave exact solution:
\begin{equation} \label{exact sol}
u_e(x_1,x_2,t)=\sin(2 \pi x_1) \sin(2 \pi x_2) \sin(2\sqrt2 \pi c t).
\end{equation}
This solution creates the following pair of initial conditions for the numerical simulation:
\begin{equation} \label{Exact IC}
u_0(x_1,x_2)=0, \quad v_0(x_1,x_2)=2\sqrt2\pi c \ \sin(2 \pi x_1) \sin(2 \pi x_2).
\end{equation}
We consider the unit square $\Omega=[0,1]^2$ as the space region for the numerical solution and apply the boundary condition $u=0$ on $\partial\Omega$ generated by \eqref{exact sol}. In addition, we assume that $c=1$. Let $n_t$ be the number of time-steps and let $n \geq 2$ be the space discretization number related to $h$ by $nh=1$. The above numerical schemes have been employed over the spatial index set $\{(i,j):\, 0<i,j<n\}$
using grid boundary conditions
\begin{equation*}
u_{0,j}^k=u_{n,j}^k=u_{i,0}^k=u_{i,n}^k=0, \quad 0 \leq i,j \leq n,
\quad k \geq 0.
\end{equation*}
The accuracy of both numerical schemes has been estimated using the relative $L^2$ error defined as
\begin{equation} \label{error}
E(n,n_t)=\left(\frac{\sum_{k=1}^{n_t}\sum_{i=0}^{n}\sum_{j=0}^{n} \left(u_{ij}^k-u_e(ih,jh,k\tau)\right)^2}{\sum_{k=1}^{n_t}\sum_{i=0}^{n}\sum_{j=0}^{n} \left(u_e(ih,jh,k\tau)\right)^2}\right)^{1/2}
\end{equation}
The relative $L^2$ errors for the proposed new five-point scheme \eqref{comp5.1}$\&$\eqref{comp5.2} (based on Poisson's formula) and the conventional one \eqref{comp5.1a}$\&$\eqref{comp5.2} are denoted by $E_{P5}(n,n_t)$ and $E_{C5}(n,n_t)$, respectively. Calculated values of $E_{P5}(n,n_t)$ and $E_{C5}(n,n_t)$ for different combinations of $n$ and $n_t$ with $\lambda=0.707$ are presented in Table 1 below. For every value of $n$ used, three values of $n_t$ are considered: $n_t=1$ (to check errors after the first step), $n_t=n$ and $n_t=2n$ (to demonstrate the error accumulation process).
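The experiment behind Table 1 is straightforward to reproduce. The Python sketch below (the function names are ours) implements the pair \eqref{comp5.1}$\&$\eqref{comp5.2} with the initial data \eqref{Exact IC} and the error measure \eqref{error}; plain nested lists are used instead of an array library.

```python
import math

def solve_five_point(n, n_t, lam, c=1.0):
    """Five-point scheme (comp5.1)&(comp5.2) for the standing wave
    initial data on the unit square with u = 0 on the boundary."""
    h, tau = 1.0 / n, lam / (n * c)
    u0 = [[0.0] * (n + 1) for _ in range(n + 1)]
    v0 = [[2.0 * math.sqrt(2.0) * math.pi * c
           * math.sin(2 * math.pi * i * h) * math.sin(2 * math.pi * j * h)
           for j in range(n + 1)] for i in range(n + 1)]

    def lap(w, i, j):  # five-point difference sum
        return w[i - 1][j] + w[i + 1][j] + w[i][j - 1] + w[i][j + 1] - 4.0 * w[i][j]

    u1 = [row[:] for row in u0]          # boundary values stay zero
    for i in range(1, n):
        for j in range(1, n):            # first time-step, eq. (comp5.1)
            u1[i][j] = (u0[i][j] + tau * v0[i][j]
                        + 0.5 * lam ** 2 * lap(u0, i, j)
                        + tau * lam ** 2 / 6.0 * lap(v0, i, j))
    frames, prev, curr = [u1], u0, u1
    for _ in range(1, n_t):              # two-step recurrence, eq. (comp5.2)
        nxt = [[0.0] * (n + 1) for _ in range(n + 1)]
        for i in range(1, n):
            for j in range(1, n):
                nxt[i][j] = 2.0 * curr[i][j] - prev[i][j] + lam ** 2 * lap(curr, i, j)
        prev, curr = curr, nxt
        frames.append(curr)
    return frames, h, tau

def rel_l2_error(frames, h, tau, c=1.0):
    """Relative L2 error (error) against the exact standing wave (exact sol)."""
    num = den = 0.0
    m = len(frames[0])
    for k, u in enumerate(frames, start=1):
        s = math.sin(2.0 * math.sqrt(2.0) * math.pi * c * k * tau)
        for i in range(m):
            for j in range(m):
                ue = math.sin(2 * math.pi * i * h) * math.sin(2 * math.pi * j * h) * s
                num += (u[i][j] - ue) ** 2
                den += ue ** 2
    return math.sqrt(num / den)
```

With $n=n_t=10$ and $\lambda=0.707$, this sketch should reproduce the order of magnitude of $E_{P5}$ reported in the first row of Table 1.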
\begin{table}[h]
\centering
\ra{1.3}
\caption{Wave \eqref{exact sol} simulations using the five-point stencil.}
\begin{tabular}{@{}ccccc@{}}\toprule
$n$ & $n_t$ & $\lambda$ & $E_{P5}$ & $E_{C5}$ \\ \midrule
10 & 1 & $0.707$ & $9.0843\cdot10^{-4}$ & $6.8938\cdot10^{-2}$ \\
10 & 10 & $0.707$ & $9.1540\cdot10^{-4}$ & $6.8945\cdot10^{-2}$ \\
10 & 20 & $0.707$ & $9.1604\cdot10^{-4}$ & $6.8945\cdot10^{-2}$ \\ \midrule
20 & 1 & $0.707$ & $5.4767\cdot10^{-5}$ & $1.6636\cdot10^{-2}$ \\
20 & 20 & $0.707$ & $5.6800\cdot10^{-5}$ & $1.6638\cdot10^{-2}$ \\
20 & 40 & $0.707$ & $5.7372\cdot10^{-5}$ & $1.6638\cdot10^{-2}$ \\ \midrule
40 & 1 & $0.707$ & $3.3924\cdot10^{-6}$ & $4.1230\cdot10^{-3}$ \\
40 & 40 & $0.707$ & $4.0331\cdot10^{-6}$ & $4.1234\cdot10^{-3}$ \\
40 & 80 & $0.707$ & $4.4928\cdot10^{-6}$ & $4.1234\cdot10^{-3}$ \\ \midrule
80 & 1 & $0.707$ & $2.1158\cdot10^{-7}$ & $1.0285\cdot10^{-3}$ \\
80 & 80 & $0.707$ & $4.3820\cdot10^{-7}$ & $1.0286\cdot10^{-3}$ \\
80 &160 & $0.707$ & $6.5824\cdot10^{-7}$ & $1.0286\cdot10^{-3}$ \\ \bottomrule
\end{tabular}
\end{table}
Table 1 demonstrates a much higher accuracy of the new scheme in comparison with the conventional one, with the error ratio $E_{C5}/E_{P5}$ exceeding $10^3$ for the denser grids. The observed increases of the relative $L^2$ errors from $n_t=n$ to $n_t=2n$ are negligible (less than $10^{-6}$).
\subsection{A nine-point square stencil}
Consider the initial segment $\mathcal{M}_{\le 11}^*$ of $\mathcal{M}^*$ with the last member equal to $x_1^2 x_2^2$:
\begin{equation} \label{monomials11}
\{1,x_1,x_2,x_1 x_2,x_1^2,x_2^2,x_1^2 x_2,x_1 x_2^2,x_1^3, x_2^3,x_1^2 x_2^2\}
\end{equation}
This set includes all the monomials of total degree $\le 3$ and one monomial of total degree $4$; it is the minimal initial segment of $\mathcal{M}^*$ that contains the tensor product of the univariate monomial bases for second-degree polynomials in each spatial coordinate. The corresponding index set $Q_{11}$ is shown in Figure 2, where both the solid dots and the empty circles denote interpolation points.
\begin{figure}[b!]
\centering
\begin{tikzpicture}
\begin{axis}[xmin=-3, xmax=2,
ymin=-3, ymax=2,
extra x ticks={-3,-2,-1,0,1,2},
extra y ticks={-3,-2,-1,0,1,2},
extra tick style={grid=major}]
\addplot [only marks,mark=*, mark size=3pt]
coordinates {(0,0) (-1,0) (0,-1) (-1,-1) (1,0) (0,1) (1,-1) (-1,1) (1,1)};
\addplot [only marks,mark=o, mark size=3pt]
coordinates {(-2,0) (0,-2)};
\end{axis}
\end{tikzpicture}
\caption{Index set for the 11-point interpolation stencil}
\label{fig:Q9}
\end{figure}
The corresponding Lagrange basis can be easily calculated using the matrix \eqref{matrix1} and presented similarly to \eqref{LB6}. However, to avoid long expressions involving 11 monomials, we display only the terms that will be used later. Denote by $\phi^e_{q_1,q_2}$ the part of $\phi_{q_1,q_2}$ that includes all the monomial terms with even exponents in both coordinates. Then we get
\begin{equation} \label{LB11}
\begin{split}
\phi^e_{0,0}=1-\frac{x_1^2}{h^2}-\frac{x_2^2}{h^2}+\frac{x_1^2 x_2^2}{h^4}, \quad &\phi^e_{-2,0}=\phi^e_{0,-2}=0, \\
\phi^e_{\pm 1,0}=\frac{1}{2}\left(\frac{x_1^2}{h^2}- \frac{x_1^2 x_2^2}{h^4} \right), \quad
\phi^e_{0,\pm 1}=\frac{1}{2}&\left(\frac{x_2^2}{h^2}- \frac{x_1^2 x_2^2}{h^4} \right), \quad \phi^e_{\pm 1,\pm 1}=\frac{1}{4}\frac{x_1^2 x_2^2}{h^4}.
\end{split}
\end{equation}
\subsubsection{A new nine-point time-stepping scheme}
It follows from the results \eqref{exact1} and \eqref{exact2} of section 5 that the functions $\phi^e_{q_1,q_2}$, rather than the complete Lagrange basis, will be used in building the numerical schemes. Therefore, the two points $(-2,0)$ and $(0,-2)$ (shown as empty circles in Figure 2) disappear from the corresponding numerical scheme. The remaining nodes (solid dots in Figure 2) form the nine-point square-shaped computational stencil. Using \eqref{exact1}-\eqref{exact2} we obtain:
\begin{equation} \label{A11}
\begin{split}
A({\bf x}_{ij},\tau)\phi_{0,0}(\cdot)&=1-2 \lambda ^2+\frac{\lambda ^4}{3}, \quad A({\bf x}_{ij},\tau)\phi_{\pm 1,\pm 1}(\cdot)= \frac{\lambda ^4}{12}, \\
A({\bf x}_{ij},\tau)&\phi_{\pm 1,0}(\cdot)= A({\bf x}_{ij},\tau)\phi_{0,\pm 1}(\cdot)=\frac{\lambda ^2}{2}-\frac{\lambda ^4}{6},
\end{split}
\end{equation}
\begin{equation} \label{B11}
\begin{split}
B({\bf x}_{ij},\tau)\phi_{0,0}(\cdot)&=\tau\left(1-\frac{2\lambda ^2}{3}+\frac{\lambda ^4}{15}\right), \quad B({\bf x}_{ij},\tau)\phi_{\pm 1,\pm 1}(\cdot)= \frac{\tau\lambda ^4}{60}, \\
B({\bf x}_{ij},\tau)&\phi_{\pm 1,0}(\cdot)= B({\bf x}_{ij},\tau)\phi_{0,\pm 1}(\cdot)=\frac{\tau\lambda ^2}{6} \left(1-\frac{\lambda ^2}{5}\right).
\end{split}
\end{equation}
Now we can use \eqref{A1} and \eqref{A2} to build a new nine-point numerical scheme similar to \eqref{comp5.1}-\eqref{comp5.2}. However, to avoid long expressions, some additional notation is needed:
\begin{equation} \label{notations2}
\begin{split}
\delta^k_{ij}(q_1,q_2)=u^k_{i+q_1,j+q_2}+u^k_{i-q_2,j+q_1}
+u^k_{i-q_1,j-q_2}
& +u^k_{i+q_2,j-q_1}-4 u^k_{i,j}, \\
\epsilon^0_{ij}(q_1,q_2)=v^0_{i+q_1,j+q_2}+v^0_{i-q_2,j+q_1}
+v^0_{i-q_1,j-q_2}
& +v^0_{i+q_2,j-q_1}-4 v^0_{i,j}, \\
&k=0,1,2,\ldots.
\end{split}
\end{equation}
Thus, the following time-stepping numerical scheme is derived:
1) \textit{for the first time-step}
\begin{equation} \label{comp9.1}
\begin{split}
u_{ij}^1 = u_{ij}^0+\tau v_{ij}^0+\frac{\lambda^2}{2} \Bigg[ \left(1-\frac{\lambda^2}{3}\right) \delta^0_{ij}(1,0)+\frac{\lambda^2}{6}\delta^0_{ij}(1,1) \Bigg] \\
+\frac{\tau\lambda^2}{6}\Bigg[\left(1-\frac{\lambda^2}{5}\right) \epsilon^0_{ij}(1,0)+\frac{\lambda^2}{10}\epsilon^0_{ij}(1,1)
\Bigg];
\end{split}
\end{equation}
2) \textit{for the second and next time-steps}
\begin{multline} \label{comp9.2}
u_{ij}^{k+1} = 2 u_{ij}^k - u_{ij}^{k-1}+\lambda^2 \Bigg[ \left(1-\frac{\lambda^2}{3}\right) \delta^k_{ij}(1,0)+\frac{\lambda^2}{6}\delta^k_{ij}(1,1)\Bigg],\quad k=1,2, \ldots.
\end{multline}
Using the von Neumann stability analysis method (see, e.g., \cite{Strikwerda2004}), we obtain the stability condition for the above numerical scheme as
\begin{equation} \label{stability9}
\lambda \le \lambda_{\max}=\sqrt{\frac{3-\sqrt 3}{2}} \approx 0.796.
\end{equation}
To our knowledge, this numerical scheme has not been presented previously in the literature. A conventional explicit nine-point square-shaped scheme (dubbed the isotropic scheme; see, e.g., \cite{Trefethen1982}) has a different form, obtained using a nine-point finite-difference approximation of the Laplace operator in two-dimensional space \cite{Kantorovich1958}:
\begin{equation} \label{comp9.2b}
u_{ij}^{k+1} = 2 u_{ij}^k - u_{ij}^{k-1}+\lambda^2\Bigg[\frac{2}{3} \delta^k_{ij}(1,0) +\frac{1}{6}\delta^k_{ij}(1,1)\Bigg],\quad k=1,2, \ldots.
\end{equation}
Comparing the coefficients in \eqref{comp9.2} and \eqref{comp9.2b}, it is easy to see that the term $\delta^k_{ij}(1,1)$ is less influential in the new nine-point scheme than in the conventional one (with a coefficient ratio equal to $\lambda^2$). The stability condition for the conventional scheme is less restrictive than that for the new one: $\lambda \le \lambda_{\max}=\sqrt 3/2 \approx 0.866$. On the other hand, the new scheme has accuracy advantages over the conventional nine-point approach for $\lambda \le 0.796$, as shown in the next subsection.
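The bound \eqref{stability9} can also be checked numerically. Writing the two-step recurrence \eqref{comp9.2} in von Neumann form $g-2+1/g=S(\theta_1,\theta_2)$ for a plane-wave mode, stability of the amplification factor $g$ requires $-4\le S\le 0$. The Python sketch below (the function name is ours) scans the symbol over wavenumbers:

```python
import math

def nine_point_symbol_min(lam, N=200):
    """Minimum over (theta1, theta2) of the von Neumann symbol S of the
    new nine-point scheme; the scheme is stable iff the minimum is >= -4."""
    smin = 0.0
    for i in range(N + 1):
        c1 = math.cos(math.pi * i / N)
        for j in range(N + 1):
            c2 = math.cos(math.pi * j / N)
            d10 = 2 * c1 + 2 * c2 - 4          # symbol of delta(1, 0)
            d11 = 4 * c1 * c2 - 4              # symbol of delta(1, 1)
            s = lam ** 2 * ((1 - lam ** 2 / 3) * d10 + lam ** 2 / 6 * d11)
            smin = min(smin, s)
    return smin
```

The worst case is the checkerboard mode $\theta_1=\theta_2=\pi$, where $S=-8\lambda^2(1-\lambda^2/3)$; setting $S=-4$ recovers $\lambda_{\max}^2=(3-\sqrt 3)/2$.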
\subsubsection{Numerical simulation}
We consider simulation results for the proposed nine-point scheme \eqref{comp9.1}-\eqref{comp9.2} using the initial and boundary conditions generated by the exact solution of the previous subsection. Let $c=1$. The unit square $\Omega=[0,1]^2$ is used as the space grid region, and the time grid interval is $[0,\lambda]$ with $n_t=n$. A comparison is made with results simulated on the same grids with the conventional scheme \eqref{comp9.2b}.
Even though no expression for the first time-step of the conventional nine-point scheme is given in \cite{Trefethen1982}, the usual approach based on the central-difference approximation of the initial condition for $v$, combined with \eqref{comp9.2b}, provides the corresponding expression
\begin{equation} \label{comp9.1b}
u_{ij}^1 = u_{ij}^0+\tau v_{ij}^0 +\frac{\lambda^2}{2}\Bigg[ \frac{2}{3}\delta^0_{ij}(1,0) +\frac{1}{6} \delta^0_{ij}(1,1) \Bigg]
\end{equation}
which is used for numerical simulation.
The standing wave simulation results are presented in Table 2. Two values of $\lambda$ have been used: $\lambda=0.707$, as in subsection 6.1, and $\lambda=0.796$, according to the stability condition \eqref{stability9}. The relative $L^2$ errors for the schemes \eqref{comp9.1}$\&$\eqref{comp9.2} and \eqref{comp9.1b}$\&$\eqref{comp9.2b} are denoted by $E_{P9}$ and $E_{C9}$, respectively. According to the simulated data, the new scheme \eqref{comp9.1}$\&$\eqref{comp9.2} is more accurate than the conventional one for both $\lambda=0.707$ and $\lambda=0.796$.
\begin{table}[h]
\centering
\ra{1.3}
\caption{Wave \eqref{exact sol} simulations using nine-point stencils}
\begin{tabular}{@{}ccccc@{}}\toprule
$ n $ & $ n_t $ & $\lambda$ & $E_{P9}$ & $E_{C9}$ \\ \midrule
10 & 10 & $0.707$ & $3.7058\cdot10^{-2}$
&$1.1741\cdot10^{-1}$ \\
10 & 10 & $0.796$ & $2.9587\cdot10^{-2}$ & $1.1241\cdot10^{-1}$ \\ \midrule
20 & 20 & $0.707$ & $8.9333\cdot10^{-3}$ & $2.8002\cdot10^{-2}$ \\
20 & 20 & $0.796$ & $8.0697\cdot10^{-3}$ & $2.7523\cdot10^{-2}$ \\ \midrule
40 & 40 & $0.707$ & $2.3723\cdot10^{-3}$ & $6.8821\cdot10^{-3}$ \\
40 & 40 & $0.796$ & $2.5737\cdot10^{-3}$ & $6.8668\cdot10^{-3}$ \\ \midrule
80 & 80 & $0.707$ & $7.5573\cdot10^{-4}$ & $1.7084\cdot10^{-3}$ \\
80 & 80 & $0.796$ & $1.0274\cdot10^{-3}$ & $1.7187\cdot10^{-3}$ \\ \bottomrule
\end{tabular}
\end{table}
The obtained results show a loss in accuracy of both nine-point schemes for the simulated problem in comparison with the new five-point scheme results presented in Table 1.
\subsection{A 13-point stencil}
The next numerical stencil is based on complete bivariate interpolation polynomials of the fourth degree. Consider the initial segment $\mathcal{M}_{\le 15}^*$ of $\mathcal{M}^*$:
\begin{equation} \label{monomials15}
\{1,x_1,x_2,x_1 x_2,x_1^2,x_2^2,x_1^2 x_2,x_1 x_2^2,x_1^3, x_2^3,x_1^2 x_2^2, x_1^3 x_2, x_1 x_2^3, x_1^4,x_2^4 \}
\end{equation}
This set includes all the monomials of total degree $\le 4$.
The corresponding index set $Q_{15}$ is presented in Figure 3, where both the thirteen solid dots and the two empty circles denote interpolation points.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\begin{axis}[xmin=-3, xmax=3,
ymin=-3, ymax=3 ,
extra x ticks={-3,-2,-1,0,1,2,3},
extra y ticks={-3,-2,-1,0,1,2,3},
extra tick style={grid=major}]
\addplot [only marks,mark=*, mark size=3pt]
coordinates {(0,0) (-1,0) (0,-1) (-1,-1) (1,0) (0,1) (1,-1) (-1,1) (1,1) (-2,0) (2,0) (0,-2) (0,2)};
\addplot [only marks,mark=o, mark size=3pt]
coordinates {(-2,-1) (-1,-2)};
\end{axis}
\end{tikzpicture}
\caption{Index set for the 15-point interpolation stencil}
\label{fig:Q13}
\end{figure}
After calculating the corresponding Lagrange basis (see section 4) and using \eqref{exact1}-\eqref{exact2}, one can verify that all the coefficients in expressions \eqref{A1}-\eqref{A2} related to the two empty circles vanish. As a result, we get a numerical scheme that involves the 13-point stencil (the solid dots in Figure 3). The notation \eqref{notations2} is used to make the expressions more compact:
1) \textit{for the first time-step}
\begin{equation} \label{comp13.1}
\begin{split}
u_{ij}^1 = u_{ij}^0+\tau v_{ij}^0+\frac{\lambda^2}{2} \Bigg[ \frac{4-2\lambda^2}{3} \delta^0_{ij}(1,0)
+\frac{\lambda^2}{6}\delta^0_{ij}(1,1)
+\frac{\lambda^2-1}{12}\delta^0_{ij}(2,0) \Bigg] \\
+\frac{\tau\lambda^2}{6}\Bigg[ \left(\frac{4}{3}-\frac{2\lambda^2}{5}\right) \epsilon^0_{ij}(1,0)
+\frac{\lambda^2}{10}\epsilon^0_{ij}(1,1)
+\left(\frac{\lambda^2}{20}-\frac{1}{12}\right)\epsilon^0_{ij}(2,0)
\Bigg];
\end{split}
\end{equation}
2) \textit{for the second and next time-steps}
\begin{equation} \label{comp13.2}
\begin{split}
u_{ij}^{k+1} = 2 u_{ij}^k - u_{ij}^{k-1}
+\lambda^2 \Bigg[\frac{4-2\lambda^2}{3} \delta^k_{ij}(1,0)
+\frac{\lambda^2}{6}\delta^k_{ij}(1,1)
+\frac{\lambda^2-1}{12}\delta^k_{ij}(2,0) \Bigg], \\
k=1,2, \ldots.
\end{split}
\end{equation}
Formula \eqref{comp13.2} for the second and next time-steps coincides with the scheme obtained previously in \cite{Cohen1987,Cohen1996} using the finite-difference method.
However, the expression \eqref{comp13.1} for the first time-step has not been presented in the literature before. The advantage of using this expression rather than the conventional one (based on the central difference for approximating the initial condition for $v$) is shown in Table 3, where simulation results for the standing wave solution \eqref{exact sol} are presented. The relative $L^2$ errors for the conventional and new (Poisson's) approaches are denoted by $E_{C13}$ and $E_{P13}$, respectively. For simplicity's sake, periodic boundary conditions have been used in the simulation, taking into account that the solution \eqref{exact sol} is periodic in both spatial directions.
Since the maximal Courant number needed for stability of this scheme is $1/\sqrt{2}$, a value of $\lambda=0.707$ was used.
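The quoted limit can be reproduced with a von Neumann scan. Substituting a Fourier mode $e^{\mathrm{i}(\theta i+\varphi j)}$ into \eqref{comp13.2} (again assuming $\delta^0_{ij}(m,n)$ denotes the symmetric four-neighbour sum minus $4u_{ij}$; the definitions in \eqref{notations2} are not shown in this excerpt) gives an amplification factor $g$ satisfying $g^2-(2+\lambda^2 S)g+1=0$, whose roots stay on the unit circle iff $|2+\lambda^2 S|\le 2$:

```python
import numpy as np

def max_growth(lam, n=181):
    # Scan |2 + lam^2 * S| over Fourier modes (theta, phi) in [0, pi]^2;
    # the scheme is (neutrally) stable iff this never exceeds 2.
    l2 = lam * lam
    th, ph = np.meshgrid(np.linspace(0, np.pi, n), np.linspace(0, np.pi, n))
    s10 = 2 * np.cos(th) + 2 * np.cos(ph) - 4          # symbol of delta(1,0)
    s11 = 4 * np.cos(th) * np.cos(ph) - 4              # symbol of delta(1,1)
    s20 = 2 * np.cos(2 * th) + 2 * np.cos(2 * ph) - 4  # symbol of delta(2,0)
    S = (4 - 2 * l2) / 3 * s10 + l2 / 6 * s11 + (l2 - 1) / 12 * s20
    # g^2 - b g + 1 = 0 with b = 2 + l2*S: the roots have product 1,
    # so both lie on the unit circle exactly when |b| <= 2.
    return np.max(np.abs(2 + l2 * S))
```

Under this reading the worst mode is $\theta=\varphi=\pi$, where $\lambda^2 S=-4$ exactly at $\lambda=1/\sqrt{2}$, consistent with the quoted stability limit.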
\begin{table}[h!]
\centering
\ra{1.3}
\caption{Wave \eqref{exact sol} simulations using the 13-point stencil.}
\begin{tabular}{@{}cccccc@{}}\toprule
$n \quad$ & $n_t \quad$ & $\lambda$ & $E_{P13} \quad$ & $E_{C13}$ \\ \midrule
10 & 10 & $0.707$ & $4.2146\cdot10^{-5}$ & $6.8938\cdot10^{-2}$ \\ \midrule
20 & 20 & $0.707$ & $6.6004\cdot10^{-7}$ & $1.6636\cdot10^{-2}$ \\ \midrule
40 & 40 & $0.707$ & $1.1471\cdot10^{-8}$ & $4.1230\cdot10^{-3}$ \\ \midrule
80 & 80 & $0.707$ & $2.8884\cdot10^{-10}$ & $1.0285\cdot10^{-3}$ \\ \bottomrule
\end{tabular}
\end{table}
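The contrast between the two columns of Table 3 can be quantified by the observed convergence orders (a quick check; the error values are copied from the table, with the grid refined by a factor of two between rows):

```python
import math

# Relative L2 errors from Table 3 (n and n_t double from row to row).
E_P13 = [4.2146e-5, 6.6004e-7, 1.1471e-8, 2.8884e-10]  # Poisson first step
E_C13 = [6.8938e-2, 1.6636e-2, 4.1230e-3, 1.0285e-3]   # conventional first step

def orders(errors):
    # Observed order between successive refinements: log2(E_coarse / E_fine).
    return [math.log2(a / b) for a, b in zip(errors, errors[1:])]

# orders(E_P13): roughly 5-6, a high-order decay of the total error
# orders(E_C13): close to 2 -- the first time-step error dominates
```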
The large accuracy advantage of the new scheme demonstrated in Table 3 can be attributed to the higher accuracy of the new first time-step expression \eqref{comp13.1} in comparison with the conventional one. On the other hand, it is worth noting that the relative $L^2$ error $E_{C13}$ (corresponding to the conventional first time-step approach) in Table 3 has almost the same values as the error $E_{C5}$ in Table 1, despite the higher-degree interpolation stencil used in Table 3. That is, the error introduced at the first time-step apparently suppresses the advantages of using a higher-degree interpolation at later time-steps.
\section{Summary}
A new method is implemented to build explicit time-stepping stencil computation schemes for the transient 2D acoustic wave equation. It is based on using Poisson's formula and a similar three time level expression combined with polynomial stencil interpolation of the solution at each time-step and exact integration. As a result, for each chosen 2D stencil and a set of monomials, a unified time-marching scheme is created that includes two explicit computation procedures: for the first time-step and for the next steps.
Particular explicit stencil computation schemes (with five, nine and 13 space points) are derived. All of the obtained first time-step computation expressions are different from those used in conventional finite-difference methods. The obtained two-step stencil expressions for the five- and 13-point stencils (where the complete interpolation polynomials have been used) coincide with the corresponding finite difference schemes. The obtained two-step expression for the nine-point stencil is new. Its stability region is determined by the von Neumann analysis.
Simulation comparison results are presented for a benchmark problem with an exact solution. The simulations demonstrate that the proposed stencil computation approach maintains an accuracy advantage over conventional finite difference schemes, which is mostly attributable to the new first time-step computation expressions.
\section*{Supplementary Material}
In this Supplementary Material (SM), we include some miscellaneous extra material, which is not central to the description of our results in the main text but provides additional information on the nature of the calculations performed and the scope of the results. Equation and figure numbers in this material carry the prefix ``SM'' to distinguish them from the equation and figure numbers in the main text.
\subsection*{Robustness of scheme to collision angle}
In the main text, we show in Fig.~3 the angular distribution of high-energy ($s > 0.11$ corresponding to $>1.1\,\trm{GeV}$) $E$-polarised photons emitted by a pump electron bunch with an angular divergence \mbox{$\Theta=0.2~\textrm{mrad}$}. Because the bunch angular divergence is several times broader than the field-induced angular spread, \mbox{$2\xi/\gamma_p$}, the harmonic structure in the polarised-photon angular distribution is smoothed out.
Integrating the angular distributions over a given detector acceptance angle $\theta$ and multiplying by the number of electrons in the bunch, we obtain the total number of photons accepted by the detector. In the main text, as an example, we present a calculation of the photon source brilliance from an electron bunch incident at an angle $\vartheta_{i}=100\,$mrad with divergence $\Theta=0.2\,$mrad.
In Fig.~\ref{Fig_Brilliance}, we present the brilliance obtained for the high-energy $E$-polarised photon source for a broad range of bunch incident angles and different bunch angular divergences. As shown, the brilliance decreases by less than $5\%$ as the bunch incident angle $\vartheta_i$ is increased from $0$ to $200\,\textrm{mrad}$, while a further increase of the incident angle towards $500\,\textrm{mrad}$ results in a faster decay of the brilliance.
\subsection*{Increased Brilliance of Highly Polarised Source}
One way to improve the brilliance of the polarised source would be to reduce the electron bunch's angular divergence. In Fig.~\ref{Fig_Brilliance}, we show example results for an angular divergence of $\Theta=0.05\,\textrm{mrad}$. The angular divergence of the photon beam is determined by the properties of the electron bunch, and we find that the brilliance of the photon source scales inversely with the square of the electron bunch divergence, $\propto\Theta^{-2}$.
\begin{figure}[t!!]
\center{\includegraphics[width=0.44\textwidth]{figureSM1}}
\caption{Brilliance of the $E$-polarised high-energy photons ($s>0.11$ corresponding to $>1.1~\textrm{GeV}$) for different incident angles $\vartheta_i$ and electron beam angular divergences $\Theta=0.05~\textrm{mrad}$ and $0.2~\textrm{mrad}$. The acceptance angle of the detector is set to $\theta=\Theta$.
The field parameters are the same as in the main text (for a laser pulse with $\xi=0.5$, $\omega_l=1.24~\textrm{eV}$ and duration $26.7\,\trm{fs}$ ($8$ cycles)). The electron bunch has a mean energy of $10\,$GeV and a root-mean-square energy spread of $6\%$. The photon detector is set downstream along the bunch propagation direction.}
\label{Fig_Brilliance}
\end{figure}
\subsection*{Calculation of the angular spectrum}
As mentioned in the main text, to obtain a brilliant photon source, a high-energy electron bunch is required. The calculation we performed for the electron bunch assumed a normalised momentum distribution of the form:
\bea
&\rho(\bm{p})=\frac{1}{\sqrt{2\pi^3}\sigma_{\parallel}\sigma^2_{\perp} m^3}\exp\left[-\frac{(\bm{p}\cdot\bm{n}-\tilde{p})^2}{2\sigma^2_{\parallel}m^2}-\frac{\left|\bm{p}-\bm{n}(\bm{p}\cdot\bm{n})\right|^2}{\sigma^2_{\perp} m^2}\right]\,,\nonumber
\eea
with modulus average momentum $\tilde{p}$ and rms momentum spread in the longitudinal, $\sigma_{\parallel} m$, and transverse, $\sigma_{\perp} m $ directions, where $\bm{n}$ is the incident direction of the electron bunch.
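A Monte-Carlo sampler consistent with this $\rho(\bm{p})$ can be sketched as follows. Note the asymmetry of the two exponents: the longitudinal one carries $2\sigma_{\parallel}^2$ while the transverse one carries only $\sigma_{\perp}^2$, so each transverse component has standard deviation $\sigma_{\perp}m/\sqrt{2}$. The function and variable names are ours, for illustration only.

```python
import numpy as np

def sample_bunch(n_samples, p_tilde, sigma_par, sigma_perp, n_hat, m=1.0, seed=None):
    # Draw momenta from rho(p): Gaussian of std sigma_par*m about p_tilde
    # along n_hat, and std sigma_perp*m/sqrt(2) per transverse component
    # (the transverse exponent has no factor 2 in its denominator).
    rng = np.random.default_rng(seed)
    n_hat = np.asarray(n_hat, float)
    n_hat = n_hat / np.linalg.norm(n_hat)
    # two unit vectors spanning the plane transverse to n_hat
    a = np.array([1.0, 0.0, 0.0])
    if abs(n_hat @ a) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n_hat, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n_hat, e1)
    p_par = rng.normal(p_tilde, sigma_par * m, n_samples)
    q1 = rng.normal(0.0, sigma_perp * m / np.sqrt(2.0), n_samples)
    q2 = rng.normal(0.0, sigma_perp * m / np.sqrt(2.0), n_samples)
    return p_par[:, None] * n_hat + q1[:, None] * e1 + q2[:, None] * e2
```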
The differential probability of emitting a photon in the polarisation state $\eps_{j}$ with momentum $k$ is straightforwardly acquired from Eq.~(1) of the main text:
\begin{align}
\frac{d^3\tsf{P}_{j}}{ds\,d^2\bm{r}} &= \frac{\alpha}{(2\pi\eta)^{2}}\int^{\infty}_{s} \frac{d\lambda}{\lambda^2}\int d^2 p^{\LCperp} p^{0}\rho(\bm{p}) \nonumber \\
& \times \frac{s}{t}\int d\phi\,d\phi'~\tsf{T}_{j}~\e^{i\int^{\phi}_{\phi'}\frac{k\cdot \pi_{p}(\vphi)}{m^2t\eta}d\vphi}\,,\label{eqn:sfi1}
\end{align}
where $\lambda=\vkap\cdot p/m^2\eta$ is the electron light-front momentum normalised by the interaction energy parameter $\eta$ used in the manuscript, $s=\vkap\cdot k/m^2\eta$, and $t=\lambda-s$. The expression for $\tsf{T}_{j}$ ($j=1,2$ for the linear case, and $j=+,-$ for the circular case) is the same as in the manuscript, except that the dependence on $s$ and $t$ is replaced with $s/\lambda$ and $t/\lambda$, respectively.
From Eq.~(\ref{eqn:sfi1}), the angular distribution of the polarised photon can be acquired:
\begin{align}
\frac{d^2\tsf{P}_{j}}{dr_x\,dr_y} =\frac{\alpha}{(2\pi\eta)^{2}} \int^{\infty}_{s_d} \frac{d\lambda}{\lambda^2}\int d^2 p^{\LCperp}\, p^{0}\rho(\bm{p})\,g\big(\lambda, \bm{r}-\frac{p^{\LCperp}}{m\lambda}\big)
\label{eqn:sfi2}
\end{align}
in which we use the shorthand
\begin{align}
g\left(\lambda, \bm{r}-\frac{p^{\LCperp}}{m\lambda}\right)=\int^{\lambda}_{s_d}ds\frac{s}{t}\int d\phi\,d\phi'~\tsf{T}_{j}~\e^{i\int^{\phi}_{\phi'}\frac{k\cdot \pi_{p}(\vphi)}{m^2 t \eta}d\vphi}\,.
\end{align}
\begin{figure}[t!!]
\center{\includegraphics[width=0.48\textwidth]{figureSM2}}
\caption{Angular distribution $d^2\tsf{P}/d\theta_xd\theta_y$ of the polarised photons ($s_d=0$) in a circularly-polarised background (upper panels) and a linearly-polarised background (bottom panels). Left column: (a) and (c) angular distribution of $E$-polarised photons. Right column: (b) and (d) angular distribution of $B$-polarised photons. The field parameters are the same as in the main text. The electron bunch parameters are: angular divergence $\Theta=0.05\,$mrad, incident angle $\vartheta_i=100\,$mrad, average energy $10$~GeV, and root-mean-square energy spread $6\%$. The photon detector is set downstream along the bunch propagation direction.}
\label{Fig_bunch_angle_dist_005}
\end{figure}
To recover the angular harmonic structure of the single-particle results shown in Fig.~1, the bunch angular divergence has to be reduced to the level of the field-induced angular spread, i.e., $\Theta \approx 2\xi/\gamma_p$. Fig.~\ref{Fig_bunch_angle_dist_005} shows the angular distribution of polarised photons emitted by a pump electron bunch of average energy $\tilde{p}=\gamma_p m$ ($10\,$GeV),
$\sigma_{\parallel}=0.03\gamma_p$ and $\sigma_{\perp}=2.5\times 10^{-5} \gamma_p$ corresponding to a $6\%$ root-mean-square energy spread and an angular divergence $\Theta=0.05\,\textrm{mrad}$, incident in the direction $\bm{n}=[\sin\vartheta_{i},0,-\cos\vartheta_{i}]$ at an angle $\vartheta_{i}=100\,\textrm{mrad}$. As shown in the figure, well-defined harmonic structures in the angular distribution can be clearly observed.
\bibliographystyle{apsrev}
\providecommand{\noopsort}[1]{}
\section{\label{Intro}Introduction}
In quantum chromodynamics (QCD), parton distribution and fragmentation functions (PDFs and FFs) are used to describe the long-distance bound-state nature of hadrons. Historically, PDFs and FFs have been taken to depend only on the longitudinal momentum fractions $x$ and $z$, respectively. In the last several decades this has been extended to include transverse momentum dependence, so that the ``unintegrated" transverse-momentum-dependent (TMD) PDFs and FFs are written explicitly as dependent on both the longitudinal momentum and the transverse momentum of the partons. Since the framework explicitly includes small transverse momenta, the reevaluation of important principles of QCD, like factorization and universality, has been necessary. \par
In particular, the role of color interactions due to soft gluon exchanges between participants of the hard scattering and remnants of the interaction in collisions involving a hadron have been found to have profound effects regarding these principles. For example, the Sivers function, a particular TMD PDF that correlates the transverse spin of the proton with the orbital angular momentum of the parton, was predicted to be the same magnitude but opposite in sign when measured in semi-inclusive deep-inelastic scattering (SIDIS) and Drell-Yan (DY)~\cite{collins_signchange}. This prediction arises from the different color flows that are possible between these two interactions due to the possibility of soft gluon exchanges in the initial-state and final-state for DY and SIDIS, respectively. Factorization of the nonperturbative functions is still predicted to hold in both SIDIS and DY.\par
In $p$$+$$p$ collisions where two nearly back-to-back hadrons are measured, soft gluon exchanges are possible in both the initial and final state since there is color present in both the initial and final states. In this process, factorization breaking was predicted in a TMD framework~\cite{trog_fact,collins_qiu_fact}. For processes where factorization breaking is predicted, individual TMD PDFs and TMD FFs become correlated with one another and cannot be written as a convolution of nonperturbative functions. The ideas behind the predicted sign change of certain TMD PDFs and factorization breaking result from the same physical process of soft gluons being exchanged between participants of the hard scattering and remnants of the collision. These predictions represent major qualitative departures from purely perturbative approaches which do not consider the remnants of the collision at all. \par
In calculations of TMD processes where factorization is predicted to hold, the Collins-Soper (CS) equation is known to govern the evolution of nonperturbative functions in a TMD framework with the hard scale of the interaction $Q^2$~\cite{cs1,cs2}. Contrary to the purely perturbative collinear DGLAP evolution equations, the CS evolution equation for TMD processes involves the Collins-Soper-Sterman (CSS) soft factor~\cite{css_soft}, which contains nonperturbative contributions. The theoretical expectation from CSS evolution is that momentum widths sensitive to nonperturbative transverse momentum should increase with the hard scale of the interaction. This can intuitively be attributed to an increased phase space for hard gluon radiation, and thus a broader transverse momentum distribution. This behavior has been studied and confirmed in several phenomenological analyses of DY and Z boson data (see e.g.~\cite{dy1,dy2,sidisdy}) as well as phenomenological analyses of SIDIS data (see e.g.~\cite{sidisdy,sidis1,sidis2}). Since the CS evolution equation comes directly out of the derivation of TMD factorization~\cite{css_tmd}, it follows that a promising avenue for potentially observing factorization-breaking effects is to study deviations from CSS evolution in processes where factorization breaking is predicted, such as dihadron and direct photon-hadron angular correlations in $p$$+$$p$ collisions. \par
\section{Dihadron and Direct Photon-Hadron Angular Correlations}
Dihadron and direct photon-hadron angular correlations are both predicted to be sensitive to factorization breaking effects because there are hadrons in both the initial and final states, thus the potential for soft gluon exchanges in both the initial and final states exists. Additionally these processes can be treated in a TMD framework when the two particles are nearly back-to-back and have large $p_T$; a hard scale is defined with the large $p_T$ of the particles and the process is also sensitive to the convolution of initial-state and final-state small transverse momentum scales $k_T$ and $j_T$. Here $k_T$ refers to the initial-state partonic transverse momentum due to the confined nature of the partons and soft or hard gluon radiation, and $j_T$ refers to the final-state fragmentation transverse momentum due to soft or hard gluon radiation in the hadronization process. \par
\begin{figure}[thb]
\includegraphics[width=1.0\linewidth]{kTkinematics.pdf} \\
\includegraphics[width=1.0\linewidth]{dp_kt_kinematics.pdf}
\caption{\label{fig:ktkinematics}
A schematic diagram showing event hard-scattering kinematics for (a) dihadron and (b) direct photon-hadron processes in the transverse plane. Two hard scattered partons with transverse momenta $\hat{p}_T^{\rm trig}$ and $\hat{p}_T^{\rm assoc}$ are acoplanar due to the initial-state partonic $k_T$, given by $|\vec{k}_T^1+\vec{k}_T^2|$. The scattered partons result in two fragmented hadrons that are additionally acoplanar due to the final-state transverse momentum perpendicular to the jet axis, $j_{T_y}$. For (b) direct photon-hadron events, only one away-side jet fragment is produced since the direct photon is colorless. The transverse momentum component perpendicular to the trigger particle's axis is labeled as $\mbox{$p_{\rm out}$}\xspace$.
}
\end{figure}
Figure~\ref{fig:ktkinematics} shows a kinematic diagram in the transverse plane for both dihadron (top) and direct photon-hadron (bottom) events. At leading order the hard scattered partons are exactly back-to-back, but due to initial-state $k_T$ the partons are acoplanar by some amount $|\vec{k}_T^1+\vec{k}_T^2|$. The fragmentation process introduces an additional transverse momentum component $j_{T_y}$ which is assumed to be Gaussian distributed about the parton axes such that $\sqrt{\langle j_T^2\rangle}=\sqrt{2\langle j_{T_y}^2\rangle}$. The transverse momentum component perpendicular to the trigger particle's axis is labeled as $\mbox{$p_{\rm out}$}\xspace$ and is sensitive to initial and final state $k_T$ and $j_T$, where the trigger particle refers collectively to the direct photon or near-side hadron. Measuring the azimuthal angular separation between the near-side trigger particle and away-side associated particle allows calculating \mbox{$p_{\rm out}$}\xspace with the following equation:
\begin{equation}
\mbox{$p_{\rm out}$}\xspace = \mbox{$p_{\rm T}^{\rm assoc}$}\xspace\sin\mbox{$\Delta\phi$}\xspace
\end{equation}
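In code form, with $\mbox{$\Delta\phi$}\xspace$ in radians (a trivial helper; the name is ours):

```python
import math

def p_out(pt_assoc, dphi):
    # Component of the associated hadron's transverse momentum
    # perpendicular to the trigger axis: p_out = pT_assoc * sin(dphi).
    return pt_assoc * math.sin(dphi)
```

An exactly back-to-back pair ($\Delta\phi=\pi$) gives $\mbox{$p_{\rm out}$}\xspace=0$; the nonzero values are generated by the $k_T$- and $j_T$-induced acoplanarity.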
The results presented were measured by the PHENIX collaboration at the Relativistic Heavy Ion Collider at Brookhaven National Lab. The PHENIX detector covers a pseudorapidity interval of $|\eta|<0.35$, and has two arms which in total span an azimuthal region of $\Delta\phi\sim\pi$. Lead scintillator and lead glass electromagnetic calorimeters provide measurements of isolated direct photons and neutral pions via their two photon decay. To measure away-side particles, the PHENIX detector employs a drift chamber and pad chamber tracking system that measures nonidentified charged hadrons. These results were measured from $p$$+$$p$ collisions at a center-of-mass energy of $\sqrt{s}=510$ GeV, and were recently submitted to the arXiv~\cite{ppg195}.
\section{Results}
\begin{figure*}[thb]
\includegraphics[width=1.0\linewidth]{dphi_correlations_rmspoutfits_nearsidefits.pdf}
\caption{\label{fig:dphis}
Per trigger yields of charged hadrons as a function of $\mbox{$\Delta\phi$}\xspace$ are shown in several \mbox{$p_{\rm T}^{\rm trig}$}\xspace and \mbox{$p_{\rm T}^{\rm assoc}$}\xspace bins for both \mbox{$\pi^{0}$}\xspace-hadron and direct photon-hadron correlations. The near side yield of the isolated direct photon-hadron correlations is not shown due to the presence of an isolation cut. The blue and red solid lines shown on the away-sides of the distributions are fits to extract the quantity \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace.
}
\end{figure*}
Figure~\ref{fig:dphis} shows the measured per-trigger yields as a function of \mbox{$\Delta\phi$}\xspace for both \mbox{$\pi^{0}$}\xspace-hadron and direct photon-hadron correlations. The \mbox{$\pi^{0}$}\xspace-hadron correlations show the expected two-jet structure, with two nearly back-to-back peaks around \mbox{$\Delta\phi$}\xspace$\sim0$ and \mbox{$\Delta\phi$}\xspace$\sim\pi$. The near side of the direct photon-hadron correlations is omitted due to the presence of an isolation cut on the direct photons; thus the near side is not physically interpretable. Additionally, the away-side jets are sensitive to the effects of $k_T$ broadening, so they are the yields of interest. The direct photon-hadron correlations have a smaller yield than the \mbox{$\pi^{0}$}\xspace-hadron correlations because the direct photon emerges from the hard scattering at leading order; thus the direct photon-hadron correlations probe a smaller jet energy than the \mbox{$\pi^{0}$}\xspace-hadron correlations. \par
The measured $\mbox{$p_{\rm out}$}\xspace$ per-trigger yields are shown in Fig.~\ref{fig:pouts} for both $\mbox{$\pi^{0}$}\xspace$-$\mbox{\text{h}$^\pm$}\xspace$ and direct photon-$\mbox{\text{h}$^\pm$}\xspace$ angular correlations. The open points show the distributions for $\mbox{$\pi^{0}$}\xspace$-hadron correlations while the filled points show the distributions for the direct photon-hadron correlations. The distributions are constructed only for away-side charged hadrons, which are sensitive to initial-state $k_T$ and final-state $j_T$. The $\mbox{$p_{\rm out}$}\xspace$ distributions show two distinct regions: at small $\mbox{$p_{\rm out}$}\xspace\sim0$, where the particles are nearly back-to-back, a Gaussian shape can be seen, while at larger $\mbox{$p_{\rm out}$}\xspace$ a power law shape is clear. These two shapes indicate a transition from nonperturbatively generated transverse momentum due to soft gluon emission in the Gaussian region to perturbatively generated transverse momentum due to hard gluon emission in the power law region. The distributions are fit with a Gaussian function at small $\mbox{$p_{\rm out}$}\xspace$ and a Kaplan function over the entire distribution. The Gaussian function clearly fails at $\mbox{$p_{\rm out}$}\xspace\sim$1.3 GeV/$c$, while the Kaplan function accurately describes the transition from Gaussian behavior to power law behavior. \par
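The two-component shape can be illustrated with a toy spectrum. The sketch below assumes a Kaplan-type form $A(1+p_{\rm out}^2/b^2)^{-n}$, a common parameterisation that we adopt for illustration (the paper does not spell out its exact functional form, and $A$, $b$, $n$ here are not fitted values from the data), and fits a Gaussian to the nearly back-to-back core only, exploiting the linearity of $\log f$ in $p_{\rm out}^2$:

```python
import numpy as np

# Toy p_out spectrum with a Kaplan-type shape f = A*(1 + p^2/b^2)^(-n);
# A, b, n are illustrative values, not fitted parameters from the data.
A, b, n = 1.0, 1.5, 4.0
p = np.linspace(0.0, 5.0, 101)
f = A * (1.0 + (p / b) ** 2) ** (-n)

# Gaussian fit restricted to the core (p_out < 1.3):
# log f = log A - p^2/(2 sigma^2) is linear in p^2.
core = p < 1.3
slope, intercept = np.polyfit(p[core] ** 2, np.log(f[core]), 1)
sigma = np.sqrt(-0.5 / slope)
gauss = np.exp(intercept - p ** 2 / (2.0 * sigma ** 2))

# The Gaussian tracks the core but falls far below the power-law tail,
# mirroring the measured transition near p_out ~ 1.3 GeV/c.
tail_ratio = f[p > 3.0] / gauss[p > 3.0]
```

The core-only Gaussian undershoots the tail by orders of magnitude, which is the same qualitative failure seen in the measured distributions.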
\begin{figure*}[thb]
\includegraphics[width=1.0\linewidth]{pouts_allfits.pdf}
\caption{\label{fig:pouts}
Per trigger yields of charged hadrons as a function of $\mbox{$p_{\rm out}$}\xspace$ are shown in several $\mbox{$p_{\rm T}^{\rm trig}$}\xspace$ bins for both $\mbox{$\pi^{0}$}\xspace$-$\mbox{\text{h}$^\pm$}\xspace$ and direct photon-$\mbox{\text{h}$^\pm$}\xspace$ correlations. The distributions are fit with a Gaussian function at small $\mbox{$p_{\rm out}$}\xspace$ and a Kaplan function over the entire range. The Gaussian fit clearly fails after $\sim$1.3 GeV/c, indicating a transition from nonperturbatively generated $k_T$ and $j_T$ to perturbatively generated $k_T$ and $j_T$.
}
\end{figure*}
To search for effects from factorization breaking, a comparison to the expectation from CSS evolution must be made using momentum widths that are sensitive to nonperturbative transverse momentum. To make this comparison, two different momentum widths were extracted from the measured correlations. Figure~\ref{fig:rmspouts} shows the root mean square of \mbox{$p_{\rm out}$}\xspace as a function of $\mbox{$p_{\rm T}^{\rm trig}$}\xspace$, which is extracted from fits to the entire away-side jet region in Fig.~\ref{fig:dphis}. The away-side fits can be seen in Fig.~\ref{fig:dphis} as red and blue solid lines around $\mbox{$\Delta\phi$}\xspace\sim\pi$. The values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace clearly decrease for both the \mbox{$\pi^{0}$}\xspace-hadron and direct photon-hadron correlations, which is the opposite of what is predicted from CSS evolution. The direct photon-hadron correlations show a stronger dependence than the \mbox{$\pi^{0}$}\xspace-hadron correlations in the same region of \mbox{$p_{\rm T}^{\rm trig}$}\xspace. Since the values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace are extracted from the entire away-side jet region, \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace is sensitive to both perturbatively and nonperturbatively generated $k_T$ and $j_T$. While it is dominated by nonperturbatively generated transverse momentum, since the majority of charged hadrons lie in the nearly back-to-back region $\mbox{$p_{\rm out}$}\xspace\sim0$ or $\mbox{$\Delta\phi$}\xspace\sim\pi$, an observable sensitive only to nonperturbatively generated transverse momentum is better suited for comparisons to CSS evolution.\par
\begin{figure}[thb]
\includegraphics[width=\linewidth]{rmspout_vspttrig.pdf}
\caption{\label{fig:rmspouts}
The extracted values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace are shown as a function of the interaction hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace for both \mbox{$\pi^{0}$}\xspace-hadron and direct photon-hadron correlations. The momentum widths decrease with the interaction hard scale, which is opposite the prediction from CSS evolution.
}
\end{figure}
Since the Gaussian fits to the \mbox{$p_{\rm out}$}\xspace distributions are taken in only the nearly back-to-back region, the widths of the Gaussian functions are momentum widths that are sensitive to only nonperturbative transverse momentum. Figure~\ref{fig:widths} shows the measured Gaussian widths as a function of the interaction hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace for both direct photon-hadron and \mbox{$\pi^{0}$}\xspace-hadron correlations. Similarly to the values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace from Fig.~\ref{fig:rmspouts}, the Gaussian widths decrease as a function of the hard scale, which is the opposite of the prediction from CSS evolution. The direct photon-hadron widths show a stronger dependence on \mbox{$p_{\rm T}^{\rm trig}$}\xspace than the \mbox{$\pi^{0}$}\xspace-hadron widths, similar to the values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace. \par
\begin{figure}[thb]
\includegraphics[width=\linewidth]{gausswidths_withpythia.pdf}
\caption{\label{fig:widths}
The measured and {\sc pythia} simulated Gaussian widths from the \mbox{$p_{\rm out}$}\xspace distributions are shown as a function of the hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace. The widths, sensitive to only nonperturbative transverse momentum, decrease with \mbox{$p_{\rm T}^{\rm trig}$}\xspace. Surprisingly, {\sc pythia} nearly replicates the evolution behavior for both the direct photon-hadron and \mbox{$\pi^{0}$}\xspace-hadron correlations despite a $\sim$15\% difference in the magnitude of the {\sc pythia} widths.
}
\end{figure}
Since theoretical calculations were not available, {\sc pythia}~\cite{pythia} was used to simulate direct photon-hadron and \mbox{$\pi^{0}$}\xspace-hadron correlations and to study the evolution of momentum widths with the hard scale of the interaction. The Perugia0~\cite{perugia} tune was used since it was tuned to Tevatron Z boson data at small \mbox{$p_T$}\xspace; therefore it should adequately reproduce events with a small amount of total \mbox{$p_T$}\xspace. {\sc pythia} direct photon and dijet events were produced, and the \mbox{$p_{\rm out}$}\xspace distributions were constructed directly from the simulation in exactly the same way as they were measured in data. Although the magnitude of the widths from the simulation differs by roughly 15\% in each bin, the simulation remarkably nearly reproduces the measured evolution of the Gaussian widths, as seen in Fig.~\ref{fig:widths}. It is plausible that {\sc pythia} could be sensitive to the effects from factorization breaking due to the way a \mbox{$p$+$p$}\xspace event is processed. Unlike a standard perturbative QCD calculation, {\sc pythia} forces all particles to color neutralize in the event. This includes allowing initial- and final-state gluon interactions, which are the necessary physical mechanism for factorization breaking and are additionally necessary to color neutralize all of the objects in the event. \par
\section{Conclusion}
In hadronic collisions where at least one final-state hadron is measured, factorization breaking has been predicted in a TMD framework. When color is present in both the initial and final states, soft gluon exchanges between participants in the hard scattering and the remnants of the collision are possible, leading to novel color flows throughout the entire scattering process. Nearly back-to-back dihadron and direct photon-hadron angular correlations in \mbox{$p$+$p$}\xspace collisions at \mbox{$\sqrt{s}$}\xspace=510 GeV from the PHENIX experiment at the Relativistic Heavy Ion Collider were measured to probe possible effects from factorization breaking~\cite{ppg195}. The transverse momentum component perpendicular to the near-side trigger particle, \mbox{$p_{\rm out}$}\xspace, was used to compare predictions from CSS evolution. CSS evolution, which comes directly out of the derivation of TMD factorization~\cite{css_tmd}, predicts that momentum widths sensitive to nonperturbative transverse momentum should increase with increasing hard scale due to the broadened phase space for perturbatively generated gluon radiation. This dependence has been observed in phenomenological fits to both DY and SIDIS data~\cite{dy1,dy2,sidisdy,sidis1,sidis2}. \par
The measured correlations at PHENIX show the opposite dependence from the prediction of CSS evolution; momentum widths in hadronic collisions where at least one final-state hadron is measured decrease with the interaction hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace~\cite{ppg195}. Remarkably, {\sc pythia} replicates this behavior in both direct photon-hadron correlations and \mbox{$\pi^{0}$}\xspace-hadron correlations. While {\sc pythia} certainly does not consider effects from factorization breaking as it relies on collinear factorization, the necessary physical mechanism that results in the predicted factorization breaking is present in {\sc pythia}; gluon exchanges with the remnants are possible in a {\sc pythia} simulated event since all colored objects are forced to color neutralize in any given event, unlike a standard perturbative QCD calculation. \par
\section{Introduction}
The discovery of halo nuclei (neutron-halo and proton-halo) in the neighborhood of the drip lines is one of the major achievements of the advancement of radioactive ion-beam facilities. A halo structure is characterized by a relatively stable and denser core surrounded by one or more weakly bound valence nucleon(s), giving rise to a long extended tail in the density distribution. This low-density tail is supposed to be the consequence of quantum mechanical tunneling of the last nucleon(s) through a shallow barrier following an attractive well, which appears due to the short-range nuclear interaction, at energies smaller than the height of the barrier. In halo nuclei, one seldom finds any excited bound states, because the shallow well can support at most one bound state at energies less than 1 MeV. Halo nuclei have high scientific significance as they exhibit one or more resonance state(s) just above the binding threshold. Among the observed halo nuclei, $^{17}$B and $^{19}$C show a one-neutron halo; $^6$He, $^{11}$Li and $^{11, 14}$Be show a two-neutron halo; $^8$B and $^{26}$P show a one-proton halo; $^{17}$Ne and $^{27}$S show a two-proton halo; and $^{14}$Be and $^{19}$B show a four-neutron halo structure \cite{tanihata-1985, kobayashi-2012, hwash-2017, jensen-2000, schwab-2000, tanaka-2010}. Halo nuclei are characterized by their unusually large r.m.s. matter radii (larger than the liquid-drop model prediction $R_A\propto A^{1/3}$) \cite{audi-2003, acharya-2013} and sufficiently small two-nucleon separation energies (typically less than 1 MeV). Tanaka et al. (2010) \cite{tanaka-2010} observed a large reaction cross-section in the drip-line nucleus $^{22}$C, Kobayashi et al. (2012) \cite{kobayashi-2012} studied one- and two-neutron removal reactions from the most neutron-rich carbon isotopes, and Gaudefroy et al. (2012) \cite{gaudefroy-2012} carried out direct mass measurements of $^{19}$B, $^{22}$C, $^{29}$F, $^{31}$Ne, $^{34}$Na and some other light exotic nuclei.
Togano et al. (2016) \cite{togano-2016} studied the interaction cross-section of the two-neutron halo nucleus $^{22}$C. \newline The nuclear matter distribution of such nuclei has an extended low-density tail forming a halo around the more localized dense nuclear core. Thus, in addition to bound-state properties, the continuum spectrum is another significant ingredient in the investigation of the structure and interparticle interactions of exotic few-body systems like the halo nuclei. It is worth stating here that the study of resonances is of particular interest in many branches of physics involving weakly bound systems in which only a few bound states are possible.\newline In the literature, three main theoretical approaches have been used to explore the structure of 2n-halo nuclei. The first is the microscopic model approach, in which the valence neutrons are supposed to move around the conglomerate of other nucleons (protons and neutrons) without any stable core. The second is the three-body cluster model, in which the valence nucleons are assumed to move around a structureless inert core. The third is the microscopic cluster model, in which the valence nucleons move around the deformed excited core \cite{saaf-2014, nesterov-2010, korennov-2004}. Several theoretical approaches have been employed for the computation of resonant states. Some of these are the positive-energy solution of the Faddeev equation \cite{cobis-1997}, complex coordinate rotation (CCR) \cite{csoto-1993, ayoma-1995}, the analytic computation of bound state energies \cite{tanaka-1997}, the algebraic version of the resonating group method (RGM) \cite{vasilevsky-2001}, the continuum-discretized coupled-channels (CDCC) method coupled to the cluster-orbital shell model (COSM) \cite{ogata-2013}, and the hyperspherical harmonics method (HHM) for scattering states \cite{danilin-1997}.
In most of these theoretical approaches, Jacobi coordinates are used to separate the centre-of-mass motion and describe the relative motion.
\newline One of the most challenging obstacles in the calculation of resonances in a weakly bound nucleus is the large computational error involved. We overcome this obstacle by adopting a novel theoretical approach, interfacing the algebra of supersymmetric quantum mechanics (SSQM) with that of the hyperspherical harmonics expansion method. In this scheme, one can handle the ground state as well as the resonant states on the same footing. The technique is based on the fact that, for any given potential ($U$), one can construct a family of isospectral potentials ($\hat{U}$) depending on an adjustable parameter ($\lambda$). When the original potential has a significantly low and excessively wide barrier (poorly supporting the resonant state), $\lambda$ can be chosen judiciously to enhance both the depth of the well and the height of the barrier in $\hat{U}$. This enhanced well-barrier combination in $\hat{U}$ facilitates trapping of the particle, which in turn permits a more accurate computation of the resonant state at the same energy as in the case of $U$, since $U$ and $\hat{U}$ are {\bf strictly isospectral}.
\newline To test the effectiveness of the scheme, we apply it to the first $0^+$ resonant states of the carbon isotopes $^A$C, for $A=18$ and 20. We choose a three-body (2n+$^{\rm A-2}$C) cluster model for each of these isotopes, in which the two valence neutrons move around the relatively heavy core $^{A-2}$C. The lowest eigen potential derived for these three-body systems has a shallow well followed by a low and sufficiently wide barrier. Such a barrier gives rise to a large resonance width. One can, in principle, find quasi-bound states in such a shallow potential, but that poses a difficult numerical task. For a finite barrier height, a particle can be trapped temporarily in the shallow well when its energy is close to the resonance energy. However, there is a finite probability that the particle tunnels out through the barrier. Thus, the resonance energy is easily masked by the large resonance width resulting from the large tunneling probability associated with a low barrier, and a straightforward calculation of the resonance energies of such systems fails to yield accurate results.
\newline We adopt the hyperspherical harmonics expansion method (HHEM) \cite{ballot-1982} to solve the three-body Schr\"{o}dinger equation in relative coordinates. In the HHEM, the three-body relative wavefunction is expanded in a complete set of hyperspherical harmonics (HH). Substitution of the expansion in the Schr\"{o}dinger equation and use of the orthonormality of the HH gives rise to an infinite set of coupled differential equations (CDE). The method is essentially exact, involving no approximation other than an eventual truncation of the expansion basis, subject to the desired precision in the energy and the capacity of the available computer. The hyperspherical convergence theorem \cite{schneider-1972} permits extrapolation of the results computed with a finite expansion basis to estimate those for even larger bases. However, since the convergence of the HH expansion is quite slow, one needs to solve a large number of CDEs to achieve the desired precision. We therefore use the hyperspherical adiabatic approximation (HAA) \cite{ballot1-1982} to construct a single differential equation (SDE), which is solved for the lowest eigen potential $U_0(\rho)$ to get the ground-state energy $E_0$ and the corresponding wavefunction $\psi_0(\rho)$ \cite{das-1982}.
\newline We next derive the isospectral potential $\hat{U}(\lambda,\rho)$ following the algebra of SSQM \cite{cooper-1995, khare-1989, nieto-1984}. Finally, we solve the SDE for $\hat{U}(\lambda,\rho)$ at various positive energies to get the wavefunction. We then compute the corresponding probability density for finding the particle within the deep, narrow well enclosed by the enhanced barrier. A plot of this probability as a function of energy shows a sharp peak at the resonance energy. The actual width of the resonance can be obtained by back-transforming the wavefunction $\hat{\psi}(\lambda, \rho)$ corresponding to $\hat{U}(\lambda,\rho)$ to $\psi(\rho)$ of $U(\rho)$.
\newline The paper is organized as follows. In section 2, we briefly review the HHE method. In section 3, we present a precise description of the SSQM algebra used to construct the one-parameter family of isospectral potentials $\hat{U}(\lambda,\rho)$. The results of our calculation are presented in section 4, while conclusions are drawn in section 5.
\section{Hyperspherical Harmonics Expansion Method}
For the three-body model of the nuclei $^{A-2}$C+n+n, the relatively heavy core $^{A-2}$C is labeled particle 1, and the two valence neutrons are labeled particles 2 and 3. There are thus three possible partitions for the choice of Jacobi coordinates. In a chosen partition, say the $i^{th}$, the particle labeled $i$ plays the role of spectator while the remaining two particles form the interacting pair. In this partition the Jacobi coordinates are defined as
\begin{equation}
\vec{x_{i}} = a_i(\vec{r_{j}} - \vec{r_{k}}); \vec{y_{i}} = \frac{1}{a_i} \left(\vec{r_{i}} -\frac{m_{j}\vec{r_{j}} + m_{k} \vec{r_{k}}}{ m_{j} + m_{k}} \right); \vec{R}= \frac{\sum_{i=1}^3 m_{i}\vec{r_{i}}}{M} \end{equation}
where $i,j,k$ form a cyclic permutation of 1, 2, 3. The parameter $a_i= \left[\frac{m_{j} m_{k}M}{m_{i}(m_{j}+m_{k})^{2}} \right]^{\frac{1}{4}}$; $m_{i}$ and $\vec{r_{i}}$ are the mass and position of the $i^{th}$ particle, while $M(=\sum_{i=1}^3m_{i})$ and $\vec{R}$ are those of the centre of mass (CM) of the system. In terms of the Jacobi coordinates, the relative motion of the three-body system is described by \begin{equation}
\left\{ - \frac{\hbar^{2}}{2\mu} \nabla_{x_{i}}^{2}- \frac{\hbar^{2}}{2\mu} \nabla_{y_{i}}^{2}+
V_{jk} (\vec{x_{i}})
+V_{ki} (\vec{x_{i}}, \vec{y_{i}} ) + V_{ij} (\vec{x_{i}}, \vec{y_{i}})-E
\: \right\} \Psi (\vec{x_{i}}, \vec{y_{i}}) = 0
\end{equation}
where $\mu = \left[ \frac{m_{i} m_{j} m_{k}}{M} \right]^{\frac{1}{2}}$ is the reduced mass of the system, $V_{ij}$ represents the interaction potential between particles $i$ and $j$, and $x_{i} = \rho \cos \phi_{i}$, $y_{i}= \rho \sin \phi_{i}$, with $\phi_i=\tan^{-1}(\frac{y_i}{x_i})$ and $\rho=\sqrt{x_i^2+y_i^2}$. The hyperradius $\rho$ together with the five angular variables $\Omega_{i} \equiv \{\phi_{i}, \theta_{x_{i}}, \phi_{x_{i}}, \theta_{y_{i}}, \phi_{y_{i}} \}$ constitutes the hyperspherical coordinates of the system. The Schr\"{o}dinger equation in the hyperspherical variables $(\rho, \Omega_{i})$ becomes
\begin{equation}
\left\{ - \frac{\hbar^{2}}{2\mu}\left(\frac{\partial^2}{\partial\rho^2}
+ \frac{5}{\rho}\frac{\partial}{\partial\rho}\right)+
\frac{\hbar^{2}}{2\mu}\frac{\hat{{\cal K}}^{2}(\Omega_{i})}{\rho^{2}} + V(\rho, \Omega_{i}) - E \right\} \Psi(\rho, \Omega_{i})= 0.
\end{equation}
In Eq. (3), $V(\rho, \Omega_{i}) = V_{jk} + V_{ki} + V_{ij}$ is the total interaction potential in the $i^{th}$ partition and $\hat{{\cal K}}^{2}(\Omega_{i})$ is the square of the hyperangular momentum operator, satisfying the eigenvalue equation
\begin{equation}
\hat{{\cal K}}^{2}(\Omega_{i}) {\cal Y}_{K \alpha_{i}}(\Omega_{i}) = K (K + 4) {\cal Y}_{K \alpha_{i}}(\Omega_{i}) \end{equation}
Here $K$ is the hyperangular momentum quantum number, $\alpha_{i}$ $\equiv\{l_{x_{i}}, l_{y_{i}}, L, M \}$, and ${\cal Y}_{K\alpha_{i}}(\Omega_{i})$ are the hyperspherical harmonics (HH), for which closed analytic expressions can be found in ref. \cite{cobis-1997}.
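As a quick numerical cross-check of the coordinate definitions, the sketch below (a minimal toy illustration, not part of the actual calculation; masses and positions are arbitrary stand-ins) builds the Jacobi vectors of Eq. (1) and verifies the defining property of the hyperradius, namely that $\rho$ comes out the same in all three partitions:

```python
import numpy as np

def jacobi_coordinates(r, m, i):
    """Jacobi vectors of Eq. (1) for partition i (particle i is the
    spectator); (i, j, k) is a cyclic permutation of (0, 1, 2)."""
    j, k = (i + 1) % 3, (i + 2) % 3
    M = m.sum()
    a = (m[j] * m[k] * M / (m[i] * (m[j] + m[k]) ** 2)) ** 0.25
    x = a * (r[j] - r[k])
    y = (r[i] - (m[j] * r[j] + m[k] * r[k]) / (m[j] + m[k])) / a
    return x, y

def hyperspherical(x, y):
    """Hyperradius rho and hyperangle phi built from the Jacobi vectors."""
    xm, ym = np.linalg.norm(x), np.linalg.norm(y)
    return np.hypot(xm, ym), np.arctan2(ym, xm)

# toy configuration: a core of mass 16 and two neutrons of mass 1
# (dimensionless stand-ins, not the physical 16C + n + n input)
m = np.array([16.0, 1.0, 1.0])
r = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
rhos = [hyperspherical(*jacobi_coordinates(r, m, i))[0] for i in range(3)]
# rho is identical in all three partitions, as required of a hyperradius
```

The partition independence of $\rho$ follows from $a_i^2=\sqrt{\mu_{jk}/\mu_{i(jk)}}$, which makes $\rho^2$ equal to the mass-weighted radius $\sum_a m_a(\vec{r}_a-\vec{R})^2/\mu$ with the partition-independent $\mu$ of Eq. (2).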
\newline In the HHEM, $\Psi(\rho, \Omega_{i})$ is expanded in the complete set of HH corresponding to the partition "$i$" as
\begin{equation}
\Psi(\rho, \Omega_{i}) = \sum_{K\alpha_{i}}\frac{\psi_{K\alpha_{i}} (\rho)}{\rho^{5/2}} {\cal Y}_{K\alpha_{i}}(\Omega_{i}) \end{equation}
Use of Eq. (5) in Eq. (3) and application of the orthonormality of the HH leads to a set of coupled differential equations (CDE) in $\rho$
\begin{equation}
\left\{-\frac{\hbar^{2}}{2\mu}\frac{d^{2}}{d\rho^{2}}
+\frac{\hbar^{2}}{2\mu}\frac{(K+3/2)(K+5/2)}{\rho^2}-E\right\}
\psi_{K\alpha_{i}}(\rho)
+\sum_{K^{\prime}\alpha_{i}^{\prime}} {\cal M}_{K\alpha_{i}}^{K^{\prime}\alpha_{i}^{\prime}}\psi_{K^{\prime}\alpha_{i}^{\prime}}(\rho)=0.
\end{equation}
where \begin{equation}
{\cal M}_{K\alpha_{i}}^{K^{\prime}\alpha_{i}^{\prime}} = \int {\cal Y}_{K\alpha_{i}}^{*}(\Omega_{i}) V(\rho, \Omega_{i}) {\cal Y}_{K^{\prime} \alpha_{i}^{~\prime}}(\Omega_{i}) d\Omega_{i}.
\end{equation}
The infinite set of CDEs represented by Eq. (6) is truncated to a finite set by retaining all $K$ values up to a maximum $K_{max}$ in the expansion (5). For a given $K$, all allowed values of $\alpha_{i}$ are included. The size of the basis is further restricted by symmetry requirements and the associated conserved quantum numbers. The reduced set of CDEs is then solved in the hyperspherical adiabatic approximation (HAA) \cite{ballot1-1982}. In the HAA, the CDEs are approximated by a single differential equation, assuming that the hyperradial motion is much slower than the hyperangular one. The angular part is therefore solved first for each fixed value of $\rho$: the potential matrix (including the hypercentrifugal repulsion term) is diagonalized at each $\rho$-mesh point, and its lowest eigenvalue is chosen as the lowest eigen potential $U_0(\rho)$ \cite{das-1982}. The energy of the system is then obtained by solving the hyperradial motion in $U_0(\rho)$, which is the effective potential for the hyperradial motion \begin{equation}
\left\{-\frac{\hbar^{2}}{2\mu}\frac{d^{2}}{d\rho^{2}} + U_0(\rho) - E \right\} \psi_{0}(\rho) = 0 \end{equation}
The renormalized Numerov algorithm, subject to appropriate boundary
conditions in the limits $\rho \rightarrow 0$ and $\rho\rightarrow\infty$,
is then applied to solve Eq. (8) for $E$ ($\leq E_0$). The hyper-partial wave $\psi_{K\alpha_{i}}(\rho)$ is given by
\begin{equation}
\psi_{K \alpha_{i}}(\rho) = \psi_{0}(\rho) \chi_{K \alpha_{i},0}(\rho)
\end{equation}
where $\chi_{K \alpha_{i},0}(\rho)$ is the ${(K\alpha_{i})}^{th}$ element of the eigenvector, corresponding to the lowest eigen potential $U_0(\rho)$.
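The hyperradial equation (8) is a one-dimensional eigenvalue problem, and the essence of its numerical treatment can be sketched with a plain (non-renormalized) Numerov integration combined with energy bisection. The sketch below is a toy version in arbitrary units ($\hbar=\mu=1$), validated on a harmonic potential rather than the actual eigen potential $U_0(\rho)$:

```python
import numpy as np

def numerov_endpoint(E, x, g_of):
    """Integrate y'' = g(x) y by the Numerov recurrence and return y at the
    right edge of the grid; here g(x) = 2*(U(x) - E) for hbar = mu = 1."""
    h = x[1] - x[0]
    g = g_of(x, E)
    c = 1.0 - h * h * g / 12.0
    y = np.zeros_like(x)
    y[0], y[1] = 0.0, 1e-6          # decaying start inside the left wall
    for n in range(1, len(x) - 1):
        y[n + 1] = ((12.0 - 10.0 * c[n]) * y[n] - c[n - 1] * y[n - 1]) / c[n + 1]
    return y[-1]

def ground_state_energy(U, xmax=6.0, npts=2001, lo=0.2, hi=0.8):
    """Bisect on the sign of the endpoint value to bracket the lowest
    eigenvalue of -(1/2) y'' + U(x) y = E y on [-xmax, xmax]."""
    x = np.linspace(-xmax, xmax, npts)
    g = lambda x, E: 2.0 * (U(x) - E)
    flo = numerov_endpoint(lo, x, g)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if numerov_endpoint(mid, x, g) * flo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# check against the harmonic oscillator U = x^2/2, whose exact E0 is 1/2
E0 = ground_state_energy(lambda x: 0.5 * x * x)
```

The renormalized variant used in the actual calculation propagates ratios of the Numerov-weighted wavefunction instead of the wavefunction itself, which avoids the overflow visible here in deep classically forbidden regions.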
\section{Construction of Isospectral Potential}
In this section we present a bird's-eye view of the construction of the one-parameter family of isospectral potentials. From Eq. (8) we have \begin{equation}
U_0(\rho) = E_0 + \frac{\hbar^{2}}{2\mu}\frac{\psi_0^{\prime\prime}(\rho)}{\psi_0(\rho)} \end{equation}
In 1-D supersymmetric quantum mechanics, one defines a superpotential for a system in terms of its ground state wave function ($\psi_{0}$) \cite{cooper-1995} as
\begin{equation}
W(\rho)= -\frac{\hbar}{\sqrt{2m}}\frac{\psi_{0}^{\prime}(\rho)}{\psi_{0}(\rho)}.
\end{equation}
The energy scale is next shifted by the ground-state energy $E_{0}$ of the
potential $U_0(\rho)$, so that in the shifted energy scale the new potential becomes
\begin{equation}
U_1(\rho) = U_0(\rho) - E_{0}=\frac{\hbar^{2}}{2\mu}\frac{\psi_0^{\prime\prime}(\rho)}{\psi_0(\rho)}\end{equation}
having its ground state at zero energy. One can then easily verify that $U_1(\rho)$ is expressible in terms of the superpotential via the Riccati equation
\begin{equation}
U_1(\rho) = W^{2}(\rho) - \frac{\hbar}{\sqrt{2m}}W^{\prime }(\rho).
\end{equation}
By introducing the operator pairs
\begin{equation}
\left. \begin{array}{lcl}
A^{\dag}& =& -\frac{\hbar}{\sqrt{2m}}\frac{d}{d\rho}+W(\rho)\\
A &=& \frac{\hbar}{\sqrt{2m}}\frac{d}{d\rho}+W(\rho)
\end{array} \right\}
\end{equation}
the Hamiltonian for $U_1$ becomes
\begin{equation}
H_1 = -\frac{\hbar^{2}}{2m} \frac{d^{2}}{d\rho^{2}} + U_1(\rho) = A^{\dag}A.
\end{equation}
The pair of operators $A^{\dag}$, $A$ serve the purpose of creation and annihilation of nodes in the wavefunction. Next we introduce the partner Hamiltonian $H_{2}$, corresponding to the SUSY partner potential $U_2$ of $U_1$, as
\begin{equation}
H_{2}=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{d\rho^{2}}+U_2(\rho)=AA^{\dag}
\end{equation}
where
\begin{equation}
U_2(\rho)=W^{2}(\rho)+\frac{\hbar}{\sqrt{2m}}W^{\prime}(\rho).
\end{equation}
Energy eigenvalues and wavefunctions corresponding to the SUSY partner Hamiltonians $H_1$ and $H_2$ are connected via the relations
\begin{equation}
\left. \begin{array}{lcl}
E_n^{(2)} & = & E_{n+1}^{(1)}, \quad E_0^{(1)}=0 \; (n=0, 1, 2, 3,...),\\
\psi_n^{(2)}&=&\left[E_{n+1}^{(1)}\right]^{-1/2}A\,\psi_{n+1}^{(1)}\\
\psi_{n+1}^{(1)}&=&\left[E_n^{(2)}\right]^{-1/2}A^{\dagger}\psi_{n}^{(2)}\\
\end{array} \right\}\\
\end{equation}
where $E_{n}^{(i)}$ represents the energy of the $n^{th}$ excited state of $H_{i}$ ($i=1, 2$). Thus $H_{1}$ and $H_{2}$ have identical spectra, except that the ground state of $H_{1}$ has no counterpart in the spectrum of $H_{2}$ \cite{cooper-1995}. Hence the potentials $U_1$ and $U_2$ are {\bf not strictly isospectral}.
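The degeneracy pattern of Eq. (18) can be checked numerically for the textbook case of the shifted harmonic oscillator, for which the superpotential is $W(x)=x$ in units $\hbar^2/2m=1$, so that $U_1=x^2-1$ and $U_2=x^2+1$. The sketch below (an illustrative toy, not part of the present calculation) diagonalizes both Hamiltonians on a grid:

```python
import numpy as np

def spectrum(U, xmax=8.0, npts=800, k=4):
    """Lowest k eigenvalues of H = -d^2/dx^2 + U(x) (units hbar^2/2m = 1),
    by dense finite-difference diagonalization on [-xmax, xmax]."""
    x = np.linspace(-xmax, xmax, npts)
    h = x[1] - x[0]
    H = (np.diag(2.0 / h**2 + U(x))
         - np.diag(np.full(npts - 1, 1.0 / h**2), 1)
         - np.diag(np.full(npts - 1, 1.0 / h**2), -1))
    return np.linalg.eigvalsh(H)[:k]

# W(x) = x gives U1 = W^2 - W' = x^2 - 1 and the partner U2 = W^2 + W' = x^2 + 1
E1 = spectrum(lambda x: x * x - 1.0)   # expected 0, 2, 4, 6
E2 = spectrum(lambda x: x * x + 1.0)   # expected 2, 4, 6, 8
# E1[0] = 0 and E2[n] = E1[n+1]: the partner loses the zero-energy ground
# state of H1 but is otherwise degenerate with it
```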
\newline However, one can construct a one-parameter family of {\bf strictly isospectral} potentials $\hat{U_1}(\lambda, \rho)$, exploiting the fact that, for a given $U_1(\rho)$, the superpotential $W(\rho)$ is not unique (see Eqs. (12) \& (13)), since the Riccati equation is nonlinear. Following \cite{cooper-1995, khare-1989, darboux-1982}, it can be shown that the most general superpotential satisfying the Riccati equation (13) for $U_1(\rho)$ is given by
\begin{equation}
\hat{W}(\rho)=W(\rho)+\frac{\hbar}{\sqrt{2m}}\frac{d}{d\rho}\log [I_{0}(\rho) +
\lambda]
\end{equation}
where $\lambda$ is a constant of integration, and $I_0$ is given by
\begin{equation}
I_{0}(\rho)={\displaystyle\int}_{\rho^{\prime}=0}^{\rho} {[\psi_{0}(\rho^{\prime})]}^{2} d\rho^{\prime},
\end{equation}
in which $\psi_0(\rho)$ is the normalized ground state wave function of $U_0(\rho)$. The potential
\begin{equation}
\hat{U_1}(\lambda, \rho)= \hat{W}^{2}(\rho)-\frac{\hbar}{\sqrt{2m}}
\hat{W}^{\prime}(\rho)
= U_1(\rho) - 2\frac{\hbar^{2}}{2m}
\frac{d^{2}}{d\rho^{2}}\log [I_0(\rho) + \lambda],
\end{equation}
has the same SUSY partner $U_2(\rho)$. Hence $\hat{U_1}(\lambda, \rho)$ also has its ground state at zero energy, with the corresponding normalized wavefunction given by
\begin{equation}
\hat{\psi_0}(\lambda, \rho)= \frac{\sqrt{\lambda(1+\lambda)}\,\psi_0(\rho)}{I_0(\rho)+\lambda}.
\end{equation}
Hence, the potentials $\hat{U_1}(\lambda, \rho)$ and $U_1(\rho)$ are {\bf strictly isospectral}. The parameter $\lambda$ is arbitrary in the intervals $-\infty<\lambda<-1$ and $0<\lambda<\infty$; since $I_{0}(\rho)$ lies between 0 and 1, the interval $-1\leq \lambda\leq 0$ is forbidden, in order to bypass singularities in $\hat{U_1}(\lambda, \rho)$. For $\lambda \rightarrow \pm \infty$, $\hat{U_1} \rightarrow U_1$, while for $\lambda \rightarrow 0+$, $\hat{U_1}$ develops a narrow and deep attractive well in the vicinity of the origin. This well-barrier combination effectively traps the particle, giving rise to a sharp resonance. The method has been
tested successfully for a 3D finite square-well potential \cite{das-2001}, choosing parameters capable of supporting one or more resonance state(s) in addition to one bound state. The nuclei $^{18, 20}$C have ground states with $T=1$, $J^{\pi}=0^+$, and there exist resonance states of the same $J^{\pi}$. Thus, the foregoing procedure starting from the ground state of $^{18, 20}$C will give the $T=1$, $J^{\pi}=0^+$ resonance(s). To search for the resonance energy, we compute the probability of finding the system within the well region of the potential $\hat{U_1}(\lambda, \rho)$ at energy $E$ ($>0$) by integrating the probability density up to the top of the barrier:
\begin{equation}
G(E)=\int_{\rho^{\prime}=0}^{\rho_B} |\hat{\psi_E}(\rho^{\prime},\lambda)|^2d\rho^{\prime}
\end{equation}
where $\rho_B$ indicates the position of the top of the barrier of the potential $\hat{U_1}(\lambda, \rho)$ for the chosen $\lambda$. Here $\hat{\psi_E}(\lambda, \rho)$, the solution for the potential $\hat{U_1}(\lambda, \rho)$ at positive energy $E$, is normalized to a constant amplitude in the asymptotic region. A plot of $G(E)$ against increasing $E$ ($E > 0$) shows a peak at the resonance energy $E=E_R$. The choice of $\lambda$ has to be made judiciously, to avoid the numerical errors that enter the wavefunction in the extremely narrow well as $\lambda\rightarrow 0+$. The width of the resonance can be obtained from the mean life of the state using the energy-time uncertainty relation. The mean life is the reciprocal of the decay constant, and the decay constant is the product of the number of hits per unit time on the barrier and the probability of tunneling through it.
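The construction of Eqs. (19)--(21) is straightforward to implement once $\psi_0(\rho)$ is known. The sketch below is a toy illustration using the $s$-wave oscillator $U_1=\rho^2-3$ (units $\hbar^2/2m=1$), whose normalized zero-energy ground state is known analytically, rather than the eigen potential of $^{18,20}$C; it builds $I_0(\rho)$ and $\hat{U_1}(\lambda,\rho)$ and exhibits the deepening of the well as $\lambda\rightarrow 0+$:

```python
import numpy as np

rho = np.linspace(1e-4, 10.0, 4000)
h = rho[1] - rho[0]

# toy stand-in for the shifted lowest eigen potential: U1 = rho^2 - 3,
# whose normalized zero-energy ground state is known in closed form
psi0 = (2.0 / np.pi**0.25) * rho * np.exp(-rho**2 / 2.0)
U1 = rho**2 - 3.0

# I0(rho) of Eq. (20) by a cumulative trapezoidal rule
I0 = np.concatenate(([0.0], np.cumsum(0.5 * h * (psi0[1:]**2 + psi0[:-1]**2))))

def U1_hat(lam):
    """One-parameter isospectral family of Eq. (21)."""
    d2 = np.gradient(np.gradient(np.log(I0 + lam), rho), rho)
    return U1 - 2.0 * d2

# lam -> infinity recovers U1, while lam -> 0+ digs a deep, narrow well
# near the origin, exactly the behavior exploited in the text
```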
\section{Results and discussions}
Eq. (6) is solved using the GPT n-n potential \cite{gogny-1970} and the core-n SBB potential \cite{sack-1954}. The range parameter $b_{cn}$ of the core-n potential is slightly adjusted to match the experimental ground-state spectra. The calculated two-neutron separation energies ($S_{2n}$), the relative convergence in energy ($=\frac{E(K_{max}+4)-E(K_{max})}{E(K_{max}+4)}$) and the rms matter radii (R$_A$) for gradually increasing $K_{max}$ are listed in Table 1 for both $^{18}$C and $^{20}$C. Although the computed results show a clear convergence trend with increasing $K_{max}$, they are still far from full convergence even at $K_{max}=24$. For this reason, we used an extrapolation technique, successfully applied to atomic systems \cite{das-1994, khan-2012} as well as to nuclear systems \cite{khan-2001}, to get converged values of about 4.91 MeV for $^{18}$C and 3.51 MeV for $^{20}$C, as shown in columns 2 and 4 of Table 2. The contributions of the different partial waves $l_x = 0, 1, 2, 3, 4$ to the two-neutron separation energies are presented in Table 3. The variation of the two-neutron separation energy as a function of $K_{max}$ is shown in Figure 1 for both $^{18}$C and $^{20}$C, and the relative convergence trend in energy as a function of $K_{max}$ in Figure 2. Figures 3 and 4 present a 3D view of the correlation density profiles of the halo nuclei $^{18}$C and $^{20}$C, while Figures 5 and 6 show the 2D projections of the 3D probability density distributions. The figures clearly indicate the halo structure of these nuclei, comprising a dense core surrounded by a low-density tail.
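The convergence measure and extrapolation used above can be illustrated on a toy sequence (hypothetical numbers; the actual extrapolation follows the prescription of \cite{das-1994, khan-2012, khan-2001}). The sketch assumes the increments $E(K_{max}+4)-E(K_{max})$ shrink geometrically, in which case the $K_{max}\rightarrow\infty$ limit is estimated by summing the remaining geometric tail:

```python
import numpy as np

def relative_convergence(E):
    """(E(Kmax+4) - E(Kmax)) / E(Kmax+4) for consecutive entries of a
    sequence tabulated at Kmax steps of 4."""
    E = np.asarray(E, float)
    return (E[1:] - E[:-1]) / E[1:]

def geometric_extrapolation(E):
    """Estimate the Kmax -> infinity limit assuming the increments shrink
    by a constant ratio r, so the remaining tail sums to dE * r / (1 - r)."""
    d1, d2 = E[-2] - E[-3], E[-1] - E[-2]
    r = d2 / d1
    return E[-1] + d2 * r / (1.0 - r)

# synthetic sequence converging geometrically to -4.91 (a stand-in value,
# chosen to mimic the extrapolated S_2n of 18C)
E_inf, c, r = -4.91, 2.0, 0.6
E = [E_inf + c * r**n for n in range(6)]
est = geometric_extrapolation(E)   # recovers E_inf exactly for this toy
```

For exactly geometric increments the estimate is exact; for the power-law tails suggested by the hyperspherical convergence theorem it is only a first approximation, which is why the production runs use the dedicated extrapolation of the cited references.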
\newline After obtaining the ground-state energy and wavefunction, we constructed the isospectral potential invoking the principles of SSQM to investigate the resonant states. The lowest eigen potential obtained for the ground states of $^{18}$C and $^{20}$C, shown as yellow lines in Figures 7 and 8, exhibits a shallow well followed by a broad, low barrier. This well-barrier combination may support resonant states; however, since the well is very shallow and the barrier not sufficiently high, the resonance width is very large and a direct numerical calculation of the resonant state is quite challenging. Hence, we constructed the one-parameter family of isospectral potentials $\hat{U}(\lambda, \rho)$ following Eqs. (19)--(21), selecting $\lambda$ values such that a narrow and sufficiently deep well followed by a high barrier is obtained; representative cases are also shown in Figures 7 and 8. The enhanced well-barrier combination effectively traps the particles to form a strong resonant state. The calculated parameters of the isospectral potential for some $\lambda$ values, along with those of the original lowest eigen potential $U_0(\rho)$ (which corresponds to $\lambda\rightarrow\infty$), are presented in Table 4. One may note from Table 4, for example, that for $^{18}$C, when $\lambda$ decreases from 0.1 to 0.0001, the depth of the well increases from $-24.2$ MeV at 2.7 fm to $-247.6$ MeV at 1.3 fm, while the height of the barrier increases from 5.4 MeV at 5.1 fm to 121.5 MeV at 1.9 fm. The same trend is observed for $^{20}$C. Thus the application of SSQM produces a {\bf dramatic effect} in the isospectral potential $\hat{U_1}(\lambda, \rho)$ as $\lambda$ approaches 0+. Still smaller positive values of $\lambda$ are not desirable, since they make the well too narrow to compute the wavefunctions accurately by a standard numerical technique.
The trapping probability $G(E)$ of the particle within the enhanced well-barrier combination as a function of the energy $E$, shown in Figures 9 and 10, exhibits a resonance peak at $E_R\simeq 1.89$ MeV for $^{18}$C and at $E_R\simeq 3.735$ MeV for $^{20}$C. It is interesting to see that the resonance energy is independent of the $\lambda$ parameter. The enhanced accuracy in the determination of $E_R$ is the principal advantage of the supersymmetric formalism. Since $\hat{U_1}(\lambda,\rho)$ is strictly isospectral with $U(\rho)$, any allowed value of $\lambda$ is admissible in principle; however, a judicious choice of $\lambda$ is necessary for an accurate determination of the resonance energy. The calculated two-neutron separation energies are in excellent agreement with the observed values $4.910\pm 0.030$ MeV for $^{18}$C and $3.510\pm 0.240$ MeV for $^{20}$C \cite{audi1-2003}, and also with the results of Yamaguchi et al.
\cite{yamaguchi-2011}, as presented in Table 5. The calculated rms matter radii also agree fairly well with the experimental values \cite{ozawa1-2001}.
\section{Summary and conclusions}
In this communication we have investigated the structure of $^{18, 20}$C in the hyperspherical harmonics expansion method, assuming a $^{16,18}$C$+n+n$ three-body model. The standard GPT potential \cite{gogny-1970} is chosen for the $n$-$n$ pair, while a three-term Gaussian SBB potential \cite{sack-1954} with an adjustable range parameter is used for the core-$n$ pairs, to compute the ground-state energy and wavefunction. The one-parameter family of isospectral potentials constructed from the ground-state wavefunction successfully explains the resonance states in both systems. The method is robust and can be applied to any weakly bound system, even one that supports no excited bound states.
\ack
\hspace*{1cm} This work has been supported by computational facility at Aliah University, India.
\section*{References}
\section{Introduction}
Despite recent observational progress, the 40-year-old Gamma-Ray Burst (GRB) mystery is far from being completely
solved (see e.g. \cite{piran05} for a review). There is general consensus on the cosmological nature of these
transient sources of gamma-ray radiation and, at least for the long-burst category, the association with the
explosion of massive stars ($>30$ M$_{\odot}$) is a scenario that reproduces most observed features, although
the number of GRBs spectroscopically associated with Supernovae is still limited. Whatever the exact progenitors of long GRBs are, they have been detected up to redshift 6.7 \cite{fynbo08}, making them powerful tools to investigate the early Universe (star formation history, re-ionization, etc.), and possibly to derive cosmological parameters. For short bursts, on the other hand, the situation is less clear, mainly because of the lack of a statistically compelling number of good-quality observations of their afterglows. Models involving the coalescence of two compact objects (black holes, neutron stars or white dwarfs) are however commonly invoked to explain these events, as their average distance is smaller than that of long bursts. Many questions concerning
GRBs are still open, such as the physical processes at work during the prompt phase, in terms of particle acceleration
and radiation processes, the GRB classification, a better characterization of GRB host galaxies and progenitors,
as well as some fundamental physics issues like Lorentz invariance, the origin of cosmic rays and gravitational waves.
In order to contribute to addressing the above questions, the French Space Agency (CNES), the
Chinese Academy of Sciences (CAS) and the Chinese Space Agency (CNSA) are developing the {\it SVOM} mission
(Space-based multi-band astronomical Variable Object Monitor). {\it SVOM} has successfully reached the end
of its phase A design study, and is planned to be launched in 2013 into a circular orbit with an inclination of
$\sim$30$^{\circ}$ and an altitude of $\sim$600 km. It will carry four instruments: ECLAIRs, a coded-mask
wide-field telescope that will provide real-time localizations of GRBs at the arcminute level; two GRM units, non-imaging gamma-ray spectrometers; and two narrow-field instruments, XIAO and VT, for arcsecond localizations and
for the study of the early afterglow phases in the X-ray and optical bands. Indeed, once a GRB is detected
within the field of view of ECLAIRs, the satellite will autonomously slew towards the GRB direction, in order
to allow observations of the afterglow by the XIAO and VT telescopes. A possible realization of the
{\it SVOM} satellite is sketched in Fig. \ref{fig:svom}. The {\it SVOM} pointing strategy derives from a combination
of two main constraints: the avoidance of bright Galactic X-ray sources, and an anti-solar pointing, so that
the GRBs are always detected on the night side of the Earth. Even if the latter choice induces some dead time at
mission level, due to the Earth passages occulting the ECLAIRs field of view once per orbit,
it will enhance the possibility of successful follow-up with large ground-based facilities, with a goal
of 75\% of {\it SVOM} GRBs easily observable during their early afterglow phase.
\begin{figure}[h]
\includegraphics[height=.2\textheight]{svom.pdf}
\hspace{2cm}
\includegraphics[height=.2\textheight]{SVOM_multil.pdf}
\caption{Left: a possible implementation of SVOM. ECLAIRs is shown in green, XIAO in blue, the VT in red, and the GRM units are mounted on the side of ECLAIRs. Right: the SVOM multi-wavelength coverage.}
\label{fig:svom}
\end{figure}
Besides the space-flown instruments, the {\it SVOM} mission includes a set of ground-based instruments, in order
to broaden the wavelength coverage of the prompt and afterglow phases: the GWACs are a set of wide-field optical
cameras that cover a large fraction of the ECLAIRs field of view. They will be based in China and will follow the ECLAIRs
pointings, in order to catch the prompt optical emission associated with GRBs. Two robotic telescopes (GFTs), one based
in China and one provided by CNES, complete the ground-based instrumentation. Their goal is to measure the photometric
properties of the early afterglow from the near infra-red to the optical band, and to refine the afterglow
position provided by the on-board instruments. A summary of the multi-wavelength capabilities of SVOM
is reported in Fig. \ref{fig:svom}. In the following sections the instruments and the mission are described in some
detail.
\section{ECLAIRs}
ECLAIRs is made of a coded-mask telescope working in the 4--250 keV energy range (CXG), and a real-time
data-processing electronic system (UTS, see below). The CXG has a wide field of view ($\sim$2 sr) and
a fair localization accuracy ($\sim$10 arcmin error radius (90\% c.l.) for the faintest sources, down to a couple of arcmin for the brightest ones).
Its detector plane is made of 80$\times$80 CdTe pixels, yielding a geometrical area of 1024 cm$^{2}$. The telescope
is passively shielded; a new-generation electronics developed at CEA Saclay, together with a careful detector selection and the optimized hybridization done at CESR Toulouse, makes it possible to lower the detection threshold
with respect to former CdTe detectors by about 10 keV, reaching $\sim$4 keV. In spite
of its rather small geometrical surface, the CXG is thus more sensitive than currently flying telescopes to GRBs
with soft spectra, potentially the most distant ones. A scheme of the CXG telescope is shown in the
left panel of Fig. \ref{fig:ECLAIRs}, while the right panel of the same figure shows how the predicted CXG sensitivity
compares to that of current and past missions as a function of the GRB peak energy.
\begin{figure}[ht]
\includegraphics[height=.2\textheight]{ECL.pdf}
\hspace{2cm}
\includegraphics[height=.2\textheight]{cSensitivity_4Instr_alpha1_beta3_v20081212a-3.pdf}
\caption{Left: CXG mechanical structure. The top layer represents the coded mask, the orange layers the passive shielding, and the bottom box contains the detector electronics. Right: ECLAIRs/CXG sensitivity compared to previous and current instrumentation. The curves have been computed as a function of the GRB peak energy for a 5.5 $\sigma$ detection, assuming a Band \cite{band93} spectrum with the other spectral parameters as reported in the plot.}
\label{fig:ECLAIRs}
\end{figure}
The ECLAIRs/CXG telescope is expected to localize about 70 GRBs per year. This estimate takes
into account the dead time induced by the passages over the South Atlantic Anomaly, which significantly
increase the instrumental particle-induced background, and by the passages of the Earth through the CXG field of
view.
\subsection{UTS}
The UTS (Unit\'e de Traitement Scientifique) is in charge of analyzing the
ECLAIRs data stream in real time, and of detecting and localizing the GRBs occurring within its field of view. It will implement two separate
triggering algorithms: one based on the detection of excesses in the detector count rate, using
imaging for trigger confirmation and localization, and a second one based on imaging, which is better
suited for long, slowly rising GRBs. Once a GRB is detected, its position will be sent to the ground via
a VHF antenna. At the same time, the GRB position will be transmitted to the platform, which may
autonomously slew to the GRB direction in three to five minutes, in order to bring it into the field of view of the narrow-field instruments, XIAO and VT.
The UTS will also collect relevant information from the CXG and GRM data streams, such as light curves
and sub-images, and send them to the ground via the VHF channel, to promptly provide additional information
on the trigger quality and on the GRB spectral and temporal characteristics. For more details on ECLAIRs, see \cite{schanne08} and \cite{remoue08}.
\section{GRM}
The Gamma-Ray Monitor (GRM) on board {\it SVOM} is composed of two identical units, each made of a phoswich (NaI/CsI) detector of 280 cm$^{2}$ read by a photomultiplier. In front of each detector there is a collimator, in order to reduce the background and to match the CXG and GRM fields of view. A schematic view of one GRM unit and the combined
CXG/GRM sensitivity are shown in Fig. \ref{fig:GRM}. The GRM does not have imaging capabilities; however, as can be seen from Fig. \ref{fig:GRM}, it extends the spectral coverage of the {\it SVOM} satellite to the MeV range. This
is an important point, since current detectors like BAT on board {\it Swift} \cite{swift} or IBIS/ISGRI on board {\it INTEGRAL} \cite{isgri} have a comparable or better localization accuracy than the CXG, but lack a broad-band coverage, hampering a correct spectral characterization of the prompt high-energy emission of GRBs, which is a key input
of any sensible modeling of GRB radiative processes. The recent successful launch of {\it Fermi} and the availability
of the GBM detectors (non-imaging spectrometers with a 2$\pi$ field of view), used in synergy with BAT and ISGRI, will
partially fill this need before the launch of {\it SVOM}, but the different orbits, pointing constraints and sensitivities
of the three instruments imply a low rate of simultaneous detections. For every {\it SVOM} GRB, on the other hand, a good
localization and good spectral information will be available at the same time.
\begin{figure}[ht]
\includegraphics[height=.2\textheight]{GRMUnit.pdf}
\hspace{2cm}
\includegraphics[height=.3\textheight]{CXG_GRM_2.pdf}
\caption{Left: one GRM unit. Right: combined GRM (two units) and ECLAIRs on-axis sensitivity for a 1 s integration time and a 5 $\sigma$ detection. A Band spectrum with $\alpha$=1, $\beta$=2.5, $E_{0}$=100 keV, and F$_{50-300 keV}$=1 photon cm$^{-2}$ s$^{-1}$ is overplotted for comparison.}
\label{fig:GRM}
\end{figure}
\section{XIAO}
The X-ray Imager for Afterglow Observations (XIAO) will be provided by an Italian consortium, led
by the INAF-IASF institute in Milan. It is a focusing X-ray telescope, based on the grazing-incidence (Wolter-1) technique. It has a short focal length of $\sim$0.8 m, and a field of view (25 arcmin diameter) adequate to cover the whole error region provided by the CXG telescope, so that after the satellite slew the GRB position should always be inside the XIAO field of view. XIAO has an effective area of about 120 cm$^{2}$, and the mirrors are coupled to a very compact, low-noise, fast read-out CCD camera, sensitive in the 0.5--2 keV energy range. The sensitivity of the XIAO telescope is reported
in Fig. \ref{fig:XIAO}, and simulations based on a sample of light curves collected with XRT
on board {\it Swift} show that virtually all GRB X-ray afterglows are detectable by XIAO during the first hours. This means in practice that each GRB for which a satellite slew is performed (not all GRBs can be pointed, due to different
constraints at platform level) can be localized with a $\sim$5 arcsec accuracy, see Fig. \ref{fig:XIAO}. Indeed, the source
localization accuracy is linked to the number of detected photons as $k/\sqrt{N}$, where $k$ is a constant depending on the instrument point spread function and $N$ is the number of detected photons; the early afterglows will provide
enough photons to reach the degree of positional accuracy mentioned above.
For more details on the XIAO instrument, see \cite{mereghetti08}.
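The $k/\sqrt{N}$ scaling can be illustrated with a short sketch. The normalization $k$ below is an assumption, not given in the text: it is chosen so that the $\sim$100 counts expected per afterglow reproduce the quoted $\sim$5 arcsec accuracy.

```python
from math import sqrt

# Illustrative sketch of the k/sqrt(N) localization scaling quoted in the
# text. K_PSF is a hypothetical normalization (arcsec), chosen so that
# N = 100 detected photons give the ~5 arcsec accuracy stated for XIAO;
# the true value depends on the instrument point spread function.
K_PSF = 50.0

def localization_accuracy(n_photons):
    """Statistical localization accuracy (arcsec) for n_photons counts."""
    return K_PSF / sqrt(n_photons)

for n in (100, 400, 1600):
    print(n, localization_accuracy(n))  # accuracy improves as 1/sqrt(N)
```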
\begin{figure}[ht]
\includegraphics[height=.25\textheight]{10ksSensi_new.pdf}
\includegraphics[height=.25\textheight]{xiao_errors.pdf}
\caption{Left: Expected sensitivity of the XIAO telescope (5$\sigma$ detection in 10 ks), computed assuming a 30$^{\prime\prime}$ Half Energy Diameter, a source extraction circle of 15$^{\prime\prime}$ radius, and an energy bin $\Delta$E=E/2. Right: XIAO location accuracy as a function of the number of detected photons. For each afterglow at least 100 counts are expected to be collected by XIAO.}
\label{fig:XIAO}
\end{figure}
\section{VT}
The space-borne Visible Telescope (VT) will be able to improve the GRB localizations obtained by the CXG and XIAO to sub-arcsecond precision through the observation of the optical afterglow. In addition, it will provide
a deep and uniform light-curve sample of the detected optical afterglows, and allow
a primary selection of optically dark GRBs and high-redshift GRB candidates ($z>4$).
The field of view of the telescope will be 21$\times$21 arcmin, sufficient to cover the error box of the CXG. The detecting area of the CCD has 2048$\times$2048 pixels to ensure the sub-arcsecond localization of detected sources.
The aperture of the telescope should guarantee a limiting magnitude of $M_{V}$ = 23 (5$\sigma$) for a 300 s exposure time. Such a sensitivity is a significant improvement over the UVOT on board the {\it Swift} satellite (see Fig. \ref{fig:vt}) and over existing ground-based robotic GRB follow-up telescopes. The VT is expected to detect nearly 70\% of SVOM GRBs for which a slew is performed.
The telescope will have at least two bands in order to select high-redshift GRB candidates. They are separated at 650 nm, which corresponds to a redshift of $z\sim$4--4.5 using Ly$\alpha$ absorption as the redshift indicator.
\begin{figure}[ht]
\includegraphics[height=.2\textheight]{VT.pdf}
\hspace{2cm}
\includegraphics[height=.2\textheight]{vtvsuvot.pdf}
\caption{Left: VT. Right: Predicted optical afterglow light curves compared to UVOT and VT sensitivities in 1000 s.}
\label{fig:vt}
\end{figure}
\section{Ground segment}
The ground segment of the mission will be composed of X- and S-band antennas (for data and housekeeping telemetry
download), a mission operation center based in China, two science centers (based in China (CSC) and France (FSC), and in charge of operations and monitoring of the scientific payload),
and a VHF alert network. The latter will be composed of a series of receivers distributed over the
globe in order to guarantee continuous coverage for the alerts dispatched by the platform. The alerts
will contain the information about the GRB positions, which will be sent to the ground as soon as more
accurate information is derived on board, followed by complementary quality indicators (light curves, images,
etc.) produced on board. The VHF network is directly connected to the FSC, which is in charge of
formatting and dispatching the alerts to the scientific community through the Internet (GCNs, VO Events, SVOM web page, etc.). The first alerts, corresponding to the initial localization by the CXG, are expected to reach the recipients
one minute after the position has been derived on board. The following alerts, containing the X-ray
position of the afterglow and eventually the optical position computed from VT data, will reach the community
within 10 minutes of the first notice. In case a refined (sub-arcsec) position is available from the prompt
data analysis of the GFTs or GWACS (see below), this information will immediately reach the FSC and be dispatched to the
scientific community through the channels mentioned above. For more details on the alert distribution strategy, see \cite{claret08}. In addition, the FSC will be in charge of publishing the CXG pointing direction in order to allow
ground-based robotic telescopes to react quickly, minimizing the slew time.
\subsection{Ground Based Telescopes}
\subsubsection{GWACS}
The Ground-based Wide-Angle Camera array (GWACS) is designed to observe the visible emission of more than 20\% of
{\it SVOM} GRBs from 5 minutes before to 15 minutes after the GRB onset. The array is expected to have a combined field of view of about 8000 deg$^{2}$ and a 5$\sigma$ limiting magnitude $M_{V}$ = 15 for a 15 s exposure time, even on full-moon nights. To comply with both the science requirements and technical feasibility, each camera unit will have an aperture of 15 cm, a 2048$\times$2048 CCD, and a field of view of 60 deg$^{2}$. In total about 128 camera units are required to cover the 8000 deg$^{2}$.
\subsubsection{GFTs}
Two SVOM robotic telescopes (the Ground Follow-up Telescopes, GFTs) will automatically point their 20--30 arcmin field of view at the position of GRB alerts and, in case of a detection, they will determine the position of the source with 0.5 arcsec accuracy. Both telescopes will be equipped with multi-band optical cameras, and the French GFT will also have a near-infrared CCD.
Since a single telescope can only observe those candidates occurring above the horizon and during night at the telescope site, the two sites will be located in tropical zones and at longitudes separated by at least 120$^{\circ}$, in order to fulfill the requirement of a 40\% follow-up efficiency. These telescopes could also be used to follow alerts which are not considered reliable enough to be distributed to the whole community. This procedure increases the chance of detecting low-S/N events, while not wasting the observing time of instruments outside the {\it SVOM} collaboration.
The scientific objectives of the GFTs include the quick identification and characterization of interesting GRBs
(e.g. highly redshifted GRBs, whose visible emission is absorbed by the Lyman-$\alpha$ cutoff and the Lyman-$\alpha$ forest, dark bursts, and nearby GRBs), and the multi-wavelength follow-up of 40\% of {\it SVOM} GRBs (at optical and X-ray wavelengths) from 30 to 10$^{4}$ seconds after the trigger. This will be done with the GFTs and the VT at optical wavelengths and with the CXG and the XIAO at X-ray wavelengths. It will allow measuring the spectral energy distribution of the burst during the critical transition between the prompt GRB emission and the afterglow.
\begin{theacknowledgments}
D.G. acknowledges the French Space Agency (CNES) for financial support. J.D. \& Y.Q. are supported by NSFC (No. 10673014). S.M. acknowledges the support of the Italian Space Agency through contract I/022/08/0.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}
The calculation of quantum corrections to the mass of solitonic
objects has been a subject of intense interest in the past \cite{raj} and has recently been
revived in the light of a breakthrough in our
understanding of non-perturbative dynamics in 3+1-dimensional
supersymmetric (susy) gauge
theories~\cite{SW}. One of the most important ingredients of the
non-perturbative analysis of such theories is the duality between
extended solitonic objects, such as monopoles or dyons, and
point-like particles. Another important ingredient is the concept of
the BPS spectrum --- the particles whose masses are
proportional to their charges. Due to the supersymmetry algebra
BPS states may be annihilated by the action of some of the
supersymmetry generators and hence give rise to a smaller number
of superpartners (``multiplet shortening'').
Therefore the BPS value of the mass becomes a
qualitative, rather than just a quantitative property \cite{WO}.
Susy models in 1+1 dimensions provide a valuable
nontrivial testing ground for these concepts. Solitons in these
models are examples of BPS states whose mass at the classical level
is proportional to the topological charge.
The calculation
of quantum corrections to the masses of these states has been
the subject of a long controversy.
A number of one-loop calculations
have been performed, yielding different, contradictory
results \cite{dAdV,sch,Rouh,kar1,imb1,Y,CM,Uch,uchi1,uchi2,rvn}.
The one-loop corrections to the mass
of a soliton are given by
\begin{equation}
M_{sol}^{(1)}={1\over2}\sum \left(\omega^B-\omega^F\right) -
{1\over2}\sum\left(\tilde{\omega}^B-\tilde{\omega}^F\right)+ \delta M
\label{mass}
\end{equation}
where $\omega^{B,F}$ are the energies of the small
bosonic (fermionic) fluctuations about the
classical soliton solution, $\tilde{\omega}$ are the corresponding
energies of the linearized
theory in the trivial vacuum,
while $\delta M$ is the counter\-term which can be obtained
from the expression for the classical
mass of the soliton in terms of unrenormalized parameters by expanding
into renormalized ones \cite{col}.
In order to make the above sums well defined,
spatial boundaries are temporarily introduced to make
the entire spectrum discrete.
One can identify two sources of ambiguities. As was discussed in
\cite{sch}, imposing different spatial boundary conditions on the
small quantum fluctuations around the classical soliton gives
different, sometimes even ultraviolet divergent, results. In this
paper we present an analysis which answers the question: which
boundary conditions are to be used in the one-loop calculation? The
answer to this question can be found if one re-examines the original
formulation of the problem. We consider the vacuum energy as a
functional of the boundary conditions. We then single out a class of
boundary conditions which do not introduce surface effects --- the
topological boundary conditions. They close the system on
itself. There is a trivial (periodic) as well as a topologically
non-trivial (with a Moebius-like twist) way of doing this.
This definition of boundary conditions does not rely on a
semiclassical loop expansion. We do not separate the classical part of
the field from its quantum fluctuations; rather, the boundary
conditions are imposed on the whole field. One then infers the correct
boundary conditions for the classical part as well as for the quantum
fluctuations from this single general condition.
Another source of ambiguity, as was pointed out recently \cite{rvn}, is the
choice of the ultraviolet regularization scheme. The dependence on
the choice of regularization scheme can be understood as a peculiar
property of those quantities, such as the soliton mass, which involve a
comparison between two sectors with different boundary conditions, i.e.,
different topology. Indeed, the difference of the vacuum
energies in the two sectors measures the mass of the soliton.
The one-loop correction is then given by a sum
over zero-point frequencies in the soliton sector
which is quadratically divergent.
A similar sum in another (the trivial) sector is to be subtracted
in order to get an expression which is finite if written in terms
of the renormalized parameters. It turns out that due to the bad
ultraviolet behavior of both sums the result depends on the
choice of the cut-off \cite{rvn}. Cutting both sums off
at the same energy
in both sectors \cite{kar1,imb1,Y,CM} or taking equal numbers of modes
in both sectors \cite{dhn1,sch,uchi1,uchi2} leads to different results.
To add to the confusion, some authors do not include bound states and/or
zero modes when they consider equal numbers of states in both
sectors.\footnote{See for example the textbook \cite{raj},
eq.~(5.60). Actually in this
reference the result of \cite{dhn1} is obtained but this requires neglecting
a boundary term in the partial integration of (5.63).}
In this paper
we propose a simple way of reducing the ultraviolet divergence
of the sums over the zero-point energies, which eliminates
the sensitivity to the ultraviolet cutoff. Instead of calculating
the sums we calculate their
derivative with respect to the physical mass scale in the theory.
The constant of integration can
be fixed by using the following observation: the vacuum energy
should not depend on the topology when the mass is zero.
This is the physical principle which allows us to perform the
calculation unambiguously. It should be viewed as a renormalization
condition.
From a practical point of view we need to use our condition that the
vacuum energy functional does not depend on topology at zero mass only
in the one-loop calculation. However, to preserve the spirit of our
approach, we formulate this condition, as we do with our
boundary conditions, for the full theory regardless of
the semiclassical loop expansion. The mass that needs to be taken to
zero is the physical, renormalized, mass scale. From this point of
view this condition is a trivial consequence of dimensional analysis
if we work with a renormalizable theory where all the physical masses
are proportional to one mass scale. All the masses, including the
soliton mass, vanish then at the same point, which is the conformal,
or critical, point in the theory. Another way to look at our
condition to fix the integration constant is to consider the Euclidean
version of the 1+1 theory as a classical statistical field theory.
Then the mass of the soliton is the interface tension between two
phases. As is well known \cite{jdl} the interface tension vanishes at
the critical point; moreover, it vanishes with the same
exponent as the inverse correlation length.
In section~\ref{sec:ddm} we demonstrate our new unambiguous method of
calculation using as an example the bosonic kink. We show that, as
argued in \cite{rvn}, the correct result corresponds to mode number
cutoff. (The same conclusion was recently reached for nontopological
solitons in 3+1 dimensions \cite{jaffe}.) In section~\ref{sec:N=1} we
apply our analysis of the topological boundary conditions to the case
of an $N=1$ susy soliton, where it leads to nontrivial
consequences. We analyze the relation of our results to the BPS bound
in section~\ref{sec:bound}. In section~\ref{sec:N=2} we analyze the
$N=2$ susy solitons and conclude that the one-loop corrections vanish
completely. In section~\ref{sec:2loop} we redo the 2-loop corrections
for the case of the bosonic sine-Gordon soliton \cite{vega,verw},
paying this time close attention to possible ambiguities, and find
that no ultraviolet ambiguities appear. The ultraviolet ambiguity
is thus purely a one-loop effect which leads to the
interesting conjecture that it may be formulated in terms of a
topological quantum anomaly.
\section{Eliminating the one-loop ultraviolet ambiguity using
a physical principle}
\label{sec:ddm}
In this section we present a general analysis regarding
the calculation of the soliton mass which will help
us eliminate the ultraviolet ambiguity discussed in \cite{rvn}.
We consider the $\phi^4$ theory (kink) as an example, but the
arguments can be applied to the sine-Gordon theory as well.
The crucial property of these models from which our boundary
conditions follow is the $Z_2$ symmetry $\phi\to-\phi$.
Let us take a step back from the actual calculation and try to
define the mass of the
soliton {\em before} we do the semiclassical expansion.
We start from the observation that the soliton carries a conserved charge
--- the topological charge.
This means that we can define the mass of the soliton as the difference
between
the energy of the system with nontrivial topology and the energy of the system
with trivial topology.
This definition coincides with the definition based on path integrals
in the topological sector which are normalized by path integrals
in the trivial sector \cite{vega}.
The topological charge of the system is determined by
the conditions at the spatial boundary. We view the
vacuum energy as a functional
of the boundary conditions. In general, a boundary condition could
induce surface effects associated with the interaction of the
system with the external forces responsible for the
given boundary condition. We would like to avoid these contributions.
There is a class of boundary conditions which do not produce such
effects.
These are what we call topological boundary conditions, which identify
the degrees of freedom at different points on the boundary
modulo a symmetry transformation. In our case there are two
such possibilities: periodic and antiperiodic. These are dictated
by the internal $Z_2$ symmetry: $\phi(x)\to(-1)^p\phi(x)$, $p=0,1$. Crossing
the boundary is associated with a change of variables leaving the
action invariant: $\phi(-L/2)=(-1)^p\phi(L/2)$.
The system behaves continuously across the
boundary, only our description changes. In effect such boundary
conditions do not introduce a boundary, rather they close
a system in a way similar to the Moebius strip.%
\footnote{Though topological boundary conditions do not induce
boundary effects, finite volume effects vanishing as $L\to\infty$
are of course present. Such effects will be discussed
in section \ref{sec:bound}.
}
We spent so much time on this, perhaps, trivial point in order
to make the choice of boundary conditions for the theory
with fermions clear. The analysis of fermions, however,
will be postponed until the next section.
In the literature a large number of other boundary conditions have been
considered, both in the bosonic and in the fermionic sectors but from
our perspective they all introduce surface effects or are even
inconsistent.
It should now be clear that the mass of the soliton can be
defined as the difference of the vacuum energy with antiperiodic
and with periodic boundary conditions when the volume $L\to\infty$.
This definition
does not rely on the semiclassical expansion. Returning
now to the semiclassical calculation we see that at the classical
level the equations of motion select the trivial or the soliton
vacuum configuration depending on the topology. Less trivially,
we see that at the one-loop level the boundary conditions that
should be used for the small fluctuations about the soliton
configuration should be antiperiodic. We must point out that
the choice of boundary conditions does not affect
the result of the calculation in the purely bosonic case. We
shall nevertheless use antiperiodic boundary conditions
in the soliton sector in this section to be faithful
to our nonperturbative definition of the soliton mass. We shall
see in the next section that for fermions
the choice of the boundary conditions becomes crucial.
A few points about the classical antiperiodic soliton should
be stressed here. The topological boundary condition
reads
\begin{equation}\label{apbc}
\phi(-L/2)=(-1)^p\phi(L/2) \qquad \mbox{ and } \qquad
\phi^\prime(-L/2)=(-1)^p\phi^\prime(L/2),
\end{equation}
where the nontrivial sector is selected when $p=1$.
Note that the derivative with respect to $x$, $\phi^\prime$,
must also be antiperiodic in the soliton sector.
The classical soliton solution $\phi(x)$ can be viewed
as a trajectory of a particle with coordinate $\phi$ moving
in time $x$ in the potential $-V(\phi)$. The particle
is oscillating about the origin $\phi=0$ with a period which
depends on the amplitude. When the period is equal to $2L$
the trajectory during half of the period is the antiperiodic soliton
satisfying the boundary conditions (\ref{apbc}). The
endpoints of the trajectory need not necessarily be
the turning points. For example, the particle
at time $-L/2$ can start downhill at some $\phi_0$ with nonzero velocity,
then pass the point at the same height on the opposite
side, i.e., $-\phi_0$, going uphill, then
turn and after that at time $L/2$ pass the point $-\phi_0$
again, but going downhill. Clearly, for this trajectory, (\ref{apbc}) is
satisfied, whereas restricting the usual soliton solution centered at
$x=0$ to the interval
$(-L/2,L/2)$ would lead to a solution for which
$\phi(-L/2)=-\phi(L/2)$, but
$\phi^\prime(-L/2)=+\phi^\prime(L/2)$.
When $L\to\infty$
the turning points of the trajectory come infinitesimally close
to the minima of $V(\phi)$ and we recover the usual
$L=\infty$ soliton.
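The trajectory picture above can be illustrated numerically. The sketch below uses toy units in which $m$ and $\lambda$ are scaled out, so that $V(\phi)=(\phi^2-1)^2/4$ (an assumption for illustration only): integrating the mechanical analog from a turning point $\phi_0<1$, half an oscillation connects $\phi_0$ to $-\phi_0$, realizing the antiperiodic soliton, and the required box size $L$ (the half-period) grows without bound as $\phi_0\to1$, recovering the $L=\infty$ soliton.

```python
# Toy illustration of the mechanical analogy: a "particle" with coordinate
# phi moving in "time" x in the inverted potential -V(phi), with
# V(phi) = (phi^2 - 1)^2 / 4 (hypothetical units; m and lambda scaled out).
# The static field equation is then phi'' = dV/dphi = phi^3 - phi.

def accel(phi):
    return phi**3 - phi          # dV/dphi for V = (phi^2 - 1)^2 / 4

def rk4_step(phi, v, h):
    # One fourth-order Runge-Kutta step for (phi' = v, v' = accel(phi)).
    k1p, k1v = v, accel(phi)
    k2p, k2v = v + h/2*k1v, accel(phi + h/2*k1p)
    k3p, k3v = v + h/2*k2v, accel(phi + h/2*k2p)
    k4p, k4v = v + h*k3v, accel(phi + h*k3p)
    return (phi + h/6*(k1p + 2*k2p + 2*k3p + k4p),
            v   + h/6*(k1v + 2*k2v + 2*k3v + k4v))

def half_period_and_endpoint(phi0, h=1e-3, max_steps=100000):
    """Integrate from the turning point phi0 (zero velocity) until the
    opposite turning point; return (half period L, endpoint)."""
    phi, v, x = phi0, 0.0, 0.0
    for i in range(max_steps):
        phi, v = rk4_step(phi, v, h)
        x += h
        if i > 10 and v >= 0.0:  # velocity returns to zero at -phi0
            return x, phi
    raise RuntimeError("no turning point found")

for phi0 in (0.5, 0.9, 0.99):
    half_L, end = half_period_and_endpoint(phi0)
    print(phi0, half_L, end)     # endpoint -> -phi0; half_L grows with phi0
```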
Next, we want to address the problem of the ultraviolet ambiguity
in the one-loop calculation. To summarize the beginning of this
section
\begin{equation}\label{e1-e0}
M \equiv E_1 - E_0,
\end{equation}
where $E_p$, $p=0,1$ are the energies of the system with the periodic
and antiperiodic boundary conditions of (\ref{apbc}) respectively.
At the classical level this gives $M_{\rm cl} \sim
m^3/(3\lambda)$ for the kink,
where $m$ is the mass of the elementary boson at tree
level, and $\lambda$ the dimensionful coupling constant.
The order $\hbar$ correction is due to the fact that
boundary conditions change the spectrum of zero-point fluctuations.
Two factors are responsible for the ultraviolet
ambiguity discussed in \cite{rvn}. The first is the fact that the terms in the sums
over zero-point energies grow, making each sum strongly (quadratically)
ultraviolet cutoff dependent. The second is that one has to compare the
spectrum in two different vacua. Taking all the modes below a certain
energy in both systems leads to a different result than taking equal
numbers of modes. It would be nice if there were a parameter
in the theory whose variation would continuously interpolate between the two
vacua. This is not possible due to the topological
nature of the difference between the vacua. However, we can
identify a certain value of the dimensionful parameters
of the theory at which the vacuum energy
should become independent of the topology. This will be one ingredient of our
calculation.
Another ingredient is the observation that one can reduce the
ultraviolet divergence of the
sums of zero-point energies by differentiating w.r.t. $m$.
The terms in the differentiated sums become then decreasing
and as a result the sums (now only logarithmically divergent)
can be unambiguously calculated. But the price
is that we need to supply the value for the integration constant to
recover the function from its derivative.
This can be done using a physical principle that
relates the energies of the two vacua at some value
of the mass. One must
realize that the difference in the energies arises because of the
nontrivial potential for the scalar field. If this potential
vanishes the energies of the two vacua become equal. In the
absence of the potential the mass $m$ of the boson is zero and
the soliton disappears. Therefore
the constant of integration over $m$ is fixed by the condition that the
energy difference between the two vacua must vanish when $m\to0$.
A subtlety here is that $m$ should be sufficiently
large compared to $1/L$ so that finite volume effects can be
neglected. The limit $m\to0$ should be understood in the sense
that the mass approaches $O(1/L)$, where $L$ is large.
Then the difference between the vacuum energies must be $O(1/L)$.
Also note that other dimensionful parameters in
the theory should be scaled accordingly when $m\to0$, e.g.,
$\lambda/m^2={\rm const}$ in the $\lambda\phi^4$ theory.
We want to relate the mass of the soliton to other parameters of the
theory. The relation to the bare parameters $m_0$ and $\lambda$
will contain infinities. The infinities in the
relation of physical quantities to the bare parameters in
this theory should be eliminated if we renormalize the mass
$m_0 = m + \delta m$, where
\begin{equation}
\delta m = {3\lambda\over2m}\int_{-\infty}^\infty{dk\over2\pi}{1\over\sqrt{k^2+m^2}}.
\end{equation}
With this renormalization of $m_{0}$ tadpole diagrams vanish. In the
$\phi^4$ theory the
physical pole mass of the meson differs from $m$ by a {\em finite}
amount $-\sqrt3\lambda/(4m)$ \cite{rvn}; however, it suffices to
use $m$ for our purposes. If we rerun this analysis for the
sine-Gordon theory, the tadpole renormalized mass $m$ would,
at one-loop order, coincide with the physical meson mass.
If we use this renormalized mass in the expression for the soliton mass
we get an additional one-loop counterterm, $\delta M$,
\begin{equation}
{m_0^3\over3\lambda} = {m^3\over3\lambda} + \delta M,
\end{equation}
where
\begin{equation}\label{deltaM}
\delta M = {m^2 \delta m\over\lambda} =
{3m\over2}\int_{-\infty}^\infty{dk\over2\pi}{1\over\sqrt{k^2+m^2}}.
\end{equation}
Now we differentiate the well-known expression for the one-loop
correction $M^{(1)}$ to the soliton mass with respect to the mass $m$
\begin{equation}\label{M1}
{dM^{(1)}\over dm} = {d(\delta M)\over dm}
+ {1\over2} \sum_n {d\omega_n\over dm}
- {1\over2}\sum_n {d\tilde\omega_n\over dm}.
\end{equation}
For the spectrum $\tilde\omega_n$ in the trivial sector one obtains
\begin{equation}\label{kn_vac}
{d\tilde\omega_n\over dm} = {m\over\sqrt{\tilde k_n^2+m^2}}, \qquad
\tilde k_nL = 2\pi n.
\end{equation}
For the soliton sector
\begin{eqnarray}\label{kn_sol}
{d\omega_n\over dm} = {1\over\sqrt{k_n^2+m^2}}
\left(m + k_n {dk_n\over dm}\right) =
{1\over\sqrt{k_n^2+m^2}}
\left(m + {1\over L}{k_n^2\over m}\delta^\prime(k_n)\right),
\nonumber \\
k_nL + \delta(k_n) = 2\pi n + \pi,
\end{eqnarray}
where we used the fact that $\delta(k)$ depends on $m$ only
through $k/m$ to convert the derivative w.r.t. $m$ into the derivative
w.r.t. $k$.
We convert the sums over the spectrum in (\ref{M1}) into integrals
over $k$ using
\begin{equation}\label{sum_triv}
\sum_n f(\tilde k_n) = L \int_{-\infty}^\infty{dk\over2\pi}f(k) + O(1/L);
\end{equation}
and
\begin{equation}\label{sum_delta}
\sum_n f(k_n) = L \int_{-\infty}^\infty{dk\over2\pi}f(k)
\left(1 + {\delta^\prime(k)\over L}\right) + O(1/L).
\end{equation}
These expressions follow from the Euler-Maclaurin
formula, which is valid for a smooth function $f(k)$ vanishing
at $k=\infty$. In our case $f(\tilde k)=d\tilde\omega/dm$ and
$f(k)=d\omega/dm$ satisfy
these conditions. From the Euler-Maclaurin formula one can also see that
in the naive calculation with $f(k)=\omega$ the ambiguous
contribution, which comes from regions
$\delta/L$ at the ultraviolet ends of the integration interval,
is non-vanishing due to the fact that $f(k)$ grows with $k$.
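Equations (\ref{sum_triv}) and (\ref{sum_delta}) can be checked numerically. The sketch below is an illustration with toy choices (not the kink values): phase shift $\delta(k)=-2\arctan k$ and test function $f(k)=1/(1+k^2)$. The sum over the modes $k_nL+\delta(k_n)=2\pi n+\pi$ then agrees with the integral including the $\delta^\prime(k)/L$ density correction, once the integration range is matched to the retained modes.

```python
import numpy as np

# Toy check of the mode-sum -> integral conversion: modes satisfy
#   k_n L + delta(k_n) = 2 pi n + pi,  with delta(k) = -2 arctan(k),
# and we compare sum_n f(k_n) against L * int dk/2pi f(k) (1 + delta'(k)/L)
# for the smooth decreasing test function f(k) = 1/(1 + k^2).
L = 400.0
N = 20000                                 # modes n = -N, ..., N-1

n = np.arange(-N, N)
k = (2 * np.pi * n + np.pi) / L           # zeroth approximation
for _ in range(40):                       # fixed-point iteration; contraction ~ 2/L
    k = (2 * np.pi * n + np.pi + 2 * np.arctan(k)) / L
mode_sum = np.sum(1.0 / (1.0 + k**2))

# Integration boundary at the half-integer mode n = N - 1/2,
# i.e. k_b L - 2 arctan(k_b) = 2 pi N (Euler-Maclaurin midpoint rule).
kb = 2 * np.pi * N / L
for _ in range(40):
    kb = (2 * np.pi * N + 2 * np.arctan(kb)) / L

# Exact antiderivatives on (-k_b, k_b):
#   int f dk        = 2 arctan(k_b)
#   int f delta' dk = -2 (arctan(k_b) + k_b / (1 + k_b^2))
integral = (L * 2 * np.arctan(kb)
            - 2 * (np.arctan(kb) + kb / (1 + kb**2))) / (2 * np.pi)

print(mode_sum, integral)                 # agree up to tiny boundary terms
```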
\begin{figure}[hbt]
\centerline{
\epsfxsize 2in \epsfbox{kink.eps}
}
\caption[]{The left- and the right-hand sides of the equation
$\delta(k) = 2\pi n + \pi - kL$ are plotted schematically
by solid lines in the case of the $\phi^4$ kink (two bound states).
The dashed line represents the value of $\delta(k)$ without
the discontinuity $2\pi\varepsilon(k)$. Observe that with this
discontinuity the mode numbers $n=-1,0$ should be left out,
while the spectrum of allowed values of $k$ is not affected.
}
\label{fig:kink}
\end{figure}
We can use the following expression for the phase shifts $\delta(k)$
in the case of the kink in $\phi^4$ theory:
\begin{equation}\label{deltak}
\delta(k) = \left(2\pi - 2 \arctan {3m|k|\over m^2 - 2 k^2}\right)
\varepsilon(k),
\end{equation}
where we added the term $2\pi\varepsilon(k)$ to ensure that
$\delta(|k|\to\infty)\to0$. As a result, $\delta(k)$ is discontinuous
at $k=0$: $\delta(0_\pm)=\pm 2\pi$. It is then easy to see
(Fig.~\ref{fig:kink}) that for
$n=-1$ and $n=0$ the equation (\ref{kn_sol}) does not have solutions
for $k$. It is pleasing to observe that this defect is matched by the
existence of two discrete modes: $\omega_0=0$ (the translational
zero mode) and
$\omega_{-1}=\sqrt3m/2$ (a genuine bound state). To these discrete
modes we can assign (somewhat arbitrarily)
those ``unclaimed'' $n$'s. That this matching is not a coincidence
follows from Levinson's theorem: $\delta(0_\pm)=\pm \pi n_{\rm ds}$,
where $n_{\rm ds}$ is the number of discrete solutions.
Since the discontinuity in $\delta(k)$ is an integer multiple of
$2\pi$, it does not change the spectrum of the allowed values
of $k$ (Fig.~\ref{fig:kink}). This spectrum near the origin is given by
$kL=\ldots,-3\pi,-\pi,\pi,3\pi,\ldots$ up to $O(1/L)$ for
$\delta$ either with or without the $2\pi\varepsilon(k)$ term
in (\ref{deltak}), and the values of $\delta^\prime(k)$ and $f(k)$
on this set of $k$'s are also not affected.
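The limits of (\ref{deltak}) can be verified numerically. The sketch below uses \texttt{atan2} to keep the arctangent on the continuous branch running from $0$ to $\pi$ as $|k|$ grows, and confirms $\delta(0_\pm)=\pm2\pi$ (Levinson's theorem with $n_{\rm ds}=2$) together with $\delta(|k|\to\infty)\to0$.

```python
import math

M = 1.0  # meson mass, set to 1 for the check

def delta(k):
    """Kink phase shift of Eq. (deltak). atan2 keeps the arctangent on the
    continuous branch running from 0 to pi as |k| goes from 0 to infinity."""
    sign = math.copysign(1.0, k)
    return sign * (2 * math.pi
                   - 2 * math.atan2(3 * M * abs(k), M**2 - 2 * k**2))

print(delta(1e-9))   # approaches +2*pi: Levinson with two discrete states
print(delta(-1e-9))  # approaches -2*pi
print(delta(1e9))    # approaches 0: phase shift vanishes at infinity
```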
Putting now all the pieces together we obtain
\begin{equation}
{dM^{(1)}\over dm} = {d(\delta M)\over dm}
+ {1\over2} \sum_{\rm ds}{d\omega_{\rm ds}\over dm}
+ {1\over2m}\int_{-\infty}^\infty
{dk\over2\pi}\sqrt{k^2+m^2}\,\delta^\prime(k).
\end{equation}
This formula is universal.
Substituting for the $\phi^4$ theory the particular values for $\delta M$
from (\ref{deltaM}) and
$\delta(k)$ from (\ref{deltak}) we find%
\footnote{Use
$\int_{-\infty}^\infty(1+x^2)^{-1/2}(1+4x^2)^{-1}dx=2\sqrt3\pi/9$
and $\int_{-\infty}^\infty(1+x^2)^{-3/2}dx=2$.
}
\begin{eqnarray}
{dM^{(1)}\over dm} &=&
-{3m^2\over2}\int_{-\infty}^\infty{dk\over2\pi}{1\over(k^2+m^2)^{3/2}}
+ {\sqrt3\over4}
\nonumber\\
&&- {3m^2\over2}\int_{-\infty}^\infty
{dk\over2\pi}{1\over\sqrt{k^2+m^2}(m^2+4k^2)}
= {1\over4\sqrt3} - {3\over2\pi}.
\end{eqnarray}
Integrating over $m$ and using that $M^{(1)}=0$ when $m=0$
we obtain the result for the one-loop correction to the
kink mass which was previously obtained using mode number
cutoff~\cite{dhn1,rvn}
\begin{equation}
M^{(1)} = m\left({1\over4\sqrt3} - {3\over2\pi}\right).
\end{equation}
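As a numerical cross-check (not part of the original derivation), the two integrals and the bound-state contribution in the expression for $dM^{(1)}/dm$ can be evaluated directly. The sketch below assumes SciPy is available and works at $m=1$; it reproduces the closed form $1/(4\sqrt3)-3/(2\pi)$.

```python
from math import pi, sqrt, inf
from scipy.integrate import quad

m = 1.0  # units in which the tadpole-renormalized meson mass is 1

# counterterm piece: -(3 m^2 / 2) int dk/2pi (k^2 + m^2)^(-3/2)
I1, _ = quad(lambda k: (k**2 + m**2)**(-1.5), -inf, inf)

# continuum piece: -(3 m^2 / 2) int dk/2pi 1/(sqrt(k^2+m^2)(m^2+4k^2))
I2, _ = quad(lambda k: 1.0 / (sqrt(k**2 + m**2) * (m**2 + 4 * k**2)),
             -inf, inf)

# bound-state piece: (1/2) d(sqrt(3) m / 2)/dm = sqrt(3)/4
dM1_dm = (-(3 * m**2 / 2) * I1 / (2 * pi)
          + sqrt(3) / 4
          - (3 * m**2 / 2) * I2 / (2 * pi))

print(dM1_dm)                             # numerical value
print(1 / (4 * sqrt(3)) - 3 / (2 * pi))  # closed form
```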
\section{Fermions, supersymmetry, and topological boundary conditions}
\label{sec:N=1}
In this section we shall extend the ideas introduced in the
previous section to theories with fermions, and
in particular theories with supersymmetry. The following analysis
can be applied to any $N=(1,1)$ supersymmetric theory with Lagrangian
\begin{equation}\label{lagrangian}
{\cal L} = -{1\over2}\partial_\mu\phi \partial^\mu\phi - {1\over2}U^2(\phi)
- {1\over2}\left( \bar\psi/\hspace{-0.5em}\partial \psi
+ U^\prime(\phi)\bar\psi\psi \right),
\end{equation}
where $U(\phi)$ is a symmetric function, admitting a classical
soliton solution. For example, for the kink $U(\phi)=\sqrt{\lambda/2}(\phi^{2}-
\frac{m_{0}^{2}}{2\lambda})$. We use $\{\gamma^\mu,\gamma^\nu\}=2g^{\mu\nu}$
with $g^{00}=-1$, $g^{11}=1$, and $\psi$ is a Majorana spinor:
$\bar\psi=\psi^\dagger i\gamma^0=\psi^TC$ with $C\gamma^\mu=-(\gamma^\mu)^TC$.
The action is invariant under $\delta\phi=\bar\epsilon\psi$ and
$\delta\psi=(/\hspace{-0.5em}\partial\phi-U)\epsilon$.
First of all we want to identify the class of topological boundary
conditions. The discrete transformation taking $\phi\to-\phi$
must be accompanied by $\psi\to\gamma_3\psi$ with
$\gamma_3=\gamma^0\gamma^1$
to leave the
action invariant. From this
symmetry transformation we obtain topological
boundary conditions
\begin{eqnarray}\label{topbc}
\phi(-L/2) = (-1)^p\phi(L/2),
\qquad
\phi^\prime (-L/2) = (-1)^p\phi^\prime (L/2),
\nonumber \\
\psi(-L/2) = (\gamma_3)^p\psi(L/2),
\qquad p=0,1.
\end{eqnarray}
The value $p=0$ gives a trivial
periodic vacuum while $p=1$ selects a nontrivial soliton vacuum.
As one can see, the reasons behind our choice
of the topological boundary conditions do not include supersymmetry.
The same arguments apply to any theory with a Yukawa-like
interaction between fermions and bosons. From this point of view it
is very gratifying to discover that the $p=1$ topological boundary
condition (\ref{topbc}) preserves half of the supersymmetry
of the Lagrangian (\ref{lagrangian}). An easy way to see that
is to consider the Noether current corresponding to the supersymmetry
\begin{equation}
J^\mu = -(/\hspace{-0.5em}\partial\phi + U)\gamma^\mu\psi.
\end{equation}
Integrating
the conservation equation $\partial_\mu J^\mu=0$ over space we find
\begin{equation}
{\partial Q\over \partial t} \equiv {\partial\over \partial t}
\int_{-L/2}^{L/2} dx J^0(x)
= -\left[J^1(x)\right]_{-L/2}^{L/2},
\end{equation}
where the r.h.s. is simply the total current flowing into the system.
Using the boundary condition (\ref{topbc}) we obtain
\begin{equation}\label{J1}
-\left[J^1(x)\right]_{-L/2}^{L/2} =
\left[(/\hspace{-0.5em}\partial\phi + U)\gamma^1\psi
- (-/\hspace{-0.5em}\partial\phi + U)\gamma^1\gamma^3\psi
\right]_{L/2} =
\left.(1 + \gamma_3)(/\hspace{-0.5em}\partial\phi + U)\gamma^1\psi
\right|_{L/2}.
\end{equation}
We see that the $(1-\gamma_3)$ projection of the supercharge $Q$ is
conserved. Note that different projections of $Q$ are {\em
classically} conserved on the soliton or the antisoliton background.
The soliton with $\phi^\prime+U=0$ and $\psi=0$ is invariant under a
susy transformation with a parameter $\epsilon$ if
$(1+\gamma^1)\epsilon=0$. This means the projection $P_+Q$ (with
$P_\pm=(1\pm\gamma^1)/2$) of the
supercharge vanishes on the soliton configuration to linear order in
the quantum fields. For the antisoliton $P_-Q$ has this property.
This should be expected since the topological boundary condition does
not distinguish between the soliton and the antisoliton.
Similarly one can see that the topological boundary
condition does not break translational invariance. The
conservation equation for the stress tensor reads
$\partial_\mu T^{\mu\nu} = 0$, where
\begin{equation}
T^{\mu\nu} = \partial^\mu \phi \partial^\nu \phi
+ {1\over4}\left(\bar\psi\gamma^\mu\partial^\nu\psi
+ \bar\psi\gamma^\nu\partial^\mu\psi \right)
+ {\cal L}g^{\mu\nu}.
\end{equation}
In general, the non-conservation of total momentum is again due to the
boundary term
\begin{equation}
{\partial P\over\partial t} \equiv
{\partial\over\partial t}\int_{-L/2}^{L/2} T^{01} =
-\left[T^{11}\right]_{-L/2}^{L/2}.
\end{equation}
We see that the defining property of the topological
boundary condition, that it relates the fields at
$-L/2$ and $L/2$ by a transformation leaving ${\cal L}$ invariant,
ensures momentum conservation.
Note also that there is another $Z(2)$ symmetry in the Lagrangian
(\ref{lagrangian}): $\psi\to(-1)^q\psi$. This can be used to
extend the set of topological boundary conditions (\ref{topbc}) to
\begin{eqnarray}\label{topbc2}
\phi(-L/2) = (-1)^p\phi(L/2),
\qquad
\phi^\prime (-L/2) = (-1)^p\phi^\prime (L/2),
\nonumber \\
\psi(-L/2) = (-1)^q (\gamma_3)^p\psi(L/2),
\qquad p,q=0,1.
\end{eqnarray}
The values $(p,q)=(0,0)$ give a topologically
trivial sector. The sector $(0,1)$
is also trivial, but the fermions have a twist.
Two classically nontrivial vacua are obtained with $p=1$ and $q=0,1$.
For $p=1$ the two values of $q$ correspond to the arbitrariness of the
sign choice of the $\gamma_3$ matrix, and are related to each other
by the space parity transformation $\psi(x,t)\to\gamma^0\psi(-x,t)$.
Therefore one should expect
$E(1,0)=E(1,1)$, which one can check is true at one-loop.
As in the previous section we define the mass of the soliton as the
difference of the energies $E_p$ of these vacua: $M \equiv E_1 - E_0$.
At the classical level one finds $M=M_{\rm cl}$, where
$M_{\rm cl}$ is the classical soliton mass. The one-loop
correction is determined by integrating
\begin{equation}\label{M1loopSUSY}
{dM^{(1)}\over dm} = {d(\delta M)\over dm}
+ {1\over2}\sum_n {d\omega^B_n \over dm}
- {1\over2}\sum_n{d\omega^F_n\over dm} -
\left({1\over2}\sum_n {d\tilde\omega^B_n\over dm}
- {1\over2}\sum_n{d\tilde\omega^F_n\over dm} \right)
\end{equation}
over $m$. The expressions for the derivatives
$d\omega^B_n/dm$ and $d\tilde\omega^B_n/dm$ are the same as
in the bosonic case, see (\ref{kn_vac}) and (\ref{kn_sol}).
In order to find the corresponding expressions for the
fermionic frequencies we need to obtain the quantization condition
for $k_n$. For the trivial
sector we have simply $\tilde k_nL = 2\pi n$.
The nontrivial sector requires more careful analysis.
The frequencies $\omega$ are obtained by finding solutions of the
equation
\begin{equation}\label{small_ferm}
\left(\gamma^\mu\partial_\mu + U^\prime\right)\psi=0
\end{equation}
of the form $\psi=\psi(x)\exp\{-i\omega t\}$. Multiplying
this equation by $\left(-\gamma^\mu\partial_\mu + U^\prime\right)$
we find
\begin{equation}
\left(-\partial^2 + (U^\prime)^2 + \gamma^1 U^{\prime\prime} U
\right)\psi = 0,
\end{equation}
where we used $\phi^\prime_{\rm sol} = -U(\phi_{\rm sol})$,
which follows from the classical equation of motion
for the soliton in the $L\to\infty$ limit.
Projecting this equation using $P_\pm=(1\pm\gamma^1)/2$
we see that $\psi_+$ (where $\psi_\pm\equiv P_\pm\psi$)
obeys the same equation as the bosonic small fluctuations,
hence
\begin{equation}\label{+as}
\psi_+ = \alpha_+ e^{-i\omega t + i(kx \pm \delta/2)}
\quad \mbox{ when } x\to\pm \infty.
\end{equation}
The $\psi_-$ component can then be obtained by acting with $P_+$ on
(\ref{small_ferm})
\begin{equation}
\partial_0\gamma^0 \psi_- + \left(\partial_1 + U^\prime\right)\psi_+ = 0,
\end{equation}
which together with (\ref{+as}) gives the asymptotics of $\psi_-$
\begin{equation}\label{thetak}
\psi_- =
- \gamma^0 \alpha_+{k\mp im\over\sqrt{k^2+m^2}}
e^{-i\omega t + i(kx \pm \delta/2 )} =
- \gamma^0
\alpha_+ e^{\pm i\theta/2} e^{-i\omega t + i(kx \pm \delta/2 )}
\quad \mbox{ when } x\to\pm \infty,
\end{equation}
where $\theta(k) = -2\arctan(m/k)$ and we used the fact that
in the $L\to\infty$ limit $U^\prime\to\pm m$.
Therefore the solutions of (\ref{small_ferm})
have asymptotics
\begin{equation}\label{evectorgamma}
\psi = \psi_+ + \psi_- = \left(
1 - \gamma^0 e^{\pm i\theta/2}\right)\alpha_+
e^{-i\omega t + i(kx \pm \delta/2)}
\quad \mbox{ when } x\to\pm \infty
,
\end{equation}
where $\alpha_+$ is the eigenvector of $\gamma^1$ with eigenvalue $+1$.
Although one could continue the derivation without specifying the
representation for the $\gamma$ matrices (an exercise for the reader)
we find it more concise to adopt a certain representation. The most
convenient is the following one in terms of the
Pauli matrices: $\gamma^0 = -i\tau_2$,
$\gamma^1=\tau_3$, and hence $\gamma_3=\tau_1$.
It has two advantages. First,
$\alpha_+$ has now only an upper component. Second,
the Majorana condition becomes simply $\psi^*=\psi$
and the equation (\ref{small_ferm}) is real.
In this representation (\ref{evectorgamma}) becomes
\begin{equation}\label{as_psi}
\psi = \left( \begin{array}{c}
1 \\ - e^{\pm i\theta/2}
\end{array}\right) \alpha e^{-i\omega t + i(kx \pm \delta/2)}
\quad \mbox{ when } x\to\pm \infty,
\end{equation}
where $\alpha$ is a complex number.
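As a cross-check of this asymptotic form: in the representation $\gamma^0=-i\tau_2$, $\gamma^1=\tau_3$, the asymptotic Dirac operator (with $U^\prime\to\pm m$) must annihilate these plane-wave spinors. A small numerical Python sketch (ours, with arbitrary test values of $k$ and $m$):

```python
import cmath, math

# Representation used in the text: gamma^0 = -i*tau_2, gamma^1 = tau_3
g0 = ((0, -1), (1, 0))
g1 = ((1, 0), (0, -1))

k, m = 1.3, 0.7                      # arbitrary test values
omega = math.sqrt(k*k + m*m)
theta = -2*math.atan(m/k)

def residual(sign):
    """Largest component of (gamma^0 d_0 + gamma^1 d_1 + U') psi for the
    asymptotic spinor psi = (1, -exp(sign*i*theta/2)) exp(-i w t + i k x),
    where U' -> sign*m as x -> sign*infinity; d_0 -> -i*w, d_1 -> i*k."""
    v = (1.0, -cmath.exp(sign*1j*theta/2))
    rows = []
    for r in range(2):
        rows.append(sum(g0[r][j]*(-1j*omega)*v[j] + g1[r][j]*(1j*k)*v[j]
                        for j in range(2)) + sign*m*v[r])
    return max(abs(c) for c in rows)

print(residual(+1), residual(-1))    # both vanish up to rounding
```

The vanishing residuals at both $x\to+\infty$ and $x\to-\infty$ confirm the phase $e^{\pm i\theta/2}$ with $\theta(k)=-2\arctan(m/k)$.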
Now we impose the boundary condition
$\psi(-L/2) = \gamma_3\psi(L/2)$. The field $\psi$ in equation
(\ref{small_ferm}) must be real. This means that
only the real part of (\ref{as_psi}) needs to satisfy the boundary
condition. This condition should, however, be maintained for
all $t$. Therefore, due to the oscillating phase $\exp(-i\omega t)$,
a complex equation must be satisfied
\begin{equation}\label{BCGamma}
e^{-i(kL+\delta)}
\left( \begin{array}{c} 1 \\ - e^{ -i\theta/2} \end{array}\right)
=\Gamma
\left(\begin{array}{c} 1 \\ - e^{ i\theta/2} \end{array}\right).
\end{equation}
We introduced $\Gamma=\gamma_3$ in order to discuss briefly
the following point. One could consider a more general
boundary condition: $\psi(-L/2) = \Gamma\psi(L/2)$ with some matrix
$\Gamma$ (real in the Majorana representation we have chosen).
One can see from (\ref{BCGamma}) that certain boundary
conditions cannot be satisfied, for example,
the frequently employed \cite{kar1,CM,Uch,uchi1,uchi2,rvn}
periodic boundary conditions with $\Gamma=1$.
Equation (\ref{BCGamma}) provides an additional consistency
check for our choice of boundary condition.
With the topological boundary condition $\Gamma=\gamma_3=\tau_1$,
we find that equation (\ref{BCGamma}) is satisfied provided
\begin{equation}\label{kn_ferm}
kL + \delta + {\theta\over2} = 2 \pi n + \pi.
\end{equation}
Using this quantization rule we find
\begin{equation}\label{doF/dm}
{d\omega^F_n\over dm} = {1\over\sqrt{k^2+m^2}}
\left(m + k_n {dk_n\over dm}\right) =
{1\over\sqrt{k^2+m^2}}\left(m
+ {1\over L}{k_n^2\over m}
\left(\delta^\prime(k)+{\theta^\prime(k)\over2}\right)\right).
\end{equation}
\begin{figure}[hbt]
\centerline{
\epsfxsize 2in \epsfbox{skink.eps}
}
\caption[]{
The left- and the right-hand sides of the equation $\delta
+ \theta/2 = 2\pi n + \pi - kL$ are plotted schematically in the case
of the supersymmetric $\phi^4$ kink (two bound states). The dashed
line represents the value of $\delta(k)+\theta(k)/2$ without the discontinuity
$2\pi\varepsilon(k)$. As in the bosonic spectrum the discontinuity
leads to $n=-1,0$ mode numbers being skipped, while the spectrum of
allowed values of $k$ is not affected.
}
\label{fig:skink}
\end{figure}
Now we convert the sums over modes into integrals.
For the bosonic and fermionic sums in the trivial sector
formula (\ref{sum_triv}) applies and the
sums cancel each other (no cosmological constant in the
trivial susy vacuum). For the bosonic sum
in the nontrivial sector we use again (\ref{sum_delta}).
For the fermionic sum a formula analogous to (\ref{sum_delta})
applies with $\delta+\theta/2$ instead of $\delta$, which
follows from (\ref{kn_ferm}). Again, as for the bosonic modes,
due to the discontinuity in $\delta$ there are $n_{\rm ds}$
values of $n$ which do not lead to a solution of (\ref{kn_ferm}).
The remaining $n$ lead to $k$ values (namely, in the case of the kink:
$kL=\ldots,-5\pi/2,-\pi/2,3\pi/2,7\pi/2,\ldots$, see
Fig.~\ref{fig:skink}) which in the
limit $L\to\infty$ give a continuous integration measure
$Ldk/(2\pi)$ near $k=0$.
Putting now everything into formula (\ref{M1loopSUSY})
we find
\begin{equation}\label{dm1theta}
{dM^{(1)}\over dm} = {d(\delta M)\over dm}
-{1\over4m}\int_{-\infty}^\infty {dk\over2\pi} \sqrt{k^2+m^2}\,\theta^\prime(k).
\end{equation}
The one-loop mass counterterm is given by \cite{rvn}
\begin{equation}\label{deltaMSUSY}
\delta M = {m\over2} \int_{-\infty}^\infty {dk\over2\pi} {1\over\sqrt{k^2+m^2}}.
\end{equation}
It follows from the renormalization counterterm $\delta m$ which is
chosen to cancel the sum of the bosonic and fermionic tadpole
diagrams. Substituting into (\ref{dm1theta}) we find
$dM^{(1)}/dm=-1/(2\pi)$, hence
\begin{equation}\label{M1SUSY}
M^{(1)} = - {m\over2\pi}.
\end{equation}
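The cancellation of the logarithmic divergences between $d(\delta M)/dm$ and the $\theta^\prime$ term can be seen numerically: the combined integrand is $-\frac12 m^2/(k^2+m^2)^{3/2}$ per $dk/2\pi$, which integrates to $-1/(2\pi)$. A Python sketch (ours, with $m=1$):

```python
import math

m = 1.0

def combined(k):
    # per dk/(2*pi):  d(delta M)/dm contributes (1/2)*k^2/(k^2+m^2)^(3/2);
    # the theta term contributes -(1/(4m))*sqrt(k^2+m^2)*theta'(k)
    # with theta'(k) = 2m/(k^2+m^2), i.e. -(1/2)/sqrt(k^2+m^2)
    w2 = k*k + m*m
    return 0.5*k*k/w2**1.5 - 0.5/math.sqrt(w2)

# Simpson's rule with k = m*tan(u); each term alone diverges logarithmically,
# but the combination falls off like 1/k^3 and the integral is finite.
n = 20001
a, b = -math.pi/2, math.pi/2
h = (b - a)/(n - 1)
total = 0.0
for i in range(n):
    u = a + i*h
    c = math.cos(u)
    val = 0.0 if abs(c) < 1e-12 else combined(m*math.tan(u))*m/c**2
    total += (1 if i in (0, n - 1) else 4 if i % 2 else 2)*val
dM1_dm = total*h/3/(2*math.pi)
print(dM1_dm)   # close to -1/(2*pi) ~ -0.159
```

The quadrature reproduces $dM^{(1)}/dm=-1/(2\pi)$, independently of $m$ as expected on dimensional grounds.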
This result differs from the one two of us have obtained previously
\cite{rvn} using a mode-number regularization scheme with the
conventionally employed (but, as we have argued, untenable) periodic
boundary conditions.
In the special case of the supersymmetric sine-Gordon model,
we can compare this result with the one obtained from the Yang-Baxter
equation assuming the factorization of the S-matrix \cite{scho}.
The mass spectrum is then given by~\cite{ahn}
\begin{equation}\label{m_n}
m_n = 2M\sin(n\gamma/16),
\end{equation}
where $\gamma$ in the notation of ref. \cite{ahn} is related to
the bare coupling $\beta$ through
\begin{equation}\label{gammabeta}
{1\over\gamma} = {1 - \beta^2/4\pi\over4\beta^2}
= {1\over4\beta^2} - {1\over16\pi}.
\end{equation}
Expanding (\ref{m_n}) for $n=1$ we find
\begin{equation}
m_1 = {M\gamma\over8} + O(\gamma^3).
\end{equation}
Since this is the lightest mass in the spectrum we identify
it with the meson mass $m=m_1$. Taking the ratio $M/m_1$ and using
(\ref{gammabeta}) we obtain
\begin{equation}
{M\over m} = {8\over\gamma} + O(\gamma) = {2\over\beta^2}
- {1\over2\pi} + O(\beta^2).
\end{equation}
The first term is the classical result, the second is the 1-loop
correction. This means that the 1-loop correction to $M$ following
from the exact S-matrix factorization calculation
\cite{ahn} is the same as our 1-loop result (\ref{M1SUSY}).
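The agreement can be checked numerically from (\ref{m_n}) and (\ref{gammabeta}): the exact ratio $M/m_1$ approaches $2/\beta^2-1/(2\pi)$ as $\beta\to0$, with the deviation shrinking like $O(\beta^2)$. A Python sketch (ours, with illustrative values of $\beta$):

```python
import math

def mass_ratio(beta):
    """Exact M/m_1 from the mass formula m_1 = 2*M*sin(gamma/16),
    with 1/gamma = 1/(4*beta^2) - 1/(16*pi)."""
    gamma = 1.0/(1.0/(4*beta**2) - 1.0/(16*math.pi))
    return 1.0/(2*math.sin(gamma/16))

for beta in (0.2, 0.1, 0.05):
    approx = 2/beta**2 - 1/(2*math.pi)       # classical + one-loop
    print(beta, mass_ratio(beta) - approx)   # deviation shrinks ~ beta^2
```

The printed deviations fall off as the coupling is reduced, consistent with the neglected $O(\beta^2)$ terms.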
The next question we address is whether such a negative
correction is in agreement with the well-known BPS bound.
This question has been subject to controversy and deserves a separate
section.
\section{Quantum BPS bound, soliton mass, and finite size effects}
\label{sec:bound}
As was first realized by Olive and Witten
\cite{WO}, the naive supersymmetry algebra in a topologically
nontrivial sector is modified by central charges. The susy generators
for the $N=1$ model read in the representation of section 3
\begin{equation}\label{qpm}
Q_\pm\equiv P_\pm\int_{-L/2}^{L/2} [ -
(/\hspace{-0.5em}\partial\varphi + U)\gamma^0\psi ] dx
=\int_{-L/2}^{L/2} [ \dot\varphi\psi_\pm+(\varphi'\pm U)\psi_\mp]dx,
\end{equation}
where $P_\pm$ is again $(1\pm\gamma_1)/2$. Using canonical commutation
relations we arrive at the following algebra
\begin{equation}\label{algebra}
\left\{ Q_\pm, Q_\pm \right\}=2H\pm 2Z;\quad
\left\{ Q_+,Q_-\right\}=2P,
\end{equation}
where
\begin{eqnarray}\label{hpz}
H&\equiv&\int_{-L/2}^{L/2} [ \frac12 \dot\varphi^2+\frac12(\varphi')^2+
\frac12 U^2
+ \frac{i}2(\psi_+\psi_-'+\psi_-\psi_+')-iU'\psi_+\psi_-]dx, \\
P&\equiv&\int_{-L/2}^{L/2} [ \dot\varphi\varphi'+
\frac{i}2(\psi_+\psi_+'+\psi_-\psi_-')]dx, \\
Z&\equiv&\int_{-L/2}^{L/2} \varphi' U\, dx.
\end{eqnarray}
The central charge $Z$ is clearly a boundary term.
Let us comment on some subtleties in the derivation of
(\ref{algebra}). The Dirac delta
functions in the equal-time canonical commutation relations can be written as
$\delta (x,y)=\sum_{m} \eta_{m}(x)\eta_{m}(y)$, where $\eta_{m}$ is a complete
set of functions {\em satisfying the boundary conditions} of the corresponding
field. For such $\delta (x,y)$ one has:
\begin{eqnarray}
\int_{-L/2}^{L/2}\phi(x)\delta(x-L/2)\,dx&=&\frac{\phi(L/2)\pm\phi(-L/2)}{2},\\
\int_{-L/2}^{L/2}\phi(x)\delta(x+L/2)\,dx&=&\frac{\phi(-L/2)\pm\phi(L/2)}{2}.
\end{eqnarray}
For the bosons in the topological sector we need the $-$ signs. For the
fermions $\psi_{+}(x)+\psi_{-}(x)$ one needs the $+$ signs,
but for the fermions
$\psi_{+}(x)-\psi_{-}(x)$ one needs the $-$ signs. That {\em some} subtlety in
the delta functions is present, is immediately clear if one considers the
double integral $\int\int dxdy\;f(x)\partial_{x}\delta(x-y)g(y)$, and either
directly partially integrates the derivative $\partial/\partial {x}$, or first
replaces $\partial_{x}\delta(x-y)$ by $-\partial_{y}\delta(x-y)$ and then
partially integrates w.r.t. $y$. One gets the same result provided
\begin{equation}
fg(x)|_{x\in B}=\int_{-L/2}^{L/2}dy\;f(x)\delta(x-y)g(y)|_{x\in B}+
\int_{-L/2}^{L/2}dx \;f(x)\delta(x-y)g(y)|_{y\in B}
\label{consist}
\end{equation}
where the notation $h(x)|_{x\in B}$ implies $h(L/2)-h(-L/2)$. Naively, there
is a factor of 2 missing in this equation, but with the more careful
definitions of $\delta(x-y)$ for periodic or antiperiodic functions,
consistency is obtained.
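The half-sum rule above can be checked in an explicit mode expansion. With $L=1$ and the antiperiodic modes $\eta_n(x)=e^{i\pi(2n+1)x}$, smearing the test function $\phi(x)=x$ against $\delta(x,1/2)=\sum_n\eta_n(x)\eta_n^*(1/2)$ contributes, term by term (the Fourier integral in closed form), $2/(\pi(2n+1))^2$, and the symmetric partial sums converge to $(\phi(1/2)-\phi(-1/2))/2=1/2$. A Python sketch (ours):

```python
import math

# Antiperiodic delta function on (-1/2, 1/2) smeared against phi(x) = x:
# each mode n contributes 2/(pi*(2n+1))^2, the closed form of
# int x*exp(i*pi*(2n+1)*x) dx times the conjugate mode at the endpoint x = 1/2.
N = 100000
smeared = sum(2.0/(math.pi*(2*n + 1))**2 for n in range(-N, N))
midpoint = (0.5 - (-0.5))/2   # (phi(L/2) - phi(-L/2))/2, the minus sign of the text
print(smeared, midpoint)      # smeared tends to 0.5 as N grows
```

The truncated mode sum approaches the half-difference of the boundary values, as the $-$ sign prescription for bosons in the topological sector requires.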
With these delta functions one finds that the boundary terms in the
$\{ Q_{\pm}, Q_{\pm}\}$ anticommutators reproduce the consistency condition
(\ref{consist}), hence cancel, whereas in the $\{Q_{+},Q_{-}\}$ anticommutator
one finds the boundary term
\begin{equation}
-i\int dx\psi_{+}(x)\delta_{a}(x-y)\psi_{+}(y)|_{y\in B}-i\int dx \psi_{-}(x)
\delta_{a}(x-y)\psi_{-}(y)|_{y\in B}
\end{equation}
where $\delta_{a}(x-y)$ is the bosonic (antisymmetric) delta function defined
above. These terms cancel if one uses our topological boundary conditions.
In the $\{ Q,Q\}$ relations one does not encounter subtleties involving delta
functions for the fermions because there are no derivatives of fermions in
$Q_{\pm}$.
Since the operators $Q_\pm$ are
hermitian, one finds that the following relation exists between the
expectation values of operators $H$ and $Z$:
\begin{equation}\label{thebound}
\langle s | H | s \rangle \geq | \langle s | Z | s \rangle |,
\end{equation}
for any quantum state $s$.
As we have already pointed out in section 3 (\ref{J1}), only one linear
combination of $Q_+$ and $Q_-$ is conserved in the soliton sector:
$Q_+\pm Q_-$ for $q=1,0$ respectively. Taking for definiteness $q=0$
we derive from (\ref{qpm}),(\ref{hpz}) the following commutation relations:
\begin{eqnarray}\label{47}
i[H,Q_+ + Q_-]&=&
2[(\dot\varphi + \varphi')(\psi_+ +\psi_-)+U(\psi_+ - \psi_-)
]_{x=L/2} \\
i[P,Q_+ + Q_-]&=&
2[(\dot\varphi + \varphi')(\psi_+ +\psi_-)-U(\psi_+ - \psi_-)
]_{x=L/2},
\end{eqnarray}
while the other linear combination $Q_+-Q_-$ commutes with both $H$
and $P$.\footnote{In these relations one must use
the proper definitions of the delta functions for the
fermions since derivatives of $\psi_\pm$ appear in $H$.
As one can see, (\ref{47}) agrees with (\ref{J1}).}
We also find that the operator $Z$ does not commute
with the Hamiltonian in the soliton sector:
\begin{equation}
i[H,Z] = 2U\dot\varphi\Big|_{x=L/2}.
\end{equation}
Let us examine carefully the meaning of this last result. Strictly
speaking, it implies that $Z$ is not a central charge. We shall show
that this fact reflects a certain property of $Z$ in a {\it finite}
volume $L$. Let us ask the following question: what is the
expectation value of $Z$ in the soliton vacuum state $|{\rm
sol}\rangle$? It is clear that for any finite volume $L$: $\langle
{\rm sol} | Z | {\rm sol}\rangle =0$, since neither a positive nor a
negative value is distinguished by our boundary conditions. This is a
consequence of the fact that $H$ and $Z$ do not commute. The ground
state $|{\rm sol}\rangle $ is an eigenvector of $H$, but it need not
be and is not an eigenvector of $Z$.
To make the next step clear it is convenient to use the following
observation: The value of $Z$ measures the position of the
soliton. Indeed, for the classical configuration if the center of the
soliton is at $x=0$ exactly then $Z$ is $M$ (up to $O(1/L)$). If we
now move the center of the classical soliton the value of $Z$ will
decrease and reach zero when the center of the soliton is at $L/2$. If
we continue shifting the solution in the same direction, and bearing
in mind its antiperiodicity, $Z$ will become increasingly negative
and it will reach the value $-M$ when there is an antisoliton at
$x=0$. If we deal with a configuration which is a distortion of the
soliton, the center of the soliton is not well defined but $Z$ is and
can give one an idea of where the soliton is. On the quantum level
this corresponds to the fact that $P$ and $Z$ do not commute. If we
act with $P$ (this is our spatial shift) on an eigenstate of $Z$ we
generate a different eigenstate of $Z$ and the expectation value of
$Z$ changes.
Now, if we think of $Z$ as (a nonlinear function of) the coordinate
and $P$ as the momentum of the soliton, the next step becomes
clear. The vacuum is an eigenstate of $P$ with eigenvalue
0. Therefore, it is a superposition of the eigenstates of
$Z$. Positive and negative values of $Z$ enter with the same weight
into this superposition (the corresponding eigenvectors can be
obtained from each other by acting with $\exp(iLP)$ --- a shift by
$L$). Therefore $\langle {\rm sol} | Z | {\rm sol} \rangle$=0 for any
finite $L$.
It is, however, too early to conclude that the equation
(\ref{thebound}) does not lead to any condition on $\langle {\rm
sol}|H|{\rm sol}\rangle$ apart from semipositivity. The expectation
value of $Z$ can be compared to an order parameter in a system with
spontaneous symmetry breaking. It is only nonzero if the thermodynamic
limit $L\to\infty$ is taken properly. We shall now analyze how the
limit $L\to\infty$ must be taken in the case of the soliton.
One can view the soliton as an almost classical particle (as long as
the coupling constant is small) which is subject to Brownian motion
due to quantum fluctuations. This is the meaning of the fact that $H$
and $Z$ do not commute: $Z$, or the position of the soliton, depends
on time. If we wait for a sufficiently long time it will cover all
possible positions and the expectation value of $Z$ will be
zero. However, it is obvious that for large $L$ the soliton will
spend most of the time away from the boundaries. This means that if one
starts from a state of the soliton away from the boundary, so that $Z=M$,
and limits the time of observation, the expectation value of $Z$
will remain close to $M$.
How long does it take for the soliton to explore the whole volume $L$?
Since the motion is a random walk, the distance from the original position
grows as $O(\sqrt t)$, and the soliton will stay away from the boundary if
we restrict the interval $t$ of the observation to $t\ll O(L^2)$. Such a
restriction means that we introduce an error of at most $O(1/L^2)$
in the energy, due to the uncertainty principle. This is small when $L$ is
large. Alternatively, one can perform a Euclidean rotation and consider
the classical statistical theory of an interface in 2 dimensions. The
random walk in this case is the well-known roughening of the interface
\cite{jdl}. It leads to smearing of the interface to the width of
$w={\rm const}\sqrt{t/M}$, where $M$ is the one-dimensional interface
tension, which is the soliton mass. The correction to the tension
turns out to be $O(1/t)$, as we have already seen. (More rigorously,
the partition function of the wall is not just $\exp(-Mt)$ in the
$t\to\infty$ limit but has a preexponent $L\sqrt{M/t}$ due to the
fluctuations of the interface. The factor $L$ arises from the integration
over the volume of the collective coordinate with a familiar measure
$\sqrt{Mt}$, and an additional factor $1/t$ comes from
the determinant of nonzero soft vibrational modes of the interface
\cite{amst}).
Therefore the thermodynamic limit $L\to\infty$ in our system which
gives nonzero $\langle {\rm sol}|Z|{\rm sol} \rangle$ corresponds to
the energy measurement whose duration $t$ is small compared to $ML^2$.
The bound (\ref{thebound}) must apply to the result of such a
measurement. The expectation value of $Z$ in such a thermodynamic
limit is what was calculated in \cite{imb1,rvn}:
\begin{eqnarray}
Z &=& Z_{\rm cl}(m_0) + \int_{-\infty}^{\infty} dx {d\over dx}
\left[\frac12 U^\prime(\phi_{\rm sol}(x))\langle \eta^2(x)\rangle\right]
\nonumber \\
&=& M_{\rm cl}(m_0) - \frac m2 \int_{-\infty}^{\infty} {dk\over2\pi}
{1\over\sqrt{k^2+m^2}} = M_{\rm cl}(m).
\end{eqnarray}
The logarithmically divergent integral is exactly cancelled by the
counterterm $\delta M$ (\ref{deltaMSUSY}) after the renormalization
of $m$.
The question we need to answer now is what is the value of
$\langle {\rm sol}|H|{\rm sol} \rangle$? We recall that our calculation
of the soliton mass was aimed at finding the dependence of
$M$, or, $\langle {\rm sol}|H|{\rm sol} \rangle$ on $m$, the renormalized mass.
We were not able to determine a constant term in the
unrenormalized $\langle {\rm sol}|H|{\rm sol} \rangle$, but we knew that it
must be subtracted to satisfy the renormalization condition
$M|_{m=0}=0$. To evaluate the l.h.s. of (\ref{thebound})
we have to know this constant. In order to find it
we must evaluate directly the sums of bosonic and fermionic frequencies
in the soliton sector. Although the sums are quadratically
divergent supersymmetry improves the situation. It
requires that bosonic and fermionic modes come
in pairs. Therefore we need to apply the Euler-Maclaurin formula
to a function $f(n)=(\omega_n^B-\omega_n^F)/2$ which has much better
behavior at large $n$. Using the spectral relations for the bosons
(\ref{kn_sol}) and the fermions (\ref{kn_ferm}) we find:
\begin{eqnarray}
\langle {\rm sol}|H|{\rm sol} \rangle &=& M_{\rm cl}(m_0)
+ \frac12 \sum_{n=-N}^{N} (\omega_n^B-\omega_n^F)
=
M_{\rm cl}(m_0) + \frac12 \int_{-\Lambda}^{\Lambda} {dk\over2\pi}
{\theta(k)\over2}{d\over dk}\sqrt{k^2+m^2}
\nonumber\\
&&=
M_{\rm cl}(m_0) +
{1\over8\pi}\sqrt{k^2+m^2}\,\theta(k)\Big|_{-\Lambda}^{\Lambda}
- \frac14 \int_{-\Lambda}^{\Lambda}{dk\over2\pi}
\sqrt{k^2+m^2}\,\theta^\prime(k)
\nonumber\\
&&\stackrel{\Lambda\to\infty}{=}
M_{\rm cl}(m_0) +
{\Lambda\over4} - {m\over 2\pi}
- \frac12 \int_{-\infty}^{\infty}{dk\over2\pi}{m\over\sqrt{k^2+m^2}}
.
\end{eqnarray}
The constant $\Lambda$ is the ultraviolet cutoff related to the number
of modes: $L\Lambda=2\pi (N+1/2) + O(1/L)$, and we recall that
$\theta=-2\arctan(m/k)$. The last integral is logarithmically
divergent and is exactly cancelled by the counter-term $\delta M$
(\ref{deltaMSUSY}) when we renormalize $m_0=m+\delta m$. The same
divergence appears in $\langle {\rm sol}|Z|{\rm sol} \rangle$ and is
also removed when $m$ is renormalized. On the other hand, the linear
divergent term $\Lambda/4$ does not appear in $\langle {\rm
sol}|Z|{\rm sol} \rangle$. In terms of the renormalized mass $m$ and
$M_{\rm cl}(m)$ the left- and the right-hand sides of (\ref{thebound})
are:
\begin{equation}
\langle {\rm sol}|H|{\rm sol} \rangle = M_{\rm cl} + {\Lambda\over4} - {m\over2\pi}
\qquad \mbox{ and } \qquad \langle {\rm sol}|Z|{\rm sol} \rangle = M_{\rm cl}.
\end{equation}
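The finite piece $-m/(2\pi)$ in $\langle {\rm sol}|H|{\rm sol}\rangle$ originates from the boundary term ${1\over8\pi}\sqrt{k^2+m^2}\,\theta(k)\big|_{-\Lambda}^{\Lambda}$ above, and its limit can be checked numerically. A Python sketch (ours, with $m=1$):

```python
import math

m = 1.0

def boundary_term(Lam):
    # (1/(8*pi)) * sqrt(k^2 + m^2) * theta(k) between k = -Lam and k = +Lam,
    # with theta(k) = -2*arctan(m/k)
    f = lambda k: math.sqrt(k*k + m*m)*(-2*math.atan(m/k))/(8*math.pi)
    return f(Lam) - f(-Lam)

for Lam in (1e2, 1e4, 1e6):
    print(Lam, boundary_term(Lam))   # tends to -m/(2*pi) ~ -0.159
```

Both endpoints contribute $-m/(4\pi)$ each, so the limit is reached with $O(m^2/\Lambda^2)$ corrections.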
We see that the bound is observed by the soliton vacuum state with an
infinite overkill due to a linearly divergent constant $\Lambda/4$. This
constant is only nonzero in the soliton sector. In
this sector the bound is not saturated, which shows that ``what can
happen --- does happen'': the argument of Olive and Witten for the
saturation of the bound is based on the ``multiplet shortening'' and
does not apply to $N=1$ susy solitons.
In other words, to resolve the long-standing problem of the Bogomolnyi
bound in the $N=1$ supersymmetric soliton/kink model we must realize
that the bound is imposed on the unrenormalized expectation value of
the Hamiltonian $\langle {\rm sol}|H| {\rm sol} \rangle$. Both sides
of equation (\ref{thebound}) contain ultraviolet divergences. The
supersymmetry ensures that quadratic divergences do not appear in
$\langle H \rangle$. However, $N=1$ is not enough and a linear
divergence remains in the topologically nontrivial sector (there is
also a logarithmic divergence, but it is matched on both sides,
$\langle H \rangle$ and $\langle Z \rangle$). This linearly divergent
term is positive in accordance with the bound (\ref{thebound}).
Note that this divergence is different from a cosmological constant
(which vanishes because of supersymmetry) in that it is proportional
to $L^0$ rather than $L^1$.
Since $\langle {\rm sol}|H|{\rm sol} \rangle$ is divergent even after
standard renormalization of the mass $m$ we need to use an additional
renormalization condition to find the physical soliton mass. This is
the condition $M|_{m=0}=0$ which we introduced. Therefore
\begin{equation}
M=\langle {\rm sol}|H|{\rm sol} \rangle - \langle {\rm sol}|H|{\rm sol} \rangle|_{m=0} =
\langle {\rm sol}|H|{\rm sol} \rangle - {\Lambda\over4}.
\end{equation}
Our new renormalization condition is based on the physical requirement
that the physical vacuum energy should not depend on topology in the conformal
point $m=0$. Therefore, in principle, it also requires a subtraction
of the expectation value of $H$ in the topologically trivial vacuum
$\langle0|H|0\rangle$, or rather $\langle0|H|0\rangle -
\langle0|H|0\rangle|_{m=0}$. In the non-supersymmetric case this is
essential to cancel the background bulk contributions linear in $L$,
which are $m$-dependent, and which are the same in both sectors.
However, supersymmetry ensures that $\langle0|H|0\rangle=0$. Finally,
we find a negative finite quantum correction to the physical soliton
mass given by (\ref{M1SUSY}). However, it is not the physical mass
$M$, but rather it is $\langle {\rm sol}|H|{\rm sol} \rangle$, to
which the bound (\ref{thebound}) applies. As we shall see in the next
section, in the case of $N=2$ supersymmetry all corrections, even
finite, are cancelled and
$M=\langle {\rm sol}|H|{\rm sol} \rangle$ saturates the bound as the
``multiplet shortening'' demands.
\section{The $N=2$ case}
\label{sec:N=2}
Consider the action for the following $N=(2,2)$ susy model in
1+1 dimensions
\begin{equation}\label{n2action}
{\cal L} = - \partial_\mu \varphi^* \partial^\mu \varphi - \bar{\psi} \gamma^\mu
\partial_\mu \psi - U^* U - {1\over 2} (U' \psi^T i \gamma^0 \psi + U^{*'} \psi^{*T}
i \gamma^0 \psi^*)
\end{equation}
where $\varphi$ and $\psi$ are complex, $U=U (\varphi ), U' =
(\partial/\partial \varphi) U$, and $\gamma^1 = \tau_3$ and $\gamma^0 =- i \tau_2$
with $\tau_3$ and $\tau_2$ the usual Pauli matrices.\footnote{With
$\gamma^1=\tau_3$ instead of $\gamma^1 = \tau_1$ the diagonalization of the
fermionic actions is easier.}
The action is invariant under
$\delta \varphi = \bar{\epsilon} \psi, \delta \varphi^* = \bar{\psi} \epsilon, \delta \psi =
/\hspace{-.5em}\partial
\varphi \epsilon - U^* \epsilon^*$ and $\delta \bar{\psi} = - \bar{\epsilon}
/\hspace{-.5em}\partial \varphi^* - U \epsilon^T i
\gamma^0$ with complex two-component spinors
$\epsilon$, with $\bar{\epsilon}=\epsilon^\dag i\gamma^0$ and $\bar{\psi}=\psi^\dag i\gamma^0$.
In terms of the components $\varphi = (\varphi_1 + i
\varphi_2)/\sqrt{2}$ and $\psi^T = (\psi_+, \psi_-)$ the action reads
\begin{eqnarray}
{\cal L} &=& - \frac{1}{2} \partial_\mu \varphi_1 \partial^\mu \varphi_1 - \frac{1}{2}
\partial_\mu \varphi_2 \partial^\mu \varphi_2 + i \psi_+^* (\dot{\psi}_+ -
\psi_-^\prime) + i \psi_-^* (\dot{\psi}_- - \psi_+^\prime)
\nonumber\\
&& - U^*U + i U' \psi_+ \psi_- + i {U^*}'
\psi_+^* \psi_-^*
\end{eqnarray}
For the superkink
\begin{equation}
U_{\mathrm{kink}}=\sqrt\lambda\,(\varphi^2-\mu_0^2/2\lambda),
\end{equation}
while for
super sine-Gordon theory
\begin{equation}
U_{\text{sine-Gordon}}=m_0^2\sqrt{2/\lambda}\,\cos(\sqrt{\lambda/2}\,\varphi/m_0).
\end{equation}
In terms of $\varphi_1$ and $\varphi_2$ the potential is given by
\begin{eqnarray}
U^*U_{\mathrm{kink}}&=&
{\lambda \over 4} \left[ \left( \varphi_1^2 - \mu_0^2 /\lambda \right)^2
+ 2 \varphi_1^2 \varphi_2^2 + \varphi_2^4 + 2
{\mu_0^2 \over \lambda} \varphi^2_2 \right]\\
U^*U_{\text{sine-Gordon}}&=&{m_0^4\over\lambda}\left[
\cos{\sqrt\lambda\over m_0}\varphi_1+\cosh{\sqrt\lambda\over m_0}\varphi_2\right].
\end{eqnarray}
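Both component forms of the potential follow from $U^*U$ with the complex argument $\varphi=(\varphi_1+i\varphi_2)/\sqrt2$; this can be verified numerically at an arbitrary point. A Python sketch (ours, with $\lambda=\mu_0=m_0=1$ and an arbitrary test point):

```python
import cmath, math

lam, mu0, m0 = 1.0, 1.0, 1.0
p1, p2 = 0.73, -0.41                       # arbitrary test point (phi_1, phi_2)
phi = (p1 + 1j*p2)/math.sqrt(2)

# kink: U = sqrt(lam)*(phi^2 - mu0^2/(2*lam))
Uk = math.sqrt(lam)*(phi**2 - mu0**2/(2*lam))
kink_rhs = (lam/4)*((p1**2 - mu0**2/lam)**2 + 2*p1**2*p2**2 + p2**4
                    + 2*(mu0**2/lam)*p2**2)

# sine-Gordon: U = m0^2*sqrt(2/lam)*cos(sqrt(lam/2)*phi/m0)
Usg = m0**2*math.sqrt(2/lam)*cmath.cos(math.sqrt(lam/2)*phi/m0)
sg_rhs = (m0**4/lam)*(math.cos(math.sqrt(lam)*p1/m0)
                      + math.cosh(math.sqrt(lam)*p2/m0))

print(abs(Uk)**2, kink_rhs)    # agree to rounding
print(abs(Usg)**2, sg_rhs)     # agree to rounding
```

For the sine-Gordon case the check uses $|\cos(x+iy)|^2=\cos^2 x+\sinh^2 y$, which is where the $\cosh$ of $\varphi_2$ originates.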
This already shows that for the kink
the trivial solutions $\varphi^{(0)}_1 = \pm
\mu_0/\sqrt{\lambda}$ and the kink solution $\varphi_1=\varphi_K =
( \mu_0/\sqrt{\lambda}) \tanh\mu_0 x/\sqrt{2} $
(and the antikink solution
$\varphi_{\bar{K}} = - \varphi_K)$ of the bosonic kink model
remain solutions of this susy model,
with $\varphi_2=0$. Because the potential is of
the form $(\varphi^2 - \mu^2_0 /2\lambda) (\varphi^{*2} -
\mu^2_0/2\lambda)$ instead of $(\varphi \varphi^* - \mu_0^2 / 2\lambda)^2$,
there is no U(1) symmetry acting on $(\varphi_1 , \varphi_2)$
which can rotate the kink away. Hence there is a genuine
soliton.
In sine-Gordon theory, the trivial vacua are at $\varphi^{(0)}_2=0$ and
$\varphi^{(0)}_1=(n+{1\over2})\pi m/\sqrt\lambda$, while for the vacuum in the
topological sector we choose the solution
$$\varphi^{\mathrm{sG}}_{\mathrm{sol}}(x)=(m/\sqrt\lambda)\left(4\arctan(e^{mx})-\pi\right).$$ This solution is
antisymmetric in $x$, in agreement with the $Z_2$ topological boundary
conditions of section~2.\footnote{This solution
is obtained from the one in \cite{rvn} by the substitution
$\varphi=\varphi'+m\pi/\sqrt\lambda$. Actually, the sine-Gordon model has $Z$
symmetry, and we could choose solutions which interpolate between two
other minima of the potential by constant shifts. }
The transformation rules in component form read
\begin{eqnarray}
\delta \varphi = - i \epsilon_+^* \psi_- + i \epsilon_-^* \psi_+ && \delta \psi_+ =
\varphi^\prime \epsilon_+ - \dot{\varphi} \epsilon_- - U^* \epsilon^*_+
\nonumber\\
\delta \varphi^* = - i \psi_+^* \epsilon_- + i \psi_-^* \epsilon_+ && \delta \psi_- = -
\varphi^\prime \epsilon_- + \dot{\varphi} \epsilon_+ - U^* \epsilon_-^*
\end{eqnarray}
and since for the soliton
$\varphi_{\mathrm{sol}}^\prime + \sqrt{2}\,U (\varphi_{\mathrm{sol}}/\sqrt2) =
0$, the solution $\varphi = (1/\sqrt{2}) \varphi_{\mathrm{sol}}$, $\psi_+ = \psi_- =
0$ is only preserved by half the susy transformations, namely
those with ${\rm Im}\, \epsilon_+$ and ${\rm Re}\, \epsilon_-$.
In the sector without soliton we set $\varphi_1 = \varphi^{(0)}_1 + \eta$
and find then the following linearized field
equations
\begin{eqnarray}
&& \Box \eta - m_0^2 \eta = 0 \; , \; \Box \varphi_2 -
m_0^2 \varphi_2 = 0 \nonumber\\
&& \left. \begin{array}{l} \dot{\psi}_+ - \psi_-^\prime +
m_0 \psi_-^* = 0 \\ \dot{\psi}_- - \psi_+^\prime - m_0
\psi_+^* = 0 \end{array}\right\} \begin{array}{l} \ddot{\psi}_+
-\psi_+^{\prime\prime} + m_0^2 \psi_+ = 0 \\ \ddot{\psi}_- -
\psi_-^{\prime\prime} + m_0^2 \psi_- = 0 \end{array}
\end{eqnarray}
where $m_0=U'(\varphi_1^{(0)}/\sqrt2)$ and we used that
$U(\varphi_1^{(0)}/\sqrt2)=0$.
Hence all fields satisfy the same second-order field equation
in the trivial sector, with a common mass $m_0$ (for the kink $m_0^2 = 2
\mu_0^2$).
In the sector with a soliton, we set $\varphi_1 = \varphi_{\mathrm sol} (x)
+ \eta$, and find then the linearized field equations
\begin{eqnarray}
\label{lfe}
\Box \eta - (U'^2+UU'')\Big|_{\varphi_{\mathrm sol}} \eta=0 &;&
\dot{\psi}_+ - \psi_-^\prime + U'\Big|_{\varphi_{\mathrm sol}} \psi_-^* = 0
\nonumber\\
\Box \varphi_2 - (U'^2-UU'')\Big|_{\varphi_{\mathrm sol}}
\varphi_2 = 0 &;& \dot{\psi}_- - \psi_+^\prime - U'\Big|_{\varphi_{\mathrm sol}}
\psi_+^* = 0
\end{eqnarray}
Decomposing $\psi_+ = {\rm Re}\, \psi_+ + i\, {\rm Im}\, \psi_+$
and similarly for $\psi_-$,
the fermionic field equations split into one pair of equations
which couples ${\rm Re}\, \psi_+$ to ${\rm Re}\, \psi_-$, and another pair which
couples ${\rm Im}\, \psi_+$ to ${\rm Im}\, \psi_-$. Iteration and the relation
$\varphi_{\mathrm{sol}}'=-\sqrt2\, U$ lead to
\begin{eqnarray}
&& {\rm Re}\, \psi_+^{\prime\prime} - {\rm Re}\, \ddot{\psi}_+ - (U'^2+UU'')\Big|_{\varphi_{\mathrm sol}}
{\rm Re}\, \psi_+ = 0 \; , \; {\rm idem\ for\ } {\rm Im}\, \psi_-
\nonumber\\
&& {\rm Re}\, \psi_-^{\prime\prime} - {\rm Re}\, \ddot{\psi}_- - (U'^2-UU'')\Big|_{\varphi_{\mathrm sol}}
{\rm Re}\, \psi_- = 0 \; , \; {\rm idem\ for\ } {\rm Im}\, \psi_+
\end{eqnarray}
Hence, the real triplet $\eta$, ${\rm Re}\, \psi_+$ and ${\rm Im}\, \psi_-$ satisfies
the same field equation as the real scalar $\eta$ and the real upper
component $\psi_+$ of the Majorana fermion in the N=(1,1)
model, whereas the real triplet $\varphi_2$, ${\rm Im}\, \psi_+$ and ${\rm Re}\,
\psi_-$ satisfies the same field equation as the real spinor
$\psi_-$ in the N=(1,1) model \cite{rvn}.
In principle one can directly determine the discrete spectrum
of $\varphi_2$ by solving the Schr\"odinger equation with
$V=(U')^2-UU''$ \cite{mor}, but susy already gives the
answer. In the N=(1,1) model, the spinors $u_\pm $ in $\psi_\pm
(x,t) = u_\pm (x)\, e^{- i \omega t}$ satisfy the coupled equations
$(\partial_x + \tilde{U}^\prime) u_+ + i \omega u_- = 0$ and $(\partial_x -
\tilde{U}^\prime) u_- + i \omega u_+ = 0$, where $\tilde{U}=
\sqrt2 U(\varphi=\varphi_1/\sqrt2)$.
Any solution for $u_{+}$ and $u_{-}$ yields
also a solution for $\varphi_2$ (with $\varphi_2 \sim u_{-}$),
and any solution for $\varphi_2$ leads also to a solution for $u_{+}$ and
$u_{-}$ (with $u_{-}\sim\varphi_2$ and $u_{+}\sim (\partial_{x}-\tilde U^\prime)
\varphi_2$). Hence, there are as many
bound states for $\varphi_2$ as for $\eta$, namely $\varphi_{2,B} \sim
(\partial_x + \tilde{U}^\prime) \eta_B$. (The zero mode $u_+ \sim
\varphi_{\mathrm sol}^\prime$ does not lead to a corresponding solution
for $u_-$ and $\varphi_2$
since $(\partial_x + \tilde{U}^\prime) \varphi_{\mathrm sol}^\prime = 0)$.
The discrete spectrum of the fermions in the N=(2,2) model is then
as follows: in the sector with ${\rm Re}\, \psi_+$ and ${\rm Re}\, \psi_-$ there
is one discrete state with zero energy $({\rm Re}\, \psi_+ \sim
\varphi_{\mathrm{sol}}^\prime)$ and bound states with energy $\omega_B$
$({\rm Re}\, \psi_+ \sim \eta_B)$. In the sector with ${\rm Im}\, \psi_+$ and ${\rm Im}\, \psi_-$ the
same normalizable solutions are found. Thus, as expected, the massive
bosonic spectrum of small oscillations around the soliton
background is equal to the corresponding fermionic
spectrum, and consists of massive quartets. There is also one bosonic
zero mode (for translations) and there are two fermionic zero modes
(zero-energy solutions
of the linearized field equations). The latter are proportional to the
nonvanishing susy variations
$\delta \psi\sim\varphi_{\mathrm{sol}}^\prime \epsilon$, which are
due to the susy parameters ${\rm Re}\, \epsilon_+$ and ${\rm Im}\, \epsilon_-$.%
\footnote{One can directly determine these fermionic zero modes
from (\ref{lfe}) by looking for
time-independent normalizable solutions. One finds
then ${\rm Re}\,\psi_+ \sim {\rm Im}\,\psi_- \sim
\exp[-\int_0^x U'(\varphi_{\mathrm{sol}} (x')/\sqrt2)\, dx']$.
These functions are indeed proportional to $\varphi_{\mathrm sol}^\prime$. }
The zero modes do not form a susy multiplet (there are two fermionic
and only one bosonic zero mode) but this poses no problem as quantization
of collective coordinates tells us that the translational zero
mode does not correspond to a physical particle.
The fermionic zero modes are due to translations in superspace,
namely when the susy generators $Q_\pm$ act on the superfield
$\Phi(x,\theta)=\varphi_{\mathrm sol}$.
The topological boundary conditions for the action (\ref{n2action})
are
\begin{eqnarray}
\phi(-L/2)=(-1)^p\phi(L/2); \quad
\phi^\prime(-L/2)=(-1)^p\phi^\prime(L/2); \nonumber\\
\psi(-L/2)=(-1)^q(\gamma_3)^p\psi(L/2).
\end{eqnarray}
For the continuous spectrum with $p=1$ and $q=0$ or $q=1$
boundary conditions, the quantization conditions are
\begin{eqnarray}
\eta:&& kL+\delta(k)=2n\pi+\pi \nonumber\\
\varphi_2:&&kL+\delta(k)+\theta(k)=2n\pi+\pi \nonumber\\
{\rm Re}\,\psi_+,{\rm Im}\,\psi_+,{\rm Re}\,\psi_-,{\rm Im}\,\psi_-:&&
kL+\delta(k)+{1\over2}\theta(k)=2n\pi+q\pi
\end{eqnarray}
\goodbreak
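From each quantization condition one reads off a density of states: writing a condition as $kL+\Delta(k)=2\pi n+{\rm const}$, the number of modes in an interval $dk$ is
$$\rho(k)\,dk=\frac{1}{2\pi}\left(L+\frac{d\Delta}{dk}\right)dk$$
The one-loop mass correction sums the zero-point energies $\frac{1}{2}\sqrt{k^2+m^2}$ weighted by $\rho(k)$, counted positively for bosons and negatively for fermions, relative to the trivial sector; the volume terms $L$ cancel and only the phase-shift derivatives survive.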
The one-loop corrections to the soliton mass, differentiated w.r.t. $m$,
are then given by
\begin{eqnarray}
\partial_m M^{(1)}&=&\partial_m \delta M^{(1)}\nonumber\\
&&\hspace{-1.5cm}+{1\over2m}\int_{-\infty}^\infty
\sqrt{k^2+m^2}\left(\delta'(k)+\{\delta'(k)+\theta'(k)\}
-2\{\delta'(k)+{1\over2}\theta'(k)\}\right){dk\over2\pi}
\end{eqnarray}
where the massive bound states do not contribute because they come
in susy multiplets. The continuum states do not contribute either, because
all phase shifts clearly cancel. The counterterm $\delta M^{(1)}$ vanishes in the
$N=2$ model since the $\eta$ and $\varphi_2$ tadpoles give
$-\frac{3}{2}$ and $-\frac{1}{2}$
times a fermionic tadpole, respectively.
Note that all these cancellations are in fact
independent of any particular regularization scheme since all
integrands cancel.
To decide whether the Bogomolnyi bound is saturated we now turn to the
central charges.
The super Poincar\'e charges are obtained by the Noether method and
read
\begin{eqnarray}
Q_+ &=& \int (U \psi_- +\psi_+^* \dot{\varphi}+\psi_-^* \varphi ') dx
\nonumber\\
Q_- &=& -\int(U \psi_+ - \psi_-^* \dot{\varphi} -\psi_+^* \varphi ') dx
\end{eqnarray}
With topological boundary conditions the combination $Q_+ - Q_-$ is conserved,
but not $Q_+ + Q_-$,
\begin{equation}
\dot Q_+ + \dot Q_- = -2\left[U(\psi_+-\psi_-)+
(\psi_+^* + \psi_-^*)(\dot\varphi+\varphi')\right]_{L/2}
\end{equation}
There are no ordering ambiguities in these operators, and
using equal-time canonical commutation relations one finds the following
algebra for $A_\pm = Q_+ \pm Q_+^*$ and $B_\pm = Q_- \pm Q_-^*$
\begin{eqnarray}
\{ A_\pm , A_\pm\} &=& \pm 2 H - 2\,{\rm Re}\, Z \; ; \;
\{ A_+ , A_- \} = - (Z - Z^*) \nonumber\\
\{ B_\pm , B_\pm \} &=& \pm 2 H + 2\,{\rm Re}\, Z \; ; \;\;\;\;
\{ B_+ , B_- \} = Z- Z^* \nonumber\\
\{ A_\pm , B_\pm \} &=& \pm 2 P \; ; \; \{ A_+, B_- \} = \{A_- , B_+ \} = 0
\end{eqnarray}
The generators which are produced on the right-hand side are
\begin{eqnarray}
H &=& \int^\infty_{-\infty} \left( \dot{\varphi} \dot{\varphi}^* +
\varphi^\prime \varphi^{*\prime} + U U^* + i\psi_+^* \dot{\psi}_+ + i\psi_-^*
\dot{\psi}_- \right) d x \nonumber\\
P &=& \int _{-\infty}^{+\infty} \left( \varphi '\dot{\varphi}^* +\dot{\varphi}
\varphi^{*\prime} +i\psi^*_+\psi_+ '+i\psi_-^*\psi_- ' \right) d x \nonumber\\
Z &=& 2\int^\infty_{-\infty} U \varphi^\prime d x
\end{eqnarray}
An $N=1$ massless multiplet in $D=(3,1)$
(which is always without central
charge) becomes a massive $N=2$
multiplet in $D=(1,1)$ whose (mass)$^2$ is
equal to the square of the central charges, while the N=(1,1) susy
algebra in $D=(3,1)$ becomes an N=(2,2) susy algebra in $D=(1,1)$ with two
central charges (the generators $P_2$ and $P_3$). It is therefore clear why massive
multiplets of $D = (1,1)$ $N=(2,2)$ models with maximal central charge are
shortened.
Since $A_+$ and $i A_-$, and
$B_+$ and $i B_-$ are hermitian, the BPS bound is $H \geq |{\rm Re}\, Z|$. For
the soliton one has classically that $Z$ is real and $H= {\rm Re}\, Z$,
i.e., the bound is
saturated. At the quantum level, the 1-loop corrections to ${\rm Re}\, Z$ are given
by expanding $\varphi=\varphi_{\mathrm sol}/\sqrt2+\chi$ and taking
the vacuum expectation values in the soliton vacuum
\begin{eqnarray}
Z &=& Z_{cl} +
{\rm Re} \int \left\{ U^\prime (\varphi_{\mathrm{sol}}/\sqrt2)
\langle \chi \chi^\prime \rangle +
\frac{1}{2} U^{\prime\prime} (\varphi_{\mathrm{sol}}/\sqrt2) \langle \chi \chi \rangle
\varphi_{\mathrm{sol}}^\prime/\sqrt2 \right\}
dx \nonumber\\
&=& Z_{cl} + {\rm Re} \int \partial_x \left[ \frac{1}{2} U^\prime
(\varphi_{\mathrm{sol}}/\sqrt2 ) \langle \chi\chi \rangle
\right] dx \nonumber\\
&=& Z_{cl} +
{m\over 2}\, {\rm Re}\left( \langle \chi (+ \infty) \chi (+\infty)\rangle - \langle \chi (-\infty)
\chi (- \infty) \rangle \right)
\end{eqnarray}
Since asymptotically
$\langle \chi \chi \rangle = 0$ (only $\langle \chi^* \chi\rangle$ is nonzero),
there is no correction
to the central charges. Because there is also no correction to the mass of
the soliton, the BPS bound remains saturated.
\section{Higher loops}
\label{sec:2loop}
In this section we repeat the two-loop calculation of the mass of the soliton
in the sine-Gordon theory \cite{verw,vega}, paying close attention this
time to possible ambiguities. We begin with a review of the method of
quantization of collective coordinates, focusing on possible ordering
ambiguities. For early work on quantization of collective coordinates
see \cite{ger2,ger3,ger4,jev,chrlee,gj,jac,dhn2,tomb,ger1,poly}.
For an introduction see \cite{raj}.
To compute the higher loop corrections to the mass of a soliton, one may use
standard quantum mechanical perturbation theory. One expands the
renormalized Hamiltonian
into a free part and an interaction part, $H=H^{(0)}+H_{\rm int}$, and the latter is
expanded in terms of the dimensionless
interaction parameter $\sqrt{\hbar c\lambda /m^{2}}$
as $H_{\rm int}=\sum_{n=1}^{\infty}H_{\rm int}^{(n)}$.\footnote{Terms with
$n$ quantum fields $\eta$ contain in addition a dimensionless factor
$(\omega L/c)^{-n/2}$. We set $c=1$ but keep $\hbar$ when useful.}
This expansion is performed both in the soliton sector and in the trivial
sector, and the corrections to the mass of the soliton can then be evaluated
to any given order in $\hbar\lambda /m^2$ by subtracting the energy of the
vacuum in the trivial sector from the energy of the vacuum in the soliton
sector. For the
two-loop corrections (themselves of order $\hbar^{2} \lambda /m$ since the
classical energy $M_{cl}$
of the soliton is proportional to $m^{3}/\lambda$) this means that we must
evaluate
\begin{eqnarray}\label{71}
M^{(2)}&=&\langle {\rm sol}|H_{\rm int,sol}^{(2)}|{\rm sol}\rangle-\langle0|H_{\rm int,triv}^{(2)}|0\rangle\nonumber\\&&+
{\sum_{p}}^\prime
\frac{\langle {\rm sol}|H_{\rm int,sol}^{(1)}|p\rangle\langle p|H_{\rm
int,sol}^{(1)}|{\rm sol}\rangle}{E_{\rm sol}-E_{p}}
\nonumber\\&&
-{\sum_{p}}^\prime\frac{\langle0|H_{\rm int,triv}^{(1)}|p\rangle\langle p|H_{\rm int,triv}^{(1)}|0\rangle}{E_{0}-
E_{p}} \label{M2}
\end{eqnarray}
Here $|{\rm sol}\rangle$ is the ground state in the soliton sector
(the soliton vacuum) with classical energy $E_{\rm sol}=M_{cl}$,
$|0\rangle$ is the ground state in the trivial (nontopological) sector
with vanishing classical energy, $|p\rangle$ are the complete sets of
eigenstates of $H^{(0)}_{\rm sol}$ and $H^{(0)}_{\rm triv}$ with positive
energies $E_{p}$, and the sums extend over all excitations but do not
include the ground state. Hence $E_{\rm sol}-E_{p}$ and $E_{0}-E_{p}$
never vanish. The fields for the quantum fluctuations in the trivial
and the topological sectors are expanded into modes with creation and
annihilation operators, and both $|0\rangle$ and $|{\rm sol}\rangle$
are annihilated by the annihilation operators.
\subsection{The quantum Hamiltonian}
To apply this approach to the soliton in sine-Gordon theory, we begin by
defining the sine-Gordon action\footnote{To compare with \cite{vega,verw}
we use their action. It is related to the action in section~5 by
the shift $\sqrt\lambda\varphi_1/m_0\to\sqrt\lambda\varphi_1/m_0+\pi$. Note that
in \cite{verw} the mass is renormalized by
$m_{0}^{2}=m^{2}-\delta m^{2}$.}
\begin{equation}
{\cal L}=\frac{1}{2} \dot{\phi}^{2}-\frac{1}{2}(\phi^\prime)^{2}-\frac{m_{0}^{4}}{
\lambda}\left(1-\cos\frac{\sqrt{\lambda}}{m_{0}}\phi\right)
\label{sG}
\end{equation}
The action in the trivial sector is obtained by expanding $\phi$ about the
trivial vacuum. Since for sine-Gordon theory the latter is given by $\Phi=0$,
we obtain $\phi= \Phi +\eta=\eta$, and
\begin{equation}
{\cal L}=\frac{1}{2}\dot{\eta}^{2}-\frac{1}{2}(\eta ')^{2}-\frac{1}{2}m_{0}^{2}
\eta^{2} +\frac{1}{4!}\lambda\eta^{4}+\ldots
\end{equation}
In 1+1 dimensional linear sigma models only mass renormalization is needed,
$m_{0}^{2}=m^{2}+\delta m^{2}$, and at the one-loop level
$\delta m^{2}$ is fixed by requiring that
the graph with a seagull loop and two external $\eta $ fields
cancels the contribution from $-(1/2) \delta m^{2}\eta ^{2}$. This yields
\begin{equation}
\delta m^{2}=\frac{\lambda}{2}\sum \frac{\hbar}{2\omega_{\rm vac}L}=
\frac{\hbar \lambda }{4\pi}\int _{0}^{\Lambda}\frac{dk}{\sqrt{
k^{2}+m^{2}}}
\end{equation}
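Evaluated explicitly, this cutoff integral gives the familiar logarithmic divergence,
$$\delta m^{2}=\frac{\hbar\lambda}{4\pi}\,{\rm arcsinh}\frac{\Lambda}{m}
=\frac{\hbar\lambda}{4\pi}\,\ln\frac{2\Lambda}{m}+{\cal O}(m^{2}/\Lambda^{2})$$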
The mass $m$ is thus the physical mass of the meson
at the pole of the propagator to this order.
A complete counterterm which removes all equal-time contractions is
\cite{col}
\begin{equation}\label{75}
\Delta H=(e^{\delta m^{2}/m^{2}}-1)\frac{m^{4}}{\lambda}\int_{-\infty}^{
+\infty}\left(1-\cos\frac{\sqrt{\lambda}}{m}\phi (x)\right)dx
\end{equation}
The corrections to the physical mass at higher loop orders are then
finite, and by expanding the final result for the soliton mass in
terms of the physical mass of the mesons,
any ambiguity due to defining a $\Delta H$
which differs from (\ref{75}) by finite terms will be eliminated. In
particular, the contributions from other renormalization conditions
for $m$, and finite renormalization of $\lambda$ and $\eta$, should
cancel.
The Hamiltonian in the trivial sector is simply
\begin{eqnarray}
H_{\rm triv}^{(0)}&=&\int_{-\infty}^{+\infty}\left[\frac{1}{2}\Pi_{0}^{2}(x)+\frac{1}{2}
(\eta^\prime(x))^{2} +\frac{1}{2} m^{2}\eta(x)^{2}\right]dx\\
H_{\rm int,triv}&=&-\frac{1}{4!}\lambda\eta^{4}+\ldots +\frac{1}{2}\delta m^{2}\eta
^{2}+\ldots
\end{eqnarray}
where $\Pi_{0}(x)$ is the momentum canonically conjugate to $\eta (x)$.
So $H_{\rm int,triv}^{(n)}$ contains only terms with even $n$, and to obtain the
contributions of the trivial sector to the two-loop corrections to the mass of
the soliton we must evaluate $-\frac{1}{4!}\lambda\langle0|\eta^{4}|0\rangle+\frac{1}{2}
\delta m^{2}\langle0|\eta^{2}|0\rangle$. Note that there are no ordering ambiguities in
the Hamiltonian of the trivial sector.
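By Wick's theorem these vacuum expectation values reduce to products of the basic equal-time contraction $\langle0|\eta^{2}(x)|0\rangle=\sum_{n}\hbar/(2\omega_{n}L)$, which is $x$-independent; in particular $\langle0|\eta^{4}|0\rangle=3\langle0|\eta^{2}|0\rangle^{2}$, so the combination to be evaluated is
$$-\frac{\lambda}{4!}\langle0|\eta^{4}|0\rangle+\frac{1}{2}\delta m^{2}\langle0|\eta^{2}|0\rangle
=-\frac{\lambda}{8}\langle0|\eta^{2}|0\rangle^{2}+\frac{1}{2}\delta m^{2}\langle0|\eta^{2}|0\rangle$$
with $\delta m^{2}$ given above.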
To obtain the Hamiltonian in the soliton sector, one must use the formalism
for quantization of collective coordinates \cite{raj}.
Although the final formulas look somewhat
complicated, the basic idea is very simple: one expands $\phi (x,t)$ again
into a sum of a background field (the soliton) and a complete set of small
fluctuations about the background field, but instead of simply writing
$\phi (x,t)=\phi_{\rm sol}(x)+\sum q^{m}(t)\eta_{m}(x)$ where $\eta_{m}(x)$ stands
for all modes (eigenfunctions of the linearized field equations),
one deletes the zero mode for translations from the sum, and
reintroduces it by replacing $x$ by $x- X(t)$ on the right hand side of the
expansion of $\phi$. For small $X(t)$, the expansion of $\phi_{\rm sol}(x-X(t))$
into a Taylor series gives $\phi_{\rm sol}(x)-X(t)\phi'_{\rm sol}(x)+\ldots$,
and since $\phi '_{\rm sol}(x)$
is the translational zero mode (the solution of the linearized field
equations with vanishing energy), one has not lost any degrees of freedom.
Hence one substitutes
\begin{equation}
\phi (x,t)= \phi_{\rm sol}(x-X(t))+{\sum}^\prime q^{m}(t)\eta_{m}(x-X(t))
\end{equation}
into the action in (\ref{sG}), and using the chain rule, one finds an action
of the form of a quantum mechanical nonlinear sigma model (but with
infinitely many degrees of freedom)
\begin{equation}
L=\frac{1}{2}\dot{u}^{I}g_{IJ}(u)\dot{u}^{J}-V(u);\qquad u^{I}=\{ X(t),q^{m}(t)\}
\end{equation}
The metric $g_{IJ}$ is given by
\begin{equation}
g_{IJ}=\int{\partial\phi(x,t)\over\partial u^I}
{\partial\phi(x,t)\over\partial u^J}dx
\end{equation}
and contains space integrals over expressions which depend on
$q^{m}(t), \eta_{m}(x)$ and $\phi_{\rm sol}(x)$, but not on $X(t)$ due to the
translational invariance of the integral over $x$. The Hamiltonian is then
simply given by
\begin{equation}
\label{qmham}
H=\frac{1}{2}\pi_{I}g^{IJ}(u)\pi_{J}+V(u);\qquad \pi_{I}=\{ P(t),\pi_{m}(t)\}
\end{equation}
where $g^{IJ}(u)$ is the matrix inverse of the metric $g_{IJ}(u)$ and
$P(t)$ is the center of mass momentum (the momentum conjugate to
$X(t)$), while $\pi_{m}(t)$ are the momenta canonically conjugate to
$q^{m}(t)$.
Classically, this is the whole
result. One may check that the equal-time Poisson brackets $\{ X,P \}=1,\{
q^{m},\pi_{n} \} =\delta_{n}^{m}$ imply $\{\phi(x),\Pi_{0}(y)\}=\delta (x-y)$
where $\Pi_{0}(x,t)=\dot{\phi}(x,t)$, and vice-versa. Hence, the
transition from
$\phi(x,t)$ and $\Pi_{0}(x,t)$ to $\{ X(t),q^{m}(t)\}$ and $\{ P(t),\pi_{m}(t)
\}$ is a canonical transformation. It is useful to recast the ``quantum
mechanical'' Hamiltonian in (\ref{qmham}) into a form which resembles more
the Hamiltonian of a 1+1 dimensional field theory.
To this purpose we introduce fields constructed from $q^{m}$ and $\pi_{m}$
as follows
\begin{eqnarray}
\eta(x,t)&\equiv& {\sum}^\prime q^{m}(t)\eta_{m}(x-X(t))\\
\pi(x,t)&\equiv& {\sum}^\prime \pi_{m}(t)\eta_{m}(x-X(t))
\end{eqnarray}
By combining the $\pi_{m}$ and $q^{m}$ with the functions $\eta_{m}(x)$ which
appear in $g^{IJ}(u)$, one can write the complete Hamiltonian only in terms
of the fields $\eta(x,t)$ and $\pi(x,t)$ and the background field
$\phi_{\rm sol}(x)$. To simplify the notation, we introduce an inner product
$(f,h)\equiv\int_{-\infty}^{+\infty}f^{*}(x)h(x)dx$. Note that the functions
$\eta_{m}$ which parameterize the small fluctuations are orthogonal to the
zero mode $\phi_{\rm sol}'$ since they correspond to different
eigenvalues of the kinetic operator
\begin{equation}
(\phi_{\rm sol}',\eta_{m})=0
\end{equation}
a result we shall use repeatedly. The classical Hamiltonian density ${\cal
H}=T_{00}$ is given by
\begin{eqnarray}
{\cal H}&=&\frac{1}{2} \Pi_{0}^{2}(x,t)+\frac{1}{2}\phi '(x,t)^{2}+V(\phi)
\nonumber \\&=&\frac{1}{2}\pi^{2}(x-X(t),t)-\pi(x-X(t),t)\frac{ P+(\pi,\eta '
) }{M_{cl}[1+(\eta ',\phi_{\rm sol}')/M_{cl}]}\phi_{\rm sol}'(x-X(t))\nonumber \\
&+&\frac{[P+(\pi,\eta ')]^{2}}{2 M_{cl}^{2}(1+(\eta ',\phi_{\rm sol}')/M_{cl})}
[\phi_{\rm sol}'(x-X(t))]^{2}\nonumber \\
&+&\frac{1}{2}[\eta '(x-X(t),t)+\phi_{\rm sol}'(x-X(t))]^{2}+V(\phi_{\rm sol}+\eta)
\end{eqnarray}
A great simplification occurs in
the Hamiltonian $H=\int_{-\infty}^{+\infty}{\cal H}dx$ because due to the
orthogonality of the zero mode $\phi_{\rm sol}'$ to the fluctuations $\eta_m$, the
complicated second term in ${\cal H}$ cancels.
There should be no terms linear in the fluctuations $\eta$ and $\pi$
and the collective coordinates $X$, $P$ in $H$ (i.e., after
integrating ${\cal H}$ over $x$). That this is indeed the case follows
from the field equation
\begin{equation}
\phi_{\rm sol}''=V'(\phi_{\rm sol})
\end{equation}
The classical energy of the soliton at rest is given by
\begin{equation}
M_{cl}=\int_{-\infty}^{+\infty}(\phi_{\rm sol}')^{2}dx
\end{equation}
which follows from equipartition of energy
\begin{equation}
\frac{1}{2}(\phi_{\rm sol}')^{2}=V(\phi_{\rm sol})
\end{equation}
Thus we arrive at the following expression for the classical Hamiltonian
in the topological sector
\begin{eqnarray}\label{89}
H_{\rm sol}^{(0)}&=&M_{cl}+\int_{-\infty}^{+\infty}\left[\frac{1}{2}\pi(x,t)^{2}+
\frac{1}{2}\eta '(x,t)^{2}+\frac{1}{2}\eta^{2}V''(\phi_{\rm sol})\right]dx\\
\label{90}
H^{\rm cl}_{\rm int,sol}&=&\frac{1}{2M_{cl}}\frac{[P+(\pi,\eta ')]^{2}}{1+(\eta ',
\phi_{\rm sol}')/M_{cl}}+\int \left[\frac{1}{3!}\eta^{3} V'''(\phi_{\rm sol})+
\right.
\nonumber\\&& \left.
+\frac{1}{4!}\eta^{4}V''''(\phi_{\rm sol})+\ldots\right]dx
\end{eqnarray}
All $X(t)$ dependence has disappeared from $H$ due to translational invariance
of the integration over $x$.
We must now discuss the subtle issue of operator ordering in $H$. We shall
consider a soliton at rest, so we set $P=0$. Furthermore, due to $[q^{m},
\pi_{n}]=i\hbar\delta_{n}^{m}$ and $(\eta_{m},\eta_{m}')=0$
(since we work in a finite volume, and $\eta$ and $\eta'$ have the
same boundary conditions)
one has the
equality $(\pi,\eta ')=(\eta ',\pi)$, at least if one considers a finite
number of modes in $\eta$ and $\pi$. However, there are operator ordering
ambiguities both in $(\pi, \eta ')^{2}$ and also with respect to the term
$(\eta ', \phi_{\rm sol} ')/M_{\rm cl}$ in the denominator.
In general, one may require that the generators $H, P=\int T_{01}dx$ and $L=
\int x T_{00} dx$ satisfy the Poincar\'e algebra \cite{tomb}%
\footnote{The Noether current for the orbital part of the angular
momentum $J_{\rho\sigma}$ is given by $j^{\mu}_{\rho \sigma}=(x_{\rho} T^{\mu}_
{\sigma}-x_{\sigma}T_{\rho}^{\mu})$ so $J_{01}=L=\int (x_{0}T^{0}_{1}-x_{1}
T^{0}_{0})dx$. At t=0 this reduces
to $\int xT_{00} dx$.}.
The expressions for these operators are quite complex, and in general it
seems likely that the operator ordering which leads to closure of the Poincar\'e
algebra is unique (in quantum gravity, such an ordering has never been found).
There is, however, an ordering which guarantees closure, and this is the
ordering we shall adopt. It is obtained by making the canonical transformation
at the quantum level. One begins with the quantum Hamiltonian in
``Cartesian coordinates'' (i.e., in terms of the operators $\Pi_{0}(x)$ and
$\phi (x)$). In the Schr\"{o}dinger representation the operator
$\Pi_{0}(x)$ is represented by $\partial/\partial
\phi(x)$, and making the change of coordinates from $\phi(x)$ to $X$
and $q^{m}$, one obtains the Laplacian in curved space by applying the chain
rule
\begin{equation}
\sum\left(\frac{\partial}{\partial \alpha^{i}}\right)^{2}=\frac{1}{\sqrt{g}}\frac
{\partial}{\partial u^{I}}\sqrt{g(u)}g^{IJ}(u)\frac{\partial}{\partial u^{J}}
\end{equation}
where $\alpha^{i}$ is the set $\phi(x)$ and $u^{I}$ the set $X$, $q^{m}$. If the
inner product in $\alpha$ space is given by $(f,h)=\int f^{*}(\alpha)h(\alpha)
(\prod d\alpha^{i})$, it becomes in $u$ space
$(f,h)=\int f^{*}(\alpha(u))h(\alpha(
u))\sqrt{g(u)}(\prod du^{I})$. With this inner product, the relation between
$\partial /\partial u^{J}$ and the conjugate momenta $\pi_{J}$ in the
Schr\"{o}dinger representation is not
simply $\pi_{J}=\partial /\partial u^{J}$ , but rather
\begin{equation}
\label{trans}
\frac{\partial}{\partial u^{J}}=g^{1/4}(u)\pi_{J} g^{-1/4}(u)
\end{equation}
as one may check.\footnote{In \cite{raj}
this derivation of the operator ordering of the Hamiltonian
is given, but at the end $\partial /\partial u^{J}$ is replaced by $\pi_{J}$
which is incorrect. In \cite{ger1} the correct quantum Hamiltonian is obtained,
but
the factors $g^{-1/4}(u)$ are produced by ``Redefining the Hilbert space so
as to eliminate the measure from this scalar product...''. We claim that
the relation (\ref{trans}) is not a convention or a choice of basis,
but is fixed because the inner product has been specified.}
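Indeed, with $\pi_{J}=-i\hbar\,g^{-1/4}\partial_{J}\,g^{1/4}$ one verifies hermiticity with respect to the specified inner product by partial integration:
$$(f,\pi_{J}h)=-i\hbar\int f^{*}\,g^{1/4}\,\partial_{J}\!\left(g^{1/4}h\right)\prod du
=\int\left(-i\hbar\,g^{-1/4}\partial_{J}(g^{1/4}f)\right)^{*}h\,\sqrt{g}\prod du=(\pi_{J}f,h)$$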
Hence the correct quantum Hamiltonian is given by
\begin{equation}
\hat{H}=\frac{1}{2}\frac{1}{g(u)^{1/4}}\pi_{I}\sqrt{g(u)}g^{IJ}(u)\pi_{J}\frac
{1}{g(u)^{1/4}}+V(u)+\Delta H
\end{equation}
with $\Delta H$ given by (\ref{75}).
It is often useful to {\em rewrite} this Hamiltonian such that all
expressions are
Weyl ordered, because then one can use Berezin's theorem and find at once the
action to be used in the path integral. The result is
\begin{eqnarray}
\hat{H}&=&\frac{1}{2}(\pi_{I}g^{IJ}\pi_{J})_{W}+V(u)+\Delta V +\Delta H
\\ \Delta V&=& \frac{\hbar^{2}}{8}\left[\partial_{I}\partial_{J}g^{IJ}(u)-4g^{-1/4}
(u)\partial_{I}\{ g^{1/2}(u)g^{IJ}(u)\partial_{J}g^{-1/4}(u)\}\right]
\end{eqnarray}
The operator $(1/2)\left(\pi_Ig^{IJ}\pi_J\right)_W$ is obtained by
promoting (\ref{89}) and (\ref{90}) to operators and Weyl ordering.
Weyl ordering yields then $(1/2)((1/4)\pi_I\pi_Jg^{IJ}+
(1/2)\pi_Ig^{IJ}\pi_J+(1/4)g^{IJ}\pi_I\pi_J)$.
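Using $[\pi_{I},[\pi_{J},g^{IJ}]]=-\hbar^{2}\partial_{I}\partial_{J}g^{IJ}$ (summed over $I,J$), this Weyl-ordered form differs from the symmetric ordering by a purely quantum term,
$$\frac{1}{2}\left(\pi_{I}g^{IJ}\pi_{J}\right)_{W}=\frac{1}{2}\pi_{I}g^{IJ}\pi_{J}-\frac{\hbar^{2}}{8}\partial_{I}\partial_{J}g^{IJ}$$
which is precisely compensated by the first term of $\Delta V$.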
Substituting the expression for $g^{IJ}(u)$ \cite{raj} one finds
\begin{eqnarray}
\partial_{I}\partial_{J}g^{IJ}&=&\partial_{q^{m}}\partial_{q^{n}}\left\{ \frac{
(\eta^{m},\eta ')(\eta^{n}, \eta ')}{(\psi_{0}, \phi_{\rm sol}'+\eta ')^{2}}\right\}
\nonumber\\
4g^{-1/4}\partial_{I}\left\{ g^{1/2}g^{IJ}\partial_{J}g^{-1/4}\right\}
&=&\frac{1}{(\psi_{0},\phi ')^{1/2}}\frac{\partial}{\partial q^{m}}\left[\frac{1}{
(\psi_{0},\phi ')^{3/2}}\frac{\partial}{\partial q^{m}}(\psi_{0},\phi ')^{2}+
\right.\nonumber\\&&\left.
+\frac{(\eta ',\eta_{m})(\eta ', \eta_{n})}{(\psi_{0},\phi ')^{7/2}}\frac{
\partial}{\partial q^{n}}(\psi_{0},\phi ')^{2}\right]
\end{eqnarray}
where $\phi=\phi_{\rm sol}+\eta$, and $\psi_{0}=\phi_{\rm sol} '/\sqrt{M_{\rm cl}}$ is the
normalized zero mode. This leads to
\begin{eqnarray}
\Delta V&=&\frac{\hbar^{2}}{8}\left[-\frac{(\psi_{0},\eta_{m}')(\eta_{m}',\psi_{0})
}{(\psi_{0},\phi ')^{2}} \right.\nonumber\\&&
-2\frac{ (\psi_{0},\eta_{m}')(\eta_{m},\eta_{n}')(
\eta_{n}, \eta ')+(\psi_{0},\eta_{m}')(\eta_{m},\eta ')(\eta_{n},\eta_{n}') }
{(\psi_{0},\phi ')^{3}}\nonumber\\
&&\left.+\frac{\{(\psi_{0},\eta_{m} ')(\eta_{m},\eta ')\}^{2}}{(\psi_{0},\phi ')^{4}}
+\frac{(\eta_{m},\eta_{n}')(\eta_{n},\eta_{m} ')+(\eta_{m},\eta_{m} ')^{2}}
{(\psi_{0},\phi ')^{2}} \right]
\end{eqnarray}
Further simplifications result by using the identities
\begin{eqnarray}
&&(\psi_{0},\psi_{0}')=0,\;(\psi_{0},\eta ')=(\psi_{0},\phi '),\; (\psi_{0},
\eta_{m}')=-(\psi_{0}',\eta_{m})\nonumber\\&&(\eta_n,\eta')=(\eta_n,\phi'),\;
(\eta_{m},\eta_{m}')=0,\;
{\sum}^\prime \eta_{m}(x)\eta_{m}(y)=\delta(x-y)-\psi_{0}(x)\psi_{0}(y)
\end{eqnarray}
The final answer for $\Delta V$ reads then
\begin{eqnarray}
\Delta V&=&\frac{\hbar^{2}}{8}\left[ -\frac{(\psi_{0}',\psi_{0}')}{(\psi_{0},\phi
')^{2}} + 2\left\{\frac{(\psi_{0}',\phi '')}{(\psi_{0},\phi ')^{3}} -
\frac{(\psi_{0}',\psi_{0}')}{(\psi_{0},\phi ')^{2}}\right\}\right.\nonumber\\&&\left.
+ \frac{(\psi_{0}',\phi ')^{2}}
{(\psi_{0},\phi ')^{4}} -\sum_{m,n}\frac{|(\eta_{m},\eta_{n}')|^{2}}
{(\psi_{0},\phi ')^{2}}\right]
The total Hamiltonian is then the sum of $H^{(0)}_{\rm sol}$ in
(\ref{89}) in which no ordering
problems are present, and $\left(H^{\rm cl}_{\rm
int,sol}\right)_W +\Delta V+\Delta H$ with $\left(H^{\rm cl}_{\rm
int,sol}\right)_W$ given by (\ref{90}) with the complicated momentum
dependent term Weyl-ordered. This is the result in \cite{ger1}.
A drastic
simplification is obtained by rewriting the latter term in a particular
non-Weyl-ordered way in such a way that it absorbs all terms in $\Delta V$
except the first
one \cite{tomb}. This leads to the final form of the interaction
Hamiltonian\footnote{
The first term in (\ref{102}) can be written for $P=0$ as $[(\pi,\eta '/F)+(\eta'
/F,\pi)]^{2}$, where $F=1+(\eta ',\phi_{\rm sol}')/M_{\rm cl}$.
Weyl-ordering of $(\pi , \eta '/F)^{2}$ yields
\[
\frac{1}{4}
\int_{-\infty}^{+\infty}\left[\pi (x)(\pi,\eta '/F) \frac{\eta '}{F} (x)+\frac
{\eta '}{F} (x)(\eta '/F,\pi)\pi (x)\right] dx +2(\pi,\eta '/F)(\eta '/F,\pi)
\]
Evaluating the difference of the two expressions one needs the following
commutators
\begin{eqnarray}
[\pi (x),\eta '(y)] &=&-i\partial_{y}\delta (x-y)+\frac{i}{M_{\rm cl}}\phi ' _
{\rm sol}(x)\phi''_{\rm sol}(y)\nonumber \\ \;
[\pi (x),1/F] &=&-i\phi ''_{\rm sol}(x)/(M_{\rm cl}F^{2})
\nonumber
\end{eqnarray}
Straightforward algebra produces then all terms in $\Delta V$ except the
first one.
}
\begin{eqnarray}\label{102}
H_{\rm int,sol}&=&\frac{1}{8 M_{\rm cl}}\left\{ (P+(\eta ',\pi)),\frac{1}{1+(\eta ',\phi
'_{\rm sol})/M_{\rm cl}}\right\}^{2}\nonumber \\&&
-\frac{\hbar^{2}}{8 M_{\rm cl}^{2}}\int_{-\infty}^{+\infty}\frac{(\phi ''_{\rm sol})
^{2} dx}{[1+(\eta ',\phi_{\rm sol}')/M_{\rm cl}]^{2}}+\Delta H
\nonumber\\&&
+\int_{-\infty}^{+\infty}\left[\frac{1}{3!}\eta^{3}
V'''(\phi_{\rm sol})+\frac{1}{4!}\eta^{4}V''''(\phi_{\rm sol})+\ldots\right]dx
\end{eqnarray}
Note that the first term is the square of a Weyl-ordered operator, but
is not itself Weyl-ordered.
For the two-loop calculation we are going to perform we set $P=0$, and we
only need the terms up to quartic order in $\eta $ and $\pi$. This leads to
\begin{eqnarray}
H_{\rm int,sol}&=&\frac{1}{2M_{\rm cl}}\left(\int_{-\infty}^{+\infty}\eta '\pi dx\right)
\left(\int_{-\infty}^{+\infty}\eta '\pi dy\right)
-\frac{\hbar^{2}}{8 M_{\rm cl}^{2}}\int_{-
\infty}^{+\infty}(\phi ''_{\rm sol}(x))^{2} dx \nonumber \\&&
+\Delta H
-\frac{\sqrt{\lambda}m}{3!}\int_{-\infty}^{+\infty}\eta^{3}(x)\sin
\frac{\sqrt{\lambda}}{m}\phi_{\rm sol}(x)dx \nonumber\\&&-
\frac{\lambda}{4!}
\int_{-\infty}^{+\infty}\eta^{4}(x)\cos\frac{\sqrt{\lambda}}{m}\phi_{\rm sol}(x)dx
+\ldots
\label{hint}
\end{eqnarray}
The counterterms are the same in the topological sector as in the trivial sector,
but in the trivial sector we decomposed $\phi=\Phi +\eta=\eta$, while here
we expand $\phi=\phi_{\rm sol}+\eta $. We obtain then
\begin{eqnarray}
\Delta H&=&\frac{m^{4}}{\lambda}\left\{e^{\frac{\delta m^{2}}{m^{2}}}-1\right\}
\int_{-\infty}^{+\infty}\left\{1-\cos \frac{\sqrt{\lambda}}{m}\phi (x)\right\}dx
\;\nonumber\\ \;
&=&\frac{m^{4}}{\lambda}\left[\frac{\delta m^{2}}{m^{2}}+\frac{1}{2}\left(\frac{\delta
m^{2}}{m^{2}}\right)^{2}\right]\int_{-\infty}^{+\infty}\left\{ 1-\cos\frac{\sqrt{\lambda}}{m}
\phi_{\rm sol}(x)\right\} dx \nonumber\\
\;&+&\frac{\delta m^{2}m}{\sqrt{\lambda}}\int_{-\infty}^{+\infty}\eta (x)
\sin\frac{\sqrt{\lambda}}{m}\phi_{\rm sol}(x)dx\nonumber\\&&
+\frac{1}{2}\delta m^{2}\int_{-\infty}^{+\infty}\eta ^{2}(x)\cos\frac
{\sqrt{\lambda}}{m}\phi_{\rm sol}(x)dx+...
\label{hct}
\end{eqnarray}
The first term is the counterterm for the one-loop graphs and will not
contribute to our two-loop calculation.
\subsection{The actual two-loop calculation}
Using the explicit expression for the classical soliton solution
\begin{equation}
\phi_{\rm sol}=\frac{4m}{\sqrt{\lambda}} \arctan (e^{mx})
\end{equation}
one finds $M_{\rm cl}=\frac{8m^{3}}{\lambda}$ and
\begin{eqnarray}
\sin \left(\frac{\sqrt{\lambda}}{m}\phi_{\rm sol}\right)&=&-4\frac{e^{mx}-e^{-mx}}{(e^{mx}
+e^{-mx})^{2}};\;\cos \frac{\sqrt{\lambda}}{m}\phi_{\rm sol}=1-\frac{8}{(e^{mx}+
e^{-mx})^{2}}\nonumber\\&&
-\frac{1}{8M_{\rm cl}^{2}}\int_{-\infty}^{+\infty}(\phi_{\rm sol}''(x))^{2}dx=-\frac
{\lambda}{192 m}
\end{eqnarray}
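These closed-form results are easy to confirm numerically. The following sketch (our addition, not part of the original derivation; the values of $m$ and $\lambda$ are arbitrary samples) integrates the classical energy density to reproduce $M_{\rm cl}=8m^{3}/\lambda$ and evaluates the $(\phi_{\rm sol}'')^{2}$ term:

```python
import math

lam, m = 0.5, 1.3   # sample (arbitrary) coupling and meson mass

def simpson(f, a, b, n=4000):
    # composite Simpson quadrature
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

phi   = lambda x: 4 * m / math.sqrt(lam) * math.atan(math.exp(m * x))
dphi  = lambda x: 2 * m**2 / math.sqrt(lam) / math.cosh(m * x)
d2phi = lambda x: -2 * m**3 / math.sqrt(lam) * math.tanh(m * x) / math.cosh(m * x)

# classical energy density: gradient term plus sine-Gordon potential
dens = lambda x: 0.5 * dphi(x)**2 + m**4 / lam * (1 - math.cos(math.sqrt(lam) / m * phi(x)))

a = 30 / m          # integrand is exponentially localized, so a finite range suffices
M_cl = simpson(dens, -a, a)
term = -simpson(lambda x: d2phi(x)**2, -a, a) / (8 * M_cl**2)

print(M_cl, 8 * m**3 / lam)        # agree
print(term, -lam / (192 * m))      # agree
```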
Substituting these results into (\ref{hint},\ref{hct}),
we find for the Hamiltonian
\begin{eqnarray}
H_{\rm int,sol}^{(1),I}&=&-\frac{\sqrt{\lambda}m}{6}\int_{-\infty}^{+\infty}\frac{
(-4)(e^{mx}-e^{-mx})}{(e^{mx}+e^{-mx})^{2}}\eta^{3}(x) dx \label{H1I}\\
H_{\rm int,sol}^{(1),II}&=&\frac{\delta m^{2} m}{\sqrt{\lambda}}\int_{-\infty}^{
+\infty}\frac{(-4)(e^{mx}-e^{-mx})}{(e^{mx}+e^{-mx})^{2}}\eta (x)dx\\
H_{\rm int,sol}^{(2),I} &=&-\frac{\lambda}{24}\int_{-\infty}^{+\infty}[1-
\frac{8}{(e^{mx}+e^{-mx})^{2}}]\eta^{4}(x)dx\\
H_{\rm int,sol}^{(2),II}&=&\frac{1}{2}\delta m^{2} \int_{-\infty}^{+\infty}[1-
\frac{8}{(e^{mx}+e^{-mx})^{2}}]\eta^{2}(x) dx\\
H_{\rm int,sol}^{(2),III}&=&\frac{\lambda}{16 m^{3}}(\int_{-\infty}^{+\infty}
\eta '(x)\pi (x)dx)^{2}\\
H_{\rm int,sol}^{(2),{\rm rest}}&=&-\frac{\lambda \hbar^{2}}{192 m}+\frac{(\delta m^{2})
^{2}}{2\lambda}\frac{4}{m}
\end{eqnarray}
We now put the system in a box of length $L$.
We expand $\eta (x)$ into creation and annihilation operators
\begin{eqnarray}
\eta(x)&=& \sum_{n=-\infty}^{+\infty}\sqrt{\frac{\hbar}{2\omega_{n}}}
(a(q_{n})E(q_{n},x)e^{i k_{n} x}+h.c.)\\
E(q_{n},x)&=&\frac{i\tanh mx +q_{n}}{[(1+q_{n}^{2})L-\frac{2}{m}\tanh \frac{
mL}{2}]^{1/2}}\\
q_{n}&=&\frac{k_{n}}{m};\omega_{n}^{2}=k_{n}^{2}+m^{2}
\end{eqnarray}
The functions $E(q_{n},x)e^{ik_{n}x-i\omega_{n}t}\equiv \eta_{n}(x)e^{-i
\omega_{n}t}$ are eigenfunctions of $H^{(0)}_{\rm sol}$ and
satisfy the linearized field equations
\begin{equation}
[\partial_{x}^{2}+\omega_{n}^{2}-m^{2}(1-2\cosh ^{-2}(x))]\eta_{n}=0
\end{equation}
and for large $q_{n}$ they tend to $\frac{1}{\sqrt{L}}$ which leads to the
familiar normalization factor $(2\omega_{n}L)^{-1/2}$ for free fields.
At $x=\pm L/2$ we find the phase shift
\begin{equation}
\delta (k)=2 \arctan\{ \frac{m}{k}\tanh
\frac{1}{2}mL\}\label{deltaka}
\end{equation}
The momenta $k_{n}$ are discretized by adopting antiperiodic
boundary conditions\footnote{Of course, we differ here from refs.
\cite{vega,verw} who choose periodic boundary conditions. The
necessity for the antiperiodic boundary conditions has been
discussed at length in the preceding sections.}
\begin{equation}
k_{n}L+\delta(k_{n})=2\pi n+\pi
\label{pb}
\end{equation}
As eigenfunctions of a self-adjoint operator on a compact space,
the functions $E(q_{n},x)$ have the same orthogonality properties as
plane waves and they are normalized to unity,
for example $\int_{-L/2}^{+L/2}|E(q_{n},x)|^{2}dx=1$.
As in the case of the kink, $\delta(k)$ in (\ref{deltaka}) is
discontinuous. The values of $n=\ldots,-2,-1,0,1,2\ldots$
give solutions of (\ref{pb}) $kL=
\ldots,-2\pi,0,0,2\pi,4\pi,\ldots$. Again, we see a defect in
the $n$ to $k$ mapping: the $k=0$ solution is obtained
for $n=-1$ and $n=0$.
If we map $k=0$ onto $n=-1$, the remaining number $n=0$ can be assigned
to the only discrete solution (the translational zero mode). Then, using
the Euler-Maclaurin formula, the
sum over $n$ can be converted into an integral over $k$ with a continuous
measure. We now
evaluate the various terms in (\ref{M2}), adding subsets of terms
which combine to cancel divergences.
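The defect in the $n$ to $k$ mapping, and the fact that for $n\geq 1$ the solutions of the quantization condition (\ref{pb}) approach $k_{n}L=2\pi n$, can be illustrated by solving (\ref{pb}) numerically. The script below is our addition; $m$ and $L$ are arbitrary sample values with $mL\gg 1$, and the roots are found by bisection:

```python
import math

m, L = 1.0, 1000.0   # sample values (arbitrary units), with mL >> 1

def delta(k):
    # phase shift, eq. (deltaka)
    return 2.0 * math.atan((m / k) * math.tanh(0.5 * m * L))

def solve_k(n):
    # bisection for the positive root of k*L + delta(k) = 2*pi*n + pi
    target = (2 * n + 1) * math.pi
    f = lambda k: k * L + delta(k) - target
    lo, hi = 1e-9 / L, (2 * n + 2) * math.pi / L   # f(lo) < 0 < f(hi) for n >= 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for n in (1, 2, 3):
    print(n, solve_k(n) * L / (2 * math.pi))   # approaches n as L grows
```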
\subsubsection*{The $\eta^{4}$, $\delta m^{2}\eta^{2}$, and $(\delta m^2)^2$ terms in $H_{\rm int,sol}^
{(2)}$ and $H_{\rm int,vac}^{(2)}$}
Since there are 3 ways to contract the 4 $\eta$'s, one finds
from the term with $\eta^4$
\begin{eqnarray}
&&\langle {\rm sol}|H_{\rm int,sol}^{(2),I}|{\rm sol}\rangle=-\frac{\lambda}{8}\int_{-\infty}^{+\infty}\left[1-
\frac{8}{(e^{mx}+e^{-mx})^{2}}\right]
\sum_{n,p=-\infty}^{+\infty\prime}\nonumber
\\&& \frac{(\tanh^{2}mx-1)+(q_{n}^{2}+1)}{(1+q_{n}^{2})L-\frac{2}{m}\tanh\frac
{mL}{2}}\frac{(\tanh^{2}mx-1)+(q_{p}^{2}+1)}{(1+q_{p}^{2})L-\frac{2}{m}\tanh
\frac{mL}{2}}\frac{dx}{2\omega_{n}2\omega_{p}}
\end{eqnarray}
Using the integrals in the appendix, we record the contributions due to
terms with none, one, and two factors of $\tanh^{2}mx-1$ on 3 separate lines
\begin{eqnarray}
&=&-\frac{\lambda}{32}[ \frac{1}{L}(\sum\frac{1}{\omega})^{2}-\frac{4}{mL^{2}}
(\sum\frac{1}{\omega})^{2}+\{ \frac{4}{L^{2}}(\sum\frac{1}{\omega})
(\sum\frac{1}{\omega^{2}})+...\}\nonumber\\&&
+\frac{4}{3}\frac{m}{L^{2}}(\sum\frac{1}{\omega})(\sum\frac{1}{\omega^{3}})+
\{\frac{8}{3}\frac{1}{L^{3}}(\sum\frac{1}{\omega^{3}})^{2}+\frac{8}{3}\frac{1}
{L^{3}}(\sum\frac{1}{\omega})(\sum\frac{1}{\omega^{5}})+...\}\nonumber\\
&&-\frac{4m^{3}}{5L^{2}}(\sum\frac{1}{\omega^{3}})^{2}+\{-\frac{16 m^{2}}{5
L^{3}}(\sum\frac{1}{\omega^{3}})(\sum\frac{1}{\omega^{5}})+...\} ]
\label{res}
\end{eqnarray}
The first two terms come from the factor $1-8/(e^{mx}+e^{-mx})^{2}$ whose
$x$-integral is equal to $L-\frac{4}{m}$. Clearly, there is a divergence
proportional to $L$ which will be cancelled by the corresponding contribution
from the vacuum sector. The integrals of $1-8/(e^{mx}+e^{-mx})^{2}$ times
$(\tanh^{2} mx-1)^{p}$ for $p=1$ and $p=2$ are given by $\frac{2}{3m}$ and
$-\frac{4}{5m}$, respectively, and do not contain divergences which are due
to the $x$ integral, but they still contain divergences due to the
sums over $n$ and $p$. The terms inside curly brackets are due to expanding the denominators
$(1+q_{n}^{2})L-\frac{2}{m}\tanh \frac{mL}{2}
\sim(1+q_n^2)L-\frac2m$. (We have already set
$\tanh \frac{mL}{2}=1$ since the difference vanishes exponentially fast for $
L\rightarrow\infty$). Not all these terms vanish for $L\rightarrow \infty$,
but they will cancel with similar terms from other contributions. The sums
\begin{equation}
\sum \frac{1}{\omega}\equiv\sum_{-\infty}^{+\infty}\,\! '\frac{1}{\omega_{n}}
\sim \frac{L}{2\pi}2\int_{0}^{\infty}\frac{1}{\omega (k)}\left[1+\frac{1}{L}\delta '
(k)\right]dk
\end{equation}
will get contributions from $\delta '(k)$, but by first combining such sums,
we shall find that only differences like $\sum\frac{1}{\omega}-\sum\frac{1}{
\tilde{\omega}}$ occur, and this will simplify the analysis significantly. We
wrote an approximation symbol $\sim$ in $(\sum\frac{1}{\omega})$ because the
evaluation of such sums has been found to be regularization dependent
at the one-loop level. This is the issue we now want to study at the two-loop level.
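The two $x$-integrals quoted above are again easy to confirm numerically (our addition; the value of $m$ is an arbitrary sample):

```python
import math

m = 1.0   # sample mass (arbitrary units)

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

pot  = lambda x: 1 - 2 / math.cosh(m * x)**2   # 1 - 8/(e^{mx}+e^{-mx})^2
tfac = lambda x: math.tanh(m * x)**2 - 1       # equals -sech^2(mx)

I1 = simpson(lambda x: pot(x) * tfac(x),    -30 / m, 30 / m)
I2 = simpson(lambda x: pot(x) * tfac(x)**2, -30 / m, 30 / m)
print(I1, 2 / (3 * m))    # agree
print(I2, -4 / (5 * m))   # agree
```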
The evaluation of $\langle {\rm sol}|H_{\rm int,sol}^{(2),II}|{\rm sol}\rangle$ is straightforward and
yields
\begin{eqnarray}
&&\frac{1}{2}\delta m^{2}\int_{-\infty}^{+\infty}[1-\frac{8}{(e^{mx}+e^{-mx})
^{2}}]\sum_{n}\frac{(\tanh^{2}mx-1)+(q_{n}^{2}+1)}{(q_{n}^{2}+1)L-\frac{2}{m}}
\frac{dx}{2\omega_{n}}\nonumber\\
&=&\frac{1}{2}\delta m^{2}(L-\frac{4}{m})(\frac{1}{2L}\sum\frac{1}{\omega}
+\{ \frac{1}{mL^{2}}\sum\frac{1}{\omega^{3}}+...\} )\nonumber\\
&&+\frac{1}{2}\delta m^{2}(\frac{2}{3m})\frac{m^{2}}2\frac1L\sum{1\over\omega^3}
\label{com}
\end{eqnarray}
where the last line contains the contributions from the terms with $\tanh^{2}x
-1$.
The contributions from the trivial sector are only $\frac{1}{2} \delta m^{2}
\langle0|\eta^{2}|0\rangle-\frac{\lambda}{4!}\; \langle0|\eta^{4}|0\rangle$, where $\eta (x)$ is now
expanded in simple plane waves; they yield
\begin{equation}
\langle0|H_{\rm triv}+\Delta H_{\rm triv}|0\rangle=-\frac{1}{2}\delta m^{2}\sum\frac{1}{2\tilde{
\omega}}-\frac{\lambda}{8}(\sum\frac{1}{2\tilde{\omega}})^{2}\frac{1}{L}
\end{equation}
Finally there is the contribution from the term proportional to $(\delta m^{2}
)^{2}$ in $\Delta H_{\rm sol}$; it yields
\begin{equation}
\langle {\rm sol}|\frac{1}{2}\frac{(\delta m^{2})^{2}}{\lambda}\int_{-\infty}^{+\infty}
[1-\cos \frac{\sqrt{\lambda}}{m}\phi_{\rm sol}(x)]dx|{\rm sol}\rangle=\frac{\lambda}{8m L^{2}}
(\sum\frac{1}{\tilde{\omega}})^{2}
\end{equation}
where we wrote $\delta m^{2}$ as $\frac{\lambda}{4L}\sum\frac{1}{\tilde{\omega}
}$.
We first demonstrate that the terms due to expanding the denominators
$(1+q_{n}^{2})L-\frac{2}{m}$ cancel. Consider as an example the last term
in the first line of (\ref{res}). It cancels the last term in the first line
of (\ref{com}). To see this, write $\delta m^{2}$ in its original form as
a sum over modes
\begin{equation}
\delta m^{2}=\frac{\hbar\lambda}{4L}(\sum\frac{1}{\tilde{\omega}})
\end{equation}
One finds for the terms with $L^{-2}$
\begin{eqnarray}
&&(-\frac{\lambda}{32})\frac{4}{L^{2}}(\sum\frac{1}{\omega})(\sum\frac{1}
{\omega^{3}})-\frac{1}{2}(-\frac{\lambda}{4L}\sum\frac{1}{\tilde{\omega}})\frac
{1}{mL}\sum\frac{1}{\omega^{3}}\nonumber\\
&&=-\frac{\lambda}{8L^{2}}(\sum\frac{1}{\omega}-\sum\frac{1}{\tilde{\omega}})(
\sum\frac{1}{\omega ^{3}})
\end{eqnarray}
Note now that $\frac{1}{L}\sum\frac{1}{\omega^{3}}$ is finite, while
\begin{eqnarray}
\frac{1}{L}(\sum\frac{1}{\omega}-\sum\frac{1}{\tilde{\omega}})&=&\frac{1}
{L}[(\sum_{-N}^{-1}+\sum_{1}^{N})\frac{1}{\omega_{n}}-\sum_{-N}^{N}\frac{1}
{\tilde{\omega}_{n}}]\nonumber\\&=&\frac{1}{2\pi}\int_{-\infty}^{+\infty}\frac{
1}{\sqrt{k^{2}+m^{2}}}(\frac{\delta (k)}{L})dk\rightarrow 0
\label{dif}
\end{eqnarray}
is unambiguous and vanishes (the absence of the $n=0$ contribution
allows the application of the Euler-Maclaurin formula as discussed after
(\ref{pb})). Hence, the contributions due to
expanding the denominator $(q_{n}^{2}+1)L-\frac{2}{m}$ cancel. Similar
cancelations occur in other pairs of corresponding terms, and we shall
therefore only use the terms $(q_{n}^{2}+1)L$ in the denominators of
$E(x,q_{n})$ from now on.
The sum of all contributions from the terms with $\eta^{4}$, $\delta
m^{2}\eta^{2}$, and $(\delta m^2)^2$
in the topological and trivial sectors is then found to combine
into differences, except for one term, namely the contribution from the first
term in the third line in (\ref{res})
\begin{eqnarray}
\langle H^{(2)}\rangle \Big|_{\eta^{4}\; {\rm and} \;\delta m^{2}\eta^{2}}
=&& \nonumber\\
\epsfysize 20pt\epsfbox{oo.eps}
&\raisebox{8pt}{$+$}&
\epsfysize 20pt\epsfbox{xo.eps}
\hspace{4em} \raisebox{8pt}{$-$}\quad
\raisebox{8pt}{\Big\{}\quad
\epsfysize 20pt\epsfbox{ood.eps}
\quad\raisebox{8pt}{$+$}\quad
\epsfysize 20pt\epsfbox{xod.eps}
\quad\raisebox{8pt}{\Big\}}
\quad\raisebox{8pt}{$+\quad
(\epsfysize 6pt\epsfbox{x.eps})^2=$}
\nonumber\\
-\frac{\lambda}{32 L} (\sum\frac{1}{\omega})^{2}&+&\frac{\lambda}{16 L}
(\sum\frac{1}{\omega})(\sum\frac{1}{\tilde{\omega}})+(\frac{\lambda}{32}-\frac
{\lambda}{16})\frac{1}{L}(\sum\frac{1}{\tilde{\omega}})^{2}\nonumber\\
+\frac{\lambda}{8mL^{2}} (\sum\frac{1}{\omega})^{2}&-&\frac{\lambda}{4mL^{2}}
(\sum\frac{1}{\omega})(\sum\frac{1}{\tilde{\omega}})+\frac{\lambda}{8mL^{2}}(
\sum\frac{1}{\tilde{\omega}})^{2}\nonumber\\
-\frac{\lambda m}{24 L^{2}}(\sum\frac{1}{\omega})(\sum\frac{1}{\omega^{3}})&+
&\frac{\lambda m}{24 L^{2}} (\sum\frac{1}{\omega^{3}})(\sum\frac{1}{\tilde{
\omega}})\nonumber\\
+\frac{\lambda m^{3}}{40 L^{2}}(\sum\frac{1}{\omega^{3}})^{2}&&\;\nonumber
\end{eqnarray}
\begin{eqnarray}
&&=-\frac{\lambda}{32 L}(\sum\frac{1}{\omega}-\sum\frac{1}{\tilde{\omega}})^{2}
+\frac{\lambda}{8m L^{2}}(\sum\frac{1}{\omega}-\sum\frac{1}{\tilde{\omega}})^
{2}\nonumber\\
&&-\frac{\lambda m}{24 L^{2}}(\sum\frac{1}{\omega^{3}})(\sum\frac{1}{\omega}
-\sum\frac{1}{\tilde{\omega}})+\frac{\lambda m^{3}}{40 L^{2}} (\sum \frac{1}{
\omega^{3}})^{2}
\end{eqnarray}
Drawn lines represent propagators in the soliton sector and dotted
lines
propagators in the trivial vacuum.
In the intermediate expression the first column gives the contributions from
the $\eta^{4}$ term in $H_{\rm int,sol}^{(2)}$ while the second column gives the
contribution from $\delta m^{2}\eta^{2}$ in $\Delta H_{\rm sol}$. The third column
contains the contributions from the vacuum sector (in the first row) and from
the term $(\delta m^{2})^{2}$ in the topological sector (in the second row).
We recall that $\omega$ and $\tilde\omega$ denote the frequencies in
the topological and trivial sectors respectively.
We claim that all differences cancel. Since we already proved this for
(\ref{dif}),
we only need to discuss the sum $\frac{1}{L}(\sum\frac{1}{\omega}-\sum\frac{
1}{\tilde{\omega}})^{2}$. {\em Each factor} $\sum\frac{1}{\omega}-\sum
\frac{1}{\tilde{\omega}}$ {\em is ambiguous but finite}, see (\ref{dif}).
Hence the extra $\frac{1}{L}$ factor ensures that also this term vanishes.
We conclude that all $\eta^{4}, \eta^{2}$ and $\eta^{0}$ terms contribute
only one term
\begin{equation}\label{129}
\frac{\lambda m^{3}}{40L^{2}}(\sum\frac{1}{\omega^{3}})^{2}=\frac{\lambda}{40
\pi^{2}m}
\end{equation}
This is the contribution due to the one-vertex two-loop graph in which
one only retains in both
propagators the deviations from the trivial space propagators.
The other individual terms are ambiguous and divergent, but their sum cancels.
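As an aside (our addition), the equality in (\ref{129}) can be checked numerically by replacing $\frac{1}{L}\sum\frac{1}{\omega^{3}}$ with its $L\to\infty$ limit $\frac{1}{2\pi}\int dk\,(k^{2}+m^{2})^{-3/2}=\frac{1}{\pi m^{2}}$; the values of $\lambda$ and $m$ below are arbitrary samples:

```python
import math

m, lam = 0.7, 1.0   # arbitrary sample mass and coupling

def simpson(f, a, b, n=40000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# (1/L) sum 1/omega^3  ->  (1/2pi) * integral dk (k^2 + m^2)^(-3/2) = 1/(pi m^2)
I = simpson(lambda k: (k * k + m * m) ** -1.5, -200 * m, 200 * m) / (2 * math.pi)
lhs = lam * m**3 / 40 * I**2
rhs = lam / (40 * math.pi**2 * m)
print(lhs, rhs)   # agree to the accuracy of the truncated integral
```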
\subsubsection*{The $\eta^{3}$ and $\delta m^2\eta$ contributions with one
intermediate particle} We next evaluate the contributions to the
mass of the soliton which come from the $\eta^{3}$ term in $H_{\rm
int,sol}^{(1)}$, and the $\eta$ term in $\Delta H_{\rm sol}$. We first
take one-particle intermediate states in the sums over $|p\rangle$ in
(1). Since there are no terms in the trivial sector which are odd in
$\eta$, these contributions should sum up to a finite result, but we
are again interested in possible ambiguities. Using
\begin{eqnarray}
&&\langle n|\frac{\delta m^{2}m}{\sqrt{\lambda}}\int_{-\infty}^{+\infty}\frac{-4(
e^{mx}-e^{-mx})}{(e^{mx}+e^{-mx})^{2}}\eta dx |{\rm sol}\rangle\nonumber\\
&&=-\frac{\delta m^{2}\pi}{m\sqrt{2L\lambda}}\frac{\sqrt{\omega_{n}}}{
\cosh \frac{1}{2}\pi q_{n}}
\end{eqnarray}
(see the appendix), and
\begin{equation}
\frac{1}{L}\sum_{n}\frac{1}{ \cosh^{2}( \frac{1}{2} \pi q_{n} )}=\frac{2m}{
\pi^{2}}
\end{equation}
we find straightforwardly
\begin{equation}
\sum_{n}|\langle n|\eta\; {\rm term}|{\rm sol}\rangle|^{2}/(-\omega_{n})=-\frac{(\delta m^{2})^
{2}}{\lambda m}
\end{equation}
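The $\cosh^{-2}$ sum used here reduces, for large $L$, to $\frac{m}{2\pi}\int dq\,{\rm sech}^{2}(\pi q/2)=\frac{2m}{\pi^{2}}$; a quick numerical confirmation (our addition, with an arbitrary sample mass):

```python
import math

m = 1.0   # sample meson mass (arbitrary units)

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# (1/L) * sum_n sech^2(pi q_n / 2)  ->  (m/2pi) * int dq sech^2(pi q / 2)
val = m / (2 * math.pi) * simpson(lambda q: math.cosh(math.pi * q / 2) ** -2,
                                  -40.0, 40.0)
print(val, 2 * m / math.pi**2)   # agree
```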
Next we evaluate $\langle p|\eta^{3}{\rm term}|{\rm sol}\rangle$ with $\langle p|$ a one-particle
state. This yields a factor of 3 times an equal-time contraction at the
point $x$. The $x$-integrals are given in the appendix and one finds
\begin{equation}
\langle{\rm p,1\;part |\eta^{3}term|sol}\rangle=\sqrt{\frac{\lambda}{2L}}\frac{\pi}{4m}
\frac{\sqrt{\omega_{p}}}{\cosh \frac{1}{2}\pi q_{p}}(\frac{1}{L}
\sum_{n}\frac{1}{\omega_n}-\frac{\omega_{p}^{2}}{4\pi m^{2}})
\end{equation}
Then it is relatively easy to obtain
\begin{eqnarray}
&&\sum_{p}\frac{2}{-\omega_{p}}Re\langle {\rm sol}|\eta^{3}{\rm term|p,1\;part\rangle\langle p,1\;part|
\eta \,term|sol}\rangle\nonumber\\&&
=(\delta m^{2})[\frac{1}{2mL}\sum\frac{1}{\omega}-\frac{1}{6\pi m}]
\end{eqnarray}
Finally we evaluate
\begin{eqnarray}
&&\sum\frac{1}{-\omega_{p}}|\langle{\rm p,1\;part |\eta^{3}term |sol}\rangle|^{2}
\nonumber\\&&
=-\frac{1}{\lambda m}(\frac{\lambda}{4L}\sum\frac{1}{\omega})^{2}+\frac{1}
{6\pi m}(\frac{\lambda}{4L}\sum\frac{1}{\omega})-\frac{\lambda}{120 \pi^{2} m}
\end{eqnarray}
The sum of the contributions from the $\delta m^{2}\eta$ and $\eta^{3}$ terms
with only one-particle intermediate states is then
\begin{eqnarray}\label{136}
&&
\epsfysize 9pt\raisebox{4pt}{\epsfbox{x-x.eps}}
\qquad\raisebox{8pt}{$+$}\qquad
\epsfysize 16pt\epsfbox{x-o.eps}
\qquad\raisebox{8pt}{$+$}\qquad
\epsfysize 16pt\epsfbox{o-o.eps}
\qquad\raisebox{8pt}{$=$}
\nonumber\\
&&
-\frac{1}{\lambda m}(\delta m^{2})^{2}+\delta m^{2}( \frac{1}{2 mL}\sum\frac{1}
{\omega}-\frac{1}{6\pi m})-\frac{1}{\lambda m}(\frac{\lambda}{4L}\sum\frac{1}{
\omega})^{2}\nonumber\\&&
+\frac{1}{6\pi m}(\frac{\lambda}{4L}\sum\frac{1}{\omega})-\frac{\lambda}{
120 \pi^{2} m}\nonumber\\&&
=-\frac{\lambda}{16 mL^{2}}(\sum\frac{1}{\omega}-\sum\frac{1}{\tilde{\omega}})^{2}+
\frac{\lambda}{24\pi mL}(\sum\frac{1}{\omega}-\sum\frac{1}{\tilde{\omega}})
\nonumber\\&&
-\frac{\lambda}{120\pi^{2}m}
\end{eqnarray}
The only nonvanishing contribution from these terms is thus due to the square
of the finite part of the matrix element of the $\eta^{3}$ term, and the latter
is again obtained by taking the deviation from the propagator in the trivial
vacuum (the part proportional to $\tanh^2 mx - 1$) in the equal-time contraction of two $\eta$ fields. All other matrix
elements are divergent, and none of them contributes.
\subsubsection*{The contribution from three intermediate particles and the
$\eta\pi\eta\pi$ term} From (\ref{H1I})
the matrix element with 3 intermediate particles can be
written as follows (after the substitution $e^{mx}=y$)
\begin{eqnarray}
&&\frac1{2m} \int_0^\infty dy^2 \biggl[ {(y^2-1)^4\over(y^2+1)^5} -i
{(y^2-1)^3\over(y^2+1)^4}(q_n+q_r+q_t)\nonumber\\
\qquad&&-{(y^2-1)^2\over(y^2+1)^3}(q_nq_r+q_nq_t+q_rq_t)+
i{y^2-1\over(y^2+1)^2}q_nq_rq_t \biggr](y^2)^{\frac{i}2Q-\frac12}
\end{eqnarray}
where $Q=q_n+q_r+q_t$. Using
\begin{equation}
\int_0^\infty dy^2 y^n y^{iQ-1} (y^2+1)^{-m}=B(\frac{iQ}2+\frac{n+1}2,
\frac{-iQ}2-\frac{n+1}2+m)
\end{equation}
where $B$ is the beta function, one finds
\begin{eqnarray}
&&\frac1{2m}\left[ -\frac18 Q^4+\frac14 Q^2 + \frac12 (Q^2-1)
(q_nq_r+q_nq_t+q_rq_t)-Q(q_nq_rq_t)+\frac38 \right] {\pi\over\cosh\frac12\pi Q}
\nonumber\\
&=&\frac1{2m}\left[ -\frac18 (\omega_n^2+\omega_r^2+\omega_t^2)^2+
\frac12(\omega_n^2\omega_r^2+\omega_n^2\omega_t^2+\omega_r^2\omega_t^2)
\right] {\pi\over\cosh\frac12\pi Q}
\end{eqnarray}
Squaring, one finds for the contributions to $M^{(2)}$ due to
intermediate states with 3 particles
\begin{eqnarray}
&&\frac16\sum_{n,r,t}\left| \langle q_n,q_r,q_t | H_{\rm
int,sol}^{(1),I} | {\rm sol} \rangle
\right|^2{-1\over\omega_n+\omega_r+\omega_t}
\qquad\raisebox{8pt}{$=$}\quad \epsfysize 20pt\epsfbox{sun.eps}
\nonumber\\ &=&
{-\lambda\over 6\cdot 2^{10}\pi m}\int{dq_1 dq_2 dq_3\over\cosh^2\frac12\pi(q_1+q_2+q_3)}
\nonumber\\&&\times
\textstyle{[(1+q_1^2)^2+(1+q_2^2)^2+(1+q_3^2)^2
-2(1+q_1^2)(1+q_2^2)-2(1+q_1^2)(1+q_3^2)-2(1+q_2^2)(1+q_3^2)]^2\over
(1+q_1^2)^{3/2}(1+q_2^2)^{3/2}(1+q_3^2)^{3/2}
(\sqrt{1+q_1^2}+\sqrt{1+q_2^2}+\sqrt{1+q_3^2})}
\end{eqnarray}
The prefactor $\frac16$ is needed since we sum over all $q_n$, $q_r$, $q_t$
while the 3-particle states are given by $a^\dagger_n a^\dagger_r a^\dagger_t
|{\rm sol}\rangle$.
Next we evaluate the contribution from the $\int \eta'\pi\int \eta'\pi$ term.
Using that $\eta=\sum_n\sqrt{\hbar/2\omega_n}\,(a_n E_n \exp -i\omega_n t + h.c.)$
while
$\pi=\sum_n\sqrt{\hbar/2\omega_n}\,(-i\omega_na_n E_n \exp -i\omega_n t + h.c.)$
one finds 3 contributions from the 3 possible contractions,\footnote{
Each of these 3 contributions is proportional to $\hbar^2$ because
each propagator gives a factor $\hbar$, vertices do not contribute
factors of $\hbar$, and the energy denominators in (\ref{71}) yield a
factor $\hbar^{-1}$.}
\begin{eqnarray}
\langle {\rm sol}|H^{(2),III}_{\rm int,sol}|{\rm sol}\rangle=
\qquad\epsfysize 20pt\epsfbox{141-1.eps}
\quad\raisebox{8pt}{$+$}\quad
\epsfysize 20pt\epsfbox{141-2.eps}
\quad\raisebox{8pt}{$+$}\quad
\epsfysize 20pt\epsfbox{141-3.eps}
\nonumber\\
=\frac\lambda{64m^3} \sum_{n,r}\left(-C_{n,n}C_{r,r}
+C_{r,-n}C_{-n,r}+{\omega_r\over\omega_n}C_{n,-r}C_{-n,r}\right)
\end{eqnarray}
where $C_{n,r}=\int (dE_n/dx)E_r dx$. Dashed lines denote $\pi$
fields. Using that $E_n$ and $E_r$ are
orthonormal, it follows that
\begin{equation}
C_{n,r}=imq_n\delta_{n,r}+\frac1L\frac1{\omega_n\omega_r}\left(q_n^2-q_r^2\right)
{i\pi\over\sinh\frac12\pi(q_n-q_r)}
\end{equation}
Since $C_{n,n}$ is odd in $n$ $(C_{n,n}=imq_n+2iq_n/(L\omega_n^2))$,
the contribution from $C_{n,n}C_{r,r}$ vanishes, and in the remainder
the term with $imq_n\delta_{n,r}$ cancels in the contribution
$\sum_{r,n}({\omega_r\over\omega_n}-1)|C_{n,r}|^2$. This leads to
\begin{eqnarray}
&&\langle {\rm sol}|\int \eta'\pi\int \eta'\pi
\mbox{-term}|{\rm sol}\rangle\nonumber\\
&=&\frac\lambda{2^{10}m}\int{dq_1 dq_2\over[\sinh\frac12\pi(q_1-q_2)]^2}
\left( {\sqrt{1+q_1^2}\over\sqrt{1+q_2^2}}-1\right)
{(q_1^2-q_2^2)^2\over(1+q_1^2)(1+q_2^2)}
\end{eqnarray}
According to Verwaest \cite{verw} the sum of these two contributions
is
\begin{equation}\label{144}
-\frac\lambda{60\pi^2m}.
\end{equation}
(In ref. \cite{vega} these contributions were numerically evaluated).
For us the crucial point is that both contributions are finite and
hence unambiguous.
Adding (\ref{129}), (\ref{136}) and (\ref{144}) one finds that the sum
of these contributions vanishes. Hence, the only contribution to the
two-loop correction of the soliton mass comes from the term
$-\hbar^2/(8M_{\rm cl}^2)\int(\phi'')^2dx$ in (\ref{102}) and it
yields
\begin{equation}
M^{(2)} = -{\lambda\over192m}.
\end{equation}
All other contributions combine into unambiguous finite integrals
which vanish due to factors $1/L$.
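The cancellation of (\ref{129}), (\ref{136}) and (\ref{144}) is a simple statement of rational arithmetic, which a two-line check confirms (the common factor $\lambda/(\pi^{2}m)$ has been stripped off):

```python
from fractions import Fraction as F

# coefficients of lambda/(pi^2 m) in eqs. (129), (136) and (144)
total = F(1, 40) + F(-1, 120) + F(-1, 60)
print(total)   # -> 0
```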
\section{Conclusions}
\label{sec:conclude}
In this article two new concepts have been introduced in the theory of
quantum solitons: topological boundary conditions and a physical
principle which fixes UV quantum ambiguities. The main idea
underlying these concepts is simple: the problem
must be formulated {\em before} the loop expansion
is performed. This means that the mass of the soliton
must be defined nonperturbatively, rather than as a sum of
the classical result, plus one-loop corrections, plus etc.
We define this mass as the difference between the vacuum
energies of the system with different boundary conditions.
It follows immediately that the boundary conditions must be formulated
for the full quantum field, rather than for small fluctuations.
The topological boundary conditions are, in fact, better viewed as
conditions which put the system on a Moebius strip without
boundaries. In the literature one usually employs periodic boundary
conditions in the soliton sector for all the quantum fluctuations (but
not for the classical field), or a mixture of periodic conditions for
bosons and others for fermions. All these conditions may distort the
system at the boundaries and introduce spurious extra $O(L^0)$ energy
contributions which obscure the measurement of the mass of a
soliton. We even found that most boundary conditions in the literature
are not compatible with the Majorana condition for fermions in the
$N=(1,1)$ case. With topological boundary conditions there is no
spurious energy introduced at the boundaries since there are no
boundaries, and one obtains the genuine mass of a soliton.
In the long-standing problem of UV ambiguities of the quantum mass of
a soliton, we have taken the point of view that this problem should be
recognized as the well-known regularization dependence of loop
calculations in quantum field theory. The action is determined only up
to local counterterms, and one has to introduce renormalization
conditions to fix those. With solitons, we want to compare vacuum
energies in topologically distinct sectors. The new principle we have
introduced is that if all mass parameters in the theory tend to zero,
the topological and the trivial sector must have the same vacuum
energy (in the infinite volume limit).
This condition follows immediately from a simple dimensional analysis.
Indeed, if the physical meson mass $m$ and all dimensionful
couplings scale according to their dimensions as $m\to0$, then
the mass of the soliton $M$ as a function of these parameters
must also scale as $m$. One can take a different, but
equivalent, look at the problem: After rotation to Euclidean
space one finds that the soliton is a string-like interface between
two phases in two-dimensional classical statistical field theory.
Our principle is then related to the well-known property
of the interface tension: the tension vanishes at the critical (conformal)
point~\cite{jdl}. Note again that the formulation of this principle
is entirely nonperturbative.
It is only when we perform the loop expansion that we see that the principle leads to
nontrivial consequences. This is because divergent contributions
proportional to the UV cutoff $\Lambda$ and independent of $m$ may
appear (and they do appear in the $N=1$ theory with mode number
cutoff and topological boundary conditions).
Our principle can be used as a renormalization condition to
eliminate such divergences.
Adopting this principle, the calculation of the soliton mass becomes
unambiguous if one first differentiates the sums over frequencies
w.r.t.\ the mass parameter, and then integrates setting the integration
constant equal to zero.
There exist other
methods to compute masses in certain 1+1-dimensional models~
\cite{dhn2,poly,mccoy,are,kor,fad,scho,ahn}.
For exactly solvable models one can determine the S-matrix exactly
by assuming that the Yang-Baxter equation holds and that the S-matrix
factorizes into products of two-particle S-matrices.
This program has been extended to the $N=1$ susy sine-Gordon model
\cite{scho,ahn}
and from the result one reads off that to one loop
the soliton mass is given by
\begin{equation}
M=M_{\rm cl}(m) - {m\over2\pi},
\end{equation}
where $m$ is the physical renormalized meson mass.
This is the same result as we
have found with topological boundary conditions and our renormalization
condition at the conformal point. From our point of view, choosing the
Yang-Baxter equation (or the thermodynamic Bethe ansatz together with
the quantum Lax-pairs approach --- the ``inverse scattering method'')
amounts to a choice of regularization scheme, which, at least as far as
the quantum mass of solitons is concerned, appears to be equivalent
to our principle.
However, it may seem that the BPS bound $\langle
H\rangle\geq|\langle Z\rangle|$ is violated at the one-loop level
because of the negative sign of $M^{(1)}$, since the central charge
does not receive quantum corrections at one loop~\cite{imb1,rvn}
(apart from those absorbed by the renormalization of $m$). We point
out that it is the unrenormalized expectation value $\langle
H\rangle$ of the Hamiltonian that should obey the bound,
not the physical soliton mass $M$, which may (and does in the $N=1$
case) differ from $\langle H\rangle$ by an $m$-independent counterterm.
Using the mode-number cut-off regularization and topological boundary
conditions, we have found that
\begin{equation}\label{HMNC}
\langle H \rangle_{\rm MNC} = M_{\rm cl}(m) + \frac\Lambda4 - {m\over2\pi}
\end{equation}
where $\Lambda$ is the ultraviolet cut-off. This means that the bound
is observed by $\langle H\rangle$. This happens because $N=1$ susy is
not enough to eliminate the linear divergent term in $\langle
H\rangle$. This term is positive in accordance with the BPS
bound. The bound is not saturated, which agrees with the observation
of Olive and Witten \cite{WO} that the saturation of the bound is
related to ``multiplet shortening''. The latter does not occur in the
$N=1$ model.
Therefore, in the $N=1$ theory with mode number cutoff we encounter
a situation where our physical principle $M|_{m=0}=0$ leads
to nontrivial consequences. The subtraction of the trivial vacuum
energy as in (\ref{e1-e0}) eliminates bulk volume $O(L)$
contributions (in susy theories they are absent anyway). This subtraction
cannot, however, eliminate a possible $O(L^0)$ $m$-independent but
regularization dependent $O(\Lambda)$ contribution. The required
subtraction constant, or counterterm,
is determined by the condition $M|_{m=0}=0$.
Therefore
\begin{equation}
M = \left( E_1 - E_0 \right)
- \left( E_{1} - E_{0} \right)_{m=0}
\end{equation}
is a more complete (compared to (\ref{e1-e0}))
definition of the physical soliton mass. It is clear that this is
exactly the definition implemented by our $d/dm$ calculation
\begin{equation}
M = \int_0^m {d\over dm} ( E_{1} - E_{0} ).
\end{equation}
In $N=2$ models we do not encounter any of these issues. There it turned
out that neither the soliton mass nor the central charge receive quantum
corrections, hence the BPS bound remains intact and saturated.
In contrast with the
$D=2$ $N=1$ case where all susy representations,
with and without saturation of the bound, are
two-dimensional, the BPS bound of the $N=2$ models is protected
by ``multiplet shortening'' \cite{WO}.
An alternative UV regularization which one may use is the
energy cutoff. In the mode cutoff regularization one truncates
the {\em sums} over the modes. The energy cutoff amounts
to first converting the sums into integrals over momenta
and then truncating these {\em integrals}. As was shown in \cite{rvn}
this regularization scheme in the sine-Gordon model
leads to a result in disagreement with the Dashen-Hasslacher-Neveu
spectrum \cite{fad,dhn1}. In the supersymmetric sine-Gordon case
the energy cutoff would lead to a vanishing one-loop correction (after
standard renormalization of $m$), which is in contradiction with
existing exact results~\cite{scho,ahn}. We examined
the two-loop corrections and found that no dependence on
the choice of regularization appears there. Therefore the difference
between the energy cutoff and the mode cutoff is purely a one-loop
effect. This suggests that, perhaps, a formulation of the theory exists
where this effect can be described in terms of a quantum topological
one-loop anomaly. Moreover, the one-loop
correction to the mass $M$ does not depend on the coupling constant,
thus it is, in a certain sense, a geometrical effect.
\subsection*{Acknowledgments}
We would like to thank N. Graham and R. Jaffe for discussions.
\section{Introduction}
A special class of non-diffracting light waves called Bessel beams
\cite{Durnin4} has been extensively studied over the last two
decades. Bessel beams are characterized by a transverse field
profile in the form of a zero-order Bessel function of the first
kind. They exhibit several intriguing properties, such as
diffraction-free propagation of the central peak over a distance
fixed only by the geometry of the source device, and superluminal
phase and group velocities in free space. Due to their
diffraction-free characteristics, they have found applications in
several fields of physics. Non-diffractive pump fields have been
utilized in nonlinear optical processes like parametric down
conversion \cite{BelviKazak,Belyi,Orloy,Trapani,Longhi,Piskarskas},
second \cite{Wulle,Arlt,Ding,Jedrkiewicz}, third \cite{Peet}, and
higher-order harmonic generation \cite{Averchi}. Furthermore, the
accuracy of optical tweezers \cite{Milne,Garces,Summers}, optical
trapping \cite{Arlt63,Garces66,Fan,Tatarkova,Arakelyan} and the
resolution of medical imaging \cite{Rolland,Jianyu} have been
enhanced by the implementation of diffraction-free beams.
In addition to exploiting the non-diffractive feature of Bessel
beams, superluminality has also been a subject of investigation.
There have been attempts to demonstrate this property using light in
the microwave range \cite{Mugnai} but the experimental uncertainties
induced by the apparatus were large compared to the magnitude of the
superluminal effect \cite{Ringermacher,Bigelow}. Recently, the
propagation velocity of an ionization wavefront induced by a Bessel
pulse has also been measured \cite{Alexeev}, but until now no
measurement revealed both superluminal group and phase velocities in
free space.
Here, we report a complete characterization of spatio-temporal
properties of an optical Bessel beam generated by reflection from a
conical mirror \cite{Fortin}. This novel technique of producing
Bessel beams has the advantage, with respect to commonly used
axicons \cite{Mcleod}, of avoiding dispersion and is thus more
suited for applications that require ultrashort laser pulses.
In the following we describe the spatial characterization of the
beam, showing non-diffractive properties over its propagation range.
We then present measurements of superluminal phase and group
velocities in free space with greater accuracy than previously
achieved.
\section{Beam Properties}
A non-diffractive Bessel beam is realized by a coherent
superposition of equal-amplitude, equal-phase plane waves whose
wavevectors form a constant angle $\theta$, called the axicon angle,
with the direction of propagation of the beam which we define as the
$z$-axis.
As originally proposed by Durnin \cite{Durnin4}, the analytical
solution of the Helmholtz equation for the electric field propagation is given by:
\begin{equation} \label{solution}
E(r,z,t) = A \exp\left[i(\beta z-\omega t)\right] J_0(\alpha r)
\end{equation}
where $A$ is constant, $\beta=k\cos\theta$,
$\alpha=k\sin\theta$, $k=\omega/c$ is the wavenumber,
$\omega$ is the angular frequency, $r$ is the radial coordinate and
$J_0$ is a zeroth-order Bessel function of the first kind. Equation
(\ref{solution}) defines a non-diffracting beam since the
intensity distribution is independent of $z$ and equal to
$J_0^2(\alpha r)$. Furthermore, the field described in Eq.
(\ref{solution}) propagates with a superluminal phase velocity given
by $v_p=c/\cos\theta>c$. The independence of the phase velocity
from the field frequency implies that in free space the group
velocity is also superluminal and is equal to the phase velocity.
In practical experiments the diffraction free region is limited by
the finite extent of the beam. As evidenced by Fig. \ref{fig:1}(a),
the maximum diffraction free propagation distance depends on the
radius $R$ of the optical element used for producing the beam and on
the axicon angle $\theta$, and is given by $z_{max}=R/\tan\theta$.
At propagation distances greater than $z_{max}$, the beam diffracts
very quickly, spreading the energy over an annular region.
\begin{figure}[b]
\center{\includegraphics{Kuntz_fig1.eps}}
\caption{(color online). a) Conical mirror used to produce the Bessel beam.
b) Schematic of the experimental setup used for the spatial and temporal characterization of a Bessel
beam. The elements added for temporal characterization are inside a dotted square. Black arrows denote the Gaussian beam path while red arrows
refer to the Bessel beam path. PBS, polarizing beamsplitter, $\lambda/4$, quarter waveplate.}
\label{fig:1}
\end{figure}
In our experiment, the optical element used to generate the Bessel
beam is a conical mirror with a radius of $1.27$ $\mathrm{cm}$ and
an apex angle of $\pi-\theta=179^\circ$. The mirror parameters
correspond to a non-diffractive distance of $73$ $\mathrm{cm}$ and
lead to the phase and group velocities exceeding $c$ by $0.015\%$.
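These figures follow directly from the stated mirror parameters; the short numerical sketch below (an added illustration, not part of the original analysis) reproduces both from $z_{max}=R/\tan\theta$ and $v_p/c = 1/\cos\theta$:

```python
import math

# Stated mirror parameters: apex angle pi - theta = 179 deg, radius 1.27 cm.
theta = math.radians(1.0)   # axicon angle theta
R = 1.27e-2                 # mirror radius in metres

z_max = R / math.tan(theta)            # non-diffractive distance z_max = R/tan(theta)
excess = 1.0 / math.cos(theta) - 1.0   # (v_p - c)/c = 1/cos(theta) - 1

print(f"z_max  ~ {z_max * 100:.0f} cm")   # ~73 cm
print(f"excess ~ {excess * 100:.3f} %")   # ~0.015 %
```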
\section{Spatial Characterization}
The non-diffractive propagation was verified by spatial
characterization of the Bessel beam. The experimental setup used for
these measurements is shown in Fig. \ref{fig:1}(b). The light source
is a continuous wave (CW) He:Ne laser operating at $543.5$
$\mathrm{nm}$. The horizontally polarized beam is first expanded so
that the resulting collimated light has a waist slightly larger than
the diameter of the conical mirror, and then is transmitted through
a polarizing beamsplitter (PBS). After the PBS, the Gaussian beam
passes through a $\lambda/4$ waveplate, reflects off the conical
mirror, passes through the waveplate a second time, changing to a
vertical polarization, and is reflected by the PBS. In this way,
65\% of the Gaussian beam power is converted into a non-diffractive
beam.
The intensity profile at various positions along the propagation
axis was recorded using a CCD camera. A microscope was constructed
to magnify the intensity profile by a factor of 3.5 so the camera
could resolve fine transverse features of the diffraction pattern.
The different radial intensity profiles recorded are shown in Fig.
\ref{fig:3D} and are almost identical over the entire diffraction
free region with a central peak width which is consistent with the
theoretical expectation of $(2\times 2.405)/\alpha=24 \pm1.5$
$\mathrm{\mu m}$, where 2.405 is the first zero of the Bessel
function.
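The quoted width follows from $\alpha = k\sin\theta$ and the first zero of $J_0$ at $2.405$; a minimal numerical check (added here for illustration, using the nominal $1^\circ$ axicon angle):

```python
import math

lam = 543.5e-9              # He:Ne wavelength (m)
theta = math.radians(1.0)   # nominal axicon angle
alpha = (2 * math.pi / lam) * math.sin(theta)   # transverse wavevector k*sin(theta)

# Full width of the central peak: distance between the first zeros of J_0.
width = 2 * 2.405 / alpha
print(f"central peak width ~ {width * 1e6:.0f} um")   # ~24 um
```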
\begin{figure}
\center{\includegraphics{Kuntz_fig2.eps}}
\caption{(color online). Normalized radial intensity profile for several CCD
camera positions.}
\label{fig:3D}
\end{figure}
We compared the measured intensity pattern with the pattern
calculated numerically by the Rayleigh--Sommerfeld diffraction
integral \cite{Goodman} (Fig. \ref{fig:3}(a)). For this calculation
we assumed the amplitude transmission function of the mirror is
given by:
\begin{equation} \label{transmission}
T(r)=e^{-i2kr\tan(\theta/2)}.
\end{equation}
The intensity of the central peak as a function of the propagation
distance is shown in Fig. \ref{fig:3}(b) together with the
numerical prediction. The central peak intensity behavior is
determined by the fact that the light that constructively interferes
to create the peak at a longitudinal position $z$ arrives from a
region of the mirror that has a circumference of $2\pi R=2\pi
z\tan\theta$. The intensity thus grows linearly for small values of
$z$ and then decreases when the intensity decrease of the Gaussian
profile of the input beam is more rapid than the linear increase of
the circumference. The oscillations at the end of the curve are due
to the Fresnel diffraction from the outer edge of the mirror.
The 73-cm non-diffractive distance estimated with geometrical optics
agreed with that obtained from the numerical calculation and with
the experimental results.
\begin{figure}[t]
\center{\includegraphics{Kuntz_fig3.eps}}
\caption{(color online). a) Measured intensity profile (dots) and numerical calculation (line).
b) On-axis intensity: experimental data (dots) and theoretical calculation (line).
}
\label{fig:3}
\end{figure}
\section{Temporal Characterization}
The phase and group velocities in free space were measured in order
to verify that the propagation of the Bessel beam exceeds the speed
of light. We implemented an interferometric design using CW light
for the phase velocity measurement, and femtosecond pulses for the
group velocity measurement. The design, shown in Fig. \ref{fig:1}(b),
is a mixed Michelson interferometer, with a plane mirror in one arm
and a conical mirror in the other. In the output channel of the
interferometer we placed a CCD camera to record the on-axis
intensity as a function of the camera's longitudinal position $z$.
We placed a $\lambda/4$ waveplate into each arm to permit
independent manipulation of the intensity of the two beams.
This arrangement allowed us to compare the group and phase
velocities of the Bessel beam relative to those of the Gaussian beam
(that are equal to $c$) and to observe the relatively small
superluminal effect expected.
\subsection{Phase Velocity Measurement}
The Gaussian and Bessel beams both have the same frequency $\nu$ but
different phase velocities, $v_{p,G}$ and $v_{p,B}=v_{p,G}+\Delta
v_{p}$, respectively, along the $z$ direction. Thus they will show
interference fringes spaced at $\Delta z =v_{p,G}v_{p,B}/ (|\Delta
v_{p}|\nu)$. In our case, with $|\Delta v_{p}|\ll v_{p,G}$,
$v_{p,B}\approx c$, we find $|\Delta v_{p}| = \lambda c/\Delta z$.
Measuring the spacing of the interference fringes thus yields the
magnitude of $\Delta v_{p}$. In order to determine the sign of
$\Delta v_{p}$, and hence whether the Bessel beam is propagating
superluminally, we varied the angle $\varphi$ between the direction
of propagation of the Gaussian and Bessel beam. By doing so, we set
the phase velocity component of the Gaussian beam in the direction
of propagation of the Bessel beam to
$v_{p,G}=\omega/k_z=\omega/k\cos\varphi=c/\cos\varphi$.
If the phase velocity of the Bessel beam exceeds that of the
Gaussian beam then with increasing $\varphi$ between
$0\leq\varphi<\theta$, the interference fringe spacing $\Delta z$
should also increase. Experimentally, we aligned the Gaussian beam
along the optical rail and manipulated $\varphi$ by slightly tilting
the conical mirror. We determined the value of $\varphi$ by
displacing the CCD along the optical rail and recording the relative
transverse positions of the centres of the two beams at each
location.
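The expected trend can be modelled directly from the expressions above: with $v_{p,B}=c/\cos\theta$ and $v_{p,G}=c/\cos\varphi$, the fringe period reduces to $\Delta z = \lambda/(\cos\varphi - \cos\theta)$, which grows as $\varphi$ approaches $\theta$. A sketch of this model (added for illustration, with the nominal $1^\circ$ axicon angle):

```python
import math

lam = 543.5e-9              # He:Ne wavelength (m)
theta = math.radians(1.0)   # nominal axicon angle

def fringe_period(phi):
    """Delta z = lambda / (cos(phi) - cos(theta)), valid for 0 <= phi < theta."""
    return lam / (math.cos(phi) - math.cos(theta))

# The period increases (fringes spread out) with increasing misalignment phi.
for phi_deg in (0.0, 0.5, 0.9):
    dz = fringe_period(math.radians(phi_deg))
    print(f"phi = {phi_deg:.1f} deg -> Delta z ~ {dz * 1e3:.1f} mm")
```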
Figure \ref{fig:7} displays the measured interference period as a
function of $\varphi$ together with a theoretical fit in which the
axicon angle $\theta$ was the variable parameter. In particular,
note that with increasing misalignment, $\Delta z$ is increasing,
indicating that the phase velocity of the Bessel beam is higher than
that of the Gaussian beam. The data give a phase velocity of
$v_{p}=(1.000155\pm0.000003)c$ which is consistent with the expected
value $c/\cos(\theta)=1.000152c$, corresponding to an axicon angle
of $1.009^\circ\pm0.01^\circ$.
\begin{figure}[t]
\center{\includegraphics{Kuntz_fig4.eps}}
\caption{(color online). CW measurements: period of the interference fringes $\Delta z$ as a function
of the angle $\varphi$ between the direction of propagation of the Gaussian
and Bessel beams (dots) together with the theoretical fit (line).}
\label{fig:7}
\end{figure}
\subsection{Group Velocity Measurement}
The group velocity measurement was performed using 200 fs pulses at
$\lambda = 776.9$ $\mathrm{nm}$ from a Ti:Sapphire mode-locked laser
pumped by a $532$ $\mathrm{nm}$ solid-state laser. The optics were
replaced to accommodate the new wavelength. The Gaussian and Bessel
beams were aligned with each other to within $0.5$ $\mathrm{mrad}$.
The intensity in the output channel of the interferometer was
measured with the CCD camera. When there is no temporal overlap
between the Gaussian and the Bessel pulse, the CCD measured the sum
of the intensities of the two pulses. On the other hand, when the
pulse arrival was simultaneous, the intensity was affected by
interference.
Suppose that for some configuration of the interferometer the CCD is
located in a position of maximum visibility. If the camera is now
translated along the propagation axis away from the PBS by a
distance of $\Delta z$, the interferometer arm with the planar
mirror must be moved by a distance $\Delta x$ in order to restore
the same visibility. Therefore, the Bessel pulse travels an extra
distance of $\Delta z$ in the time the Gaussian pulse travels a
distance of $\Delta z + 2\Delta x$, giving
\begin{equation}
\Delta z=\frac{2v_{g,B}}{c-v_{g,B}}\Delta x.
\label{eq:groupeq}
\end{equation}
In the experiment, we acquired several pairs $(\Delta x, \Delta z)$
for which the interference visibility is optimized. The interference
was observed by scanning the plane mirror over a distance of $1-1.5$
wavelengths with a piezoelectric transducer. For a given position
$z$ of the camera, the interference visibility over a range of plane
mirror positions was evaluated (Fig. \ref{fig:8}).
\begin{figure}
\center{\includegraphics{Kuntz_fig5.eps}}
\caption{(color online). Pulsed measurements: normalized fringe visibility for $z$ = 219($\scriptstyle{\blacksquare}$),
119(\textcolor{blue}{$\blacktriangle$}) and 9(\textcolor{red}{$\bullet$}) mm, as a function of the
plane mirror translation $\Delta x$. The experimental data are fitted with a Gaussian curve.}
\label{fig:8}
\end{figure}
\begin{figure}
\center{\includegraphics{Kuntz_fig6.eps}}
\caption{(color online). Displacement $x$ of the Michelson interferometer plane mirror, at which the visibility is
maximized, as a function of the CCD camera position $z$ (dots) with a linear fit (line).}
\label{fig:9}
\end{figure}
The linear relationship predicted by Eq. (\ref{eq:groupeq}) is
evidenced by Fig. \ref{fig:9}, and is characterized by a linear
regression slope $\partial x/\partial z =
(-7.5\pm0.3)\times10^{-5}$. The negative slope indicates
that the relative path length of the Gaussian beam had to be reduced
as the overall travel distance increased (i.e. smaller $z$ values),
which corresponds to a superluminal group velocity, $v_{g} =
[1-2(\partial x/\partial z)]c = (1.000150\pm0.000006)c$. This
velocity corresponds to an axicon angle of
$0.992^\circ\pm0.02^\circ$.
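Inverting Eq. (\ref{eq:groupeq}) gives $v_g = c/(1 + 2\,\partial x/\partial z) \approx [1-2(\partial x/\partial z)]c$ to first order in the slope; a one-line check of the quoted numbers (sketch only):

```python
slope, slope_err = -7.5e-5, 0.3e-5   # fitted d(x)/d(z) and its uncertainty

v_g = 1.0 - 2.0 * slope              # group velocity in units of c (first order)
v_g_err = 2.0 * slope_err

print(f"v_g = ({v_g:.6f} +/- {v_g_err:.6f}) c")   # (1.000150 +/- 0.000006) c
```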
\section{Conclusion}
We have spatially and temporally characterized an optical Bessel
beam, produced using a conical mirror, propagating in free space.
We ascertained the beam to have a constant peak size over the
propagation distance determined by the properties of the conical
mirror.
The phase and group velocities of the beam were determined with an
interferometric setup to be superluminal, with values of
$v_{p}=(1.000155\pm0.000003)c$ and $v_{g}=(1.000150\pm0.000006)c$,
respectively, which is consistent with the theoretical prediction.
\section*{Acknowledgements}
This work was supported by NSERC, CFI, AIF, Quantum$Works$ and CIFAR
(A.L.);
\section{Introduction}
Let $\mathfrak g$ be a restricted Lie algebra over an algebraically closed field $k$ of positive characteristic $p$. Suslin, Friedlander, and Bendel \cite{bfs1paramCoh} have shown that the maximal spectrum of the cohomology of $\mathfrak g$ is isomorphic to the variety of $p$-nilpotent elements in $\mathfrak g$, i.e., the so-called restricted nullcone $\mathcal N_p(\mathfrak g)$. This variety has become an important invariant in representation theory; for example, it can be used to give a simple definition of the local Jordan type of a $\mathfrak g$-module $M$ and consequently of the class of modules of constant Jordan type, a class first studied by Carlson, Friedlander, and Pevtsova \cite{cfpConstJType} in the case of a finite group scheme. Friedlander and Pevtsova \cite{friedpevConstructions} have initiated, in the case of a Lie algebra $\mathfrak g$, the study of certain sheaves over the projectivization, $\PG$, of $\mathcal N_p(\mathfrak g)$. These sheaves are constructed from $\mathfrak g$-modules $M$ so that representation-theoretic information, such as whether $M$ is projective, is encoded in their geometric properties. Explicit computations of these sheaves can be challenging due not only to their geometric nature but also to the inherent difficulty in describing representations of a general Lie algebra.
The purpose of this paper is to explicitly compute examples of these sheaves for the case $\mathfrak g = \slt$. The Lie algebra $\slt$ has tame representation type and the indecomposable modules were described explicitly by Alexander Premet \cite{premetSl2} in 1991. This means there are infinitely many isomorphism classes of such modules, so the category is rich enough to be interesting, but these occur in finitely many parameterized families which allow for direct computations. We also note that the variety $\PG[\slt]$ over which we wish to compute these sheaves is isomorphic to $\mathbb P^1$. By a theorem of Grothendieck we therefore know that locally free sheaves admit a strikingly simple description: They are all sums of twists of the structure sheaf. This makes $\slt$ uniquely suited for such computations.
We begin in \Cref{secRev} with the case of a general restricted Lie algebra $\mathfrak g$. We will review the definition of $\mathcal N_p(\mathfrak g)$ and its projectivization $\PG$. We use this to define the local Jordan type of a module $M$. We define the global operator $\Theta_M$ associated to a $\mathfrak g$-module $M$ and use it to construct the sheaves we are interested in computing. We will review theorems which not only indicate the usefulness of these sheaves but are also needed for their computation.
In \Cref{secSl2} we begin the discussion of the category of $\slt$-modules. Our computations are fundamentally based on having, for each indecomposable $\slt$-module, an explicit basis and formulas for the $\slt$ action. To this end we review Premet's description. There are four families and for each family we specify not only the explicit basis and $\slt$-action, but also a graphical representation of the module and the local Jordan type of the module. For the Weyl modules $V(\lambda)$, dual Weyl modules $V(\lambda)^\ast$, and projective modules $Q(\lambda)$ this information was previously known (see for example Benkart and Osborn \cite{benkartSl2reps}) but for the so called non-constant modules $\Phi_\xi(\lambda)$ we do not know if such an explicit description has previously been given. Thus we give a proof that this description follows from Premet's definition of the modules $\Phi_\xi(\lambda)$. We also compute the Heller shifts $\Omega V(\lambda)$ of the Weyl modules for use in \Cref{secLieEx}.
In \Cref{secMatThms} we digress from discussing Lie algebras and compute the kernels of four particular matrices. These matrices, with entries in the polynomial ring $k[s, t]$, will represent sheaf homomorphisms over $\mathbb P^1 = \proj k[s, t]$ but in this section we do not work geometrically and instead consider these matrices to be maps of free $k[s, t]$-modules. This section contains the bulk of the computational effort of this paper.
In \Cref{secLieEx} we are finally ready to carry out the computations promised. Friedlander and Pevtsova have computed $\gker{V(\lambda)}$ for the case $0 \leq \lambda \leq 2p - 2$ \cite{friedpevConstructions}. We compute the sheaves $\gker{M}$ for every indecomposable $\slt$-module $M$. This computation is essentially the bulk of the work in the previous section; the four matrices in that section describe the global operators of the four families of $\slt$-modules. We also compute $\mathscr F_i(V(\lambda))$ for $i \neq p$ and $V(\lambda)$ indecomposable using an inductive argument. The base case is that of a simple Weyl module $(\lambda < p)$ and is done by noting that $\mathscr F_i(V(\lambda))$ is zero when $i \neq \lambda + 1$ and that $\mathscr F_{\lambda + 1}(V(\lambda))$ can be identified with the kernel sheaf $\gker{V(\lambda)}$. For the inductive step we use the Heller shift computation from \Cref{secSl2} together with a theorem of Benson and Pevtsova \cite{benPevtVectorBundles}.
\section{Jordan type and global operators for Lie algebras} \label{secRev}
In this section we review the definition of the restricted nullcone of a Lie algebra $\mathfrak g$ and of the local Jordan type of a $\mathfrak g$-module $M$. We also define the global operator associated to a $\mathfrak g$-module $M$ and the sheaves associated to such an operator. Global operators and local Jordan type can be defined for any infinitesimal group scheme of finite height. Here we give the definitions only for a restricted Lie algebra $\mathfrak g$ and take $\slt$ as our only example. For details on the general case as well as additional examples we refer the reader to Friedlander and Pevtsova \cite{friedpevConstructions} or Stark \cite{starkHo1}.
Let $\mathfrak g$ be a restricted Lie algebra over an algebraically closed field $k$ of positive characteristic $p$. Recall that this means $\mathfrak g$ is a Lie algebra equipped with an additional \emph{$p$-operation} $(-)^{[p]}\colon\mathfrak{g \to g}$ satisfying certain axioms (see Strade and Farnsteiner \cite{stradeFarnModularLie} for details). Here we merely note that for the classical subalgebras of $\mathfrak{gl}_n$ the $p$-operation is given by raising a matrix to the $p^\text{th}$ power.
\begin{Def}
The restricted nullcone of $\mathfrak g$ is the set
\[\mathcal N_p(\mathfrak g) = \set{x \ \middle| \ x^{[p]} = 0}\]
of $p$-nilpotent elements. This is a conical irreducible subvariety of the affine space $\mathfrak g$. We denote by $\PG$ the projective variety whose points are lines through the origin in $\mathcal N_p(\mathfrak g)$.
\end{Def}
\begin{Ex} \label{exNslt}
Let $\mathfrak g = \slt$ and take the usual basis
\[e = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad f = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad \text{and} \quad h = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.\]
Let $\set{x, y, z}$ be the dual basis so that $\slt$, as an affine space, can be identified with $\mathbb A^3$ and has coordinate ring $k[x, y, z]$. A $2 \times 2$ matrix over a field is nilpotent if and only if its square
\[\begin{bmatrix} z & x \\ y & -z \end{bmatrix}^2 = (xy + z^2)\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\]
is zero. Therefore, independently of $p$, $\mathcal N_p(\slt)$ is the zero locus of $xy + z^2$.
By definition $\PG[\slt]$ is the projective variety defined by the homogeneous polynomial $xy + z^2$. Let $\mathbb P^1$ have coordinate ring $k[s, t]$ and define a map $\iota\colon\mathbb P^1 \to \PG[\slt]$ via $[s : t] \mapsto [s^2 : -t^2 : st]$. One then checks that the maps $[1 : y : z] \mapsto [1 : z]$ and $[x : 1 : z] \mapsto [-z : 1]$ defined on the open sets $x \neq 0$ and $y \neq 0$, respectively, glue to give an inverse to $\iota$. Thus $\PG[\slt] \simeq \mathbb P^1$.
\end{Ex}
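The two checks in this example can be verified symbolically; the following sympy sketch (an added illustration) confirms that $\iota$ lands in the quadric and that the chart map $[1:y:z] \mapsto [1:z]$ inverts it where $s \neq 0$:

```python
import sympy as sp

s, t = sp.symbols('s t')

# iota([s : t]) = [s^2 : -t^2 : s t]
x, y, z = s**2, -t**2, s*t

# The image satisfies the defining equation x*y + z^2 = 0 of the nullcone.
assert sp.expand(x * y + z**2) == 0

# On the chart x != 0 the inverse sends [1 : y : z] to [1 : z]; in homogeneous
# coordinates [x : z] = [s^2 : s t] = [s : t] whenever s != 0.
assert sp.Matrix([x, z]) == s * sp.Matrix([s, t])
```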
To define the local Jordan type of a $\mathfrak g$-module $M$, recall that a combinatorial partition $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)$ is a weakly decreasing sequence of finitely many positive integers. We say that $\lambda$ is a partition of the integer $\sum_i\lambda_i$, for example $\lambda = (4, 4, 2, 1)$ is a partition of $11$. We call a partition $p$-restricted if no integer in the sequence is greater than $p$ and we let $\mathscr P_p$ denote the set of all $p$-restricted partitions. We will often write partitions using either Young diagrams or exponential notation. A Young diagram is a left justified two dimensional array of boxes whose row lengths are weakly decreasing from top to bottom. These correspond to the partitions obtained by reading off said row lengths. In exponential notation the unique integers in the partition are written surrounded by brackets with exponents outside the brackets denoting repetition.
\begin{Ex}
The partition $(4, 4, 2, 1)$ can be represented as the Young diagram
\[\ydiagram{4,4,2,1}\]
or written in exponential notation as $[4]^2[2][1]$.
\end{Ex}
If $A \in \mathbb M_n(k)$ is a $p$-nilpotent ($A^p = 0$) $n \times n$ matrix then the Jordan normal form of $A$ is a block diagonal matrix such that each block is of the form
\[\begin{bmatrix} 0 & 1 \\ & 0 & 1 \\ && \ddots & \ddots \\ &&& 0 & 1 \\ &&&& 0 \end{bmatrix} \qquad (\text{an} \ i \times i \ \text{matrix})\]
for some $i \leq p$. Listing these block sizes in weakly decreasing order yields a $p$-restricted partition of $n$ called the \emph{Jordan type}, $\jtype(A)$, of the matrix $A$. Note that conjugation does not change the Jordan type of a matrix so if $T\colon V \to V$ is a $p$-nilpotent operator on a vector space $V$ then we define $\jtype(T) = \jtype(A)$, where $A$ is the matrix of $T$ with respect to some basis. Finally, note that scaling a nilpotent operator does not change the eigenvalues or generalized eigenspaces so it is easy to see that $\jtype(cT) = \jtype(T)$ for any non-zero scalar $c \in k$.
\begin{Def}
Let $M$ be a $\mathfrak g$-module and $v \in \PG$. Set $\jtype(v, M) = \jtype(x)$ where $x \in \mathcal N_p(\mathfrak g)$ is any non-zero point on the line $v$ and its Jordan type is that of a $p$-nilpotent operator on the vector space $M$. The \emph{local Jordan type} of $M$ is the function
\[\jtype(-, M)\colon\PG \to \mathscr P_p\]
so defined.
\end{Def}
When computing the local Jordan type of a module the following lemma is useful. Recall that the conjugate of a partition is the partition obtained by transposing the Young diagram.
\begin{Lem} \label{lemConj}
Let $A \in \mathbb M_n(k)$ be $p$-nilpotent. The conjugate of the partition
\[(n - \rank A, \rank A - \rank A^2, \ldots, \rank A^{p-2} - \rank A^{p-1}, \rank A^{p-1})\]
is $\jtype(A)$.
\end{Lem}
\begin{Ex} \label{exPart}
The conjugate of $[4]^2[2][1]$ is $[4][3][2]^2$.
\begin{center}
\begin{picture}(100, 70)(45, 0)
\put(0, 35){\ydiagram{4,4,2,1}}
\put(150, 35){\ydiagram{4,3,2,2}}
\put(75, 37){\vector(1, 0){55}}
\put(-5, 66){\line(1, -1){3}}
\put(0, 61){\line(1, -1){3}}
\put(5, 56){\line(1, -1){3}}
\put(10, 51){\line(1, -1){3}}
\put(15, 46){\line(1, -1){3}}
\put(20, 41){\line(1, -1){3}}
\put(25, 36){\line(1, -1){3}}
\put(30, 31){\line(1, -1){3}}
\put(35, 26){\line(1, -1){3}}
\put(40, 21){\line(1, -1){3}}
\put(45, 16){\line(1, -1){3}}
\put(50, 11){\line(1, -1){3}}
\put(37, 10){\vector(1, 1){15}}
\put(52, 25){\vector(-1, -1){15}}
\end{picture}
\end{center}
\end{Ex}
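Conjugation and the exponential notation are simple to compute mechanically; the small helpers below (hypothetical names, added for illustration) reproduce this example:

```python
def conjugate(partition):
    """Transpose the Young diagram: row j of the conjugate counts parts > j."""
    return [sum(1 for part in partition if part > j) for j in range(partition[0])]

def exponential(partition):
    """Render a partition in the exponential notation used above, e.g. [4]^2[2][1]."""
    pieces = []
    for part in sorted(set(partition), reverse=True):
        mult = partition.count(part)
        pieces.append(f"[{part}]" + (f"^{mult}" if mult > 1 else ""))
    return "".join(pieces)

assert conjugate([4, 4, 2, 1]) == [4, 3, 2, 2]
assert exponential(conjugate([4, 4, 2, 1])) == "[4][3][2]^2"
```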
\begin{Ex} \label{exV2JT}
Assume $p > 2$ and consider the Weyl module $V(2)$, for $\slt$. This is a $3$-dimensional module where $e$, $f$, and $h$ act via
\[\begin{bmatrix} 0 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}, \quad \text{and} \quad \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{bmatrix}\]
respectively. The matrix
\[A = \begin{bmatrix} 2z & 2x & 0 \\ y & 0 & x \\ 0 & 2y & -2z \end{bmatrix}\]
describes the action of $xe + yf + zh \in \mathfrak g$ on $V(2)$. For the purposes of computing the local Jordan type we consider $[x : y : z]$ to be an element in the projective space $\PG$. If $y = 0$ then $xy + z^2 = 0$ implies $z = 0$ and we can scale to $x = 1$. This immediately gives Jordan type $[3]$. If $y \neq 0$ then we can scale to $y = 1$ and therefore $x = -z^2$. This gives
\[A = \begin{bmatrix} 2z & -2z^2 & 0 \\ 1 & 0 & -z^2 \\ 0 & 2 & -2z \end{bmatrix} \quad \text{and} \quad A^2 = \begin{bmatrix} 2z^2 & -4z^3 & 2z^4 \\ 2z & -4z^2 & 2z^3 \\ 2 & -4z & 2z^2 \end{bmatrix}\]
therefore $\rank A = 2$ and $\rank A^2 = 1$. Using \cref{lemConj} we conclude that the Jordan type is the conjugate of $(3 - 2, 2 - 1, 1) = [1]^3$, which is $[3]$. Thus the local Jordan type of $V(2)$ is the constant function $v \mapsto [3]$.
\end{Ex}
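The rank computation in this example can be checked mechanically. The sketch below (added for illustration) evaluates $A$ at sample rational values of $z$ and applies \cref{lemConj}; working over $\mathbb Q$ is a stand-in for the characteristic-$p$ computation, agreeing with it for $p > 2$ since the relevant minors involve only the coefficients $1$ and $2$:

```python
import sympy as sp

z = sp.symbols('z')
A = sp.Matrix([[2*z, -2*z**2,  0    ],
               [1,    0,      -z**2 ],
               [0,    2,      -2*z  ]])

for zval in (0, 1, sp.Rational(1, 2), -3):
    A0 = A.subs(z, zval)
    ranks = [3] + [(A0**m).rank() for m in (1, 2, 3)]
    # Lemma: Jordan type is the conjugate of the successive rank differences.
    diffs = [ranks[m] - ranks[m + 1] for m in range(3)]
    assert diffs == [1, 1, 1]   # conjugate of [1]^3 is [3]
```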
\begin{Def}
A $\mathfrak g$-module $M$ has \emph{constant Jordan type} if its local Jordan type is a constant function.
\end{Def}
Modules of constant Jordan type will be significant for us for two reasons. The first is because of the following useful projectivity criterion.
\begin{Thm}[{\cite[7.6]{bfsSupportVarieties}}] \label{thmProjCJT}
A $\mathfrak g$-module $M$ is projective if and only if its local Jordan type is a constant function of the form $v \mapsto [p]^n$.
\end{Thm}
For the second note that when $\mathfrak g$ is the Lie algebra of an algebraic group $G$, the adjoint action of $G$ on $\mathfrak g$ induces an action of $G$ on the restricted nullcone and hence on $\PG$. One can show that the local Jordan type of a $G$-module (when considered as a $\mathfrak g$-module) is constant on the $G$-orbits of this action. The adjoint action of $\SL_2$ on $\PG[\slt]$ is transitive so we get the following.
\begin{Thm}[{\cite[2.5]{cfpConstJType}}] \label{thmRatCJT}
Every rational $\slt$-module has constant Jordan type.
\end{Thm}
Next we define the global operator associated to a $\mathfrak g$-module $M$ and the sheaves associated to such an operator. Let $\set{g_1, \ldots, g_n}$ be a basis for $\mathfrak g$ with corresponding dual basis $\set{x_1, \ldots, x_n}$. We define $\Theta_{\mathfrak g}$ to be the operator
\[\Theta_{\mathfrak g} = x_1 \otimes g_1 + \cdots + x_n \otimes g_n.\]
As an element of $\mathfrak g^\ast \otimes_k \mathfrak g \simeq \hom_k(\mathfrak g, \mathfrak g)$ this is just the identity map and is therefore independent of the choice of basis. Now $\Theta_{\mathfrak g}$ acts on $k[\mathcal N_p(\mathfrak g)] \otimes_k M \simeq k[\mathcal N_p(\mathfrak g)]^{\dim M}$ as a degree $1$ endomorphism of graded $k[\mathcal N_p(\mathfrak g)]$-modules (where $\deg x_i = 1$). The map of sheaves corresponding to this homomorphism is the global operator.
\begin{Def}
Given a $\mathfrak g$-module $M$ we define $\widetilde M = \OPG \otimes_k M$. The \emph{global operator} corresponding to $M$ is the sheaf map
\[\Theta_M\colon \widetilde M \to \widetilde M(1)\]
induced by the action of $\Theta_{\mathfrak g}$.
\end{Def}
\begin{Ex} \label{exV2glob}
We have $\Theta_{\slt} = x \otimes e + y \otimes f + z \otimes h$. Consider the Weyl module $V(2)$ from \cref{exV2JT}. The global operator corresponding to $V(2)$ is the sheaf map
\[\begin{bmatrix} 2z & 2x & 0 \\ y & 0 & x \\ 0 & 2y & -2z \end{bmatrix}\colon\OPG[\slt]^3 \to \OPG[\slt](1)^3.\]
Taking the pullback through the map $\iota\colon\mathbb P^1 \to \PG[\slt]$ from \cref{exNslt} we get that $\Theta_{V(2)}$ is the sheaf map
\[\begin{bmatrix} 2st & 2s^2 & 0 \\ -t^2 & 0 & s^2 \\ 0 & -2t^2 & -2st \end{bmatrix}\colon\mathcal O_{\mathbb P^1}^3 \to \mathcal O_{\mathbb P^1}(2)^3.\]
\end{Ex}
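The kernel of this operator already has a polynomial description: the sympy check below (an added illustration, valid as written for $p > 2$) shows that the homogeneous degree-$2$ column $(s^2, -st, t^2)$ is annihilated at every point of $\mathbb P^1$, and that the operator is nilpotent, as the action of a $p$-nilpotent element must be:

```python
import sympy as sp

s, t = sp.symbols('s t')

# Pullback of Theta_{V(2)} to P^1, as in the example above.
Theta = sp.Matrix([[ 2*s*t,  2*s**2,  0    ],
                   [-t**2,   0,       s**2 ],
                   [ 0,     -2*t**2, -2*s*t]])

# A line's worth of kernel at every [s : t], spanned by a degree-2 column.
v = sp.Matrix([s**2, -s*t, t**2])
assert sp.expand(Theta * v) == sp.zeros(3, 1)

# Theta cubes to zero, consistent with V(2) having constant Jordan type [3].
assert sp.expand(Theta**3) == sp.zeros(3, 3)
```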
The global operator $\Theta_M$ is not an endomorphism but we may still compose it with itself if we shift the degree of successive copies. Given $j \in \mathbb N$ we define
\begin{align*}
\gker[j]{M} &= \ker\left[\Theta_M(j-1)\circ\cdots\circ\Theta_M(1)\circ\Theta_M\right], \\
\gim[j]{M} &= \im\left[\Theta_M(-1)\circ\cdots\circ\Theta_M(1-j)\circ\Theta_M(-j)\right], \\
\gcoker[j]{M} &= \coker\left[\Theta_M(-1)\circ\cdots\circ\Theta_M(1-j)\circ\Theta_M(-j)\right],
\end{align*}
so that $\gker[j]{M}$ and $\gim[j]{M}$ are subsheaves of $\widetilde M$, and $\gcoker[j]{M}$ is a quotient of $\widetilde M$.
To see how these sheaves encode information about the Jordan type of $M$ recall that the $j$-rank of a partition $\lambda$ is the number of boxes in the Young diagram of $\lambda$ that are not contained in the first $j$ columns. For example the $2$-rank of $[4]^2[2][1]$ (from \cref{exPart}) is $4$. If one knows the $j$-rank of a partition $\lambda$ for all $j$, then one knows the size of each column in the Young diagram of $\lambda$ and can therefore recover $\lambda$. Thus if one knows the local $j$-rank of a module $M$ for all $j$ then one knows its local Jordan type.
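The passage from $j$-ranks back to the partition can be made concrete; a short sketch (hypothetical helper names, added for illustration) implements both directions:

```python
def j_rank(partition, j):
    """Boxes of the Young diagram outside the first j columns."""
    return sum(max(part - j, 0) for part in partition)

def from_j_ranks(ranks):
    """Recover the partition from its sequence of 0-rank, 1-rank, 2-rank, ..."""
    # ranks[j] - ranks[j+1] counts the boxes in column j+1 of the diagram;
    # the partition is the conjugate of this sequence of column sizes.
    cols = [ranks[j] - ranks[j + 1] for j in range(len(ranks) - 1)]
    return [sum(1 for c in cols if c > i) for i in range(cols[0])]

lam = [4, 4, 2, 1]
assert j_rank(lam, 2) == 4   # the 2-rank computed in the text
assert from_j_ranks([j_rank(lam, j) for j in range(5)]) == lam
```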
\begin{Def}
Let $M$ be a $\mathfrak g$-module and let $v \in \PG$. Set $\rank^j(v, M)$ equal to the $j$-rank of the partition $\jtype(v, M)$. The \emph{local $j$-rank} of $M$ is the function
\[\rank^j(-, M)\colon\PG \to \mathbb N_0\]
so defined.
\end{Def}
\begin{Thm}[{\cite[3.2]{starkHo1}}]
Let $M$ be a $\mathfrak g$-module and $U \subseteq \PG$ an open set. The local $j$-rank is constant on $U$ if and only if the restriction $\gcoker[j]{M}|_U$ is a locally free sheaf. When this is the case $\gker[j]{M}|_U$ and $\gim[j]{M}|_U$ are also locally free and $\rank^j(v, M) = \rank\gim[j]{M}$ for all $v \in U$.
\end{Thm}
We will also be interested in the sheaves $\mathscr F_i(M)$ for $1 \leq i \leq p$. These were first defined by Benson and Pevtsova \cite{benPevtVectorBundles} for $kE$-modules where $E$ is an elementary abelian $p$-group.
\begin{Def}
Let $M$ be a $\mathfrak g$-module and $1 \leq i \leq p$ an integer. Then
\[\mathscr F_i(M) = \frac{\gker{M} \cap \gim[i-1]{M}}{\gker{M} \cap \gim[i]{M}}.\]
\end{Def}
We end the section by stating two theorems which not only illustrate the utility of these sheaves but will be used in an essential way in \Cref{secLieEx} when calculating $\mathscr F_i(M)$ where $M$ is a Weyl module for $\slt$. Both theorems were originally published by Benson and Pevtsova \cite{benPevtVectorBundles} but with minor errors. These errors have been corrected in the given reference.
\begin{Thm}[{\cite[3.7]{starkHo1}}] \label{thmOm}
Let $M$ be a $\mathfrak g$-module and $1 \leq i < p$ an integer. Then
\[\mathscr F_i(M) \simeq \mathscr F_{p-i}(\Omega M)(p-i).\]
\end{Thm}
\begin{Thm}[{\cite[3.8]{starkHo1}}] \label{thmFi}
Let $U \subseteq \PG$ be open. The local Jordan type of a $\mathfrak g$-module $M$ is constant on $U$ if and only if the restrictions $\mathscr F_i(M)|_U$ are locally free for all $1 \leq i \leq p$. When this is the case and $a_i = \rank\mathscr F_i(M)$ we have $\jtype(v, M) = [p]^{a_p}[p-1]^{a_{p-1}}\cdots[1]^{a_1}$ for all $v \in U$.
\end{Thm}
\section{The category of $\slt$-modules} \label{secSl2}
The calculations in \Cref{secLieEx} will be based on detailed information about the category of $\slt$-modules, which we develop in this section. The indecomposable $\slt$-modules have been classified; each is one of the following four types: a Weyl module $V(\lambda)$, the dual of a Weyl module $V(\lambda)^\ast$, an indecomposable projective module $Q(\lambda)$, or a non-constant module $\Phi_\xi(\lambda)$. Explicit bases for the first three types are known; we will remind the reader of these formulas and develop similar formulas for the $\Phi_\xi(\lambda)$. We will also calculate the local Jordan type $\jtype(-, M)\colon\mathbb P^1 \to \mathscr P_p$ for each indecomposable $M$. Finally, we will calculate the Heller shifts $\Omega(V(\lambda))$.
We begin by stating the results for each of the four types and the classification. Recall the standard basis for $\slt$ is $\set{e, f, h}$ where
\[e = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad f = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad \text{and} \quad h = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.\]
Let $\lambda$ be a non-negative integer and write $\lambda = rp + a$ where $0 \leq a < p$ is the remainder of $\lambda$ modulo $p$. Each type is parametrized by the choice of $\lambda$, with the parametrization of $\Phi_\xi(\lambda)$ requiring also a choice of $\xi \in \mathbb P^1$. The four types are as follows:
\begin{itemize}
\item The {\bf Weyl modules} $V(\lambda)$.
\begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2}{l}{$\set{v_0, v_1, \ldots, v_\lambda}$} \hspace{150pt} \\
Action: & $ev_i$ & \hspace{-7pt}$= (\lambda - i + 1)v_{i - 1}$ \\
& $fv_i$ & \hspace{-7pt}$= (i + 1)v_{i + 1}$ \\
& $hv_i$ & \hspace{-7pt}$= (\lambda - 2i)v_i$ \\
Graph: & \multicolumn{2}{l}{\Cref{figV}} \\
Local Jordan type: & \multicolumn{2}{l}{Constant Jordan type $[p]^r[a + 1]$}
\end{tabular}
\end{center}
\begin{sidewaysfigure}[p]
\centering
\vspace*{350pt}
\begin{tikzpicture} [description/.style={fill=white,inner sep=2pt}]
\useasboundingbox (-7, -5.5) rectangle (7, 4.2);
\scope[transform canvas={scale=.8}]
\matrix (m) [matrix of math nodes, row sep=31pt,
column sep=40pt, text height=1.5ex, text depth=0.25ex]
{ \\ \\ \\ \underset{v_0}{\bullet} & \underset{v_1}{\bullet} & \underset{v_2}{\bullet} & \underset{v_3}{\bullet} & \cdots & \underset{v_{\lambda - 3}}{\bullet} & \underset{v_{\lambda - 2}}{\bullet} & \underset{v_{\lambda - 1}}{\bullet} & \underset{v_\lambda}{\bullet} \\ \\ \\ \\ \\ \\ \\ \\ \underset{\hat v_0}{\bullet} & \underset{\hat v_1}{\bullet} & \underset{\hat v_2}{\bullet} & \underset{\hat v_3}{\bullet} & \cdots & \underset{\hat v_{\lambda - 3}}{\bullet} & \underset{\hat v_{\lambda - 2}}{\bullet} & \underset{\hat v_{\lambda - 1}}{\bullet} & \underset{\hat v_\lambda}{\bullet} \\};
\path[->,font=\scriptsize]
(m-4-1) edge [bend left=20] node[auto] {$1$} (m-4-2)
(m-4-2) edge [bend left=20] node[auto] {$\lambda$} (m-4-1)
edge [bend left=20] node[auto] {$2$} (m-4-3)
(m-4-3) edge [bend left=20] node[auto] {$\lambda - 1$} (m-4-2)
edge [bend left=20] node[auto] {$3$} (m-4-4)
(m-4-4) edge [bend left=20] node[auto] {$\lambda - 2$} (m-4-3)
edge [bend left=20] node[auto] {$4$} (m-4-5)
(m-4-5) edge [bend left=20] node[auto] {$\lambda - 3$} (m-4-4)
edge [bend left=20] node[auto] {$\lambda - 3$} (m-4-6)
(m-4-6) edge [bend left=20] node[auto] {$4$} (m-4-5)
edge [bend left=20] node[auto] {$\lambda - 2$} (m-4-7)
(m-4-7) edge [bend left=20] node[auto] {$3$} (m-4-6)
edge [bend left=20] node[auto] {$\lambda - 1$} (m-4-8)
(m-4-8) edge [bend left=20] node[auto] {$2$} (m-4-7)
edge [bend left=20] node[auto] {$\lambda$} (m-4-9)
(m-4-9) edge [bend left=20] node[auto] {$1$} (m-4-8);
\draw[<-] (m-4-1) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda$} (m-4-1);
\draw[<-] (m-4-2) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 2$} (m-4-2);
\draw[<-] (m-4-3) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 4$} (m-4-3);
\draw[<-] (m-4-4) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 6$} (m-4-4);
\draw[<-] (m-4-6) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $6 - \lambda$} (m-4-6);
\draw[<-] (m-4-7) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $4 - \lambda$} (m-4-7);
\draw[<-] (m-4-8) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $2 - \lambda$} (m-4-8);
\draw[<-] (m-4-9) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-\lambda$} (m-4-9);
\path[draw] (-4.1, -2) rectangle (3.7, 0);
\draw (-3.5, -1) node {$e$:};
\draw[<-] (-3.2, -1) .. controls +(-20:18pt) and +(200:18pt) .. (-1.7, -1);
\draw (-.5, -1) node {$f$:};
\draw[->] (-.2, -1) .. controls +(20:18pt) and +(160:18pt) .. (1.3, -1);
\draw (2.5, -1) node {$h$:};
\draw[<-] (3.1, -1.5) .. controls +(70:40pt) and +(110:40pt) .. (2.9, -1.5);
\draw (-.5, 5) node {$V(\lambda)$};
\path[->,font=\scriptsize]
(m-12-1) edge [bend left=20] node[auto] {$\lambda$} (m-12-2)
(m-12-2) edge [bend left=20] node[auto] {$1$} (m-12-1)
edge [bend left=20] node[auto] {$\lambda - 1$} (m-12-3)
(m-12-3) edge [bend left=20] node[auto] {$2$} (m-12-2)
edge [bend left=20] node[auto] {$\lambda - 2$} (m-12-4)
(m-12-4) edge [bend left=20] node[auto] {$3$} (m-12-3)
edge [bend left=20] node[auto] {$\lambda - 3$} (m-12-5)
(m-12-5) edge [bend left=20] node[auto] {$4$} (m-12-4)
edge [bend left=20] node[auto] {$4$} (m-12-6)
(m-12-6) edge [bend left=20] node[auto] {$\lambda - 3$} (m-12-5)
edge [bend left=20] node[auto] {$3$} (m-12-7)
(m-12-7) edge [bend left=20] node[auto] {$\lambda - 2$} (m-12-6)
edge [bend left=20] node[auto] {$2$} (m-12-8)
(m-12-8) edge [bend left=20] node[auto] {$\lambda - 1$} (m-12-7)
edge [bend left=20] node[auto] {$1$} (m-12-9)
(m-12-9) edge [bend left=20] node[auto] {$\lambda$} (m-12-8);
\draw[<-] (m-12-1) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda$} (m-12-1);
\draw[<-] (m-12-2) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 2$} (m-12-2);
\draw[<-] (m-12-3) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 4$} (m-12-3);
\draw[<-] (m-12-4) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 6$} (m-12-4);
\draw[<-] (m-12-6) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $6 - \lambda$} (m-12-6);
\draw[<-] (m-12-7) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $4 - \lambda$} (m-12-7);
\draw[<-] (m-12-8) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $2 - \lambda$} (m-12-8);
\draw[<-] (m-12-9) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-\lambda$} (m-12-9);
\draw (-.5, -4) node {$V(\lambda)^\ast$};
\endscope
\end{tikzpicture}
\caption{Graphs of $V(\lambda)$ and $V(\lambda)^\ast$} \label{figV}
\end{sidewaysfigure}
\item The {\bf dual Weyl modules} $V(\lambda)^\ast$.
\begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2}{l}{$\set{\hat v_0, \hat v_1, \ldots, \hat v_\lambda}$} \hspace{150pt} \\
Action: & $e\hat v_i$ & \hspace{-7pt}$= i\hat v_{i - 1}$ \\
& $f\hat v_i$ & \hspace{-7pt}$= (\lambda - i)\hat v_{i + 1}$ \\
& $h\hat v_i$ & \hspace{-7pt}$= (\lambda - 2i)\hat v_i$ \\
Graph: & \multicolumn{2}{l}{\Cref{figV}} \\
Local Jordan type: & \multicolumn{2}{l}{Constant Jordan type $[p]^r[a + 1]$}
\end{tabular}
\end{center}
\item The {\bf projectives} $Q(\lambda)$.
Define $Q(p - 1) = V(p - 1)$. For $0 \leq \lambda < p - 1$ we define $Q(\lambda)$ via
\begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2}{l}{$\set{v_0, v_1, \ldots, v_{2p - \lambda - 2}} \cup \set{w_{p - \lambda - 1}, w_{p - \lambda}, \ldots, w_{p - 1}}$} \hspace{0pt} \\
Action: & $ev_i$ & \hspace{-7pt}$= -(\lambda + i + 1)v_{i - 1}$ \\
& $fv_i$ & \hspace{-7pt}$= (i + 1)v_{i + 1}$ \\
& $hv_i$ & \hspace{-7pt}$= -(\lambda + 2i + 2)v_i$ \\
& $ew_i$ & \hspace{-7pt}$= -(\lambda + i + 1)w_{i - 1} + \frac{1}{i}v_{i - 1}$ \\
& $fw_i$ & \hspace{-7pt}$= (i + 1)w_{i + 1} - \frac{1}{\lambda + 1}\delta_{-1, i}v_p$ \\
& $hw_i$ & \hspace{-7pt}$= -(\lambda + 2i + 2)w_i$ \\
Graph: & \multicolumn{2}{l}{\Cref{figQ}} \\
Local Jordan type: & \multicolumn{2}{l}{Constant Jordan type $[p]^2$}
\end{tabular}
\end{center}
\begin{sidewaysfigure}[p]
\centering
\vspace*{350pt}
\begin{tikzpicture} [description/.style={fill=white,inner sep=2pt}]
\useasboundingbox (-8.5, -5) rectangle (8.5, 3.5);
\scope[transform canvas={scale=.8}]
\matrix (m) [matrix of math nodes, row sep=31pt,
column sep=12pt, text height=1ex, text depth=0.25ex]
{ &&&&& \underset{w_{p - \lambda - 1}}{\bullet} && \underset{w_{p - \lambda}}{\bullet} && \underset{w_{p - \lambda + 1}}{\bullet} && \cdots && \underset{w_{p - 3}}{\bullet} && \underset{w_{p - 2}}{\bullet} && \underset{w_{p - 1}}{\bullet} \\
\underset{v_0}{\bullet} && \cdots && \underset{v_{p - \lambda - 2}}{\bullet} &&&&&&&&&&&&&& \underset{v_p}{\bullet} && \cdots && \underset{v_{2p - \lambda - 2}}{\bullet} \\
&&&&& \underset{v_{p - \lambda - 1}}{\bullet} && \underset{v_{p - \lambda}}{\bullet} && \underset{v_{p - \lambda + 1}}{\bullet} && \cdots && \underset{v_{p - 3}}{\bullet} && \underset{v_{p - 2}}{\bullet} && \underset{v_{p - 1}}{\bullet} \\};
\path[->,font=\scriptsize]
(m-2-1) edge [bend left=20] node[auto] {$1$} (m-2-3)
(m-2-3) edge [bend left=20, shorten >=-7pt] node[auto, xshift=18pt] {$-\lambda - 2$} (m-2-5)
(m-1-6) edge [bend left=20] node[auto, xshift=-10pt] {$-\lambda$} (m-1-8)
(m-1-8) edge [bend left=20] node[auto, xshift=15pt] {$1 - \lambda$} (m-1-10)
(m-1-10) edge [bend left=20] node[auto, xshift=-15pt] {$2 - \lambda$} (m-1-12)
(m-1-12) edge [bend left=20] node[auto] {$-3$} (m-1-14)
(m-1-14) edge [bend left=20] node[auto] {$-2$} (m-1-16)
(m-1-16) edge [bend left=20] node[auto] {$-1$} (m-1-18)
(m-3-6) edge [bend left=20] node[auto, xshift=-10pt] {$-\lambda$} (m-3-8)
(m-3-8) edge [bend left=20] node[auto, xshift=15pt] {$1 - \lambda$} (m-3-10)
(m-3-10) edge [bend left=20] node[auto, xshift=-15pt] {$2 - \lambda$} (m-3-12)
(m-3-12) edge [bend left=20] node[auto] {$-3$} (m-3-14)
(m-3-14) edge [bend left=20] node[auto] {$-2$} (m-3-16)
(m-3-16) edge [bend left=20] node[auto] {$-1$} (m-3-18)
(m-2-3) edge [bend left=20] node[auto] {$-\lambda - 2$} (m-2-1)
(m-2-5) edge [bend left=20] node[auto, xshift=9pt] {$1$} (m-2-3)
(m-1-8) edge [bend left=20] node[auto, xshift=-10pt] {$-1$} (m-1-6)
(m-1-10) edge [bend left=20] node[auto, xshift=10pt] {$-2$} (m-1-8)
(m-1-12) edge [bend left=20] node[auto, xshift=-12pt] {$-3$} (m-1-10)
(m-1-14) edge [bend left=20] node[auto] {$2 - \lambda$} (m-1-12)
(m-1-16) edge [bend left=20] node[auto] {$1 - \lambda$} (m-1-14)
(m-1-18) edge [bend left=20] node[auto] {$-\lambda$} (m-1-16)
(m-3-8) edge [bend left=20] node[auto, xshift=-10pt] {$-1$} (m-3-6)
(m-3-10) edge [bend left=20] node[auto, xshift=10pt] {$-2$} (m-3-8)
(m-3-12) edge [bend left=20] node[auto, xshift=-12pt] {$-3$} (m-3-10)
(m-3-14) edge [bend left=20] node[auto] {$2 - \lambda$} (m-3-12)
(m-3-16) edge [bend left=20] node[auto] {$1 - \lambda$} (m-3-14)
(m-3-18) edge [bend left=20] node[auto] {$-\lambda$} (m-3-16)
(m-2-19) edge [bend left=20] node[auto] {$1$} (m-2-21)
(m-2-21) edge [bend left=20, shorten >=-7pt] node[auto, xshift=18pt] {$-\lambda - 2$} (m-2-23)
(m-2-23) edge [bend left=20] node[auto, xshift=8pt] {$1$} (m-2-21)
(m-2-21) edge [bend left=20] node[auto] {$-\lambda - 2$} (m-2-19)
(m-1-6) edge[shorten <=7pt] node[below, xshift=5pt] {$\frac{-1}{\lambda + 1}$} (m-2-5)
(m-2-5) edge[shorten <=7pt] node[pos=.45, below, xshift=-10pt] {$-\lambda - 1$} (m-3-6)
(m-1-18) edge[shorten <=5pt] node[below, xshift=-5pt] {$\frac{-1}{\lambda + 1}$} (m-2-19)
(m-2-19) edge node[auto, xshift=-5pt] {$-\lambda - 1$} (m-3-18)
(m-1-8) edge[shorten <=5pt] node[pos=.6, above, xshift=-5pt] {$\frac{-1}{\lambda}$} (m-3-6)
(m-1-10) edge[shorten <=6pt] node[pos=.6, above, xshift=-5pt] {$\frac{-1}{\lambda - 1}$} (m-3-8)
(m-1-12) edge node[pos=.6, above, xshift=-5pt] {$\frac{-1}{\lambda - 2}$} (m-3-10)
(m-1-14) edge[shorten <=5pt] node[pos=.6, above, xshift=-5pt] {$-\frac{1}{3}$} (m-3-12)
(m-1-16) edge[shorten <=5pt] node[pos=.6, above, xshift=-5pt] {$-\frac{1}{2}$} (m-3-14)
(m-1-18) edge[shorten <=5pt] node[pos=.6, above, xshift=-5pt] {$-1$} (m-3-16);
\draw[<-] (m-2-1) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-\lambda - 2$} (m-2-1);
\draw[<-] (m-2-5) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda + 2$} (m-2-5);
\draw[<-] (m-2-19) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-\lambda - 2$} (m-2-19);
\draw[<-] (m-2-23) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda + 2$} (m-2-23);
\draw[<-] (m-1-6) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda$} (m-1-6);
\draw[<-] (m-1-8) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 2$} (m-1-8);
\draw[<-] (m-1-10) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $\lambda - 4$} (m-1-10);
\draw[<-] (m-1-14) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $4 - \lambda$} (m-1-14);
\draw[<-] (m-1-16) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $2 - \lambda$} (m-1-16);
\draw[<-] (m-1-18) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-\lambda$} (m-1-18);
\draw[shorten >=5pt, shorten <=5pt, <-] (m-3-6) .. controls +(250:50pt) and +(290:50pt) .. node[pos=.5, below]{\scriptsize $\lambda$} (m-3-6);
\draw[shorten >=5pt, shorten <=5pt, <-] (m-3-8) .. controls +(250:50pt) and +(290:50pt) .. node[pos=.5, below]{\scriptsize $\lambda - 2$} (m-3-8);
\draw[shorten >=5pt, shorten <=5pt, <-] (m-3-10) .. controls +(250:50pt) and +(290:50pt) .. node[pos=.5, below]{\scriptsize $\lambda - 4$} (m-3-10);
\draw[shorten >=5pt, shorten <=5pt, <-] (m-3-14) .. controls +(250:50pt) and +(290:50pt) .. node[pos=.5, below]{\scriptsize $4 - \lambda$} (m-3-14);
\draw[shorten >=5pt, shorten <=5pt, <-] (m-3-16) .. controls +(250:50pt) and +(290:50pt) .. node[pos=.5, below]{\scriptsize $2 - \lambda$} (m-3-16);
\draw[shorten >=5pt, shorten <=5pt, <-] (m-3-18) .. controls +(250:50pt) and +(290:50pt) .. node[pos=.5, below]{\scriptsize $-\lambda$} (m-3-18);
\path[draw] (-5.7, -6) rectangle (5.9, -4);
\draw (-5.1, -5) node {$e$:};
\draw[<-] (-4.8, -5) .. controls +(-20:18pt) and +(200:18pt) .. (-3.3, -5);
\draw (-2.9, -5) node {$+$};
\draw[<-] (-2.8, -5.7) -- (-1.8, -4.3);
\draw (-.9, -5) node {$f$:};
\draw[->] (-.6, -5) .. controls +(20:18pt) and +(160:18pt) .. (1.3, -5);
\draw (1.7, -5) node {$+$};
\draw[->] (1.8, -4.3) -- (2.8, -5.7);
\draw (3.5, -5) node {$h$:};
\draw[<-] (4.2, -5.5) .. controls +(70:40pt) and +(110:40pt) .. (4, -5.5);
\draw (4.7, -5) node {$+$};
\draw[<-] (5.2, -4.5) .. controls +(250:40pt) and +(290:40pt) .. (5.4, -4.5);
\draw (.25, 4) node {$Q(\lambda)$};
\endscope
\end{tikzpicture}
\caption{Graph of $Q(\lambda)$} \label{figQ}
\end{sidewaysfigure}
\item The {\bf non-constant modules} $\Phi_\xi(\lambda)$.
Assume $\lambda \geq p$ and let $\xi \in \mathbb P^1$. If $\xi = [1:\varepsilon]$ then $\Phi_\xi(\lambda)$ is defined by the following, where $\delta_{i \equiv a}$ denotes $1$ if $i \equiv a \pmod p$ and $0$ otherwise.
\begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2}{l}{$\set{w_{a + 1}, w_{a + 2}, \ldots, w_\lambda}$} \hspace{122pt} \\
Action: & $ew_i$ & \hspace{-7pt}$= (i + 1)(w_{i + 1} - \binom{\lambda}{i}\varepsilon^{i - a}\delta_{i \equiv a}w_{a + 1})$ \\
& $fw_i$ & \hspace{-7pt}$= (\lambda - i + 1)w_{i - 1}$ \\
& $hw_i$ & \hspace{-7pt}$= (2i - \lambda)w_i$ \\
Graph: & \multicolumn{2}{l}{\Cref{figPhi}} \\
Local Jordan type: & \multicolumn{2}{l}{$[p]^{r-1}[p - a - 1][a + 1]$ at $\xi$ and $[p]^r$ elsewhere}
\end{tabular}
\end{center}
If $\xi = [0:1]$ then $\Phi_\xi(\lambda)$ is defined to be the submodule of $V(\lambda)$ spanned by the basis elements $\set{v_{a + 1}, v_{a + 2}, \ldots, v_\lambda}$. It is also depicted in \Cref{figPhi} and has the same local Jordan type as above.
\begin{sidewaysfigure}[p]
\centering
\vspace*{350pt}
\begin{tikzpicture} [description/.style={fill=white,inner sep=2pt}]
\useasboundingbox (-9, -5.7) rectangle (8, 4.8);
\scope[transform canvas={scale=.8}]
\matrix (m) [matrix of math nodes, row sep=50pt,
column sep=20pt, text height=1ex, text depth=0.25ex]
{ & {} & {} &&&&&& {} &&& \\ \underset{w_\lambda}{\bullet} & \cdots & \underset{w_{qp + a + 1}}{\bullet} & \underset{w_{qp + a}}{\bullet} & \cdots & \underset{w_{qp}}{\bullet} & \underset{w_{qp - 1}}{\bullet} & \cdots & \underset{w_{(q - 1)p + a + 1}}{\bullet} & \underset{w_{(q - 1)p + a}}{\bullet} & \cdots & \underset{w_{a + 1}}{\bullet} \\ \\ \\ \\ \\ \\ && \underset{v_{a + 1}}{\bullet} & \underset{v_{a + 2}}{\bullet} & \underset{v_{a + 3}}{\bullet} & \underset{v_{a + 4}}{\bullet} & \cdots & \underset{v_{\lambda - 2}}{\bullet} & \underset{v_{\lambda - 1}}{\bullet} & \underset{v_{\lambda}}{\bullet} \\};
\path[->,font=\scriptsize]
(m-2-12) edge [bend left=20] node[auto] {$a + 2$} (m-2-11)
(m-2-11) edge [bend left=20, shorten >=3pt] node[auto, xshift=-5pt] {$a$} (m-2-10)
(m-2-10) edge [bend left=20, shorten >=7pt, shorten <=3pt] node[auto] {$a + 1$} (m-2-9)
(m-2-9) edge [bend left=20, shorten <=7pt] node[auto, xshift=7pt] {$a + 2$} (m-2-8)
(m-2-8) edge [bend left=20] node[auto, xshift=-10pt] {$-1$} (m-2-7)
(m-2-6) edge [bend left=20] node[auto] {$1$} (m-2-5)
(m-2-5) edge [bend left=20] node[auto, xshift=-5pt] {$a$} (m-2-4)
(m-2-4) edge [bend left=20] node[auto] {$a + 1$} (m-2-3)
(m-2-3) edge [bend left=20] node[auto, xshift=13pt] {$a + 2$} (m-2-2)
(m-2-2) edge [bend left=20] node[auto] {$a$} (m-2-1)
(m-2-1) edge [bend left=20] node[auto] {$1$} (m-2-2)
(m-2-2) edge [bend left=20, shorten >=-10pt] node[auto, xshift=12pt] {$-1$} (m-2-3)
(m-2-4) edge [bend left=20, shorten <=-7pt] node[auto, xshift=-7pt] {$1$} (m-2-5)
(m-2-5) edge [bend left=20, shorten >=-3pt] node[auto] {$a$} (m-2-6)
(m-2-6) edge [bend left=20, shorten <=-3pt, shorten >=-7pt] node[auto, xshift=13pt] {$a + 1$} (m-2-7)
(m-2-7) edge [bend left=20, shorten <=-7pt] node[auto, xshift=-15pt] {$a + 2$} (m-2-8)
(m-2-8) edge [bend left=20, shorten >=-10pt] node[auto, xshift=12pt] {$-1$} (m-2-9)
(m-2-10) edge [bend left=20, shorten <=-10pt] node[auto, xshift=-10pt] {$1$} (m-2-11)
(m-2-11) edge [bend left=20] node[auto] {$-1$} (m-2-12);
\draw[<-] (m-2-12) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $a + 2$} (m-2-12);
\draw[<-] (m-2-10) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $a$} (m-2-10);
\draw[<-] (m-2-9) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $a + 2$} (m-2-9);
\draw[<-] (m-2-7) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-(a + 2)$} (m-2-7);
\draw[<-] (m-2-6) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-a$} (m-2-6);
\draw[<-] (m-2-4) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $a$} (m-2-4);
\draw[<-] (m-2-3) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $a + 2$} (m-2-3);
\draw[<-] (m-2-1) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $a$} (m-2-1);
\draw[shorten >=5pt, ->] (m-2-1) .. controls +(210:170pt) and +(250:150pt) .. node[pos=.21, below, xshift=-20pt]{\scriptsize $-(a + 1)\varepsilon^{rp}$} (m-2-12);
\draw[shorten >=5pt, shorten <=5pt, ->] (m-2-4) .. controls +(220:130pt) and +(250:120pt) .. node[pos=.2, below, xshift=-30pt]{\scriptsize $-(a + 1)\binom{r}{q}\varepsilon^{qp}$} (m-2-12);
\draw[shorten >=5pt, shorten <=8pt, ->] (m-2-10) .. controls +(220:70pt) and +(250:70pt) .. node[pos=.3, below, xshift=-43pt]{\scriptsize $-(a + 1)\binom{r}{q - 1}\varepsilon^{(q - 1)p}$} (m-2-12);
\path[draw] (-6.2, -2.5) rectangle (4.9, -.5);
\draw (-5.6, -1.5) node {$e$:};
\draw[<-] (-5.3, -1.5) .. controls +(-20:18pt) and +(200:18pt) .. (-3.8, -1.5);
\draw (-3.4, -1.5) node {$+$};
\draw[->] (-2.8, -1.2) .. controls +(230:40pt) and +(250:30pt) .. (-1.8, -1.2);
\draw (-.3, -1.5) node {$f$:};
\draw[->] (0, -1.5) .. controls +(20:18pt) and +(160:18pt) .. (1.9, -1.5);
\draw (3.5, -1.5) node {$h$:};
\draw[<-] (4.2, -2) .. controls +(70:40pt) and +(110:40pt) .. (4, -2);
\draw (-.5, 7) node {$\Phi_{[1:\varepsilon]}(\lambda)$};
\draw (.4, -4) node {$\Phi_{[0:1]}(\lambda)$};
\path[->,font=\scriptsize]
(m-8-3) edge [bend left=20] node[auto] {$a + 2$} (m-8-4)
(m-8-4) edge [bend left=20] node[auto] {$-1$} (m-8-3)
edge [bend left=20] node[auto] {$a + 3$} (m-8-5)
(m-8-5) edge [bend left=20] node[auto] {$-2$} (m-8-4)
edge [bend left=20] node[auto] {$a + 4$} (m-8-6)
(m-8-6) edge [bend left=20] node[auto] {$-3$} (m-8-5)
edge [bend left=20] node[auto] {$a + 5$} (m-8-7)
(m-8-7) edge [bend left=20] node[auto] {$-4$} (m-8-6)
edge [bend left=20] node[auto] {$\lambda - 2$} (m-8-8)
(m-8-8) edge [bend left=20] node[auto] {$3$} (m-8-7)
edge [bend left=20] node[auto] {$\lambda - 1$} (m-8-9)
(m-8-9) edge [bend left=20] node[auto] {$2$} (m-8-8)
edge [bend left=20] node[auto] {$\lambda$} (m-8-10)
(m-8-10) edge [bend left=20] node[auto] {$1$} (m-8-9);
\draw[<-] (m-8-3) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-(a + 2)$} (m-8-3);
\draw[<-] (m-8-4) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-(a + 4)$} (m-8-4);
\draw[<-] (m-8-5) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-(a + 6)$} (m-8-5);
\draw[<-] (m-8-6) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-(a + 8)$} (m-8-6);
\draw[<-] (m-8-8) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-(a - 4)$} (m-8-8);
\draw[<-] (m-8-9) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-(a - 2)$} (m-8-9);
\draw[<-] (m-8-10) .. controls +(70:50pt) and +(110:50pt) .. node[pos=.5, above]{\scriptsize $-a$} (m-8-10);
\endscope
\end{tikzpicture}
\caption{Graphs of $\Phi_\xi(\lambda)$} \label{figPhi}
\end{sidewaysfigure}
\end{itemize}
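The constant Jordan types listed in the tables above can be checked computationally: the matrix of $e$ acting on $V(\lambda)$ is strictly upper triangular, and its Jordan type over $\mathbb F_p$ is determined by the ranks of its powers. The following sketch (a sanity check with the hypothetical parameter $p = 5$, not part of the classification itself) recovers the type $[p]^r[a + 1]$ at the point $ke$ for small $\lambda$:

```python
p = 5  # hypothetical small prime

def weyl_e_matrix(lam):
    # matrix of e acting on V(lam) over F_p in the basis v_0, ..., v_lam,
    # where e v_i = (lam - i + 1) v_{i-1}
    n = lam + 1
    M = [[0] * n for _ in range(n)]
    for i in range(1, n):
        M[i - 1][i] = (lam - i + 1) % p
    return M

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def rank_mod_p(M):
    # rank over F_p by Gaussian elimination
    M = [row[:] for row in M]
    n, rank = len(M), 0
    for col in range(n):
        piv = next((r for r in range(rank, n) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)
        for r in range(n):
            if r != rank and M[r][col] % p:
                f = M[r][col] * inv % p
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def jordan_type(E):
    # block sizes of a nilpotent matrix from ranks of its powers:
    # the number of blocks of size >= s is rank(E^{s-1}) - rank(E^s)
    n = len(E)
    ranks = [n]
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    while ranks[-1] > 0:
        P = mat_mul(P, E)
        ranks.append(rank_mod_p(P))
    blocks = []
    for s in range(1, len(ranks)):
        geq_s = ranks[s - 1] - ranks[s]
        geq_s1 = ranks[s] - ranks[s + 1] if s + 1 < len(ranks) else 0
        blocks += [s] * (geq_s - geq_s1)
    return sorted(blocks, reverse=True)

for lam in range(2 * p):
    r, a = divmod(lam, p)
    assert jordan_type(weyl_e_matrix(lam)) == sorted([p] * r + [a + 1], reverse=True)
```

The same rank computation applies to the matrix of any nilpotent element, which is how local Jordan types are computed at other points.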
\begin{Thm}[\cite{premetSl2}] \label{thmPremet}
Each of the following modules is indecomposable:
\begin{itemize}
\item $V(\lambda)$ and $Q(\lambda)$ for $0 \leq \lambda < p$.
\item $V(\lambda)$ and $V(\lambda)^\ast$ for $\lambda \geq p$ such that $p \nmid \lambda + 1$.
\item $\Phi_\xi(\lambda)$ for $\xi \in \mathbb P^1$ and $\lambda \geq p$ such that $p \nmid \lambda + 1$.
\end{itemize}
Moreover, these modules are pairwise non-isomorphic, save $Q(p-1) = V(p-1)$, and give a complete classification of the indecomposable restricted $\slt$-modules.
\end{Thm}
As stated before, explicit bases for $V(\lambda)$, $V(\lambda)^\ast$, and $Q(\lambda)$ are known; see, for example, Benkart and Osborn \cite{benkartSl2reps}. For the local Jordan type of $V(\lambda)$ and $V(\lambda)^\ast$, note that the matrix describing the action of $e$ with respect to the given basis is almost in Jordan normal form; one need only scale the basis elements appropriately. We can therefore read off the local Jordan type at the point $ke \in \PG[\slt]$ immediately, and \cref{thmRatCJT} gives that these modules have constant Jordan type. As \cref{thmProjCJT} gives the local Jordan type of the $Q(\lambda)$, all that is left is to justify the explicit description of $\Phi_\xi(\lambda)$ and its local Jordan type.
First we recall the definition of $\Phi_\xi(\lambda)$. Let $V = k^2$ be the standard representation of $\SL_2$ and let $V^\ast$ be its dual, a $2$-dimensional representation with basis $\set{x, y}$ dual to the standard basis for $V$. The induced representation on the symmetric algebra $S(V^\ast)$ is degree preserving and the dual $S^\lambda(V^\ast)^\ast$ of the degree $\lambda$ subspace is the Weyl module $V(\lambda)$. Specifically, we let $v_i \in V(\lambda)$ be dual to $x^{\lambda - i}y^i$.
Let $B_2 \subseteq \SL_2$ be the Borel subgroup of upper triangular matrices and recall that the homogeneous space $\SL_2/B_2$ is isomorphic to $\mathbb P^1$ as a variety; the map $\phi\colon \mathbb P^1 \to \SL_2$ given by
\[[1:\varepsilon] \mapsto \begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix} \qquad \text{ and } \qquad [0:1] \mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\]
factors to an explicit isomorphism $\mathbb P^1 \overset{\sim}{\to} \SL_2/B_2$.
\begin{Def}[\cite{premetSl2}]
Let $\Phi(\lambda)$ be the $\slt$-submodule of $V(\lambda)$ spanned by the vectors $\set{v_{a + 1}, v_{a + 2}, \ldots, v_\lambda}$. Given $\xi \in \mathbb P^1$ we define $\Phi_\xi(\lambda)$ to be the $\slt$-module $\phi(\xi)\Phi(\lambda)$.
\end{Def}
Observe first that $\Phi_{[0:1]}(\lambda) = \Phi(\lambda)$, so in this case we have the desired description. Now assume $\xi = [1:\varepsilon]$ where $\varepsilon \in k$. As $\phi(\xi)$ is invertible, multiplication by it is an isomorphism, so $\Phi_\xi(\lambda)$ has basis $\set{\phi(\xi)v_i}$. Our basis for $\Phi_\xi(\lambda)$ will be obtained from this one by what is essentially a row reduction; to proceed we now compute the action of $\SL_2$ on $V(\lambda)$. Observe:
\begin{align*}
\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}v_i\right)(x^{\lambda - j}y^j) &= v_i\left(\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}x^{\lambda - j}y^j\right) \\
&= v_i\left(\sum_{s = 0}^{\lambda - j}\sum_{t = 0}^j\binom{\lambda - j}{s}\binom{j}{t}a^sb^{\lambda - j - s}c^td^{j - t}x^{s + t}y^{\lambda - s - t}\right) \\
&= \sum\binom{\lambda - j}{s}\binom{j}{t}a^sb^{\lambda - j - s}c^td^{j - t}
\end{align*}
where the sum is over pairs $(s, t) \in \mathbb N^2$ such that $0 \leq s \leq \lambda - j$, $0 \leq t \leq j$, and $s + t = \lambda - i$. Such pairs are of the form $(\lambda - i - t, t)$ where $t$ ranges from $\max(0, j - i)$ to $\min(j, \lambda - i)$; therefore
\[\begin{bmatrix} a & b \\ c & d \end{bmatrix}v_i = \sum_{j = 0}^\lambda\sum_{t = \max(0, j - i)}^{\min(j, \lambda - i)}\binom{\lambda - j}{\lambda - i - t}\binom{j}{t}a^{\lambda - i - t}b^{t + i - j}c^td^{j - t}v_j.\]
For computing $\Phi_\xi(\lambda)$ we will need only the following special case:
\[\phi(\xi)v_i = \begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix}v_i = \sum_{j = \lambda - i}^\lambda(-1)^j\binom{j}{\lambda - i}\varepsilon^{i + j - \lambda}v_j.\]
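As a sanity check on this specialization, the following sketch (with the hypothetical test values $\lambda = 7$ and $\varepsilon = 3$) substitutes $a = 0$, $b = 1$, $c = -1$, $d = -\varepsilon$ into the general formula and confirms that the displayed coefficients come out, already as an identity over $\mathbb Z$:

```python
from math import comb

def general_coeff(lam, i, j, a, b, c, d):
    # coefficient of v_j in [[a, b], [c, d]] . v_i, per the general formula
    total = 0
    for t in range(max(0, j - i), min(j, lam - i) + 1):
        total += (comb(lam - j, lam - i - t) * comb(j, t)
                  * a**(lam - i - t) * b**(t + i - j) * c**t * d**(j - t))
    return total

lam, eps = 7, 3  # hypothetical small test values
for i in range(lam + 1):
    for j in range(lam + 1):
        # special case: coefficient of v_j is (-1)^j C(j, lam-i) eps^(i+j-lam)
        expected = (-1)**j * comb(j, lam - i) * eps**(i + j - lam) if j >= lam - i else 0
        assert general_coeff(lam, i, j, 0, 1, -1, -eps) == expected
```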
\begin{Prop} \label{propBas}
Given $i = qp + b$, $0 \leq b < p$, define
\[w_i = \begin{cases} v_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}v_{\lambda - b} & \text{if} \ \ b \leq a \\ v_{\lambda - i} & \text{if} \ \ b > a. \end{cases}\]
Then the vectors $w_{a + 1}, w_{a + 2}, \ldots, w_\lambda$ form a basis of $\Phi_\xi(\lambda)$.
\end{Prop}
\begin{proof}
We will prove by induction that for all $a + 1 \leq i \leq \lambda$ we have $\spn_k\set{\phi(\xi)v_{a + 1}, \ldots, \phi(\xi)v_i} = \spn_k\set{w_{a + 1}, \ldots, w_i}$. For the base case the formula above gives
\begin{align*}
\phi(\xi)v_{a + 1} &= \sum_{j = rp - 1}^\lambda(-1)^j\binom{j}{rp - 1}\varepsilon^{j - rp + 1}v_j \\
&= (-1)^{rp - 1}\binom{rp - 1}{rp - 1}v_{rp - 1} \\
&= (-1)^{rp - 1}w_{a + 1}
\end{align*}
where the second equality holds because $\binom{j}{rp - 1}$ vanishes modulo $p$ for $rp - 1 < j \leq \lambda$, by Lucas' theorem. Thus the statement holds in the base case.
For the inductive step we assume the statement holds for integers less than $i$. Then by hypothesis we have
\[\spn_k\set{\phi(\xi)v_{a + 1}, \ldots, \phi(\xi)v_i} = \spn_k\set{w_{a + 1}, \ldots, w_{i - 1}, \phi(\xi)v_i}\]
and can replace $\phi(\xi)v_i$ with the vector
\[w' = (-1)^{\lambda - i}\phi(\xi)v_i - \sum_{j = a + 1}^{i - 1}(-1)^{i - j}\binom{\lambda - j}{\lambda - i}\varepsilon^{i - j}w_j\]
to get another spanning set. We then show that $w' = w_i$ by checking that the coordinates of the two vectors agree. Note that for $j < \lambda - i$ the coefficient of $v_j$ in each term of $w'$ is zero, as it is in $w_i$. The coefficient of $v_{\lambda - i}$ in $(-1)^{\lambda - i}\phi(\xi)v_i$ is $1$ and in each $w_j$, $a + 1 \leq j < i$, it is zero, hence the coefficient in $w'$ is $1$, as it is in $w_i$.
Next assume $\lambda - i < j < rp$. Then only $\phi(\xi)v_i$ and $w_{\lambda - j}$ contribute a $v_j$ term therefore the coefficient of $v_j$ in $w'$ is
\[(-1)^{\lambda - i + j}\binom{j}{\lambda - i}\varepsilon^{i + j - \lambda} - (-1)^{i + j - \lambda}\binom{j}{\lambda - i}\varepsilon^{i + j - \lambda} = 0\]
which again agrees with $w_i$. All that is left is to check the coefficients of $v_{rp}, v_{rp + 1}, \ldots, v_\lambda$. Note that $w_t$ has a $v_{rp + j}$ term only if
\[t = p + a - j, 2p + a - j, \ldots, \lambda - j\]
and the coefficient of $v_{rp + j}$ in $w_{tp + a - j}$, for $1 \leq t \leq r$, is
\[-\binom{r}{t}\varepsilon^{tp}.\]
Thus the coefficient of $v_{rp + j}$ in $w'$ is
\[(-1)^{a - i - j}\binom{rp + j}{\lambda - i}\varepsilon^{i + j - a} + \sum(-1)^{i + j - tp - a}\binom{r}{t}\binom{(r - t)p + j}{\lambda - i}\varepsilon^{i + j - a}\]
where the sum is over those $t$ such that $1 \leq t \leq r$ and $tp + a \leq i + j - 1$.
From here there are several cases. Assume first that $a < b$. Then from $b < p$ we get $p + a - b > a \geq j$, hence any binomial coefficient whose top number is congruent to $j$ and whose bottom number is congruent to $a - b$ modulo $p$ will be zero, because there will be a carry. Both $\binom{rp + j}{\lambda - i}$ and $\binom{(r - t)p + j}{\lambda - i}$ satisfy this condition, therefore the coefficient of $v_{rp + j}$ in $w'$ is zero. Thus if $a < b$ then we have $w' = w_i$.
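The carry argument used here is Lucas' theorem: if $m = \sum m_ip^i$ and $n = \sum n_ip^i$ in base $p$, then $\binom{m}{n} \equiv \prod_i\binom{m_i}{n_i} \pmod p$, which vanishes exactly when some digit $n_i$ exceeds $m_i$, that is, when adding $n$ and $m - n$ in base $p$ produces a carry. A short computational check of this fact (with the hypothetical prime $p = 5$):

```python
from math import comb

def lucas_binom_mod_p(m, n, p):
    # binom(m, n) mod p computed digit by digit in base p (Lucas' theorem);
    # a digit of n exceeding the matching digit of m forces the product to 0
    result = 1
    while m or n:
        result = result * comb(m % p, n % p) % p
        m, n = m // p, n // p
    return result

p = 5  # hypothetical small prime
for m in range(75):
    for n in range(m + 1):
        assert comb(m, n) % p == lucas_binom_mod_p(m, n, p)
```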
Next assume that $b \leq a$. Then the formula above for the coefficient of $v_{rp + j}$ in $w'$ becomes
\begin{align*}
&(-1)^{a - i - j}\left[\binom{r}{q} + \sum(-1)^{tp}\binom{r}{t}\binom{r - t}{r - q}\right]\binom{j}{a - b}\varepsilon^{i + j - a} \\
&\qquad = (-1)^{a - i - j}\left[\binom{r}{q} + \sum(-1)^t\binom{r}{q}\binom{q}{t}\right]\binom{j}{a - b}\varepsilon^{i + j - a} \\
&\qquad = (-1)^{a - i - j}\left[1 + \sum(-1)^t\binom{q}{t}\right]\binom{r}{q}\binom{j}{a - b}\varepsilon^{i + j - a}
\end{align*}
where the sum is over the same $t$ as above. If $j < a - b$ then $\binom{j}{a - b} = 0$ and the coefficient is zero. If $j > a - b$ then the sum is over $t = 1, 2, \ldots, q$ and
\[1 + \sum_{t = 1}^q(-1)^t\binom{q}{t} = \sum_{t = 0}^q(-1)^t\binom{q}{t} = 0\]
so the coefficient is again zero. Hence among $v_{rp}, v_{rp + 1}, \ldots, v_\lambda$ only $v_{\lambda - b}$, corresponding to $j = a - b$, occurs as a term in $w'$. In that case the sum is over $t = 1, 2, \ldots, q - 1$ and
\[1 + \sum_{t = 1}^{q - 1}(-1)^t\binom{q}{t} = (-1)^{q + 1} + \sum_{t = 0}^q(-1)^t\binom{q}{t} = (-1)^{q + 1}\]
so the coefficient is
\[(-1)^{a - i - (a - b) + q + 1}\binom{r}{q}\varepsilon^{i + (a - b) - a} = -\binom{r}{q}\varepsilon^{qp}\]
as desired. Thus $w' = w_i$ and the proof is complete.
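The manipulation of binomial coefficients above used $(-1)^{tp} = (-1)^t$ (as $p$ is odd) together with the subset-of-a-subset identity $\binom{r}{t}\binom{r - t}{r - q} = \binom{r}{q}\binom{q}{t}$, valid for $0 \leq t \leq q \leq r$; a quick numerical check of the identity:

```python
from math import comb

# Both sides count pairs T ⊆ Q of subsets of an r-element set with
# |T| = t and |Q| = q: the left side picks T first and then the
# complement of Q, the right side picks Q first and then T inside it.
for r in range(13):
    for q in range(r + 1):
        for t in range(q + 1):
            assert comb(r, t) * comb(r - t, r - q) == comb(r, q) * comb(q, t)
```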
\end{proof}
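\Cref{propBas} can also be verified numerically for particular parameters. The sketch below (with the hypothetical values $p = 5$, $\lambda = 13$, $\varepsilon = 2$, so $r = 2$ and $a = 3$) row reduces both $\set{\phi(\xi)v_i}$ and $\set{w_i}$ over $\mathbb F_p$ and checks that they span the same $(\lambda - a)$-dimensional subspace:

```python
from math import comb

p, lam, eps = 5, 13, 2   # hypothetical values: r = 2, a = 3, xi = [1:2]
r, a = divmod(lam, p)

def phi_v(i):
    # coordinates of phi(xi) v_i in the basis v_0, ..., v_lam, mod p
    row = [0] * (lam + 1)
    for j in range(lam - i, lam + 1):
        row[j] = (-1)**j * comb(j, lam - i) * eps**(i + j - lam) % p
    return row

def w(i):
    # the basis vectors of the proposition, with i = qp + b
    q, b = divmod(i, p)
    row = [0] * (lam + 1)
    row[lam - i] = 1
    if b <= a:
        row[lam - b] = (row[lam - b] - comb(r, q) * eps**(q * p)) % p
    return row

def row_space(rows):
    # reduced row echelon form over F_p; equal spans give equal outputs
    rows = [rw[:] for rw in rows]
    n, rank = len(rows[0]), 0
    for col in range(n):
        piv = next((k for k in range(rank, len(rows)) if rows[k][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for k in range(len(rows)):
            if k != rank and rows[k][col] % p:
                f = rows[k][col]
                rows[k] = [(x - f * y) % p for x, y in zip(rows[k], rows[rank])]
        rank += 1
    return [tuple(rw) for rw in rows[:rank]]

A = row_space([phi_v(i) for i in range(a + 1, lam + 1)])
B = row_space([w(i) for i in range(a + 1, lam + 1)])
assert A == B and len(A) == lam - a
```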
Now that we have a basis, it is straightforward to determine the action.
\begin{Prop}
Let $i = qp + b$, $a + 1 \leq i \leq \lambda$. Then
\begin{align*}
ew_i &= \begin{cases} (i + 1)\left(w_{i + 1} - \binom{\lambda}{i}\varepsilon^{qp}w_{b + 1}\right) & \text{if} \ \ a = b \\ (i + 1)w_{i + 1} & \text{if} \ \ a \neq b \end{cases} \\
fw_i &= (\lambda - i + 1)w_{i - 1} \\
hw_i &= (2i - \lambda)w_i
\end{align*}
where $w_a = w_{\lambda + 1} = 0$.
\end{Prop}
\begin{proof}
The proof is a case-by-case analysis. We start with $e \in \slt$. If $b < a$ then
\[ew_i = ev_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}ev_{\lambda - b} = (i + 1)v_{\lambda - i - 1} - (b + 1)\binom{r}{q}\varepsilon^{qp}v_{\lambda - b - 1} = (i + 1)w_{i + 1}.\]
If $b = a$ then
\[ew_i = (i + 1)v_{\lambda - i - 1} - (b + 1)\binom{r}{q}\varepsilon^{qp}v_{\lambda - b - 1} = (i + 1)\left(w_{i + 1} - \binom{\lambda}{i}\varepsilon^{qp}w_{b + 1}\right).\]
If $p - 1 > b > a$ then
\[ew_i = ev_{\lambda - i} = (i + 1)v_{\lambda - i - 1} = (i + 1)w_{i + 1}.\]
Finally if $b = p - 1$ then
\[ew_i = (i + 1)v_{\lambda - i - 1} = 0.\]
All the above cases fit the given formula so we are done with $e$. Next consider $f \in \slt$. If $0 < b \leq a$ then
\begin{align*}
fw_i &= fv_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}fv_{\lambda - b} \\
&= (\lambda - i + 1)v_{\lambda - i + 1} - (\lambda - b + 1)\binom{r}{q}\varepsilon^{qp}v_{\lambda - b + 1} \\
&= (\lambda - i + 1)w_{i - 1}.
\end{align*}
If $b = 0$ then
\[fw_i = fv_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}fv_\lambda = (\lambda + 1)v_{\lambda - i + 1} = (\lambda - i + 1)w_{i - 1}.\]
If $a + 1 = b$ then
\[fw_i = fv_{\lambda - i} = (\lambda - i + 1)v_{\lambda - i + 1} = (r - q)pv_{\lambda - i + 1} = 0.\]
Finally if $b > a + 1$ then
\[fw_i = (\lambda - i + 1)v_{\lambda - i + 1} = (\lambda - i + 1)w_{i - 1}.\]
All the above cases fit the given formula so we are done with $f$. Last but not least consider $h \in \slt$. If $b \leq a$ then
\[hw_i = hv_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}hv_{\lambda - b} = (2i - \lambda)v_{\lambda - i} - (2b - \lambda)\binom{r}{q}\varepsilon^{qp}v_{\lambda - b} = (2i - \lambda)w_i.\]
If $b > a$ then
\[hw_i = hv_{\lambda - i} = (2i - \lambda)v_{\lambda - i} = (2i - \lambda)w_i.\]
\end{proof}
Lastly we calculate that the Jordan type is as stated: $[p]^{r-1}[p - a - 1][a + 1]$ at $\xi$ and $[p]^r$ elsewhere. First note that the result holds for $\Phi_{[0:1]}(\lambda)$ by \cref{lemBJType}; note furthermore that the point $[0:1] \in \mathbb P^1$, at which the Jordan type is $[p]^{r - 1}[p - a - 1][a + 1]$, corresponds to the line through $f \in \mathcal N_p(\slt)$ under the map
\begin{align*}
\iota\colon\mathbb P^1 &\to \PG[\slt] \\
[s : t] &\mapsto \begin{bmatrix} st & s^2 \\ -t^2 & -st \end{bmatrix}
\end{align*}
from \cref{exNslt}. Let
\[\ad\colon\SL_2 \to \End(\slt)\]
be the adjoint action of $\SL_2$ on $\slt$. As $V(\lambda)$ is a rational $\SL_2$-module this satisfies
\[\ad(g)(E)\cdot m = g\cdot(E\cdot(g^{-1}\cdot m))\]
for all $g \in \SL_2$, $E \in \slt$, and $m \in V(\lambda)$. Along with $\Phi_{\xi}(\lambda) = \phi(\xi)\Phi_{[0:1]}(\lambda)$ this implies commutativity of the following diagram:
\[\xymatrix@C=50pt{\Phi_{[0:1]}(\lambda) \ar[r]^-{\phi(\xi)} \ar[d]_{\hspace*{40pt}E} & \Phi_\xi(\lambda) \ar[d]^{\ad(\phi(\xi))(E)} \\ \Phi_{[0:1]}(\lambda) \ar[r]_-{\phi(\xi)} & \Phi_\xi(\lambda)}\]
As multiplication by $\phi(\xi)$ is an isomorphism, letting $E$ range over $\mathcal N_p(\slt)$ we see that the module $\Phi_\xi(\lambda)$ has Jordan type $[p]^{r - 1}[p - a - 1][a + 1]$ at $\ad(\phi(\xi))(f)$ and $[p]^r$ elsewhere. Then we simply calculate
\begin{align*}
\ad(\phi(\xi))(f) &= \begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix}^{-1} \\
&= \begin{bmatrix} -\varepsilon & -1 \\ \varepsilon^2 & \varepsilon \end{bmatrix}
\end{align*}
and observe that, as an element of $\PG[\slt]$, this is $\iota([1:\varepsilon])$.
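Indeed, substituting $[s : t] = [1 : \varepsilon]$ into $\iota$ gives
\[\iota([1 : \varepsilon]) = \begin{bmatrix} \varepsilon & 1 \\ -\varepsilon^2 & -\varepsilon \end{bmatrix} = -\begin{bmatrix} -\varepsilon & -1 \\ \varepsilon^2 & \varepsilon \end{bmatrix},\]
which differs from $\ad(\phi(\xi))(f)$ only by the scalar $-1$ and hence defines the same point of $\PG[\slt]$.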
Thus we now have a complete description of the indecomposable $\slt$-modules. We finish this section with one more computation that will be needed in \Cref{secLieEx}: the Heller shifts $\Omega(V(\lambda))$ of the indecomposable modules $V(\lambda)$. Note that $V(p-1) = Q(p-1)$ is projective so $\Omega(V(p-1)) = 0$. For other $V(\lambda)$ we have the following.
\begin{Prop} \label{propOmega}
Let $\lambda = rp + a$ be a non-negative integer and $0 \leq a < p$ its remainder modulo $p$. If $a \neq p - 1$ then $\Omega(V(\lambda)) = V((r + 2)p - a - 2)$.
\end{Prop}
\begin{proof}
This will be a direct computation. We will determine the projective cover $f\colon P \to V(\lambda)$ and then impose the condition $f(x) = 0$ on an arbitrary element $x \in P$. This gives the relations determining $\ker f = \Omega(V(\lambda))$, which we then convert into a basis and identify with $V((r + 2)p - a - 2)$.
The indecomposable summands of $P$ are in bijective correspondence with the indecomposable summands (all simple) of the top of $V(\lambda)$, i.e.\ $V(\lambda)/\rad V(\lambda)$. This correspondence is given as follows: if $\pi_q\colon V(\lambda) \to V(a)$ is the projection onto a summand of the top of $V(\lambda)$ then the projective cover $\phi_a\colon Q(a) \to V(a)$ factors through $\pi_q$.
\[\xymatrix{Q(a) \ar[rr]^{\phi_a} \ar@{-->}[dr]_{f_q} && V(a) \\ & V(\lambda) \ar[ur]_{\pi_q}}\]
The map $f_q\colon Q(a) \to V(\lambda)$ so defined is the restriction of $f\colon P \to V(\lambda)$ to the summand $Q(a)$ of $P$.
The module $V(\lambda)$ fits into a short exact sequence
\[0 \to V(p - a - 2)^{\oplus r + 1} \to V(\lambda) \overset{\pi}{\to} V(a)^{\oplus r + 1} \to 0\]
where $\pi$ has components $\pi_q$ for $q = 0, 1, \ldots, r$. Each $\pi_q\colon V(\lambda) \to V(a)$ is given by
\[v_i \mapsto \begin{cases} v_{i - qp} & \text{if} \ qp \leq i \leq qp + a \\ 0 & \text{otherwise.} \end{cases}\]
Hence the top of $V(\lambda)$ is $V(a)^{\oplus r + 1}$ and $P = Q(a)^{\oplus r + 1}$. Recall that $Q(a)$ has basis $\set{v_0, v_1, \ldots, v_{2p - a - 2}} \cup \set{w_{p - a - 1}, w_{p - a}, \ldots, w_{p - 1}}$. The map $f_q$ is uniquely determined up to a nonzero scalar and is given by
\begin{align*}
v_i & \mapsto -(a + 1)^2\binom{p - i - 1}{a + 1}v_{(q - 1)p + a + i + 1} && \text{if} \ 0 \leq i \leq p - a - 2, \\
w_i & \mapsto (-1)^{i + a}\binom{a}{i + a + 1 - p}^{-1}v_{(q - 1)p + a + i + 1} && \text{if} \ p - a - 1 \leq i \leq p - 1, \\
v_i & \mapsto 0 && \text{if} \ p - a - 1 \leq i \leq p - 1, \\
v_i & \mapsto (-1)^{a + 1}(a + 1)^2\binom{i + a + 1 - p}{a + 1}v_{(q - 1)p + a + i + 1} && \text{if} \ p \leq i \leq 2p - a - 2.
\end{align*}
This gives $f = [f_0 \ f_1 \ \cdots \ f_r]$. To distinguish elements from different summands of $Q(a)^{\oplus r + 1}$ let $\set{v_{q,0}, v_{q,1}, \ldots, v_{q,2p - a - 2}} \cup \set{w_{q,p - a - 1}, w_{q,p - a}, \ldots, w_{q,p - 1}}$ be the basis of the $q^\text{th}$ summand of $Q(a)^{\oplus r + 1}$. Then any element of the cover can be written in the form
\[x = \sum_{q = 0}^r\left[\sum_{i = 0}^{2p - a - 2}c_{q,i}v_{q,i} + \sum_{i = p - a - 1}^{p - 1}d_{q,i}w_{q,i}\right]\]
for some $c_{q,i}, d_{q,i} \in k$. Applying $f$ gives
\begin{align*}
f(x) = &\sum_{q = 0}^r\Bigg[-(a + 1)^2\sum_{i = 0}^{p - a - 2}\binom{p - i - 1}{a + 1}c_{q,i}v_{(q - 1)p + a + i + 1} \\
&\hspace{24pt} + (-1)^{a + 1}(a + 1)^2\sum_{i = p}^{2p - a - 2}\binom{i + a + 1 - p}{a + 1}c_{q,i}v_{(q - 1)p + a + i + 1} \\
&\hspace{24pt} + \sum_{i = p - a - 1}^{p - 1}(-1)^{a + i}\binom{a}{i + a + 1 - p}^{-1}d_{q,i}v_{(q - 1)p + a + i + 1}\Bigg].
\end{align*}
Observe that $0 \leq i \leq p - a - 2$ and $p \leq i \leq 2p - a - 2$ give $a + 1 \leq a + i + 1 \leq p - 1$ and $p + a + 1 \leq a + i + 1 \leq 2p - 1$ respectively, whereas $p - a - 1 \leq i \leq p - 1$ gives $p \leq a + i + 1 \leq p + a$. Looking modulo $p$ we see that the basis elements $v_{(q - 1)p + a + i + 1}$, for $0 \leq q \leq r$ and $p - a - 1 \leq i \leq p - 1$, are linearly independent. Thus $f(x) = 0$ immediately yields $d_{q,i} = 0$ for all $q$ and $i$.
Now rearranging we have
\begin{align*}
f(x) =& \sum_{q = 0}^r\sum_{i = 0}^{p - a - 2}\Bigg[(-1)^{a + 1}(a + 1)^2\binom{i + a + 1}{i}c_{q,i + p}v_{qp + a + 1 + i} \\
& -(a + 1)^2\binom{p - i - 1}{a + 1}c_{q,i}v_{(q - 1)p + a + 1 + i}\Bigg] \\
=& -(a + 1)^2\sum_{i = 0}^{p - a - 2}\Bigg[\sum_{q = 0}^{r - 1}(-1)^a\binom{i + a + 1}{i}c_{q,i + p}v_{qp + a + 1 + i} \\
& + \sum_{q = 1}^r\binom{p - i - 1}{a + 1}c_{q,i}v_{(q - 1)p + a + 1 + i}\Bigg] \\
=& \sum_{q = 0}^{r - 1}\sum_{i = 0}^{p - a - 2}\Bigg[(-1)^a\binom{i + a + 1}{i}c_{q,i + p} + \binom{p - i - 1}{a + 1}c_{q + 1,i}\Bigg]v_{qp + a + 1 + i}
\end{align*}
so the kernel is defined by choosing $c_{q,i}$, for $0 \leq q \leq r - 1$ and $0 \leq i \leq p - a - 2$, such that
\[(-1)^a\binom{i + a + 1}{i}c_{q,i + p} + \binom{p - i - 1}{a + 1}c_{q + 1,i} = 0.\]
Note that
\[\frac{\binom{p - i - 1}{a + 1}}{\binom{i + a + 1}{i}} = \frac{\binom{p - 1}{i + a + 1}}{\binom{p - 1}{i}} = \frac{(-1)^{i + a + 1}}{(-1)^i} = (-1)^{a + 1}\]
so the above equation simplifies to
\[c_{q,i + p} = c_{q + 1,i}.\]
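As a sanity check on this manipulation, take $p = 5$, $a = 1$, and $i = 1$: then $\binom{p - i - 1}{a + 1} = \binom{3}{2} = 3$ and $\binom{i + a + 1}{i} = \binom{3}{1} = 3$, so the ratio is $1 = (-1)^{a + 1}$; the middle step uses the standard identity $\binom{p - 1}{j} \equiv (-1)^j \pmod{p}$, here visible as $\binom{4}{3} = 4 \equiv -1$ and $\binom{4}{1} = 4 \equiv -1$ modulo $5$.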
Thus for $0 \leq i \leq (r + 2)p - a - 2$ the vectors
\[v_i^\prime = \begin{cases} v_{0,i} & \text{if} \ 0 \leq i < p, \\
v_{q,b} + v_{q - 1,p + b} & \text{if} \ 1 \leq q \leq r, \ 0 \leq b \leq p - a - 2, \\
v_{q,b} & \text{if} \ 1 \leq q \leq r, \ p - a - 1 \leq b < p, \\
v_{r,b} & \text{if} \ q = r + 1, \ 0 \leq b \leq p - a - 2. \end{cases}\]
form a basis for the kernel, where $i = qp + b$ with $0 \leq b < p$ the remainder of $i$ modulo $p$. It is now straightforward to check that the $\slt$-action on this basis is identical to that of $V((r + 2)p - a - 2)$.
\end{proof}
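As a quick sanity check on \cref{propOmega}, the dimensions agree: the basis of $Q(a)$ recalled above has $(2p - a - 1) + (a + 1) = 2p$ elements, so
\[\dim\Omega(V(\lambda)) = \dim P - \dim V(\lambda) = 2p(r + 1) - (rp + a + 1) = (r + 2)p - a - 1 = \dim V((r + 2)p - a - 2).\]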
\section{Matrix Theorems} \label{secMatThms}
In this section we determine the kernels of four particular maps between free $k[s, t]$-modules. While these maps are used to represent sheaf homomorphisms in \Cref{secLieEx}, we do not approach this section geometrically. Instead we carry out these computations in the category of $k[s, t]$-modules.
The first map is given by the matrix $M_\varepsilon(\lambda) \in \mathbb M_{rp}(k[s, t])$ shown in \Cref{figMats}.
\begin{sidewaysfigure}[p]
\vspace{350pt}
\[\hspace{0pt}\begin{bmatrix} (a + 2)st & t^2 &&& -(a + 1)\binom{r}{1}\varepsilon^ps^2 &&& -(a + 1)\binom{r}{2}\varepsilon^{2p}s^2 &&& \cdots && -(a + 1)\binom{r}{r}\varepsilon^{rp}s^2 \\
(a + 2)s^2 & (a + 4)st & 2t^2 \\
&(a + 3)s^2 & \ddots & \ddots \\
&& \ddots & \ddots & -t^2 \\
&&& as^2 & ast & 0 \\
&&&& (a + 1)s^2 & \ddots & \ddots \\
&&&&& \ddots & \ddots & -t^2 \\
&&&&&& as^2 & ast & 0 \\
&&&&&&& (a + 1)s^2 & \ddots & \ddots \\
&&&&&&&& \ddots & \ddots & -3t^2 \\
&&&&&&&&& (a - 2)s^2 & (a - 4)st & -2t^2 \\
&&&&&&&&&& (a - 1)s^2 & (a - 2)st & -t^2 \\
&&&&&&&&&&& as^2 & ast
\end{bmatrix}\]
\caption{$M_\varepsilon(\lambda)$} \label{figMats}
\end{sidewaysfigure}
For convenience we index the rows and columns of this matrix using the integers $a + 1, a + 2, \ldots, \lambda$. Then we can say more precisely that the $(i, j)^\text{th}$ entry of this matrix is
\[M_\varepsilon(\lambda)_{ij} = \begin{cases}
is^2 & \text{if} \ \ i = j + 1 \\
(2i - a)st & \text{if} \ \ i = j \\
(i - a)t^2 & \text{if} \ \ i = j - 1 \\
-(a + 1)\binom{r}{q}\varepsilon^{qp}s^2 & \text{if} \ \ (i, j) = (a + 1, qp + a), \ 1 \leq q \leq r \\
0 & \text{otherwise.}
\end{cases}
\]
\begin{Prop} \label{propMker}
The kernel of $M_\varepsilon(\lambda)$ is a free $k[s, t]$-module (ungraded) of rank $r$ whose basis elements are homogeneous of degree $p - a - 2$.
\end{Prop}
\begin{proof}
The strategy is as follows: First we will determine the kernel of $M_\varepsilon(\lambda)$ when considered as a map of $k[s, \frac{1}{s}, t]$-modules. We do this by exhibiting a basis $H_1, \ldots, H_r$ via a direct calculation. Then by clearing the denominators from these basis elements we get a linearly independent set of vectors in the $k[s, t]$-kernel of $M_\varepsilon(\lambda)$. We conclude by arguing that these vectors in fact span, thus giving an explicit basis for the kernel of $M_\varepsilon(\lambda)$ considered as a map of $k[s, t]$-modules.
To begin observe that $s$ is a unit in $k[s, \frac{1}{s}, t]$,
\begin{sidewaysfigure}[p]
\vspace{350pt}
\[\begin{bmatrix} (a + 2)x & x^2 &&& -(a + 1)\binom{r}{1}\varepsilon^p &&& -(a + 1)\binom{r}{2}\varepsilon^{2p} &&& \cdots && -(a + 1)\binom{r}{r}\varepsilon^{rp} \\
a + 2 & (a + 4)x & 2x^2 \\
& a + 3 & \ddots & \ddots \\
&& \ddots & \ddots & -x^2 \\
&&& a & ax & 0 \\
&&&& a + 1 & \ddots & \ddots \\
&&&&& \ddots & \ddots & -x^2 \\
&&&&&& a & ax & 0 \\
&&&&&&& a + 1 & \ddots & \ddots \\
&&&&&&&& \ddots & \ddots & -3x^2 \\
&&&&&&&&& a - 2 & (a - 4)x & -2x^2 \\
&&&&&&&&&& a - 1 & (a - 2)x & -x^2 \\
&&&&&&&&&&& a & ax
\end{bmatrix}\]
\caption{$\frac{1}{s^2}M_\varepsilon(\lambda)$} \label{figxMat}
\end{sidewaysfigure}
thus over this ring the kernel of $M_\varepsilon(\lambda)$ is equal to the kernel of the matrix $\frac{1}{s^2}M_\varepsilon(\lambda)$ (shown in \Cref{figxMat}) with $(i, j)^\text{th}$ entries
\[\frac{1}{s^2}M_\varepsilon(\lambda)_{ij} = \begin{cases}
i & \text{if} \ \ i = j + 1 \\
(2i - a)x & \text{if} \ \ i = j \\
(i - a)x^2 & \text{if} \ \ i = j - 1 \\
-(a + 1)\binom{r}{q}\varepsilon^{qp} & \text{if} \ \ (i, j) = (a + 1, qp + a), \ 1 \leq q \leq r \\
0 & \text{otherwise.}
\end{cases}\]
where $x = \frac{t}{s}$. Let
\[f = \begin{bmatrix} f_{a + 1} \\ f_{a + 2} \\ \vdots \\ f_\lambda \end{bmatrix}\]
be an arbitrary element of the kernel. Given $i = qp + b$ where $0 \leq b < p$ and $a + 1 \leq i \leq \lambda$ we claim that
\begin{equation} \label{eqPhiForm}
f_i = (-1)^{\lambda - i}x^{\lambda - i}f_\lambda + (-1)^b\binom{p + a - b}{p - b - 1}x^{p - b - 1}h_{q + 1}
\end{equation}
for some choice of $h_1, \ldots, h_r \in k[s, \frac{1}{s}, t]$ and $h_{r + 1} = 0$. Moreover, any such choice defines an element $f \in k[s, \frac{1}{s}, t]^{rp}$ such that
\begin{equation} \label{eqQuasKer}
\frac{1}{s^2}M_\varepsilon(\lambda)f = \begin{bmatrix} \ast \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\end{equation}
holds.
The proof of this claim is by completely elementary methods: we simply induct up the rows of $\frac{1}{s^2}M_\varepsilon(\lambda)$, observing that the condition imposed by each row in \Cref{eqQuasKer} either determines the next $f_i$ or is automatically satisfied, allowing us to introduce a free parameter (the $h_i$).
For the base case plugging $i = \lambda$ into \Cref{eqPhiForm} gives $f_\lambda = f_\lambda$. The condition imposed by the last row in \Cref{eqQuasKer} is $af_{\lambda - 1} + axf_\lambda = 0$ so if $a \neq 0$ then $f_{\lambda - 1} = -xf_\lambda$ and if $a = 0$ then this condition is automatically satisfied. The formula, when $a = 0$, gives $f_{rp - 1} = -xf_\lambda + h_r$ so we take this as the definition of $h_r$.
Assume the formula holds for all $f_j$ with $j > i \geq a + 1$ and that these $f_j$ satisfy the conditions imposed by rows $i + 2, i + 3, \ldots, \lambda$ of $\frac{1}{s^2}M_\varepsilon(\lambda)$. Row $i + 1$ has nonzero entries $i + 1$, $(2i - a + 2)x$, and $(i - a + 1)x^2$ in columns $i$, $i + 1$, and $i + 2$ respectively. First assume $i + 1 \neq 0$ in $k$ or equivalently $b \neq p - 1$ where $i = qp + b$ and $0 \leq b < p$. Then the condition
\[(i + 1)f_i + (2i - a + 2)xf_{i + 1} + (i - a + 1)x^2f_{i + 2} = 0\]
imposed by row $i + 1$ can be taken as the definition of $f_i$. Observe that
\begin{align*}
&\frac{-1}{i + 1}\left((-1)^{\lambda - i - 1}(2i - a + 2) + (-1)^{\lambda - i - 2}(i - a + 1)\right) \\
&\qquad = \frac{(-1)^{\lambda - i}}{i + 1}\left((2i - a + 2) - (i - a + 1)\right) \\
&\qquad = \frac{(-1)^{\lambda - i}}{i + 1}\left(i + 1\right) \\
&\qquad = (-1)^{\lambda - i}
\end{align*}
so $f_i = (-1)^{\lambda - i}x^{\lambda - i}f_\lambda + (\text{terms involving } h_j)$. For the $h_j$ terms there are two cases. First assume $b < p - 2$. Then using $\frac{c}{e}\binom{c - 1}{e - 1} = \binom{c}{e}$ we see that
\begin{align*} \label{eqnBinom}
&\frac{-1}{i + 1}\left((-1)^{b + 1}(2i - a + 2)\binom{p + a - b - 1}{p - b - 2} + (-1)^{b + 2}(i - a + 1)\binom{p + a - b - 2}{p - b - 3}\right) \\
&\qquad = \frac{(-1)^b}{b + 1}\left((2b - a + 2)\binom{p + a - b - 1}{p - b - 2} + (p + a - b - 1)\binom{p + a - b - 2}{p - b - 3}\right) \\
&\qquad = \frac{(-1)^b}{b + 1}\left((2b - a + 2)\binom{p + a - b - 1}{p - b - 2} + (p - b - 2)\binom{p + a - b - 1}{p - b - 2}\right) \\
&\qquad = \frac{(-1)^b}{b + 1}(b - a)\binom{p + a - b - 1}{p - b - 2} \\
&\qquad = \frac{(-1)^b}{b + 1}(b + 1)\binom{p + a - b}{p - b - 1} \\
&\qquad = (-1)^b\binom{p + a - b}{p - b - 1}.
\end{align*}
Putting these together we get that
\begin{align*}
f_i &= \frac{-1}{i + 1}((2i - a + 2)xf_{i + 1} + (i - a + 1)x^2f_{i + 2}) \\
&= (-1)^{\lambda - i}x^{\lambda - i}f_\lambda + (-1)^b\binom{p + a - b}{p - b - 1}x^{p - b - 1}h_{q + 1}
\end{align*}
as desired. Next assume $b = p - 2$ so that $f_{i + 2} = f_{(q + 1)p}$. The coefficient of $h_{q + 2}$ in $f_{(q + 1)p}$ involves the binomial $\binom{p + a}{p - 1}$. As $0 \leq a < p - 1$ there is a carry when the addition $(p - 1) + (a + 1) = p + a$ is done in base $p$, thus this coefficient is zero and the $h_j$ terms of $f_i$ are
\begin{align*}
\frac{(-1)^{p}}{i + 1}(2i - a + 2)\binom{a + 1}{0}xh_{q + 1} &= \frac{(-1)^{p - 2}}{2}(a + 2)xh_{q + 1} \\
&= (-1)^b\binom{a + 2}{1}xh_{q + 1}
\end{align*}
as desired. Thus the induction continues when $i + 1 \neq 0$ in $k$.
Now assume $i + 1 = 0$ in $k$; equivalently, $b = p - 1$. Then the condition imposed by row $i + 1 = (q + 1)p$ is
\[- axf_{(q + 1)p} - ax^2f_{(q + 1)p + 1} = 0.\]
If $a = 0$ then this is automatic. If $a > 0$ then there is a base $p$ carry in the addition $(p - 2) + (a + 1) = p + a - 1$, hence
\begin{align*}
&xf_{(q + 1)p} + x^2f_{(q + 1)p + 1} \\
&\qquad = (-1)^{(r - q - 1)p + a}x^{(r - q - 1)p + a + 1}f_\lambda + (-1)^{(r - q - 1)p + a - 1}x^{(r - q - 1)p + a + 1}f_\lambda \\
&\qquad\quad - \binom{p + a - 1}{p - 2}x^ph_{q + 2} \\
&\qquad = 0
\end{align*}
because the $f_\lambda$ terms cancel and the binomial is zero. So in either case the condition above is automatic. The formula for $f_i$ when $i = qp + (p - 1)$ is
\[f_i = (-1)^{(r - q - 1)p + a + 1}x^{(r - q - 1)p + a + 1}f_\lambda + h_{q + 1}\]
so we take this as the definition of $h_{q + 1}$ and the induction is complete.
Now $f$ must have the given form for some choice of $h_1, \ldots, h_r$ and any such choice gives an element $f$ such that $\frac{1}{s^2}M_\varepsilon(\lambda)f$ is zero in all coordinates save the top ($a + 1$). All that is left is to impose the condition that $\frac{1}{s^2}M_\varepsilon(\lambda)f$ is zero in the $(a + 1)^\text{th}$ coordinate as well. This condition is
\[(a + 2)xf_{a + 1} + x^2f_{a + 2} - (a + 1)\sum_{q = 1}^r\binom{r}{q}\varepsilon^{qp}f_{qp + a} = 0.\]
In $(a + 2)xf_{a + 1} + x^2f_{a + 2}$ the $h_j$ terms are
\begin{align*}
&(-1)^{a + 1}\left((a + 2)\binom{p - 1}{p - a - 2} - \binom{p - 2}{p - a - 3}\right)x^{p - a - 1}h_1 \\
&\qquad = (-1)^{a + 1}\left((a + 2)\binom{p - 1}{p - a - 2} + (p - a - 2)\binom{p - 1}{p - a - 2}\right)x^{p - a - 1}h_1 \\
&\qquad = 0
\end{align*}
and the coefficient of the $h_j$ term in $f_{qp + a}$ involves the binomial $\binom{p}{p - a - 1}$ which is zero. Thus the top row imposes a condition only on $f_\lambda$, and this condition is
\begin{align*}
0 &= (-1)^{rp - 1}(a + 2)x^{rp}f_\lambda + (-1)^{rp - 2}x^{rp}f_\lambda \\
&\quad - (a + 1)\sum_{q = 1}^r(-1)^{(r - q)p}\binom{r}{q}\varepsilon^{qp}x^{(r - q)p}f_\lambda \\
&= (-1)^{rp - 1}(a + 1)\left[\sum_{q = 0}^r(-1)^{qp}\binom{r}{q}\varepsilon^{qp}x^{(r - q)p}\right]f_\lambda.
\end{align*}
Note that $x = \frac{t}{s}$ is algebraically independent over $k$ in $k[s, \frac{1}{s}, t]$, that by hypothesis $a + 1 \neq 0$ in $k$, and that the bracketed sum is $(x^p - \varepsilon^p)^r$ since $(-1)^{qp} = (-1)^q$ for odd $p$. The localization of an integral domain is again an integral domain, therefore if $f$ is in the kernel then we must have $f_\lambda = 0$.
As the $h_1, \ldots, h_r$ can be chosen arbitrarily this completes the determination of the kernel of $M_\varepsilon(\lambda)$, considered as a map of $k[s, \frac{1}{s}, t]$-modules. It is free of rank $r$ and the basis elements are given by taking the coefficients of the $h_q$ in \Cref{eqPhiForm}. For $0 \leq q \leq r - 1$ let $H_q$ be the basis element that corresponds to $h_{q + 1}$; it is shown in \Cref{figHq}.
\begin{sidewaysfigure}[p]
\vspace{400pt}
\[\begin{matrix} a + 1 \\ \vdots \\ qp - 1 \\ qp \\ qp + 1 \\ \vdots \\ qp + a \\ qp + a + 1 \\ \vdots \\ (q + 1)p - 2 \\ (q + 1)p - 1 \\ (q + 1)p \\ \vdots \\ \lambda \end{matrix}\begin{bmatrix} 0 \\ \vdots \\ 0 \\ \binom{a + p}{p - 1}x^{p - 1} \\ -\binom{a + p - 1}{p - 2}x^{p - 2} \\ \vdots \\ (-1)^{p - a - 1}\binom{p}{p - a - 1}x^{p - a - 1} \\ (-1)^{p - a - 2}\binom{p - 1}{p - a - 2}x^{p - a - 2} \\ \vdots \\ -\binom{a + 2}{1}x \\ \binom{a + 1}{0} \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ (-1)^{p - a - 2}\binom{p - 1}{p - a - 2}x^{p - a - 2} \\ \vdots \\ -(a + 2)x \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \overset{s^{p - a - 2}}{\longrightarrow} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ (-1)^{p - a - 2}\binom{p - 1}{p - a - 2}t^{p - a - 2} \\ \vdots \\ -(a + 2)s^{p - a - 3}t \\ s^{p - a - 2} \\ 0 \\ \vdots \\ 0 \end{bmatrix}\begin{matrix} a + 1 \\ \vdots \\ qp + a \\ qp + a + 1 \\ \vdots \\ (q + 1)p - 2 \\ (q + 1)p - 1 \\ (q + 1)p \\ \vdots \\ \lambda \end{matrix}\]
\caption{$H_q \to s^{p - a - 2}H_q$} \label{figHq}
\end{sidewaysfigure}
We claim that the $s^{p - a - 2}H_q$, for $0 \leq q \leq r - 1$, form a basis for the kernel of $M_\varepsilon(\lambda)$, considered as a map of $k[s, t]$-modules.
First note that $H_q$ is supported in coordinates $qp + a + 1$ through $(q + 1)p - 1$. These ranges are disjoint for different $H_q$, so the $s^{p - a - 2}H_q$ are clearly linearly independent. Let $f \in k[s, t]^{rp}$ be an element of the kernel of $M_\varepsilon(\lambda)$. Then, as an element of $k[s, \frac{1}{s}, t]^{rp}$, $f$ is in the kernel of $\frac{1}{s^2}M_\varepsilon(\lambda)$ so we can write
\[f = \sum_{q = 0}^{r - 1}c_qH_q\]
where $c_q \in k[s, \frac{1}{s}, t]$. The $((q + 1)p - 1)^\text{th}$ coordinate of $f$ is $c_q$ hence $c_q \in k[s, t]$. Also the $(qp + a + 1)^\text{th}$ coordinate of $f$ is
\[(-1)^{p - a - 2}\binom{p - 1}{p - a - 2}c_qx^{p - a - 2}\]
and the binomial coefficient in that expression is nonzero in $k$ so $c_qx^{p - a - 2} \in k[s, t]$. In particular, $s^{p - a - 2}$ must divide $c_q$ so write $c_q = s^{p - a - 2}c^\prime_q$ for some $c^\prime_q \in k[s, t]$. We now have
\[f = \sum_{q = 0}^{r - 1}c^\prime_qs^{p - a - 2}H_q\]
so the $s^{p - a - 2}H_q$ span and are therefore a basis. Each $H_q$ is homogeneous of degree $0$ so each $s^{p - a - 2}H_q$ is homogeneous of degree $p - a - 2$.
\end{proof}
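To illustrate \cref{propMker} in the smallest case, take $p = 3$ and $\lambda = 4$, so that $r = 1$ and $a = 1$. Reducing all coefficients modulo $3$, the matrix (with rows and columns indexed by $2, 3, 4$) is
\[M_\varepsilon(4) = \begin{bmatrix} 0 & t^2 & \varepsilon^3s^2 \\ 0 & 2st & 2t^2 \\ 0 & s^2 & st \end{bmatrix}.\]
The last row gives $s(sf_3 + tf_4) = 0$ and, combining with the first row, $(\varepsilon^3s^3 - t^3)f_4 = -(t - \varepsilon s)^3f_4 = 0$, so $f_3 = f_4 = 0$. The kernel is therefore free of rank $r = 1$, generated by the coordinate vector in coordinate $2 = a + 1$, which is homogeneous of degree $0 = p - a - 2$ as claimed.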
The second map we wish to consider is given by the matrix $B(\lambda) \in \mathbb M_{\lambda + 1}(k[s, t])$ shown in \Cref{figMatB}.
\begin{sidewaysfigure}[p]
\vspace{350pt}
\[\begin{bmatrix} \lambda st & \lambda s^2 \\
-t^2 & (\lambda - 2)st & (\lambda - 1)s^2 \\
& -2t^2 & (\lambda - 4)st & (\lambda - 2)s^2 \\
&& -3t^2 & \ddots & \ddots \\
&&& \ddots & \ddots & (\lambda + 2)s^2 \\
&&&& t^2 & (\lambda + 2)st & (\lambda + 1)s^2 \\
&&&&& 0 & \lambda st & \lambda s^2 \\
&&&&&& -t^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & 3s^2 \\
&&&&&&&& -(\lambda - 2)t^2 & -(\lambda - 4)st & 2s^2 \\
&&&&&&&&& -(\lambda - 1)t^2 & -(\lambda - 2)st & s^2 \\
&&&&&&&&&& -\lambda t^2 & -\lambda st
\end{bmatrix}\]
\caption{$B(\lambda)$} \label{figMatB}
\end{sidewaysfigure}
Index the rows and columns of this matrix using the integers $0, 1, \ldots, \lambda$. Then the entries of $B(\lambda)$ are
\[B(\lambda)_{ij} = \begin{cases}
-it^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2i)st & \text{if} \ \ i = j \\
(\lambda - i)s^2 & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise.}
\end{cases}\]
\begin{Prop} \label{propBker}
The kernel of $B(\lambda)$ is a free $k[s, t]$-module of rank $r + 1$. One basis element is homogeneous of degree $\lambda$ and the remaining $r$ are homogeneous of degree $p - a - 2$.
\end{Prop}
\begin{proof}
The proof is very similar to the proof of \cref{propMker}. We start by finding the kernel of the matrix $\frac{1}{s^2}B(\lambda)$ shown in \Cref{figMatBx}
\begin{sidewaysfigure}[p]
\vspace{350pt}
\[\begin{bmatrix} \lambda x & \lambda \\
-x^2 & (\lambda - 2)x & \lambda - 1 \\
& -2x^2 & (\lambda - 4)x & \lambda - 2 \\
&& -3x^2 & \ddots & \ddots \\
&&& \ddots & \ddots & \lambda + 2 \\
&&&& x^2 & (\lambda + 2)x & \lambda + 1 \\
&&&&& 0 & \lambda x & \lambda \\
&&&&&& -x^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & 3 \\
&&&&&&&& -(\lambda - 2)x^2 & -(\lambda - 4)x & 2 \\
&&&&&&&&& -(\lambda - 1)x^2 & -(\lambda - 2)x & 1 \\
&&&&&&&&&& -\lambda x^2 & -\lambda x
\end{bmatrix}\]
\caption{$\frac{1}{s^2}B(\lambda)$} \label{figMatBx}
\end{sidewaysfigure}
whose entries are given by
\[\frac{1}{s^2}B(\lambda)_{ij} = \begin{cases}
-ix^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2i)x & \text{if} \ \ i = j \\
\lambda - i & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise.}
\end{cases}\]
with $x = \frac{t}{s}$. Let
\[f = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_\lambda \end{bmatrix}\]
be an arbitrary element of the kernel. We induct down the rows of the matrix to show that if $i = qp + b$, where $0 \leq b < p$ then
\[f_{\lambda - i} = (-1)^{\lambda - i}x^{\lambda - i}g + (-1)^b\binom{p + a - b}{p - b - 1}x^{p - b - 1}h_q\]
where $h_r = 0$.
For the base case, plugging $i = \lambda$ into the formula gives $f_0 = g$, so we take this as the definition of $g$. The condition imposed by the first row is $axg + af_1 = 0$ so if $a \neq 0$ then $f_1 = -xg$. The formula gives $f_1 = -xg + (-1)^{a - 1}\binom{p + 1}{p - a}x^{p - a}h_r = -xg$ so these agree. If $a = 0$ then the condition is automatically satisfied and the formula gives $f_1 = -xg + h_{r - 1}$ so we take this as the definition of $h_{r - 1}$.
For the inductive step assume the formula holds for $f_0, f_1, \ldots, f_{\lambda - i - 1}$ and that these $f_j$ satisfy the conditions imposed by rows $0, \ldots, \lambda - i - 2$. The three nonzero entries in row $\lambda - i - 1$ are $(b - a + 1)x^2$, $(2b - a + 2)x$, and $b + 1$ in columns $\lambda - i - 2$, $\lambda - i - 1$, and $\lambda - i$ respectively, thus the condition imposed is
\[(b - a + 1)x^2f_{\lambda - i - 2} + (2b - a + 2)xf_{\lambda - i - 1} + (b + 1)f_{\lambda - i} = 0.\]
If $b < p - 2$ then we can solve this for $f_{\lambda - i}$ and we find that it agrees with the formula above (for the $h_j$ terms the computation is identical to the one shown in \cref{propMker}). If $b = p - 2$ we get
\begin{align*}
f_{\lambda - i} &= -(a + 1)x^2f_{\lambda - i - 2} - (a + 2)xf_{\lambda - i - 1} \\
&= (-1)^{\lambda - i - 1}(a + 1)x^{\lambda - i}g + (-1)^{\lambda - i}(a + 2)x^{\lambda - i}g - (a + 2)xh_q \\
&= (-1)^{\lambda - i}x^{\lambda - i}g - \binom{a + 2}{1}xh_q
\end{align*}
as desired. Finally if $b = p - 1$ then $b + 1 = 0$ in $k$ so the condition is
\[-ax^2f_{\lambda - i - 2} - axf_{\lambda - i - 1} = 0\]
and this is automatically satisfied (the formulas are the same as in \cref{propMker} again). Thus no condition is imposed on $f_{\lambda - i}$ so we take the formula
\[f_{\lambda - i} = (-1)^{\lambda - i}x^{\lambda - i}g + h_q\]
as the definition of $h_q$. This completes the induction.
Note that for the final row, $\lambda$, to be row $\lambda - i - 1$ we must take $i = -1$, so $b = p - 1$ and we are in the case where $b + 1 = 0$ and the condition is automatically satisfied. The rest of the proof goes as in \cref{propMker}, except that there is no final condition forcing $g = 0$. If we let $G$ and $H_0, \ldots, H_{r - 1}$ be the basis vectors corresponding to $g$ and $h_0, \ldots, h_{r - 1}$ then the $H_q$ are linearly independent as before. The first ($0^\text{th}$) coordinate of $G$ is $1$ while the first coordinate of each $H_q$ is $0$, therefore $G$ can be added and this gives a basis for the kernel. The largest power of $x$ in $G$ is $\lambda$, in the last coordinate, and the largest power of $x$ in $H_q$ is $p - a - 2$, in the $(\lambda - qp - a - 1)^\text{th}$ coordinate. These basis vectors lift to basis vectors of the kernel as a $k[s, t]$-module and are in degrees $\lambda$ and $p - a - 2$ as desired.
\end{proof}
Before we move on to the third map, let us first prove the following lemma which will be needed in \cref{thmFiSimp}.
\begin{Lem} \label{lemBlambda}
Assume $0 \leq \lambda < p$. Then the $(i, j)^\text{th}$ entry of $B(\lambda)^\lambda$ is contained in the one dimensional space $ks^{\lambda + j - i}t^{\lambda - j + i}$.
\end{Lem}
\begin{proof}
Let $b_{ij}$ be the $(i, j)^\text{th}$ entry of $B(\lambda)$. By definition the $(i, j)^\text{th}$ entry of $B(\lambda)^\lambda$ is given by
\[(B(\lambda)^\lambda)_{ij} = \sum_{n_1, n_2, \ldots, n_{\lambda - 1}}b_{in_1}b_{n_1n_2}\cdots b_{n_{\lambda - 1}j}.\]
From the definition of $B(\lambda)$ we have
\begin{align*}
b_{ij} \in ks^2 & \ \ \ \text{if} \ j - i = 1, \\ b_{ij} \in kst & \ \ \ \text{if} \ j - i = 0, \\ b_{ij} \in kt^2 & \ \ \ \text{if} \ j - i = -1, \\ b_{ij} = 0 & \ \ \ \text{otherwise.}
\end{align*}
so any given term $b_{in_1}b_{n_1n_2}\cdots b_{n_{\lambda - 1}j}$ in the summation can be nonzero only if the $(\lambda + 1)$-tuple $(n_0, n_1, \ldots, n_\lambda)$ is a \emph{walk} from $n_0 = i$ to $n_\lambda = j$, i.e.\ each successive term of the tuple must differ from the last by at most $1$. For such a walk we now show by induction that $b_{n_0n_1}b_{n_1n_2}\cdots b_{n_{m - 1}n_m} \in ks^{m + n_m - n_0}t^{m - n_m + n_0}$. For the base case $m = 1$ we have the three cases above for $b_{n_0n_1}$ and one easily checks that the formula gives $kt^2$, $kst$, or $ks^2$ as needed.
Now assume the statement holds for $m - 1$ so that
\[b_{n_0n_1}\cdots b_{n_{m - 2}n_{m - 1}}b_{n_{m - 1}n_m} \in ks^{m - 1 + n_{m - 1} - n_0}t^{m - 1 - n_{m - 1} + n_0}b_{n_{m - 1}n_m}.\]
There are three cases. First if $n_m = n_{m - 1} + 1$ then $b_{n_{m - 1}n_m} \in ks^2$ and the set becomes
\[ks^{m - 1 + n_{m - 1} - n_0}t^{m - 1 - n_{m - 1} + n_0}\cdot s^2 = ks^{m + n_m - n_0}t^{m - n_m + n_0}\]
as desired. Next if $n_m = n_{m - 1}$ then $b_{n_{m - 1}n_m} \in kst$ and the set becomes
\[ks^{m - 1 + n_{m - 1} - n_0}t^{m - 1 - n_{m - 1} + n_0}\cdot st = ks^{m + n_m - n_0}t^{m - n_m + n_0}\]
as desired. Finally if $n_m = n_{m - 1} - 1$ then $b_{n_{m - 1}n_m} \in kt^2$ and the set becomes
\[ks^{m - 1 + n_{m - 1} - n_0}t^{m - 1 - n_{m - 1} + n_0}\cdot t^2 = ks^{m + n_m - n_0}t^{m - n_m + n_0}\]
as desired. Thus the induction is complete and for $m = \lambda$ this gives
\[b_{n_0n_1}b_{n_1n_2}\cdots b_{n_{\lambda - 1}n_\lambda} \in ks^{\lambda + n_\lambda - n_0}t^{\lambda - n_\lambda + n_0} = ks^{\lambda + j - i}t^{\lambda - j + i}\]
and completes the proof.
\end{proof}
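For example, with $\lambda = 2$ we have
\[B(2) = \begin{bmatrix} 2st & 2s^2 & 0 \\ -t^2 & 0 & s^2 \\ 0 & -2t^2 & -2st \end{bmatrix}, \qquad B(2)^2 = \begin{bmatrix} 2s^2t^2 & 4s^3t & 2s^4 \\ -2st^3 & -4s^2t^2 & -2s^3t \\ 2t^4 & 4st^3 & 2s^2t^2 \end{bmatrix},\]
and each $(i, j)^\text{th}$ entry of $B(2)^2$ does lie in $ks^{2 + j - i}t^{2 - j + i}$.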
Moving on, the third map we wish to consider is $B'(\lambda) \in \mathbb M_{rp}(k[s, t])$ defined to be the trailing principal $rp \times rp$ submatrix of $B(\lambda)$, i.e., the submatrix of $B(\lambda)$ consisting of rows and columns $a + 1, a + 2, \ldots, \lambda$.
\begin{Prop} \label{propBpker}
The kernel of $B'(\lambda)$ is a free $k[s, t]$-module (ungraded) of rank $r$ whose basis elements are homogeneous of degree $p - a - 2$.
\end{Prop}
\begin{proof}
The induction from the proof of \cref{propBker} applies giving
\[f_{\lambda - i} = (-1)^{\lambda - i}x^{\lambda - i}g + (-1)^b\binom{p + a - b}{p - b - 1}x^{p - b - 1}h_q\]
for $0 \leq i < rp$. All that is left is the condition
\[-(a + 2)xf_{a + 1} - f_{a + 2} = 0\]
from the first row of $\frac{1}{s^2}B'(\lambda)$. Substituting in the formulas we get
\[(-1)^{a + 1}(a + 1)x^{a + 2}g = 0\]
which forces $g = 0$. Thus as a basis for the kernel we get $H_0, \ldots, H_{r - 1}$.
\end{proof}
Before we move on to the final map, let us first prove the following lemma which was needed in \Cref{secSl2}.
\begin{Lem} \label{lemBJType}
Let $s, t \in k$ so that $B'(\lambda) \in \mathbb M_{rp}(k)$.
\[\jtype(B'(\lambda)) = \begin{cases} [1]^{rp} & \text{if} \ s = t = 0, \\ [p]^{r - 1}[p - a - 1][a + 1] & \text{if} \ s = 0, t \neq 0, \\ [p]^r & \text{if} \ s \neq 0. \end{cases}\]
\end{Lem}
\begin{proof}
If $(s, t) = (0, 0)$ then $B'(\lambda)$ is the zero matrix, hence the Jordan type is $[1]^{rp}$. If $s = 0$ and $t \neq 0$ then $B'(\lambda)$ only has non-zero entries on the sub-diagonal. Normalizing these entries to $1$ gives the Jordan form of the matrix from which we read the Jordan type. If we use the row numbering from $B(\lambda)$ (i.e. the first row is $a + 1$, the second $a + 2$, etc.) then the zeros on the sub-diagonal occur at rows $p, 2p, \ldots, rp$. Thus the first block is size $p - a - 1$, followed by $r - 1$ blocks of size $p$, and the last block is size $a + 1$. Hence the Jordan type is $[p]^{r - 1}[p - a - 1][a + 1]$.
Now assume $s \neq 0$. There are exactly $r(p - 1)$ non-zero entries on the super-diagonal and no non-zero entries above the super-diagonal therefore $\rank B'(\lambda) \geq r(p - 1)$. But this is the maximal rank that a nilpotent matrix can achieve and such a matrix has Jordan type consisting only of blocks of size $p$. Hence the Jordan type is $[p]^r$.
\end{proof}
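For instance, with $p = 3$ and $\lambda = 4$ (so $r = 1$ and $a = 1$) the matrix, reduced modulo $3$, is
\[B'(4) = \begin{bmatrix} 0 & 2s^2 & 0 \\ 0 & st & s^2 \\ 0 & 2t^2 & 2st \end{bmatrix}.\]
At $(s, t) = (0, 1)$ the only nonzero entry is the $2t^2$ on the sub-diagonal, so the matrix has rank $1$ and Jordan type $[p - a - 1][a + 1] = [1][2]$, while at $(s, t) = (1, 0)$ it is strictly upper triangular with nonzero square, giving the single block $[p]^r = [3]$.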
The final map we wish to consider is given by the matrix $C(\lambda) \in \mathbb M_{\lambda + 1}(k[s, t])$ shown in \Cref{figMatC}.
\begin{sidewaysfigure}[p]
\vspace{350pt}
\[\begin{bmatrix} \lambda st & s^2 \\
-\lambda t^2 & (\lambda - 2)st & 2s^2 \\
& -(\lambda - 1)t^2 & (\lambda - 4)st & 3s^2 \\
&& -(\lambda - 2)t^2 & \ddots & \ddots \\
&&& \ddots & \ddots & -s^2 \\
&&&& -(\lambda + 2)t^2 & (\lambda + 2)st & 0 \\
&&&&& -(\lambda + 1)t^2 & \lambda st & s^2 \\
&&&&&& -\lambda t^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & (\lambda - 2)s^2 \\
&&&&&&&& -3t^2 & -(\lambda - 4)st & (\lambda - 1)s^2 \\
&&&&&&&&& -2t^2 & -(\lambda - 2)st & \lambda s^2 \\
&&&&&&&&&& -t^2 & -\lambda st
\end{bmatrix}\]
\caption{$C(\lambda)$} \label{figMatC}
\end{sidewaysfigure}
Index the rows and columns of this matrix using the integers $0, 1, \ldots, \lambda$. Then the entries of $C(\lambda)$ are
\[C(\lambda)_{ij} = \begin{cases}
(i - \lambda - 1)t^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2i)st & \text{if} \ \ i = j \\
(i + 1)s^2 & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise.}
\end{cases}\]
\begin{Prop} \label{propCker}
The kernel of $C(\lambda)$ is a free $k[s, t]$-module (ungraded) of rank $r + 1$ whose basis elements are homogeneous of degree $a$.
\end{Prop}
\begin{proof}
Let
\[f = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_\lambda \end{bmatrix}\]
be an arbitrary element of the kernel of $\frac{1}{s^2}C(\lambda)$ shown in \Cref{figMatCx}
\begin{sidewaysfigure}[p]
\vspace{350pt}
\[\begin{bmatrix} \lambda x & 1 \\
-\lambda x^2 & (\lambda - 2)x & 2 \\
& -(\lambda - 1)x^2 & (\lambda - 4)x & 3 \\
&& -(\lambda - 2)x^2 & \ddots & \ddots \\
&&& \ddots & \ddots & -1 \\
&&&& -(\lambda + 2)x^2 & (\lambda + 2)x & 0 \\
&&&&& -(\lambda + 1)x^2 & \lambda x & 1 \\
&&&&&& -\lambda x^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & \lambda - 2 \\
&&&&&&&& -3x^2 & -(\lambda - 4)x & \lambda - 1 \\
&&&&&&&&& -2x^2 & -(\lambda - 2)x & \lambda \\
&&&&&&&&&& -x^2 & -\lambda x
\end{bmatrix}\]
\caption{$\frac{1}{s^2}C(\lambda)$} \label{figMatCx}
\end{sidewaysfigure}
whose entries are given by
\[\frac{1}{s^2}C(\lambda)_{ij} = \begin{cases}
(i - \lambda - 1)x^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2i)x & \text{if} \ \ i = j \\
i + 1 & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise.}
\end{cases}\]
with $x = \frac{t}{s}$. We show by induction that if $i = qp + b$ and $0 \leq b < p$ then
\[f_i = (-1)^b\binom{\lambda}{b}x^bh_q.\]
For the base case the formula gives $f_0 = h_0$ so we take this as the definition of $h_0$. The condition imposed by row $1$ is $-\lambda xf_0 + f_1 = 0$ which gives $f_1 = -xh_0$ as desired.
For the inductive step assume the formula holds for indices less than $i$ and that the condition imposed by all rows of index less than $i - 1$ is satisfied. Row $i - 1$ has nonzero entries $(i - \lambda - 2)x^2$, $(\lambda - 2i + 2)x$, and $i$ in columns $i - 2$, $i - 1$, and $i$ respectively, so the condition is
\[(i - \lambda - 2)x^2f_{i - 2} + (\lambda - 2i + 2)xf_{i - 1} + if_i = 0.\]
First assume $i \neq 0, 1$ in $k$. Then we have
\begin{align*}
f_i &= \frac{-1}{i}\left((-1)^{b - 2}(i - \lambda - 2)\binom{\lambda}{b - 2}x^bh_q + (-1)^{b - 1}(\lambda - 2i + 2)\binom{\lambda}{b - 1}x^bh_q\right) \\
&= \frac{(-1)^b}{b}\left((\lambda - b + 2)\binom{\lambda}{b - 2} + (\lambda - 2b + 2)\binom{\lambda}{b - 1}\right)x^bh_q \\
&= \frac{(-1)^b}{b}((b - 1) + (\lambda - 2b + 2))\binom{\lambda}{b - 1}x^bh_q \\
&= (-1)^b\frac{\lambda - b + 1}{b}\binom{\lambda}{b - 1}x^bh_q \\
&= (-1)^b\binom{\lambda}{b}x^bh_q
\end{align*}
as desired. Next assume $i = 0$ in $k$. Then
\begin{align*}
& (i - \lambda - 2)x^2f_{i - 2} + (\lambda - 2i + 2)xf_{i - 1} \\
&\qquad = (\lambda + 2)\binom{\lambda}{p - 2}x^ph_{q - 1} + (\lambda + 2)\binom{\lambda}{p - 1}x^ph_{q - 1}.
\end{align*}
But $a + 1 \neq 0$ so $\binom{\lambda}{p - 1} = 0$, and if $a + 2 \neq 0$ then $\binom{\lambda}{p - 2} = 0$, otherwise $\lambda + 2 = 0$. In any case the above expression is $0$, so the condition imposed by row $i - 1$ is automatically satisfied. The formula gives $f_i = h_q$ so we take this as the definition of $h_q$. Finally assume $i = 1$ in $k$. Then we have
\begin{align*}
f_i &= (\lambda + 1)\binom{\lambda}{p - 1}x^{p + 1}h_{q - 1} - \lambda xh_q \\
&= -\binom{\lambda}{1}xh_q
\end{align*}
as desired. This completes the induction. We know that the given formulas for $f_i$ satisfy the conditions imposed by all rows save the last, whose condition is
\[-x^2f_{\lambda - 1} - \lambda xf_\lambda = 0.\]
We have
\[\lambda xf_\lambda = (-1)^a\lambda\binom{\lambda}{a}x^{a + 1}h_r = (-1)^a\lambda x^{a + 1}h_r.\]
If $a = 0$ then
\[x^2f_{\lambda - 1} = (-1)^{p - 1}\binom{\lambda}{p - 1}x^{p + 1}h_{r - 1} = 0\]
and $\lambda = 0$ so this condition is satisfied. If $a \neq 0$ then
\[x^2f_{\lambda - 1} = (-1)^{a - 1}\binom{\lambda}{a - 1}x^{a + 1}h_r = (-1)^{a - 1}ax^{a + 1}h_r\]
so
\begin{align*}
& x^2f_{\lambda - 1} + \lambda xf_\lambda \\
& \qquad = (-1)^{a - 1}ax^{a + 1}h_r + (-1)^a\lambda x^{a + 1}h_r \\
& \qquad = (-1)^a(\lambda - a)x^{a + 1}h_r \\
& \qquad = 0
\end{align*}
and the condition is again satisfied, so we have found a basis. If $H_q$ is the basis vector associated to $h_q$ then the smallest and largest powers of $x$ appearing in $H_q$ are $x^0$, in coordinate $qp$, and $x^a$, in coordinate $qp + a$. By the usual arguments the $H_q$ lift to a basis for the kernel of $C(\lambda)$ that is homogeneous of degree $a$.
\end{proof}
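The kernel basis constructed in the proof can be checked independently. The following sketch (our own verification code; the helper names and the small parameters $p = 5$, $r = 1$, $a = 2$ are illustrative choices) confirms over $\mathbb F_p[x]$ that the vectors $H_q$, with entry $(-1)^b\binom{\lambda}{b}x^b$ in coordinate $qp + b$, are annihilated by $\frac{1}{s^2}C(\lambda)$:

```python
from math import comb

p, r, a = 5, 1, 2
lam = r * p + a                     # lambda = rp + a = 7
n = lam + 1

# Polynomials in x over GF(p), stored as coefficient lists (index = degree).
def padd(f, g):
    m = max(len(f), len(g))
    return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % p
            for i in range(m)]

def pshift(f, k):                   # multiply a polynomial by x^k
    return [0] * k + f

# Entry of (1/s^2)C(lambda) as (coefficient mod p, power of x), per the formula.
def entry(i, j):
    if i == j + 1: return ((i - lam - 1) % p, 2)
    if i == j:     return ((lam - 2 * i) % p, 1)
    if i == j - 1: return ((i + 1) % p, 0)
    return (0, 0)

# Claimed kernel basis vector H_q: entry (-1)^b binom(lam, b) x^b at i = qp + b.
def H(q):
    v = [[0] for _ in range(n)]
    for b in range(p):
        i = q * p + b
        if i < n:
            v[i] = pshift([((-1) ** b * comb(lam, b)) % p], b)
    return v

for q in range(r + 1):
    v = H(q)
    for i in range(n):              # row i of (1/s^2)C(lambda) applied to H_q
        acc = [0]
        for j in range(n):
            c, k = entry(i, j)
            if c:
                acc = padd(acc, pshift([c * coef % p for coef in v[j]], k))
        assert all(coef == 0 for coef in acc), (q, i, acc)
```

All $r + 1 = 2$ claimed basis vectors pass, matching the rank asserted in the proposition.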
The final map we want to consider is parametrized by $0 \leq a < p - 1$. Given such an $a$, let $D(a) \in \mathbb M_{2p}(k[s, t])$ be the block matrix
\[D(a) = \begin{bmatrix} B(2p - a - 2) & D'(a) \\ 0 & B(a)^\dagger \end{bmatrix}\]
where $D'(a)$ and $B(a)^\dagger$ are as follows. The matrix $D'(a)$ is a $(2p - a - 1) \times (a + 1)$ matrix whose $(i, j)^\text{th}$ entry is
\[D'(a)_{ij} = \begin{cases} \frac{1}{i + 1}s^2 & \text{if} \ i - j = p - a - 2 \\ \frac{1}{a + 1}t^2 & \text{if} \ (i, j) = (p, a) \\ 0 & \text{otherwise.} \end{cases}\]
The matrix $B(a)^\dagger$ is produced from $B(a)$ by taking the transpose and then swapping the variables $s$ and $t$.
\[B(a)^\dagger = \begin{bmatrix} ast & -s^2 \\ at^2 & (a - 2)st & -2s^2 \\ & (a - 1)t^2 & \ddots & \ddots \\ && \ddots & -(a - 2)st & -as^2 \\ &&& t^2 & -ast \end{bmatrix}\]
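For concreteness (our illustration, taking $a = 2$ and $p > 3$ so that none of the coefficients vanish), the pattern above specializes to
\[B(2)^\dagger = \begin{bmatrix} 2st & -s^2 & 0 \\ 2t^2 & 0 & -2s^2 \\ 0 & t^2 & -2st \end{bmatrix}.\]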
\begin{Prop} \label{propQker}
The inclusion of $k[s, t]^{2p - a - 1}$ into $k[s, t]^{2p}$ as the top $2p - a - 1$ coordinates of a column vector induces an isomorphism $\ker B(2p - a - 2) \simeq \ker D(a)$.
\end{Prop}
\begin{proof}
As $D(a)$ is block upper-triangular with $B(2p - a - 2)$ the topmost block on the diagonal, it suffices to show that every element of $\ker D(a)$ is of the form $\left[\begin{smallmatrix} v \\ 0 \end{smallmatrix}\right]$ with respect to this block decomposition. That is, we must show that if
\[f = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_{2p - 1} \end{bmatrix}\]
is an element of $\ker D(a)$ then $f_i = 0$ for all $2p - a - 1 \leq i \leq 2p - 1$. Obviously it suffices to prove this for $\frac{1}{t^2}D(a)$ over $k[s, t, \frac{1}{t}]$ so let $x = \frac{s}{t}$.
We start by proving that $f_{2p - 1} = 0$. There are two cases. First assume that $a + 2 = 0$ in $k$. Then row $p$ of $\frac{1}{t^2}D(a)$ has only one nonzero entry, a $\frac{1}{a + 1}$ in column $2p - 1$. Thus $f \in \ker \frac{1}{t^2}D(a)$ gives $\frac{1}{a + 1}f_{2p - 1} = 0$ in $k[s, t, \frac{1}{t}]$, hence $f_{2p - 1} = 0$. Next assume that $a + 2 < p$. Then the induction from \cref{propBker} applies to rows $p + 1, \ldots, 2p - a - 2$ and gives
\[f_i = (-1)^{a + i}x^{2p - a - 2 - i}f_{2p - a - 2}\]
for $p \leq i \leq 2p - a - 2$. The condition imposed by row $p$ is
\[-(a + 2)xf_p - (a + 2)x^2f_{p + 1} + \frac{1}{a + 1}f_{2p - 1} = 0.\]
But note that the induction gave us $f_p = -xf_{p + 1}$ so this simplifies to $\frac{1}{a + 1}f_{2p - 1} = 0$ and again we have $f_{2p - 1} = 0$.
Now the condition imposed by the last row of $D(a)$ gives $f_{2p - 2} = axf_{2p - 1} = 0$. By induction the $i^\text{th}$ row gives $-if_{i - 1} = (2i + a + 2)xf_i + (i + a + 2)x^2f_{i + 1} = 0$, hence $f_{i - 1} = 0$, for $p - a \leq i \leq 2p - 2$ and this completes the proof.
\end{proof}
\section{Explicit computation of $\gker{M}$ and $\mathscr F_i(V(\lambda))$} \label{secLieEx}
In this final section we carry out the explicit computations of the sheaves $\gker{M}$, for every indecomposable $\slt$-module $M$, and $\mathscr F_i(V(\lambda))$ for $i \neq p$. Friedlander and Pevtsova \cite[Proposition 5.9]{friedpevConstructions} have calculated the sheaves $\gker{V(\lambda)}$ for Weyl modules $V(\lambda)$ such that $0 \leq \lambda \leq 2p - 2$. Using the explicit descriptions of these modules found in \Cref{secSl2} we can do the calculation for the remaining indecomposable modules in the category.
\begin{Prop} \label{thmKer}
Let $\lambda = rp + a$ with $0 \leq a < p$ the remainder of $\lambda$ modulo $p$. The kernel bundles associated to the indecomposable $\slt$-modules from \cref{thmPremet} are
\begin{align*}
\gker{\Phi_\xi(\lambda)} &\simeq \mathcal O_{\mathbb P^1}(a + 2 - p)^{\oplus r} \\
\gker{V(\lambda)} &\simeq \mathcal O_{\mathbb P^1}(-\lambda) \oplus \mathcal O_{\mathbb P^1}(a + 2 - p)^{\oplus r} \\
\gker{V(\lambda)^\ast} &\simeq \mathcal O_{\mathbb P^1}(-a)^{\oplus r + 1} \\
\gker{Q(a)} &\simeq \mathcal O_{\mathbb P^1}(-a) \oplus \mathcal O_{\mathbb P^1}(a + 2 - 2p)
\end{align*}
\end{Prop}
\begin{proof}
Assume first that $\xi = [1 : \varepsilon]$. Then using the basis from \Cref{secSl2} we get that the matrix defining $\Theta_{\Phi_\xi(\lambda)}$ has entries
\[(\Theta_{\Phi_\xi(\lambda)})_{ij} = \begin{cases}
ix & \text{if} \ \ i = j + 1 \\
(2i - a)z & \text{if} \ \ i = j \\
(a - i)y & \text{if} \ \ i = j - 1 \\
-(a + 1)\binom{r}{q}\varepsilon^{qp}x & \text{if} \ \ (i, j) = (0, qp + a) \\
0 & \text{otherwise.}
\end{cases}\]
Pulling back along the map $\iota\colon\mathbb P^1 \to \PG[\slt]$ from \cref{exNslt} corresponds with extending scalars through the homomorphism
\begin{align*}
\frac{k[x, y, z]}{(xy + z^2)} &\to k[s, t] \\
(x, y, z) &\mapsto (s^2, -t^2, st).
\end{align*}
Thus the matrix of $\Theta_{\Phi_\xi(\lambda)}$ becomes the matrix $M_\varepsilon(\lambda)$ from \cref{propMker} and we see that the kernel is free. A basis element, homogeneous of degree $m$, spans a summand of the kernel isomorphic to $k[s, t][-m]$. By definition the $\mathcal O_{\mathbb P^1}$-module corresponding to $k[s, t][-m]$ is $\mathcal O_{\mathbb P^1}(-m)$ so the description of the kernel translates directly to the description of the sheaf above.
The remaining cases are all identical. The modules $V(\lambda)$, $\Phi_{[0 : 1]}(\lambda)$, $V(\lambda)^\ast$, and $Q(a)$ give the matrices $B(\lambda)$, $B'(\lambda)$, $C(\lambda)$, and $D(a)$ whose kernels are calculated in Propositions \ref{propBker}, \ref{propBpker}, \ref{propCker}, and \ref{propQker} respectively.
\end{proof}
Next we compute $\mathscr F_i(V(\lambda))$ for any $i \neq p$ and any indecomposable $V(\lambda)$. The proof is by induction on $r$ in the expression $\lambda = rp + a$. For the base case we start with $V(\lambda)$ a simple module, i.e., $r = 0$. Note that for the base case we do indeed determine $\mathscr F_p(V(\lambda))$; it is during the inductive step that we lose $i = p$.
\begin{Prop} \label{thmFiSimp}
If $0 \leq \lambda < p$ then
\[\mathscr F_i(V(\lambda)) = \begin{cases}\gker{V(\lambda)} & \text{if} \ i = \lambda + 1 \\ 0 & \text{otherwise.}\end{cases}\]
\end{Prop}
\begin{proof}
First note that $V(\lambda)$ has constant Jordan type $[\lambda + 1]$ so \cref{thmFi} tells us that when $i \neq \lambda + 1$ the sheaf $\mathscr F_i(V(\lambda))$ is locally free of rank $0$, hence is the zero sheaf.
For $i = \lambda + 1$ recall from the previous proof that the map $\Theta_{V(\lambda)}$ of sheaves is given in the category of $k[s, t]$-modules by the matrix $B(\lambda)$ in \Cref{figMatB}. The $(\lambda + 1)^\text{th}$ power of a matrix of Jordan type $[\lambda + 1]$ is zero so the entries of $B(\lambda)^{\lambda + 1}$ are polynomials representing the zero function. We assume that $k$ is algebraically closed so this means $B(\lambda)^{\lambda + 1} = 0$ and therefore $\Theta_{V(\lambda)}^{\lambda + 1} = 0$. In particular
\[\gim[\lambda]{V(\lambda)} \subseteq \gker{V(\lambda)}\]
so the definition of $\mathscr F_{\lambda + 1}(V(\lambda))$ gives
\begin{equation} \label{eqnFi}
\mathscr F_{\lambda + 1}(V(\lambda)) = \frac{\gker{V(\lambda)} \cap \gim[\lambda]{V(\lambda)}}{\gker{V(\lambda)} \cap \gim[\lambda + 1]{V(\lambda)}} = \gim[\lambda]{V(\lambda)}.
\end{equation}
We have a short exact sequence of $k[s, t]$-modules
\[0 \to \im B(\lambda)^\lambda \to \ker B(\lambda) \to \frac{\ker B(\lambda)}{\im B(\lambda)^\lambda} \to 0.\]
If we show that the quotient $\ker B(\lambda)/\im B(\lambda)^\lambda$ is finite dimensional then by Serre's theorem and \autoref{eqnFi} this gives a short exact sequence of sheaves
\[0 \to \mathscr F_{\lambda + 1}(V(\lambda)) \to \gker{V(\lambda)} \to 0 \to 0\]
and completes the proof.
To show that $\ker B(\lambda)/\im B(\lambda)^\lambda$ is a finite dimensional module note that from $B(\lambda)^{\lambda + 1} = 0$ we get that the columns of $B(\lambda)^\lambda$ are contained in the kernel of $B(\lambda)$, which in \cref{propBker} we determined to be a free $k[s, t]$-module with basis element
\[G = \begin{bmatrix} s^\lambda \\ -s^{\lambda - 1}t \\ \vdots \\ (-1)^\lambda t^\lambda \end{bmatrix}.\]
We also know by \cref{lemBlambda} that the first entry in the $j^\text{th}$ column of $B(\lambda)^\lambda$ is $c_js^{\lambda + j}t^{\lambda - j}$ for some $c_j \in k$, so the $j^\text{th}$ column must be $c_js^jt^{\lambda - j}G$. The columns of $B(\lambda)^\lambda$ range from $j = 0$ to $j = \lambda$, so this shows that $G$ times any monomial of degree $\lambda$ is contained in the image of $B(\lambda)^\lambda$. Thus the quotient $\ker B(\lambda)/\im B(\lambda)^\lambda$ is spanned, as a vector space, by the set of vectors of the form $G$ times a monomial of degree strictly less than $\lambda$. There are only finitely many such monomials, therefore $\ker B(\lambda)/\im B(\lambda)^\lambda$ is finite dimensional and the proof is complete.
\end{proof}
Now for the inductive step we will make use of \cref{thmOm}, but in a slightly different form. Note that the shift in \cref{thmOm} is given by tensoring with the sheaf $\mathcal O_{\PG[\slt]}(1)$ associated to the shifted module $\frac{k[x, y, z]}{xy + z^2}[1]$. Likewise we consider $\mathcal O_{\mathbb P^1}(1)$ to be the sheaf associated to $k[s, t][1]$. Pullback through the isomorphism $\iota\colon\mathbb P^1 \to \PG[\slt]$ of \cref{exNslt} yields $\iota^\ast\mathcal O_{\PG[\slt]}(1) = \mathcal O_{\mathbb P^1}(2)$. Consequently, \cref{thmOm} has the following corollary.
\begin{Cor} \label{corFiOmega}
Let $M$ be an $\slt$-module and $1 \leq i < p$. With twist coming from $\mathbb P^1$ we have
\[\mathscr F_i(M) \simeq \mathscr F_{p - i}(\Omega M)(2p - 2i).\]
\end{Cor}
Observe that $i \neq p$ in the theorem; this is why our calculation of $\mathscr F_p(V(\lambda))$ for $\lambda < p$ does not induce a calculation of $\mathscr F_p(V(\lambda))$ when $\lambda \geq p$.
\begin{Prop}
If $V(\lambda)$ is indecomposable and $i \neq p$ then
\[\mathscr F_i(V(\lambda)) \simeq \begin{cases} \mathcal O_{\mathbb P^1}(-\lambda) & \text{if} \ i \equiv \lambda + 1 \pmod p \\ 0 & \text{otherwise.} \end{cases}\]
\end{Prop}
\begin{proof}
Let $\lambda = rp + a$ where $0 \leq a < p$ is the remainder of $\lambda$ modulo $p$. We prove the result by induction on $r$. The base case $r = 0$ follows from Propositions \ref{thmKer} and \ref{thmFiSimp}. For the inductive step assume $r \geq 1$. By hypothesis the formula holds for $rp - a - 2$ and by \cref{propOmega} we have $\Omega V(rp - a - 2) = V(\lambda)$. Applying \cref{corFiOmega} we get
\begin{align*}
\mathscr F_i(V(\lambda)) &= \mathscr F_{p - i}(V(rp - a - 2))(-2i). \\
\intertext{If $i = a + 1$ then}
\mathscr F_{a + 1}(V(\lambda)) &= \mathscr F_{p - a - 1}(V(rp - a - 2))(-2a - 2) \\
&= \mathcal O_{\mathbb P^1}(a + 2 - rp)(-2a - 2) \\
&= \mathcal O_{\mathbb P^1}(-a - rp) \\
&= \mathcal O_{\mathbb P^1}(-\lambda)
\end{align*}
whereas if $i \neq a + 1$ then $p - i \neq p - a - 1$ so $\mathscr F_{p - i}(V(rp - a - 2)) = 0$. This completes the proof.
\end{proof}
\bibliographystyle{../Refs/alphanum}
\section*{ABSTRACT}
\noindent
\textit{
A fundamental Doppler-like but asymmetric wave effect
that shifts
received signals in frequency
in proportion to
their respective source distances,
was recently described as means for
a whole new generation of communication technology
using angle and distance,
potentially replacing
TDM, FDM or CDMA
for multiplexing.
It is equivalent to
wave packet compression
by scaling of time
at the receiver,
converting
path-dependent phase into
distance-dependent shifts,
and can multiply
the capacity of physical channels.
The effect was
hitherto unsuspected in physics,
appears to be responsible for both
the cosmological acceleration
and
the Pioneer 10/11 anomaly,
and is exhibited
in audio data.
This paper discusses
how it may be exploited for
instant, passive ranging of signal sources, for
verification, rescue and navigation;
incoherent aperture synthesis
for smaller, yet more accurate radars;
universal immunity to
jamming or interference;
and
precision frequency scaling of
radiant energy in general.
}
\newcommand{\Section}[2]{\section{#2}\label{s:#1}}
\newcommand{\Subsection}[2]{\subsection{#2}\label{ss:#1}}
\newcommand{\ibid}{ibid.}
\newcommand{\etc}{etc.}
\Section{intro}{Introduction}
A previously unsuspected result of the wave equation,
enabling a receiver to get frequency shifts
in an incoming signal
in proportion to
the physical distance of its source,
was recently described
\cite{Prasad2005}.
Its main premise,
that signals from real sources
must have nonzero spectral spread,
seems to be well supported by
the cosmological and
the Pioneer 10/11 anomalous acceleration datasets,
and
a remarkably large gamut of terrestrial mysteries
that can be mundanely resolved.
This perceived support actually concerns
a further inference that the mechanism
could occur naturally in our instruments
on the order of magnitude of
$
10^{-18}
~\reciprocal{\second}
$.
This is a decay corresponding, as half-life, to
the age of the solar system,
and is too small for
purely terrestrial applications.
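As a rough check of this correspondence (our arithmetic; the value $3.156 \times 10^7$ seconds per year is assumed):

```python
import math

decay = 1e-18                          # s^-1, the inferred natural rate
half_life_years = math.log(2) / decay / 3.156e7
# ~2.2e10 yr: within a factor of a few of the ~4.6e9 yr age of the
# solar system, i.e. on the same order-of-magnitude scale.
assert 2.1e10 < half_life_years < 2.3e10
```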
The wave effect is consistently demonstrated with
acoustic samples, however,
testifying to
its fundamental and generic nature.
While a general exposition deserves to be made in due course
in a physics forum,
it presents fundamental new opportunities for
intelligence and military technologies.
Section \ref{s:theory} contains a brief review of
the theory of the effect,
showing that
all real signals necessarily carry source distance information
in the spectral distribution of phase,
analogous to
the ordinary spatial curvature of wavefronts,
and that by scanning this phase spectrum,
each frequency in a received waveform
comprising multiple signals
would be shifted in proportion to
its own source distance and the scanning rate.
Section \ref{s:impl} presents
the general principles of realization
by both
spectrometry
and
digital means.
Section \ref{s:formalism} shows
how this enables separation of signals by physics
instead of modulation,
whose implications for information theory
have been discussed [\ibid].
Intelligence and military possibilities
are considered for the first time
in Section \ref{s:mil}.
\Section{theory}{Hubble's law shifts, temporal parallax}
The sole premise for the effect, as mentioned, is
the nonzero bandwidth of a real signal.
The Green's function for the general wave equation
concerns an impulse function
$\delta(\mathbf{x},t)$
as the elemental source.
Its Fourier transform is
\begin{equation} \label{e:impulseFT}
F[\delta(t)] =
\int \delta(t) \, e^{-i \omega t} \, dt
=
1
\quad
,
\end{equation}
which says that
\emph{all spectral components of an impulse
start with the same phase}.
The source itself is the only common reference
across any continuous set of frequencies,
hence the \emph{spectral phase contours}
indicate its distance
(Fig.~\ref{f:PhaseGradient}).
\begin{figure}[ht]
\centering
\psfig{file=phasegrad-2.eps}
\caption{Phase contours and gradient for a source impulse}
\label{f:PhaseGradient}
\end{figure}
The main difficulty in tapping this source
of distance information
is that it lies in
differences of phase across incoming frequencies,
and
phases are hard to measure accurately.
However,
if we scanned the spectrum at rate $d \widehat{k}/dt$,
then,
for each contour $\phi$
and source distance $r$,
we should encounter
an increasing or decreasing phase
proportional to
$r$ and
the integration interval $\Delta \widehat{k}$,
as shown by
the shaded areas in the figure.
The trick that yields
this remarkable asymmetric effect
is in the definition that
\emph{a rate of change of phase is a frequency},
so that
the instantaneous measures of the spectral phase contours
obtained from the scanning
have the form of
frequency shifts proportional to the slopes,
and therefore to
the respective source distances.
These distance-dependent shifts
resemble Hubble's law in astronomy
but are strictly linear
due to their nonrelativistic, mundane origin,
and are rigorously predicted
by basic wave theory,
as follows.
The instantaneous phase of a sinusoidal wave
at $(\mathbf{r}, t)$,
from a source at $\mathbf{r} = 0$,
would be
\begin{equation} \label{e:twPhase}
\phi(\mathbf{k}, \omega, t) =
\mathbf{k} \cdot \mathbf{r} - \omega \; t
\quad
,
\end{equation}
and leads to
the differential relation
\begin{equation} \label{e:dphase}
\left. \Delta \phi \right|_{\omega,t}
=
\Delta (\omega \; t)
+
\mathbf{k} \cdot \Delta \mathbf{r}
+
\Delta \mathbf{k} \cdot \mathbf{r}
\quad
.
\end{equation}
The first term
$\Delta (\omega \; t)$
clearly concerns the signal content if any,
and has no immediate relevance.
The second term
$\mathbf{k} \cdot \Delta \mathbf{r}$
expresses
the path phase differences
at any individual frequency,
and is involved in
the ordinary Doppler effect,
as it describes phase change
due to changing source distance,
as well as
image reconstruction in holography
and synthetic aperture radar (SAR),
as the reconstructed image concerns
information of incremental distances of
the spatial features of the image.
The third term,
clearly orthogonal to other two,
represents
phase differences across frequencies
for any given (fixed) source distance.
Then, by varying a frequency selection $\widehat{k}$
at the receiver,
we should see
an incremental frequency, or shift,
\begin{equation} \label{e:fshift}
\delta \omega =
\lim_{\Delta t \rightarrow 0}
\left[
\frac{
\left.
\Delta \phi(\omega)
\right|_{\omega,r,t}
}
{ \Delta k }
\times
\frac{ \Delta \widehat{k} }
{ \Delta t }
\right]
=
\left.
\frac{\partial \phi}{\partial k}
\right|_{\omega,r,t}
\cdot
\frac{d \widehat{k}}{dt}
\quad
.
\end{equation}
(Note that $k$ can only refer to the incoming wave vector
in the denominator.)
As this holds for each individual $\omega$,
from equation (\ref{e:twPhase}),
the signal spectrum would be uniformly shifted by
the normalized shift factor
\begin{equation} \label{e:zshift}
\begin{split}
z(r) \equiv
\frac{\delta \omega}{\widehat{\omega}}
&=
\frac{1}{\widehat{\omega}}
\left.
\frac{\partial \phi}{\partial k}
\right|_{\omega,r,t}
\cdot
\frac{d \widehat{k}}{dt}
=
\frac{r}{\widehat{k} c}
\cdot
\frac{d \widehat{k}}{dt}
=
\frac{\beta r}{c}
=
\alpha \, r
\quad
,
\\
\text{with}\quad
\beta &\equiv \widehat{k}^{-1} (d \widehat{k}/dt)
\quad
\text{and}
\quad
\alpha = \beta / c
\quad
,
\end{split}
\end{equation}
where $t$ denotes time kept
by the receiver's clock.
Further,
the shifts reflect only
the instantaneous value of $\beta$,
which is the normalized rate of change
at the receiver.
Fig.~\ref{f:Parallax} shows
that $\delta \omega$ is
a temporal analogue of spatial parallax,
with the instantaneous value of $\alpha \equiv \beta/c$
serving in the role of
lateral displacement of the observer's eyes:
at each value
$\alpha_1$ or $\alpha_2$,
the signal spectrum $\mathcal{F}(\omega)$ shifts to
$\mathcal{F}(\omega_1)$ or
$\mathcal{F}(\omega_2)$,
respectively,
and if the distance to the source were increased to
$r + \delta r$,
the spectrum would further shift to
$\mathcal{F}(\omega_3)$.
Unlike with ordinary (spatial) parallax,
the receiver can be fixed \emph{and monostatic},
and yet exploit
the physical information
of source distance.
\begin{figure}[ht]
\centering
\psfig{file=phenv-2.eps}
\caption{Temporal parallax}
\label{f:Parallax}
\end{figure}
\Section{impl}{Physics of realization}
There are three basic ways of accomplishing
Fourier decomposition or frequency selection
in a receiver: using
diffraction or refraction,
a resonant circuit,
or
by sampling and digital signal processing (DSP).
Realization of the shifts in all three methods
has been described,
along with how this ``virtual Hubble'' effect
had remained unnoticed so long
[\ibid].
A review of
the diffraction and DSP methods
is necessary as theoretical foundation for
the application ideas to be discussed.
These methods would also be applicable for
array antennas and software-defined radio,
respectively.
\Subsection{diffract}{Diffractive implementation}
Diffractive selection of
a wavelength $\widehat{k}$
concerns using a diffraction grating
to deflect normally incident rays
to an angle $\theta$
corresponding to the selected $\widehat{k}$,
as determined by the grating equation
$n \lambda = l \sin \theta $,
where $n$ denotes the order of diffraction,
and $l$ is the grating interval.
The property exploited is that
rays arriving at one end of the grating
combine with rays
that arrived at the other end
\emph{a little earlier}.
If we could change the grating intervals $l$
in between,
the rays that get summed at
a diffraction angle $\theta$
would correspond to changing
grating intervals $l(t)$,
as depicted in
Fig.~\ref{f:GratingInstants}.
Their wavelengths must then relate to
the grating interval as
$
n \, d \lambda/dt = (dl/dt) \, \sin \theta
$.
Upon dividing this by
the grating equation,
we obtain
the modified relation
$
\widehat \lambda^{-1} (d\widehat \lambda/dt)
=
\widehat l^{-1} (d\widehat l/dt)
\equiv
- \beta .
$
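This logarithmic-derivative identity is easy to check numerically (a sketch; the diffraction order, angle, and decay rate below are arbitrary choices of ours):

```python
import math

n, theta = 1, math.pi / 6            # diffraction order and angle (arbitrary)
l0, beta = 2e-6, 3.0                 # grating interval l(t) = l0 * exp(-beta*t)

def l(t): return l0 * math.exp(-beta * t)
def lam(t): return l(t) * math.sin(theta) / n   # grating equation n*lam = l*sin(theta)

h = 1e-9
rate = lambda g: (g(h) - g(-h)) / (2 * h) / g(0)   # normalized rate at t = 0
assert abs(rate(lam) - rate(l)) < 1e-9             # equal normalized rates
assert abs(rate(l) + beta) < 1e-6                  # both equal -beta here
```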
\begin{figure}[ht]
\centering
\psfig{file=grating-instants.eps}
\caption{Time-varying diffraction method}
\label{f:GratingInstants}
\end{figure}
This result is not merely
a coincidence of different wavelengths
at the focal point,
as would be obtained
with a nonuniform grating.
Each of the summed contributions itself
has a time-varying $\lambda$,
so the variation over the sum is consistent
with the component variations.
A grating gathers more total power
than, say, a two-slit device,
and the waves arrive with
instantaneously varying phases
\begin{equation} \label{e:phaserate}
\frac{d \phi}{dt}
\equiv
- \widehat\omega
=
\frac{\partial \phi}{\partial t}
+
\nabla_\mathbf{r} \phi \cdot \mathbf{\dot{r}}
+
\nabla_\mathbf{r'} \phi \cdot \mathbf{\dot{r}'}
+
\nabla_{\mathbf{k}} \phi \cdot
\mathbf{\dot{\widehat{k} }}
\quad
,
\end{equation}
where
the first term $\partial \phi/\partial t$ is
the signal ($\omega \; t$) contribution
in equation (\ref{e:twPhase})
and is equal to $-\omega$;
the second term (in $\mathbf{\dot{r}}$) concerns
the real motion and Doppler effect if any,
to be ignored hereon;
the third term (in $\mathbf{\dot{r}'}$) represents
a similar Doppler shift from
longitudinal motion of
either end of the receiver,
which would be generally negligible
for $r' \ll r$;
and
only the last term concerns
equation (\ref{e:fshift}).
\begin{comment}
The foregoing analysis partly explains why
the effect has generally escaped notice
in the past.
The distance-dependent character of the shifts
cannot be easy to notice
in the transient effects of
relative motion of a prism or a grating,
especially as
the detected transient effect would concern
only an individual wavelength.
To be noticeable,
a substantial pattern of spectral lines
must appear shifted in unison,
but that requires
uniform variation of grating intervals,
which is difficult to get
at visible wavelengths.
Mechanical or thermal stresses
are very difficult to apply uniformly
and
at a continuously increasing or decreasing rate.
Acousto-optic cells are advertized as
``continuously variable'',
but the continuity only refers to
a large number of standing wave modes
available for Bragg diffraction
--
if the excitation frequency is changed,
the new interval takes effect only after
the new mode stabilizes.
The effect could be produced by
varying the length of a Fabry-Perot resonator,
similarly to moving the detector,
but the usual notion of Fourier analysis
concerns steady-state spectra
and the transient effects
have been easy to overlook.
\end{comment}
\Subsection{dsp}{DSP implementation}
In DSP,
$\widehat{k}$'s are determined by
the sampling interval $T$,
and can therefore be continuously varied
by changing $T$.
The discrete Fourier transform (DFT)
is defined as
\begin{equation} \label{e:dft}
\begin{split}
F(m \omega_T)
&=
\sum_{n = 0}^{N - 1}
e^{-i m n \omega_T T}
\,
f(n T)
\\
\text{with the inverse}
\quad
f(n T)
&=
\frac{1}{N}
\sum_{m = 0}^{N - 1}
e^{i m n \omega_T T}
\,
F(m \omega_T)
\quad
,
\end{split}
\end{equation}
where
$f$ is the input signal;
$T$ is the sampling interval;
$N$ is the number of samples per block;
and
$\omega_T = 2 \pi / N T$.
The inversion is governed by
the orthogonality condition
\begin{equation} \label{e:dordorth}
\sum_{l = 0}^{N - 1}
e^{i m \omega_T l T}
\,
e^{-i n \omega_T l T}
=
\frac{
1 - e^{2 \pi i (m - n)}
}{
1 - e^{2 \pi i (m - n)/N}
}
=
N \delta_{mn}
\quad
,
\end{equation}
where $\delta_{mn} = 1$
if $m = n$, else $0$.
These definitions suggest that
the instantaneous selections
$\widehat\omega_T \equiv \widehat k c$
can be varied via
the sampling interval $T$.
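The scaling of the analysis frequencies with $T$ is elementary to exhibit (a toy sketch of ours using plain uniform sampling, not the variable-$T$ scheme itself): a fixed complex tone of frequency $f$ lands in DFT bin $fNT$, so rescaling $T$ rescales the measured frequency index.

```python
import cmath

N = 64
f = 8.0                              # tone frequency, cycles per unit time

def dft_peak_bin(T):
    # uniformly sample exp(2*pi*i*f*t) at interval T and locate the DFT peak
    samples = [cmath.exp(2j * cmath.pi * f * n * T) for n in range(N)]
    spectrum = [sum(s * cmath.exp(-2j * cmath.pi * m * n / N)
                    for n, s in enumerate(samples)) for m in range(N)]
    return max(range(N), key=lambda m: abs(spectrum[m]))

assert dft_peak_bin(1 / 64) == 8     # bin index f*N*T = 8
assert dft_peak_bin(1 / 128) == 4    # halving T halves the bin index
```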
To verify that
a controlled variation of $T$
will indeed yield
the desired shifts,
observe that in equation (\ref{e:phaserate}),
both the real Doppler terms, in
$\dot{\mathbf{r}}$
and
$\dot{\mathbf{r}}'$,
can be ignored,
as $r' \ll r$ for sources of practical interest.
The surviving term on the right is
\begin{comment}
\begin{equation*}
\frac{d \phi}{dt}
\equiv
- \widehat\omega
=
\frac{\partial \phi}{\partial t}
+
\frac{\partial \phi}{\partial k}
\frac{d\widehat k}{dt}
\quad
,
\end{equation*}
\end{comment}
$
(\partial \phi/\partial k)
(d\widehat k/dt)
$
where
$\partial \phi/\partial t = - \omega$,
as before,
and
\begin{equation*}
\frac{d\widehat k}{dt} =
\frac{1}{c}
\frac{
d \widehat \omega_T
}{dt}
=
\frac{1}{c}
\frac{d}{dt}
\left(
\frac{2 \pi}{NT}
\right)
=
- \frac{2 \pi}{N c T^2}
\frac{dT}{dt}
=
- \widehat k
\,
\frac{1}{T}
\frac{dT}{dt}
\quad
,
\end{equation*}
so that, corresponding to equation (\ref{e:zshift}),
we do get
\begin{equation} \label{e:samplingmod}
\widehat k^{-1} (d\widehat k/dt) \equiv
\beta
=
- T^{-1} (dT/dt)
\quad
,
\end{equation}
confirming the desired effective variation of
$\widehat{k}$'s.
$\Box$
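Equation (\ref{e:samplingmod}) also admits a quick numerical sanity check (a sketch; the values of $N$, $T_0$ and the exponential rate $\gamma$ are arbitrary):

```python
import math

N, c = 64, 3e8
T0, gamma = 1e-10, 5.0               # T(t) = T0 * exp(gamma * t), arbitrary

def k_hat(t):                        # \widehat{k} = 2*pi / (N c T(t))
    return 2 * math.pi / (N * c * T0 * math.exp(gamma * t))

h = 1e-8
beta = (k_hat(h) - k_hat(-h)) / (2 * h) / k_hat(0)   # k^-1 dk/dt, numerically
assert abs(beta - (-gamma)) < 1e-6 * gamma           # beta = -T^-1 dT/dt
```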
Fig.~\ref{f:TimeDomain} explains the result.
The incoming wave
presents increasing phase differences
$\delta \phi_1$,
$\delta \phi_2$,
$\delta \phi_3$, ...
within the successive samples obtained from
the diminishing intervals
$\delta T_1 = T_1 - T_0$,
$\delta T_2 = T_2 - T_1$,
$\delta T_3 = T_3 - T_2$,
\etc
From the relation
$\widehat \omega_T \equiv \widehat k c = 2 \pi / N T$,
gradients of
the spectral phase contours
can be quantified as
\begin{equation*}
\frac{\partial \phi}{\partial T} =
\frac{\partial \phi}{\partial \widehat k}
\frac{d \widehat k}{dT}
=
-
\frac{2 \pi}{N c T^2}
\frac{\partial \phi}{\partial \widehat k}
\end{equation*}
so that by equation (\ref{e:samplingmod}),
each phase gradient reduces to
\begin{equation} \label{e:ksampling}
\frac{\partial \phi}{\partial T}
\frac{dT}{dt}
=
\frac{- 2 \pi}{N c T}
\frac{\partial \phi}{\partial \widehat k}
\cdot
\frac{1}{T}
\frac{dT}{dt}
=
\frac{\widehat \omega_T}{c}
\frac{\partial \phi}{\partial \widehat k}
\cdot
\frac{1}{\widehat k}
\frac{d\widehat k}{dt}
=
\frac{\partial \phi}{\partial \widehat k}
\frac{d\widehat k}{dt}
\end{equation}
identically,
validating the approach.
\begin{figure}[h]
\centering
\psfig{file=td-nopat.eps}
\caption{Variable sampling method}
\label{f:TimeDomain}
\end{figure}
\begin{comment}
This mechanism too is clearly rather simple,
and we may once again wonder how it escaped notice,
especially given
the sophisticated analytical tools available today.
Again,
the inherently nonstatic nature of the effect
seems to be the cause,
as even wavelet analysis concerns
strictly static time and frequency scales.
\end{comment}
\Subsection{scale}{General operating principles}
For a desired shift $z$ at a range $r$,
equation (\ref{e:zshift}) yields
\begin{equation} \label{e:scaling}
\frac{\delta \widehat{k}}{\widehat{k}}
\equiv
- \frac{ \delta {T}}{T}
\approx
\frac{ c z \; \delta t}{r}
\end{equation}
for normalized incremental change of
the receiver's grating or sampling interval
required
over a small interval $\delta t$.
In a DSP implementation,
appropriate at suboptical frequencies,
this determines
the needed rate of change
over a sampling interval $\delta T$.
As $\delta T$ would be nominally chosen
based on
the (carrier) frequencies of interest
and not on the range,
the nominal rate of change for achieving
a useful $z$ at a given range $r$
would be independent of
the operating frequencies.
For example,
for a $1~\giga\hertz$ signal,
a suitable choice of sampling interval
is $\delta t = 100~\pico\second$,
regardless of the range.
Choosing $z = 2$,
which is fairly large in astronomical terms
but convenient for the present purposes,
we obtain
$
\delta \widehat{k}/{k}
= 6 \times 10^{-2} / r
$
per sample
(taking $c = 3 \times 10^8~\metre\per\second$).
At $r = 1~\metre$,
the result is a somewhat demanding
$6 \times 10^{-2}$,
or $6\%$, per sample,
but for $r = 1$, $10^3$ or $10^6~\kilo\metre$,
it is
$
6 \times 10^{-5},~
6 \times 10^{-8}
$
and
$
6 \times 10^{-11}
$
per sample,
respectively.
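These figures follow directly from equation (\ref{e:scaling}); the arithmetic can be reproduced by a few lines of script (constants as in the text; ranges are assumed here in metres).

```python
# Per-sample normalized change required by equation (e:scaling):
# delta_k/k = c * z * dt / r, with z = 2, dt = 100 ps, c = 3e8 m/s
# as in the text. Ranges below are assumed illustrative values.
c, z, dt = 3.0e8, 2.0, 100e-12

def rate_per_sample(r_metres):
    """Normalized sampling-interval change per sample at range r."""
    return c * z * dt / r_metres

r1 = rate_per_sample(1.0)    # 1 m      -> 6e-2 per sample
r2 = rate_per_sample(1e3)    # 1 km     -> 6e-5 per sample
r3 = rate_per_sample(1e6)    # 10^3 km  -> 6e-8 per sample
```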
Note that
since equation (\ref{e:scaling}) prescribes
a normalized rate of change,
the instantaneous rate of change must grow
\emph{exponentially}
over the course of the observation:
integrating equation (\ref{e:scaling})
gives
\begin{equation} \label{e:exp}
\Delta\widehat{k}
\equiv
\Delta T ^{-1}
=
e^{c z \Delta t / r}
,
\end{equation}
where $\Delta t$ denotes
the total period of observation.
While an exponential variation is
generally difficult to achieve
other than with DSP,
two other problems
must be also contended with:
the total variation over
an arbitrarily long observation
would be impossible anyway
and
it could exceed the source bandwidth even over
a fairly short observation.
The only solution is to split
the observation time into windows,
as in DFT,
resetting the sampling interval and its variation
at the beginning of each window,
so that
the variation is only continuous
within each window.
For example,
the normalized rate
$6 \times 10^{-8}$
calculated above for
$100~\pico\second$ sampling
and
a $10^3~\kilo\metre$ range
amounts to
a factor of $3.773 \times 10^{260}$ for
each second of observation!
Over a $1~\micro\second$ window, however,
the sampling rate would change
only by the factor $1.0006$.
This is also
the spectral spread required of
the wavepackets emitted by the source,
and conversely,
the window can be set to match
the spread.
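As a check of equation (\ref{e:exp}), the two factors quoted above follow from a two-line computation; the range is taken here as $10^6~\metre$, the value that reproduces the per-sample rate of $6 \times 10^{-8}$.

```python
import math

# Total scale factor over an observation of duration dt_total,
# from equation (e:exp): exp(c * z * dt_total / r).
# z = 2 and c as in the text; r = 1e6 m is the assumed range.
c, z = 3.0e8, 2.0
r = 1.0e6

factor_1us = math.exp(c * z * 1e-6 / r)   # over a 1 microsecond window
factor_1s  = math.exp(c * z * 1.0  / r)   # over a full second
```

The one-microsecond window gives the modest factor $\approx 1.0006$, while a full second gives $e^{600} \approx 3.77 \times 10^{260}$, illustrating why windowed resets are unavoidable.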
It is possible to \emph{cascade}
multiple stages of gratings or DSP,
in order to multiply
the magnitude of the shifts.
Cascading seems straightforward with
optical gratings,
but in DSP,
as the data stream is already broken up
into discrete samples by the first stage,
and the sampling interval is to be varied
in each successive stage
relative to its predecessor,
it becomes necessary to interpolate the samples
between stages.
Interpolation is similarly needed for
the reverse shift stage in
distance-based signal separation
(Section \ref{ss:ddm}).
These shifts have been verified only by simulation
for
electromagnetic waves.
For sound data
($c \approx 330~\metre\per\second$),
the software does reveal
nonuniform spreading of
subbands of acoustic samples,
consistent with
the likely distributions of their sources.
Testing in an adequately equipped sound laboratory,
as well as with radio waves,
is still needed.
\Subsection{diff}{Differential techniques}
The effect thus scales well
from terrestrial to planetary distances,
but
the incremental changes required per sample
are very small.
Errors in these changes,
from inadvertent causes
including thermal or mechanical stresses,
could lead to large errors
in distance measurements using the effect.
The linearity of the effect can be exploited
to offset such errors by
differential methods.
From equation (\ref{e:zshift}), we have
\begin{equation} \label{e:zdelta}
\Delta z
=
\alpha \Delta r
+
r \Delta \alpha
+
o(\Delta r, \Delta \alpha)
,
\end{equation}
relating first order uncertainties.
First order error due to
an uncertainty $\Delta \alpha$
can be eliminated by measurements
using different $\alpha$
and using the differences.
Specifically,
an error $\Delta z$ would limit
the capability for source separation,
presuming $r$ is known.
By designing a receiver to depend on
the difference between two sets of shifts
for the same incoming waves,
\eg by applying the DSP method twice
with different values of $\alpha$,
a $\Delta z$ error can be eliminated.
Conversely,
for ranging applications,
where $z$ is the measured variable,
from
$
r = z / \alpha
$,
we get
\begin{equation} \label{e:rdelta}
\Delta r
=
\Delta z / \alpha
-
r \Delta \alpha
+
o(\Delta z, \Delta \alpha)
,
\end{equation}
so that
the first order error $\Delta r$ can be
once again eliminated using differences.
This is
a temporal form of triangulation,
illustrated by the lines for
$\alpha_1$ and $\alpha_2$
that converge at $r$
in Fig.~\ref{f:Parallax}.
Higher order differences would yield
more accuracy.
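A toy numerical sketch of this cancellation (all values assumed for illustration): two measurements $z_i = \alpha_i r$ that share a common systematic offset recover $r$ from their difference, while either measurement alone is biased.

```python
# Temporal-triangulation sketch: a common offset error dz in the two
# measured shifts cancels in the difference, recovering the range r.
# All numbers below are assumed, purely for illustration.
r_true = 5.0e4               # metres
alpha1, alpha2 = 1.0e-6, 3.0e-6
dz = 2.5e-3                  # unknown common error in each measured z

z1 = alpha1 * r_true + dz
z2 = alpha2 * r_true + dz

r_naive = z1 / alpha1                       # single measurement: biased by dz/alpha1
r_diff  = (z2 - z1) / (alpha2 - alpha1)     # difference: the offset cancels
```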
\Subsection{phaseshift}{Phase shifting and path length variation}
The main difficulty with
the method of Section \ref{ss:diffract}
is ensuring uniformity of the grating intervals
even while they are changing.
The method is not realizable by acousto-optic cells
for this reason.
Controlled nonuniform sampling and sample interpolation
pose similar difficulties for DSP%
\footnote{
Radio telescopes, for instance, incorporate
1- or 3-bit sampling at an intermediate frequency
for correlation spectroscopy.
Interpolation would add further noise
to this already coarse phase information.
}.\xspac
The solution is instead to vary the path length
\emph{after} a fixed grating,
say using the longitudinal Faraday effect
and circular polarization.
As the variation now concerns a bulk property,
it should be easier to realize,
and the equivalence is straightforward to prove.
The analogous simplification for DSP is to modify,
instead of the sampling interval $T$,
the forward transform kernel to
\begin{equation} \label{e:kdft}
F^{(\omega)} (m \widehat{\omega}_0)
\equiv
\sum_{n = 0}^{N - 1}
e^{i m \widehat{\omega}(nT) \, n T}
\,
f(nT)
,
\quad
\widehat{\omega}(nT)
=
\widehat{\omega}_0
\, e^{\beta \, n T}
,
\end{equation}
\ie
apply changing phase shifts
to the successive samples
corresponding to the path length.
This is more complex,
but avoids both interpolation noise
and the need for access to the RF front end.
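A minimal sketch of such a chirped-kernel transform, under the assumed reading of equation (\ref{e:kdft}) that the kernel phase accumulates as $m\,\widehat\omega(nT)\,nT$; with $\beta = 0$ it reduces to an ordinary unnormalized DFT sum, which the last lines verify against an FFT.

```python
import numpy as np

# Sketch of the modified forward transform of equation (e:kdft): the
# kernel frequency is chirped exponentially across the samples.  The
# phase convention m * w(nT) * nT is an assumed reading of the text.
def modified_dft(f, T, beta):
    N = len(f)
    w0 = 2 * np.pi / (N * T)               # fundamental \hat{omega}_0
    n = np.arange(N)
    w = w0 * np.exp(beta * n * T)          # \hat{omega}(nT) = w0 * e^{beta nT}
    m = np.arange(N)[:, None]
    kernel = np.exp(1j * m * w * n * T)    # e^{i m \hat{omega}(nT) nT}
    return kernel @ f

# beta = 0 check: reduces to the plain sum  sum_n f[n] e^{2 pi i mn/N},
# i.e. N times numpy's inverse FFT.
rng = np.random.default_rng(0)
f = rng.standard_normal(32)
F0 = modified_dft(f, T=1.0e-3, beta=0.0)
F_ref = len(f) * np.fft.ifft(f)
```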
\Section{formalism}{Source separation theory}
\Subsection{kernel}{Transformation kernel}
Fourier theory more generally involves the continuous form of
the orthogonality condition (\ref{e:dordorth}),
given by
\begin{equation} \label{e:ordorth}
\int_t
e^{i \widehat\omega t} \,
e^{i (k r - \omega t) } \, dt
=
e^{ikr} \delta(\widehat\omega - \omega)
\quad
,
\end{equation}
where
$\delta()$ is the Dirac delta function.
All of the methods described require continuously varying
the mechanism of frequency selection.
This is equivalent, in a basic sense,
to varying the receiver's notion of
the scale of time,
the result being
a change of the orthogonality condition to
\begin{equation} \label{e:modorth}
\begin{split}
\int_t
e^{i \widehat\omega \Delta(t) \, t}
e^{i [kr \Delta(r) - \omega t]}
\, dt
&=
\int_t
e^{ikr\Delta}
\,
e^{i (\widehat\omega \Delta - \omega) t}
\, dt
\\
&\equiv
e^{ikr\Delta}
\,
\delta [\widehat\omega \Delta - \omega]
\quad
,
\end{split}
\end{equation}
where
$\Delta \equiv \Delta(r) = (1 + \alpha r)$
from equation (\ref{e:zshift}),
but is also equivalent to
$\Delta(t) = (1 + \alpha ct)$,
via the relation
$c = r/t$.
How did we get to equation (\ref{e:modorth})?
Notice that
equation (\ref{e:ordorth})
already contains
the frequency selection factor
$e^{i \widehat{\omega} t}$
--
the $\Delta$ in the exponent is
the variation of
$\widehat{\omega}$
provided by the methods of Section \ref{s:impl}.
The second factor
$e^{i [kr\Delta(r) - \omega t]}$
must correspondingly represent
the incoming component picked by
the modified selector;
its only difference from
equation (\ref{e:ordorth})
is the $\Delta$ multiplying
the path contribution to phase, $kr$.
Why should this $\Delta$ multiply $kr$
and not $\omega t$?
The answer lies in the basic premise of
equations (\ref{e:twPhase})-(\ref{e:zshift})
that our receiver manipulates
the phase of each incoming Fourier component
individually,
hence $\omega t$, representing
the signal component of the instantaneous phase,
is not touched.
The $kr$ term is affected, however,
since the summing builds up
amplitude at only
that value of $k$ which matches
$
\widehat{\omega} \equiv \widehat{k} c
$.
This aspect is readily verified by simulation,
and it defines a lower bound on
the window size (Section \ref{ss:scale})
in the optical methods,
since the window reset would break
the photon integration in a photodetector.
Equation (\ref{e:modorth}) is more general than
equation (\ref{e:ordorth}),
as the latter corresponds to
the special case of $\Delta = 1$
or $\alpha = 0$.
This property has
a fundamental consequence
impacting
all of physics and engineering:
\emph{
Since $\alpha$ refers to
the physics of the instruments
and could be
arbitrarily small but nonzero,
there is fundamentally no way to rule out
a nonzero $\alpha$ in
any finite local set of instruments.
The only way to detect a nonzero $\alpha$
is to look for spectral shifts
of very distant objects
}
as a large enough $r$ would yield
a measurable $z(r)$%
\footnote{
\label{foot:cosmos}
A personal hunch that something like this
could be responsible for
the cosmological redshifts
had prompted an informal prediction of
the cosmological acceleration to
some IBM colleagues in ca.\ 1995--1996.
The modified eigenfunctions
$e^{i [kr\Delta(r) - \omega t]}$
are known in astrophysics
as the photon and particle eigenfunctions
over relativistic space-time
\cite{Parker1969}.
The natural value of
[$\alpha \approx$] $10^{-18}~\reciprocal{\second}$
cited in the Introduction can come from
the probability factor
$e^{-W/k_B T}$
for cumulative lattice dislocations
under centrifugal or tidal stresses
causing creep.
At $T \approx 300~\kelvin$,
it is
$3 \times 10^{-11}~\reciprocal{\second}$
at $W = 1~{\ensuremath \mathrm{eV}\xspace}$,
$1.5 \times 10^{-18}~\reciprocal{\second}$
at $W = 1.7~{\ensuremath \mathrm{eV}\xspace}$,
\etc
The point is that
solid state theory mandates such creep,
yet it is not treated
in any branch of science or engineering.
Many of these details were put together in
mundane \texttt{arxiv.org} articles
\cite{PrasadArxiv},
to be eventually compiled into
a comprehensive paper.
Thanks to NASA/JPL's continued portrayal of the anomaly
as acceleration,
\emph{all} other explanations tested or offered
have been more obvious or exotic,
and totally fruitless
(\cf \cite{Anderson2002}).
}.
\begin{comment}
The NASA/JPL researchers incidently looked for
a systematic error in earth clocks
to explain the Pioneer 10/11 data
\cite{Anderson2002},
but only local consistency
could at all be verified.
The only distant ``clocks'' we have ever had are
NASA's six spin-stabilized deep space missions
\cite{Anderson1998},
and
they consistently point to an inconsistency with
``deep space time''.
Variations in the observed error,
and its interpretation as acceleration,
have led to confused interpretations
by many reseachers,
but all of the data correlates well with
centrifugal stresses and solar tide variations.
Another, related set of systematic errors has been identified
in the calibration of space telescopes
and is documented elsewhere
\cite{Prasad2004USPspec}.
\end{comment}
\Subsection{ddm}{Distance division multiplexing\texttrademark}
Almost all the applications to be discussed
critically depend on
the fundamental separation of sources
enabled by
the present wave effect.
Using the notation of quantum theory,
we may denote
incoming signals by ``kets'' $\ket{}$,
and
the receiver states
as ``bras'' $\bra{}$,
and rewrite
equation (\ref{e:ordorth}) as
$
\iprod{\widehat\omega}{\omega, r}
=
e^{ikr}
\iprod{\widehat\omega}{\omega}
$.
Equation (\ref{e:modorth})
then becomes
\begin{equation} \label{e:qshift}
\begin{split}
\iprod{
\widehat\omega,
\frac{d \widehat\omega}{dt}
}{\omega, r}
&\equiv
\bra{\widehat\omega}
H
\ket{\omega, r}
=
e^{ikr\Delta(r)}
\,
\iprod{
\widehat\omega
}
{
\frac{\omega}{
\Delta(r)
}
}
\\
&=
e^{ikr\Delta(r)}
\,
\delta \left(
\widehat\omega
-
\frac{\omega}{
\Delta(r)
}
\right)
\quad
,
\end{split}
\end{equation}
where
$\bra{\widehat\omega, d\widehat\omega/dt}$ and
$\bra{\widehat\omega}$
are
the modified and original states of the receiver,
respectively.
The sampling clock and grating modifications
then correspond to the operator $H$
\begin{equation} \label{e:qop}
H \ket{\omega, r}
=
e^{ikr\Delta(r)}
\,
\ket{\frac{\omega}{\Delta(r)}}
\end{equation}
of an incoming wave state
$\ket{\omega,r}$.
These equations attribute the shift to
the incoming value $\omega$,
instead of
to the instantaneously selected $\widehat\omega$,
since in the operator formalism,
$\bra{\widehat\omega}$ must represent
an eigenstate resulting from observation.
Equations (\ref{e:zshift}) and (\ref{e:qop})
both say that as in the Doppler case,
the shift is proportional
at each individual frequency $\omega$.
This also means that
the spectrum expands by the factor $\Delta$,
as illustrated in Fig.~\ref{f:DDM}
for the case of
a common signal spectrum
$F(\omega)$
emitted by two sources
at distances $r_1$ and $r_2 > r_1$,
respectively.
For a rate of change factor $\alpha$
applied at the receiver,
the signals will shift and expand by
$\Delta_1 \equiv (1 + \alpha r_1)$
and
$\Delta_2 \equiv (1 + \alpha r_2) > \Delta_1$,
respectively.
\begin{figure}[ht]
\centering
\psfig{file=ddm.eps}
\caption{Separation by source distance}
\label{f:DDM}
\end{figure}
Despite the increased spreading of the spectrum,
there is opportunity for
isolating the signal of a desired source
if its shifted spectrum comes out
substantially separated
from its neighbours,
as shown.
We could, for instance, apply
a band-pass filter $\widetilde{G}_1$
to the overall shifted spectrum
$H F_1 + H F_2$,
such that
$
\widetilde{G}_1 ( H F_1 + H F_2 )
\approx \widetilde{G}_1 H F_1 .
$
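A toy demonstration of this band-pass selection, with scale factors and frequencies assumed purely for illustration: two copies of the same tone, scaled by different $\Delta$, are separated by masking the spectrum around one of them.

```python
import numpy as np

# Toy distance-separation demo: two sources emit the same tone f0, but
# the receiver scales them by different factors Delta_1 < Delta_2.
# A band-pass mask around f0*Delta_1 then isolates source 1.
# All values are assumed, chosen to land on exact FFT bins.
fs, Ns = 8192.0, 8192
t = np.arange(Ns) / fs
f0 = 400.0
d1, d2 = 1.02, 1.10

s = np.sin(2*np.pi*f0*d1*t) + np.sin(2*np.pi*f0*d2*t)   # shifted sum
S = np.fft.rfft(s)
freqs = np.fft.rfftfreq(Ns, 1/fs)

band1 = np.abs(freqs - f0*d1) < 10.0    # pass band around source 1 only
s1 = np.fft.irfft(np.where(band1, S, 0), Ns)

# residual error of the recovered source-1 waveform
err = np.linalg.norm(s1 - np.sin(2*np.pi*f0*d1*t)) / np.linalg.norm(s1)
```

This corresponds to $\widetilde{G}_1 (H F_1 + H F_2) \approx \widetilde{G}_1 H F_1$ with the shifts already applied; the real receiver would of course have to produce the two scale factors via $H$ rather than by construction.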
Writing $H$ as a function of $\alpha$,
by equations (\ref{e:modorth}) and (\ref{e:qop}),
we get
\begin{equation} \label{e:negH}
H^{-1}(\alpha) = H(- \alpha)
\quad
.
\end{equation}
We could apply a combination of
mixing and frequency-modulation operations
to shift and compress
$\widetilde{G}_1 H F_1$ back to $\approx F_1(\omega)$,
or just apply
a second $H$ with a negative $\alpha$.
In general,
a set of distance-selecting
``projection operators''
$H^{-1} \widetilde{G}_i H$,
can thus be defined by
the conditions
\begin{equation} \label{e:selection}
H^{-1} \widetilde{G}_i H
\,
\sum_j F(r_j, \omega)
\approx
F(r_i, \omega)
\text{~or~}
H^{-1} \widetilde{G}_i H
\,
F(r_j, \omega)
\approx
\delta_{ij} \, F(r_j, \omega)
.
\end{equation}
With base-band prefilters
$G_i F_i \approx F_i$,
they yield
\begin{equation}
H^{-1} \widetilde{G}_i H
\, G_j
=
\delta_{ij} G_j
\quad\text{so that}\quad
\widetilde{G}_i H
=
H G_i
\quad
.
\end{equation}
As $H$ is parametrized by $\alpha$
independently of $i$,
$
H^{-1}\widetilde{G}_i H
$
provides spectral separation
without prior knowledge of $r_i$.
It is also independent of signal content;
the source coordinates could instead be supplied
or encoded via modulation,
but since $H$ depends only on
the path phase contribution,
it provides separation at
a more basic level.
Coordinate-based spread spectrum coding
could be combined,
for example,
to differentiate sources
at the same distance,
as an alternative to or to improve over
current phased array antennas.
It has been further suggested \cite{Prasad2005}
that DDM raises
the theoretical capacity of a channel to
$\sim 2 \mathcal{C} L / \lambda \gg \mathcal{C}$,
where
$\mathcal{C}$ is the traditional Shannon capacity
for a source-receiver pair using the channel,
by admitting
additional sources into the channel
at intermediate distances.
Further,
if combined with
directional selectivity say from
phased array antennae,
we would get true
\emph{space-division multiplexing}
[\ibid].
\begin{comment}
as illustrated in Fig.~\ref{f:sdm}.
The figure shows
how a source $A$ may be isolated by
its spatial coordinates
within the region
$(r \pm \delta r, \theta \pm \delta \theta)$,
from interference from other sources
using
the same frequencies, codes, \etc
\begin{figure}[ht]
\centering
\psfig{file=idealrcv.eps}
\caption{Source separation and space division multiplexing}
\label{f:sdm}
\end{figure}
\end{comment}
\Section{mil}{Military and intelligence applications}
The above theory represents a long overdue
rigorous analysis of
the spectral decomposition of waveforms
by a real, and therefore imperfect, instrument.
The nature of the error was inferred, as mentioned,
from an immense gamut of
astronomical,
planetary
and
terrestrial data,
spanning all scales of range
ever measured.
A natural occurrence of the effect
subsequently deduced from creep theory
(footnote, page \pageref{foot:cosmos})
could have been contradicted
in at least one set of data,
but remains totally consistent%
\footnote{
Consistency with GPS datasets
has also been recently verified.
Calculations from GPS base stations data indicate
an ongoing rising of land
at a median rate of
$
1.6
~\milli\metre\per{\ensuremath {\mathrm y}\xspace}
$
(see \texttt{http://ray.tomes.biz}),
large enough to easily accommodate
a nontectonic apparent expansion of the earth
at $H_0 \times 6.371~\kilo\metre
\approx 0.437~\milli\metre\per{\ensuremath {\mathrm y}\xspace}$,
for the natural occurrence.
}.
Instead, we clearly face
a converse problem in physics itself,
that its patterns
have been so well explored on earth
that only astronomical tests can expose
remaining shortcomings
\cite{Anderson1998}.
The known laws of physics reflect only
understood mechanisms,
like
interference and the Doppler effect,
and even those only partially%
\footnote{
E.g.,
diffractive corrections in
astrophysics and quantum field theories
are limited to
the Fraunhofer and Fresnel approximations,
both being limited, by definition, to
total deflections of $\le \pi/2$.
Cumulative diffractive scattering would contribute,
as known in microwave theory,
a decay $e^{-\sigma r}$ to the propagation.
This makes \emph{all wavefunctions}
in the \emph{real universe} Klein-Gordon eigenfunctions,
and gives light
a rest-mass...
}.
We once again face a need to lead physics
by engineering%
\footnote{
Recalling the classic case of thermodynamics.
The present effect
implies that photons,
traditionally viewed as indivisible and immutable,
are reconstituted
by every real, imperfect, receiver.
Though it is yet to be demonstrated with light,
the theory of Sections \ref{s:theory} and \ref{s:impl}
is fundamental and does not permit
a different result for quanta.
}.\xspac
\Subsection{shift}{Precision, power-independent frequency shifting}
This application would itself be
a direct, visible test of the effect,
the idea being that
the method could be applied also to
a proximal source
at a precisely known distance
in order to uniformly scale its spectrum
with great accuracy.
Accuracy is expected because
the source distance
can be accurately set
by simple mechanical means,
independently of
the exponential control of $\alpha$.
The mechanism would be
the first means for shifting frequencies
that is independent of
the nature and energy of the waves,
and their frequencies and bandwidth,
yet realizable by
a mundane, static means.
The main difficulty is the magnitude of $\alpha$
needed for source distances of under $1~\metre$,
but it should be
eventually solvable.
Likely applications include
transformation of high power
modulated $\giga\hertz$ carriers to
$\tera\hertz$ or optical bands,
and efficient, tunable
visible light, UV or even X- or $\gamma$-rays,
all
with controlled coherence
and
without nonlinear media.
\ifthenelse{\boolean{extended}}{
\cbstart
Two forms of implementation are being explored,
the variable grating scheme of
Section \ref{ss:diffract},
exploiting
the magnetostrictive ``smart material'' Terfenol-D,
and
the path variation method of
Section \ref{ss:phaseshift}
using a liquid crystal or a photorefractive medium
for a faster modulation
than seems possible with
the Faraday effect in glasses.
Another aspect being studied is
folding of the source path
using
an optical fibre or a transmission line,
so as to ensure realizability of
possibly several metres in a compact package
and
at least $1~\metre$ in a single chip.
The Terfenol-D design,
prepared for NSF SBIR/STTR Solicitation 05-557,
envisaged
a grating etched directly on
the side of a Terfenol-D medium,
and was expected to provide
shift factors $z \ge 2$
at least with
a strong, distant, commonly available source,
\viz the sun,
as a first step of
optical validation.
\cbend
}{}
\Subsection{radar}{Monostatic ranging and passive radar}
This application was immediately envisaged
when the effect was first actually suspected%
\footnote
The idea of such an effect had been disclosed,
by appointment, to an IP attorney
on the morning of 2001.9.11.
The mechanism itself (Section \ref{s:theory})
was uncovered only much later in 2004.
},
drawing on the well-known notion of
the cosmological distance scale
available from the Hubble redshifts.
By providing
a ``virtual Hubble flow'' view
that can be activated and scaled
at the receiver's discretion,
the present methods enable
ranging, or distance measurement,
of any source
that can be seen or received,
at only half
the round-trip delay,
and
zero transmitted power.
To compare,
as traditional active radar depends on
round trip times (RTT),
its range is fundamentally limited
by the transmitter power.
While Venus and Mars have been explored by radar,
for example,
the ranging of other planets
and of astronomical objects in general
is largely limited
to spatial triangulation,
including
using the earth's orbit itself
as the baseline.
Tracking of orbiting satellites
places similar demands on
the radar transmitter power $P$,
as the range $R$ is governed by
the ``radar power law''
$P \propto R^4$.
This makes
current (active) radars generally bulky.
The power law is relaxed for
cooperative targets with transponders,
to $P \propto R^2$,
where $P$ denotes
transponder power,
and the range is obtained from the RTT of
a transponded signal%
\footnote{
This is generally what is used for deep space probes
\cite{Bender1989,Vincent1990}.
}.
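The difference between the two power laws is easy to quantify; a trivial sketch, comparing the extra power needed when the range doubles:

```python
# Power scaling comparison: doubling the range costs 16x transmitter
# power under the round-trip law P ~ R^4, but only 4x under the
# one-way law P ~ R^2 available to transponders (and, per the text,
# to passive ranging).
def power_ratio(exponent, range_factor):
    """Relative power needed when the range grows by range_factor."""
    return range_factor ** exponent

two_way = power_ratio(4, 2.0)   # P ~ R^4
one_way = power_ratio(2, 2.0)   # P ~ R^2
```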
The present methods make this lower power law
available for all targets,
and also eliminate the need for
a spatial baseline for parallax,
as the baseline is now given by
the range of variation of $\alpha$
at the receiver.
As shown in Section \ref{ss:scale},
a very large baseline seems to be indeed possible
with DSP alone
for both terrestrial
and earth orbit distances.
Current passive radar technology,
like Lockheed's Silent Sentry,
involves
coherent processing of the received scatter
of energy from
television broadcast
(see US Patent 3,812,493, 1974)
and
cellular base stations.
The accuracy is again dependent on
information representing overall trip times,
and hence entails
coherent processing.
Accurate ranging is now possible via
triangulation using temporal parallax,
as explained
in Section \ref{ss:diff}.
The difference is that
coherent processing can provide
imaging
with subwavelength precision,
but
triangulation would be simpler
for ranging and tracking.
We still need to illuminate silent targets,
but the processing could be simplified
for the same accuracy.
The accuracy of tracking could improve
for radiating targets
given
the lower overall trip time.
We can also use
temporal triangulation
to simplify traditional radar
and as a fast, efficient cross-check.
\Subsection{jam}{Overcoming interference, jamming and noise}
The capability for separating sources
regardless of the signal content or modulation
is also a guarantee that
a desired signal can be isolated
from any interfering signals
at the same frequencies and
bearing the same modulations or spread-spectrum codes,
so long as
the interfering sources are
at other directions or distances
from the receiver.
Simulation shows that
even signals in destructive interference
can be separated.
The criterion for separation is simply
the bandwidth to distance ratio:
denoting the low and high frequency bounds of
the signal bandwidth $W$ by
$\mathcal{L}$ and $\mathcal{H}$, respectively,
each of equations (\ref{e:zshift}) or (\ref{e:modorth})
implies
\begin{equation}
(1 + \alpha r_i) \mathcal{H}
\le
(1 + \alpha r_{i+1}) \mathcal{L}
\end{equation}
as the condition for separation between
the $i$th and $(i+1)$th sources,
assuming $r_i \le r_{i+1}$
(see Fig. \ref{f:DDM}).
This means
\begin{equation}
\alpha r_i
\ge
\frac{W}{(\delta r_i/r_i) \mathcal{L} - W}
=
\left[
\frac{\delta r_i}{r_i}
\frac{\mathcal{L}}{W}
-
1
\right]^{-1}
\end{equation}
where $W = \mathcal{H} - \mathcal{L}$
and $\delta r_i = r_{i+1} - r_i$.
As separation is only possible for
$\alpha > 0$,
we need to have
$\delta r_i > r_i W / \mathcal{L}$.
It then follows, however,
that sufficiently narrow subbands of
the total received signal
would be ``source-separable''
even when the condition should fail
for $W$ as a whole.
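The boundary case of these conditions can be verified numerically; the band edges and range below are assumed values. At the minimum $\alpha$, the two shifted bands just touch.

```python
# Check of the separation condition and the minimum alpha it implies
# (symbols as in the text; all numbers below are assumed).
L, H = 1.00e9, 1.01e9        # band edges (Hz)
W = H - L                    # bandwidth
r_i = 1.0e4                  # metres
dr = 2.0 * r_i * W / L       # spacing, comfortably above r_i * W / L

# alpha * r_i = [ (dr/r_i)(L/W) - 1 ]^{-1} at the separation boundary
alpha_min = (1.0 / r_i) / ((dr / r_i) * (L / W) - 1.0)

# at alpha = alpha_min the shifted bands exactly touch:
lhs = (1 + alpha_min * r_i) * H
rhs = (1 + alpha_min * (r_i + dr)) * L
```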
All of the limitations are thus purely technological,
\eg
how many and how narrow subbands
we can construct,
how linear we can make
the subband and band-pass ($\widetilde{G}$) filters
(since phase envelope distortions will alter the shift),
and
how large the stop-band rejection
we can get.
The stop-band rejection ratio is important
in overcoming jamming,
and the separation as such should suffice, in principle,
for filtering out truly extraneous noise
such as lightning.
\Subsection{isar}{Incoherent aperture synthesis}
In a conventional SAR,
multiple targets or features are imaged
from the echoes received on a moving platform,
typically an aircraft or satellite,
as depicted in Fig.~\ref{f:sar}.
\begin{figure}[ht]
\centering
\psfig{file=sar.eps}
\caption{Aperture synthesis}
\label{f:sar}
\end{figure}
If the onboard oscillator could maintain coherence for
the duration of the flight,
\ie not suffer random phase shifts,
an entire target region could be imaged
by a simple Fourier transform,
after correcting for
the Doppler shifts in the received echoes
due to the radar's own motion.
At each point $Q$
on the flight path,
echoes are received from multiple features
$A$, $B$, \etc
and corresponding to
multiple previous locations
$P_1$, $P_2$, ...
of the transmitter.
Fourier techniques and
their optical holographic counterparts
are most efficient
for unravelling the information.
As stated in Section \ref{ss:radar},
the fundamental reason for coherent processing
is our current dependence on the RTT of radar echoes
for the range information.
The dependence is obviated by
the wave effect.
In the example scenario,
echoes received at $Q$
from target features differing in direction,
like $A$ and $B$,
can be separated by the phase information
from an antenna array.
The echoes from the same direction,
such as from $A$ and $D$,
can now be similarly separated
using the methods of
Section \ref{s:impl}
--
in particular,
the changing delay transform
of equation (\ref{e:kdft})
can be applied to
recordings of the received echo waveform.
Coherent processing ordinarily has
two advantages
that would seem to be impossible
in any other approach:
it tends to curtail noise and is generally precise
to within a wavelength.
We would get suppression of external noise,
as discussed in Section \ref{ss:jam},
and subwavelength resolution,
as shown in Section \ref{ss:diff},
so that applications including
hand-held short-range carrier-deck operation,
for which coherent processing is difficult,
become feasible.
\ifthenelse{\boolean{extended}}{
\Section{sim}{Simulation and testing}
\cbstart
The DSP approach of Section \ref{ss:dsp}
was first tried in 2001 using Octave
(an open source equivalent of Matlab),
on several \texttt{.wav} files
that came with Windows and Linux distributions.
The expectation of discernible shifts
seems to hold best for
the simple waveform of \emph{pop.wav}
(Fig.~\ref{f:PopWave}),
with a fundamental frequency
below $100~\hertz$.
Fig.~\ref{f:PopShift} shows
the corresponding spectra:
line 1 (in red) is
the spectrum of the unmodified samples,
and lines 2 (in green) and 3 (in blue) are
spectra obtained by
interpolating the samples to simulate
time-varying sampling rates
according to Fig.~\ref{f:TimeDomain}.
The shifted spectra are spread out
but otherwise preserved,
as expected,
at the lower frequency end.
The harmonic peaks
near $310~\hertz$ and $420~\hertz$
are lost,
and
the shifted spectra are increasingly weaker
--
both behaviours are expected as
the interpolation reduces the number of samples.
\cbend
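The interpolation trick described above can be sketched as follows; a uniform compression of the sample instants stands in for the slowly varying interval, and the apparent frequency of a test tone scales by $1/\Delta$ when the resampled data are still treated as uniformly spaced. All values are assumed for illustration.

```python
import numpy as np

# Sketch of the interpolation used in the Octave tests: resample a tone
# at compressed instants t_k = k*dt/Delta, but keep treating the samples
# as spaced dt, so the apparent frequency moves to f0/Delta.
# N, dt, f0 and Delta are assumed illustrative values.
N, dt, f0, Delta = 4096, 1.0e-3, 100.0, 1.25

t = np.arange(N) * dt
x = np.sin(2*np.pi*f0*t)

t_warp = np.arange(N) * dt / Delta        # compressed sample instants
x_warp = np.interp(t_warp, t, x)          # linear interpolation

spec = np.abs(np.fft.rfft(x_warp))
freqs = np.fft.rfftfreq(N, dt)            # spacing still taken as dt
f_peak = freqs[np.argmax(spec)]           # lands near f0/Delta = 80 Hz
```

Linear interpolation is the crudest choice and contributes exactly the rounding and aliasing artifacts noted in the text; higher-order or sinc interpolation would reduce them.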
\begin{figure}[htb]
\centering
\psfig{file=popwaveform.eps, height=1.6in}
\caption{Sample acoustic waveform}
\label{f:PopWave}
\end{figure}
\begin{figure}[htb]
\centering
\psfig{file=pop.eps, height=1.6in}
\caption{Original and shifted spectra}
\label{f:PopShift}
\end{figure}
\cbstart
With other audio samples,
and with a newer simulator developed in Java
especially to study
the performance of subband filtering
described in Section \ref{ss:jam}
for antijamming,
the frequency scaling of acoustic spectra
is found to be less distinct.
Rounding and aliasing problems are exacerbated
by the nonlinear sample interpolation,
but the main cause seems to be
the distance to the microphone
in the recording process
being
equal to or smaller than
the distribution of the recorded sources,
so that
$\Delta(\omega t)$ dominates over
the path length contribution
$\mathbf{k} \cdot \Delta\mathbf{r}$
(equation \ref{e:dphase}).
This can be solved by introducing
a fixed delay corresponding to a large source distance,
and the expected effect of increased shifts
was verified
in the Octave tests.
In the new simulator,
different subbands in most audio \texttt{.wav} files
are found to scale by different extents,
consistent with
the interpretation of source spread.
Nevertheless,
tests with actual, well-sampled sonar data
are needed to validate and characterize
the effect adequately
for use with sound.
The source spread problem would be worse for
longer radio waves,
making
the spectral phase gradients too small
to be usable
at long wavelengths.
This problem should vanish at short wavelengths,
and in particular,
at optical wavelengths
where individual photons could be traced, in principle,
to atoms that emitted them.
The source spread would likely arise again in
laser output,
in proportion to the spread of time spent by the photons
within the lasing cavity,
and this too
remains to be tested.
\cbend
\cbstart
The new simulator continues to evolve
as the lessons learnt are incorporated
into the code.
Development is slow because
the research is as yet unfunded
and being done in spare time spread across
complementary research efforts
that led to this discovery.
There were programming errors causing noise
and limiting the range of $\alpha$
in the simulation,
for instance,
which were identified and corrected only after
the initial presentation and demonstration
at WCNC 2005.
The simulator still uses sample interpolation
(Section \ref{ss:dsp}),
which needs to be replaced with
the more robust path variation scheme
of Section \ref{ss:phaseshift}.
\cbend
\cbstart
Direct validation with real RF data and live feeds,
for the DSP methods,
and with
a Terfenol-D or liquid crystal device
for optical wavelengths,
remains to be done,
the main limitation in each case being
the lack of current skill with
electronic circuit design,
and of time,
to implement them.
Nevertheless,
there is an enormous gamut of
astronomical and geological evidence
consistent with
the effect at optical wavelengths
and the uncorrected natural cause mentioned,
as reviewed in
the WCNC presentation,
and the capability to recover
signals even from total destructive interference,
also demonstrated at WCNC,
should be hopefully sufficient to motivate
independent validation efforts
and funding
for full scale development and exploitation
of this new physics.
The WCNC presentation and
a demonstration of separation of an FM signal
from similar interfering signals
using the new simulator
have accordingly been made available online
at \texttt{http://www.inspiredresearch.com} .
\cbend
}{}
\section*{Acknowledgement}
I thank
Asoke K. Bhattacharyya,
for introducing me to
inverse scattering in pulsed radar
in 1984,
and thus to
a key insight leading to this discovery.
\begin{raggedright}
\section{Introduction}
\label{sec:Introd}
Many of the high energy theories of fundamental physics are formulated in
higher dimensional spacetimes. In particular, the idea of extra dimensions
has been extensively used in supergravity and superstring theories. It is
commonly assumed that the extra dimensions are compactified. From an
inflationary point of view, universes with compact spatial dimensions, under
certain conditions, should be considered a rule rather than an exception
\cite{Lind04}. The models of a compact universe with non-trivial topology
may play important roles by providing proper initial conditions for
inflation.
The compactification of spatial dimensions leads to a number of interesting
quantum field theoretic effects, which include instabilities in interacting
field theories, topological mass generation and symmetry breaking. In the
case of non-trivial topology, the boundary conditions imposed on fields give
rise to the modification of the spectrum for vacuum fluctuations and, as a
result, to the Casimir-type contributions in the vacuum expectation values
of physical observables.
The Casimir effect is arguably the most poignant demonstration of the
reality of the quantum vacuum and can be appreciated most simply
in the interaction of a pair of neutral parallel plates. The
presence of the plates modifies the quantum vacuum, and this
modification causes the plates to be pulled toward each other with
a force $F\propto 1/a^4$ where $a$ is the plate separation. The
Casimir effect is a purely quantum phenomenon. In classical
electrodynamics the force between the plates is zero. The ideal
scenario occurs at zero temperature when there are no real photons
(only virtual photons) between the plates; thus, it is the ground
state of the quantum electrodynamic vacuum which causes the
attraction. One of the most important features of the Casimir
effect is that even though it is purely quantum in nature, it
manifests itself macroscopically (Figure \ref{fig1}).\footnote{Illustration
courtesy of Richard Obousy Consulting LLC and AlVin, Antigravit\'{e}}
\begin{figure}
\begin{center}
\includegraphics[width=11cm, height=7.5cm]{fig1.jpg}
\caption{Due to the non-trivial boundary conditions imposed on the
quantum vacuum, the plates are pulled toward each other due to a
force that is purely quantum in nature.} \label{fig1}
\end{center}
\end{figure}
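The text only states the scaling $F\propto 1/a^{4}$; for ideal, perfectly conducting plates at zero temperature the standard closed form is $F/A=\pi ^{2}\hbar c/(240a^{4})$. A minimal numerical sketch (illustrative, SI units):

```python
import math

HBAR_C = 1.0545718e-34 * 2.99792458e8  # hbar * c in J m

def casimir_pressure(a):
    """Attractive Casimir pressure between ideal parallel plates,
    F/A = pi^2 hbar c / (240 a^4), with separation a in meters; result in Pa."""
    return math.pi**2 * HBAR_C / (240 * a**4)
```

At a one-micron separation this gives roughly $1.3\times 10^{-3}$ Pa, small but macroscopically measurable, and halving the separation increases the pressure sixteenfold, illustrating the $1/a^{4}$ scaling.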
Compactified extra dimensions introduce non-trivial boundary
conditions to the quantum vacuum similar to those in the parallel
plate example, and Casimir-type calculations become important when
calculating the resulting vacuum energy (for the topological
Casimir effect and its role in cosmology see \cite{Most97} and
references therein).
One important question in the context of theories with extra
dimensions is why additional spatial dimensions hold some fixed
size. This is commonly referred to as `modulus stabilization'.
Broadly stated, the question is as follows: if there are
additional spatial dimensions, why do they not perpetually expand,
like our familiar dimensions of space, or alternatively, why do
they not perpetually contract? What mechanism is it that allows
for this higher space to remain compact and stable? Of the handful
of theories that attempt to answer this problem, the Casimir
effect is particularly appealing due to its naturalness: it is a
feature intrinsic to the fabric of space itself. There is
much research in the literature that demonstrates that with some
combination of fields, it is possible to generate a stable
minimum of the potential at some extra dimensional radius
\cite{ac}-\cite{Saha05}. In addition, the Casimir effect has been
used as a stabilization mechanism for moduli fields which
parameterize the size and the shape of the extra dimensions both in
models with a smooth distribution of matter in the extra
dimensions and in models with matter located on branes.
The Casimir energy can also serve as a model for dark energy
needed for the explanation of the present accelerated expansion of
the universe (see \cite{Milt03}-\cite{bcp} and references
therein). One interesting feature of these models is the strong
dependence of the dark energy density on the size of the extra
dimension.
One compelling possibility that we would like to introduce in this
paper is that at the energies accessible in the next generation of
particle accelerators, Standard Model (SM) fields might interact
with the graviton Kaluza-Klein (KK) tower, which would affect the
local minimum of the potential. This would have the effect of
locally adjusting the radius of the extra dimension during this
interaction. Because the dark energy density is a function of the
size of the extra dimension, one remarkable feature of this idea
is that if the extra dimensions can be (temporarily)
adjusted, so too can the local dark energy density. This
adjustment would mean that, for the duration of the interaction,
the expansion of spacetime would be changed due to
technological intervention. What is particularly appealing about
this predicted phenomenon is its potential application as a future
exotic spacecraft propulsion mechanism.
In the present paper we shortly review the uses of the Casimir
effect in both standard Kaluza-Klein type and braneworld scenarios
for the stabilization of extra dimensions and for the generation
of dark energy. We also explore the energy requirements that would
be needed to temporarily adjust the size of the higher dimension
and hypothesize that, with some imagination, this mechanism could
be used by a sufficiently advanced technology as a means of
spacecraft propulsion.
\section{Kaluza-Klein type models}
To be consistent with the observational data, the extra dimensions in the
standard Kaluza-Klein description are assumed to be microscopic, with a size
much smaller than the scale of four dimensions. A generic prediction of
theories involving extra dimensions is that the gauge and Yukawa couplings
are in general related to the size of extra dimensions. In a cosmological
context, this implies that all couplings depend on the parameters of the
cosmological evolution. In particular, if the corresponding scale factors
are dynamical functions, their time dependence induces a time variation of
the gauge couplings. However, the strong cosmological constraints on the
variation of gauge couplings coming from the measurements of the quasar
absorption lines, the cosmic microwave background, and primordial
nucleosynthesis indicate that the extra dimensions must be not only small
but also static or nearly static. Consequently, the stabilization of extra
dimensions in multidimensional theories near their present day values is a
crucial issue and has been investigated in a number of papers. Various
mechanisms have been considered including fluxes from form-fields, one-loop
quantum effects from compact dimensions, wrapped branes and string corrections.
Let us consider the higher-dimensional action
\begin{equation}
S=\int d^{D}x\sqrt{|\det g_{MN}|}\left\{ -\frac{1}{16\pi G}R\left[ g_{MN}%
\right] +L\right\} , \label{action}
\end{equation}%
with the matter Lagrangian $L$, which also includes the cosmological constant
term. We take a spacetime of the form $R\times M_{0}\times \ldots \times
M_{n}$ with the corresponding line element
\begin{equation}
ds^{2}=g_{MN}dx^{M}dx^{N}=g_{\mu \nu }dx^{\mu }dx^{\nu
}+\sum_{i=1}^{n}e^{2\beta _{i}}g_{m_{i}n_{i}}dx^{m_{i}}dx^{n_{i}},
\label{ds2reduc}
\end{equation}%
where $M_{i}$, $i=0,1,\ldots ,n$, are $d_{i}$-dimensional spaces, $g_{\mu
\nu }$, $\beta _{i}$ are functions of the coordinates $x^{\mu }$ in the
subspace $R\times M_{0}$ only, and the metric tensor $g_{m_{i}n_{i}}$ in the
subspace $M_{i}$ is a function of the coordinates $x^{m_{i}}$ in this
subspace only. In order to present the effective action in the subspace $%
R\times M_{0}$ in the standard Einsteinian form we make a conformal
transformation of the $(d_{0}+1)$-dimensional metric:%
\begin{equation}
\tilde{g}_{\mu \nu }=\Omega ^{2}g_{\mu \nu },\;\Omega =\exp \left[
\sum_{j=1}^{n}d_{j}\beta _{j}/(d_{0}-1)\right] . \label{conftrans}
\end{equation}%
Dropping the total derivatives the action is presented in the form
\begin{equation}
S=\frac{\prod_{j}\mu ^{(j)}}{16\pi G}\int d^{d_{0}+1}x\sqrt{|\det \tilde{g}%
_{\mu \nu }|}\left\{ -R[\tilde{g}_{\mu \nu }]+\sum_{i,j}G_{ij}\tilde{g}^{\mu
\nu }\partial _{\mu }\beta _{i}\partial _{\nu }\beta _{j}-2U\right\} ,
\label{action2}
\end{equation}%
where $\mu ^{(j)}=\int d^{d_{j}}x\sqrt{|\det g_{m_{j}n_{j}}|}$, $%
G_{ij}=d_{i}\delta _{ij}+d_{i}d_{j}/(d_{0}-1)$, and%
\begin{equation}
U=\frac{\Omega ^{-2}}{2\prod_{j}\mu ^{(j)}}\int d^{D-d_{0}-1}x\prod_{j}\sqrt{%
|\det g_{m_{j}n_{j}}|}\left\{ \sum_{i}e^{-2\beta
_{i}}R[g_{m_{i}n_{i}}]-16\pi GL\right\} . \label{Vpot}
\end{equation}%
In (\ref{Vpot}), $R[g_{m_{i}n_{i}}]$ is the Ricci scalar for the metric $%
g_{m_{i}n_{i}}$.
In the case of standard cosmological metric $\tilde{g}_{\mu \nu }$ with the
scale factor $\tilde{a}_{0}$ and the synchronous time coordinate $\tilde{t}$%
, the field equations for the set of fields $\beta _{i}=\beta _{i}(\tilde{t}%
) $ following from action (\ref{action2}) have the form
\begin{equation}
\sum_{j=1}^{n}G_{ij}\left( \beta _{j}^{\prime \prime }+d_{0}\tilde{\beta}%
_{0}^{\prime }\beta _{j}^{\prime }\right) =-\left( \frac{\partial U}{%
\partial \beta _{i}}\right) _{\tilde{\beta}_{0}},\;i=1,2,\ldots ,n,
\label{coseqnew2}
\end{equation}%
where $\tilde{a}_{0}=\tilde{a}_{0}^{(0)}e^{\tilde{\beta}_{0}}$, $%
a_{j}=a_{j}^{(0)}e^{\beta _{j}}$, the prime means the derivative with
respect to the time coordinate $\tilde{t}$ and the potential $U$ is defined
by the formula
\begin{equation}
U=\frac{1}{2\Omega ^{2}}\left( 16\pi G\rho +\Lambda
_{D}-\sum_{j=1}^{n}\lambda _{j}d_{j}/a_{j}^{2}\right) . \label{U}
\end{equation}%
In (\ref{U}) $\rho $ is the matter energy density and $\Lambda _{D}$ is the $%
D$-dimensional cosmological constant, $\lambda _{j}=k_{j}(d_{j}-1)$, where $%
k_{j}=-1,0,1$ for the subspace $M_{j}$ with negative, zero, and positive
curvatures, respectively. In the case $\rho =0$ for the extrema of potential
(\ref{U}) one has the relations $\Lambda _{D}=(D-2)\lambda _{i}e^{-2\beta
_{i}}$ and $\partial ^{2}U/\partial \beta _{i}\partial \beta
_{j}=-2G_{ij}\Lambda _{D}/(D-2)$. It follows from here that for $\Lambda
_{D}>0$ the extremum is a maximum of the potential and is realized for
internal spaces with positive curvature. The corresponding effective
cosmological constant is positive. For $\Lambda _{D}<0$ the extremum is a
minimum and is realized for internal spaces with negative curvature. The
effective cosmological constant is negative.
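The extremum relations above can be cross-checked symbolically. The sketch below (an illustrative check, not part of the derivation) takes a single internal space with $d_{0}=d_{1}=3$, positive curvature, $\rho =0$, places the extremum at $\beta _{1}=0$, and confirms the Hessian relation with the moduli metric $G_{11}=d_{1}+d_{1}^{2}/(d_{0}-1)$:

```python
import sympy as sp

beta = sp.symbols('beta')
d0, d1 = 3, 3          # external and internal space dimensions
D = d0 + 1 + d1        # total spacetime dimension, D = 7
k1 = 1                 # positive-curvature internal space
lam1 = k1 * (d1 - 1)   # lambda_1 = k_1 (d_1 - 1)
LamD = (D - 2) * lam1  # extremum relation Lambda_D = (D-2) lambda_1 e^{-2 beta}, at beta = 0

# U = (1/2) Omega^{-2} (Lambda_D - lambda_1 d_1 e^{-2 beta}), Omega = exp(d1 beta/(d0-1))
U = sp.Rational(1, 2) * sp.exp(-2 * d1 * beta / (d0 - 1)) \
    * (LamD - lam1 * d1 * sp.exp(-2 * beta))

G11 = d1 + sp.Rational(d1**2, d0 - 1)   # moduli metric G_11 = d_1 + d_1^2/(d_0 - 1)
assert sp.diff(U, beta).subs(beta, 0) == 0   # beta = 0 is an extremum
# Hessian is -2 G_11 Lambda_D/(D-2): negative for Lambda_D > 0, i.e. a maximum
assert sp.diff(U, beta, 2).subs(beta, 0) == -2 * G11 * LamD / (D - 2)
```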
There are a number of mechanisms giving contributions to the potential in
addition to the cosmological constant and curvature terms. An incomplete
list includes fluxes from form-fields, one-loop quantum effects from compact
dimensions (Casimir effect), wrapped branes, string corrections (loop and
classical). In the case of a single extra space for massless fields at zero
temperature the corresponding energy density is of the power-law form
\begin{equation}
\rho =\sigma a_{1}^{-q}, \label{rhocsigma}
\end{equation}%
with a constant $\sigma $ and $q$ being an integer. The values of the
parameter $q$ for various mechanisms are as follows: $q=D$ for the
contribution from the Casimir effect due to massless fields, $q=d_{1}-2p$
for fluxes of $p$-form fields, $q=p-d_{0}-2d_{1}$ for $p$-branes wrapping
the extra dimensions. The corresponding potential has the form (\ref{U})
with
\begin{equation}
U=\frac{1}{2\Omega ^{2}}\left( 16\pi G\sigma a_{1}^{-q}+\Lambda _{D}-\lambda
_{1}d_{1}a_{1}^{-2}\right) . \label{Uc}
\end{equation}%
In accordance with Eq. (\ref{coseqnew2}), solutions with a static internal
space correspond to the extremum of the potential $U$ and the effective
cosmological constant is related to the value of the potential at the
extremum by the formula $\Lambda _{\mathrm{eff}}=2\Omega ^{2}U$.
Introducing the notations
\begin{equation}
y=ba_{1},\quad b=\left[ \frac{d_{1}(d_{1}-1)}{16\pi G|\sigma |}\right] ^{%
\frac{1}{q-2}},\;U_{0}=\frac{d_{1}(d_{1}-1)}{2}%
b^{2}(ba_{1}^{(0)})^{2d_{1}/(d_{0}-1)}, \label{Uc0}
\end{equation}%
potential (\ref{Uc}) is written in the form
\begin{equation}
U=U_{0}y^{-q-2d_{1}/(d_{0}-1)}\left[ \frac{\Lambda _{D}b^{-2}}{d_{1}(d_{1}-1)%
}y^{q}-k_{1}y^{q-2}+{\mathrm{sign}}(\sigma )\right] . \label{Ucnew}
\end{equation}%
For the extremum with the zero effective cosmological constant one has%
\begin{equation}
k_{1}y^{q-2}=\frac{q}{2}\,{\mathrm{sign}}(\sigma ),\quad \Lambda
_{D}b^{-2}=k_{1}d_{1}(d_{1}-1)\left( 1-\frac{2}{q}\right) \left( \frac{2}{q}%
\right) ^{\frac{2}{q-2}}. \label{Lambdab-2}
\end{equation}%
For positive values of $q$ and for an internal space with positive (negative)
curvature the extremum exists only when $\sigma >0$ ($\sigma <0$). The
extremum is a minimum for $k_{1}(q-2)>0$ and a maximum for $k_{1}(q-2)<0$.
Potential (\ref{Ucnew}) is a monotonically decreasing positive function for $%
k_{1}=-1,0$, $\Lambda _{D}\geq 0$, $\sigma >0$, and has a single maximum with
positive effective cosmological constant for $k_{1}=1,0$, $\Lambda _{D}>0$, $%
\sigma <0$ and for $k_{1}=-1$, $\Lambda _{D}\geq 0$, $\sigma <0$. For the case $%
k_{1}=1$, $\Lambda _{D}>0$, $\sigma >0$ and in the $d_{0}=d_{1}=3$ model
where $\rho $ is generated by one-loop quantum effects ($q=D$ in Eq. (\ref%
{rhocsigma})) the potential (\ref{Ucnew}) is plotted in Figure \ref{fig2}
for various values of higher dimensional cosmological constant corresponding
to $\Lambda _{D}b^{-2}=2,(30/7)(2/7)^{2/5},3,4$. In the model with the
second value the effective $(d_{0}+1)$-dimensional cosmological constant
vanishes. The behavior of the potential (\ref{Ucnew}) in the other cases is
obtained from those described above by replacements $(k_{1},\Lambda
_{D},\sigma )\rightarrow (-k_{1},-\Lambda _{D},-\sigma )$ and $U\rightarrow
-U$.
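The zero-$\Lambda _{\mathrm{eff}}$ case admits a quick numerical check (illustrative, for $d_{0}=d_{1}=3$, $q=D=7$, $k_{1}=1$, $\sigma >0$): with $\Lambda _{D}b^{-2}$ taken from Eq. (\ref{Lambdab-2}), the potential (\ref{Ucnew}) has a minimum that just touches zero at $y_{\ast }=(q/2)^{1/(q-2)}$.

```python
d0 = d1 = 3
D = d0 + 1 + d1
q = D            # Casimir-effect scaling rho = sigma a_1^{-q} with q = D
k1, sgn = 1, 1   # positive-curvature internal space, sigma > 0

# Lambda_D b^{-2} tuned so that the effective cosmological constant vanishes
LamDb2 = k1 * d1 * (d1 - 1) * (1 - 2 / q) * (2 / q)**(2 / (q - 2))
# position of the extremum: k_1 y^{q-2} = q/2
ystar = (q / (2 * k1))**(1 / (q - 2))

def U_over_U0(y):
    """Moduli potential (Ucnew) in units of U_0."""
    return y**(-q - 2 * d1 / (d0 - 1)) * (
        LamDb2 / (d1 * (d1 - 1)) * y**q - k1 * y**(q - 2) + sgn)
```

The tuned constant reproduces the value $(30/7)(2/7)^{2/5}$ quoted for the zero-cosmological-constant curve in Figure \ref{fig2}, and $U(y_{\ast })=0$ with vanishing slope there.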
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm, height=6.5cm]{fig2.jpg}
\caption{The potential $U/U_{0}$ as a function of $ba_{1}$ in the model with
$d_{0}=d_{1}=3$ and with the Casimir effect as a source for $\protect\rho $.
The numbers near the curves correspond to the value of the parameter $%
\Lambda _{D}b^{-2}$. For the curve with the zero cosmological constant $%
\Lambda _{D}b^{-2}=(30/7)(2/7)^{2/5}$.} \label{fig2}
\end{center}
\end{figure}
Introducing the $D$-dimensional Planck mass $M_{D}$ in accordance with the
relation $G=M_{D}^{2-D}$, from (\ref{Uc0}) we see that in the model with
one-loop quantum effects the parameter $b$ is of the order of the higher
dimensional Planck mass. As seen from the graphs in Figure \ref{fig2},
the stabilized value of the size for the internal space is of the order of $%
D $-dimensional Planck length $1/M_{D}$. Note that, as follows from (\ref%
{action2}), for the effective 4-dimensional Planck mass one has $M_{\mathrm{%
Pl}}^{2}\approx (a_{1}M_{D})^{d_{1}}M_{D}^{2}$ and, hence, in this type of
models the effective and higher dimensional Planck masses have the same
order. Consequently, the size of the internal space is of the order of the
Planck length and is not accessible to near-future accelerators. Note
that our knowledge of the electroweak and strong forces extends with great
precision down to distances of order $10^{-16}$ cm. Thus if standard model
fields propagate in extra dimensions, then they must be compactified at a
scale above a few hundred GeV.
\section{Models with large extra dimensions}
In the previous section we have considered the models, where the extra
dimensions are stabilized by one-loop quantum effects with the combination
of curvature terms and the higher dimensional cosmological constant. We have
taken the Casimir energy density coming from the compact internal space in
the simplest form $\sigma a_{1}^{-D}$. The corresponding effective potential
is monotonic and cannot stabilize the internal space separately. In more
general cases, in particular for massive fields, the dependence of the
Casimir energy on the size of the internal space is more complicated and the
corresponding effective potential can stabilize the extra dimensions
separately.
As a simple model consider the case of a single extra space, $n=1$, being a
circle, $M_{1}=S^{1}$, with a scalar field $\varphi $ as a non-gravitational
source. In the discussion below we will assume that either the external
space $M_{0}$ is non-compact or the external scale factor is much greater
than the internal one. First we discuss the case of the untwisted field with
mass $M$\ satisfying the periodicity condition on $S^{1}$. The vacuum energy
density and the pressures along the uncompactified ($p_{0}$) and
compactified ($p_{1}$) directions are given by the expressions
\begin{eqnarray}
&&\rho =-\frac{2M^{d_{0}+2}}{(2\pi )^{d_{0}/2+2}}\sum_{m=1}^{\infty }\frac{%
K_{d_{0}/2+1}(Mam)}{(Mam)^{d_{0}/2+1}},\quad p_{0}=-\rho , \notag \\
&&p_{1}=-\rho -\frac{2M^{d_{0}+2}}{(2\pi )^{d_{0}/2+2}}\sum_{m=1}^{\infty }%
\frac{K_{d_{0}/2+2}(Mam)}{(Mam)^{d_{0}/2}}, \label{s2rhopT02}
\end{eqnarray}%
where $K_{\nu }(x)$ is the modified Bessel function of the second
kind. The corresponding effective potential has the form
\begin{equation}
U=\frac{1}{2}\left( a^{(0)}/a\right) ^{\frac{2}{d_{0}-1}}\left( 16\pi G\rho
+\Lambda _{D}\right) , \label{US1}
\end{equation}%
with a constant $a^{(0)}$. The condition for the presence of the static
internal space takes the form
\begin{equation}
d_{0}\rho -(d_{0}-1)p_{1}=\frac{\Lambda _{D}}{8\pi G}. \label{s2condstat}
\end{equation}%
From formulae (\ref{s2rhopT02}) it follows that the left hand side of this
expression is positive and, hence, the static solutions are present only for
the positive cosmological constant $\Lambda _{D}$. The effective
cosmological constant is equal to $\Lambda _{{\mathrm{eff}}}=-8\pi
G(d_{0}-1)(\rho +p_{1})$ and is positive. However, as can be checked, $%
(\partial /\partial a)[d_{0}\rho -(d_{0}-1)p_{1}]<0$ and the solutions with
static internal spaces are unstable.
In the case of a twisted scalar with antiperiodicity condition along the
compactified dimension the corresponding energy density and pressures are
obtained by using the expressions (\ref{s2rhopT02}) with the help of the
formula
\begin{equation}
q_{t}(a)=2q(2a)-q(a),\quad q=\rho ,p_{0},p_{1}. \label{s2qtw}
\end{equation}%
For the twisted scalar the corresponding energy density is positive. The
solutions with static internal spaces are stable and the effective
cosmological constant is negative: $\Lambda _{{\mathrm{eff}}}<0$.
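These statements can be checked numerically from Eqs. (\ref{s2rhopT02}) and (\ref{s2qtw}). The sketch below (illustrative; $d_{0}=3$, units $M=1$, with $K_{\nu }$ evaluated from its integral representation rather than a library routine) confirms that $\rho <0$ for the untwisted field, that $d_{0}\rho -(d_{0}-1)p_{1}$ is positive and decreasing in $a$, and that the twisted-scalar energy density is positive:

```python
import math

def K(nu, x, n=400, tmax=12.0):
    """Modified Bessel function K_nu(x) from the integral representation
    K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt (trapezoidal rule)."""
    h = tmax / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)) * math.cosh(nu * tmax))
    for i in range(1, n):
        t = i * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return h * s

def rho_p1(Ma, d0=3, terms=40):
    """Vacuum energy density rho and compact-direction pressure p_1 of an
    untwisted massive scalar on R^{d0} x S^1, in units of M^{d0+2}."""
    pref = 2 / (2 * math.pi)**(d0 / 2 + 2)
    s1 = sum(K(d0 / 2 + 1, Ma * m) / (Ma * m)**(d0 / 2 + 1)
             for m in range(1, terms + 1))
    s2 = sum(K(d0 / 2 + 2, Ma * m) / (Ma * m)**(d0 / 2)
             for m in range(1, terms + 1))
    return -pref * s1, pref * (s1 - s2)

def rho_twisted(Ma, d0=3):
    """Twisted (antiperiodic) scalar energy density via q_t(a) = 2 q(2a) - q(a)."""
    return 2 * rho_p1(2 * Ma, d0)[0] - rho_p1(Ma, d0)[0]
```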
In Figure \ref{fig3} we have plotted the effective potential $U$ in units of
\begin{equation}
U_{0}=4\pi GM^{d_{0}+2}(Ma^{(0)})^{\frac{2}{d_{0}-1}}, \label{U(0)}
\end{equation}%
as a function of $Ma$ for the model with $d_{0}=3$ consisting of a single
untwisted scalar with mass $M$ and two twisted scalars with masses $M_{t}$
for various values of the ratio $M_{t}/M$ (numbers near the curves) and for $%
\Lambda _{D}/(8\pi GM^{d_{0}+2})=2$. For the graph with zero effective
cosmological constant $M_{t}/M\approx 11.134$.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm, height=6.5cm]{fig3.jpg}
\caption{The potential $U/U_{0}$ in $d_{0}=3$ as a function of $Ma$ for a
model with single untwisted and two twisted scalars and for $\Lambda _{D}/(8%
\protect\pi GM^{d_{0}+2})=2$. The numbers near the curves correspond to the
values of the ratio $M_{t}/M$. For the graph with zero effective
cosmological constant $M_{t}/M=11.134$.} \label{fig3}
\end{center}
\end{figure}
As we see, in models of this type the size of the internal space is of the
order $1/M$. The effective cosmological constant is of the order $\Lambda _{{%
\mathrm{eff}}}\sim 8\pi M_{\mathrm{Pl}}^{-2}\times 10^{-2}/a^{4}$. Setting
the value of the corresponding energy density equal to the density of the
dark energy, for the size of the internal space we find $a\sim 10^{-3}%
\mathrm{cm}$. Such large extra dimensions are realized in models where the
standard model fields are localized on a 4-dimensional hypersurface (brane).
Note that the value of the effective cosmological constant can be tuned by
the parameter $M_{t}/M$ and the observed value of the dark energy density
can also be obtained for smaller values for the size of the internal space.
Similar results are obtained in models with more complicated internal
spaces, in particular in ADD models with the number of internal
dimensions $\geqslant 2$.
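The quoted estimate $a\sim 10^{-3}\,\mathrm{cm}$ follows directly from setting the energy density $10^{-2}/a^{4}$ equal to the observed dark energy density (the numbers below are illustrative, in natural units, with $1\,\mathrm{GeV}^{-1}\approx 1.97\times 10^{-14}\,\mathrm{cm}$):

```python
RHO_DE = 2.9e-47       # observed dark energy density in GeV^4 (illustrative value)
GEV_INV_CM = 1.97e-14  # hbar c: one inverse GeV expressed in cm

# energy density ~ 1e-2 / a^4 in natural units, set equal to RHO_DE and solved for a
a_cm = (1e-2 / RHO_DE)**0.25 * GEV_INV_CM
```

This gives $a$ of a few times $10^{-3}$ cm, reproducing the quoted scale.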
Note that large extra dimensions accessible to all standard model fields can
also be realized. Extra dimensions of this type are referred to as universal
dimensions. The key element in these models is the conservation of momentum
in the Universal Extra Dimensions which leads to the Kaluza-Klein number
conservation. In particular there are no tree-level contributions to the
electroweak observables. The Kaluza-Klein modes may be produced only in
groups of two or more and none of the known bounds on extra dimensions from
single Kaluza-Klein production at colliders applies for universal extra
dimensions. Contributions to precision electroweak observables arise at the
one-loop level. In the case of a single extra dimension, recent experimental
constraints allow a compactification scale as low as the TeV scale.
\section{Brane models}
Recently it has been suggested that the introduction of compactified extra
spatial dimensions may provide a solution to the hierarchy problem between
the gravitational and electroweak mass scales \cite{Arka98,Rand99}. The main
idea to resolve the large hierarchy is that the small coupling of
four-dimensional gravity is generated by the large physical volume of extra
dimensions. These theories provide a novel setting for discussing
phenomenological and cosmological issues related to extra dimensions. The
model introduced by Randall and Sundrum is particularly attractive. Their
background solution consists of two parallel flat branes, one with positive
tension and another with negative tension embedded in a five-dimensional AdS
bulk \cite{Rand99}. The fifth coordinate is compactified on $S^{1}/Z_{2}$,
and the branes are on the two fixed points. It is assumed that all matter
fields are confined on the branes and only the gravity propagates freely in
the five-dimensional bulk. In this model, the hierarchy problem is solved if
the distance between the branes is about 40 times the AdS radius and we live
on the negative tension brane. More recently, scenarios with additional bulk
fields have been considered.
For the braneworld scenario to be relevant, it is necessary to find a
mechanism for generating a potential to stabilize the distance between the
branes. The braneworld corresponds to a manifold with boundaries and all
fields which propagate in the bulk will give Casimir-type contributions to
the vacuum energy and, as a result, to the vacuum forces acting on the
branes. Casimir forces provide a natural mechanism for stabilizing the
radion field in the Randall-Sundrum model, as required for a complete
solution of the hierarchy problem. In addition, the Casimir energy gives a
contribution to both the brane and bulk cosmological constants and, hence,
has to be taken into account in the self-consistent formulation of the
braneworld dynamics.
In the $(D+1)$-dimensional generalization of the Randall-Sundrum spacetime
the background spacetime is described by the line-element%
\begin{equation}
ds^{2}=e^{-2k|y|}\eta _{\mu \nu }dx^{\mu }dx^{\nu }-dy^{2}, \label{ds2AdS}
\end{equation}%
where $\eta _{\mu \nu }$ is the metric tensor for the $D$-dimensional
Minkowski spacetime and the AdS curvature radius is given by $1/k$. The fifth
dimension $y$ is compactified on an orbifold, $S^{1}/Z_{2}$ of length $a$,
with $-a<y<a$. The orbifold fixed points at $y=0$ and $y=a$ are the
locations of two $D$-branes. Consider a scalar field $\varphi (x)$\ with
curvature coupling parameter $\zeta $ obeying boundary conditions $\left(
\tilde{A}_{y}+\partial _{y}\right) \varphi (x)=0$, $y=0,a$, on the branes.
For a scalar field with brane mass terms $c_{0}$ and $c_{a}$ on the left and
right branes respectively, the coefficients in the boundary conditions are
defined by the relation $2\tilde{A}_{j}=-n^{(j)}c_{j}-4D\zeta k$ with $%
n^{(0)}=1$, $n^{(b)}=-1$ (see, for instance, \cite{Gher00,Flac01,Saha05}).
The corresponding Casimir energy is given by the expression \cite%
{Gold00,Flac01,Garr01,Saha05}
\begin{equation}
E(a)=\alpha +\beta e^{-Dka}+\frac{(4\pi )^{-D/2}k^{D}}{\Gamma \left(
D/2\right) }\int_{0}^{\infty }du\,u^{D-1}\ln \left\vert 1-\frac{\bar{I}_{\nu
}^{(a)}(u)\bar{K}_{\nu }^{(b)}(ue^{ka})}{\bar{K}_{\nu }^{(a)}(u)\bar{I}_{\nu
}^{(b)}(ue^{ka})}\right\vert , \label{ECasbr}
\end{equation}%
where $I_{\nu }(u)$ and $K_{\nu }(u)$ are the modified Bessel
functions,
\begin{equation}
\nu =\sqrt{(D/2)^{2}-D(D+1)\zeta +m^{2}/k^{2}}, \label{nu}
\end{equation}%
and the barred notation for a given function $F(x)$ is defined by $\bar{F}%
^{(j)}(x)=(\tilde{A}_{j}/k+D/2)F(x)+xF^{\prime }(x)$. The first and second
terms on the right of Eq. (\ref{ECasbr}) are the energies for a single brane
located at $y=0$ and $y=a$ respectively when the second brane is absent. The
coefficients $\alpha $ and $\beta $ cannot be determined from the low-energy
effective theory and can be fixed by imposing suitable renormalization
conditions which relate them to observables. The last term in (\ref{ECasbr})
is free of renormalization ambiguities and can be termed an interaction
part. The corresponding vacuum forces acting on the branes can be either
repulsive or attractive, depending on the coefficients in the boundary
conditions. In particular, there is a region in the parameter space where
these forces are attractive at large distances between the branes and
repulsive at small distances with an equilibrium state at some intermediate
distance $a=a_{0}$. In the original Randall-Sundrum braneworld with $D=4$,
to account for the observed hierarchy between the gravitational and
electroweak scales we need $ka_{0}\approx 37$.
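The value $ka_{0}\approx 37$ follows from requiring the warp factor to bridge the Planck-electroweak hierarchy, $e^{ka_{0}}\sim M_{\mathrm{Pl}}/M_{\mathrm{EW}}$ (illustrative numbers):

```python
import math

M_PLANCK = 1.22e19  # four-dimensional Planck mass, GeV
M_EW = 1.0e3        # electroweak (TeV) scale, GeV

ka0 = math.log(M_PLANCK / M_EW)  # warp exponent required: e^{k a_0} ~ M_Pl / M_EW
```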
In addition to the stabilization of the distance between the branes, the
quantum effects from bulk fields can also provide a mechanism for the
generation of the cosmological constant on the visible brane. On manifolds
with boundaries, in addition to the volume part, the vacuum energy contains a
contribution located on the boundary. For a scalar field the surface energy
density $\varepsilon _{\mathrm{v}}^{{\mathrm{(surf)}}}$\ on the visible
brane $y=a$ is presented as the sum \cite{Saha04}
\begin{equation}
\varepsilon _{\mathrm{v}}^{{\mathrm{(surf)}}}=\varepsilon _{1\mathrm{v}}^{{%
\mathrm{(surf)}}}+\Delta \varepsilon _{\mathrm{v}}^{{\mathrm{(surf)}}},
\label{emt2pl2}
\end{equation}%
where $\varepsilon _{1\mathrm{v}}^{{\mathrm{(surf)}}}$ is the surface energy
density on the visible brane when the hidden brane is absent, and the part
\begin{equation}
\Delta \varepsilon _{\mathrm{v}}^{{\mathrm{(surf)}}}=k^{D}\frac{\left(
4\zeta -1\right) \tilde{A}_{j}-2\zeta }{(4\pi )^{D/2}\Gamma \left(
D/2\right) }\int_{0}^{\infty }du\,\frac{u^{D-1}\bar{I}_{\nu
}^{(a)}(ue^{-ka})/\bar{I}_{\nu }^{(b)}(u)}{\bar{K}_{\nu }^{(a)}(ue^{-ka})%
\bar{I}_{\nu }^{(b)}(u)-\bar{I}_{\nu }^{(a)}(ue^{-ka})\bar{K}_{\nu }^{(b)}(u)%
}, \label{Delteps}
\end{equation}%
is induced by the presence of the hidden brane. The part $\varepsilon _{1%
\mathrm{v}}^{{\mathrm{(surf)}}}$ contains finite renormalization terms which
are not computable within the framework of the model under consideration and
their values should be fixed by additional renormalization conditions. The
effective cosmological constant generated by the hidden brane is determined
by the relation
\begin{equation}
\Lambda _{\mathrm{v}}=8\pi M_{\mathrm{v}}^{2-D}\Delta \varepsilon _{\mathrm{v%
}}^{{\mathrm{(surf)}}}, \label{effCC}
\end{equation}%
where $M_{\mathrm{v}}$ is the $D$-dimensional effective Planck mass scale
for an observer on the visible brane. For large interbrane distances the
quantity (\ref{effCC}) is suppressed compared with the corresponding Planck
scale quantity in the brane universe by the factor $\exp [-k(2\nu +D)a]$,
assuming that the AdS inverse radius and the fundamental Planck mass are of
the same order. In the original Randall-Sundrum model with $D=4$, for a
scalar field with the mass $|m|^{2}\lesssim k^{2}$, and interbrane distances
solving the hierarchy problem, the value of the induced cosmological
constant on the visible brane is, by order of magnitude, in agreement with
the value of the dark energy density suggested by current cosmological
observations without an additional fine tuning of the parameters.
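As a rough illustration of this suppression (assuming, hypothetically, a massless minimally coupled bulk scalar, for which $\nu =D/2=2$, and $ka\approx 37$ from the hierarchy condition), the factor $\exp [-k(2\nu +D)a]$ indeed lands within a few orders of magnitude of the observed ratio $\rho _{\mathrm{DE}}/M_{\mathrm{Pl}}^{4}\sim 10^{-123}$:

```python
import math

D, ka = 4, 37.0  # original Randall-Sundrum model; interbrane distance solving the hierarchy
nu = 2.0         # hypothetical massless, minimally coupled bulk scalar: nu = D/2

suppression = math.exp(-ka * (2 * nu + D))  # comes out near 1e-129
```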
\section{Exotic Propulsion}
The idea of manipulating spacetime in some ingenious fashion to
facilitate a novel form of spacecraft propulsion has been well
explored in the literature \cite{oc1,oc2,mty,Ford:1994bj,
Ford:1996er, Pfenning:1997wh, Pfenning:1997rg, Everett:1997hb,
Everett:1995nn, Visser:1998ua, VanDenBroeck:1999sn, Lobo:2004wq,
Hiscock:1997ya, GonzalezDiaz:1999db, Puthoff:1996my,
Natario:2004zr, Lobo:2004an, Bennett:1995gp, GonzalezDiaz:2007zzb,
alc, DavisMillis:2009}. If we are to realistically entertain the
notion of interstellar exploration in timeframes of a human
life-span, a profound shift in the traditional approach to
spacecraft propulsion is clearly necessary. It is well known that
the universe imposes dramatic relativistic limitations on all
bodies moving through spacetime, and that all matter is restricted
to motion at sublight velocities ($< 3\times10^{8}\ \mathrm{m/s}$, the
speed of light, or $c$) and that as matter approaches the speed of
light, its mass asymptotically approaches infinity. This mass
increase ensures that an infinite amount of energy would be
necessary to travel at the speed of light, and thus, this speed is
impossible to reach and represents an absolute speed limit to all
matter travelling through spacetime.
Even if an engine were designed that could propel a spacecraft to
an appreciable fraction of light-speed, travel to even the closest
stars would take many decades in the frame of reference of an
observer located on Earth. Although these lengthy transit times
would not make interstellar exploration impossible, they would
certainly reduce the enthusiasm of governments or private
individuals funding these missions. After all, a mission whose
success is, perhaps, a century away would be difficult to justify.
In recent years, however, two loopholes to Einstein's ultimate
speed limit are known to exist: the Einstein-Rosen bridge and the
warp-drive. Fundamentally, both ideas involve the manipulation of
spacetime itself in some exotic way that allows for superluminal
travel.
The warp drive, which is the main feature of study for this
section, involves local manipulation of the fabric of space in the
immediate vicinity of a spacecraft. The basic idea is to create an
asymmetric bubble of space that is contracting in front of the
spacecraft while expanding behind it. \footnote{Recent progress by
Jos\'{e} Nat\'{a}rio has demonstrated that, with a slightly more
complicated metric, one can dispense with the expansion.} Spacetime
is believed to have stretched at many times the speed of light in
the first second of its existence during the inflationary period.
In many ways, the idea presented in this paper is an artificial and
local re-creation of those initial conditions.
Using this form of locomotion, the spacecraft remains stationary
inside this `warp bubble,' and it is the movement of space itself
that facilitates the relative motion of the spacecraft. The most
attractive feature of the warp drive is that the theory of
relativity places no known restrictions on the motion of space
itself, thus allowing for a convenient circumvention of the speed
of light barrier.
\begin{figure}
\begin{center}
\includegraphics[width=8cm, height=6.8cm]{fig4.jpg}
\caption{The `Top hat metric'. A bubble of asymmetric spacetime
curvature surrounds a spacecraft which would sit in the center of
the bubble. The space immediately surrounding the
spacecraft would be expanding/contracting behind/in front of the
craft. In this image the ship would `move' from left to right.} \label{fig4}
\end{center}
\end{figure}
By associating dark energy with the Casimir energy due to the KK
modes of vacuum fluctuations in higher dimensions, especially in
the context of M-theory derived or inspired models, it is possible
to form a relationship between $\Lambda$ and the radius of the
compact extra dimension.
\begin{equation}
\rho=\Lambda\propto 1/a^D.
\end{equation}
An easier way of developing the relationship between the energy
density and the expansion of space is to discuss quantities in
terms of the Hubble constant $H$, which describes the rate of
expansion of space per unit distance:
\begin{equation}
H\propto\sqrt{\Lambda} ,
\end{equation}
or in terms of the radius of the extra dimension we have
\begin{equation}
H\propto1/a^{D/2}.
\end{equation}
This result indicates that within this model, the expansion of
spacetime is a function of the size of the higher dimension. One
fascinating question to ask at this point is: could it be possible
to affect the radius of a higher dimension through some advanced
technology? If this were, indeed, possible, then this technology
would provide a remarkable mechanism to locally adjust the dark
energy density, and thus the local expansion rate of spacetime
(Figure \ref{fig5}). \footnote{Illustration
courtesy of Richard Obousy Consulting LLC and AlVin, Antigravit\'{e}}
In principle, this represents an interesting way to artificially
generate the type of spacetime featured in Figure \ref{fig4}. A
spacecraft with the ability to create such a bubble would always
move inside its own local light-cone. This ship could utilize the
expansion of spacetime behind the ship to move away from some
object at any desired speed, or equivalently, to contract the
space-time in front of the ship to approach any object. The
possibility that the size of the compact geometry might, indeed,
vary depending on the location in four dimensional spacetime has
been explored in the context of string theory \cite{Giddings2005},
but never from the perspective of propulsion technology.
\begin{figure}
\begin{center}
\includegraphics[width=11.5cm, height=7.5cm]{fig5.jpg}
\caption{An extra dimension that is artificially stimulated to
contract (or expand), due to interaction with the SM fields, might
provide a way to locally adjust the dark energy density.} \label{fig5}
\end{center}
\end{figure}
To explore the types of energies that may be required to generate
this manipulation of a higher dimension, we consider the fact
that in models with large extra dimensions, the interaction of the
graviton KK tower with the Standard Model fields is suppressed
by the higher dimensional Planck scale, and the corresponding
couplings are of inverse-TeV strength. This can be seen more
clearly when we consider the expansion of the metric tensor in
models with large extra dimensions computed within linearized
gravity models:
\begin{equation}
g_{MN}=\eta_{MN}+h_{MN}/M_D^{D/2-1},
\end{equation}
where $\eta_{MN}$ corresponds to flat (Minkowski) spacetime and
$h_{MN}$ corresponds to the bulk graviton fluctuations. The
graviton interaction term in the action is expressed by:
\begin{equation}
S_{\rm{int}}=1/M_D^{D/2-1}\int d^D x h_{MN}T^{MN}
\end{equation}
with $T^{MN}$ being the higher dimensional energy-momentum tensor.
The interaction of the graviton KK states with the SM fields is
obtained by integrating the action over the extra coordinates.
Because all these states are coupled with the universal strength $
1/M_{\rm{Pl}}$ this leads to the compelling possibility of the
control of the size of the extra dimensions by processes at
energies that will be accessible via the particle accelerators of
the near future. Although the coupling is extremely small, the
effective coupling is enhanced by the large number of KK states.
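The enhancement by the number of KK states can be made quantitative with a standard dimensional estimate (a sketch, not taken from the text: we assume $d_1$ toroidal extra dimensions of common radius $a$ and the ADD relation $M_{\rm Pl}^2 \sim M_D^{d_1+2} a^{d_1}$):

```latex
% Number of KK graviton modes lighter than E:  N(E) \sim (E a)^{d_1}.
% Each mode couples with strength 1/M_{\rm Pl}, so inclusive rates scale as
\begin{equation*}
  \sigma(E) \;\sim\; \frac{N(E)}{M_{\rm Pl}^{2}}
  \;\sim\; \frac{(E a)^{d_1}}{M_{\rm Pl}^{2}}
  \;\sim\; \frac{E^{d_1}}{M_{D}^{\,d_1+2}} ,
\end{equation*}
% i.e., suppressed by the TeV-scale M_D rather than by the Planck mass.
```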
Referring to Figure \ref{fig2}, additional energy in the form of matter
or radiation at the TeV energy scale can alter the shape of the
effective potential. In particular, the extrema determining the
size of the extra dimensions are modified with the change of the
Casimir energy density and hence, the dark energy density, in the
models under consideration.
\begin{figure}
\begin{center}
\includegraphics[width=7.52cm, height=4.76cm]{fig6.jpg}
\caption{By surrounding a spacecraft with false
minima an artificial inflation of the local spacetime might be
achieved.} \label{fig6}
\end{center}
\end{figure}
The key to creating a warp drive in this model is to create a
false vacuum minimum, i.e., to modify the vacuum spectrum and
inject some field which creates a de Sitter minimum at the rear of
the craft and an anti-de Sitter minimum at the front of the craft.
What this requires is a technology that would allow us to
artificially manipulate the field content illustrated in
Figure \ref{fig2}, shifting the location of the minimum. In this basic
representation, the spacecraft would sit in a stable region of
space corresponding to the natural minimum of the extra dimension
(approximately flat space). At the front and rear of the craft,
regions of false minima would be artificially created via the
adjustment of the extra dimension (Figure \ref{fig6}). These modified
regions would correspond to increased and negative dark energy
densities, thus creating the warp bubble previously discussed.
\section{Extra Dimensions in General Relativity}
In the context of GR a similar phenomenology is produced for the
case of anisotropic cosmological models, in which it is the
\emph{contraction} of the extra dimension that has the effect
of expanding another \cite{Levin:1994}. For example, consider a
`toy' universe with one additional spatial dimension with the
following metric
\begin{equation}
ds^2=dt^2-a^2(t)d\vec{x}^2-b^2(t)dy^2 \ .
\end{equation}
In this toy universe we will assume spacetime is empty, that there
is no cosmological constant, and that all spatial dimensions are
locally flat,
\begin{equation}
T_{\mu \nu}=\Lambda g_{\mu \nu}=0 \ .
\end{equation}
The action of Einstein gravity generalized to five dimensions is
\begin{equation}
S^{(5)}=\int d^4x dy \sqrt{-g^{(5)}}\left(\frac{M_5^2 }{16\pi }R^{(5)}\right) \ .
\end{equation}
Solving the vacuum Einstein equations
\begin{equation}
G_{\mu\nu}=0 \ ,
\end{equation}
we obtain for the $G_{11}$ component
\begin{equation}
G_{11}=\frac{3\dot{a}(b\dot{a}+a\dot{b})}{a^2b} \ .
\end{equation}
Rewriting $\dot{a}/a=H_a$ and $\dot{b}/b=H_b$, where $H_a$ and
$H_b$ correspond to the Hubble parameter of the three-space and
of the extra dimension, respectively, we find that solving
$G_{11}=0$ yields
\begin{equation}
H_a=-H_b .
\end{equation}
This remarkable result indicates that in a vacuum, the shear of a
contracting dimension is able to inflate the remaining dimensions.
In other words, the expansion of the 3-volume is associated with
the contraction of the one-volume.
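The $G_{11}$ component quoted above (the time-time component, in the paper's 1-based indexing) and the resulting relation $H_a=-H_b$ can be checked symbolically. The following is a minimal SymPy sketch, assuming the mostly-minus signature and the standard convention for the Ricci tensor:

```python
import sympy as sp

t, x1, x2, x3, y = sp.symbols('t x1 x2 x3 y')
a, b = sp.Function('a')(t), sp.Function('b')(t)
X = [t, x1, x2, x3, y]

# Toy metric ds^2 = dt^2 - a(t)^2 dx^2 - b(t)^2 dy^2, signature (+,-,-,-,-)
g = sp.diag(1, -a**2, -a**2, -a**2, -b**2)
gi = g.inv()
D = 5

# Christoffel symbols Gamma^l_{m k}
Gam = [[[sum(gi[l, s] * (sp.diff(g[s, m], X[k]) + sp.diff(g[s, k], X[m])
                         - sp.diff(g[m, k], X[s])) / 2 for s in range(D))
         for k in range(D)] for m in range(D)] for l in range(D)]

def ricci(m, k):
    """R_{mk} = d_l Gam^l_{mk} - d_k Gam^l_{ml} + Gam Gam - Gam Gam."""
    return sum(sp.diff(Gam[l][m][k], X[l]) - sp.diff(Gam[l][m][l], X[k])
               + sum(Gam[l][l][s] * Gam[s][m][k] - Gam[l][k][s] * Gam[s][m][l]
                     for s in range(D)) for l in range(D))

# Scalar curvature (metric and Ricci tensor are diagonal for this metric)
R = sum(gi[m, m] * ricci(m, m) for m in range(D))
G_tt = sp.simplify(ricci(0, 0) - g[0, 0] * R / 2)   # time-time Einstein component

ad, bd = sp.diff(a, t), sp.diff(b, t)
target = 3 * ad * (b * ad + a * bd) / (a**2 * b)    # the expression from the text
print(sp.simplify(G_tt - target))                   # expected: 0
```

Setting the printed component to zero and dividing by $3H_a$ recovers $H_a=-H_b$.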
Even in the limit of flat spacetime with zero cosmological
constant, general relativity shows that the physics of the
\emph{compactified} space affects the expansion rate of the
non-compact space. The main difference to note here is that the
quantum field theoretic result demonstrates that a
\emph{fixed} compactification radius can also result in
expansion of the three-volume due to the Casimir effect, whereas
the GR approach suggests that a \emph{changing}
compactification radius results in expansion, as is shown in (34).
What is particularly interesting about this result is that it
demonstrates that the expansion and contraction of the non-compact
space seems to be intimately related to the extra dimension in
both QFT and GR calculations.
\section{Discussion}
As we have seen, the stabilization of compact extra dimensions and the
acceleration of the other 4-dimensional part of the spacetime can be
simultaneously described by using the Casimir energy-momentum tensor as a
source in the Einstein equations. The acceleration in the 3-dimensional
subspace occurs naturally when the extra dimensions are stabilized. In this
case the Casimir energy density $\rho _{C}$ is a constant with the equation
of state $p_{C}=-\rho _{C}$, where $p_{C}$ is the Casimir pressure in the
visible universe, and an effective cosmological constant is induced. Note
that the current observational bounds on the equation of state parameter
are $-1.4< p_{\mathrm{DE}}/\rho _{\mathrm{DE}}<-0.85$, and the value $-1$ of
this parameter is among the best fits of the recent observational data. For
the Casimir energy density to be the dark energy driving the accelerated
expansion, the size of the internal space should be of the order of
$10^{-3}\,\mathrm{cm}$. Such large extra dimensions are realized in braneworld
scenarios. The value of the effective cosmological constant can be tuned by
choosing the masses of the fields and the observed value of the dark energy
density can also be obtained for smaller values for the size of the internal
space.
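The quoted size can be recovered by a one-line order-of-magnitude estimate (an illustration under the stated assumptions: the observed dark-energy scale $\rho_{\mathrm{DE}}^{1/4}\approx 2.24\,\mathrm{meV}$, and a Casimir-type density $\sim \hbar c/a^4$ for one extra dimension, so that $a \sim \hbar c/\rho_{\mathrm{DE}}^{1/4}$):

```python
# Order-of-magnitude check: a ~ hbar*c / rho_DE^(1/4).
HBARC_EV_NM = 197.327        # hbar*c in eV * nm
RHO_QUARTER_EV = 2.24e-3     # rho_DE^(1/4) ~ 2.24 meV (observed order of magnitude)

a_nm = HBARC_EV_NM / RHO_QUARTER_EV
a_cm = a_nm * 1e-7           # 1 nm = 1e-7 cm
print(f"a ~ {a_cm:.1e} cm")  # ~ 9e-3 cm, i.e. the 1e-3 - 1e-2 cm range quoted
```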
An important feature of both models with a smooth distribution of
matter in the extra dimensions and with branes is the dependence
of the dark energy density on the size of the extra dimensions. In
models with large extra dimensions the interaction of the graviton
Kaluza-Klein tower with the Standard Model fields is suppressed
by the higher dimensional Planck scale and the corresponding
couplings are of inverse-TeV strength. This leads to the interesting
possibility for the control of the size of extra dimensions by
processes at energies accessible at the near future particle
accelerators. This is seen from the form of the effective
potential for the scale factor of the internal subspace given
above. Additional energy density in the form of matter or
radiation with the TeV energy scale can alter the shape of the
effective potential. In particular, the extrema determining the
size of the extra dimensions are changed with the change of the
Casimir energy density and, hence, the dark energy density in the
models under consideration.
In the ADD scenario the Standard Model
fields are confined to a 4D brane and a
new gravity scale $M_{D}=G^{1/(2-D)}\gtrsim \mathrm{TeV}$ is introduced in
$4+d_{1}$ dimensions. The gravity on the brane appears as a tower of
Kaluza-Klein states with universal coupling to the Standard Model fields.
Though this coupling is small ($\sim 1/M_{\mathrm{Pl}}$), a relatively large
cross section is obtained from the large number of Kaluza-Klein states.
Kaluza-Klein graviton emission from SN1987A puts a bound
$M_{D}\geqslant 30\,\mathrm{TeV}$ on the fundamental energy scale for ADD type models
in the case $d_{1}=2$. For the models with $d_{1}=3$ the
constraint is less restrictive, $M_{D}\gtrsim$ a few TeV. Hence,
the latter case is still viable for
solving the hierarchy problem and accessible to being tested at
solving the hierarchy problem and accessible to being tested at
the LHC. The ATLAS experiment at the LHC will be able to probe
ADD type extra dimensions up to
$M_{D}\approx 8$ TeV. In the Randall-Sundrum scenario the
hierarchy is explained by an exponential warp factor in
$\mathrm{AdS}_{5}$ bulk geometry. Electroweak precision tests put
a severe lower bound on the lowest Kaluza-Klein mass. The masses
of the order of $1\,\mathrm{TeV}$ can be accommodated. In
universal extra dimension scenarios the current Tevatron results
constrain the mass of the compactification scale to
$M_{\mathrm{C}}>400$ GeV. The ATLAS experiment will be sensitive
to $M_{\mathrm{C}}\approx 3$ TeV. These estimates show that if
nature is described by one of the models under consideration, the
extra dimensions can be probed and their size can be controlled at
the energies accessible at LHC. In models with the Casimir energy
in the role of the dark energy this provides a possibility for the
control of the dark energy local density through the change in the
size of the extra dimensions.
Note that the effective potential (\ref{Vpot}) contains a factor $\Omega
^{-2}\sim V_{{\mathrm{int}}}^{-2/(d_{0}-1)}$, with $V_{{\mathrm{int}}}$
being the volume of the internal space, and, hence, vanishes in the large
volume limit for the extra dimensions if the energy density does not grow
sufficiently fast. This leads to the instability of a dS minimum with
respect to the decompactification of extra dimensions. This feature is
characteristic of other stabilization mechanisms as well, including those
based on fluxes from form-fields and wrapped branes. The stability
properties of the metastable dS minimum depend on the height of the local
maximum separating this minimum from the minimum which corresponds to the
infinite size of the extra dimensions. We conclude that in models with
large extra dimensions, as in the case of the parameters for the minimum,
the height and the location of the maximum can be controlled by the
processes with the energy density of the TeV scale.
Another important point should also be addressed.
Though the size of the internal space is stabilized by the
one-loop effective potential, in general, there are fluctuations
around the corresponding minimum with both classical and quantum
sources which shift the size for the internal space from the fixed
value leading to the time variations of this size. This gives time
variations in the equation of state parameter for the
corresponding dark energy density. However, it should be taken
into account that variations of the size of the internal
space around the minimum induce variations of the fundamental
constants (in particular, gauge couplings) in the effective
four-dimensional theory. These variations are strongly constrained
by the observational data. As a consequence, the variations of the
equation of state parameter for the corresponding dark energy
density around the mean value $-1$ are small. It is interesting
that in the models under consideration the constancy of the dark
energy density is related to the constancy of effective physical
constants. Note that models can be constructed where the volume of
the extra dimensions is stabilized at early times, thus
guaranteeing standard four dimensional gravity, while the role of
quintessence is played by the moduli fields controlling the shape
of the extra space.
\section{Introduction} An object of major interest in Harmonic Analysis is the Hardy--Littlewood maximal function,
which can be defined as
$$ Mf(x) = \sup_{t \in \mathbb{R}_{+}} \frac{1}{2t} \int_{x-t}^{x+t} |f(s)| \mathrm{d} s.$$
Alternatively, one can also define its \emph{uncentered} version as $$\tilde{M}f(x) = \sup_{x \in I} \frac{1}{|I|} \int_I |f(s)| \mathrm{d} s.$$ The most classical result about these maximal functions is perhaps the Hardy--Littlewood--Wiener theorem,
which states that both $M$ and $\tilde{M}$ map $L^p(\mathbb{R})$ into itself for $1<p \le \infty$, and that in the case $p=1$ they satisfy a weak type inequality:
$$ |\{x \in \mathbb{R}\colon Mf(x) > \lambda\}| \le \frac{C}{\lambda}\|f\|_1,$$
where $C = \frac{11 + \sqrt{61}}{12}$ is the best possible constant for $M$, found by A. Melas \cite{Melas}. The same inequality also holds in the case of $\tilde{M}$ above, but this time with $C=2$ being the best constant, as shown by F. Riesz \cite{riesz}.\\
In the remarkable paper \cite{Kinnunen1}, J. Kinnunen proves, using functional analytic techniques and the aforementioned theorem, that, in fact,
$M$ maps the Sobolev spaces $W^{1,p}(\mathbb{R})$ into themselves, for $1<p\le \infty$. Kinnunen also proves that this result holds if we replace the standard maximal function by its uncentered version.
This opened a new field of studies, and several other properties of this and other related maximal functions were studied. We mention, for example, \cite{emanuben, emanumatren, hajlaszonninen, kinnunenlindqvist, luiro}. \\
Since the Hardy--Littlewood maximal function fails to be in $L^1$ for every nontrivial function $f$, and the tools from functional analysis mentioned above are not available in the case $p=1$,
an important question was whether a bound of the form $\|(Mf)'\|_1 \le C \|f'\|_1$ could hold for every $f \in W^{1,1}$. \\
In the uncentered case, H. Tanaka \cite{Tanaka} provided a positive answer to this question. Explicitly, Tanaka proved that, whenever $f \in W^{1,1}(\mathbb{R}),$ then $\tilde{M}f$ is weakly differentiable and satisfies $\|(\tilde{M}f)'\|_1 \le 2 \|f'\|_1$. Here,
$W^{1,1}(\mathbb{R})$ stands for the Sobolev space $\{f:\mathbb{R} \to \mathbb{R}\colon \|f\|_1 + \|f'\|_1 < + \infty \}.$ \\
Some years later, Aldaz and P\'erez L\'azaro \cite{aldazperezlazaro} improved Tanaka's result, showing that, whenever $f \in BV(\mathbb{R})$, then the maximal function $\tilde{M}f$ is in fact
\emph{absolutely continuous}, and $\mathcal{V}(\tilde{M}f) = \|(\tilde{M}f)'\|_1 \le \mathcal{V}(f)$, the constant $C=1$ being sharp. Here we take the \emph{total variation of a function} to be $\mathcal{V}(f) := \sup_{\{x_1<\cdots<x_N\} = \mathcal{P}} \sum_{i=1}^{N-1} |f(x_{i+1}) - f(x_i)|,$ and consequently
the space of \emph{bounded variation functions} to be the space of functions $f: \mathbb{R} \to \mathbb{R}$ for which there exists $g$ with $f= g$ a.e.\ and $\mathcal{V}(g) < +\infty.$ In this direction, J. Bober, E. Carneiro, K. Hughes and L. Pierce \cite{BCHP} studied the discrete version of this problem, obtaining similar results. \\
In the centered case, many questions remain unsolved. Surprisingly, it turned out to be \emph{harder} than the uncentered one, due to the contrast in smoothness of $Mf$ and $\tilde{M}f.$
In \cite{kurka}, O. Kurka showed the endpoint question to have an affirmative answer, that is, that $\mathcal{V}(Mf) \le C\mathcal{V}(f)$, with $C=240{,}004$. Unfortunately, his
method does not give the best possible constant, the standing conjecture being that $C=1$ is the sharp constant. \\
In \cite{temur}, F. Temur studied the discrete version of this problem, proving that
for every $f \in BV(\mathbb{Z})$ we have $\mathcal{V}(Mf) \le C'\mathcal{V}(f),$ with an absolute constant $C' > 10^6$. The standing conjecture is again that $C'=1$ in this case, which was in part backed up by J. Madrid's optimal results \cite{josesito}:
if $f \in \ell^1(\mathbb{Z}),$ then $Mf \in BV(\mathbb{Z})$ and $\mathcal{V}(Mf) \le 2 \|f\|_1,$ with $2$ being sharp in this inequality. \\
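Madrid's inequality is easy to test numerically; the sketch below (an illustration, not part of the paper) computes the discrete centered maximal function of a finitely supported sequence and checks the bound on a large window:

```python
def total_variation(seq):
    return sum(abs(v2 - v1) for v1, v2 in zip(seq, seq[1:]))

def discrete_centered_max(f, W):
    """Mf(n) = sup_r (2r+1)^{-1} sum_{|k-n|<=r} |f(k)| for a finitely
    supported f (dict: n -> value), evaluated on the window [-W, W]."""
    L = max(abs(n) for n, v in f.items() if v)
    vals = []
    for n in range(-W, W + 1):
        best, s = 0.0, 0.0
        # past r = |n| + L the numerator is constant while 2r+1 grows
        for r in range(abs(n) + L + 1):
            if r == 0:
                s = abs(f.get(n, 0.0))
            else:
                s += abs(f.get(n - r, 0.0)) + abs(f.get(n + r, 0.0))
            best = max(best, s / (2 * r + 1))
        vals.append(best)
    return vals

f = {0: 1.0, 1: -2.0, 5: 0.5}           # a finitely supported sequence
l1 = sum(abs(v) for v in f.values())    # ||f||_1 = 3.5
Mf = discrete_centered_max(f, W=300)
print(total_variation(Mf) <= 2 * l1)    # Madrid's bound V(Mf) <= 2||f||_1; prints True
```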
Our main theorems deal with -- as far as the author knows -- the first attempt to prove sharp bounded variation results for classical Hardy--Littlewood maximal functions. Indeed, we may view the classical uncentered Hardy--Littlewood maximal function as
\begin{align*}
\tilde{M}f(x) &= \sup_{x \in I} \frac{1}{|I|} \int_I |f(s)|\mathrm{d} s = \sup_{(y,t)\colon |x-y|\le t} \frac{1}{2t} \int_{y-t}^{y+t} |f(s)| \mathrm{d} s. \cr
\end{align*}
Notice that this supremum need \emph{not} be attained for every function $f$ and at every point $x \in \mathbb{R},$ but this shall not be a problem for us, as we will see throughout the text.
This way, we may look at this operator as a particular case of the wider class of \emph{nontangential} maximal operators
$$ M^{\alpha}f(x) = \sup_{|x-y| \le \alpha t} \frac{1}{2t} \int_{y-t}^{y+t} |f(s)| \mathrm{d} s.$$
Indeed, from this new definition, we get directly that
\[
\begin{cases}
M^{\alpha}f = Mf, & \text{ if } \alpha = 0,\cr
M^{\alpha}f = \tilde{M}f, & \text{ if } \alpha =1. \cr
\end{cases}
\]
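For intuition, one can brute-force $M^{\alpha}f$ on a grid and watch the variation decrease as the aperture grows. The sketch below (a numerical illustration, not part of the proofs) uses the two-bump function $f=\chi_{(0,1)}+\chi_{(2,3)}$, for which the centered operator dips lower between the bumps than the uncentered one:

```python
def avg(y, t):
    """Average of f = chi_(0,1) + chi_(2,3) over the interval (y - t, y + t)."""
    overlap = 0.0
    for l, r in ((0.0, 1.0), (2.0, 3.0)):
        overlap += max(0.0, min(y + t, r) - max(y - t, l))
    return overlap / (2 * t)

def M_alpha(x, alpha, ts, ny=21):
    """Discretized nontangential maximal function: sup over |x - y| <= alpha*t."""
    best = 0.0
    for t in ts:
        for j in range(ny):
            y = x - alpha * t + 2 * alpha * t * j / (ny - 1)
            best = max(best, avg(y, t))
    return best

def variation(vals):
    return sum(abs(v2 - v1) for v1, v2 in zip(vals, vals[1:]))

xs = [-2 + 7 * i / 149 for i in range(150)]       # x-grid on [-2, 5]
ts = [0.02 + 5.98 * i / 79 for i in range(80)]    # radii t in (0, 6]
var = {al: variation([M_alpha(x, al, ts) for x in xs])
       for al in (0.0, 1 / 3, 1.0)}
print(var)   # non-increasing in alpha in our runs; all values below V(f) = 4
```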
As in the uncentered case, we can still define `truncated' versions of these operators, by imposing that $t \le R$. These operators are far from being a novelty: several references consider those all around mathematics, among those the classical \cite[Chapter~2]{stein}, and the more recent, yet related to our work,
\cite{emanumatren}. An easy argument (see Section \ref{commrem} below) proves that, if $\alpha < \beta$, then
\[
\mathcal{V}(M^{\beta}f) \le \mathcal{V}(M^{\alpha}f).
\]
This implies already, by the main theorem in \cite{kurka}, that there exists a constant $A \ge 0$ such that $\mathcal{V}(M^{\alpha}f) \le A\mathcal{V}(f)$ for all $\alpha > 0.$ With the intention of sharpening this result, our first theorem reads as follows:
\begin{theorem}\label{angle} Fix any $f \in BV(\mathbb{R}).$ For every $\alpha \in [\frac{1}{3},+\infty),$ we have that
\begin{equation}\label{maint}
\mathcal{V}(M^{\alpha}f) \le \mathcal{V}(f).
\end{equation}
There exists an extremizer $f$ for the inequality (\ref{maint}). If $\alpha > \frac{1}{3},$ then any positive extremizer $f$ to inequality (\ref{maint}) satisfies:
\begin{itemize}
\item $\lim_{x \to -\infty} f(x) = \lim_{x \to + \infty} f(x).$
\item There is $x_0$ such that $f$ is non-decreasing on $(-\infty,x_0)$ and non-increasing on $(x_0,+\infty).$
\end{itemize}
Conversely, all such functions are extremizers to our problem. Finally, for every $\alpha \ge 0$ and $f \in W^{1,1}(\mathbb{R}),$ $M^{\alpha}f \in W^{1,1}_{loc}(\mathbb{R}).$
\end{theorem}
Notice that saying that a function $g$ belongs to $W^{1,1}_{loc}(\mathbb{R})$ is the same as asking it to be absolutely continuous. Our ideas to prove this theorem and Theorem \ref{lip}
are heavily inspired by the ones in \cite{aldazperezlazaro}. Our aim will always be to prove that, when $f \in BV(\mathbb{R}),$ then the maximal function $M^{\alpha}f$ is well-behaved on the detachment set
\[
E_{\alpha} = \{x \in \mathbb{R}\colon M^{\alpha}f(x) > f(x)\}.
\]
Namely, we seek to show that the maximal function does not have any local maxima in the set where it detaches from the original function. Such an idea, together with the concept
of the detachment set $E_{\alpha},$ is also far from new, having already appeared in \cite{aldazperezlazaro, emanuben, emanumatren, Tanaka}, and recently in \cite{luiroradial}. More specific details can be found in the next section. \\
In general, our main ideas are contained in Lemma \ref{square}, where we prove that the region in the upper half plane that is taken into account for the supremum that defines
$$M^1_{\equiv R}f = \sup_{x \in I\colon |I| \le 2R} \Xint-_I |f(s)| \mathrm{d} s,$$
where we define
$$\Xint-_I g(s) \mathrm{d} s :=\frac{1}{|I|} \int_I g(s) \mathrm{d} s,$$
is actually a (rotated) \emph{square}, and not a triangle -- as a first
glance might suggest --, and in the comparison of $M^{\alpha}f$ and $M^1_{\equiv R}f$ over a small interval, in order to establish the maximal attachment property. \\
We may ask ourselves if, for instance, we could go lower than $1/3$ with this method. Our next result, however, shows that this is the optimal bound for this technique:
\begin{theorem}\label{counterex} Let $\alpha< \frac{1}{3}$. Then there exists $f \in BV(\mathbb{R})$ and a point $x_{\alpha} \in \mathbb{R}$ such that $x_{\alpha}$ is a local maximum of $M^{\alpha}f$, but $M^{\alpha}f(x_{\alpha}) > f(x_{\alpha}).$
\end{theorem}
One may also wonder whether the results of Aldaz and P\'erez L\'azaro can be generalized in yet another direction. With this in mind, we notice that Kurka \cite{kurka} mentions in his paper that his techniques allow one to
prove that some Lipschitz truncations of the centered maximal function, that is, maximal functions of the form
\[
M^0_{N}f(x) = \sup_{t \le N(x)} \frac{1}{2t} \int_{x-t}^{x+t} |f(s)| \mathrm{d} s,
\]
are bounded from $BV(\mathbb{R})$ to $BV(\mathbb{R})$ -- with some possibly big constant -- if $\text{Lip}(N) \le 1.$ Inspired by it, we define the \emph{$N-$truncated uncentered maximal function} as
$$ M^1_{N}f(x) = \sup_{|x-y|\le t \le N(x)} \Xint-_{y-t}^{y+t} |f(s)| \mathrm{d} s. $$
The next result deals with an analogue of Kurka's remark in the case of the uncentered maximal function. In fact, we achieve even more in this
case, as we also obtain the explicit sharp constants. In detail, the result reads as follows:
\begin{theorem}\label{lip} Let $N: \mathbb{R} \to \mathbb{R}_{+}$ be a measurable function. If $\text{Lip}(N) \le \frac{1}{2},$ we have that, for all $f \in BV(\mathbb{R}),$
\[
\mathcal{V}(M^1_Nf) \le \mathcal{V}(f).
\]
Moreover, the result is sharp, in the sense that there are non-constant functions $f$ such that $\mathcal{V}(f) = \mathcal{V}(M^1_Nf).$
\end{theorem}
Again, we are going to use a careful analysis of maxima in this case. Actually, we do so in both Theorems \ref{angle} and \ref{lip} for the non-endpoint cases $\alpha > \frac{1}{3}$
and $\text{Lip}(N) < \frac{1}{2},$ while the endpoints are treated with an easy limiting argument. \\
In the same way, one may ask whether the Lipschitz constant can be allowed to be greater than $\frac{1}{2}$ in this result. Regarding this question, we prove in Section 4.3 the following negative answer:
\begin{theorem}\label{lipcont} Let $c > \frac{1}{2}$ and
\[
f(x) = \begin{cases}
1, & \text{ if } x \in (-1,0);\cr
0, & \text{ otherwise.} \cr
\end{cases}
\]
Then there is a function $N:\mathbb{R} \to \mathbb{R}_{\ge 0}$ such that $\text{Lip}(N) = c$ and
\[
\mathcal{V}(M^1_Nf) = + \infty.
\]
\end{theorem}
\textbf{Acknowledgements.} The author would like to thank Christoph Thiele, for the remarks that led him to the full range $\alpha \ge \frac{1}{3}$ at Theorem \ref{angle}, as well as to the proof that this is sharp for this technique, and Olli Saari, for enlightening discussions
about the counterexamples in the proof of Theorem \ref{lipcont} and their construction. He would also like to thank Emanuel Carneiro and Mateus Sousa for helpful comments and discussions,
many of which took place during the author's visit to the International Centre for Theoretical Physics in Trieste, to which the author is grateful for its hospitality, and Diogo Oliveira e Silva, for his thorough inspection and numerous comments on the preliminary versions of this paper. The author would like to thank also
the anonymous referees, whose corrections and ideas, among which the use of a new normalization, have greatly simplified and streamlined this manuscript. Finally, the author acknowledges financial
support from the Hausdorff Center for Mathematics and the DAAD.
\section{Basic definitions and properties} Throughout the paper, $I$ and $J$ will usually denote open intervals, and $l(I),l(J),r(I),r(J)$ their left and right endpoints, respectively.
We also denote, for $f \in BV(\mathbb{R}),$ the \emph{one-sided limits} $f(a+)$ and $f(a-)$ to be
\[
f(a+) = \lim_{x \searrow a} f(x) \text{ and } f(a-) = \lim_{x \nearrow a} f(x).
\]
We also define, for a general function $N:\mathbb{R} \to \mathbb{R},$ its \emph{Lipschitz constant} as
\[
\text{Lip}(N) := \sup_{x\ne y \in \mathbb{R}} \frac{|N(x) - N(y)|}{|x-y|}.
\]
In view of the arguments and techniques contained in the lemmata from \cite{aldazperezlazaro}, we may sometimes consider a function in $BV(\mathbb{R})$ endowed with the normalization $f(x) = \limsup_{y \to x}f(y), \,\forall x \in \mathbb{R}.$ At other times, however, we might need to work with a normalization
a little friendlier to the maximal functions involved. Let, then, for a fixed $\alpha \in (0,1],$
\[
\mathcal{N}_{\alpha}{f}(x) = \limsup_{(y,t) \to (x,0): |y-x|\le \alpha t} \frac{1}{2t} \int_{y-t}^{y+t} |f(s)| \mathrm{d} s.
\]
This coincides, by definition, with $f$ almost everywhere, as bounded variation functions are continuous almost everywhere. Moreover, this normalization can be stated, in a pointwise context, as
\[
\mathcal{N}_{\alpha}f(x) = \frac{(1+\alpha)\limsup_{y \to x} f(y) + (1-\alpha) \liminf_{y \to x} f(y)}{2}.
\]
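For a single jump the pointwise formula is easy to verify numerically. Below is a small sketch (illustrative; the step function and the values $A$, $B$, and the window scale are our choices) approximating the supremum of admissible averages at the jump point $x=0$:

```python
A, B, ALPHA = 0.25, 1.0, 0.5   # f = A for x < 0, B for x > 0 (liminf = A, limsup = B at 0)

def avg(y, t):
    """Average of f over the window (y - t, y + t), which straddles 0 here."""
    neg = max(0.0, min(y + t, 0.0) - (y - t))   # length of the window below 0
    pos = 2 * t - neg
    return (neg * A + pos * B) / (2 * t)

t = 1e-6                        # a window shrinking to the jump; the sup is scale-invariant
sup_avg = max(avg(-ALPHA * t + 2 * ALPHA * t * j / 400, t) for j in range(401))
formula = ((1 + ALPHA) * B + (1 - ALPHA) * A) / 2
print(abs(sup_avg - formula) < 1e-9)            # → True
```

The supremum is attained at the admissible center $y=\alpha t$ farthest into the larger one-sided limit, which is exactly what the weights $(1\pm\alpha)/2$ encode.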
With this normalization, we see that, for any $f \in BV(\mathbb{R}),$
\[
M^{\alpha}f(x) \ge \mathcal{N}_{\alpha}f(x), \,\, \text{for each }x \in \mathbb{R}.
\]
This normalization, however, is \emph{not} friendly to boundary points: the sets $\{M^{\alpha}f > f\}$ might not be open when we adopt it, as the example of $f = \chi_{(0,\frac{1-\alpha}{4}]} + \frac{1}{2} \chi_{(\frac{1-\alpha}{4},\frac{1-\alpha}{2}]} + \chi_{(\frac{1-\alpha}{2},1]}$ endowed with $\mathcal{N}_{\alpha}f$ shows. This function has the property that $M^{\alpha}f\left(\frac{1-\alpha}{2}\right) > \mathcal{N}_{\alpha}f \left(\frac{1-\alpha}{2}\right),$ but $M^{\alpha}f = f$ at $\left(\frac{1-\alpha}{2},1\right).$\\
Consider then $\mathcal{N}_{\alpha}f$, and notice that the situations as in the example above can only happen if $\mathcal{N}_{\alpha}f$ is \emph{discontinuous} at a point $x$. We then let
\begin{equation}
\tilde{\mathcal{N}}_{\alpha}f(x) = \begin{cases}
\mathcal{N}_{\alpha}f(x), & \text{ if } M^{\alpha}f(x) > \limsup_{y \to x}f(y);\cr
M^{\alpha}f(x), & \text{ if } \limsup_{y \to x}f(y) \ge M^{\alpha}f(x) \ge \mathcal{N}_{\alpha}f(x). \cr
\end{cases}
\end{equation}
Of course, we are only changing the points at which $\liminf_{y \to x} f(y) < \mathcal{N}_{\alpha}f < \limsup_{y \to x} f(y),$ and thus this normalization does not change the variation, i.e., $\mathcal{V}(\tilde{\mathcal{N}}_{\alpha}f) = \mathcal{V}(f).$
Again, by adapting the lemmata in \cite{aldazperezlazaro} to this context, one checks that we may assume, without loss of generality, that our function has this normalization. For brevity, we will say we are using $NORM(\alpha)$ whenever we use this normalization. Notice that $NORM(1)$ is the normalization
used by Aldaz and P\'erez L\'azaro. \\
We also say a few words about the analysis of maxima performed throughout the paper. In the paper \cite{aldazperezlazaro}, the authors developed an ingenious
way to prove the sharp bounded variation result for the uncentered maximal function. Namely, they proved that, whenever $f \in BV(\mathbb{R})$, then the maximal function $\tilde{M}f$ is actually continuous, and the (open) set
\[
E = \{ \tilde{M}f > f\} = \cup_j I_j
\]
satisfies that, in each of the intervals $I_j$, $\tilde{M}f$ has no local maxima. More specifically, they observed that every local maximum $x_0$ of $\tilde{M}f$ satisfies that $\tilde{M}f(x_0) = f(x_0).$ In our case, we are going to need the general version of this property,
as the corresponding statement for local maxima of $M^{\alpha}f$ may not hold. It is more of an informal principle than a property itself, but we shall state it nonetheless, for the sake of stressing its impact on our methods.
\begin{proper} We say that an operator $\mathcal{O}$ defined on the class of bounded variation functions has a good attachment at local maxima if, for every $f \in BV(\mathbb{R})$ and local maximum $x_0$ of $\mathcal{O}f$ over an interval $(a,b)$, with $\mathcal{O}f(x_0) > \max(\mathcal{O}f(a),\mathcal{O}f(b)),$
then either $\mathcal{O}f(x_0) = |f(x_0)|$ or there exists an interval $I \subset (a,b)$ containing $x_0$ such that $\mathcal{O}f$ is constant on $I$ and there is $y \in I$ such that $\mathcal{O}f(y) = |f(y)|.$
\end{proper}
The intuition behind this principle is that, for such operators, one usually has that $\mathcal{V}(\mathcal{O}f) \le \mathcal{V}(f)$, as skimming through the proofs in \cite{aldazperezlazaro} suggests. This is, as one should expect, the main tool to prove Theorems \ref{angle} and \ref{lip}.
\section{Proof of Theorems \ref{angle} and \ref{counterex}} In what follows, let $f \in BV(\mathbb{R})$ have either $NORM(1)$ or $NORM(\alpha),$ where the normalization used will be specified in each context.
\subsection{Analysis of maxima for $M^{\alpha}$, $\alpha > \frac{1}{3}$} Here, we prove some major facts that will facilitate our work. Let then $[a,b]$ be an interval, and suppose that $M^{\alpha}f$ has a \emph{strict} local maximum at $x_0 \in (a,b).$ That is,
we suppose that $M^{\alpha}f(x_0)$ is maximal over $[a,b],$ with $M^{\alpha}f(x_0) > \max\{ M^{\alpha}f(a),M^{\alpha}f(b)\}.$ Suppose also that
$M^{\alpha}f(x_0) = u(y,t), \text{ for some } (y,t) \in \{(z,s); |z-x_0|\le \alpha s\},$ where we define the function $u:\mathbb{R}\times \mathbb{R}_{+} \to \mathbb{R}_+$ as
\[
u(y,t) = \frac{1}{2t} \int_{y-t}^{y+t} |f(s)|\,\mathrm{d} s.
\]
Such an assumption is possible, as otherwise there would exist either
\begin{itemize}
\item a sequence $(y,t) \to (x_0,0)$ such that $\Xint-_{y-t}^{y+t} |f(s)| \mathrm{d} s \to M^{\alpha}f(x_0),$ which implies $|f(x_0)| = M^{\alpha}f(x_0)$ by the normalization;
\item a sequence $(y,t)$ with $t \to \infty$ such that $\Xint-_{y-t}^{y+t} |f(s)| \mathrm{d} s \to M^{\alpha}f(x_0),$ which implies that either $M^{\alpha}f(a)$ or $M^{\alpha}f(b)$ is bigger than or equal to $M^{\alpha}f(x_0),$ a contradiction.
\end{itemize}
As $M^{\alpha}f(x_0) = u(y,t),$ we have that $M^{\alpha}f(x_0) = M^{\alpha}f(y).$ Moreover, we claim that
\[
[y-\alpha t, y+ \alpha t] \subset (a,b).
\]
If this did not hold, then either $a$ or $b$ would belong to $[y-\alpha t, y+ \alpha t].$ Let us suppose, without loss of generality, that $a \in [y-\alpha t, y+ \alpha t].$ But then
\[
a \in [y-\alpha t, y+\alpha t] \Rightarrow |a-y| \le \alpha t \Rightarrow M^{\alpha}f(a) \ge u(y,t) = M^{\alpha}f(x_0),
\]
a contradiction to our assumption of strictness of the maximum. Since every $z \in [y-\alpha t, y+ \alpha t]$ satisfies $|z-y|\le \alpha t,$ and hence $M^{\alpha}f(z) \ge u(y,t) = M^{\alpha}f(x_0) \ge M^{\alpha}f(z),$ the maximal function $M^{\alpha}f$ is \emph{constant} over the interval $[y-\alpha t, y+ \alpha t].$ Moreover, we have that the supremum of
\[u(z,s), \text{ for } (z,s) \in \cup_{z' \in [y-\alpha t, y+ \alpha t]} \{(z'',s''): |z''-z'| \le \alpha s''\} =: C(y,\alpha,t),
\]
is attained for $(z,s) = (y,t).$ \\
By standard techniques, we shall assume $f \ge 0$ from now on. Our next step is then to find a subinterval $I$ of $[y-\alpha t, y+ \alpha t]$ and an $R = R(y,\alpha,t)$ such that, over this interval $I$, it holds that
\[
M^1_{\equiv R} f \equiv M^{\alpha}f.
\]
Here, $M^1_{\equiv R}$ stands for the operator given by $M^1_{\equiv R}f(x) = \sup_{I \ni x,\, |I| \le 2R} \Xint-_I |f(s)| \,\mathrm{d} s.$
For that, we need to investigate a few properties of the restricted maximal function $M^1_{\equiv R} f.$ This is done via the following:
\begin{lemm}[Boundary Projection Lemma]\label{BPL} Let $x \in \mathbb{R}$ and $(y,t) \in \mathbb{R}\times \mathbb{R}_{+}$. Let us denote $$ \frac{1}{2t} \int_{y-t}^{y+t} f(s) \mathrm{d} s = u(y,t).$$ If $(y,t) \in \{(z,s); 0<|z-x|\le s \}$, then
$$ u(y,t) \le \max\left\{u\left(\frac{x+y-t}{2},\frac{x-y+t}{2}\right), u\left(\frac{x+y+t}{2},\frac{y-x+t}{2}\right)\right\}.$$
\end{lemm}
\begin{proof}
The proof is simple: in case $|x-y|=t,$ then the inequality is trivial, so we assume $|x-y|<t.$ We then just have to write
\begin{align*}
u(y,t) &= \frac{1}{2t} \int_{y-t}^{y+t} f(s) \mathrm{d} s = \frac{1}{2t} \int_{y-t}^x f(s) \mathrm{d} s + \frac{1}{2t} \int_x^{y+t} f(s) \mathrm{d} s \cr
& = \frac{x-y+t}{2t} \frac{1}{x-y+t} \int_{y-t}^x f(s) \mathrm{d} s \cr
& + \frac{y-x+t}{2t} \frac{1}{y-x+t} \int_x^{y+t} f(s) \mathrm{d} s \cr
& = \frac{x-y+t}{2t} u\left(\frac{x+y-t}{2},\frac{x-y+t}{2}\right) \cr
& + \frac{y-x+t}{2t} u\left(\frac{x+y+t}{2},\frac{y-x+t}{2}\right) \cr
& \le \max \left\{ u\left(\frac{x+y-t}{2},\frac{x-y+t}{2}\right), u\left(\frac{x+y+t}{2},\frac{y-x+t}{2}\right)\right\}.
\end{align*}
\end{proof}
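Although no part of the argument depends on it, the inequality of Lemma \ref{BPL} is easy to test numerically. The sketch below is an illustration only: it assumes the sample profile $f(s) = e^{-s^2}$ (chosen because its averages have a closed form through the error function) and samples random admissible triples $(x,y,t)$.

```python
import numpy as np
from math import erf, sqrt, pi

# Illustration only (not part of the proof): test the Boundary Projection
# Lemma for the assumed profile f(s) = exp(-s^2), whose averages
# u(y,t) = (1/(2t)) * int_{y-t}^{y+t} exp(-s^2) ds are exact via erf.
def u(y, t):
    return sqrt(pi) / 2 * (erf(y + t) - erf(y - t)) / (2 * t)

rng = np.random.default_rng(0)
max_gap = -float("inf")
for _ in range(1000):
    x = rng.uniform(-3.0, 3.0)
    t = rng.uniform(0.05, 3.0)
    y = x + rng.uniform(-0.99, 0.99) * t    # guarantees |x - y| < t
    lhs = u(y, t)
    rhs = max(u((x + y - t) / 2, (x - y + t) / 2),
              u((x + y + t) / 2, (y - x + t) / 2))
    max_gap = max(max_gap, lhs - rhs)
print(max_gap)  # nonpositive up to rounding: lhs never exceeds rhs
```

Since $u(y,t)$ is a convex combination of the two projected averages, the gap is nonpositive for every nonnegative profile, not just the one assumed here.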
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.763456]
\draw[-,semithick] (0,-0.5) -- (-4,4.5);
\draw[-|,semithick] (-3,-0.5) -- (0,-0.5);
\draw[|-|,semithick] (0,-0.5) -- (2,-0.5);
\draw[-|,semithick] (2,-0.5) -- (4,-0.5);
\draw[|-,semithick] (4,-0.5) -- (7,-0.5);
\draw[-,semithick] (4,-0.5) -- (8,4.5);
\draw (2,1.1) node[circle,fill,inner sep=1pt]{} ;
\fill [gray, opacity=0.15] (0,-0.5) -- (-4,4.5) -- (8,4.5) --(4,-0.5) -- cycle;
\draw (2,0.6) node {$ (y,t)$};
\draw (0,-1) node {$y-\alpha t$};
\draw (2,-1) node {$y$};
\draw (4,-1) node {$y+\alpha t$};
\end{tikzpicture}
\caption{The region $C(y,\alpha,t).$}
\end{figure}
\vspace{4mm}
Let $M_{r,A}f(x) = \sup_{0 < t \le 2A}\frac{1}{t} \int_{x}^{x+t} |f(s)| \, \mathrm{d} s,$ and define $M_{l,A}f$ in a similar way, where the subindexes $``r"$ and $``l"$ stand, respectively, for ``right" and ``left". These operators
appear even outside the context of sharp regularity estimates for maximal functions, as in \cite{riesz}. In the realm of regularity of maximal functions, though, the first to introduce this notion was
Tanaka \cite{Tanaka}. As a corollary of Lemma \ref{BPL}, we may obtain the following:
\begin{corol}\label{control} For every $f \in L^1_{loc}(\mathbb{R}),$ it holds that
\[
\sup_{|z-x| + |t-R| \le R} u(z,t) \le \max\{M_{r,R}f(x),M_{l,R}f(x)\}.
\]
\end{corol}
From this last corollary, we are able to establish the following important -- and, as far as the author knows, new -- lemma:
\begin{lemm}\label{square} For every $f \in L^1_{loc}(\mathbb{R}),$ we have also that
$$M^1_{\equiv R} f (x) = \sup_{|z-x| + |t-R| \le R} u(z,t).$$
\end{lemm}
\begin{proof}From Corollary \ref{control}, we have that
\[
M^1_{\equiv R}f(x) := \sup_{|x-y|\le t \le R} u(y,t) \le \sup_{|z-x| + |t-R| \le R} u(z,t)
\]
\[
\le \max\{M_{r,R}f(x),M_{l,R}f(x)\} \le M^1_{\equiv R} f(x).
\]
That is exactly what we wanted to prove.
\end{proof}
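Lemma \ref{square} can be probed in the same spirit: the supremum of $u$ over the cone $\{(y,t)\colon |x-y| \le t \le R\}$ and over the square $\{(z,t)\colon |z-x| + |t-R| \le R\}$ should coincide. The sketch below is an illustration only, again with the assumed profile $f(s) = e^{-s^2}$ and arbitrary grid sizes; both suprema are approximated from below.

```python
import numpy as np
from math import erf, sqrt, pi

# Illustration only: compare the supremum of u over the cone
# {(y,t): |y - x| <= t <= R} with the supremum over the square
# {(z,t): |z - x| + |t - R| <= R}, for the assumed profile f(s) = exp(-s^2).
def u(y, t):
    return sqrt(pi) / 2 * (erf(y + t) - erf(y - t)) / (2 * t)

x, R = 0.8, 1.0

cone_sup = 0.0
for t in np.geomspace(1e-3, R, 300):
    for y in np.linspace(x - t, x + t, 61):
        cone_sup = max(cone_sup, u(y, t))

square_sup = 0.0
for z in np.linspace(x - R, x + R, 121):
    lo, hi = max(abs(z - x), 1e-3), 2 * R - abs(z - x)
    for t in np.linspace(lo, hi, 61):
        square_sup = max(square_sup, u(z, t))

print(cone_sup, square_sup)  # the two values agree up to grid error
```

Note that the cone is contained in the square, so the content of the lemma is the reverse inequality, which the grids reproduce up to discretization error.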
\begin{figure}
\begin{tikzpicture}
\draw[-|,semithick] (-4,0)--(0,0);
\draw[|-,semithick] (0,0)--(4,0);
\draw[-,thick] (0,0) -- (3,3);
\draw[-,thick] (0,0) -- (-3,3);
\draw[dashed] (1,4)--(-3/2,3/2);
\draw[dashed] (1,4)--(5/2,5/2);
\draw[-,thick] (-3,3)--(3,3);
\draw (1.5,4) node{$(y,t)$};
\draw (-3,3/2) node{$\left(\frac{x+y-t}{2},\frac{x-y+t}{2}\right)$};
\draw (4,5/2) node{$\left(\frac{x+y+t}{2},\frac{y-x+t}{2}\right)$};
\draw (0,-0.4) node{$x$};
\fill[brown, opacity=0.3] (0,0) -- (3,3) -- (-3,3) -- (0,0) -- cycle;
\end{tikzpicture}
\caption{Illustration of Lemma \ref{BPL}: the points $\left(\frac{x+y-t}{2},\frac{x-y+t}{2}\right)$ and $\left(\frac{x+y+t}{2},\frac{y-x+t}{2}\right)$ are the projections of $(y,t)$ over the lines
$t=x-y$ and $t=y+x,$ respectively.}
\end{figure}
Let $R$ then be selected such that $\frac{t}{2} < R $ and $R(1-\alpha) < \alpha t;$ for $\alpha > \frac{1}{3}$ this is possible. This is \emph{exactly} the condition ensuring that the region
\[
\{(z,t'): |z-y| + |t'-R| \le R\} \subset C(y,\alpha,t).
\]
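For completeness, the compatibility of the two constraints on $R$ can be spelled out; the chain below is just the elementary manipulation behind the claim:

```latex
\exists\, R \text{ with } \frac{t}{2} < R < \frac{\alpha}{1-\alpha}\, t
\iff \frac{1}{2} < \frac{\alpha}{1-\alpha}
\iff 1 - \alpha < 2\alpha
\iff \alpha > \frac{1}{3}.
```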
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.676425]
\draw[-,semithick] (0,-0.5) -- (-3,15/2 - 0.5);
\draw[-|,semithick] (-3,-0.5) -- (0,-0.5);
\draw[|-|,semithick] (0,-0.5) -- (2,-0.5);
\draw[-|,semithick] (2,-0.5) -- (4,-0.5);
\draw[|-,semithick] (4,-0.5) -- (7,-0.5);
\draw[-,semithick] (4,-0.5) -- (7,15/2 - 0.5);
\draw (2,4.5) node[circle,fill,inner sep=1pt]{} ;
\draw[-,semithick] (20/3-4.3,4.5) -- (8.3 - 20/3,4.5);
\draw[dashed] (20/3-4.3,4.5) -- (20/3-4.3,-0.5);
\draw[dashed] (8.3-20/3,4.5) -- (8.3-20/3,-0.5);
\draw[|-|,ultra thick] (8.3-20/3,-0.5) -- (20/3-4.3,-0.5);
\draw[draw=black, fill=gray, fill opacity=0.44523] (2,-0.5) -- (2 + 10/3 - 3.15 +2.5, 10/3 - 3.15 + 2) -- (2, 20/3 - 6.3 + 4.5) -- (2-10/3 +3.15 -2.5, 10/3 - 3.15 + 2) -- cycle;
\fill[gray, opacity=0.1] (-3,15/2 - 0.5) -- (7,15/2 - 0.5) -- (4,-0.5) -- (0,-0.5) -- cycle;
\draw (2,5.2) node {$ (y,t)$};
\draw (0,-1) node {$y-\alpha t$};
\draw (2,-1) node {$y$};
\draw (4,-1) node {$y+\alpha t$};
\end{tikzpicture}\\
\caption{In the figure, the \textbf{\color{gray}dark gray} area represents the region that our Lemma gives, for some $\frac{1}{2} t < R < \frac{\alpha}{1-\alpha} t,$ and the \textbf{black} interval is one in which $M^{\alpha}f =M^1_{\equiv R} f \equiv M^{\alpha}f(y).$}
\end{figure}
Now we are able to end the proof: if $I$ is a sufficiently small interval around $y$, then, by continuity, the inclusion
\[
\{(z,t'): |z-y'| + |t'-R| \le R\} \subset C(y,\alpha,t)
\]
must hold for all $y' \in I.$ This is our desired interval, on which $M^{\alpha}f \equiv M^1_{\equiv R} f.$ But we already know, from \cite[Lemma~3.6]{aldazperezlazaro}, that $M^1_{\equiv R} f$ satisfies a \emph{stronger} property of control of maxima. Indeed, in order to fit it into the context of Aldaz and P\'erez L\'azaro, we note that, by adopting
$NORM(1),$ $f$ becomes automatically upper semicontinuous, and also $f \le M^1_{\equiv R}f$ everywhere. In particular,
we know that, if $M^1_{\equiv R} f$ is constant in an interval, then it must be \emph{equal} to the function $f$ at \emph{every} point of that interval. But this is exactly our case, as we have already noticed that $M^{\alpha} f$ is constant on $[y-\alpha t, y+ \alpha t],$
and therefore also on $I$. This implies, in particular, that
\[
M^{\alpha}f(y)= M^1_{\equiv R}f(y) = f(y),
\]
which concludes our analysis of local maxima. \\
\subsection{Proof of $\mathcal{V}(M^{\alpha}f) \le \mathcal{V}(f),$ for $\alpha \ge \frac{1}{3}$} We remark, before beginning, that this strategy, from now on, is essentially the same as the one contained in \cite{aldazperezlazaro}. We will, therefore, assume
that $f \ge 0$ throughout. \\
First, we say that a function $g : I \to \mathbb{R}$ is \emph{V-shaped} if there exists a point $c \in I$ such that
\[
g|_{(l(I),c)} \text{ is non-increasing and } g|_{(c,r(I))} \text{ is non-decreasing}.
\]
We then present two different proofs of this inequality, each of which is suitable for a different purpose.\\
\noindent\textit{First proof: Using Lipschitz functions.} For this, we will suppose that $f$ has $NORM(1)$ as normalization. One can easily check then that $M^{\alpha}f \in C(\mathbb{R})$ in this case. In fact, it is not difficult to show also that
$M^{\alpha}f$ is continuous at $x$ if $f$ is continuous at $x$. Moreover, we may prove an additional property about it that will help us later:
\begin{lemm}[Reduction to the Lipschitz case]\label{reduction} Suppose we have that $$\mathcal{V}(M^{\alpha} f) \le \mathcal{V}(f), \;\; \forall f \in BV(\mathbb{R}) \cap \emph{\text{Lip}}(\mathbb{R}).$$ Then the same inequality holds for all bounded variation functions, that is,
$$ \mathcal{V}(M^{\alpha} f) \le \mathcal{V}(f), \;\; \forall f \in BV(\mathbb{R}).$$
\end{lemm}
\begin{proof}
Let $\varphi \in \mathcal{S}(\mathbb{R})$ be a smooth, nonnegative function such that $\int_{\mathbb{R}} \varphi (t) \mathrm{d} t = 1$, $\text{supp}(\varphi) \subset [-1,1],$ $\varphi$ is even and non-increasing on $[0,1].$ Call $\varphi_{\varepsilon} (x) = \frac{1}{\varepsilon}\varphi(\frac{x}{\varepsilon}).$ We define then $f_{\varepsilon}(x) = f * \varphi_{\varepsilon} (x).$
Notice that these functions are all Lipschitz (in fact, smooth) functions. Moreover, by standard theorems on approximate identities, we have that $f_{\varepsilon}(x) \to f(x)$ almost everywhere. Therefore, assuming the theorem to hold for Lipschitz functions, we have:
\begin{align*}
\mathcal{V} (M^{\alpha} f_{\varepsilon}) & \le \mathcal{V}(f_{\varepsilon}) \cr
& = \sup_{x_1 < \cdots < x_N} \sum_{i=1}^{N-1} |f_{\varepsilon}(x_{i+1}) - f_{\varepsilon}(x_i)| \cr
& \le \int_{\mathbb{R}} \varphi_{\varepsilon}(t) \sup_{x_1 < \cdots < x_N} \left(\sum_{i=1}^{N-1} |f(x_{i+1} - t) - f(x_i - t)|\right)\mathrm{d} t \cr
& \le \mathcal{V}(f). \cr
\end{align*}
Thus, it suffices to prove that
\begin{equation}\label{convv}
\limsup_{y \to x} M^{\alpha}f(y) \ge \limsup_{\varepsilon \to 0} M^{\alpha}f_{\varepsilon} (x) \ge \liminf_{\varepsilon \to 0} M^{\alpha}f_{\varepsilon}(x) \ge \liminf_{y \to x} M^{\alpha}f(y), \,\,\forall x \in \mathbb{R},
\end{equation}
as then
\begin{equation}\label{ineqqq}
\mathcal{V}(M^{\alpha}f) = \mathcal{V}(\liminf_{\varepsilon \to 0} M^{\alpha}f_{\varepsilon}) = \mathcal{V}(\limsup_{\varepsilon \to 0} M^{\alpha}f_{\varepsilon}) = \mathcal{V}(\lim_{j \to \infty} M^{\alpha}f_{\varepsilon_j}) \le \mathcal{V}(f).
\end{equation}
The proof of the equalities in \ref{ineqqq} is a direct consequence of \ref{convv}, where $\varepsilon_j \to 0$ as $j \to \infty,$ and the inequality is just a consequence of Fatou's Lemma.\\
Let us suppose, for the sake of a contradiction, that either the first or the third inequality in \ref{convv} fails. We focus first on the first inequality: suppose that there exist a real number $x_0$, a sequence $\varepsilon_k \to 0$ and a positive real number
$\eta > 0$ such that
$$ M^{\alpha}f_{\varepsilon_k}(x_0) >(1+2\eta)\limsup_{y \to x_0}M^{\alpha}f(y).$$
By definition, there exists a sequence $(y_k,r_k)$ with
$|y_k - x_0| \le \alpha r_k$ and
\[
\Xint-_{y_k-r_k}^{y_k + r_k} f_{\varepsilon_k}(s) \mathrm{d} s > (1+\eta) \limsup_{y \to x_0}M^{\alpha}f(y).
\]
\noindent\textit{Case 1:} Suppose $r_k \to 0.$ By the way we normalized $f$, there is an interval $I \ni x_0$ such that $f(y) \le (1+\eta/4)f(x_0), \forall y \in I.$ But then, by the support properties of $\varphi$ and for $k$ sufficiently large, we would have that
$(1+\eta/2)f(x_0) \ge M^{\alpha}f_{\varepsilon_k}(x_0),$ which is a contradiction, as $\limsup_{y \to x_0}M^{\alpha}f(y) \ge f(x_0).$\\
\noindent\textit{Case 2:} Suppose then that $\inf_k r_k >0.$ Then, by Fubini's theorem and standard manipulations,
\begin{align*}
\Xint-_{y_k-r_k}^{y_k+r_k} f_{\varepsilon_k}(s) \mathrm{d} s & = \Xint-_{y_k-r_k}^{y_k+r_k} \left( \int_{-\varepsilon_k}^{\varepsilon_k} \varphi_{\varepsilon_k}(t) f(s-t) \mathrm{d} t\right) \mathrm{d} s \cr
& = \int_{-\varepsilon_k}^{\varepsilon_k}\varphi_{\varepsilon_k}(t) \left(\Xint-_{y_k-r_k}^{y_k+r_k} f(s-t) \mathrm{d} s\right) \mathrm{d} t \cr
& \le \frac{r_k + \varepsilon_k}{r_k} M^{\alpha}f(x_0). \cr
\end{align*}
This implies $r_k \le \frac{\varepsilon_k}{\eta} \to 0,$ which is another contradiction. \\
For the third inequality, we divide it once again: if $M^{\alpha}f(x_0) = u(y,t)$ for some $(y,t) \ne (x_0,0),$ then, by $L^1$ convergence of approximate identities, one easily gets that $\liminf_{\varepsilon\to 0} M^{\alpha}f_{\varepsilon} (x_0) \ge M^{\alpha}f(x_0).$ If not, given $\delta > 0,$ pick $(y,t)$ such that $M^{\alpha}f(x_0) \le u(y,t) + \frac{\delta}{2},$
and use the $L^1$ convergence of approximate identities in the interval $(y-t,y+t)$. The reverse inequality, and therefore the lemma, is proved, as $M^{\alpha}f(x) \ge \liminf_{y \to x} M^{\alpha}f(y).$
\end{proof}
Our main claim is then the following:
\begin{lemm}\label{lipproof}Let $f \in \text{Lip}(\mathbb{R})\cap BV(\mathbb{R}).$ Then, over every interval of the set
\[
E_{\alpha} = \{x \in \mathbb{R}\colon M^{\alpha}f(x) > f(x)\} = \bigcup_{j \in \mathbb{Z}} I_j^{\alpha},
\]
it holds that $M^{\alpha}f$ is either \emph{monotone} or \emph{V-shaped} on $I_j^{\alpha}.$
\end{lemm}
\begin{proof}The proof goes roughly as the first paragraph of the proof of Lemma 3.9 in \cite{aldazperezlazaro}: let $I_j^{\alpha} = (l(I_j^{\alpha}),r(I_j^{\alpha})) =: (l_j,r_j),$ and suppose that $M^{\alpha}f$ is \emph{not} V-shaped there. Then there would be a point $x_0 \in I_j^{\alpha}$ and an interval
$J \subset I_j^{\alpha}$ such that $M^{\alpha}f$ has a \emph{strict} local maximum at $x_0$ over $J.$ Then, by the maxima analysis we performed, we see that we have reached a contradiction from this fact alone, as $J \subset E_{\alpha}.$ We omit further details, as they can be found, as already mentioned,
at \cite[Lemma~3.9]{aldazperezlazaro}.
\end{proof}
We also need the following
\begin{lemm}\label{lipat} If $f \in BV(\mathbb{R}) \cap \text{Lip}(\mathbb{R}),$ then, for every (maximal) open interval $I_j^{\alpha} \subset E_{\alpha}$, we have that
\[
M^{\alpha}f(l(I_j^{\alpha})) = f(l(I_j^{\alpha})),
\]
and an analogous identity holds for $r(I_j^{\alpha}).$
\end{lemm}
The proof of this Lemma is straightforward, and we therefore skip it. To finalize the proof in this case for $\alpha > \frac{1}{3},$ we just notice that we can, in fact, bound the variation of $M^{\alpha}f$ \emph{inside} every interval $I_j^{\alpha}.$ Indeed, we have directly from the last claim that, in case $M^{\alpha}f$ is V-shaped on $I_j^{\alpha},$ then
there exists $c_j \in I_j^{\alpha}$ such that $M^{\alpha}f$ is non-increasing on $(l_j,c_j)$ and non-decreasing on $(c_j,r_j)$. We then calculate:
\begin{equation*}
\begin{split}
\mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f) & = |M^{\alpha}f(l(I_j^{\alpha})) - M^{\alpha}f(c_j)| + |M^{\alpha}f(r(I_j^{\alpha})) - M^{\alpha}f(c_j)| \cr
& \le |f(l(I_j^{\alpha})) - f(c_j)| + |f(r(I_j^{\alpha})) - f(c_j)| \cr
& \le \mathcal{V}_{I_j^{\alpha}}(f).
\end{split}
\end{equation*}
The way to formally end the proof is the following: let $\mathcal{P} = \{x_1 < \cdots < x_N\}$, and let $A := \{ j \in \mathbb{N}: \exists x_i \in \mathcal{P}\cap I_j^{\alpha}\}.$ Clearly, the index set $A$ is finite. We refine the partition $\mathcal{P}$ by adding to it the following points:
\begin{itemize}
\item If $j \in A$ and $M^{\alpha}f$ is \emph{monotone} over $I_j^{\alpha},$ then add $l_j,r_j$ to the partition;
\item If $j \in A$ and $M^{\alpha}f$ is \emph{V-shaped} over $I_j^{\alpha},$ then add $l_j,r_j$ and the point $c_j$ to the partition.
\end{itemize}
Call this new partition $\mathcal{P}'.$ By the calculation above and the fact that $f \in \text{Lip}(\mathbb{R})$ implies $M^{\alpha}f \ge f$ everywhere, and in particular $M^{\alpha}f = f$ on $\mathbb{R} \backslash E_{\alpha},$ one obtains that
\[
\mathcal{V}_{\mathcal{P}}(M^{\alpha}f) \le \mathcal{V}_{\mathcal{P}'}(M^{\alpha}f) \le \mathcal{V}(f).
\]
By taking a supremum over all partitions, we finish the result for $\alpha > \frac{1}{3}.$ On the other hand, it is straight from the definition that
\[
\beta \le \alpha \Rightarrow \frac{\beta}{\alpha} M^{\alpha}f \le M^{\beta}f \le M^{\alpha}f.
\]
This implies that, for a partition $\mathcal{P}$ as above,
\[
\sum_{i=1}^{N-1} |M^{\frac{1}{3}}f(x_{i+1}) - M^{\frac{1}{3}}f(x_i)| \le \lim_{\alpha \searrow \frac{1}{3}} \sum_{i=1}^{N-1} |M^{\alpha}f(x_{i+1}) - M^{\alpha}f(x_i)| \le \mathcal{V}(f).
\]
The theorem follows, again, as before. \\
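The nesting $\frac{\beta}{\alpha} M^{\alpha}f \le M^{\beta}f \le M^{\alpha}f$ can likewise be sampled numerically. The sketch below is an illustration only: it assumes the profile $f(s) = e^{-s^2}$, approximates both maximal functions on finite grids, and checks the two bounds up to a small discretization slack.

```python
import numpy as np
from math import erf, sqrt, pi

# Illustration only: sample the nesting (beta/alpha) M^alpha f <= M^beta f
# <= M^alpha f for beta <= alpha, with the assumed profile f(s) = exp(-s^2).
def u(y, t):
    return sqrt(pi) / 2 * (erf(y + t) - erf(y - t)) / (2 * t)

def M(x, gamma, ts):
    # grid approximation of sup { u(y,t) : |y - x| <= gamma * t }
    best = 0.0
    for t in ts:
        for y in np.linspace(x - gamma * t, x + gamma * t, 31):
            best = max(best, u(y, t))
    return best

alpha, beta = 0.5, 1.0 / 3.0
ts = np.geomspace(1e-3, 20.0, 200)
for x in (-2.0, -0.5, 0.0, 0.7, 1.5):
    Ma, Mb = M(x, alpha, ts), M(x, beta, ts)
    assert Mb <= Ma + 1e-2
    assert beta / alpha * Ma <= Mb + 1e-2
print("nesting inequalities hold at every sampled point")
```

The lower bound reflects the substitution $t' = \frac{\alpha}{\beta}t$: any average admissible for the $\alpha$-cone dilates to one admissible for the $\beta$-cone, losing at most the factor $\frac{\beta}{\alpha}$.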
\noindent\textit{Second proof: Directly for $f\in BV(\mathbb{R})$ general.} For this part, we assume that $f$ has $NORM(\alpha)$ normalization.
The argument here is morally the same, with just a couple of minor modifications -- and with the use of the facts we proved above, namely, that the result \emph{already} holds. Therefore, this section might seem a little
superfluous now, even though its reason for being will become apparent when we characterize the extremizers. \\
\begin{claim} Let $E_{\alpha} = \{x \in \mathbb{R}: M^{\alpha}f(x) > f(x)\}.$ This set is open for any $f \in BV(\mathbb{R})$ normalized with $NORM(\alpha)$ and therefore can be decomposed as
\[
E_{\alpha} = \cup_{j \in \mathbb{Z}} I_j^{\alpha},
\]
where each $I_j^{\alpha}$ is an interval. Furthermore, the restriction of $M^{\alpha}f$ to each of those intervals is either a monotone function or a V-shaped function with a minimum at $c_j \in I_j^{\alpha}.$ Moreover, $M^{\alpha}f(c_j) < \min\{M^{\alpha}f(l(I_j^{\alpha})),M^{\alpha}f(r(I_j^{\alpha}))\}.$
\end{claim}
\begin{proof}[Proof of the claim] The claim seems quite sophisticated, but its proof is simple once one has the maxima analysis above at hand. The fact that $E_{\alpha}$ is open is easy to see. In fact, let $x_0 \in E_{\alpha}.$ By the lower semicontinuity of $M^{\alpha}f$ at $x_0$ and the fact that we normalized
$f$ with $NORM(\alpha),$
\[
\liminf_{z \to x_0} M^{\alpha}f(z) \ge M^{\alpha}f(x_0) > \limsup_{z \to x_0} f(z).
\]
This shows that, for $z$ close to $x_0$, the strict inequality should still hold, as desired. \\
The second part follows in the same fashion as the proof of Lemma \ref{lipproof}, and we therefore omit it.
\end{proof}
To finish the proof of the fact that $\mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f) \le \mathcal{V}_{I_j^{\alpha}}(f)$ also in this case we just need one more lemma:
\begin{lemm}\label{interv} For every (maximal) open interval $I_j^{\alpha} \subset E_{\alpha}$ we have that
\[
M^{\alpha}f(l(I_j^{\alpha})) = f(l(I_j^{\alpha})),
\]
and an analogous identity holds for $r(I_j^{\alpha}).$
\end{lemm}
This is, just like Lemma \ref{lipat}, direct from the definition and the maximality of the intervals $I_j^{\alpha}.$ The conclusion in this case uses Lemma \ref{interv} in a direct fashion, combined with the strategy for the first proof: namely, the estimate
\begin{equation*}
\begin{split}
\mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f) & \le |M^{\alpha}f(l(I_j^{\alpha})) - M^{\alpha}f(c_j)| + |M^{\alpha}f(r(I_j^{\alpha})) - M^{\alpha}f(c_j)| \cr
& \le |f(l(I_j^{\alpha})) - f(c_j)| + |f(r(I_j^{\alpha})) - f(c_j)| \cr
& \le \mathcal{V}_{I_j^{\alpha}}(f)
\end{split}
\end{equation*}
still holds, by Lemma \ref{interv} and by the fact that $c_j \in I_j^{\alpha}.$ This finally finishes the second proof of Theorem \ref{angle}.
\subsection{Absolute continuity on the detachment set} We briefly prove that, for $f \in W^{1,1}(\mathbb{R}),$ we have $M^{\alpha}f \in W^{1,1}_{loc}(\mathbb{R})$ for
\emph{any} $1>\alpha>0,$ as the case $\alpha = 0$ has been dealt with by Kurka \cite{kurka}, in Corollary 1.4. \\
Indeed, let
\[
E_{\alpha,k} = \{ x \in E_{\alpha} \colon M^{\alpha}f(x) = \sup_{(y,t) \colon |y-x|\le \alpha t, \,t \ge \frac{1}{2k}} \frac{1}{2t}\int_{y-t}^{y+t} |f(s)| \mathrm{d} s \}.
\]
Then we see that $E_{\alpha}= \cup_{k \ge 1} E_{\alpha,k}.$ Moreover, for $x,y \in E_{\alpha,k},$ let $(y_1,t_1)$ realize the supremum above for $x$. Suppose also, without loss of generality, that $y \ge x$ and $M^{\alpha}f(x) > M^{\alpha}f(y).$ By assuming that $y > y_1 + \alpha t_1$ -- as otherwise $M^{\alpha}f(x) \le M^{\alpha}f(y)$ --, we have that
\begin{equation*}
\begin{split}
M^{\alpha}f(x) - M^{\alpha}f(y) & \le \frac{1}{2t_1} \int_{y_1-t_1}^{y_1 + t_1} |f(s)| \mathrm{d} s -u\left(\frac{y+\alpha y_1 - \alpha t_1}{1+\alpha},\frac{y-y_1+t_1}{1+\alpha}\right) \cr
& \le \frac{\frac{2}{1+\alpha}(y-y_1) -\frac{2\alpha}{1+\alpha}t_1}{2t_1 \cdot \frac{2}{1+\alpha}(y-y_1+t_1)} \int_{y_1 - t_1}^{y_1 + t_1} |f(s)| \mathrm{d} s \cr
& \le \frac{\frac{2}{1+\alpha}|y-x|}{\frac{2}{1+\alpha}(y-y_1+t_1)} \|f\|_{\infty} \le \frac{|x-y|}{2t_1} \|f\|_{\infty} \le k|x-y|\|f\|_{\infty}.\cr
\end{split}
\end{equation*}
This shows that $M^{\alpha}f$ is Lipschitz continuous with constant at most $k\|f\|_{\infty}$ on each $E_{\alpha,k}.$ The asserted fact then follows from this, by using the well-known Banach-Zarecki
lemma:
\begin{lemm}[Banach-Zarecki]\label{BZ} A function $g: I \to \mathbb{R}$ is absolutely continuous if and only if the following conditions hold simultaneously:
\begin{enumerate}
\item $g$ is continuous;
\item $g$ is of bounded variation;
\item $g(S)$ has measure zero for every set $S \subset I$ with $|S| = 0.$
\end{enumerate}
\end{lemm}
In fact, let $S$ be then a null-measure set on the real line and $f \in W^{1,1}(\mathbb{R})$ -- which implies that $M^{\alpha}f \in C(\mathbb{R})$ --, and let us invoke \cite[Lemma~3.1]{aldazperezlazaro}:
\begin{lemm}\label{lipk} Let $f: I \to \mathbb{R}$ be a continuous function. Let also $E \subset \{x \in I \colon |\overline{D}f(x)|:=\left| \limsup_{h \to 0} \frac{f(x+h)-f(x)}{h}\right| \le k\}.$ Then
\[
m^{*}(f(E)) \le k m^{*}(E),
\]
where $m^{*}$ stands for the Lebesgue outer measure.
\end{lemm}
It is easy to see that the maximal functions $M^{\alpha}f$ are, in fact, \emph{continuous} on the open set $E_{\alpha}.$ Thus, we may use Lemmas \ref{BZ} and \ref{lipk} in each of the connected components of $E_{\alpha}:$
\begin{equation*}
\begin{split}
|M^{\alpha}f(S\cap I_j^{\alpha})| &\le \sum_{k \ge 1} |M^{\alpha}f(S \cap E_{\alpha,k} \cap I_j^{\alpha})| = 0, \cr
\end{split}
\end{equation*}
where we used that $M^{\alpha}f$ is Lipschitz over each $E_{\alpha,k}.$ But this implies that
\begin{align*}
|M^{\alpha}f(S)| & \le |M^{\alpha}f(S\cap E_{\alpha}^c)| + \sum_{j \in \mathbb{Z}} |M^{\alpha}f(S \cap I_j^{\alpha})|\cr
& = |f(S \cap E_{\alpha}^c)| = 0,
\end{align*}
by Lemma \ref{BZ} and the fact that $f \in W^{1,1}_{loc}(\mathbb{R}).$ This finishes this part of the analysis.
\subsection{Sharpness of the inequality and extremizers} In this part, we prove that the best constant in such inequalities is indeed 1, and characterize the extremizers for such. Namely, we mention promptly that the inequality
\emph{must} be sharp, as $f = \chi_{(-1,0)}$ realizes equality. \\
It is easy to see that, to do so, we may assume that $f$ still has $NORM(\alpha)$ normalization. \\
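The extremality of $f = \chi_{(-1,0)}$ can also be visualized numerically: on a large truncation window, the variation of a grid approximation of $M^{\alpha}f$ approaches $\mathcal{V}(f) = 2$ from below. The sketch below is an illustration only; the window $[-20,20]$ and the grid sizes are arbitrary choices.

```python
import numpy as np

# Illustration only: for f = chi_{(-1,0)} the bound V(M^alpha f) <= V(f) = 2
# is (nearly) attained.  Averages of f have the closed form
# u(y,t) = |(y-t, y+t) \cap (-1,0)| / (2t).
def u(y, t):
    overlap = np.maximum(0.0, np.minimum(y + t, 0.0) - np.maximum(y - t, -1.0))
    return overlap / (2.0 * t)

def M_alpha(x, alpha, ts, offsets):
    # grid approximation of sup { u(y,t) : |y - x| <= alpha * t }
    best = 0.0
    for t in ts:
        best = max(best, u(x + offsets * (alpha * t), t).max())
    return best

alpha = 0.5
ts = np.geomspace(1e-3, 100.0, 200)
offsets = np.linspace(-1.0, 1.0, 41)
xs = np.linspace(-20.0, 20.0, 401)
m = np.array([M_alpha(x, alpha, ts, offsets) for x in xs])
var = np.abs(np.diff(m)).sum()
print(var)  # approaches V(f) = 2 from below as the window and grids grow
```

The defect from the value $2$ comes only from truncating the real line: the tails of $M^{\alpha}f$ decay like a multiple of $1/|x|$ and carry the missing variation.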
\begin{claim}\label{inteq} Let $f \in BV(\mathbb{R})$ normalized as before satisfy $\mathcal{V}(f) = \mathcal{V}(M^{\alpha}f).$ If we decompose $E_{\alpha} = \cup_{j} I_j^{\alpha},$ where each of the $I_j^{\alpha}$ is open and maximal, then
\[
\mathcal{V}_{I_j^{\alpha}}(f) = \mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f).
\]
\end{claim}
\begin{proof} Let $\mathcal{P}, \mathcal{Q}$ be two finite partitions of $\mathbb{R}$ such that
\begin{equation}\label{partt}
\begin{cases}
\mathcal{V}(M^{\alpha}f) & \le \mathcal{V}_{\mathcal{P}} (M^{\alpha}f) + \frac{\varepsilon}{20},\cr
\mathcal{V}(f) & \le \mathcal{V}_{\mathcal{Q}}(f) + \frac{\varepsilon}{20}. \cr
\end{cases}
\end{equation}
Now let the mutual refinement of those be $\mathcal{S} = \mathcal{P} \cup \mathcal{Q}.$ We consider the intersection $\mathcal{S} \cap E_{\alpha}:$ if the finite set $A:= \{ j\colon I_j^{\alpha} \cap \mathcal{S} \ne \emptyset\}$ satisfies that
\begin{equation}\label{finvar} \sum_{j \in A} \mathcal{V}_{I_j^{\alpha}} (f) \ge \sum_{j \in \mathbb{N}}\mathcal{V}_{I_j^{\alpha}}(f) - \frac{\varepsilon}{20},
\end{equation}
then keep the partition as it is before advancing. If not, then add to $\mathcal{S}$ finitely many points, all of them contained in intervals of the form $\overline{I_j^{\alpha}}$, such that inequality \ref{finvar} holds. Call this
new partition $\mathcal{S}$ again, as it still satisfies the inequalities \ref{partt}. \\
We finally add some other points to the partition $\mathcal{S}:$ If $j \not\in A,$ do not add any points from the interval. If $j \in A,$ then do the following:
\begin{enumerate}
\item As $f = M^{\alpha}f$ on the boundary of an interval $I_j^{\alpha}$, we add to the collection both endpoints $r(I_j^{\alpha}),l(I_j^{\alpha}).$
\item If $M^{\alpha}f$ is V-shaped over the interval $I_j^{\alpha}$, then there is a point $c_j$ such that $M^{\alpha}f$ is non-increasing on $(l_j,c_j)$ and non-decreasing on $(c_j,r_j).$ Add such a point to our partition.
\item If $\mathcal{V}_{I_j^{\alpha}}(f) > \mathcal{V}_{ \{x_i \in \mathcal{S} \colon x_i \in I_j^{\alpha}\}} (f) + \frac{\varepsilon}{2^{20|j|}},$ then add finitely many points to the partition to make the reverse inequality hold (here, $\mathcal{V}_{\{x_i \in \mathcal{S} \colon x_i \in A\}}(g)$ stands for
the variation along the finite partition composed solely by elements in the set $A$).
\end{enumerate}
It is easy to see that, if we denote by $\mathcal{S}'$ the partition obtained by the prescribed procedure above, then, as $\mathcal{V}(f) = \mathcal{V}(M^{\alpha}f)$ and $f = M^{\alpha}f$ on $\mathbb{R} \backslash E_{\alpha},$
\[
|\mathcal{V}_{\mathcal{S}'\cap E_{\alpha}}(f) - \mathcal{V}_{\mathcal{S}'\cap E_{\alpha}}(M^{\alpha}f)| \le 2\varepsilon,
\]
which then implies that, by the considerations above,
\begin{align}
\sum_{j \in \mathbb{Z}} \mathcal{V}_{I_j^{\alpha}}(f) - \frac{\varepsilon}{4} & \le \sum_{j \in A}\mathcal{V}_{I_j^{\alpha}}(f) \cr
& \le \sum_{j \in A}\mathcal{V}_{ \{x_i \in \mathcal{S}' \colon x_i \in I_j^{\alpha}\}}(f) + \varepsilon\cr
& \le \sum_{j \in A}\mathcal{V}_{ \{x_i \in \mathcal{S}' \colon x_i \in I_j^{\alpha}\}}(M^{\alpha}f) + 3\varepsilon \cr
& \le \sum_{j \in \mathbb{Z}}\mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f) + 3 \varepsilon.
\end{align}
As $\varepsilon$ was arbitrary, comparing the first and last terms above and looking back to our proof that in each of the $I_j^{\alpha}$ the variation of $f$ controls that of the maximal function, we conclude that, for each $j \in \mathbb{Z},$
\begin{equation}\label{localpart}
\mathcal{V}_{I_j^{\alpha}}(f) = \mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f).
\end{equation}
This finishes the proof of this claim.
\end{proof}
\begin{claim}\label{monot1} Let $f, I_j^{\alpha}$ as above. Then $f$ and $M^{\alpha}f$ are monotone in $I_j^{\alpha}$.
\end{claim}
\begin{proof} Suppose first that $M^{\alpha}f$ is \emph{not} monotone there. Then it must be V-shaped on $I_j^{\alpha}$, and then, by Claim \ref{inteq}, we see that the only possibility for that to happen is if $M^{\alpha}f(c_j) = f(c_j), \,c_j \in I_j^{\alpha}.$
This is clearly not possible by the definition of $I_j^{\alpha},$ and we reach a contradiction. \\
Suppose now that $f$ is \emph{not} monotone over $I_j^{\alpha}.$ As $\mathcal{V}_{I_j^{\alpha}}(f) = \mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f)$ by Claim \ref{inteq}, and $\mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f) = f(r_j) - f(l_j),$ then it is easy to see that,
no matter what configuration of non-monotonicity we have, it yields a contradiction with the equality for the variations over the interval $I_j^{\alpha}.$ We skip the details, for they are routinely verified.
\end{proof}
\begin{rmk}\label{ceta} Note that this last claim proves also that, if $I_j^{\alpha}$ is \emph{bounded}, $f$ is \emph{non-decreasing} over it and $l_j$ is its left endpoint, then $f(l_j-) \le f(l_j+),$ as
otherwise we would arrive at a contradiction with the fact that $\mathcal{V}_{I_j^{\alpha}}(f) = \mathcal{V}_{I_j^{\alpha}}(M^{\alpha}f).$ An analogous statement holds for the right endpoint, and analogous conclusions
if $f$ is \emph{non-increasing} instead of non-decreasing over the interval. \\
\end{rmk}
Next, we suppose without loss of generality that the function $f$ is non-decreasing on $I_j^{\alpha},$ as the other case is completely analogous.
\begin{claim}\label{infty} Such an $f$ is, in fact, non-decreasing on $(-\infty,r(I_j^{\alpha})].$
\end{claim}
\begin{proof}Our proof of this fact will go by contradiction: \\
First, let $a_j = \inf\{t \in \mathbb{R}; f \text{ is non-decreasing in } [t,r(I_j^{\alpha})]\},$ and define $b_j<a_j$ such that the minimum of $f$ over $[b_j,r_j]$ is attained \emph{inside} $(b_j,r_j).$ Of course, such a minimum need not be attained at a point, but it is surely attained as a \emph{lateral limit} at some point.
\begin{subclaim}\label{sclm1} $M^{\alpha}f(a_j) = f(a_j)$ and $f(a_j-) = f(a_j+).$
\end{subclaim}
\begin{proof} If $M^{\alpha}f(a_j)>f(a_j)$, then there exists an open interval $E_{\alpha} \supset J'_j \ni a_j$, and, as we proved before, $f$ must be monotone on such an interval. By the definition of $a_j$, $f$ must then be
non-decreasing on $J'_j$, which contradicts the minimality in the definition of $a_j.$ \\
Now for the second equality: if it were not true, then $a_j$ would be, again, one of the endpoints of a maximal interval $J_j \subset E_{\alpha}.$ If $a_j$ is the left-endpoint, then it means that
$f(a_j-) > f(a_j+).$ But this is a contradiction, as $f$ then must be non-decreasing on $J_j$, and therefore we would again have that $\mathcal{V}_{J_j}(f) > \mathcal{V}_{J_j}(M^{\alpha}f).$ Therefore, $a_j$ is the right endpoint, and also $f(a_j-) < f(a_j+).$
At the present moment an analysis as in Remark \ref{ceta} is already available, and thus we conclude that $f$ shall be non-decreasing on $J_j$, which is again a contradiction to the definition of $J_j.$
\end{proof}
We must prove yet another fact that will help us:
\begin{subclaim}\label{minm} Let
\[
\mathcal{D} = \{x \in (b_j,r_j)\colon \min(f(x-),f(x+)) \text{ attains the minimum in } (b_j,r_j)\}.
\]
Then there exists $d \in \mathcal{D}$ such that $f(d-) = f(d+)$ and $M^{\alpha}f(d) = f(d).$
\end{subclaim}
\begin{proof} If $a_j \in \mathcal{D},$ then our assertion is proved by Subclaim \ref{sclm1}. If not, then $\mathcal{D} \subset (b_j,a_j).$ In this case, pick any point $d_0 \in \mathcal{D}.$ \\
\noindent \textit{Case 1: $f(d_0+) = f(d_0-)$.} In this case, there is nothing left undone if $f(d_0) = M^{\alpha}f(d_0).$ Otherwise, we would have that $M^{\alpha}f(d_0) > f(d_0),$ and then there would be an interval $E_{\alpha} \supset J_0 \ni d_0.$ By the fact that \emph{all} the points in $\mathcal{D}$
must lie in $(b_j,a_j),$ and that $f$ is monotone on $J_0$, we see automatically that either $f(b_j) \le f(d_0),$ a contradiction, or the right endpoint of $J_0$ satisfies $f(r(J_0)) \le f(d_0).$ By the definition of $d_0,$ this inequality has to be an equality, and
also $f$ must be continuous at $r(J_0)$, by the argument of Remark \ref{ceta}. As an endpoint of a maximal interval $J_0 \subset E_{\alpha},$ we have then $M^{\alpha}f(r(J_0)) = f(r(J_0)).$ \\
\noindent \textit{Case 2: $f(d_0+) > f(d_0-)$.} It is easy to see that, in this case, there is an open interval $J \subset E_{\alpha}$ such that either $J \ni d_0$ or $d_0$ is its right endpoint. In either case, we see that $f$ must be non-decreasing over this interval
$J$, and let again $l_0$ be its left endpoint. As we know, $l_0 \in \mathcal{D}$ again, $l_0 \in (b_j,r_j)$ and, by Remark \ref{ceta}, we must have that $f(l_0-) = f(l_0+).$ Of course, by being the endpoint we have automatically again that $M^{\alpha}f(l_0) = f(l_0).$ This concludes again this case, and therefore the proof of the subclaim.
\end{proof}
The concluding argument for the proof of Claim \ref{infty} goes as follows: let $d$ be the point from Subclaim \ref{minm}. Then we must have that
\[
f(d) = M^{\alpha}f(d) \ge Mf(d) \ge \Xint-_{d-\delta}^{d+\delta} f.
\]
For small $\delta$, it is easy to derive a contradiction from this. Indeed, by the properties of the interval $(b_j,r_j],$ one only needs to analyze
the case $\delta \le |d-b_j|.$ We omit the details. \\
This contradiction came from the fact that we supposed that $a_j > -\infty,$ and our claim is established.
\end{proof}
Now we finish the proof: if $M^{\alpha}f \le f$ everywhere, we are in the case of a \emph{superharmonic function}, i.e., a function which satisfies $\Xint-_{x-r}^{x+r} f(s) \mathrm{d} s \le f(x)$ for all $r > 0;$ this case is handled below. If not, then we analyze the detachment set:
\begin{enumerate}
\item If all intervals in the detachment set are \emph{of one single type}, that is, either all \emph{non-increasing} or all \emph{non-decreasing}, our function must then admit a point $x_0$ such that $f$ is either non-decreasing on $(-\infty,x_0]$ (resp. non-increasing on $[x_0,+\infty)$) and $f = M^{\alpha}f$ on $(x_0,+\infty)$ (resp. on $(-\infty,x_0)$).
\item If there is at least one interval of each type, then we must have an interval $[R,S]$ such that
\begin{itemize}
\item $f$ is non-decreasing on $(-\infty,R]$;
\item $f$ is non-increasing on $[S,+\infty)$;
\item $f = M^{\alpha}f$ on $(R,S).$
\end{itemize}
\end{enumerate}
The analysis is then easily completed for each of the cases above: if $f = M^{\alpha}f$ over an interval, then, as $M^{\alpha}f \ge Mf$, we conclude that $f$ must be \emph{locally superharmonic} there, where by ``locally superharmonic'' we mean a function that satisfies $f(x) \ge \Xint-_{x-r}^{x+r}f(s) \mathrm{d} s$ for all $0< r \ll_{x} 1$. As locally superharmonic functions in one dimension coincide with concave ones, and concave functions have \emph{at most} one global maximum, the first case above gives that $f$ is either monotone or there is exactly one point $x_1$
such that $f$ is non-decreasing up to $x_1$ and non-increasing afterwards. The case of monotone functions is easily ruled out: if $\lim_{x \to \infty} f = L$ and $\lim_{x \to -\infty} f = M,$ then $\mathcal{V}(f) = |M-L|,$ while $\mathcal{V}(M^{\alpha}f) \le \frac{|M-L|}{2}.$ The second case is treated in the exact same fashion, and the result is the same: in the end, the only
possible extremizers for this problem are functions $f$ for which there is a point $x_1$ such that $f$ is non-decreasing on $(-\infty,x_1)$ and non-increasing on $(x_1,+\infty).$ The proof of the theorem is then complete.
\subsection{Proof of Theorem \ref{counterex}}
We start our discussion by pointing out that the measure $\mathrm{d} \mu = \delta_0 + \delta_1$ exhibits the behaviour described in our Theorem.
\begin{prop} Let $0 \le \alpha < \frac{1}{3}.$ Then
$$+\infty = M^{\alpha}\mu(0) > M^{\alpha}\mu\left(\frac{1}{3}\right) < M^{\alpha}\mu\left(\frac{1}{2}\right) > M^{\alpha}\mu\left(\frac{2}{3}\right).$$
That is, $M^{\alpha}\mu$ has a nontrivial local maximum.
\end{prop}
\begin{proof} By the symmetries of our measure, $M^{\alpha}\mu \left(\frac{1}{3}\right) = M^{\alpha}\mu\left(\frac{2}{3}\right).$ A simple calculation then shows that $M^{\alpha}\mu\left(\frac{1}{3}\right) = \frac{3(\alpha+1)}{2},$ if $\alpha < \frac{1}{3}.$
As $M^{\alpha}\mu \left(\frac{1}{2}\right) \ge M\mu\left(\frac{1}{2}\right) = 2 > \frac{3 \alpha + 3}{2} \iff \alpha < \frac{1}{3},$ we are done with the proof of this proposition.
\end{proof}
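The computation in this proof can be sanity-checked numerically. The following brute-force sketch (the grid parameters are ad hoc choices of ours, not part of the proof) scans admissible intervals $(y-t,y+t)$ with $|x-y| \le \alpha t$ and approximates $M^{\alpha}\mu$ from below:

```python
# Brute-force numerical check of the proposition (not part of the proof):
# approximate M^alpha mu(x) for mu = delta_0 + delta_1 by scanning a grid
# of admissible uncentered intervals (y - t, y + t) with |x - y| <= alpha*t.

def mass(a, b):
    # mu([a, b]) for mu = delta_0 + delta_1
    return (a <= 0.0 <= b) + (a <= 1.0 <= b)

def M_alpha(x, alpha, t_max=3.0, steps=400):
    best = 0.0
    for i in range(1, steps + 1):
        t = t_max * i / steps
        for j in range(steps + 1):
            y = (x - alpha * t) + 2.0 * alpha * t * j / steps
            best = max(best, mass(y - t, y + t) / (2.0 * t))
    return best

alpha = 0.2  # any alpha < 1/3 exhibits the same picture
vals = {x: M_alpha(x, alpha) for x in (1/3, 1/2, 2/3)}
```

For $\alpha = 0.2$ the grid values come out close to $\frac{3(\alpha+1)}{2} = 1.8$ at $\frac13$ and $\frac23$, and close to $2$ at $\frac12$, in accordance with the proposition.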
Before proving our Theorem, we mention that our choice of $\frac{1}{3},\frac{1}{2},\frac{2}{3}$ was not random: $\frac{1}{2}$ is actually a \emph{local maximum} of $M^{\alpha}\mu$, while $\frac{1}{3},\frac{2}{3}$ are \emph{local minima}.
\begin{proof}[Proof of Theorem \ref{counterex}] Let $f_n(x) = n (\chi_{[0,\frac{1}{n}]} + \chi_{[1-\frac{1}{n},1]}).$ It is easy to see that $\int g f_n \mathrm{d} x \to \int g \mathrm{d} \mu(x),$ for each $g \in L^{\infty}(\mathbb{R})$ that is continuous on $[0,t_0) \cup (t_1,1],$ for some $t_0 < t_1.$ \\
We prove that $M^{\alpha}f_n (x) \to M^{\alpha}\mu (x), \;\forall x \in [0,1].$ This is clearly enough to conclude our Theorem, as then, if we fix $\alpha < \frac{1}{3},$ there will be $n(\alpha) > 0$ such that, for $N \ge n(\alpha),$
$$0=f_N\left(\frac{1}{3}\right) <M^{\alpha}f_N \left(\frac{1}{3}\right) < M^{\alpha}f_N\left(\frac{1}{2}\right) > M^{\alpha}f_N\left(\frac{2}{3}\right) > f_N\left(\frac{2}{3}\right) = 0.$$
To prove convergence, we argue in two steps. \\
The first step is to prove that $\liminf_{n \to +\infty} M^{\alpha}f_n (x) \ge M^{\alpha}\mu (x).$ It clearly holds for $x \in \{0,1\}.$ For $x \in (0,1),$ we see that
$$M^{\alpha}f_n (x) = \sup_{|x-y|\le \alpha t \le 3 \alpha} \frac{1}{2t}\int_{y-t}^{y+t} f_n(s) \mathrm{d} s.$$
But then
\begin{align*}
M^{\alpha}\mu (x) &=\sup_{|x-y|\le \alpha t \le 3 \alpha} \frac{1}{2t}\int_{y-t}^{y+t}\mathrm{d} \mu (s) \cr
& = \sup_{|x-y|\le \alpha t \le 3 \alpha; t\ge\delta(x)>0} \lim_{n\to \infty} \frac{1}{2t}\int_{y-t}^{y+t} f_n(s) \mathrm{d} s \cr
& \le \liminf_{n \to \infty} M^{\alpha}f_n(x),
\end{align*}
where $\delta(x)>0$ is a fixed multiple of the distance from $x$ to $\{0,1\}.$ This completes the first step. \\
The second step is to establish that, for every $\varepsilon > 0,$ $\; (1+\varepsilon)M^{\alpha}\mu (x) \ge \limsup_{N \to \infty} M^{\alpha}f_N(x).$ This readily implies the result. \\
To do so, notice that, as $1>x >0,$ for $N$ sufficiently large the average that realizes the supremum in the definition of $M^{\alpha}$ has a positive radius bounded below and above uniformly in $N$. Specifically, we have that
$$M^{\alpha}f_N(x) = \Xint-_{y_N-t_N}^{y_N+t_N} f_N(s) \mathrm{d} s, \;\; \Delta(x) \ge t_N \ge \delta(x) >0.$$
This shows also that $\{y_N\}$ and $\{t_N\}$ must be bounded sequences. Therefore, using compactness,
\begin{align*}
\limsup_{N \to \infty} M^{\alpha}f_N(x)& = \limsup_{N \to \infty} \Xint-_{y_N-t_N}^{y_N+t_N} f_N(s) \mathrm{d} s \cr
& = \lim_{k \to \infty} \Xint-_{y_{N_k} - t_{N_k}}^{y_{N_k} + t_{N_k}} f_{N_k}(s) \mathrm{d} s \cr
& \le (1+\eta)\frac{1}{2t} \limsup_{N \to \infty} \int_{y-(1+\varepsilon/2) t}^{y+(1+\varepsilon/2) t} f_N(s) \mathrm{d} s \cr
& = (1+\eta)(1+\varepsilon/2) \Xint-_{y-(1+\varepsilon/2)t}^{y +(1+\varepsilon/2)t} \mathrm{d} \mu (s) \cr
& \le (1+\varepsilon) M^{\alpha}\mu(x),
\end{align*}
where $(y,t)$ is the limit of $(y_{N_k},t_{N_k})$ along a subsequence $\{N_k\}$ suitably chosen so that all the convergence requirements hold.
If we make $N$ sufficiently large, and take $\eta$ depending on $\varepsilon$ such that $(1+\eta)(1+\varepsilon/2) < 1+\varepsilon,$ we are done with the second part.
\end{proof}
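The convergence $M^{\alpha}f_n \to M^{\alpha}\mu$ used above can also be illustrated numerically; the sketch below (the grid sizes are our ad hoc choices, not from the paper) evaluates $M^{\alpha}f_n(1/3)$ by brute force for a few values of $n$:

```python
# Numerical illustration of the convergence M^alpha f_n -> M^alpha mu at
# x = 1/3 (the grid sizes below are ad hoc choices, not from the paper).

def overlap(a, b, c, d):
    # length of the intersection of the intervals (a, b) and (c, d)
    return max(0.0, min(b, d) - max(a, c))

def M_alpha_fn(x, alpha, n, t_max=3.0, steps=400):
    # brute-force sup of averages of f_n = n(chi_[0,1/n] + chi_[1-1/n,1])
    # over intervals (y - t, y + t) with |x - y| <= alpha * t
    best = 0.0
    for i in range(1, steps + 1):
        t = t_max * i / steps
        for j in range(steps + 1):
            y = (x - alpha * t) + 2.0 * alpha * t * j / steps
            avg = n * (overlap(y - t, y + t, 0.0, 1.0 / n)
                       + overlap(y - t, y + t, 1.0 - 1.0 / n, 1.0)) / (2.0 * t)
            best = max(best, avg)
    return best

alpha = 0.2
approx = [M_alpha_fn(1/3, alpha, n) for n in (5, 50, 500)]
```

All computed values stay below $M^{\alpha}\mu(1/3) = \frac{3(1+\alpha)}{2} = 1.8$ and remain close to it, as the proof predicts.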
\section{Proof of Theorems \ref{lip} and \ref{lipcont}} The idea of this proof is essentially the same as before: we analyze local maxima in the detachment set in this Lipschitz case, proving that the maximal function is either V-shaped or monotone on its composing intervals,
\emph{if} the Lipschitz constant under consideration is less than $\frac{1}{2}$. The endpoint case is handled by approximation, and we comment on how to do it later. At the end, we sketch how to build the mentioned counterexamples.
\subsection{Analysis of maxima of $M^1_N$ for $\text{Lip}(N)< \frac{1}{2}$} Let $(a,b)$ be an interval on the real line such that there exists a point $x_0 \in (a,b)$, a maximum of $M^1_Nf$ over $(a,b)$, with the property that
\[
M^1_Nf(x_0) > \max\{M^1_Nf(a),M^1_Nf(b)\}.
\]
We wish to prove that, at some point in $(a,b)$, $M^1_Nf = f.$ We begin with the general strategy: let us suppose that this is not the case.
Then there must be an average $u(y,t) =\frac{1}{2t} \int_{y-t}^{y+t}|f(s)|\mathrm{d} s$ with $N(x_0) \ge t>0$, $|x_0-y| \le t$ and $M^1_Nf(x_0) = u(y,t).$ \\
Now we want to find a neighbourhood $I$ of $x_0$ and a radius $R=R(x_0)>0$ such that, for all $x \in I,$ $M^1_{\equiv R}f(x) = M^1_Nf(x_0).$ \\
By Lemma \ref{BPL}, we can suppose that either $y = x_0 - t$ or $y=x_0 + t,$ as we can show that $y \in (a,b).$ Without loss of generality, let us assume that $y = x_0-t.$ \\
\noindent\textit{Case (a): $t<N(x_0).$} This is the easiest case, and we rule it out with a simple observation: let $I$ be an interval for which $x_0$ is an endpoint and
such that, for all $x \in I,\, N(x)> t.$ We claim then that, for $x \in I,$ $M^1_{\equiv t+\varepsilon}f(x) = M^1_Nf(x_0),$ if $\varepsilon$ is sufficiently small. Indeed, if $\varepsilon$
is sufficiently small, then $M^1_{\equiv t+\varepsilon}f(x) \le M^1_Nf(x) (\le M^1_Nf(x_0))$ for every $x \in I.$ But then we see also that the point $(x_0-t,t)$ belongs to the region $\{(y,s)\colon|x-y|\le s \le N(x)\},$ as
then $|(x_0-t)-x|=x+t-x_0 \le t < t+\varepsilon <N(x).$ This shows that
\[
M^1_Nf(x_0) \le \inf_{x \in I} M^1_{\equiv t+\varepsilon}f(x) \le \sup_{x \in I} M^1_{\equiv t+\varepsilon}f(x) \le M^1_Nf(x_0).
\]
As before, we finish this case with \cite[Lemma~3.6]{aldazperezlazaro}, as then it guarantees us that $M^1_{\equiv t+\varepsilon}f(x) = f(x)$ for every point in this interval $I$. \\
\noindent\textit{Case (b): $t=N(x_0).$} In this case, we have to use Lemma \ref{square}. Namely, we wish to include the point $(x_0 - N(x_0),N(x_0))$ in the region
\[
\{(z,s):|z-x| + |s-N(x)| \le N(x)\},
\]
for $x_0 - \delta < x<x_0$, $\delta$ sufficiently small.
Let then $\varepsilon >0$ and $x$ close to $x_0$ be such that $N(x) \ge N(x_0) - \varepsilon.$ We have already a comparison of the form
\[
M^1_Nf(x) \ge M^1_{\equiv N(x_0) - \varepsilon}f(x).
\]
We want to conclude that there is an interval $I$ such that $M^1_{\equiv N(x_0)-\varepsilon}f$ is constant on $I$. We want then the point $(x_0 - N(x_0),N(x_0))$ to lie on the set
\[
\{(z,s): |z-x| + |s-N(x_0) + \varepsilon| \le N(x_0) - \varepsilon\}.
\]
But this is equivalent to
\[
x-x_0 + N(x_0) + \varepsilon \le N(x_0) - \varepsilon \iff |x-x_0| \ge 2\varepsilon.
\]
So, we can only afford to do this if $x$ is not too close to $x_0.$ But, as $\text{Lip}(N) < \frac{1}{2}$ in this case, we see that
\[
|N(x)-N(x_0)| \le \text{Lip}(N)|x-x_0| \Rightarrow N(x) \ge N(x_0) -\text{Lip}(N)|x-x_0| > N(x_0) - \varepsilon \quad \text{whenever}
\]
\[
|x-x_0| < \frac{\varepsilon}{\text{Lip}(N)}.
\]
Therefore, we conclude that, on the non-trivial set
\[
\{x \in \mathbb{R}: \frac{1}{\text{Lip}(N)} \varepsilon \ge |x-x_0| \ge 2\varepsilon\},
\]
it holds that $M^1_Nf(x_0) \ge M^1_Nf(x) \ge M^1_{\equiv N(x_0) - \varepsilon}f(x) \ge M^1_Nf(x_0),$ and thus all these quantities coincide. By \cite[Lemma~3.6]{aldazperezlazaro}, $M^1_{\equiv N(x_0)-\varepsilon}f(x) = M^1_Nf(x) = f(x).$
This concludes the
analysis in this case, and also finishes this part of the section, as the finishing argument here is then the same as the one used in Theorem \ref{angle}, and we therefore omit it.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.7]
\draw[-|,semithick] (-8,0)--(-7.5,0);
\draw[|-|,semithick] (-7.5,0)--(1,0);
\draw[|-|,semithick] (1,0)--(3,0);
\draw[|-|,semithick] (3,0)--(9,0);
\draw[|-,semithick] (9,0)--(10,0);
\draw[-,thick] (3,0)--(8,5);
\draw[-,thick] (3,0)--(-2,5);
\draw[-,thick] (-2,5)--(8,5);
\draw[-,thick, color=red] (1,0)--(5.3,4.3);
\draw[-,thick, color=red] (1,0)--(-3.3,4.3);
\draw[-,thick] (-3.3,4.3)--(5.3,4.3);
\draw[|-|,semithick] (-5,4.3)--(-5,0);
\draw (-6.5,2.2) node {$N(x_0)-\varepsilon$};
\draw[dashed] (-5,4.3)--(-3.3,4.3);
\draw[|-|,semithick] (-4,5)--(-4,0);
\draw (-3,2.5) node {$N(x_0)$};
\draw[dashed] (2,1)--(2,0);
\draw[-,semithick] (-3,4)--(-2,5);
\draw[dashed] (-3,4)--(-3,5);
\draw[|-|,semithick] (-6,4)--(-6,5);
\draw (-7,4.5) node {$\frac{|x-x_0|}{2}$};
\draw (-3,4) node[circle,fill,inner sep=1pt]{} ;
\draw[dashed] (-6,4)--(-3,4);
\draw[dashed] (-6,5)--(-3,5);
\draw[dashed] (-2,5)--(-4,5);
\draw[dashed] (-3.3,4.3)--(-4,5);
\draw[dashed, color=red] (-3.3,4.3)--(1,8.6);
\draw[dashed, color=red] (1,8.6)--(5.3,4.3);
\draw[dashed, draw=red, fill=red, fill opacity=0.2] (1,0) -- (5.3,4.3) -- (1,8.6) -- (-3.3,4.3) -- (1,0);
\draw (3,-0.5) node {$x_0$};
\draw (1,-0.5) node {$x$};
\draw (-7,-0.5) node {$a$};
\draw (9,-0.5) node {$b$};
\draw (0,5.5) node {{\small$(x_0 - N(x_0),N(x_0))$}};
\draw[|-|,semithick] (-6,0) -- (-6,1);
\draw (-7,0.5) node {$\frac{|x-x_0|}{2}$};
\draw[dashed] (-6,1)--(2,1);
\draw (-2,5) node[circle,fill,inner sep=1pt]{} ;
\draw (-3.3,4.3) node[circle,fill,inner sep=1pt]{};
\end{tikzpicture}
\caption{Illustration of the proof of Case (b).}
\end{figure}
\vspace{1cm}
\subsection{The critical case $\text{Lip}(N)=\frac{1}{2}$} The argument is pretty simple: we build explicitly a suitable sequence of approximations of $N$ such that they all have Lipschitz constants less than $\frac{1}{2}.$ By our already proved results,
this will give us the result also in this case. \\
Explicitly, let $N$ be such that $\text{Lip}(N) = \frac{1}{2}$ and $f \in BV(\mathbb{R})$. Let then $\mathcal{P} = \{x_1 < \cdots < x_M\}$ be any partition of the real line. Let $J \gg 1$ be a large integer, and divide the interval $[x_1,x_M]$ into $J$ equal parts, which we call $(a_j,b_j),$ $j = 1,\ldots,J$. Define also the numbers
\[
\Delta_j = \frac{N(b_j)-N(a_j)}{b_j-a_j}.
\]
We know, by hypothesis, that $\Delta_j \in [-1/2,1/2].$ Let then $\tilde{\Delta}_j = \frac{1}{2} - \frac{1}{J^3},$ and define the function
\[
\tilde{N}(x) = \begin{cases}
N(x_1), & \text{ if } x \le x_1, \cr
N(x_1) + \tilde{\Delta}_1(x-x_1), & \text{ if } x \in (a_1,b_1),\cr
\tilde{N}(b_{j-1}) + \tilde{\Delta}_j (x-b_{j-1}), & \text{ if } x \in (a_j,b_j), \cr
\tilde{N}(b_J), & \text{ if } x \ge x_M.\cr
\end{cases}
\]
It is obvious that this function is continuous and Lipschitz with constant $\frac{1}{2} - \frac{1}{J^3}.$ If $x \in [x_1,x_M],$ then
\[
|\tilde{N}(x) - N(x)| = |\tilde{N}(x) - \tilde{N}(x_1) + N(x_1) - N(x)| \le \int_{x_1}^x |\frac{1}{2} - \frac{1}{J^3} - \frac{1}{2}| \mathrm{d} t \le \frac{|x_M - x_1|}{J^3}.
\]
We now choose $J$ such that the right hand side above is less than $\delta > 0,$ which is going to be chosen as follows: for the same partition $\mathcal{P},$ we let $\delta > 0$ be such that
\[
|\tilde{N}(x_i) - N(x_i)|< \delta \Rightarrow |M^1_{N}f(x_i) - M^1_{\tilde{N}}f(x_i)| < \frac{\varepsilon}{2M} \quad \text{for all } i.
\]
This can, by continuity, always be accomplished. This implies that, using the previous case,
\[
\mathcal{V}_{\mathcal{P}}(M^1_N f) \le \mathcal{V}_{\mathcal{P}}(M^1_{\tilde{N}}f) + \varepsilon \le \mathcal{V}(M^1_{\tilde{N}}f) + \varepsilon \le \mathcal{V}(f) + \varepsilon.
\]
Taking the supremum over all possible partitions and then taking $\varepsilon \to 0$ finishes also this case, and thus the proof of Theorem \ref{lip}.
\subsection{Counterexample for $\text{Lip}(N) > \frac{1}{2}$}
Finally, we build examples of functions with $\text{Lip}(N) > \frac{1}{2}$ and $f \in BV(\mathbb{R})$ such that
\[
\mathcal{V}(M_Nf) = + \infty.
\]
Fix then $\beta > \frac{1}{2}$ and let a function $N$ with $\text{Lip}(N) = \beta$ be defined as follows:
\begin{enumerate}
\item First, let $x_0 = \frac{2}{2\beta+1}.$ Let then $N(0) = 1,\,N(x_0) = \frac{x_0}{2}$ and extend it linearly in $(0,x_0).$
\item Let $x'_K$ be the solution to the equation $\beta x - \beta x_{K-1} + \frac{x_{K-1}}{2} = \frac{x+1}{2} \iff x'_K = x_{K-1} + \frac{1}{2\beta - 1}.$
\item At last, take $x_K = x'_K + \frac{1}{2\beta +1},$ and define for all $K \ge 1$ $N(x_K) = \frac{x_K}{2},\, N(x'_K) = \frac{x'_K+1}{2} ,$ extending it linearly on $(x_{K-1},x'_K)$ and $(x'_K,x_K).$
\end{enumerate}
As $\{x'_K\}_{K \ge 1}$ is an arithmetic progression, we see that
\[
\sum_{K \ge 1} \frac{1}{x'_K} = +\infty.
\]
\]
Moreover, define $f(x) = \chi_{(-1,0)}(x).$ We will show that, for this $N$, we have that
\[
\mathcal{V}(M^1_Nf) = +\infty.
\]
In fact, it is not difficult to see that:
\begin{enumerate}
\item \textit{$M^1_Nf(x_K) =0, \, \forall K \ge 0.$} This is due to the fact that every interval $(y-t,y+t)$ with $|x_K - y| \le t \le N(x_K)$ is still contained in $[0,+\infty),$ which is of course disjoint from $(-1,0).$
\item \textit{$M^1_Nf(x'_K) \ge \frac{1}{x'_K+1}$.} This follows from
\[
M^1_Nf(x'_K) \ge \frac{1}{2N(x'_K)} \int_{-1}^{x'_K} f(t) \mathrm{d} t = \frac{1}{x'_K + 1}.
\]
\end{enumerate}
This shows that
\[
\mathcal{V}(M^1_Nf) \ge \sum_{K=1}^{\infty} |M^1_Nf(x'_K) - M^1_Nf(x_K)| \ge \sum_{K=1}^{\infty} \frac{1}{x'_K + 1} = + \infty.
\]
This construction therefore proves Theorem \ref{lipcont}.
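The construction is easy to check numerically. The following sketch (for $\beta = \frac34$; an illustration of ours, not part of the proof) generates the breakpoints by solving the defining equation of step (2) directly, and accumulates the lower bound $\sum_K 1/(x'_K+1)$ for the variation:

```python
# Numerical check of the counterexample (here beta = 3/4; an illustration,
# not part of the proof): generate the breakpoints x_K, x'_K and accumulate
# the variation lower bound sum_K 1/(x'_K + 1).

def breakpoints(beta, n_terms):
    x = 2.0 / (2.0 * beta + 1.0)                 # x_0
    xps = []
    for _ in range(n_terms):
        # solve beta*xp - beta*x + x/2 = (xp + 1)/2 for xp = x'_K
        xp = (beta * x - x / 2.0 + 0.5) / (beta - 0.5)
        xps.append(xp)
        x = xp + 1.0 / (2.0 * beta + 1.0)        # x_K = x'_K + 1/(2*beta + 1)
    return xps

xps = breakpoints(0.75, 20000)
gaps = [b - a for a, b in zip(xps, xps[1:])]
partial = sum(1.0 / (xp + 1.0) for xp in xps)
```

The gaps $x'_{K+1}-x'_K$ are constant, confirming that $\{x'_K\}$ is an arithmetic progression, and the partial sums keep growing logarithmically in the number of terms.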
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.5]
\draw[->,semithick] (0,-0.5) -- (0,9);
\draw[->,semithick] (-2.5,0) -- (15,0);
\draw[-|,semithick] (0,0) -- (0,4);
\draw[-,semithick] (0,1) -- (4/5,2/5);
\draw[-,semithick] (4/5,2/5) -- (24/5,29/10);
\draw[-,semithick] (24/5,29/10) -- (26/5,13/5);
\draw[-,semithick] (26/5,13/5) -- (46/5, 51/10);
\draw[-,semithick] (46/5, 51/10) -- (48/5, 24/5);
\draw[-,semithick] (48/5,24/5) -- (68/5, 73/10);
\draw[-,semithick] (68/5,73/10) -- (70/5, 7);
\draw[-,semithick] (14,7) -- (15, 45/4 + 7 - 42/4);
\draw[dashed, cyan, domain=-0.5:14] plot(\x,{4/(1+\x)});
\draw[dashed] (0,0) -- (15,15/2);
\draw (24/5,4) node{$N(x)$};
\draw[-,semithick, draw=blue] (-1,4)--(0,4);
\draw[draw=blue, fill=none] plot [smooth, tension=0.1] coordinates {(0,4) (4/5,0) (24/5,20/29) (26/5,0) (46/5,20/51) (48/5,0) (68/5,20/73) (14,0)};
\end{tikzpicture}
\caption{A counterexample in the case $\text{Lip}(N) = \frac{3}{4}.$ The dashed lines are the graphs of $\frac{x}{2}$ and $\frac{1}{1+x}$, and the non-dashed ones are the graphs of $M^1_Nf$ and $N$ in this case.}
\end{figure}
\section{Comments and remarks}
\subsection{Nontangential maximal functions and classical results} Here, we investigated mostly the regularity aspect of our family $M^{\alpha}$ of nontangential maximal functions, and looked for the sharp constants
in such bounded variation inequalities. One can, however, still ask about the most classical aspect studied by Melas \cite{Melas}: Let $C_{\alpha}$ be the least constant such that we have the following inequality:
\[
|\{ x \in \mathbb{R}\colon M^{\alpha}f(x) > \lambda \}| \le \frac{C_{\alpha}}{\lambda} \|f\|_1.
\]
By \cite{Melas}, we have that $C_0 = \frac{11 + \sqrt{61}}{12},$ and the classical argument of Riesz \cite{riesz} gives $C_1 = 2.$ Therefore, $\frac{11 + \sqrt{61}}{12} \le C_{\alpha} \le 2, \,\,\forall \alpha \in (0,1).$
Nevertheless, the exact values of those constants are, as far as the author knows, still unknown. \\
\subsection{Bounded variation results for mixed Lipschitz and nontangential maximal functions} In Theorems \ref{lip} and \ref{lipcont}, we proved that, for the \emph{uncentered} Lipschitz maximal function $M_N$, we have sharp
bounded variation results for $\text{Lip}(N) \le \frac{1}{2},$ and, if $\text{Lip}(N)>\frac{1}{2}$, we cannot even assure \emph{any} sort of bounded variation result. \\
We can ask yet another question: if we define the \emph{nontangential Lipschitz maximal function}
\[
M^{\alpha}_Nf(x) = \sup_{|x-y| \le \alpha t \le \alpha N(x)} \frac{1}{2t} \int_{y-t}^{y+t} |f(s)| \mathrm{d} s,
\]
then what should be the best constant $L(\alpha)$ such that, for $\text{Lip}(N) \le L(\alpha),$ we have some sort of bounded variation result like $\mathcal{V}(M^{\alpha}_Nf) \le A\mathcal{V}(f),$ while, for each $\beta > L(\alpha),$ there
exists a function $N_{\beta}$ and a function $f_{\beta} \in BV(\mathbb{R})$ such that $\text{Lip}(N_{\beta}) = \beta$ and $\mathcal{V}(M^{\alpha}_{N_{\beta}}f_{\beta}) = +\infty?$ Regarding this question, we cannot state any kind of
sharp constant bounded variation result, but the following is still attainable: it is possible to show that the first two lemmas of O. Kurka \cite{kurka} are adaptable in this context \emph{if} we suppose that
\[
\text{Lip}(N) \le \frac{1}{\alpha + 1},
\]
and then we obtain our results, with a constant that is even independent of $\alpha \in (0,1).$ On the other hand, our example used above in the proof of Theorem \ref{lipcont} is easily adaptable as well, and therefore one might prove
the following Theorem:
\begin{theorem} Let $\alpha \in [0,1]$ and $N$ be a Lipschitz function with $\text{Lip}(N) \le \frac{1}{\alpha + 1}.$ Then, for every $f \in BV(\mathbb{R}),$ we have that
\[
\mathcal{V}(M^{\alpha}_Nf) \le C \mathcal{V}(f),
\]
where $C$ is independent of $N,f,\alpha.$ Moreover, for all $\beta > \frac{1}{\alpha + 1},$ there is a function $N_{\beta}$ and
\[
f(x) = \begin{cases}
1, & \text{ if } x \in (-1,0);\cr
0, & \text{ otherwise}, \cr
\end{cases}
\]
with $\text{Lip}(N_{\beta}) = \beta$ and $\mathcal{V}(M^{\alpha}_{N_{\beta}}f) = + \infty.$
\end{theorem}
\subsection{Monotonicity of maximal $BV$-norms}\label{commrem} Theorem \ref{angle} proves that, if we define
\[
B(\alpha) := \sup_{f \in BV(\mathbb{R})} \frac{\mathcal{V}(M^{\alpha}f)}{\mathcal{V}(f)},
\]
then $B(\alpha) = 1$ for all $\alpha \in [\frac{1}{3},1].$ We can, however, with the same technique, show that $B(\alpha)$ is non-increasing in $\alpha > 0,$ and also that $B(\alpha) \equiv 1 \,\, \forall \alpha \in [\frac{1}{3},+\infty).$
Indeed, we show that, for $f \in BV(\mathbb{R})$ and $\beta > \alpha,$ we have $\mathcal{V}(M^{\alpha}f) \ge \mathcal{V}(M^{\beta}f).$ The argument uses the maximal attachment property in the following way: let, as usual, $(a,b)$ be an interval where $M^{\beta}f$ has
a local maximum \emph{inside} it, at, say, $x_0$, and
\[
M^{\beta}f(x_0) > \max(M^{\beta}f(a),M^{\beta}f(b)).
\]
Then, as we have that $M^{\beta}f \ge M^{\alpha}f$ \emph{everywhere}, we have two options:
\begin{itemize}
\item \textit{If $M^{\beta}f(x_0) =f(x_0),$} there is nothing to do, as then also $M^{\alpha}f(x_0) = M^{\beta}f(x_0).$
\item \textit{If $M^{\beta}f(x_0) = u(y,t),$ for $t > 0$,} we have -- as in the proof of Theorem \ref{angle} -- that $(y-\beta t, y+\beta t) \subset (a,b).$ But it is then obvious that
\[
M^{\alpha}f(y) \ge u(y,t) = M^{\beta}f(x_0) \ge M^{\beta}f(y) \ge M^{\alpha}f(y).
\]
\end{itemize}
Therefore, we have obtained a form of the maximal attachment property, so we can apply the standard techniques used throughout the paper to this case, and this yields our result. \\
This shows directly that $B(\alpha) \le 1, \forall \alpha \ge 1,$ but taking $f(x) = \chi_{(0,1)}$ as we did several times shows that actually $B(\alpha)=1$ in this range.
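As a sanity check of the extremality of $f = \chi_{(0,1)}$, one can approximate $\mathcal{V}(M^{\alpha}f)$ numerically; the brute-force sketch below (the grid sizes are our ad hoc choices) does so for $\alpha = \frac12$, adding the contribution of the monotone tails outside the sampling window:

```python
# Numerical illustration that V(M^alpha f) = V(f) = 2 for f = chi_(0,1),
# supporting B(alpha) = 1 (grid sizes are ad hoc choices, not from the paper).

def M_alpha_chi(x, alpha, t_max=4.0, t_steps=400, y_steps=80):
    best = 0.0
    for i in range(1, t_steps + 1):
        t = t_max * i / t_steps
        for j in range(y_steps + 1):
            y = (x - alpha * t) + 2.0 * alpha * t * j / y_steps
            # average of chi_(0,1) over the admissible interval (y - t, y + t)
            best = max(best,
                       max(0.0, min(y + t, 1.0) - max(y - t, 0.0)) / (2.0 * t))
    return best

alpha = 0.5
xs = [-3.0 + 0.1 * i for i in range(71)]      # sampling grid on [-3, 4]
vals = [M_alpha_chi(x, alpha) for x in xs]
# M^alpha f decays monotonically to 0 beyond the window, so the tails
# contribute vals[0] + vals[-1] to the total variation
var_total = sum(abs(b - a) for a, b in zip(vals, vals[1:])) + vals[0] + vals[-1]
```

The computed total variation lands near $2 = \mathcal{V}(\chi_{(0,1)})$, consistent with $B(\alpha) = 1$ in this range.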
\section{Introduction}
Recent progress in storage media
has created interest in coding techniques that ensure the reliability of the media and
lengthen their life.
Write-Once-Memory (WOM) codes are receiving renewed interest as one of the promising
coding techniques for storage media.
In the scenario of binary WOM codes,
the binary WOM storage (or channel) is modeled as follows.
A storage cell has two states, 0 and 1, and the initial state is 0. If a cell changes its state to 1,
then it cannot be reset to 0 any more.
Punch cards and optical disks are examples of the binary WOM storages.
The celebrated work by Rivest and Shamir in 1982 \cite{first-wom}
presented the first binary WOM codes, and their codes prompted subsequent active research
in the field of binary WOM codes \cite{cohen} \cite{fiat} \cite{slc}.
A memory cell in recent flash memories has multiple levels, such as 4 or 8 levels,
and the number of levels is expected to increase further in the near future.
This trend has motivated research activities on non-binary WOM codes, which
are closely related to multilevel flash memories \cite{non-binary} \cite{yakkobi} \cite{q} \cite{brian}.
There are two threads of research on non-binary WOM codes.
The first one is {\em variable rate codes} and the other is {\em fixed rate codes}.
Variable rate codes are non-binary WOM codes such that the
message alphabets used in a sequence of writing processes are not necessarily identical.
This means that the writing rate can vary at each writing attempt.
Fu and Vinck \cite{fu} proved the channel capacity of the variable rate non-binary WOM codes.
Recently, Shpilka \cite{shpilka} proposed a capacity-achieving construction of non-binary WOM codes.
Moreover, Gabrys et al. \cite{non-binary} presented a construction
of the non-binary WOM codes based on known efficient binary-WOM codes.
Although variable rate codes are efficient because they can fully utilize the potential of a WOM storage,
fixed rate codes, which have a fixed message alphabet, are more suitable for practical implementation in storage devices.
This is because a fixed amount of binary information is commonly sent from the master system to the storage.
Kurkoski \cite{brian} proposed a construction of fixed rate WOM codes using two-dimensional lattices.
Bhatia et al. \cite{lattice-wom} showed a construction of non-binary WOM codes
that relies on the lattice continuous approximation.
Cassuto and Yaakobi \cite{q} proposed a construction of fixed rate non-binary WOM codes using lattice tiling.
Recently, a fixed rate WOM code for reducing Inter-Cell Interference (ICI) was proposed by Hemo and Cassuto \cite{ici-q}.
It is known that ICI causes a drift of the threshold voltage of a flash cell according to the voltages of adjacent flash cells \cite{ici-q}.
The drift of the threshold voltage degrades the reliability of the flash cell, and it should be avoided.
One promising approach to reducing ICI is to use appropriate constrained coding to avoid certain patterns incurring large ICI.
The WOM codes presented in \cite{ici-q} not only have large $t^*$ but also satisfy certain ICI reducing constraints.
In the case of fixed rate codes, systematic constructions of efficient non-binary WOM codes
remain to be studied.
In particular, pursuing optimal codes with practical parameters
is an important subject for further study.
Furthermore, it is desirable to develop a construction of fixed rate WOM codes with a wide range of
applicability; that is, a new construction should be applicable to wide classes of
WOM devices, such as WOM devices with an ICI constraint as well.
In this paper, we propose a novel construction of fixed rate non-binary WOM codes.
The target storage media are modeled by a memory device with restricted state transitions,
i.e., a state of the memory can change to another state according to a given state transition graph.
The model is fairly general and it includes a common model of multilevel flash memories.
The proposed construction has two notable features.
First, it possesses a systematic method to determine the sets, called the encoding regions, that
are required for the encoding process.
This is a critical difference between our construction and the prior works using lattice tiling \cite{q} and \cite{ici-q}.
Second, the proposed construction determines an encoding table used for encoding
by integer programming.
\section{Preliminaries}
In this section, we first introduce several basic definitions and
notation used throughout the paper.
\begin{figure}[b]
\begin{center}
\includegraphics[width = 0.45\textwidth]{./figure/dag-message-1.eps}
\caption{
A state transition graph (left) and an example of encoding regions $(k=3)$ and a message function (right).
The numbers written in the nodes represent the indices of nodes.
The encoding regions $\omega(1) = \{1,2,3\}, \omega(2) = \{2,4,6\}$ are indicated by the dashed boxes (right).
The encoding regions are $\omega(x) = \emptyset$ for $x \in \{3,4,5,6\}$.
In the right figure, the values of the message function are expressed as follows:
The values 1, 2, and 3 are represented by a circle, a triangle, and a square, respectively.
For any message $m \in \{1,2,3\}$,
both encoding regions $\omega(1)$ and $\omega(2)$ contain the node corresponding to $m$.
}
\label{fig:dm}
\end{center}
\end{figure}
\subsection{Basic notation}
Let $G \triangleq (V,E)$ be a directed graph
where $V = \{1,2,\ldots,|V|\}$ is the set of vertices and $E \subseteq V \times V$ is the set of edges.
If there does not exist a directed edge or a path from a vertex to itself,
then the graph $G$ is said to be a directed acyclic graph, abbreviated as DAG.
A DAG is used as a {\em state transition graph} in this paper.
The left figure in Fig.~\ref{fig:dm} is an example of DAG.
We express the DAG together with its root as $G = (V,E,r)$, where
the symbol $r$ represents the root of the DAG.
For nodes $s,s' \in V$, we write $s \preceq s'$ if $s = s'$ or there exists a directed edge or path from $s$ to $s'$.
In this case, we say that $s'$ is {\em reachable} from $s$.
Assume that a DAG $G \triangleq (V,E,r)$ is given.
A WOM device $D$ associated with the graph $G$ can store any $v \in V$ as its state.
The initial state of $D$ is assumed to be $r$.
We can change the state of $D$ from $s \in V$ to $s' \in V$
if there exists a directed edge or a path from $s$ to $s'$.
The message alphabet to be written in $D$ is denoted by $\mathcal{M} \triangleq \{1,2,\ldots,M\}$.
In our scenario, we want to write several messages in $\mathcal{M}$ into $D$.
Namely, a sequence of messages is sequentially written in $D$.
When we write a message $m \in \mathcal{M}$,
we change the state of $D$ accordingly.
After that,
we can retrieve the written message $m$ by reading the state of $D$.
\subsection{Encoding function and decoding function}
In this paper, we assume that an encoder has a unit memory
to keep a previous input message,
and that a corresponding decoder has no memory.
In order to write input messages into the WOM device $D$, we need an encoding function.
The definition of the encoding function is given as follows.
\begin{definition}\label{defi:enco}
Assume that a function
\[
\mathcal{E}: V \times \mathcal{M} \rightarrow V \cup \{ {\sf fail} \}
\]
is given.
The symbol {\sf fail} represents a failure of an encoding process.
If for any $s \in V$ and $m \in \mathcal{M}$,
$
s \preceq \mathcal{E}(s,m)
$
or
$
\mathcal{E}(s,m) = {\sf fail},
$
then the function $\mathcal{E}$ is called an {\em encoding function}.
\end{definition}
The decoding function defined below
is used to retrieve the message $m \in \mathcal{M}$ from $D$.
\begin{definition}\label{defi:deco}
Assume that a function $\mathcal{D}: V \rightarrow \mathcal{M}$ is given.
If for any $m \in \mathcal{M}$ and $s \in V$, the consistency condition
\begin{equation} \label{consistency}
\mathcal{D}(\mathcal{E}(s,m)) = m
\end{equation}
is satisfied,
then the function $\mathcal{D}$ is called a {\em decoding function}.
\end{definition}
Assume that a sequence of input messages $m_1,m_2,\ldots \in \mathcal{M}$
is sequentially encoded.
We also assume that the initial state is $s_0 = r$.
The encoder encodes the incoming message $m_i$ by
\begin{equation}
s_i = \mathcal{E}(s_{i-1}, m_i)
\end{equation}
for $i = 1, 2, \ldots$.
The output of the encoder, $s_i$, is then written into $D$ as the
next state.
The following definition gives the worst number of consecutive writes to $D$
for a pair of encoding and decoding functions $(\mathcal{E},\mathcal{D})$.
\begin{definition}{}
Assume that a sequence of messages of length $t$, $(m_1,m_2,\ldots, m_t) \in \mathcal{M}^t$, is given.
Let $(s_1,s_2,\ldots, s_t)$ be the state sequence defined by $s_i = \mathcal{E}(s_{i-1}, m_i)$
under the assumption $s_0=r$.
If for any $i \in [1,t]$, $s_i \ne {\sf fail}$, then
the pair $(\mathcal{E},\mathcal{D})$ is said to be $t$ writes achievable.
The {\em worst number of writes} $t^*$ is defined by
\begin{equation}
t^* \triangleq \max \{t \mid (\mathcal{E}, \mathcal{D}) \mbox{ is $t$ writes achievable} \}.
\end{equation}
\end{definition}
In other words, the pair $(\mathcal{E},\mathcal{D})$ ensures consecutive $t^*$ writes of fixed size messages in the worst case.
Of course, in terms of efficient use of the device $D$, we should design $(\mathcal{E},\mathcal{D})$ to maximize $t^*$.
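The sequential write process $s_i = \mathcal{E}(s_{i-1}, m_i)$ can be sketched in a few lines of Python. The single-cell device and the residue-based encoding rule below are our own toy assumptions, used only to make the interface of the pair $(\mathcal{E},\mathcal{D})$ concrete; they are not a construction from this paper.

```python
def write_sequence(encode, messages, root):
    """Encode m_1, m_2, ... sequentially from s_0 = root.

    Returns the state sequence (s_1, ..., s_t), stopping at the
    first encoding failure (represented here by None)."""
    s, states = root, []
    for m in messages:
        s = encode(s, m)
        if s is None:  # 'fail'
            break
        states.append(s)
    return states

# Toy encoding function (an assumption, not the paper's construction):
# the device is a single cell with monotonically increasing levels 0..q-1,
# a message m in {0,1,2} is encoded as the smallest level >= s whose
# residue mod 3 equals m, and decoding simply reads s mod 3.
q = 10

def encode(s, m):
    for t in range(s, q):
        if t % 3 == m:
            return t
    return None  # 'fail': no admissible level left

def decode(s):
    return s % 3

states = write_sequence(encode, [2, 0, 1], root=0)
# the consistency condition D(E(s, m)) = m holds along the sequence
assert [decode(s) for s in states] == [2, 0, 1]
```

Once the top level is reached, the encoder reports a failure, which is exactly how the worst number of writes manifests itself in this toy model.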
\section{Realization of encoding function}
In this section,
we present the basic definitions required for a precise description
of the encoding algorithm used in the encoding function.
\subsection{Notation}
The {\em reachable region} $R(s)$ $(s \in V)$ is
the set of all nodes that can be reached from $s$.
The precise definition is given as follows.
\begin{definition}\label{defi:reachable}
The reachable region $R(s) (s \in V)$ is defined as
\begin{equation}
R(s) \triangleq \{ x \in V \mid s \preceq x \}.
\end{equation}
\end{definition}
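For illustration, $R(s)$ can be computed by a depth-first search on the state transition DAG. The two-cell flash memory below is a hypothetical toy instance of our own choosing (states are level pairs, and a write raises one level by one).

```python
q = 4  # levels per cell (assumption: a small toy instance)

# State transition DAG of a two-cell device: a write raises one level by one.
def successors(s):
    a, b = s
    return [t for t in ((a + 1, b), (a, b + 1)) if t[0] < q and t[1] < q]

def reachable(s):
    """R(s) = { x in V : s <= x }, computed by depth-first search.
    The partial order is reflexive, so s itself belongs to R(s)."""
    seen, stack = {s}, [s]
    while stack:
        for t in successors(stack.pop()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

# From the root (0, 0) every one of the q*q states is reachable.
assert len(reachable((0, 0))) == q * q
```

The full state (maximum level in every cell) is the unique node with $R(s) = \{s\}$.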
Encoding regions and a message function defined below play a critical role in an encoding process.
\begin{definition}{}\label{defi:patition}
Assume that a family of subsets in $V$,
\[
\{\omega(s) \} \triangleq\{\omega(1), \omega(2), \ldots, \omega(|V|) \},
\]
is given.
Let $k$ be a positive integer satisfying $|\mathcal{M}| \le k$.
If the family satisfies the following two conditions:
\begin{enumerate}
\item $\forall s \in V, \ \omega(s) \subseteq R(s)$
\item $\forall s \in V, \ |\omega(s)| = k$ or $\omega(s) = \emptyset$,
\end{enumerate}
then the family $\{\omega(s) \}$ is said to be the {\em encoding regions}.
\end{definition}
A message function defined below is used to retrieve a message.
\begin{definition}{}\label{defi:g}
Assume that a family of encoding regions $\{\omega(s)\} $ is given.
Let $g$ be a function:
\[
g: \bigcup_{s \in V} \omega(s) \rightarrow \mathcal{M}.
\]
If for any $m \in \mathcal{M}$
and for any $s \in \{x \in V \mid \omega(x) \ne \emptyset \}$, there exists $a \in \omega(s)$ satisfying
\[
g(a) = m,
\]
then the function $g$ is called a {\em message function} corresponding to the family of encoding regions $\{\omega(s) \}$.
\end{definition}
The message function can be regarded as a set of labels attached to the nodes.
In the following encoding and decoding processes, the label, i.e., the value of the message function, corresponds to the message associated with the node.
This definition implies that we can find an arbitrary message $m \in \mathcal{M}$ in an arbitrary nonempty encoding region $\omega(s)$ $(s \in \{ x \in V \mid \omega(x) \ne \emptyset \})$.
An example of encoding regions and a message function is shown in the right-hand of Fig.~\ref{fig:dm}.
We use the above definitions of the encoding regions and the message function to encode given messages.
In order to write a sequence of messages, we must connect several nonempty encoding regions to make {\em layers}.
We here define frontiers and layers as follows.
\begin{definition}{}\label{defi:ft}
For a subset of nodes $X \subseteq V$,
the {\em frontier} of $X$, $F(X)$, is defined by
\begin{equation}
F(X) \triangleq \{x \in X \mid R(x) \cap X = \{x\} \}.
\end{equation}
\end{definition}
If $x \notin F(X)$ holds for $x \in X$, then there exists $y \in F(X)$
that is reachable from $x$, i.e., $x \preceq y$.
A layer consists of a union set of encoding regions.
\begin{definition} \label{defi:bset}
Assume that a family of encoding regions $\{\omega(s) \}$ is given.
The {\em layer} $\mathcal{L}_i$ is recursively defined by
\begin{equation}
\mathcal{L}_i \triangleq \bigcup_{x \in F(\mathcal{L}_{i-1})}\omega(x),\quad\mathcal{L}_0 = \{r\}.
\end{equation}
The node $r$ represents the root of the DAG.
\end{definition}
\begin{definition}
Assume that for integer $i \ge 0$, $\mathcal{L}_i \subset V$ is given.
The {\em start point set} $V^*$ is defined by
\begin{equation}
V^* \triangleq \bigcup_{i \ge 0} F(\mathcal{L}_i).
\end{equation}
\end{definition}
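The definitions of frontiers and layers can be exercised on a small hypothetical instance. The DAG edges below are our own assumption, chosen to be consistent with the encoding regions $\omega(1)=\{1,2,3\}$, $\omega(2)=\{2,4,6\}$, $\omega(3)=\{3,4,5\}$ of the example; this is a sketch, not the graph actually used in the text.

```python
# Hypothetical 6-node DAG (the edges are an assumption chosen to be
# consistent with the encoding regions of the example in the text).
succ = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}
omega = {1: {1, 2, 3}, 2: {2, 4, 6}, 3: {3, 4, 5}, 4: set(), 5: set(), 6: set()}

def reach(s):
    seen, stack = {s}, [s]
    while stack:
        for t in succ[stack.pop()]:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def frontier(X):
    # F(X) = { x in X : R(x) ∩ X = {x} }
    return {x for x in X if reach(x) & X == {x}}

def layers(root):
    # L_0 = {root}; L_i = union of omega(x) over x in F(L_{i-1})
    Ls = [{root}]
    while True:
        nxt = set().union(*(omega[x] for x in frontier(Ls[-1])))
        if not nxt:
            return Ls
        Ls.append(nxt)

Ls = layers(1)
assert Ls == [{1}, {1, 2, 3}, {2, 3, 4, 5, 6}]
```

The start point set is then $V^* = \bigcup_i F(\mathcal{L}_i) = \{1,2,3,6\}$ for this instance.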
Figure \ref{fig:fe} shows an example of frontiers and layers.
\begin{figure}[tb]
\begin{center}
\includegraphics[width = 0.4\textwidth]{./figure/enal.eps}
\caption{
An example of frontiers and layers.
The dashed boxes represent the encoding regions $\omega(1) = \{ 1,2,3\}, \omega(2) = \{2,4,6\}, \omega(3) = \{3,4,5\}$.
The encoding regions $\omega(4), \omega(5), \omega(6)$ are empty sets.
The layers are $\mathcal{L}_0 = \{1\}, \mathcal{L}_1 = \omega(1), \mathcal{L}_2 = \omega(2) \bigcup \omega(3)$.
The frontier for each layer is depicted as filled circles: $F(\mathcal{L}_1) = \{2,3\}, F(\mathcal{L}_2) = \{ 6\}$.
The start point set is
$V^* = F(\mathcal{L}_0) \bigcup F(\mathcal{L}_1) \bigcup F(\mathcal{L}_2) = \{1,2,3,6\}$.
In the right figure, the values of the message function are expressed as follows:
The values 1, 2, and 3 are represented by a circle, a triangle, and a square, respectively. }
\label{fig:fe}
\end{center}
\end{figure}
\subsection{Encoding algorithm}
In this subsection, we explain the encoding algorithm to realize an encoding function.
The algorithm presented here is similar to the one given in \cite{q}.
The encoding algorithm is shown in Algorithm \ref{alg1}.
\begin{algorithm}[tb]
\caption{Encoding algorithm}
\label{alg1}
\begin{algorithmic}[1]
\STATE input: $s \in V$ (current state)
\STATE input: $m \in \mathcal{M}$ (message)
\STATE output: $s' = \mathcal{E}(s,m)$ (next state, or ${\sf fail}$)
\STATE $d:= \min [\{ x \in V \mid s \in \omega(x) \} \cup \{\infty\}] $
\IF {$\omega(d) = \emptyset$ or $d = \infty$}
\STATE{output ${\sf fail}$ and quit.}
\ENDIF
\STATE $y := \min \{x \in \omega(d) \mid g(x)=m\}$
\IF {$s \preceq y$}
\STATE {$s' := y$}
\ELSE
\STATE $i := \min\{i' \mid s \in \mathcal{L}_{i'} \}$
\STATE $d := \min [\{x \in F(\mathcal{L}_i) \mid s \preceq x\} \cup \{\infty\} ]$
\STATE Go to line 5.
\ENDIF
\STATE output $s'$ and quit.
\end{algorithmic}
\end{algorithm}
Suppose that we have the two inputs, a state $s$ which represents the current node in the state transition graph, and a message $m$.
The main job of this encoding algorithm is
to find $y \in \omega(d)$ satisfying $g(y) = m$ for a given message $m$.
The encoding region $\omega(d)$ can be considered as the current
{\em encoding window} in which the candidate of the next state
is found. The variable $d$ is called
a {\em start point} of the encoding window.
If such a $y$ can be written into $D$, i.e., $y$ is reachable from $s$ ($s \preceq y$),
then the next state is set to $s' := y$ in line 10 of Algorithm \ref{alg1}.
Otherwise, the current encoding window should move to another encoding
region in the next layer (line 13).
The new start point $z$ is chosen in the frontier $F(\mathcal{L}_i)$
and $d$ is updated as $d := z$.
\footnote{It is clear that, for any $x \in \omega(d)$ ($\omega(d)$ is the current encoding window),
there exists $z \in F(\mathcal{L}_i)$ satisfying $x \preceq z$.}
The layer index $i$ is the minimum index satisfying $s \in \mathcal{L}_i$.
The decoding function associated with the encoding function $\mathcal{E}$ realized by Algorithm \ref{alg1}
is given by
\[
\mathcal{D}(x) = g(x).
\]
From the definition of the message function and the procedure of Algorithm \ref{alg1},
it is evident that this function satisfies the consistency condition \eqref{consistency}.
We here explain an example of an encoding process
by using the state transition graph presented in Fig.~\ref{fig:fe}.
Assume that an input message sequence $(m_1, m_2) = (2,3)$ is given.
In the beginning of an encoding process, the current state is
initialized as $s = 1$.
Since the initial message is $m_1 = 2$, the pair $(s = 1, m = 2)$ is firstly
given to Algorithm \ref{alg1}.
In this case, we have $d = 1$ in line 4.
Since $g(3) = 2$ is satisfied in $\omega(1)$ in line 8,
the candidate of the next state $y = 3$ is obtained.
Because $s = 1 \preceq y = 3$ holds, we obtain $s' = y = 3$ in line 10.
The encoding process outputs $s' =3$ and then quits the process.
Let us consider the second encoding process for $m_2 = 3$.
We start a new encoding process with inputs $(s = 3, m = 3)$.
From line 4, we have $d=1$. This means that we set the encoding window to $\omega(1)$.
In this case, the encoder finds $g(2) = 3$ and lets $y = 2$.
However, the condition $s=3 \preceq y = 2$ is not satisfied, i.e., $y = 2$ cannot be
the next state because the node cannot change from $3$ to $2$.
In order to find the next state, we need to change the encoding window.
From line 13, the new start point of the encoding window $d = 3$ is
chosen from the frontier as $d = \min [\{x \in F(\mathcal{L}_1) \mid s \preceq x\}] = 3$.
This operation means that we change the encoding window from $\omega(1)$ to $\omega(3)$.
From the new encoding window $\omega(3)$, we can find $y = 5$ satisfying $g(5) = 3$.
Because $s = 3 \preceq y = 5$ holds (i.e., $y=5$ is reachable from $ s = 3 $), we finally have the next state $s' = 5$.
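The worked example can be reproduced with the following sketch of the encoding algorithm. The DAG edges and the complete labeling $g$ are our own assumptions, chosen to be consistent with the values $g(2)=3$, $g(3)=2$, $g(5)=3$ used in the example; `None` plays the role of {\sf fail}.

```python
# Hypothetical 6-node instance (edges and full labeling g are assumptions
# consistent with the worked example in the text).
succ = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}
omega = {1: {1, 2, 3}, 2: {2, 4, 6}, 3: {3, 4, 5}, 4: set(), 5: set(), 6: set()}
g = {1: 1, 2: 3, 3: 2, 4: 1, 5: 3, 6: 2}  # message function (node labels)

def reach(s):
    seen, stack = {s}, [s]
    while stack:
        for t in succ[stack.pop()]:
            if t not in seen:
                seen.add(t); stack.append(t)
    return seen

def frontier(X):
    return {x for x in X if reach(x) & X == {x}}

layers = [{1}]
while True:
    nxt = set().union(*(omega[x] for x in frontier(layers[-1])))
    if not nxt:
        break
    layers.append(nxt)

def encode(s, m):
    """Sketch of Algorithm 1; returns the next state or None ('fail')."""
    cands = [x for x in omega if s in omega[x]]                    # line 4
    d = min(cands) if cands else None
    while True:
        if d is None or not omega[d]:                              # line 5
            return None
        y = min(x for x in omega[d] if g[x] == m)                  # line 8
        if y in reach(s):                                          # line 9: s <= y
            return y                                               # line 10
        i = min(j for j, L in enumerate(layers) if s in L)         # line 12
        front = [x for x in frontier(layers[i]) if x in reach(s)]  # line 13
        d = min(front) if front else None

# Encoding (m1, m2) = (2, 3) from s0 = 1 yields states 3 and then 5,
# as in the worked example.
s1 = encode(1, 2)
s2 = encode(s1, 3)
assert (s1, s2) == (3, 5)
```

The associated decoding function is simply `g`, and indeed `g[s1] == 2` and `g[s2] == 3`.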
\section{Construction of WOM codes}
The performance and efficiency of the WOM codes realized by the encoding and decoding functions
described in the previous section depend on the choice of the encoding regions.
In this section, we propose a method to create a family of the encoding regions
and a method to determine the labels of the nodes, i.e., the message function, by using integer programming.
\subsection{Greedy rule for constructing a family of encoding regions} \label{seq:region}
In this subsection, we propose a method for creating a family of the encoding regions
based on a greedy rule.
The proposed WOM codes described later exploit a family of
the encoding regions defined based on the following sets.
\begin{figure}[tb]
\begin{center}
\includegraphics[width = 0.5\textwidth]{./figure/flow-encoding-region.eps}
\caption{An example of a process of the greedy construction of the encoding region for $s$,
whose size is $k=3$.
The dashed box in the leftmost figure indicates the reachable region $R(s)$.
In the middle of the figure,
the numbers of reachable nodes for each elements in $R(s)$ are presented.
We then select top 3 nodes in terms of the number of reachable nodes
as an encoding region.
In the rightmost figure,
the dashed box represents the encoding region $\Omega(s)$
constructed by the greedy process.
} \label{fig:pro-en}
\end{center}
\end{figure}
\begin{definition} \label{defi:Omega}
Assume that an integer $k$ $(M \le k)$ is given.
Let us denote the elements in the reachable region $R(s)$ by $r_1,r_2,\ldots,r_{|R(s)|} (s \in V)$
where the index of $r_i$ satisfies
\begin{equation}
|R(r_1)| \ge |R(r_2)| \ge \cdots \ge |R(r_{|R(s)| })|. \label{eq:order}
\end{equation}
The set $\Omega(s)$ is defined by
\begin{equation}
\label{eq:omega}
\Omega(s) \triangleq
\left\{
\begin{array}{ll}
\{ r_{1}, r_{2}, \ldots, r_{k} \}, & |R(s)| \ge k, \\
\emptyset, & |R(s)| < k.\\
\end{array}
\right.
\end{equation}
\end{definition}
In the above definition, a tie break rule is not explicitly stated.
If $|R(r_{a})| = |R(r_{b})|$ holds, we will randomly choose $r_{a}$ or $r_{b}$ to break a tie.
Figure \ref{fig:pro-en} shows an example of a greedy process for generating an encoding region.
The underlying idea of the greedy process is simply
to enlarge future writing possibilities.
The set $\Omega(s)$ is determined in a greedy manner
in terms of the size of the reachable regions.
In other words, we want to postpone a state transition
to a state with a smaller reachable region as long as possible,
because such a transition would lead to a smaller number of writes.
In the following part of this paper, we will use the encoding regions defined by
\begin{equation}
\omega(s) \triangleq
\left\{
\begin{array}{ll}
\Omega(s), & s \in V^*, \\
\emptyset, & s \notin V^*.
\end{array}
\right.
\end{equation}
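A minimal sketch of the greedy rule, on a hypothetical two-cell instance of our own choosing. Note that ties are broken deterministically here, whereas Definition \ref{defi:Omega} leaves the tie break to a random choice.

```python
q = 3  # toy two-cell flash with levels 0..q-1 (assumption: small instance)

def successors(s):
    a, b = s
    return [t for t in ((a + 1, b), (a, b + 1)) if t[0] < q and t[1] < q]

def reachable(s):
    seen, stack = {s}, [s]
    while stack:
        for t in successors(stack.pop()):
            if t not in seen:
                seen.add(t); stack.append(t)
    return seen

def Omega(s, k):
    """Greedy encoding region: the k nodes of R(s) with the largest
    reachable regions (ties broken deterministically by node order here,
    whereas the text breaks them randomly); empty when |R(s)| < k."""
    R = sorted(reachable(s), key=lambda x: (-len(reachable(x)), x))
    return set(R[:k]) if len(R) >= k else set()

# From the root, the greedy rule keeps the root and its two children,
# the three states with the largest reachable regions.
assert Omega((0, 0), 3) == {(0, 0), (0, 1), (1, 0)}
```

States too close to the full state get an empty region, e.g. `Omega((2, 2), 3)` is the empty set.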
\subsection{Message labeling}\label{seq:ip}
In the previous subsection, we saw how to determine the family of the encoding regions.
The remaining task is to find
appropriate message labels of nodes. Namely, we must find an appropriate
message function satisfying the required constraint described in Definition \ref{defi:g}.
In this subsection, we will propose a method to find a message function based on integer programming.
The solution of the following integer linear programming problem provides a message function.
\begin{definition} \label{defi:ip}
Assume that a family of the encoding regions
$\{\omega(s)\}$ is given.
Let $x^*_{j,\ell}, y^*_\ell \in \{0,1\}$ $(j \in \Gamma, \ell \in [1,k])$
be the value assignments of an optimal solution
of the following integer programming problem:
\begin{eqnarray}
&&{\rm Maximize} \quad \sum_{\ell \in [1,k]} y_\ell \nonumber \\
&&{\rm Subject\ to} \nonumber \\
&& \forall i \in V^*, \forall \ell \in [1,k],\quad \sum_{j \in \omega(i)} x_{j,\ell} \ge y_{\ell}, \label{eq:xy}\\
&& \forall j \in \Gamma,\quad \sum_{\ell \in [1,k]} x_{j,\ell} = 1,\\
&& \forall j \in \Gamma, \forall \ell \in [1,k], \quad x_{j,\ell} \le y_{\ell}, \\
&& \forall j \in \Gamma, \forall \ell \in [1,k],\quad x_{j,\ell} ,y_{\ell} \in \{0,1 \},
\end{eqnarray}
where $\Gamma \triangleq \bigcup_{i \ge 0} \mathcal{L}_i$.
The maximum value of the objective function is denoted by $M^*$.
\end{definition}
The symbol $z^*_j$ $(j \in \Gamma)$ denotes the unique index $\ell$ satisfying $x^*_{j,\ell} = 1$ (uniqueness follows from the constraint $\sum_{\ell} x_{j,\ell} = 1$), i.e.,
\begin{equation}
z^*_j \triangleq \arg \max_{\ell \in [1,k]} \mathbb{I} [x^*_{j, \ell} = 1],
\end{equation}
where the indicator function $\mathbb{I}[condition]$ takes the value one if $condition$ is true;
otherwise it takes the value zero. If we regard $z^*_j$ as a color assigned to the node $j$, the above IP problem
can be considered as an IP formulation of a coloring problem. In our case, the coloring constraint is as follows:
for every node $s$ (i.e., state) in $V^*$, the encoding region of $s$, which plays the role of the closed neighborhood of $s$, contains all $M^*$ colors.
This problem is closely related to the {\em domatic partition problem}.
In the following arguments, we set the maximum number of messages $M$ equal to $M^*$.
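For tiny instances, the IP can be replaced by exhaustive search, which is useful for checking intuition (a CPLEX-style solver is of course required for realistic sizes). The sketch below uses a hypothetical six-node instance of our own and, as an additional assumption, restricts the covering constraint to start points with nonempty encoding regions.

```python
from itertools import product

# Hypothetical encoding regions of the nonempty start points (an assumption).
omega = {1: {1, 2, 3}, 2: {2, 4, 6}, 3: {3, 4, 5}}
Gamma = [1, 2, 3, 4, 5, 6]
k = 3

# Exhaustive search over the k^|Gamma| labelings (viable only for toys).
# A labeling is feasible when every color it uses appears in every
# nonempty encoding region; the objective is the number of used colors,
# which corresponds to M* in the IP formulation.
best, best_z = 0, None
for z in product(range(1, k + 1), repeat=len(Gamma)):
    color = dict(zip(Gamma, z))
    used = set(z)
    if all(used <= {color[j] for j in omega[i]} for i in omega):
        if len(used) > best:
            best, best_z = len(used), color

assert best == 3  # this instance supports M* = 3 messages
```

The optimal labeling found this way is a valid message function for the instance, by the same argument as in the theorem below.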
\begin{definition}
Assume that a function $G: \Gamma \rightarrow \mathcal{M}$
is defined by
$
G(j) \triangleq \alpha(z_j^*),
$
where the mapping $\alpha: A \rightarrow \mathcal{M}$ is an arbitrary bijection.
The set $A$ is defined as
\[
A \triangleq \{\ell \in [1,k] \mid y_\ell^* = 1 \}.
\]
\end{definition}
The following theorem shows that the message function can be determined
by solving the above integer programming problem.
\begin{theorem}
The function $G$ is a message function.
\end{theorem}
\begin{proof}
We assume that arbitrary $m \in \mathcal{M}$ and $i \in V^*$ are given.
First, we consider $\tilde{\ell} = \alpha^{-1}(m)$.
From the definition of the set $A$, we have $y_{\tilde \ell}^* = 1$.
The optimal solution satisfies $ \sum_{j \in \omega(i)} x^*_{j, \tilde{\ell} } \ge y_{\tilde \ell}^* = 1$.
Because $x^*_{j, \tilde{\ell} } \in \{0,1\}$,
there exists $\tilde{j} \in \omega(i)$ satisfying $x^*_{\tilde{j}, \tilde{\ell} } = 1$.
By the definition of the function $G$, the equation
$
G(\tilde{j}) = \alpha(\tilde{\ell}) = \alpha(\alpha^{-1}(m)) = m
$
holds. This satisfies the condition for the message function.
\end{proof}
In the following, we use this message function $G$
in the encoding function (Algorithm 1) and the decoding function.
\subsection{Worst number of writes}
\begin{figure}[tb]
\begin{center}
\includegraphics[width = 0.25\textwidth]{./figure/worst-write.eps}
\caption{
An example of a case where the worst number of writes is 2.
The boxes indicated in the figure is the layers $\mathcal{L}_1, \mathcal{L}_2, \mathcal{L}_3$.
The black nodes represent frontiers of layers $\mathcal{L}_1, \mathcal{L}_2$. The node with index 4 is a frontier whose encoding region
is the empty set. Since the node with index 4 is included in $\mathcal{L}_2$, we thus have $t^* = 2$.}
\label{fig:wn}
\end{center}
\end{figure}
The worst number of writes $t^*$ provided by
the encoding algorithm with the encoding regions and the message function defined above
is given by
\begin{equation}
t^* = \min\{i > 0 \mid \exists x \in F(\mathcal{L}_i), \ \omega(x) = \emptyset \}.
\end{equation}
This follows directly from the definition of the encoding algorithm.
Figure \ref{fig:wn} presents an example for $t^* = 2$.
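On a hypothetical six-node instance (the edges and encoding regions below are our own assumptions), $t^*$ can be computed directly from this formula:

```python
# Hypothetical 6-node instance (edges and regions are assumptions).
succ = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}
omega = {1: {1, 2, 3}, 2: {2, 4, 6}, 3: {3, 4, 5}, 4: set(), 5: set(), 6: set()}

def reach(s):
    seen, stack = {s}, [s]
    while stack:
        for t in succ[stack.pop()]:
            if t not in seen:
                seen.add(t); stack.append(t)
    return seen

def frontier(X):
    return {x for x in X if reach(x) & X == {x}}

def worst_number_of_writes(root):
    """t* = min{ i > 0 : some x in F(L_i) has an empty encoding region }."""
    L, i = {root}, 0
    while True:
        F = frontier(L)
        if i > 0 and any(not omega[x] for x in F):
            return i
        L = set().union(*(omega[x] for x in F))
        i += 1

assert worst_number_of_writes(1) == 2
```

Here the frontier of $\mathcal{L}_2$ contains a node with an empty encoding region, so $t^* = 2$.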
\section{Numerical results on proposed WOM codes}
In this section, we will construct several classes of fixed rate WOM codes based on the
proposed construction. We used the IP solver IBM CPLEX for solving the integer programming problem.
\subsection{Multilevel flash memories}
Multilevel flash memories consist of a large number of cells,
each of which stores electrons.
It is assumed that the level of a cell can be increased but cannot be decreased.
In this paper, we assume $n$ cells, each of which keeps one of $q$ level values from the alphabet $\{0,1,\ldots,q-1\}$.
The state transitions of $q$ level multilevel flash memories of $n$ cells
can be represented by a state transition graph (directed square grid graph) presented in Fig. \ref{fig:flash}.
Figure \ref{fig:flash} presents
the state transition graph for multilevel flash memory $(n = 2, q = 4)$ and
the encoding regions constructed by the proposed method.
In this case, we can always write one of $M = 5$ messages in each write operation, and
the worst number of writes is $t^* = 2$.
\begin{figure}[b]
\begin{center}
\includegraphics[width = 0.45\textwidth]{./figure/flash.eps}
\caption{
The left figure presents the state transition graph for multilevel flash memories $(n = 2, q = 4)$.
The levels of two cells are denoted by $\ell_1$ and $\ell_2$.
The horizontal (resp. vertical) direction means the level of the cell $\ell_1$ (resp. $\ell_2$).
The right figure presents a family of encoding regions and a message function
constructed by the proposed method.
The numbers written in the nodes represent the values of the message function.
Nonempty encoding regions are indicated by the boxes.
For any message $m$ in $\{1,2,3,4,5\}$,
each nonempty encoding region contains a node corresponding to $m$.
}
\label{fig:flash}
\end{center}
\end{figure}
\begin{table}[b]
\caption{Worst numbers of writes $t^*$ of proposed WOM codes for $q$ level flash memories with $n = 2$ cells.}
\begin{center}
\begin{tabular}{c|ccccc}
\hline \label{tab:t-n2}
$M \backslash q$ & 4 & 5 & 6 & 7 & 8 \\ \hline
4 & 3 & 4 & 5 & 6 & 7\\
5 & 2 & 3 & 4 & 5 & 6\\
6 & 2 & 3 & 3 & 4 & 5\\
7 & 1 & 2 & 3 & 3 & 4\\
8 & 1 & 2 & 3 & 3 & 4\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\caption{Comparison between $t^*$ of proposed WOM codes and upper bound ($q$ level flash memories)($n=2, M = 8$)}
\begin{tabular}{c|cccccccc} \hline \label{tab:ub}
& $q = 4$ & 5 & 6 & 7 & 8 &16 & 32 & 48\\ \hline
Upper bound & 1 & 2 & 3 & 3 & 4 & 9 & 20 & 31\\
Proposed & 1 & 2 & 3 & 3 & 4 & 9 & 20 & 31\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\caption{Worst numbers of writes $t^*$ of proposed WOM codes for $q$ level flash memories with $n = 3$ cells.}
\begin{tabular}{c|ccccc}
\hline \label{tab:t-n3}
$M \backslash q$ & 4 & 5 & 6 & 7 & 8 \\ \hline
4 & 6 & 8 & 10 & 12 & 14\\
5 & 4 & 5 & 7 & 8 & 10\\
6 & 4 & 5 & 7 & 8 & 10\\
7 & 3 & 5 & 6 & 8 & 9\\
8 & 3 & 4 & 6 & 7 & 8\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\caption{Worst numbers of writes $t^*$ of proposed WOM codes for $q$ level flash memories with $n = 4$ cells.}
\begin{tabular}{c|ccccc}
\hline \label{tab:t-n4}
$M \backslash q$ & 4 & 5 & 6 & 7 & 8 \\ \hline
5 & 7& 9 & 12 & 14 & 17\\
6 & 5& 7 & 9 & 11 & 13\\
7 & 5& 7 & 9 & 11 & 13\\
8 & 5& 7 & 9 & 11 & 13\\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{tab:t-n2} presents the worst numbers of writes $t^*$ of
the proposed WOM codes for multilevel flash memories
for the case $n = 2$.
When we solved the IP problems, $k = M$ was assumed.
For example, in the case of $q=8, M=8$, the worst number of writes equals $t^* = 4$.
In \cite{q}, several upper bounds for $t^*$ are presented for WOM codes $(n = 2, q, M, t^*)$.
For $M \ge 8$, the worst numbers of writes are upper bounded as
\begin{equation}
t^* \le \left\lceil \frac{2(q-1)}{3} \right\rceil -1.
\end{equation}
Table \ref{tab:ub} shows
the comparison between this upper bound and the worst numbers of writes of the proposed codes for $n=2, M = 8$.
We can see that the worst numbers of writes of the proposed WOM codes
exactly coincide with the values of the upper bound.
This result can be seen as an evidence of the efficiency of the WOM codes
constructed by the proposed method.
Table \ref{tab:t-n3} shows the result for $n=3$.
In \cite{q},
an $(n=3, q=7, M=7 ,t^*=7)$ WOM code is presented.
According to Table \ref{tab:t-n3}, the proposed WOM code attains $t^* = 8$ which is larger than that of the known code
under the same parameter setting: $n = 3, q = 7, M = 7$.
Table \ref{tab:t-n4} shows the result for $n=4$.
In our experiments, we were able to
construct WOM codes for the range of $M \in \{5,6,7,8\}$ and $q \in \{4,5,6,7,8\}$ with reasonable computation time.
\subsection{WOM codes with constraints for reducing ICI}
With the current rapid growth of the cell density of NAND flash memories,
ICI is becoming one of the hardest obstacles
to shrinking cell sizes.
The paper \cite{ici-q} presented several excellent fixed rate WOM codes with constraints for reducing ICI.
Their codes incorporate a constraint that keeps the charge levels of adjacent cells balanced.
Such constraints are expected to reduce the ICI effect and thus lead to more reliable memories.
In this subsection, we will apply our construction to WOM codes with constraints for reducing ICI.
Assume that we have flash memory cells $c_1, c_2, \ldots, c_n$.
The current level for each cell is denoted by $\ell_i$.
The following definition gives the $d$ imbalance constraint for reducing ICI.
\begin{definition}\label{defi:imb}
Let $d$ be a positive integer smaller than $n$.
For any write sequence, if the cell levels $\ell_i$ $(1 \le i \le n)$ satisfy
\begin{equation}
\max_{i,j,\ i \neq j} | \ell_i - \ell_j | \le d,
\end{equation}
then we say that the cell block satisfies {\em the $d$ imbalance constraint}.
\end{definition}
In other words, if the cells satisfy the $d$ imbalance constraint, then the level difference between any pair of cells is at most $d$.
It is known that a large level difference of adjacent cells tends to induce ICI. The $d$ imbalance constraint is thus helpful to reduce ICI \cite{ici-q}.
Figure~\ref{fig:flash-ici} presents state transition graphs for 4 level flash memories of two cells $(n=2)$
with the $d$ imbalance constraint $(d = 1,2)$. As seen from this figure, at any state (or node), the difference between the levels of $c_1$ and $c_2$
is always at most $d$ $(d=1,2)$.
\begin{figure}[tb]
\begin{center}
\includegraphics[width = 0.47\textwidth]{./figure/ici-1.eps}
\caption{
This figure presents a state transition diagram of
$q=4$ level flash memories of two cells $(n=2)$ satisfying
$d$ imbalance constraints (left: $d=1$, right: $d=2$).
The levels of two cells are denoted by $\ell_1$ and $\ell_2$.
The horizontal (resp. vertical) direction means the level of the cell $\ell_1$ (resp. $\ell_2$).
}
\label{fig:flash-ici}
\end{center}
\end{figure}
It is straightforward to apply our code construction to the case of the WOM codes with $d$-imbalance constraint.
Figure \ref{fig:flash-ici-construct} presents a family of encoding regions and the values of a message function constructed by the
proposed method. This example illustrates the universality of the proposed construction, i.e., it can be applied to any state transition graph.
\begin{figure}[tb]
\begin{center}
\includegraphics[width = 0.2\textwidth]{./figure/ici-construct-2.eps}
\caption{
This state transition graph corresponds to
$q=4$ level flash memories of two cells $(n=2)$ satisfying $d=2$ imbalance constraint.
The values in the circles represent the values of the message function.
Nonempty encoding regions are indicated by the boxes.}
\label{fig:flash-ici-construct}
\end{center}
\end{figure}
Table \ref{tab:ici-ub} shows the comparison between
the upper bound presented in \cite{ici-q}
and the worst numbers of writes of the proposed codes $(n=2, M=8, d=3)$.
For $M = 8$, the worst numbers of writes of the WOM codes with the $d=3$ imbalance constraint
are upper bounded by
\begin{equation}
t^* \le \left\lfloor \frac{3(q-1)}{5} \right\rfloor.
\end{equation}
We can see that the worst numbers of writes of the proposed WOM codes
exactly coincide with the values of the upper bound.
Tables \ref{tab:ici-n3} and \ref{tab:ici-n4} present the worst numbers of writes $t^*$ of the proposed WOM codes with the $d$ imbalance constraint $(n = 3,4)$.
The paper \cite{ici-q} only deals with the case of two cells $(n=2)$.
It is not trivial to construct WOM codes ($n=3, n=4$) with the $d$ imbalance constraint by using the construction given in \cite{ici-q} but
our construction is directly applicable even for such cases.
\begin{table}[tb]
\begin{center}
\caption{Comparison between $t^*$ of proposed WOM codes with the $d$ imbalance constraint and upper bound ($n=2, M = 8, d=3$)}
\label{tab:ici-ub}
\begin{tabular}{c|cccccccc} \hline
& $q = 4$ & 5 & 6 & 7 & 8 &16 & 32 & 48\\ \hline
Upper bound & 1 & 2 & 3 & 3 & 4 & 9 & 18 & 28\\
Proposed & 1 & 2 & 3 & 3 & 4 & 9 & 18 & 28\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\caption{Worst numbers of writes $t^*$ of proposed WOM codes with the $d$ imbalance constraint
$(n = 3)$}
\begin{tabular}{c|cc|cc}
\hline \label{tab:ici-n3}
& \multicolumn{2}{c|}{$d = 2$} & \multicolumn{2}{c}{$d = 3$} \\ \hline
$M \backslash q$& 4 & 8 & 4 & 8 \\ \hline
5& 4 & 10 & 4 & 10 \\
6 & 4 & 9 & 4 & 10 \\
7 & 3 & 9 & 3 & 9 \\
8 & 3 & -- & 3 & 8 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\caption{Worst numbers of writes $t^*$ of proposed WOM codes with the $d$ imbalance constraint $(n = 4)$}
\begin{tabular}{c|cc|cc}
\hline \label{tab:ici-n4}
& \multicolumn{2}{c|}{$d = 2$} & \multicolumn{2}{c}{$d = 3$} \\ \hline
$M \backslash q$& 4 & 8 & 4 & 8 \\ \hline
5 & 7 & -- & 7 & 17 \\
6 & 5 & 13 & 5 & 13 \\
7 & 5 & 13 & 5 & 13 \\
8 & 5 & 13 & 5 & 13 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
In this paper, we proposed a construction of fixed rate non-binary WOM codes
based on integer programming.
The novel WOM codes with $n=2, M=8$ achieve
the worst numbers of writes $t^*$
that meet the known upper bound in the range $q \in [4,8]$.
We discovered several new efficient WOM codes for $q$ level flash memories when $n = 3, 4$.
For instance, our $(n=3, q= 7, M = 7, t^* = 8)$ WOM code provides a larger worst number of writes
than that of the known code with the parameters $(n=3, q= 7, M = 7, t^* = 7)$ \cite{q}.
In addition, we constructed several WOM codes with the $d$ imbalance constraint for reducing ICI.
Our WOM codes with $n=2, M=8, d=3$ achieve the worst numbers of writes $t^*$
that meet the known upper bound in the range $q \in [4,8]$.
This implies the efficiency of the WOM codes
constructed by our construction.
Another notable advantage of the proposed construction is its flexibility
in handling high dimensional cases.
Codes with moderately large $n$ can be constructed easily
whenever the integer programming problem can be solved in reasonable time.
The proposed construction can be applied to various storage devices,
to various dimensions (i.e., numbers of cells),
and to various kinds of additional constraints.
\section*{Acknowledgment}
This work was supported by JSPS Grant-in-Aid for Scientific Research Grant Number 16K14267.
\section{Introduction}
This paper is concerned with the design of numerical schemes for the one-dimensional Cauchy problem for non-local scalar conservation law of the form
\begin{equation}\label{nl_claw}
\begin{cases}
\rho_t+(f(\rho)V(\rho*\omega_\eta ))_x=0, & x\in \mathbb{R},\quad t>0, \\ %
\rho(x,0)=\rho_0(x) & x\in \mathbb{R},
\end{cases}
\end{equation}
where the unknown density $\rho$ depends on the space variable $x$ and the time variable $t$, $\rho \to V(\rho)$ is a given velocity function, and $\rho \to g(\rho) = f(\rho)V(\rho)$ is the usual flux function for the corresponding local scalar conservation law.
Here, (\ref{nl_claw}) is non-local in the sense that the velocity function $V$ is evaluated on a ``neighborhood'' of $x\in\mathbb{R}$ defined by the convolution of the density $\rho$ and a given kernel function $\omega_\eta$ with compact support. In this paper, we are especially interested in two specific forms of (\ref{nl_claw}) which naturally arise in traffic flow modelling \cite{BG_2016, GS_2016} and sedimentation problems \cite{BBKT_2011}. They are given as follows. \\
\ \\
{\it A non-local traffic flow model.} In this context, we follow \cite{BG_2016, GS_2016} and consider (\ref{nl_claw}) as an extension of the classical Lighthill-Whitham-Richards traffic flow model, in which the mean velocity is assumed to be
a non-increasing function of the downstream traffic density and where the flux function is given by
\begin{equation}\label{LWRnl}
f(\rho)=\rho,\quad
V(\rho)= \textcolor{black}{1-\rho}, \quad (\rho*\omega_\eta)(x,t)=\int_x^{x+\eta}\omega_\eta(y-x)\rho(y,t)dy.
\end{equation}
Four different nonnegative kernel functions will be considered in the numerical section, namely $\omega_\eta(x) = 1/\eta$,
$\omega_\eta(x) = 2(\eta-x)/\eta^2$,
$\omega_\eta(x) = 3(\eta^2-x^2)/(2 \eta^3)$ and \textcolor{black}{$\omega_\eta(x) = 2x/\eta^2$} with support on $[0,\eta]$ for a given value of the real number $\eta>0$. Notice that the well-posedness of this model together with the design of a first and a second order FV approximation have been considered in \cite{BG_2016, GS_2016}. \\
\ \\
{\it A non-local sedimentation model.} Following \cite{BBKT_2011}, we consider under idealized assumptions that equation \eqref{nl_claw} represents a one-dimensional model for the sedimentation
of small equal-sized spherical solid particles dispersed in a viscous fluid, where the local solids column fraction $\rho=\rho(x,t)$ is a function of depth $x$
and time $t$. In this context, the flux function is given by
\begin{equation}\label{sednl}
f(\rho)=\rho(1-\rho)^\alpha,\quad
V(\rho)=(1-\rho)^n, \quad (\rho*\omega_\eta)(x,t)=\int_{-2\eta}^{2\eta}\omega_\eta(y)\rho(x+y,t)dy,
\end{equation}
where $n \geq 1$ and the parameter $\alpha$ satisfies $\alpha=0$ or $\alpha\geq1$. The function $V$ is the so-called hindered settling factor and the convolution term $\rho*\omega_\eta$ is defined by a symmetric, nonnegative, and piecewise smooth kernel function $\omega_\eta$ with
support on $[-2\eta,2\eta]$ for a parameter $\eta>0$ and $\int_{\mathbb{R}}\omega_\eta(x)dx=1$. More precisely, the authors define in \cite{BBKT_2011} a truncated parabola $K$ by
$$
K(x)=\frac38\left(1-\frac{x^2}{4}\right) \, \text{ for } |x|<2,\qquad K(x)=0\,\,\text{ otherwise, }
$$
and set
\begin{equation}\label{ker_sed}
\omega_\eta(x):=\eta^{-1}K(\eta^{-1}x).
\end{equation}
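The rescaling \eqref{ker_sed} preserves the unit mass of $K$. A quick midpoint-rule check (our own sketch, with arbitrarily chosen values of $\eta$) confirms $\int_{\mathbb{R}}\omega_\eta(x)\,dx = 1$:

```python
def K(x):
    # truncated parabola: K(x) = (3/8)(1 - x^2/4) for |x| < 2, else 0
    return 0.375 * (1.0 - x * x / 4.0) if abs(x) < 2.0 else 0.0

def w(x, eta):
    # rescaled kernel w_eta(x) = K(x/eta)/eta, supported on [-2*eta, 2*eta]
    return K(x / eta) / eta

def total_mass(eta, n=2000):
    # midpoint-rule approximation of int w_eta over its support
    a = -2.0 * eta
    h = 4.0 * eta / n
    return sum(w(a + (i + 0.5) * h, eta) * h for i in range(n))

assert abs(total_mass(1.0) - 1.0) < 1e-6
```

The change of variables $x = \eta u$ shows the mass is exactly one for every $\eta > 0$, which the quadrature confirms numerically.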
Conservation laws with non-local terms arise in a variety of physical and engineering applications: besides the above cited ones, we mention models for granular flows \cite{AmadoriShen2012}, production networks \cite{Keimer2015}, conveyor belts \cite{Gottlich2014}, weakly coupled oscillators \cite{AHP2016}, laser cutting processes \cite{ColomboMarcellini2015}, biological applications like structured populations dynamics \cite{Perthame_book2007}, crowd dynamics \cite{Carrillo2016, ColomboGaravelloMercier2012} or more general problems like gradient constrained equations \cite{Amorim2012}. \\
While several analytical results on non-local conservation laws can be found in the recent literature (we refer to \cite{Amorim2015}
for scalar equations in one space dimension, \cite{AmbrosioGangbo2008, ColomboHertyMercier2011, PiccoliRossi2013} for scalar equations in several space dimensions and \cite{ACG2015,ColomboMercier2012,CrippaMercier2012} for multi-dimensional systems of conservation laws),
few specific numerical methods have been developed up to now.
Finite volume numerical methods have been studied in \cite{ACG2015, GS_2016, KurganovPolizzi2009, PiccoliTosin2011}. In this regard, it is important to notice that the lack of Riemann solvers for non-local equations strongly limits the choice of the scheme. To the best of our knowledge, two main approaches have been proposed in the literature to treat non-local problems: first and second order central schemes like
Lax-Friedrichs or Nessyahu-Tadmor \cite{ACG2015, Amorim2015, BBKT_2011, GS_2016, KurganovPolizzi2009}
and Discontinuous Galerkin (DG) methods \cite{GottlichSchindler2015}. In particular, the comparative study presented in \cite{GottlichSchindler2015}
on a specific model for material flow in two space dimensions, involving density gradient convolutions,
encourages the use of DG schemes for their versatility and lower computational cost, but further investigations are needed in this direction.
Besides that, the computational cost induced by the presence of non-local terms, which requires the evaluation of quadrature formulas at each time step,
motivates the development of high-order algorithms.
The aim of the present article is to conduct a comparison study on high order schemes for a class of non-local scalar equations in one space dimension,
focusing on equations of type \eqref{nl_claw}.
In Section \ref{review} we review DG and FV-WENO schemes for classical conservation laws.
These schemes will be extended to the non-local case in Sections \ref{sec:dg_nl} and \ref{sec:fv_nl}.
Section \ref{sec:tests} is devoted to numerical tests.
\section{A review of Discontinuous Galerkin and Finite Volume WENO schemes for local conservation laws} \label{review}
The aim of this section is to introduce some notations and to briefly review the DG and FV-WENO numerical schemes to solve the classical {\it local} nonlinear conservation law
\begin{equation}\label{claw}
\begin{cases}
\rho_t+g(\rho)_x=0, & x\in \mathbb{R},\quad t>0, \\
\rho(x,0)=\rho_0(x), & x\in \mathbb{R}.
\end{cases}
\end{equation}
We first consider $\{I_j\}_{j\in\mathbb{Z}}$ a partition of $\mathbb{R}$. The points $x_j$ will represent the centers of the cells $I_j=[x_{j-\frac12},x_{j+\frac12}]$, and the cell sizes will be denoted by
$\Delta x_j=x_{j+\frac12}-x_{j-\frac12}$. The largest cell size is $h=\sup_j \Delta x_j$. Note that, in practice, we will consider a constant space step so that we will have $h=\Delta x$.
\subsection{The Discontinuous Galerkin approach}
In this approach, we look for approximate
solutions in the function space of discontinuous polynomials
$$
V_h:=V_h^k=\{v:v|_{I_j}\in P^k(I_j)\},
$$
where $P^k(I_j)$ denotes the space of polynomials of degree at most $k$ in the element $I_j$.
The approximate solutions are sought under the form
$$
\rho^h(x,t) = \sum_{l=0}^k c_j^{(l)}(t) v_j^{(l)}(x), \quad v_j^{(l)}(x) = v^{(l)}(\zeta_j(x)),
$$
where $c_j^{(l)}$ are the degrees of freedom
in the element $I_j$.
The subset $\{v_j^{(l)}\}_{l=0,...,k}$
constitutes a basis of $P^k(I_j)$ and
in this work we will take Legendre polynomials as a local orthogonal basis of $P^k(I_j)$, namely
$$v^{(0)}(\zeta_j)=1,\qquad v^{(1)}(\zeta_j)=\zeta_j ,\qquad v^{(2)}(\zeta_j)=\frac12\left( 3 \zeta_j^2-1\right),\dots, \quad \zeta_j:=\zeta_j(x)=\frac{x-x_j}{{\Delta x}/2},$$
see for instance \textcolor{black}{\cite{CS_2001,QS_2005}}. \\
Multiplying \eqref{claw} by $v_h\in V_h$ and integrating over $I_j$ gives
\begin{equation}
\int_{I_{j}}\rho_{t}v_{h}dx-\int_{I_{j}}g(\rho)v_{h,x}dx+g(\rho(\cdot,t))v_{h}(\cdot)\bigl\lvert_{x_{j-\frac12}}^{x_{j+\frac12}}=0,
\end{equation}
and {the semi-discrete DG formulation thus consists} in looking for $\rho^{h}\in V_{h}$, such that for all $v_{h}\in V_{h}$ and all $j$,
\begin{equation}\label{semi-discrete}
\int_{I_{j}}\rho^h_{t}v_{h}dx-\int_{I_{j}}g(\rho^h)v_{h,x}dx+\hat{g}_{j+\frac12}v^{-}_{h}(x_{j+\frac12})-\hat{g}_{j-\frac12}v^{+}_{h}(x_{j-\frac12})=0 ,
\end{equation}
where $\hat{g}_{j+\frac12}=\hat{g}(\rho^{h,-}_{j+\frac12},\rho^{h,+}_{j+\frac12})$ is a consistent, monotone and Lipschitz continuous numerical flux function. In particular,
we will choose to use the Lax-Friedrichs flux
$$\hat{g}(a,b):=\frac{g(a)+g(b)}{2}+\alpha\frac{a-b}{2},\quad\quad \alpha=\max_u |g'(u)|.$$
Let us now observe that if $v_h$ is the $l$-th Legendre polynomial, we have $v^{+}_{h}(x_{j-\frac12})=v^{(l)}(\zeta_{j}(x_{j-\frac12}))=(-1)^{l}$ and
$v^{-}_{h}(x_{j+\frac12})=v^{(l)}(\zeta_{j}(x_{j+\frac12}))=1$, $\forall j\,, l=0,1,\dots,k$. Therefore,
replacing $\rho^h(x,t)$ by $\rho_j^{h}(x,t)$ and $v_h(x)$ by $v^{(l)}(\zeta_j(x))$ in \eqref{semi-discrete}, the degrees of freedom $c^{(l)}_{j}(t)$ satisfy the differential equations
\begin{equation}\label{ode}
\frac{d}{dt}c^{(l)}_{j}(t)+\frac{1}{a_{l}} \left(-\int_{I_{j}}g(\rho_j^h)\frac{d}{dx}v^{(l)}(\zeta_j(x))dx+\hat{g}(\rho^{h,-}_{j+\frac12},\rho^{h,+}_{j+\frac12})-
(-1)^{l}\hat{g}(\rho^{h,-}_{j-\frac12},\rho^{h,+}_{j-\frac12})\right)=0,
\end{equation}
with $${a_{l}}=\int_{I_{j}}(v^{(l)}(\zeta_j(x)))^2dx=\frac{\Delta x}{2l+1},\qquad l=0,1,\dots,k.$$
On the other hand, the initial condition can be obtained using the $L^2$-projection of $\rho_0(x)$, namely
$$c^{(l)}_{j}(0)=\frac{1}{a_{l}}\int_{I_{j}}\rho_0(x)v^{(l)}(\zeta_j(x))dx=\frac{2l+1}{2}\int_{-1}^1\rho_0\left(\frac{\Delta x}{2}y+x_j\right)v^{(l)}(y)dy,\qquad l=0,\dots,k.$$
The integral terms in \eqref{ode} can be computed exactly or using a high-order quadrature technique after a suitable change of variable, namely
$$\int_{I_{j}}g(\rho_j^h)\frac{d}{dx}v^{(l)}(\zeta_j(x))dx=\int_{-1}^1g\left(\rho_j^h\left(\frac{\Delta x}{2}y+x_j,t\right)\right)(v^{(l)})'(y)dy.$$
In this work, we will consider a Gauss-Legendre quadrature with $N_G=5$ nodes for integrals on $[-1,1]$
$$
\int_{-1}^{1}g(y)dy=\sum_{e=1}^{N_G}w_eg(y_e),
$$
where $y_e$ are the Gauss-Legendre quadrature points, such that the quadrature formula is exact for polynomials of degree up to $2N_G-1=9$ \textcolor{black}{\cite{abramowitz1966handbook}}. \\
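In practice, the nodes $y_e$ and weights $w_e$ can be obtained, for instance, from \texttt{numpy}; the sketch below builds the $N_G=5$ rule and checks exactness on degree-$8$ and degree-$9$ monomials:

```python
import numpy as np

# N_G = 5 Gauss-Legendre nodes/weights on [-1, 1]; exact up to degree 2*N_G - 1 = 9
nodes, weights = np.polynomial.legendre.leggauss(5)

def gl_integrate(g):
    """Approximate int_{-1}^{1} g(y) dy with the 5-point rule."""
    return np.sum(weights * g(nodes))

I8 = gl_integrate(lambda y: y**8)   # exact value: 2/9
I9 = gl_integrate(lambda y: y**9)   # exact value: 0 (odd integrand)
```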
\textcolor{black}{
The semi-discrete scheme \eqref{ode} can be written under the usual form
$$\frac{d}{dt}C(t)= \mathcal{L}(C(t)),$$
where $\mathcal{L}$ is the spatial discretization operator defined by (\ref{ode}). In this work, we will consider a time-discretisation based on the following
total variation diminishing (TVD) third-order Runge-Kutta method \cite{shu1988efficient},
\begin{eqnarray}\label{rk3}
C^{(1)}&=&C^{n}+\Delta t \mathcal{L}(C^{n}),\notag\\
C^{(2)}&=&\frac34C^{n}+\frac14C^{(1)}+ \frac14\Delta t \mathcal{L}(C^{(1)}),\\
C^{n+1}&=&\frac13C^{n}+\frac23C^{(2)}+ \frac23\Delta t \mathcal{L}(C^{(2)}).\notag
\end{eqnarray}
Other TVD or strong stability preserving (SSP) time discretizations can also be used \cite{gottlieb2009high}. The CFL condition is
$$\frac{\Delta t}{\Delta x}\max_\rho|g'(\rho)|\leq \mathrm{C}_{\mathrm{CFL}}= \frac{1}{2k+1},$$
where $k$ is the degree of the polynomial, see \textcolor{black}{\cite{CS_2001}}. The scheme
\eqref{ode} and \eqref{rk3} will be denoted RKDG. }
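As an illustration, one step of the TVD Runge-Kutta scheme above can be coded in a few lines. In the sketch below, the scalar test problem $dC/dt=-C$ is only an illustrative choice used to verify the expected third-order convergence:

```python
import math

def rk3_step(C, dt, L):
    """One step of the TVD third-order Runge-Kutta (Shu-Osher) scheme."""
    C1 = C + dt * L(C)
    C2 = 0.75 * C + 0.25 * C1 + 0.25 * dt * L(C1)
    return C / 3.0 + 2.0 / 3.0 * C2 + 2.0 / 3.0 * dt * L(C2)

def solve(dt, T=1.0):
    """Integrate dC/dt = -C, C(0) = 1, up to time T (illustrative test ODE)."""
    C, nsteps = 1.0, round(T / dt)
    for _ in range(nsteps):
        C = rk3_step(C, dt, lambda u: -u)
    return C

err1 = abs(solve(0.01) - math.exp(-1.0))
err2 = abs(solve(0.005) - math.exp(-1.0))
order = math.log2(err1 / err2)   # should be close to 3
```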
\subsection{Generalized slope limiter}
It is well-known that RKDG schemes like the one proposed above may oscillate when sharp discontinuities are present in the solution. In order to control these instabilities, a common
strategy is to use a limiting procedure. We will consider the so-called generalized slope limiters proposed in \cite{CS_2001}.
With this in mind and $\rho_j^h(x)=\sum_{l=0}^{k}c^{(l)}_{j}v^{(l)}(\zeta_j(x))\in P^k(I_j)$, we first set
$$u_{j+\frac12}^{-}:=\bar{\rho}_j+\operatorname*{\mathrm{mm}}(\rho_j^h(x_{j+\frac12})-\bar{\rho}_j,\Delta_{+}\bar{\rho}_j,\Delta_{-}\bar{\rho}_{j})
$$
and
$$ u_{j-\frac12}^{+}:=\bar{\rho}_j-\operatorname*{\mathrm{mm}}(\bar{\rho}_j-\rho_j^h(x_{j-\frac12}),\Delta_{+}\bar{\rho}_j,\Delta_{-}\bar{\rho}_{j}),$$
where $\bar{\rho}_j$ is the average of $\rho^h$ on $I_j$, $\Delta_{+}\bar{\rho}_j=\bar{\rho}_{j+1}-\bar{\rho}_j$, $\Delta_{-}\bar{\rho}_j=\bar{\rho}_j-\bar{\rho}_{j-1}$, and
where $\operatorname*{\mathrm{mm}}$ is given by the minmod function limiter
$$\operatorname*{\mathrm{mm}}(a_1,a_2,a_3)=\begin{cases} s\cdot \min_{1\leq i\leq 3} |a_i| & \text{ if } s=\operatorname{sign}(a_1)=\operatorname{sign}(a_2)=\operatorname{sign}(a_3),\\ 0 & \text{ otherwise,} \end{cases}$$
or by the TVB modified minmod function
\begin{equation}\label{slopelimiter}
{\overline{\operatorname*{\mathrm{mm}}}}(a_1,a_2,a_3)=\begin{cases} a_1 & \text{ if } |a_1|\leq M_{b} h^2,\\ \operatorname*{\mathrm{mm}}(a_1,a_2,a_3) & \text{ otherwise,} \end{cases}
\end{equation}
where $M_{b} >0$ is a constant. According to \cite{CS_2001, QS_2005}, this constant is proportional to the second-order derivative of the initial condition at local extrema.
Note that if $M_b$ is chosen too small, the scheme is very diffusive, while if $M_b$ is too large, oscillations may appear. \\
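A minimal sketch of the two limiter functions above, for scalar arguments and with illustrative test values of $M_b$ and $h$, reads:

```python
import numpy as np

def minmod(a1, a2, a3):
    """mm(a1, a2, a3): common-sign minimum in modulus, zero otherwise."""
    s = np.sign(a1)
    if s == np.sign(a2) == np.sign(a3):
        return s * min(abs(a1), abs(a2), abs(a3))
    return 0.0

def tvb_minmod(a1, a2, a3, Mb, h):
    """TVB-modified minmod: leave a1 untouched when |a1| <= Mb * h^2."""
    if abs(a1) <= Mb * h**2:
        return a1
    return minmod(a1, a2, a3)
```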
The values $u_{j+\frac12}^{-}$ and $u_{j-\frac12}^{+}$ allow us to compare the interfacial values of $\rho^h_j(x)$ with its local cell averages. Then, the generalized slope limiter technique consists in replacing $\rho_j^h$ on each cell $I_j$ with $\Lambda\Pi_h$ defined by
\begin{equation*}
\Lambda\Pi_h(\rho_j^h)=\begin{cases}
\rho_j^h &\text{if } u_{j-\frac12}^{+}=\rho_j^h(x_{j-\frac12})\text{ and } u_{j+\frac12}^{-}=\rho_j^h(x_{j+\frac12}),\\
\displaystyle \bar{\rho}_j+\frac{(x-x_j)}{\Delta x/2}\operatorname*{\mathrm{mm}}(c^{(1)}_{j},\Delta_{+}\bar{\rho}_j,\Delta_{-}\bar{\rho}_{j}) &\text{otherwise.}
\end{cases}
\end{equation*}
Of course, this generalized slope limiter procedure has to be performed after each inner step of the Runge-Kutta scheme \eqref{rk3}.
\subsection{The Finite Volume WENO approach} \label{sec:fvweno1}
In this section, we solve the nonlinear conservation law \eqref{claw} by using a
high-order finite volume WENO scheme \textcolor{black}{\cite{Shu_1998,shu1988efficient}}.
Let us denote by $\bar{\rho}(x_j,t)$ the cell average of the exact solution $\rho(\cdot,t)$ in the cell $I_j$:
$$
\bar{\rho}(x_j,t):=\frac{1}{\Delta x}\int_{I_j}\rho(x,t)dx.
$$
The unknowns are here the set of all $\{\bar{\rho}_j(t)\}_{j\in\mathbb{Z}}$ which represent approximations of the exact cell averages $\bar{\rho}(x_j,t)$. Integrating \eqref{claw} over $I_j$
we obtain
\begin{equation*}
\frac{d}{dt}\bar{\rho}(x_j,t)=-\frac{1}{\Delta x}\left( g(\rho(x_{j+\frac12},t))-g(\rho(x_{j-\frac12},t))\right),\qquad \forall j \in \mathbb{Z}.
\end{equation*}
This equation is approximated by the semi-discrete conservative scheme
\begin{equation}\label{fv_semi}
\frac{d}{dt}\bar{\rho}_j(t)=-\frac{1}{\Delta x}\left( \hat{g}_{j+\frac12}-\hat{g}_{j-\frac12}\right),\qquad \forall j \in \mathbb{Z},
\end{equation}
where the numerical flux $\hat{g}_{j+\frac12}:=\hat{g}(\rho_{j+\frac12}^l,\rho_{j+\frac12}^r)$ is the Lax-Friedrichs flux and $\rho_{j+\frac12}^l$ and $\rho_{j+\frac12}^r$ are some left and right high-order WENO reconstructions of $\rho(x_{j+\frac12},t)$ obtained from the cell averages $\{\bar{\rho}_j(t)\}_{j \in \mathbb{Z}}$.
Let us focus on the definition of $\rho_{j+\frac12}^l$.
In order to obtain a $(2k-1)$th-order WENO approximation, we first compute $k$ reconstructed values
$$\hat{\rho}_{j+\frac12}^{(r)}=\sum_{i=0}^{k-1}c_{r}^i\bar{\rho}_{i-r+j},\quad r=0,\dots,k-1,$$
that correspond to $k$ possible stencils $S_r(j)=\{x_{j-r},\dots,x_{j-r+k-1}\}$ for $r=0,\dots,k-1$. The coefficients $c_{r}^i$ are chosen in such a way that each of the $k$ reconstructed values is $k$th-order accurate \cite{Shu_1998}. Then, the $(2k-1)$th-order WENO reconstruction is a convex combination of all these $k$ reconstructed values and defined by
$$
\rho_{j+\frac12}^l=\sum_{r=0}^{k-1}w_{r}\hat{\rho}_{j+\frac12}^{(r)},
$$
where the positive nonlinear weights $w_r$ satisfy $\sum_{r=0}^{k-1}w_r=1$ and are defined by
$$
w_r=\frac{\alpha_r}{ \sum_{s=0}^{k-1}\alpha_s},\quad \alpha_r=\frac{d_r}{(\epsilon+\beta_r)^2}.
$$
Here $d_r$ are the linear weights which yield the $(2k-1)$th-order accuracy, $\beta_r$ are called the ``smoothness indicators'' of the stencil $S_r(j)$, which measure the smoothness of the function $\rho$ in the stencil, and $\epsilon$ is a small parameter used to avoid dividing by zero (typically $\epsilon=10^{-6}$).
The exact form of the smoothness indicators and other details about WENO reconstructions can be found in \cite{Shu_1998}. \\
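For the third-order case $k=2$, the reconstruction $\rho_{j+\frac12}^l$ from the cell averages $\bar{\rho}_{j-1},\bar{\rho}_j,\bar{\rho}_{j+1}$ can be sketched explicitly with the standard coefficients, linear weights and smoothness indicators from \cite{Shu_1998}; the snippet below checks that the reconstruction reproduces exact interface values on linear data:

```python
import numpy as np

def weno3_left(rho_m1, rho_0, rho_p1, eps=1e-6):
    """Third-order WENO (k = 2) left reconstruction at x_{j+1/2}
    from the cell averages on {I_{j-1}, I_j, I_{j+1}}."""
    # candidate reconstructions on the two stencils
    p0 = 0.5 * rho_0 + 0.5 * rho_p1        # stencil {I_j, I_{j+1}}
    p1 = -0.5 * rho_m1 + 1.5 * rho_0       # stencil {I_{j-1}, I_j}
    # linear weights and smoothness indicators
    d0, d1 = 2.0 / 3.0, 1.0 / 3.0
    b0 = (rho_p1 - rho_0) ** 2
    b1 = (rho_0 - rho_m1) ** 2
    a0 = d0 / (eps + b0) ** 2
    a1 = d1 / (eps + b1) ** 2
    return (a0 * p0 + a1 * p1) / (a0 + a1)

# linear data: cell averages of rho(x) = x equal the centre values, so the
# reconstruction at the interface x = 0.5 (cells centred at -1, 0, 1) is exact
val = weno3_left(-1.0, 0.0, 1.0)
```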
The reconstruction of $\rho_{j-\frac12}^r$ is obtained in a mirror symmetric fashion with respect to $x_{j-\frac12}$.
\textcolor{black}{
The semi-discrete scheme \eqref{fv_semi} is then integrated in time using the (TVD) third-order Runge-Kutta scheme \eqref{rk3}, under
the CFL condition
$$\frac{\Delta t}{\Delta x}\max_\rho|g'(\rho)|\leq \mathrm{C}_{\mathrm{CFL}}<1.$$ }
\section{Construction of DG schemes for non-local problems} \label{sec:dg_nl}
We now focus on the non-local equation \eqref{nl_claw}, for which we set $R(x,t):=(\rho*\omega_\eta)(x,t)$. Requiring
$\rho^{h}(x,t)|_{I_j}=\sum_{l=0}^{k}c^{(l)}_{j}(t)v^{(l)}(\zeta_j(x))\in P^k(I_j)$ to satisfy the weak formulation of the non-local problem \eqref{nl_claw}, the coefficients $c^{(l)}_{j}(t)$ can be calculated by solving the following differential equations,
\begin{equation}\label{DG_nl}
\frac{d}{dt}c^{(l)}_{j}(t)+\frac{1}{a_{l}} \left(-\int_{I_{j}}f(\rho^h) V(R)\frac{d}{dx}v^{(l)}(\zeta_j(x))dx+\check{f}_{j+\frac12}-
(-1)^{l}\check{f}_{j-\frac12}\right)=0,
\end{equation}
where $\check{f}_{j+\frac12}$ is a
consistent approximation of $f(\rho)V(R)$ at the interface $x_{j+\frac12}$. Here, we consider again the Lax-Friedrichs numerical flux defined by
\begin{equation}\label{nlflux1}
\check{f}_{j+\frac12}=\frac12\left(f(\rho_{j+\frac12}^{h,-})V(R_{j+\frac12}^{h,-})+f(\rho_{j+\frac12}^{h,+})V(R_{j+\frac12}^{h,+})+\alpha(\rho_{j+\frac12}^{h,-}-\rho_{j+\frac12}^{h,+}) \right),
\end{equation}
with $\alpha:=\max_\rho|\partial_\rho(f(\rho)V(\rho))|$ and where $R_{j+\frac12}^{h,-}$ and $R_{j+\frac12}^{h,+}$ are the left and right approximations of $R(x,t)$ at the interface $x_{j+\frac12}$.
Since $R$ is defined by a convolution, we naturally set
$R_{j+\frac12}^{h,-}=R_{j+\frac12}^{h,+}=R(x_{j+\frac12},t)=:R_{j+\frac12}$, so that \eqref{nlflux1} can be written as
\begin{equation}\label{nlflux2}
\check{f}_{j+\frac12}:=\check{f}(\rho_{j+\frac12}^{h,-},\rho_{j+\frac12}^{h,+},R_{j+\frac12})=
\frac12\left((f(\rho_{j+\frac12}^{h,-})+f(\rho_{j+\frac12}^{h,+}))V(R_{j+\frac12})+\alpha(\rho_{j+\frac12}^{h,-}-\rho_{j+\frac12}^{h,+}) \right).
\end{equation}
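The flux above is straightforward to implement. In the sketch below, the particular choices of $f$, $V$ and $\alpha$ are illustrative assumptions, not fixed by the scheme; the snippet checks the consistency property $\check{f}(\rho,\rho,R)=f(\rho)V(R)$:

```python
def check_f(rho_m, rho_p, R, f, V, alpha):
    """Non-local Lax-Friedrichs flux:
    0.5 * ((f(rho^-) + f(rho^+)) * V(R) + alpha * (rho^- - rho^+))."""
    return 0.5 * ((f(rho_m) + f(rho_p)) * V(R) + alpha * (rho_m - rho_p))

# illustrative choices (hypothetical instance, not fixed by the scheme)
f = lambda r: r * (1.0 - r)
V = lambda w: 1.0 - w
alpha = 1.0   # assumed bound on |d/drho (f(rho) V(rho))| for rho in [0, 1]

flux_consistent = check_f(0.3, 0.3, 0.2, f, V, alpha)  # equals f(0.3) * V(0.2)
```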
Next, we propose to approximate the integral term in (\ref{DG_nl}) using the following high-order Gauss-Legendre quadrature technique,
\begin{eqnarray}\label{nl_intR}
\int_{I_{j}}f(\rho^h) V(R)\frac{d}{dx}v^{(l)}(\zeta_j(x))dx&=&\int_{-1}^1f(\rho^h\left(\frac{\Delta x}{2}y+x_j,t\right))V\left(R\left(\frac{\Delta x}{2}y+x_j,t\right)\right)(v^{(l)})'(y)dy \notag\\
&=& \sum_{e=1}^{N_G}w_ef(\rho^{h} \left(\hat{x}_e,t\right))V\left(R\left(\hat{x}_e,t\right)\right)(v^{(l)})'(y_e),
\end{eqnarray}
where we have set
$\hat{x}_e=\frac{\Delta x}{2}y_e+x_j\in I_j$, $y_e$ being the Gauss-Legendre quadrature points ensuring that the quadrature formula is exact for polynomials of degree less than or equal to $2N_G-1$. \\
It is important to notice that the DG formulation for the non-local conservation law \eqref{DG_nl} requires the computation of the extra integral terms $R_{j+\frac12}$ in \eqref{nlflux2} and $R\left(\hat{x}_e,t\right)$ in \eqref{nl_intR} for each quadrature point, which increases the computational cost of the strategy. For $\rho^{h}(x,t)|_{I_j}\in P^k(I_j)$, we can compute these terms as follows for both the LWR and sedimentation non-local models considered in this paper. \\
\ \\
{\it Non-local LWR model.} For the non-local LWR model, we impose the condition $N\Delta x=\eta$ for some $N\in\mathbb{N}$, so that we have
\begin{eqnarray*}
R_{j+\frac12}&=&\int_{x_{j+\frac12}}^{x_{j+\frac12}+\eta}\omega_\eta(y-{x_{j+\frac12}})\rho^h(y,t)dy=\sum_{i=1}^N\int_{I_{j+i}}
\omega_\eta(y-{x_{j+\frac12}})\rho^h_{j+i}(y,t)dy\\
&=&\sum_{i=1}^N \sum_{l=0}^{k}c^{(l)}_{j+i}(t)\int_{I_{j+i}}\omega_\eta(y-{x_{j+\frac12}})v^{(l)}(\zeta_{j+i}(y))dy\\
&=&\sum_{i=1}^N \sum_{l=0}^{k}c^{(l)}_{j+i}(t)\underbrace{\frac{\Delta x}{2}\int_{-1}^1\omega_\eta\left(\frac{\Delta x}{2}y+(i-\frac12)\Delta x\right)v^{(l)}(y)dy}_{\Gamma_{i,l}}=\sum_{i=1}^N \sum_{l=0}^{k}c^{(l)}_{j+i}\Gamma_{i,l},
\end{eqnarray*}
while for each quadrature point $\hat{x}_e$ we have
\begin{eqnarray*}
R\left(\hat{x}_e,t\right)&=&\int_{\hat{x}_e}^{\hat{x}_e+\eta}\omega_\eta(y-\hat{x}_e)\rho^h(y,t)dy= \underbrace{\int_{\hat{x}_e}^{x_{j+\frac12}}\omega_\eta(y-\hat{x}_e)\rho_{j}^h(y,t)dy}_{\Gamma_a}+\\
&&\sum_{i=1}^{N-1}\underbrace{\int_{I_{j+i}}\omega_\eta(y-\hat{x}_e)\rho_{j+i}^h(y,t)dy}_{\Gamma_b^i} +\underbrace{\int_{x_{j+N-\frac12}}^{\hat{x}_e+\eta}\omega_\eta(y-\hat{x}_e)\rho_{j+N}^h(y,t)dy}_{\Gamma_c}.
\end{eqnarray*}
The three integrals $\Gamma_a$, $\Gamma_b^i$ and $\Gamma_c$ can be computed with the same change of variable as before, namely
\begin{eqnarray*}
\Gamma_a&=&\frac{\Delta x}{2}\int_{y_e}^1\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)\right)\sum_{l=0}^kc^{(l)}_j(t)v^{(l)}(y)dy\\
&=&\sum_{l=0}^kc^{(l)}_j(t)\underbrace{\frac{\Delta x}{2}\int_{y_e}^1\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)\right)v^{(l)}(y)dy}_{\Gamma_{0,l}^{(e)}}=\sum_{l=0}^kc^{(l)}_j \Gamma_{0,l}^{(e)}, \\
\Gamma_b^i&=&\frac{\Delta x}{2}\int_{-1}^1\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)+i\Delta x\right)\sum_{l=0}^kc^{(l)}_{j+i}(t)v^{(l)}(y)dy\\
&=&\sum_{l=0}^kc^{(l)}_{j+i}\underbrace{\frac{\Delta x}{2}\int_{-1}^1\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)+i\Delta x\right)v^{(l)}(y)dy}_{\Gamma_{i,l}^{(e)}}=\sum_{l=0}^kc^{(l)}_{j+i} \Gamma_{i,l}^{(e)}, \\
\Gamma_c&=&\frac{\Delta x}{2}\int_{-1}^{y_e}\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)+N\Delta x\right)\sum_{l=0}^kc^{(l)}_{j+N}v^{(l)}(y)dy\\
&=&\sum_{l=0}^kc^{(l)}_{j+N}(t)\underbrace{\frac{\Delta x}{2}\int_{-1}^{y_e}\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)+N\Delta x\right)v^{(l)}(y)dy}_{\Gamma_{N,l}^{(e)}}=\sum_{l=0}^kc^{(l)}_{j+N} \Gamma_{N,l}^{(e)}.
\end{eqnarray*}
Finally we can compute $R\left(\hat{x}_e,t\right)$ as
$$R\left(\hat{x}_e,t\right)=\sum_{i=0}^N\sum_{l=0}^kc^{(l)}_{j+i} (t)\Gamma_{i,l}^{(e)},$$
in order to evaluate \eqref{nl_intR}. \\
\ \\
{\it Non-local sedimentation model.}
Considering now the non-local sedimentation model, we impose $N\Delta x=2\eta$ for some $N\in\mathbb{N}$, so that we have
$$R_{j+\frac12}= \int_{x_{j+\frac12}-2\eta}^{x_{j+\frac12}+2\eta}\omega_\eta(y-x_{j+\frac12})\rho^h(y,t)dy=\sum_{i=-N+1}^N\sum_{l=0}^kc^{(l)}_{j+i}(t)\Gamma_{i,l},$$
and for each quadrature point $\hat{x}_e$,
$$R\left(\hat{x}_e,t\right)=\int_{\hat{x}_e-2\eta}^{\hat{x}_e+2\eta}\omega_\eta(y-\hat{x}_e)\rho^h(y,t)dy=\sum_{i=-N}^N\sum_{l=0}^kc^{(l)}_{j+i}(t) \Gamma_{i,l}^{(e)},$$
with
\begin{eqnarray*}
\Gamma_{-N,l}^{(e)}&=& \frac{\Delta x}{2}\int_{y_e}^1\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)-N\Delta x\right)v^{(l)}(y)dy,\\
\Gamma_{i,l}^{(e)}&=& \frac{\Delta x}{2}\int_{-1}^1\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)+i\Delta x\right)v^{(l)}(y)dy,\\
\Gamma_{N,l}^{(e)}&=& \frac{\Delta x}{2}\int_{-1}^{y_e}\omega_\eta\left(\frac{\Delta x}{2}(y-y_e)+N\Delta x\right)v^{(l)}(y)dy.
\end{eqnarray*}
\ \\
{\bf Remark.} \textcolor{black}{In order to compute the integral terms in \eqref{DG_nl} as accurately as possible, the
integrals $R_{j+\frac12}$ and $R\left(\hat{x}_e,t\right)$ above, and in particular the coefficients $\Gamma_{i,l}$,
must be calculated exactly or using a suitable quadrature formula accurate to at least $\mathcal{O}(\Delta x ^{l+p})$, where $p$ is the polynomial degree of the kernel $\omega_{\eta}$.
It is important to notice that these coefficients can be precomputed and stored in order to save computational time.}\\
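As an illustration of this precomputation, the sketch below assembles the coefficients $\Gamma_{i,l}$ for the non-local LWR case, using the kernel $\omega_\eta(x)=3(\eta^2-x^2)/(2\eta^3)$ on $[0,\eta]$ as an illustrative choice, and checks that $\sum_{i=1}^N\Gamma_{i,0}=\int_0^\eta\omega_\eta(x)dx=1$:

```python
import numpy as np

# 5-point Gauss-Legendre rule and Legendre basis v^(0), v^(1), v^(2)
nodes, weights = np.polynomial.legendre.leggauss(5)
basis = [lambda y: np.ones_like(y),
         lambda y: y,
         lambda y: 0.5 * (3.0 * y**2 - 1.0)]

def gamma_coeffs(eta, N, omega):
    """Gamma_{i,l} = (dx/2) int_{-1}^1 omega(dx/2*y + (i - 1/2)*dx) v^(l)(y) dy
    for i = 1..N, with N*dx = eta (non-local LWR case)."""
    dx = eta / N
    G = np.zeros((N + 1, 3))            # row i holds Gamma_{i,l}, i = 1..N
    for i in range(1, N + 1):
        x = 0.5 * dx * nodes + (i - 0.5) * dx
        for l in range(3):
            G[i, l] = 0.5 * dx * np.sum(weights * omega(x) * basis[l](nodes))
    return G

eta, N = 0.1, 10
omega = lambda x: 3.0 * (eta**2 - x**2) / (2.0 * eta**3)   # illustrative kernel
G = gamma_coeffs(eta, N, omega)
mass = G[1:, 0].sum()    # sum_i Gamma_{i,0} = int_0^eta omega = 1
```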
\textcolor{black}{
Finally the semi-discrete scheme \eqref{DG_nl} can be discretized in time using the (TVD) third-order Runge-Kutta scheme \eqref{rk3}, under
the CFL condition
$$\frac{\Delta t}{\Delta x}\max_\rho|\partial_\rho (f(\rho)V(\rho))|\leq \mathrm{C}_{\mathrm{CFL}}= \frac{1}{2k+1},$$
where $k$ is the degree of the polynomial. }
\section{Construction of FV schemes for non-local conservation laws} \label{sec:fv_nl}
Let us now extend the FV-WENO strategy of Section \ref{sec:fvweno1} to the non-local case.
We first integrate \eqref{nl_claw} over $I_j$ to obtain
\begin{equation*}\label{fv_nl_int}
\frac{d}{dt}\bar{\rho}(x_j,t)=-\frac{1}{\Delta x}\left( f(\rho(x_{j+\frac12},t))V(R(x_{j+\frac12},t))-f(\rho(x_{j-\frac12},t))V(R(x_{j-\frac12},t))\right),\quad \forall j\in\mathbb{Z},
\end{equation*}
so that the semi-discrete discretization can be written as
\begin{equation}\label{fv_semi_2}
\frac{d}{dt}\bar{\rho}_j(t)=-\frac{1}{\Delta x}\left( \check{f}_{j+\frac12}-\check{f}_{j-\frac12}\right),\qquad \forall j\in\mathbb{Z},
\end{equation}
where the use of the Lax-Friedrichs numerical flux gives
\begin{equation*}
\check{f}_{j+\frac12}=\check{f}(\rho_{j+\frac12}^l,\rho_{j+\frac12}^r,R_{j+\frac12})=\frac12\left((f(\rho_{j+\frac12}^l)+f(\rho_{j+\frac12}^r))V(R_{j+\frac12})+\alpha(\rho_{j+\frac12}^l-\rho_{j+\frac12}^r) \right).
\end{equation*}
Recall that $\rho_{j+\frac12}^l$ and $\rho_{j+\frac12}^r$ are the left and right WENO high-order reconstructions at point $x_{j+\frac12}$. \\
\ \\
At this stage, it is crucial to notice that in the present FV framework, the approximate solution is piecewise constant on each cell $I_j$, so that a naive evaluation of the convolution terms may lead to a loss of high-order accuracy. Let us illustrate this. Considering for instance
the non-local LWR model and using that $\rho(x,t)$ is piecewise constant on each cell naturally leads to
\begin{eqnarray*}
R_{j+\frac12}&=&\int_{x_{j+\frac12}}^{x_{j+\frac12}+\eta}\omega_\eta(y-{x_{j+\frac12}})\rho(y,t)dy=\sum_{i=1}^N\int_{I_{j+i}}
\omega_\eta(y-{x_{j+\frac12}})\rho(y,t)dy\\
&=&\sum_{i=1}^N\bar{\rho}_{j+i}\int_{I_{j+i}}\omega_\eta(y-{x_{j+\frac12}})dy,
\end{eqnarray*}
which does not account for the high-order WENO reconstruction. In order to overcome this difficulty,
we propose to approximate the value of $\rho(x,t)$ using quadratic polynomials in each cell.
This strategy is detailed for each model in the following two subsections.
\subsection{Computation of $R_{j+\frac12}$ for the non-local LWR model}
In order to compute the integral $$
R_{j+\frac12}=\int_{x_{j+\frac12}}^{x_{j+\frac12}+\eta}\omega_\eta(y-x_{j+\frac12})\rho(y,t)dy,
$$
we propose to consider a reconstruction of $\rho(x,t)$ on $I_j$ by taking advantage of the high-order WENO reconstructions
$\rho_{j-\frac12}^r$ and $\rho_{j+\frac12}^l$ at the boundaries of $I_j$, as well as the approximation of the cell average $\bar{\rho}_j^n$. More precisely, we propose
to define a polynomial $P_j(x)$ of degree 2 on
$I_j$ by
$$
P_j(x_{j-\frac12})=\rho_{j-\frac12}^r,\quad P_j(x_{j+\frac12})=\rho_{j+\frac12}^l,\quad \frac{1}{\Delta x}\int_{I_j}P_j(x)dx=\bar{\rho}_j^n,
$$
which is very easy to handle. In particular, we have
$$
P_j (x):=a_{j,0}+a_{j,1}\left( \frac{x-x_j}{\Delta x/2} \right)+a_{j,2}\left(3 \left( \frac{x-x_j}{\Delta x/2} \right)^2-1\right)/2,\qquad x \in I_j,
$$
with
$$
a_{j,0}=\bar{\rho}_j^n,\qquad a_{j,1}=\frac12\left(\rho_{j+\frac12}^l-\rho_{j-\frac12}^r \right), \qquad a_{j,2}=\frac12\left(\rho_{j+\frac12}^l+\rho_{j-\frac12}^r \right)-\bar{\rho}_j^n.
$$
Observe that we have used the same polynomials as in the DG formulation, i.e., $P_j(x)=\sum_{l=0}^2a_{j,l}v^{(l)}(\zeta_{j}(x))$.
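A quick sanity check of the coefficients $a_{j,0}$, $a_{j,1}$, $a_{j,2}$, with arbitrary illustrative data, confirms the three defining conditions of $P_j$:

```python
import numpy as np

def quad_poly(rho_r, rho_l, rho_bar):
    """Coefficients a_{j,0..2} of P_j in the Legendre basis, from the interface
    reconstructions rho^r_{j-1/2}, rho^l_{j+1/2} and the cell average."""
    a0 = rho_bar
    a1 = 0.5 * (rho_l - rho_r)
    a2 = 0.5 * (rho_l + rho_r) - rho_bar
    return a0, a1, a2

def P(zeta, a0, a1, a2):
    """P_j as a function of the local variable zeta = (x - x_j) / (dx/2)."""
    return a0 + a1 * zeta + a2 * 0.5 * (3.0 * zeta**2 - 1.0)

# arbitrary illustrative data
a0, a1, a2 = quad_poly(0.2, 0.8, 0.45)
left, right = P(-1.0, a0, a1, a2), P(1.0, a0, a1, a2)
# cell average of P over [-1, 1] via 5-point Gauss quadrature, divided by 2
y, w = np.polynomial.legendre.leggauss(5)
avg = 0.5 * np.sum(w * P(y, a0, a1, a2))
```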
With this, $R_{j+\frac12}$ can be computed as
\begin{eqnarray}\label{lwr_Rj12}
R_{j+\frac12}&=&\sum_{i=1}^N\int_{I_{j+i}}\omega_\eta(y-{x_{j+\frac12}})P_{j+i}(y)dy=\sum_{i=1}^N\int_{I_{j+i}}\omega_\eta(y-{x_{j+\frac12}})\sum_{l=0}^2a_{j+i,l}v^{(l)}(\zeta_{j+i}(y))dy \notag\\
&=&\sum_{i=1}^N\sum_{l=0}^2a_{j+i,l}\int_{I_{j+i}}\omega_\eta(y-{x_{j+\frac12}})v^{(l)}(\zeta_{j+i}(y))dy\notag \\ \notag
&=&\sum_{i=1}^N\sum_{l=0}^2a_{j+i,l}\underbrace{\frac{\Delta x}{2}\int_{-1}^{1}\omega_\eta\left(\frac{\Delta x}{2}y+(i-\frac12)\Delta x\right)v^{(l)}(y)dy}_{{\Gamma}_{i,l}}=\sum_{i=1}^N\sum_{l=0}^2a_{j+i,l}{\Gamma}_{i,l},
\end{eqnarray}
where the coefficients ${\Gamma}_{i,l}$ are computed exactly or using a high-order quadrature approximation.
\subsection{Computation of $R_{j+\frac12}$ for non-local sedimentation model}
As far as the non-local sedimentation model \eqref{nl_claw}-\eqref{sednl} is concerned, we have
$$
R_{j+\frac12}=\int_{-2\eta}^{2\eta}\omega_\eta(y)\rho({x_{j+\frac12}}+y,t)dy=\int_{{x_{j+\frac12}}-2\eta}^{{x_{j+\frac12}}+2\eta}\omega_\eta(y-{x_{j+\frac12}})\rho(y,t)dy.
$$
Considering again the assumption $N\Delta x=2\eta$, we get
\begin{eqnarray*}\label{sed_Rj12}
R_{j+\frac12}&=&\sum_{i=-N+1}^N\int_{I_{j+i}}\omega_\eta(y-{x_{j+\frac12}})P_{j+i}(y)dy=\sum_{i=-N+1}^N\sum_{l=0}^2a_{j+i,l}\int_{I_{j+i}}\omega_\eta(y-{x_{j+\frac12}})v^{(l)}(\zeta_{j+i}(y))dy \notag \\
&=&\sum_{i=-N+1}^N\sum_{l=0}^2a_{j+i,l}\underbrace{\frac{\Delta x}{2}\int_{-1}^{1}\omega_\eta\left(\frac{\Delta x}{2}y+(i-\frac12)\Delta x\right)v^{(l)}(y)dy}_{{\Gamma}_{i,l}}=\sum_{i=-N+1}^N\sum_{l=0}^2a_{j+i,l}{\Gamma}_{i,l}.
\end{eqnarray*}
\\
To conclude this section, let us remark that
the coefficients ${\Gamma}_{i,l}$ are computed for $l=0,\dots,k$ in the DG formulation, where $k$ is the degree of the polynomials in $P^k(I_j)$, while in the
FV formulation, the coefficients ${\Gamma}_{i,l}$ are computed only for $l=0,\dots,2$ due to the quadratic reconstruction of the unknown in each cell.
\textcolor{black}{
The semi-discrete scheme \eqref{fv_semi_2} is then integrated in time using the (TVD) third-order Runge-Kutta scheme \eqref{rk3}, under
the CFL condition
$$\frac{\Delta t}{\Delta x}\max_\rho|\partial_\rho (f(\rho)V(\rho))|\leq \mathrm{C}_{\mathrm{CFL}}<1.$$}
\section{Numerical experiments} \label{sec:tests}
In this section, we propose several test cases in order to illustrate the behaviour of the RKDG and FV-WENO high-order schemes proposed in the previous Sections
\ref{sec:dg_nl} and \ref{sec:fv_nl} for the numerical approximation of the solutions of the non-local traffic flow and sedimentation models on a bounded interval $I=[0,L]$ with boundary conditions.
We consider periodic boundary conditions for the traffic flow model, i.e. $\rho(0,t)=\rho(L,t)$ for $t\geq0$, and zero-flux boundary conditions for the sedimentation model, in order to simulate a batch sedimentation process. More precisely, we assume that $\rho(x,t)=0$ for $x\leq0$ and $\rho(x,t)=1$ for $x\geq1.$
{
Given a uniform partition $\{I_{j}\}_{j=1}^{M}$ of $[0,L]$ with $\Delta x= L/M$, in order to compute the numerical fluxes $\check{f}_{j+1/2}$ for $j=0,\dots,M+1$,
we define $\rho_{j}^{n}$ in the ghost cells as follows: for the traffic model,
$$\rho_{-1}^{n}:=\rho_{M-1}^{n}, \quad \rho_{0}^{n}:=\rho_{M}^{n}, \quad \rho_{M+i}^{n}:=\rho_{i}^{n}, \qquad \text{ for } i=1,\dots,N;$$
and for the sedimentation model
$$\rho_{i}^{n}:=0, \text{ for } i=-N,\dots,0, \quad \text{ and } \quad \rho_{M+i}^{n}:=1, \qquad \text{ for } i=1,\dots,N.$$ }
A key element of this section will be the computation of the Experimental Order of Accuracy (EOA) of the proposed strategies, which is expected to coincide with the theoretical order of accuracy given by the high-order reconstructions involved in the corresponding numerical schemes. Let us begin with a description of the practical computation of the EOA. \\
Regarding the RKDG schemes, if $\rho^{\Delta x}(x,T)$ and $\rho^{\Delta x/2}(x,T)\in V_h^k$ are the solutions computed with $M$ and $2M$ mesh cells respectively, the L${}^1$-error is computed by
\begin{align*}
e(\Delta x)&=\|\rho^{\Delta x}(x,T)-\rho^{\Delta x/2}(x,T)\|_{L{}^1} \\
&=\sum_{j=1}^M \int_{I_{2j-1}} | \rho^{\Delta x}_{j}(x,T)-\rho^{\Delta x/2}_{2j-1}(x,T)|dx+
\int_{I_{2j}} | \rho^{\Delta x}_j(x,T)-\rho^{\Delta x/2}_{2j}(x,T)|dx,
\end{align*}
where the integrals are computed with a high-order quadrature formula. As far as the FV schemes are concerned, the L${}^1$-error is computed as
$$e(\Delta x)=\|\rho^{\Delta x}(x,T)-\rho^{\Delta x/2}(x,T)\|_{L{}^1}=\Delta x \sum_{j=1}^M | \rho^{\Delta x}_{j}(x_j,T)-\tilde{\rho}_{j}(x_j)|,$$
where $\tilde{\rho}_{j}(x)$ is a third-degree polynomial reconstruction of $\rho^{\Delta x/2}_{j}(x,T)$ at point $x_j$, i.e.,
$$\tilde{\rho}_{j}(x_{j}) =\frac{9}{16}\left( \rho^{\Delta x/2}_{2j}+\rho^{\Delta x/2}_{2j-1}\right) -\frac{1}{16}\left(\rho^{\Delta x/2}_{2j+1}+\rho^{\Delta x/2}_{2j-2} \right).
$$
In both cases, the EOA is naturally defined by
$$\gamma(\Delta x)=\log_2\left(e(\Delta x)/e(\Delta x/2)\right).$$
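The EOA computation itself is elementary; the sketch below applies it to synthetic errors $e(\Delta x)=C\,\Delta x^3$ (an illustrative choice) and recovers order $3$:

```python
import numpy as np

def eoa(errors):
    """Experimental order of accuracy gamma(dx) = log2(e(dx) / e(dx/2))
    from L1-errors computed on successively halved meshes."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# synthetic errors e(dx) = C * dx^3 on dx = 1/20, 1/40, ... recover order 3
dxs = 1.0 / np.array([20, 40, 80, 160])
orders = eoa(2.5 * dxs**3)
```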
In the following numerical tests, the CFL number is taken as $\mathrm{C}_{\mathrm{CFL}}=0.2, 0.1, 0.05$ for RKDG1, RKDG2 and RKDG3 schemes respectively, and $\mathrm{C}_{\mathrm{CFL}}=0.6$ for
FV-WENO3 and FV-WENO5 schemes. For RKDG3 and FV-WENO5 cases $\Delta t$ is further reduced in the accuracy tests.
\subsection{Test 1a: non-local LWR model}
We consider the Riemann problem
\begin{equation*}
\rho(x,0)= \begin{cases} 0.95 & x\in [-0.5,0.4], \\
0.05 & \hbox{otherwise},
\end{cases}
\end{equation*}
{with absorbing boundary conditions}
and compute the numerical solution of \eqref{nl_claw}-\eqref{LWRnl} at time $T=0.1$ with $\eta=0.1$ and $\omega_\eta(x)=3(\eta^2-x^2)/(2\eta^3)$.
We set $\Delta x=1/800$ and compare the numerical solutions obtained with the FV-WENO3 and FV-WENO5 schemes and with the RKDG1, RKDG2 and RKDG3 schemes, the latter using the generalized slope limiter \eqref{slopelimiter} with $M_b=35$.
The results displayed in Fig. \ref{fig_nlLWR} are compared with a reference solution computed with the FV-WENO5 scheme and $\Delta x=1/3200$. In Fig. \ref{fig_nlLWR}(b) we observe that the RKDG schemes are much more accurate than FV-WENO3 and even FV-WENO5.
\begin{figure}
\centering
\begin{tabular}{cc}
(a) & (b) \\
\includegraphics[scale=0.45]{test1a_a.pdf}&\includegraphics[scale=0.45]{test1a_b.pdf}
\end{tabular}
\caption{Test 1a. (a) Solution of the Riemann problem with $\omega_\eta(x)=3(\eta^2-x^2)/(2\eta^3)$ at $T=0.1$. We compare solutions computed with FV-WENO and
RKDG schemes using $\Delta x=1/800$. (b) Zoomed view of the left discontinuity in panel (a). The reference solution is computed with FV-WENO5 and $\Delta x=1/3200$. }
\label{fig_nlLWR}
\end{figure}
\subsection{Test 1b: non-local LWR model}
Now, we consider an initial condition with a small perturbation, $\rho_0(x)=0.35-(x-0.5)\exp{(-2000(x-0.5)^2)}$, and an increasing kernel function
$\omega_{\eta}(x)=2x/\eta^2$, with $\eta=0.05$ {and periodic boundary conditions}.
According to \cite{BG_2016, GS_2016}, the non-local LWR model is not stable with increasing kernels, in the sense that {oscillations develop in
short time}. Fig. \ref{fig_nlLWR2} displays the numerical solution with different RKDG and FV-WENO schemes with $\Delta x=1/400$ at time $T=0.3$.
The profile provided by the first order scheme proposed in \cite{BG_2016} is also included. The reference solution is computed with a FV-WENO5 scheme with $\Delta x=1/3200$. We observe that the numerical solutions obtained with the high-order schemes provide better approximations of the oscillatory shape of the solution than the first-order scheme, and that RKDG schemes give better approximations than the FV-WENO schemes.
\begin{figure}
\centering
\begin{tabular}{cc}
(a) & (b) \\
\includegraphics[scale=0.45]{test1b_a.pdf}&\includegraphics[scale=0.45]{test1b_b.pdf}\\
\end{tabular}
\caption{Test 1b. Numerical solution of the non-local LWR model with $\omega_{\eta}(x)=2x/\eta^2$, $\eta=0.05$ and $\rho_0(x)=0.35-(x-0.5)\exp{(-2000(x-0.5)^2)}$.
We compare solutions computed with the FV Lax-Friedrichs, FV-WENO and RKDG schemes, using $\Delta x=1/400$. The reference solution is computed with FV-WENO5 and $\Delta x=1/3200$. }
\label{fig_nlLWR2}
\end{figure}
\subsection{Test 2: non-local sedimentation model}
We now solve \eqref{nl_claw}-\eqref{sednl}-\eqref{ker_sed} with the piecewise constant initial datum
\begin{equation*}
\rho(x,0)= \begin{cases} 0 & x\leq 0, \\
0.5 & 0<x< 1, \\
1 & x\geq1,
\end{cases}
\end{equation*}
{with zero-flux boundary conditions} in the interval $[0,1]$, and compute the solution at time $T=1$, with parameters $\alpha=1$, $n=3$ and $a=0.025$.
We set $\Delta x=1/400$ and compute the solutions with different RKDG and FV-WENO schemes, including the
first-order Lax-Friedrichs scheme used in \cite{BBKT_2011}. The results displayed in Fig. \ref{fig_nlsed} are compared to
a reference solution computed with FV-WENO5 and $\Delta x=1/3200$. Compared to the reference solution, we observe that
RKDG1 is more accurate than FV-WENO3, while FV-WENO5 is more accurate than RKDG2 and RKDG3 (Fig. \ref{fig_nlsed}).
In particular, we observe that the numerical solutions obtained with the high-order schemes provide better approximations of the oscillatory shape of the solution than the first order scheme.
These oscillations can possibly be explained as the {\it layering phenomenon} in sedimentation \cite{siano1979layered}, which denotes a traveling staircase pattern consisting
of several distinct bands of different concentrations. In Fig. \ref{fig_nlsedhis}, where the evolution of $\rho^{h}(\cdot,t)$ is displayed for $t\in[0,5]$,
we observe that this layering phenomenon
is captured by the high-order schemes, in this case RKDG1 and FV-WENO5, but not by the Lax-Friedrichs scheme.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[scale=0.45]{test2_a.pdf}&\includegraphics[scale=0.45]{test2_b.pdf}
\end{tabular}
\caption{Test 2. Riemann problem for the non-local sedimentation model. Comparison of the numerical solutions computed with the FV Lax-Friedrichs,
FV-WENO and RKDG schemes (with the limiter function and $M_b=50$), using $\Delta x=1/400$. The reference solution is computed with FV-WENO5 with $\Delta x=1/3200$.}
\label{fig_nlsed}
\end{figure}
\begin{figure}
\begin{tabular}{ccc}
\fbox{\includegraphics[scale=0.3,bb=40 10 520 400]{test2a_a.png}}&\fbox{\includegraphics[scale=0.3,bb=40 10 520 400]{test2a_b.png}}&
\fbox{\includegraphics[scale=0.3,bb=40 10 520 400]{test2a_c.png}}
\end{tabular}
\caption{Test 2. Non-local sedimentation model. Solution history for $t\in[0,5]$ computed with LxF (left), RKDG1 (center) and FV-WENO5 (right) with $\Delta x=1/400$.
The same kind of layering phenomenon reported in \cite{siano1979layered} is observed with the high-order schemes RKDG1 and FV-WENO5, but not with the LxF scheme.}
\label{fig_nlsedhis}
\end{figure}
\subsection{Test 3: Experimental Order of Accuracy for {\it local} conservation laws}
In this subsection we compute the EOA for local conservation laws. More precisely, we first consider the
advection equation $\rho_t+\rho_x=0,$ and the initial datum $\rho_0(x)=0.5+0.4\sin(\pi x)$ for $x\in[-1,1]$ with periodic boundary conditions. We compare the numerical approximations
obtained with $\Delta x=1/M$ and $M=20,40,80,160,320,640$ at time $T=1$. RKDG schemes are used without limiting function and $L^1$-errors are collected in Table \ref{EOA_advection}.
We observe that the correct EOA is obtained for the different schemes; moreover, the $L^{1}$-errors obtained with FV-WENO3 and RKDG1 are comparable, as are the
$L^{1}$-errors for FV-WENO5 and RKDG3.
We also consider the nonlinear local LWR model $\rho_t+(\rho(1-\rho))_x=0,$ with initial datum $\rho_0(x)=0.5+0.4\sin(\pi x)$ for $x\in[-1,1]$ and periodic boundary conditions. A shock wave appears at time $T=0.3$. We compare the numerical approximations obtained with $\Delta x=1/M$ and $M=20,40,80,160,320,640$ at time $T=0.15$. RKDG schemes are used without limiters and $L^1$-errors are given in Table \ref{EOA_LWR}. We again observe that the correct EOA is obtained for the different schemes, and the $L^{1}$-errors obtained with FV-WENO3 and RKDG1 are comparable, as are the
$L^{1}$-errors for FV-WENO5 and RKDG3.
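The EOA values $\gamma(\Delta x)$ reported in the tables are obtained from ratios of successive $L^1$-errors. As a quick sketch, the RKDG $k=1$ column of the advection table can be recomputed as follows (errors copied from the table).

```python
import math

# Experimental order of accuracy from successive L1 errors:
#   gamma = log(e_i / e_{i+1}) / log(dx_i / dx_{i+1}).
dx  = [1/20, 1/40, 1/80, 1/160, 1/320]
err = [1.80e-3, 4.30e-4, 1.06e-4, 2.64e-5, 6.61e-6]   # RKDG k=1, advection test

gammas = [math.log(err[i] / err[i + 1]) / math.log(dx[i] / dx[i + 1])
          for i in range(len(err) - 1)]
print([round(g, 2) for g in gammas])   # close to the tabulated 2.06, 2.02, 2.00, 2.00
```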
\begin{table}[h!]
\centering
\begin{tabular}{ |c|c|c|c|c|c|c||c|c|c|c|}\hline
& \multicolumn{2}{|c|}{$k=1$} & \multicolumn{2}{|c|}{$k=2$} & \multicolumn{2}{|c||}{$k=3$} & \multicolumn{2}{|c|}{FV-WENO$3$}& \multicolumn{2}{|c|}{FV-WENO$5$} \\ \hline
$\tiny{1/\Delta x}$ & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$}
& $L^1$-err &\tiny{$\gamma(\Delta x)$} \\ \hline\hline
$20$ &1.80E-03 &-- &9.70E-05 & -- & 1.00E-06 & -- &3.72E-02&--&4.63E-04&-- \\
$40$ &4.30E-04 &{2.06}&1.28E-05 & 2.92 & 6.04E-08& 4.05& 1.17E-02&1.66 & 1.41E-05 & 5.03 \\
$80$ &1.06E-04 & 2.02 & 1.63E-06 & 2.96 & 3.92E-09& 3.94 & 2.85E-03& 2.04& 4.33E-07& 5.02 \\
$160$&2.64E-05 & 2.00 & 2.06E-07 & 2.98 & 2.58E-10& 3.92& 4.24E-04& 2.74& 1.35E-08 & 5.00 \\
$320$ &6.61E-06 & 2.00 &2.59E-08 & 2.99&1.51E-11&3.32&3.06E-05 & 3.79& 1.15E-11& 5.20\\ \hline
\end{tabular}
\caption{Test 3: Advection equation. Experimental order of accuracy for FV-WENO and RKDG schemes without limiter
at $T=1$ with initial condition $\rho_0(x)=0.5+0.4\sin(\pi x)$ on $x\in[-1,1]$. }
\label{EOA_advection}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{ |c|c|c|c|c|c|c||c|c|c|c|}\hline
& \multicolumn{2}{|c|}{$k=1$} & \multicolumn{2}{|c|}{$k=2$} & \multicolumn{2}{|c||}{$k=3$} & \multicolumn{2}{|c|}{WENO$3$}& \multicolumn{2}{|c|}{WENO$5$} \\ \hline
$\tiny{1/\Delta x}$ & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$}
& $L^1$-err &\tiny{$\gamma(\Delta x)$} \\ \hline\hline
$20$ &2.03E-03 & -- & 1.88E-04 &-- &5.03E-06 &--&6.92E-03&--&2.80E-04&-- \\
$40$ & 4.98E-04 & 2.02 & 3.06E-05& 2.62 & 2.81E-07 & 4.16& 2.37E-03&1.77&1.41E-05 & 4.31 \\
$80$ & 1.23E-04 & 2.01 & 4.71E-06& 2.69 & 1.72E-08 & 4.03 & 5.95E-04&1.99 &6.14E-07&4.52 \\
$160$ &3.08E-05 &2.00 & 7.11E-07 & 2.72 & 1.05E-09& 4.02& 8.20E-05& 2.86& 2.62E-08& 4.55 \\
$320$ &7.71E-06 &2.00 &1.06E-07&2.74&6.62E-11&3.99 & 5.47E-06&3.90 & 8.61E-10& 4.92 \\\hline
\end{tabular}
\caption{Test 3: Classical LWR equation. Experimental order of accuracy for FV-WENO and RKDG schemes without limiter
at $T=0.15$ with initial condition $\rho_0(x)=0.5+0.4\sin(\pi x)$ on $x\in[-1,1]$. }
\label{EOA_LWR}
\end{table}
\subsection{Test 4: Experimental Order of Accuracy for the {\it non-local} problems}
We now consider the non-local LWR and sedimentation models. Considering the non-local LWR model, we compute the solution of \eqref{nl_claw}-\eqref{LWRnl} with initial data $\rho_0(x)=0.5+0.4\sin(\pi x)$ for $x\in[-1,1]$, { periodic boundary conditions}, with $\eta=0.1$ and $\Delta x=1/M$ and $M=80,160,320,640,1280$ at final time $T=0.15$. The results are given for different kernel functions $w_\eta(x)$ in Table \ref{EOA_nl_LWR}. For RKDG schemes, we obtain the correct EOA. For the FV-WENO schemes, the EOA is also correct thanks to the in-cell quadratic reconstructions used to
compute the non-local terms. \\
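A minimal sketch of such a cell-average-preserving quadratic reconstruction is given below; the three-cell stencil and the periodic wrap-around are our own illustrative choices, used here only to show how the in-cell polynomial is built and that it is pointwise third-order accurate.

```python
import numpy as np

# Cell-average-preserving quadratic reconstruction: one parabola
#   p_j(s) = c0 + c1*s + c2*s^2,  s = x - x_j,
# per cell, built from the averages of cells j-1, j, j+1 (periodic grid).

def quad_reconstruct(avg, dx):
    am, ap = np.roll(avg, 1), np.roll(avg, -1)
    d2 = ap - 2.0 * avg + am
    c0 = avg - d2 / 24.0           # correction so the average of p_j equals avg[j]
    c1 = (ap - am) / (2.0 * dx)
    c2 = d2 / (2.0 * dx ** 2)
    return c0, c1, c2

# Pointwise third-order accuracy on a smooth profile:
N = 80
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
avg = np.sin(2 * np.pi * x) * np.sin(np.pi * dx) / (np.pi * dx)  # exact averages of sin(2 pi x)
c0, c1, c2 = quad_reconstruct(avg, dx)
s = dx / 2
err = np.max(np.abs(c0 + c1 * s + c2 * s ** 2 - np.sin(2 * np.pi * (x + s))))
# err decreases like O(dx^3) under grid refinement
```

The convolution term is then evaluated by integrating the kernel against these in-cell parabolas, which is what restores the high-order accuracy of the FV-WENO schemes.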
For the non-local sedimentation model, we compute the solution of \eqref{nl_claw}-\eqref{sednl}-\eqref{ker_sed} with initial data $\rho_0(x)=0.8\sin(\pi x)^{10}$ for $x\in[0,1]$ with $\eta=0.025$, $\alpha=1$, $V(\rho)=(1-\rho)^3$ and $\Delta x=1/M$ with $M=100,200,400,800,1600,3200$ at final time $T=0.04$. The results are given in Table \ref{EOA_nl_sed}.
\begin{table}
\centering
\begin{tabular}{ |c||c|c|c|c|c|c|c| }\hline
& & \multicolumn{2}{|c|}{$w_\eta(x)=1/\eta$} & \multicolumn{2}{|c|}{$w_\eta(x)=2(\eta-x)/\eta^2$} & \multicolumn{2}{|c|}{$w_\eta(x)=3(\eta^2-x^2)/(2\eta^3)$} \\ \hline
&$1/\Delta x$ & $L^1-$error &$\gamma(\Delta x)$ & $L^1-$error &$\gamma(\Delta x)$ & $L^1-$error &$\gamma(\Delta x)$ \\ \hline\hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{ FVWENO3}}&$40$ & 1.49E-03 & -- & 1.64E-03 & -- & 1.78E-03 & -- \\
&$80$ & 3.27E-04 & 2.18 &3.39E-04 & 2.27 & 3.53E-04 & 2.33 \\
&$160$ & 4.64E-05 & 2.81 & 4.63E-05 & 2.87 &4.69E-05 & 2.91 \\
&$320$ & 3.37E-06 & 3.78 & 3.44E-06 & 3.74 & 3.57E-06 & 3.71 \\
&$640$ & 2.29E-07 & 3.87 & 2.41E-07 & 3.83 & 2.50E-07 & 3.83 \\ \hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{FVWENO5}}&$40$ &7.54E-06 & -- & 8.65E-06 & -- & 9.69E-06 & -- \\
& $80$ & 3.36E-07 &4.48 &3.75E-07 & 4.52 & 4.17E-07 & 4.53\\
& $160$ &1.21E-08 &4.79 &1.44E-08 &4.70 & 1.64E-08 & 4.66\\
& $320$ &2.80E-10 & 5.43 & 4.52E-10 & 4.99 & 5.85E-10 & 4.81 \\
& $640$ &6.71E-12 & 5.38 &1.85E-11 & 4.61 & 2.70E-11 & 4.43 \\ \hline\hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{RKDG $k=1$}}&$40$ & 4.72E-04 & --& 4.75E-04 & --& 4.76E-04 & --\\
& $80$ &1.17E-04 & 2.00 & 1.18E-04 & 1.99 & 1.19E-04 &1.99 \\
& $160$ & 2.94E-05 & 2.00 & 2.97E-05 & 1.99 & 2.99E-05 &1.99 \\
& $320$ & 7.36E-06 & 1.99 & 7.45E-06 & 1.99 & 7.49E-06 & 1.99 \\
& $640$ & 1.84E-06 & 1.99 & 1.86E-06 & 1.99 & 1.87E-06 & 1.99 \\ \hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{RKDG $k=2$}}&$40$ & 1.81E-05 & --& 1.21E-05 & --& 1.15E-05 & --\\
& $80$ & 2.63E-06 & 2.78 & 1.77E-06 & 2.77 & 1.65E-06 & 2.80\\
& $160$ & 3.70E-07 & 2.82 & 2.54E-07 & 2.79 & 2.40E-07 & 2.78 \\
& $320$ & 3.96E-08 & 2.89 & 3.58E-08 & 2.82 & 3.43E-08 & 2.80\\
& $640$ & 5.20E-09 & 2.93 & 4.03E-09 & 2.86 & 4.81E-09 &2.83 \\ \hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{RKDG $k=3$}}&$40$ &1.93E-07 & --&1.86E-07 & --& 1.68E-07 & --\\
& $80$ &1.21E-08 & 3.99 &1.20E-08 &3.95 &1.05E-08 & 4.00\\
& $160$ &7.55E-10 & 4.00 &7.65E-10 &3.97 &6.65E-10 & 3.98\\
& $320$ &4.87E-11 & 3.95 & 4.97E-11 & 3.94 & 4.35E-11 &3.93 \\
& $640$ & 3.41E-12 & 3.83 &3.51E-12 & 3.82 & 3.15E-12 & 3.78 \\ \hline
\end{tabular}
\caption{Test 4: Non-local LWR model. Experimental order of accuracy. Initial condition $\rho_0(x)=0.5+0.4\sin(\pi x)$, $\eta=0.1$, numerical solution at $T=0.15$ for
FV-WENO and RKDG schemes without generalized slope limiter.}
\label{EOA_nl_LWR}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{ |c|c|c|c|c|c|c||c|c|c|c|}\hline
& \multicolumn{2}{|c|}{$k=1$} & \multicolumn{2}{|c|}{$k=2$} & \multicolumn{2}{|c||}{$k=3$} & \multicolumn{2}{|c|}{WENO$3$}& \multicolumn{2}{|c|}{WENO$5$} \\ \hline
$\tiny{1/\Delta x}$ & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$} & $L^1$-err &\tiny{$\gamma(\Delta x)$}
& $L^1$-err &\tiny{$\gamma(\Delta x)$} \\ \hline\hline
$100$ & 7.98E-05 & -- & 2.90E-06 & --&4.25E-07 &-- & 2.72E-04 & -- & 2.65E-06 & -- \\
$200$ &1.98E-05 &2.0 & 4.64E-07 & 2.6 & 3.95E-08 &3.5 &6.02E-05 &2.1 & 1.01E-07 & 4.7 \\
$400$ & 4.95E-06& 2.0 & 7.75E-08 & 2.6 & 3.37E-09 &3.6 & 7.87E-06 &2.9 & 3.22E-09 & 4.9 \\
$800$ & 1.23E-06 & 2.0 & 1.23E-08 & 2.6 & 2.47E-10 &3.8 &5.50E-07 &3.8 &1.11E-10 & 4.8 \\
$1600$& 3.09E-07 & 2.0 & 2.02E-09 & 2.6 & 1.22E-11 & 4.2 & 4.64E-08 &3.7 &4.02E-12 & 4.7 \\\hline
\end{tabular}
\caption{Test 4: Non-local sedimentation problem. Initial condition $\rho_0(x)=0.8\sin(\pi x)^{10}$.
Experimental order of accuracy at $T=0.04$ with $f(\rho)=\rho(1-\rho)$, $V(\rho)=(1-\rho)^3$, $\omega_\eta(x):=\eta^{-1}K(\eta^{-1}x)$ with $\eta=0.025$.}
\label{EOA_nl_sed}
\end{table}
\section*{Conclusion}
In this paper we developed high-order numerical approximations of the solutions of non-local conservation laws in one space dimension, motivated by applications to traffic flow and sedimentation models.
We propose Discontinuous Galerkin (DG) schemes, which can be applied in a natural way, and Finite Volume WENO (FV-WENO) schemes, in which quadratic polynomial reconstructions in each cell are used to evaluate the convolution term in order to obtain high-order accuracy. The numerical solutions obtained with the high-order schemes provide better approximations of the oscillatory shape of the solutions
than the first-order schemes. We also remark that DG schemes are more accurate but more expensive, while FV-WENO schemes are less accurate but also less expensive. The present work thus establishes a preliminary basis for a deeper study and an optimized implementation of the methods, so that efficiency comparisons could then be considered.
\section*{Acknowledgements}
This research was conducted while
LMV was visiting Inria Sophia Antipolis M\'editerran\'ee and the Laboratoire de Math\'ematiques de Versailles,
with the support of Fondecyt-Chile project 11140708 and ``Poste Rouge'' 2016 of CNRS -France.
The authors thank R\'egis Duvigneau for useful discussions and advice.
\section{Introduction}
Consider a periodic array of identical particles at low temperature confined to a one-dimensional ring by means of {a particular} trapping potential. Then, making such a system rotate at a constant velocity, i.e. uniformly, is a trivial exercise. What we mean by ``trivial'' here is that an observer at rest in the laboratory frame cannot detect that the device is rotating: the system is invariant under a rotation of reference frames. Put in a more elegant way, the gauge invariance associated with rotation is preserved. Obviously, if this gauge invariance is in some way broken, then the above is no longer true and rotation acquires a physical, in principle measurable, element or, in other words, the observer at rest in the laboratory frame can detect rotation. It is nontrivial, in fact interesting, to build up models where this symmetry can be broken; however, in $1+1$ dimensions the Coleman-Hohenberg-Mermin-Wagner theorem prevents this from happening dynamically \cite{Mermin:1966,Hohenberg:1968,Coleman:1973}. The simplest and most natural way to break this gauge invariance is to do this \textit{classically} by enforcing non-trivial boundary conditions at one point along the ring, thus breaking the periodicity of the array: this is equivalent to an explicit symmetry breaking. A simple way to achieve this is by placing an impurity somewhere along the ring, thus breaking the uniformity of the array. A pictorial explanation of this is given in Fig.~\ref{figure}.
The ``device'' we have described above, whether rotating or static, is susceptible {to} a Casimir force, arising from the deformations of its (quantum) vacuum due to the compactness of the topology of the set-up, \textit{i.e.} the boundary conditions \cite{Milton:2001,Bordag:2009}. If the ring is static ({\textit{i.e.},} non rotating) and periodic ({\textit{i.e.},} periodic boundary conditions are imposed at \textit{any} one point along the ring), then quantum fluctuations are massless ({\textit{i.e.},} long-ranged) and induce a Casimir force that scales with the inverse size of the ring. Then, the arguments given in the previous paragraph imply that making such a periodic ring rotate should not change the Casimir force. However, if we change the boundary conditions into non-periodic ones, something interesting happens: rotation becomes ``physical'', quantum fluctuations have to obey nontrivial boundary conditions, and as a result the Casimir force should respond to variations in angular velocity.
A computation of the Casimir energy for a non-interacting scalar field theory can be found in Refs.~\cite{Chernodub:2012em,Schaden:2012}, while a computation of the Casimir energy for a scalar field theory with quartic interactions can be found in Ref.~\cite{Bordag:2021}. A calculation of the one-loop effective action for a non-relativistic nonlinear sigma model with rotation can be found in Ref.~\cite{Corradini:2021yha}. Here, we consider the simultaneous effect of both rotation and interactions.
The paper is organized as follows. After introducing the notation by illustrating the calculation of the Casimir energy for a real scalar field, we proceed by reviewing the free-field rotating case and illustrate an unconventional and computationally convenient way to calculate the Casimir energy. We then move on to the interacting, rotating case and discuss how both factors alter rather nontrivially the Casimir energy.
\begin{figure}
\begin{center}
\includegraphics[scale=0.25,clip]{figure_cas_rot_PNG.png}
\end{center}
\caption{Both figures above represent a ring rotating with an angular velocity $\Omega$. On the left, periodic boundary conditions are imposed at any one point, and the two observers (represented by the two reference frames $O$ and $O'$, the latter rotating with the ring) cannot detect that the ring is rotating. The ring on the right has an impurity (represented by a black dot) that breaks the rotational symmetry. In this second case, for the observer $O'$ at rest with the ring (i.e., rotating with the ring) the position of the impurity would not change (i.e. the boundary conditions are time-independent), while for the \textit{laboratory} observer $O$, rotation is evident and the boundary conditions imposed at the impurity are time-dependent.}
\label{figure}
\end{figure}
\section{Free case}
To introduce our notation, let us consider the case of a non-interacting {single}-component, real scalar field, whose Lagrangian density, in absence of rotation, is
\begin{eqnarray}
\mathcal{L}
= {1\over 2} \left[ \left({\partial \phi\over \partial \hat t}\right)^2 - {1\over R^2}\left({\partial \phi\over \partial \hat \varphi}\right)^2\right].
\end{eqnarray}
$R$ is the radius of the ring and $\hat \varphi \in \left[0,2\pi\right)$. In the first part of this section we simply repeat the calculation of Refs.~\cite{Chernodub:2012em,Schaden:2012}; in the second part of this section, we carry out the calculation of the vacuum energy in a simpler way. The field $\phi$ is assumed to satisfy certain boundary conditions at the end-points, $\hat\varphi=0,~2\pi$. Taking for concreteness Dirichlet boundary conditions, we have
\begin{eqnarray}
\phi(\hat t,\hat \varphi)\Big|_{\hat \varphi=0}
=
\phi(\hat t,\hat \varphi)\Big|_{\hat \varphi=2\pi}
=0.
\label{DirichletBC}
\end{eqnarray}
In this case the system is static and a potential barrier at $\hat \varphi=0=2\pi$ enforces the above Dirichlet boundary conditions. Changing the potential barrier will change the boundary conditions. The Casimir energy is defined as
\begin{eqnarray}
E = \int_{0}^{2\pi R} dx \rho(x),
\end{eqnarray}
where $\rho(x) = \langle T_{00} (x)\rangle$ is the energy density component of the energy momentum tensor in the vacuum state.
The expression above can be written as the \textit{regularized} sum over the zero-point energy levels $E_n$ of the fluctuations,
\begin{eqnarray}
E = \sum_{n} {}^{'} E_n,
\end{eqnarray}
which defines the Casimir energy. The ``prime'' in the sum is a reminder that the summation is divergent and requires regularization. Throughout our paper we adopt zeta-function regularization \cite{Toms:2012}. A straightforward calculation of the Casimir energy gives \cite{Bordag:2009}:
\begin{eqnarray}
E = -{\pi \over 24 L},
\label{Ef}
\end{eqnarray}
showing that the energy scales as the inverse size of the ring $L =2 \pi R$. This is the one-dimensional scalar equivalent of the usual electromagnetic Casimir energy; see \textit{e.g.} Ref.~\cite{Milton:2001}. The numerical coefficient and the sign in (\ref{Ef}) depend on the boundary conditions, on the features of the vacuum fluctuations (\textit{e.g.}, spin) and of the background (\textit{e.g.}, an external potential or a non-trivial background geometry in more than one dimension). In the present case the Casimir force is attractive.
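For later comparison, the elementary steps behind (\ref{Ef}) are: the Dirichlet eigenfrequencies on an interval of length $L=2\pi R$ are $\omega_n = \pi n / L$, and the zeta-regularized half-sum of the zero-point energies gives

```latex
E = \frac{1}{2}\sum_{n=1}^{\infty}\omega_n
  = \frac{\pi}{2L}\sum_{n=1}^{\infty} n
  \;\longrightarrow\; \frac{\pi}{2L}\,\zeta(-1)
  = -\frac{\pi}{24\,L}.
```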
\section{Spinning the ring}
In this section we describe what happens to the Casimir energy when the ring is rotating. In the first part of this section we review some known results, Refs.~\cite{Chernodub:2012em,Schaden:2012}, and compute the Casimir energy using the standard way of summing over the eigenvalues and regularizing the sum. In the second part of the section we show how it is possible to obtain the same result for the Casimir energy {in a different way} using a replica trick applied to functional determinants.
Following the standard approach, we first pass from the laboratory frame (where the boundary conditions are time-dependent) to a co-rotating frame (where the boundary conditions are simply (\ref{DirichletBC})) by performing the following coordinate transformation,
\begin{eqnarray}
t = \hat {t},~~~ \varphi =\hat{\varphi} + \Omega t.
\end{eqnarray}
Under this transformation the derivatives become
\begin{eqnarray}
{\partial \over \partial t} &\to& {\partial \over \partial \hat t} - \Omega {\partial \over \partial \hat\varphi}, \nonumber\\
{\partial \over \partial \varphi} &\to& {\partial \over \partial \hat \varphi}, \nonumber
\end{eqnarray}
that leads to the following Lagrangian density
\begin{eqnarray}
\mathcal{L} =
{1\over 2} \left[
\left(
\left(
{\partial \phi\over \partial t} +\Omega {\partial \phi \over \partial \varphi}
\right)^2
- {1\over R^2}\left({\partial \phi\over \partial \varphi} \right)^2
\right)
\right].
\label{lagr}
\end{eqnarray}
The equation of motion following from the above Lagrangian density is
\begin{eqnarray}
0 = \left(
\left({\partial \over \partial t} -\Omega {\partial \over \partial \varphi}\right)^2
-{1\over R^2}{\partial^2 \over \partial \varphi^2}
\right) \phi .
\label{eq6}
\end{eqnarray}
Imposing Dirichlet boundary conditions means
\begin{eqnarray}
\phi \left(t, \varphi\right) \Big|_{\varphi = 0, 2\pi}= 0.
\label{eq7}
\end{eqnarray}
Notice that the above boundary conditions are time independent for a co-rotating observer.
First, {we describe} the ``canonical'' method.
Assuming that any solution to (\ref{eq6}) can be written as
\begin{eqnarray}
\phi(t, \varphi) = \sum_n a_n e^{-i \omega_n t} f_n(\varphi) + a^\dagger_n e^{i \omega_n t} f^*_n(\varphi),
\label{dec_norm_mods}
\end{eqnarray}
with $\omega_n \geq 0$,
and substituting the above expression in the original equation for $\phi$, we obtain
\begin{eqnarray}
0 &=&
\sum_n \omega_n^2 e^{-i \omega_n t} f_n(\varphi)
- \left(\Omega^2-R^{-2}\right) \sum_n e^{-i \omega_n t} f''_n(\varphi) \nonumber \\
&&- 2 i \Omega \sum_n \omega_n e^{-i \omega_n t} f'_n(\varphi)
\end{eqnarray}
The above assumption is verified if the modes $f_n(\varphi)$ are a complete and orthogonal set of solutions satisfying {the} equation
\begin{eqnarray}\label{eqn:fn}
0 =
\left(1 - \beta^2\right) f''_n(\varphi)
- 2 i \beta \tilde\omega_n f'_n(\varphi)
+\tilde\omega_n^2 f_n(\varphi).
\label{eq12}
\end{eqnarray}
We have defined, for brevity of notation,
\begin{eqnarray}
\beta^2 &=& \Omega^2 R^2,\nonumber\\
\tilde\omega_n &=& R\omega_n.
\end{eqnarray}
Equation \eqref{eqn:fn} can be solved exactly and after imposing the boundary conditions (\ref{eq7}) on the general solutions, one finds through simple algebra that the following quantization condition holds
\begin{eqnarray}
\sin^2\left(
{2 \pi \tilde \omega_n\over 1 - \beta^2}
\right) = 0,
\end{eqnarray}
which gives the following spectrum of the quantum fluctuations
\begin{eqnarray}
\omega_n ={n\over 2R }\left(1 - \beta^2\right),
\label{omega0n}
\end{eqnarray}
with $n \in \mathbb{N}$. The corresponding eigenfunctions can be written as follows
\begin{eqnarray}
\phi_n(t,\varphi) = {1\over \sqrt{\pi R}} e^{-i \omega_n t} e^{\imath \varphi {\beta \omega_n R \over 1- \beta^2}} \sin\left({n \varphi \over 2}\right),
\end{eqnarray}
with the pre-factor coming from the requirement that the modes are normalized.
The above solutions for the modes $\phi_n(t, \varphi)$ correspond to the modes in the co-rotating frame. To go back to the stationary-laboratory frame, we can perform the inverse coordinate transformation,
\begin{eqnarray}
t \to t, \qquad \varphi \to \varphi + \Omega t \nonumber
\end{eqnarray}
to get
\begin{eqnarray}
\phi_n(t,\varphi) = {1\over \sqrt{\pi R}} e^{-i \omega_n t} e^{\imath \left[\varphi + \Omega t\right]_{2\pi} {\beta \omega_n R \over 1- \beta^2}} \sin\left({n \over 2} \left[\varphi + \Omega t\right]_{2\pi}\right),\nonumber
\end{eqnarray}
with $\left[ u \right]_{2\pi} \equiv u \left[\mbox{mod($2\pi$)}\right]$, thus explicitly indicating that the solutions have a $2\pi$ periodicity. These solutions and the method we have described are those of Refs.~\cite{Chernodub:2012em,Schaden:2012}.
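As a consistency check (ours, carried out with {\tt sympy}), one can verify symbolically that the modes above satisfy the mode equation (\ref{eq12}) with the spectrum (\ref{omega0n}):

```python
import sympy as sp

# Symbolic check that the co-rotating modes solve
#   (1 - beta^2) f'' - 2 i beta w f' + w^2 f = 0
# with the dimensionless spectrum w = R*omega_n = n (1 - beta^2) / 2.
phi = sp.symbols('varphi', real=True)
n = sp.symbols('n', positive=True, integer=True)
beta = sp.symbols('beta', positive=True)

w = n * (1 - beta**2) / 2
f = sp.exp(sp.I * phi * beta * w / (1 - beta**2)) * sp.sin(n * phi / 2)

residual = (1 - beta**2) * sp.diff(f, phi, 2) \
           - 2 * sp.I * beta * w * sp.diff(f, phi) + w**2 * f
print(sp.simplify(sp.expand(residual)))   # vanishes identically
```

The Dirichlet conditions are also manifest: $f$ vanishes at $\varphi=0$ and, since $\sin(n\pi)=0$, at $\varphi=2\pi$.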
A direct way to compute the Casimir energy is to perform the regularized sum over the eigenvalues. It is straightforward to write
\begin{eqnarray}
E_r &=& {1\over 2} \lim_{s \to -1}\sum_{n=1}^\infty \left({n\over 2R }\left(1 - \beta^2\right)\right)^{-s} \\
&=& -{1\over 2\times 12\times 2 R }\left(1 - \beta^2\right),
\end{eqnarray}
where the factor $1/12$ comes from the summation over $n$, yielding $\zeta(-1)=-1/12$, with $\zeta(s)$ the Riemann zeta function.
As clearly discussed in Refs.~\cite{Chernodub:2012em,Schaden:2012}, the Casimir energy $E_s$ in the stationary-laboratory frame is related to the Casimir energy $E_r$ in the rotating frame by the following relation
\begin{eqnarray}
E_s - E_r = \Omega L,
\end{eqnarray}
from which, by taking the inverse Legendre transform of $E_r$, one can obtain the angular momentum dependence of the ground state energy:
\begin{eqnarray}\label{eqn:am}
L &=& -{\partial E_r \over \partial \Omega}
=-{1\over 24} \Omega R.
\end{eqnarray}
It follows that
\begin{eqnarray}
E_s= -{1\over 48 R}\left(1 + \beta^2\right).
\label{Esd}
\end{eqnarray}
As noted in Ref.~\cite{Schaden:2012}, the above quantity is the quantum vacuum energy and it should be augmented by a classical contribution proportional to the moment of inertia, $I$, yielding the total energy to be
\begin{eqnarray}
E_s= -{1\over 48 R}\left(1 + \beta^2\right) +{1\over 2} I \Omega^2.
\end{eqnarray}
While Ref.~\cite{Chernodub:2012em} had initially and correctly argued that the vacuum energy lowers the moment of inertia of such a system, Ref.~\cite{Schaden:2012} later argued that this never happens, at least within the range of validity of a semiclassical treatment.
\section{Functional determinants and the replica trick}
Before including interactions, we will illustrate an alternative way to compute the Casimir energy for the free case with almost no calculation by using a replica trick to obtain directly the energy $E_s$ in the laboratory frame without passing through the solution of the mode equation. This is interesting for two reasons. First, it is simple. Second, it offers at least in some cases a simple way to compute the quantum vacuum energy in stationary (although simple) backgrounds, which is usually complicated due to the mixing of space and time components in the metric tensor that yields nonlinear eigenvalue problems \cite{Fursaev,Fursaev:2011}.
The method we outline below is valid in setups more general than what we discuss here, assuming that there are no parity breaking interactions. The replica trick here stems from the fact that the Casimir energy of the rotating system must be an even function of the rotational velocity, i.e. it does not change if we invert the direction of rotation. This simple physical consideration allows us to greatly simplify the calculation.
Rather {than} following the canonical approach of summing over the zero point energies, we pass through the Coleman-Weinberg effective potential from which one can extract the Casimir energy.
The Coleman-Weinberg effective potential can be obtained starting from the following functional determinant
\begin{eqnarray}
\delta \Gamma = \log \det \left(\left({\partial\over \partial t} -\Omega {\partial\over \partial \varphi}\right)^2 -{1\over R^2}{\partial^2\over \partial \varphi^2}\right).
\end{eqnarray}
The problem with the expression above is that the presence of first order derivatives in time makes the eigenvalue problem {nonlinear}, Ref.~\cite{Fursaev}. Here we will bypass the nonlinearities using the following replica trick. First of all, we can express the determinant as
\begin{eqnarray}
\delta \Gamma = \log A = \log \det \left(L_+ \times L_-\right)
\end{eqnarray}
having defined
\begin{eqnarray}
L_\pm \left(\Omega\right) = {\partial\over \partial t} -\alpha_\pm {\partial\over \partial \varphi}
\end{eqnarray}
with
\begin{eqnarray}
\alpha_\pm = \Omega \pm {1\over R}
\end{eqnarray}
This gives
\begin{eqnarray}\label{eqn:det}
A = \det \left(L_+ \left(\Omega\right) \right)\times \det \left(L_- \left(\Omega\right)\right).
\end{eqnarray}
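That this factorization reproduces the original operator is checked by expanding the product, using $\alpha_+ + \alpha_- = 2\Omega$ and $\alpha_+\alpha_- = \Omega^2 - R^{-2}$:

```latex
L_+ L_- = {\partial^2\over \partial t^2}
- \left(\alpha_+ + \alpha_-\right){\partial^2\over \partial t\,\partial\varphi}
+ \alpha_+\alpha_-\,{\partial^2\over \partial \varphi^2}
= \left({\partial\over \partial t} - \Omega{\partial\over \partial \varphi}\right)^2
- {1\over R^2}{\partial^2\over \partial \varphi^2}.
```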
The replica trick relies on the {assumption} that inverting the sense of rotation $\Omega \to -\Omega$, does not change the effective action. This is a \textit{physical} assumption, which is valid in our case. If parity breaking terms are present, one can modify the trick by adding up a residue. In our case, {we can express Eq.~\eqref{eqn:det} as}
\begin{eqnarray}
A &=&
\left(
\det \left(L_+ \left(\Omega\right) L_- \left(-\Omega\right) \right) \times
\det\left(L_- \left(\Omega\right) L_+ \left(-\Omega\right) \right)
\right)^{1/2} \nonumber\\
&=&
\det
\left(
\left({\partial^2\over \partial t^2} - {1\over \rho_+^2} {\partial^2\over \partial \varphi^2}\right)
\left({\partial^2\over \partial t^2} - {1\over \rho_-^2} {\partial^2\over \partial \varphi^2}\right)
\right)^{1/2}
\label{prodea}
\end{eqnarray}
where
\begin{eqnarray}
\rho_\pm^{-1} = \left|\Omega \pm {1\over R}\right|.
\end{eqnarray}
From the above relation we can factorize the effective action as follows:
\begin{eqnarray}
\delta \Gamma &=&
{1\over 2} \log \det \left({\partial^2\over \partial t^2} - {1\over \rho_+^2} {\partial^2\over \partial \varphi^2}\right)\nonumber\\
&&+
{1\over 2} \log \det \left({\partial^2\over \partial t^2} - {1\over \rho_-^2} {\partial^2\over \partial \varphi^2}\right).
\label{replea}
\end{eqnarray}
Notice that we have commuted the operators $L_+$ and $L_-$. This practice requires some justification since a non-commutative or \textit{Wodzicki residue} may appear under some circumstances in the trace functionals of formal pseudo-differential operators. In the present case it makes no difference, but in more general situations care should be paid about this point \cite{Elizalde:1997nd,Dowker:1998tb,McKenzie-Smith:1998ocy}.
The replica trick has allowed us to express the effective action for the rotating system in terms of the effective action for an analogous system without rotation but with an effective radius,
\begin{eqnarray}
\delta \Gamma_\pm &=&
{1\over 2} \log \det \left({\partial^2\over \partial t^2}
- {1\over \rho_\pm^2} {\partial^2\over \partial \varphi^2}\right).
\end{eqnarray}
The above is nothing but the integral over the volume of the Coleman-Weinberg potential for a free theory; combining the $\delta \Gamma_\pm$ terms as in Eq.~(\ref{replea}) should return the Casimir energy after renormalization. {We outline the computation of the $\delta \Gamma_\pm$ terms for completeness}. Setting
\begin{eqnarray}
x_\pm = \rho_\pm \varphi
\end{eqnarray}
we have
\begin{eqnarray}
\delta \Gamma_\pm=
{1\over 2} \log \det \left({\partial^2\over \partial t^2} - {\partial^2\over \partial x_\pm^2}\right),
\end{eqnarray}
with boundary conditions imposed at $x_\pm=0$ and $x_\pm=\ell_\pm$, where $\ell_\pm \equiv 2\pi \rho_\pm$ is the effective length. For Dirichlet boundary conditions, the eigenvalues of the above differential operators are
\begin{eqnarray}
E^{(\pm)}_n = \pi n/\ell_\pm.
\end{eqnarray}
Using zeta regularization, we get for the effective potential $V_\pm$ (i.e., the effective action divided by the volume)
\begin{eqnarray}
V_\pm = {1\over 2 \ell_\pm} \sum_n E^{(\pm)}_n.
\end{eqnarray}
{One can then show that}
\begin{eqnarray}
V_\pm = -{1\over 96 \pi \rho_\pm^2}.
\end{eqnarray}
The contribution to the Casimir energy from the $+$ and $-$ terms summed up gives
\begin{eqnarray}
E_s = {1\over 2}\times 2 \pi R \left( V_+ + V_-\right) = -{1\over 2\times 24 R}{\left(1 + R^2 \Omega^2\right)},
\label{eq:cas0}
\end{eqnarray}
where the factor $1/2$ comes from the overall $1/2$ in the factorization of the determinant. The above formula coincides with (\ref{Esd}), and leads to an attractive force.
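Explicitly, since $\rho_+^{-2}+\rho_-^{-2} = (\Omega + 1/R)^2 + (\Omega - 1/R)^2 = 2\left(\Omega^2 + R^{-2}\right)$, the two contributions combine as

```latex
2\pi R\left(V_+ + V_-\right)
= -{2\pi R\over 96\pi}\,2\left(\Omega^2 + {1\over R^2}\right)
= -{1\over 24 R}\left(1 + \Omega^2 R^2\right),
```

which, together with the factor $1/2$ from the factorization, yields (\ref{eq:cas0}).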
Adding additional flat directions, orthogonal to the rotating ring, turning it into a rotating cylinder, does not change the attractive nature of the force for the physically relevant regime of $\Omega R \ll 1$. If we change the boundary conditions to periodic ones, then the sums above extend over $n \in \mathbb{Z}$, resulting in a total Casimir energy independent of the rotational velocity $\Omega$, as we argued earlier.
\section{Adding rotation and interactions}
Now, we get to the physically novel part of this work, that is working out the vacuum energy in the presence of rotation and interactions. The non-rotating case has been worked out in Ref.~\cite{Bordag:2021} and here those results will be re-obtained as a limiting case of our more general expressions. In the following, we shall consider a complex scalar theory with quartic interactions and start from the Lagrangian density
\begin{eqnarray}
\mathcal{L} =
{1\over 2} \left[
\left(
{\partial \phi\over \partial t} +\Omega {\partial \phi \over \partial \varphi}
\right)^2
- {1\over R^2}\left({\partial \phi\over \partial \varphi} \right)^2
\right]
-{\lambda \over 4} \phi^4 ,~~~
\label{lagr_int}
\end{eqnarray}
from which the equation of motion can be easily obtained,
\begin{eqnarray}
0 = \left(
\left({\partial \over \partial t} -\Omega {\partial \over \partial \varphi}\right)^2
-{1\over R^2}{\partial^2 \over \partial \varphi^2}
+{\lambda} \phi^2
\right) \phi .
\label{eom_rot_int}
\end{eqnarray}
The computation of the Casimir energy in the present case can be carried out in a manner similar to the free case (i.e., going over the mode decomposition, finding the spectrum of the fluctuations, and performing the renormalized sum over the zero-point energies); however, differently from before, in general the Casimir energy cannot be obtained in analytical form. The reason will become clear in a moment.
After proceeding with the decomposition in normal modes, as in Eq.~(\ref{dec_norm_mods}), we obtain the following nonlinear Schr\"odinger equation
\begin{eqnarray}
0 = \left(
\left(-{\imath \omega_p} - \beta {\partial \over \partial x}\right)^2
-{\partial^2 \over \partial x^2}
+{\lambda} f_p^2
\right) f_p.
\label{eq_rot_int}
\end{eqnarray}
Above, we have defined $x = R \varphi$, and the index $p$ is a generic, boundary-condition-dependent multi-index that can in principle take continuous and/or discrete values. Setting $\lambda=0$, i.e., the non-interacting limit, returns equation (\ref{eq12}).
Notice that Eq.~(\ref{eq_rot_int}) differs from the analogous equation of Ref.~\cite{Bordag:2021} by the addition of the term proportional to $\beta$; such a contribution combines with the frequency $\omega_p$ making the associated eigenvalue problem nonlinear, as we have anticipated in the previous section.
For notational convenience we first write the mode equation as follows:
\begin{eqnarray}\label{eqn:nlse}
0 =
f_p'' - \imath a_p f_p' + b_p f_p - \gamma \left|f_p\right|^2 f_p,
\label{eq_damped_cubed_oscillator}
\end{eqnarray}
where the prime refers to differentiation with respect to $x$ and
\begin{eqnarray}
a_p &=& {2 \omega_p \beta \over 1-\beta^2} > 0, \label{Eq:42}\\
b_p &=& {\omega_p^2\over 1-\beta^2} > 0, \label{Eq:43}\\
\gamma &=& {\lambda \over 1-\beta^2}. \label{Eq:44}
\end{eqnarray}
Equation \eqref{eqn:nlse} is that of a damped cubic anharmonic oscillator and can be solved explicitly only for special values or combinations of the parameters; for general values of the parameters, the equation is not exactly integrable. In the present case, thanks to the presence of the imaginary term accompanying the first derivative, analytic solutions can be obtained. Proceeding by redefining
\begin{eqnarray}
f_p (x) &=& g_p(x) e^{{\imath \over 2} a_p x}
\end{eqnarray}
allows us to write Eq.~(\ref{eq_damped_cubed_oscillator}) as
\begin{eqnarray}
g_p''(x) + {\epsilon}^2_p g_p(x) - \gamma g_p^3(x) = 0,
\label{red_eq}
\end{eqnarray}
where we have defined
\begin{eqnarray}
{\epsilon}^2_p = \left({a_p^2\over 4} + b_p \right).
\label{eq:47}
\end{eqnarray}
Equation (\ref{red_eq}) can be solved exactly in terms of Jacobi elliptic functions.
We have taken advantage of the imaginary term in Eq.~(\ref{eq_damped_cubed_oscillator}) to reduce it to a standard cubic nonlinear Schr\"odinger equation, Eq.~(\ref{red_eq}). This would not have been possible had the coefficient of the first derivative been real.
{Since the model we are considering is classically unstable for $\lambda<0$, we shall focus here on the $\lambda>0$ case. The calculation of the quantization relations for the energy eigenvalues follows an approach similar to the non-rotating case; since it is slightly cumbersome and requires some familiarity with elliptic functions, the actual calculation is relegated to the appendix. Here, we summarize the main results. The solution to Eq.~(\ref{red_eq}) can be written as follows:
\begin{eqnarray}
g_p(x) = A\; \mathtt{sn}( qx +\delta, k^2),
\label{gpsn}
\end{eqnarray}
with the coefficients $A$, $\delta$, $q$ and $k^2$ to be determined by imposing the boundary and the normalization conditions, and by use of the first integral of Eq.~(\ref{red_eq}). As a result, $k^2$ (the elliptic modulus) and the eigenvalues $\epsilon_n^2$ are quantized according to the following equations (see appendix for details of the computation),
\begin{eqnarray}
{\lambda L\over 8(1-\beta^2) n^2}
&=&
\Kappa(k^2_n) \left( {\Kappa(k^2_n)} - \Epsilon \left(k^2_n\right)\right),
\label{quant_m_n}\\
\epsilon_n^2 &=& {4n^2\over L^2} \Kappa^2(k_n^2) (1+k_n^2).
\label{quantz_eps_n}
\end{eqnarray}
with $0< n \in \mathbb{N}$, and with $\Kappa(k^2)$ and $\Epsilon(k^2)$ denoting the \textit{complete elliptic integrals of the first and second kind}, respectively.}
{The spectrum of the rotating problem is encoded in the two equations (\ref{quantz_eps_n}) and (\ref{quant_m_n}). Thus, for any admissible value of the physical parameters $L$, $\lambda$, $\beta$, and any positive integer $n$, Eq.~(\ref{quant_m_n}) determines the value of the elliptic modulus $k_n^2$, which together with Eq.~(\ref{quantz_eps_n}) determines the eigenvalue $\epsilon_n$. Notice that the angular velocity $\Omega$ ($\beta=\Omega R={\Omega L\over 2\pi}$) appears only through even powers, preserving the symmetry $\Omega \leftrightarrow -\Omega$.}
{The difference from the non-rotating case is rather subtle. Rotation enters the frequencies $\omega_n$ via the coefficient $(1-\beta^2)$ in Eq.~(\ref{quant_m_n}) (see Eqs.~(\ref{Eq:42}), (\ref{Eq:43}) and (\ref{eq:47}) for the relation between $\omega_n$ and $\epsilon_n$). This is already nontrivial, as the coupling constant is multiplied by the factor $1/(1-\beta^2)$, implying that the coupling can be effectively amplified if the rotation is large, i.e. when $\beta$ tends to unity. The second way rotation enters the frequencies is via relation (\ref{quantz_eps_n}), which relates the rotational velocity $\Omega$ nonlinearly to the elliptic modulus $k^2_n$: once the parameters ($\lambda$, $\beta$, $L$) and the quantum number $n$ are fixed, the elliptic modulus $k^2_n$ is determined according to Eq.~(\ref{quant_m_n}) and then, upon substitution in Eq.~(\ref{quantz_eps_n}), enters the frequencies.}
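For concreteness, the quantization conditions (\ref{quant_m_n}) and (\ref{quantz_eps_n}) are easy to solve numerically: $\Kappa(\Kappa-\Epsilon)$ increases monotonically from $0$ at $k^2=0$ and diverges as $k^2\to 1$, so a simple bisection determines $k_n^2$, after which $\omega_n = \epsilon_n (1-\beta^2)$ follows from Eqs.~(\ref{Eq:42}), (\ref{Eq:43}) and (\ref{eq:47}). The sketch below is our own illustrative code, not the implementation used for the figures; to keep it self-contained, the elliptic integrals are evaluated with the arithmetic-geometric mean rather than a library routine.

```python
import math

def ellip_KE(m, tol=1e-15):
    # Complete elliptic integrals K(m), E(m), parameter m = k^2, via the
    # arithmetic-geometric mean: K = pi/(2 AGM(1, sqrt(1-m))),
    # E = K (1 - sum_n 2^{n-1} c_n^2).
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    s, p = 0.5 * c * c, 0.5
    while c > tol:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        p *= 2.0
        s += p * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

def omega_n(n, lam, L, beta):
    # Solve K(K - E) = lam L / (8 (1-beta^2) n^2) for m = k_n^2 by bisection
    # (the left-hand side is monotonic in m on (0,1)), then evaluate
    # eps_n^2 = (4 n^2 / L^2) K^2 (1 + m) and omega_n = eps_n (1 - beta^2).
    rhs = lam * L / (8.0 * (1.0 - beta**2) * n**2)
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        K, E = ellip_KE(mid)
        if K * (K - E) < rhs:
            lo = mid
        else:
            hi = mid
    m = 0.5 * (lo + hi)
    K, _ = ellip_KE(m)
    eps = 2.0 * n / L * K * math.sqrt(1.0 + m)
    return eps * (1.0 - beta**2)
```

In the free limit $\lambda\to 0$ this returns $\omega_n = \pi n(1-\beta^2)/L$, and for small $\lambda$ it reproduces the perturbative expansion quoted below.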
Before going into the computation of the vacuum energy, there are a number of relevant limits to check. The first is the noninteracting limit, $\lambda \rightarrow 0$. In this case, the first equation (\ref{quant_m_n}) gives
\begin{eqnarray}
k_n^2 = 0, \mbox{$\forall n \in \mathbb{N}$.}
\end{eqnarray}
Substituting in (\ref{quantz_eps_n}) we obtain
\begin{eqnarray}
\omega_n \equiv \omega^{(0)}_n = {\pi n\over L}(1-\beta^2),
\end{eqnarray}
{which} coincides with the free rotating result (\ref{omega0n}). Naturally, taking the non-rotating limit, $\beta \to 0$, we also recover the non-rotating case.
Corrections to the above leading result are interesting in this case because of the way that {the} interaction strength and rotation intertwine. For small interaction strengths and zero rotation, the elliptic modulus can be obtained by expanding the right hand side of Eq.~(\ref{quant_m_n}) for small argument.
Ignoring terms of order $k_n^4$ or higher, we find
\begin{eqnarray}
{ k^2_n} \approx {\lambda L\over \pi^2 n^2}.
\end{eqnarray}
Keeping only this first correction, we get for the frequencies
\begin{eqnarray}
\omega^2_n
&\approx& {{\pi^2} n^2\over L^2} \left(1 + {3\over 2\pi^2}{\lambda L\over n^2} \right),
\end{eqnarray}
which recovers the result of \cite{Bordag:2021}.
More interesting is the limit of small interaction strength with rotation on. In this case, relation (\ref{quant_m_n}) implies that the relevant expansion parameter is $\lambda/(1-\beta^2)$, which plays the role of an effective coupling. When rotation is small, $\beta \ll 1$, at leading order we return to the previous small-$\lambda$, small-$\beta$ case. When the ring is spinning fast, i.e. $1-\beta^2 \ll 1$, the quantity $\lambda/(1-\beta^2)$ can in principle be large even for small $\lambda$. Thus we have two physically relevant limits. The first is weak coupling and slow rotation, which yields the following expressions for the frequencies,
\begin{eqnarray}
&& \omega^2_n {\approx} {{\pi^2} n^2\over L^2} \left(1 {+} {3\over 2\pi^2}{\lambda L\over n^2} \right)
\left(1 {-} \beta^2 \left( 3 {-} {2 n^2 \pi^2 \over 2 n^2 \pi^2 {+} 3\lambda L} \right)\right).\nonumber
\end{eqnarray}
The other is for weak coupling and fast rotation. In this case the limit is trickier to take, since in the calculation of the Casimir energy we need to sum over $n \in \mathbb{N}$. Because the sum extends to infinity, the quantity $\lambda L/(8( 1-\beta^2)n^2)$ is not necessarily large for every $n$ when ${\lambda L/( 1-\beta^2)}$ is large but finite. The limit of a large but finite effective coupling, ${\lambda /( 1-\beta^2)}$, can be conveniently treated numerically, and we will do so in the next subsection.
Finally, there is the case of ${\lambda /(1-\beta^2)} \to \infty$, i.e. infinitely strong coupling (which, as explained, can also be realized in the limit of small coupling and fast rotation); this corresponds in (\ref{quant_m_n}) to the limit of $k_n^2 \sim 1$.
Thus, solving Eq.~(\ref{quant_m_n}) as a quadratic in $\Kappa$, we get, for any $k_n^2$,
\begin{eqnarray}
\Kappa (k^2_n) = {1\over 2} \left(\Epsilon \left(k^2_n\right) \pm \sqrt{\Epsilon^2 \left(k^2_n\right) + {\lambda L\over 2 n^2 ( 1-\beta^2) }}\right).\nonumber
\end{eqnarray}
Upon substituting in Eq.~(\ref{quantz_eps_n}) we get
\begin{eqnarray}
\omega^2_n &=& {n^2\over L^2} \left(\Epsilon \left(k^2_n\right) \pm \sqrt{\Epsilon^2 \left(k^2_n\right) +
{\lambda L\over 2 n^2 ( 1-\beta^2) }} \right)^2 \times\nonumber\\&& \times (1+k_n^2)(1-\beta^2)^2.
\end{eqnarray}
The leading term results from the $k_n^2 \to 1$ limit, yielding
\begin{eqnarray}
\omega^2_n {\approx} {2 n^2 (1{-}\beta^2)^2\over L^2} \left(1 {+}\sqrt{1{+}{L \lambda\over 2n^2 (1{-}\beta^2)}}\right)^2{+} \dots.
\end{eqnarray}
This regime can also be treated numerically.
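As a numerical check of the strong-coupling asymptotics, the following self-contained snippet (our own sketch; the parameter values $\lambda=50$, $L=2$, $\beta=0$, giving $\lambda L/(1-\beta^2)=100$, are chosen purely for illustration) solves Eqs.~(\ref{quant_m_n}) and (\ref{quantz_eps_n}) by bisection and compares the exact $\omega_1^2$ with the $k_n^2\to 1$ formula above:

```python
import math

def ellip_KE(m):
    # Complete elliptic integrals K(m), E(m) (m = k^2) via the AGM.
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    s, p = 0.5 * c * c, 0.5
    while c > 1e-15:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        p *= 2.0
        s += p * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

lam, L, beta, n = 50.0, 2.0, 0.0, 1          # strong effective coupling
rhs = lam * L / (8.0 * (1.0 - beta**2) * n**2)
lo, hi = 0.0, 1.0 - 1e-12
for _ in range(200):                          # bisection for m = k_n^2
    m = 0.5 * (lo + hi)
    K, E = ellip_KE(m)
    lo, hi = (m, hi) if K * (K - E) < rhs else (lo, m)
m = 0.5 * (lo + hi)
K, _ = ellip_KE(m)
# exact eigenvalue from the quantization relations
w2_exact = 4.0 * n**2 / L**2 * K**2 * (1.0 + m) * (1.0 - beta**2)**2
# strong-coupling (k_n^2 -> 1) asymptotic formula
w2_asym = (2.0 * n**2 * (1.0 - beta**2)**2 / L**2
           * (1.0 + math.sqrt(1.0 + lam * L / (2.0 * n**2 * (1.0 - beta**2))))**2)
```

For these values the elliptic modulus is already $k_1^2 \approx 0.995$ and the asymptotic formula agrees with the exact eigenvalue at the sub-percent level.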
\subsection{A numerical implementation of zeta regularization}
{In this section we shall compute the quantum vacuum energy $E_r$ given by the following (formal) expression
\begin{eqnarray}
E_r = {1\over 2}\lim_{s\to 0} \sum_n{}^{'} \omega_n^{1-2s}.
\label{eq65}
\end{eqnarray}
The `prime' is a reminder that the sum is divergent and requires regularization.}
{Before going into the specifics of the computation, it may be useful to explain the method we use to compute the vacuum energy in more general terms. The issue at hand is having to compute a summation of the form Eq.~(\ref{eq65}) without any explicit knowledge of the eigenvalues. Knowing the eigenvalues would be enough to compute the above expression, were the summation convergent. However, as {is} common in quantum field theory, divergences are present and the above brute force approach cannot be used. The procedure described below is essentially a regularization procedure based on a numerical implementation of zeta-function regularization.}
\begin{figure}[t]
\includegraphics[width=1.0\columnwidth]{error_fig.pdf}
{\caption{\label{fig:numeig}(color online) Numerical eigenvalue analysis. Panel (a) shows a comparison of the polynomial fit (Eq.~\eqref{eqn:polyfit}), up to cubic order, with the numerically computed eigenvalues (Eqs.~\eqref{quant_m_n}, \eqref{quantz_eps_n}). The individual fitting coefficients are defined in the legend as $\tilde{\omega}_j=(\tilde{\omega}_0,\tilde{\omega}_1,\tilde{\omega}_2,\tilde{\omega}_3)$. Panel (b) shows the correction $\delta E_r$ (Eq.~\eqref{eqn:der}) computed for each of the cases presented in (a), i.e. for fixed $\lambda=1$, $L=2$ with $\Omega R=0.2,0.4,0.6,0.8$. The inset (i) shows an example of the eigenvalue difference $\omega_n-\tilde{\omega}_n$ for $\Omega R=0.2$. }}
\end{figure}
{Our approach consists in finding an approximate form for the eigenvalues, say $\tilde\omega_n$, that can be used to compute the summation (\ref{eq65}) without relying on any approximation; this is possible \textit{as long as the approximant $\tilde\omega_n$ captures the correct asymptotic behavior that causes the divergence}. In the present case, since we can find the eigenvalues numerically by solving Eqs.~(\ref{quant_m_n}) and (\ref{quantz_eps_n}) to any desired order and accuracy, we can proceed to find a suitable $\tilde\omega_n$ simply by fitting the eigenvalues. The form of the fitting function is unimportant (as we shall explain below), as long as it captures the correct asymptotic behavior. Assuming that this is the case, we can write the above expression for $E_r$ as
\begin{eqnarray}\label{eqn:caslong}
E_r {=} {1\over 2} \lim_{s\to 0}\sum_n \left( \omega_n^{1-2s} {-} \tilde\omega_n^{1-2s}\right)
{+} {1\over 2}\lim_{s\to 0}\sum_n \tilde\omega_n^{1-2s}.
\end{eqnarray}
In the first term above the limit ${s \to 0}$ can be safely taken (by construction); we call this term
\begin{eqnarray}\label{eqn:der}
\delta E_r ={1\over 2}\sum_n \left( \omega_n - \tilde\omega_n\right).
\end{eqnarray}
{We note} that the above quantity can be computed numerically to any desired accuracy.}
{Thus, the Casimir energy {Eq.~\eqref{eqn:caslong}} can be expressed as
\begin{eqnarray}
E_r = {1\over 2}\lim_{s\to 0}\sum_n \tilde\omega_n^{1-2s}+ \delta E_r.
\label{eqsplit}
\end{eqnarray}
There is no approximation at this stage: Eqs.~(\ref{eq65}) and (\ref{eqsplit}) are equivalent.}
{Now, in order to get the \textit{exact} Casimir energy $E_r$ we are left with the computation of
\begin{eqnarray}
\tilde E_r = {1\over 2}\lim_{s\to 0}\sum_n \tilde\omega_n^{1-2s},
\label{eqcas}
\end{eqnarray}
which, as it will be detailed below, can be carried out analytically. Once we have $\tilde E_r$ and $\delta E_r$, these can be combined into an exact result for $E_r$.}
{There are still two issues to clarify. The first concerns the renormalization of the Casimir energy: $\tilde E_r$ is still divergent and needs to be regularized and renormalized. In the present case the regularization, i.e. extracting the diverging behavior of the sum in $\tilde E_r$, is straightforward using zeta regularization. The renormalization can also be carried out with ease by subtracting from $\tilde E_r$ its asymptotic (i.e., calculated for $L \to \infty$) value (practically, this is equivalent to normalizing the vacuum energy to zero in the absence of boundaries, a straightforward procedure in Casimir energy calculations).}
{The other point to clarify regards the choice of the fitting function. First of all, one can easily understand that this choice is not unique, as two different fitting functions may result in differing $\delta E_r$. However, any such difference is irrelevant, as it is compensated in (\ref{eqsplit}) by the fact that we add and subtract the same quantity. A judicious choice of fitting function may, however, expedite the numerical evaluation and simplify the regularization of $\tilde E_r$.}
{Where uniqueness is required, and where two fitting functions cannot differ, is in their asymptotic behavior. This is necessary to properly `treat' (i.e., renormalize) the vacuum energy. In order to find the asymptotic behavior one may proceed heuristically, essentially by trial and error, to find the best fitting function (this is straightforward to implement numerically in our case, or in situations where the eigenvalues can be computed numerically to any desired order and accuracy). In the present case, as discussed in the next section, we find that the leading term in the frequencies $\omega_n^2$ behaves as $n^2$. A more general approach would be to invoke Weyl's theorem on the asymptotic behavior of the eigenvalues of a differential operator. In general dimensionality, given certain assumptions on the differential operator involved, Weyl's theorem provides a rationale for the correct asymptotic behavior, as the eigenvalues behave as $n^{2/d}$ with $d$ the dimensionality. To be precise, in its original formulation Weyl's law refers to the Laplace-Beltrami operator in $d\geq 2$; however, extensions of the theorem to more general manifolds and operators, including in $d=1$, have been discussed in the literature \cite{Kirsten,Arendt,Bonder}.}
{To recap, given two fitting functions $\omega^{(fit)}_n$ and $\bar{\omega}^{(fit)}_n$ with the same asymptotic behavior, the vacuum energy, as \textit{per} Eq.~(\ref{eqsplit}), will not change: any change in the fitting function will result in different $\tilde E_r$ and $\delta E_r$, with the difference compensated by construction. Of course, as already stated, the arbitrariness in the choice of the fitting function can be used to achieve a speedier computation of $\delta E_r$ as well as a more manageable expression for $\tilde E_r$.}
{The above method will be concretely applied to our problem in the next section.}
\subsubsection{The Casimir Energy}
In order to compute the vacuum energy we shall proceed as follows. First, we fix the values of the interaction strength $\lambda$, of the rotation parameter $\beta = {L \Omega\over 2\pi}$ and $L$ (the procedure is iterated over these parameters). Then, we solve Eq.~(\ref{quant_m_n})
for a sequence of $n \in \mathbb{N}$ up {to} some large value {$n_{\rm max}$}. These values of $k_n$ are then used to compute the frequencies $\omega_n$ up to {$n_{\rm max}$} according to formula (\ref{quantz_eps_n}).
With this set of frequencies $\Theta\left(n_*\right) = \left\{ \omega_1, \omega_2, \dots, \omega_{n_*}\right\}$, with $n_* \leq n_{\rm max}$, we fit the spectrum with a polynomial,
\begin{eqnarray}\label{eqn:polyfit}
\tilde \omega^2_n = \varpi_0 + \varpi_1 n + \varpi_2 n^2 + \varpi_3 n^3 + \dots + \varpi_{p_*} n^{p_*},
\end{eqnarray}
with $2 \leq p_* \in \mathbb{N}$ (the fit is repeated for various values of $p_*$, and the value corresponding to the best fit is selected; in practice this step is redundant, since the best-fit value turns out to be $p_*=2$). The next step consists in increasing the value $n_*$ and repeating the process until the coefficients $\varpi_k$ converge. We find that the best fit returns $\varpi_j \approx 0$ for $j \neq 0,2$, indicating that the spectrum is approximately of the form
\begin{eqnarray}
\tilde \omega_n^2 \approx \varpi_0 + \varpi_2 n^2,
\label{spectrumapprox}
\end{eqnarray}
where the ``$\approx$'' symbol should be understood in the sense of numerical approximation.
As we have explained in the preceding section, the form Eq.~(\ref{spectrumapprox}) of the spectrum is anticipated by the expected Weyl-like asymptotic behavior of the eigenvalues, from which $p_*=2$ follows. We note that one may choose to proceed with a different fitting function having the same asymptotic behavior; this does not change the final result, but the following steps must be modified accordingly. The choice of Eq.~(\ref{spectrumapprox}) is the easiest to handle and the most natural, since the resulting summation takes the form of a generalized Epstein--Hurwitz zeta function (i.e., a zeta function associated with a quadratic form), whose regularization can be carried out using a rearrangement due to Chowla and Selberg, as explained below.
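Since Eq.~(\ref{spectrumapprox}) is linear in the monomials $1$ and $n^2$, the fit amounts to a $2\times2$ linear least-squares problem, and $\delta E_r$ of Eq.~(\ref{eqn:der}) is then a rapidly convergent sum. The following minimal sketch (our own code; the ``spectrum'' here is synthetic, with invented numbers that merely mimic the quadratic-plus-decaying-correction structure of the true eigenvalues) illustrates the fit-and-subtract step:

```python
import math

def fit_quadratic(omegas):
    # Linear least squares for omega_n^2 ~ w0 + w2 n^2 (n = 1, 2, ...),
    # solved through the 2x2 normal equations in the monomials (1, n^2).
    n2 = [float((i + 1) ** 2) for i in range(len(omegas))]
    y = [w * w for w in omegas]
    S0, S1, S2 = float(len(n2)), sum(n2), sum(x * x for x in n2)
    b0, b1 = sum(y), sum(x * v for x, v in zip(n2, y))
    det = S0 * S2 - S1 * S1
    return (S2 * b0 - S1 * b1) / det, (S0 * b1 - S1 * b0) / det

# Synthetic spectrum: quadratic growth plus a correction that decays for
# large n (hypothetical numbers, purely for illustration).
omegas = [math.sqrt(0.3 + 2.5 * n * n) + 0.1 / n**2 for n in range(1, 2001)]
w0, w2 = fit_quadratic(omegas)

# delta E_r = (1/2) sum_n (omega_n - tilde omega_n), finite by construction
tilde = [math.sqrt(w0 + w2 * n * n) for n in range(1, 2001)]
delta_Er = 0.5 * sum(w - t for w, t in zip(omegas, tilde))
```

Because the correction to the quadratic growth decays with $n$, the fitted coefficients converge as more eigenvalues are included, and the subtracted sum for $\delta E_r$ is finite.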
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{cas_fig.pdf}
\caption{\label{fig:2}(color online) Rotation-interaction heatmap. (a) shows the Casimir energy $E_r$ (Eq.~\eqref{eqn:er}) in the $(\Omega R,\lambda)$ parameter space. (b) shows $E_{r}(\lambda)$ for $\Omega R=0,0.2,0.4,0.6,0.8,0.9$. The size of the ring is $L=20$.}
\end{figure}
Once we have the eigenvalues $\omega_n$, these have to be summed according to Eq.~(\ref{eq65}) in order to obtain the Casimir energy of the rotating system.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.8]{grid_fig.pdf}
\caption{\label{fig:grid}(color online) Casimir energies. (a) shows $E_r$ (Eq.~\eqref{eqn:er}) for varying $L$ for various fixed interaction strengths $\lambda=1,10,25,50,75,100$. The rotation parameter is $\Omega R=0.75$. (b) shows $E_r$ for fixed $\lambda=10$ for $\Omega R=0,0.5,0.6,0.7,0.8,0.9$, while the inset shows a zoomed portion between $L=10^{-3}:10^{-2}$. (c) and (d) show $E_r$ as a function of the rotation parameter $\Omega R$ for fixed ring length $L=20$ in (c) and fixed interaction strength $\lambda=0.5$ in (d). {Here we have fixed $\mu=1$ throughout.}}
\label{figure3}
\end{figure*}
The computation of the Casimir energy is performed using the approximate formula {of} Eq.~(\ref{spectrumapprox}) leading to
\begin{eqnarray}
\tilde E_r = \lim_{s\to 0} {\mu^{2s}\over 2}\varpi_2^{1/2-s} \sum_n \left({\varpi_0\over \varpi_2 } + n^2\right)^{1/2-s}.
\label{casimirexpl}
\end{eqnarray}
We have added the multiplicative factor $\mu^{2s}$, with $\mu$ an arbitrary renormalization scale, in order to keep the dimensions of the above expression those of an energy \cite{Toms:2012}.
The expression Eq.~(\ref{casimirexpl}) can be easily dealt with analytically when the ratio $\varpi_0 / \varpi_2$ is small, for instance in the non-interacting limit. In this case, a valid representation can be built by simply using the binomial expansion of the summand in Eq.~(\ref{casimirexpl}), yielding
\begin{eqnarray}
\hskip -0.5 cm
\tilde E_r {=} -{\sqrt{ \varpi_2} \over 24}\left\{ 1 {+} 6{\varpi_0\over \varpi_2}\left[ 1 {-} \gamma_e {-} {1\over 2 s}
{+} {1 \over 2}
\log\left( \varpi_2 \over \mu^2 \right)\right]\right\}.
\end{eqnarray}
Notice that the noninteracting case corresponds to $\varpi_0 \to 0$, leading to
\begin{eqnarray}
\tilde E_r &=& -{\sqrt{\varpi_2}\over 24},
\end{eqnarray}
implying that
\begin{eqnarray}
\lim_{\lambda \to 0} {\varpi_2} = {\pi^2 \over L^2}{\left(1 - \beta^2\right)^2}.
\end{eqnarray}
The above limit can be (and has been) verified numerically.
When the ratio $\varpi_0 / \varpi_2$ is not small, we can use the following (Chowla-Selberg) representation for the sum above (see Ref.~\cite{Flachi:2008}):
\begin{eqnarray}
\tilde E_r &=&
- {{\varpi_0}^{1/2} \over 4}\left\{1 - {1 \over 2} \sqrt{\varpi_0 \over \varpi_2} \left[ 1 - {\gamma_e}
+ {1 \over s} - \log \left(\varpi_0 \over \mu^2\right) \right.\right.\nonumber\\
&&\left.
- {\psi(-1/2) } \right] \Big\}- {1 \over 2\pi}\sqrt{{\varpi_0}}\sigma\left( \sqrt{\varpi_0\over \varpi_2 }\right),\label{eqn:er}
\end{eqnarray}
where
\begin{eqnarray}
\sigma(t) = \sum_{p=1}^\infty {1\over p}K_{1}\left( 2\pi p t \right).
\end{eqnarray}
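The kernel $\sigma(t)$ converges rapidly, since $K_1(x)\sim\sqrt{\pi/(2x)}\,e^{-x}$ for large argument, so only a few terms contribute for $t\gtrsim 1$. A self-contained evaluation is sketched below (our own illustrative code: in practice one would call a library Bessel routine; here $K_1$ is computed from the integral representation $K_1(x)=\int_0^\infty e^{-x\cosh u}\cosh u\,du$ by the trapezoidal rule):

```python
import math

def bessel_K1(x, h=1e-3, umax=12.0):
    # Modified Bessel function K_1(x) from its integral representation,
    # K_1(x) = int_0^inf exp(-x cosh u) cosh u du; the integrand decays
    # double-exponentially, so a finite cutoff umax suffices.
    n = int(umax / h)
    total = 0.5 * (math.exp(-x)
                   + math.exp(-x * math.cosh(umax)) * math.cosh(umax))
    for i in range(1, n):
        u = i * h
        total += math.exp(-x * math.cosh(u)) * math.cosh(u)
    return total * h

def sigma(t, pmax=60):
    # sigma(t) = sum_{p>=1} K_1(2 pi p t)/p; terms fall off like exp(-2 pi p t)
    return sum(bessel_K1(2.0 * math.pi * p * t) / p for p in range(1, pmax + 1))
```

For $t\geq 1$ the first term $K_1(2\pi t)$ already dominates the sum to better than a tenth of a percent.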
For clarity we have kept the divergent pieces $\propto 1/s$ in both expressions for the vacuum energy. These terms are removed by subtracting from the non-renormalized vacuum energy its counterpart for $L\to \infty$. We stress again that using a different ansatz for the frequencies, for instance keeping a linear term, would change the expression Eq.~(\ref{casimirexpl}) and, in turn, both representations of the summation would have to be modified accordingly. However, as we have explained, this does not change the Casimir energy, since the difference is compensated by the same change in $\delta E_r$.
In order to obtain the exact quantum vacuum energy, we need to add to $\tilde E_r$ the term $\delta E_r$ that can be calculated numerically,
\begin{eqnarray}
\delta E_r = \lim_{s\to 0} {\mu^{2s}\over 2} \sum_n \left(\omega_n^{1-2s} - \tilde\omega_n^{1-2s} \right).
\end{eqnarray}
The above is nothing but the sum over the difference between the exact eigenvalues and the approximated ones. The advantage of using this formulation is that the above expression is regular in the ultraviolet and the limit $s \to 0$ can be taken without any further manipulation, as explained in the previous section. With the two pieces in hand the exact Casimir energy can be calculated as stipulated by Eq.~(\ref{eqsplit}).
{In Fig.~\ref{fig:numeig} we present an overview of our numerical methodology. Panel (a) shows the eigenvalues of the polynomial fit (Eq.~\eqref{eqn:polyfit}) (grey solid) and the numerically computed ones (Eqs.~\eqref{quant_m_n}, \eqref{quantz_eps_n}) (open coloured symbols), respectively. The comparison includes terms up to cubic order (viz. Eq.~\eqref{eqn:polyfit}) for fixed interaction strength and ring size, for various values of the rotation parameter $\Omega R$. Although the (odd) terms $\tilde{\omega}_1$ and $\tilde{\omega}_3$ have been included, they are many orders of magnitude smaller than $\tilde{\omega}_{0}$ and $\tilde{\omega}_{2}$, typically $\tilde{\omega}_1\sim 10^{-6}$ and $\tilde{\omega}_3\sim 10^{-12}$, supporting our choice of Eq.~\eqref{spectrumapprox}. We also checked the typical values of $\tilde{\omega}_{j>3}$, which are even smaller than the preceding terms. Then, Fig.~\ref{fig:numeig} (b) shows the quantity $\delta E_r$, Eq.~\eqref{eqn:der}. This quantity is calculated again for fixed $\lambda=1$, $L=2$ for various $\Omega R$, showing a monotonic decrease as $n_{\rm max}$ is increased from $n_{\rm max}=50:10^4$. Finally, the inset (i) shows an example of the quantity $\omega_n-\tilde{\omega}_n$, displaying a prominent maximum located at $n\sim 5$. Due to the asymptotic nature of the regularization procedure, the behaviour of this quantity is formally accurate for $n\gg 1$, hence we exclude the first $\sim 10\%$ of computed eigenvalues from our simulations (shaded grey), leading to the monotonic decrease of $\delta E_r$ towards zero for increasing $n_{\rm max}$. In the calculations that follow we set $n_{\rm max} = 1000$.}
\section{Results and discussion}
Figs.~(\ref{fig:2}), (\ref{fig:grid}) and (\ref{fig:4}) illustrate the results. Fig.~\ref{fig:2}(a) summarizes in a rotation-interaction heatmap how the Casimir energy $E_r$ depends on both the rotational parameter $\Omega R$ and the interaction strength $\lambda$. {A local maximum in the Casimir energy is present for $\Omega R\geq0$, as depicted in Fig.~\ref{fig:2}, where Eq.~\eqref{eqn:er} is computed as a function of $\lambda$ for fixed values of $\Omega R$. We also found that increasing the size $L$ of the ring causes a proportional increase in the corresponding Casimir energy. Note that the existence of the local maximum depends on the size $L$ of the ring, appearing in the non-interacting $\Omega R=0$ limit for $L\gtrsim 5$.}
In fact, the changes in the Casimir energy caused by both rotation and interaction are rather nontrivial, as we illustrate in Figs.~\ref{fig:grid}(a) and \ref{fig:grid}(b); Fig.~\ref{fig:grid}(a) shows the Casimir energy $E_r$ as a function of $L$ for various fixed and increasing interaction strengths $\lambda=1, \dots,100$ with the rotation parameter set to $\Omega R=0.75$; Fig.~\ref{fig:grid}(b) shows $E_r$ as a function of $L$ for fixed $\lambda=10$ and for increasing $\Omega R=0, \dots,0.9$. While the energy increases for rings of small sizes and decays to zero for large enough rings, from both cases it is clear that a departure from the typical $1/L$ dependence of the energy occurs as a consequence of both interaction and rotation (rotation alone would not cause such a departure). Figs.~\ref{fig:grid}(c) and \ref{fig:grid}(d) illustrate the increase in the Casimir energy with respect to its non-rotating counterpart for different values of the interaction strength and for rings of different sizes. Finally, in Fig.~\ref{fig:4} we plot the angular momentum $L_{\rm am}$ calculated according to Eq.~\eqref{eqn:am}: panel (a) shows $L_{\rm am}$ computed from the data in Fig.~\ref{fig:grid}(c) as a function of $\Omega R$ for fixed ring length $L=20$, and panel (b) shows the same quantity computed from the data in Fig.~\ref{fig:grid}(d) for fixed interaction strength $\lambda=0.5$.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{am_fig.pdf}
{\caption{\label{fig:4}(color online) Angular momentum. Panels (a) and (b) show the angular momentum calculated per Eq.~\eqref{eqn:am}: (a) shows $L_{\rm am}$ computed from the data in Fig.~\ref{fig:grid}(c) as a function of $\Omega R$ for fixed ring length $L=20$, and (b) shows the same quantity computed from the data in Fig.~\ref{fig:grid}(d) for fixed interaction strength $\lambda=0.5$.}}
\end{figure}
The Casimir effect, and more generally calculations of quantum vacuum energies for interacting field theories, have been considered for many years, starting with Refs.~\cite{Ford:1979b,Toms:1980a,Peterson:1982} (see also Ref.~\cite{Toms:2012}). Refs.~\cite{Maciolek:2007,Moazzemi:2007,Schmidt:2008,Flachi:2012,Flachi:2013,Flachi:2017,Chernodub:2017gwe,Chernodub:2018pmt,Valuyan:2018,Flachi:2017xat,Flachi:2021,Song:2021,Flachi:2022} give an incomplete list of more recent papers that have discussed different aspects of the Casimir effect and related physics in the presence of field-interactions. Of particular interest to this work are Refs.~\cite{Flachi:2017,Bordag:2021} where the connection between the Casimir energy and elliptic functions has been pointed out and used, numerically in Ref.~\cite{Flachi:2017} and analytically in Ref.~\cite{Bordag:2021}, to compute the Casimir energy. Particularly worthy of notice are in fact the results of Ref.~\cite{Bordag:2021} that have highlighted clearly how to compute semi-analytically (full analytic results can be obtained in specific regimes, but in general numerics cannot be avoided) and \textit{exactly} (without resorting to any approximation) the quantum vacuum energy dealing directly with the nonlinear problem. In this work, we have extended those results by adding rotation, a feature relevant to cold-atomic systems at least in the long wavelength regime where the perturbations around a Bose-Einstein condensate evolve according to a relativistic Klein-Gordon equation (see, for example, Refs~\cite{Kurita:2008fb,Barcelo:2010bq}). {Complementary to this, the Casimir effect has also been studied for superfluids in the presence of vorticity \cite{impens_2010,mendonca_2020}, as well as near surfaces (Casimir-Polder interaction) \cite{pasquini_2004,harber_2005,moreno_2010,bender_2014}, at finite temperature \cite{obrecht_2007}, in the presence of disorder \cite{moreno_2010a} or with an impurity \cite{marino_2017}.}
The other interesting aspect of the generalization considered here is that the interplay of interaction and rotation can give rise to a departure from the usual massless behavior of the Casimir energy found in both the rotating and the non-rotating non-interacting cases. Also, from Eq.~(\ref{quant_m_n}) one notices that rotation combines with the coupling constant in such a way that, even when the interaction strength is small, its effect can be amplified by fast rotation. Extending the present results to the non-relativistic Gross--Pitaevskii case can be done along the lines discussed here. More interesting would be generalizations to higher dimensions, where the Coleman--Hohenberg--Mermin--Wagner theorem does not apply and phase transitions may occur dynamically without having to introduce any explicit breaking of rotational symmetry, at least in principle. In that case, not only will the Casimir effect experience a phase transition at a critical value of the coupling constant and rotation at which symmetry breaking eventually occurs, but it may also become a proxy for critical quantities. This is of course a technically complicated problem, as in more than one non-compact dimension the separability of the field equation becomes nontrivial. The most interesting aspect to investigate is, of course, a closer connection to cold atoms and the possibility of using these as a probe of quantum vacuum effects.
\section*{Acknowledgments}
AF's research was supported by the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research (KAKENHI, grant number 21K03540). MJE's research was supported by the Australian Research Council Centre of Excellence in Future Low-Energy Electronics Technologies (Project No. CE170100039) and funded by the Australian government, and by the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research (KAKENHI Grant No. JP20K14376). One of us (AF) wishes to thank O. Corradini, G. Marmorini, and V. Vitagliano for earlier discussions and I. Moss for a comment regarding the use of a relativistic Klein-Gordon equation in Bose-Einstein condensates in the long wavelength approximation.
\renewcommand{\b}{{\mathbf b}}
\renewcommand{\S}{{\mathbf S}}
\newcommand{\text{\bf err}}{\text{\bf err}}
\newcommand{\text{\bf err}(L^2)}{\text{\bf err}(L^2)}
\newcommand{\text{\bf err}(\dot H^\frac12)}{\text{\bf err}(\dot H^\frac12)}
\newcommand{{\mathbf N}}{{\mathbf N}}
\newcommand{\tilde{u}}{\tilde{u}}
\newcommand{\tilde{v}}{\tilde{v}}
\newcommand{\tilde{\phi}}{\tilde{\phi}}
\newcommand{\tilde{\tilde{u}}}{\tilde{\tilde{u}}}
\newcommand{\tilde{\tilde{\phi}}}{\tilde{\tilde{\phi}}}
\renewcommand{\L}{{L^{NL}}}
\begin{document}
\title{Well-posedness and dispersive decay of small data solutions for the Benjamin-Ono equation}
\author{Mihaela Ifrim}
\address{Department of Mathematics, University of California at Berkeley}
\thanks{The first author was supported by the Simons Foundation}
\email{ifrim@math.berkeley.edu}
\author{ Daniel Tataru}
\address{Department of Mathematics, University of California at Berkeley}
\thanks{The second author was partially supported by the NSF grant DMS-1266182
as well as by the Simons Foundation}
\email{tataru@math.berkeley.edu}
\begin{abstract}
This article represents a first step toward understanding the long
time dynamics of solutions for the Benjamin-Ono equation. While
this problem is known to be both completely integrable and globally
well-posed in $L^2$, much less seems to be known concerning its long
time dynamics. Here, we prove that for small localized data the
solutions have (nearly) dispersive dynamics almost globally in time.
An additional objective is to revisit the $L^2$ theory for the
Benjamin-Ono equation and provide a simpler, self-contained
approach.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
In this article we consider the Benjamin-Ono equation
\begin{equation}\label{bo}
(\partial_t + H \partial_x^2) \phi = \frac12 \partial_x (\phi^2), \qquad \phi(0) = \phi_0,
\end{equation}
where $\phi$ is a real valued function $\phi :\mathbf{R}\times\mathbf{R}\rightarrow \mathbf{R}$.
$H$ denotes the Hilbert transform on the real line; we use the convention that its symbol is
\[
H(\xi) = - i \mathop{\mathrm{sgn}} \xi
\]
as in Tao \cite{tao} and opposite to Kenig-Martel \cite{km}. Thus, dispersive waves travel to the right
and solitons to the left.
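This sign convention can be checked numerically on a periodic grid (a minimal sketch, assuming a $2\pi$-periodic domain; not part of the paper's argument): with symbol $-i\mathop{\mathrm{sgn}}\xi$ one has $H\cos = \sin$.

```python
import numpy as np

# Sketch of the Hilbert transform with symbol -i*sgn(xi) (Tao's convention),
# implemented on a periodic grid via the FFT.  With this sign, H(cos) = sin.
def hilbert(f, period):
    n = len(f)
    xi = 2*np.pi*np.fft.fftfreq(n, d=period/n)   # grid wavenumbers
    return np.real(np.fft.ifft(-1j*np.sign(xi)*np.fft.fft(f)))

x = np.linspace(0, 2*np.pi, 256, endpoint=False)
assert np.allclose(hilbert(np.cos(x), 2*np.pi), np.sin(x), atol=1e-12)
```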
The Benjamin-Ono equation is a model for the propagation of one
dimensional internal waves (see \cite{benjamin}). Among others, it describes the
physical phenomenon of wave propagation at the interface of layers of
fluids with different densities (see Benjamin~\cite{benjamin} and
Ono~\cite{ono}). It also belongs to a larger class of equations modeling
this type of phenomena, some of which are certainly more physically
relevant than others.
Equation \eqref{bo} is known to be completely integrable. In
particular it has an associated Lax pair, an inverse scattering
transform and an infinite hierarchy of conservation laws. For further information in this direction
we refer the reader to \cite{klm} and references therein.
We list only some of these conserved energies, which hold for smooth solutions
(for example $H_x^3(\mathbb{R})$). Integrating by parts, one sees
that this problem has conserved mass,
\[
E_0 = \int \phi^2 \, dx,
\]
momentum
\[
E_1 = \int \phi H\phi_x - \frac13 \phi^3\, dx ,
\]
as well as energy
\[
E_2 = \int \phi_x^2 - \frac34 \phi^2 H \phi_x + \frac18 \phi^4 \, dx.
\]
More generally, at each nonnegative integer $k$ we similarly have a
conserved energy $E_k$ corresponding at leading order to the $\dot
H^{\frac{k}2}$ norm of $\phi$.
This is closely related to the Hamiltonian structure of the equation, which uses the symplectic form
\[
\omega (\psi_1,\psi_2) = \int \psi_1 \partial_x \psi_2\, dx
\]
with associated map $J = \partial_x$. Then the Benjamin-Ono equation is generated
by the Hamiltonian $E_1$ and symplectic form $\omega$. $E_0$ generates the group of translations.
All higher order conserved energies can be viewed in turn as Hamiltonians
for a sequence of commuting flows, which are known as the Benjamin-Ono
hierarchy of equations.
The Benjamin-Ono equation is a dispersive equation, i.e. the group velocity of waves depends
on the frequency. Precisely, the dispersion relation for the linear part is given by
\[
\omega(\xi) = - \xi |\xi|,
\]
and the group velocity for waves of frequency $\xi$ is $v = 2|\xi|$.
Here we are considering real solutions, so the positive and negative
frequencies are matched. However, if one restricts the linear
Benjamin-Ono waves to either positive or negative frequencies,
then one obtains a linear Schr\"odinger equation with a choice of signs.
Thus one expects that many features arising in the study of nonlinear Schr\"{o}dinger equations
will also appear in the study of Benjamin-Ono.
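The dispersion relation can be illustrated numerically (a minimal sketch, not part of the paper's argument): the linear propagator $e^{-tH\partial_x^2}$ acts in Fourier space as multiplication by $e^{-i\xi|\xi|t}$, and a localized wave packet conserves its $L^2$ norm while drifting to the right, consistent with $v = 2|\xi| > 0$.

```python
import numpy as np

# Minimal numerical sketch: the linear BO propagator e^{-t H d_x^2} is the
# Fourier multiplier exp(-i xi |xi| t).  A localized wave packet keeps its
# L^2 norm and drifts rightward, matching the group velocity v = 2|xi| > 0.
n, L = 1024, 200.0
dx = L / n
x = np.linspace(-L/2, L/2, n, endpoint=False)
xi = 2*np.pi*np.fft.fftfreq(n, d=dx)

def evolve(psi, t):
    """Apply the linear Benjamin-Ono flow for time t."""
    return np.real(np.fft.ifft(np.exp(-1j*xi*np.abs(xi)*t)*np.fft.fft(psi)))

psi0 = np.exp(-x**2)                      # small localized data
psi5 = evolve(psi0, 5.0)
mass0, mass5 = np.sum(psi0**2)*dx, np.sum(psi5**2)*dx
center = np.sum(x*psi5**2)*dx / mass5     # mass-weighted center
assert abs(mass0 - mass5) < 1e-8          # L^2 conservation
assert center > 1.0                       # the energy has moved right
```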
Last but not least, when working with Benjamin-Ono one has to take into account its quasilinear
character. A cursory examination of the equation might lead one to the conclusion that
it is in effect semilinear. It is only a deeper analysis (see \cite{MST,KT}) which reveals the fact that the
derivative in the nonlinearity is strong enough to ensure that the nonlinearity is non-perturbative,
and that only continuous dependence on the initial data may hold, even at high regularity.
Considering local and global well-posedness results in Sobolev spaces $H^s$,
a natural threshold is given by the fact that the Benjamin-Ono equation has a scale invariance,
\begin{equation}
\label{si}
\phi(t,x) \to \lambda \phi(\lambda^2 t,\lambda x),
\end{equation}
and the scale invariant Sobolev space associated to this scaling is $\dot H^{-\frac12}$.
There have been many developments in the well-posedness theory for the
Benjamin-Ono equation; see \cite{bp, ik, KT, KK, tao, MST, ponce,
iorio, saut}. Well-posedness in weighted Sobolev spaces was considered in \cite{fp} and \cite{fpl},
while soliton stability was studied in \cite{km, gusta}.
There is also closely related work on an extended class
of equations, called generalized Benjamin-Ono equations, for which we refer
the reader to \cite{herr}, \cite{hikk} and references therein. More extensive discussion of Benjamin-Ono and related fluid
models can be found in the survey papers \cite{abfs} and \cite{klein-saut}.
Presently, for the Cauchy problem at low regularity,
the existence and uniqueness result at the level of $H^s(\mathbf{R})$
data is now known for the Sobolev index $s\geq 0$. Well-posedness
in the range $-\frac12 \leq s < 0$ appears to be an open question.
We now review some of the key thresholds in this analysis.
The $H^3$ well-posedness result was obtained by Saut in
\cite{saut}, using energy estimates. For convenience we use his result as a starting point for
our work, which is why we recall it here:
\begin{theorem}
The Benjamin-Ono equation is globally well-posed in $H^3$.
\end{theorem}
The $H^1$ threshold is another important one, and it was reached by Tao~\cite{tao}; his
article is highly relevant to the present work, and it is where the idea
of renormalization is first used in the study of Benjamin-Ono equation.
The $L^2$ threshold was first reached by Ionescu and Kenig \cite{ik},
essentially by implementing Tao's renormalization argument in the
context of a much more involved and more delicate functional setting,
inspired in part by the work of the second author \cite{mp} and of
Tao \cite{tao} on wave maps. This is imposed by the fact that the
derivative in the nonlinearity is borderline from the perspective of
bilinear estimates, i.e. there is no room for high frequency losses.
An attempt to simplify the $L^2$ theory was later made by
Molinet-Pilod~\cite{mp}; however, their approach still involves a
rather complicated functional structure, involving not only $X^{s,b}$
spaces but additional weighted mixed norms in frequency.
Our first goal here is to revisit the $L^2$ theory for the Benjamin-Ono equation,
and (re)prove the following theorem:
\begin{theorem}\label{thm:lwp}
The Benjamin-Ono equation is globally well-posed in $L^2$.
\end{theorem}
Since the $L^2$ norm of the solutions is conserved, this is in effect a local in time result,
trivially propagated in time by the conservation of mass. In particular it says little
about the long time properties of the flow, which will be our primary target here.
Given the quasilinear nature of the Benjamin-Ono equation, here it is important
to specify the meaning of well-posedness. This is summarized in the following properties:
\begin{description}
\item [(i) Existence of regular solutions] For each initial data $\phi_0 \in H^3$ there exists a unique
global solution $\phi \in C({\mathbb R};H^3)$.
\item[(ii) Existence and uniqueness of rough solutions] For each initial data $\phi_0 \in L^2$
there exists a solution $\phi \in C({\mathbb R};L^2)$, which is the unique limit of regular solutions.
\item[(iii) Continuous dependence ] The data to solution map $\phi_0 \to \phi$ is continuous
from $L^2$ into $C(L^2)$, locally in time.
\item [(iv) Higher regularity] The data to solution map $\phi_0 \to \phi$ is continuous
from $H^s$ into $C(H^s)$, locally in time, for each $s > 0$.
\item[(v) Weak Lipschitz dependence] The flow map for $L^2$ solutions is locally Lipschitz
in the $ H^{-\frac12}$ topology.
\end{description}
The weak Lipschitz dependence part appears to be a new result, even though certain estimates
for differences of solutions are part of the prior proofs in \cite{ik} and \cite{mp}.
Our approach to this result is based on the idea of normal forms,
introduced by Shatah \cite{shatah, hikk} in the dispersive realm
in the context of studying the long time behavior of dispersive pde's.
Here we turn it around and consider it in the context of studying
local well-posedness. In doing this, the chief difficulty we face is
that the standard normal form method does not readily apply for
quasilinear equations.
One very robust adaptation of the normal form method to quasilinear
equations, called ``quasilinear modified energy method'' was
introduced earlier by the authors and collaborators in \cite{BH}, and
then further developed in the water wave context first in \cite{hit}
and later in \cite{its, itg, itgr, itc}. There the idea is to modify the energies, rather
than apply a normal form transform to the equations; this method is then
successfully used in the study of long time behavior of solutions.
Alazard and Delort~\cite{ad, ad1} have also developed another way of constructing
the same type of almost conserved energies by using a partial normal form transformation
to symmetrize the equation, effectively diagonalizing the leading part of the energy.
The present paper provides a different quasilinear adaptation of the normal form method.
Here we do transform the equation, but not with a direct quadratic normal form correction
(which would not work). Instead we split the quadratic nonlinearity in two parts,
a milder part and a paradifferential part\footnote{This splitting is of course not a new idea,
and it has been used for some time in the study of quasilinear problems.}.
Then we construct our normal form correction in two steps: first a direct quadratic correction
for the milder part, and then a renormalization type correction for the paradifferential part.
For the second step we use a paradifferential version of Tao's renormalization argument \cite{tao}.
Compared with the prior proofs of $L^2$ well-posedness in \cite{ik}
and \cite{mp}, our functional setting is extremely simple, using only
Strichartz norms and bilinear $L^2$ bounds. Furthermore, the bilinear
$L^2$ estimates are proved in full strength but used only in a very
mild way, in order to remove certain logarithmic divergences which
would otherwise arise. The (minor) price to pay is that the argument
is now phrased as a bootstrap argument, same as in \cite{tao}.
However this is quite natural in a quasilinear context.
One additional natural goal in this problem is the enhanced uniqueness
question, namely to provide relaxed conditions which must be imposed
on an arbitrary $L^2$ solution in order to compel it to agree with the
$L^2$ solution provided in the theorem. This problem has received
substantial attention in the literature but is beyond the scope of the
present paper. Instead we refer the reader to the most up to date
results in \cite{mp}.
We now arrive at the primary goal of this paper. The question we
consider concerns the long time behavior of Benjamin-Ono solutions
with small localized data. Precisely, we are asking what is the
optimal time-scale up to which the solutions have linear dispersive
decay. Our main result is likely optimal, and asserts that this holds
almost globally in time:
\begin{theorem}
Assume that the initial data $\phi_0$ for \eqref{bo} satisfies
\begin{equation}\label{data}
\| \phi_0\|_{L^2} + \| x \phi_0\|_{L^2} \leq \epsilon \ll 1.
\end{equation}
Then the solution $\phi$ satisfies the dispersive decay bounds
\begin{equation} \label{point}
|\phi(t,x)| +\vert H\phi (t,x)\vert \lesssim \epsilon |t|^{-\frac12} \langle x_- t^{-\frac12}\rangle^{-\frac12}
\end{equation}
up to time
\[
|t| \lesssim T_{\epsilon}:= e^{\frac{c}{\epsilon}}, \qquad c \ll 1.
\]
\end{theorem}
The novelty in our result is that the solution exhibits dispersive
decay. We also remark that better decay holds in the region $x <
0$. This is because of the dispersion relation, which sends all the
propagating waves to the right.
A key ingredient of the proof of our result is a seemingly new
conservation law for the Benjamin-Ono equation, which is akin to a
normal form associated to a corresponding linear conservation law.
This result closely resembles the authors' recent work in \cite{nls}
(see also further references therein) on the cubic nonlinear
Schr\"odinger problem (NLS)
\begin{equation}
i u_t - u_{xx} = \pm u^3, \qquad u(0) = u_0,
\end{equation}
with the same assumptions on the initial data. However, our result
here is only almost global, unlike the global NLS result in \cite{nls}.
To understand why the cubic NLS problem serves as a good comparison,
we first note that both the Benjamin-Ono equation and the cubic NLS
problem have $\dot H^{-\frac12}$ scaling. Further, for a restricted
frequency range of nonlinear interactions in the Benjamin-Ono
equation, away from zero frequency, a normal form transformation turns
the quadratic BO nonlinearity into a cubic NLS type problem for which
the methods of \cite{nls} apply. Thus, one might naively expect a
similar global result. However, it appears that the Benjamin-Ono
equation exhibits more complicated long range dynamics near frequency
zero, which have yet to be completely understood.
One way to heuristically explain these differences is provided by
the inverse scattering point of view. While the small data cubic
focusing NLS problem has no solitons, in the
Benjamin-Ono case the problem could have solitons for arbitrarily
small localized data. As our result can only hold in a non-soliton
regime, the interesting question then becomes what is the lowest
time-scale where solitons can emerge from small localized data. A
direct computation\footnote{This is based on the inverse scattering theory
for the Benjamin-Ono equation, and will be described in subsequent work.}
shows that this is indeed the almost global time
scale, thus justifying our result.
We further observe that our result opens the way for the next natural
step, which is to understand the global in time behavior of solutions,
where in the small data case one expects a dichotomy between dispersive solutions and
dispersive solutions plus one soliton:
\begin{conjecture}[Soliton resolution]
Any global Benjamin-Ono solution which has small data as in \eqref{data} must
either be dispersive, or it must resolve into a soliton and a dispersive part.
\end{conjecture}
\section{Definitions and review of notations}
\subsubsection*{The big O notation:} We use the notation $A \lesssim
B$ or $A = O(B)$ to denote the estimate that $|A| \leq C B$, where $C$
is a universal constant which will not depend on $\epsilon$. If $X$
is a Banach space, we use $O_X(B)$ to denote any element in $X$ with
norm $O(B)$; explicitly we say $u =O_X(B)$ if $\Vert u\Vert _X\leq C
B$. We use $\langle x\rangle$ to denote the quantity $\langle x
\rangle := (1 + |x|^2)^{1/2}$.
\subsubsection*{Littlewood-Paley decomposition:} One important tool in
dealing with dispersive equations is the Littlewood-Paley
decomposition. We recall its definition and also its usefulness in
the next paragraph. We begin with the Riesz decomposition
\[
1 = P_- + P_+,
\]
where $P_\pm$ are the Fourier projections to $\pm [0,\infty)$; from
\[
\widehat{Hf}(\xi)=-i\mathop{\mathrm{sgn}} (\xi)\, \hat{f}(\xi),
\]
we observe that
\begin{equation}\label{hilbert}
iH = P_{+} - P_{-}.
\end{equation}
Let $\psi$ be a bump function adapted to $[-2,2]$ and equal to $1$ on
$[-1,1]$. We define the Littlewood-Paley operators $P_{k}$ and
$P_{\leq k} = P_{<k+1}$ for $k \geq 0$ by defining
$$ \widehat{P_{\leq k} f}(\xi) := \psi(\xi/2^k) \hat f(\xi)$$
for all $k \geq 0$, and $P_k := P_{\leq k} - P_{\leq k-1}$ (with the
convention $P_{\leq -1} = 0$). Note that all the operators $P_k$,
$P_{\leq k}$ are bounded on all translation-invariant Banach spaces,
thanks to Minkowski's inequality. We define $P_{>k} := P_{\geq k+1}
:= 1 - P_{\leq k}$.
For simplicity, and because $P_{\pm}$ commutes with the
Littlewood-Paley projections $P_k$, $P_{<k}$, we will introduce the
following notation $P^{\pm}_k:=P_kP_{\pm}$ , respectively
$P^{\pm}_{<k}:=P_{\pm}P_{<k}$. In the same spirit, we introduce the
notations $\phi^{+}_k:=P^{+}_k\phi$, and $\phi^{-}_k:=P^{-}_k\phi$,
respectively.
Given the projectors $P_k$, we also introduce additional projectors $\tilde P_k$
with slightly enlarged support (say by $2^{k-4}$) and symbol equal to $1$ in the support of $P_k$.
From Plancherel's theorem we have the bound
\begin{equation}\label{eq:planch}
\| f \|_{H^s_x} \approx (\sum_{k=0}^\infty \| P_k f\|_{H^s_x}^2)^{1/2}
\approx (\sum_{k=0}^\infty 2^{ks} \| P_k f\|_{L^2_x}^2)^{1/2}
\end{equation}
for any $s \in \mathbb R$.
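As a concrete illustration (a numerical sketch with a piecewise-linear stand-in for the smooth bump $\psi$; not part of the paper), the projections $P_{\leq k}$ can be realized as Fourier multipliers, and the dyadic pieces $P_k f$ telescope back to $f$:

```python
import numpy as np

# Littlewood-Paley projections as Fourier multipliers.  psi_bump is a
# piecewise-linear stand-in for a smooth bump equal to 1 on [-1,1] and
# supported in [-2,2].
def psi_bump(r):
    return np.clip(2.0 - np.abs(r), 0.0, 1.0)

def P_leq(f, k, xi):
    return np.fft.ifft(psi_bump(xi/2.0**k)*np.fft.fft(f)).real

n, L = 512, 100.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
xi = 2*np.pi*np.fft.fftfreq(n, d=L/n)
f = np.exp(-x**2)*np.cos(7*x)
# P_k = P_{<=k} - P_{<=k-1} with the convention P_{<=-1} = 0; the pieces
# telescope to P_{<=11} f, which equals f once 2^11 exceeds all grid frequencies.
pieces = [P_leq(f, 0, xi)]
pieces += [P_leq(f, k, xi) - P_leq(f, k-1, xi) for k in range(1, 12)]
assert np.allclose(np.sum(pieces, axis=0), f, atol=1e-10)
```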
\subsubsection*{Multi-linear expressions} We shall now make use of a
convenient notation for describing multi-linear expressions of product
type, as in \cite{tao-c}. By $L(\phi_1,\cdots,\phi_n)$ we denote a
translation invariant expression of the form
\[
L(\phi_1,\cdots,\phi_n)(x) = \int K(y) \phi_1(x+y_1) \cdots \phi_n(x+y_n) \, dy,
\]
where $K \in L^1$. More generally, one can replace $Kdy$ by any bounded measure.
By $L_k$ we denote such multilinear expressions whose output is localized at frequency $2^k$.
This $L$ notation is extremely handy for expressions such as the
ones we encounter here; for example we can re-express the normal
form \eqref{commutator B_k} in a simpler way as shown in
Section~\ref{s:local}. It also behaves well with respect to reiteration,
e.g.
\[
L(L(u,v),w) = L(u,v,w).
\]
Multilinear $L$ type expressions can easily be estimated
in terms of linear bounds for their entries. For instance we have
\[
\| L(u_1,u_2) \|_{L^r} \lesssim \|u_1\|_{L^{p_1}} \|u_2\|_{L^{p_2}}, \qquad \frac{1}{p_1}+ \frac{1}{p_2} = \frac{1}{r}.
\]
A slightly more involved situation arises in this article when we seek to
use bilinear bounds in estimates for an $L$ form. There we need to account for the
effect of uncorrelated translations, which are allowed given the integral bound
on the kernel of $L$. To account for that we use the translation group $\{T_y\}_{y \in {\mathbb R}}$,
\[
(T_y u)(x) = u(x+y),
\]
and estimate, say, a trilinear form as follows:
\[
\|L(u_1,u_2,u_3) \|_{L^r} \lesssim \|u_1\|_{L^{p_1}} \sup_{y \in {\mathbb R}}
\|u_2 T_y u_3\|_{L^{p_2}}, \qquad \frac{1}{p_1}+ \frac{1}{p_2} = \frac{1}{r} .
\]
On occasion we will write this in a shorter form
\[
\|L(u_1,u_2,u_3) \|_{L^r} \lesssim \|u_1\|_{L^{p_1}} \|L(u_2,u_3)\|_{L^{p_2}}.
\]
To prove the boundedness in $L^2$ of the normal form transformation,
we will use the following proposition from Tao \cite{tao-c}; for
completeness we recall it below:
\begin{lemma}[Leibniz rule for $P_k$]\label{commutator} We have the commutator identity
\begin{equation}
\label{commute}
\left[ P_k\, ,\, f\right] g = L(\partial_x f, 2^{-k} g).
\end{equation}
\end{lemma}
When classifying cubic terms (and not only) obtained after
implementing a normal form transformation, we observe that having a
commutator structure is a desired feature. In particular Lemma
~\ref{commutator} tells us that when one of the entry (call it $g$) has
frequency $\sim 2^k$ and the other entry (call it $f$) has frequency
$\lesssim 2^k$, then $P_k(fg) - f P_k g$ effectively shifts a
derivative from the high-frequency function $g$ to the low-frequency
function $f$. This shift will generally ensure that all such
commutator terms will be easily estimated.
\subsubsection*{Frequency envelopes.} Before stating one of the main
theorems of this paper, we revisit the \emph{frequency envelope}
notion; it will turn out to be very useful, and also an elegant tool
later in the proof of the local well-posedness result, both in the
proof of the a-priori bounds for solutions for the Cauchy problem
\eqref{bo} with data in $L^2$, which we state in
Section~\ref{s:local}, and in the proof of the bounds for the
linearized equation, in the following section.
Following Tao's paper \cite{tao} we say that a sequence $c_{k}\in l^2$
is an $L^2$ frequency envelope for $\phi \in L^2$ if
\begin{itemize}
\item[i)] $\sum_{k=0}^{\infty}c_k^2 \lesssim 1$;\\
\item[ii)] it is slowly varying, $c_j /c_k \leq 2^{\delta \vert j-k\vert}$, with $\delta$ a very small universal constant;\\
\item[iii)] it bounds the dyadic norms of $\phi$, namely $\Vert P_{k}\phi \Vert_{L^2} \leq c_k$.
\end{itemize}
Given a frequency envelope $c_k$ we define
\[
c_{\leq k} = (\sum_{j \leq k} c_j^2)^\frac12, \qquad c_{\geq k} = (\sum_{j \geq k} c_j^2)^\frac12.
\]
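A standard way to manufacture such an envelope from the dyadic norms $a_j = \Vert P_j\phi\Vert_{L^2}$ is the regularization $c_k = \sup_j 2^{-\delta|j-k|}a_j$; the following sketch (with $\delta$ and the sample norms chosen purely for illustration) checks properties ii) and iii) directly:

```python
import numpy as np

# Build a slowly varying envelope c_k = max_j 2^{-delta|j-k|} a_j from
# sample dyadic norms a_j; delta and a are illustrative choices.
delta = 0.1
a = np.array([1.0, 0.5, 0.001, 0.3, 0.0, 0.01])
k = np.arange(len(a))
dist = np.abs(k[:, None] - k[None, :])
c = np.max(2.0**(-delta*dist)*a[None, :], axis=1)
assert np.all(c >= a)                                # iii) bounds dyadic norms
ratios = c[:, None]/c[None, :]
assert np.all(ratios <= 2.0**(delta*dist) + 1e-12)   # ii) slowly varying
```

The slowly varying property follows from the triangle inequality $|j-m| \geq |k-m| - |j-k|$ applied inside the supremum.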
\begin{remark}
To avoid dealing with certain issues arising at low frequencies,
we can harmlessly make the extra assumption that $c_{0}\approx 1$.
\end{remark}
\begin{remark}
Another useful variation is to weaken the slowly varying assumption to
\[
2^{- \delta \vert j-k\vert} \leq c_j /c_k \leq 2^{C \vert j-k\vert}, \qquad j < k,
\]
where $C$ is a fixed but possibly large constant. All the results in this paper are compatible with this choice.
This offers the extra flexibility of providing higher regularity results by the same argument.
\end{remark}
\section{The linear flow}
Here we consider the linear Benjamin-Ono flow,
\begin{equation}\label{bo-lin}
(\partial_t + H\partial^2)\psi = 0, \qquad \psi(0) = \psi_0.
\end{equation}
Its solution $\psi(t) = e^{-t H\partial^2} \psi_0$ has conserved $L^2$ norm, and satisfies
standard dispersive bounds:
\begin{proposition}\label{p:bodisp}
The linear Benjamin-Ono flow satisfies the dispersive bound
\begin{equation}
\|e^{-t H\partial^2}\|_{L^1 \to L^\infty} \lesssim t^{-\frac12}.
\end{equation}
\end{proposition}
This is a well known result. For convenience we outline the classical proof,
and then provide a second, energy estimates based proof.
\begin{proof}[First proof of Proposition~\ref{p:bodisp}]
Applying the spatial Fourier transform and solving the corresponding
differential equation we obtain the following solution of the linear
Benjamin-Ono equation
\begin{equation}
\label{linear-sol}
\psi (t,x)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-i\vert \xi\vert \xi t+i\xi (x-y)}\psi_{0}(y)\, dyd\xi.
\end{equation}
We change coordinates $\xi \rightarrow t^{-\frac{1}{2}}\eta$ and
rewrite \eqref{linear-sol} as
\[
\psi (t,x)=t^{-\frac{1}{2}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-i\vert \eta\vert \eta +i\eta t^{-\frac{1}{2}} (x-y)}\psi_{0}(y)\, dyd\eta ,
\]
which can be further seen as a convolution
\[
\psi (t,x)=t^{-\frac{1}{2}} A(t^{-\frac{1}{2}}x)\ast \psi_{0}(x),
\]
where $A(x)$ is an oscillatory integral
\[
A(x):=\int_{-\infty}^{\infty}e^{-i\vert \eta \vert \eta +i\eta x}\, d\eta .
\]
It remains to show that $A$ is bounded, which follows by a standard stationary phase argument, with a minor
complication arising from the fact that the phase is not $C^2$ at $\eta = 0$.
\end{proof}
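Before turning to the second proof, we note that the $t^{-\frac12}$ bound can be sanity-checked numerically (a rough sketch, not part of the argument): discretizing the Fourier integral for the kernel of $e^{-tH\partial^2}$, the quantity $t^{1/2}\sup_x$ of its modulus stays of size $O(1)$.

```python
import numpy as np

# Discretize K_t(x) = (1/2pi) * integral of exp(i x xi - i xi|xi| t) d xi on
# a large periodic grid; sqrt(t) * sup|K_t| should remain O(1), reflecting
# the t^{-1/2} dispersive decay.  (Truncation/aliasing make this rough.)
n, L = 2**14, 4000.0
dx = L/n
xi = 2*np.pi*np.fft.fftfreq(n, d=dx)
for t in [2.0, 4.0, 8.0, 16.0]:
    K = np.fft.ifft(np.exp(-1j*xi*np.abs(xi)*t))/dx   # approximate kernel
    val = np.sqrt(t)*np.max(np.abs(K))
    assert 0.05 < val < 2.0
```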
The second proof will also give us a good starting point in our study of the dispersive properties for the
nonlinear equation. This is based on using the operator
\[
L = x- 2 t H \partial_x ,
\]
which is the push forward of $x$ along the linear flow,
\[
L(t) = e^{-t H\partial^2} x e^{t H\partial^2} ,
\]
and thus commutes with the linear operator,
\[
[ L, \partial_t + H\partial^2] = 0.
\]
In particular this shows that for solutions $\psi$ to the homogeneous equation, the
quantity $\|L\psi \|_{L^2}^2$ is also a conserved quantity.
\begin{proof}[Second proof of Proposition~\ref{p:bodisp}]
We rewrite the dispersive estimate in the form
\[
\| e^{-t H \partial^2} \delta_0 \|_{L^\infty} \lesssim t^{-\frac12}.
\]
We approximate $\delta_0$ with standard bump functions $\alpha_\epsilon(x) = \epsilon^{-1} \alpha(x/\epsilon)$,
where $\alpha$ is a $C_0^\infty$ function with integral one.
It suffices to show the uniform bound
\begin{equation}\label{phie}
\| e^{-t H \partial^2} \alpha_\epsilon \|_{L^\infty} \lesssim t^{-\frac12}.
\end{equation}
The functions $\alpha_\epsilon$ satisfy the $L^2$ bound
\[
\|\alpha_\epsilon\|_{L^2} \lesssim \epsilon^{-\frac12}, \qquad \|x \alpha_\epsilon\|_{L^2} \lesssim \epsilon^{\frac12}.
\]
By energy estimates, this implies that
\[
\|e^{-t H \partial^2}\alpha_\epsilon\|_{L^2} \lesssim \epsilon^{-\frac12}, \qquad \|L e^{-t H \partial^2} \alpha_\epsilon\|_{L^2}
\lesssim \epsilon^{\frac12}.
\]
Then the bound \eqref{phie} is a consequence of the following
\begin{lemma}
The following pointwise bound holds:
\begin{equation}
\label{prima}
\|\psi\|_{L^\infty}+\|H\psi\|_{L^\infty} \lesssim t^{-\frac12} \|\psi\|_{L^2}^\frac12 \| L\psi\|_{L^2}^\frac12 .
\end{equation}
\end{lemma}
We remark that the operator $L$ is elliptic in the region $x < 0$; therefore
a better pointwise bound is expected there. Indeed, we have the estimate
\begin{equation}
\label{prima+}
|\psi(t,x)| + |H \psi(t,x)| \leq t^{-\frac12} (1+ |x_-| t^{-\frac12})^{-\frac14} \|\psi\|_{L^2}^\frac12 \| L\psi\|_{L^2}^\frac12 ,
\end{equation}
where $x_-$ stands for the negative part of $x$. To avoid repetition we do not prove this here,
but it does follow from the analysis in the last section of the paper.
\begin{proof}
Denote
\[
c = \int_{\mathbb R} \psi\, dx .
\]
We first observe that we have
\begin{equation} \label{c-est}
c^2 \lesssim \| \psi\|_{L^2} \|L\psi\|_{L^2}.
\end{equation}
All three quantities are constant along the linear Benjamin-Ono flow, so it suffices to verify this
at $t = 0$. But there this inequality becomes
\[
c^2 \lesssim \| \psi\|_{L^2} \| x \psi\|_{L^2},
\]
which is straightforward using H\"older's inequality on each dyadic spatial region.
Next we establish the uniform $t^{-\frac12}$ pointwise bound. We rescale to $t = 1$.
Denote $u = P^+ \psi$, so that $\psi = 2 \Re u$ and $H\psi = 2 \Im u$. Hence it suffices to
obtain the pointwise bound for $u$.
We begin with the relation
\[
( x + 2i \partial) u = P^+ L \psi + c,
\]
where the $c$ term arises from the commutator of $P^+$ and $x$.
We rewrite this as
\[
\partial_x ( ue^{-\frac{ix^2}{4}}) = \frac{1}{2i} e^{-\frac{ix^2}{4}} ( P^+ L \psi + c).
\]
Let $F$ be a bounded antiderivative for $\frac{1}{2i} e^{-\frac{ix^2}{4}}$.
Then we introduce the auxiliary function
\[
v = ue^{-\frac{ix^2}{4}} - cF,
\]
which satisfies
\[
\partial_x v = \frac{1}{2i} e^{-\frac{ix^2}{4}} ( P^+ L \psi ).
\]
In view of the previous bound \eqref{c-est} for $c$, it remains to show that
\begin{equation}\label{v-est}
\| v\|_{L^\infty}^2 \lesssim c^2 +\| v_x\|_{L^2} \| v+cF\|_{L^2} .
\end{equation}
On each interval $I$ of length $R$ we have by H\"older's inequality
\[
\| v\|_{L^\infty(I)} \lesssim R^{\frac12} \| v_x\|_{L^2(I)} + R^{-\frac12} \| v\|_{L^2(I)}.
\]
Thus we obtain
\[
\| v\|_{L^\infty}^2 \lesssim R \|v_x\|_{L^2}^2 + R^{-1} ( \|v+cF\|_{L^2}^2 + c^2 R)
= c^2 + R \|v_x\|_{L^2}^2 + R^{-1} \|v+cF\|_{L^2}^2 ,
\]
and \eqref{v-est} follows by optimizing over $R$, i.e. choosing $R = \|v+cF\|_{L^2}/\|v_x\|_{L^2}$.
\end{proof}
\end{proof}
One standard consequence of the dispersive estimates is the Strichartz inequality,
which applies to solutions to the inhomogeneous linear Benjamin-Ono equation.
\begin{equation}\label{bo-lin-inhom}
(\partial_t + H\partial^2)\psi = f, \qquad \psi(0) = \psi_0.
\end{equation}
We define the Strichartz space $S$ associated to the $L^2$ flow by
\[
S = L^\infty_t L^2_x \cap L^4_t L^\infty_x,
\]
as well as its dual
\[
S' = L^1_t L^2_x + L^{\frac43} _t L^1_x .
\]
We will also use the notation
\[
S^{s} = \langle D \rangle^{-s} S
\]
to denote the similar spaces associated to the flow in $H^s$.
The Strichartz estimates in the $L^2$ setting are summarized in the following
\begin{lemma}
Assume that $\psi$ solves \eqref{bo-lin-inhom} in $[0,T] \times {\mathbb R}$. Then
the following estimate holds.
\begin{equation}
\label{strichartz}
\| \psi\|_S \lesssim \|\psi_0 \|_{L^2} + \|f\|_{S'} .
\end{equation}
\end{lemma}
We remark that these Strichartz estimates can also be viewed as a consequence\footnote{Except for the $L^4_tL_x^{\infty}$ bound, as the Hilbert transform is not bounded on $L^{\infty}$.}
of the similar estimates for the linear Schr\"odinger equation. This is because the two flows
agree when restricted to functions with frequency localization in ${\mathbb R}^+$.
We also remark that we have the following Besov version of the
estimates,
\begin{equation}
\label{strichartzB}
\| \psi\|_{\ell^2 S} \lesssim \|\psi_0 \|_{L^2} + \|f\|_{\ell^2 S'},
\end{equation}
where
\[
\| \psi \|_{\ell^2 S}^2 = \sum_k \| \psi _k \|_{ S}^2, \qquad \| \psi \|_{\ell^2 S'}^2 = \sum_k \| \psi _k \|_{ S'}^2 .
\]
The last property of the linear Benjamin-Ono equation we will use here is the
bilinear $L^2$ estimate, which is as follows:
\begin{lemma}
\label{l:bi}
Let $\psi^1$, $\psi^2$ be two solutions to the inhomogeneous linear Benjamin-Ono equation with data
$\psi^1_0$, $\psi^2_0$ and inhomogeneous terms $f^1$ and $f^2$. Assume
that the sets
\[
E_i = \{ |\xi|, \xi \in \text{supp } \hat \psi^i \}
\]
are disjoint. Then we have
\begin{equation}
\label{bi-di}
\| \psi^1 \psi^2\|_{L^2} \lesssim \frac{1}{\text{dist}(E_1,E_2)^{\frac12}}
( \|\psi_0^1 \|_{L^2} + \|f^1\|_{S'}) ( \|\psi_0^2 \|_{L^2} + \|f^2\|_{S'}).
\end{equation}
\end{lemma}
These bounds also follow from the similar bounds for the Schr\"odinger equation, where only the separation
of the supports of the Fourier transforms is required. They can be obtained in a standard manner from the similar bound for products of solutions to the homogeneous equation, for which we refer the reader to \cite{tao-m}.
One corollary of this applies in the case when we look at the product
of two solutions which are supported in different dyadic regions:
\begin{corollary}\label{c:bi-jk}
Assume that $\psi^1$ and $\psi^2$ as above are supported in dyadic regions $\vert \xi\vert \approx 2^j$ and $\vert \xi\vert \approx 2^k$ with $\vert j-k\vert >2$. Then
\begin{equation}
\label{bi}
\| \psi^1 \psi^2\|_{L^2} \lesssim 2^{-\frac{\max\left\lbrace j,k \right\rbrace }{2}}
( \|\psi_0^1 \|_{L^2} + \|f^1\|_{S'}) ( \|\psi_0^2 \|_{L^2} + \|f^2\|_{S'}).
\end{equation}
\end{corollary}
Another useful case is when we look at the product
of two solutions which are supported in the same dyadic region, but with frequency separation:
\begin{corollary}\label{c:bi-kk}
Assume that $\psi^1$ and $\psi^2$ as above are supported in the dyadic region
$\vert \xi\vert \approx 2^k$, but have $O(2^k)$ frequency separation between their supports.
Then
\begin{equation}
\label{bi-kk}
\| \psi^1 \psi^2\|_{L^2} \lesssim 2^{-\frac{k}{2}}
( \|\psi_0^1 \|_{L^2} + \|f^1\|_{S'}) ( \|\psi_0^2 \|_{L^2} + \|f^2\|_{S'}).
\end{equation}
\end{corollary}
\section{Normal form analysis and a priori bounds}
In this section we establish a priori $L^2$ bounds for regular
($H^3_x$) solutions to the Cauchy problem \eqref{bo}. First, we
observe from the scale invariance \eqref{si} of the equation
\eqref{bo} that it suffices to work with solutions for which the $L^2$
norm is small, in which case it is natural to consider these solutions on the time interval $[-1,1]$ (i.e., we set $T := 1$).
Precisely, we may assume that the initial data satisfies
\begin{equation}
\label{small}
\Vert \phi (0)\Vert_{L^2_x}\leq \epsilon .
\end{equation}
Then our main a priori estimate is as follows:
\begin{theorem}\label{apriori} Let $\phi$ be an $H^3_x$ solution to
\eqref{bo} with small initial data as in \eqref{small}. Let $\left\lbrace
c_k\right\rbrace _{k=0}^{\infty}\in l^2 $ be such that $\epsilon c_k$
is a frequency envelope for the initial data $\phi(0)$ in $L^2$.
Then we have the Strichartz bounds
\begin{equation}\label{u-small}
\Vert \phi_k \Vert_{S^{0}([-1,1] \times \mathbf{R})} \lesssim \epsilon c_k,
\end{equation}
as well as the bilinear bounds
\begin{equation}
\label{bilinear-small}
\Vert \phi_j \cdot \phi_k\Vert_{L^2}\lesssim 2^{-\frac{\max \left\lbrace j,k\right\rbrace}{2} }\epsilon^2 c_k\, c_j, \qquad j\neq k.
\end{equation}
\end{theorem}
Here, the implicit constants do not depend on the $H^3_x$ norm of
the initial data $\phi(0)$, but they will depend on $\Vert
\phi(0)\Vert_{L^2}$. A standard iteration method will not work,
because the linear part of the Benjamin-Ono equation does not have
enough smoothing to compensate for the derivative in the
nonlinearity. To resolve this difficulty we use ideas related to
the normal form method, first introduced by Shatah in \cite{shatah}
in the context of dispersive PDEs. The main principle in the
normal form method is to apply a quadratic correction to the
unknown in order to replace a nonresonant quadratic nonlinearity by
a milder cubic nonlinearity. Unfortunately this method does not
apply directly here, because some terms in the quadratic correction
are unbounded, and so are some of the cubic terms generated by the
correction. To bypass this issue, here we develop a more favorable
implementation of normal form analysis. This is carried out in two
steps:
\begin{itemize}
\item a partial normal form transformation which is bounded and removes some of the quadratic nonlinearity
\item a conjugation via a suitable exponential (also called gauge transform, \cite{tao}) which removes in a bounded way the remaining part of the quadratic nonlinearity.
\end{itemize}
This will transform the Benjamin-Ono equation \eqref{bo} into an equation where the quadratic terms have been removed and replaced by cubic perturbative terms.
\subsection{The quadratic normal form analysis} In this subsection we formally derive the
normal form transformation for the Benjamin-Ono equation, \eqref{bo}.
Even though we will not make use of it directly we will still use portions
of it to remove certain ranges of frequency interactions from
the quadratic nonlinearity.
Before going further, we emphasize that by a \emph{normal form} we
refer to any type of transformation which removes the nonresonant
quadratic terms; any two such transformations agree up
to cubic and higher order terms.
The normal form idea goes back to Birkhoff, who used it in the
context of ordinary differential equations. Later, Shatah
\cite{shatah} was the first to implement it in the context of partial
differential equations. In general, the fact that one can compute
such a normal form for a partial differential equation with quadratic
nonresonant interactions is not sufficient: the transformation must
also be invertible and, as seen in other works, must come with good
energy estimates. In the context of quasilinear equations one
almost never expects the normal form transformation to be bounded, and
new ideas are needed. In the Benjamin-Ono setting such ideas were
first introduced by Tao \cite{tao} whose renormalization is a partial normal
form transformation in disguise. More recently, other ideas have been
introduced in the quasilinear context by Wu \cite{wu}, Hunter-Ifrim
\cite{hi}, Hunter-Ifrim-Tataru \cite{BH}, Alazard-Delort \cite{ad, ad1} and Hunter-Ifrim-Tataru
\cite{hit}.
In particular, for the Benjamin-Ono equation we seek a quadratic transformation
\[
\tilde{\phi} =\phi +B(\phi, \phi),
\]
so that the new variable $\tilde{\phi}$ solves an equation with a cubic nonlinearity,
\[
(\partial_t +H\partial_x^2)\tilde{\phi}=Q(\phi, \phi, \phi),
\]
where $B$ and $Q$ are translation invariant bilinear, respectively trilinear forms.
A direct computation yields an explicit formal spatial expression of the normal form transformation:
\begin{equation}
\label{bo nft}
\tilde{\phi}=\phi -\frac{1}{4}H\phi \cdot \partial^{-1}_x\phi -\frac{1}{4}H\left( \phi \cdot \partial^{-1}_x\phi \right) .
\end{equation}
Note that at low frequencies \eqref{bo nft} is not invertible, which
tends to be a problem if one wants to apply the normal form
transformation directly.
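The low frequency difficulty can also be seen at the level of symbols. At a purely formal level, and up to signs and constants which depend on the convention used for the Hilbert transform, the dispersion relation associated to $\partial_t + H\partial_x^2$ is $\omega(\xi)=\xi \vert \xi\vert$, so the resonance function for a quadratic interaction of input frequencies $\xi$ and $\eta$ is
\[
\Omega(\xi, \eta) = \omega(\xi+\eta)-\omega(\xi)-\omega(\eta) = 2\xi\eta \qquad \text{for } \xi, \eta > 0,
\]
while the symbol of the quadratic nonlinearity has size $\vert \xi + \eta\vert$. Hence the symbol of the bilinear form in \eqref{bo nft} is of the size
\[
\vert b(\xi, \eta)\vert \approx \frac{\xi+\eta}{4\, \xi \eta} = \frac14 \left( \frac{1}{\xi}+\frac{1}{\eta}\right), \qquad \xi, \eta > 0,
\]
which is exactly where the singular factors $\partial_x^{-1}\phi$ in \eqref{bo nft} originate, and which is singular as either input frequency goes to zero.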
\subsection{A modified normal form analysis}
\label{s:local}
We begin by writing the Benjamin-Ono equation \eqref{bo} in a
paradifferential form, i.e., we localize ourselves at a frequency $2^k$,
and then project the equation either onto negative or positive
frequencies:
\[
(\partial_t \mp i\partial^{2}_{x} )\phi_k^{\pm}=P_k^{\pm} (\phi \cdot \phi_{x}).
\]
Since $\phi$ is real, $\phi^- $ is the complex conjugate of $\phi^+$ so it suffices to work with the latter.
Thus, the Benjamin-Ono equation for the positive frequency Littlewood-Paley components $\phi^{+}_k$ is
\begin{equation}
\label{eq-loc}
\begin{aligned}
&\left( i\partial_{t} + \partial^{2}_{x} \right) \phi^+_{k}=iP_k ^{+}(\phi \cdot \phi_{x}).
\end{aligned}
\end{equation}
Heuristically, the worst term in $P_k^+(\phi \cdot \phi_{x})$ occurs
when $ \phi _x$ is at high frequency and $\phi$ is at low
frequency. We can approximate $P_k^+ (\phi \cdot \phi_x)$, by its
leading paradifferential component $\phi_{<k} \cdot \partial_x
\phi_{k}^+$; the remaining part of the nonlinearity will be harmless.
More explicitly we can eliminate it by means of a bounded normal form
transformation.
We will extract out the main term
$i\phi_{<k}\cdot \partial_{x}\phi_k^+$ from the right hand side
nonlinearity and move it to the left, obtaining
\begin{equation}
\label{eq-lin-k}
\left( i\partial_t+\partial_x^2 -i\phi_{<k}\cdot \partial_{x}\right) \phi^+_k =i P_{k}^{+}\left( \phi_{\geq k} \cdot \phi _{x}\right) +i\left[ P_{k}^{+}\, , \, \phi_{<k} \right]\phi_x.
\end{equation}
For reasons which will become apparent later on when we do the
exponential conjugation, it is convenient to add an additional lower
order term on the left hand side (and thus also on the right).
Denoting by $A^{k,+}_{BO} $ the operator
\begin{equation}
\label{op Abo}
A^{k, +}_{BO}:= i\partial_t +\partial^2 _x-i\phi_{<k}\cdot \partial_{x} +\frac{1}{2} \left( H+i \right)\partial_x\phi _{<k}
\end{equation}
we rewrite the equation \eqref{eq-lin-k} in the form
\begin{equation}
\label{eq-lin-k+}
A^{k,+}_{BO} \ \phi^+_k =i P_{k}^{+}\left( \phi_{\geq k} \cdot \phi _{x}\right) +i\left[ P_{k}^{+}\, , \, \phi_{<k} \right]\phi_x +\frac{1}{2} \left( H+i \right)\partial_x\phi _{<k} \cdot \phi^+_k.
\end{equation}
Note the key property that the operator $A^{k,+}_{BO} $ is symmetric,
which in particular tells us that the $L^2$ norm is conserved in the
corresponding linear evolution.
The case $k=0$ is mildly different in this discussion. There we need no paradifferential component,
and also we want to avoid the operator $P_0^+$ which does not have a smooth symbol.
Thus we will work with the equation
\begin{equation}
\label{eq-lin-0}
(\partial_t + H \partial_x^2)\phi_0 = P_0( \phi_{0} \phi_x) + P_0( \phi_{>0} \phi_x),
\end{equation}
where the first term on the right is purely a low frequency term and will play only a perturbative role.
The next step is to eliminate the terms on the right hand side of
\eqref{eq-lin-k+} using a normal form transformation
\begin{equation}
\label{partial nft}
\begin{aligned}
\tilde{\phi}_k^+:= \phi^+_k +B_{k}(\phi, \phi).
\end{aligned}
\end{equation}
Such a transformation is easily computed and formally is given by the expression
\begin{equation}
\label{bilinear nft}
\begin{aligned}
B_{k}(\phi, \phi) =&\frac{1}{2}HP_{k}^{+}
\phi \cdot \partial_{x}^{-1}P_{<k}\phi-\frac{1}{4}P_{k} ^{+}\left( H\phi\cdot \partial_{x}^{-1}\phi\right)
-\frac{1}{4}P_{k}^{+}H\left( \phi \cdot \partial^{-1}_x\phi\right).
\end{aligned}
\end{equation}
One can view this as a subset of the normal form transformation
computed for the full equation, see \eqref{bo nft}. Unfortunately, as
written, the terms in this expression are not well defined because
$\partial^{-1}_x \phi$ is only defined modulo constants. To avoid this
problem we separate the low-high interactions which yield a well
defined commutator, and we rewrite $B_{k}(\phi, \phi)$ in a better
fashion as
\begin{equation}
\label{commutator B_k}
B_{k}(\phi, \phi)=-\frac{1}{2}\left[ P^+_kH\, , \, \partial^{-1}_x \phi_{<k} \right]\phi -\frac{1}{4}P^+_k \left( H\phi \cdot \partial_x^{-1}\phi_{\geq k}\right)
-\frac{1}{4}P^+_k H \left(\phi \cdot \partial_x^{-1}\phi_{\geq k}\right).
\end{equation}
In the case $k=0$ we will keep the first term on the right and apply a quadratic correction to remove the second.
This yields
\begin{equation}
\label{commutator B_0}
B_{0}(\phi, \phi)= -\frac{1}{4}P^+_0 \left[ H\phi \cdot \partial_x^{-1}\phi_{\geq 1}\right]
-\frac{1}{4}P^+_0 H \left[\phi \cdot \partial_x^{-1}\phi_{\geq 1}\right].
\end{equation}
\begin{remark} The normal form transformation associated to
\eqref{eq-loc} is the normal form derived in \eqref{bo nft}, but
with the additional $P_k^+$ applied to it. Thus, the second and the
third term in \eqref{bilinear nft} are the projection $P^+_k$ of
\eqref{bo nft}, which, in particular, implies that the linear
Schr\"odinger operator $i\partial_t+\partial_x^2$ applied to these
two terms will eliminate entirely the nonlinearity $P_k^+ (\phi
\cdot\phi_{x})$. The first term in \eqref{bilinear nft} introduces
the paradifferential corrections moved to the left of
\eqref{eq-lin-k+}, and also has the property that it removes the
unbounded part in the second and third term.
\end{remark}
Replacing $\phi^+_k$ with $\tilde{\phi}^+_k $ removes all the quadratic
terms on the right and leaves us with an equation of the form
\begin{equation}
\label{eq after 1nft}
A^{k,+}_{BO} \, \tilde{\phi}^{+}_k =Q^{3}_k(\phi, \phi, \phi ) ,
\end{equation}
where $Q^{3}_k(\phi, \phi, \phi)$ contains only cubic terms in $\phi$. We will examine $Q^3_k(\phi, \phi, \phi)$ in greater detail later in Lemma~\ref{l:q3}, where
its full expression is given.
The case $k=0$ is again special. Here the first normal form transformation does not eliminate the low-low frequency interactions, and our intermediate equation has the form
\begin{equation}
\label{eq after 1nft-0}
(i\partial_t +\partial^2_x) \, \tilde{\phi}^{+}_0 =Q^2_0 (\phi, \phi) +Q^{3}_0(\phi, \phi, \phi ),
\end{equation}
where $Q^2_0$ contains all the low-low frequency interactions
\[
Q^2_0 (\phi, \phi):=P_0^+ \left( \phi_{0}\cdot \phi_x\right).
\]
The second stage in our normal form analysis is to perform a second bounded normal form transformation that will remove the paradifferential terms in
the left hand side of \eqref{eq after 1nft}; this will be a renormalization, following the idea introduced by Tao (\cite{tao}). To achieve this we introduce and initialize the spatial primitive $\Phi(t, x)$ of $\phi (t,x)$, exactly as in Tao \cite{tao}.
It turns out that $\Phi (t,x)$ is necessarily a real valued function that solves the equation
\begin{equation}
\label{Phi}
\Phi_{t}+H\Phi_{xx}=\Phi_{x}^2,
\end{equation}
which holds globally in time and space. Here, the initial condition imposed is $\Phi(0,0)=0$. Thus,
\begin{equation}
\label{antiderivative}
\Phi_x (t,x)= \frac{1}{2}\phi(t,x).
\end{equation}
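For the reader's convenience we verify that \eqref{Phi} is consistent with \eqref{bo} and \eqref{antiderivative}. Substituting $\phi = 2\Phi_x$ into the Benjamin-Ono equation gives
\[
2\Phi_{xt} + 2H\Phi_{xxx} = 4 \Phi_x \Phi_{xx} = 2\, \partial_x \left( \Phi_x^2 \right),
\]
and integrating in $x$ yields \eqref{Phi} modulo a function of time alone; as in \cite{tao}, the spatial constant in the definition of $\Phi$ can be chosen at each time so that this function vanishes, and the normalization $\Phi(0,0)=0$ then determines $\Phi$ uniquely.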
The idea in \cite{tao} was that in order to get bounds on $\phi$ it suffices to obtain appropriate bounds
on $\Phi(t,x)$, which carries one degree of regularity more, as \eqref{antiderivative} suggests.
Here we instead use $\Phi$ merely in an auxiliary role, in order to define the second normal form
transformation. This is
\begin{equation}
\label{conjugation}
\displaystyle{\psi_k^+:=\tilde{\phi}_k^{+} \cdot e^{-i\Phi_{<k}}}.
\end{equation}
The transformation \eqref{conjugation} is akin to a Cole-Hopf
transformation, and expanding it up to quadratic terms, one observes
that the expression obtained works as a normal form transformation,
i.e., it removes the paradifferential quadratic terms. The
difference is that the exponential will be a bounded transformation,
whereas the corresponding quadratic normal form is not. One also
sees the difference reflected at the level of cubic or higher order
terms obtained after implementing these transformations (which will
obviously differ).
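To make this comparison concrete, one may formally expand the exponential to linear order,
\[
\psi_k^+ = \tilde{\phi}_k^+ \, e^{-i\Phi_{<k}} = \phi_k^+ + B_k(\phi, \phi) - i \Phi_{<k}\cdot \phi_k^+ + (\text{higher order terms}),
\]
so that, since $\Phi_{<k}$ is essentially $\frac12 \partial_x^{-1}\phi_{<k}$ modulo constants, the quadratic part of the correction contains the paraproduct type term $\partial_x^{-1}\phi_{<k}\cdot \phi_k^+$, which is unbounded at low frequency. The full exponential, by contrast, is a bounded factor on $L^2$, since $\Phi_{<k}$ is real and thus $\vert e^{-i\Phi_{<k}}\vert = 1$.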
By applying this \emph{Cole-Hopf type} transformation, we rewrite the equation \eqref{eq after 1nft}
as a nonlinear Schr\"odinger equation for our final normal form variable $\psi_k$, with only cubic and quartic nonlinear terms:
\begin{equation}
\label{conjugare}
\begin{aligned}
(i\partial_t +\partial^2_x)\, \psi_k^+ = [\tilde{Q_k}^{3}(\phi, \phi, \phi) + \tilde{Q_k}^{4}(\phi, \phi, \phi,\phi)] e^{-i\Phi_{<k}},
\end{aligned}
\end{equation}
where $\tilde{Q}_k^3$ and $\tilde Q_k^4$ contain only cubic,
respectively quartic terms; these are also computed in
Lemma~\ref{l:q3}.
The case $k=0$ is special here as well, in that no renormalization is
needed. There we simply set $\psi_0 =\tilde{\phi}_0$, and use the
equation \eqref{eq after 1nft-0}.
This concludes the algebraic part of the analysis. Our next goal is
to study the analytic properties of our multilinear forms:
\begin{lemma}
\label{l:q3}
The quadratic form $B_k$ can be expressed as
\begin{equation}
B_k(\phi, \phi)=2^{-k} L_k (\phi_{<k}, \phi_k) +\sum _{j\geq k} 2^{-j}L_k (\phi_j, \phi_j)=2^{-k} L_k(\phi,\phi).
\end{equation}
The cubic and quartic expressions $Q_k^3$, $\tilde Q_k^3$ and $\tilde Q_k^4$ are translation invariant multilinear forms of the type
\begin{equation}
\begin{aligned}
Q_k^3(\phi,\phi,\phi) = & \ L_k(\phi,\phi, \phi)+L_k(H\phi,\phi, \phi),\\
\tilde Q_k^3(\phi,\phi,\phi) = & \ L_k(\phi,\phi, \phi)+L_k(H\phi,\phi, \phi),
\\
\tilde Q_k^4(\phi,\phi,\phi,\phi) = \ & L_k(\phi,\phi, \phi,\phi)+ L_k(H\phi,\phi, \phi,\phi),
\end{aligned}
\end{equation}
all with output at frequency $2^k$.
\end{lemma}
\begin{proof}
We recall that $B_k$ is given in \eqref{commutator B_k}. For the
first term we use Lemma~\ref{commutator}. For the two remaining
terms we split the unlocalized $\phi$ factor into $\phi_{<k}+
\phi_{\geq k}$. The contribution of $\phi_{<k}$ is as before, while
in the remaining bilinear term in $\phi_{\geq k}$ the frequencies
of the two inputs must be balanced at some frequency $2^j$ where
$j$ ranges in the region $j \geq k$. For the last expression of
$B_k$ we simply observe that
\begin{equation}
\label{dipi}
\partial^{-1}_x\phi_{\geq k}=2^{-k}L(\phi).
\end{equation}
Next we consider $Q^3_k$ which is obtained by a direct computation
\begin{equation}
\label{q3}
\begin{aligned}
Q^3_k(\phi, \phi, \phi)=&-\frac{1}{2}i \, \left[ P^+_kH\, , \, P_{<k}(\phi ^2)\right]\, \phi -\frac{1}{2}i \, \left[ P^+_kH\, , \, \partial_x^{-1}\phi_{<k} \right]\partial_x (\phi ^2)
-\frac{1}{4}iP^+_k\left( H\partial_x (\phi ^2)\cdot \partial_x^{-1}\phi_{\geq k}\right)\\
&-\frac{1}{4}iP^+_k\left( H\phi \cdot P_{\geq k}(\phi ^2)\right)-\frac{1}{4} iP^+_k H\left( \partial_x (\phi^2)\cdot \partial_x^{-1}\phi_{\geq k}\right)
-\frac{1}{4}i P^+_k H\left(\phi \cdot P_{\geq k}(\phi ^2)\right)\\
&-iP_{<k}\phi \cdot \left\lbrace -\frac{1}{2} \, \left[ P^+_kH\, , \, \phi_{<k}\right]\, \phi -\frac{1}{2} \, \left[ P^+_kH\, , \, \partial_x^{-1}\phi_{<k} \right]\, \phi_x
-\frac{1}{4}P^+_k\left( H\phi_x \cdot \partial_x^{-1}\phi_{\geq k}\right)\right.\\
& \hspace*{2.15cm} \left. -\frac{1}{4} P^+_k \left( H\phi \cdot \phi_{\geq k}\right) -\frac{1}{4}P^+_kH \left( \phi_x \cdot \partial_x^{-1}\phi_{\geq k}\right)
-\frac{1}{4}P^+_kH \left( \phi \cdot \phi_{\geq k}\right) \right\rbrace \\
&-\frac{1}{2}\partial_x (H+i)\phi_{<k} \cdot B_{k}(\phi , \phi).
\end{aligned}
\end{equation}
We consider each term separately. For the commutator terms we use Lemma~\ref{commutator} to eliminate all the inverse derivatives. This yields a factor of $2^{-k}$ which in
turn is used to cancel the remaining derivative in the expressions. For instance consider the second term
\[
\begin{aligned}
\left[ P^+_k H\, , \, \partial^{-1}_x \phi_{<k} \right]\, \partial_{x}(\phi^2) &=\left[ P^+_k H\, , \, \partial^{-1}_x \phi_{<k} \right]\,\tilde{P}_k \partial_{x}(\phi^2)\\
&=L (\phi_{<k} , 2^{-k}\tilde{P}_k\partial_{x}(\phi^2))\\
&=L (\phi_{<k} , \phi^2)\\
&=L (\phi_{<k} , \phi, \phi).
\end{aligned}
\]
The remaining terms are all similar. We consider for example the third term
\[
P^+_k \left( H\partial_x (\phi^2)\cdot \partial^{-1}_x\phi_{\geq k}\right)= P^+_k \partial_x\left( H (\phi^2)\cdot \partial^{-1}_x\phi_{\geq k}\right)
-P^+_k \left( H (\phi^2)\cdot \phi_{\geq k}\right).
\]
The derivative in the first term yields a $2^k$ factor, and we can use \eqref{dipi}, and the second term is straightforward.
For $\tilde{Q}^3_k$ an easy computation yields
\[
\tilde{Q}^3_k(\phi, \phi, \phi)=Q^3_k(\phi, \phi, \phi)+\frac12 \phi^+_k \cdot P_{<k}(\phi^2) -\frac14 \phi^+_k \cdot \left( P_{<k}\phi\right) ^2,
\]
and both extra terms are straightforward.
Finally, $\tilde{Q}^4_k (\phi, \phi, \phi, \phi)$ is given by
\[
\tilde{Q}^4_k (\phi, \phi, \phi, \phi)=\frac14 B_{k}(\phi, \phi) \cdot \left\lbrace 2P_{<k}(\phi ^2) -\left( P_{<k}\phi\right)^2 \right\rbrace ,
\]
and the result follows from the one for the $B_k (\phi, \phi)$.
\end{proof}
\subsection{The bootstrap argument}
We now finalize the proof of Theorem~\ref{apriori} using a standard
continuity argument based on the $H^3_x$ global well-posedness
theory. Given $0 < t_0 \leq 1$ we denote by
\[
M(t_0) := \sup_{k} c_k^{-2} \, \| P_k \phi\|_{S^0[0,t_0]}^2 + \sup_{j\neq k\in \mathbf{N}} \sup_{y\in {\mathbb R}} c^{-1}_j \cdot c_k^{-1} \cdot \| \phi_j \cdot T_y \phi_k \|_{L^2\left[ 0, t_0\right] }.
\]
Here, in the second term, the role of the condition $j\neq k$ is to
ensure that $\phi_j$ and $\phi_k$ have $O(2^{\max \left\lbrace
j,k\right\rbrace })$ separated frequency localizations. However,
by a slight abuse of notation, we also allow bilinear expressions of
the form $ P_{k}^1\phi \cdot P_k^2\phi$, where $P_{k}^1$ and $P_{k}^2$
are both projectors at frequency $2^k$ but with at least $2^{k-4}$
separation between the \textbf{absolute values} of the frequencies in
their support.
We also remark here on the role played by the translation operator $T_y$. This is
needed in order for us to be able to use the bilinear bounds in estimating multilinear
$L$ type expressions.
We seek to show that
\[
M(1) \lesssim \epsilon^2.
\]
As $\phi$ is an $H^3$ solution, it is easy to see that $M(t)$
is continuous as a function of $t$, and
\[
\lim_{t\searrow 0}M(t) \lesssim \epsilon^2 .
\]
This is because the only nonzero component of the $S$ norm in the limit $t \to 0$
is the energy norm, which converges to the energy norm of the data.
Thus, by a continuity argument
it suffices to make the bootstrap assumption
\[
M(t_0) \leq C^2 \epsilon^2
\]
and then show that
\[
M(t_0) \lesssim \epsilon^2 + C^6 \epsilon^6.
\]
This suffices provided that $C$ is large enough (independent of
$\epsilon$) and $\epsilon$ is sufficiently small (depending on $C$).
From here on $t_0 \in (0,1]$ is fixed and not needed in the argument,
so we drop it from the notations.
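To see that the last bound closes the continuity argument, denote by $K$ the implicit constant in it. Then
\[
M(t_0) \leq K \left( \epsilon^2 + C^6 \epsilon^6 \right) \leq \frac12 C^2 \epsilon^2 + \frac12 C^2 \epsilon^2 = C^2 \epsilon^2,
\]
provided that we first choose $C$ so that $C^2 \geq 2K$, and then $\epsilon$ small enough that $2KC^4 \epsilon^4 \leq 1$.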
Given our bootstrap assumption, we have the starting estimates
\begin{equation}
\label{boot1}
\Vert \phi_k\Vert_{S^0}\lesssim C \epsilon c_k ,
\end{equation}
and
\begin{equation}
\label{boot2}
\Vert \phi_j \cdot T_y \phi_k\Vert_{L^2}\lesssim 2^{-\frac{\max \left\lbrace j,k\right\rbrace}{2} }C^2 \epsilon^2 c_j c_k, \qquad j\neq k, \qquad y \in {\mathbb R}.
\end{equation}
where in the bilinear case, as discussed above, we also allow $j=k$ provided the two localization multipliers
are at least $2^{k-4}$ separated. This separation threshold is fixed once and for all. On the other hand, when
we prove that the bilinear estimates hold, no such sharp threshold is needed.
Our strategy will be to establish these bounds for the normal form variables $\psi_k$, and then to transfer
them to the original solution $\phi$ by inverting the normal form transformations and estimating errors.
We obtain bounds for the normal form variables $\psi_k^+$. For this we estimate the initial data for $\psi_k$
in $L^2$, and then the right hand side in the Schr\"odinger equation \eqref{conjugare} for $\psi_k^+$ in
$L^1 L^2$. For the initial data we have
\begin{lemma}
\label{l:invertibila}
Assume \eqref{small}. Then we have
\begin{equation}
\| \psi_k^+(0)\|_{L^2} \lesssim c_k\epsilon.
\end{equation}
\end{lemma}
\begin{proof} We begin by recalling the definition of $\psi (t, x)$:
\[
\psi (t,x)=\tilde{\phi}^+_k e^{-i\Phi _{< k}}.
\]
The $L^2_x$ norms of $\psi_k$ and $\tilde{\phi}^+_k$ are equivalent
since the conjugation with the exponential is harmless. Thus, we need
to prove that the $L^2$ norm of $\tilde{\phi}^+_k$ is comparable with the
$L^2$ norm of $\phi^+_k$. The two variables are related via the
relation \eqref{partial nft}. Thus, we reduce our problem to the study
of the $L^2$ bound for the bilinear form $B_{k}(\phi , \phi)$. From
Lemma~\ref{l:q3} we know that
\[
B_k(\phi, \phi)=2^{-k} L_k (\phi_{<k}, \phi_k) +\sum _{j\geq k} 2^{-j}L_k (\phi_j, \phi_j),
\]
so we estimate each term separately. For the first term we use the smallness of the initial data in the $L^2$ norm, together with Bernstein's inequality applied to the low frequency factor:
\begin{equation*}
\Vert 2^{-k} L_k (\phi_{<k}, \phi_k)\Vert_{L^2}\lesssim 2^{-\frac{k}{2}} \cdot \Vert \phi(0)\Vert_{L^2} \cdot \epsilon c_k \lesssim 2^{-\frac{k}{2}} \cdot \epsilon ^2 \cdot c_k.
\end{equation*}
For the second component of $B_k(\phi, \phi)$, we again use Bernstein's inequality
\[
\Vert \sum _{j\geq k} 2^{-j}L_k (\phi_j, \phi_j) \Vert_{L^2_x}\lesssim \sum_{j\geq k} 2^{-\frac{j}{2}}\cdot \epsilon \cdot \Vert \phi_j(0) \Vert_{L^2}\lesssim \sum_{j\geq k} 2^{-\frac{j}{2}}\cdot \epsilon ^2 \cdot c_j\lesssim 2^{-\frac{k}{2}} \cdot c_k \cdot \epsilon^2 .
\]
This concludes the proof.
\end{proof}
Next we consider the right hand side in the $\psi_k$ equation:
\begin{lemma}
Assume \eqref{boot1} and \eqref{boot2}. Then we have
\begin{equation}
\label{perturbative}
\| \tilde Q_k^3 \|_{L^1 L^2} +\| \tilde Q_k^4 \|_{L^1 L^2} \lesssim C^3 \epsilon^3 c_k.
\end{equation}
\end{lemma}
A similar estimate holds for the quadratic term $Q^2_0$ which appears in the case $k=0$,
but that is quite straightforward.
\begin{proof}
We start by estimating the first term in \eqref{perturbative}. For completeness we recall the expression of
$\tilde{Q}^3_k$ from Lemma~\ref{l:q3}:
\[
\tilde{Q}^3_k(\phi, \phi, \phi)=L_{k}(\phi, \phi, \phi)+L_{k}(H\phi, \phi, \phi).
\]
Here $H$ plays no role so it suffices to discuss the first term. To
estimate the trilinear expression $L_k (\phi, \phi, \phi)$ we do a
frequency analysis. We begin by assuming that the first entry of $L_k$
is localized at frequency $2^{k_1}$, the second at frequency $2^{k_2}$,
and the third at frequency $2^{k_3}$. As the output is
at frequency $2^k$, there are three possible cases:
\begin{itemize}
\item If $2^k<2^{k_1}<2^{k_2}=2^{k_3}$, then we can use the bilinear Strichartz estimate for the imbalanced frequencies, and the Strichartz inequality for the remaining term to arrive at
\[
\begin{aligned}
\Vert L_k(\phi_{k_1}, \phi_{k_2}, \phi_{k_3})\Vert_{L^{\frac43}_t L^2_x}&\lesssim \Vert L(\phi_{k_1}, \phi_{k_3})\Vert_{L^2_{t,x}}\cdot \Vert \phi_{k_2}\Vert_{L^{4}_tL^\infty_x}\\
&\lesssim 2^{-\frac{k_3}{2}}\cdot C^2\epsilon^2\cdot c_{k_1}\cdot c_{k_3}\cdot \Vert \phi_{k_2}\Vert_{L^{4}_t L^\infty_x}\\
&\lesssim 2^{-\frac{k_3}{2}}\cdot C^3\cdot \epsilon^3 \cdot c_{k_1}c_{k_2}c_{k_3}\lesssim 2^{-\frac{k}{2}}\cdot C^3\cdot \epsilon^3 \cdot c_{k}^3.
\end{aligned}
\]
\item If $ 2^{k_1}=2^{k_2}=2^{k_3} \approx 2^k$, then we use directly the Strichartz estimates
\[
\Vert L_k(\phi_{k_1}, \phi_{k_2}, \phi_{k_3})\Vert_{L^2_t L^2_x}\lesssim \| \phi_{k_1}\|_{L^\infty_t L^2_x} \cdot \Vert \phi_{k_2}\Vert_{L^{4}_t L^\infty_x} \cdot \Vert \phi_{k_3}\Vert_{L^{4}_t L^\infty_x} \lesssim C^3 \epsilon^3 c_k^3.
\]
\item If $ 2^{k_1}=2^{k_2}=2^{k_3} \gg 2^k$ then the frequencies of
the three entries must add to $O(2^k)$. Then the absolute values of
at least two of the three frequencies must have at least a
$2^{k_3-4}$ separation. Thus, the bilinear Strichartz estimate
applies, and the same estimate as in the first case follows in the
same manner.
\end{itemize}
This concludes the bound for $\tilde Q^{3}_k$.
Finally, the $L^1_tL^2_x$ bound for
\[
\tilde{Q}^4_k (\phi, \phi, \phi, \phi)=\frac14 B_{k}(\phi, \phi) \cdot \left\lbrace 2P_{<k}(\phi ^2) -\left( P_{<k}\phi\right)^2 \right\rbrace ,
\]
follows from the $L^2$ bound for $B_k(\phi, \phi)$ obtained in
Lemma~\ref{l:invertibila} together with the $L^{4}_t L^\infty_x $
bounds for the remaining factors. To bound these factors we use
estimates similar to those in Lemma~\ref{l:invertibila}.
\end{proof}
Given the bounds in the two above lemmas we have the Strichartz estimates for $\psi_k$:
\[
\Vert \psi_k\Vert_{S^0}\lesssim \Vert \psi_k(0)\Vert _{L^2_x}+\Vert \tilde{Q}^3_k(\phi, \phi, \phi) +\tilde{Q}^4_{k}(\phi, \phi, \phi, \phi) \Vert_{L^1_tL^2_x}
\lesssim c_k\left( \epsilon + \epsilon ^3C^3\right) .
\]
This implies the same estimate for $\tilde{\phi}^+_k$. Further we
claim that the same holds for $\phi^+_k$. For this we need to estimate
$B_{k}(\phi, \phi)$ in $S^0$. We recall that
\[
B_k(\phi, \phi) = 2^{-k} L_k(P_{<k}\phi,P_k\phi)+\sum_{j\geq k} 2^{-j} L_{k} (\phi_j, \phi_j).
\]
We now estimate
\[
\begin{aligned}
\Vert B_{k}(\phi, \phi)\Vert_{S^0}& \lesssim 2^{-k}\Vert \phi_k\Vert_{S^0}\Vert \phi_{<k}\Vert_{L^{\infty}}+\sum_{j\geq k} 2^{-j} \Vert \phi_j\Vert_{S^0}\Vert \phi_j\Vert_{L^{\infty}}\\
&\lesssim C\epsilon^2 c_k 2^{-\frac{k}{2}}+\sum_{j\geq k} C\epsilon^2 c_j 2^{-\frac{j}{2}}\\
& \lesssim C\epsilon^2 c_k 2^{-\frac{k}{2}} .
\end{aligned}
\]
Here we have used Bernstein's inequality to estimate the $L^{\infty}$ norm
in terms of the mass, and the slowly varying property of the $c_k$'s
for the last series summation. This concludes the Strichartz
component of the bootstrap argument.
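For the last summation, recall that the slowly varying property of the frequency envelope means that $c_j \leq 2^{\delta \vert j-k\vert} c_k$ for some small $\delta$, which we may take with $\delta < \frac12$; then
\[
\sum_{j \geq k} c_j \, 2^{-\frac{j}{2}} \leq c_k \, 2^{-\frac{k}{2}} \sum_{j\geq k} 2^{-\left(\frac12 - \delta\right)(j-k)} \lesssim c_k \, 2^{-\frac{k}{2}}.
\]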
For later use, we observe that the same argument as above, but without using Bernstein's
inequality, yields the bound
\begin{equation}\label{psi-err}
\|\psi_k - e^{-i \Phi_{<k}} \phi_k^+ \|_{L^2 L^\infty \cap L^4 L^2} \lesssim 2^{-k} \epsilon^2C^2 c_k
\end{equation}
as a consequence of a similar bound for $B_k$.
We now consider the bilinear estimates in our bootstrap argument. We
drop the translations from the notations, as they play no role in the
argument. Also to fix the notations, in what follows we assume that $j < k$.
The case when $j=k$ but we have frequency separation is completely similar.
We would like to start from the bilinear bounds for $\psi_k$, which
solve suitable inhomogeneous linear Schr\"odinger equations. However,
the difficulty we face is that, unlike $\tilde{\phi}_k^+$, $\psi_k$ are no
longer properly localized in frequency, therefore for $j \neq k$,
$\psi_j$ and $\psi_k$ are no longer frequency separated. To remedy
this we introduce additional truncation operators $\tilde P_j$ and
$\tilde P_k$ which still have $2^{\max\{j,k\}}$ separated supports but
whose symbols are identically $1$ in the support of $P_j$,
respectively $P_k$. Then the bilinear $L^2$ bound in Lemma~\ref{l:bi}
yields
\[
\| \tilde P_j \psi_j \cdot \tilde P_k \psi_k\|_{L^2} \lesssim c_j c_k 2^{-\frac{\max\{j,k\}}2}
(\epsilon^2 + C^6 \epsilon^6) .
\]
It remains to transfer this bound to $\phi_j^+ \phi_k^+$. We expand
\[
\tilde P_j \psi_j \tilde P_k \psi_k - \phi_j^+ e^{-i \Phi_{<j}} \phi_k^+ e^{-i \Phi_{<k}} =
\tilde P_j \psi_j (\tilde P_k \psi_k - \phi_k^+ e^{-i \Phi_{<k}})+
(\tilde P_j \psi_j - \phi_j^+ e^{-i \Phi_{<j}}) \phi_k^+ e^{-i \Phi_{<k}} .
\]
For the first term we use the bound \eqref{psi-err} for the second factor
combined with the Strichartz bound for the first,
\[
\|\tilde P_j \psi_j (\tilde P_k \psi_k - \phi_k^+ e^{-i \Phi_{<k}})\|_{L^2} \lesssim
\| \psi_j \|_{L^\infty L^2} \| \psi_k - \phi_k^+ e^{-i \Phi_{<k}}\|_{L^2 L^\infty} \lesssim \epsilon^3 C^2 c_j c_k
2^{-k},
\]
which is better than we need. It remains to consider the second term,
where we freely drop the exponential. There the above argument no longer suffices,
as it will only yield a $2^{-k}$ low frequency gain.
We use the commutator Lemma~\ref{commutator} to express the difference in the second term
as
\[
\begin{split}
\tilde P_j \psi_j - \phi_j^+ e^{-i \Phi_{<j}} = & \
(\tilde P_j-1) (\tilde{\phi}_j^+ e^{ -i \Phi_{<j}}) + B_j (\phi,\phi) e^{-i \Phi_{<j}}
\\
= & \
[\tilde P_j-1, e^{ -i \Phi_{<j}}] \phi_j^+ +
(\tilde P_j-1) \left( B_j (\phi,\phi) e^{ -i \Phi_{<j}} \right)
+ B_j (\phi,\phi) e^{-i \Phi_{<j}}
\\
= & \ 2^{-j} L(\partial_x e^{ -i \Phi_{<j}}, \phi_j^+) + L( B_j (\phi,\phi), e^{-i \Phi_{<j}})
\\
= & \ 2^{-j} L( \phi_{<j}, \phi_j, e^{ -i \Phi_{<j}}) + \sum_{l > j} 2^{-l} L(\phi_l,\phi_l, e^{-i \Phi_{<j}}) .
\end{split}
\]
Now we multiply this by $\phi_k^+$, and estimate in $L^2$ using our
bootstrap hypothesis. For $l \neq k$ we can use a bilinear $L^2$ estimate
combined with an $L^\infty$ bound obtained via Bernstein's inequality.
For $l = k$ we use three Strichartz bounds. The exponential is harmlessly discarded in all cases.
We obtain
\[
\| (\tilde P_j \psi_j - \phi_j^+ e^{-i \Phi_{<j}} ) \phi_k^{+}\|_{L^2}
\lesssim \epsilon^3 C^2 \Big( c_j c_k 2^{-\frac{j}2} 2^{-\frac{k}2} + \sum_{l > j}
c_l c_k 2^{-\frac{l}2} 2^{-\frac{k}2}\Big) \lesssim \epsilon^3 C^2 c_j c_k 2^{-\frac{j}2} 2^{-\frac{k}2},
\]
which suffices.
\section{Bounds for the linearized equation}
In this section we consider the linearized Benjamin-Ono equation,
\begin{equation}\label{lin}
(\partial_t +H \partial^2_x) v = \partial_x ( \phi v ).
\end{equation}
Understanding the properties of the linearized flow is critical for
any local well-posedness result.
Unfortunately, studying the linearized problem in $L^2$ presents
considerable difficulty. One way to think about this is that $L^2$
well-posedness for the linearized equation would yield Lipschitz
dependence in $L^2$ for the data-to-solution map, which is known to be
false.
Another way is to observe that by duality, $L^2$ well-posedness
implies $\dot H^{-1}$ well-posedness, and then, by interpolation,
$\dot H^s$ well-posedness for $s \in [0,1]$. This last
consideration shows that the weakest (and most robust)
local well-posedness result we could prove for the linearized
equation is in $\dot H^{-\frac12}$.
Since we are concerned with local well-posedness here, we will harmlessly replace
the homogeneous space $\dot H^{-\frac12}$ with $ H^{-\frac12}$. Then we will prove
the following:
\begin{theorem}
\label{t:liniarizare}
Let $\phi$ be an $H^3$ solution to the Benjamin-Ono equation
in $[0,1]$ with small mass, as in \eqref{small}.
Then the linearized equation \eqref{lin} is well-posed in $H^{-\frac12}$
with a uniform bound
\begin{equation}\label{S-lin}
\| v\|_{C(0,1;H^{-\frac12})} \lesssim \|v_0\|_{H^{-\frac12}}
\end{equation}
with a universal implicit constant (i.e., not depending on the $H^3$ norm of $\phi$).
\end{theorem}
We remark that as part of the proof we also show that the solutions to the linearized equation
satisfy appropriate Strichartz and bilinear $L^2$ bounds expressed in terms of the frequency envelope
of the initial data.
The rest of the section is devoted to the proof of the theorem.
We begin by considering more regular solutions:
\begin{lemma}
\label{l:dippi}
Assume that $\phi$ is an $H^3$ solution to the Benjamin-Ono equation.
Then the linearized equation \eqref{lin} is well-posed in $H^1$, with uniform
bounds
\begin{equation}
\| v\|_{C(0,1;H^1)} \lesssim \|v_0\|_{H^1}.
\end{equation}
\end{lemma}
Compared with the main theorem, here the implicit constant is allowed to depend on the
$H^3$ norm of $\phi$.
\begin{proof}
The lemma is proved using energy estimates. We begin with the easier
$L^2$ well-posedness. On one hand, for solutions of \eqref{lin}
we have the bound
\[
\frac12 \frac{d}{dt} \| v\|_{L^2}^2 = \int_{\mathbb R} v \partial_x (\phi v)\, dx = \frac12 \int_{\mathbb R} v^2 \partial_x \phi \, dx \lesssim \|\phi_x\|_{L^\infty} \| v\|_{L^2}^2 ,
\]
which by Gronwall's inequality shows that
\[
\| v\|_{L^\infty _tL^2_x} \lesssim \|v_0\|_{L^2_x} ,
\]
thereby proving uniqueness. On the other hand, for the (backward) adjoint problem
\begin{equation}
(\partial_t +H \partial^2_x) w = \phi \partial_x w, \qquad w(1) = w_1
\end{equation}
we similarly have
\[
\| w\|_{L^\infty _tL^2_x} \lesssim \|w_1\|_{L^2_x} ,
\]
which proves existence for the direct problem.
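The Gronwall step above can be spelled out as follows: from the differential inequality
$\frac{d}{dt} \| v \|_{L^2}^2 \lesssim \| \phi_x \|_{L^\infty} \| v \|_{L^2}^2$ we obtain
\[
\| v(t) \|_{L^2}^2 \lesssim \| v_0 \|_{L^2}^2 \exp \Big( C \int_0^t \| \phi_x(s) \|_{L^\infty} \, ds \Big), \qquad t \in [0,1],
\]
where the exponential factor is bounded in terms of $\sup_{t \in [0,1]} \|\phi(t)\|_{H^3}$ via the embedding $H^3 \subset W^{1,\infty}$; this is where the implicit constant acquires its dependence on the $H^3$ norm of $\phi$.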
To establish $H^1$ well-posedness in a similar manner we rewrite our
evolution as a system for $(v,v_1:= \partial_x v)$,
\[
\left\{
\begin{array}{l}
(\partial_t +H \partial^2_x) v = \partial_x ( \phi v ), \cr
(\partial_t +H \partial^2_x) v_1 = \partial_x ( \phi v_1 ) + \phi_x v_1 + \phi_{xx} v.
\end{array}
\right.
\]
An argument similar to the above one shows that this system is also $L^2$ well-posed.
Further, if initially we have $v_1 = v_x$ then this condition is easily propagated in time.
This concludes the proof of the lemma.
\end{proof}
Given Lemma~\ref{l:dippi}, in order to prove Theorem~\ref{t:liniarizare} it suffices to show that the
$H^1$ solutions $v$ given by Lemma~\ref{l:dippi} satisfy the bound \eqref{S-lin}. It is convenient
to prove stronger bounds. To state them we assume that $\| v(0)\|_{H^{-\frac12}} \leq 1$,
and consider a frequency envelope $d_k$ for $v(0)$ in $H^{-\frac12}$. Without any restriction in generality
we may assume that $c_k \leq d_k$, where $c_k$ represents an $L^2$ frequency envelope for $\phi(0)$
as in the previous section. With these notations, we aim to prove that the dyadic pieces $v_k$ of $v$
satisfy the Strichartz estimates
\[
\| v_k \|_{S^0} \lesssim 2^{\frac{k}{2}} d_k,
\]
as well as the bilinear $L^2$ estimates
\[
\|L(v_j, \phi_k)\|_{L^2} \lesssim \epsilon d_j c_k 2^{\frac{j}{2}}\cdot 2^{-\frac{\min\{j,k\}}{2}}.
\]
Again, here we allow for $j = k$ under a $2^{k-4}$ frequency separation condition.
Since $v$ is already in $H^1$ and $\phi$ is in $H^3$, a continuity argument shows that it suffices to make
the bootstrap assumptions
\begin{equation}
\label{boot-lin}
\| v_k\|_{S^0} \leq C 2^{\frac{k}{2}} d_k,
\end{equation}
\begin{equation}
\label{boot-lin-bi}
\sup_{y\in {\mathbb R}}\| v_j T_y\phi_k\|_{L^2} \leq C\epsilon d_j c_k 2^{\frac{j}{2}} 2^{-\frac{\min\{j,k\}}{2}} ,\quad j\neq k,
\end{equation}
and prove that
\begin{equation}
\label{get-lin}
\| v_k\|_{S^0} \lesssim ( 1 + \epsilon C) 2^{\frac{k}{2}} d_k,
\end{equation}
respectively
\begin{equation}
\label{get-lin-bi}
\sup_{y\in {\mathbb R}}\| v_j T_y\phi_k\|_{L^2} \lesssim \epsilon ( 1 + \epsilon C) d_j c_k 2^{\frac{j}{2}} 2^{-\frac{\min\{j,k\}}{2}} , \quad j\neq k .
\end{equation}
We proceed in the same manner as for the nonlinear equation, rewriting the
linearized equation in paradifferential form as
\begin{equation}
\label{linearization}
A_{BO}^{k,+} v_k^+ = i P_k^+ \partial_x ( \phi \cdot v) - i \phi_{<k} \partial_x v_k^+
+ \frac12 \partial_x (H+i) \phi_{<k} \cdot v_k^+.
\end{equation}
Here, in a similar manner as before, we isolate the case $k = 0$, where no paradifferential terms
are kept on the left.
The next step is to use a normal form transformation to eliminate
quadratic terms on the right, and replace them by cubic terms. The
difference with respect to the prior computation is that here we leave
certain quadratic terms on the right, because their corresponding
normal form correction would be too singular. To understand why this
is so we begin with a formal computation which is based on our prior
analysis for the main problem. Precisely, the normal form which
eliminates the full quadratic nonlinearity in the linearized equation
(i.e. the first term on the right in \eqref{linearization}) is
obtained by linearizing the normal form for the full equation, and is
given by
\begin{equation}
\label{part1}
-\frac{1}{4}P_{k} ^{+}\left[ Hv\cdot \partial_{x}^{-1}\phi\right] -\frac{1}{4}P_{k}^{+}H\left[ v \cdot \partial^{-1}_x\phi\right]
-\frac{1}{4}P_{k} ^{+}\left[ H\phi\cdot \partial_{x}^{-1}v\right]-\frac{1}{4}P_{k}^{+}H\left[ \phi \cdot \partial^{-1}_xv\right].
\end{equation}
On the other hand, the correction which eliminates the paradifferential component (i.e., last two terms in \eqref{linearization}) is given by
\begin{equation}
\label{part2}
\frac{1}{2}HP_{k}^{+}v \cdot \partial_{x}^{-1}P_{<k}\phi ,
\end{equation}
which corresponds to an asymmetric version of the first term in $B_k$
in \eqref{partial nft}. Thus, the full normal form correction for the
right hand side of the equation \eqref{linearization} is \eqref{part1}
$+$ \eqref{part2}. The term in \eqref{part2} together with the last
two entries in \eqref{part1} yield a commutator structure as in $B_k$
in the previous section. To obtain a similar commutator structure for
the first two terms in \eqref{part1} we would need an additional
correction
\begin{equation}
\label{part3}
\frac{1}{2}HP_{k}^{+}\phi \cdot \partial_{x}^{-1} P_{<k} v.
\end{equation}
Precisely, if we add the three expressions above we obtain the linearization of $B_k$,
\[
\eqref{part1}+\eqref{part2}+\eqref{part3}=2B_{k}(v, \phi),
\]
where $B_k$ stands for the symmetric bilinear form associated to the
quadratic form $B_k$ defined in \eqref{commutator B_k}.
Hence, our desired normal form correction is
\[
\eqref{part1}+\eqref{part2}=2B_{k}(v, \phi)-\eqref{part3}.
\]
Unfortunately the expression \eqref{part3} contains $\partial_x^{-1}v$
which is ill defined at low frequencies. Unlike in the analysis of the
main equation in the previous section, here we also have no commutator
structure to compensate. To avoid this problem we
exclude the frequencies $<1$ in $v$ from the \eqref{part3} part of the
normal form correction. Thus, our quadratic normal form correction
will be
\begin{equation}
\label{bilinear nft lin}
\begin{aligned}
B_{k}^{lin}(\phi, v) =&2B_{k}(v, \phi)-\frac{1}{2}HP_{k}^{+}\phi \cdot \partial_{x}^{-1}v_{(0,k)}.
\end{aligned}
\end{equation}
This serves as a quadratic correction for the full quadratic terms on the right hand side of
\eqref{linearization}, except for the term which corresponds to the frequencies of size $O(1)$
in $v$, namely the expression
\[
Q^{2,lin}_k(\phi,v) = i v_{0} \partial_x \phi_k^+ - \frac12 \partial_x (H+i) v_{0} \cdot \phi_k^+.
\]
Following the same procedure as in the normal form transformation for the full equation we
denote the first normal form correction in the linearized equation by
\begin{equation}\label{partial-lin-nft}
\tilde{v}_k^+ := v_k^+ + B_k^{lin}(\phi,v).
\end{equation}
The equation for $\tilde{v}_k^+$ has the form
\begin{equation}
A_{BO}^{k,+} \tilde{v}_k^+ = Q_k^{3,lin}(\phi,\phi,v) + Q_k^{2,lin}( v_0,\phi_k).
\end{equation}
Here $Q_k^{2,lin}$ is as above, whereas $Q_k^{3,lin}$ contains the linearization
of $Q_k^3$ plus the extra contribution arising from the second term in $B_k^{lin}$,
namely
\begin{equation}
Q_k^{3,lin}(\phi,\phi,v) = 3 Q_k^{3}(\phi,\phi,v) + \frac{i}2 \phi_k^+ P_{(0,k)} (v \phi) +
\frac{i}2 P_k^+ \partial_x (\phi^2) \partial_{x}^{-1} v_{(0,k)} .
\end{equation}
Again there is a straightforward adjustment in this analysis for the case $k=0$,
following the model in the previous section. This adds a trivial low frequency
quadratic term on the right.
Finally, for $k > 0$ we renormalize $\tilde{v}_k^+$ to
\[
w_k := e^{-i \Phi_{<k}} \tilde{v}_k^+,
\]
which in turn solves the inhomogeneous Schr\"odinger equation
\begin{equation}
\label{conjugare-lin}
\begin{aligned}
(i\partial_t +\partial^2_x)\, w_k = [Q_k^{2,lin}( v_0,\phi_k) + \tilde Q_k^{3,lin}(\phi, \phi, v) +
\tilde Q_k^{4,lin}(v, \phi, \phi,\phi)] e^{-i\Phi_{<k}},
\end{aligned}
\end{equation}
where
\[
\tilde Q_k^{3,lin}(v, \phi, \phi)= Q_k^{3,lin}(v, \phi, \phi) + \frac14 v_k^+ \left( 2\cdot P_{<k}(\phi^2) -
\left( P_{<k}\phi\right) ^2 \right),
\]
and
\[
\tilde Q_k^{4,lin}(v, \phi, \phi, \phi)= Q_k^{4,lin}(v, \phi, \phi, \phi) + \frac14 B_{k}^{lin}(\phi,v) \left( 2\cdot P_{<k}(\phi^2) -
\left( P_{<k}\phi\right) ^2 \right).
\]
Our goal is now to estimate the initial data for $w_k$ in $L^2$, and the inhomogeneous term
in $L^1_t L^2_x$. We begin with the initial data, for which we have
\begin{lemma}
The initial data for $w_k$ satisfies
\begin{equation}\label{wk-data}
\| w_k(0) \|_{L^2} \lesssim 2^{\frac{k}2} d_k.
\end{equation}
\end{lemma}
\begin{proof}
It suffices to prove a similar estimate for $\tilde{v}_k$, which in turn reduces to
estimating $B_k^{lin}(\phi,v)$. The same argument as in the proof of Lemma~\ref{l:invertibila}
yields
\[
\| B_k^{lin}(\phi,v) \|_{L^2} \lesssim k \epsilon d_k ,
\]
which is stronger than we need.
\end{proof}
Next we consider the inhomogeneous term:
\begin{lemma}
The inhomogeneous terms in the $w_k$ equation satisfy
\begin{equation}\label{wk-inhom}
\| Q_k^{2,lin} \|_{L^1 L^2} + \| \tilde Q_k^{3,lin} \|_{L^1 L^2} +\| \tilde Q_k^{4,lin} \|_{L^1 L^2} \lesssim
2^{\frac{k}2} C \epsilon d_k.
\end{equation}
\end{lemma}
\begin{proof}
We begin with $Q^{2,lin}_k$, which is easily estimated in $L^2_{t,x}$, and hence in $L^1 L^2$, using the bilinear $L^2$ estimates
\eqref{boot-lin-bi} in our bootstrap assumption.
All terms in the cubic part $\tilde Q_k^{3,lin}$ have the form
$L_k(\phi,\phi,v)$ possibly with an added harmless Hilbert transform,
except for the expression $ P_k^+ \partial_x (\phi^2) \partial_{x}^{-1} v_{(0,k)}$.
For this we have the bound
\[
\| L_k(\phi,\phi,v)\|_{L^1 L^2} \lesssim 2^{\frac{k}2} C^2 \epsilon^2 d_k .
\]
The proof is identical to that of the similar bound
in Lemma~\ref{perturbative}; we remark that the only difference occurs
in the case when $v$ carries the highest frequency, which is larger than
$2^k$.
We now consider the remaining expression $ P_k^+ \partial_x (\phi^2) \partial_{x}^{-1} v_{(0,k)}$,
which admits the expansion
\[
P_k^+ \partial_x (\phi^2) \partial_{x}^{-1} v_{(0,k)} = \sum_{j \in (0,k)} 2^{-j} 2^{k} L_k(\phi_k,\phi_{<k},v_j)
+ \sum_{j \in (0,k)} \sum_{l\geq k} 2^{-j} 2^{k} L_k(\phi_l,\phi_{l},v_j) .
\]
Here we necessarily have two unbalanced frequencies, therefore this expression is estimated
by a direct application of the bilinear $L^2$ bound plus a Strichartz estimate.
The bound for the quartic term is identical to the one in Lemma~\ref{perturbative}.
\end{proof}
Now we proceed to recover the Strichartz and bilinear $L^2$ bounds.
In view of the last two Lemmas we do have the Strichartz bounds for $w_k$, and thus for $\tilde{v}_k$.
On the other hand for the quadratic correction $B^{lin}_k(\phi,v)$ we have
\[
B_k^{lin}(\phi,v) = 2^{-k} L(\phi_{<k},v_k) + \sum_{j \in (0,k)} 2^{-j} L(v_j,\phi_k) + \sum_{j \geq k} 2^{-j} L(\phi_j,v_j).
\]
Therefore, applying one Strichartz and one Bernstein inequality, we obtain
\[
\| B_k^{lin}(\phi,v) \|_{S} \lesssim C\epsilon d_k ,
\]
which suffices in order to transfer the Strichartz bounds to $v_k$.
To recover the bilinear $L^2$ bounds we again follow the argument in
the proof of Theorem~\ref{apriori}. Our starting point is the bilinear $L^2$ bound
\[
\| \tilde P_j w_j \cdot \tilde P_k \psi_k\|_{L^2} \lesssim C \epsilon d_j c_k 2^{\frac{j}2} 2^{-\frac{\max\{j,k\}}2}
\]
which is a consequence of Lemma~\ref{l:bi}. To fix the notations we assume that $j < k$;
the opposite case is similar. To transfer this bound to $v_j^+ \phi_k^+$ we
write
\[
\tilde P_j w_j \tilde P_k \psi_k - v_j^+ e^{-i \Phi_{<j}} \phi_k^+ e^{-i \Phi_{<k}} =
\tilde P_j w_j (\tilde P_k \psi_k - \phi_k^+ e^{-i \Phi_{<k}})+
(\tilde P_j w_j - v_j^+ e^{-i \Phi_{<j}}) \phi_k^+ e^{-i \Phi_{<k}} .
\]
For the first term we use the bound \eqref{psi-err} for the second
factor combined with the Strichartz bound for the first factor. It
remains to consider the second term. We freely drop the exponential,
and use the commutator result in Lemma~\ref{commutator} to express the difference in
the second term as
\[
\begin{split}
\tilde P_j w_j - v_j^+ e^{-i \Phi_{<j}} = & \
(\tilde P_j-1) (\tilde{v}_j^+ e^{ -i \Phi_{<j}}) + B_j^{lin} (\phi,v) e^{-i \Phi_{<j}}
\\
= & \
[\tilde P_j-1, e^{ -i \Phi_{<j}}] v_j^+ +
(\tilde P_j-1) ( B_j^{lin} (\phi,v) e^{ -i \Phi_{<j}})
+ B_j^{lin} (\phi,v) e^{-i \Phi_{<j}}
\\
= & \ 2^{-j} L(\partial_x e^{ -i \Phi_{<j}}, v_j^+) + L( B_j^{lin} (\phi,v), e^{-i \Phi_{<j}})
\\
= & \ 2^{-j} L( \phi_{<j}, v_j, e^{ -i \Phi_{<j}})
+ 2^{-j} L( v_{<j}, \phi_j, e^{ -i \Phi_{<j}}) + L( \partial_x^{-1} v_{(0,j)}, \phi_j, e^{ -i \Phi_{<j}})
\\ & \ + \sum_{l > j} 2^{-l} L(v_l,\phi_l, e^{-i \Phi_{<j}}) .
\end{split}
\]
Now we multiply this by $\phi_k^+$, and estimate in $L^2$ using our
bootstrap hypothesis. For $l \neq k$ we can use a bilinear $L^2$ estimate
combined with an $L^\infty$ bound obtained via Bernstein's inequality.
For $l = k$ we use three Strichartz bounds. The exponential is harmlessly discarded in all cases.
We obtain
\[
\| (\tilde P_j w_j - v_j^+ e^{-i \Phi_{<j}} ) \phi_k^{+}\|_{L^2}
\lesssim C \epsilon^2 2^{-\frac{k}2} d_j d_k
\]
which suffices. The same argument applies when the roles of $j$ and $k$ are interchanged.
\section{$L^2$ well-posedness for Benjamin-Ono}
Here we prove our main result in Theorem~\ref{thm:lwp}. By scaling we can assume that our initial data satisfies
\begin{equation}\label{small+}
\| \phi_0\|_{L^2} \leq \epsilon \ll 1,
\end{equation}
and prove well-posedness up to time $T = 1$. We know that if in addition $\phi_0 \in H^3$
then solutions exist, are unique and satisfy the bounds in Theorem~\ref{apriori}. For $H^3$ data
we can also use the bounds for the linearized equation in Theorem~\ref{t:liniarizare} to compare
two solutions,
\begin{equation}\label{lip}
\| \phi^{(1)} - \phi^{(2)}\|_{S^{-\frac12}} \lesssim \| \phi^{(1)}(0) - \phi^{(2)}(0)\|_{H^{-\frac12}}.
\end{equation}
We call this property \emph{weak Lipschitz dependence on the initial data}.
We next use the above Lipschitz property to construct solutions for $L^2$ data. Given any initial
data $\phi_0 \in L^2$ satisfying \eqref{small+}, we consider the corresponding regularized data
\[
\phi^{(n)}(0) = P_{<n} \phi_0.
\]
These satisfy uniformly the bound \eqref{small+}, and further they admit a uniform frequency envelope
$\epsilon c_{k}$ in $L^2$,
\[
\| P_k \phi^{(n)}(0) \|_{L^2} \leq \epsilon c_k .
\]
By virtue of Theorem~\ref{apriori}, the corresponding solutions
$\phi^{(n)}$ exist in $[0,1]$, and satisfy the uniform bounds
\begin{equation}\label{S-unif}
\| P_k \phi^{(n)} \|_{S} \lesssim \epsilon c_k .
\end{equation}
On the other hand, the differences satisfy
\[
\| \phi^{(n)} - \phi^{(m)}\|_{S^{-\frac12}} \lesssim \| \phi^{(n)}(0) - \phi^{(m)}(0)\|_{H^{-\frac12}} \lesssim
(2^{-n} + 2^{-m}) \epsilon .
\]
Thus the sequence $\phi^{(n)}$ converges to some function $\phi$ in $S^{-\frac12}$,
\[
\| \phi^{(n)} - \phi\|_{S^{-\frac12}} \lesssim 2^{-n} \epsilon .
\]
In particular we have convergence in $S$ for each dyadic component, therefore
the function $\phi$ inherits the dyadic bounds in \eqref{S-unif},
\begin{equation}\label{fe-l2}
\| P_k \phi\|_{S} \lesssim \epsilon c_k.
\end{equation}
This further allows us to prove convergence in $\ell^2 S$. For fixed $k$ we write
\[
\limsup \| \phi^{(n)} - \phi\|_{\ell^2 S} \leq \limsup \| P_{<k} (\phi^{(n)} - \phi)\|_{\ell^2 S}
+ \| P_{\geq k} \phi\|_{\ell^2 S} + \limsup \| P_{\geq k} \phi^{(n)} \|_{\ell^2 S} \lesssim \epsilon c_{\geq k} .
\]
Letting $k \to \infty$ we obtain
\[
\lim \| \phi^{(n)} - \phi\|_{\ell^2 S} = 0.
\]
Finally, this property also implies uniform convergence in $C(0,1;L^2)$; this in turn allows
us to pass to the limit in the Benjamin-Ono equation, and prove that the limit $\phi$
solves the Benjamin-Ono equation in the sense of distributions.
Thus, for each initial data $\phi_0 \in L^2$ we have obtained a weak solution $\phi \in \ell^2 S$,
as the limit of the solutions with regularized data. Further, this solution satisfies
the frequency envelope bound \eqref{fe-l2}.
Now we consider the dependence of these weak solutions on the initial data. First of all, the $\ell^2 S$
convergence allows us to pass to the limit in \eqref{lip}, therefore \eqref{lip} extends to these weak
solutions. Finally, we show that these weak solutions depend continuously on the initial data in $L^2$.
To see that, we consider a sequence of data $\phi^{(n)}(0)$ satisfying \eqref{small+} uniformly,
so that
\[
\phi^{(n)}(0) \to \phi_0 \qquad \text{ in } L^2 .
\]
Then by the weak Lipschitz dependence we have
\[
\phi^{(n)} \to \phi \text{ in } S^{-\frac12}.
\]
Hence for the corresponding solutions we estimate
\[
\phi^{(n)} - \phi = P_{<k} (\phi^{(n)} -\phi) + P_{\geq k} \phi^{(n)} - P_{\geq k} \phi .
\]
Here the first term on the right converges to zero in $\ell^2 S$ as $n \to \infty$
by the weak Lipschitz dependence \eqref{lip}, and the last term converges to zero
as $k \to \infty$ by the frequency envelope bound \eqref{fe-l2}. Hence letting in order
first $n \to \infty$ then $k \to \infty$ we have
\[
\limsup_{n \to \infty} \| \phi^{(n)} - \phi\|_{\ell^2 S} \leq \| P_{\geq k} \phi \|_{\ell^2 S}+
\limsup_{n \to \infty} \| P_{\geq k} \phi^{(n)} \|_{\ell^2 S}
\]
and then
\[
\limsup_{n \to \infty} \| \phi^{(n)} - \phi\|_{\ell^2 S} \leq
\lim_{k \to \infty} \limsup_{n \to \infty} \| P_{\geq k} \phi^{(n)} \|_{\ell^2 S}.
\]
It remains to show that this last right hand side vanishes. For this we
use the frequency envelope bound \eqref{fe-l2} applied to $\phi^{(n)}$ as follows.
Given $\delta > 0$, we have
\[
\| \phi^{(n)}(0) - \phi_0\|_{L^2} \leq \delta, \qquad n \geq n_\delta.
\]
Suppose $\epsilon c_k$ is an $L^2$ frequency envelope for $\phi_0$,
and $\delta d_k$ is an $L^2$ frequency envelope for $ \phi^{(n)}(0) -
\phi_0$. Here $d_k$ is a normalized frequency envelope,
which however may depend on $n$.
Then $\epsilon c_k+ \delta d_k$ is an $L^2$ frequency
envelope for $ \phi^{(n)}(0)$. Hence by \eqref{fe-l2} we obtain for
$n \geq n_\delta$
\[
\| P_{\geq k} \phi^{(n)}\|_{\ell^2 S} \lesssim \epsilon c_{\geq k} + \delta d_{\geq k} \lesssim \epsilon c_{\geq k} +\delta.
\]
Thus
\[
\limsup_{n \to \infty} \| P_{\geq k} \phi^{(n)} \|_{\ell^2 S} \lesssim \epsilon c_{\geq k} +\delta,
\]
and letting $k \to \infty$
we have
\[
\lim_{k \to \infty} \limsup_{n \to \infty} \| P_{\geq k} \phi^{(n)} \|_{\ell^2 S} \lesssim \delta.
\]
But $\delta > 0 $ was arbitrary. Hence
\[
\lim_{k \to \infty} \limsup_{n \to \infty} \| P_{\geq k} \phi^{(n)} \|_{\ell^2 S} = 0,
\]
and the proof of the theorem is concluded.
\section{The scaling conservation law}
As discussed in the previous section, for the linear equation \eqref{bo-lin} with localized data
we can measure the initial data localization with an $x$ weight, and then propagate
this information along the flow using the following relation:
\[
\| x \psi(0)\|_{L^2} = \| L\psi (t)\|_{L^2} = \| (x - 2tH \partial_x)\psi(t)\|_{L^2}.
\]
The question we ask here is whether there is a nonlinear counterpart to that.
To understand this issue we expand
\[
\| (x - 2tH \partial_x)\phi(t)\|_{L^2}^2 = \int x^2 \phi^2 - 4xt \phi H\phi_x + 4 t^2 \phi_x^2\, dx,
\]
where we recognize the linear mass, momentum and energy densities.
To define the nonlinear counterpart of this we introduce the nonlinear mass, momentum
and energy densities as
\[
\begin{split}
m = &\phi^2,
\\
p = & \ \phi H\phi_x - \frac13 \phi^3 ,
\\
e = & \phi_x^2 - \frac34 \phi^2 H \phi_x + \frac18 \phi^4 .
\end{split}
\]
Then we set
\[
G(\phi) = \int x^2 m - 4xt p + 4 t^2 e \, dx .
\]
For this we claim that the following holds:
\begin{proposition}
\label{p:energy}
Let $\phi$ be a solution to the Benjamin-Ono equation for which
the initial data satisfies $\phi_0 \in H^2$, $x\phi_0 \in L^2$. Then
a) $L \phi \in C_{loc} ({\mathbb R}; L^2({\mathbb R}))$.
b) The expression $G(\phi)$ is conserved along the flow.
c) We have the representation
\begin{equation}
G(\phi) = \| \L \phi\|_{L^2}^2
\end{equation}
where
\begin{equation}
\L \phi = x\phi- 2t \left[ H \phi_x - \frac18(3\phi^2 - (H\phi)^2)\right].
\end{equation}
\end{proposition}
Here one can view the expression $\L \phi$ as a normal form correction to $L \phi$.
While such a correction is perhaps expected to exist, what is remarkable is that it is both nonsingular
and exactly conserved.
\begin{proof}
a) We first show that the solution $\phi$ satisfies
\begin{equation}\label{xphi}
\| x \phi(t)\|_{L^2} \lesssim_{\phi_0} \langle t \rangle .
\end{equation}
For this we truncate the weight to $x_R$, which is chosen
to be a smooth increasing function which equals $x$ for $|x| < R/2$ and $\pm R$ for $\pm x > R$.
Then we establish the uniform bound
\begin{equation}\label{xrphi}
\frac{d}{dt} \| x_R \phi\|_{L^2}^2 \lesssim_{\phi_0} 1 + \|x_R \phi\|_{L^2}.
\end{equation}
Indeed, we have
\[
\begin{split}
\frac12 \frac{d}{dt} \| x_R \phi\|_{L^2}^2 = & \ \int_{\mathbb R} x_R^2 \phi (-H \partial_x^2 \phi + \phi \phi_x) \, dx
\\
= & \ \ \int_{\mathbb R} x_R^2 \phi_x H \phi_x \, dx + \int_{\mathbb R} 2 x_R x'_R (\phi H \phi_x - \frac13 \phi^3) \, dx
\\
= & \ \ \int_{\mathbb R} x_R \phi_x [x_R,H] \phi_x \, dx + \int_{\mathbb R} 2 x_R x'_R (\phi H \phi_x - \frac13 \phi^3)\, dx
\\
= & \ \ \int_{\mathbb R} - x'_R \phi [x_R,H] \phi_x - x_R \phi \partial_x [x_R,H] \phi_x \, dx + \int_{\mathbb R} 2 x_R x'_R (\phi H \phi_x - \frac13 \phi^3) \, dx.
\end{split}
\]
Then it suffices to establish the commutator bounds
\[
\| [x_R, H] \partial_x\|_{L^2 \to L^2} \lesssim 1, \qquad \| \partial_x [x_R, H] \|_{L^2 \to L^2} \lesssim 1 .
\]
But these are both standard Coifman-Meyer estimates,
which require only $x_R' \in BMO$.
Combining \eqref{xphi} with the uniform $H^1$ bound, we obtain
\[
\| L \phi\|_{L^2} \lesssim_{\phi_0} \langle t \rangle.
\]
To establish the continuity in time of $L\phi$, we write the evolution equation
\[
( \partial_t + H \partial_x^2 ) L \phi = L \phi \phi_x + H \phi_x \phi_x ,
\]
and observe that this equation is strongly well-posed in $L^2$.
b) Integrating by parts, and using the conservation of the energy $\int_{\mathbb R} e\, dx$ to discard the $t^2$ term, we write
\[
\frac{d}{dt} G(\phi) = \int_{\mathbb R} x^2 (m_t + 2p_x) - 4xt (p_t +2e_x) \, dx .
\]
It remains to show that the two terms above vanish. For the first we compute
\[
\begin{split}
m_t + 2p_x = & \ - 2 \phi H \phi_{xx} + 2 \phi^2 \phi_x + 2(\phi H \phi_x)_x - 2 \phi^2 \phi_x = 2 \phi_x H \phi_x.
\end{split}
\]
Multiplying by $x^2$ and integrating, we can commute one factor of $x$ past $H$ to get
\[
\int x^2 (m_t + 2p_x) \, dx = 2 \int x^2 \phi_x H \phi_x \, dx = 2 \int x \phi_x H(x \phi_x) \, dx= 0
\]
\]
using the antisymmetry of $H$.
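The commutation with $x$ here relies on the explicit form of the commutator of multiplication by $x$ with the Hilbert transform. With the kernel normalization $Hf(x) = \frac{1}{\pi}\, \mathrm{p.v.} \int \frac{f(y)}{x-y} \, dy$ (the exact sign convention is immaterial for this argument), one computes
\[
[x, H] f (x) = \frac{1}{\pi} \int_{\mathbb R} f(y) \, dy ,
\]
so that $[x,H] \phi_x = 0$ because $\phi_x$ has zero mean; this is why the factor of $x$ can be moved freely inside $H$.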
For the second term we write
\[
\begin{split}
p_t + 2 e_x = & \ - H \phi_{xx} H \phi_x + \phi \phi_{xxx} + \phi \phi_x H\phi_x + \phi H(\phi \phi_x)_x
+ \phi^2 H \phi_{xx} - \phi^3 \phi_x
\\ & \ + 4 \phi_x \phi_{xx} - 3 \phi \phi_x H\phi_x - \frac32 \phi^2 H \phi_{xx} + \phi^3 \phi_x
\\ = & \ \partial_x(- \frac12 (H \phi_x)^2 + \frac32 \phi_x^2 + \phi \phi_{xx}) + \partial_x ( \phi H(\phi \phi_x) - \frac12 \phi^2 H(\phi_x))
\\
& \ - \phi_x H(\phi \phi_x) - \phi \phi_x H \phi_x .
\end{split}
\]
Integrating by parts we have
\[
\begin{split}
\int x(p_t + 2 e_x) \, dx = & \ - \int - \frac12 (H \phi_x)^2 + \frac32 \phi_x^2 + \phi \phi_{xx} + \phi H(\phi \phi_x) - \frac12
\phi^2 H(\phi_x) \, dx \\ & \ - \int x( \phi_x H(\phi \phi_x) + \phi \phi_x H \phi_x)\, dx.
\end{split}
\]
To get zero in the first integral we integrate by parts and use the antisymmetry of $H$ together with $H^2 =- I$.
In the second integral we can freely commute $x$ under one $H$ and then use the antisymmetry of $H$.
\bigskip
c) We compute the expression
\[
Err(\phi) = G(\phi) - \int_{\mathbb R} \left( x\phi- 2t \left[ H \phi_x - \frac18(3\phi^2 - (H\phi)^2)\right] \right)^2\, dx.
\]
The quadratic terms easily cancel, so we are first left with an $xt$ term,
\[
Err_1(\phi) = - \int 4xt \left(- \frac13 \phi^3 + \frac18 \phi( 3\phi^2 - (H\phi)^2)\right) dx.
\]
For this to cancel we need
\[
\int x \phi^3 \,dx = 3 \int x \phi (H\phi)^2\, dx.
\]
Splitting into positive and negative
frequencies
\[
\phi = \phi^+ + \phi^-, \qquad H \phi = \frac1i (\phi^+ - \phi^-),
\]
the cross terms cancel and we are left with having to prove that
\[
\int x (\phi^+)^3\, dx = \int x (\phi^-)^3 \, dx = 0,
\]
where $\phi^- = \overline{\phi^+}$. By density it suffices to establish this for Schwartz functions $\phi$.
Then the Fourier transform of $\phi^+$ is supported in ${\mathbb R}^+$, and is smooth except for a jump at frequency $0$.
It follows that the Fourier transform of $(\phi^+)^3$ is also supported in ${\mathbb R}^+$, and is of class $C^{1,1}$ at zero,
i.e. with a jump in the second derivative. Hence the Fourier transform of $(\phi^+)^3$ vanishes at zero together with its
first derivative, and the conclusion follows.
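For clarity, the last step uses the standard moment identity: with the normalization $\hat{f}(\xi) = \int f(x) e^{-ix\xi} \, dx$ one has
\[
\int_{\mathbb R} x f(x) \, dx = i\, \partial_\xi \hat{f}(0),
\]
and since $\widehat{(\phi^+)^3}$ is supported in ${\mathbb R}^+$ and is $C^1$ across $\xi = 0$, both its value and its first derivative must vanish there.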
Secondly, we are left with a $t^2$ term, namely
\[
Err_2(\phi) = \int 4t^2 ( - \frac34 \phi^2 H \phi_x + \frac14( 3\phi^2 - (H\phi)^2) H \phi_x )
+ 4 t^2( \frac18 \phi^4 - \frac{1}{64} (3 \phi^2 - (H\phi)^2)^2 ) \, dx .
\]
The first term cancels since we can integrate out the triple $H\phi$ term. For the second we compute
\[
8 \phi^4 - (3 \phi^2 - (H\phi)^2)^2 = - \phi^4 + 6 \phi^2 (H \phi)^2 - (H\phi)^4 = - 8 (\phi^-)^4 - 8(\phi^+)^4,
\]
which again suffices, by the same argument as in the first case.
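The frequency decomposition in the last identity is a direct binomial computation: writing $a = \phi^+$ and $b = \phi^-$, so that $\phi = a + b$ and $(H\phi)^2 = -(a-b)^2$, we get
\[
- \phi^4 + 6 \phi^2 (H \phi)^2 - (H\phi)^4 = - \left[ (a+b)^4 + 6 (a+b)^2(a-b)^2 + (a-b)^4 \right] = - 8 a^4 - 8 b^4,
\]
since $(a+b)^4 + (a-b)^4 = 2 a^4 + 12 a^2 b^2 + 2 b^4$ and $6 (a^2 - b^2)^2 = 6 a^4 - 12 a^2 b^2 + 6 b^4$.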
\end{proof}
We further show that this bound naturally extends to $L^2$ solutions:
\begin{proposition}
Let $\phi$ be a solution to the Benjamin-Ono equation whose initial data satisfies
$\phi_0 \in L^2$, $x \phi_0 \in L^2$. Then $\phi$ satisfies the bounds
\begin{equation}\label{bd1}
\| L \phi\|_{L^2} \lesssim_{\phi_0} \langle t \rangle ,
\end{equation}
\begin{equation}\label{bd2}
\| \phi\|_{L^\infty} \lesssim_{\phi_0} t^{-\frac12} \langle t^\frac12 \rangle .
\end{equation}
Furthermore $\L \phi \in C({\mathbb R}; L^2)$ and has conserved $L^2$ norm.
\end{proposition}
We remark that both bounds \eqref{bd1} and \eqref{bd2} are sharp, as they
must apply to solitons.
\begin{proof}
Since the solution to data map is continuous in $L^2$, it suffices
to prove \eqref{bd1} and \eqref{bd2} for $H^2$ solutions. Then we
a-priori know that $L \phi \in L^2$ and $\phi \in L^\infty$, and we
can take advantage of the $\|\L \phi\|_{L^2}$ conservation law.
Hence we can use \eqref{prima} to estimate
\[
\| L \phi \|_{L^2} \lesssim \| \L \phi \|_{L^2} + t \| \phi \|_{L^\infty} \|\phi\|_{L^2}
\lesssim \| \L \phi \|_{L^2} + t^\frac12 \| L \phi \|_{L^2}^\frac12 \|\phi\|_{L^2}^\frac32 ,
\]
which, after absorbing the half power of $\| L \phi \|_{L^2}$ on the left, yields
\[
\| L \phi \|_{L^2} \lesssim \| \L \phi \|_{L^2} + t \| \phi\|_{L^2}^3.
\]
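The absorption step can be made explicit: by Young's inequality, for any $\delta > 0$,
\[
t^{\frac12} \| L \phi \|_{L^2}^{\frac12} \| \phi \|_{L^2}^{\frac32} \leq \delta \| L \phi \|_{L^2} + \frac{t}{4\delta} \| \phi \|_{L^2}^{3} ,
\]
and taking $\delta$ small allows the first term on the right to be absorbed into the left hand side.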
Now the pointwise bound for $\phi$ follows by reapplying \eqref{prima}.
For the last part, we first approximate the initial data $\phi_0$ with $H^2$ data $\phi_{0}^n$ so that
\[
\|\phi_0^n - \phi_0\|_{L^2} \to 0, \qquad \|x(\phi_0^n - \phi_0)\|_{L^2} \to 0.
\]
Then we have $\|\L \phi^n(0)\|_{L^2} \to \| \L \phi(0)\|_{L^2}$. Since $\phi^n \to \phi$ in $L^2_{loc}$,
taking weak limits, we obtain
\[
\| \L \phi\|_{L^\infty L^2} \leq \| \L \phi(0)\|_{L^2}.
\]
Repeating the argument but with initialization at a different time $t$ we similarly obtain
\[
\| \L \phi\|_{L^\infty_t L^2_x} \leq \| \L \phi(t)\|_{L^2_x}.
\]
Hence $\| \L\phi\|_{L^2}$ is constant in time. Then, the $L^2$ continuity follows from the
corresponding weak continuity, which in turn follows from the strong $L^2$ continuity of $\phi$.
\end{proof}
\section{The uniform pointwise decay bound}
In this section we establish our main pointwise decay bound for $\phi$,
namely
\begin{equation}\label{want}
\| \phi(t)\|_{L^\infty} + \|H \phi(t)\|_{L^\infty} \leq C \epsilon \langle t \rangle^{-\frac12}, \qquad |t| \leq e^{\frac{c}{\epsilon}}
\end{equation}
with a large universal constant $C$ and a small universal constant $c$, to be chosen later.
Since the Benjamin-Ono equation is well-posed in $L^2$, with
continuous dependence on the initial data, by density it suffices to
prove our assertion under the additional assumption that $\phi_0 \in
H^2$. This guarantees that the norm $\| \phi(t)\|_{L^\infty}$ is
continuous as a function of time. Then it suffices to establish the
desired conclusion \eqref{want} in any time interval $[0,T]$ under the
additional bootstrap assumption
\begin{equation}\label{boot}
\| \phi(t)\|_{L^\infty} + \|H \phi(t)\|_{L^\infty} \leq 2C \epsilon \langle t \rangle^{-\frac12}, \qquad |t| \leq T \leq e^{\frac{c}{\epsilon}}.
\end{equation}
We will combine the above bootstrap assumption with the bounds arising from the following
conservation laws:
\begin{align}
\label{use01}
\| \phi(t) \|_{L^2} \leq & \ \epsilon,
\\
\label{use02}
\| \L \phi(t) \|_{L^2} \leq & \ \epsilon,
\\
\label{use03}
\int_{-\infty}^\infty \phi dx = c, \qquad &|c| \leq \epsilon .
\end{align}
We recall that $\L$ is given by
\[
\L \phi = x\phi- 2t \left[ H \phi_x - \frac18(3\phi^2 - (H\phi)^2)\right] .
\]
One difficulty here is that the quadratic term in $\L \phi$ cannot be treated
perturbatively. However, as it turns out, we can take advantage of its structure
in a simple fashion.
As a preliminary step, we establish a bound on the function
\[
\partial^{-1} \phi (x) := \int_{-\infty}^x \phi(y)\, dy
\]
as follows:
\begin{equation}\label{intphi}
| \partial^{-1} \phi(x)| \lesssim C \epsilon + C^2 \epsilon^2 \log \langle t/x\rangle.
\end{equation}
Assume first that $x \leq -\sqrt{t}$. Then we write
\[
\phi = \frac{1}{x} \L(\phi) + \frac{2t}x H \phi_x - \frac{t}{4x}( 3 \phi^2 -(H\phi)^2) .
\]
Integrating by parts, we have
\[
\partial^{-1} \phi(x) = \frac{2t}x H\phi(x) + \int_{-\infty}^x \frac{2t}{y^2} H\phi(y) +\frac{1}{y} \L(\phi)(y)
- \frac{t}{4y}( 3 \phi^2 -(H\phi)^2)(y) \, dy.
\]
For the first two terms we have a straightforward $\dfrac{C\epsilon \sqrt{t}}{|x|}$ bound due to \eqref{boot}.
For the third term we use \eqref{use02} and the Cauchy-Schwarz inequality.
For the last integral term we use the $L^2$ bound \eqref{use01} for $x < -t$ and the $L^\infty$ bound \eqref{boot}
for $-t \leq x \leq -\sqrt{t}$ to get a bound of $C^2 \epsilon^2 \log \langle t/x\rangle$.
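To be specific about where the logarithm comes from: on the region $-t \leq y \leq -\sqrt{t}$ the bootstrap bound \eqref{boot} gives $|3\phi^2 - (H\phi)^2| \lesssim C^2 \epsilon^2 t^{-1}$, hence for $-t \leq x \leq -\sqrt{t}$
\[
\int_{-t}^{x} \frac{t}{4|y|} \, | 3 \phi^2 - (H\phi)^2 | \, dy \lesssim C^2 \epsilon^2 \int_{-t}^{x} \frac{dy}{|y|} = C^2 \epsilon^2 \log \frac{t}{|x|} \lesssim C^2 \epsilon^2 \log \langle t/x \rangle .
\]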
This gives the desired bound in the region $x \leq - \sqrt{t}$. A
similar argument yields the bound for $x \geq \sqrt{t}$, where in
addition we use the conservation law \eqref{use03} for $\displaystyle\int \phi \, dy$ to connect $\pm
\infty$. Finally, for the inner region $|x| \leq \sqrt{t}$ we use
directly the pointwise bound \eqref{boot} on $\phi$. This concludes the proof of \eqref{intphi}.
\bigskip
Now we return to the pointwise bounds on $\phi$ and $H \phi$.
Without using any bound for $t$, we will establish the estimate
\begin{equation} \label{point-get}
\|\phi(t)\|_{L^\infty}^2 + \|H\phi(t)\|_{L^\infty}^2 \lesssim \epsilon^2 t^{-1}(1+ C + C^3 \epsilon \log t + C^4 \epsilon^2 \log^2 t).
\end{equation}
In order to retrieve the desired bound \eqref{want} we first choose $C \gg 1$ in order to account
for the first two terms, and then restrict $t$ to the range $C \epsilon \log t \ll 1$ for the last two terms.
This determines the small constant $c$ in \eqref{want}.
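For the record, the bookkeeping can be organized as follows (our sketch; $K$ denotes the implicit constant in \eqref{point-get}). Since $t \leq e^{\frac{c}{\epsilon}}$ implies $\epsilon \log t \leq c$, we have
\[
K \epsilon^2 t^{-1}\left(1+ C + C^3 \epsilon \log t + C^4 \epsilon^2 \log^2 t\right)
\leq K \epsilon^2 t^{-1}\left(1+ C + C^3 c + C^4 c^2\right)
\leq C^2 \epsilon^2 t^{-1},
\]
where the last inequality holds provided $C \gtrsim K$ and $c \ll C^{-1}$; this is the desired improvement over the bootstrap bound \eqref{boot}.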
To establish \eqref{point-get} we first use the expression for $\L (\phi)$ to compute
\[
\frac{d}{dx} (|\phi|^2 + |H\phi|^2) = \frac{1}t F_1 + \frac{1}t F_2 + \frac14 F_3,
\]
where
\[
F_1 = \phi H \L(\phi) + H \phi \L(\phi), \qquad F_2 = x \phi H \phi - \phi H(x\phi),
\]
\[
F_3 = - \phi H( 3 \phi^2 - (H\phi)^2) + H \phi ( 3 \phi^2 - (H\phi)^2).
\]
We will estimate separately the contributions of $F_1$, $F_2$ and $F_3$. For $F_1$ we combine \eqref{use01} and \eqref{use02}
to obtain
\[
\| F_1 \|_{L^1} \lesssim \epsilon^2,
\]
which suffices. For $F_2$ we commute $x$ with $H$ to rewrite it as
\[
F_2(x) = \phi(x) \int_{-\infty}^\infty \phi(y)\, dy,
\]
which we can integrate using \eqref{intphi}.
Finally, for $F_3$ we use the identity
\[
H ( \phi^2 - (H \phi)^2) = 2 \phi H\phi
\]
to rewrite it as
\[
F_3 = - \phi H( \phi^2 + (H\phi)^2) - H \phi ( \phi^2 + (H\phi)^2).
\]
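As a numerical sanity check of this identity (ours, not part of the argument), one can test it on the explicit pair $\phi(x) = (1+x^2)^{-1}$, $H\phi(x) = x(1+x^2)^{-1}$, using the convention $Hf(x) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int f(y)/(x-y)\, dy$:

```python
import math

def phi(x):  return 1.0 / (1.0 + x * x)   # Poisson kernel
def Hphi(x): return x / (1.0 + x * x)     # its exact Hilbert transform
def g(x):    return phi(x) ** 2 - Hphi(x) ** 2

def hilbert_pv(f, x, R=500.0, n=100000):
    """(1/pi) p.v. int f(y)/(x-y) dy: subtracting f(x) makes the
    integrand bounded near y = x, so a plain midpoint rule suffices."""
    h = 2.0 * R / n
    fx = f(x)
    total = 0.0
    for i in range(n):
        y = -R + (i + 0.5) * h            # midpoints never land exactly on y = x
        total += (f(y) - fx) / (x - y)
    return total * h / math.pi

pts = (0.0, 0.7, 1.5)
lhs = [hilbert_pv(g, x) for x in pts]     # H(phi^2 - (H phi)^2)
rhs = [2.0 * phi(x) * Hphi(x) for x in pts]
```

The agreement is to quadrature accuracy; the identity holds in either sign convention for $H$, since flipping $H$ flips both sides.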
This now has a commutator structure, which allows us to write
\[
\int_{-\infty}^{x_0} F_3(x)\, dx = - \int_{-\infty}^{x_0} \int_{x_0}^\infty \phi(x) \frac{1}{x-y} ( \phi^2 + (H\phi)^2)(y) \, dy\, dx .
\]
Here the key feature is that $x$ and $y$ are separated. We now estimate the last integral. We consider several cases:
\medskip
a) If $ |x -y| \lesssim \sqrt{t} $ then direct integration using \eqref{boot} yields a bound of $C^3 \epsilon^3 t^{-1}$.
\medskip
b) If $|x-y| > t$ then we use \eqref{use01} to bound $\phi^2 +(H\phi)^2$ in $L^1$. Denoting $x_1 = \min\{x_0,y-t\}$,
we are left with an integral of the form
\[
\int_{-\infty}^{x_1} \frac{1}{x-y} \phi(x) \,dx = \frac{1}{ x_1 - y} \partial^{-1}\phi(x_1) -
\int_{-\infty}^{x_1} \frac{1}{(x-y)^2} \partial^{-1}
\phi(x) \,dx .
\]
As $|x_1 - y| > t$ from \eqref{intphi} we obtain a bound of
\[
t^{-1}( C\epsilon^{3} + C^2 \epsilon^4 \log t ).
\]
\medskip
c) $x-y \approx r \in [\sqrt{t},t]$. Then we use \eqref{boot} to bound $ \phi^2 + (H\phi)^2 $ in $L^\infty$
and argue as in case (b) to obtain a bound of
\[
t^{-1}( C^3\epsilon^{3} + C^4 \epsilon^4 \log t).
\]
Then the dyadic $r$ summation adds another $\log t$ factor.
\section{The elliptic region}
Here we improve the pointwise bound on $\phi$ in the elliptic region
$x > \sqrt{t}$. Precisely, we will show that for $t < e^{\frac{c}\epsilon}$ we have
\begin{equation}\label{point-ell}
|\phi(x)| + |H\phi(x)| \lesssim \epsilon t^{-\frac14} x^{-\frac12}, \qquad x \geq \sqrt{t}.
\end{equation}
To prove this we take advantage of the ellipticity of the linear part $x -2t H \partial_x$ of the operator $\L$
in the region $x \geq \sqrt{t}$. For this linear part we claim the bound
\begin{equation}\label{lin(L)}
\| x \chi \phi\|^2_{L^2} + \| t \chi \phi_x\|_{L^2}^2 \lesssim \| (x -2t H \partial_x) \phi\|_{L^2}^2+ t^{\frac32} \| \phi \|_{L^\infty}^2
+ t^\frac12 \| \partial^{-1} \phi\|_{L^\infty}^2,
\end{equation}
where $\chi$ is a smooth cutoff function which selects the region $\{x > \sqrt{t} \}$.
Assuming we have this, using also \eqref{want}, \eqref{use02} and \eqref{intphi} we obtain
\[
\| x \chi \phi\|^2_{L^2} + \| t \chi \phi_x\|_{L^2}^2 \lesssim \epsilon^2 t^{\frac12} +
t^2 \| \chi (\phi^2 + (H \phi)^2)\|_{L^2}^2 .
\]
We claim that we can dispense with the second term on the right. Indeed, we can easily use \eqref{want} to bound
the $\phi^2$ contribution by
\[
\| \chi \phi^2\|_{L^2} \lesssim \| \phi\|_{L^\infty} \| \chi \phi \|_{L^2} \lesssim \epsilon t^{-1} \|x \chi \phi\|_{L^2} .
\]
The $(H \phi)^2$ contribution is estimated in the same manner, but in addition we also need to bound the commutator
\begin{equation}\label{com(H)}
\| [H,\chi] \phi\|_{L^2} \lesssim \|\phi \|_{L^\infty} + t^{-\frac12} \|\partial^{-1} \phi \|_{L^\infty}.
\end{equation}
Assuming we also have this commutator bound, it follows that
\begin{equation} \label{ell}
\| x \chi\phi\|^2_{L^2} + \| t (\chi \phi)_x\|_{L^2}^2 \lesssim \epsilon^2 t^{\frac12}.
\end{equation}
This directly yields the desired pointwise bound \eqref{point-ell} for $\phi$.
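To spell out this step (our sketch, using \eqref{ell}): for $x \geq \sqrt{t}$, in the region where $\chi = 1$,
\[
|\chi \phi(x)|^2 = - 2\int_x^\infty (\chi\phi)\, (\chi\phi)_y \, dy
\leq 2\, \|\chi\phi\|_{L^2(y \geq x)}\, \|(\chi\phi)_x\|_{L^2}
\lesssim \frac{\| y\chi\phi\|_{L^2}}{x} \cdot \frac{\| t(\chi\phi)_x\|_{L^2}}{t}
\lesssim \frac{\epsilon^2}{t^{\frac12}\, x},
\]
which is precisely the square of the $\phi$ bound in \eqref{point-ell}.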
Now we prove the $H \phi$ part of \eqref{point-ell}. For $x \approx r > t^\frac12$ we decompose
\[
\phi = \chi_r \phi + (1-\chi_r)\phi,
\]
where $\chi_r$ is a smooth bump function selecting this dyadic region.
For the contribution of the first term we use interpolation to write
\[
\| H(\chi_r \phi)\|_{L^\infty} \lesssim \| \chi_r \phi\|_{L^2}^\frac12 \| \partial_x(\chi_r \phi)\|_{L^2}^\frac12 \lesssim
\epsilon (t^{\frac14} r^{-1})^\frac12 (t^{-\frac34})^\frac12 = \epsilon t^{-\frac14} r^{-\frac12}.
\]
For the second term we use the kernel for the Hilbert transform,
\[
H[ (1-\chi_r)\phi] (x) = \int \frac{1}{x-y} [(1-\chi_r)\phi] (y)\, dy .
\]
For the contribution of the region $y > t^\frac12$ we use the pointwise bound \eqref{point-ell} on $\phi$ and directly integrate.
For the contribution of the region $y < t^\frac12$ we integrate by parts and use the bound \eqref{intphi} on $\partial^{-1} \phi$.
This concludes the proof of the $H\phi$ bound in \eqref{point-ell}.
It remains to prove the bounds \eqref{lin(L)} and \eqref{com(H)}. Both are scale invariant in time, so without any restriction in
generality we can assume that $t = 1$.
\bigskip
{\em Proof of \eqref{com(H)}.}
The kernel $K(x,y)$ of $[\chi,H]$ is given by
\[
K(x,y) = \frac{\chi(x) -\chi(y)}{x-y},
\]
and thus satisfies
\[
(1+|x|+|y|)|K(x,y)| + (1+|x|+|y|)^2|\nabla_{x,y}K(x,y)| \lesssim 1.
\]
Then we write
\[
\int_{\mathbb R} K(x,y) \phi(y) dy = - \int_{\mathbb R} K_y(x,y) \partial^{-1} \phi(y) dy
\]
and then take absolute values and estimate.
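For instance, the $\partial^{-1}\phi$ term in \eqref{com(H)} arises as follows (our bookkeeping):
\[
\left| \int_{\mathbb R} K_y(x,y)\, \partial^{-1}\phi(y)\, dy \right|
\lesssim \|\partial^{-1}\phi\|_{L^\infty} \int_{\mathbb R} \frac{dy}{(1+|x|+|y|)^2}
\lesssim \frac{\|\partial^{-1}\phi\|_{L^\infty}}{1+|x|},
\]
and the right-hand side has $L^2_x$ norm $\lesssim \|\partial^{-1}\phi\|_{L^\infty}$, as required (recall that we have rescaled to $t=1$).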
\bigskip
{\em Proof of \eqref{lin(L)}.}
We multiply $(x - 2H \partial_x) \phi$ by $\chi := \chi_{\geq 1}(x)$, square and integrate. We have
\[
\|\chi (x - 2H \partial_x) \phi\|_{L^2}^2 - \| \chi x \phi\|_{L^2}^2 - 2 \| \chi |x|^\frac12 |D|^\frac12 \phi\|_{L^2}^2
- \| \chi \phi_x\|_{L^2}^2 = \langle ( T_1+T_2) \phi,\phi \rangle
\]
where
\[
T_1 = |D| \chi^2 |D| + \partial_x \chi^2 \partial_x, \qquad T_2 = \chi^2 x |D| + |D|\chi^2 x - 2 |D|^\frac12 \chi^2 x |D|^\frac12.
\]
Then it suffices to show that
\begin{equation}
\label{t12}
|\langle T_{1,2} \phi,\phi \rangle| \lesssim \| \phi\|_{L^\infty}^2 + \| \partial^{-1} \phi\|_{L^\infty}^2.
\end{equation}
To achieve this we estimate the kernels $K_{1,2}$ of $T_{1,2}$. To compute them, we observe that both $T_1$ and $T_2$ have
a double commutator structure
\begin{equation}
T_1 =\partial_x \left[ \left[ \chi ^2 \, , \, H\right] \, , \, H \right] \partial_x, \quad T_2= \left[ \left[ \vert D\vert ^{\frac{1}{2}}\, , \, \chi^2 \right] \, , \, \vert D\vert ^{\frac{1}{2}}\right] .
\end{equation}
We first consider $T_1$ for which we claim that its kernel $K_1$ satisfies the bound
\begin{equation}
\label{k1}
|K_1(x,y)| \lesssim \frac{1}{(1+|x|)(1+|y|)(1+|x|+|y|)}.
\end{equation}
This suffices for the estimate \eqref{t12}.
To prove \eqref{k1} we observe that instead of analyzing the kernel $K_1(x, y)$, we can analyze the kernel $\tilde{K}_1$:
\[
K_1(x,y)=\partial_x\partial_y\tilde{K}_{1}(x,y),
\]
where $\tilde{K}_1$ is the corresponding kernel of the commutator $\left[ \left[ \chi ^2 \, , \, H\right]\,, \, H\right] $, and is given by
\[
\tilde{K}_1(x,y)=\int \frac{\chi^2 (x)- \chi^2 (z)}{x-z} \cdot \frac{1}{z-y}-\frac{\chi^2 (z)- \chi^2 (y)}{z-y} \cdot \frac{1}{x-z}\, dz.
\]
We can rewrite $\tilde{K}_1$ using the symmetry $z\rightarrow x+y-z$
\[
\tilde{K}_1(x,y)=\int \frac{\chi^2 (x)+ \chi^2 (y) -\chi^2(z) -\chi^2 (x+y-z)}{(x-z)(y-z)}\, dz.
\]
Secondly, in a similar fashion, we compute the kernel $K_2$ of $T_2$,
\begin{equation}
\label{k2}
K_2(x,y) =\int \frac{\chi^2 (x) +\chi^2 (y)-\chi^2 (x+y-z)-\chi^2(z) }{\vert x-z\vert ^{\frac{3}{2}} \vert y-z \vert ^{\frac{3}{2}}}\, dz,
\end{equation}
where again the numerator vanishes to first order at $x=z$ and at $y=z$.
For this kernel we distinguish two regions:
\begin{itemize}
\item $\vert x\vert +\vert y\vert \lesssim 1$; in this region a direct computation shows that the kernel $K_2$ has a mild logarithmic singularity on the diagonal $x=y$,
\[
\vert K_2 (x,y)\vert \leq 1+ \vert \log \vert x-y\vert \vert .
\]
\item $\vert x\vert +\vert y\vert \gg 1$; in this region the kernel $K_2$ is smooth and can be shown to satisfy the bound
\[
|K_2(x,y)| \lesssim \frac{ (1+ \min\{|x|,|y|\})^\frac12}{ (1+|x|+|y|)^\frac32} .
\]
This does not suffice for the bound \eqref{t12}. However after differentiation it improves to
\[
|\partial_x \partial_y K_2(x,y)| \lesssim \frac{1}{ (1+ \min\{|x|,|y|\})^\frac12 (1+|x|+|y|)^\frac52} ,
\]
and that is enough to obtain \eqref{t12}.
\end{itemize}
\section{Introduction}
The AdS/CFT dictionary is far from complete, the main obstacle being the strong/weak nature of
Maldacena's conjecture\cite{Malda},\cite{Gubser},\cite{Witten}.
In an important line of research in this context, which has been widely studied,
certain regimes and limits have been considered where one can study both sides
of the duality perturbatively. The main idea for this has been focusing on
states/operators with large charges \cite{BMN},\cite{GKP}.
A large charge, $J$, introduces a free parameter, besides the 't Hooft coupling $\lambda$,
that allows for some control
over the problem. One can then define an effective 't Hooft coupling, $\lambda'$,
built from $\lambda$ and $J$, which is held fixed in the large charge limit.
The idea is that the world sheet loop expansions around certain classical
string solutions with a properly chosen $J$ are generically suppressed by inverse powers of $J$
\footnote{For this to happen one actually needs at least one large charge on $S^5$. For charges
only inside AdS, the semi classical expressions are only reliable when $\lambda\gg1$ but it is expected,
verified by some tests, that for large charges these expressions
can be extrapolated to perturbative field theory results
for anomalous dimensions of the conjectured corresponding operators.}.
In SYM this translates into the distinction of a class of SYM operators,
determined by the charges, whose couplings to the rest of the operators are
suppressed by inverse powers of $J$.
The best known example in this context is the BMN case \cite{BMN}
(see e.g. \cite{Sadri} for reviews). In the strict BMN limit,
the world sheet expansion around
a certain point-like string solution in $AdS_5\times S^5$ terminates
at one loop and the corresponding class of SYM operators consists only of the BMN operators,
decoupled from the rest. Moreover, the one loop action can be solved exactly \cite{7ppwave}
with a closed expression for the spectrum of fluctuations around the
classical configuration. This can be matched order by order with a perturbative
calculation for the anomalous dimension of BMN operators which can be done when
$\lambda'\equiv \lambda/J^2\ll1$.
In some other interesting cases, adding more large charges
can even make the one loop world sheet expansion
subleading, and one is left with only the classical expression for the energy of
the string configuration. That is, the classical expression gives an all loop quantum
prediction for the field theory operators in a certain class.
This is the case for string solutions with spin and angular
momentum.
In another significant development, perturbative planar calculations for
anomalous dimensions of SYM operators led to the discovery of an integrable structure
in the system \cite{Minahan:2002ve}. This came through the identification
of the dilatation operator (see \cite{Beisert:2002ff}), whose eigenvalues are the scaling dimensions of
SYM operators, with the Hamiltonian of an equivalent spin chain. The problem then
boils down to diagonalizing the Hamiltonian by solving
a set of Bethe equations for the spin chain. This method has had some successes even beyond the
semiclassical limit and large charges, which define infinitely long chains.
The integrability
structure on the string theory side has also been discovered \cite{Mandal:2002fs} and further studied
in the semiclassical limit \cite{Arutyunov:2003uj}. For semiclassical analyses of strings
and spin chain developments see e.g. \cite{Frolov:2002av}, \cite{spinchain}.
For reviews and references on these subjects see e.g. \cite{semireviews}.
In \cite{Kkk}
a new class of string solutions in AdS space
was found with a number of spikes on the string\footnote{Spiky strings in flat space
as cosmic strings were studied in \cite{Burden:1985md}.}. These
spiky strings were identified with higher twist operators in SYM with
each spike representing a particle in the field theory. Large angular
momentum is provided by a large number of covariant derivatives acting on the fields which
produce the mentioned particles. The total number of derivatives is
distributed equally amongst the fields for these solutions. Spiky strings
were further generalized to configurations on the sphere \cite{Ryang}. Certain limits of these
solutions were shown \cite{KRT} to correspond to giant magnons \cite{HM} which
represent spin waves with a short wave length in the spin chain language. Solutions
corresponding to multi spin giant magnons \cite{Bobev:2006fg} and those
with a magnon like dispersion relation in M-theory \cite{Bozhilov:2006bi} were also
studied.
In this paper we generalize the spiky string solutions in both AdS and sphere spaces.
These solutions, which we will call ``dual spikes", represent spiky strings with
the direction of spikes reversed. In flat space, these dual spikes are shown to be T-dual to
the usual ones. In AdS we will study the large angular momentum limit of the solution
and derive its energy, $E$, in terms of angular momentum, $J$. We find that these dual spikes
represent higher twist operators whose anomalous dimensions are similar to those of
rotating and pulsating circular strings. That is, the anomalous dimension in
the large $J$ limit is proportional to $\lambda^{1/4}\sqrt{J}$. This replaces
the usual logarithmic dependence for folded and spiky spinning strings in AdS.
We will argue that this is an expected behavior, as for fast spinning dual spikes
and near the boundary, we are effectively dealing with portions of almost circular
strings with a pulsation-like motion which is induced by a profile of string
in the AdS radius and angular momentum. One might then conclude that the
corresponding operators for such configurations, in addition to a large
number of covariant derivatives, $D_+$, which induce spin in $S^3\subset AdS_5$,
also contain the combination $D_+ D_-$, which induces an effective pulsation.
This last combination, which contributes to the scaling dimension but not
to the spin, could be responsible for the change from logarithmic dependence
to square root in the anomalous part.
In $S^5$ however, we will
show that unlike the usual spikes, the dual spikes have no large angular momentum limit.
We will find the $E-J$ relation of nearly circular strings for the cases where the
string lives near the pole or near the equator.\\
{\large\bf{Note Added:\ }}
While this work was being prepared we learned of a related work on dual spikes which
appeared very recently \cite{Ishizeki:2007we}.
\section{Spiky strings and their T-duals in flat space-time}
In this section we find a two parameter family of solutions describing closed strings with spikes in flat space. These
solutions fall into two distinct classes depending on whether the ratio of the two parameters is greater or
smaller than one. One of the two classes describes the spiky string solutions found in \cite{Kkk} and the the other one,
as we will see in what follows, can be obtained from the first one by a T-duality transformation.
Let us start with the Nambu-Goto action and the following
ansatz for the string
\begin{equation}
\label{ansatz}
t=\tau, \hspace{1cm} \theta=\omega\ \tau+\sigma,
\hspace{1cm}r=r(\sigma).
\end{equation}
where $(\tau,\sigma)$ and $(t,r,\theta)$ are the world sheet and target space coordinates respectively
and $\omega$ is a constant. For the flat target space
\begin{equation}
ds^2=-dt^2+dr^2+r^2d\theta^2
\end{equation}
and the ansatz (\ref{ansatz}), the NG action is found as
\begin{equation}
{\cal L}_{NG}=-\sqrt{l}\ ,\ \ \ \ \ l=(1-\omega^2\ r^2)\ r'^2+r^2
\end{equation}
where prime denotes $\partial_{\sigma}$. The constant of motion, $r_l$, associated with the
$\partial_{\theta}$ isometry is
\[
\frac{r^2}{\sqrt{l}}\equiv r_l
\]
Plugging this constant in the equations of motion gives
\begin{equation}
\label{rel}
\frac{r'^2}{r^2}=\frac{r_c^2}{r_l^2}\ \frac{r^2-r_l^2}{r_c^2-r^2}
\end{equation}
where $r_c=1/\omega$. One can show that $r(\sigma)$ found from (\ref{rel}) satisfies the equations of motion.
The two constants $r_l$ and $r_c$ parameterize the solutions and, as can be seen from (\ref{rel}), they correspond to the
radius of the lobe and cusp of the string respectively. Defining
\[
r_1=min(r_c,r_l)\ , \ \ \ \ \ \ \ \ \ \ \ \ r_2=max(r_c,r_l)
\]
we can write the expressions for the energy, $E$, and angular momentum, $J$, of a segment of string
stretched between $r_1$ and $r_2$
\begin{eqnarray}
E_{seg}&=&\frac{1}{2\pi}\frac{1}{r_c}\int_{r_1}^{r_2} dr r \frac{|r_c^2-r_l^2|}{\sqrt{(r^2-r_l^2)(r_c^2-r^2)}}
=\frac{1}{4}\frac{r_c}{a^2}|a^2-1|\nonumber\\
J_{seg}&=&\frac{1}{2\pi}\int_{r_1}^{r_2} dr r \sqrt{\frac{r^2-r_l^2}{r_c^2-r^2}}=\frac{1}{8}\frac{r_c^2}{a^2}|a^2-1|
\end{eqnarray}
where we have defined the constant $a$ by
\[
a\equiv\frac{r_c}{r_l}
\]
The angle covered by the segment, $\Delta\theta$, can also be found as
\begin{equation}
\Delta\theta=\frac{r_l}{r_c}\int_{r_1}^{r_2}\frac{dr}{r}\sqrt{\frac{r_c^2-r^2}{r^2-r_l^2}}=\frac{\pi}{2}\frac{|a-1|}{a}
\end{equation}
We now demand that $2n$ number of segments make up a closed string and hence we have $n$ number of spikes
on the string. This will give
\begin{equation}
\label{cons}
\frac{2a}{|a-1|}=n
\end{equation}
As a result, the total energy and angular momentum of the string will be found as
\begin{equation}
E=\frac{r_c}{a}(a+1)\ , \ \ \ \ \ \ \ \ \ \ \ \ J=\frac{r_c^2}{2a}(a+1)
\end{equation}
\newpage
with the obvious relation
\begin{equation}
\label{EJ}
E=2\frac{J}{r_c}
\end{equation}
Therefore to each pair, $(a,r_c)$, corresponds a unique string configuration provided that the periodicity condition
(\ref{cons}) is satisfied for some integer $n$.
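The closed forms above can be checked by direct quadrature (our own sketch): the substitution $r^2 = r_1^2 + (r_2^2 - r_1^2)\sin^2 t$ removes the inverse square root endpoint singularities of all three integrands. For the $(a,r_c)=(5/3,5)$ solution of Figure \ref{5to3}, the closed forms give $E_{seg}=4/5$, $J_{seg}=2$ and $\Delta\theta=\pi/5$:

```python
import math

def segment_charges(rc, a, n=20000):
    """Midpoint quadrature of the segment integrals for E, J and
    Delta-theta, after the substitution r^2 = r1^2 + (r2^2 - r1^2) sin^2 t."""
    rl = rc / a
    b, c = min(rc, rl) ** 2, max(rc, rl) ** 2
    h = (math.pi / 2) / n
    E = J = dth = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        u = b + (c - b) * math.sin(t) ** 2          # u = r^2
        r = math.sqrt(u)
        dr = (c - b) * math.sin(t) * math.cos(t) / r * h
        E   += r * abs(rc * rc - rl * rl) \
               / math.sqrt(abs((u - rl * rl) * (rc * rc - u))) * dr / (2 * math.pi * rc)
        J   += r * math.sqrt(abs((u - rl * rl) / (rc * rc - u))) * dr / (2 * math.pi)
        dth += (rl / rc) * math.sqrt(abs((rc * rc - u) / (u - rl * rl))) / r * dr
    return E, J, dth

E, J, dth = segment_charges(5.0, 5.0 / 3.0)
```

The same routine works for both classes, since the absolute values make it insensitive to whether $r_c$ or $r_l$ is larger.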
We can now identify two distinct cases; $a>1$ and $a<1$. It is readily seen that the first case
describes the spiky solutions found in \cite{Kkk} with the following
relations
\begin{equation}
a=\frac{n}{n-2}\ ,\ \ \ \ \ \ \ \ \ \ E=2r_c\frac{n-1}{n}\ ,\ \ \ \ \ \ \ \ \ \ J=r_c^2\ \frac{n-1}{n}
\end{equation}
and the dispersion relation
\begin{equation}
E=2\sqrt{\frac{n-1}{n}J}
\end{equation}
For the second case we have the following new relations
\begin{equation}
a=\frac{n}{n+2}\ ,\ \ \ \ \ \ \ \ \ \ E=2r_c\frac{n+1}{n}\ ,\ \ \ \ \ \ \ \ \ \ J=r_c^2\frac{n+1}{n}
\end{equation}
and the charges are related as
\begin{equation}
E=2\sqrt{\frac{n+1}{n}J}
\end{equation}
One can easily check that the energy determined by the pair $(a,r_c)$ remains invariant if we switch
to the pair $(1/a,r_c/a)$. Equivalently this amounts to interchanging $r_c$ and $r_l$.
Under such a transformation $J$ goes over to $J/a$ such
that the dispersion relation (\ref{EJ}) also remains invariant.
As can be seen from (\ref{cons}), this transformation takes an $n$ spike configuration in the first class
to an $n-2$ spike configuration in the second with the same energy and vice versa.
The whole set of transformations is thus as follows
\begin{eqnarray}
\label{Tdual}
a&\rightarrow& \frac{1}{a}\nonumber\\
r_c&\rightarrow&\frac{r_c}{a}\nonumber\\
n&\rightarrow&n+2\frac{|1-a|}{1-a}\\
J&\rightarrow&\frac{J}{a}\nonumber\\
E&\rightarrow&E\nonumber
\end{eqnarray}
\begin{figure}[ht]
\ \ \ \ \ \ \ \ \includegraphics[scale=0.7]{5flspike.eps}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\includegraphics[scale=0.7]{3dflspike.eps}
\caption{Spiky strings related by T-duality. The pair $(a,r_c)$ for the figure on the left
is $(5/3,5)$, which gives a $5$ spike configuration, whereas for the one on the right it is $(3/5,3)$, resulting
in a $3$ spike solution.}
\label{5to3}
\end{figure}
For a finite $a>1$, we have a closed string with $n$ spikes pointing outwards, $n$ determined by (\ref{cons}).
As $a$ decreases,
the number of spikes on the string increases until it approaches a circle which is the limiting
string configuration for this class of solutions. This happens when $a=1$ and $n=\infty$.
The complementary range of values for $a$, $0<a<1$, produces
configurations for the string which can be obtained from the first class by the transformations in (\ref{Tdual}).
This time, however, the spikes point inwards and as we decrease $a$, the number of spikes on the string also decreases from $n=\infty$
(a circular string) to $n=1$.
The limiting value $a\rightarrow\infty$ corresponds to
a folded string with $n=2$ which rotates in a circle of radius $r_c$ with angular velocity $\omega=1/r_c$.
This configuration does not have a corresponding dual in the second class as the string segment
for $a=0$ describes a spiral stretching from $r_c=0$ to $r_l$ which covers an infinite $\Delta\theta$ and
has an infinite energy\footnote{\label{foot}In a recent paper \cite{Ishizeki:2007we}, such configurations
have been studied on sphere with the name ``single spikes".}. So we demand that in the second family $a$ is at least
equal to $1/3$ which is the minimum value that can produce a closed string with only one spike.
In short, one can have spiky configurations with $n\ge2$ for $a>1$ and
$n\ge1$ for $1/3\le a<1$. The $a=1$ case describes a circle and is self dual.
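A minimal numerical self-check of these transformation rules (ours), using the closed forms for $n$, $E$ and $J$ derived above and the dual pair of solutions shown in Figure \ref{5to3}:

```python
# Closed-string data (n, E, J) from the pair (a, r_c); we check the
# duality map (a, r_c) -> (1/a, r_c/a) on the pair of Figure 1.
def charges(a, rc):
    n = 2.0 * a / abs(a - 1.0)          # periodicity condition
    E = rc * (a + 1.0) / a
    J = rc ** 2 * (a + 1.0) / (2.0 * a)
    return n, E, J

n1, E1, J1 = charges(5.0 / 3.0, 5.0)    # 5 spikes, pointing outwards
n2, E2, J2 = charges(3.0 / 5.0, 3.0)    # its T-dual: 3 spikes, pointing inwards
```

The energy is invariant, the angular momentum maps to $J/a$, and the spike number drops by two, as stated.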
\begin{figure}[ht]
\ \ \ \ \ \ \ \ \ \includegraphics[scale=0.7]{1dflspike.eps}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\includegraphics[scale=0.7]{spiral.eps}
\caption{The figure on the left is a dual closed spiky string with minimum number of spikes, $n=1$,
which is given by $a=1/3$. The figure on the right is a segment of string stretching between $r_c$ and $r_l$
with $a<1$ ($a=0.1$ here).}
\end{figure}
One can show that (\ref{Tdual}) is in fact a T-duality
transformation in the $(r,\theta)$ plane.
To see this, we find the above spiky solutions using the Polyakov action for the string.
The following configuration is by construction a solution to the wave equations resulting from this action
\begin{eqnarray}
\label{flat}
\label{pol}
x&=&r_c\frac{|a-1|}{2a} \cos(\frac{a+1}{a-1}\sigma_+)+r_c\frac{a+1}{2a}\cos(\sigma_{-})\nonumber\\
y&=&r_c\frac{|a-1|}{2a} \sin(\frac{a+1}{a-1}\sigma_+)+r_c\frac{a+1}{2a}\sin(\sigma_{-})\\
t&=&r_c\frac{a+1}{a}\tau\nonumber
\end{eqnarray}
where $\sigma_+= \tau+\sigma$, $\sigma_-= \tau-\sigma$,
and the target space coordinates $(t,x,y)$ parameterize
a Minkowski space. Note that there are two free parameters in this solution, $a$ and $r_c$.
We further demand that the relation (\ref{cons}) holds for some $n$.
This guarantees, as can be easily checked, that (\ref{flat}) satisfies the periodic boundary conditions
as well as the level matching and Virasoro constraints. Therefore we find a two
parameter family of closed string solutions determined by $(a,r_c)$.
Each pair $(a,r_c)$, together with (\ref{cons}), describes a spiky closed string with
$n$ spikes in conformal gauge.
The condition $X'^2=0$, where $X=(t,x,y)$, determines the position of the cusps. This
gives the radius of the spikes as $r_c$. The condition $X\cdot X'=0$, on the other hand,
gives the position of the lobes, which results in $r_c/a\equiv r_l$ for the radius of the lobes.
For $a>1$, the solutions (\ref{flat}) are those found in \cite{Kkk}. For $1/3\le a<1$
we recover the second class of spiky solutions discussed above. Applying the
transformations (\ref{Tdual}) to a given solution, i.e. $(a,r_c)\rightarrow(1/a,r_c/a)$, amounts to
changing the sign of the left mover part of $y$, $y_L$. This transformation, $y_L\rightarrow -y_L$,
is of course T-duality in $y$ direction. We hence conclude that, as mentioned above, (\ref{Tdual})
is a T-duality transformation on solutions.
\section{Dual spikes in $AdS$}
In this section we find spiky solutions in the $AdS$ background whose spikes
point inward, like the T-dual spikes found in the previous section. This is a
generalization of spiky strings in $AdS$ with outward spikes found in \cite{Kkk}.
Our main interest is the $AdS_5\times S^5$ background, in the global coordinates, but the closed string we are
interested in lives in the $AdS_3$ subspace specified by the following metric
\begin{equation}
ds^2=-\cosh^2\rho dt^2+d\rho^2+\sinh^2\rho d\theta^2
\end{equation}
\newpage
Our ansatz for the string configuration is the following
\begin{equation}
t=\tau\ ,\ \ \ \ \ \theta=\omega\tau+\sigma\ ,\ \ \ \ \ \ \rho=\rho(\sigma)
\end{equation}
The radius of $AdS$ is chosen to be one and the dimensionless worldsheet
coupling constant is denoted by $1/\sqrt{\lambda}$ where it is understood that
$\lambda$ is the 't Hooft coupling in the dual field theory, ${\cal N}=4$ SYM. The
Nambu-Goto action for the above ansatz is
\[
{\cal L}_{NG}=-\frac{\sqrt{\lambda}}{2\pi}\sqrt{l}\nonumber
\]
where
\begin{equation}
l=(\cosh^2\rho-\omega^2\sinh^2\rho)\rho'^2+\sinh^2\rho\cosh^2\rho
\end{equation}
The isometry direction, $\partial_{\theta}$, results in a constant of motion, which as
we see below, determines the position of lobes in the string configuration and hence we denote
it by ${\rho_l}$
\begin{equation}
\label{lobe}
\frac{\sinh2\rho_l}{2}\equiv\frac{\sinh^2\rho\ \cosh^2\rho}{\sqrt{l}}
\end{equation}
The other free parameter of the problem, $\omega$, on the other hand, determines
the position of cusps, $\rho_c$, which we define by
\begin{equation}
\label{cusp}
\sinh^2{\rho_c}\equiv\frac{1}{\omega^2-1}
\end{equation}
From (\ref{lobe}) it follows that
\begin{equation}
\label{integral}
\frac{\rho'}{\sinh2\rho}=\frac{1}{2}\frac{\sqrt{\cosh2\rho_c-1}}{\sinh2\rho_l}
\sqrt{\frac{\cosh^22\rho-\cosh^22\rho_l}{\cosh2\rho_c-\cosh2\rho}}
\end{equation}
One can check that if the above relation holds, the equations of motion are also satisfied.
It is readily seen that $\rho_l$ and $\rho_c$ indeed determine the position of lobes and cusps
on the string respectively.
All the above relations were found in \cite{Kkk}. What we do in the following is to change the assumption
made in \cite{Kkk}, namely $\rho_c>\rho_l$, to $\rho_c<\rho_l$ and find a whole new family
of solutions which, unlike their counterparts in the mentioned reference, describe strings with
spikes pointing towards the origin of $AdS$.
To proceed, we make the useful change of variables mentioned in \cite{Kkk} from
$\rho$ to $u$ defined by $u=\cosh2\rho$. In terms of this new variable we can write
the following relations
\begin{eqnarray}
\Delta\theta&=&\sqrt{\frac{u_l^2-1}{u_c-1}}\int_{u_c}^{u_l}\frac{du}{u^2-1}\frac{\sqrt{u-u_c}}{\sqrt{u_l^2-u^2}}\\
J&=&\frac{2n}{2\pi}\sqrt{\lambda}\frac{\sqrt{u_c-1}}{4}\int_{u_c}^{u_l}\frac{du}{u+1}\frac{\sqrt{u_l^2-u^2}}{\sqrt{u-u_c}}\\
E-\omega J&=&\frac{2n}{2\pi}\sqrt{\lambda}\frac{\sqrt{2}}{\sqrt{u_c-1}}\int_{u_c}^{u_l}du\frac{\sqrt{u-u_c}}{\sqrt{u_l^2-u^2}}
\end{eqnarray}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.85]{3dadsspike.eps}
\caption{A dual spiky string with $3$ spikes. $\rho_c=0.61347562$ and $\rho_l=2$.}
\end{center}
\end{figure}
In the above relations, each integration is performed on a segment of string stretching between
a cusp and a lobe. When we add up $2n$ number of such segments we make a closed string with $n$ cusps
provided that the angle covered by a segment satisfies $\Delta\theta=\pi/n$.
The integrals written above can be computed using integral tables (see e.g. \cite{grad}). The result is
\begin{eqnarray}
\Delta\theta&=&\frac{\sqrt{u_l^2-1}}{\sqrt{2u_l(u_c-1)}}\ \{\frac{u_c+1}{u_l+1}\Pi(n_1,p)-\frac{u_c-1}{u_l-1}\Pi(n_2,p)\}\\
J&=&\frac{2n}{2\pi}\sqrt{\lambda}\frac{\sqrt{u_c+1}}{2\sqrt{2u_l}}\ \{(1+u_l)K(p)-2u_lE(p)+(u_l-1)\Pi(n_1,p)\}\\
E-\omega J&=&\frac{2n}{2\pi}\sqrt{\lambda}\frac{2\sqrt{u_l}}{\sqrt{u_c-1}}\ \{E(p)-\frac{u_c+u_l}{2u_l}K(p)\}
\end{eqnarray}
where
\begin{equation}
n_1=\frac{u_l-u_c}{u_l+1}\ ,\ \ \ \ \ \ n_2=\frac{u_l-u_c}{u_l-1}\ ,\ \ \ \ \ \ p=\sqrt{\frac{u_l-u_c}{2u_l}}
\end{equation}
and $K(p)$, $E(p)$ and $\Pi(n,p)$ are the complete elliptic integrals of the first, second and third kind,
respectively (the argument $n$ in $\Pi$ need not be an integer and should not be confused with the number of spikes; see Appendix A).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.85]{10dadsspike.eps}
\caption{A dual spiky string with $10$ spikes. $\rho_c=1.03593842$ and $\rho_l=2$.}
\end{center}
\end{figure}
The argument $p$ appearing in the elliptic integrals varies between zero and $1/\sqrt{2}$ and hence
$K(p)$ and $E(p)$ are always of order unity. The elliptic integrals of the third kind we are dealing with, $\Pi(n_i,p)$,
are ``circular" ones because of the relation $p^2<n_1,n_2<1$. These functions increase as $n_i$ and/or $p$ increase and
blow up as $n_i$ approaches $1$.
Due to the complicated form of the relations, and unlike the analysis made for the flat space,
we did not manage to make a direct and general comparison
between the two families of spiky solutions in $AdS$, i.e. the ones we have found here with inward spikes and
the ones found in \cite{Kkk}.
Therefore we study some limits where the solutions simplify and/or where
we may have a dual field theory description.
The first limit we consider is when $\rho_c<\rho_l\ll1$. This limit corresponds to
a small angular momentum spiky string close to the origin and reproduces the dual spikes found
in the previous section and all the relations there apply here.
The other limit, which is of more interest for comparison with the dual field theory, is when
the angular momentum is large. This happens when $\rho_l\gg1$, which we will study in the following in some detail.
So the limit we are interested in is when $\rho_c$ is fixed and $\rho_l\rightarrow\infty$.
The $\Pi(n_i,p)$ in this limit behave as
\begin{equation}
\label{piapprox}
\Pi(n_1,p)\approx \frac{\pi}{2}\frac{\sqrt{2u_l}}{\sqrt{u_c+1}}\ ,\ \ \ \ \ \
\Pi(n_2,p)\approx \frac{\pi}{2}\frac{\sqrt{2u_l}}{\sqrt{u_c-1}}
\end{equation}
In evaluating these limits we have made use of a relation which holds between the
circular elliptic integrals and Heuman's Lambda function, $\Lambda_0(\psi,p)$ (see Appendix A).
Using (\ref{piapprox}) it is straightforward to find the following approximate expressions
\begin{eqnarray}
\label{tetaapprox}\Delta\theta&\approx&\frac{\pi}{2}\ \{\frac{\sqrt{u_c+1}}{\sqrt{u_c-1}}-1\}\\
\label{Japprox}J&\approx&n\ \frac{\sqrt{\lambda}}{4}\ u_l\\
\label{EJapprox}E-\omega J&\approx& \frac{2n}{2\pi}\ \sqrt{\lambda}\ \frac{2E(1/\sqrt{2})-K(1/\sqrt{2})}{\sqrt{u_c-1}}\ {u_l}^{1/2}
\end{eqnarray}
Recalling that the number of spikes is given by $n=\pi/{\Delta\theta}$, it is evident that the limit we are
considering describes strings with large angular momentum and fixed number of spikes. This number decreases as
$\rho_c$ approaches zero $(u_c\rightarrow1)$. As $\rho_c$ gets smaller, the angle covered by a single string segment increases
and in the limiting case it blows up, resulting in a spiral with infinite energy, like the one
we encountered in flat space, which is not a physically allowed configuration (see footnote \ref{foot}).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{adsspiral.eps}
\caption{A string segment forming a spiral. $\rho_c$ is so small that the segment covers an angle larger than the maximal allowed value, $\pi/2$.}
\end{center}
\end{figure}
We may thus assume that $\rho_c$ is much larger than one, to avoid the above-mentioned situation.
In this limit $\omega\approx1$ and we find the following
expressions, which this time we write in terms of $\rho$ instead of $u$
\begin{eqnarray}
\Delta\theta&\approx&\pi\ e^{-2\rho_c}\\
J&\approx&n\ \frac{\sqrt{\lambda}}{8}\ e^{2\rho_l}\\
\label{EJapprox2}E-J&\approx& n\ \frac{\sqrt{\lambda}}{\pi}\ [2E(1/\sqrt{2})-K(1/\sqrt{2})]\ e^{(\rho_l-\rho_c)}
\end{eqnarray}
Looking at (\ref{EJapprox2}), we require $\rho_l\gg\rho_c$ so that $(E-J)/n$ remains large
and a semiclassical approximation for each spike remains valid. This means that although we
require $\rho_c$ to be large enough to produce a finite number of spikes, we do not want
it to be so large as to make each spike infinitesimal in size and energy.
Gathering all the relations above we can write the dispersion relation for spiky strings in
the limit of large angular momentum
\begin{eqnarray}
\label{EJspike}
E&\approx& J+4n\ \sqrt{\lambda}\ \frac{2E(1/\sqrt{2})-K(1/\sqrt{2})}{\sqrt{2}\pi}\
\left(\frac{1}{n}\frac{J}{\sqrt{\lambda}}\right)^{1/2}\nonumber\\
&\approx&J+1.34\ \sqrt{\lambda}\ \frac{n}{2\pi}\ \left(\frac{4\pi}{n}\frac{J}{\sqrt{\lambda}}\right)^{1/2}
\end{eqnarray}
where in the last line we have plugged the approximate values of the elliptic integrals.
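As a standalone numerical sanity check (an illustration, not part of the original derivation), the elliptic-integral combination entering the prefactor can be evaluated with a simple Simpson quadrature:

```python
import math

def simpson(f, a, b, n=2001):
    """Composite Simpson rule on [a, b] with an odd number n of nodes."""
    h = (b - a) / (n - 1)
    total = f(a) + f(b)
    for i in range(1, n - 1):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return total * h / 3.0

q = 1.0 / math.sqrt(2.0)  # modulus appearing in the dispersion relation

# complete elliptic integrals of the first and second kind, modulus q
K = simpson(lambda t: 1.0 / math.sqrt(1.0 - (q * math.sin(t)) ** 2), 0.0, math.pi / 2)
E = simpson(lambda t: math.sqrt(1.0 - (q * math.sin(t)) ** 2), 0.0, math.pi / 2)

print(2 * E - K)                                  # ~0.8472
print(4 * (2 * E - K) / math.sqrt(2 * math.pi))   # ~1.35
```

The combination $2E(1/\sqrt{2})-K(1/\sqrt{2})\approx 0.847$ makes the overall prefactor approximately $1.35$, close to the rounded value quoted in the text.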
This is to be compared with the relation obtained for the spiky strings found in \cite{Kkk}
\begin{equation}
E\approx J+\sqrt{\lambda}\ \frac{n}{2\pi}\ \ln\left(\frac{4\pi}{n}\frac{J}{\sqrt{\lambda}}\right)
\end{equation}
As can be seen, apart from a numerical factor of order unity,
the dispersion relation for dual spiky strings differs from
the one for the usual spikes. This is because the dual spikes near the boundary
look like portions of almost circular strings rather than folded ones. This changes
the anomalous dimension from the usual logarithmic dependence to a square root,
which is the known behavior for rotating and pulsating circular strings in AdS.
In the discussion section we will comment more on this point.
\section{Dual spikes on sphere}
In this section we find spiky string solutions on $AdS_5\times S^5$ which sit
at the origin of $AdS$ and are point-like in $S^3\subset S^5$, i.e. they live
on a hemisphere with the following metric
\begin{equation}
dS^2=-dt^2+\cos^2\theta d\phi^2+d\theta^2
\end{equation}
where $0\le\theta\le\pi/2$ and $0\le\phi\le\pi$.
In \cite{Ryang}, such solutions with spikes pointing towards the equator, $\theta=0$, were found.
In the following we find solutions with spikes pointing towards the pole $\theta=\pi/2$. The ansatz we consider is
\begin{equation}
t=\tau\ ,\ \ \ \ \ \phi=\omega t+\sigma\ ,\ \ \ \ \ \theta=\theta(\sigma)
\end{equation}
The Nambu-Goto Lagrangian for this ansatz reads
\[
{\cal L}_{NG}=-\frac{\sqrt{\lambda}}{2\pi}\sqrt{l}
\]
with
\begin{equation}
l=(1-\omega^2\cos^2\theta)\ \theta'^2+\cos^2\theta
\end{equation}
The isometry direction $\partial_{\phi}$ results in a constant of motion,
which we denote by $\theta_l$ and which is given by
\begin{equation}
\label{thetalobe}
\cos^2\theta_l\equiv\ \frac{\cos^2\theta}{\sqrt{l}}
\end{equation}
As we will see below, this constant determines the position of lobes in the string configuration.
The other free parameter of the problem, $\omega$, determines the position of cusps by $\cos\theta_c=1/\omega$.
From (\ref{thetalobe}) we find
\begin{equation}
\frac{\theta'}{\cos\theta}=\frac{\cos\theta_c}{\cos\theta_l}\ \sqrt{\frac{\cos^2\theta_l-\cos^2\theta}
{\cos^2\theta-\cos^2\theta_c}}
\end{equation}
Here again one can check that if the above relation holds, the equations of motion
are also satisfied. In \cite{Ryang} it was assumed that $\theta_c<\theta_l$. Here we make the assumption
$\theta_l<\theta_c$, which leads to a second class of spiky solutions which we call
dual spiky strings. These strings are formed by attaching $2n$ string segments
which stretch between $\theta_l$ and $\theta_c$. The angle covered by each segment, $\Delta\phi$, and
also the energy, $E$, and angular momentum, $J$, of the closed string are given by the following integrals
\begin{eqnarray}
\Delta\phi&=& \frac{\cos\theta_l}{\cos\theta_c}\int_{\theta_l}^{\theta_c}\frac{d\theta}{\cos\theta}
\frac{\sqrt{\cos^2\theta-\cos^2\theta_c}}{\sqrt{\cos^2\theta_l-\cos^2\theta}}\\
J&=&\sqrt{\lambda}\frac{2n}{2\pi}\int_{\theta_l}^{\theta_c}d\theta\cos\theta
\frac{\sqrt{\cos^2\theta_l-\cos^2\theta}}{\sqrt{\cos^2\theta-\cos^2\theta_c}}\\
E-\omega J&=&\sqrt{\lambda}\frac{2n}{2\pi}\omega \int_{\theta_l}^{\theta_c}d\theta\cos\theta
\frac{\sqrt{\cos^2\theta-\cos^2\theta_c}}{\sqrt{\cos^2\theta_l-\cos^2\theta}}
\end{eqnarray}
The integrals can be computed with the following results
\begin{eqnarray}
\Delta\phi&=&\frac{x_c\ q^2}{\sqrt{(1-x_c^2)(1-x_l^2)}}\ K(q)-\frac{\pi}{2}[1-\Lambda_0(\psi,q)]\\
J&=&\sqrt{\lambda}\ \frac{2n}{2\pi}x_c\ [E(q)-\frac{x_l^2}{x_c^2}\ K(q)]\\
E-\omega J&=&\sqrt{\lambda}\ \frac{2n}{2\pi}\frac{x_c\ q^2}{\sqrt{1-x_c^2}}\ [K(q)-E(q)]
\end{eqnarray}
where
\[
x=\sin\theta\ ,\ \ \ \ \ q^2=1-\frac{x_l^2}{x_c^2}\ ,\ \ \ \ \ \psi=\sin^{-1}\left(\frac{1-x_c^2}{1-x_l^2}\right)^{1/2}
\]
One can see that for this class of spiky strings the angular momentum can never be large.
Therefore there is no limit in which the worldsheet corrections to the classical
configuration become subdominant and a semiclassical analysis becomes valid.
Despite this, and in order to simplify the expressions,
we consider some limits of the parameters in what follows.
First consider the case when $x_l\ll 1$ and $x_c$ is fixed. In this limit $q\approx1$ and
\[
K(q)\approx\ln(\frac{1}{\sqrt{1-q^2}})\ ,\ \ \ \ \ E(q)\approx1\ ,\ \ \ \ \ \frac{\pi}{2}\Lambda_0(\psi,q)\approx\psi
\approx\frac{\pi}{2}-\theta_c
\]
Using the above approximate expressions we can find that
the limit $q\approx1$ corresponds to fixed angular momentum and large energy for each half spike.
A string segment stretching between $\theta_l\approx0$ and $\theta_c$ looks like a spiral which covers
a large angle. The limit therefore does not correspond to a physically valid configuration
\footnote
{\label{foot}This configuration
is not physical in the sense that it is not compatible with our original ansatz for the string where we
assumed a winding number $w=1$ for the string. Including a general winding number in the ansatz
might give rise to interesting new configurations including ones made up of spirals. In fact the
``single spikes'' of \cite{Ishizeki:2007we} which live on the sphere are of this kind. For these solutions
the infinite winding number replaces $J$ in the dispersion relation to give a finite
anomalous part. The interchange of angular momentum and winding number is reminiscent of T-duality
relations. The dual spikes were shown to be T-duals of the usual ones in flat space.
Such a relation in AdS and sphere between spikes and dual spikes might be an interesting
subject to investigate.}.
Next consider a nearly circular configuration with $x_c\approx x_l$ or $q\approx0$.
For this case we have
\[
K(q)\approx\frac{\pi}{2}(1+\frac{\epsilon}{2x_c})\ ,\ \ \ E(q)\approx\frac{\pi}{2}(1-\frac{\epsilon}{2x_c})\ , \ \ \
\Lambda_0(\psi,q)\approx1-\sqrt{\frac{2}{1-x_c^2}}\ \epsilon\ ,\ \ \ \ \ \ (\epsilon\equiv x_c-x_l)
\]
We therefore find
\begin{eqnarray}
\Delta\phi&\approx&\sqrt{\frac{2}{1-x_c^2}}\left(\sqrt{\frac{2}{1-x_c^2}}-1\right)\frac{\pi}{2}\epsilon\\
J&\approx&\sqrt{\lambda}\ \frac{n}{2}\ \epsilon\\
E-\omega J&\approx& \sqrt{\lambda}\ \frac{n}{2}\ \frac{\epsilon}{\sqrt{1-x_c^2}}
\end{eqnarray}
This limit thus describes a large number of half spikes, each with a small energy and angular momentum,
resulting in a closed string with a finite $E$ and $J$.
If we further assume that the string is close to the equator, $x_c\approx x_l\approx0$ and hence
$\omega\approx1$, we find
\begin{equation}
E\approx 2J
\end{equation}
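Indeed, with $x_c\approx 0$ one has $\omega=1/\cos\theta_c=1/\sqrt{1-x_c^2}\approx 1$, so the approximate expressions above give
$$
E-J\approx E-\omega J\approx \sqrt{\lambda}\ \frac{n}{2}\ \frac{\epsilon}{\sqrt{1-x_c^2}}
\approx\sqrt{\lambda}\ \frac{n}{2}\ \epsilon\approx J\ ,
$$
from which $E\approx 2J$ follows.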
This coincides with the expression found in \cite{Ryang}
for nearly circular spiky strings near the equator, although there, unlike in our case,
the spikes point towards the equator.
Now consider the string to be near the pole, $x_c\approx x_l\approx1$ and hence $\omega\rightarrow\infty$.
To find the dispersion relation we should first express $x_c$ in terms of the charges, which to
leading order gives
\[
1-x_c^2\approx n\epsilon\approx \frac{2J}{\sqrt{\lambda}}
\]
As a result we find
\begin{equation}
E^2\approx 2\sqrt{\lambda}\ J
\end{equation}
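In more detail, writing $E=\omega J+(E-\omega J)$ with $\omega=1/\sqrt{1-x_c^2}$ and using $J\approx\sqrt{\lambda}\,n\epsilon/2$ from above,
$$
E\approx\frac{1}{\sqrt{1-x_c^2}}\left(J+\sqrt{\lambda}\ \frac{n}{2}\ \epsilon\right)
\approx\frac{2J}{\sqrt{1-x_c^2}}\approx\frac{2J}{\sqrt{2J/\sqrt{\lambda}}}=\sqrt{2\sqrt{\lambda}\ J}\ ,
$$
which squares to the relation above.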
This differs from the result $E^2\approx4\sqrt{\lambda}(\sqrt{\lambda}+14/3\ J/n)$ found in \cite{Ryang} for
nearly circular spiky strings close to the pole and with spikes pointing towards the equator.
\section{Discussion}
Here we are mainly interested in the field theory interpretation of dual spikes
in AdS, namely the result (\ref{EJspike}) (recall that we did not have a large $J$ limit
for dual spikes on the sphere, which makes the semiclassical limit difficult to interpret in
field theory). As was mentioned earlier, the result (\ref{EJspike}) looks like that of
circular rotating and pulsating strings \cite{park}. This is in fact understandable,
as we discuss below.
Semiclassical string configurations in AdS have led to the picture that spikes on
the string represent fields in SYM, whereas a profile in the radius of AdS, which
should generically be accompanied by rotation to give a string solution, is
represented by covariant derivatives $D_+$ which also carry spin.
The logarithmic behavior of the anomalous dimension for spinning strings is believed
to be caused by the large number of derivatives as compared to fields.
Circular rotating and pulsating strings\footnote{
For circular rotating and/or pulsating strings see for example\cite{Minahan:2002rc},\cite{park}.},
on the other hand, have been proposed to be represented
by self-dual gauge field strengths $F^{(+)}$ (see \cite{park}). These configurations have equal rotations
$S_1=S_2$ in the orthogonal planes of $S^3$ to stabilize, and thus carry the $(1,0)$ representation of
$SO(4)$, as does $F^{(+)}$. The anomalous dimension in these solutions behaves like $(\lambda S)^{1/3}$
for mostly spinning strings and like $\lambda^{1/4}N^{1/2}$ for mostly pulsating ones,
where $N$ is the oscillation number for pulsation.
The dual spikes we have studied in this paper, in the large $J$ limit and near
the boundary, look like portions of circular rotating strings. Moreover,
the profile in $\rho$, in addition to the rotation, induces a pulsating motion
for the string as seen by an observer at fixed angle. In fact,
in the large $J$ limit of the dual spikes, letting $\eta\equiv\omega-1$ and
keeping $\eta$ small (with $\eta J$ fixed) to keep the number of spikes finite, we get
\[
E\approx J+\eta J+1.34\ \sqrt{\frac{2}{\pi}}\ n\ \lambda^{1/4}\ \sqrt{\eta J}
\]
Here $\eta J$ replaces the oscillator number for pulsation. For smaller $\eta$
the portion of the string near the boundary becomes more circular, which reduces the
pulsation-like motion of the string and gives a smaller oscillation number.
One might then guess that the dual spiky strings in AdS in the large angular
momentum limit are schematically represented by operators of the form
\[
{\cal O}\sim {\bf Tr}\{\prod_{i=1}^n (D_+)^{J/n} (D_+ D_-)^{\eta J/2n} \ \Phi_i\}
\]
The $D_+$ operators are responsible for the profile in $\rho$ as well as the rotation. The $D_+D_-$, on the
other hand, contribute to the dimension but not to the angular momentum. The fields $\Phi$ as before
represent the spikes on the string.
It is clear that a better understanding of the field theory interpretation for
these solutions needs a more careful study which we postpone to a future work.
It might also be very interesting, though seemingly difficult,
to find spiky solutions with a real pulsation, perhaps interpolating
between spikes and dual spikes.
\section*{Acknowledgements}
AEM would like to thank H. Arfaei for discussions and M.M. Sheikh-Jabbari
for discussions and comments on the draft and also M.R. Maktabdaran
for helping with the figures. We would especially like to thank M. Kruczenski for useful
discussions and comments on the draft.
\section*{Appendix A: Some useful relations for Elliptic integrals}\label{appendix}
Here we gather some relations regarding elliptic integrals. For a complete account see \cite{grad},\cite{abram}.
The elliptic integrals of the first, second and third kind, $F$, $E$ and $\Pi$, are defined as
\begin{eqnarray}
F(\alpha;q)&=&\int_0^\alpha d\theta \frac{1}{(1-q^2 \sin^2\theta)^{\frac{1}{2}}}\\
E(\alpha;q)&=&\int_0^\alpha d\theta (1-q^2 \sin^2\theta)^{\frac{1}{2}}\\
\Pi(\alpha;n,q)&=& \int_0^\alpha d\theta {\frac{1}{(1-n
\sin^2\theta)(1-q^2 \sin^2\theta)^{\frac{1}{2}}}}
\end{eqnarray}
Complete elliptic integrals are those with $\alpha=\pi/2$. A usual notation is
\[
K(q)\equiv F({\frac{\pi}{2}},q)
\]
For the limiting case $q\rightarrow 0$ we have
\[
K(q)\approx{\frac{\pi}{2}}\ (1+\frac{q^2}{4})\ ,\ \ \ \ \ E(q)\approx{\frac{\pi}{2}}\ (1-\frac{q^2}{4})
\]
In the other
limiting case $q\rightarrow 1$
\[
K(q)\approx \ln\frac{4}{q'}\ ,\ \ \ \ \ E(q)\approx 1+\frac{q'^2}{2}\ln\frac{4}{q'}\ ,\ \ \ \ \ (q'\equiv\sqrt{1-q^2})
\]
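As an aside, both limiting forms are easy to verify numerically; the sketch below (an illustration, not part of the original text) computes $K(q)$ by the standard arithmetic-geometric-mean iteration, $K(q)=\pi/(2\,\mathrm{AGM}(1,q'))$:

```python
import math

def K_agm(q):
    """Complete elliptic integral K(q) via the arithmetic-geometric mean:
    K(q) = pi / (2 * AGM(1, q')) with q' = sqrt(1 - q^2)."""
    a, b = 1.0, math.sqrt(1.0 - q * q)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

# q -> 0 : K(q) ~ (pi/2)(1 + q^2/4)
q = 0.1
print(K_agm(q) - (math.pi / 2) * (1 + q * q / 4))   # residual ~2e-5, next order in q^2

# q -> 1 : K(q) ~ ln(4/q')
qp = 1e-4
q = math.sqrt(1.0 - qp * qp)
print(K_agm(q) - math.log(4.0 / qp))                # residual of order q'^2 ln(1/q')
```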
For the complete elliptic integrals of the third kind, $\Pi(n,q)$, the cases $0<n<q^2$ and $n>1$ are known
as ``hyperbolic'' whereas those with $q^2<n<1$ and $n<0$ are known as ``circular''. For $q^2< n <1$ we have
the following relation
\[
\Pi(n,q)=K(q)+\frac{\pi}{2}\delta[1-\Lambda_0(\psi,q)]
\]
where
\[
\delta=[{\frac{n}{(1-n)(n-q^2)}}]^{\frac{1}{2}} ,\ \ \ \ \
\psi=\arcsin[{\frac{1-n}{1-q^2}}]^{\frac{1}{2}}
\]
and $\Lambda_0$ is Heuman's lambda function defined as
\[
\Lambda_0(\psi,q)={\frac{2}
{\pi}}\left(K(q)E(\psi;q')-[K(q)-E(q)]F(\psi;q')\right)
\]
For $n<0$ we have
\[
\Pi(n,q)={\frac{-n(1-q^2)}{(1-n)(q^2-n)}}\Pi(N,q)+{\frac{q^2}{(q^2-n)}}K(q)
\]
where $N={\frac{q^2-n}{1-n}}$.
\section{Introduction}
\label{sec:intor}
Let $(M,\xi)$ be a contact manifold.
Each contact form $\lambda$ of $\xi$, i.e., a one-form with $\ker \lambda = \xi$, canonically induces a splitting
$$
TM = {\mathbb R}\{X_\lambda\} \oplus \xi.
$$
Here $X_\lambda$ is the Reeb vector field of $\lambda$,
which is uniquely determined by the equations
$$
X_\lambda \rfloor \lambda \equiv 1, \quad X_\lambda \rfloor d\lambda \equiv 0.
$$
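For instance, on ${\mathbb R}^3$ with coordinates $(x,y,z)$ and the standard contact form $\lambda=dz-y\,dx$, one has $d\lambda=dx\wedge dy$ and hence $X_\lambda=\partial/\partial z$, since
$$
\partial_z \rfloor (dz-y\,dx)=1, \qquad \partial_z \rfloor (dx\wedge dy)=0.
$$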
We denote by $\Pi=\Pi_\lambda: TM \to TM$ the idempotent, i.e., an endomorphism satisfying
$\Pi^2 = \Pi$ such that $\ker \Pi = {\mathbb R}\{X_\lambda\}$ and $\operatorname{Im} \Pi = \xi$.
Denote by $\pi=\pi_\lambda: TM \to \xi$ the associated projection.
\begin{defn}[Contact Triad]
We call the triple $(M,\lambda,J)$ a contact triad of $(M,\xi)$ if
$\lambda$ is a contact form of $(M,\xi)$,
and $J$ is an endomorphism of $TM$ with $J^2 = -\Pi$, which we call a
$CR$-almost complex structure, such that the triple $(\xi, d\lambda|_\xi,J|_\xi)$ defines a
Hermitian vector bundle over $M$.
\end{defn}
As long as no confusion arises, we do not distinguish $J$ and its restriction $J|_\xi$.
In \cite{oh-wang2}, the authors of the present paper called the pair $(w,j)$ a contact instanton, if
$(\Sigma, j)$ is a (punctured) Riemann surface and $w:\Sigma\to M$ satisfies the following equations
\begin{equation}\label{eq:contact-instanton}
{\overline \partial}^\pi w = 0, \quad d(w^*\lambda \circ j) = 0.
\end{equation}
A priori coercive $W^{k,2}$-estimates for $w$ with a $W^{1,2}$-bound were established
\emph{without involving symplectization}. Moreover, the study of the $W^{1,2}$ (or derivative) bound
and the definition of the relevant energy were carried out by the first-named author in \cite{oh:energy}.
Furthermore, for punctured domains $\dot\Sigma$ equipped with a cylindrical metric near the punctures,
the present authors proved asymptotic subsequence \emph{uniform} convergence to a Reeb orbit
(which must be closed when the corresponding charge vanishes) under the assumption that the
$\pi$-harmonic energy is finite and the $C^0$-norm of the derivative $dw$ is bounded.
(Refer to \cite[Section 6]{oh-wang2} for the precise statement and
Section \ref{sec:pseudo} in the current paper for its review.)
Based on this subsequence uniform convergence result, the present authors previously proved $C^\infty$ exponential decay
in \cite{oh-wang2} when the contact form is nondegenerate.
The proof is based on the so-called \emph{three-interval argument}, which is essentially different
from the proofs of exponential convergence in the existing literature, e.g., from those
in \cite{HWZ1, HWZ2, HWZplane} which use the differential inequality method.
The present paper is a sequel to the paper \cite{oh-wang2} and
the main purpose thereof is to generalize the exponential
convergence result to the Morse-Bott case. In Part \ref{part:exp} of the
current paper, we systematically develop the above mentioned three-interval method as a general framework
and establish the result for Morse-Bott contact forms.
(Corresponding results for pseudo-holomorphic curves in symplectizations
were provided by various authors including \cite{HWZ3, bourgeois, bao},
and we invite readers to compare our method with theirs.)
The proof here consists of two parts, one geometric and the other analytic. Part \ref{part:coordinate} is
devoted to unveiling the geometric structure, the \emph{pre-contact structure}, carried by the locus $Q$ of
the closed Reeb orbits of a Morse-Bott contact form $\lambda$ (see Section \ref{subsec:clean} for the precise definition).
We prove a canonical neighborhood theorem for any pre-contact manifold, the contact analogue of
Gotay's theorem on presymplectic manifolds \cite{gotay}, which we call the \emph{contact thickening}
of a pre-contact manifold. Using this neighborhood theorem, we obtain a canonical splitting of
the tangent bundle $TM$ in terms of the pre-contact structure of $Q$ and its thickening. We then
introduce the class of $J$'s \emph{adapted to $Q$} (refer to Section \ref{sec:adapted} for the definition)
in addition to the standard compatibility requirement with $d\lambda$. Finally, we split the derivative $dw$ of a contact instanton $w$ accordingly.
This provides the geometric framework needed to carry out the three-interval method
in Part \ref{part:exp}.
Part \ref{part:exp} is then devoted to applying the enhanced version of the three-interval framework
to prove the exponential convergence in the Morse-Bott case, generalizing the result presented
for the nondegenerate case in \cite{oh-wang2}.
Now we outline the main results of the present paper in more detail.
\subsection{Structure of clean submanifolds of closed Reeb orbits}
\label{subsec:clean}
Assume $\lambda$ is a fixed contact form of the contact manifold $(M, \xi)$.
For a closed Reeb orbit $\gamma$ of period $T > 0$, one can write $\gamma(t) = \phi^t(\gamma(0))$, where
$\phi^t= \phi^t_{X_\lambda}$ is the flow of the Reeb vector field $X_\lambda$.
Denote by ${\operatorname{Cont}}(M, \xi)$ the set of all contact one-forms with respect to the contact structure $\xi$,
and by ${\mathcal L}(M)=C^\infty(S^1, M)$ the space of loops $z: S^1 = {\mathbb R} /{\mathbb Z} \to M$ and
${\mathcal L}^{1,2}(M)$ its $W^{1,2}$-completion.
Consider the Banach bundle
$\mathcal{L}$ over the Banach manifold $(0,\infty) \times {\mathcal L}^{1,2}(M) \times {\operatorname{Cont}}(M,\xi)$ whose fiber
at $(T, z, \lambda)$ is $L^{1,2}(z^*TM)$.
The assignment
$$
\Upsilon: (T,z,\lambda) \mapsto \dot z - T \,X_\lambda(z)
$$
is a section, where $(T,z)$ is a pair with a loop $z$ parameterized over
the unit interval $S^1=[0,1]/\sim$ defined by $z(t)=\gamma(Tt)$ for
a Reeb orbit $\gamma$ of period $T$. Notice that $(T, z, \lambda)\in \Upsilon^{-1}(0)
:=\frak{Reeb}(M,\xi)$ if and only if there exists some Reeb orbit $\gamma: {\mathbb R} \to M$ with period $T$, such that
$z(\cdot)=\gamma(T\cdot)$.
Denote by
$$
\frak{Reeb}(M,\lambda) : = \{(T,z) \mid (T,z,\lambda) \in \frak{Reeb}(M,\xi)\}
$$
for each $\lambda \in {\operatorname{Cont}}(M,\xi)$. From the formula
$
T = \int_\gamma \lambda
$
for a $T$-periodic orbit $(T,\gamma)$, which holds since $\lambda(X_\lambda) \equiv 1$,
it follows that the period varies smoothly over $\gamma$.
The general Morse-Bott condition (Bott's notion \cite{bott} of clean critical submanifold in general) for $\lambda$ corresponds to the statement that
every connected component of $\frak{Reeb}(M,\lambda)$ is a smooth submanifold
of $ (0,\infty) \times {\mathcal L}^{1,2}(M)$ and its tangent space at
every pair $(T,z) \in \frak{Reeb}(M,\lambda)$ therein coincides with $\ker d_{(T,z)}\Upsilon$.
Denote by $Q$ the locus of Reeb orbits contained in a fixed
connected component of $\frak{Reeb}(M,\lambda)$.
However, when one tries to set up the moduli space of contact instantons
for Morse-Bott contact forms, more requirements are needed and we recall the definition that Bourgeois
adopted in \cite{bourgeois}.
\begin{defn}[Equivalent to Definition 1.7 \cite{bourgeois}]\label{defn:morse-bott-intro}
A contact form $\lambda$ is said to be of Morse-Bott type if it satisfies the following:
\begin{enumerate}
\item
Every connected component of $\frak{Reeb}(M,\lambda)$ is a smooth submanifold
of $ (0,\infty) \times {\mathcal L}^{1,2}(M)$ whose tangent space at
every pair $(T,z) \in \frak{Reeb}(M,\lambda)$ coincides with $\ker d_{(T,z)}\Upsilon$;
\item The locus $Q$ is embedded;
\item The 2-form $d\lambda|_Q$ associated to the locus $Q$ of Reeb orbits has constant rank.
\end{enumerate}
\end{defn}
Here Condition (1) corresponds to
Bott's notion of Morse-Bott critical manifolds, which we call \emph{standard Morse-Bott type}.
While $\frak{Reeb}(M,\lambda)$ is a
smooth submanifold, the orbit locus $Q \subset M$ of $\frak{Reeb}(M,\lambda)$ is
in general only an immersed submanifold and could have multiple sheets along the locus of multiple orbits.
Therefore we impose Condition (2).
In general, the restriction of the two-form $d\lambda$ to $Q$ has varying rank. It is still not clear whether
the exponential estimates we derive in this paper hold in this general context, because our proof
relies strongly on the existence of a canonical model of neighborhoods of $Q$. For this reason, we
also impose Condition (3).
We remark that Condition (3) means that the 2-form $d\lambda|_Q$ becomes a presymplectic form.
Depending on the type of the presymplectic form,
we say that $Q$ is of pre-quantization type if the rank of $d\lambda|_Q$ is maximal,
and is of isotropic type if the rank of $d\lambda|_Q$ is zero. The general case is
a mixture of these two. In particular when $\opname{dim} M = 3$, such $Q$ must be either of
prequantization type or of isotropic type. This is the case dealt with in
\cite{HWZ3}. The general case considered in \cite{bourgeois}, \cite{behwz} includes the mixed type.
\begin{defn}[Pre-Contact Form] We call a one-form $\theta$ on a manifold $Q$ a \emph{pre-contact} form
if $d\theta$ has constant rank, i.e., if $d\theta$ is a presymplectic form.
\end{defn}
While the notion of presymplectic manifolds is well established in symplectic geometry,
this contact analogue seems not to have been used
in the literature, at least formally, as far as we know.
With this terminology introduced, we prove the following theorem.
\begin{thm}[Theorem \ref{thm:morsebottsetup}]\label{thm:morsebottsetup-intro}
Let $Q$ be the clean submanifold foliated by the Reeb orbits of
Morse-Bott type contact form $\lambda$ of contact manifold $(M, \xi)$.
Assume $Q$ is embedded and $d\lambda|_Q = i_Q^*(d\lambda)$ has constant rank. Then $Q$ carries
\begin{enumerate}
\item a locally free $S^1$-action generated by the Reeb vector field $X_\lambda|_Q$;
\item the pre-contact form $\theta$ given by $\theta = i_Q^*\lambda$ and the splitting
\begin{equation}\label{eq:kernel-dtheta0}
\ker d\theta = {\mathbb R}\{X_\theta\} \oplus H,
\end{equation}
such that the distribution $H = \ker d\theta \cap \xi|_Q$ is integrable;
\item
an $S^1$-equivariant symplectic vector bundle $(E,\Omega) \to Q$ with
$$
E = (TQ)^{d\lambda}/\ker d\theta, \quad \Omega = [d\lambda]_E.
$$
\end{enumerate}
\end{thm}
Here we use the fact that there exists a canonical embedding
$$
E = (TQ)^{d\lambda}/\ker d\theta \hookrightarrow T_QM/ TQ = N_QM,
$$
and $d\lambda|_{(TQ)^{d\lambda}}$ canonically induces a bilinear form $[d\lambda]_E$
on $E = (TQ)^{d\lambda}/\ker di_Q^*\lambda$ by symplectic reduction.
\begin{defn} Let $(Q,\theta)$ be a pre-contact manifold
equipped with the splitting \eqref{eq:kernel-dtheta0}. We call such a triple $(Q,\theta,H)$
a \emph{Morse-Bott contact set-up}.
\end{defn}
We prove the following canonical model theorem as a converse of Theorem \ref{thm:morsebottsetup-intro}.
\begin{thm} Let $(Q,\theta,H)$ be a Morse-Bott contact set-up.
Denote by ${\mathcal F}$ and ${\mathcal N}$ the foliations associated to the distribution $\ker d\theta$ and $H$,
respectively. We also denote by $T{\mathcal F}$, $T{\mathcal N}$ the associated foliation tangent bundles and
$T^*{\mathcal N}$ the foliation cotangent bundle of ${\mathcal N}$. Then for any symplectic vector bundle $(E,\Omega) \to Q$,
the bundle $F = T^*{\mathcal N} \oplus E$
carries a canonical contact form $\lambda_{F;G}$ defined as in \eqref{eq:lambdaF}, for
each choice of complement $G$ such that $TQ = T{\mathcal F} \oplus G$. Furthermore, for two such choices
$G, \, G'$, the two induced contact structures are naturally isomorphic.
\end{thm}
Based on this theorem, we denote any such $\lambda_{F;G}$ just by $\lambda_F$ suppressing $G$ from its notation.
Finally we prove the following canonical neighborhood
theorem for $Q \subset M$ with $Q$ defined above for any
Morse-Bott contact form $\lambda$ of contact manifold $(M,\xi)$.
\begin{thm}[Theorem \ref{thm:neighborhoods2}]\label{thm:neighbohood}
Let $Q$ be the clean submanifold foliated by Reeb orbits of
Morse-Bott type contact form $\lambda$ of contact manifold $(M,\xi)$, and
$(Q,\theta)$ and $(E,\Omega)$ be the associated pair defined above. Then
there exist neighborhoods $U$ of $Q$ and $U_F$ of the zero section $o_F$,
a diffeomorphism $\psi: U_F \to U$ and a function $f: U_F \to {\mathbb R}$ such that
$$
\psi^*\lambda = f\, \lambda_F, \, f|_{o_F} \equiv 1, \, df|_{o_F}\equiv 0
$$
and
$$
i_{o_F}^*\psi^*\lambda = \theta, \quad (\psi^*d\lambda|_{VTF})|_{o_F} = 0\oplus \Omega
$$
where we use the canonical identification of $VTF|_{o_F} \cong T^*{\mathcal N} \oplus E$ on the
zero section $o_F \cong Q$.
\end{thm}
\begin{rem}\label{rem:behwz}
We would like to remark that while the bundles $E$ and $TQ/T{\mathcal F}$ carry canonical fiberwise
symplectic forms and hence canonical orientations
induced by $d\lambda$, the bundle $T{\mathcal N}$ may not be orientable in general along a
Reeb orbit corresponding to an orbifold point in $P = Q/\sim$.
\end{rem}
\subsection{The three-interval method of exponential estimates}
\label{subsec:three-interval}
For the study of the asymptotic behavior of finite $\pi$-energy contact instantons
$w: \dot \Sigma \to M$ near a Morse-Bott clean submanifold $Q$,
we introduce the following class of $CR$-almost complex structures.
\begin{defn}[Definition \ref{defn:adapted}] Let $Q \subset M$ be a clean manifold of closed Reeb orbits of $\lambda$.
Suppose $J$ defines a contact triad $(M,\lambda,J)$.
We say a $CR$-almost complex structure $J$ for $(M,\xi)$ is adapted to
clean manifold $Q$ or simply is $Q$-adapted if $J$ satisfies
\begin{equation}\label{eq:JTNT}
J(TQ) \subset TQ + JT{\mathcal N}.
\end{equation}
\end{defn}
Note that this condition is vacuous in the nondegenerate case,
but in the general Morse-Bott case the class of
adapted $J$'s is strictly smaller than that of general $CR$-almost complex structures of
the triad. The set of $Q$-adapted $J$'s is contractible; the proof is given in Appendix \ref{sec:appendix}.
As far as applications to contact topology are concerned, requiring this
condition is not a restriction, but it seems necessary for the analysis of
contact-instanton maps or of pseudoholomorphic maps in the symplectization
(or in symplectic manifolds with contact-type boundary).
Let $w: \dot\Sigma \rightarrow M$ be a contact instanton map, i.e.,
satisfying \eqref{eq:contact-instanton} at a cylindrical end
$[0, \infty)\times S^1$, which now can be written as
\begin{equation}\label{eq:contact-instanton2}
\pi \frac{\partial w}{\partial \tau} + J \pi \frac{\partial w}{\partial t} = 0, \quad
d(w^*\lambda \circ j) = 0,
\end{equation}
for $(\tau, t)\in [0, \infty)\times S^1$. We put the following basic hypotheses
for the study of exponential convergence.
\begin{hypo}[Hypothesis \ref{hypo:basic}]\label{hypo:basic-intro}
\begin{enumerate}
\item \emph{Finite $\pi$-energy}:
$E^\pi(w): = \frac{1}{2} \int_{[0, \infty)\times S^1} |d^\pi w|^2 < \infty$;
\item \emph{Finite derivative bound}:
$\|dw\|_{C^0([0, \infty)\times S^1)} \leq C < \infty$;
\item \emph{Non-vanishing asymptotic action}:\\
$
{\mathcal T} := \frac{1}{2}\int_{[0,\infty) \times S^1} |d^\pi w|^2
+ \int_{\{0\}\times S^1}(w|_{\{0\}\times S^1})^*\lambda\neq 0
$.
\item \emph{Vanishing asymptotic charge}:
$
{\mathcal Q}:=\int_{\{0\}\times S^1}((w|_{\{0\}\times S^1})^*\lambda\circ j)=0$.
\end{enumerate}
\end{hypo}
Under these hypotheses, we establish the following $C^\infty$ uniform exponential convergence of
$w$ to a closed Reeb orbit $z$ of period $T={\mathcal T}$. This result was already known
\cite{HWZ3, bourgeois, bao} in the context of pseudo-holomorphic curves $u = (w,a)$ in symplectizations.
However, we emphasize that our proof, which uses the three-interval framework, is
essentially different from those of \cite{HWZ3,bourgeois,bao}, even in the symplectization case.
Furthermore, in that case we completely separate the estimates of $w$ from those of $a$.
(See Section \ref{sec:asymp-cylinder}.)
\begin{thm}\label{thm:expdecay}Assume $(M, \lambda)$ is a Morse-Bott contact manifold and
$w$ is a contact instanton satisfying Hypothesis \ref{hypo:basic-intro}
at the given end. Then there exist a closed Reeb orbit $z$ with period $T={\mathcal T}$ and a positive
constant $\delta$ determined by $z$, such that
$$\|d(w(\tau, \cdot), z(T\cdot))\|_{C^0(S^1)}<C e^{-\delta \tau},$$
and
\begin{eqnarray*}
&&\|\pi \frac{\partial w}{\partial\tau}(\tau, \cdot)\|_{C^0(S^1)}<Ce^{-\delta\tau}, \quad
\|\pi \frac{\partial w}{\partial t}(\tau, \cdot)\|_{C^0(S^1)}<Ce^{-\delta\tau}\\
&&\|\lambda(\frac{\partial w}{\partial\tau})(\tau, \cdot)\|_{C^0(S^1)}<Ce^{-\delta\tau}, \quad
\|\lambda(\frac{\partial w}{\partial t})(\tau, \cdot)-T\|_{C^0(S^1)}<Ce^{-\delta\tau}\\
&&\|\nabla^l dw(\tau, t)\|_{C^0(S^1)}<C_le^{-\delta\tau} \quad \text{for any}\quad l\geq 1,
\end{eqnarray*}
where $C$ and $C_{l}$ are positive constants, with $C_l$ depending only on $l$.
\end{thm}
Now we outline the strategy of our proof of exponential convergence in the present paper.
Mundet i Riera and Tian \cite{mundet-tian} elegantly used a discrete method of three-interval arguments
to prove exponential decay under the assumption that
$C^0$-convergence has already been established. However, in most cases of interest the $C^0$-convergence
is not given a priori; often it can be obtained only after one first proves
the exponential convergence of the derivatives.
(See the proofs in, for example, \cite{HWZ1, HWZ2, HWZplane}, \cite{HWZ3}, \cite{bourgeois}, \cite{bao}.)
To obtain the exponential estimates of the derivatives, one usually performs brute-force calculations to derive the needed
differential inequality, and then proceeds from there towards the final result.
Such calculations, especially in coordinates, become quite complicated in the Morse-Bott situation
and hide the geometry that explains why such a differential inequality can be expected in the first place.
Our proof is divided into two parts by writing $w=(u, s)$ in the
normalized contact triad $(U_F,\lambda_F, J_0)$ (see Definition \ref{defn:normaltriad}) with $U_F \subset F \to Q$
for any given compatible $J$ adapted to $Q$, where $J_0$ is the canonical normalized
$CR$-almost complex structure associated to $J$.
We also decompose $s=(\mu, e)$ in terms of the splitting $F = T^*{\mathcal N} \oplus E$.
In this decomposition, the exponential estimates for the $e$-component are an easy consequence of
the three-interval method which we formulate in a general abstract framework
(see Theorem \ref{thm:3-interval} for the precise statement). This estimate belongs to the
standard realm of exponential decay proofs for asymptotically cylindrical elliptic equations.
However, the study of the exponential estimates for $(u,\mu)$ does not directly belong to this
standard realm. Although we still apply a similar three-interval method to study the exponential convergence,
this component is much more subtle than the normal component due to the presence of the
non-trivial kernel of the asymptotic operator $B_\infty$ of the linearization.
To handle the $(u,\mu)$-component, we formulate the following abstract theorem
within the abstract framework of the three-interval argument, and
refer readers to Sections \ref{sec:3-interval} and \ref{subsec:exp-horizontal} for the precise statement and
its proof.
\begin{thm}
Assume $\zeta(\tau, t)$ is a section of some vector bundle on ${\mathbb R}\times S^1$ which satisfies the equation
$$
\nabla^\pi_\tau\zeta+J\nabla^\pi_t\zeta+S\zeta=L(\tau, t) \quad \text{ with } |L|<Ce^{-\delta_0\tau}
$$
of Cauchy-Riemann type (or more generally any elliptic PDE of evolution type), where $S$ is a bounded symmetric operator.
Suppose that there exists a sequence $\{\bar\zeta_k\}$ (e.g., by performing a suitable
rescaling of $\zeta$) such that at least one subsequence converges to a non-zero
section $\bar\zeta_\infty$ of a (trivial) Banach bundle over a fixed finite interval, say on $[0,3]$,
that satisfies the ODE
$$
\frac{D \bar\zeta_\infty}{d\tau}+B_\infty \bar\zeta_\infty =0
$$
on the associated Banach space.
Then provided $\|\zeta(\tau, \cdot)\|_{L^2(S^1)}$ converges to zero as
$\tau$ goes to $\infty$, $\|\zeta(\tau, \cdot)\|_{L^2(S^1)}$ decays
to zero exponentially fast with rate at least the smaller of the
minimal non-zero eigenvalue of $B_\infty$ and $\delta_0$.
\end{thm}
\begin{rem}For the special case when $B_\infty$ has only trivial kernel,
the result can be regarded as the discrete analogue of the differential inequality method
used by Robbin-Salamon in \cite{robbin-salamon}.
\end{rem}
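To orient the reader, we illustrate the elementary discrete mechanism behind the three-interval method. The following formulation and constants are ours (a commonly used variant, cf. \cite{mundet-tian}); the precise version used in this paper is Theorem \ref{thm:3-interval}.

```latex
Suppose $(x_k)_{k\geq 0}$ is a sequence of non-negative numbers with
$x_k \to 0$ which satisfies
$$
x_{k+1} \leq \gamma\,(x_k + x_{k+2}) \quad \text{for all } k \geq 0
$$
for some constant $0 < \gamma < \frac{1}{2}$. Then $x_k \leq x_0\,\xi^k$
for all $k$, where $\xi \in (0,1)$ is the smaller root of
$\gamma\,\xi^2 - \xi + \gamma = 0$. Indeed, the comparison sequence
$y_k = x_0\,\xi^k$ satisfies $y_{k+1} = \gamma\,(y_k + y_{k+2})$ exactly
by the choice of $\xi$, so $z_k := x_k - y_k$ satisfies $z_0 \leq 0$,
$z_k \to 0$ and $z_{k+1} \leq \gamma\,(z_k + z_{k+2})$. If $z$ attained
a positive maximum at some $k_0 \geq 1$, then
$z_{k_0} \leq \gamma\,(z_{k_0-1} + z_{k_0+1}) \leq 2\gamma\, z_{k_0} < z_{k_0},$
a contradiction; hence $z_k \leq 0$ for all $k$.
```

In the geometric applications, $x_k$ is typically an $L^2$-energy of the map over the cylinder $[k, k+1] \times S^1$, and the inequality is produced by elliptic estimates on three consecutive such cylinders.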
In our framework, the exponential convergence proof is based on intrinsic geometric tensor calculations and
is completely coordinate-free. As a result, our proof makes it manifest that (roughly) the exponential
decay occurs whenever the geometric PDE has bounded energy at cylindrical ends
and the limiting equation is of linear evolution type
$
\frac{\partial \overline \zeta_\infty}{\partial\tau}+B_\infty \overline \zeta_\infty =0
$,
where $B_\infty$ is an elliptic operator with discrete spectrum.
If $B_\infty$ has only trivial kernel, the conclusion follows rather immediately from
the three-interval argument. Even when $B_\infty$ has non-trivial kernel,
the exponential decay still follows as long as some geometric condition, like the Morse-Bott assumption in
the case of our current interest, enables one to extract
some non-vanishing limiting solution of the limiting equation
$\frac{\partial \overline\zeta_\infty}{\partial\tau}+B_\infty \overline \zeta_\infty =0$.
Moreover the decay rate $\delta> 0$ is always governed by the minimal non-zero eigenvalue of $B_\infty$.
Now we roughly explain how the non-vanishing limiting solution mentioned above is obtained in the current situation:
First, the canonical neighborhood provided in Part \ref{part:coordinate} is used to split the contact
instanton equation into its vertical and horizontal parts. In this way, only the horizontal equation
involves the kernel of $B_\infty$, which by the Morse-Bott condition has a
nice geometric structure in the sense that the kernel can be excluded by looking at
higher derivatives instead of the map itself.
Then, to see that the limiting derivative is indeed non-vanishing, we apply a
geometric coordinate, the center of mass on the Morse-Bott
submanifold $Q$. The details are presented in Section \ref{subsec:exp-horizontal}
and Section \ref{subsec:centerofmass}.
\part{Contact Hamiltonian geometry and canonical neighborhoods}\label{part:neighborhoods}
\label{part:coordinate}
The main purpose of this part is to prove a canonical neighborhood theorem for the
clean submanifold of the loci of Reeb orbits of any Morse-Bott contact form
$\lambda$ of a general contact manifold $(M,\xi)$. The section-wise outline of the contents of
this part is as follows.
\begin{itemize}
\item In Section 2, we establish a natural isomorphism between $TM$ and $T^*M$
which is induced by each choice of contact form $\lambda$ for a given contact structure $\xi$.
This is a contact analogue to the well-known isomorphism between the tangent bundle and
the cotangent bundle of a symplectic manifold.
\item In Section 3, we derive explicit formulae for the Reeb vector field $X_{f\lambda}$ and
the contact projection $\pi_{f\, \lambda}$ in terms of $X_\lambda$, $\pi_\lambda$ and $f$.
\item In Section 4, we give the definition of Morse-Bott contact forms we use and identify
the geometric structure, the pre-contact manifold, carried by the associated clean submanifold $Q$
of the loci of Reeb orbits.
\item In Section 5, we introduce the notion of the contact thickening of a pre-contact structure, which is the contact analogue
of the symplectic thickening of a pre-symplectic structure constructed in \cite{gotay}, \cite{oh-park}.
\item In Section 7, we derive the linearization formula of a Reeb orbit in the normal form.
\item In Section 8, we introduce the class of adapted CR-almost complex structures and prove their
abundance.
\item In Sections 9 and 10, we express the derivative $dw = (du, \nabla_{du} f)$ of
the map $w = (u,f)$ in terms of the splitting
$$
TU_F = TQ \oplus F = TQ \oplus (E \oplus JT{\mathcal F}), \quad TQ = T{\mathcal F} \oplus G
$$
for the normal neighborhood $(U_F, \lambda)$ of $Q$.
\end{itemize}
In Part 2, we exploit these geometric preparations and give a canonical
tensorial proof of the asymptotic exponential convergence result for contact instantons
near each puncture of the punctured Riemann surface $\dot \Sigma$.
\section{$\lambda$-dual vector fields and $\lambda$-dual one-forms}
\label{sec:some}
We recall some basic facts on the contact geometry and
contact Hamiltonian dynamics especially in relation to the perturbation of
contact forms for a given contact manifold $(M,\xi)$.
Let $(M,\xi)$ be a contact manifold and $\lambda$ be a contact form with $\ker \lambda = \xi$. Consider
its associated decomposition
\begin{equation}\label{eq:decomp-TM}
TM = {\mathbb R}\{X_\lambda\} \oplus \xi
\end{equation}
and denote by $\pi=\pi_\lambda: TM \to \xi$ the associated projection.
Denote $X^{\pi_\lambda} : = \pi_\lambda(X)$. We will suppress the subindex $\lambda$ from
$\pi_\lambda$ whenever the background contact form is manifest. This decomposition canonically
induces the corresponding dual decomposition
\begin{equation}\label{eq:decomp-T*M}
T^*M = \xi^\perp \oplus ({\mathbb R}\{X_\lambda\})^\perp
\end{equation}
which corresponds to the unique decomposition
\begin{equation}\label{eq:alpha-decomp}
\alpha = \alpha(X_\lambda)\, \lambda + \alpha \circ \pi_\lambda.
\end{equation}
Then we have the following general lemma whose proof immediately follows from \eqref{eq:decomp-T*M}.
\begin{lem}\label{lem:decompose}
For any given one-form $\alpha$, there exists a unique $Y_\alpha \in \xi$ such that
$$
\alpha = Y_\alpha \rfloor d\lambda + \alpha(X_\lambda) \lambda.
$$
\end{lem}
\begin{defn}[$\lambda$-Dual Vector Field and One-Form] Let $\lambda$ be a given contact form of $(M,\xi)$.
We define the \emph{$\lambda$-dual vector field} of a one-form $\alpha$ to be
$$
\flat_\lambda(\alpha): = Y_\alpha + \alpha(X_\lambda)\, X_\lambda.
$$
Conversely for any given vector field $X$ we define its $\lambda$-dual one-form by
$$
\sharp_\lambda(X)= X \rfloor d\lambda + \lambda(X)\, \lambda.
$$
\end{defn}
For the simplicity of notation, we will denote $\alpha_X: = \sharp_\lambda(X)$.
By definition, we have the identity
\begin{equation}\label{eq:obvious-id}
\lambda(\flat_\lambda(\alpha)) = \alpha(X_\lambda).
\end{equation}
The following proposition is immediate from the definitions of the dual vector field and the dual
one-forms.
\begin{prop}\label{prop:inverse} The maps $\flat_\lambda: \Omega^1(M) \to \frak X(M), \, \alpha \mapsto \flat_\lambda(\alpha)$
and $\sharp_\lambda: \frak X(M) \to \Omega^1(M), \, X \mapsto \alpha_X$ are inverse to each other.
In particular any vector field can be written as $\flat_\lambda(\alpha)$ for a unique one-form $\alpha$ and
any one-form can be written as $\alpha_X$ for a unique vector field $X$.
\end{prop}
By definition, we have $\flat_\lambda(\lambda) = X_\lambda$; that is, the $\lambda$-dual of
the contact form $\lambda$ itself is the Reeb vector field, for which $Y_\lambda =0$
by definition.
Obviously when an exact one-form $\alpha = dh$ is given, the choice of the function $h$ is
unique modulo addition of a constant (on each connected component of $M$).
Now we write down the coordinate expressions of $\flat_\lambda(\alpha)$ and $\alpha_X$
in the Darboux chart $(q_1,\cdots, q_n, p_1, \cdots, p_n,z)$ with the canonical
one-form $\lambda_0 = dz - \sum_{i=1}^n p_i\, dq_i$ on ${\mathbb R}^{2n+1}$, or more generally on the
one-jet bundle $J^1(N)$ of a smooth $n$-manifold $N$. We recall that for this contact form,
the associated Reeb vector field is nothing but
$$
X_0 = \frac{\partial}{\partial z}.
$$
We start with the expression of $\flat_\lambda(\alpha)$ for a given one-form
$$
\alpha = \alpha_0 \, dz + \sum_{i=1}^n a_i\, dq_i + \sum_{j=1}^n b_j\, dp_j.
$$
We denote
$$
\flat_\lambda(\alpha) = v_0 \frac{\partial}{\partial z} + \sum_{i=1}^n v_{i;q} \frac{\partial}{\partial q_i}
+ \sum_{j=1}^n v_{j;p} \frac{\partial}{\partial p_j}.
$$
A direct computation using the defining equation of $\flat_\lambda(\alpha)$ leads to
\begin{prop}\label{prop:expressioninDarbouxchart}
Consider the standard contact form $\lambda = dz - \sum_{i=1}^n p_i\, dq_i$
on ${\mathbb R}^{2n+1}$. Then for the given one-form $\alpha = \alpha_0 dz + \sum_{i=1}^n a_i\, dq_i + \sum_{j=1}^n b_j \, dp_j$,
\begin{equation}\label{eq:vis}
\flat_\lambda(\alpha) = \left(\alpha_0 + \sum_{k=1}^n p_k\, b_k\right) \frac{\partial}{\partial z} +\sum_{i=1}^n b_i\frac{\partial}{\partial q_i}
+ \sum_{j=1}^n (-a_j - p_j\, \alpha_0) \frac{\partial}{\partial p_j}.
\end{equation}
Conversely, for given $X = v_0 \frac{\partial}{\partial z} + \sum_{i=1}^n v_{i;q} \frac{\partial}{\partial q_i} + \sum_{j=1}^n v_{j;p} \frac{\partial}{\partial p_j}$, we obtain
\begin{eqnarray}\label{eq:alphais}
\alpha_X & = & \left(v_0 - \sum_{j=1}^n p_j\, v_{j;q}\right)\,dz \nonumber \\
&{}& \quad + \sum_{i=1}^n\left( - v_{i;p} - p_i\left(v_0 - \sum_{j=1}^n p_j\, v_{j;q}\right)\right) dq_i +\sum_{j=1}^n v_{j;q}\, dp_j.
\end{eqnarray}
\end{prop}
\begin{proof} Here we first recall the basic identity \eqref{eq:obvious-id}.
By definition, $\flat_\lambda(\alpha)$ is determined by the equation
\begin{equation}\label{eq:alpha-lambda0}
\alpha = \flat_\lambda(\alpha) \rfloor \sum_{i=1}^n dq_i \wedge dp_i
+ \lambda(\flat_\lambda(\alpha))\, \left(dz - \sum_{i=1}^n p_i\, dq_i\right)
\end{equation}
in the current case.
A straightforward computation leads to the formula \eqref{eq:vis}.
Then \eqref{eq:alphais} can be derived either by inverting this formula or by
using the defining equation of $\alpha_X$, which is reduced to
$$
\alpha_X = X \rfloor d\lambda_0 + \lambda(X)\, \lambda = X \rfloor \sum_{i=1}^n dq_i \wedge dp_i
+ (dz - \sum_{j=1}^n p_j\, dq_j)(X)\, \lambda.
$$
We omit the details of the computation.
\end{proof}
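For the reader's convenience, we record the routine computation omitted above; this expansion is ours. Contracting $X$ into $d\lambda_0$ gives

```latex
$$
X \rfloor d\lambda_0 = \sum_{i=1}^n \big( v_{i;q}\, dp_i - v_{i;p}\, dq_i \big),
\qquad
\lambda(X) = v_0 - \sum_{j=1}^n p_j\, v_{j;q},
$$
so that
$$
\alpha_X = \Big(v_0 - \sum_{j=1}^n p_j\, v_{j;q}\Big)\, dz
+ \sum_{i=1}^n \Big({-v_{i;p}} - p_i \Big(v_0 - \sum_{j=1}^n p_j\, v_{j;q}\Big)\Big)\, dq_i
+ \sum_{j=1}^n v_{j;q}\, dp_j,
$$
```

where each $dq_i$-coefficient collects $-v_{i;p}$ from the contraction and $-p_i\,\lambda(X)$ from the $\lambda$-term.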
\begin{exm}
Again consider the canonical one-form $dz - \sum_{i=1}^n p_i \, dq_i$. Then we obtain the
following coordinate expression as a special case of \eqref{eq:vis}
\begin{equation}\label{eq:hamvis}
\flat_\lambda(dh) = \left(\frac{\partial h}{\partial z} + \sum_{i=1}^n p_i\frac{\partial h}{\partial p_i}\right)\frac{\partial}{\partial z}
+ \sum_{i=1}^n \frac{\partial h}{\partial p_i}\frac{\partial}{\partial q_i}
+ \sum_{j=1}^n\left(- \frac{\partial h}{\partial q_j} - p_j\frac{\partial h}{\partial z}\right)\frac{\partial}{\partial p_j}.
\end{equation}
\end{exm}
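As a quick consistency check of \eqref{eq:vis} (this verification is ours), take $\alpha = \lambda_0$ itself, so that $\alpha_0 = 1$, $a_i = -p_i$ and $b_j = 0$. Then

```latex
$$
\flat_\lambda(\lambda_0)
= \Big(1 + \sum_{k=1}^n p_k \cdot 0\Big)\frac{\partial}{\partial z}
+ \sum_{i=1}^n 0 \cdot \frac{\partial}{\partial q_i}
+ \sum_{j=1}^n \big( -(-p_j) - p_j \cdot 1 \big)\frac{\partial}{\partial p_j}
= \frac{\partial}{\partial z} = X_0,
$$
```

in agreement with the general identity $\flat_\lambda(\lambda) = X_\lambda$ noted above.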
\section{Perturbation of contact forms of $(Q,\xi)$}
\label{sec:perturbed-forms}
In this section, we exploit the discussion on the $\lambda$-dual vector fields and
express the Reeb vector fields $X_{f\lambda}$ and the projection $\pi_{f\lambda}$
associated to the contact form $f \lambda$ for a positive function $f > 0$,
in terms of those associated to the given contact form $\lambda$ and
the $\lambda$-dual vector fields of $df$.
Recalling Lemma \ref{lem:decompose}, we can write
$$
dg = Y_{dg} \rfloor d\lambda + dg(X_\lambda) \lambda
$$
with $Y_{dg} \in \xi$ in a unique way for any smooth function $g$. Then by definition, we have
$Y_{dg} = \pi_\lambda(\flat_\lambda(dg))$.
We denote $\flat_\lambda(dg)=: X_{dg}$ and the unique vector field $Y_{dg}$ as
$$
Y_{dg} = X_{dg}^{\pi_\lambda}.
$$
We first compute the following useful explicit formula
for the associated Reeb vector fields $X_{f\lambda}$ in terms of $X_\lambda$
and $X_{df}^{\pi_{\lambda}}$.
\begin{prop}[Reeb Vector Fields]\label{prop:eta}
Let $f > 0$ be a smooth function, let $g = \log f$, and decompose $f\, X_{f\lambda} = c\, X_\lambda + \eta$
with $\eta \in \xi$ according to \eqref{eq:decomp-TM}. Then $c = 1$ and $\eta = X_{dg}^{\pi_\lambda}$. In particular,
$$
X_{f\lambda} = \frac{1}{f}(X_{\lambda} + X_{dg}^{\pi_\lambda}).
$$
\end{prop}
\begin{proof} Denote $g = \log f$.
It turns out to be easier to
consider $f\, X_{f\lambda}$, for which we have the decomposition
\begin{equation}\label{eq:fXflambda}
f\,X_{f\lambda}= c \cdot X_\lambda+\eta
\end{equation}
with respect to the splitting $TM = {\mathbb R}\{X_\lambda\} \oplus \xi$. We evaluate
\begin{eqnarray*}
c = \lambda(fX_{f\lambda})=(f\lambda)(X_{f\lambda})= 1.
\end{eqnarray*}
It remains to derive the formula for $\eta$. Using the formula
\begin{eqnarray*}
d(f\lambda)= f d\lambda+df\wedge \lambda,
\end{eqnarray*}
we compute
\begin{eqnarray*}
\eta\rfloor d\lambda&=&(fX_{f\lambda})\rfloor d\lambda\\
&=& X_{f\lambda}\rfloor d(f\lambda)-X_{f\lambda}\rfloor(df\wedge\lambda)\\
&=&-X_{f\lambda}\rfloor(df\wedge\lambda)\\
&=&-X_{f\lambda}(f)\lambda+\lambda(X_{f\lambda})df\\
&=&-\frac{1}{f}(X_\lambda+\eta)(f)\lambda+\frac{1}{f}\lambda(X_\lambda+\eta)df\\
&=&-\frac{1}{f}X_\lambda(f)\lambda-\frac{1}{f}\eta(f)\lambda+\frac{1}{f}df.
\end{eqnarray*}
Evaluating both sides on $X_\lambda$ and using $X_\lambda \rfloor d\lambda = 0$ and $\lambda(X_\lambda) = 1$, we get
$
\eta(f)=0,
$
and hence
$$
\eta\rfloor d\lambda=-\frac{1}{f}X_\lambda(f)\lambda+\frac{1}{f}df.
$$
Since $g =\log f$, this becomes
$$
\eta\rfloor d\lambda=-dg(X_\lambda)\lambda+dg.
$$
In other words, we obtain
\begin{equation}
dg = \eta \rfloor d\lambda + dg(X_\lambda)\lambda.\label{eq:eta1}
\end{equation}
Therefore by the uniqueness of this decomposition, we have obtained
$\eta = X_{dg}^{\pi_\lambda}$. Substituting this into \eqref{eq:fXflambda} and dividing it
by $f$, we have finished the proof.
\end{proof}
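One can also verify directly that the right-hand side satisfies the two defining conditions of the Reeb vector field of $f\lambda$; this check is ours and is not needed for the proof above. Write $X' = \frac{1}{f}(X_\lambda + X_{dg}^{\pi_\lambda})$. Then $(f\lambda)(X') = \lambda(X_\lambda + X_{dg}^{\pi_\lambda}) = 1$, and, using $\eta(f) = 0$, $X_{dg}^{\pi_\lambda} \rfloor d\lambda = dg - dg(X_\lambda)\,\lambda$ and $df = f\, dg$,

```latex
$$
X' \rfloor d(f\lambda)
= df(X')\,\lambda - \lambda(X')\, df + f\, X' \rfloor d\lambda
= dg(X_\lambda)\,\lambda - dg + \big(dg - dg(X_\lambda)\,\lambda\big)
= 0,
$$
```

where we used $df(X') = \frac{1}{f}X_\lambda(f) = dg(X_\lambda)$, $\lambda(X') = \frac{1}{f}$ and $f\,X'\rfloor d\lambda = X_{dg}^{\pi_\lambda}\rfloor d\lambda$.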
Next we compare the contact projection
of $\pi_{\lambda}$ with that of $\pi_{f\lambda}$.
\begin{prop}[$\pi$-Projection]\label{prop:xi-projection}
Let $(M,\xi)$ be a contact manifold and let $\lambda$ be a contact form
i.e., $\ker \lambda = \xi$. Let $f$ be a positive smooth function and $f \, \lambda$
be its associated contact form. Denote by $\pi_{\lambda}$ and $\pi_{f\, \lambda}$
their associated $\xi$-projections. Then
\begin{equation}\label{eq:xi-projection}
\pi_{f\, \lambda}(Z) = \pi_{\lambda}(Z)- \lambda(Z) X_{dg}^{\pi_{\lambda}}
\end{equation}
for the function $g = \log f$.
\end{prop}
\begin{proof}
We compute
\begin{eqnarray*}
\pi_{f\lambda}(Z)&=& Z - f\lambda(Z) X_{f\lambda}
= Z-\lambda(Z)(f X_{f\lambda})\\
&=&Z-\lambda(Z) X_{\lambda}+(\lambda(Z) X_{\lambda}-\lambda(Z)(fX_{f\lambda}))\\
&=&\pi_{\lambda}Z+\lambda(Z)(X_{\lambda}-f X_{f\lambda})\\
&=&\pi_{\lambda}Z-\lambda(Z) X_{dg}^{\pi_\lambda}.
\end{eqnarray*}
This finishes the proof.
\end{proof}
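As a sanity check of \eqref{eq:xi-projection} (this verification is ours), note that the right-hand side indeed takes values in $\xi = \ker(f\lambda)$ and annihilates the Reeb vector field $X_{f\lambda}$:

```latex
$$
\lambda\big(\pi_{f\lambda}(Z)\big)
= \lambda(\pi_{\lambda}Z) - \lambda(Z)\,\lambda\big(X_{dg}^{\pi_\lambda}\big) = 0,
\qquad
\pi_{f\lambda}(X_{f\lambda})
= \frac{1}{f}\,X_{dg}^{\pi_\lambda} - \frac{1}{f}\,X_{dg}^{\pi_\lambda} = 0,
$$
```

where we used that $\pi_\lambda Z$ and $X_{dg}^{\pi_\lambda}$ lie in $\xi$, together with $\pi_\lambda(X_{f\lambda}) = \frac{1}{f}X_{dg}^{\pi_\lambda}$ and $\lambda(X_{f\lambda}) = \frac{1}{f}$ from Proposition \ref{prop:eta}.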
We next study the linearization of $\Upsilon(z) = \dot z - X_{f \lambda}(z)$,
$$
D\Upsilon(z)(Y) = \nabla_t^\pi Y - \nabla_Y X_{f\, \lambda},
$$
computed with respect to the triad connection of $(M,\lambda,J)$ (see Proposition 3.3 \cite{oh-wang2}),
for a given function $f$. Substituting
$$
X_{f\, \lambda} = \frac{1}{f}(X_{\lambda} + X_{dg}^{\pi_\lambda})
$$
into this formula, we derive
\begin{lem}[Linearization]\label{lem:DUpsilon}
Let $\nabla$ be the triad connection of $(M, f \lambda,J)$. Then
for any vector field $Y$ along a Reeb orbit $z = (\gamma(T\cdot),o_{\gamma(T\cdot)})$,
\begin{equation}\label{eq:DUpsilon}
D\Upsilon(z)(Y)
= \nabla_t^\pi Y - \left(\frac{1}{f}\nabla_Y X_\lambda + Y[1/f] X_\lambda \right)
- \left(\frac{1}{f} \nabla_Y X_{dg}^{\pi_\lambda} + Y[1/f] X_{dg}^{\pi_\lambda}\right)
\end{equation}
\end{lem}
\begin{proof} Let $\nabla$ be the triad connection of $(M,f\lambda,J)$. Then
by definition its torsion $T$ satisfies the axiom $T(X_\lambda,Y) = 0$ for any vector field $Y$ on $M$
(see Axiom 2 of Theorem 1.3 \cite{oh-wang1}). Using this property, as in section 3 of \cite{oh-wang2},
we compute
\begin{eqnarray*}
D\Upsilon(z)(Y) & = & \nabla_t^\pi Y - \nabla_Y X_{f\lambda} \\
& = & \nabla_t^\pi Y - \nabla_Y\left(\frac{1}{f}(X_\lambda + X_{dg}^{\pi_\lambda})\right) \\
& = & \nabla_t^\pi Y -Y[1/f](X_\lambda + X_{dg}^{\pi_{\lambda}}) - \frac{1}{f}\nabla_Y(X_\lambda + X_{dg}^{\pi_\lambda})\\
& = & \nabla_t^\pi Y - \left(\frac{1}{f}\nabla_Y X_\lambda + Y[1/f] X_\lambda \right)
- \left(\frac{1}{f} \nabla_Y X_{dg}^{\pi_\lambda} + Y[1/f] X_{dg}^{\pi_\lambda}\right),
\end{eqnarray*}
which finishes the proof.
\end{proof}
We note that when $f\equiv 1$, the above formula reduces to the standard formula
$$
D\Upsilon(z)(Y)
= \nabla_t^\pi Y - \nabla_Y X_\lambda
$$
which can further be rewritten as
$$
D\Upsilon(z)(Y) = \nabla_t^\pi Y - \frac{1}{2}({\mathcal L}_{X_\lambda}J) J Y
$$
for any contact triad $(M,\lambda,J)$.
(See section 3 \cite{oh-wang2} for some discussion on this formula.)
\section{The clean submanifold of Reeb orbits}
\label{sec:neighbor}
\subsection{Definition of Morse-Bott contact form}
\label{subsec:morse-bott}
Let $(M,\xi)$ be a contact manifold and $\lambda$ be a contact form of $\xi$.
We would like to study the linearization of the equation $\dot x = X_\lambda(x)$
along a closed Reeb orbit.
Let $\gamma$ be a closed Reeb orbit of period $T > 0$. In other words,
$\gamma: {\mathbb R} \to M$ is a solution of $\dot \gamma = X_\lambda(\gamma)$ satisfying $\gamma(t+T) = \gamma(t)$.
By definition, we can write $\gamma(t) = \phi^t_{X_\lambda}(\gamma(0))$
for the Reeb flow $\phi^t= \phi^t_{X_\lambda}$ of the Reeb vector field $X_\lambda$.
In particular $p = \gamma(0)$ is a fixed point of the diffeomorphism
$\phi^T$ when $\gamma$ is a closed Reeb orbit of period $T$.
Since ${\mathcal L}_{X_\lambda}\lambda = 0$, the contact diffeomorphism $\phi^T$ canonically induces the isomorphism
$$
\Psi_{z;p} : = d\phi^T(p)|_{\xi_p}: \xi_p \to \xi_p
$$
which is the linearization of the Poincar\'e return map $\phi^T$ restricted to $\xi_p$ via the splitting
$$
T_p M=\xi_p\oplus {\mathbb R}\cdot \{X_\lambda(p)\}.
$$
\begin{defn} We say a $T$-closed Reeb orbit $(T,z)$ is \emph{nondegenerate}
if the linearized return map $\Psi_{z;p}:\xi_p \to \xi_p$ with $p = z(0)$ has no eigenvalue 1.
\end{defn}
Denote by ${\operatorname{Cont}}(M, \xi)$ the set of contact one-forms with respect to the contact structure $\xi$
and by ${\mathcal L}(M)=C^\infty(S^1, M)$ the space of loops $z: S^1 = {\mathbb R} /{\mathbb Z} \to M$.
Let ${\mathcal L}^{1,2}(M)$ be the $W^{1,2}$-completion of ${\mathcal L}(M)$. We would like to consider some Banach bundle
$\mathcal{L}$ over the Banach manifold $(0,\infty) \times {\mathcal L}^{1,2}(M) \times {\operatorname{Cont}}(M,\xi)$ whose fiber
at $(T, z, \lambda)$ is given by $L^{1,2}(z^*TM)$.
We consider the assignment
$$
\Upsilon: (T,z,\lambda) \mapsto \dot z - T \,X_\lambda(z)
$$
which is a section. Then $(T, z, \lambda)\in \Upsilon^{-1}(0)
=:\mathfrak{Reeb}(M,\xi)$ if and only if there exists some Reeb orbit $\gamma: {\mathbb R} \to M$ with period $T$ such that
$z(\cdot)=\gamma(T\cdot)$.
We first start with the standard notion, applied to the set of
Reeb orbits, of Morse-Bott critical manifolds introduced by Bott in \cite{bott}:
\begin{defn} We call a contact form $\lambda$ of \emph{standard Morse-Bott type} if
every connected component of $\frak{Reeb}(M,\lambda)$ is a smooth submanifold
of $ (0,\infty) \times {\mathcal L}^{1,2}(M)$ whose tangent space at
every pair $(T,z) \in \frak{Reeb}(M,\lambda)$ coincides with $\ker d_{(T,z)}\Upsilon$.
\end{defn}
The following is an immediate consequence of this definition.
\begin{lem}\label{lem:T-constant} Suppose $\lambda$ is of standard Morse-Bott type. Then
on each connected component of $\mathfrak{Reeb}(M,\lambda)$, the period remains constant.
\end{lem}
\begin{proof}
Let $(T_0, z_0)$ and $(T_1, z_1)$ be two elements in the same
connected component of $\mathfrak{Reeb}(M,\lambda)$.
We connect them by a smooth one-parameter family
$(T_s, z_s)$ for $0 \leq s \leq 1$. Since $\dot{z_s}=T_sX_\lambda(z_s)$ and hence
$T_s=\int_{S^1}z^*_s\lambda$, it is enough to prove
$$
\frac{d}{ds} T_s =\frac{d}{ds} \int_{S^1}z^*_s\lambda\equiv 0.
$$
We compute
\begin{eqnarray*}
\frac{d}{ds} z^*_s\lambda &=& z_s^*(d (z_s' \rfloor \lambda) +z_s' \rfloor d\lambda)\\
&=&d(z_s^*(z_s' \rfloor \lambda))+z_s^*(z_s' \rfloor d\lambda)\\
&=&d(z_s^*(z_s' \rfloor \lambda)).
\end{eqnarray*}
Here we use $z_s'$ to denote the derivative with respect to $s$.
The last equality comes from the fact that $\dot{z_s}$ is parallel to $X_\lambda$.
Therefore we obtain by Stokes' formula that
$$
\frac{d}{ds} \int_{S^1}z^*_s\lambda = \int_{S^1} d(z_s^*(z_s' \rfloor \lambda)) = 0
$$
and finish the proof.
\end{proof}
Now we prove
\begin{lem} Let $\lambda$ be standard Morse-Bott type.
Fix a connected component ${\mathcal R} \subset \frak{Reeb}(M,\lambda)$
and denote by $Q \subset M$ the locus of the corresponding Reeb orbits. Then $Q$ is a smooth immersed submanifold
which carries a natural locally free $S^1$-action induced by the Reeb flow over one period.
\end{lem}
\begin{proof} Consider the evaluation map $ev_{{\mathcal R}}: {\mathcal R} \times S^1 \to M$ defined by
$ev_{{\mathcal R}}(T,z,t) = z(Tt)$. It is easy to prove that this map is an immersion, and so $Q$ is an
immersed submanifold. Since the Reeb orbits have constant period $T>0$ by Lemma \ref{lem:T-constant},
the induced $S^1$-action is clearly locally free. This finishes the proof.
\end{proof}
However $Q$ may not be embedded in general along the locus of multiple orbits.
Partially following \cite{bourgeois}, from now on in the rest of the paper,
\emph{we always assume $Q$ is embedded and compact}.
Denote $\omega_Q : = i_Q^*d\lambda$ and
$$
\ker \omega_Q =\{e\in TQ \mid \omega_Q(e, e')= 0 \quad \text{for any} \quad e'\in TQ\}.
$$
We warn readers that the Morse-Bott condition does not imply that the form $\omega_Q$
has constant rank, and hence the dimension of this kernel may vary pointwise on $Q$.
However, when it does have constant rank, $\ker \omega_Q$ defines an integrable distribution and so
defines a foliation, denoted by ${\mathcal F}$, on $Q$.
Since $Q$ is also foliated by Reeb orbits and ${\mathcal L}_{X_\lambda}d\lambda=0$,
it follows that ${\mathcal L}_{X_{\lambda}}\omega_Q=0$ when we restrict everything to $Q$. Therefore
each leaf of the foliation is a union of Reeb orbits. Motivated by this, we also
impose the condition that \emph{the two-form $\omega_Q$ has constant rank}.
\begin{defn}[Compare with Definition 1.7 \cite{bourgeois}]
\label{defn:morse-bott}
We say that the contact form $\lambda$ is of Morse-Bott type if it satisfies the following:
\begin{enumerate}
\item
Every connected component of $\frak{Reeb}(M,\lambda)$ is a smooth submanifold
of $ (0,\infty) \times {\mathcal L}^{1,2}(M)$ whose tangent space at
every pair $(T,z) \in \frak{Reeb}(M,\lambda)$ coincides with $\ker d_{(T,z)}\Upsilon$.
\item $Q$ is embedded.
\item $\omega_Q$ has constant rank on $Q$.
\end{enumerate}
\end{defn}
\subsection{Structure of the clean manifold of Reeb orbits}
\label{subsec:structure}
Let $\lambda$ be a Morse-Bott contact form of $(M,\xi)$ and $X_\lambda$
its Reeb vector field. Let $Q$ be as in Definition \ref{defn:morse-bott}. In general, $Q$
carries a natural locally free $S^1$-action induced by the Reeb flow $\phi_{X_\lambda}^T$
(see Lemma \ref{lem:T-constant}).
Then by the general theory of compact Lie group actions (see \cite{helgason} for example), the action has a finite
number of orbit types, whose orbits have minimal periods $T/m$ for various integers $m \geq 1$.
The orbit space $Q/S^1$ carries a natural
orbifold structure, with isotropy group ${\mathbb Z}/m$ at each multiple orbit for the corresponding $m$.
\begin{rem}\label{rem:noneffective}
Here we would like to mention that the $S^1$-action induced by
$\phi_{X_\lambda}^T$ on $Q$ may not be effective:
It is possible that the connected component $\frak{R}$ of $\frak{Reeb}(M,\lambda)$
can consist entirely of multiple orbits.
\end{rem}
Now we fix a connected component of $Q$ and just denote it by $Q$ itself.
Denote $\theta = i_Q^*\lambda$.
We note that the two-form $\omega_Q = d\theta$ is assumed to
have constant rank on $Q$ by the definition of Morse-Bott contact forms in Definition \ref{defn:morse-bott}.
The following is an immediate consequence of the definition but exhibits a
particularity of the null foliation of the presymplectic manifold $(Q,\omega_Q)$
arising from the clean manifold of Reeb orbits. We note that $\ker \omega_Q$ carries a natural splitting
$$
\ker \omega_Q = {\mathbb R}\{X_\lambda\} \oplus (\ker \omega_Q \cap \xi|_Q).
$$
\begin{lem}\label{lem:integrableCN}
The distribution $(\ker \omega_Q) \cap \xi|_Q = (TQ \cap (TQ)^{\omega_Q}) \cap \xi|_Q$
on $Q$ is integrable.
\end{lem}
\begin{proof} Let $X,\, Y$ be vector fields on $Q$ such that $X,\, Y \in (\ker \omega_Q) \cap \xi|_Q$.
Obviously $[X,Y] \in TQ$ since $Q$ is a submanifold,
and also $[X, Y]\in \ker \omega_Q$ since $\omega_Q$ is a closed two-form whose
null distribution is integrable.
At the same time, we compute
$i_Q^*\lambda([X,Y]) = X[\lambda(Y)] - Y[\lambda(X)] -\omega_Q(X,Y) = 0$,
where the first two terms vanish since $X, \, Y \in \xi$ and the third vanishes because
$X \in \ker \omega_Q$. This proves $[X,Y] \in \ker \omega_Q \cap \xi|_Q$,
which finishes the proof.
\end{proof}
Therefore $\ker \omega_Q \cap \xi|_Q$ defines another foliation ${\mathcal N}$ on $Q$, and
\begin{equation}\label{eq:TCF}
T{\mathcal F} = \mathbb{R}X_\lambda \oplus T{\mathcal N}.
\end{equation}
Note that this splitting is $S^1$-invariant.
We now recall some basic properties of presymplectic manifold \cite{gotay}
and its canonical neighborhood theorem.
Fix an $S^1$-equivariant splitting of $TQ$
\begin{equation}\label{eq:splitting}
TQ = T{\mathcal F} \oplus G =\mathbb{R}X_\lambda\oplus T{\mathcal N} \oplus G
\end{equation}
by choosing an $S^1$-invariant complementary subbundle $G \subset TQ$.
This splitting is not unique but its choice will not matter for the coming discussions.
The null foliation carries a natural {\it transverse symplectic form}
in general \cite{gotay}. Since the distribution $T{\mathcal F} \subset TQ$ is preserved by
Reeb flow, it generates the $S^1$-action thereon in the current context.
We denote by
$$
p_{T{\mathcal N};G}: TQ \to T{\mathcal N}, \quad p_{G;G}:TQ \to G
$$
the projection to $T{\mathcal N}$ and to $G$ respectively with respect to the splitting \eqref{eq:splitting}.
We denote by $T^*{\mathcal N} \to Q$ the associated foliation cotangent bundle, i.e., the
dual bundle of $T{\mathcal N}$. Then we have the natural isomorphism
$$
T^*{\mathcal N} \cong ({\mathbb R}\{X_\lambda\} \oplus G)^{\perp} \subset T^*Q \cong \widetilde{d\lambda|_\xi}(T{\mathcal N})
$$
where $(\cdot)^\perp$ denotes the annihilator of $(\cdot)$. We also note that the isomorphism
$$
\widetilde{d\lambda|_\xi}: \xi \to \xi^*
$$
induces a natural isomorphism
\begin{equation}\label{eq:TNT*N}
\#_{\mathcal N}: T{\mathcal N} \hookrightarrow \xi \to \xi^* \to T^*{\mathcal N}
\end{equation}
where the last map is the restriction to $T{\mathcal N}$.
Next we consider the symplectic normal bundle $(TQ)^{d\lambda} \subset T_QM$
defined by
\begin{equation}\label{eq:TQdlambda}
(TQ)^{d\lambda} = \{v \in T_qM \mid d\lambda(v, w) = 0 , \forall w \in T_qQ\}.
\end{equation}
We define a vector bundle
\begin{equation}\label{eq:E}
E = (TQ)^{d\lambda}/T{\mathcal F},
\end{equation}
and then have the natural embedding
\begin{equation}\label{eq:embed-E}
E = (TQ)^{d\lambda}/T{\mathcal F} \hookrightarrow T_QM/TQ = N_QM
\end{equation}
induced by the inclusion map $(TQ)^{d\lambda} \hookrightarrow T_QM$.
The following is straightforward to check.
\begin{lem} The two-form $d\lambda$ induces a nondegenerate fiberwise two-form $d\lambda|_E$ on $E$, and so $E$ carries a
fiberwise symplectic form, which we denote by $\Omega$.
\end{lem}
We also have the canonical embedding
\begin{equation}\label{eq:embed-T*N}
T^*{\mathcal N} \to T{\mathcal N} \hookrightarrow T_QM \to N_QM
\end{equation}
where the first map is the inverse of the map $\#_{\mathcal N}$ in \eqref{eq:TNT*N}.
We now denote $F: = T^*{\mathcal N} \oplus E \to Q$. The following proposition provides a
local model of the neighborhood of $Q \subset M$.
\begin{prop}\label{prop:neighbor-F} We choose a complement $G$ and
fix the splitting $TQ = {\mathbb R}\{X_\lambda\} \oplus T{\mathcal N} \oplus G$.
The direct sum of \eqref{eq:embed-T*N} and \eqref{eq:embed-E}
defines an isomorphism $T^*{\mathcal N} \oplus E \to N_QM$ depending only on the choice of $G$.
\end{prop}
\begin{proof}
A straightforward dimension counting shows that the bundle map indeed is an
isomorphism.
\end{proof}
Identifying a neighborhood of $Q \subset M$ with a neighborhood of the zero section of $F$
and pulling back the contact form $\lambda$ to $F$, we may assume that our contact form $\lambda$ is
defined on a neighborhood of $o_F \subset F$. We also identify $T^*{\mathcal N}$ and $E$ with
their images in $N_QM$.
\begin{prop}\label{prop:S1-bundle} The $S^1$-action on $Q$
canonically induces the $S^1$-equivariant vector bundle structure on $E$ such that
the form $\Omega$ is equivariant under the $S^1$-action on $E$.
\end{prop}
\begin{proof}
The action of $S^1$ on $Q$ by
$t\cdot q=\phi^t(q)$
canonically induces an $S^1$-action on $T_QM$ by
$t\cdot v=(d\phi^t)(v)$
for $v\in T_QM$.
Since the Reeb flow preserves $\lambda$, we have the identity
\begin{equation}
t^*d\lambda=d\lambda\label{eq:S1action}.
\end{equation}
We first show that the action is well-defined on $E\to Q$, i.e., if $v\in (T_qQ)^{d\lambda}$,
then $t\cdot v\in (T_{t\cdot q}Q)^{d\lambda}$.
In fact, by using \eqref{eq:S1action}, for $w\in T_{t\cdot q}Q$,
$$
d\lambda(t\cdot v, w)=\left((\phi^t)^*d\lambda\right)\left(v, (d\phi^t)^{-1}(w)\right)
=d\lambda\left(v, (d\phi^t)^{-1}(w)\right).
$$
This vanishes, since $Q$ consists of Reeb orbits and thus $d\phi^t$ preserves $TQ$.
Secondly, the same identity \eqref{eq:S1action} shows that this $S^1$-action preserves
$\Omega$ on the fibers, i.e., $t^*\Omega=\Omega$, which completes the proof of the proposition.
\end{proof}
Summarizing the above discussion, we have concluded that
the base $Q$ carries the one-form $\theta: = i_Q^*\lambda$ and the bundle $E$ carries
the fiberwise symplectic two-form $\Omega$. They satisfy
the following additional properties:
\begin{enumerate}
\item $Q = o_F$ carries an $S^1$-action which is locally free.
In particular $Q/S^1$ is a smooth orbifold.
\item The one-form $\theta$ is $S^1$-invariant, and $d\theta$ is a presymplectic form.
\item The bundle $E$ carries an $S^1$-action that preserves the fiberwise 2-form $\Omega$
and hence induces an $S^1$-invariant symplectic vector bundle structure on $E$.
\item The bundle $F = T^*{\mathcal N} \oplus E \to Q$ carries the direct sum $S^1$-equivariant
vector bundle structure compatible with the $S^1$-action on $Q$.
\end{enumerate}
We summarize the above discussions into the following theorem.
\begin{thm}\label{thm:morsebottsetup} Consider the clean manifold $Q$ of closed Reeb orbits of
a Morse-Bott type contact form $\lambda$.
Let $TQ \cap (TQ)^{\omega_Q}$ be the null distribution of $\omega_Q = i_Q^*d\lambda$ and ${\mathcal F}$ be
the associated characteristic foliation.
Then the restriction of $\lambda$ to $Q$ induces the following geometric structures:
\begin{enumerate}
\item $Q = o_F$ carries an $S^1$-action which is locally free.
In particular $Q/\sim$ is a smooth orbifold.
Fix an $S^1$-invariant splitting \eqref{eq:splitting}.
\item We have the natural identification
\begin{equation}\label{eq:NQM}
N_QM \cong T^*{\mathcal N} \oplus E = F,
\end{equation}
as an $S^1$-equivariant vector bundle, where
$$
E := (TQ)^{d\lambda}/T{\mathcal F}
$$
is the symplectic normal bundle.
\item The two-form $d\lambda$ restricts to a nondegenerate skew-symmetric
two-form on $G$, and induces a fiberwise symplectic form
$
\Omega
$
on $E$ defined as above.
\item We have the following commutative diagram of $S^1$-equivariant presymplectic manifolds
induced by the form $\omega_Q$:
\begin{equation}\label{eq:diagram}
\xymatrix{(Q,\omega_Q)\ar[d]\ar[dr] & \\
(S,\omega_S) \ar[r] & (P,\omega_P)}
\end{equation}
The quotient $S = Q/S^1$, the set of closed Reeb orbits, forms a compact smooth orbifold that
has a natural projection to $P = Q/\sim$, the set of leaves of the foliation ${\mathcal F}$ induced by
the distribution $TQ \cap (TQ)^{\omega_Q} \to Q$. The set $P$ carries a canonical
symplectic form $\omega_P$ which can be lifted to the $S^1$-equivariant fiberwise symplectic
form on the bundle $G \to Q$ in the chosen $S^1$-invariant splitting
$$
TQ = {\mathbb R}\{X_\lambda\} \oplus T{\mathcal N} \oplus G.
$$
\end{enumerate}
\end{thm}
We say that $Q$ is of pre-quantization type if the rank of $d\lambda|_Q$ is maximal
and is of isotropic type if the rank of $d\lambda|_Q$ is zero. The general case will be
a mixture of the two. In particular when $\operatorname{dim} M = 3$, such $Q$ must be either of
prequantization type or of isotropic type. This is the case that is considered in
\cite{HWZ3}. The general case considered in \cite{bourgeois} and \cite{behwz} includes the mixed type.
\section{Contact thickening of Morse-Bott contact set-up}
\label{sec:thickening}
Motivated by the isomorphism in Theorem \ref{thm:morsebottsetup}, we consider the pair $(Q,\theta)$ and
the symplectic vector bundle $(E,\Omega) \to Q$ that satisfy the above properties.
In the next section, we will describe the model contact form on the direct sum
$$
F = T^*{\mathcal N} \oplus E
$$
and prove a canonical neighborhood theorem of the clean submanifold of Reeb orbits of
a general contact manifold $(M,\lambda)$ such that the zero section of $F$ corresponds to $Q$.
To state our canonical neighborhood theorem, we need to first
identify the relevant geometric structure of the canonical neighborhoods.
For this purpose, it is useful to introduce the following notion.
\begin{defn}[Pre-Contact Form] We call a one-form $\theta$ on a manifold $Q$ a \emph{pre-contact} form
if $d\theta$ has constant rank.
\end{defn}
\subsection{The $S^1$-invariant pre-contact manifold $(Q,\theta)$}
\label{subsec:Q}
First, we consider the pair $(Q,\theta)$ that carries a nontrivial
$S^1$-action preserving the one-form. After taking the quotient of $S^1$ by some finite subgroup,
we may assume that the action is effective. We will also assume that the action
is locally free. Then by the general theory of group actions of
compact Lie group (see \cite{helgason} for example),
the action is free on a dense open subset and has only a finite number
of different types of isotropy groups. In particular the quotient
$P: = Q/S^1$ becomes a presymplectic orbifold with a finite number of
different types of orbifold points. We denote by $Y$ the vector field generating the
$S^1$-action, i.e., the $S^1$-action is generated by the flow of $Y$.
We require that the circle action preserves $\theta$, i.e., ${\mathcal L}_Y\theta = 0$.
Since the action is locally free and free on a dense open subset of $Q$,
we can normalize the action so that
\begin{equation}\label{eq:thetaX=1}
\theta(Y) \equiv 1.
\end{equation}
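Explicitly, ${\mathcal L}_Y\theta = 0$ implies $Y(\theta(Y)) = ({\mathcal L}_Y\theta)(Y) = 0$, so the
function $\theta(Y)$ is constant along each orbit. Assuming, as we do here, that $\theta(Y)$ is
nowhere vanishing, the normalization amounts to replacing the generating vector field by
$$
\frac{1}{\theta(Y)}\, Y.
$$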
We denote this normalized vector field by $X_\theta$. We would like to emphasize that
$\theta$ may not be a contact form but can be regarded as the connection form of
the circle bundle $S^1 \to Q \to P$ over the orbifold $P$ in general. Although $P$
may carry a non-empty set of orbifold points, the connection form $\theta$ is assumed to
be smooth on $Q$.
Similarly as in Lemma \ref{lem:integrableCN}, we also require
the presence of an $S^1$-invariant splitting
$$
\ker d\theta = {\mathbb R} \{X_\theta\} \oplus H
$$
such that the subbundle $H$ is also integrable.
With these terminologies introduced, we can rephrase Theorem \ref{thm:morsebottsetup} as follows.
\begin{thm}\label{thm:morsebottsetup2}
Let $Q$ be the clean submanifold of Reeb orbits of
a Morse-Bott type contact form $\lambda$ on the contact manifold $(M,\lambda)$.
Then $Q$ carries
\begin{enumerate}
\item a locally free $S^1$-invariant pre-contact form $\theta$ given by $\theta = i_Q^*\lambda$,
\item a splitting
\begin{equation}\label{eq:kernel-dtheta}
\ker d\theta = {\mathbb R}\{X_\theta\} \oplus H,
\end{equation}
such that $H = \ker d\theta \cap \xi|_Q$ and the distribution $H$ is integrable,
\item
an $S^1$-equivariant symplectic vector bundle $(E,\Omega) \to Q$ with
$$
E = (TQ)^{d\lambda}/\ker d\theta, \quad \Omega = [d\lambda]_E
$$
\end{enumerate}
\end{thm}
Here we use the fact that there exists a canonical embedding
$$
E = (TQ)^{d\lambda}/\ker d\theta \hookrightarrow T_QM/ TQ = N_QM
$$
and $d\lambda|_{(TQ)^{d\lambda}}$ canonically induces a bilinear form $[d\lambda]_E$
on $E = (TQ)^{d\lambda}/\ker di_Q^*\lambda$ by symplectic reduction.
\begin{defn} Let $(Q,\theta)$ be a pre-contact manifold
equipped with a locally free $S^1$-action, an $S^1$-invariant one-form $\theta$, and
the splitting \eqref{eq:kernel-dtheta}. We call such a triple $(Q,\theta,H)$
a \emph{Morse-Bott contact set-up}.
\end{defn}
As before, we denote by ${\mathcal F}$ and ${\mathcal N}$
the associated foliations on $Q$, and decompose
$$
T{\mathcal F} = {\mathbb R}\{X_\theta\} \oplus T{\mathcal N}.
$$
Now we fix an $S^1$-invariant connection on $TQ$, e.g., the Levi-Civita connection of
an $S^1$-invariant metric. By the presymplectic hypothesis, $\ker d\theta$ defines
an integrable distribution. This connection together with a choice of the complement $G$
$$
TQ = {\mathbb R}\{X_\theta\} \oplus T{\mathcal N} \oplus G
$$
will then induce a connection on the subbundle
$T{\mathcal N}$, which in turn induces the dual connection on $T^*{\mathcal N}$.
The connection then induces the splitting
$$
T_\alpha (T^*{\mathcal N}) = HT_\alpha(T^*{\mathcal N}) \oplus VT_\alpha(T^*{\mathcal N})
$$
where $HT_\alpha(T^*{\mathcal N}) = {\mathbb R}\{X_\theta(q)\} \oplus T_q{\mathcal N} \oplus G_q$ and $VT_\alpha(T^*{\mathcal N})\cong T^*_q{\mathcal N}$.
Then we define a one-form $\Theta_G$ on $T^*{\mathcal N}$ as follows. For a tangent
$\xi \in T_\alpha(T^*{\mathcal N})$, define
\begin{equation}\label{eq:thetaG}
\Theta_G(\xi) := \alpha\big(p_{T{\mathcal N};G}\, d\pi_{T^*{\mathcal N}}(\xi)\big)
\end{equation}
using the splitting
$$
T_\alpha(T^*{\mathcal N})= HT_\alpha(T^*{\mathcal N})\oplus VT_\alpha(T^*{\mathcal N}) \cong T_q Q \oplus T_q^*{\mathcal N}.
$$
By definition, it follows that $\Theta_G|_{VT(T^*{\mathcal N})} \equiv 0$, and $d\Theta_G(\alpha)$ is nondegenerate
on $\widetilde{T_q{\mathcal N}} \oplus VT_\alpha T^*{\mathcal N} \cong T_q{\mathcal N} \oplus T_q^*{\mathcal N}$, where it becomes
the canonical pairing on $T_q{\mathcal N} \oplus T_q^*{\mathcal N}$ under the identification.
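For instance, in local coordinates: if $(q^1,\dots,q^m)$ are coordinates on a leaf of ${\mathcal N}$ and
$(p_1,\dots,p_m)$ are the induced fiber coordinates on $T^*{\mathcal N}$, then, as a guide to the reader,
at a point where the chosen local frame is parallel the form $\Theta_G$ takes the familiar Liouville shape
$$
\Theta_G = \sum_{i=1}^m p_i\, dq^i, \qquad d\Theta_G = \sum_{i=1}^m dp_i \wedge dq^i,
$$
which exhibits the canonical pairing above. (In general this local expression holds only up to terms
involving the connection coefficients, which vanish at the chosen point.)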
\subsection{The bundle $E$}
We next examine the structure of
the $S^1$-equivariant symplectic vector bundle $(E, \Omega)$.
We denote by $\vec R$ the radial vector field which
generates the family of radial multiplication
$$
(c, e) \mapsto c\, e.
$$
This vector field is invariant under the given $S^1$-action on $E$, and vanishes on the zero section.
By its definition, $d\pi(\vec R)=0$, i.e., $\vec R$ is in the vertical distribution, denoted by $VTE$, of $TE$.
Denote the canonical isomorphism $V_eTE \cong E_{\pi(e)}$ by $I_{e;\pi(e)}$.
It obviously intertwines the scalar multiplication, i.e.,
$$
I_{e;\pi(e)}(\mu\, \xi) = \mu I_{e;\pi(e)}(\xi)
$$
for a scalar $\mu$.
It also satisfies the following identity \eqref{eq:dRc=Ic} with respect to the derivative of the
fiberwise scalar multiplication map $R_c: E \to E$.
\begin{lem} Let $\xi \in V_e TE$. Then
\begin{equation}\label{eq:dRc=Ic}
I_{c\, e;\pi(c\, e)}(dR_c(\xi)) = c\, I_{e;\pi(e)}(\xi)
\end{equation}
on $E_{\pi(c\, e)} = E_{\pi(e)}$ for any constant $c$.
\end{lem}
\begin{proof} We compute
\begin{eqnarray*}
I_{c\, e;\pi(c\, e)}(dR_c(\xi)) & = & I_{c\, e;\pi(c\, e)}\left(\frac{d}{ds}\Big|_{s=0}c(e + s\xi)\right) \\
& = & I_{c\, e;\pi(c\, e)}(c\,\xi) = c\, I_{e;\pi(e)}(\xi)
\end{eqnarray*}
which finishes the proof.
\end{proof}
We then define the fiberwise two-form $\Omega^v$ on $VTE \to E$ by
$$
\Omega^v_e(\xi_1,\xi_2) = \Omega_{\pi_F(e)}(I_{e;\pi(e)}(\xi_1),I_{e;\pi(e)}(\xi_2))
$$
for $\xi_1, \xi_2\in V_eTE$,
and the associated one-form
$
\vec R \rfloor \Omega^v
$.
Now we introduce an $S^1$-invariant symplectic connection on $E$: we choose the splitting
$$
TE = HTE \oplus VTE
$$
and extend the fiberwise form $\Omega$ on $E$ to a differential two-form
$\widetilde \Omega$ on $E$ by setting
$$
\widetilde \Omega_e = \Omega(I_{e;\pi(e)}\cdot, I_{e;\pi(e)}\cdot).
$$
Existence of such an invariant connection follows
e.g., by averaging over the compact group $S^1$. We denote by $\widetilde \Omega$ the resulting
two-form on $E$.
Denote by $\vec R$ the radial vector field of $F \to Q$ and consider the one-form
\begin{equation}\label{eq:RrfloorOmega}
\vec R \rfloor \widetilde \Omega
\end{equation}
which is invariant under the action of $S^1$ on $E$.
\begin{rem}
Suppose $\Omega(\cdot, J_F \cdot)=: g_{E;J_F}$ defines a
Hermitian vector bundle structure $(E, g_{E;J_F}, J_F)$.
Then we can write the radial vector field considered in the previous section as
$$
\vec R(e) = \sum_{i=1}^k r_i \frac{\partial}{\partial r_i}
$$
where $(r_1,\cdots, r_k)$ are the coordinates of $e$ with respect to
a local frame $\{e_1, \cdots, e_k\}$ of the vector bundle $E$. By definition, we have
\begin{equation}\label{eq:vecRe}
I_{e;\pi_F(e)}(\vec R(e)) = e.
\end{equation}
Obviously the right-hand side does not depend on the choice of
local frame.
Let $(E,\Omega,J_F)$ be a Hermitian vector bundle and define
$|e|^2 = g_F(e,e)$. Motivated by the terminology used in \cite{bott-tu}, we call the one-form
\begin{equation}\label{eq:angularform}
\psi = \psi_\Omega = \frac{1}{r} \frac{\partial}{\partial r} \rfloor \Omega^v
\end{equation}
the \emph{global angular form} for the Hermitian vector bundle $(E,\Omega,J_F)$.
Note that $\psi$ is defined only on $E \setminus o_F$ although
$\Omega$ is globally defined.
\end{rem}
We state the following lemma.
\begin{lem}\label{lem:ROmega} Let $\Omega$ be as in the previous section.
Then,
\begin{enumerate}
\item $\vec R \rfloor d \widetilde \Omega = 0$.
\item
For any constant $c > 0$, we have
$$
R_{c}^*\widetilde \Omega = c^2\, \widetilde \Omega.
$$
\end{enumerate}
\end{lem}
\begin{proof}
Notice that $\widetilde\Omega$ is compatible with $\Omega$ in the sense of symplectic fibrations,
and the symplectic vector bundle connection is nothing but the
Ehresmann connection induced by $\widetilde\Omega$, which is now a symplectic connection.
Since $\vec R$ is vertical, statement (1) immediately follows from the fact that the symplectic connection is vertically closed.
It remains to prove statement (2).
Let $e \in E$ and $\xi_1, \, \xi_2 \in T_e E$. By definition, we derive
\begin{eqnarray*}
(R_c^*\widetilde \Omega)_e(\xi_1,\xi_2)
& = & \widetilde \Omega_{c\,e}(dR_c(\xi_1), dR_c(\xi_2))\\
& = & \Omega^v_{c\,e}(dR_c(\xi_1), dR_c(\xi_2))\\
& = & \Omega_{\pi_F(c\, e)}(I_{c\,e;\pi(c\,e)}(dR_c(\xi_1)),I_{c\,e;\pi(c\,e)}(dR_c(\xi_2)))\\
& = & \Omega_{\pi_F(e)}(c\, I_{e;\pi(e)}(\xi_1),c\, I_{e;\pi(e)}(\xi_2)) \\
& = & c^2 \Omega_{\pi_F(e)}(I_{e;\pi(e)}(\xi_1), I_{e;\pi(e)}(\xi_2))= c^2 \Omega^v_e(\xi_1,\xi_2)\\
& = & c^2 \widetilde \Omega_e(\xi_1,\xi_2)
\end{eqnarray*}
where we use the equality \eqref{eq:dRc=Ic} and $\pi_F(c\, e) = \pi(e)$ for the fourth equality.
This proves $R_c^*\widetilde \Omega = c^2 \widetilde \Omega$.
\end{proof}
It follows from this lemma that ${\mathcal L}_{\vec R}\widetilde\Omega=2\widetilde\Omega$. By Cartan's formula, we get
$$
d(\vec R \rfloor \widetilde \Omega) = 2\widetilde \Omega.
$$
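In more detail, the flow of $\vec R$ is $s \mapsto R_{e^s}$, so statement (2) of the lemma gives
$$
{\mathcal L}_{\vec R}\widetilde\Omega = \frac{d}{ds}\Big|_{s=0} R_{e^s}^*\widetilde\Omega
= \frac{d}{ds}\Big|_{s=0} e^{2s}\,\widetilde\Omega = 2\widetilde\Omega,
$$
and then Cartan's formula together with statement (1) yields
$d(\vec R \rfloor \widetilde \Omega) = {\mathcal L}_{\vec R}\widetilde\Omega - \vec R \rfloor d\widetilde\Omega = 2\widetilde \Omega$.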
\subsection{Canonical contact form and contact structure on $F$}
Let $(Q,\theta,H)$ be a given Morse-Bott contact set up and $(E,\Omega) \to Q$ be
any $S^1$-equivariant symplectic vector bundle equipped with an $S^1$-invariant connection
on it.
Now we equip the bundle $F = T^*{\mathcal N} \oplus E$ with a canonical $S^1$-invariant contact form
on $F$. We denote the bundle projections by $\pi_{E;F}: F \to E$ and $\pi_{T^*{\mathcal N};F}: F \to T^*{\mathcal N}$
of the splitting $F = T^*{\mathcal N} \oplus E$ respectively, and provide the direct sum connection on
$N_QM \cong F = T^*{\mathcal N} \oplus E$.
\begin{thm} Let $(Q,\theta,H)$ be a Morse-Bott contact set-up.
Denote by ${\mathcal F}$ and ${\mathcal N}$ the foliations associated to the distribution $\ker d\theta$ and $H$,
respectively. We also denote by $T{\mathcal F}$, $T{\mathcal N}$ the associated foliation tangent bundles and
$T^*{\mathcal N}$ the foliation cotangent bundle of ${\mathcal N}$. Then for any symplectic vector bundle $(E,\Omega) \to Q$,
the following holds:
\begin{enumerate}
\item
The bundle $F = T^*{\mathcal N} \oplus E$ carries a canonical contact form $\lambda_{F;G}$ defined as in \eqref{eq:lambdaF}, for
each choice of complement $G$ such that $TQ = T{\mathcal F} \oplus G$.
\item
For any two such choices of $G, \, G'$, the associated contact forms are
canonically gauge-equivalent by a bundle map $\psi_{GG'}: TQ \to TQ$ preserving $T{\mathcal F}$.
\end{enumerate}
\end{thm}
\begin{proof} We define a differential one-form on $F$ explicitly by
\begin{equation}\label{eq:lambdaF}
\lambda_F = \pi_F^*\theta + \pi_{T^*{\mathcal N};F}^*\Theta_G +
\frac{1}{2} \pi_{E;F}^*\left(\vec R \rfloor \widetilde \Omega\right).
\end{equation}
Using Lemma \ref{lem:ROmega}, we obtain
\begin{equation}\label{eq:dlambdaF}
d\lambda_F = \pi_F^*d\theta + \pi_{T^*{\mathcal N};F}^*d\Theta_G + \widetilde \Omega
\end{equation}
by taking the differential of \eqref{eq:lambdaF}.
A moment of examination of
this formula gives rise to the following
\begin{prop}\label{lem:deltaforE} There exists some $\delta> 0$ such that
the one-form $\lambda_F$ is a contact form
on the disc bundle $D^{\delta}(F)$, where
$$
D^{\delta}(F) = \{(q,v) \in F \mid \|v\| < \delta \}
$$
and such that $\lambda_F|_{o_F} = \lambda_Q$ on $T_QM$.
\end{prop}
\begin{proof} This immediately follows from the formulae \eqref{eq:lambdaF}
and \eqref{eq:dlambdaF}.
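To sketch the verification: along the zero section the right-hand side of \eqref{eq:dlambdaF}
restricts to $d\theta$, to the canonical pairing $d\Theta_G$ on $T{\mathcal N} \oplus T^*{\mathcal N}$,
and to the fiberwise symplectic form $\Omega$ on $E$, so $d\lambda_F$ is nondegenerate on
$\ker \lambda_F$ along $o_F$. Since nondegeneracy is an open condition and the zero section is
compact, $\lambda_F$ remains a contact form on $D^{\delta}(F)$ for sufficiently small $\delta > 0$.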
\end{proof}
This proves the statement (1).
For the proof of the statement (2), we first note that the definitions of $T^*{\mathcal N}$ and
$$
E_G = (TQ)^{d\lambda}/T{\mathcal F}
$$
do not depend on the choice of $G$. On the other hand, the one form
$$
\lambda_{F;G} = \pi_F^*\theta + \pi_{T^*{\mathcal N};F}^*\Theta_G + \frac{1}{2}\vec R \rfloor \widetilde \Omega
$$
possibly depends on $G$ because the one-form $\Theta_G$ does, where we have
$$
\Theta_G(\alpha)(\eta) = \alpha(p_{T{\mathcal N};G}(d\pi_{T^*{\mathcal N}}d\pi_{T^*{\mathcal N};F}(\eta))) = \alpha(p_{T{\mathcal N};G}(d\pi_F(\eta)))
$$
with $p_{T{\mathcal N};G}: TQ \to TQ$ being the projection
with respect to the splitting $TQ = {\mathbb R}\{X_\lambda\}\oplus T{\mathcal N} \oplus G$.
Here the second equality follows since $\pi_F = \pi_{T^*{\mathcal N}}\circ \pi_{T^*{\mathcal N};F}$.
Now let $G, \, G'$ be two such complements, giving splittings
$$
TQ = T{\mathcal F} \oplus G = T{\mathcal F} \oplus G'.
$$
Since both $G, \, G'$ are transversal to $T{\mathcal F}$ in $TQ$,
we can represent $G'$ as the graph of the bundle map $A_G: G \to T{\mathcal N}$. Then we consider the
bundle isomorphism $\psi_{GG'}: TQ \to TQ$ defined by
$$
\psi_{GG'} = \left(\begin{matrix} id_{T{\mathcal N}} & A_G\\
0 & id_G \end{matrix}\right)
$$
under the splitting $TQ = T{\mathcal N} \oplus G$. Then $\psi_{GG'}(G) = \operatorname{Graph} A_G$
and $\psi_{GG'}|_{T{\mathcal N}} = id_{T{\mathcal N}}$.
Therefore we have $p_{T{\mathcal N};G}|_{T{\mathcal N}} = p_{T{\mathcal N};G'}\circ \psi_{GG'}|_{T{\mathcal N}}$.
Now we compute
\begin{eqnarray*}
\Theta_{G}(\alpha)(\eta) & = &\alpha(p_{T{\mathcal N};G}(d\pi_{T^*{\mathcal N}}(\eta))) =
\alpha(p_{T{\mathcal N};G'}\circ \psi_{GG'}(\eta)) \\
& = & \Theta_{G'}(\alpha)(\psi_{GG'}(\eta)).
\end{eqnarray*}
This proves $\Theta_{G} = \Theta_{G'}\circ \psi_{GG'}$.
\end{proof}
Now we study the contact geometry of $(U_F,\lambda_F)$. We first note that the two-form
$d\lambda_F$ is a presymplectic form with one-dimensional kernel such that
$$
d\lambda_F|_{VTF} = \widetilde \Omega^v|_{VTF}.
$$
Denote by $\widetilde{X}:=(d\pi_{F;H})^{-1}(X)$ the horizontal lifting of the vector field $X$ on $Q$,
where
$$
d\pi_{F;H}:= d\pi_F|_H:HTF \to TQ
$$
is the bijection between the horizontal distribution $HTF$ and $TQ$.
\begin{lem}[Reeb Vector Field] The Reeb vector field $X_F$ of $\lambda_F$ is given by
$$
X_F=\widetilde X_\theta,
$$
where $\widetilde X_\theta$ denotes the horizontal lifting of $X_\theta$ to $F$.
\end{lem}
\begin{proof} We only have to check the defining properties $\widetilde X_\theta \rfloor \lambda_F = 1$
and $\widetilde X_\theta \rfloor d\lambda_F = 0$.
We first look at
\begin{eqnarray*}
\lambda_F(\widetilde X_\theta)&=& \pi_F^*\theta(\widetilde X_\theta) + \pi_{T^*{\mathcal N};F}^*\Theta_G(\widetilde X_\theta)
+ \frac{1}{2} \pi_{E;F}^*\left(\vec R \rfloor \widetilde \Omega\right)(\widetilde X_\theta)\\
& = & \theta(X_\theta) + 0 + 0 = 1.
\end{eqnarray*}
Here the second term vanishes since $p_{T{\mathcal N};G}(X_\theta) = 0$, and the third term
$\frac{1}{2}\widetilde{\Omega}(\vec{R}, \widetilde X_\theta)$ vanishes due to the definition of $\widetilde\Omega$.
Then we calculate
\begin{eqnarray*}
\widetilde X_\theta\rfloor d\lambda_F&=&\widetilde X_\theta\rfloor (\pi_F^*d\theta+\widetilde{\Omega}+\pi_{T^*{\mathcal N};F}^*d\Theta_G)\\
&=& \widetilde X_\theta\rfloor \pi_F^*d\theta+\widetilde X_\theta\rfloor \widetilde{\Omega}+\widetilde X_\theta\rfloor \pi_{T^*{\mathcal N};F}^*d\Theta_G\\
&=&0.
\end{eqnarray*}
We only need to explain why the last term $\widetilde X_\theta\rfloor \pi_{T^*{\mathcal N};F}^*d\Theta_G$ vanishes.
In fact $p_{T{\mathcal N};G}\, d\pi_F(\widetilde X_\theta) = p_{T{\mathcal N};G}(X_\theta) = 0$. Using this,
the definition of $\Theta_G$ and the $S^1$-equivariance of the vector bundle $F \to Q$ and the fact that $\widetilde X_\theta$ is
the vector field generating the $S^1$-action, we derive
\begin{eqnarray*}
\widetilde X_\theta\rfloor \pi_{T^*{\mathcal N};F}^*d\Theta_G = 0
\end{eqnarray*}
by a straightforward computation.
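One way to organize this computation is via Cartan's formula:
$$
\widetilde X_\theta\rfloor \pi_{T^*{\mathcal N};F}^*d\Theta_G
= {\mathcal L}_{\widetilde X_\theta}\big(\pi_{T^*{\mathcal N};F}^*\Theta_G\big)
- d\big(\pi_{T^*{\mathcal N};F}^*\Theta_G(\widetilde X_\theta)\big),
$$
where the first term vanishes by the $S^1$-invariance of $\Theta_G$, and the second term vanishes
since $\pi_{T^*{\mathcal N};F}^*\Theta_G(\widetilde X_\theta) \equiv 0$ by $p_{T{\mathcal N};G}(X_\theta) = 0$.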
This finishes the proof.
\end{proof}
Now we calculate the contact structure $\xi_F$.
\begin{lem}[Contact Distribution] \label{lem:decomp-VW}
At each point $(\alpha,e) \in U_F \subset F$, we
define two subspaces of $T_{(\alpha,e)}F$
$$
V:=\{\xi_V \in T_{(\alpha,e)}F \mid \xi_V =-\Theta_G(\eta)X_F+ \widetilde\eta, \, \eta \in \ker \lambda_Q \}
$$
and
$$
W:=\{\xi_W \in T_{(\alpha,e)}F \mid \xi_W :=-\frac{1}{2}\Omega(e,v)X_F+v, \, v\in VTF\}.
$$
Then
$
\xi_F=V\oplus W.
$
\end{lem}
\begin{proof}
By straightforward calculation, both $V$ and $W$ are subspaces of $\xi_F = \ker \lambda_F$.
For any $\xi\in \xi_F$, we decompose $\xi=\xi^{\text{h}}+\xi^{\text{v}}$
using the decomposition $TF=HTF\oplus VTF$.
Note that $\pi_F^*\Theta_G(\widetilde\eta) = \Theta_G(\eta)$ and that we can write $v = I_{e;\pi(e)}(\xi^{\text{v}})$
for a unique $v \in E_{\pi(e)}$. Therefore we need to find $b \in {\mathbb R}$, $\eta \in \ker \lambda_Q$
so that for the horizontal vector $\xi^{\text{h}}=\widetilde{(\eta+b X_\theta)}$
\begin{eqnarray*}
\lambda_F(\xi) & = & 0 \\
-\Theta_G(\eta)X_F + \widetilde\eta & \in & V\\
-\frac{1}{2}\Omega(e, v)X_F+v & \in & W.
\end{eqnarray*}
Then
\begin{eqnarray*}
\xi^h&=&\widetilde{(\eta+ b X_\theta)}\\
&=&\widetilde\eta+bX_F
\end{eqnarray*}
which determines $\eta \in T_{\pi(e)}{\mathcal N} \oplus G_{\pi(e)}$ uniquely. We need to determine $b$.
Since
\begin{eqnarray*}
0=\lambda_F(\xi)&=&\lambda_F(\widetilde\eta+bX_F+\xi^{\text{v}})\\
&=&b+\lambda_F(\widetilde\eta)+\lambda_F(\xi^{\text{v}})\\
&=&b+\pi_F^*\Theta_G(\widetilde\eta)+\frac{1}{2}\Omega(e,v),
\end{eqnarray*}
we obtain $b = -\pi_F^*\Theta_G(\widetilde\eta) - \frac{1}{2}\Omega(e,v)$.
Then we set $\xi_W = -\frac{1}{2}\Omega(e, v)X_F+ \xi^{\text{v}}$ where $v = I_{e;\pi(e)}(\xi^{\text{v}})$,
and $\xi_V := - \pi_F^*\Theta_G(\widetilde\eta) X_F + \widetilde \eta$, so that $\xi = \xi_V + \xi_W$.
Therefore we have proved $\xi_F=V+W$.
To see it is a direct sum, assume
$$
-(\pi_F^*\Theta_G)(\widetilde{\eta})X_F+\widetilde\eta-\frac{1}{2}\Omega(e, v)X_F+v=0,
$$
for some $\eta\in \ker \lambda_Q$ and $v\in VTF$.
Applying $d\pi_F$ to both sides, it follows that
$$
-\big((\pi_F^*\Theta_G)(\widetilde{\eta})+\frac{1}{2}\Omega(e, v)\big)X_\theta+\eta=0.
$$
$$
Hence $\eta=0$, and then $v=0$ follows since $X_F$ is horizontal.
This finishes the proof.
\end{proof}
\section{Canonical neighborhoods of the clean manifold of Reeb orbits}
\label{sec:normalform}
Now let $Q$ be the clean submanifold of $(M,\lambda)$ that is
foliated by the closed Reeb orbits of $\lambda$ with constant period $T$.
Consider the Morse-Bott contact set-up $(Q,\theta,H)$ defined as before
and the symplectic vector bundle $(E,\Omega)$ associated to $Q$.
Let $(F, \lambda_F)$ be the model contact manifold with $F = T^*{\mathcal N} \oplus E$
and $\lambda_F$ be the contact form on $U_F \subset F$ given in \eqref{eq:lambdaF}.
Now in this section, we prove the following canonical neighborhood
theorem as the converse of Theorem \ref{thm:morsebottsetup2}.
\begin{thm}[Canonical Neighborhood Theorem]\label{thm:neighborhoods2}
Let $Q$ be the clean submanifold of Reeb orbits of
Morse-Bott type contact form $\lambda$ of contact manifold $(M,\xi)$, and
$(Q,\theta)$ and $(E,\Omega)$ be the associated pair defined above. Then
there exist neighborhoods $U$ of $Q$ and $U_F$ of the zero section $o_F$,
a diffeomorphism $\psi: U_F \to U$, and a function $f: U_F \to {\mathbb R}$ such that
$$
\psi^*\lambda = f\, \lambda_F, \, f|_{o_F} \equiv 1, \, df|_{o_F}\equiv 0
$$
and
$$
i_{o_F}^*\psi^*\lambda = \theta, \quad (\psi^*d\lambda|_{VTF})|_{o_F} = 0\oplus \Omega
$$
where we use the canonical identification of $VTF|_{o_F} \cong T^*{\mathcal N} \oplus E$ on the
zero section $o_F \cong Q$.
\end{thm}
We first identify the local pair $({\mathcal U}, Q) \cong (U_F,Q)$ by a
diffeomorphism $\phi: {\mathcal U} \to U_F$ such that
$$
\phi|_Q = id|_Q, \quad d\phi(N_QM) = T^*{\mathcal N} \oplus E
$$
Such a diffeomorphism obviously exists, by the definition of $E$ and $T^*{\mathcal N}$ as
vector bundles over $Q$, using the normal exponential map with respect to any metric $g$
defined on ${\mathcal U}$.
We will just denote $U_F$ by $F$ in the following if there is no danger of confusion.
Now $F$ carries two contact forms $\lambda, \, \lambda_F$ and they are the same
on the zero section $o_F$. With this preparation, we will derive Theorem \ref{thm:neighborhoods2} by the following
general submanifold version of Gray's theorem.
\begin{thm}\label{thm:normalform} Let $M$ be an odd-dimensional manifold with two contact
forms $\lambda_0$ and $\lambda_1$ on it.
Let $Q$ be a closed manifold of Reeb orbits of $\lambda_0$ in $M$, and suppose that
\begin{equation}\label{eq:equalities}
\lambda_0|_{T_QM} =\lambda_1|_{T_QM}, \, d\lambda_0|_{T_QM} = d\lambda_1|_{T_QM}
\end{equation}
where we denote $T_QM = TM|_Q$.
Then there exists a diffeomorphism $\phi$ from a neighborhood $\mathcal{U}$ of $Q$ to a neighborhood $\mathcal{V}$ of $Q$ such that
\begin{equation}\label{eq:phi}
\phi|_Q = id|_Q,
\end{equation}
and a function $f>0$ such that
$$
\phi^*\lambda_1 = f \cdot\lambda_0,
$$
and
\begin{equation}\label{eq:f}
f|_Q\equiv 1,\quad df|_{T_QM}\equiv 0.
\end{equation}
\end{thm}
\begin{proof}
By the assumption on $\lambda_0, \, \lambda_1$, there exists a small tubular neighborhood of $Q$ in $M$, denoted by $\mathcal{U}$,
such that each $\lambda_t=(1-t)\lambda_0+t\lambda_1$, $t\in [0,1]$, is a contact form on $\mathcal{U}$:
this follows from the requirement \eqref{eq:equalities}. Moreover, we have
$$
\lambda_t|_{T_QM} \equiv \lambda_0|_{T_QM}(=\lambda_1|_{T_QM}), \quad \text{ for any } t\in [0,1].
$$
Then the standard Moser trick finishes the proof. For the reader's convenience, we provide the details here.
We are looking for a family of diffeomorphisms onto their images $\phi_t: {\mathcal U}' \to {\mathcal U}$
for some smaller open subset ${\mathcal U}' \subset \overline {\mathcal U}' \subset {\mathcal U}$
such that
$$
\phi_t|_Q = id|_Q, \quad d\phi_t|_{T_QM} = id|_{T_QM}
$$
for all $t \in [0,1]$, together with a family of functions $f_t>0$
defined on $\phi_t(\overline {\mathcal U}')$ such that
\begin{eqnarray*}
\phi_t^*\lambda_t &=& f_t\cdot \lambda_0 \quad \text{on } \, \phi_t({\mathcal U}')\\
\phi_t|_Q&\equiv& id|_Q
\end{eqnarray*}
for $0 \leq t \leq 1$. We will further require $f_t \equiv 1$ on $Q$ and $df_t|_Q \equiv 0$.
Since $Q$ is a closed manifold, it is enough to look for the vector fields $Y_t$ generated by $\phi_t$ via
\begin{equation}\label{eq:ddtphit}
\frac{d}{dt}\phi_t=Y_t\circ \phi_t, \quad \phi_0=id,
\end{equation}
satisfying
$$
\begin{cases}
\phi_t^*\left(\frac{d}{dt}\lambda_t+{\mathcal L}_{Y_t}\lambda_t\right)
= \frac{f_t'}{f_t} \phi_t^*\lambda_t\\
Y_t|_Q\equiv0, \quad \nabla Y_t|_{T_QM} \equiv 0.
\end{cases}
$$
By Cartan's magic formula, the first equation gives rise to
\begin{equation}\label{eq:cartan}
d(Y_t\rfloor\lambda_t)+Y_t\rfloor d\lambda_t={\mathcal L}_{Y_t}\lambda_t=(\frac{f_t'}{f_t}\circ \phi_t^{-1})\lambda_t-\alpha,
\end{equation}
where
$$
\alpha=\lambda_1-\lambda_0 (\equiv \frac{d \lambda_t}{dt}).
$$
Now, we need to show that there exists $Y_t$ such that
$\frac{d}{dt}\lambda_t+{\mathcal L}_{Y_t}\lambda_t$ is proportional to $\lambda_t$.
Actually, we can make our choice of $Y_t$ unique if we restrict ourselves
to those tangent to $\xi_t = \ker \lambda_t$ by Lemma \ref{lem:decompose}.
We require $Y_t\in \xi_t$ and then \eqref{eq:cartan} becomes
\begin{equation}\label{eq:alphatYt}
\alpha = - Y_t \rfloor d\lambda_t +(\frac{f_t'}{f_t}\circ \phi_t^{-1})\lambda_t.
\end{equation}
This in turn determines $\phi_t$ by integration.
Since $\alpha|_Q =(\lambda_1-\lambda_0)|_Q=0$ and $\lambda_t|_{T_QM} = \lambda_0|_{T_QM}$,
(and hence $f_t \equiv 1$ on $Q$), it follows that $Y_t=0$ on $Q$. Therefore by compactness of
$[0,1] \times Q$, the domain of existence of the ODE $\dot x = Y_t(x)$
includes an open neighborhood of $[0,1] \times Q \subset {\mathbb R} \times M$ which we may
assume is of the form $(-\epsilon, 1+ \epsilon) \times {\mathcal V}$.
Now going back to \eqref{eq:alphatYt}, we find that the coefficient
$\frac{f_t'}{f_t}\circ \phi_t^{-1}$ is uniquely determined.
We evaluate $\alpha = \lambda_1 - \lambda_0$ against the vector fields $X_t := (\phi_t)_*X_{\lambda_0}$, and get
\begin{equation}\label{eq:logft}
\frac{d}{dt}\log f_t = \frac{f_t'}{f_t}=(\lambda_1(X_t)-\lambda_0(X_t))\circ \phi_t,
\end{equation}
which determines $f_t$ by integration with the initial condition $f_0 \equiv 1$.
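Explicitly, integrating \eqref{eq:logft} with $f_0 \equiv 1$ gives
$$
f_t = \exp\left(\int_0^t \big(\lambda_1(X_s)-\lambda_0(X_s)\big)\circ \phi_s \, ds\right).
$$
In particular, since $\lambda_1 - \lambda_0$ vanishes on $T_QM$ and $X_s$ is tangent to $Q$ along
$Q$, the integrand vanishes on $Q$, and hence $f_t \equiv 1$ on $Q$ for all $t$.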
It remains to check the additional properties \eqref{eq:phi}, \eqref{eq:f}.
We set
$$
h_t = \frac{f_t'}{f_t}\circ \phi_t^{-1}.
$$
\begin{lem}
$$
dh_t|_{T_QM} \equiv 0
$$
\end{lem}
\begin{proof} By \eqref{eq:logft}, we obtain
$$
dh_t = d(\lambda_1(X_t))-d(\lambda_0(X_t)) = {\mathcal L}_{X_t}(\lambda_1 - \lambda_0) - X_t \rfloor d(\lambda_1 - \lambda_0).
$$
Since $X_t = X_{\lambda_0} = X_{\lambda_1}$ on $Q$, we have $X_t \rfloor d\lambda_0 = 0 = X_t \rfloor d\lambda_1$
along $Q$, and so the second term vanishes.
For the first term, consider $p \in Q$ and $v \in T_pM$. Let $Y$ be a locally defined vector field
with $Y(p) = v$. Then we compute
$$
{\mathcal L}_{X_t}(\lambda_1 - \lambda_0)(Y)(p) = {\mathcal L}_{X_t}((\lambda_1 - \lambda_0)(Y))(p) - (\lambda_1 - \lambda_0)({\mathcal L}_{X_t}Y)(p).
$$
The second term of the right hand side vanishes since $\lambda_1 = \lambda_0$ on $T_pM$ for $p \in Q$. For the first one, we note
$X_t$ is tangent to $Q$ for all $t$ and $(\lambda_1 - \lambda_0)(Y) \equiv 0$ on $Q$ by the hypothesis
$\lambda_0 = \lambda_1$ on $T_QM$. Therefore the first term also vanishes. This finishes the proof.
\end{proof}
Now set $g_t := \log f_t$, so that $g_t' = \frac{f_t'}{f_t} = h_t\circ \phi_t$. Since $\phi_t$ is a
diffeomorphism and $\phi_t(Q) \subset Q$, the lemma implies $dg'_t = 0$ along $Q$ for all $t$.
Integrating $dg'_t = 0$ from $t = 0$ with the initial condition $dg_0 = 0$ along $Q$, we obtain
$dg_t = 0$ along $Q$ (meaning $dg_t|_{T_QM} = 0$), i.e., $df_t = 0$ on $Q$.
This completes the proof of Theorem \ref{thm:normalform}.
\end{proof}
Applying this theorem to $\lambda$ and $\lambda_F$ on $F$ with $Q$ as the zero section $o_F$,
we can wrap up the proof of Theorem \ref{thm:neighborhoods2}.
\begin{proof}[Proof of Theorem \ref{thm:neighborhoods2}]
The first two statements are immediate translations of Theorem \ref{thm:normalform}.
For the last statement, we compute
$$
\psi^*d\lambda = d(f\, \lambda_F) = df \wedge \lambda_F + f\, d\lambda_F.
$$
Using $df|_{o_F}= 0$ and $f|_{o_F} \equiv 1$, this finishes the proof of
the last statement of the theorem.
\end{proof}
\begin{defn}[Normal Form of Contact Form] We call $(U_F,f \, \lambda_F)$ the normal
form of the contact form $\lambda$ associated to clean submanifold $Q$ of Reeb orbits.
\end{defn}
Note that the contact structures associated to $\phi_*\lambda$ and $\lambda_F$ are the same, given by
$$
\xi_F = \ker \lambda_F = \ker \phi_*\lambda.
$$
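Indeed, if two contact forms differ by a nowhere-vanishing factor, say $\lambda' = f\, \lambda$
with $f > 0$, then $\ker \lambda' = \ker \lambda$ pointwise, since $f(p)\,\lambda_p(v) = 0$ if and
only if $\lambda_p(v) = 0$.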
This proves the following normal form theorem of the contact structure $(M,\xi)$
in a neighborhood of $Q$.
\begin{prop} Suppose that $Q \subset M$ is a clean submanifold of Reeb orbits of
$\lambda$. Then there exists a contactomorphism from
a neighborhood ${\mathcal U} \supset Q$ to a neighborhood of the zero section of
$F$ equipped with the $S^1$-invariant contact structure $\xi_F = \ker \lambda_F$.
\end{prop}
\begin{defn}[Normal Form of Contact Structure] We call $(F,\xi_F)$ the normal form of
$(M,\xi)$ associated to clean submanifold $Q$ of Reeb orbits.
\end{defn}
However, the Reeb vector fields of $\phi_*\lambda$ and $\lambda_F$ coincide
only along the zero section in general.
In the rest of the paper, we will work with $F$ and with a general contact form $\lambda$
that satisfies
\begin{equation}\label{eq:F-lambda}
\lambda|_{o_F} \equiv \lambda_F|_{o_F}, \quad d\lambda|_{VTF|_{o_F}} = \Omega.
\end{equation}
In particular $o_F$ is also the clean manifold of Reeb orbits of $\lambda$ with the same period $T$.
We re-state the above normal form theorem in this context.
\begin{prop} Let $\lambda$ be any contact form in a neighborhood of $o_F$ on $F$
satisfying \eqref{eq:F-lambda}. Then there exists a function $f$
in a neighborhood of $o_F$ such that
$\lambda = f\, \lambda_F$ with $f|_{o_F} \equiv 1$ and $df|_{o_F} \equiv 0$.
\end{prop}
We denote by $\xi$ and $X_\lambda$ the corresponding contact structure and Reeb vector field of $\lambda$,
and by $\pi_\lambda$, $\pi_{\lambda_F}$ the corresponding projections from
$TF$ to $\xi$ and $\xi_F$, respectively.
\section{Linearization of Reeb orbits on the normal form}
\label{sec:linearization-orbits}
In this section, we systematically examine the decomposition of the linearization map of
Reeb orbits in terms of the coordinate expression of
the loops $z$ in $F$ in this normal neighborhood.
For a given map $z: S^1 \to F$, we set $x := \pi_F \circ z$. Then
we can express
$$
z(t) = (x(t), s(t)), \quad t \in S^1
$$
where $s(t) \in F_{x(t)}$, i.e., $s$ is a section of $x^*F$.
We regard this decomposition as the map
$$
{\mathcal I}: {\mathcal F}(S^1, F) \to {\mathcal H}^F_{S^1}
$$
where ${\mathcal H}^F$ is the infinite dimensional vector bundle
$$
{\mathcal H}^F_{S^1} = \bigcup_{x \in {\mathcal F}(S^1,F)} {\mathcal H}^F_{S^1,x}
$$
where ${\mathcal H}^F_{S^1,x}$ is the vector space given by
$$
{\mathcal H}^F_{S^1,x} = \Omega^0(x^*F)
$$
the set of smooth sections of the pull-back vector bundle $x^*F$. This provides
a coordinate description of ${\mathcal F}(S^1,F)$ in terms of ${\mathcal H}^F_{S^1}$. We denote the corresponding
coordinates $z = (u_z,s_z)$ when we feel necessary to make the dependence of $(x,s)$ on $z$
explicit.
We fix an $S^1$-invariant connection on $F$ and the associated splitting
\begin{equation}\label{eq:TF}
TF = HTF \oplus VTF
\end{equation}
which is defined to be the direct sum connection of $T^*{\mathcal N}$ and the $S^1$-invariant
symplectic vector bundle $(E,\Omega)$.
Then we express
$$
\dot z = \left(\begin{matrix} \widetilde{\dot x} \\
\nabla_t s \end{matrix}\right).
$$
Here we regard $\dot x$ as a $TQ$-valued one-form on $S^1$ and
$\nabla_t s$ is defined to be
$$
\nabla_{\dot x} s = (x^*\nabla)_{\frac{\partial}{\partial t}} s
$$
which we regard as an element of $F_{x(t)}$. Through identification of
$H_s TF$ with $T_{\pi_F(s)} Q$ and $V_s TF$ with $F_{\pi_F(s)}$ or more precisely through the identity
$$
I_{s;x}(\widetilde{\dot x}) = \dot x,
$$
we will just write
$$
\dot z = \left(\begin{matrix} \dot x \\
\nabla_t s \end{matrix}\right).
$$
Recall that $o_F$ is foliated by the Reeb orbits of $\lambda$ which also form
the fibers of the prequantization bundle $Q \to P$.
For a given Reeb orbit $z = (x,s)$, we denote $x(t) = \gamma(T\cdot t)$ where
$\gamma$ is a Reeb orbit of period $T$ of the contact form $\theta$ on $Q$ which is
nothing but a fiber of the prequantization $Q \to P$.
We then decompose
$$
D\Upsilon(z)(Y) = (D\Upsilon(z)(Y))^v + (D\Upsilon(z)(Y))^h.
$$
Then the assignment $Y \mapsto (D\Upsilon(z)(Y))^v$ defines an operator
from $\Gamma(z^*VTF)$ to $\Gamma(z^*VTF)$. Composing with the map $I_{z;x}$,
we obtain an operator from $\Omega^0(x^*F)$ to $\Omega^0(x^*F)$.
We denote this operator by
\begin{equation}\label{eq:Dupsilon}
D\upsilon(x): \Omega^0(x^*F) \to \Omega^0(x^*F).
\end{equation}
Using $X_F = \widetilde X_{\lambda,Q}$ and $\nabla_Y X_F = 0$ for any vertical vector field $Y$,
we derive the following proposition from Lemma \ref{lem:DUpsilon}.
This will be important later for our exponential estimates.
\begin{prop}\label{prop:DupsilonE} Let $D\upsilon = D\upsilon(x)$ be the operator defined above.
Define the vertical Hamiltonian vector field $X^\Omega_g$ by
$$
X^\Omega_g \rfloor \Omega = dg|_{VTF}.
$$
Then
\begin{equation}\label{eq:nablaeY}
D\upsilon = \nabla_t^F - T\, D^v X_g^{\Omega}(z)
\end{equation}
where $z = (x,o_{x})$.
\end{prop}
\begin{proof}
Consider a vertical vector field $Y \in VTF$ along a Reeb orbit $z$ as above
and regard it as the section of $z^*F$ defined by
$$
s_Y(t) = I_{z;x}(Y(t))
$$
where $x(t) = \gamma(T\cdot t)$ for a Reeb orbit $\gamma$ with period $T$ of $X_{\lambda,Q}$ on $o_F\cong Q$.
Recall the formula
\begin{equation}\label{eq:DUpsilonE}
D\Upsilon(z)(Y)
= \nabla_t^\pi Y - \left(\frac{1}{f}\nabla_Y X_{\lambda_F} + Y[1/f] X_{\lambda_F} \right)
- \left(\frac{1}{f} \nabla_Y X_{dg}^{\pi_{\lambda_F}} + Y[1/f] X_{dg}^{\pi_{\lambda_F}}\right)
\end{equation}
from Lemma \ref{lem:DUpsilon}, which we apply to the vertical vector field $Y$ for
the contact manifold $(U_F,\lambda_F)$.
We recall $f \equiv 1$ on $o_F$ and $df \equiv 0$ on $TF|_{o_F}$.
Therefore we have $Y[1/f] = 0$. Furthermore recall $X_F = \widetilde X_{\lambda,Q}$ and on $o_F$
$$
\nabla_Y X_F = D^v X_F (s_Y) = D^v \widetilde X_\theta(s_Y) = 0
$$
on $o_F$. On the other hand, by definition, we derive
$$
- I_{z;x}\left(\frac{1}{f}\nabla_Y X_{dg}^{\pi_{\lambda_F}}\right) =
D^v X_g^\Omega (s_Y).
$$
By substituting this into \eqref{eq:DUpsilonE} and composing with $I_{z;x}$,
we have finished the proof.
\end{proof}
By construction, it follows that the vector field along $z$ defined by
$$
t \mapsto \phi_{X_\theta}^t(v), \quad t \in [0,1]
$$
for any $v \in T_{z(0)} Q$ lies in $\ker D\Upsilon(z)$.
By the Morse-Bott hypothesis, this set of vector fields exhausts
$
\ker D\Upsilon(z).
$
We denote by $\delta > 0$ the gap between $0$ and the first non-zero eigenvalue of
$D\Upsilon(z)$. Then we obtain the following
\begin{cor}\label{cor:gap} Let $z = (x_z, o_{x_z})$ be a Reeb orbit. Then
for any section $s \in \Omega^0(x^*F)$, we have
\begin{equation}\label{eq:Dupsilon-gap}
\|\nabla_t^F s - T D^v X_g^{\Omega}(z)(s)\|_2^2 \geq \delta^2 \|s\|_2^2.
\end{equation}
\end{cor}
This inequality plays a crucial role in the study of exponential convergence of
contact instantons in the Morse-Bott context studied later in the present paper.
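The gap inequality \eqref{eq:Dupsilon-gap} is simply the statement that $0$ lies outside the spectrum of the vertical operator. It can be tested in a finite-dimensional model; the following Python sketch (all sizes and matrices are illustrative choices of ours, not objects from the paper) discretizes $\nabla_t$ on $S^1$ by a periodic central difference and plays the role of $T\, D^v X_g^\Omega$ by a symmetric zeroth-order term, with $\delta$ the smallest singular value:

```python
import numpy as np

# Illustrative finite-dimensional model of the gap inequality:
# D is a periodic central-difference discretization of nabla_t on S^1,
# A is a symmetric zeroth-order term standing in for T * D^v X_g^Omega.
# The grid size N and the entries of A are hypothetical choices.
N = 64
h = 2 * np.pi / N
D = (np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)) / (2 * h)

rng = np.random.default_rng(0)
A = np.diag(1.0 + 0.1 * rng.standard_normal(N))  # symmetric, positive diagonal
L = D - A                                        # model of D upsilon

# delta = smallest singular value of L; nondegeneracy <=> delta > 0.
delta = np.linalg.svd(L, compute_uv=False).min()
assert delta > 0

# The gap inequality ||L s||_2 >= delta ||s||_2 for every section s.
for _ in range(100):
    s = rng.standard_normal(N)
    assert np.linalg.norm(L @ s) >= delta * np.linalg.norm(s) - 1e-9
```

Here $D$ is antisymmetric and $A$ has positive diagonal, so $s^{\mathsf T}Ls = -s^{\mathsf T}As < 0$ for $s \neq 0$; hence $L$ is invertible and the gap $\delta$ is indeed positive in this model.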
\section{$CR$-almost complex structures adapted to $Q$}
\label{sec:adapted}
\emph{We would like to emphasize that we have not involved any
almost complex structure yet. Now we involve $J$ in our discussion.}
Let $J$ be any $CR$-almost complex structure compatible with $\lambda$ in the sense that
$(M,\lambda,J)$ defines a contact triad, and denote by $g$ the triad metric.
Then we can realize the normal bundle $N_QM = T_QM /TQ$ as the metric normal bundle
$$
N^g_QM = \{ v \in T_QM \mid d\lambda(v, J w) = 0, \forall w \in TQ\}.
$$
We start with the following obvious lemma.
\begin{lem} Consider the foliation ${\mathcal N}$ of $(Q,\omega_Q)$, where $\omega_Q = i_Q^*d\lambda$.
Then $JT{\mathcal N}$ is perpendicular to $TQ$ with respect to the triad metric of $(M,\lambda,J)$.
In particular $JT{\mathcal N} \subset N^g_QM$.
\end{lem}
\begin{proof} The first statement follows from the property that
$T{\mathcal N}$ is isotropic with respect to $\omega_Q$.
\end{proof}
Now, we introduce the notion of $CR$-almost complex structures adapted to a clean submanifold $Q$ of $M$.
\begin{defn}\label{defn:adapted} Let $Q \subset M$ be a clean manifold of closed Reeb orbits of $\lambda$.
Suppose $J$ defines a contact triad $(M,\lambda,J)$.
We say a $CR$-almost complex structure $J$ for $(M,\xi)$ is adapted to
the clean manifold $Q$ if $J$ satisfies
\begin{equation}\label{eq:JTQ}
J (TQ) \subset TQ + J T{\mathcal N}.
\end{equation}
\end{defn}
\begin{prop}\label{prop:adapted} The set of adapted $J$ relative to $Q$ is nonempty and is a contractible
infinite dimensional manifold.
\end{prop}
\begin{proof}
For the existence of a $J$ adapted to $Q$, we recall the splitting
\begin{eqnarray*}
T_q Q & = & {\mathbb R}\{X_\lambda(q)\}\oplus T_q {\mathcal N} \oplus G_q,\\
T_q M & \cong & ({\mathbb R}\{X_\lambda(q)\}\oplus T_q{\mathcal N} \oplus G_q) \oplus(T_q^*{\mathcal N} \oplus E_q) \\
& = & {\mathbb R}\{X_\lambda(q)\}\oplus (T_q{\mathcal N} \oplus T_q^*{\mathcal N}) \oplus G_q \oplus E_q
\end{eqnarray*}
on each connected component of $Q$. Therefore we
can find $J$ so that it is compatible on $T{\mathcal N} \oplus T^*{\mathcal N}$ with respect to
$-d\Theta_G|_{T{\mathcal N} \oplus T^*{\mathcal N}}$, and compatible on $G$
with respect to $\omega_Q$ and on $E$ with respect to $\Omega$. It follows that
any such $J$ is adapted to $Q$. This proves the first statement.
The proof of the second statement will be postponed to the Appendix.
This finishes the proof.
\end{proof}
We note that each summand $T_q{\mathcal N} \oplus T_q^*{\mathcal N}$,
$G_q$ and $E_q$ in the above splitting of $T_QM$ is symplectic with respect to $d\lambda_F$.
We recall the canonical embeddings $T^*{\mathcal N}$ and $E$ into $N_QM$
and the identification $N_Q M \cong T^*{\mathcal N} \oplus E$.
\begin{lem}\label{lem:J-identify} For any adapted $J$, the identification of the normal bundle
\begin{equation}\label{eq:NgQM}
N_Q M \to N^g_Q M; \quad [v] \mapsto \widetilde{d\lambda}(-J v)
\end{equation}
naturally induces the following identifications:
\begin{enumerate}
\item
$
T^*{\mathcal N}\cong JT{\mathcal N}.
$
\item
$
E\hookrightarrow N_QM \cong (TQ)^{d\lambda}\cap (JT{\mathcal N})^{d\lambda}.
$
\end{enumerate}
\end{lem}
\begin{proof}
(1) follows by looking at the metric $\langle\cdot, \cdot\rangle=d\lambda(\cdot, J\cdot)$.
Restricting it to $(TQ)^{d\lambda}$,
we can identify $E$ with the complement of $T{\mathcal F}$ with respect to this metric,
which is just $(TQ)^{d\lambda}\cap (JT{\mathcal N})^{d\lambda}$.
\end{proof}
\begin{lem} For any adapted $J$, $J E\subset E$ in the sense of the identification of $E$ with
the subbundle of $T_QM$ given in the above lemma.
\end{lem}
\begin{proof}
Take $v\in (TQ)^{d\lambda}\cap (JT{\mathcal N})^{d\lambda}$,
then for any $w\in TQ$,
$$
d\lambda(Jv, w)=-d\lambda(v, Jw)=0
$$
since $JTQ\subset TQ\oplus JT{\mathcal N}$. Hence $Jv\in (TQ)^{d\lambda}$.
For any $w\in T{\mathcal N}$,
$$
d\lambda(Jv, Jw)=d\lambda(v, w)=0
$$
since $v\in (TQ)^{d\lambda}$ and $w\in TQ$. Hence $Jv\in (JT{\mathcal N})^{d\lambda}$,
and we are done.
\end{proof}
\begin{rem}\label{rem:adapted}
\begin{enumerate}
\item
We would like to mention that in the nondegenerate case
the adaptedness is automatically satisfied by any compatible $CR$-almost complex structure
$J \in {\mathcal J}(Q,\lambda)$, because in that case $P$ is a point and $HTF = {\mathbb R} \cdot \{X_F\}$ and
$VTF = TF = \xi_F$.
\item
However for the general Morse-Bott case, the set of adapted $CR$-almost complex
structures is strictly smaller than ${\mathcal J}(M,\lambda)$. It appears that for the proof of the exponential
convergence result of Reeb orbits in the Morse-Bott case, this additional restriction of
$J$ to those adapted to the clean manifold of Reeb orbits in the above sense facilitates geometric computation
considerably. (Compare our computations with those given in \cite{bourgeois}, \cite{behwz}.)
\item When $T{\mathcal N} = \{0\}$, $(Q,\lambda_Q)$ carries the structure of a prequantization bundle $Q \to P = Q/S^1$.
\end{enumerate}
\end{rem}
We specialize to the normal form $(U_F, f \lambda_F)$.
We note that the complex structure $J_F: \xi_F \to \xi_F$ canonically induces one on $VTF \to F$
$$
J_F^v: VTF \to VTF
$$
satisfying $(J_F^v)^2 = -id_{VTF}$. For any given $J$ adapted to $o_F \subset F$,
it has the decomposition
$$
J_{U_F}|_{o_F} = \left(\begin{matrix} \widetilde J_G & 0 & 0 & 0 \\
0 & 0 & I & D \\
C & -I & 0 & 0\\
0 & 0 & 0 & J_E
\end{matrix} \right)
$$
on the zero section with respect to the splitting
$$
TF|_{o_F} \cong {\mathbb R}\{X_F\} \oplus G \oplus (T{\mathcal N} \oplus T^*{\mathcal N}) \oplus E.
$$
Here we note that $C \in {\operatorname{Hom}}(TQ \cap \xi_F,T^*{\mathcal N})$ and $D \in {\operatorname{Hom}}(E,T{\mathcal N})$, which depend on
$J$.
Using the splitting
\begin{eqnarray*}
TF & \cong & {\mathbb R}\{X_{\lambda_F}\} \oplus \widetilde G \oplus (\widetilde{T{\mathcal N}} \oplus VT{\mathcal N}^*) \oplus VTF \\
& \cong &
{\mathbb R}\{X_{\lambda_F}\} \oplus (T{\mathcal N} \oplus T^*{\mathcal N})\oplus \widetilde G \oplus E
\end{eqnarray*}
on $o_F \cong Q$, we lift $J_F$ to a $\lambda_F$-compatible almost
complex structure on the total space $F$, which we denote by $J_0$.
We note that the triad $(F,\lambda_F,J_0)$ is naturally $S^1$-equivariant by the $S^1$-action induced by
the Reeb flow on $Q$.
\begin{defn}[Normalized Contact Triad $(F,\lambda_F,J_0)$]\label{defn:normaltriad}
We call the $S^1$-invariant contact triad $(F,\lambda_F,J_0)$ the normalized
contact triad adapted to $Q$.
\end{defn}
Now we are ready to give the proof of the following.
\begin{prop}\label{prop:connection} Consider the
contact triad $(U_F,\lambda_F,J_0)$ for an adapted $J$ and its associated
triad connection. Then the zero section $o_F \cong Q$ is
totally geodesic and so naturally induces an affine connection on $Q$.
Furthermore the induced connection on $Q$ preserves $T{\mathcal F}$ and the splitting
$$
T{\mathcal F} = {\mathbb R}\{X_{\lambda_F}\} \oplus T{\mathcal N}.
$$
\end{prop}
\begin{proof} Recall that ${\mathcal F}$ is the null foliation of $\omega_Q = i_Q^*d\lambda$,
and $Q$ carries the $S^1$-action induced by the Reeb flow of $\lambda$.
We consider the chain of fibrations
$$
Q \to S \to P
$$
where the first projection is the quotient map of the $S^1$-action induced by the Reeb flow and
the second one is the projection along the null foliation of the presymplectic form $\omega_S$.
To construct the required connection, we consider the
contact triad $(U_F,\lambda_F,J_0)$ for an adapted $J$. Then the associated triad connection
makes the zero section totally geodesic and so canonically restricts to
an affine connection on $o_F \cong Q$. The connection automatically preserves the form $d\lambda_Q$,
satisfies $\nabla_{X_{\lambda_F}}X_{\lambda_F} = 0$
and its torsion satisfies $T(X_{\lambda_F},Z) = 0$ for all vector fields $Z$ on $Q$.
It remains to show that this connection preserves the splitting $T{\mathcal F} = {\mathbb R}\{X_{\lambda,Q}\} \oplus T{\mathcal N}$.
For the simplicity of notation, we denote $\lambda_F = \lambda$ in the rest of this proof.
Then let $q \in Q$ and $v \in T_qQ$. We pick a vector field $Z$ that is
tangent to $Q$ and $S^1$-invariant and satisfies $Z(q) = v$.
If $Z$ is a multiple of $X_\lambda$, then we can choose $Z = c\, X_\lambda$ for some
constant $c$, and so $\nabla_Z X_{\lambda} = 0$ by the axiom $\nabla_{X_\lambda}X_\lambda = 0$
of contact triad connection. Then for $Y \in \xi \cap T{\mathcal F}$, we compute
$$
\nabla_{X_\lambda} Y = \nabla_Y X_\lambda + [X_\lambda,Y] \in \xi
$$
by an axiom of the triad connection. On the other hand, for any $Z$ tangent to $Q$, we derive
$$
d\lambda(\nabla_{X_\lambda} Y, Z) = -d\lambda(Y,\nabla_{X_\lambda}Z) = 0
$$
since $Q = o_F$ is totally geodesic and so $\nabla_{X_\lambda}Z \in TQ$. This proves that
$\nabla_{X_\lambda} T{\mathcal N} \subset T{\mathcal N}$.
For $v \in T_q Q \cap \xi_q$, we have
$
\nabla_Z {X_\lambda} \in \xi \cap TQ.
$
On the other hand,
$$
\nabla_Z {X_\lambda} = \nabla_{X_\lambda} Z + [Z,X_\lambda] = \nabla_{X_\lambda} Z
$$
since $[Z,X_\lambda] = 0$ by the $S^1$-invariance of $Z$. Now let $W \in T{\mathcal N}$ and compute
$$
\langle \nabla_Z {X_\lambda}, W \rangle = d\lambda_F(\nabla_Z {X_\lambda}, J_0 W).
$$
On the other hand $J_0 W \in T^*{\mathcal N} \subset T_{o_F}F$ since ${\mathcal F}$ is (maximally)
isotropic with respect to $d\lambda$. Therefore we obtain
$$
d\lambda_F( \nabla_Z {X_\lambda}, J_0 W) = -\pi_{T^*{\mathcal N};F}^*\Theta_G(q,0,0)(\nabla_Z {X_\lambda}, J_0 W) = 0.
$$
This proves $\nabla_Z {X_\lambda}$ is perpendicular to $T{\mathcal N}$ with respect to the triad metric
and so must be parallel to $X_\lambda$. On the other hand,
$$
\langle \nabla_Z W, {X_\lambda} \rangle = - \langle \nabla_Z {X_\lambda}, W \rangle = 0
$$
and hence if $W \in T{\mathcal N}$, it must be perpendicular to $X_\lambda$. Furthermore we have
$$
d\lambda(\nabla_Z W, V) = -d\lambda(W,\nabla_Z V) =0
$$
for any $V$ tangent to $Q$ since $\nabla_Z V \in TQ$ as $Q$ is totally geodesic. This proves
$\nabla_ZW$ indeed lies in $\xi \cap T{\mathcal F} = T{\mathcal N}$, which finishes the proof.
\end{proof}
\section{Normal coordinates of $dw$ in $(U_F,f \lambda_F)$}
\label{sec:coord}
We fix the splitting $TF = HTF \oplus VTF$ given in \eqref{eq:TF} and
consider the decomposition of $w = (u,s)$ according to the splitting.
For a given map $w: \dot \Sigma \to F$, we set $u := \pi_F \circ w$. Then
we can express
$$
w(z) = (u(z), s(z)), \quad z \in \Sigma
$$
where $s(z) \in F_{u(z)}$, i.e., $s$ is a section of $u^*F$.
We regard this decomposition as the map
$$
{\mathcal I}: {\mathcal F}(\Sigma, F) \to {\mathcal H}^F_\Sigma
$$
where ${\mathcal H}^F$ is the infinite dimensional vector bundle
$$
{\mathcal H}^F_\Sigma = \bigcup_{u \in {\mathcal F}(\Sigma,F)} {\mathcal H}^F_{\Sigma,u}
$$
where ${\mathcal H}^F_{\Sigma,u}$ is the vector space given by
$$
{\mathcal H}^F_{\Sigma,u} = \Omega^0(u^*F)
$$
the set of smooth sections of the pull-back vector bundle $u^*F$. This provides
a coordinate description of ${\mathcal F}(\Sigma,F)$ in terms of ${\mathcal H}^F_\Sigma$. We denote the corresponding
coordinates $w = (u_w,s_w)$ when we feel necessary to make the dependence of $(u,s)$ on $w$
explicit.
In terms of the splitting \eqref{eq:TF}, we express
$$
dw = \left(\begin{matrix} \widetilde{du} \\
\nabla_{du} s \end{matrix}\right).
$$
Here we regard $du$ as a $TQ$-valued one-form on $\dot\Sigma$ and
$\nabla_{du} s$ is defined to be
$$
\nabla_{du(\eta)} s(z) = (u^*\nabla)_{\eta} s
$$
for a tangent vector $\eta \in T_z\Sigma$, which we regard as an element of $F_{u(z)}$. Through identification of
$HTF_s$ with $T_{\pi_F(s)} Q$ and $VTF_s$ with $F_{\pi_F(s)}$ or more precisely through the identity
$$
I_{w;u}(\widetilde{du}) = du,
$$
we will just write
$$
dw = \left(\begin{matrix} du \\
\nabla_{du} s \end{matrix}\right)
$$
from now on, unless it is necessary to emphasize the fact that $dw$ a priori has values
in $TF = HTF \oplus VTF$, not $TQ \oplus F$.
To write them in terms of the coordinates $w = (u,s)$, we first derive the
formula for the projection $d^\pi w = d^{\pi_{\lambda}}w$ with $\lambda = f\, \lambda_F$.
For this purpose, we recall the formula for $X_{f\lambda_F}$ from Proposition \ref{prop:eta} in Section
\ref{sec:perturbed-forms}:
$$
X_{f\lambda_F} = \frac{1}{f}(X_{\lambda_F} + X_{dg}^{\pi_{\lambda_F}})
$$
for $g = \log f$. We decompose
$$
X_{dg}^{\pi_{\lambda_F}} = (X_{dg}^{\pi_{\lambda_F}})^v + (X_{dg}^{\pi_{\lambda_F}})^h
$$
into the vertical and the horizontal components. This leads us to the decomposition
\begin{equation}\label{eq:XflambdaF-decompo}
f\, X_{f\lambda_F} = (X_{dg}^{\pi_{\lambda_F}})^h + X_{\lambda_F} + (X_{dg}^{\pi_{\lambda_F}})^v
\end{equation}
in terms of the splitting
$$
TF = \widetilde \xi_\lambda \oplus {\mathbb R}\{X_F\} \oplus VTF, \quad HTF = \widetilde \xi_\lambda \oplus {\mathbb R}\{X_F\}.
$$
Recalling $d\lambda_F = \pi_F^*d\lambda + \Omega$, we have derived
\begin{lem} At each $s \in F$,
\begin{equation}\label{eq:Hamiltonian}
(X_{dg}^{\pi_{\lambda_F}})^v(s) = X_{g|_{F_{\pi(s)}}}^{\Omega^v(s)}.
\end{equation}
\end{lem}
Now we are ready to derive an important formula that will play a
crucial role in our exponential estimates in later sections. Recalling the canonical
isomorphism
$$
I_{s;\pi_F(s)}: VTF_s \to F_{\pi_F(s)}
$$
we introduced in section 2, we define the following \emph{vertical derivative}
\begin{defn}\label{defn:vertical-derive}
Let $X$ be a vector field on the total space of $F \to Q$. The \emph{vertical derivative} of $X$,
denoted by $D^v X: F \to F$, is the map defined by
\begin{equation}\label{eq:DvX}
D^vX(q)(s) := \frac{d}{dr}\Big|_{r=0} I_{rs;\pi_F(rs)}(X^v(rs)).
\end{equation}
\end{defn}
\begin{prop} Let $(E,\Omega, J_E)$ be the Hermitian vector bundle
for $\Omega$ defined as before.
Let $g = \log f$ and $X_g^{d\lambda_E}$ be the contact Hamiltonian
vector field as above. Then we have
$$
J_E D^v X_{dg}^{\pi_{\lambda_F}} = \operatorname{Hess}^v g(q,o_q).
$$
In particular, $J_E D^v X_g^{d\lambda_E}: E \to E$
is a symmetric endomorphism with respect to the metric
$g_E = \Omega(\cdot, J_E\cdot)$.
\end{prop}
\begin{proof} Let $q \in Q$ and $e_1, \, e_2 \in E_q$. We compute
\begin{eqnarray*}
\langle D^v X_{dg}^{\pi_{\lambda_E}}(q)e_1, e_2 \rangle & = & \Omega (D^v X_{dg}^{\pi_{\lambda_E}}(q) e_1,J_E e_2) \\
& = &
\Omega \left(\frac{d}{dr}\Big|_{r=0} I_{re_1;q}((X_{dg}^{\pi_{\lambda_E}})^v (re_1)), J_E e_2\right) \\
& = & \Omega \left(\frac{d}{dr}\Big|_{r=0} I_{re_1;q}(X_g^{\Omega}(re_1)), J_E e_2\right).
\end{eqnarray*}
Here $\frac{d}{dr}\Big|_{r=0} X_g^{\Omega}(re_1)$ is nothing but
$$
DX_g^{\Omega}(q)(e_1)
$$
where $DX_g^{\Omega}(q)$ is the linearization of the Hamiltonian vector field of $g|_{E_{q}}$ of the
symplectic inner product $\Omega(q)$ on $E_q$. Therefore it lies in the symplectic
Lie algebra ${\mathfrak{sp}}(\Omega)$ and so satisfies
\begin{equation}\label{eq:DXg-symp}
\Omega(DX_g^{\Omega}(q)(e_1), e_2) + \Omega(e_1, DX_g^{\Omega}(q)(e_2)) = 0
\end{equation}
which is equivalent to saying that $J_E DX_g^{\Omega}(q)$ is symmetric with
respect to the inner product $g_E = \Omega(\cdot, J_E \cdot)$. But we also have
$$
J_E DX_g^{\Omega}(q) = D \operatorname{grad}_{g_E(q)} g|_{E_q} = \operatorname{Hess}^v g(q).
$$
On the other hand, replacing $e_1$ by $J_E e_1$ in \eqref{eq:DXg-symp}, we also obtain
$$
\Omega(DX_g^{\Omega}(q)(J_E e_1), e_2) + \Omega(J_E e_1, DX_g^{\Omega}(q)(e_2)) = 0.
$$
The first term becomes
$$
\langle DX_g^{\Omega}(q)(e_1), e_2 \rangle
$$
and the second term can be written as
\begin{eqnarray*}
\Omega(DX_g^{\Omega}(q)(J_E e_2), e_1)& = & - \Omega(J_E e_2, DX_g^{\Omega}(q)(e_1))\\
& = & \Omega(e_2, J_E DX_g^{\Omega}(q)(e_1)) \\
& = & \langle e_2, DX_g^{\Omega}(q)(e_1)\rangle.
\end{eqnarray*}
\end{proof}
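The symmetry statement of the proposition can be checked in the simplest fiberwise model. The following Python sketch (an illustrative choice of ours: a single fiber $E_q \cong {\mathbb R}^2$ with the standard symplectic form, a compatible $J_E$, and a quadratic Hamiltonian $g$ with Hessian $H$) verifies both \eqref{eq:DXg-symp} and the identity $J_E\, DX_g^{\Omega}(q) = \operatorname{Hess}^v g(q)$:

```python
import numpy as np

# Single-fiber model: Omega(u, v) = u^T A v with A = [[0,1],[-1,0]],
# compatible J_E = [[0,-1],[1,0]], so g_E(u, v) = Omega(u, J_E v) = u^T v.
# For quadratic g with Hessian H, X_g^Omega = A grad g, hence DX_g^Omega = A H.
# All matrices here are illustrative, not objects from the paper.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the fiberwise symplectic form
JE = np.array([[0.0, -1.0], [1.0, 0.0]])  # compatible complex structure

rng = np.random.default_rng(1)
S = rng.standard_normal((2, 2))
H = S + S.T                               # an arbitrary symmetric Hessian of g
DXg = A @ H                               # linearization of X_g^Omega

# DX_g^Omega lies in sp(Omega): Omega(DXg u, v) + Omega(u, DXg v) = 0.
assert np.allclose(DXg.T @ A + A @ DXg, 0)

# J_E DX_g^Omega equals the (vertical) Hessian, in particular it is
# symmetric with respect to g_E.
M = JE @ DXg
assert np.allclose(M, H)
assert np.allclose(M, M.T)
```

The sign conventions (which of $\pm A$ defines $\Omega$) affect whether one gets $+H$ or $-H$, but not the symmetry of $J_E\, DX_g^\Omega$.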
\section{Appendix}
\label{sec:appendix}
In this appendix, we prove the contractibility of the set of $Q$-adapted $CR$-almost complex structures,
which was postponed from the proof of Proposition \ref{prop:adapted}.
We first notice that for any $d\lambda$-compatible $CR$-almost complex structure $J$,
$(TQ\cap JTQ)\cap T{\mathcal F}=\{0\}$: This is because for any $v\in (TQ\cap JTQ)\cap T{\mathcal F}$,
\begin{eqnarray*}
|v|^2=d\lambda(v, Jv)=0,
\end{eqnarray*}
since $Jv\in TQ$ and $v\in T{\mathcal F}=\ker \omega_Q$.
Therefore the sum $(TQ\cap JTQ) + T{\mathcal F}$ is direct.
We now give the following lemma.
\begin{lem}\label{lem:J-identify-G} $J$ satisfies
the condition $JTQ\subset TQ+JT{\mathcal N}$ if and only if it satisfies
$TQ =(TQ\cap JTQ)\oplus T{\mathcal F}$.
\end{lem}
\begin{proof}
It is obvious that $TQ=(TQ\cap JTQ)\oplus T{\mathcal F}$ implies that $J$ is $Q$-adapted.
It remains to prove the other direction.
For this, we only need to prove that $TQ \subset (TQ\cap JTQ) + T{\mathcal F}$, by the discussion
preceding the statement of the lemma.
Let $v \in TQ$. By the definition of the adapted condition,
$Jv\in TQ + JT{\mathcal N}$. Therefore we can write
$$
Jv=w+Ju,
$$
for some $w\in TQ$ and $u\in T{\mathcal N}$.
Then it follows that $v=-Jw+u$.
Noting that $Jw\in TQ\cap JTQ$ and $u \in T{\mathcal N} \subset T{\mathcal F}$, we derive $v \in (TQ\cap JTQ) + T{\mathcal F}$ and so
we have finished the proof.
\end{proof}
This lemma shows that any $Q$-adapted $J$ naturally defines a splitting
\begin{equation}\label{eq:split1}
T{\mathcal F} \oplus G_J = TQ, \quad G_J:= TQ \cap JTQ.
\end{equation}
We also note that such $J$ preserves the subbundle $TQ + JT{\mathcal F} \subset TM$ and so defines an
invariant splitting
\begin{equation}\label{eq:split2}
TM = TQ \oplus JT{\mathcal F} \oplus E_J; \quad E_J = (TQ \oplus JT{\mathcal F})^{\perp_{g_J}}.
\end{equation}
Conversely, for given splittings \eqref{eq:split1}, \eqref{eq:split2},
we can always choose a $Q$-adapted $J$ so that $TQ \cap JTQ = G$,
but the choice of such $J$ is not unique.
It is easy to see that the set of such splittings forms a contractible manifold
(see Lemma 4.1 of \cite{oh-park} for a proof). We also note that the 2-form
$d\lambda$ induces nondegenerate (fiberwise) bilinear 2-forms on $G$ and $E$ which we denote by $\omega_G$ and
$\omega_E$. Now we denote by ${\mathcal J}_{G,E}(\lambda;Q)$ the subset of ${\mathcal J}(\lambda;Q)$ consisting of $J \in {\mathcal J}(\lambda;Q)$
that satisfy \eqref{eq:split1}, \eqref{eq:split2}. Then ${\mathcal J}(\lambda;Q)$ decomposes into the union
$$
{\mathcal J}(\lambda;Q) = \bigcup_{G,E}{\mathcal J}_{G,E}(\lambda;Q).
$$
Therefore it is enough to prove that ${\mathcal J}_{G,E}(\lambda;Q)$ is contractible for each fixed $G, \, E$.
We denote each $J: TM \to TM$ as a block $4 \times 4$ matrix in terms of the splitting
$$
TM = T{\mathcal F} \oplus G \oplus JT{\mathcal F} \oplus E.
$$
Then one can easily check that the $Q$-adaptedness of $J$ implies $J$ must have the form
$$
\left(\begin{matrix} 0 & 0 & Id & 0 \\
0 & J_G & 0 & 0 \\
-Id & 0 & 0 & 0 \\
0 & B & 0 & J_E
\end{matrix} \right)
$$
where $J_G:G \to G$ is $\omega_G$-compatible and $J_E:E \to E$ is $\omega_E$-compatible, and $B$ satisfies the relation
$$
BJ_G + J_E B = 0,
$$
which is forced by $J^2 = -id$.
Since each set of such $J_G$'s or of such $J_E$'s is contractible and the set of $B$'s satisfying
this relation for given $J_G$ and $J_E$ forms a linear space, it follows that ${\mathcal J}_{G,E}(\lambda;Q)$ is contractible.
This finishes the proof of contractibility of ${\mathcal J}(\lambda;Q)$.
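As a sanity check of the block form above, the following Python sketch (with all blocks two-dimensional and $J_G = J_E$ the standard complex structure; these are illustrative choices of ours) verifies that a matrix of the displayed shape squares to $-id$ when the off-diagonal block satisfies $BJ_G + J_E B = 0$:

```python
import numpy as np

# Model the splitting TM = TF + G + JTF + E with each block R^2,
# ordered as blocks 0:2, 2:4, 4:6, 6:8.  J_G = J_E = standard J0.
J0 = np.array([[0.0, -1.0], [1.0, 0.0]])
I2 = np.eye(2)

JG, JE = J0, J0
# Any B of the form [[a, b], [b, -a]] anticommutes with J0,
# i.e. satisfies B JG + JE B = 0.  The entries are arbitrary.
B = np.array([[0.7, -0.3], [-0.3, -0.7]])
assert np.allclose(B @ JG + JE @ B, 0)

M = np.zeros((8, 8))
M[0:2, 4:6] = I2        # J T F -> T F    block (the "Id" entry)
M[2:4, 2:4] = JG        # G     -> G
M[4:6, 0:2] = -I2       # T F   -> J T F  block (the "-Id" entry)
M[6:8, 2:4] = B         # G     -> E
M[6:8, 6:8] = JE        # E     -> E

# The block matrix is an almost complex structure: M^2 = -Id.
assert np.allclose(M @ M, -np.eye(8))
```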
\part{Exponential estimates for contact instantons: Morse-Bott case}
\label{part:exp}
In this part, we develop the three-interval method
for any finite $\pi$-energy contact instanton (with vanishing charges) for
any Morse-Bott contact form, and use it to prove $C^\infty$ exponential convergence
to Reeb orbits at each puncture of the domain Riemann surface.
The contents of this part are as follows:
\begin{itemize}
\item In Section \ref{sec:pseudo}, we briefly review the subsequence convergence result
for finite energy contact instantons at the ends.
This is the foundation for applying the three-interval method introduced in Section \ref{sec:3-interval};
\item In Section \ref{sec:3-interval}, an abstract three-interval method framework is presented;
\item In Section \ref{sec:prequantization}, we focus on the prequantized case and use
the three-interval machinery introduced in Section \ref{sec:3-interval} to prove exponential decay.
\begin{itemize}
\item Section \ref{subsec:prepare} is devoted to geometric calculation, and the vertical-horizontal
splitting of a contact instanton equation is obtained;
\item Section \ref{sec:expdecayMB} is the application of the abstract three-interval method to the vertical equation;
\item Sections \ref{subsec:exp-horizontal} and \ref{subsec:centerofmass} are devoted to the application of the three-interval method to the horizontal equation, which is much more subtle than the vertical one.
\end{itemize}
\item In Section \ref{sec:general}, we prove exponential decay for general cases;
\item In Section \ref{sec:asymp-cylinder}, we explain how to apply this method to symplectic
manifolds with asymptotically cylindrical ends.
\end{itemize}
\section{Subsequence convergence on the adapted contact triad $(U_F,\lambda,J)$}
\label{sec:pseudo}
From now on, we involve a $CR$-almost complex structure $J$ on $M$ adapted to $Q$,
which in turn induces a $CR$-almost complex structure on a neighborhood $U_F$
of the zero section of $F$.
Denote by $(U_F,\lambda,J)$ the corresponding adapted contact triad.
Let $(\Sigma,j)$ be a compact Riemann surface and $\dot \Sigma:=\Sigma-\{r_1, \cdots, r_k\}$ be the
punctured open Riemann surface with finitely many punctures. We fix an isothermal metric $h$ on $\dot\Sigma$ such that every puncture $r \in \{r_1, \cdots, r_k\}$
is a cylindrical end.
In this section, we briefly recall the assumptions and the consequent subsequence convergence results from \cite[Section 6]{oh-wang2},
which enable one to restrict the analysis of a contact instanton (or a pseudo-holomorphic curve) to the canonical neighborhood
$(U_F,\lambda,J)$.
\begin{hypo}\label{hypo:basic}Let $w: \dot\Sigma \rightarrow M$
be a contact instanton map. Assume $w$ satisfies
\begin{enumerate}
\item \emph{Finite $\pi$-energy}:
$E^\pi_{(\lambda,J;\dot\Sigma,h)}(w): = \frac{1}{2} \int_{\dot \Sigma} |d^\pi w|^2 < \infty$;
\item \emph{Finite derivative bound}:
$\|dw\|_{C^0(\dot\Sigma)} \leq C < \infty$.
\end{enumerate}
Furthermore, near the puncture $r\in \{r_1, \cdots, r_k\}$ under consideration, we write $w$ as a map with domain $[0, \infty)\times S^1$ without loss of generality, and
we require that the asymptotic action ${\mathcal T}$ satisfies
\begin{enumerate}
\setcounter{enumi}{2}
\item \emph{Non-vanishing asymptotic action}:\\
$
{\mathcal T} := \frac{1}{2}\int_{[0,\infty) \times S^1} |d^\pi w|^2 + \int_{\{0\}\times S^1}(w|_{\{0\}\times S^1})^*\lambda\neq 0
$.
\end{enumerate}
\end{hypo}
\begin{thm}[Subsequence Convergence, Theorem 6.4 and Corollary 6.5 in \cite{oh-wang2}]
\label{thm:subsequence}
Let $w:[0, \infty)\times S^1\to M$ satisfy Hypothesis \ref{hypo:basic}.
Then for any sequence $s_k\to \infty$, there exists a subsequence, still denoted by $s_k$,
and a Reeb orbit $\gamma$ (which may depend on the choice of subsequence) with action ${\mathcal T}$
and charge ${\mathcal Q}$ (defined as ${\mathcal Q}:=\int_{\{0\}\times S^1}((w|_{\{0\}\times S^1})^*\lambda\circ j)$), such that
$$
\lim_{k\to \infty} w(\tau+s_k, t)= \gamma(-{\mathcal Q}\, \tau + {\mathcal T}\, t)
$$
in $C^l(K \times S^1, M)$ sense for any $l$, where $K\subset [0,\infty)$ is an arbitrary compact set.
As a consequence,
\begin{eqnarray*}
&&\lim_{s\to \infty}\left|\pi \frac{\partial w}{\partial\tau}(s+\tau, t)\right|=0, \quad
\lim_{s\to \infty}\left|\pi \frac{\partial w}{\partial t}(s+\tau, t)\right|=0\\
&&\lim_{s\to \infty}\lambda(\frac{\partial w}{\partial\tau})(s+\tau, t)=-{\mathcal Q}, \quad
\lim_{s\to \infty}\lambda(\frac{\partial w}{\partial t})(s+\tau, t)={\mathcal T}
\end{eqnarray*}
and
$$
\lim_{s\to \infty}|\nabla^l dw(s+\tau, t)|=0 \quad \text{for any}\quad l\geq 1.
$$
All the limits are uniform for $(\tau, t)$ on $K\times S^1$ with compact $K\subset [0,\infty)$.
\end{thm}
Together with the isolation of any Morse-Bott submanifold of Reeb orbits, it follows that there exists a uniform constant
$\tau_0>0$ such that the image of $w$ lies in a tubular neighborhood of $Q$ whenever $\tau>\tau_0$.
In other words, for the purpose of studying the asymptotic behaviour at the end, it is enough to restrict ourselves to
contact instanton maps from the half cylinder $[0, \infty)\times S^1$ to the canonical neighborhood $(U_F, \lambda, J)$.
\medskip
Now with the normal form
we developed in Part \ref{part:coordinate}, express
$w$ as $w=(u, s)$
where $u:=\pi_F\circ w:[0, \infty)\times S^1\to Q$ and
$s=(\mu, e)$ is a section of the pull-back bundle $u^*(JT{\mathcal N})\oplus u^*E\to [0, \infty)\times S^1$.
Recalling from Section \ref{sec:coord} the expression
$$
dw = \left(\begin{matrix}du\\
\nabla_{du} s \end{matrix}\right)
=\left(\begin{matrix}du\\
\nabla_{du} \mu\\
\nabla_{du} e \end{matrix}\right),
$$
we reinterpret the convergence of $w$ stated in Theorem \ref{thm:subsequence}
in terms of the coordinate $w = (u,s)=(u, (\mu, e))$.
\begin{cor}\label{cor:convergence-ue} Let $w = (u,s)=(u, (\mu, e))$ satisfy the same assumption as in Theorem \ref{thm:subsequence}. Then for any sequence $s_k\to \infty$, there exists a subsequence, still denoted by $s_k$,
and a Reeb orbit $\gamma$ on $Q$ (which may depend on the choice of subsequence) with action ${\mathcal T}$
and charge ${\mathcal Q}$, such that
$$
\lim_{k\to \infty} u(\tau+s_k, t)= \gamma(-{\mathcal Q}\, \tau + {\mathcal T}\, t)
$$
in $C^l(K \times S^1, M)$ sense for any $l$, where $K\subset [0,\infty)$ is an arbitrary compact set.
Furthermore,
we have
\begin{eqnarray}
\lim_{s\to \infty}\left|\mu(s+\tau, t)\right|=0, &{}& \quad \lim_{s\to \infty}\left|e(s+\tau, t)\right|=0\\
\lim_{s \to \infty} \left|d^{\pi_{\lambda}} u(s+\tau, t)\right|= 0, &{}& \quad
\lim_{s \to \infty} u^*\theta(s+\tau, t) = -{\mathcal Q} d\tau + {\mathcal T}\, dt\\
\lim_{s \to \infty} \left|\nabla_{du} e(s+\tau, t)\right| = 0,&{}&
\end{eqnarray}
and
\begin{eqnarray}
\lim_{s \to \infty} \left|\nabla^k d^{\pi_{\lambda}} u(s+\tau, t)\right| = 0, &{}& \quad
\lim_{s \to \infty} \left|\nabla^k u^*\theta(s+\tau, t)\right| =0\\
\lim_{s \to \infty} \left|\nabla_{du}^k e(s+\tau, t)\right| = 0 &{}&
\end{eqnarray}
for all $k \geq 1$, and all the limits are uniform for $(\tau, t)$ on $K\times S^1$ with compact $K\subset [0,\infty)$.
\end{cor}
In particular, we obtain
$$
\lim_{s \to \infty} du(s+\tau, t) = (-{\mathcal Q}\, d\tau +{\mathcal T}\, dt)\otimes X_\theta
$$
uniformly for $(\tau, t)$ on $K\times S^1$ with compact $K\subset [0,\infty)$, in the $C^\infty$ topology.
\medskip
This uniform convergence ensures all the basic requirements (including
uniform local tameness,
pre-compactness, uniform local coercivity and the locally asymptotically cylindrical
property) for applying the three-interval method
to prove exponential decay of $w$ at the end, which we introduce in detail in the following sections.
\section{Abstract framework of the three-interval method}
\label{sec:3-interval}
In this section, we introduce an abstract framework for the three-interval
method of proving exponential decay, and we will then apply the scheme
to the normal bundle part in Section \ref{sec:expdecayMB}.
This section also extends the scope of its application from the case considered in \cite{oh-wang2}
to the case with an exponentially decaying perturbation term (see Theorem \ref{thm:3-interval}).
We start with the following simple analytic lemma.
\begin{lem}[\cite{mundet-tian} Lemma 9.4]\label{lem:three-interval}
For nonnegative numbers $x_k$, $k=0, 1, \cdots, N$, if
$$
x_k\leq \gamma(x_{k-1}+x_{k+1})
$$
holds for some fixed constant $0<\gamma<\frac{1}{2}$ for all $1\leq k\leq N-1$, then
we have
$$
x_k\leq x_0\xi^{-k}+x_N\xi^{-(N-k)},
$$
for all $k=0, 1, \cdots, N$, where $\xi:=\frac{1+\sqrt{1-4\gamma^2}}{2\gamma}$.
\end{lem}
\begin{rem}\label{rem:three-interval}
\begin{enumerate}
\item If we write $\gamma=\gamma(c):=\frac{1}{e^c+e^{-c}}$ for some $c>0$, then the conclusion
can be written as the exponential form
$$
x_k\leq x_0e^{-ck}+x_Ne^{-c(N-k)}.
$$
\item
For an infinite sequence $x_k$, $k=0, 1, \cdots$, if in addition
$x_k$ is bounded,
then we have
$$
x_k\leq x_0e^{-ck}.
$$
\end{enumerate}
\end{rem}
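The following small computation, recorded here for the reader's convenience, verifies that the substitution $\gamma = \gamma(c)$ of the remark indeed turns the constant $\xi$ of Lemma \ref{lem:three-interval} into $e^c$:

```latex
% With \gamma(c) = 1/(e^c+e^{-c}) = 1/(2\cosh c) for c > 0, we have
% 4\gamma^2 = 1/\cosh^2 c, so that 1 - 4\gamma^2 = \tanh^2 c. Hence
\[
\xi \;=\; \frac{1+\sqrt{1-4\gamma^2}}{2\gamma}
    \;=\; \bigl(1+\tanh c\bigr)\cosh c
    \;=\; \cosh c + \sinh c
    \;=\; e^c,
\]
% which converts the bound x_k \le x_0\xi^{-k} + x_N\xi^{-(N-k)} of the lemma
% into the exponential form stated in the remark.
```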
The analysis is carried out on a Banach bundle ${\mathcal E}\to [0, \infty)$ modelled on a Banach space ${\mathbb E}$, by which we mean that every fiber ${\mathbb E}_\tau$ is identified with the Banach space ${\mathbb E}$ smoothly in $\tau$. We omit this identification when there is no danger of confusion.
First we emphasize that the base $[0,\infty)$ is non-compact and,
for any positive number $r$, carries the natural translation map
$\sigma_r: \tau \mapsto \tau + r$.
We introduce the following definition,
which enables us to study sections in a local trivialization after passing to a subsequence.
\begin{defn}\label{def:unif-local-tame}
We say a Banach bundle ${\mathcal E}$ over $[0, \infty)$ modelled on a Banach space ${\mathbb E}$ is \emph{uniformly locally tame},
if for any bounded interval $[a,b] \subset [0,\infty)$ and any sequence $s_k \to \infty$,
there exists a subsequence, still denoted by $s_k$, a sufficiently large $k_0 > 0$ and trivializations
$$
\Phi_\cdot: \sigma^*_{s_\cdot}{\mathcal E}|_{[a+s_\cdot, b+s_\cdot]} \to [a, b] \times {\mathbb E}
$$
such that for any $k\geq 0$ the bundle map
$$
\Phi_{k_0+k} \circ \Phi_{k_0}^{-1}: [a,b] \times {\mathbb E} \to [a,b] \times {\mathbb E}
$$
satisfies
\begin{equation}\label{eq:locallytame}
\|\nabla_\tau^l(\Phi_{k_0+k} \circ \Phi_{k_0}^{-1})\|_{{\mathcal L}({\mathbb E},{\mathbb E})} \leq C_l<\infty
\end{equation}
for constants $C_l = C_l(|b-a|)$ depending only on $|b-a|$, $l=0, 1, \cdots$.
We call such a sequence $\{\Phi_k\}$
a \emph{tame family of trivializations} (relative to the sequence $\{s_k\}$).
\end{defn}
\begin{defn} Suppose ${\mathcal E}$ is uniformly locally tame. We say a connection $\nabla$ on
${\mathcal E}$ is \emph{uniformly locally tame} if the push-forward $(\Phi_k)_*\nabla_\tau$ can be written as
$$
(\Phi_k)_*\nabla_\tau = \frac{d}{d\tau} + \Gamma_k(\tau)
$$
for any tame family $\{\Phi_k\}$ so that
$\sup_{\tau \in [a,b]}\|\Gamma_k(\tau)\|_{{\mathcal L}({\mathbb E},{\mathbb E})} < C$ for some $C> 0$ independent of $k$.
\end{defn}
\begin{defn}
Consider a pair ${\mathcal E}_1 \supset {\mathcal E}_2$ of uniformly locally tame bundles, and a bundle map
$B: {\mathcal E}_2 \to {\mathcal E}_1$.
We say $B$ is \emph{uniformly locally bounded}, if for any compact set $[a,b] \subset [0,\infty)$ and
any sequence $s_k \to \infty$, there exists a subsequence, still denoted by $s_k$, a sufficiently large $k_0 > 0$ and tame families
$\Phi_{1,k}$, $\Phi_{2,k}$ such that for any $k\geq 0$
$$
\sup_{\tau \in [a,b]} \|\Phi_{1,k_0+k} \circ B \circ \Phi_{2,k_0}^{-1}\|_{{\mathcal L}({\mathbb E}_2, {\mathbb E}_1)} \leq C
$$
where $C$ is independent of $k$.
\end{defn}
For a given locally tame pair ${\mathcal E}_2 \subset {\mathcal E}_1$, we denote by ${\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$ the set
of bundle homomorphisms which are uniformly locally bounded.
\begin{lem}
If ${\mathcal E}_1, \, {\mathcal E}_2$ are uniformly locally tame, then so is ${\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$.
\end{lem}
\begin{defn}\label{defn:precompact} Let ${\mathcal E}_1 \supset {\mathcal E}_2$ be as above and let $B \in {\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$.
We say $B$ is \emph{pre-compact} on $[0,\infty)$ if for any locally tame families $\Phi_1, \Phi_2$,
there exists a further subsequence such that
$
\Phi_{1, k_0+k} \circ B \circ \Phi_{2, k_0}^{-1}
$
converges to some $B_{\Phi_1\Phi_2;\infty}\in {\mathcal L}(\Gamma([a, b]\times {\mathbb E}_2), \Gamma([a, b]\times {\mathbb E}_1))$.
\end{defn}
Assume $B$ is a bundle map from ${\mathcal E}_2$ to ${\mathcal E}_1$ which is uniformly locally bounded,
where ${\mathcal E}_1 \supset {\mathcal E}_2$ are uniformly locally tame with tame families
$\Phi_{1,k}$, $\Phi_{2,k}$. We can write
$$
\Phi_{1,k_0+k} \circ (\nabla_\tau + B) \circ \Phi_{2,k_0}^{-1} = \frac{\partial}{\partial \tau} + B_{\Phi_1\Phi_2, k}
$$
as a linear map from $\Gamma([a,b]\times {\mathbb E}_2)$ to $\Gamma([a,b]\times {\mathbb E}_1)$, since $\nabla$ is uniformly
locally tame.
Next we introduce the following notion of coerciveness.
\begin{defn}\label{defn:localcoercive}
Let ${\mathcal E}_1, \, {\mathcal E}_2$ be as above and $B: {\mathcal E}_2 \to {\mathcal E}_1$ be a uniformly locally bounded bundle map.
We say the operator
$$
\nabla_\tau + B: \Gamma({\mathcal E}_2) \to \Gamma({\mathcal E}_1)
$$
is \emph{uniformly locally coercive},
if for any given bounded sequence $\zeta_k \in \Gamma({\mathcal E}_2)$ satisfying
$$
\nabla_\tau \zeta_k + B \zeta_k = L_k
$$
with $\|L_k\|_{{\mathcal E}_1}$ bounded on a given compact subset $K \subset [0,\infty)$,
there exists a subsequence, still denoted by $\zeta_k$, which converges uniformly in ${\mathcal E}_2$.
\end{defn}
\begin{rem}
Let $E \to [0,\infty)\times S$ be a (finite dimensional) vector bundle and denote by
$W^{k,2}(E)$ the set of $W^{k,2}$-sections of $E$ and $L^2(E)$ the set of $L^2$-sections. Let
$D: L^2(E)\to L^2(E)$ be a first order elliptic operator with cylindrical end.
Denote by $i_\tau: S \to [0,\infty)\times S$ the natural inclusion map. Then
there is a natural pair of Banach bundles ${\mathcal E}_2 \subset {\mathcal E}_1$ over $[0, \infty)$ associated to $E$, whose fiber
is given by ${\mathcal E}_{1,\tau}=L^2(i_\tau^*E)$, ${\mathcal E}_{2,\tau} = W^{1,2}(i_\tau^*E)$.
Furthermore, ${\mathcal E}_i$ for $i=1, \, 2$ is uniformly locally tame if $S$ is a compact manifold (without boundary).
Then $D$ is uniformly locally coercive, which follows from the elliptic bootstrapping and the Sobolev embedding.
\end{rem}
Finally we introduce the notion of asymptotically cylindrical operator $B$.
\begin{defn}\label{defn:asympcylinderical} We call $B$ \emph{locally asymptotically cylindrical} if the following holds:
Any subsequence limit $B_{\Phi_1\Phi_2;\infty}$ appearing in Definition \ref{defn:precompact}
is a \emph{constant} section,
and $\|B_{\Phi_1\Phi_2, k}-\Phi_{1,k_0+k} \circ B \circ \Phi_{2,k_0}^{-1}\|_{{\mathcal L}({\mathbb E}_i, {\mathbb E}_i)}$ converges to zero
as $k\to \infty$ for both $i =1, 2$.
\end{defn}
Now we specialize to the case of Hilbert bundles ${\mathcal E}_2 \subset {\mathcal E}_1$ over $[0,\infty)$ and assume that
${\mathcal E}_1$ carries a connection and denote by $\nabla_\tau$ the associated covariant
derivative. We assume that $\nabla_\tau$ is uniformly locally tame.
Denote by $L^2([a,b];{\mathcal E}_i)$ the space of $L^2$-sections $\zeta$ of ${\mathcal E}_i$ over
$[a,b]$, i.e., those satisfying
$$
\int_a^b |\zeta(\tau)|_{{\mathcal E}_i}^2\, d\tau < \infty,
$$
where $|\zeta(\tau)|_{{\mathcal E}_i}$ is the norm with respect to the given Hilbert bundle structure of ${\mathcal E}_i$.
\begin{thm}[Three-Interval Method]\label{thm:3-interval}
Assume ${\mathcal E}_2\subset{\mathcal E}_1$ is a pair of Hilbert bundles over $[0, \infty)$ with fibers ${\mathbb E}_2$ and ${\mathbb E}_1$,
and ${\mathbb E}_2\subset {\mathbb E}_1$ is dense.
Let $B$ be a section of the associated bundle ${\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$ and
$L \in \Gamma({\mathcal E}_1)$.
We assume the following:
\begin{enumerate}
\item There exists an Ehresmann connection which preserves the Hilbert structure;
\item ${\mathcal E}_i$ for $i=1, \, 2$ are uniformly locally tame;
\item $B$ is pre-compact, uniformly locally coercive and locally asymptotically cylindrical;
\item Every subsequence limit $B_{\infty}$ is a self-adjoint unbounded operator on
${\mathbb E}_1$ with its domain ${\mathbb E}_2$, and satisfies $\ker B_{\infty} = \{0\}$;
\item There exists some positive number $\delta\leq \delta_0$
such that any subsequence limiting operator $B_{\infty}$
of the above mentioned pre-compact family has its first non-zero eigenvalue satisfying $|\lambda_1| \geq \delta$;
\item
There exists some $R_0 > 0$, $C_0 > 0$ and $\delta_0 > 0$ such that
$$
|L(\tau)|_{{\mathcal E}_{1,\tau}} \leq C_0 e^{-\delta_0 \tau}
$$
for all $\tau \geq R_0$.
\end{enumerate}
Then for any (smooth) section $\zeta \in \Gamma({\mathcal E}_2)$ with
$\sup_{\tau \in [R_0,\infty)} \|\zeta(\tau,\cdot)\|_{{\mathcal E}_{2,\tau}} < \infty$
that satisfies the equation
$$
\nabla_\tau \zeta + B(\tau) \zeta(\tau) = L(\tau),
$$
there exist some constants $R$, $C>0$ such that for any $\tau>R$,
$$
|\zeta(\tau)|_{{\mathcal E}_{1,\tau}}\leq C e^{-\delta \tau }.
$$
\end{thm}
\begin{proof}
We divide $[0, \infty)$ into the union of unit intervals $I_k:=[k, k+1]$ for
$k=0, 1, \cdots$.
By Lemma \ref{lem:three-interval} and Remark \ref{rem:three-interval}, it is enough to prove that
\begin{equation}
\|\zeta\|^2_{L^2(I_k;{\mathcal E}_1)}\leq \gamma(2\delta)(\|\zeta\|^2_{L^2(I_{k-1};{\mathcal E}_1)}+\|\zeta\|^2_{L^2(I_{k+1};{\mathcal E}_1)}),\label{eq:3interval-ineq}
\end{equation}
for every $k=1, 2, \cdots$ for some choice of $0 < \delta < 1$.
For the simplicity of notation and also because $L^2([a,b];{\mathcal E}_2)$ or $L^\infty([a,b];{\mathcal E}_2)$ will not appear
in the discussion below, we will just denote
$$
L^2([a,b]) := L^2([a,b];{\mathcal E}_1), \quad L^\infty([a,b]): = L^\infty([a,b];{\mathcal E}_1)
$$
for any given interval $[a,b]$.
Now if the inequality \eqref{eq:3interval-ineq} does not hold for every $k$,
we collect all the $k$'s that reverse the direction of the inequality.
If such $k$'s are finitely many, in other words, if \eqref{eq:3interval-ineq} holds after some large $k_0$,
then we will still get the exponential estimate
as the theorem claims.
Otherwise, there are infinitely many such three-intervals, which we enumerate
by $I^{l_k}_{I}:=[l_k, l_k+1], \,I^{l_k}_{II}:=[l_k+1, l_k+2], \,I^{l_k}_{III}:=[l_k+2, l_k+3]$, $k=1, 2, \cdots$, such that
\begin{equation}
\| \zeta\|^2_{L^2(I^{l_k}_{II})}>
\gamma(2\delta)(\| \zeta\|^2_{L^2(I^{l_k}_{I})}+\|\zeta\|^2_{L^2(I^{l_k}_{III})}).\label{eq:against-3interval}
\end{equation}
Before we deal with this case, we first remark that
this hypothesis in particular implies $ \zeta \not \equiv 0$ on
$I^{l_k}:=I^{l_k}_{I}\cup I^{l_k}_{II}\cup I^{l_k}_{III}$, i.e.,
$\|\zeta\|_{L^\infty(I^{l_k})}\neq 0$.
Now if there exists some uniform constant $C_1>0$ such that
on each such three-interval
\begin{equation}
\|\zeta\|_{L^\infty(I^{l_k})}<C_1e^{-\delta l_k}, \label{eq:expas-zeta}
\end{equation}
then it follows that
\begin{equation}
\|\zeta\|_{L^2([l_k+1, l_k+2])} \leq C_1e^{-\delta l_k}=C e^{-\delta(l_k+1)}.\label{eq:expest-zeta}
\end{equation}
Here $C=C_1e^{\delta}$ is a constant depending only on $\delta$, which will be determined at the end.
We will abuse notation and not distinguish uniform constants but use $C$ to denote them from now on.
Recall that, under our assumption,
we have infinitely many three-intervals which satisfy \eqref{eq:against-3interval},
and \emph{every one} of them satisfies the exponential inequality \eqref{eq:expas-zeta}.
If the chain of such intervals never breaks after some point, then we already obtain our conclusion
from \eqref{eq:expest-zeta}, since every interval is then a middle interval (of the form $[l_k+1, l_k+2]$)
of such a three-interval.
Otherwise the set of $k$'s that satisfy \eqref{eq:3interval-ineq} form a sequence of clusters,
$$
I^{l_k+1}, I^{l_k+2}, \cdots, I^{l_k+N_k}
$$
for the sequence $l_1, \, l_2, \cdots, l_k, \cdots $ such that $l_{k+1} > l_k +N_k$ and \eqref{eq:3interval-ineq}
holds on each element contained in one of the clusters.
Now we notice that each cluster has the leftmost interval $[l_k+1, l_k+2]$ as the middle interval of $I^{l_k}$,
and the rightmost interval $[l_k+N+2, l_k+N+3]$ as the middle interval of $I^{l_{k+1}}$.
(See Figure \ref{fig:3-interval}.)
\begin{figure}[h]
\setlength{\unitlength}{0.37in}
\centering
\begin{picture}(32,6)
\put(2,5){\line(1,0){9}}
\put(2,5){\line(0,1){0.1}}
\put(3,5){\line(0,1){0.1}}
\put(4,5){\line(0,1){0.1}}
\put(5,5){\line(0,1){0.1}}
\put(8,5){\line(0,1){0.1}}
\put(9,5){\line(0,1){0.1}}
\put(10,5){\line(0,1){0.1}}
\put(11,5){\line(0,1){0.1}}
\put(1,3){\textcolor{red}{\line(1,0){3}}}
\put(9,3){\textcolor{red}{\line(1,0){3}}}
\put(1,3){\textcolor{red}{\line(0,1){0.1}}}
\put(2,3){\textcolor{red}{\line(0,1){0.1}}}
\put(3,3){\textcolor{red}{\line(0,1){0.1}}}
\put(4,3){\textcolor{red}{\line(0,1){0.1}}}
\put(9,3){\textcolor{red}{\line(0,1){0.1}}}
\put(10,3){\textcolor{red}{\line(0,1){0.1}}}
\put(11,3){\textcolor{red}{\line(0,1){0.1}}}
\put(12,3){\textcolor{red}{\line(0,1){0.1}}}
\put(2.5, 4.8){\vector(0,-1){1.3}}
\put(10.5, 4.8){\vector(0,-1){1.3}}
\put(2, 5.2){$\overbrace{\qquad\qquad\qquad\qquad}^{I_{l_k+1}}$}
\put(8, 5.2){$\overbrace{\qquad\qquad\qquad\qquad}^{I_{l_k+N}}$}
\put(1, 2.8){$\underbrace{\qquad\qquad\qquad\qquad}_{I^{l_k}}$}
\put(9, 2.8){$\underbrace{\qquad\qquad\qquad\qquad}_{I_{l_{k+1}}=I_{l_k+N+1}}$}
\put(7.5, 4.7){$l_k+N$}
\put(1.5, 4.7){$l_k+1$}
\put(0.9, 3.2){$l_k$}
\put(8.8, 3.2){$l_{k+1}(=l_k+N+1)$}
\put(2,5){\circle*{0.07}}
\put(1,3){\textcolor{red}{\circle*{0.07}}}
\put(8,5){\circle*{0.07}}
\put(9,3){\textcolor{red}{\circle*{0.07}}}
\put(6,5.3){$\cdots\cdots$}
\put(2,5.02){\line(1,0){1}}
\put(10,5.02){\line(1,0){1}}
\put(2,3.02){\textcolor{red}{\line(1,0){1}}}
\put(10,3.02){\textcolor{red}{\line(1,0){1}}}
\put(2,5.03){\line(1,0){1}}
\put(10,5.03){\line(1,0){1}}
\put(2,3.03){\textcolor{red}{\line(1,0){1}}}
\put(10,3.03){\textcolor{red}{\line(1,0){1}}}
\put(2,1){\line(1,0){1}}
\put(3.5,1){\text{denotes the unit intervals that satisfy \eqref{eq:3interval-ineq}}}
\put(2,1){\line(0,1){0.1}}
\put(3,1){\line(0,1){0.1}}
\put(2,0){\textcolor{red}{\line(1,0){1}}}
\put(3.5,0){\text{denotes the unit intervals that satisfy \eqref{eq:against-3interval} and \eqref{eq:expas-zeta}}}
\put(2,0){\textcolor{red}{\line(0,1){0.1}}}
\put(3,0){\textcolor{red}{\line(0,1){0.1}}}
\end{picture}
\caption{}
\label{fig:3-interval}
\end{figure}
Then from \eqref{eq:expest-zeta}, we derive
\begin{eqnarray*}
\|\zeta\|_{L^2([l_k+1, l_k+2])} &\leq& Ce^{-\delta l_k}, \\
\|\zeta\|_{L^2([l_k+N+2, l_k+N+3])} &\leq& Ce^{-\delta l_{k+1}}=Ce^{-\delta (l_k+N+1)}.
\end{eqnarray*}
Combining them with Lemma \ref{lem:three-interval}, we get the following estimate for $l_k+1\leq l\leq l_k+N+2$:
\begin{eqnarray*}
&{}&\|\zeta\|_{L^2([l, l+1])}\\
&\leq&\|\zeta\|_{L^2([l_k+1, l_k+2])} e^{-\delta (l-(l_k+1))}+\|\zeta\|_{L^2([l_k+N+2, l_k+N+3])}e^{-\delta (l_k+N+2-l)}\\
&\leq& Ce^{-\delta l_k}e^{-\delta (l-(l_k+1))}+Ce^{-\delta (l_k+N+1)}e^{-\delta (l_k+N+2-l)}\\
&=&Ce^{\delta}(e^{-\delta l}+e^{-\delta(2l_k+2N+4-l)})
\leq (2Ce^{\delta})e^{-\delta l}.
\end{eqnarray*}
Thus for this case, we have exponential decay with the presumed rate $\delta$ as claimed in the theorem.
Now if there is no such uniform $C=C_1$ such that \eqref{eq:expas-zeta} holds,
then we can find a sequence of constants $C_k\to \infty$
and a subsequence of such three-intervals $\{I^{l_k}\}$ (still use $l_k$ to denote them) such that
\begin{equation}
\|\zeta\|_{L^\infty(I^{l_k})}\geq C_ke^{-\delta l_k}.\label{eq:decay-fail-zeta}
\end{equation}
We can further choose a subsequence (still denoted by $l_k$)
so that $l_k+3<l_{k+1}$, i.e., the intervals do not intersect one another.
We translate the sections
$
\zeta_k:=\zeta|_{[l_k, l_k+3]}
$
to the sections ${\widetilde \zeta}_k$ defined on $[0,3]$ by defining
$$
{\widetilde \zeta}_k(\tau, \cdot):=\zeta(\tau + l_k, \cdot).
$$
Then \eqref{eq:decay-fail-zeta} becomes
\begin{equation}
\|{\widetilde\zeta}_k\|_{L^\infty([0,3])}\geq C_ke^{-\delta l_k}.\label{eq:decay-fail-zetak}
\end{equation}
Denote $\widetilde L_k(\tau, t)=L(\tau+l_k, t)$, then
\begin{equation}
|\widetilde L_k(\tau, t)|<Ce^{-\delta l_k}e^{-\delta\tau}\leq Ce^{-\delta l_k}\label{eq:Lk}
\end{equation}
for $\tau \geq 0$.
Now ${\widetilde \zeta}_k$ satisfies the equation
\begin{equation}
\nabla_\tau \widetilde \zeta_k + B(\tau+l_k,\cdot) \widetilde \zeta_k = \widetilde{L}_k(\tau, t).\label{eq:uktilde-zeta}
\end{equation}
We now rescale \eqref{eq:uktilde-zeta} by dividing it by $\|{\widetilde\zeta}_k\|_{L^\infty([0,3])}$, which cannot vanish
by the hypothesis as we remarked below \eqref{eq:against-3interval}. Consider the rescaled sequence
$$
\overline \zeta_k:={\widetilde\zeta}_k/\|{\widetilde\zeta}_k\|_{L^\infty([0,3])}.
$$
We have now
\begin{eqnarray}
\|\overline \zeta_k\|_{{L^\infty([0,3])}}&=&1\nonumber\\
\nabla_\tau\overline\zeta_k+ B(\tau+l_k,t) \overline\zeta_k
&=&\frac{\widetilde{L}_k}{\|{\widetilde\zeta}_k\|_{L^\infty([0,3] )}}\label{eq:nablabarzeta}\\
\|\overline \zeta_k\|^2_{L^2([1,2] )}&\geq&\gamma(2\delta)(\|\overline \zeta_k\|^2_{L^2([0,1] )}+\|\overline \zeta_k\|^2_{L^2([2,3] )}).\nonumber
\end{eqnarray}
From \eqref{eq:decay-fail-zetak} and \eqref{eq:Lk}, we get
\begin{eqnarray*}
\frac{\|\widetilde{L}_k\|_{L^\infty([0, 3])}}{\|\widetilde\zeta_k\|_{L^\infty([0, 3] )}} \leq \frac{C}{C_k},
\end{eqnarray*}
and then, by our assumption that $C_k\to \infty$,
we conclude that the right hand side of \eqref{eq:nablabarzeta} converges to zero as $k\to \infty$.
Since $B$ is assumed to be pre-compact, we
get a limiting operator $B_{\infty}$ after taking a subsequence (in a trivialization).
On the other hand, since $B$ is uniformly locally coercive,
there exists $\overline\zeta_\infty$ such that
$\overline\zeta_k \to
\overline\zeta _\infty$ uniformly in ${\mathcal E}_2$, and $\overline\zeta _\infty$ satisfies
\begin{equation}\label{eq:xibarinfty-zeta}
\nabla_\tau \overline\zeta_\infty+ B_\infty \overline\zeta_\infty=0 \quad \text{on $[0,3]$},
\end{equation}
and
\begin{equation}\label{eq:xibarinfty-ineq-zeta}
\|\overline \zeta_\infty\|^2_{L^2([1,2] )}\geq\gamma(2\delta)(\|\overline \zeta_\infty\|^2_{L^2([0,1] )}
+\|\overline \zeta_\infty\|^2_{L^2([2,3] )}).
\end{equation}
Since $\|\overline\zeta_\infty\|_{L^\infty([0,3])} = 1$,
$\overline\zeta_\infty \not \equiv 0$. Since $B_\infty$ is assumed to be an
(unbounded) self-adjoint operator on ${\mathbb E}_1$ with domain ${\mathbb E}_2$,
we may take an orthonormal eigen-basis $\{e_i\}$ of ${\mathbb E}_1$ with respect to $B_\infty$.
Considering the eigen-function expansion of
$\overline \zeta_\infty(\tau,\cdot)$, we write
$$
\overline \zeta_\infty(\tau) = \sum_{i=1}^\infty a_i(\tau)\, e_i
$$
for each $\tau \in [0,3]$, where $e_i$ are the eigen-functions associated to the eigenvalue $\lambda_i$ with
$$
0 < \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_i \leq \cdots \to \infty.
$$
By plugging $\overline\zeta_\infty$ into \eqref{eq:xibarinfty-zeta}, we derive
$$
a_i'(\tau)+\lambda_ia_i(\tau)=0, \quad i=1, 2, \cdots
$$
and it follows that
$$
a_i(\tau)=c_ie^{-\lambda_i\tau}, \quad i=1, 2, \cdots
$$
and hence
$$
\|a_i\|^2_{L^2([1,2])}=\gamma(2\lambda_i)(\|a_i\|^2_{L^2([0,1])}+\|a_i\|^2_{L^2([2,3])}).
$$
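This identity follows from the elementary integration below, recorded here for the reader's convenience; for $a(\tau) = c\, e^{-\lambda \tau}$ with $\lambda > 0$,

```latex
% Each unit-interval L^2-norm of a(\tau) = c e^{-\lambda\tau} is
\[
\|a\|^2_{L^2([k,k+1])} \;=\; c^2 \int_k^{k+1} e^{-2\lambda\tau}\, d\tau
  \;=\; c^2\, \frac{1-e^{-2\lambda}}{2\lambda}\, e^{-2\lambda k},
\]
% so the three-interval ratio is
\[
\frac{\|a\|^2_{L^2([1,2])}}{\|a\|^2_{L^2([0,1])}+\|a\|^2_{L^2([2,3])}}
  \;=\; \frac{e^{-2\lambda}}{1+e^{-4\lambda}}
  \;=\; \frac{1}{e^{2\lambda}+e^{-2\lambda}}
  \;=\; \gamma(2\lambda).
\]
```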
Notice that
\begin{eqnarray*}
\|\overline\zeta_\infty\|^2_{L^2([k, k+1])}&=&\int_{[k,k+1]}|\overline\zeta_\infty(\tau)|^2_{{\mathbb E}_1}\,d\tau\\
&=&\int_{[k,k+1]}\sum_i |a_i(\tau)|^2 d\tau\\
&=&\sum_i\|a_i\|^2_{L^2([k,k+1])},
\end{eqnarray*}
and by the decreasing property of $\gamma$, we get
$$
\|\overline \zeta_\infty\|^2_{L^2([1,2])}< \gamma(2\delta) (\|\overline \zeta_\infty\|^2_{L^2([0,1])}+\|\overline \zeta_\infty\|^2_{L^2([2,3])}).
$$
Since $\overline\zeta_\infty\not \equiv 0$, this contradicts \eqref{eq:xibarinfty-ineq-zeta} if we choose $0 < \delta < \lambda_1$ at the beginning. This finishes the proof.
\end{proof}
\section{Exponential decay: prequantizable clean submanifold case}
\label{sec:prequantization}
To make the main arguments transparent in the scheme of our
exponential estimates, we start with the prequantizable case, i.e.,
the case without ${\mathcal N}$ in which the normal form contains $E$ only.
The general case will be dealt with in the next section.
We put the basic hypothesis that
\begin{equation}\label{eq:Hausdorff}
|e(\tau,t)| < \delta
\end{equation}
for all $\tau \geq \tau_0$ in our further study,
where $\delta$ is given as in Proposition \ref{lem:deltaforE}.
From Corollary \ref{cor:convergence-ue} and the remark after it,
we can locally work with everything in a neighborhood of zero section in the
normal form $(U_E, f\lambda_E, J)$.
\subsection{Computational preparation}
\label{subsec:prepare}
For a smooth function $h$, we can express its gradient vector field $\opname{grad} h$
with respect to the metric $g_{(\lambda_E,J_0)} = d\lambda_E(\cdot, J_0\cdot ) + \lambda_E \otimes \lambda_E$
in terms of the $\lambda_E$-contact Hamiltonian vector field $X^{d\lambda_E}_h$ and
the Reeb vector field $X_E$ as
\begin{equation}\label{eq:gradh}
\opname{grad} h = -J_0 X^{d\lambda_E}_h + X_E[h]\, X_E.
\end{equation}
Note the first term
$-J_0 X^{d\lambda_E}_h=:\opname{grad} h^\pi$ is the $\pi_{\lambda_E}$-component of $\opname{grad} h$.
Consider the vector field $Y$ along $u$ given by
$
Y(\tau,t) := \nabla^\pi_\tau e
$
where $w = (u,e)$ in the coordinates defined in section \ref{sec:thickening}.
The vector field $e_\tau = e(\tau,t)$
as a vector field along $u(\tau,t)$ is nothing but
the map $(\tau,t) \mapsto I_{w(\tau,t);u(\tau,t)}(\vec R(w(\tau,t)))$ as a section of $u^*E$. In particular
$$
e(\infty,t) = I_{w(\infty,t);u(\infty,t)}(\vec R(w(\infty,t))) = I_{z(t);x(Tt)}(\vec R(o_{x(T t)})) = o_{x(T t)}.
$$
Obviously, $I_{w(\tau,t);u(\tau,t)}(\vec R(w(\tau,t)))$ is pointwise perpendicular to $o_E \cong Q$.
In particular,
\begin{equation}\label{eq:perpendicular}
(\Pi_{x(T\cdot)}^{u_\tau})^{-1} e_\tau \in (\ker D\Upsilon(z))^\perp
\end{equation}
where $e_\tau$ is the vector field along the loop $u_\tau \subset o_E$ and we regard
$(\Pi_{x(T\cdot)}^{u_\tau})^{-1} e_\tau$ as a vector field along $z = (x(T\cdot),o_{x(T\cdot)})$.
For further detailed computations, one needs to decompose the contact instanton map equation
\begin{equation}\label{eq:contact-instanton-E}
{\overline \partial}_J^{f\lambda_E} w = 0, \quad d(w^*(f\, \lambda_E) \circ j) = 0.
\end{equation}
The second equation does not depend on the choice of endomorphisms $J$ and becomes
\begin{equation}
d(w^* \lambda_E\circ j) = - dg \wedge (w^*\lambda_E\circ j),
\end{equation}
which is equivalent to
\begin{equation}\label{eq:dw*flambda}
d(u^*\theta \circ j + \Omega(e, \nabla^E_{du\circ j} e))
= - dg \wedge (u^* \theta \circ j + \Omega(e, \nabla^E_{du\circ j} e)).
\end{equation}
On the other hand, by the formula \eqref{eq:xi-projection}, the first equation ${\overline \partial}_J^{f\lambda_E} w = 0$ becomes
\begin{equation}\label{eq:delbarwflambda}
{\overline \partial}^{\pi_{\lambda_E}}_{J_0} w = (w^*\lambda_E\, X_{dg}^{\pi_{\lambda_E}})^{(0,1)} + (J - J_0) d^{\pi_{\lambda_E}} w
\end{equation}
where $(w^*\lambda_E\, X_{dg}^{\pi_{\lambda_E}})^{(0,1)}$ is the $(0,1)$-part of the one-form $w^*\lambda_E\, X_{dg}^{\pi_{\lambda_E}}$
with respect to $J_0$.
In terms of the coordinates, the equation can be re-written as
\begin{eqnarray*}
&{}& \left(\begin{matrix} {\overline \partial}^{\pi_\theta} u - \left(\Omega(\vec R(u,e),\nabla^E_{du} e)\, X_E(u,e)\right)^{(0,1)}\\
(\nabla^E_{du} e)^{(0,1)} \end{matrix}\right)\\
& = & \left(\begin{matrix}
\left( \left(u^*\theta + \Omega(\vec R(u,e),\nabla^E_{du} e)\right) d\pi_E((X_{dg}^{\pi_{\lambda_E}})^h)\right)^{(0,1)} \\
\left( \left(u^*\theta + \Omega(\vec R(u,e),\nabla^E_{du} e)\right) X_g^\Omega(u,e)\right)^{(0,1)}\\
\end{matrix}\right) + (J - J_0) d^{\pi_{\lambda_E}} w.
\end{eqnarray*}
Here $\left(\Omega(\vec R(u,e),\nabla^E_{du} e)\, X_E(u,e)\right)^{(0,1)}$ is the $(0,1)$-part with respect to
$J_Q$ and $(\nabla^E_{du} e)^{(0,1)}$ is the $(0,1)$-part with respect to $J_E$.
From this, we have derived
\begin{lem}\label{lem:eq-in-(u,e)}In coordinates $w = (u,e)$, \eqref{eq:contact-instanton-E} is equivalent to
\begin{eqnarray}
\nabla''_{du} e & = & \left(w^*\lambda_E\, X_g^\Omega(u,e)\right)^{(0,1)} + I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)\label{eq:CR-e} \\
{\overline \partial}^\pi u & = & \left(w^*\lambda_E\, (d\pi_E(X_{dg}^{\pi_{\lambda_E}})^h)\right)^{(0,1)}
+ \left(\Omega(e, \nabla^E_{du} e)\, X_\theta(u(\tau,t))^h\right)^{(0,1)}\nonumber\\
&{}& + d\pi_E\left((J - J_0) d^{\pi_{\lambda_E}} w\right)^h
\label{eq:CR-uxi}
\end{eqnarray}
and
\begin{equation}\label{eq:CR-uReeb}
d(w^*\lambda_E \circ j) = -dg \wedge w^*\lambda_E \circ j
\end{equation}
with the insertions of
$$
w^*\lambda_E = u^*\theta + \Omega(e,\nabla^E_{du}e).
$$
\end{lem}
Note that with insertion of \eqref{eq:CR-uReeb}, we obtain
\begin{eqnarray}\label{eq:delbarpiu=}
{\overline \partial}^\pi u
& = & \left(u^*\theta\, d\pi_E((X_{dg}^{\pi_{\lambda_E}})^h)\right)^{(0,1)}
+ \left(\Omega(e, \nabla^E_{du} e)\, (X_\theta(u(\tau,t))+ d\pi_E(X_{dg}^{\pi_{\lambda_E}}))^h\right)^{(0,1)}\, \nonumber \\
&{}& + d\pi_E\left((J - J_0) d^{\pi_{\lambda_E}} w\right)^h.
\end{eqnarray}
Now let $w=(u,e)$ be a contact instanton in terms of the decomposition as above.
\begin{lem}\label{lem:J-J0} Let $e$ be an arbitrary section over a
smooth map $u:\Sigma \to Q$. Then
\begin{eqnarray}\label{eq:J-J0}
I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)& = & L_1(u,e)(e,(d^\pi u,\nabla e))\\
d\pi_E\left(((J - J_0) d^{\pi_{\lambda_E}} w)^h\right) & = & L_2(u,e)(e,(d^\pi u,\nabla e))
\end{eqnarray}
where $L_1(u,e)$ is a $(u,e)$-dependent bilinear map with values in $\Omega^0(u^*E)$ and
$L_2(u,e)$ is a bilinear map with values in $\Omega^0(u^*TQ)$. They also satisfy
\begin{equation}\label{eq:|J-J0|}
|L_i(u,e)| = O(1).
\end{equation}
\end{lem}
An immediate corollary of this lemma is
\begin{cor}\label{lem:|J-J0|}
\begin{eqnarray*}
|I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)| & \leq & O(1)|e|(|d^\pi u| + |\nabla e|) \\
\left|\nabla\left(I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)\right)\right| & \leq &
O(1)((|du| + |\nabla e|)^2|e| \\
&{}& \quad + |\nabla e|(|du| + |\nabla e|) + |e||\nabla^2 e|).
\end{eqnarray*}
\end{cor}
Next, we give the following lemmas whose proofs are straightforward from the definition of
$X_g^\Omega$.
\begin{lem}\label{lem:MN} Suppose $d_{C^0}(w(\tau,\cdot),z(\cdot)) \leq \iota_g$. Then
\begin{eqnarray*}
X^{\Omega}_{g}(u,e) &= & D^v X^\Omega_g(u,0) e + M_1(u,e)(e,e) \\
d\pi_E(X_{dg}^{\pi_{\lambda_E}}) & = & M_2(u,e)(e)\\
\Omega(e,\nabla_{du}^E e)\, X_g^\Omega(u,e) & = & N(u,e)(e,\nabla_{du} e,e)
\end{eqnarray*}
where $M_1(u,e)$ is a smoothly $(u,e)$-dependent bi-linear map on $\Omega^0(u^*E)$
and $M_2: \Omega^0(u^*E) \to \Omega^0(u^*TQ)$ is a linear map, $N(u,e)$ is a
$(u,e)$-dependent tri-linear map on $\Omega^0(u^*E)$. They also satisfy
$$
|M_i(u,e)| = O(1), \quad |N(u,e)| = O(1).
$$
\end{lem}
\begin{lem}\label{lem:K}
\begin{eqnarray*}
(X^{d\lambda_E}_g)^h(u,e) = K(u,e)\, e
\end{eqnarray*}
where $K(u,e)$ is a $(u,e)$-dependent linear map
from $\Omega^0(u^*E)$ to $\Omega^0(u^*TQ)$ satisfying
$$
|K(u,e)| = o(1).
$$
\end{lem}
\medskip
From now on, in the rest of the paper, we impose an additional hypothesis that restricts ourselves to the case of vanishing charge.
In fact, this hypothesis can be removed using the method of this paper.
\begin{hypo}[Charge vanishing]\label{hypo:exact}
Vanishing asymptotic charge:
\begin{equation}\label{eq:asymp-a=0}
{\mathcal Q}:=\int_{\{0\}\times S^1}((w|_{\{0\}\times S^1})^*\lambda\circ j)=0.
\end{equation}
\end{hypo}
\subsection{Exponential decay of the normal bundle part $e$}
\label{sec:expdecayMB}
Combining Lemmas \ref{lem:J-J0}, \ref{lem:MN} and \ref{lem:K}, we can write \eqref{eq:CR-e}
as
$$
\nabla''_{du}e - \left(u^* \, DX^\Omega_g(u)(e)\right)^{(0,1)} = K(e,\nabla_{du} e, du)
$$
with
$$
|K(e,\nabla_{du}e,du)| \leq O(1)\left(|e|^2|\nabla_{du}e| + |e||du||\nabla_{du}e|\right).
$$
By evaluating this equation against $\frac{\partial}{\partial \tau}$, we derive
\begin{equation}\label{eq:e-reduction}
\nabla_\tau e + J_E(u) \nabla_t e - \theta\left(\frac{\partial u}{\partial \tau}\right)DX_g^\Omega(u)(e)
+ J_E \theta\left(\frac{\partial u}{\partial t}\right) DX_g^\Omega(u)(e) = \|e\|\cdot O(\|\nabla_\tau e\| + \|e\|).
\end{equation}
In particular, the asymptotic operator
$$
B_\infty = J_E (z_\infty(t))(\nabla_t - DX_g^\Omega(u_\infty))
$$
has the property $\ker B_\infty = \{0\}$. Note that this equation is of the type of
\eqref{eq:ODE} (or, more precisely, of the type of \eqref{eq:uktilde-zeta})
and satisfies all the hypotheses required in Theorem \ref{thm:3-interval}.
Readers may refer to \cite[Section 8]{oh-wang2} for proofs.
Taking $\zeta=e$ in Theorem \ref{thm:3-interval}, we immediately obtain
$$
\|e\|_{L^2(S^1)} < C_0 e^{-\delta_0 \tau}
$$
for some $C_0, \, \delta_0$.
Then applying the elliptic boot-strap to \eqref{eq:CR-e} and Corollary \ref{cor:convergence-ue},
we obtain
\begin{prop} \label{prop:expdecayvertical}
For any $k = 0, 1, \cdots$,
there exist constants $C_k>0$ and $\delta_0>0$ such that
$$
|\nabla^k e|<C_k\, e^{-\delta_0 \tau}.
$$
\end{prop}
\subsection{Exponential decay of the tangential part I: three-interval argument}
\label{subsec:exp-horizontal}
We summarise previous geometric calculations into the following basic equation which we will study using the three-interval argument in this section.
\begin{lem} We can write the equation \eqref{eq:delbarpiu=} in the form
\begin{equation}\label{eq:pidudtauL}
\pi_\theta \frac{\partial u}{\partial \tau}+J(u)\pi_\theta \frac{\partial u}{\partial t}=L(\tau, t),
\end{equation}
so that $|L|\leq C e^{-\delta \tau}$.
\end{lem}
\begin{proof} We recall $|du| \leq C$, which follows from Corollary
\ref{cor:convergence-ue}. Furthermore, since $X_{dg}^{d\lambda_E}|_Q \equiv 0$, it follows that
$$
\left|\left(u^*\theta\, d\pi_E((X_{dg}^{d\lambda_E})^h)\right)^{(0,1)}\right| \leq C |e|.
$$
Furthermore by the adaptedness of $J$ and by the definition of the associated $J_0$,
we also have $(J - J_0)|_{Q} \equiv 0$ and so
$$
\left|d\pi_E\left(((J - J_0) d^{\pi_{\lambda_E}} w)^h\right)\right| \leq C |e|.
$$
It is manifest that the second term above also admits a similar estimate. Combining these,
we have established that the right hand side is bounded from above by $C|e|$.
Then the required exponential inequality follows from the exponential decay estimate
of $e$ established in Proposition \ref{prop:expdecayvertical}.
\end{proof}
In the rest of this section and Section \ref{subsec:centerofmass}, we give the proof of the following proposition.
\begin{prop}\label{prop:expdecayhorizontal}
For any $k=0, 1, \cdots$, there exists some constant $C_k>0$ and $\delta_k>0$ such that
$$
\left|\nabla^k \left(\pi_\theta\frac{\partial u}{\partial \tau}\right)\right|<C_k\, e^{-\delta_k \tau}.
$$
\end{prop}
The proof basically follows the same three-interval argument as in the proof of Theorem \ref{thm:3-interval}.
However, since the current case is much more subtle, we would like to highlight the following points before we start:
\begin{enumerate}
\item Unlike the normal component $e$, whose governing equation \eqref{eq:e-reduction} is (inhomogeneous) linear elliptic,
the equation \eqref{eq:pidudtauL} is only (inhomogeneous) quasi-linear and \emph{degenerate elliptic}:
the limiting operator $B$ of its linearization has a non-trivial kernel.
\item Nonlinearity of the
equation makes somewhat cumbersome to formulate the abstract framework of three-interval argument as in
Theorem \ref{thm:3-interval} although we believe it is doable. Since this is not the main interest of
ours, we directly deal with \eqref{eq:pidudtauL} in the present paper postponing such an abstract
framework elsewhere in the future.
\item For the normal component, we directly establish the exponential decay estimates of the
map $e$ itself. On the other hand, for the tangential component, partly due to the absence of a
direct linear structure on $u$ and also due to the presence of a nontrivial kernel of the asymptotic
operator, we first prove the exponential decay of the \emph{derivative} $\pi_\theta\frac{\partial u}{\partial\tau}$ and
then prove the exponential convergence of $u$ to some Reeb orbit afterwards.
\item To obtain the exponential decay of the \emph{derivative} term,
we need to exclude the possibility that the limit obtained in the
three-interval argument is a kernel element. In Section \ref{subsec:centerofmass} we use the technique of the center of mass, as an intrinsic geometric coordinate system, to exclude the possibility that the limit vanishes.
This idea appears in \cite{mundet-tian} and \cite{oh-zhu11} too.
\item Unlike \cite{HWZ3} and \cite{bourgeois}, our proof obtains exponential decay directly, instead of first showing $C^0$ convergence and then iterating the argument to get exponential decay.
\end{enumerate}
\medskip
Starting from now until the end of Section \ref{subsec:centerofmass}, we give the proof of Proposition \ref{prop:expdecayhorizontal}.
Divide $[0, \infty)$ into the union of unit intervals $I_k=[k, k+1]$ for
$k=0, 1, \cdots$, and set $Z_k:=[k, k+1]\times S^1$. In what follows, we also denote by
$
Z^l
$
the union of the three intervals $Z^l_{I}:=[l,l+1]\times S^1$, $Z^l_{II}:=[l+1,l+2]\times S^1$ and
$Z^l_{III}:=[l+2,l+3]\times S^1$.
Consider $x_k: =\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_k)}$, playing the role of the symbols $x_k$ in Lemma \ref{lem:three-interval}.
As in the proof of Theorem \ref{thm:3-interval}, we still use the three-interval inequality as the criterion and consider two situations:
\begin{enumerate}
\item If there exists some constant $\delta>0$ such that
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_{k})}\leq \gamma(2\delta)(\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_{k-1})}
+\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_{k+1})})\label{eq:3interval-ineq-II}
\end{equation}
holds for every $k$, then by Lemma \ref{lem:three-interval} we are done with the proof;
\item Otherwise, we collect all the three-intervals $Z^{l_k}$ violating \eqref{eq:3interval-ineq-II}, i.e., such that
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z^{l_k}_{II})}>
\gamma(2\delta)(\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z^{l_k}_{I})}+\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z^{l_k}_{III})}).\label{eq:against-3interval-II}
\end{equation}
In the rest of the proof, we deal with this case.
\end{enumerate}
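As a side remark, the dichotomy above rests on the elementary discrete fact that the three-interval inequality forces exponential decay. The following numerical sketch is our illustration only; it assumes the standard choice $\gamma(\delta) = (e^{\delta}+e^{-\delta})^{-1}$ from the usual three-interval lemma, which may differ from the precise constant of Lemma \ref{lem:three-interval}. It checks that the pure exponential $x_k = e^{-2\delta k}$ saturates the inequality, and that $2\gamma(2\delta) < 1$, which is exactly what makes the discrete maximum principle behind such lemmas work.

```python
import math

def gamma(delta):
    # Assumed form of the three-interval constant:
    # gamma(delta) = 1 / (e^delta + e^{-delta}).
    return 1.0 / (math.exp(delta) + math.exp(-delta))

delta = 0.7
g = gamma(2 * delta)

# (1) The pure exponential x_k = e^{-2 delta k} saturates the inequality
#     x_k = gamma(2 delta) * (x_{k-1} + x_{k+1}).
x = lambda j: math.exp(-2 * delta * j)
for k in range(1, 20):
    assert abs(x(k) - g * (x(k - 1) + x(k + 1))) < 1e-12

# (2) 2 * gamma(2 delta) < 1: if a sequence satisfying the inequality had a
#     positive interior maximum z_m, then z_m <= 2 * gamma * z_m < z_m, a
#     contradiction -- the discrete maximum principle behind the decay.
assert 2 * g < 1.0
```

Any $\delta > 0$ works in this check; the choice $\delta = 0.7$ is arbitrary.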
First, if there exists some uniform constant $C_1>0$ such that
on each such three-interval
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|_{L^\infty([l_k+0.5, l_k+2.5]\times S^1)}<C_1e^{-\delta l_k}, \label{eq:expas-zeta-II}
\end{equation}
then through the same estimates and analysis as for Theorem \ref{thm:3-interval},
we obtain the exponential decay of $\|\pi_\theta \frac{\partial u}{\partial \tau}\|$ with the presumed rate $\delta$ as claimed.
\begin{rem}Here we take the $L^\infty$-norm on the smaller intervals $[l_k+0.5, l_k+2.5]\times S^1$
instead of on the whole $Z^{l_k}=[l_k, l_k+3]\times S^1$
in consideration of the elliptic bootstrapping argument in Lemma \ref{lem:boot}.
This change does not affect the argument, since the smaller intervals already suffice to cover the middle intervals (see Figure \ref{fig:3-interval}).
\end{rem}
Following the same scheme as for Theorem \ref{thm:3-interval}, we now deal with the case
when there is no such uniform bound $C_1$. Then there exist a sequence of constants $C_k\to \infty$
and a subsequence of such three-intervals $\{Z^{l_k}\}$ (still denoted by $l_k$) such that
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|_{L^\infty([l_k+0.5, l_k+2.5]\times S^1)}\geq C_ke^{-\delta l_k}.\label{eq:decay-fail-zeta-II}
\end{equation}
By Theorem \ref{thm:subsequence}
and the local uniform $C^1$-estimate,
we can take a subsequence, still denoted by $Z^{l_k}$, such that $u(Z^{l_k})$ lives in a neighborhood of
some closed Reeb orbit $z_\infty$.
Next, we translate the sequence $u_k:=u|_{Z^{l_k}}$ to $\widetilde{u}_k: [0,3]\times S^1\to Q$
by defining
$
\widetilde{u}_k(\tau, t)=u_k(\tau+l_k, t).
$
As before, we also define $\widetilde{L}_k(\tau, t)=L(\tau+l_k, t)$. From \eqref{eq:pidudtauL}, we now have
\begin{equation}\label{eq:pidudtauL-t}
\pi_\theta \frac{\partial \widetilde{u}_k}{\partial \tau}+J(\widetilde{u}_k)\pi_\theta \frac{\partial \widetilde{u}_k}{\partial t}=\widetilde{L}_k(\tau, t).
\end{equation}
Recalling that $Q$ carries a natural $S^1$-action induced from the Reeb flow,
we equip $Q$ with an $S^1$-invariant metric and consider its associated Levi-Civita connection.
In particular, the vector field $X_\lambda$ restricted to $Q$ is a Killing vector
field of the metric and satisfies $\nabla_{X_\lambda}X_\lambda = 0$.
Now since the image of $\widetilde{u}_k$ lives in a neighbourhood of a fixed Reeb orbit $z$ in $Q$,
we can express
\begin{equation}\label{eq:normalexpN}
\widetilde u_k(\tau,t) = \exp_{z_k(\tau,t)}^Z \zeta_k(\tau,t)
\end{equation}
for the normal exponential map $\exp^Z: NZ\to Q$ of the locus $Z$ of $z$, where
$z_k(\tau,t) = \pi_N(\widetilde u_k(\tau,t))$ is the normal projection of $\widetilde u_k(\tau,t)$
to $Z$ and $\zeta_k(\tau,t) \in N_{z_k(\tau,t)}Z = \zeta_{z_k(\tau,t)} \cap T_{z_k(\tau,t)}Q$.
\begin{lem}
\begin{equation}
\pi_\theta\frac{\partial \widetilde{u}_k}{\partial \tau} =\pi_\theta (d_2\exp^Z)(\nabla^{\pi_{\theta}}_\tau\zeta_k)
\label{eq:utau-thetatau}
\end{equation}
$$
\pi_\theta\frac{\partial \widetilde{u}_k}{\partial t}=\pi_\theta (d_2\exp^Z)(\nabla^{\pi_{\theta}}_t\zeta_k).
$$
\end{lem}
\begin{proof} To simplify notation, we omit $k$ here.
For each fixed $(\tau,t)$, we compute
$$
D_1 \exp^Z(z(\tau,t))(X_\lambda(z(\tau,t)))
= \frac{d}{ds}\Big|_{s = 0} \exp^Z_{\alpha(s)} \Pi_{z(\tau,t)}^{\alpha(s)}(X_\lambda(z(\tau,t)))
$$
for a curve $\alpha: (-\varepsilon,\varepsilon) \to Q$ satisfying $\alpha(0) = z(\tau,t), \, \alpha'(0) = X_\lambda(z(\tau,t))$.
For example, we can take $\alpha(s) = \phi_{X_\lambda}^s(z(\tau,t))$.
On the other hand, we compare the initial conditions of the two geodesics
$a \mapsto \exp^Z_{\alpha(s)} a \Pi_{x}^{\alpha(s)}(X_\lambda(x))$ and
$a \mapsto \phi_{X_\lambda}^s(\exp^Z_x a (X_\lambda(x)))$ with $x = z(\tau,t)$.
Since $\phi_{X_\lambda}^s$ is an isometry, we derive
$$
\phi_{X_\lambda}^s(\exp^Z_x a (X_\lambda(x))) = \exp^Z_{\alpha(s)} a (d\phi_{X_\lambda}^s(X_\lambda(x))).
$$
Furthermore we note that $d\phi_{X_\lambda}^s(X_\lambda(x)) = X_\lambda(x) $ at $s = 0$
and the field $s \mapsto d\phi_{X_\lambda}^s(X_\lambda(x))$ is parallel along the curve $s \mapsto \phi_{X_\lambda}^s(x)$.
Therefore by the definition of $\Pi_x^{\alpha(s)}(X_\lambda(x))$, we derive
$$
\Pi_x^{\alpha(s)}(X_\lambda(x)) = d\phi_{X_\lambda}^s(X_\lambda(x)).
$$
Combining the above, we obtain
$$
\exp^Z_{\alpha(s)} \Pi_{z(\tau,t)}^{\alpha(s)}(X_\lambda(z(\tau,t)))
= \phi_{X_\lambda}^s(\exp^Z_{z(\tau,t)}(X_\lambda(z(\tau,t))))
$$
for all $s \in (-\varepsilon,\varepsilon)$. Therefore we obtain
$$
\frac{d}{ds}\Big|_{s = 0} \exp^Z_{\alpha(s)} \Pi_{z(\tau,t)}^{\alpha(s)}(X_\lambda(z(\tau,t)))
= X_\lambda(\exp^Z_{z(\tau,t)}(X_\lambda(z(\tau,t)))).
$$
This shows
$
(D_1\exp^Z)(X_\lambda) = X_\lambda(\exp^Z_{z(\tau,t)}(X_\lambda(z(\tau,t))))
$.
To see $\pi_\theta (D_1\exp^Z)(\frac{\partial z}{\partial \tau}) = 0$, just note that $\frac{\partial z}{\partial \tau}
= k(\tau,t) X_\lambda(z(\tau,t))$ for some function $k$, since $z(\tau,t) \in Z$
and $TZ$ is spanned by $X_\lambda$. Using the definition of $D_1\exp^Z(x)(v)$ for $v \in T_x Q$ at $x \in Q$, we
compute
\begin{eqnarray*}
(D_1\exp^Z)(\frac{\partial z}{\partial \tau})(\tau,t) & = & D_1 \exp^Z(z(\tau,t))( k(\tau,t) X_\lambda(z(\tau,t)))\\
& = & k(\tau,t) D_1 \exp^Z(z(\tau,t))(X_\lambda(z(\tau,t))),
\end{eqnarray*}
and hence the $\pi_\theta$ projection vanishes.
Finally, writing
\begin{eqnarray*}
\pi_\theta \frac{\partial \widetilde{u}}{\partial \tau}= \pi_\theta (d_2\exp^Z)(\nabla^{\pi_{\theta}}_\tau\zeta)
+ \pi_\theta(D_1\exp^Z)(\frac{\partial z}{\partial \tau})
\end{eqnarray*}
and using the vanishing of the second term established above, we obtain the first claimed identity.
The second one is proved exactly the same way.
\end{proof}
Further noting that $\pi_\theta (d_2\exp^Z_{z_k(\tau,t)}): \zeta_{z_k(\tau,t)} \to \zeta_{z_k(\tau,t)}$ is invertible,
and using this lemma together with \eqref{eq:pidudtauL-t}, we obtain the equation for $\zeta_k$
\begin{equation}
\nabla^{\pi_{\theta}}_\tau\zeta_k+ \overline J(\tau,t) \nabla^{\pi_{\theta}}_t\zeta_k
= [\pi_\theta (d_2\exp^Z)]^{-1}\widetilde{L}_k,
\label{eq:nablaxi}
\end{equation}
where we set $\overline J(\tau,t) := [\pi_\theta (d_2\exp^Z)]^{-1}J(\widetilde{u}_k)[\pi_\theta (d_2\exp^Z)]$.
Next, we rescale this equation by the norm $\|{\zeta}_k\|_{L^\infty([0,3]\times S^1)}$, which cannot vanish:
otherwise
${ \zeta}_k\equiv 0$, which would imply $\widetilde u_k(\tau,t) \equiv z_k(\tau, t)$ for all $(\tau, t)\in [0,3]\times S^1$.
Then $\frac{\partial \widetilde u_k}{\partial \tau}$ would be parallel to $X_\theta$ on $[0,3] \times S^1$;
in particular $\pi_\theta \frac{\partial \widetilde u_k}{\partial \tau} \equiv 0$, which
would violate the inequality
$$
\|\pi_\theta\frac{\partial{\widetilde{u}_k}}{\partial\tau}\|_{L^2([1,2]\times S^1)}>\gamma(\|\pi_\theta\frac{\partial{\widetilde{u}_k}}{\partial\tau}\|_{L^2([0,1]\times S^1)}+\|\pi_\theta\frac{\partial{\widetilde{u}_k}}{\partial\tau}\|_{L^2([2,3]\times S^1)}).
$$
Now the rescaled sequence
$
\overline \zeta_k:={\zeta}_k/\|{\zeta}_k\|_{L^\infty([0,3]\times S^1)}
$
satisfies $\|\overline \zeta_k\|_{{L^\infty([0,3]\times S^1)}} = 1$, and
\begin{eqnarray}\label{eq:nablabarxi}
\nabla^{\pi_{\theta}}_\tau\bar\zeta_k+ \overline J(\tau,t) \nabla^{\pi_{\theta}}_t\bar\zeta_k &=& \frac{[\pi_\theta (d_2\exp^Z)]^{-1}\widetilde{L}_k}{\|{\zeta}_k\|_{L^\infty([0,3]\times S^1)}}\\
\|\nabla^{\pi_{\theta}}_\tau\bar\zeta_k\|^2_{L^2([1,2]\times S^1)}&\geq&\gamma(2\delta)(\|\nabla^{\pi_{\theta}}_\tau\bar\zeta_k\|^2_{L^2([0,1]\times S^1)}+\|\nabla^{\pi_{\theta}}_\tau\bar\zeta_k\|^2_{L^2([2,3]\times S^1)}).\nonumber
\end{eqnarray}
The next step is to focus on the right hand side of \eqref{eq:nablabarxi} and we have
\begin{lem}\label{lem:boot} The right hand side of \eqref{eq:nablabarxi} converges to zero as $k \to \infty$.
\end{lem}
\begin{proof}
Since the left hand side of \eqref{eq:nablaxi} is an elliptic (Cauchy-Riemann type) operator,
we have the elliptic estimates
\begin{eqnarray*}
\|\nabla^{\pi_\theta}_\tau\widetilde\zeta_k\|_{W^{l,2}([0.5, 2.5]\times S^1)}&\leq& C_1(\|\widetilde\zeta_k\|_{L^2([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^2([0, 3]\times S^1)})\\
&\leq& C_2(\|\widetilde\zeta_k\|_{L^\infty([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}).
\end{eqnarray*}
The Sobolev embedding theorem further gives
$$
\|\nabla^{\pi_\theta}_\tau\widetilde\zeta_k\|_{L^\infty([0.5, 2.5]\times S^1)}\leq C(\|\widetilde\zeta_k\|_{L^\infty([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}).
$$
Hence we have
\begin{eqnarray*}
\frac{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}{\|\nabla^{\pi_\theta}_\tau\widetilde\zeta_k\|_{L^\infty([0.5, 2.5]\times S^1)}}
&\geq& \frac{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}{C(\|\widetilde\zeta_k\|_{L^\infty([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)})}\\
&=&\frac{1}{C(\frac{\|\widetilde\zeta_k\|_{L^\infty([0, 3]\times S^1)}}{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}+1)}.
\end{eqnarray*}
By our assumption \eqref{eq:decay-fail-zeta-II}, the left hand side converges to zero as $k\to \infty$, and hence
$$
\frac{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}{\|\widetilde\zeta_k\|_{L^\infty([0, 3]\times S^1)}}\to 0, \quad \text{as} \quad k\to \infty.
$$
Thus the right hand side of \eqref{eq:nablabarxi} converges to zero.
\end{proof}
Then with the same argument as in the proof of Theorem \ref{thm:3-interval}, after taking a subsequence,
we obtain a limiting section $\overline\zeta_\infty$ of $z_\infty^*\zeta_\theta$ satisfying
\begin{eqnarray}
\nabla_\tau \overline\zeta_\infty+ B_\infty \overline\zeta_\infty&=&0.\label{eq:xibarinfty-II}\\
\|\nabla_\tau\overline \zeta_\infty\|^2_{L^2([1,2]\times S^1)}&\geq&\gamma(2\delta)(\|\nabla_\tau\overline \zeta_\infty\|^2_{L^2([0,1]\times S^1)}+\|\nabla_\tau\overline \zeta_\infty\|^2_{L^2([2,3]\times S^1)}).
\nonumber\\
&{}&
\label{eq:xibarinfty-ineq-II}
\end{eqnarray}
Here, to be compatible with the notation used in Theorem \ref{thm:3-interval},
we use $B_\infty$ to denote the limiting operator of $B:=\overline J(\tau,t) \nabla^{\pi_{\theta}}_t$.
In the current case of the horizontal part, this is nothing but $J$ acting on the linearization along the Reeb orbit $z_\infty$.
Write
$$
\overline\zeta_\infty=\sum_{j=1}^{k}a_j(\tau)e_j+\sum_{i\geq k+1}a_i(\tau)e_i,
$$
where $\{e_i\}$ is the basis consisting of the eigenfunctions associated to the eigenvalues
$\lambda_i$ for $i\geq k+1$ with
$$
0< \lambda_{k+1} \leq \lambda_{k+2} \leq \cdots \leq \lambda_i \leq \cdots \to \infty,
$$
and the $e_j$ for $j=1, \cdots, k$ are eigenfunctions of eigenvalue zero.
By plugging $\overline\zeta_\infty$ into \eqref{eq:xibarinfty-II}, we derive
\begin{eqnarray*}
a_j'(\tau)&=&0, \quad j=1, \cdots, k\\
a_i'(\tau)+\lambda_ia_i(\tau)&=&0, \quad i=k+1, \cdots
\end{eqnarray*}
and it follows that
\begin{eqnarray*}
a_j(\tau)&=&c_j, \quad j=1, \cdots, k\\
a_i(\tau)&=&c_ie^{-\lambda_i\tau}, \quad i=k+1, \cdots.
\end{eqnarray*}
By the same calculation as in the proof of Theorem \ref{thm:3-interval}, it follows that
$$
\|\nabla^{\pi_\theta}_\tau\overline \zeta_\infty\|^2_{L^2([1,2]\times S^1)}< \gamma(2\delta) (\|\nabla^{\pi_\theta}_\tau\overline \zeta_\infty\|^2_{L^2([0,1]\times S^1)}
+\|\nabla^{\pi_\theta}_\tau\overline \zeta_\infty\|^2_{L^2([2,3]\times S^1)}).
$$
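For the reader's convenience, here is a sketch of the mode-by-mode computation behind the last inequality, assuming, as in the usual three-interval setup, that $\gamma(\delta) = (e^{\delta}+e^{-\delta})^{-1}$ and that $\delta < \lambda_{k+1}$ (these are our assumptions for the sketch). The kernel modes satisfy $a_j' \equiv 0$ and contribute nothing to $\nabla^{\pi_\theta}_\tau\overline\zeta_\infty$, while for a single mode $i \geq k+1$,

```latex
y_m := \int_m^{m+1} |a_i'(\tau)|^2 \, d\tau
     = \frac{c_i^2 \lambda_i \left(1 - e^{-2\lambda_i}\right)}{2}\, e^{-2\lambda_i m},
\qquad
\frac{y_1}{y_0 + y_2} = \frac{e^{-2\lambda_i}}{1 + e^{-4\lambda_i}}
  = \gamma(2\lambda_i) < \gamma(2\delta),
```

since $\gamma$ is strictly decreasing and $\lambda_i \geq \lambda_{k+1} > \delta$. Summing over the $L^2$-orthogonal modes then yields the strict inequality unless $\nabla^{\pi_\theta}_\tau\overline\zeta_\infty \equiv 0$.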
To conclude this section,
it remains to show the non-vanishing of $\nabla^{\pi_\theta}_\tau \overline\zeta_\infty$, which will then lead to a contradiction and hence finish the proof of Proposition \ref{prop:expdecayhorizontal}.
The non-vanishing is proved in the next section by introducing the center of mass.
\subsection{Exponential decay of the tangential part II: study of the center of mass}
\label{subsec:centerofmass}
The Morse-Bott submanifold $Q$ is equipped with an $S^1$-invariant metric, and hence its associated Levi-Civita connection $\nabla$
is also $S^1$-invariant.
Denote by $\exp: U_{o_Q} \subset TQ \to M \times M$ the exponential map,
which defines a diffeomorphism between the open neighborhood $U_{o_Q}$ of the zero section $o_Q$ of $TQ$
and some open neighborhood $U_\Delta$ of the diagonal $\Delta \subset M \times M$. Denote its inverse by
$$
E: U_\Delta \to U_{o_Q}; \quad E(x,y) = \exp_x^{-1}(y).
$$
We refer the reader to \cite{katcher} for a detailed study of various basic derivative
estimates of this map.
The following lemma is a variation of the well-known center-of-mass technique from Riemannian geometry,
with the contact structure taken into consideration (via the reparametrization function $h$ in the statement below).
\begin{lem}\label{lem:centerofmass-Reeb}
Let $(Q, \theta=\lambda|_Q)$ be the clean submanifold foliated by Reeb orbits of period $T$.
Then there exists some $\delta > 0$ depending only
on $(Q, \theta)$ such that for any $C^{k+1}$ loop $\gamma: S^1 \to M$ with
$d_{C^{k+1}}(\gamma, \frak{Reeb}(Q, \theta)) < \delta $, there exist a unique point $m = m(\gamma) \in Q$
and a reparametrization map $h:S^1\to S^1$ which is $C^k$ close to $id_{S^1}$,
such that
\begin{eqnarray}
\int_{S^1} E(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t)))\,dt&=&0\label{eq:centerofmass-Reeb1}\\
E(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t)))&\in& \xi_\theta(m) \quad \text{for all $t\in S^1$}.\label{eq:centerofmass-Reeb2}
\end{eqnarray}
\end{lem}
\begin{proof}
Consider the functional
$$
\Upsilon: C^\infty(S^1, S^1)\times Q\times C^{\infty}(S^1, Q)\to TQ\times \mathcal{R}
$$
defined as
$$
\Upsilon(h, m, \gamma):=\left(\left(m, \int_{S^1} E(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t)))\,dt \right), \theta(E(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t))) ) \right),
$$
where $\mathcal{R}$ denotes the trivial bundle ${\mathbb R}\times Q$ over $Q$.
If $\gamma$ is a Reeb orbit with period $T$, then
$h=id_{S^1}$ and $m(\gamma)=\gamma(0)$ will solve the equation
$$
\Upsilon(h, m, \gamma)=(o_Q, o_{{\mathcal R}}).
$$
From straightforward calculations
\begin{eqnarray*}
D_h\Upsilon\big|_{(id_{S^1}, \gamma(0), \gamma)}(\eta)
&=&\left(\left(m, \int_{S^1} d_2E\big|_{(\gamma(0), \gamma(0))}(\eta(t)TX_\theta)\,dt \right), \theta(d_2E\big|_{(\gamma(0), \gamma(0))}(\eta(t)TX_\theta) ) \right)\\
&=&\left(\left(m, (T\int_{S^1} \eta(t) \,dt)\cdot X_\theta(\gamma(0)) \right), T\eta(t) \right)
\end{eqnarray*}
and
\begin{eqnarray*}
D_m\Upsilon\big|_{(id_{S^1}, \gamma(0), \gamma)}(v)
&=&\left(\left(v, \int_{S^1} D_1E\big|_{(\gamma(0), \gamma(0))}(v)\,dt \right), \theta(D_1E\big|_{(\gamma(0), \gamma(0))}(v) ) \right)\\
&=&\left((v, v ), \theta(v) \right),
\end{eqnarray*}
we claim that
$D_{(h, m)}\Upsilon$ is transversal to $o_{TQ}\times o_{{\mathcal R}}$ at the point $(id_{S^1}, \gamma(0), \gamma)$, where $\gamma$ is a Reeb
orbit of period $T$.
To see this, notice that for any point in the set
$$
\Delta:=\{(aX_\theta+\mu, f)\in TQ\times C^\infty(S^1, \mathbb{R})\big| a=\int_{S^1}f(t)dt\},
$$
one can always find a pre-image as follows:
For any given $(aX_\theta+\mu, f)\in TQ\times C^\infty(S^1, \mathbb{R})$ with $a=\int_{S^1}f(t)dt$,
the pair
\begin{eqnarray*}
v&=&a\cdot X_\theta+\mu\\
\eta(t)&=&\frac{1}{T}(f(t)-a)
\end{eqnarray*}
lies in the preimage.
This proves the surjectivity of the partial derivative $D_{(h,m)}\Upsilon\big|_{(id_{S^1}, \gamma(0), \gamma)}$.
The proof is then finished by applying the implicit function theorem.
\end{proof}
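As a toy illustration of the center-of-mass condition \eqref{eq:centerofmass-Reeb1}, consider the flat Euclidean model only, where $E(x,y) = \exp_x^{-1}(y) = y - x$ and no reparametrization is needed; this simplification is ours, not part of the lemma. There the equation $\int_{S^1} E(m,\gamma(t))\,dt = 0$ is solved exactly by the ordinary average of the loop:

```python
import numpy as np

# Flat model: Q = R^n, E(x, y) = exp_x^{-1}(y) = y - x.
# The center-of-mass equation  int_{S^1} E(m, gamma(t)) dt = 0
# then reads  int gamma(t) dt - m = 0, i.e. m is the mean of the loop.

def center_of_mass_flat(loop):
    # loop: (N, n) array of samples gamma(t_j), t_j uniform on S^1
    return loop.mean(axis=0)

# A loop C^1-close to a closed orbit (here: the unit circle), slightly perturbed
t = np.linspace(0.0, 1.0, 400, endpoint=False)
gamma = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
gamma += 0.01 * np.stack([np.sin(4 * np.pi * t), np.cos(6 * np.pi * t)], axis=1)

m = center_of_mass_flat(gamma)
# The defining equation holds at the solution:
residual = (gamma - m).mean(axis=0)
assert np.allclose(residual, 0.0, atol=1e-12)
```

In the curved case the same equation becomes a fixed-point problem for $m$, whose unique solvability near the orbit is exactly what the implicit function theorem in the lemma provides.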
Using the center of mass, we can derive the following proposition which will be used to exclude the possibility of the vanishing of $\nabla_\tau \overline \zeta_\infty$.
\begin{prop}\label{prop:centerofmass-app} Recall the rescaled sequence $\frac{\widetilde \zeta_k}{L_k}$
taken in the proof of Proposition \ref{prop:expdecayhorizontal} above,
and assume for a subsequence,
$$
\frac{\widetilde \zeta_k}{L_k} \to \overline \zeta_\infty
$$
in $L^2$. Then $\int_{S^1} (d\phi_{X_\theta}^{Tt})^{-1}(\overline \zeta_\infty(\tau,t)) \, dt = 0$
where $(d\phi_{X_\theta}^{t})^{-1}(\overline \zeta_\infty(\tau,t)) \in T_{z(0)}Q$.
\end{prop}
\begin{proof}
By the construction of the center of mass applied to the maps $u_k(\tau, \cdot): S^1\to Q$
for $\tau\in [0,3]$, we have obtained
$$
\int_{S^1}E(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}u_k(\tau, t))\,dt=0.
$$
Writing $u_k(\tau, t)=\exp_{z_\infty(\tau, t)}\zeta_k(\tau, t)$, it follows that
\begin{eqnarray}\label{eq:intEmk}
&{}& \int_{S^1}E(m_k(\tau), \exp_{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}d(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\zeta_k(\tau, t))\,dt \nonumber\\
&= &\int_{S^1}E(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\exp_{z_\infty(\tau, t)}\zeta_k(\tau, t))\,dt = 0.
\end{eqnarray}
Recall the following lemma, whose proof is direct and hence omitted.
\begin{lem} Let $\Pi_y^x$ be the parallel transport along the short geodesic from
$y$ to $x$. Then there exists some sufficiently small $\delta > 0$ depending only on
the given metric on $Q$ and a constant $C = C(\delta) > 0$ such that $C(\delta) \to 1$ as $\delta \to 0$
and
$$
|E(x, \exp_y^Z(\cdot)) - \Pi_y^x| \leq C \, d(x, y).
$$
In particular $|E(x, \exp_y^Z(\cdot))| \leq |\Pi_y^x| + C \, d(x, y)$.
\end{lem}
Applying this lemma to \eqref{eq:intEmk}, it follows that
\begin{eqnarray*}
&{}& \left|\int_{S^1}\Pi_{m_k(\tau)}^{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}(d\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\zeta_k(\tau, t)\,dt\right| \\
& \leq &
\int_{S^1} C \,d(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t))\left|(d\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\zeta_k(\tau, t)\right|\,dt.
\end{eqnarray*}
We rescale $\zeta_k$ by using $L_k$ and derive that
\begin{eqnarray*}
&&{}\left|\int_{S^1}\Pi_{m_k(\tau)}^{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}
\left(d\phi_{X_\theta}^{Th_k(\tau, t)}\right)^{-1}\frac{\zeta_k(\tau, t)}{L_k}\,dt\right| \\
&\leq & \int_{S^1} C \,d(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t))
\left|\left(d\phi_{X_\theta}^{Th_k(\tau, t)}\right)^{-1}\frac{\zeta_k(\tau, t)}{L_k}\right| \,dt.
\end{eqnarray*}
Now take $k\to \infty$; since $m_k(\tau) \to z_{\infty}(0)$ and $h_k\to id_{S^1}$ uniformly,
we get $d(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)) \to 0$
uniformly over $(\tau,t) \in [0,3] \times S^1$ as $k \to \infty$. Therefore the right hand
side of this inequality goes to 0. On the other hand by the same reason, it follows
$$
\Pi_{m_k(\tau)}^{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}
\left(d\phi_{X_\theta}^{Th_k(\tau, t)}\right)^{-1}\frac{\zeta_k(\tau, t)}{L_k} \to
(d\phi_{X_\theta}^{Tt})^{-1}\overline \zeta_\infty
$$
uniformly and hence we obtain
$
\int_{S^1}(d\phi_{X_\theta}^{Tt})^{-1}\overline \zeta_\infty\,dt=0.
$
\end{proof}
Using this proposition, we now prove
$
\nabla_\tau \overline \zeta_\infty \neq 0,
$
which is the last step in finishing the proof of Proposition \ref{prop:expdecayhorizontal}.
Suppose to the contrary that $\nabla_\tau \overline \zeta_\infty = 0$;
then we would have $J(z_\infty(t))\nabla_t \overline \zeta_\infty = 0$ from
\eqref{eq:xibarinfty-II} and the remark right after it.
Fix a basis $
\{v_1, \cdots, v_{2k}\}
$ of $\xi_{z(0)} \subset T_{z(0)}Q$
and define
$
e_i(t) = d\phi_{X_\theta}^t(v_i)$ for $i = 1, \cdots, 2k$.
Since $\nabla_t$ is nothing but the linearization along the Reeb orbit $z_\infty$, the Morse-Bott
condition implies
$
\ker B_\infty = \operatorname{span} \{e_i(t)\}_{i=1}^{2k}
$.
Then one can express
$
\overline \zeta_\infty(t) = \sum_{i=1}^{2k} a_i(t) e_i(t)$.
Moreover, since the $e_i$ are parallel (with respect to the $S^1$-invariant connection) by construction, it follows that the $a_i$ are constants, $i = 1, \cdots, 2k$.
Then we can write
$
\overline \zeta_\infty(t) = d\phi_{X_\theta}^t(v)$, where $v= \sum_{i=1}^{2k} a_i v_i$,
and further it follows that $\int_{S^1}(d\phi_{X_\theta}^t)^{-1}(\overline \zeta_\infty(t))\, dt = v$.
On the other hand, from Proposition \ref{prop:centerofmass-app}, $v=0$ and hence
$\overline \zeta_\infty \equiv 0$, which contradicts $\|\overline \zeta_\infty\|_{L^\infty([0,3]\times S^1)} = 1$.
Thus we conclude that $\nabla_\tau\overline \zeta_\infty$ is not zero, which finishes the proof of Proposition \ref{prop:expdecayhorizontal}.
\medskip
Since we have proved the $L^2$ exponential decay for both the normal and tangential parts,
the $C^\infty$-uniform exponential decay follows in the same way as
in \cite{oh-wang2}.
\section{Exponential decay: general clean manifold case}
\label{sec:general}
In this section, we consider the general case of a Morse-Bott clean submanifold.
For this, it suffices to consider the normalized contact triad
$(F, f\lambda_F, J)$, where $J$ is adapted to the zero section $Q$.
Write $w=(u, s)=(u, \mu, e)$, where $\mu\in u^*JT{\mathcal N}$ and $e\in u^*E$.
By the calculations in Section \ref{sec:coord},
together with calculations similar to those of Section \ref{sec:prequantization},
the $e$-part can be dealt with exactly as in the prequantization case,
and we omit the details.
After the $e$-part is taken care of, for the $(u, \mu)$ part, we derive
$$
\left(\begin{matrix} \pi_\theta\frac{\partial u}{\partial\tau} \\
\nabla_\tau \mu \end{matrix}\right)
+J\left(\begin{matrix} \pi_\theta\frac{\partial u}{\partial t} \\
\nabla_t \mu \end{matrix}\right)=L,
$$
where $|L|\leq C e^{-\delta \tau}$ as in the prequantization case.
Then we apply the three-interval argument,
whose details are similar to the prequantization case and are therefore omitted.
As in the prequantization case, we only need to show that
the limit $(\overline{\zeta}_\infty, \overline{\mu}_\infty)$ is not in the kernel of $B_\infty$.
If $(\overline{\zeta}_\infty, \overline{\mu}_\infty)$ is in the kernel of $B_\infty$, then by the Morse-Bott
condition we have $\overline{\mu}_\infty=0$. With the same procedure of introducing the
center of mass, the same argument shows that $\overline{\zeta}_\infty$ must vanish
if it is contained in the kernel of $B_\infty$.
Hence we obtain the following proposition.
\begin{prop} \label{prop:expdecaygeneral}
For any $k=0, 1, \cdots$, there exist constants $C_k>0$ and $\delta_k>0$ such that
\begin{eqnarray*}
\left|\nabla^k \left(\pi\frac{\partial u}{\partial \tau}\right)\right|<C_k\, e^{-\delta_k \tau}, \quad
\left|\nabla^k \mu\right|<C_k\, e^{-\delta_k \tau}.
\end{eqnarray*}
\end{prop}
\section{The case of asymptotically cylindrical symplectic manifolds}
\label{sec:asymp-cylinder}
In this section, we explain how to apply the three-interval method and our tensorial scheme
to non-compact symplectic manifolds with \emph{asymptotically cylindrical ends}.
Here we use Bao's precise definition \cite{bao} of asymptotically cylindrical ends,
restricted to the case where the asymptotic manifold is a contact manifold $(V,\xi)$.
In this section we denote the contact manifold by $V$,
to make the comparison between our definition and Bao's transparent;
this differs from our usage of $M$ in the previous sections.
Let $(V,\xi)$ be a closed contact manifold of dimension $2n+1$ and let $J$ be an almost complex
structure on $W = [0,\infty) \times V$. We denote by
\begin{equation}\label{eq:bfR}
{\bf R}: = J\frac{\partial}{\partial r}
\end{equation}
the associated smooth vector field on $W$, and let $\xi \subset TW$ be the subbundle defined by
\begin{equation}\label{eq:xi}
\xi_{(r,v)} = J T_{(r,v)}(\{r\} \times V) \cap T_{(r,v)}(\{r\} \times V).
\end{equation}
Then we have the splitting
\begin{equation}\label{eq:splitting-asymp}
TW = {\mathbb R}\{\frac{\partial}{\partial r}\} \oplus {\mathbb R}\{{\bf R}\} \oplus \xi_{(r,v)}
\end{equation}
and denote by $i: {\mathbb R}\{\frac{\partial}{\partial r}\} \oplus {\mathbb R}\{{\bf R}\} \to {\mathbb R}\{\frac{\partial}{\partial r}\} \oplus {\mathbb R}\{{\bf R}\}$
the almost complex structure
$$
i \frac{\partial}{\partial r} = {\bf R}, \quad i{\bf R} = - \frac{\partial}{\partial r}.
$$
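In the ordered basis $(\frac{\partial}{\partial r}, {\bf R})$ of this rank-2 summand, $i$ is the standard rotation by $90^\circ$; the following trivial check (ours, purely for illustration) confirms $i^2 = -\mathrm{id}$ there:

```python
import numpy as np

# i in the ordered basis (d/dr, R): i(d/dr) = R, i(R) = -d/dr,
# i.e. the first basis vector maps to the second and the second to
# minus the first, the standard complex structure on R^2.
i = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.array_equal(i @ i, -np.eye(2))
```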
We denote by $\lambda$ and $\sigma$ the dual 1-forms of $\frac{\partial}{\partial r}$ and
${\bf R}$ such that $\lambda|_\xi = 0 = \sigma|_\xi$. In particular,
$$
\lambda({\bf R}) = 1 = \sigma(\frac{\partial}{\partial r}), \quad
\lambda(\frac{\partial}{\partial r}) = 0 = \sigma({\bf R}).
$$
We denote by $T_s: [0,\infty) \times V \to [-s,\infty) \times V$ the translation $T_s(r,v) = (r+s,v)$,
and call a tensor on $W$ \emph{translational invariant} if it is invariant under these translations.
The following definition is the special case of the one in \cite{bao}
restricted to the case of contact type asymptotic boundary.
\begin{defn}[Asymptotically Cylindrical $(W,\omega, J)$ \cite{bao}]
The almost complex structure $J$ is called $C^\ell$-\emph{asymptotically cylindrical} if there exists a
2-form $\omega$ on $W$ such that the pair $(J,\omega)$ satisfies the following:
\begin{itemize}
\item[{(AC1)}] $\frac{\partial}{\partial r} \rfloor \omega = 0 = {\bf R} \rfloor \omega$,
\item[{(AC2)}] $\omega|_\xi(v, J\, v) \geq 0$ and equality holds iff $v = 0$,
\item[{(AC3)}] There exist a smooth translational invariant almost complex structure
$J_\infty$ on ${\mathbb R} \times V$ and constants $R_\ell > 0$ and $C_\ell, \, \delta_\ell > 0$ such that
$$
\|(J - J_\infty)|_{[r,\infty) \times V}\|_{C^\ell} \leq C_\ell e^{-\delta_\ell r}
$$
for all $r \geq R_\ell$. Here the norm is computed in terms of the translational invariant
metric $g_\infty$ and a translational invariant connection.
\item[{(AC4)}] There exists a smooth translational invariant closed 2-form $\omega_\infty$ on
${\mathbb R} \times V$ such that
$$
\|(\omega - \omega_\infty)|_{[r,\infty) \times V}\|_{C^\ell} \leq C_\ell e^{-\delta_\ell r}
$$
for all $r \geq R_\ell$.
\item[{(AC5)}] $(J_\infty,\omega_\infty)$ satisfies (AC1) and (AC2).
\item[{(AC6)}] ${\bf R}_\infty\rfloor d\lambda_\infty = 0$, where ${\bf R}_\infty: = \lim_{s \to \infty}T_s^*{\bf R}$
and $\lambda_\infty : = \lim_{s \to \infty} T_s^*\lambda$, both limits existing on ${\mathbb R} \times V$ by (AC3).
\item[{(AC7)}] ${\bf R}_\infty(r,v) = J_\infty\left(\frac{\partial}{\partial r}\right) \in T_{(r,v)}(\{r\} \times V)$.
\end{itemize}
\end{defn}
For the purpose of the current paper, we restrict ourselves to the case where $\lambda_\infty$ is
a contact form of the contact manifold $(V,\xi)$ and ${\bf R}$ is the translational
invariant vector field induced by the Reeb vector field on $V$
associated to the contact form $\lambda_\infty$ of $(V,\xi)$. More precisely, we have
$$
{\bf R}(r,v) = (0, X_{\lambda_\infty}(v))
$$
with respect to the canonical splitting $T_{(r,v)}W = {\mathbb R} \oplus T_vV$.
Furthermore we also assume that $(V,\lambda_\infty, J_\infty)$ is a contact triad.
Now suppose that $Q \subset V$ is a clean submanifold of Reeb orbits of $\lambda_\infty$
and that $\widetilde u:[0,\infty) \times S^1 \to W$ is a $\widetilde J$-holomorphic
curve for which the Subsequence Theorem given in Section 3.2 of \cite{bao} holds.
We also assume that $J_\infty$ is adapted to $Q$ in the sense of Definition \ref{defn:adapted}.
Let $\tau_k \to \infty$ be a sequence such that $a(\tau_k,t) \to \infty$ and
$w(\tau_k,t) \to z$ uniformly as $k \to \infty$, where we write $\widetilde u = (a,w)$
with respect to the splitting $W = [0,\infty) \times V$, and $z$ is a closed Reeb orbit
whose image is contained in $Q$. By the local uniform elliptic estimates, we may
assume that the same uniform convergence holds on the intervals
$$
[\tau_k, \tau_k+3] \times S^1
$$
as $k \to \infty$. On these intervals, we can write the equation ${\overline \partial}_{\widetilde J} \widetilde u = 0$ as
$$
{\overline \partial}_{J_\infty} \widetilde u\left(\frac{\partial}{\partial \tau}\right)
= (\widetilde J - J_\infty) \frac{\partial \widetilde u}{\partial t}.
$$
We can write the endomorphism $(\widetilde J - J_\infty)(r,\Theta) =: M(r,\Theta)$ where
$(r,\Theta) \in {\mathbb R} \times V$ so that
\begin{equation}\label{eq:expdkM}
|\nabla^k M(r,\Theta)| \leq C_k e^{-\delta r}
\end{equation}
for all $r \geq R_0$. Therefore $\widetilde u = (a,w)$ with $a = r\circ \widetilde u$, $w = \Theta \circ \widetilde u$
satisfies
$$
{\overline \partial}_{J_\infty} \widetilde u\left(\frac{\partial}{\partial \tau}\right) = M(a,w)\left(\frac{\partial \widetilde u}{\partial t}\right).
$$
Decomposing ${\overline \partial}_{J_\infty} \widetilde u$ and $\frac{\partial \widetilde u}{\partial t}$ with respect to the decomposition
$$
TW = {\mathbb R} \oplus TV = {\mathbb R}\cdot\frac{\partial}{\partial r} \oplus {\mathbb R} \cdot X_{\lambda_\infty} \oplus \xi
$$
we derive
\begin{eqnarray}
{\overline \partial}^{\pi_\xi} w\left(\frac{\partial}{\partial \tau}\right) & = & \pi_\xi
\left(M(a,w)\left(\frac{\partial \widetilde u}{\partial t}\right)\right) \label{eq:delbarpiw1}\\
(dw^*\circ j - da)\left(\frac{\partial}{\partial \tau}\right) & = & \pi_{{\mathbb C}} \left(M(a,w)\left(\frac{\partial \widetilde u}{\partial t}\right)\right)\label{eq:delbarpiw2}
\end{eqnarray}
where $\pi_\xi$ is the projection to $\xi$ with respect to the contact form $\lambda_\infty$ and
$\pi_{\mathbb C}$ is the projection to ${\mathbb R}\cdot\frac{\partial}{\partial r} \oplus {\mathbb R} \cdot X_{\lambda_\infty}$
with respect to the cylindrical structure of $(W,\omega_\infty,J_\infty)$.
Then we obtain from \eqref{eq:expdkM}
$$
|{\overline \partial}^{\pi_\xi} w| \leq Ce^{-\delta a}
$$
as $a \to \infty$. By the subsequence convergence theorem assumption and the local a priori estimates on $\widetilde u$,
we immediately obtain
$$
|\nabla''_\tau e(\tau,t)| \leq C\, e^{-\delta_1 \tau},\quad
|\nabla''_\tau \xi_{\mathcal F}(\tau,t)| \leq C\, e^{-\delta_1 \tau},\quad
|\nabla''_\tau \xi_G(\tau,t)| \leq C\, e^{-\delta_1 \tau}
$$
where $w = \exp_Z(\xi_G + \xi_{\mathcal F} + e)$ is the decomposition similarly as before.
Now we can apply exactly the same proof as the one given in the previous section to establish
the exponential decay property of $dw$.
For the component $a$, we can use \eqref{eq:delbarpiw2} and the argument of
\cite{oh-wang2} to obtain the necessary exponential decay property as before.
\smallskip
\textbf{Acknowledgements:}
Rui Wang would like to thank Erkao Bao and Ke Zhu for useful discussions.
\section{Introduction}
Topological quantum phases are currently a mainstream research topic in condensed matter physics~\cite{klitzing1980new,tsui1982two,hasan2010colloquium,wen2017colloquium,kosterlitz2017nobel}. At equilibrium, topological phases can be characterized by nonlocal topological invariants~\cite{wen1990topological,qi2011topological} defined in the ground states. This classifies gapped band structures into distinct topological states, and great success has been achieved in the study of topological insulators~\cite{fu2007topological,fu2007topologicalPRB,fu2011topological,chang2013experimental}, topological semimetals~\cite{burkov2011weyl,young2012dirac,xu2015discovery,lv2015experimental}, and topological superconductors~\cite{qi2009time,bernevig2013topological,ando2015topological,sato2017topological,zhang2018observation}. In general, a noninteracting topological phase can be greatly affected by many-body interactions~\cite{wang2010topological,pesin2010mott,lee2011effects,castro2011topological,araujo2013change,yao2013interaction,messer2015exploring,spanton2018observation,rachel2018interacting,viyuela2018chiral,andrews2020fractional,mook2021interaction}. For instance, the repulsive Hubbard interaction can drive a trivial insulator into a topological Mott insulator~\cite{raghu2008topological,rachel2010topological,vanhala2016topological}, while the attractive Hubbard interaction may drive the trivial phase of a two-dimensional (2D) quantum anomalous Hall (QAH) system into a topological superconductor/superfluid~\cite{qi2006topological,qi2010chiral,liu2014realization,poon2018semimetal}. Nevertheless, how to accurately identify topological phases driven by interactions remains a fundamental issue and is usually hard in experiments.
In recent years, the rapid development of quantum simulations~\cite{hofstetter2018quantum,fauseweh2021digital,monroe2021programmable}
provides new realistic platforms to explore exotic interacting physics, such as ultracold atoms in optical lattices~\cite{bloch2008many,langen2015ultracold,gross2017quantum,schreiber2015observation,smith2016many,bordia2016coupling,choi2016exploring} and superconducting qubits~\cite{ramasesh2017direct,jiang2018quantum,yan2019strongly}. A number of topological models have been realized in experiments, such as the 1D Su-Schrieffer-Heeger model~\cite{su1980soliton,atala2013direct}, 1D AIII class topological insulator~\cite{liu2013manipulating,song2018observation}, 1D bosonic symmetry-protected phase~\cite{haldane1983continuum,de2019observation}, 2D Haldane model~\cite{jotzu2014experimental}, the spin-orbit coupled QAH model~\cite{wu2016realization,sun2018highly,liang2021realization}, and the 3D Weyl semimetal band~\cite{he2016realization,wang2018dirac,lu2020ideal,wang2021realization,li2021weyl}. Accordingly, various detection schemes for this exotic topological physics have also been developed, ranging from measurements of equilibrium topological physics~\cite{liu2013detecting,hafezi2014measuring,wu2014topological,price2016measurement} to non-equilibrium quantum dynamics~\cite{vajna2015topological,hu2016dynamical,budich2016dynamical,wilson2016remnant,heyl2018dynamical,hu2020dynamical,hu2020topological,kuo2021decoherent,cai2022synthetic}.
In particular, the dynamical characterization~\cite{zhang2018dynamical,zhang2019characterizing,ye2020emergent,jia2021dynamically,li2021direct,zhang2021universal} shows the correspondence between broad classes of equilibrium topological phases and the emergent dynamical topology in far-from-equilibrium quantum dynamics induced by quenching such topological systems, which brings about systematic and high-precision schemes to detect topological phases from quantum dynamics and has advanced broad experimental studies~\cite{sun2018uncover,tarnowski2019measuring,yi2019observing,wang2019experimental,xin2020quantum,ji2020quantum,niu2020simulation,chen2021digital,yu2021topological,zhang2022topological}. Nevertheless, these studies have so far mainly focused on noninteracting topological systems, while particle-particle interactions are expected to have crucial effects on the topological phases, whose detection is typically hard to achieve. For example, when an interacting topological system is quenched~\cite{manmana2007strongly,polkovnikov2011colloquium,kiendl2017many,zhang2021nonequilibrium}, both the interactions and the many-body state of the system evolve simultaneously, leading to complex nonlinear quantum dynamics~\cite{moeckel2008interaction,han2012evolution,foster2013quantum,dong2015dynamical} and exotic nonequilibrium phenomena~\cite{gornyi2005interacting,eisert2015quantum,yao2017discrete,peotta2021determination}.
In this paper, we propose a novel scheme based on quench dynamics to detect the mean-field topological phase diagram of an interacting Chern insulator, with nontrivial dynamical quantum physics being uncovered. Specifically, we consider a 2D QAH system in the presence of a weak to intermediate Hubbard interaction, which mainly induces a ferromagnetic order at the mean-field level. The interaction corrects the Zeeman coupling to an effective form, and the equilibrium topological properties are fully determined by the self-consistent Zeeman potential. By quenching the system from an initial near fully polarized trivial state to a parameter regime in which the equilibrium phase is topologically nontrivial, we uncover two dynamical phenomena that render dynamical signals of the equilibrium mean-field phase. (i) There are two characteristic times $t_s$ and $t_c$ capturing, respectively, the emergence of the dynamical self-consistent particle number density and the dynamical topological phase transition of the time-dependent Hamiltonian, with $t_s>t_c$ ($t_s<t_c$) for repulsive (attractive) interactions and $t_s=t_c$ on topological phase boundaries. (ii) The topological number of the equilibrium mean-field phase after the quench is determined by the spin polarizations at the four Dirac points at the time $t_s$ in the spin dynamics. Based on these two fundamental properties, we employ the characteristic times to establish a dynamical detection scheme for topological phases in the interacting system. With the feasibility of measuring these time scales in the dynamical evolution, this scheme provides a simplified way to detect the mean-field phase diagram of interacting Chern matter and may be applied in quantum simulation experiments in the near future.
The remaining part of this paper is organized as follows.
In Sec.~\ref{sec:QAH-Hubbard model}, we introduce the QAH-Hubbard model.
In Sec.~\ref{sec:Quench dynamics}, we study the quench dynamics of the system.
In Sec.~\ref{sec:Nontrivial dynamical properties}, we reveal the nontrivial dynamical properties in quench dynamics.
In Sec.~\ref{sec:Determination of mean-field topological phases}, we determine the equilibrium mean-field topological phases via the dynamical properties.
In Sec.~\ref{sec:Experimental detection}, we establish the nonequilibrium detection scheme for the real measurements of mean-field topological phase diagram.
Finally, we summarize the main results and provide a brief discussion in Sec.~\ref{sec:Conclusion}.
\section{QAH-Hubbard model}\label{sec:QAH-Hubbard model}
Our starting point is a minimal 2D QAH model~\cite{qi2006topological,liu2014realization}, which has been recently realized in cold atoms~\cite{wu2016realization,sun2018uncover,yi2019observing,liang2021realization}, together with an attractive or repulsive on-site Hubbard interaction of strength $U$. The QAH-Hubbard Hamiltonian reads
\begin{equation}\label{eq:1}
\begin{aligned}
H & =\sum_{\mathbf{k}}C^{\dag}_{\mathbf{k}}\mathcal{H}^{(0)}_{\mathbf{k}}C_{\mathbf{k}}+U\sum_{\mathbf{j}}n_{\mathbf{j}\uparrow}n_{\mathbf{j}\downarrow},\\
\mathcal{H}^{(0)}_{\mathbf{k}} & = \mathbf{h}_{\mathbf{k}}\cdot\boldsymbol{\sigma} =\left[m_z-2t_0(\cos k_x+\cos k_y)\right]\sigma_z\\
&\quad+2t_{\text{so}}\sin k_y\sigma_x+2t_{\text{so}}\sin k_x\sigma_y,
\end{aligned}
\end{equation}
where $C_{\mathbf{k}}=(c_{\mathbf{k}\uparrow},c_{\mathbf{k}\downarrow})^{T}$ is the spinor operator of momentum $\mathbf{k}$, $n_{\mathbf{j}s}=c^{\dagger}_{\mathbf{j}s}c_{\mathbf{j}s}$ with $s=\uparrow$ or $\downarrow$ is the particle number operator at site $\mathbf{j}$, $\sigma_{x,y,z}$ are the Pauli matrices, and $m_z$ is the Zeeman coupling. Here $t_0$ and $t_{\text{so}}$ are the spin-conserved and spin-flipped hopping coefficients, respectively. In the noninteracting case, the Bloch Hamiltonian $\mathcal{H}^{(0)}_{\mathbf{k}}$ produces two energy bands $\pm e_{\mathbf{k}}=\pm\sqrt{h_{x,\mathbf{k}}^2+h_{y,\mathbf{k}}^2+h_{z,\mathbf{k}}^2}$, for which the gap can be closed at Dirac points $\mathbf{D}_i\in\{\mathbf{X}_1,\mathbf{X}_2,\boldsymbol{\Gamma},\mathbf{M}\}$ with $\mathbf{X}_1=(0,\pi)$, $\mathbf{X}_2=(\pi,0)$, $\boldsymbol{\Gamma}=(0,0)$, and $\mathbf{M}=(\pi,\pi)$ for certain Zeeman coupling. When the system is fully gapped, the corresponding band topology can be characterized by the first Chern number $\text{Ch}_1$, determining the QAH topological region $0<|m_z|<4t_0$ with $\text{Ch}_1=\text{sgn}(m_z)$ and the trivial region $|m_z|>4t_0$. This noninteracting topological property can also be captured by more intuitive and physical quantities, such as the spin textures on band inversion surfaces~\cite{zhang2018dynamical,yi2019observing} and spin polarizations on four Dirac points~\cite{liu2013detecting,wu2016realization}.
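The Chern-number count quoted above is easy to verify numerically. The following minimal sketch (our own illustration, not from the original paper; function names and the grid size $L$ are arbitrary choices) evaluates $\text{Ch}_1$ of the lower band of $\mathcal{H}^{(0)}_{\mathbf{k}}$ with a gauge-invariant lattice discretization of the Berry flux:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_h(kx, ky, mz, t0=1.0, tso=1.0):
    """Bloch Hamiltonian H^(0)_k = h_k . sigma of the QAH model, Eq. (1)."""
    hx = 2 * tso * np.sin(ky)
    hy = 2 * tso * np.sin(kx)
    hz = mz - 2 * t0 * (np.cos(kx) + np.cos(ky))
    return hx * SX + hy * SY + hz * SZ

def chern_number(mz, t0=1.0, tso=1.0, L=40):
    """Lattice Chern number of the lower band on an L x L Brillouin-zone grid."""
    ks = 2 * np.pi * np.arange(L) / L
    u = np.empty((L, L, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(bloch_h(kx, ky, mz, t0, tso))
            u[i, j] = v[:, 0]          # negative-energy eigenvector
    flux = 0.0
    for i in range(L):
        for j in range(L):
            u1, u2 = u[i, j], u[(i + 1) % L, j]
            u3, u4 = u[(i + 1) % L, (j + 1) % L], u[i, (j + 1) % L]
            # gauge-invariant Berry flux through one plaquette
            flux += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                             * np.vdot(u3, u4) * np.vdot(u4, u1))
    return round(flux / (2 * np.pi))
```

For $t_{\text{so}}=t_0=1$ this reproduces a unit Chern number in the region $0<|m_z|<4t_0$, a vanishing one for $|m_z|>4t_0$, and the sign flip under $m_z\to-m_z$.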
The presence of nonzero interactions can greatly affect the physics of Hamiltonian \eqref{eq:1} and may induce various interesting quantum phases at suitable interaction strengths, such as the superfluid phase for attractive interactions~\cite{jia2019topological,powell2022superfluid} and the antiferromagnetic phase for repulsive interactions~\cite{esslinger2010fermi,tarruell2018quantum}. For generic strong interactions, the system may host rich magnetic phases~\cite{ziegler2020correlated,ziegler2022large,tirrito2022large}. Here, however, we focus on the system at half filling~\cite{guo2011topological} within a weak to intermediate interaction regime, where the interaction mainly induces ferromagnetic order. In this case, at the mean-field level the Hubbard interaction can be rewritten as
\begin{equation}
\sum_{\mathbf{j}}n_{\mathbf{j}\uparrow}n_{\mathbf{j}\downarrow}=n_{\uparrow}\sum_{\mathbf{k}}c_{\mathbf{k} \downarrow}^{\dagger} c_{\mathbf{k} \downarrow}+n_{\downarrow}\sum_{\mathbf{k}}c_{\mathbf{k} \uparrow}^{\dagger} c_{\mathbf{k} \uparrow}-Nn_{\uparrow}n_{\downarrow}
\end{equation}
with $n_{s}=({1}/{N})\sum_{\mathbf{k}} \langle c_{\mathbf{k}s}^{\dagger} c_{\mathbf{k}s}\rangle$. Here $N$ is the total number of sites. It is clear that the nonzero interaction corrects the Zeeman coupling to an effective form
\begin{equation}\label{m_eff}
m^{\text{eff}}_z=m_z-U\frac{(n_{\uparrow}-n_{\downarrow})}{2}=m_z-Un_d,
\end{equation}
where $n_d\equiv (n_\uparrow-n_\downarrow)/2$ is the difference of density for spin-up and spin-down particles. The effective Zeeman coupling shifts the topological region to $0<|m^{\text{eff}}_z|<4t_0$ with $\text{Ch}_1=\text{sgn}(m^{\text{eff}}_z)$ in the interacting regime.
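The self-consistency of $n_d$ can be sketched as a small fixed-point iteration (our own illustration; the initial guess, mixing weight, and grid size are arbitrary choices). At half filling the lower band of $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$ is occupied, so $n_d$ is the Brillouin-zone average of $\langle\sigma_z\rangle/2=-h^{\text{eff}}_{z,\mathbf{k}}/2e_{\mathbf{k}}$ evaluated with the effective Zeeman coupling $m^{\text{eff}}_z=m_z-Un_d$:

```python
import numpy as np

def mean_field_nd(mz, U, t0=1.0, tso=1.0, L=40, tol=1e-8, mix=0.5):
    """Self-consistent n_d = (n_up - n_down)/2 at half filling."""
    ks = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    hx = 2 * tso * np.sin(ky)
    hy = 2 * tso * np.sin(kx)
    nd = -0.4 * np.sign(mz) if mz != 0 else 0.0    # initial guess
    for _ in range(500):
        # effective h_z with the interaction-corrected Zeeman coupling
        hz = (mz - U * nd) - 2 * t0 * (np.cos(kx) + np.cos(ky))
        e = np.sqrt(hx**2 + hy**2 + hz**2)
        # lower band: <sigma_z> = -hz/e, averaged over the Brillouin zone
        nd_new = np.mean(-hz / e) / 2
        if abs(nd_new - nd) < tol:
            return nd_new
        nd = (1 - mix) * nd + mix * nd_new          # linear mixing
    return nd
```

Evaluating $m^{\text{eff}}_z=m_z-Un_d$ at the converged $n_d$ then decides the phase via $0<|m^{\text{eff}}_z|<4t_0$; the two parameter points quoted in the caption of Fig.~\ref{Fig4} land on opposite sides of the boundary.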
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure1.pdf}
\caption{(a)-(b) Self-consistent results of $n_{\uparrow}$ and $n_{\downarrow}$.
(c)~Mean-field phase diagram for the nonzero $U$ and $m_z$. (d)~Contour lines of $n_d$, where the topological phase boundaries (two dashed lines) have $n^{*}_d=\pm 0.4068$. Here we set $t_{\text{so}}=t_0$ and $N=60\times 60$.}
\label{Fig1}
\end{figure}
By self-consistently calculating the particle number densities $n_{\uparrow (\downarrow)}$ [Figs.~{\ref{Fig1}}(a) and {\ref{Fig1}}(b)], we show the mean-field phase diagram of the QAH-Hubbard model \eqref{eq:1} in Fig.~{\ref{Fig1}}(c). An important feature is that the topological phase boundaries depend linearly on the interaction strength and Zeeman coupling~[see Appendix \ref{appendix-1}]. Indeed, this linear behavior of the topological boundaries is a natural consequence of the linear form of the contour lines of $n_d$ in terms of the interaction strength and Zeeman coupling, as we show in Fig.~{\ref{Fig1}}(d). On these lines, $m^{\mathrm{eff}}_{z}$ is also unchanged, since $n_{d}$ is fully determined by the effective Zeeman coupling. Specifically, we have $n^{*}_d=\pm 0.4068$ on the topological phase boundaries $|m^{\mathrm{eff}}_{z}|=4t_{0}$ for $t_{\mathrm{so}}=t_{0}$. This linear scaling form of $n_{d}$ also greatly affects the dynamical properties of the system and leads to novel phenomena in the quench dynamics.
\section{Quench dynamics}\label{sec:Quench dynamics}
Quench dynamics has been widely used in cold-atom experiments~\cite{sun2018highly,sun2018uncover,song2019observation,wang2021realization}. The above mean-field topological phase diagram can be detected dynamically by employing a quantum quench scheme. Unlike the interaction quench in Refs.~\cite{moeckel2008interaction,foster2013quantum}, here we choose the Zeeman coupling as the quench parameter, which has the following advantages: (i) The effective Zeeman coupling directly determines the topology of the mean-field ground states at equilibrium. (ii) The Zeeman field in spin-orbit coupled quantum gases is controlled by the laser intensity and/or detuning and can be changed on a very short time scale, fulfilling the criterion for a sudden quench. (iii) Real experiments~\cite{lin2011spin,wang2012spin,williams2013raman} have demonstrated that both the magnitude and the sign of the Zeeman field can be tuned, which is convenient to operate.
The quench protocol is as follows. We first initialize the system into a near fully polarized state for time $t<0$ by taking a very large constant magnetization $m^{(c)}_z$ along the $\sigma_{z}$ axis together with a very small $m^{(c)}_{x(y)}$ along the $\sigma_{x(y)}$ axis. In this case, the effect of the interaction $U$ can be ignored, and the spin of the initial state is almost along the $z$ axis with only a very small component in the $x$-$y$ plane. At $t=0$, we suddenly change the Zeeman coupling $m^{(c)}_z$ to the postquench value $m_z$ and remove $m^{(c)}_{x(y)}$; the near fully polarized state then evolves under the equation of motion $i \dot{\Psi}_{\mathbf{k}}(t)=\mathcal{H}_{\mathbf{k}}(t)\Psi_{\mathbf{k}}(t)$ with
\begin{equation}\label{eq:2}
\mathcal{H}_{\mathbf{k}}(t)={\left[ \begin{array}{cc}
h_{z,\mathbf{k}}+Un_\downarrow(t) & h_{x,\mathbf{k}}-ih_{y,\mathbf{k}} \\
h_{x,\mathbf{k}}+ih_{y,\mathbf{k}} & -h_{z,\mathbf{k}}+Un_{\uparrow}(t)
\end{array}
\right]},
\end{equation}
where $\Psi_{\mathbf{k}}(t)=[\chi_{\mathbf{k}}(t),\eta_{\mathbf{k}}(t)]^T$ is the instantaneous many-body state and $\mathcal{H}_{\mathbf{k}}(t)$ is the postquench Hamiltonian. To facilitate the capture of nontrivial properties in quantum dynamics, here $m^{(c)}_z$ should have the same sign as the postquench $m_z$.
Note that both the state $\Psi_{\mathbf{k}}(t)$ and Hamiltonian $\mathcal{H}_{\mathbf{k}}(t)$ are time-evolved, where the instantaneous particle number density is determined as
\begin{equation}\label{eq:3}
n_{\uparrow}(t)=\frac{1}{N} \sum_{\mathbf{k}}|\chi_\mathbf{k}(t)|^2,~
n_{\downarrow}(t)=\frac{1}{N} \sum_{\mathbf{k}}|\eta_\mathbf{k}(t)|^2.
\end{equation}
This is completely different from noninteracting quantum quenches, where the postquench Hamiltonian remains unchanged~\cite{zhang2018dynamical,zhang2019characterizing,ye2020emergent,jia2021dynamically,li2021direct,zhang2021universal}. We note that even if the Hamiltonian $\mathcal{H}_{\mathbf{k}}(t)$ becomes steady after long-time evolution, it still differs from the equilibrium mean-field Hamiltonian with the postquench parameters $m_{z}$ and $U$, i.e.,
\begin{equation}\label{ap-1}
\mathcal{H}^{\text{MF}}_\mathbf{k}={\left[ \begin{array}{cc}
h_{z,\mathbf{k}}-Un_d+\frac{U}{2} & h_{x,\mathbf{k}}-ih_{y,\mathbf{k}} \\
h_{x,\mathbf{k}}+ih_{y,\mathbf{k}} & -h_{z,\mathbf{k}}+Un_d+\frac{U}{2}
\end{array}
\right]}.
\end{equation}
Here $n_{d}$ is the equilibrium value. Hence the detection scheme for noninteracting topological phases is inapplicable in this case.
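The coupled evolution of state and Hamiltonian can be integrated directly. The sketch below (our own illustration; the grid size $L$, step $dt$, and horizon $T$ are arbitrary choices) prepares the near fully polarized lower-band state of the prequench Hamiltonian with $m^{(c)}_z=100$ and $m^{(c)}_x=m_z$, then integrates $i\dot{\Psi}_{\mathbf{k}}=\mathcal{H}_{\mathbf{k}}(t)\Psi_{\mathbf{k}}$ with a fourth-order Runge-Kutta step, updating $n_{\uparrow(\downarrow)}(t)$ from Eq.~\eqref{eq:3} at every stage:

```python
import numpy as np

def quench_densities(mz, U, t0=1.0, tso=1.0, L=24, dt=0.002, T=0.6, mzc=100.0):
    """Return times and n_up(t), n_down(t) for a Zeeman quench m_z^(c) -> m_z."""
    ks = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    hx = 2 * tso * np.sin(ky)
    hy = 2 * tso * np.sin(kx)
    hz = mz - 2 * t0 * (np.cos(kx) + np.cos(ky))
    # prequench lower-band spinor: large m_z^(c), small tilt m_x^(c) = m_z
    hxc = hx + mz
    hzc = mzc - 2 * t0 * (np.cos(kx) + np.cos(ky))
    ec = np.sqrt(hxc**2 + hy**2 + hzc**2)
    chi = -(hxc - 1j * hy) / np.sqrt(2 * ec * (ec + hzc))
    eta = np.sqrt((ec + hzc) / (2 * ec)).astype(complex)

    def deriv(chi, eta):
        nu = np.mean(np.abs(chi) ** 2)      # n_up(t)
        nd = np.mean(np.abs(eta) ** 2)      # n_down(t)
        dchi = -1j * ((hz + U * nd) * chi + (hx - 1j * hy) * eta)
        deta = -1j * ((hx + 1j * hy) * chi + (-hz + U * nu) * eta)
        return dchi, deta

    ts, nups, ndns = [], [], []
    t = 0.0
    while t <= T + 1e-12:
        ts.append(t)
        nups.append(np.mean(np.abs(chi) ** 2))
        ndns.append(np.mean(np.abs(eta) ** 2))
        a1, b1 = deriv(chi, eta)
        a2, b2 = deriv(chi + 0.5 * dt * a1, eta + 0.5 * dt * b1)
        a3, b3 = deriv(chi + 0.5 * dt * a2, eta + 0.5 * dt * b2)
        a4, b4 = deriv(chi + dt * a3, eta + dt * b3)
        chi = chi + dt / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
        eta = eta + dt / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
        t += dt
    return np.array(ts), np.array(nups), np.array(ndns)
```

Since $\mathcal{H}_{\mathbf{k}}(t)$ is Hermitian at every instant, the per-momentum norm (one particle per $\mathbf{k}$ at half filling) is conserved up to the integrator error, which is a convenient numerical check.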
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure2.pdf}
\caption{Numerical results for $n_{\uparrow}(t)$ (red solid lines), $n_{\downarrow}(t)$ (blue solid lines), $m^{\text{eff}}_z(t)$ (green solid lines), $\tilde{n}_{\uparrow}(t)$ (light-blue dash-dotted lines), and $\tilde{n}_{\downarrow}(t)$ (black dash-dotted lines) for different interactions and equilibrium mean-field topological phases. Two characteristic times $t_s$ and $t_c$ emerge in the dynamical time evolution. Here the postquench parameters are $\left(m_z,U\right)=(1.4,5.7)$, $(3.0,3.5)$, $(4.9,-2.7)$, and $(5.8,-4.0)$ for (a)-(d), respectively. We set $t_{\text{so}}=t_0=1$, $m^{(c)}_{x}=m_z$, $m^{(c)}_y=0$, and $m^{(c)}_z=100$.
}
\label{Fig2}
\end{figure}
Next we shall propose a feasible dynamical scheme to detect the mean-field topological phases of $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$ based on the above novel dynamical time evolution. By defining the time-evolved quantity $n_{d}(t)\equiv[n_{\uparrow}(t)-n_{\downarrow}(t)]/2$, we notice that the time-dependent effective Zeeman coupling
\begin{equation}
m^{\text{eff}}_z(t)=m_z-Un_d(t)
\end{equation}
determines the topology $W(t)$ of the instantaneous postquench Hamiltonian $\mathcal{H}_{\mathbf{k}}(t)$ through the relation $W(t)=\text{sgn}[m^{\text{eff}}_z(t)]$ for $0<|m^{\text{eff}}_z(t)|<4t_0$ and $W(t)=0$ otherwise. Since the equilibrium mean-field Hamiltonian $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$ is actually determined by the self-consistent particle number density $n_{\uparrow(\downarrow)}$ (i.e., the equilibrium effective Zeeman coupling $m^{\text{eff}}_z$; see the discussions in Sec.~\ref{sec:QAH-Hubbard model}), a key idea of identifying the equilibrium mean-field topological phases is to find a characteristic time $t_{s}$ such that
\begin{equation}
n_{\uparrow(\downarrow)}(t_{s})=n_{\uparrow(\downarrow)}.
\end{equation}
Namely, the evolved particle number density $n_{\uparrow(\downarrow)}(t)$ at the time $t_s$ captures the equilibrium self-consistent $n_{\uparrow(\downarrow)}$, giving the dynamical self-consistent particle number density $n_{\uparrow(\downarrow)}(t_{s})$. Then the instantaneous Hamiltonian $\mathcal{H}_{\mathbf{k}}(t_s)$ is the same as the equilibrium mean-field Hamiltonian $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$, and the time-dependent effective Zeeman coupling $m^{\mathrm{eff}}_{z}(t_{s})$ determines the corresponding topology. We also introduce another characteristic time $t_{c}$ such that
\begin{equation}
|m^{\text{eff}}_z(t_c)|=4t_0,
\end{equation}
characterizing the critical time of the dynamical topological phase transition of $\mathcal{H}_{\mathbf{k}}(t)$. Since Hamiltonian (\ref{ap-1}) behaves as a single-particle model at the mean-field level, $\mathcal{H}_{\mathbf{k}}(t_s)$ essentially describes the ground-state properties of the mean-field Hamiltonian $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$, and its topological phase transition is captured by $\mathcal{H}_{\mathbf{k}}(t_c)$. With this, identifying the topology of $\mathcal{H}_{\mathbf{k}}(t_s)$, and hence that of $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$, is equivalent to comparing the characteristic times $t_{s}$ and $t_{c}$ in the quench dynamics [i.e., comparing $m^{\mathrm{eff}}_{z}(t_{s})$ and $m^{\mathrm{eff}}_{z}(t_{c})$].
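Both characteristic times can be read off from a sampled density trajectory $n_d(t)$ alone, whether measured or computed. The sketch below (our own illustration; the function names and the linear interpolation between samples are our choices) evaluates $\tilde n_d(t)$ as the lower-band average of the instantaneous Hamiltonian, which depends only on $m^{\text{eff}}_z(t)=m_z-Un_d(t)$, and locates the first times at which $n_d(t)=\tilde n_d(t)$ and $|m^{\text{eff}}_z(t)|=4t_0$:

```python
import numpy as np

def tilde_nd(nd, mz, U, t0=1.0, tso=1.0, L=40):
    """Lower-band n_d of the instantaneous Hamiltonian H_k(t) of Eq. (4)."""
    ks = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    hx = 2 * tso * np.sin(ky)
    hy = 2 * tso * np.sin(kx)
    hz = (mz - U * nd) - 2 * t0 * (np.cos(kx) + np.cos(ky))
    return np.mean(-hz / np.sqrt(hx**2 + hy**2 + hz**2)) / 2

def first_crossing(t, f):
    """First zero of a sampled function f(t), linearly interpolated."""
    for i in range(len(f) - 1):
        if f[i] == 0.0:
            return t[i]
        if f[i] * f[i + 1] < 0:
            w = f[i] / (f[i] - f[i + 1])
            return t[i] + w * (t[i + 1] - t[i])
    return None

def characteristic_times(t, nd_t, mz, U, t0=1.0):
    """t_s: first n_d(t) = tilde n_d(t);  t_c: first |m_eff(t)| = 4 t0."""
    g = np.array([tilde_nd(nd, mz, U, t0) - nd for nd in nd_t])
    ts = first_crossing(t, g)
    meff = mz - U * np.asarray(nd_t)
    tc = first_crossing(t, np.abs(meff) - 4 * t0)
    if tc is None:      # no transition along the trajectory
        tc = 0.0 if abs(meff[0]) < 4 * t0 else np.inf
    return ts, tc
```

As a quick check one can feed in a synthetic monotone trajectory rising from the near fully polarized value $n_d\approx-0.5$; for postquench $(m_z,U)=(2,2)$ the instantaneous Hamiltonian is topological from $t=0$ on ($t_c=0$), while the self-consistency crossing $t_s$ occurs at a finite time.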
In Fig.~\ref{Fig2}, we show the time evolution of $n_{\uparrow(\downarrow)}(t)$ and $m^{\text{eff}}_z(t)$ and confirm the existence of the characteristic times $t_{s}$ and $t_{c}$. Since the system is initially prepared in the near fully polarized state, the short-time evolution of $n_{d}(t)$ increases (decays) from $n_{d}(t=0)\approx -0.5$ ($+0.5$) when the postquench Zeeman coupling $m_z$ is positive (negative). We also observe that both characteristic times emerge in the short-time evolution, and their relative order ($t_{s}<t_{c}$ or $t_{s}>t_{c}$) behaves differently for different interactions and equilibrium mean-field phases. In the following, we show in detail how to identify the equilibrium mean-field topological phases via $t_{s}$ and $t_{c}$.
\section{Nontrivial dynamical properties}\label{sec:Nontrivial dynamical properties}
In this section, we determine the characteristic times $t_{s}$ and $t_{c}$ entirely from the quantum dynamics, without knowing the postquench equilibrium mean-field value of $n_{\uparrow(\downarrow)}$, and provide analytical results under certain conditions. The nontrivial dynamical properties of this interacting system are thereby uncovered.
\subsection{Dynamical self-consistent particle number density and characteristic time $t_s$}
Since the equilibrium mean-field value of $n_{\uparrow(\downarrow)}$ is in general unknown, we need to first figure out how to determine the characteristic time $t_{s}$ in the quantum dynamics. A notable fact is that $n_{\uparrow(\downarrow)}$ is self-consistently obtained for the mean-field Hamiltonian $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$, and the time-evolved particle number density $n_{\uparrow(\downarrow)}(t)$ can be considered as a specific path for updating the mean-field parameters. For this, we introduce another set of time-dependent particle number density
\begin{equation}\label{eq:4}
\tilde{n}_{\uparrow}(t)=\frac{1}{N} \sum_{\mathbf{k}}|\tilde{\chi}_\mathbf{k}(t)|^2,\quad
\tilde{n}_{\downarrow}(t)=\frac{1}{N} \sum_{\mathbf{k}}|\tilde{\eta}_\mathbf{k}(t)|^2
\end{equation}
for the eigenvector $\tilde{\Psi}_{\mathbf{k}}(t) =[\tilde{\chi}_{\mathbf{k}}(t),\tilde{\eta}_{\mathbf{k}}(t)]^T$ of $\mathcal{H}_{\mathbf{k}}(t)$ with negative energy, which can be obtained once we know the instantaneous particle number density $n_{\uparrow(\downarrow)}(t)$. We see that $\mathcal{H}_{\mathbf{k}}(t)$ is self-consistent when the two sets of time-dependent particle number density coincide, giving the characteristic time
\begin{equation}\label{eq:5}
t_s\equiv \text{min}\{t|n_{\uparrow(\downarrow)}(t)=\tilde{n}_{\uparrow(\downarrow)}(t)\},
\end{equation}
at which we have $n_{\uparrow(\downarrow)}=n_{\uparrow(\downarrow)}(t_s)=\tilde{n}_{\uparrow(\downarrow)}(t_s)$, as shown in Fig.~\ref{Fig2}. This gives the dynamical self-consistent particle number density $n_{\uparrow(\downarrow)}(t_{s})$, capturing the equilibrium self-consistent $n_{\uparrow(\downarrow)}$.
Note that here only the time-dependent particle number density becomes self-consistent, while the instantaneous wavefunction in general does not, which is a subtle difference from the equilibrium mean-field Hamiltonian. This also implies that different instantaneous states may lead to the same $n_{\uparrow(\downarrow)}(t_s)$, and there may be multiple time points fulfilling the self-consistent condition. We choose the smallest such time point as the characteristic time $t_{s}$ in view of the short-time evolution [see Fig.~\ref{Fig2}(a)]. Moreover, the characteristic time $t_s$ necessarily appears in the short-time evolution, since $n_{\uparrow(\downarrow)}(t)$ increases or decays in this regime and the state gradually reaches a steady state for $t>t_s$.
Although the explicit form of the characteristic time $t_{s}$ is in general very complex, we can obtain an analytical result in the short-time evolution for a weak interaction strength $U$ or small $|Un_{d}|$, which is given by [see Appendix~\ref{appendix-2}]
\begin{equation}\label{a-tn}
t_s\approx \sqrt{\frac{N_1}{N_0+\sqrt{N_2+N_3 (m_z-n_dU)^2}}}
\end{equation}
with $N_0=12t_{\text{so}}^2$, $N_1=3(1\pm 2n_d)$, $N_2=-28t_{\text{so}}^4(\pm 10n_d-1)-72t_0^2t_{\text{so}}^2(\pm 2n_d+1)$, and $N_3=-28t_{\text{so}}^2(\pm 2n_d+ 1)$, where the symbol $\pm$ corresponds to the sign of the postquench $m_z$. From this result, we observe that the characteristic time $t_{s}$ is fixed by $n_{d}$. Since the contour lines of $n_{d}$ have the linear form of interaction strength $U$ and Zeeman coupling $m_{z}$, then $t_{s}$ has a similar linear scaling, which fully matches with the numerical results [see Fig.~\ref{Fig3}(a)].
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure3.pdf}
\caption{(a)-(b) Analytical results for $t_s$ and $t_c$, where $t^*=t_s=t_c=0.176$ on the topological boundaries (black solid lines). The dark-brown regions indicate $t_s>0.35$ in (a) and $t_c=+\infty$ in (b), while the dark-purple regions in (b) give $t_c=0$. These regions actually correspond to complex values of $t_s$ and $t_c$, since a large truncation $\mathcal{O}(U^3n_d^3)$ is taken. After reducing the truncation error, $t_s$ becomes real, but $t_c$ remains complex because no solution exists. (c) Sign of $t_s-t_c$, where the shaded area is the equilibrium topological phase region. (d) Analytical results for $t_s$, $n_d$, and $\text{Re}[t_c]$ at $n_d\approx -0.09$, showing the linear scaling of $t_s$ and $n_d$. Here we set $t_{\text{so}}=t_0=1$.}
\label{Fig3}
\end{figure}
\subsection{Dynamical topological phase transition and characteristic time $t_c$}
We now determine the characteristic time $t_{c}$ from the dynamical topological phase transition of $\mathcal{H}_{\mathbf{k}}(t)$. Since the initial state is prepared in a near fully polarized state for $t<0$ and has $n_d(t=0)\approx\pm 0.5$ for negative or positive postquench $m_{z}$, the absolute value of $n_{d}(t)$ in general decays in the time evolution and approaches a steady value. We find that when the interaction strength satisfies $U>-8t_0$, the time-dependent effective Zeeman coupling $m^{\text{eff}}_z(t)$ passes through only one phase transition point, $4t_{0}$ for $m_{z}>0$ or $-4t_{0}$ for $m_{z}<0$, although the corresponding crossing may occur multiple times. Therefore, $\mathcal{H}_{\mathbf{k}}(t)$ can only change from the trivial (topological) regime at $t=0$ to the topological (trivial) regime for $t>0$ when the interaction is repulsive (attractive). The characteristic time $t_{c}$, characterizing the critical time of the dynamical topological phase transition of $\mathcal{H}_{\mathbf{k}}(t)$, can be naturally defined as
\begin{equation}\label{eq:6}
t_{c}\equiv\text{min}\{t|m^{\text{eff}}_z(t)=\pm 4t_0\}.
\end{equation}
Correspondingly, we have $t_c=+\infty$ and $0$ ($t_c=0$ and $+\infty$) in the repulsive (attractive) interaction regime for $|m^{\text{eff}}_z(t)|>4t_0$ and $|m^{\text{eff}}_z(t)|<4t_0$, respectively. The numerical results for $m_z>0$ are shown in Fig.~\ref{Fig2}, where $m^{\text{eff}}_z(t)$ only passes through the phase transition point $m^{\text{eff}}_z(t_c)=4t_0$ and decays (increases) for the repulsive (attractive) interaction in the short-time region. This feature can be described by the analytical result for $n_d(t)$ in Appendix \ref{appendix-2}, where we have $\dot{m}_z^{\text{eff}}(t)\gtrless 0$ for $Um_z\lessgtr 0$ due to $\dot{m}_z^{\text{eff}}(t)=-U\dot{n}_d(t)$.
Like Eq.~\eqref{a-tn}, we can also obtain an analytical result for $t_c$ in the short-time evolution [see Appendix \ref{appendix-2}]:
\begin{equation}\label{a-tp}
t_c\approx \sqrt{\frac{P_0-\sqrt{{P_2-3P_1{{(\pm m_z-4t_0)}/{U}}}}}{P_1}}
\end{equation}
with $P_0=6t_{\text{so}}^2$, $P_1=4t_{\text{so}}^2(5t_{\text{so}}^2+19t_0^2)$, and $P_2=6t_{\text{so}}^2(t_{\text{so}}^2-19t_0^2)$. Compared with $t_s$, the characteristic time $t_c$ presents a completely different scaling form in terms of the Zeeman coupling and interaction strength, except on the topological phase boundaries~[see Fig.~\ref{Fig3}(b)]. In particular, a finite $t_c$ only appears around the topological phase boundaries, since $\mathcal{H}_{\mathbf{k}}(t)$ always remains topological or trivial when it is far away from the phase boundaries; see Fig.~\ref{Fig5}(b).
\subsection{Linear scaling of characteristic time on phase boundaries}
We now give a special time $t^{*}$ on the topological phase boundaries, which shows the linear scaling for the Zeeman coupling and interaction. By taking $m_z-Un_d=\pm 4 t_0$ for the above two analytical characteristic time, we obtain
\begin{equation}\label{tst}
t^{*}=t_s=t_c\approx\sqrt{\frac{B_1}{B_0+\sqrt{B_2}}},
\end{equation}
where $B_0=6t_\text{so}^2$, $B_1=3(1\pm 2n^{*}_d)/2$, and $B_2=-114(1\pm 2n^*_d)t_0^2t_\text{so}^2-6(\pm 10n^{*}_d-1)t_\text{so}^4$; see Appendix~\ref{appendix-2}. This is a necessary condition for the topological transition of $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$. In particular, we have $t^*\propto 1/t_0$ for the strong coupling $t_{\text{so}}=t_0$, which gives $t^*= 0.176$ when $t_{\text{so}}=t_0=1$, matching the numerical result $t^*=0.180$ [see Fig.~\ref{Fig5}(b)]. With the times $t_{s}$, $t_{c}$, and $t^{*}$, the equilibrium mean-field topological phases are characterized next.
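As a sanity check (our own, not from the paper), Eq.~\eqref{tst} can be evaluated directly at the boundary value $n^{*}_d=-0.4068$ for $m_z>0$ and $t_{\text{so}}=t_0=1$, which reproduces the quoted $t^{*}\approx 0.176$:

```python
import math

def t_star(nd_star, t0=1.0, tso=1.0, sign=+1):
    """Evaluate t* = sqrt(B1 / (B0 + sqrt(B2))) on the phase boundary.
    `sign` is the sign of the postquench Zeeman coupling m_z."""
    B0 = 6 * tso**2
    B1 = 3 * (1 + sign * 2 * nd_star) / 2
    B2 = (-114 * (1 + sign * 2 * nd_star) * t0**2 * tso**2
          - 6 * (sign * 10 * nd_star - 1) * tso**4)
    return math.sqrt(B1 / (B0 + math.sqrt(B2)))
```

The $m_z<0$ boundary with $n^{*}_d=+0.4068$ gives the same value, as it must by the symmetry of the $\pm$ convention.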
\section{Determination of mean-field topological phases}\label{sec:Determination of mean-field topological phases}
Based on the properties of $n_d(t)$ and $m^{\text{eff}}_z(t)$, the system has $t_s>t_c$ ($t_s<t_c$) in the quench dynamics for topological phases with repulsive (attractive) interaction, provided $\mathcal{H}_\mathbf{k}(t)$ is trivial (topological) at $t=0$. The correspondence between the characteristic times and the equilibrium topological properties is given by
\begin{equation}\label{eq:7}
|\text{Ch}_1|=\left\{
\begin{aligned}
1 & , & \text{for}
\begin{cases}
t_s>t^*>t_c,~U>0\\
t_s<t^*<t_c,~U<0
\end{cases}\\
0 & , & \text{for}
\begin{cases}
t^*<t_s<t_c,~U>0\\
t^*>t_s>t_c,~U<0
\end{cases}\\
\end{aligned}
\right.
\end{equation}
where $t^*$ corresponds to the gap closing at the Dirac points $\boldsymbol{\Gamma}$ or $\mathbf{M}$. The above formula provides a convenient method to determine the mean-field topological phase diagram by observing the short-time evolution of the system after the quantum quench. Moreover, both the numerical results [Fig.~\ref{Fig2}] and the analytical results [Fig.~\ref{Fig3}(c)] confirm this scheme, especially for weak interaction $U$ or small $|Un_d|$, as we show in Fig.~\ref{Fig3}(d).
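The comparison underlying Eq.~\eqref{eq:7} can be packaged as a tiny decision rule. The sketch below (our own) uses only the relative order of $t_s$ and $t_c$ stated at the start of this section (repulsive: topological iff $t_s>t_c$; attractive: topological iff $t_s<t_c$), which is coarser than Eq.~\eqref{eq:7} since it does not bracket $t^*$; it reproduces the two examples quoted in the caption of Fig.~\ref{Fig4}:

```python
def chern_magnitude(ts, tc, U):
    """|Ch_1| of the equilibrium mean-field phase from the two
    characteristic times: repulsive (U > 0) interactions give a
    topological phase iff t_s > t_c, attractive (U < 0) iff t_s < t_c."""
    if U > 0:
        return 1 if ts > tc else 0
    return 1 if ts < tc else 0
```

For $(m_z,U)=(2,2)$ the caption of Fig.~\ref{Fig4} quotes $t_c=0<t_s=0.165$ (topological), and for $(m_z,U)=(3,4)$ it quotes $t_c=9.485>t_s=0.230$ (trivial).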
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure4.pdf}
\caption{(a) Schematic diagram for the motion of the near fully polarized spin at Dirac points $\mathbf{D}_i$. The clockwise (counterclockwise) motion of the spin implies the upward (downward) polarization of Hamiltonian at these points. (b) The polarization directions at Dirac points $\mathbf{D}_i$ determine the topology of postquench Hamiltonian. (c) Topological number $W(t)$ for $\mathcal{H}_{\mathbf{k}}(t)$. We have $\text{Ch}_1=1$ for the phase with $(m_z,U)=(2,2)$ due to $t_{c}=0<t_{s}=0.165$, while $\text{Ch}_1=0$ for the phase with $(m_z,U)=(3,4)$ due to $t_{c}=9.485>t_{s}=0.230$. The insets show the changes of polarization directions at Dirac points $\mathbf{D}_i$ for $\mathcal{H}_{\mathbf{k}}(t)$. Here we set $t_{\text{so}}=t_0=1$, $m^{(c)}_{x}=m_z$, $m^{(c)}_y=0$, and $m^{(c)}_z=100$.
}
\label{Fig4}
\end{figure}
To further determine the Chern number of the equilibrium mean-field topological phases, we consider the features of the spin dynamics, which are described by the modified Landau-Lifshitz equation of this interacting system~\cite{gilbert2004phenomenological}
\begin{equation}
\dot{\mathbf{S}}(\mathbf{k},t)=\mathbf{S}(\mathbf{k},t)\times 2\mathbf{h}(\mathbf{k},t)
\end{equation}
with
\begin{equation}
\begin{split}
& S_x(\mathbf{k},t)=[\chi_\mathbf{k}(t)\eta^{*}_\mathbf{k}(t)+\chi^{*}_\mathbf{k}(t)\eta_\mathbf{k}(t)]/2,\\
& S_y(\mathbf{k},t)=i[\chi_\mathbf{k}(t)\eta^{*}_\mathbf{k}(t)-\chi^{*}_\mathbf{k}(t)\eta_\mathbf{k}(t)]/2,\\
& S_z(\mathbf{k},t)=[|\chi_\mathbf{k}(t)|^2-|\eta_\mathbf{k}(t)|^2]/2.\\
\end{split}
\end{equation}
We show in the following that the topological number $W(t)$ of the time-dependent postquench Hamiltonian $\mathcal{H}_{\mathbf{k}}(t)$ can be determined from the dynamical properties of $S_z$ at the four Dirac points $\mathbf{D}_i$. The Chern number $\text{Ch}_1$ of the mean-field Hamiltonian $\mathcal{H}^{\mathrm{MF}}_{\mathbf{k}}$ is then obtained from $W$ at the characteristic time $t_s$.
First, for the near fully polarized initial state, the spin at the Dirac points $\mathbf{D}_i$ will precess around the corresponding polarization direction of $\mathcal{H}_{\mathbf{k}}(t)$ after the quantum quench. When the polarization direction is reversed, e.g., from upward to downward, the motion of the spin is reversed at the same time, e.g., from clockwise to counterclockwise; see Fig.~\ref{Fig4}(a). Therefore, we can use the motion of the spins to determine the polarization direction of $\mathcal{H}_{\mathbf{k}}(t)$ at the Dirac points. Second, since the polarization direction is associated with the parity eigenvalue of the occupied eigenstate, the topology of $\mathcal{H}_{\mathbf{k}}(t)$ can be identified from the polarization directions at the four Dirac points~\cite{liu2013detecting}, as shown in Fig.~\ref{Fig4}(b). For instance, the polarization directions at the four Dirac points are the same for an initially trivial Hamiltonian $\mathcal{H}_{\mathbf{k}}(t)$. Once it enters the topological regime, the motion of the spin at the $\boldsymbol{\Gamma}$ or $\mathbf{M}$ point is reversed, manifesting the change of the corresponding polarization direction. This can be captured by the quantity $\text{sgn}[\dot{S}_z(\mathbf{D}_i,t)]$, as shown in Fig.~\ref{Fig5}(d). Based on the above points, we define the following time-dependent dynamical invariant for $\mathcal{H}_{\mathbf{k}}(t)$:
\begin{equation}
(-1)^{\nu(t)}=\prod_{i}\text{sgn}[\dot{S}_z(\mathbf{D}_i,t)]
\end{equation}
with $\nu(t)=1$ in the topological regime $|m^{\text{eff}}_z(t)|<4t_0$ and $\nu(t)=0$ in the trivial regime $|m^{\text{eff}}_z(t)|>4t_0$. With this, the topological number $W(t)$ of $\mathcal{H}_\mathbf{k}(t)$ and the Chern number of $\mathcal{H}^{\text{MF}}_{\mathbf{k}}$ are exactly given by
\begin{equation}\label{ch}
W(t)=\frac{\nu(t)}{2}\sum_{i}\text{sgn}[\dot{S}_z(\mathbf{D}_i,t)],~~\text{Ch}_1=W(t_s).
\end{equation}
As an example, the numerical results in Fig.~\ref{Fig4}(c) clearly show the nontrivial topology with $\text{Ch}_1=1$ and the trivial topology with $\text{Ch}_1=0$ for the postquench system with $(m_z,U)=(2,2)$ and $(3,4)$, respectively.
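The evaluation of Eq.~(\ref{ch}) from the measured signs of $\dot{S}_z$ at the four Dirac points can be sketched in a few lines (an illustrative helper of our own, not part of the formalism):

```python
import math

def dynamical_invariant(sz_dot):
    """Given values of dS_z/dt at the four Dirac points, return (nu, W):
    (-1)^nu equals the product of the four signs, and
    W = (nu/2) * sum of the signs, as in Eq. (ch)."""
    assert len(sz_dot) == 4
    signs = [math.copysign(1.0, s) for s in sz_dot]
    prod = signs[0] * signs[1] * signs[2] * signs[3]
    nu = 1 if prod < 0 else 0
    return nu, int(nu / 2 * sum(signs))

# A sign flip at a single Dirac point gives W = ±1; equal signs give W = 0.
print(dynamical_invariant([0.3, 0.2, 0.1, -0.4]))  # (1, 1)
print(dynamical_invariant([0.3, 0.2, 0.1, 0.4]))   # (0, 0)
```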
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure5.pdf}
\caption{(a) Mean-field phase diagram determined by the characteristic time $t_c$ and $t^*$, where the postquench parameters $A$-$F$ (stars) are chosen on the topological phase boundaries with $(m_z,U)=(1.0,7.38)$, $(2.0,4.92)$, $(3.0,2.46)$, $(4.0,0)$, $(5.0,-2.46)$, and $(6.0,-4.92)$. (b) Dynamical evolution of $m^{\text{eff}}_z(t)$ and $|\dot{n}_{\uparrow(\downarrow)}(t)|$ for $A$-$F$, where $t_c=t^{*}=0.180$ (upper) and $t_b=t^{*}=0.180$ (lower) with the same scaling. (c) Dynamical evolution of $|\dot{n}_{\uparrow(\downarrow)}(t)|$, where $t_b=0.165$ for $(m_z,U)=(3.0,4.0)$ and $(5.0,-1.0)$ while $t_b=0.230$ for $(m_z,U)=(2.0,2.0)$ and $(4.0,-4.0)$.
(d) Sign of $\dot{S}_z(\mathbf{D}_i,t)$ at four Dirac points. We have $\text{Ch}_1=1$ for the topological phase with $(m_z,U)=(2.0,2.0)$ due to $t_c=0<t^*$ and $\text{Ch}_1=0$ for the trivial phase with $(m_z,U)=(3.0,4.0)$ due to $t_c=9.485>t^*$. Here we set $t_{\text{so}}=t_0=1$, $m^{(c)}_{x}=m_z$, $m^{(c)}_y=0$, and $m^{(c)}_z=100$.
}
\label{Fig5}
\end{figure}
\section{Experimental detection}\label{sec:Experimental detection}
Based on the above analysis of the characteristic times, in this section we propose an experimentally feasible scheme to detect the mean-field topological phase diagram.
In real quench experiments, the characteristic time $t_{c}$ can be directly extracted from the spin dynamics by measuring $n_{\uparrow(\downarrow)}(t)$. The challenging part is measuring the characteristic time $t_{s}$, since it is not associated with any directly measurable quantity but is defined through the self-consistent solution of $\mathcal{H}_\mathbf{k}(t)$. To circumvent this difficulty, we instead compare $t^*$ and $t_{c}$ based on Eq.~(\ref{ch}) to detect the mean-field topological phase diagram; see Fig.~\ref{Fig5}(a). The time $t^*$ can be directly measured by defining an auxiliary time
\begin{equation}\label{eq:7s}
t_b\equiv \text{min}\{t|\ddot{n}_{\uparrow(\downarrow)}(t)=0\},
\end{equation}
at which the short-time evolution of $n_{\uparrow(\downarrow)}(t)$ changes fastest, i.e., $|\dot{n}_{\uparrow(\downarrow)}(t)|$ is maximal. Although the auxiliary time $t_{b}$ is larger (smaller) than $t^{*}$ in the topological (trivial) regime [see Fig.~\ref{Fig5}(c)], we indeed have $t_{b}=t^*$ on the topological phase boundaries, from which an analytical result for $t^*$ is obtained as
\begin{equation}\label{eq:t*}
t^*\approx \sqrt{\frac{3}{Q_0+Q_1}}
\end{equation}
with $Q_0=\sqrt{-135t_0^2+606t_0^2t_\text{so}^2+57t_\text{so}^4}$ and $Q_1=57t_0^2+15t_\text{so}^2$, consistent with Eq.~(\ref{tst}). In particular, we have $t^*\approx 0.178$ for $t_0=t_\text{so}=1$. The auxiliary time $t_{b}$ thus simplifies the detection of the mean-field topological phase diagram.
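A quick numerical evaluation of this expression (a sketch with variable names of our own) reproduces the quoted value:

```python
import math

# Evaluate t* = sqrt(3/(Q0 + Q1)) for t_0 = t_so = 1.
t0 = t_so = 1.0
Q0 = math.sqrt(-135*t0**2 + 606*t0**2*t_so**2 + 57*t_so**4)  # sqrt(528)
Q1 = 57*t0**2 + 15*t_so**2                                   # 72
t_star = math.sqrt(3 / (Q0 + Q1))
print(round(t_star, 3))  # 0.178
```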
The dynamical detection scheme is as follows. First, since $t^*$ scales linearly with $U$ and $m_{z}$ [see Fig.~\ref{Fig3}(c)] and is fixed on the mean-field topological phase boundaries, we can determine the special time $t^{*}$ from the noninteracting ($U=0$) quench dynamics. In this case, the Zeeman coupling is quenched from a large $m^{(c)}_{z}$ and a small $m^{(c)}_{x(y)}$ to the postquench parameters on the phase boundaries, i.e., $m_z=\pm 4t_0$ and $m_{x(y)}=0$. Then the time-dependent particle number density $n_{\uparrow(\downarrow)}(t)$ is measured in the short-time evolution. By examining $\dot{n}_{\uparrow(\downarrow)}(t)$, we obtain $t^*=t_{b}$. Second, we turn to the interacting quench dynamics with postquench $m_{z}$ to obtain the full mean-field phase diagram. Here $m^{(c)}_{x(y)}$ remains small and $m_{x(y)}=0$. By measuring $t_{c}$ from the quantum dynamics and comparing it with $t^*$, the mean-field topological phase diagram is obtained; see Fig.~\ref{Fig5}(a). Finally, the Chern number of the mean-field topological phases can also be obtained via Eq.~(\ref{ch}). Since there is only one topological phase transition point, the topological number $W(t)$ has the same value at $t_s$ and $t^*$. Hence we have
\begin{equation}
\text{Ch}_1\simeq W(t^*).
\end{equation}
With this scheme, the mean-field phase diagram of the interacting Chern insulator can be completely determined, which paves the way for experimental studies of topological phases in interacting systems and the discovery of new phases.
\section{Conclusion and discussion}\label{sec:Conclusion}
In conclusion, we have studied the 2D QAH model with a weak-to-intermediate Hubbard interaction by performing quantum quenches. A nonequilibrium detection scheme for the mean-field topological phase diagram is proposed by observing the characteristic times $t_{s}$ and $t_{c}$, which capture the emergence of the dynamical self-consistent particle number density and the dynamical topological phase transition of the time-dependent postquench Hamiltonian, respectively. Two nontrivial dynamical properties are further uncovered. (i) After quenching the Zeeman coupling from the trivial regime to the topological regime, $t_s>t_c$ ($t_s<t_c$) is observed for the repulsive (attractive) interaction, and $t_s=t_c$ occurs on the topological phase boundaries. (ii) The Chern number of the postquench mean-field topological phase can be determined from the spin dynamics at the four Dirac points at the characteristic time $t_s$. Our results provide feasible schemes to identify the mean-field topological phases in an interacting Chern insulator and can be applied to quantum simulation experiments in the near future.
The present dynamical scheme applies to Chern insulators with weak-to-intermediate interactions. When general strong interactions are considered, on the other hand, the system may host more abundant magnetic phases~\cite{ziegler2020correlated,ziegler2022large,tirrito2022large}. Generalizing these dynamical properties to the related studies and further identifying the rich topological phases would be interesting and worthwhile future work.
\section*{ACKNOWLEDGEMENT}
We acknowledge the valuable discussions with Sen Niu and Ting Fung Jeffrey Poon. This work was supported by National Natural Science Foundation of China (Grants No. 11825401 and No. 11921005), the National Key R\&D Program of China (Project No. 2021YFA1400900), Strategic Priority Research Program of the Chinese Academy of Science (Grant No. XDB28000000), and by the Open Project of Shenzhen Institute of Quantum Science and Engineering (Grant No.SIQSE202003). Lin Zhang also acknowledges support from Agencia Estatal de Investigaci{\'o}n (the R\&D project CEX2019-000910-S, funded by MCIN/AEI/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI), Fundaci{\'o} Privada Cellex, Fundaci{\'o} Mir-Puig, and Generalitat de Catalunya (AGAUR Grant No. 2017 SGR 1341, CERCA program).
\section{Introduction}
In \cite{bib:AbMu}, Ablowitz and Musslimani introduced the nonlocal
nonlinear Schr\"odinger equation and obtained its explicit solutions by
inverse scattering. A great deal of work has since been done on this
equation and related
equations.\cite{bib:AbMu2,bib:AbMu3,bib:Gadz,bib:HuangLing,bib:Khare,
bib:YJK,bib:LiXu,bib:Lou,bib:Zhu,bib:Sarma}
In \cite{bib:Fokashigh}, Fokas studied higher-dimensional equations and
introduced the nonlocal Davey-Stewartson I equation
\begin{equation}
\begin{array}{l}
\D\I u_t=u_{xx}+u_{yy}+2\sigma u^2\bar u^*+2uw_y,\\
\D w_{xx}-w_{yy}=2\sigma(u\bar u^*)_{y},\\
\end{array} \label{eq:EQ_DS}
\end{equation}
where $w$ satisfies $\bar w^*=w$. Here $\bar f(x,y,t)=f(-x,y,t)$ for
a function $f$, and $*$ denotes complex conjugation. The solution of
(\ref{eq:EQ_DS}) is PT symmetric in the sense that if
$(u(x,y,t),w(x,y,t))$ is a solution of (\ref{eq:EQ_DS}), then so is
$(u^*(-x,y,-t), w^*(-x,y,-t))$. This leads to a conserved density
$u\bar u^*$, which is invariant under $x\to -x$ together with
complex conjugation.
As is well known, the usual Davey-Stewartson I equation does not possess
a Darboux transformation in differential form. Instead, it has a
binary Darboux transformation in integral
form.\cite{bib:Salle,bib:MSbook} However, for the nonlocal
Davey-Stewartson I equation (\ref{eq:EQ_DS}), we can construct a
Darboux transformation in differential form. As with the nonlocal
equations in 1+1 dimensions, the solutions may have singularities.
Starting from seed solutions that are zero or an exponential
function of $t$, we prove that the derived solutions can be globally
defined and bounded for all $(x,y,t)\in\hr^3$ if the parameters are
suitably chosen. Unlike the usual Davey-Stewartson I equation, where
the localized solutions derived from a zero seed are
dromions,\cite{bib:BLP,bib:Herm,bib:Hiet} the solutions derived here are
soliton solutions in the sense that a Darboux transformation of
degree $n$ produces $n$ peaks. If the seed solution is an exponential
function of $t$, the moduli of the derived solutions vary sharply
across certain straight lines. We call them ``line dark soliton''
solutions.
In Section~\ref{sect:LPDT} of this paper, the Lax pair for the
nonlocal Davey-Stewartson I equation is reviewed and its symmetries
are considered. The Darboux transformation is then constructed and
the explicit expressions of the new solutions are derived. In
Sections~\ref{sect:soliton} and \ref{sect:darksoliton}, the
soliton solutions and ``line dark soliton'' solutions are
constructed, respectively. The global existence, boundedness, and
asymptotic behavior of these solutions are discussed.
\section{Lax pair and Darboux transformation}\label{sect:LPDT}
Consider the $2\times 2$ linear system
\begin{equation}
\begin{array}{l}
\D\Phi_x=\tau J\Phi_y+\tau
P\Phi=\tau\left(\begin{array}{cc}
1&0\\0&-1\end{array}\right)\Phi_y
+\tau\left(\begin{array}{cc}0&-u\\v&0\end{array}\right)\Phi,\\
\D\Phi_t=-2\I\tau^2J\Phi_{yy}-2\I\tau^2P\Phi_y+\I Q\Phi\\
\D\qquad=-2\I\tau^2\left(\begin{array}{cc}
1&0\\0&-1\end{array}\right)\Phi_{yy}
-2\I\tau^2\left(\begin{array}{cc}
0&-u\\v&0\end{array}\right)\Phi_y\\
\D\qquad+\I\tau\left(\begin{array}{cc}
-\tau uv-\tau w_y-w_x&u_x+\tau u_y\\
v_x-\tau v_y&\tau uv+\tau w_y-w_x\end{array}\right)\Phi
\end{array}\label{eq:LP}
\end{equation}
where $\tau=\pm 1$, $u,v,w$ are functions of $(x,y,t)$. The
compatibility condition $\Phi_{xt}=\Phi_{tx}$ gives the evolution
equation
\begin{equation}
\begin{array}{l}
\D\I u_t=u_{xx}+\tau^2u_{yy}+2\tau^2u^2v+2\tau^2uw_y,\\
\D-\I v_t=v_{xx}+\tau^2v_{yy}+2\tau^2uv^2+2\tau^2vw_y,\\
\D w_{xx}-\tau^2w_{yy}=2\tau^2(uv)_{y}.\\
\end{array}\label{eq:eq}
\end{equation}
When $\tau=1$ and $v=\sigma\bar u^*$ $(\sigma=\pm 1)$, (\ref{eq:eq})
becomes the nonlocal Davey-Stewartson I equation (\ref{eq:EQ_DS}),
and the Lax pair (\ref{eq:LP}) becomes
\begin{equation}
\begin{array}{l}
\D\Phi_x=U(\partial)\Phi\overset\triangle
=J\Phi_y+P\Phi
=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)\Phi_y
+\left(\begin{array}{cc}
0&-u\\\sigma\bar u^*&0\end{array}\right)\Phi,\\
\D\Phi_t=V(\partial)\Phi\overset\triangle
=-2\I J\Phi_{yy}-2\I P\Phi_y+\I Q\Phi\\
\D\qquad=-2\I\left(\begin{array}{cc}
1&0\\0&-1\end{array}\right)\Phi_{yy}
-2\I\left(\begin{array}{cc}
0&-u\\\sigma\bar u^*&0\end{array}\right)\Phi_y\\
\D\qquad+\I\left(\begin{array}{cc}
-\sigma u\bar u^*-w_y-w_x&u_x+u_y\\
\sigma(\bar u^*)_x-\sigma(\bar u^*)_y
&\sigma u\bar u^*+w_y-w_x
\end{array}\right)\Phi
\end{array}\label{eq:LP_DS}
\end{equation}
where $\D \partial=\frac{\partial}{\partial y}$. Here $U(\partial)$
indicates that $U$ is a differential operator with respect to $y$.
The coefficients in the Lax pair (\ref{eq:LP_DS}) satisfy
\begin{equation}
\bar J^*=-KJK^{-1},\quad
\bar P^*=-KPK^{-1},\quad
\bar Q^*=-KQK^{-1}\label{eq:sym_coef}
\end{equation}
where $\D K=\left(\begin{array}{cc}0&\sigma\\1&0\end{array}\right)$.
Here $M^*$ refers to the complex conjugate (without transpose) of
a matrix $M$. Equation (\ref{eq:sym_coef}) implies
\begin{equation}
\overline{U(\partial)^*}=-KU(\partial)K^{-1},\quad
\overline{V(\partial)^*}=KV(\partial)K^{-1}.
\end{equation}
Hence we have
\begin{lemma}\label{lemma:sym}
If $\D\Phi=\left(\begin{array}{c}\xi\\\eta\end{array}\right)$ is a
solution of (\ref{eq:LP_DS}), then so is $\D
K\bar\Phi^*=\left(\begin{array}{c}\sigma\bar\eta^*\\\bar
\xi^*\end{array}\right)$.
\end{lemma}
By Lemma~\ref{lemma:sym}, take a solution
$\D\left(\begin{array}{c}\xi\\\eta\end{array}\right)$ of
(\ref{eq:LP_DS}) and let $\D
H=\left(\begin{array}{cc}\xi&\sigma\bar\eta^*\\
\eta&\bar\xi^*\end{array}\right)$, then $G(\partial)=\partial-S$
with $S=H_yH^{-1}$ gives a Darboux
transformation.\cite{bib:ZZX,bib:GHZbook} Written explicitly,
\begin{equation}
S=\frac 1{\xi\bar\xi^*-\sigma\eta\bar\eta^*}
\left(\begin{array}{cc}\bar\xi^*\xi_y-\sigma\eta(\bar\eta^*)_y
&\sigma\xi(\bar\eta^*)_y-\sigma\bar\eta^*\xi_y\\
\bar\xi^*\eta_y-\eta(\bar\xi^*)_y&\xi(\bar\xi^*)_y-\sigma\bar\eta^*\eta_y
\end{array}\right).
\end{equation}
$G(\partial)$ also keeps the symmetries (\ref{eq:sym_coef})
invariant. After the action of $G(\partial)$, $(u,w)$ is transformed
to $(\widetilde u,\widetilde w)$ by
\begin{equation}
\begin{array}{l}
G(\partial)U(\partial)+G_x(\partial)
=\widetilde U(\partial)G(\partial),\quad
G(\partial)V(\partial)+G_t(\partial)
=\widetilde V(\partial)G(\partial).
\end{array} \label{eq:Gtransf}
\end{equation}
That is,
\begin{equation}
\begin{array}{l}
\D\widetilde u=u+2\sigma\frac{\bar\eta^*\xi_y-\xi(\bar\eta^*)_y}
{\bar\xi^*\xi-\sigma\bar\eta^*\eta},\\
\D\widetilde w=w+2\frac{(\bar\xi^*\xi-\sigma\bar\eta^*\eta)_y}
{\bar\xi^*\xi-\sigma\bar\eta^*\eta}.
\end{array}\label{eq:DTuv1}
\end{equation}
The Darboux transformation of degree $n$ is given by a matrix-valued
differential operator
\begin{equation}
G(\partial)=\partial^n+G_1\partial^{n-1}+\cdots+G_n
\end{equation}
of degree $n$ which is determined by
\begin{equation}
G(\partial)H_j=0\quad(j=1,\cdots,n)\label{eq:GH=0}
\end{equation}
for $n$ matrix solutions $H_j$ $(j=1,\cdots,n)$ of (\ref{eq:LP}). By
comparing the coefficients of $\partial^j$ in (\ref{eq:Gtransf}),
the transformation of $(P,Q)$ is
\begin{equation}
\widetilde P=P-[J,G_1],\quad
\widetilde Q=Q+2[J,G_2]-2[JG_1-P,G_1]+4JG_{1,y}-2nP_y.
\label{eq:PQ}
\end{equation}
Rewrite (\ref{eq:GH=0}) as
\begin{equation}
\partial^nH_j+G_1\partial^{n-1}H_j+\cdots+G_nH_j=0\quad
(j=1,\cdots,n),
\end{equation}
then
\begin{equation}
\begin{array}{l}
\D\left(\begin{array}{cccc}G_1&G_2&\cdots&G_n\end{array}\right)
\left(\begin{array}{cccc}\partial^{n-1}H_1&\partial^{n-1}H_2
&\cdots&\partial^{n-1}H_n\\
\partial^{n-2}H_1&\partial^{n-2}H_2&\cdots&\partial^{n-2}H_n\\
\vdots&\vdots&&\vdots\\
H_1&H_2&\cdots&H_n\end{array}\right)\\
\D=\left(\begin{array}{cccc}-\partial^nH_1&-\partial^nH_2
&\cdots&-\partial^nH_n\end{array}\right).
\end{array}
\end{equation}
Write $\D H_j=\left(\begin{array}{cc}h_{11}^{(j)}&h_{12}^{(j)}\\
h_{21}^{(j)}&h_{22}^{(j)}\end{array}\right)$. By reordering the rows
and columns, we have
\begin{equation}
\begin{array}{l}
\D\left(\begin{array}{cccccc}(G_1)_{11}&\cdots&(G_n)_{11}
&(G_1)_{12}&\cdots&(G_n)_{12}\\
(G_1)_{21}&\cdots&(G_n)_{21}
&(G_1)_{22}&\cdots&(G_n)_{22}\end{array}\right)W=-R\\
\end{array}\label{eq:G12b}
\end{equation}
where $\D W=(W_{jk})_{1\le j,k\le 2}$, $\D R=(R_{jk})_{1\le j,k\le
2}$,
\begin{equation}
W_{jk}=\left(\begin{array}{cccc}
\partial^{n-1}h_{jk}^{(1)}&\partial^{n-1}h_{jk}^{(2)}
&\cdots&\partial^{n-1}h_{jk}^{(n)}\\
\partial^{n-2}h_{jk}^{(1)}&\partial^{n-2}h_{jk}^{(2)}
&\cdots&\partial^{n-2}h_{jk}^{(n)}\\
\vdots&\vdots&&\vdots\\
h_{jk}^{(1)}&h_{jk}^{(2)}&\cdots&h_{jk}^{(n)}\\
\end{array}\right),\label{eq:Wjk}
\end{equation}
\begin{equation}
R_{jk}=\left(\begin{array}{cccc}
\partial^nh_{jk}^{(1)}&\partial^nh_{jk}^{(2)}
&\cdots&\partial^nh_{jk}^{(n)}\end{array}\right)\quad(j,k=1,2).
\label{eq:Rjk}
\end{equation}
Solving $G$ from (\ref{eq:G12b}), we get the new solution of the
equation (\ref{eq:EQ_DS}) from (\ref{eq:PQ}). In particular,
\begin{equation}
\widetilde u=u+2(G_1)_{12}.\label{eq:trans_u}
\end{equation}
\section{Soliton solutions}\label{sect:soliton}
\subsection{Single soliton solutions}\label{subsect:soliton1}
Let $u=0$, then $\D\Phi=\left(\begin{array}{c}
\xi\\\eta\end{array}\right)$ satisfies
\begin{equation}
\begin{array}{l}
\D\Phi_x=\left(\begin{array}{cc}
1&0\\0&-1\end{array}\right)\Phi_y,\quad
\D\Phi_t=-2\I\left(\begin{array}{cc}
1&0\\0&-1\end{array}\right)\Phi_{yy}.
\end{array} \label{eq:LP_DSeg}
\end{equation}
Take a special solution
\begin{equation}
\begin{array}{l}
\D \xi=\E{\lambda x+\lambda y-2\I\lambda^2t}
+\E{-\lambda^*x-\lambda^*y-2\I\lambda^{*2}t},\\
\D \eta=a\E{\lambda x-\lambda y+2\I\lambda^2t}
+b\E{-\lambda^*x+\lambda^*y+2\I\lambda^{*2}t},
\end{array}\label{eq:1stnxieta}
\end{equation}
where $\lambda,a,b$ are complex constants. Then (\ref{eq:DTuv1}) gives
the explicit solution
\begin{equation}
\begin{array}{l}
\D\widetilde u
\D =\frac{2\sigma\lambda_R(a^*-b^*)}{D}
\E{2\I\lambda_I y-4\I(\lambda_R^2-\lambda_I^2)t}
\end{array}\label{eq:u1eg}
\end{equation}
of (\ref{eq:EQ_DS}) where
\begin{equation}
\begin{array}{l}
D=(2-\sigma|a|^2-\sigma|b|^2)\cosh(2\lambda_R y
+8\lambda_R\lambda_It)\\
\qquad+\sigma(|a|^2-|b|^2)\sinh(2\lambda_R y+8\lambda_R\lambda_It)\\
\qquad+2(1-\sigma\re(ab^*))\cosh(2\lambda_R x)
-2\I\sigma\im(ab^*)\sinh(2\lambda_Rx).
\end{array}
\end{equation}
Here $z_R=\re z$ and $z_I=\im z$ for a complex number $z$. Since
$\widetilde w$ is regarded as an auxiliary function in
(\ref{eq:EQ_DS}), hereafter we mainly study the behavior of
$\widetilde u$.
Note that $\widetilde u$ is global if $|a|<1$ and $|b|<1$, since
$\re D>0$ in this case. Moreover, the peak moves with velocity
$(v_x,v_y)=(0,-4\lambda_I)$.
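The positivity of $\re D$ underlying this globalness claim can be sampled numerically (our own sketch; $X=2\lambda_R x$ and $Y=2\lambda_R y+8\lambda_R\lambda_I t$ are shorthand we introduce):

```python
import math

def re_D(a, b, X, Y, sigma):
    """Real part of the denominator D in the one-soliton solution, written in
    the shorthand X = 2*lam_R*x, Y = 2*lam_R*y + 8*lam_R*lam_I*t."""
    return ((2 - sigma*(abs(a)**2 + abs(b)**2)) * math.cosh(Y)
            + sigma*(abs(a)**2 - abs(b)**2) * math.sinh(Y)
            + 2*(1 - sigma*(a*b.conjugate()).real) * math.cosh(X))

# Sample on a small grid for the parameters of Figure 1: a = 0.2, b = 0.1i.
a, b = 0.2 + 0j, 0.1j
vals = [re_D(a, b, X, Y, sigma)
        for X in (-3.0, 0.0, 3.0) for Y in (-3.0, 0.0, 3.0)
        for sigma in (1, -1)]
print(min(vals) > 0)  # True
```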
\begin{remark}
The solution (\ref{eq:u1eg}) may have singularities when the
parameters are not chosen suitably, say, when $\sigma=-1$, $a=1$,
$b=-2$, or $\sigma=1$, $a=2$, $b=1/4$.
\end{remark}
Figure~1 shows a $1$ soliton solution with parameters $\sigma=-1$,
$t=20$, $\lambda=0.07-1.5\I$, $a=0.2$, $b=0.1\I$.
\begin{figure}\begin{center}
\scalebox{1.6}{\includegraphics[150,100]{1soliton.ps}}
\caption{$|\widetilde u|$ of a $1$ soliton solution.}\label{fig:stn1}
\end{center}\end{figure}
\subsection{Multiple soliton solutions}\label{subsect:solitonn}
For an $n\times n$ matrix $M$, define $\D||M||=\sup_{x\in
\smallhc^n,||x||=1}||Mx||$ where $||\cdot||$ is the standard
Hermitian norm in $\hc^n$. The following standard facts hold.
(i) $||MN||\le ||M||\,||N||$.
(ii) Each entry $M_{jk}$ of $M$ satisfies $|M_{jk}|\le||M||$.
(iii) $|\det M|\le||M||^n$.
(iv) If $||M||<1$, then $\D||(I+M)^{-1}||\le(1-||M||)^{-1}$.
(v) If $||M||<1$, then $\D|\det(I+M)|\ge(1-||M||)^n$.
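Facts (iv) and (v) are the ones used repeatedly below; a concrete diagonal $2\times 2$ example (our own illustration) verifies them, since for a diagonal $M$ the operator norm is the largest entry modulus:

```python
# Diagonal 2x2 example: ||M|| = max |m_k|, (I+M)^{-1} has entries 1/(1+m_k),
# and det(I+M) = (1+m_1)(1+m_2).
m = [0.5, -0.3 + 0.2j]
norm_M = max(abs(v) for v in m)              # ||M|| = 0.5 < 1
inv_norm = max(abs(1 / (1 + v)) for v in m)  # ||(I+M)^{-1}||
det_abs = abs((1 + m[0]) * (1 + m[1]))       # |det(I+M)|
print(inv_norm <= 1 / (1 - norm_M))  # fact (iv): True
print(det_abs >= (1 - norm_M)**2)    # fact (v):  True
```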
Now we construct explicit solutions according to (\ref{eq:trans_u}).
As in (\ref{eq:1stnxieta}), take
\begin{equation}
\begin{array}{l}
\D\xi_k=\E{\lambda_k(x+y)-2\I\lambda_k^2t}
+\E{-\lambda_k^*(x+y)-2\I\lambda_k^{*2}t},\\
\D\eta_k=a_k\E{\lambda_k(x-y)+2\I\lambda_k^2t}
+b_k\E{-\lambda_k^*(x-y)+2\I\lambda_k^{*2}t},
\label{eq:Hjeg}
\end{array}
\end{equation}
then $W_{jk}$'s in (\ref{eq:Wjk}) and $R_{jk}$'s in (\ref{eq:Rjk})
are
\begin{equation}\fl
\begin{array}{l}
(W_{11})_{jk}=\lambda_k^{n-j}\ee_{k+}
+(-\lambda_k^*)^{n-j}\ee_{k+}^{*-1},\quad
(W_{12})_{jk}=\sigma a_k^*(-\lambda_k^*)^{n-j}\ee_{k+}^{*-1}
+\sigma b_k^*\lambda_k^{n-j}\ee_{k+},\\
(W_{21})_{jk}=a_k(-\lambda_k)^{n-j}\ee_{k-}
+b_k(\lambda_k^*)^{n-j}\ee_{k-}^{*-1},\quad
(W_{22})_{jk}=(\lambda_k^*)^{n-j}\ee_{k-}^{*-1}
+(-\lambda_k)^{n-j}\ee_{k-},\\
(R_{11})_{1k}=\lambda_k^n\ee_{k+}
+(-\lambda_k^*)^n\ee_{k+}^{*-1},\quad
(R_{12})_{1k}=\sigma a_k^*(-\lambda_k^*)^n\ee_{k+}^{*-1}
+\sigma b_k^*\lambda_k^n\ee_{k+},\\
(R_{21})_{1k}=a_k(-\lambda_k)^n\ee_{k-}
+b_k(\lambda_k^*)^n\ee_{k-}^{*-1},\quad
(R_{22})_{1k}=(\lambda_k^*)^n\ee_{k-}^{*-1}
+(-\lambda_k)^n\ee_{k-}\\
(j,k=1,\cdots,n)
\end{array}\label{eq:W22}
\end{equation}
where
\begin{equation}
\ee_{k\pm}=\E{\lambda_k(x\pm y)\mp 2\I\lambda_k^2t}.\label{eq:ee}
\end{equation}
However, for the moment we regard $\ee_{k+},\ee_{k-}$ $(k=1,\cdots,n)$
as arbitrary complex numbers rather than requiring that (\ref{eq:ee}) hold.
Denote $\D L=\diag((-1)^{n-1},(-1)^{n-2},\cdots,-1,1)$,
\begin{equation}
F=(\lambda_k^{n-j})_{1\le j,k\le n},\quad
f=(\lambda_1^n,\cdots,\lambda_n^n),
\label{eq:V}
\end{equation}
\begin{equation}
\begin{array}{l}
A=\diag(a_1,\cdots,a_n),\quad
B=\diag(b_1,\cdots,b_n),
\end{array}\label{eq:AB}
\end{equation}
\begin{equation}
\begin{array}{l}
\D E_\pm=\diag(\ee_{1\pm},\cdots,\ee_{n\pm}).
\end{array}
\end{equation}
Then
\begin{equation}
W=\left(\begin{array}{cc}FE_++LF^*E^{*-1}_{+}
&\sigma FB^*E_++\sigma LF^*A^*E^{*-1}_{+}\\
LFAE_-+F^*BE^{*-1}_{-}
&LFE_-+F^*E^{*-1}_{-}\end{array}\right),
\label{eq:W}
\end{equation}
\begin{equation}
R=\left(\begin{array}{cc}fE_++(-1)^nf^*E^{*-1}_{+}
&\sigma fB^*E_++\sigma(-1)^nf^*A^*E^{*-1}_{+}\\
(-1)^nfAE_-+f^*BE^{*-1}_{-}
&(-1)^nfE_-+f^*E^{*-1}_{-}\end{array}\right).
\label{eq:R}
\end{equation}
By using the identity
\begin{equation}
\left(\begin{array}{cc}A&B\\C&D\end{array}\right)^{-1}
=\left(\begin{array}{cc}A^{-1}+A^{-1}B\Delta^{-1}CA^{-1}&-A^{-1}B\Delta^{-1}\\
-\Delta^{-1}CA^{-1}&\Delta^{-1}\end{array}\right)
\end{equation}
for a block matrix where $\Delta=D-CA^{-1}B$, (\ref{eq:G12b}) gives
\begin{equation}
\Big((G_1)_{12},\cdots,(G_n)_{12}\Big)=-(RW^{-1})_{12}
=-(R_{12}-R_{11}W_{11}^{-1}W_{12})\cir W^{-1} \label{eq:G12WR}
\end{equation}
where
\begin{equation}
\begin{array}{l}
\D\cir{W}=W_{22}-W_{21}W_{11}^{-1} W_{12}\\
\D\qquad=L\Big(FE_-+LF^*E^{*-1}_{-}
-\sigma(FAE_-+LF^*BE^{*-1}_{-})\cdot\\
\D\qquad\quad\cdot(FE_++LF^*E^{*-1}_{+})^{-1}
(FB^*E_++LF^*A^*E^{*-1}_{+})\Big).
\end{array}
\label{eq:W0}
\end{equation}
\begin{lemma}\label{lemma:lbound}
Suppose $a_j$ and $b_j$ are nonzero complex constants with
$|a_j|<1$, $|b_j|<1$ $(j=1,\cdots,n)$, $\kappa_1,\cdots,\kappa_n$
are nonzero real constants with $|\kappa_j|\ne|\kappa_k|$
$(j,k=1,\cdots,n;\,j\ne k)$, then there exist positive constants
$\delta$, $C_1$ and $C_2$, which depend on $a_j$'s, $b_j$'s and
$\kappa_j$'s, such that $|\det W|\ge C_1$ and
\begin{equation}
\begin{array}{l}
\D|(G_1)_{12}|\le C_2\max_{1\le k\le n}
\frac{|\ee_{k+}|}{1+|\ee_{k+}|^2}
\max_{1\le k\le n}\frac{|\ee_{k-}|}{1+|\ee_{k-}|^2}
\end{array}
\end{equation}
hold whenever $|\lambda_j-\I\kappa_j|<\delta$ and $\ee_{j\pm}\in\hc$
$(j=1,\cdots,n)$.
\end{lemma}
\demo Denote $F^{-1}LF^*=I+Z$, then $Z=0$ if
$\lambda_1,\cdots,\lambda_n$ are all purely imaginary. From
(\ref{eq:W}) and (\ref{eq:W0}),
\begin{equation}
\det W=\det(FE_++LF^*E^{*-1}_{+})\det\cir W,
\label{eq:WWo}
\end{equation}
\begin{equation}
\cir{W}=L(FE_-+LF^*E^{*-1}_{-})(I-\sigma\chi_-\chi_+)
\label{eq:cirW}
\end{equation}
where
\begin{equation}
\begin{array}{l}
\chi_+=(FE_++LF^*E^{*-1}_{+})^{-1}(FB^*E_+
+LF^*A^*E^{*-1}_{+})\\
\qquad=\Xi_{1+}\Xi_{0+}^{-1}+\Xi_{0+}^{-1}(I+ZE^{*-1}_{+}
\Xi_{0+}^{-1})^{-1}ZE^{*-1}_{+}
(A^*-\Xi_{1+}\Xi_{0+}^{-1}),\\
\chi_-=(FE_-+LF^*E^{*-1}_{-})^{-1}(FAE_-
+LF^*BE^{*-1}_{-})\\
\qquad=\Xi_{1-}\Xi_{0-}^{-1}+\Xi_{0-}^{-1}(I+ZE^{*-1}_{-}
\Xi_{0-}^{-1})^{-1}ZE^{*-1}_{-}
(B-\Xi_{1-}\Xi_{0-}^{-1}),\\
\end{array}
\end{equation}
\begin{equation}
\Xi_{0\pm}=E_\pm+E^{*-1}_{\pm},\quad
\Xi_{1-}=AE_-+BE^{*-1}_{-},\quad
\Xi_{1+}=B^*E_++A^*E^{*-1}_{+}.
\end{equation}
Let $\D c_0=\max_{1\le k\le n}\{|a_k|,|b_k|\}<1$. Suppose
$\D||Z||<\frac{1-c_0}2$, then we have the following estimates.
\begin{equation}
||A||\le c_0<1,\quad
||B||\le c_0<1,\label{eq:esti_AB}
\end{equation}
\begin{equation}
||E_{\pm}\Xi_{0\pm}^{-1}||\le 1,\quad
||E^{*-1}_{\pm}\Xi_{0\pm}^{-1}||\le 1,
\end{equation}
\begin{equation}
||\Xi_{0\pm}||\ge2,\quad
||\Xi_{0\pm}^{-1}||\le\frac 12,\quad
||\Xi_{1\pm}\Xi_{0\pm}^{-1}||\le c_0<1,
\end{equation}
\begin{equation}
\begin{array}{l}
\D||E^{*-1}_{+}(\Xi_{1+}\Xi_{0+}^{-1}-A^*)||
=\max_{1\le k\le n}\frac{|a_k-b_k|\,
|\ee_{k+}|}{1+|\ee_{k+}|^2}\le1,\\
\D||E^{*-1}_{-}(\Xi_{1-}\Xi_{0-}^{-1}-B)||
=\max_{1\le k\le n}\frac{|a_k-b_k|\,
|\ee_{k-}|}{1+|\ee_{k-}|^2}\le1,
\end{array}
\end{equation}
\begin{equation}
||(I+Z E^{*-1}_{\pm}\Xi_{0\pm}^{-1})^{-1}||\le(1-||Z||\,
||E^{*-1}_{\pm}\Xi_{0\pm}^{-1}||)^{-1}
\le(1-||Z||)^{-1}\le 2.
\end{equation}
Hence $||\chi_\pm-\Xi_{1\pm}\Xi_{0\pm}^{-1}||\le||Z||$,
\begin{equation}
||\chi_\pm||\le c_0+||Z||\le\frac{1+c_0}2<1.
\label{eq:esti_chi}
\end{equation}
Denote
\begin{equation}
\pi_0=|\det F|\Big|_{\lambda_j=\smallI\kappa_j\atop{j=1,\cdots,n}},\quad
\pi_1=||F^{-1}||\Big|_{\lambda_j=\smallI\kappa_j\atop{j=1,\cdots,n}},\quad
\pi_2=||f||\Big|_{\lambda_j=\smallI\kappa_j\atop{j=1,\cdots,n}}.
\end{equation}
Clearly, $\pi_0$, $\pi_1$, $\pi_2$ are all positive since $\D\det
F|_{\lambda_j=\smallI\kappa_j\atop{j=1,\cdots,n}}$ is a Vandermonde
determinant. By continuity, there exists $\delta>0$ such that
$\D\frac{\pi_0}2\le|\det F|\le 2\pi_0$, $||F^{-1}||\le2\pi_1$,
$||f||\le2\pi_2$, and $\D||F^{-1}LF^*-I||=||Z||<\frac{1-c_0}2$
whenever $|\lambda_j-\I\kappa_j|<\delta$. (\ref{eq:WWo}) and
(\ref{eq:cirW}) lead to
\begin{equation}
\begin{array}{l}
|\det W|\!=\!|\det F|^2\,|\det\Xi_{0+}|\,|
\det(I+ZE^{*-1}_{+}\Xi_{0+}^{-1})|\cdot\\
\qquad\cdot|\det\Xi_{0-}|\,|\det(I+ZE^{*-1}_{-}\Xi_{0-}^{-1})|\,
|\det(1-\sigma\chi_-\chi_+)|\\
\D\qquad\ge\pi_0^2(1-||Z||)^{2n}(1-||\chi_+||\,||\chi_-||)^n
\ge\pi_0^2\Big(\frac{1+c_0}{2}\Big)^{2n}\Big(1-\Big(\frac{1+c_0}2\Big)^2\Big)^{n},
\end{array}
\end{equation}
which is a uniform positive lower bound for any $\ee_{j\pm}\in\hc$
$(j=1,\cdots,n)$ when $|\lambda_j-\I\kappa_j|<\delta$.
By (\ref{eq:W}), (\ref{eq:R}), (\ref{eq:G12WR}) and (\ref{eq:W0}),
\begin{equation}
\begin{array}{l}
\D((G_1)_{12},(G_2)_{12},\cdots,(G_n)_{12})
=-(R_{12}-R_{11}W_{11}^{-1}W_{12})\cir W^{-1}\\
\D\qquad=-\sigma fE_+\Xi_{0+}^{-1}(I+ZE^{*-1}_{+}\Xi_{0+}^{-1})^{-1}
(I+Z)(B^*-A^*)E^{*-1}_{+}\cir W^{-1}\\
\qquad\quad-\sigma(-1)^nf^*E^{*-1}_{+}\Xi_{0+}^{-1}
(I+ZE^{*-1}_{+}\Xi_{0+}^{-1})^{-1}
(A^*-B^*)E_+\cir W^{-1}\\
\D\qquad=-\sigma\Big(f-(-1)^nf^*+(fE_+\Xi_{0+}^{-1}+(-1)^nf^*E^{*-1}_{+}\Xi_{0+}^{-1})
(I+ZE^{*-1}_{+}\Xi_{0+}^{-1})^{-1}Z\Big)\cdot\\
\qquad\quad\cdot(B^*-A^*)E_+E^{*-1}_{+}\Xi_{0+}^{-1}
(I-\sigma\chi_-\chi_+)^{-1}\Xi_{0-}^{-1}(I+ZE^{*-1}_{-}
\Xi_{0-}^{-1})^{-1}F^{-1}L^{-1}.
\end{array}\label{eq:G12expr}
\end{equation}
Here we have used
$I+Z=(I+ZE^{*-1}_{+}\Xi_{0+}^{-1})+ZE_+\Xi_{0+}^{-1}$. Hence, by
using (\ref{eq:esti_AB})--(\ref{eq:esti_chi}),
\begin{equation}
\begin{array}{l}
\D|(G_1)_{12}|\le
8(||f||+||f^*||)||F^{-1}||\,
||(I-\sigma\chi_-\chi_+)^{-1}||\,||\Xi_{0+}^{-1}||\,||\Xi_{0-}^{-1}||\\
\D\qquad\le 64\pi_1\pi_2\Big(1-\Big(\frac{1+c_0}2\Big)^2\Big)^{-1}
\max_{1\le k\le n}\frac{|\ee_{k+}|}{1+|\ee_{k+}|^2}
\max_{1\le k\le n}\frac{|\ee_{k-}|}{1+|\ee_{k-}|^2}.
\end{array}
\end{equation}
The lemma is proved.
Now we consider the solutions of the nonlocal Davey-Stewartson I
equation. That is, we consider the case where $\ee_{j\pm}$'s are
taken as (\ref{eq:ee}).
\begin{theorem}
Suppose $a_j$ and $b_j$ are nonzero complex constants with
$|a_j|<1$, $|b_j|<1$ $(j=1,\cdots,n)$, $\kappa_1,\cdots,\kappa_n$
are nonzero real constants with $|\kappa_j|\ne|\kappa_k|$
$(j,k=1,\cdots,n;\,j\ne k)$, then there exists a positive constant
$\delta$ such that the following results hold for the derived
solution $\widetilde u=2(G_1)_{12}$ of the nonlocal Davey-Stewartson
I equation when $\re\lambda_j\ne 0$ and
$|\lambda_j-\I\kappa_j|<\delta$ $(j=1,\cdots,n)$.
\noindent(i) $\widetilde u$ is defined globally for $(x,y,t)\in\hr^3$.
\noindent(ii) For fixed $t$, $\widetilde u$ tends to zero
exponentially as $(x,y)\to\infty$.
\noindent(iii) Let $y=\widetilde y+vt$ and keep $(x,\widetilde y)$
bounded, then $\widetilde u\to 0$ as $t\to\infty$ if
$v\ne-4\lambda_{kI}$ for all $k$.
\end{theorem}
\demo It was shown in Section~\ref{sect:LPDT} that $\widetilde u$ is
a solution of the nonlocal Davey-Stewartson equation.
(i) According to Lemma~\ref{lemma:lbound}, $|\det W|$ has a uniform
positive lower bound. Hence $\widetilde u$ is defined globally.
\vskip6pt(ii) When $x\ge 0$ and $y\ge 0$,
\begin{equation}
\begin{array}{l}
|\ee_{k+}|\ge\E{\lambda_{kR}\sqrt{x^2+y^2}+4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }\lambda_{kR}>0,\\
|\ee_{k+}|\le\E{-|\lambda_{kR}|\sqrt{x^2+y^2}+4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }\lambda_{kR}<0.
\end{array}
\end{equation}
Hence $\D\max_{1\le k\le n}\frac{|\ee_{k+}|}{1+|\ee_{k+}|^2}$ tends
to zero exponentially when $x\ge 0$, $y\ge 0$ and $(x,y)\to\infty$.
Likewise, we have
\begin{equation}
\begin{array}{l}
|\ee_{k-}|\le\E{-\lambda_{kR}\sqrt{x^2+y^2}-4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }x\le 0,\;y\ge 0,\;\lambda_{kR}>0,\\
|\ee_{k-}|\ge\E{|\lambda_{kR}|\sqrt{x^2+y^2}-4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }x\le 0,\;y\ge 0,\;\lambda_{kR}<0,\\
|\ee_{k+}|\le\E{-\lambda_{kR}\sqrt{x^2+y^2}+4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }x\le 0,\;y\le 0,\;\lambda_{kR}>0,\\
|\ee_{k+}|\ge\E{|\lambda_{kR}|\sqrt{x^2+y^2}+4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }x\le 0,\;y\le 0,\;\lambda_{kR}<0,\\
|\ee_{k-}|\ge\E{\lambda_{kR}\sqrt{x^2+y^2}-4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }x\ge 0,\;y\le 0,\;\lambda_{kR}>0,\\
|\ee_{k-}|\le\E{-|\lambda_{kR}|\sqrt{x^2+y^2}-4\lambda_{kR}\lambda_{kI}t}
\hbox{ if }x\ge 0,\;y\le 0,\;\lambda_{kR}<0.
\end{array}
\end{equation}
Lemma~\ref{lemma:lbound} implies that $\widetilde u\to 0$
exponentially as $(x,y)\to\infty$.
\vskip6pt(iii)
\begin{equation}
|\ee_{k\pm}|=\E{\lambda_{kR}(x\pm\widetilde y)
\pm\lambda_{kR}(v+4\lambda_{kI})t}.
\end{equation}
If $v\ne-4\lambda_{kI}$ for all $k=1,\cdots,n$, then for each
$k=1,\cdots,n$, either $\ee_{k\pm}\to 0$ or $\ee_{k\pm}\to\infty$ as
$t\to\infty$. Lemma~\ref{lemma:lbound} then implies that
$\widetilde u\to 0$ as $t\to\infty$. The theorem is proved.

A $3$ soliton solution is shown in Figure~2 where the parameters are
$\sigma=-1$, $t=20$, $\lambda_1=0.07-1.5\I$, $\lambda_2=0.05+2\I$,
$\lambda_3=0.1+\I$, $a_1=0.2$, $a_2=0.1\I$, $a_3=0.1$, $b_1=0.1\I$,
$b_2=-0.2$, $b_3=-0.2$. The solution looks similar when $\sigma$ is
changed to $+1$; that case is not shown here.
\begin{figure}\begin{center}
\scalebox{1.6}{\includegraphics[150,100]{3soliton.ps}}
\caption{$|\widetilde u|$ of a $3$ soliton solution.}\label{fig:stn3}
\end{center}\end{figure}
\section{``Line dark soliton'' solutions}\label{sect:darksoliton}
\subsection{Single ``line dark soliton'' solutions}\label{subsect:darksoliton1}
Now we take
\begin{equation}
u=\rho\E{-2\I\sigma|\rho|^2t},\quad
w=0
\end{equation}
as a solution of (\ref{eq:EQ_DS}) where $\rho$ is a complex
constant. The Lax pair (\ref{eq:LP_DS}) has a solution
\begin{equation}
\begin{array}{l}
\D\left(\begin{array}{c}\E{\alpha(\lambda)x+\beta(\lambda)y+\gamma(\lambda)t}\\
\D\frac{\lambda}{\rho}
\E{\alpha(\lambda)x+\beta(\lambda)y
+(\gamma(\lambda)+2\I\sigma|\rho|^2)t}\end{array}\right),
\end{array}\label{eq:LPsndark}
\end{equation}
where
\begin{equation}
\begin{array}{l}
\D\alpha(\lambda)=\frac 12\Big(\frac{\sigma|\rho|^2}{\lambda}-\lambda\Big),\quad
\beta(\lambda)=\frac 12\Big(\frac{\sigma|\rho|^2}{\lambda}+\lambda\Big),\\
\D\gamma(\lambda)=\I(\alpha(\lambda)^2-2\alpha(\lambda)\beta(\lambda)-\beta(\lambda)^2)
=\I\lambda^2-\frac\I 2
\Big(\frac{\sigma|\rho|^2}{\lambda}+\lambda\Big)^2,
\end{array}
\end{equation}
and $\lambda$ is a nonzero complex constant. Note that
$\alpha(-\lambda^*)=-(\alpha(\lambda))^*$,
$\beta(-\lambda^*)=-(\beta(\lambda))^*$ and
$\gamma(-\lambda^*)=-(\gamma(\lambda))^*$.
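These relations follow directly from the formulas for $\alpha$, $\beta$ and $\gamma$; the following quick numerical check confirms them (the values of $\sigma$, $\rho$ and $\lambda$ below are arbitrary illustrative choices):

```python
# Quick check of the symmetry relations f(-lam*) = -(f(lam))* for
# f = alpha, beta, gamma.  sigma, rho, lam are illustrative values only.
sigma = -1.0
rho = 1.3 + 0.4j

def alpha(lam):
    return 0.5 * (sigma * abs(rho) ** 2 / lam - lam)

def beta(lam):
    return 0.5 * (sigma * abs(rho) ** 2 / lam + lam)

def gamma(lam):
    a, b = alpha(lam), beta(lam)
    return 1j * (a * a - 2 * a * b - b * b)

lam = 0.3 + 0.1j
for fn in (alpha, beta, gamma):
    assert abs(fn(-lam.conjugate()) + fn(lam).conjugate()) < 1e-12
```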
Now take $\D\Phi=\left(\begin{array}{c}\xi\\\eta\end{array}\right)$
where
\begin{equation}
\begin{array}{l}
\D \xi=\E{\alpha x+\beta y+\gamma t}+\E{-\alpha^*x-\beta^*y-\gamma^*t},\\
\D \eta=\frac{\lambda}{\rho}\E{\alpha x+\beta y
+(\gamma+2\I\sigma|\rho|^2)t}-\frac{\lambda^*}{\rho}
\E{-\alpha^*x-\beta^*y-(\gamma^*-2\I\sigma|\rho|^2)t}.
\end{array}\label{eq:darkxieta}
\end{equation}
Here $\alpha=\alpha(\lambda)$, $\beta=\beta(\lambda)$,
$\gamma=\gamma(\lambda)$. This $\Phi$ is a linear combination of the
solutions of form (\ref{eq:LPsndark}). Then (\ref{eq:DTuv1}) gives
the new solution
\begin{equation}
\widetilde u=\rho\E{-2\I\sigma|\rho|^2t}
\frac{\D
\D \frac{\lambda^*}{\lambda}c_1\E{2\beta_Ry+2\gamma_Rt}
+\frac{\lambda}{\lambda^*}c_1\E{-2\beta_Ry-2\gamma_Rt}-c_2\E{2\alpha_Rx}
-c_2^*\E{-2\alpha_Rx}}
{\D c_1(\E{2\beta_Ry+2\gamma_Rt}+\E{-2\beta_Ry-2\gamma_Rt})+c_2\E{2\alpha_Rx}
+c_2^*\E{-2\alpha_Rx}}
\end{equation}
of the nonlocal Davey-Stewartson I equation where
\begin{equation}
\begin{array}{l}
\D c_1=1-\sigma\frac{|\lambda|^2}{|\rho|^2},\quad
c_2=1+\sigma\frac{\lambda^2}{|\rho|^2}.
\end{array}
\end{equation}
This solution is smooth for all $(x,y,t)\in\hr^3$ if
$|\lambda|<|\rho|$.
In particular, if $\lambda$ is real, then $\gamma_R=0$, and we obtain
a standing wave solution.
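As a sanity check on the displayed closed form, the sketch below evaluates $\widetilde u$ numerically with the parameters used in Figure~3 and confirms that the denominator causes no trouble and that $|\widetilde u|\to|\rho|$ away from the soliton line; this is an illustrative computation only, not part of the argument.

```python
import cmath
import math

# Figure-3 parameters from the text; the formulas transcribe the displayed
# closed form of the single "line dark soliton" (numerical sanity check only).
sigma, rho, lam, t = -1.0, 1.0 + 0j, 0.3 + 0.1j, 10.0

alpha = 0.5 * (sigma * abs(rho) ** 2 / lam - lam)
beta = 0.5 * (sigma * abs(rho) ** 2 / lam + lam)
gamma = 1j * (alpha ** 2 - 2 * alpha * beta - beta ** 2)
c1 = 1 - sigma * abs(lam) ** 2 / abs(rho) ** 2
c2 = 1 + sigma * lam ** 2 / abs(rho) ** 2

def u_tilde(x, y):
    E = math.exp(2 * beta.real * y + 2 * gamma.real * t)
    R = math.exp(2 * alpha.real * x)
    num = ((lam.conjugate() / lam) * c1 * E + (lam / lam.conjugate()) * c1 / E
           - c2 * R - c2.conjugate() / R)
    den = c1 * (E + 1 / E) + c2 * R + c2.conjugate() / R
    return rho * cmath.exp(-2j * sigma * abs(rho) ** 2 * t) * num / den

# |u_tilde| stays bounded near the origin (the denominator does not vanish,
# consistent with smoothness for |lam| < |rho|) ...
for xx in range(-4, 5):
    for yy in range(-4, 5):
        assert abs(u_tilde(float(xx), float(yy))) < 1.2 * abs(rho)

# ... and |u_tilde| -> |rho| away from the soliton line.
assert abs(abs(u_tilde(30.0, 5.0)) - abs(rho)) < 1e-6
assert abs(abs(u_tilde(0.0, 40.0)) - abs(rho)) < 1e-6
```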
Figure~3 shows a $1$ ``line dark soliton'' solution with parameters
$\sigma=-1$, $t=10$, $\rho=1$, $\lambda=0.3+0.1\I$. The right panel
shows the same solution, plotted upside down.
\begin{figure}\begin{center}
\scalebox{1.24}{\includegraphics[340,100]{1darkstn.ps}}
\caption{$|\widetilde u|$ of a $1$ ``line dark soliton'' solution.}\label{fig:1darkstn}
\end{center}\end{figure}
\subsection{Multiple ``line dark soliton'' solutions}
Now we take $n$ solutions
\begin{equation}
\begin{array}{l}
\D \xi_k=\E{\alpha_k x+\beta_k y+\gamma_k t}+\E{-\alpha_k^*x-\beta_k^*y-\gamma_k^*t},\\
\D \eta_k=\frac{\lambda_k}{\rho}\E{\alpha_k x+\beta_k y
+(\gamma_k+2\I\sigma|\rho|^2)t}-\frac{\lambda_k^*}{\rho}
\E{-\alpha_k^*x-\beta_k^*y-(\gamma_k^*-2\I\sigma|\rho|^2)t}
\end{array}
\end{equation}
of form (\ref{eq:darkxieta}) to get multiple ``line dark soliton''
solutions where
\begin{equation}
\begin{array}{l}
\D\alpha_k=\frac 12\Big(\frac{\sigma|\rho|^2}{\lambda_k}-\lambda_k\Big),\quad
\beta_k=\frac12\Big(\frac{\sigma|\rho|^2}{\lambda_k}+\lambda_k\Big),\\
\gamma_k=\I(\alpha_k^2-2\alpha_k\beta_k-\beta_k^2).
\end{array}\label{eq:abc}
\end{equation}
Similar to (\ref{eq:W}) and (\ref{eq:R}), $W$ and $R$ in
(\ref{eq:G12b}) are
\begin{equation}\fl
W=\left(\begin{array}{cc}FE_++LF^*E^{*-1}_{+}
&-\sigma\rho^{*-1}\E{-\I\phi}(LF\Lambda E_--F^*\Lambda ^*E^{*-1}_{-})\\
\rho^{-1}\E{\I\phi}(F\Lambda E_+-LF^*\Lambda^*E^{*-1}_{+})
&LFE_-+F^*E^{*-1}_{-}
\end{array}\right),\label{eq:Wdark}
\end{equation}
\begin{equation}\fl
R=\left(\begin{array}{cc}fE_++(-1)^nf^*E^{*-1}_{+}
&-\sigma\rho^{*-1}\E{-\I\phi}((-1)^nf\Lambda E_--f^*\Lambda ^*E^{*-1}_{-})\\
\rho^{-1}\E{\I\phi}(f\Lambda E_+-(-1)^nf^*\Lambda^*E^{*-1}_{+})
&(-1)^nfE_-+f^*E^{*-1}_{-}
\end{array}\right),\label{eq:Rdark}
\end{equation}
where
\begin{equation}
\begin{array}{l}
E_\pm=\diag(\ee_{k\pm})_{k=1,\cdots,n},\quad
F=(\beta_k^{n-j})_{1\le j,k\le n},\quad
f=(\beta_1^n,\cdots,\beta_n^n),\\
\D\alpha_k=\frac 12\Big(\frac{\sigma|\rho|^2}{\lambda_k}-\lambda_k\Big),\quad
\beta_k=\frac12\Big(\frac{\sigma|\rho|^2}{\lambda_k}+\lambda_k\Big),\\
\gamma_k=\I(\alpha_k^2-2\alpha_k\beta_k-\beta_k^2),
\end{array}\label{eq:EFf}
\end{equation}
$\D \Lambda=\diag(\lambda_k)_{1\le k\le n}$,
$\phi=2\sigma|\rho|^2t$. Moreover, $\ee_{k\pm}=\E{\alpha_kx\pm
\beta_ky\pm\gamma_kt}$. However, as in the soliton case, we
temporarily suppose that the $\ee_{k\pm}$'s are arbitrary complex
numbers.
\begin{lemma}\label{lemma:lbounddark}
Suppose $\kappa_1,\cdots,\kappa_n$ are distinct nonzero real
numbers, then there exist positive constants $\rho_0$, $\delta$,
$C_1$ and $C_2$, which depend on $\kappa_j$'s, such that $|\det
W|\ge C_1$ and $|(G_1)_{12}|\le C_2$ hold whenever $|\rho|>\rho_0$,
$|\lambda_j-\I\kappa_j|<\delta$ and $\ee_{j\pm}\in\hc$
$(j=1,\cdots,n)$.
\end{lemma}
\demo Write $F^{-1}LF^*=I+Z$; then $Z=0$ if
$\lambda_1,\cdots,\lambda_n$ are all purely imaginary. Hence $||Z||$
can be made arbitrarily small by taking
$|\lambda_1-\I\kappa_1|,\cdots,|\lambda_n-\I\kappa_n|$ small enough.
Let $\D c_0=\max_{1\le k\le n}|\kappa_k|$, $\pi_3=|\det
F|\Big|_{\lambda_k=\smallI\kappa_k\atop k=1,\cdots,n}$,
$\pi_4=||F^{-1} LF||\Big|_{\lambda_k=\smallI\kappa_k\atop
k=1,\cdots,n}$. Then there exists $\delta$ with $0<\delta<c_0$ such
that $\D||Z||\le\frac12$, $\D|\det F|\ge\frac{\pi_3}2$, $||F^{-1}
LF||\le 2\pi_4$ if $|\lambda_k-\I\kappa_k|<\delta$ $(k=1,\cdots,n)$.
In this case, $|\lambda_k|<c_0+\delta<2c_0$.
From (\ref{eq:Wdark}),
\begin{equation}
\det W=\det(FE_++LF^*E^{*-1}_{+})\det\cir W,
\label{eq:WWodark}
\end{equation}
where
\begin{equation}
\begin{array}{l}
\cir{W}=LFE_-+F^*E^{*-1}_{-}
+\sigma|\rho|^{-2}(F\Lambda E_+-LF^*\Lambda^*E^{*-1}_{+})\cdot\\
\qquad\cdot(FE_++LF^*E^{*-1}_{+})^{-1}
L(F\Lambda E_--LF^*\Lambda^*E^{*-1}_{-})\\
\qquad=(I+\sigma|\rho|^{-2}F\chi_+F^{-1} LF\chi_-F^{-1} L^{-1})LF
(I+ZE^{*-1}_{-}\Xi_{0-}^{-1})\Xi_{0-}
\end{array}
\label{eq:cirWdark}
\end{equation}
\begin{equation}
\begin{array}{l}
\chi_\pm=F^{-1}(F\Lambda E_\pm-LF^*\Lambda^*E^{*-1}_{\pm})
(FE_\pm+LF^*E^{*-1}_{\pm})^{-1}F\\
\qquad=\Xi_{1\pm}\Xi_{0\pm}^{-1}
-(Z\Lambda^*+\Xi_{1\pm}\Xi_{0\pm}^{-1} Z)
E^{*-1}_{\pm}\Xi_{0\pm}^{-1}(I+ZE^{*-1}_{\pm}\Xi_{0\pm}^{-1})^{-1},
\end{array}\label{eq:chipm}
\end{equation}
\begin{equation}
\Xi_{0\pm}=E_\pm+E^{*-1}_{\pm},\quad
\Xi_{1\pm}=\Lambda E_\pm-\Lambda^*E^{*-1}_{\pm}.
\end{equation}
We have the following estimates:
\begin{equation}
||E_{\pm}\Xi_{0\pm}^{-1}||\le 1,\quad
||E^{*-1}_{\pm}\Xi_{0\pm}^{-1}||\le 1,\quad
||\Xi_{1\pm}\Xi_{0\pm}^{-1}||\le 2c_0,\label{eq:Echi_dark_esti}
\end{equation}
\begin{equation}
||\Xi_{0\pm}||\ge 2,\quad
||\Xi_{0\pm}^{-1}||\le\frac 12,\quad
|\det\Xi_{0\pm}|\ge 2^n,
\end{equation}
\begin{equation}
||(I+Z E^{*-1}_{\pm}\Xi_{0\pm}^{-1})^{-1}||\le(1-||Z||)^{-1}
\le 2,\quad
|\det(I+Z
E^{*-1}_{\pm}\Xi_{0\pm}^{-1})|\ge(1-||Z||)^n\ge\frac 1{2^n}.
\end{equation}
Hence (\ref{eq:chipm}) implies
\begin{equation}
\begin{array}{l}
\D||\chi_\pm||\le 2c_0+8c_0||Z||\le 6c_0,
\end{array}
\end{equation}
\begin{equation}
\begin{array}{l}
\D||\chi_+F^{-1} LF\chi_-F^{-1} L^{-1}F||\le||F^{-1} LF||^2\,||\chi_+||\,||\chi_-||\le
144c_0^2\pi_4^2.
\end{array}\label{eq:Echi_dark_esti_end-1}
\end{equation}
By (\ref{eq:WWodark}) and (\ref{eq:cirWdark}),
\begin{equation}
\begin{array}{l}
|\det W|=|\det F|^2\,
|\det\Xi_{0+}|\,|\det\Xi_{0-}|\,|\det(I+ZE^{*-1}_{+}\Xi_{0+}^{-1})|\cdot\\
\qquad\cdot
|\det(I+ZE^{*-1}_{-}\Xi_{0-}^{-1})|\,
|\det(I+\sigma|\rho|^{-2}\chi_+F^{-1} LF\chi_-F^{-1} L^{-1} F)|\\
\D\ge\frac{\pi_3^2}{4}(1-144c_0^2\pi_4^2|\rho|^{-2})>0
\end{array}\label{eq:Echi_dark_esti_end}
\end{equation}
if $\D|\rho|>12c_0\pi_4$. Therefore, $|\det W|$ has a uniform
positive lower bound if $\D|\rho|>12c_0\pi_4$ and $|\lambda_k-\I\kappa_k|<\delta$
$(k=1,\cdots,n)$.
By (\ref{eq:G12b}), (\ref{eq:Wdark}), (\ref{eq:Rdark}),
(\ref{eq:WWodark}) and (\ref{eq:cirWdark}),
\begin{equation}
\begin{array}{l}
\D((G_1)_{12},(G_2)_{12},\cdots,(G_n)_{12})
=-(R_{12}-R_{11}W_{11}^{-1}W_{12})\cir W^{-1}\\
\D=\sigma\rho^{*-1}\E{-\I\phi}(-1)^n\Big(f\Lambda E_--(-1)^nf^*\Lambda^*E^{*-1}_{-}
-(fE_++(-1)^nf^*E^{*-1}_{+})\cdot\\
\D\qquad\cdot(-1)^n
(FE_++LF^*E^{*-1}_{+})^{-1}L(F\Lambda E_--LF^*\Lambda^*E^{*-1}_{-})\Big)\cir W^{-1}\\
\D=\sigma\rho^{*-1}\E{-\I\phi}(-1)^n
\Big(f\Lambda E_-\Xi_{0-}^{-1}-(-1)^nf^*\Lambda^*E^{*-1}_{-}\Xi_{0-}^{-1}-\\
\D\qquad-(-1)^n(fE_+\Xi_{0+}^{-1}+(-1)^nf^*E^{*-1}_{+}\Xi_{0+}^{-1})
(I+ZE^{*-1}_{+}\Xi_{0+}^{-1})^{-1} F^{-1}LF\cdot\\
\D\qquad\cdot
(\Xi_{1-}\Xi_{0-}^{-1}-Z\Lambda^*E^{*-1}_{-}\Xi_{0-}^{-1})\Big)
(I+ZE^{*-1}_{-}\Xi_{0-}^{-1})^{-1}F^{-1}L^{-1}\cdot\\
\D\qquad\cdot(I+\sigma|\rho|^{-2}F\chi_+F^{-1} LF\chi_-F^{-1} L^{-1})^{-1}.
\end{array}
\end{equation}
$(G_1)_{12}$ is bounded when $\D|\rho|>12c_0\pi_4$ and
$|\lambda_k-\I\kappa_k|<\delta$ $(k=1,\cdots,n)$ because of the
estimates (\ref{eq:Echi_dark_esti})--(\ref{eq:Echi_dark_esti_end}).
The lemma is proved.

Now we have the following theorem for the multiple ``line dark
soliton'' solutions.
\begin{theorem}
Suppose $\kappa_1,\cdots,\kappa_n$ are distinct nonzero real
numbers, then there exist positive constants $\rho_0$ and $\delta$
such that the following results hold for the derived solution
$\widetilde u=u+2(G_1)_{12}$ of the nonlocal Davey-Stewartson I
equation when $|\rho|>\rho_0$ and $|\lambda_j-\I\kappa_j|<\delta$
$(j=1,\cdots,n)$.
\noindent (i) $\widetilde u$ is globally defined and bounded for
$(x,y,t)\in\hr^3$.
\noindent (ii) Suppose the real numbers $v_x,v_y$ satisfy
$\alpha_{kR}v_x\pm\beta_{kR}v_y\ne 0$ for all $k=1,\cdots,n$ where
$\alpha_k$'s and $\beta_k$'s are given by (\ref{eq:abc}), then
$\D\lim_{s\to+\infty}|\widetilde u|=|\rho|$ along the straight line
$x=x_0+v_xs$, $y=y_0+v_ys$ for arbitrary $x_0,y_0\in\hr$.
\end{theorem}
\demo (i) follows directly from Lemma~\ref{lemma:lbounddark}. Now we
prove (ii).
Since
$|\ee_{k\pm}|=\E{(\alpha_{kR}v_x\pm\beta_{kR}v_y)s
+(\alpha_{kR}x_0\pm\beta_{kR}y_0\pm\gamma_{kR}t)}$
along the straight line $x=x_0+v_xs$, $y=y_0+v_ys$, the condition
$\alpha_{kR}v_x\pm\beta_{kR}v_y\ne 0$ implies that for each $k$,
$\ee_{k+}\to 0$ or $\ee_{k+}\to\infty$, and $\ee_{k-}\to 0$ or
$\ee_{k-}\to\infty$ as $s\to+\infty$. Let
\begin{equation}
\begin{array}{l}
\D \mu_k=\left\{\begin{array}{ll}\lambda_k&\hbox{ if }\alpha_{kR}v_x+\beta_{kR}v_y>0,\\
-\lambda_k^*&\hbox{ if }\alpha_{kR}v_x+\beta_{kR}v_y<0,\end{array}\right.\\
\D \nu_k=\left\{\begin{array}{ll}-\lambda_k&\hbox{ if }\alpha_{kR}v_x-\beta_{kR}v_y>0,\\
\lambda_k^*&\hbox{ if }\alpha_{kR}v_x-\beta_{kR}v_y<0,\end{array}\right.\\
\D a_k=\left\{\begin{array}{ll}\beta_k&\hbox{ if }\alpha_{kR}v_x+\beta_{kR}v_y>0,\\
-\beta_k^*&\hbox{ if }\alpha_{kR}v_x+\beta_{kR}v_y<0,\end{array}\right.\\
\D b_k=\left\{\begin{array}{ll}-\beta_k&\hbox{ if }\alpha_{kR}v_x-\beta_{kR}v_y>0,\\
\beta_k^*&\hbox{ if }\alpha_{kR}v_x-\beta_{kR}v_y<0,\end{array}\right.
\end{array}
\end{equation}
then
\begin{equation}
a_k=\frac 12\Big(\frac{\sigma|\rho|^2}{\mu_k}+\mu_k\Big),\quad
b_k=\frac
12\Big(\frac{\sigma|\rho|^2}{\nu_k}+\nu_k\Big).\label{eq:akbk}
\end{equation}
Rewrite (\ref{eq:G12b}) as
\begin{equation}\fl
\begin{array}{l}
\D\left(\begin{array}{cccccc}(G_1)_{11}&\cdots&(G_n)_{11}
&\rho^{-1}\E{\I\phi}(G_1)_{12}&\cdots&\rho^{-1}\E{\I\phi}(G_n)_{12}\\
(G_1)_{21}&\cdots&(G_n)_{21}
&\rho^{-1}\E{\I\phi}(G_1)_{22}&\cdots&\rho^{-1}\E{\I\phi}(G_n)_{22}\end{array}\right)
SWS^{-1}=-RS^{-1}
\end{array}\label{eq:G12bb}
\end{equation}
where $\D
S=\left(\begin{array}{cc}I_n\\&\rho\E{-\I\phi}I_n\end{array}\right)$,
$I_n$ is the $n\times n$ identity matrix.
Applying Cramer's rule to (\ref{eq:G12bb}) and using
(\ref{eq:akbk}), we have
\begin{equation}
\lim_{s\to+\infty}\widetilde u=\lim_{s\to+\infty}(\rho\E{-\I\phi}+2(G_1)_{12})
=\rho\E{-\I\phi}\Big(1-2\frac{\det W_1}{\det W_0}\Big)
=\rho\E{-\I\phi}\frac{\det W_2}{\det W_0}
\end{equation}
where
\begin{equation}\fl
W_0=\left(\begin{array}{cccccc}
a_1^{n-1}&\cdots&a_n^{n-1}
&\sigma|\rho|^{-2}b_1^{n-1}\nu_1&\cdots
&\sigma|\rho|^{-2}b_n^{n-1}\nu_n\\
a_1^{n-2}&\cdots&a_n^{n-2}
&\sigma|\rho|^{-2}b_1^{n-2}\nu_1&\cdots
&\sigma|\rho|^{-2}b_n^{n-2}\nu_n\\
\vdots&&\vdots&\vdots&&\vdots\\
a_1&\cdots&a_n
&\sigma|\rho|^{-2}b_1\nu_1&\cdots
&\sigma|\rho|^{-2}b_n\nu_n\\
1&\cdots&1&\sigma|\rho|^{-2}\nu_1&\cdots&\sigma|\rho|^{-2}\nu_n\\
a_1^{n-1}\mu_1&\cdots&a_n^{n-1}\mu_n
&b_1^{n-1}&\cdots&b_n^{n-1}\\
a_1^{n-2}\mu_1&\cdots&a_n^{n-2}\mu_n
&b_1^{n-2}&\cdots&b_n^{n-2}\\
\vdots&&\vdots&\vdots&&\vdots\\
a_1\mu_1&\cdots&a_n\mu_n
&b_1&\cdots&b_n\\
\mu_1&\cdots&\mu_n&1&\cdots&1\\
\end{array}\right),
\end{equation}
$W_1$ is obtained from $W_0$ by replacing the $(n+1)$-th row with
\begin{equation}\fl
\left(\begin{array}{cccccc}a_1^n&\cdots&a_n^n
&\sigma|\rho|^{-2}b_1^n\nu_1&\cdots
&\sigma|\rho|^{-2}b_n^n\nu_n\end{array}\right),
\end{equation}
and
\begin{equation}\fl
W_2=\left(\begin{array}{cccccc}
a_1^{n-1}&\cdots&a_n^{n-1}
&\sigma|\rho|^{-2}b_1^{n-1}\nu_1&\cdots
&\sigma|\rho|^{-2}b_n^{n-1}\nu_n\\
a_1^{n-2}&\cdots&a_n^{n-2}
&\sigma|\rho|^{-2}b_1^{n-2}\nu_1&\cdots
&\sigma|\rho|^{-2}b_n^{n-2}\nu_n\\
\vdots&&\vdots&\vdots&&\vdots\\
a_1&\cdots&a_n
&\sigma|\rho|^{-2}b_1\nu_1&\cdots
&\sigma|\rho|^{-2}b_n\nu_n\\
1&\cdots&1&\sigma|\rho|^{-2}\nu_1&\cdots&\sigma|\rho|^{-2}\nu_n\\
-\sigma|\rho|^2a_1^{n-1}\mu_1^{-1}&\cdots&-\sigma|\rho|^2a_n^{n-1}\mu_n^{-1}
&-\sigma|\rho|^{-2}b_1^{n-1}\nu_1^2&\cdots&-\sigma|\rho|^{-2}b_n^{n-1}\nu_n^2\\
a_1^{n-2}\mu_1&\cdots&a_n^{n-2}\mu_n
&b_1^{n-2}&\cdots&b_n^{n-2}\\
\vdots&&\vdots&\vdots&&\vdots\\
a_1\mu_1&\cdots&a_n\mu_n
&b_1&\cdots&b_n\\
\mu_1&\cdots&\mu_n&1&\cdots&1\\
\end{array}\right).
\end{equation}
Let $\row_k$ and $\col_k$ denote the $k$-th row and the $k$-th column
of $W_2$, respectively. The elementary transformations
\begin{equation}
\begin{array}{l}
\row_{n+k+1}-2\cdot\row_k\to\row_{n+k+1}\quad(k=1,\cdots,n-1),\\
\mu_k\cdot\col_k\to\col_k\quad(k=1,\cdots,n),\\
\sigma|\rho|^2\nu_k^{-1}\cdot\col_{n+k}\to\col_{n+k}\quad(k=1,\cdots,n),\\
-\sigma|\rho|^{-2}\cdot\row_{n+k}\to\row_{n+k}\quad(k=1,\cdots,n),\\
\row_k\leftrightarrow\row_{n+k}\quad(k=1,\cdots,n)
\end{array}
\end{equation}
transform $W_2$ to $W_0$. Hence $\D\det
W_2=\prod_{k=1}^n\frac{\nu_k}{\mu_k}\det W_0$. This leads to
$\D\lim_{s\to+\infty}|\widetilde u|=|\rho|$ since
$\D\prod_{k=1}^n|\mu_k|=\prod_{k=1}^n|\nu_k|=\prod_{k=1}^n|\lambda_k|$.
The theorem is proved.

A $2$ ``line dark soliton'' solution is shown in Figure~4 where the
parameters are $\sigma=-1$, $t=10$, $\rho=1$, $\lambda_1=0.8+0.1\I$,
$\lambda_2=-0.6-0.3\I$. The right panel shows the same solution,
plotted upside down.
\begin{figure}\begin{center}
\scalebox{1.25}{\includegraphics[340,100]{2darkstn.ps}}
\caption{$|\widetilde u|$ of a $2$ ``line dark soliton'' solution.}\label{fig:2darkstn}
\end{center}\end{figure}
\section*{Acknowledgements}
This work was supported by the Natural Science Foundation of
Shanghai (No.\ 16ZR\-1402600) and the Key Laboratory of Mathematics
for Nonlinear Sciences of Ministry of Education of China. The author
is grateful to Prof.~S.~Y.~Lou for helpful discussions.
\section{INTRODUCTION}
\noindent The process of interpolation by translates of a basic
function is a popular tool for the reconstruction of a multivariate
function from a scattered data set. The setup of the problem is as
follows. We are supplied with a finite set of interpolation points
$\nodes \subset \straightletter{R}^d$ and a function $f:\nodes \To \straightletter{R}$. We wish to
construct an interpolant to $f$ of the form
\begin{equation}\label{interpolant form}
(Sf)(x) = \sum_{a \in \nodes} \mu_a \psi(x-a) + p(x),\qquad
\mbox{for $x \in \straightletter{R}^d$.}
\end{equation}
Here, $\psi$ is a real-valued function defined on $\straightletter{R}^d$, and the
principal ingredient of our interpolant is the use of the translates
of $\psi$ by the points in $\nodes$. The function $\psi$ is referred
to as the {\it basic function}. The function $p$ in
Equation~\eqref{interpolant form} is a polynomial on $\straightletter{R}^d$ of total
degree at most $k-1$. The linear space of all such polynomials will
be denoted by $\Pi_{k-1}$. Of course, for $Sf$ to interpolate $f$
the real numbers $\mu_a$ and the polynomial $p$ must be chosen to
satisfy the system
\begin{equation*}
(Sf)(a) = f(a),\qquad \mbox{for $a \in \nodes$}.
\end{equation*}
It is natural to desire a unique solution to the above system.
However, with the present setup, there are fewer conditions available
to determine $Sf$ than there are free parameters in $Sf$. There is a
standard way of determining the remaining conditions, which are
often called the {\it natural boundary conditions}:
$$
\sum_{a\in\nodes} \mu_a q(a) = 0, \qquad \mbox{for all $q\in
\Pi_{k-1}$}.
$$
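To make this setup concrete, here is a small self-contained numerical sketch (not part of the paper's development) of the square linear system formed by the interpolation conditions together with the natural boundary conditions. It uses the surface-spline choice $\psi(x)=\abs{x}^2\log\abs{x}$ for $d=2$, $k=2$ that appears later in the text; the node set and test function are arbitrary illustrative choices.

```python
import math

def psi(r2):
    # surface spline phi(r) = r^2 log r in d = 2, written via r^2 (phi(0) = 0)
    return 0.0 if r2 == 0.0 else 0.5 * r2 * math.log(r2)

def solve(M, rhs):
    # Gaussian elimination with partial pivoting (fine for small dense systems)
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            t = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= t * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def tps_interpolant(nodes, values):
    # Square system: interpolation conditions plus natural boundary conditions,
    #   [A  P] [mu]   [f]
    #   [P' 0] [c ] = [0],  A_ij = psi(|a_i - a_j|^2),  P_i = (1, a_i1, a_i2)
    n = len(nodes)
    M = [[0.0] * (n + 3) for _ in range(n + 3)]
    for i, (xi, yi) in enumerate(nodes):
        for j, (xj, yj) in enumerate(nodes):
            M[i][j] = psi((xi - xj) ** 2 + (yi - yj) ** 2)
        M[i][n], M[i][n + 1], M[i][n + 2] = 1.0, xi, yi    # polynomial part
        M[n][i], M[n + 1][i], M[n + 2][i] = 1.0, xi, yi    # natural b.c.
    sol = solve(M, list(values) + [0.0, 0.0, 0.0])
    mu, (c0, c1, c2) = sol[:n], sol[n:]
    def S(x, y):
        s = c0 + c1 * x + c2 * y
        for (ax, ay), m in zip(nodes, mu):
            s += m * psi((x - ax) ** 2 + (y - ay) ** 2)
        return s
    return S

# A Pi_1-unisolvent (non-collinear) node set and an arbitrary data function.
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.2), (0.3, 0.7)]
f = lambda x, y: x * x + y
S = tps_interpolant(nodes, [f(x, y) for (x, y) in nodes])
assert all(abs(S(x, y) - f(x, y)) < 1e-8 for (x, y) in nodes)
```

Since the nodes are $\Pi_1$-unisolvent and distinct, the block system is nonsingular and the interpolant is uniquely determined.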
It is now essential that $\nodes$ is $\Pi_{k-1}$--unisolvent. This
means that if $q \in \Pi_{k-1}$ vanishes on $\nodes$ then $q$ must
be zero. Otherwise the polynomial term can be adjusted by any
polynomial which is zero on $\nodes$. However, more conditions are
needed to ensure uniqueness of the interpolant. The requirement that
$\psi$ should be strictly conditionally positive definite of order
$k$ is one possible assumption. To see explanations of why these
conditions arise, the reader is directed
to~\citeasnoun{cheneylight}. In most of the common applications the
function $\psi$ is a radial function. That is, there is a function
$\phi:\straightletter{R}_+ \rightarrow \straightletter{R}$ such that $\psi = \phi \circ
\abs{\,\cdot\,}$, where $\abs{\,\cdot\,}$ is the Euclidean norm. In
these cases we refer to $\psi$ as a {\it radial basic function}.
\citename{Duchon} \citeyear{Duchon,Duchon2} was amongst the first to
study interpolation problems of this flavour. His approach was to
formulate the interpolation problem as a variational one. To do this
we assume we have a space of continuous functions $X$ which carries
a seminorm $\abs{\,\cdot\,}$. The so-called minimal norm interpolant
to $f \in X$ on $\nodes$ from $X$ is the function $Sf \in X$
satisfying
\begin{enumerate}
\item $(Sf)(a)=f(a)$, for all $a \in \nodes$;
\item $\abs{Sf} \leq \abs{g}$, for all $g \in X$ such that
$g(a)=f(a)$ for all $a \in \nodes$.
\end{enumerate}
The spaces that Duchon considers are in fact spaces of tempered
distributions which he is able to embed in $C(\straightletter{R}^d)$. Let ${\cal
S}'$ be the space of all tempered distributions on $\straightletter{R}^d$. The
particular spaces of distributions that we will be concerned with
are called Beppo-Levi spaces. The $k$\textsuperscript{th} order
Beppo-Levi space is denoted by $BL^k(\Omega)$ and defined as
\begin{equation*}
BL^k(\Omega) = \left\{ f\in{\cal S}' : \mbox{$D^\alpha f \in
L_2(\Omega)$, $\alpha \in \mathord{\rm Z\mkern-6.1mu Z}^d_+$, $\abs{\alpha}=k$} \right\},
\end{equation*}
with seminorm
\begin{equation*}
\abs{f}_{k,\Omega}=\Biggl( \sum_{\abs{\alpha}=k} c_\alpha
\int_{\Omega}\abs{(D^\alpha f)(x)}^2\, dx\Biggr)^{1/2},\qquad f\in
BL^k(\Omega).
\end{equation*}
The constants $c_\alpha$ are chosen so that the seminorm is
rotationally invariant:
\begin{equation*}
\sum_{\abs{\alpha}=k} c_\alpha x^{2\alpha} = \abs{x}^{2k}, \qquad \mbox{for
all $x\in\straightletter{R}^d$}.
\end{equation*}
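One standard choice satisfying this identity is the multinomial coefficients $c_\alpha=k!/\alpha!$, since the multinomial theorem applied to $(x_1^2+\cdots+x_d^2)^k$ gives exactly the displayed sum. A quick numerical confirmation (the particular $d$ and $k$ values are arbitrary):

```python
import itertools
import math
import random

def c(alpha, k):
    # multinomial weight k!/alpha! (one standard rotation-invariant choice)
    out = math.factorial(k)
    for a in alpha:
        out //= math.factorial(a)
    return out

def check(d, k):
    x = [random.uniform(-1.0, 1.0) for _ in range(d)]
    lhs = sum(c(a, k) * math.prod(xi ** (2 * ai) for xi, ai in zip(x, a))
              for a in itertools.product(range(k + 1), repeat=d)
              if sum(a) == k)
    rhs = sum(xi * xi for xi in x) ** k
    assert abs(lhs - rhs) < 1e-9   # sum over |alpha| = k equals |x|^(2k)

random.seed(0)
for d, k in [(1, 3), (2, 2), (3, 4)]:
    check(d, k)
```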
We assume throughout the paper that $2k>d$, because this has the
effect that $BL^k(\Omega)$ is embedded in the continuous
functions~\cite{Duchon}. The spaces $BL^k(\straightletter{R}^d)$ give rise to
minimal norm interpolants which are exactly of the form given in
Equation~\eqref{interpolant form}, where the radial basic function
is $x\mapsto\abs{x}^{2k-d}$ or $x\mapsto \abs{x}^{2k-d}\
\log{\abs{x}}$, depending on the parity of $d$.
It is perhaps no surprise to learn that the related functions $\psi$
are strictly conditionally positive definite of some appropriate
order. The name given to interpolants employing these basic
functions is surface splines. This is because they are a genuine
multivariate analogue of the well-loved natural splines in one
dimension.
It is of central importance to understand the behaviour of the error
between a function $f:\Omega \To \straightletter{R}$ and its interpolant as the set
$\nodes \subset \Omega$ becomes ``dense'' in $\Omega$. The measure
of density we employ is the {\it fill-distance} $h=\sup_{x \in
\Omega} \min_{a \in \nodes} \abs{x-a}$. One might hope that for some
suitable norm $\norm{\,\cdot\,}$ there is a constant $\gamma$,
independent of $f$ and $h$, such that
\begin{equation*}
\norm{f-Sf} = \order{(h^\gamma)},\qquad \mbox{as $h \To 0$}.
\end{equation*}
In the case of the Beppo-Levi spaces, there is a considerable
freedom of choice for the norm in which the error between $f$ and
$Sf$ is measured. The most widely quoted result concerns the norm
$\norm{\,\cdot\,}_{L_\infty(\Omega)}$, but for variety we prefer to
deal with the $L_p$-norm. To do this it is helpful to assume
$\Omega$ is a bounded domain, whose boundary is sufficiently smooth.
In this case
there is a constant $C>0$, independent of
$f$ and $h$, such that for all $f \in BL^k(\Omega)$,
\begin{equation}\label{known error}
\norm{f-Sf}_{L_p(\Omega)} \leq \biggl\{
\begin{array}{ll}
C h^{k-\frac{d}{2}+\frac{d}{p}}\abs{f}_{k,\Omega},& 2\leq p \leq
\infty\\
C h^{k} \abs{f}_{k,\Omega},& 1\leq p <2
\end{array},\qquad \mbox{as $h \To 0$.}
\end{equation}
There has been considerable interest recently in the following very
natural question. What happens if the function $f$ does not possess
sufficient smoothness to lie in $BL^k(\Omega)$? It may well be that
$f$ lies in $BL^m(\Omega)$, where $2k>2m>d$. The condition $2m>d$
ensures that $f(a)$ exists for each $a\in\nodes$, and so $Sf$
certainly exists. However, $\abs{f}_{k,\Omega}$ is not defined. It
is simple to conjecture that the new error estimate should be
\begin{equation}\label{ourerror}
\norm{f-Sf}_{L_p(\Omega)} \leq \biggl\{
\begin{array}{ll}
C h^{m-\frac{d}{2}+\frac{d}{p}}\abs{f}_{m,\Omega},& 2\leq p \leq
\infty\\
C h^{m} \abs{f}_{m,\Omega},& 1\leq p <2
\end{array},\qquad \mbox{as $h \To 0$}.
\end{equation}
It is perhaps surprising to the uninitiated reader that this
estimate is not true even with the reasonable restrictions we have
placed on $k$ and $m$. We are going to describe a recent result
from~\citeasnoun{johnsonnew}. To do that, we recall the familiar
definition of a Sobolev space. Let $W^k_2(\Omega)$ denote the
$k$\textsuperscript{th} order Sobolev space, which consists of
functions all of whose derivatives up to and including order $k$ are
in $L_2(\Omega)$. It is a Banach space under the norm
\begin{equation*}
\norm{f}_{k,\Omega}=\Biggl( \sum_{i=0}^k \abs{f}_{i,\Omega}^2
\Biggr)^{1/2},\qquad \mbox{where $f\in W^k_2(\Omega)$}.
\end{equation*}
We have already tacitly alluded to the Sobolev embedding theorem
which states that when $\Omega$ is reasonably regular (for example,
when $\Omega$ possesses a Lipschitz continuous boundary) and
$k>d/2$, then the space $W^k_2(\Omega)$ can be embedded in
$C(\Omega)$~\citeaffixed[Theorem 5.4, p. 97]{Adams}{see}. Now
Johnson's result is as follows.
\begin{thm}[\citename{johnsonnew}]
Let $\Omega$ be the unit ball in $\straightletter{R}^d$ and assume $d/2<m<k$. For
every $h_0>0$, there exists an $f \in W^m_2(\straightletter{R}^d)$ and a sequence of
sets $\{\nodes_n\}_{n\in \straightletter{N}}$ with the following properties:
\begin{list}{}{}
\item (i) each set $\nodes_n$ consists of finitely many points
contained in $\Omega$;
\item (ii) the fill-distance of each set $\nodes_n$ is at most $h_0$;
\item (iii) if $S^n_kf$ is the surface
spline interpolant to $f$ from $BL^k(\straightletter{R}^d)$ associated with
$\nodes_n$, for each $n\in \straightletter{N}$, then $\norm{S_k^nf}_{L_1(\Omega)}
\rightarrow \infty$ as $n\rightarrow \infty$.
\end{list}
\end{thm}
If the surface spline interpolation operator is unbounded, there is
of course no possibility of getting an error estimate of the kind we
conjectured. Johnson's proof uses point sets which have a special
feature. We define the separation distance of $\nodes_n$ as $q_n =
\min\{\abs{a-b}/2: a,b\in \nodes_n, a\neq b\}$. Let the
fill-distance of each $\nodes_n$ be $h_n$. In Johnson's proof, the
construction of $\nodes_n$ is such that $q_n/h_n\rightarrow 0$. We
make this remark because Johnson's result in one dimension refers
to interpolation by natural splines, and in this setting the
connection between the separation distance and the unboundedness of
$S_k^n$ has been known for some time. What is also known in the
one-dimensional case is that if the separation distance is tied to
the fill-distance, then a result of the type we are seeking is true.
Theorem \ref{main} is the definitive result we obtain, and is the
formalisation of the conjectured bounds in
Equation~(\ref{ourerror}).
Subsequent to carrying out this work, we became aware of independent
work by~\citeasnoun{yoonnew}. In that paper, error bounds for the
case we consider here are also offered. Because of Yoon's technique
of proof, which is considerably different from our own, he obtains
error bounds for functions $f$ with the additional restriction that
$f$ lies in $W^k_\infty (\Omega)$, so the results here have wider
applicability. However, Yoon does consider the shifted surface
splines, whilst in this paper we have chosen to consider only
surface splines as an exemplar of what can be achieved. At the end
of Section~3 we offer some comments on the difference between our
approach and that of Yoon.
To close this section we introduce some notation that will be
employed throughout the paper. The support of a function $\phi:\straightletter{R}^d
\To \straightletter{R}$ is defined to be the closure of the set $\set{x \in \straightletter{R}^d:\
\phi(x) \neq 0}$, and is denoted by $\supp(\phi)$. The volume of a
bounded set $\Omega$ is the quantity $\int_\Omega \,dx$ and will be
denoted $\vol{\Omega}$. We make much use of the space $\Pi_{m-1}$,
so for brevity we fix $\ell$ as the dimension of this space.
Finally, when we write $\h{f}$ we mean the Fourier transform of $f$.
The context will clarify whether the Fourier transform is the
natural one on $L_1(\straightletter{R}^d)$:
$$
\widehat{f}(x) = \frac{1}{(2\pi)^{d/2}}\int_{\straightletter{R}^d} f(t) e^{-ix\cdot t}\,
dt,
$$
or one of its several extensions to $L_2(\straightletter{R}^d)$ or ${\cal S}'$.
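With this normalisation the Gaussian $t\mapsto e^{-\abs{t}^2/2}$ is its own Fourier transform; the sketch below verifies this in $d=1$ by direct quadrature (the truncation limit and grid size are ad hoc choices):

```python
import math

def ft(f, x, lim=12.0, n=8000):
    # midpoint-rule quadrature for (2*pi)^(-1/2) * integral of f(t) e^{-ixt};
    # for an even real f the sine part vanishes, so only the cosine is kept
    h = 2 * lim / n
    s = sum(f(-lim + (i + 0.5) * h) * math.cos(x * (-lim + (i + 0.5) * h))
            for i in range(n))
    return s * h / math.sqrt(2 * math.pi)

g = lambda t: math.exp(-t * t / 2)
for x in (0.0, 0.7, 1.5):
    assert abs(ft(g, x) - g(x)) < 1e-6   # the Gaussian is its own transform
```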
\section{SOBOLEV EXTENSION THEORY}
In this section we intend to collect together a number of useful
results, chiefly about the sorts of extensions which can be carried
out on Sobolev spaces. We begin with the well-known result which can
be found in many of the standard texts. Of course, the precise
nature of the set $\Omega$ in the following theorem varies from book
to book, and we have not striven here for the utmost generality,
because that is not really a part of our agenda in this paper.
\begin{thm}[\citename{Adams}~\citeyear*{Adams}, Theorem 4.32, p. 91]\label{sobolev extension thm}
Let $\Omega$ be an open, bounded subset of $\straightletter{R}^d$ satisfying the uniform
cone condition.
For every $f \in W^{m}_2(\Omega)$ there
is an $f^\Omega \in W^{m}_{2}(\straightletter{R}^d)$ satisfying
$f^\Omega\!\!\mid_{\Omega} = f$. Moreover, there is a positive
constant $K=K(\Omega)$ such that for all $f \in W^{m}_2(\Omega)$,
\begin{equation*}
\norm{f^\Omega}_{m,\straightletter{R}^d} \leq K \norm{f}_{m,\Omega}.
\end{equation*}
\end{thm}
We remark that the extension $f^\Omega$ can be chosen to be
supported on any compact subset of $\straightletter{R}^d$ containing $\Omega$. To
see this, we construct $f^\Omega$ in accordance with
Theorem~\ref{sobolev extension thm}, then select $\eta \in
C^m_0(\straightletter{R}^d)$ such that $\eta(x)=1$ for $x \in \Omega$. Now, if we
consider the compactly supported function $f^\Omega_0 = \eta
f^{\Omega} \in W^m_2(\straightletter{R}^d)$, we have $f^\Omega_0 \!\!\mid_\Omega =
f$. An elementary application of the Leibniz formula gives
\begin{equation*}
\norm{f^\Omega_0}_{m,\straightletter{R}^d} \leq C \norm{f}_{m,\Omega},\qquad
\mbox{where
$C=C(\Omega,\eta)$}.
\end{equation*}
One of the nice features of the above extension is that the
behaviour of the constant $K(\Omega)$ can be understood for simple
choices of $\Omega$. The reason for this is of course the choice of
$\Omega$ and the way the seminorms defining the Sobolev norms behave
under dilations and translations of $\Omega$.
\begin{lemma}\label{cov lemma}
Let $\Omega$ be a measurable subset of $\straightletter{R}^d$. Define the mapping
$\sigma:\straightletter{R}^d\rightarrow\straightletter{R}^d$ by
$\sigma(x)=a+h(x-t)$, where $h>0$, and
$a$, $t$, $x \in \straightletter{R}^d$. Then for all $f \in {W}^m_2(\sigma(\Omega))$,
\begin{equation*}
\abs{f \circ \sigma}_{m,\Omega} = h^{m-d/2}
\abs{f}_{m,\sigma(\Omega)}.
\end{equation*}
\end{lemma}
\proof We have, for $\abs{\alpha}=m$,
\begin{equation*}
(D^\alpha (f \circ \sigma))(x) = h^m (D^\alpha f )(\sigma(x)).
\end{equation*}
Thus,
\begin{equation*}
\begin{split}
\abs{f \circ \sigma}^2_{m,\Omega} &= \sum_{\abs{\alpha}=m}
c_\alpha
\int_{\Omega} \abs{(D^\alpha
(f\circ \sigma))(x)}^2\
dx\\
&= h^{2m}
\sum_{\abs{\alpha}=m} c_\alpha
\int_{\Omega} \abs{(D^\alpha
f )(\sigma(x))}^2\,
dx.
\end{split}
\end{equation*}
Now, using the change of variables $y=\sigma(x)$,
\begin{equation*}
\abs{f \circ \sigma}^2_{m,\Omega} = h^{2m-d} \sum_{\abs{\alpha}=m}
c_\alpha
\int_{\sigma(\Omega)}
\abs{(D^\alpha f )(y)}^2\,
dy =
h^{2m-d}\abs{f}^2_{m,\sigma(\Omega)}. \qedhere
\end{equation*}
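The scaling in Lemma~\ref{cov lemma} is easy to verify numerically; the sketch below does so in the simplest case $d=m=1$ with $f=\sin$, using midpoint-rule quadrature (all parameter values are illustrative):

```python
import math

def l2_norm(g, lo, hi, n=20000):
    # midpoint-rule approximation to the L2 norm of g over (lo, hi)
    step = (hi - lo) / n
    return math.sqrt(sum(g(lo + (i + 0.5) * step) ** 2 for i in range(n)) * step)

a, t, h = 0.7, 0.3, 2.5                  # sigma(x) = a + h*(x - t); m = d = 1
comp_p = lambda x: h * math.cos(a + h * (x - t))    # (f o sigma)' for f = sin
lhs = l2_norm(comp_p, 0.0, 1.0)                     # |f o sigma|_{1,(0,1)}
rhs = h ** (1 - 0.5) * l2_norm(math.cos, a - h * t, a + h * (1 - t))
assert abs(lhs - rhs) < 1e-6             # h^(m - d/2) scaling with m = 1, d = 1
```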
Unfortunately, the Sobolev extension theorem refers to the Sobolev norm. We
want to work with a norm which is more convenient for our purposes.
This norm is in fact equivalent to the Sobolev norm, as we shall now
see.
\begin{lemma}\label{norm equivlance}
Let $\Omega$ be an open subset of $\straightletter{R}^d$ having the cone property and a Lipschitz-continuous boundary. Let
$\dotz{b_1}{b_\ell} \in \Omega$ be unisolvent with respect to
$\Pi_{m-1}$. Define a norm on $W^m_2(\Omega)$ via
\begin{equation*}
\norm{f}_\Omega = \Biggl(
\abs{f}_{m,\Omega}^2+\sum_{i=1}^\ell
\abs{f(b_i)}^2 \Biggr)^{1/2},\qquad f\in W^m_2(\Omega).
\end{equation*}
There are positive constants $K_1$ and
$K_2$ such that for all $f \in W^m_2(\Omega)$,
\begin{equation*}
K_1 \norm{f}_{m,\Omega} \leq \norm{f}_{\Omega} \leq K_2
\norm{f}_{m,\Omega}.
\end{equation*}
\end{lemma}
\proof The conditions imposed on $m$ and $\Omega$ ensure that
$W^m_2(\Omega)$ is continuously embedded in
$C(\Omega)$~\cite[Theorem 5.4, p. 97]{Adams}. So, given $x \in
\Omega$, there is a constant $C$ such that $\abs{f(x)} \leq C
\norm{f}_{m,\Omega}$ for all $ f \in W^m_2(\Omega)$. Thus, there are
constants $\dotz{C_1}{C_\ell}$ such that
\begin{equation}
\norm{f}_{\Omega}^2 \leq \abs{f}_{m,\Omega}^2 + \sum_{i=1}^\ell
C_i \norm{f}_{m,\Omega}^2 \leq \Big(1+\sum_{i=1}^\ell C_i\Big)
\norm{f}_{m,\Omega}^2.
\label{bound}
\end{equation}
On the other hand, suppose there is no positive number $K$ with
$\norm{f}_{m,\Omega} \leq K \norm{f}_{\Omega}$ for all $f \in
W^m_2(\Omega)$. Then there is a sequence $\set{f_j}$ in
$W^m_2(\Omega)$ with
\begin{equation*}
\norm{f_j}_{m,\Omega} =1\qquad \mbox{and}\qquad
\norm{f_j}_{\Omega} \leq \frac{1}{j},\qquad \mbox{for $j=1,2,\dotsc$.}
\end{equation*}
The Rellich selection theorem~\cite[Theorem 1.9, p. 32]{Braess}
states that $W^m_2(\Omega)$ is compactly embedded in
$W^{m-1}_2(\Omega)$. Therefore, as $\set{f_j}$ is bounded in
$W^m_2(\Omega)$, this sequence must contain a convergent subsequence
in $W^{m-1}_2(\Omega)$. With no loss of generality we shall assume
$\set{f_j}$ itself converges in $W^{m-1}_2(\Omega)$. Thus $\{f_j\}$
is a Cauchy sequence in $W^{m-1}_2(\Omega)$. Next, as
$\norm{f_j}_{\Omega} \To 0$ it follows that $\abs{f_j}_{m,\Omega}
\To 0$. Moreover,
\begin{equation*}
\begin{split}
\norm{f_j - f_k}_{m,\Omega}^2 &= \norm{f_j - f_k}_{m-1,\Omega}^2
+
\abs{f_j -
f_k}_{m,\Omega}^2\\
&\leq \norm{f_j -
f_k}_{m-1,\Omega}^2 +
2\abs{f_j}_{m,\Omega}^2 +
2\abs{f_k}_{m,\Omega}^2.
\end{split}
\end{equation*}
Since $\{f_j\}$ is a Cauchy sequence in $W^{m-1}_2(\Omega)$, and
$\abs{f_j}_{m,\Omega} \rightarrow 0$, it follows that $\set{f_j}$ is
a Cauchy sequence in $W^{m}_2(\Omega)$. Since $W^m_2(\Omega)$ is
complete with respect to $\norm{\,\cdot\,}_{m,\Omega}$, this
sequence converges to a limit $f \in W^m_2(\Omega)$. By
Equation~(\ref{bound}),
$$
\norm{f-f_j}_\Omega^2 \leq \Big(1+\sum_{i=1}^\ell C_i\Big)
\norm{f-f_j}^2_{m,\Omega},
$$
and hence $\norm{f-f_j}_\Omega \rightarrow 0$ as $j\rightarrow
\infty$. Since $\norm{f_j}_\Omega \rightarrow 0$, it follows that
$f=0$. Because $\norm{f_j}_{m,\Omega} =1$, $j=1,2,\ldots$, it
follows that $\norm{f}_{m,\Omega}=1$. This contradiction establishes
the result. \qed
We are almost ready to state the key result which we will employ in
our later proofs about error estimates. Before we do this, let us
make a simple observation. Look at the unisolvent points
$\dotz{b_1}{b_\ell}$ in the statement of the previous Lemma. Since
$W^m_2(\Omega)$ can be embedded in $C(\Omega)$, it makes sense to
talk about the interpolation projection $P:W^m_2(\Omega) \rightarrow
\Pi_{m-1}$ based on these points. Furthermore, under certain nice
conditions (for example $\Omega$ being a bounded domain), $P$ is the
orthogonal projection of $W^m_2(\Omega)$ onto $\Pi_{m-1}$.
\begin{lemma}\label{poly extention}
Let $B$ be any ball of radius $h$ and center $a \in \straightletter{R}^d$, and let
$f \in W^{m}_{2}(B)$. Whenever $\dotz{b_1}{b_\ell} \in \straightletter{R}^d$ are unisolvent
with respect to $\Pi_{m-1}$ let $P_b:C(\straightletter{R}^d) \To \Pi_{m-1}$ be the
Lagrange interpolation operator on $\dotz{b_1}{b_\ell}$. Then
there exists $c=(\dotz{c_1}{c_\ell}) \in B^\ell$ and
$g \in W^{m}_{2}(\straightletter{R}^d)$ such that
\begin{enumerate}
\item $g(x)=(f-P_c f)(x)$ for all $x \in B$;
\item $g(x)=0$ for all $\abs{x-a}>2h$;
\item there exists a $C>0$, independent of $f$ and $B$, such
that $\abs{g}_{m,\straightletter{R}^d} \leq C \abs{f}_{m,B}$.
\end{enumerate}
Furthermore, $\dotz{c_1}{c_\ell}$ can be arranged so that $c_1=a$.
\end{lemma}
\proof Let $B_1$ be the unit ball in $\straightletter{R}^d$ and let $B_2=2B_1$. Let
$\dotz{b_1}{b_\ell} \in B_1$ be unisolvent with respect to
$\Pi_{m-1}$. Define $\sigma(x)=h^{-1}(x-a)$ for all $x \in \straightletter{R}^d$.
Set $c_i = \sigma^{-1}(b_i)$ for $i=\dotz{1}{\ell}$ so that
$\dotz{c_1}{c_\ell} \in B$ are unisolvent with respect to
$\Pi_{m-1}$. Take $f \in W^m_2(B)$. Then $(f-P_c f) \circ
\sigma^{-1} \in W^m_2(B_1)$. Set $F=(f-P_c f) \circ \sigma^{-1}$.
Let $F^{B_1}$ be constructed as an extension to $F$ on $B_1$. By
Theorem~\ref{sobolev extension thm} and the remarks following it, we
can assume $F^{B_1}$ is supported on $B_2$. Define $g=F^{B_1} \circ
\sigma \in W^m_2(\straightletter{R}^d)$. Let $x \in B$. Since $\sigma(B)=B_1$ there
is a $y \in B_1$ such that $x=\sigma^{-1}(y)$. Then,
\begin{equation*}
g(x)=(F^{B_1} \circ \sigma)(x) = F^{B_1}(y) = ((f-P_c f) \circ
\sigma^{-1}) (y) = (f-P_c f)(x).
\end{equation*}
Also, for $x \in \straightletter{R}^d$ with $\abs{x-a}>2h$, we have $\abs{\sigma(x)}
> 2$. Since $F^{B_1}$ is supported on $B_2$, $g(x)=0$ for $\abs{x-a}>2h$. Hence,
$g$ satisfies properties \textit{1} and \textit{2}. By
Theorem~\ref{sobolev extension thm} there is a $K_1$, independent of
$f$ and $B$, such that
\begin{equation*}
\norm{F^{B_1}}_{m,B_2} = \norm{F^{B_1}}_{m,\straightletter{R}^d} \leq K_1
\norm{F}_{m,B_1}.
\end{equation*}
We have seen in Lemma~\ref{norm equivlance} that if we endow
$W^m_2(B_1)$ and $W^m_2(B_2)$ with the norms
\begin{equation*}
\norm{v}_{B_i} = \Biggl( \abs{v}_{m,B_i}^2 + \sum_{i=1}^\ell
\abs{v(b_i)}^2 \Biggr)^{1/2},
\qquad i=1,2,
\end{equation*}
then $\norm{\,\cdot\,}_{B_i}$ and $\norm{\,\cdot\,}_{m,B_i}$ are
equivalent for $i=1,2$. Thus, there are constants $K_2$ and $K_3$,
independent of $f$ and $B$, such that
\begin{equation*}
\norm{F^{B_1}}_{B_2} \leq K_2 \norm{F^{B_1}}_{m,B_2} \leq K_1 K_2
\norm{F}_{m,B_1} \leq K_1 K_2 K_3 \norm{F}_{B_1}.
\end{equation*}
Set $C=K_1 K_2 K_3$. Since $F^{B_1}(b_i)=F(b_i)=(f-P_c
f)(\sigma^{-1}(b_i))=(f-P_c f)(c_i)=0$ for $i=\dotz{1}{\ell}$, it
follows that $\abs{F^{B_1}}_{m,B_2} \leq C \abs{F}_{m,B_1}$. Thus,
$\abs{g \circ \sigma^{-1}}_{m,\straightletter{R}^d} \leq C \abs{(f-P_c f) \circ
\sigma^{-1}}_{m,B_1}$. Now, Lemma~\ref{cov lemma} can be employed
twice to give
\begin{equation*}
\abs{g}_{m,\straightletter{R}^d} = h^{d/2 -m}\abs{g \circ \sigma^{-1}}_{m,\straightletter{R}^d} \leq C h^{d/2
-m} \abs{(f-P_c f) \circ \sigma^{-1}}_{m,B_1} = C \abs{f-P_c f}_{m,B}.
\end{equation*}
Finally, we observe that $\abs{f-P_c f}_{m,B} = \abs{f}_{m,B}$ to
complete the first part of the proof. The remaining part follows by
selecting $b_1=0$ and choosing $\dotz{b_2}{b_\ell}$ accordingly in
the above construction. \qed
\begin{lemma}[\citename{Duchon2}~\citeyear*{Duchon2}]\label{light wayne ext}
Let $\Omega$ be an open, bounded, connected subset of $\straightletter{R}^d$ having
the cone property and a Lipschitz-continuous boundary. Let $f \in
W^m_2(\Omega)$. Then there exists a unique element $f^\Omega \in
BL^m(\straightletter{R}^d)$ such that $f^\Omega\!\!\mid_{\Omega} = f$, and amongst
all elements of $BL^m(\straightletter{R}^d)$ satisfying this condition,
$\abs{f^\Omega}_{m,\straightletter{R}^d}$ is minimal. Furthermore, there exists a
constant $K=K(\Omega)$ such that, for all $f \in W^m_2(\Omega)$,
\begin{equation*}
\abs{f^\Omega}_{m,\straightletter{R}^d} \leq K \abs{f}_{m,\Omega}.
\end{equation*}
\end{lemma}
\section{ERROR ESTIMATES}
We arrive now at our main section, in which we derive the required
error estimates. Our strategy is simple. We begin with a function
$f$ in $BL^{m}(\straightletter{R}^d)$. We want to estimate $\norm{f- S_kf}$ for some
suitable norm $\norm{\,\cdot\,}$, where $S_k$ is the minimal norm
interpolation operator from $BL^{k}(\straightletter{R}^d)$, and $k> m$. We suppose
that we already have an error bound using the norm
$\norm{\,\cdot\,}$ for all functions $g\in BL^{k}(\straightletter{R}^d)$. Our proof
now proceeds as follows. Firstly, we adjust $f$ in a somewhat
delicate manner, obtaining a function $F$, still in $BL^{m}(\straightletter{R}^d)$,
and with seminorm in $BL^{m}(\straightletter{R}^d)$ not too far away from that of
$f$. We then smooth $F$ by convolving it with a function $\phi \in
C^\infty_0(\straightletter{R}^d)$. The key feature of the adjustment of $f$ to $F$
is that $(\phi*F)(a) = f(a)$ for every point $a$ in our set of
interpolation points. It then follows that $F\in BL^{k}(\straightletter{R}^d)$. We
then use the usual error estimate in $BL^{k}(\straightletter{R}^d)$. A standard
procedure (Lemma~\ref{seminorm convolution bound}) then takes us
back to an error estimate in $BL^{m}(\straightletter{R}^d)$.
\begin{lemma}\label{seminorm convolution bound}
Let $m \leq k$ and let $\phi \in C^{\infty}_0(\straightletter{R}^{d})$. For each $h>0$, let
$\phi_h(x) = h^{-d} \phi(x/h)$ for $x \in \straightletter{R}^d$. Then there exists
a constant $C>0$, independent of $h$, such that for all $f \in
{BL}^{m}(\straightletter{R}^d)$,
\begin{equation*}
\abs{\phi_h * f}_{k,\straightletter{R}^d} \leq C h^{m-k}
\abs{f}_{m,\straightletter{R}^d}.
\end{equation*}
Furthermore, we have $\abs{\phi_h *
f}_{k,\straightletter{R}^d} = o(h^{m-k})$ as $h \To 0$.
\end{lemma}
\proof The chain rule for differentiation gives $(D^\gamma
\phi_h)(x) = h^{-(d+\abs{\gamma})} (D^\gamma \phi) (x/h)$ for all $x
\in \straightletter{R}^d$, and $\gamma \in \mathord{\rm Z\mkern-6.1mu Z}^d_+$. Thus, for $\beta \in \mathord{\rm Z\mkern-6.1mu Z}^d_+$
with $\abs{\beta}=m$ we have
\begin{align}
\int_{\straightletter{R}^d} \abs{(D^\gamma \phi_h * D^\beta f)(x)}^2\ dx
&= \int_{\straightletter{R}^d} \biggabs{ \int_{\straightletter{R}^d} (D^\gamma
\phi_h) (x-y)
(D^\beta f)(y)\ dy}^2\ dx \nonumber\\
&= h^{-2(d+\abs{\gamma})} \int_{\straightletter{R}^d} \biggabs{
\int_{\straightletter{R}^d}
(D^\gamma \phi) \Bigl(\frac{x-y}{h}\Bigr)
(D^\beta f)(y)\, dy}^2\
dx \nonumber\\
&= h^{-2\abs{\gamma}} \int_{\straightletter{R}^d} \biggabs{
\int_{\straightletter{R}^d} (D^\gamma \phi )(t)
(D^\beta f)(x-h t)\ dt}^2\, dx \nonumber\\
&= h^{-2\abs{\gamma}} \int_{\straightletter{R}^d} \biggabs{
\int_{K} (D^\gamma \phi )(t)
(D^\beta f)(x-h t)\ dt}^2\, dx,\label{little
oh}
\end{align}
where $K=\supp{(\phi)}$. An application of the Cauchy-Schwartz
inequality gives
\begin{equation*}
\int_{\straightletter{R}^d} \abs{(D^\gamma \phi_h * D^\beta f)(x)}^2\ dx \leq
h^{-2\abs{\gamma}} \int_{\straightletter{R}^d} \Biggl( \int_{K} \abs{(D^\gamma
\phi)(t)}^2\ dt \Biggr)\Biggl(\int_{K} \abs{(D^\beta f)(x-h t)}^2\
dt\Biggr)\ dx,
\end{equation*}
and so,
\begin{equation}\label{post cauchy-schwartz}
\int_{\straightletter{R}^d} \abs{(D^\gamma \phi_h * D^\beta f)(x)}^2\ dx \leq
h^{-2\abs{\gamma}} \int_{\straightletter{R}^d} \abs{(D^\gamma \phi)(t)}^2\ dt
\int_{\straightletter{R}^d} \int_{K} \abs{(D^\beta f)(x-h t)}^2\ dtdx.
\end{equation}
The Parseval formula together with the relation $(D^\alpha (\phi_h
* f))\ehat{} = (i\,\cdot\,)^\alpha (\phi_h
* f)\ehat{}$ provide us with the
equality
\begin{align}
\sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d} \abs{(D^\alpha
(\phi_h * f))(x)}^2\ dx
&= \sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d}
\abs{(ix)^\alpha (\phi_h * f)\ehat(x)}^2\
dx\nonumber\\
&= \int_{\straightletter{R}^d} \sum_{\abs{\alpha}=k} c_\alpha
x^{2\alpha} \abs{(\phi_h * f)\ehat(x)}^2\
dx.\label{i x alpha}
\end{align}
Now, when Equation \eqref{i x alpha} is used in conjunction with the
relation
\begin{equation*}
\sum_{\abs{\alpha}=k} c_\alpha x^{2\alpha} = \abs{x}^{2k}=
\abs{x}^{2(m+k-m)}= \sum_{\abs{\beta}=m} c_\beta x^{2\beta}
\sum_{\abs{\gamma}=k-m} c_\gamma x^{2\gamma},
\end{equation*}
we obtain
\begin{equation*}
\begin{split}
\sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d} \abs{(D^\alpha
(\phi_h
* f))(x)}^2\ dx &= \int_{\straightletter{R}^d}
\sum_{\abs{\beta}=m} c_\beta x^{2\beta}
\sum_{\abs{\gamma}=k-m} c_\gamma x^{2\gamma}
\abs{(\phi_h * f)\ehat(x)}^2\ dx\\
&= \sum_{\abs{\beta}=m} c_\beta \int_{\straightletter{R}^d}
\sum_{\abs{\gamma}=k-m} c_\gamma x^{2\gamma}
\abs{(ix)^\beta (\phi_h * f)\ehat(x)}^2\ dx
\\
&= \sum_{\abs{\beta}=m} c_\beta \int_{\straightletter{R}^d}
\sum_{\abs{\gamma}=k-m} c_\gamma x^{2\gamma}
\abs{(D^\beta(\phi_h * f))\ehat(x)}^2\
dx \\
&= \sum_{\abs{\beta}=m} c_\beta
\sum_{\abs{\gamma}=k-m} c_\gamma
\int_{\straightletter{R}^d}
\abs{(ix)^{\gamma} (D^\beta(\phi_h *
f))\ehat(x)}^2\ dx \\
&= \sum_{\abs{\beta}=m} c_\beta
\sum_{\abs{\gamma}=k-m} c_\gamma
\int_{\straightletter{R}^d}
\abs{(D^\gamma (D^\beta(\phi_h *
f)))\ehat(x)}^2\ dx \\
&= \sum_{\abs{\beta}=m} c_\beta
\sum_{\abs{\gamma}=k-m} c_\gamma
\int_{\straightletter{R}^d}
\abs{(D^\gamma (D^\beta(\phi_h *
f)))(x)}^2\ dx .
\end{split}
\end{equation*}
Since the operation of differentiation commutes with convolution, we
have that
\begin{equation}\label{pre cauchy-schwarz}
\sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d} \abs{(D^\alpha (\phi_h
* f))(x)}^2\ dx = \sum_{\abs{\beta}=m} c_\beta
\sum_{\abs{\gamma}=k-m} c_\gamma \int_{\straightletter{R}^d} \abs{(D^\gamma \phi_h
* D^\beta f )(x)}^2\ dx.
\end{equation}
Combining Equation \eqref{post cauchy-schwartz} with Equation
\eqref{pre cauchy-schwarz} we deduce that
\begin{align*}
\sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d} &\abs{(D^\alpha (\phi_h
* f))(x)}^2\ dx
\\
&\leq \sum_{\abs{\beta}=m} c_\beta \sum_{\abs{\gamma}=k-m} c_\gamma
h^{-2\abs{\gamma}} \int_{\straightletter{R}^d} \abs{(D^\gamma \phi)(t)}^2\ dt
\int_{\straightletter{R}^d} \int_{K} \abs{(D^\beta f)(x-h t)}^2\ dtdx\\
&= h^{2(m-k)} \abs{\phi}_{k-m,\straightletter{R}^d}^2 \sum_{\abs{\beta}=m} c_\beta
\int_{\straightletter{R}^d} \int_{K} \abs{(D^\beta f)(x-h t)}^2\ dtdx.
\end{align*}
Fubini's theorem permits us to change the order of integration in
the previous inequality. Thus,
\begin{equation*}
\sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d} \abs{(D^\alpha (\phi_h
* f))(x)}^2\, dx \leq h^{2(m-k)} \abs{\phi}_{k-m,\straightletter{R}^d}^2
\sum_{\abs{\beta}=m} c_\beta \int_{K} \int_{\straightletter{R}^d} \abs{(D^\beta
f)(x-h t)}^2\, dxdt.
\end{equation*}
Finally, a change of variables in the inner integral above yields
\begin{equation*}
\sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d} \abs{(D^\alpha (\phi_h
* f))(x)}^2\, dx \leq h^{2(m-k)} \abs{\phi}_{k-m,\straightletter{R}^d}^2
\sum_{\abs{\beta}=m} c_\beta \int_{K} \int_{\straightletter{R}^d} \abs{(D^\beta
f)(z)}^2\, dzdt.
\end{equation*}
Setting $C=\abs{\phi}_{k-m,\straightletter{R}^d} \sqrt{\vol{K}}$ we conclude that
$\abs{\phi_h * f}_{k,\straightletter{R}^d} \leq C h^{m-k} \abs{f}_{m,\straightletter{R}^d}$ as
required. To deal with the remaining statement of the lemma, we
observe that for $\gamma \neq 0$ we have
\begin{equation*}
\int_{K} (D^\gamma \phi)(t)\, dt= \int_{\straightletter{R}^d} (D^\gamma \phi)(t)\, dt = (\widehat{D^\gamma \phi})(0) =
((i\,\cdot\,)^\gamma \widehat{\phi})(0) = 0.
\end{equation*}
Then it follows from Equation~\eqref{little oh} that for
$\abs{\beta}=m$,
\begin{equation*}
\int_{\straightletter{R}^d} \abs{(D^\gamma \phi_h * D^\beta f)(x)}^2\ dx =
h^{-2\abs{\gamma}} \int_{\straightletter{R}^d} \biggabs{ \int_{K} (D^\gamma
\phi
)(t)( (D^\beta f)(x-h t) - (D^\beta f)(x)) \ dt}^2\ dx.
\end{equation*}
Now, if we continue in precisely the same manner as before, we
obtain
\begin{multline*}
\sum_{\abs{\alpha}=k} c_\alpha \int_{\straightletter{R}^d} \abs{(D^\alpha (\phi_h
* f))(x)}^2\ dx \\\leq h^{2(m-k)} \abs{\phi}_{k-m,\straightletter{R}^d}^2
\sum_{\abs{\beta}=m} c_\beta \int_{K} \int_{\straightletter{R}^d} \abs{(D^\beta
f)(x-h t) - (D^\beta f)(x)}^2\ dxdt.
\end{multline*}
Since $D^\beta f \in L^2(\straightletter{R}^d)$ for each $\beta \in \mathord{\rm Z\mkern-6.1mu Z}^d_+$ with
$\abs{\beta}=m$, it follows that for almost all $t,x \in \straightletter{R}^d$,
\begin{equation*}
\abs{(D^\beta f)(x -ht) - (D^\beta f)(x)} \To 0,\qquad \mbox{as $h \To 0$.}
\end{equation*}
Furthermore, setting
\begin{equation*}
g(x,t) = 2\abs{(D^\beta f)(x-ht)}^2 + 2 \abs{(D^\beta f)(x)}^2,
\qquad \mbox{for almost all $x,t\in\straightletter{R}^d$,}
\end{equation*}
we see that
\begin{equation*}
\abs{(D^\beta f )(x -ht) - (D^\beta f)(x)}^2 \leq g(x,t),
\end{equation*}
for almost all $x,t\in\straightletter{R}^d$ and each $h>0$. It follows by
calculations similar to those used above that
\begin{equation*}
\int_{K} \int_{\straightletter{R}^d} g(x,t)\, dxdt =
4\mbox{vol}(K)\int_{\straightletter{R}^d}\abs{(D^\beta f)(x)}^2 \, dx <\infty.
\end{equation*}
Applying Lebesgue's dominated convergence theorem, we obtain
\begin{equation*}
\int_{K} \int_{\straightletter{R}^d} \abs{(D^\beta f)(x-h t) - (D^\beta f)(x)}^2\
dxdt \To 0,\qquad \mbox{as $h \To 0$.}
\end{equation*}
Hence, for $m\leq k$, $\abs{\phi_h *
f}_{k,\straightletter{R}^d} = o(h^{m-k})$ as $h \To 0$. \qed
\begin{lemma}\label{poly repro}
Suppose $\phi \in C^{\infty}_0(\straightletter{R}^{d})$ is supported on the unit ball and
satisfies
\begin{equation*}
\int_{\straightletter{R}^d} \phi(x)\ dx = 1\qquad \mbox{and}\qquad \int_{\straightletter{R}^d}
\phi(x) x^\alpha\ dx =0,\qquad \mbox{for all}\ 0< \abs{\alpha}
\leq
{m-1}.
\end{equation*}
For each $\eps>0$ and $x \in \straightletter{R}^d$, let $\phi_\eps(x) =
\eps^{-d}
\phi(x/\eps)$. Let $B$ be any ball of radius $h$ and center $a \in
\straightletter{R}^d$. For a fixed $p \in \Pi_{m-1}$ let $f$ be a mapping from
$\straightletter{R}^d$ to $\straightletter{R}$ such that $f(x) = p(x)$ for all $x \in B$. Then
$(\phi_\eps * f) (a) = p(a)$ for all $\eps \leq h$.
\end{lemma}
\proof Let $B_1$ denote the unit ball in $\straightletter{R}^d$. We begin by
employing a change of variables to deduce
\begin{equation*}
\begin{split}
(\phi_\eps * f) (a) &= \int_{\straightletter{R}^d} \phi_\eps(a-y) f(y)\ dy\\
&= \eps^{-d} \int_{\straightletter{R}^d}
\phi\Bigl(\frac{a-y}{\eps}\Bigr)
f(y)\ dy\\
&= \int_{\straightletter{R}^d} \phi(x)
f(a-x\eps)\ dx\\
&= \int_{B_1} \phi(x)
f(a-x\eps)\ dx.
\end{split}
\end{equation*}
Then, for $x \in B_1$,
$\abs{(a-x\eps)-a} \leq \eps \leq h$. Thus, $f(a-x\eps)=p(a-x\eps)$
for all $x\in B_1$. Moreover, there are numbers $b_\alpha$ such that
$p(a-x\eps)=p(a)+\sum_{0<\abs{\alpha}\leq {m-1}} b_\alpha x^\alpha$.
Hence,
\begin{equation*}
\begin{split}
(\phi_\eps * f) (a) &= \int_{B_1} \phi(x)
p(a-x\eps)\ dx\\
&= \int_{\straightletter{R}^d} \phi(x)
\biggl(p(a)+\sum_{0<\alpha\leq {m-1}}b_\alpha
x^\alpha \biggr)\ dx\\
&= p(a). \qedhere
\end{split}
\end{equation*}
\begin{defin}
Let $\Omega$ be an open, bounded subset of $\straightletter{R}^d$. Let $\nodes$ be a
set of points in $\Omega$. The quantity $\sup_{x \in
\Omega} \inf_{a \in \nodes} \abs{x-a} = h$ is called the fill-distance of
$\nodes$ in $\Omega$. The separation of $\nodes$ is
given by the quantity
\begin{equation*}
q=\min_{\substack{a,b \in \nodes \\ a\neq b} }
\frac{\abs{a-b}}{2}.
\end{equation*}
The quantity $h/q$ will be called the mesh-ratio of $\nodes$.
\end{defin}
\begin{thm}\label{general result}
Let $\nodes$ be a finite subset of $\straightletter{R}^d$ of separation $q>0$ and let $d<2m\leq 2k$. Then
for all $f \in BL^m(\straightletter{R}^d)$ there exists an $F \in BL^k(\straightletter{R}^d)$
such that
\begin{enumerate}
\item $F(a)=f(a)$ for all $a \in \nodes$;
\item there exists a $C>0$, independent of $f$ and $q$, such that $\abs{F}_{m,\straightletter{R}^d}\leq C \abs{f}_{m,\straightletter{R}^d}$
and $\abs{F}_{k,\straightletter{R}^d} \leq C q^{m-k}\abs{f}_{m,\straightletter{R}^d}$.
\end{enumerate}
\end{thm}
\proof Take $f \in BL^m(\straightletter{R}^d)$. For each $a \in \nodes$ let $B_a
\subset \straightletter{R}^d$ denote the ball of radius $\delta = q/4$ centered at
$a$. For each $B_a$ let $g_a$ be constructed in accordance with
Lemma~\ref{poly extention}. That is, for each $a \in \nodes$ take
$c'=(\dotz{c_2}{c_\ell}) \in B_a^{\ell-1}$ and $g_a \in W^m_2(\straightletter{R}^d)$
such that
\begin{enumerate}
\item $a,\dotz{c_2}{c_\ell}$ are unisolvent with respect to
$\Pi_{m-1}$;
\item $g_a(x)=(f-P_{(a,c')} f)(x)$ for all $x \in B_a$;
\item $P_{(a,c')} f \in \Pi_{m-1}$ and $(P_{(a,c')} f)(a) = f(a)$;
\item $g_a(x)=0$ for all $\abs{x-a}>2\delta$;
\item there exists a $C_1>0$, independent of $f$ and $B_a$, such
that $\abs{g_a}_{m,\straightletter{R}^d} \leq C_1 \abs{f}_{m,B_a}$.
\end{enumerate}
Note that if $a\neq b$, then $\supp(g_a)$ does not intersect
$\supp(g_b)$, because if $x\in \supp(g_a)$ then
\begin{equation*}
\abs{x - b} > \abs{b-a} - \abs{x-a} \geq 2q - 2 \delta = 6 \delta.
\end{equation*}
Using the observation above regarding the supports of the $g_a$'s it
follows that
\begin{align}
\biggabs{\sum_{a \in \nodes} g_a}_{m,\straightletter{R}^d}^2
&= \sum_{\abs{\alpha}=m} c_\alpha \int_{\straightletter{R}^d} \biggabs{\sum_{a \in
\nodes} (D^\alpha g_a)(x)}^2\ dx \nonumber \\
&= \sum_{\abs{\alpha}=m} c_\alpha \sum_{b \in \nodes} \int_{\supp(g_b)} \biggabs{\sum_{a \in
\nodes} (D^\alpha g_a)(x)}^2\ dx \nonumber \\
&= \sum_{\abs{\alpha}=m} c_\alpha \sum_{b \in \nodes} \int_{\supp(g_b)} \abs{(D^\alpha g_b)(x)}^2\ dx \nonumber \\
&= \sum_{a \in \nodes} \abs{g_a}_{m,\straightletter{R}^d}^2. \nonumber
\end{align}
Applying Condition 5 to the above equality we have
\begin{equation*}
\biggabs{\sum_{a \in \nodes} g_a}_{m,\straightletter{R}^d}^2 \leq C_1^2 \sum_{a \in \nodes}
\abs{f}_{m,B_a}^2\leq C_1^2 \abs{f}_{m,\straightletter{R}^d}^2.
\end{equation*}
Now set $H= f-\sum_{a\in \nodes} g_a$. It then follows from
Condition 2 above that $H(x) = (P_{(a,c')} f)(x)$ for all $x\in
B_a$, and from Condition 3 that $H(a) = f(a)$ for all $a\in \nodes$.
Let $\phi \in C^{\infty}_0(\straightletter{R}^{d})$ be supported on the unit ball
and enjoy the properties
\begin{equation*}
\int_{\straightletter{R}^d} \phi(x)\ dx = 1\qquad \mbox{and}\qquad \int_{\straightletter{R}^d}
\phi(x) x^\alpha\ dx =0,\qquad \mbox{for all}\ 0< \abs{\alpha} \leq
{m-1}.
\end{equation*}
Now set $F=\phi_\delta * H$. Using Lemma~\ref{seminorm convolution
bound}, there is a constant $C_2>0$, independent of $q$ and $f$,
such that
\begin{align}
\abs{F}_{k,\straightletter{R}^d}^2
&\leq C_2 \delta^{2(m-k)} \biggabs{f - \sum_{a \in \nodes} g_a}_{m,\straightletter{R}^d}^2 \nonumber\\
&\leq 2 C_2 \delta^{2(m-k)} \biggl( \abs{f}_{m,\straightletter{R}^d}^2+ \biggabs{\sum_{a \in \nodes}
g_a}_{m,\straightletter{R}^d}^2 \biggr)\nonumber\\
&\leq 2 C_2 (1+C_1^2) \delta^{2(m-k)} \abs{f}_{m,\straightletter{R}^d}^2.
\nonumber
\end{align}
Similarly, there is a constant $C_3>0$, independent of $q$ and $f$,
such that
\begin{align}
\abs{F}_{m,\straightletter{R}^d}^2
&\leq C_3 \biggabs{f - \sum_{a \in \nodes} g_a}_{m,\straightletter{R}^d}^2 \nonumber\\
&\leq 2 C_3 (1+C_1^2)\abs{f}_{m,\straightletter{R}^d}^2.
\nonumber
\end{align} Thus
$\abs{F}_{k,\straightletter{R}^d} \leq C q^{m-k} \abs{f}_{m,\straightletter{R}^d}$ and
$\abs{F}_{m,\straightletter{R}^d} \leq C \abs{f}_{m,\straightletter{R}^d}$ for some appropriate
constant $C>0$. Since $F=\phi_\delta
* H$ and $H|_{B_a}\in \Pi_{m-1}$ for each $a\in \nodes$, it
follows from Lemma~\ref{poly repro} that $F(a)=H(a) =f(a)$ for all
$a \in \nodes$.\qed
\begin{thm}
\label{main}
Let $\Omega$ be an open, bounded, connected subset of $\straightletter{R}^d$ satisfying the
cone property and having a Lipschitz-continuous
boundary. Suppose also $d<2m\leq 2k$. For each
$h>0$, let $\nodes_h$ be a finite,
$\Pi_{k-1}$--unisolvent subset of $\Omega$ with fill-distance $h$. Assume also that there is a
quantity $\rho>0$ such that the mesh-ratio of each $\nodes_h$ is
bounded by $\rho$ for all $h>0$. For each mapping
$f:\nodes_h\rightarrow \straightletter{R}$, let $S_k^h f$ be the
minimal norm interpolant to $f$ on $\nodes_h$ from $BL^k (\straightletter{R}^d)$.
Then there exists a constant $C>0$, independent of
$h$, such that for all $f \in BL^m(\Omega)$,
\begin{equation*}
\norm{f-S^h_k f}_{L_p(\Omega)} \leq \biggl\{
\begin{array}{ll}
C h^{m-\frac{d}{2}+\frac{d}{p}}\abs{f}_{m,\Omega},& 2\leq p \leq
\infty\\
C h^{m} \abs{f}_{m,\Omega},& 1\leq p <2
\end{array},\qquad \mbox{as $h \To 0$}.
\end{equation*}
\end{thm}
\proof Take $f\in BL^m(\Omega)$. By~\citeasnoun{Duchon}, $f\in
W^m_2(\Omega)$. We define $f^\Omega$ in accordance with
Lemma~\ref{light wayne ext}. For most of this proof we wish to work
with $f^\Omega$ and not $f$, so for convenience we shall write $f$
instead of $f^\Omega$. Construct $F$ in accordance with
Theorem~\ref{general result} and set $G=f-F$. Then $F(a)=f(a)$ and
$G(a)=0$ for all $a \in \nodes_h$. Furthermore, there is a constant
$C_1>0$, independent of $f$ and $h$, such that
\begin{align}
&\abs{F}_{k,\straightletter{R}^d} \leq C_1 \left(\frac{h}{\rho}\right)^{m-k} \abs{f}_{m,\straightletter{R}^d},\label{F bound}\\
&\abs{G}_{m,\straightletter{R}^d} \leq \abs{f}_{m,\straightletter{R}^d}+\abs{F}_{m,\straightletter{R}^d} \leq
(1+C_1)\abs{f}_{m,\straightletter{R}^d}.\label{G bound}
\end{align}
Thus $S_k^h f = S_k^h F$ and $S_m^h G = 0$, where we have adopted
the obvious notation for $S_m^h$. Hence,
\begin{equation*}
\norm{f-S_k^h f}_{L_p(\Omega)} = \norm{f-S_k^hF}_{L_p(\Omega)} =
\norm{F+G-S_k^hF}_{L_p(\Omega)} \leq \norm{F-S_k^hF}_{L_p(\Omega)} +
\norm{G-S_m^hG}_{L_p(\Omega)}.
\end{equation*}
Now, employing \possessivecite{Duchon2} error estimates for surface
splines \eqref{known error}, there are positive constants $C_2>0$
and $C_3>0$, independent of $h$ and $f$, such that
\begin{equation*}
\norm{f-S_k^h f}_{L_p(\Omega)} \leq C_2 h^{\beta(k)}
\abs{F}_{k,\Omega}
+ C_3 h^{\beta(m)} \abs{G}_{m,\Omega},\qquad \mbox{as $h\rightarrow 0$,}
\end{equation*}
where we have defined
\begin{equation*}
\beta(j)=\biggl\{
\begin{array}{ll}
j-\frac{d}{2}+\frac{d}{p},& 2\leq p \leq
\infty\\
j,& 1\leq p <2
\end{array}.
\end{equation*}
Finally, using the bounds in Equations \eqref{F bound} and \eqref{G
bound} we have
\begin{equation*}
\norm{f-S_k^h f}_{L_p(\Omega)} \leq C_4 h^{\beta(m)}
\abs{f}_{m,\straightletter{R}^d},\qquad \mbox{as $h\rightarrow 0$,}
\end{equation*}
for some appropriate $C_4>0$. To complete the proof we remind
ourselves that we have substituted $f^\Omega$ with $f$, and so an
application of Lemma~\ref{light wayne ext} shows that we can find
$C_5>0$ such that
\begin{equation*}
\norm{f-S_k^h f}_{L_p(\Omega)} \leq C_4 h^{\beta(m)}
\abs{f^\Omega}_{m,\straightletter{R}^d} \leq C_4 C_5 h^{\beta(m)}
\abs{f}_{m,\Omega},\qquad \mbox{as $h\rightarrow 0$.} \qedhere
\end{equation*}
We conclude this section with a brief commentary on the approach
of~\citeasnoun{yoonnew}. It is hardly surprising that Yoon's
technique also utilises a smoothing via convolution with a smooth
kernel function corresponding closely to our function $\phi$ used in
the proof of Theorem~\ref{main}. However, Yoon's approach is simply
to smooth at this stage, obtaining the equivalent of our function
$F$ in the proof of Theorem~\ref{main}. Because there is no
preprocessing of $f$ to $H$, Yoon's function $F$ does not enjoy the
nice property $F(a) = f(a)$ for all $a\in \nodes$. It is this
property which makes the following step, where we treat $G=f-F$, a
fairly simple process. Correspondingly, Yoon has considerably more
difficulty treating his function $G$. Our method also yields the
same bound as that in Yoon, but for a wider class of functions.
Indeed we would suggest that $BL^m(\Omega)$ is the natural class of
functions for which one would wish an error estimate of the type
given in Theorem~\ref{main}.
| {'timestamp': '2007-05-29T23:24:45', 'yymm': '0705', 'arxiv_id': '0705.4281', 'language': 'en', 'url': 'https://arxiv.org/abs/0705.4281'} |
\section{Adaptivity as a Natural Model for Shared-Medium Channels}\label{sec:appendix:naturalmodel}
In this section we briefly describe two natural interpretations of our adaptivity setting as modeling communication over a shared medium channel (e.g., a wireless channel). We first remark that one obvious parallel between wireless channels (or generally shared medium channels) and our model is that full-duplex communication, that is, sending and receiving at the same time, is not possible.
Beyond this, one natural interpretation relates our model to a setting in which the signals used by the two parties are not much stronger than the average background noise level. In this setting, having the background noise corrupt the signal a $\rho$ fraction of the time in an undetermined way is consistent with assuming that the variable noise level exceeds the signal level at most this often. It is also consistent with the impossibility of distinguishing between the presence and absence of a signal, which leads to undetermined signal decodings in the case that both parties listen. As always, the desire to avoid making possibly unrealistic assumptions about these corruptions naturally leads one to think of any undetermined symbols as being worst case, or equivalently as being determined by an adversary.
In a second, related interpretation of our model one can think of the adversary as an active malicious entity that jams the shared medium. In this case our assumptions naturally correspond to the jammer having an energy budget that allows it to over-shout a sent signal at most a $\rho$ fraction of the time. In this setting it is also natural to assume that the energy required for sending a fake signal to both parties when no signal is present is much smaller than that required for corrupting sent signals, and as such does not significantly reduce the jammer's energy budget.
\section{Results for the Exchange Problem}\label{sec:exchange}
In this section we study the \emph{Exchange Problem}, which can be viewed as the simplest instance of a two-way (i.e., interactive) communication problem. In the Exchange Problem, each party is given a bit-string of $n$ bits, that is, $i_A, i_B \in \{0,1\}^{n}$, and each party wants to know the bit-string of the other party.
Recall that the $1/4$ impossibility bound on the tolerable error-rate for non-adaptive interactive protocols presented by Braverman and Rao~\cite{BR11} was proven in exactly this simple setting. In \Cref{sec:27rateadaptationXOR}, we show that adding rate adaptivity to the exchange algorithms allows one to break this $1/4$ impossibility bound and tolerate an error-rate of $2/7-\epsilon$; in fact, this is achieved with a minimal amount of adaptivity-based decisions regarding whether a party should transmit or listen in each round. We show in \Cref{subsec:xor13impossible} that the error-rate of $2/7$ is not tolerable even for the exchange problem, and this holds even if one is given an unbounded number of rounds, alphabet size, and computational power. Furthermore, the intuition used to achieve the $2/7-\epsilon$ possibility result also extends to the more general \emph{simulation} problem, discussed in \Cref{sec:simulators}.
\subsection{An Algorithm for the Exchange Problem under Error-Rate $2/7-\epsilon$}\label{sec:27rateadaptationXOR}
Note that a simple solution based on error-correcting codes suffices for solving the exchange problem under error-rate $\frac{1}{4}-\epsilon$: the parties use a code with relative distance $1-\epsilon$. In the first half of the time, Alice sends its encoded message, and in the second half of the time, Bob sends its encoded message. At the end, each party decodes simply by picking the codeword closest to the received string. As discussed before, the error-rate $\frac{1}{4}-\epsilon$ of this approach is the best possible if no rate adaptation is used. In the following, we explain how a simple rate adaptation technique boosts the tolerable error-rate to $\frac{2}{7}-\epsilon$, which we later prove to be optimal.
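The non-adaptive scheme above can be sketched in a few lines. The following Python snippet is only an illustration, not part of the formal argument: in place of the abstract code with relative distance $1-\epsilon$ it substitutes a hypothetical toy code over $\{0,\dots,q-1\}$ with $q>2^n$ prime, whose codeword for message $m$ is $(m\cdot 1,\, m\cdot 2,\,\dots,\, m\cdot L) \bmod q$, so that distinct codewords differ in every position; all function names and parameters are our own.

```python
def encode(m, L, q):
    # Toy stand-in for a relative-distance-1 code: for a prime q > 2**n,
    # distinct messages m yield codewords differing in every position.
    return [(m * i) % q for i in range(1, L + 1)]

def nearest_codeword_decode(received, n, L, q):
    # Brute-force minimum-Hamming-distance decoding over all 2**n messages.
    def dist(m):
        return sum(r != c for r, c in zip(received, encode(m, L, q)))
    return min(range(2 ** n), key=dist)

def nonadaptive_exchange(i_A, i_B, n, L, q, corrupt_A, corrupt_B):
    """Alice talks for the first L rounds, Bob for the last L rounds.
    corrupt_A and corrupt_B are the positions the adversary corrupts in each
    transmission; their total size is assumed to be below N/4 = L/2."""
    recv_by_B = [(s + 1) % q if i in corrupt_A else s
                 for i, s in enumerate(encode(i_A, L, q))]
    recv_by_A = [(s + 1) % q if i in corrupt_B else s
                 for i, s in enumerate(encode(i_B, L, q))]
    i_A_hat = nearest_codeword_decode(recv_by_B, n, L, q)  # Bob's output
    i_B_hat = nearest_codeword_decode(recv_by_A, n, L, q)  # Alice's output
    return i_A_hat, i_B_hat

# Demo: n = 4 bits, L = 20, so N = 40 rounds; the adversary's budget is just
# under N/4 = 10, and it spends 5 + 4 = 9 corruptions.
i_A_hat, i_B_hat = nonadaptive_exchange(0b1011, 0b0110, n=4, L=20, q=101,
                                        corrupt_A=set(range(5)),
                                        corrupt_B=set(range(4)))
```

Since fewer than $N/4 = L/2$ corruptions can land on either transmission, neither received string is pushed past the unique-decoding radius, and both parties recover the other's input.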
\begin{theorem}\label{thm:27xor}
In the private randomness model with rate adaptation, there is an algorithm for the $n$-bit Exchange Problem that tolerates adversarial error rate of $2/7 - \epsilon \approx 0.2857-\epsilon$ for any $\epsilon>0$.
\end{theorem}
\begin{proof}
The algorithm runs in $N=7n/\epsilon$ rounds, which means that the adversary's budget is $(2/7-\epsilon)\cdot 7n/\epsilon=2n/\epsilon-7n$. Throughout the algorithm, we use an error-correction code $\mathcal{C}: \{0,1\}^{n} \rightarrow \{1, \dots, q\}^{\frac{n}{\epsilon}} $ that has distance $\frac{n}{\epsilon}-1$. Also, for simplicity, we use $\mathcal{C}^\kappa$ to denote the code formed by concatenating $\kappa$ copies of $\mathcal{C}$.
The first $6N/7$ rounds of the algorithm do not use any rate adaptation: simply, Alice sends $\mathcal{C}^3(i_A)$ in the first $3N/7$ rounds and Bob sends $\mathcal{C}^3(i_B)$ in the second $3N/7$ rounds. At the end of this part, each party ``estimates'' the errors invested on the transmissions of the other party by simply computing the Hamming distance of the received string to the closest codeword of the code $\mathcal{C}^3$. If this estimate is less than $N/7=n/\epsilon$, the party---say Alice---can safely assume that the closest codeword is the correct codeword. This is because the adversary's total budget is $2n/\epsilon-7n$ while the distance between two codewords of $\mathcal{C}^3$ is at least $3n/\epsilon-3$, so a wrong closest codeword would require more errors than the adversary's whole budget. In this case, in the remaining $N/7$ rounds of the algorithm, Alice will be sending $\mathcal{C}(i_A)$ and never listening. On the other hand, if Alice estimates an error of at least $N/7=n/\epsilon$, then in the remaining $N/7$ rounds, she will always be listening. The algorithm for Bob is similar.
Note that because of the limit on the budget of the adversary, at most one of the parties will be listening in the last $N/7$ rounds: if both parties estimated an error of at least $n/\epsilon$, the adversary would have spent at least $2n/\epsilon$, exceeding its budget. If no party listens, both parties have already decoded correctly. Suppose therefore that there is exactly one listening party and that it is Alice. In this case, throughout the whole algorithm, Alice has listened in a total of $4N/7=4n/\epsilon$ rounds during which Bob has been sending $\mathcal{C}^4(i_B)$. Since the adversary's budget of $2n/\epsilon-7n$ is less than half of the distance $\frac{4n}{\epsilon}-4$ of $\mathcal{C}^4$, Alice can also decode correctly by just picking the codeword of $\mathcal{C}^4$ with the smallest Hamming distance to the received string.
\end{proof}
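The round, budget, and distance arithmetic in the proof above can be sanity-checked mechanically. The following sketch uses exact rational arithmetic to verify, for several values of $n$ and $\epsilon$, the three inequalities the argument relies on; the function name is illustrative.

```python
from fractions import Fraction

def check_budget_arithmetic(n, eps):
    """Sanity-check the inequalities used in the 2/7 - eps argument.
    C has length n/eps and distance n/eps - 1, so C^k has distance
    k*(n/eps - 1). All quantities are exact fractions."""
    n, eps = Fraction(n), Fraction(eps)
    N = 7 * n / eps
    budget = (Fraction(2, 7) - eps) * N           # adversary's budget
    assert budget == 2 * n / eps - 7 * n
    d3 = 3 * (n / eps - 1)                         # distance of C^3
    d4 = 4 * (n / eps - 1)                         # distance of C^4
    threshold = n / eps                            # = N/7 error estimate
    # If a party's error estimate is below n/eps, a wrong codeword of
    # C^3 would require more errors than the adversary's whole budget.
    assert d3 - threshold > budget
    # Both parties estimating >= n/eps errors would exceed the budget,
    # so at most one party listens in the last N/7 rounds.
    assert 2 * threshold > budget
    # The lone listener decodes from C^4: the budget is below d4/2.
    assert budget < d4 / 2

for n in [1, 10, 100]:
    for eps in [Fraction(1, 100), Fraction(1, 10), Fraction(1, 4)]:
        check_budget_arithmetic(n, eps)
```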
\subsection{Impossibility of Tolerating Error-Rate $2/7$ in the Exchange Problem}\label{subsec:xor13impossible}
In this section, we turn our attention to impossibility results and particularly prove that the error-rate $2/7$ is not tolerable. See the formal statement in \Cref{thm:27lowerboundoverview}.
Note that Braverman and Rao~\cite{BR11} showed that it is not possible to tolerate an error-rate of $1/4$ with non-adaptive algorithms. For completeness, we present a (slightly more formal) proof in the style of distributed indistinguishability arguments in \Cref{sec:Impossibilities}, which also covers randomized algorithms.
We first explain a simple (but informal) argument showing that, even with adaptivity, error-rate $1/3$ is not tolerable. A formal version of this proof is deferred to \Cref{sec:Impossibilities}. The informal version explained next serves as a warm-up for the more complex argument used for proving the $2/7$ impossibility, presented formally in \Cref{thm:xor27impossible}.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{OneThird-LB}
\caption{ The adversary's strategy for the $1/3$-impossibility proof}
\label{fig:OneThird-LB}
\end{figure}
\begin{lemma}\label{lem:xor13impossible}
There is no (deterministic or randomized) adaptive protocol that robustly simulates the Exchange Protocol for an error rate of $1/3$ with an $o(1)$ failure probability even when allowing computationally unbounded protocols that use an arbitrarily large number of rounds and an unbounded alphabet.
\end{lemma}
\begin{proof}[Informal Proof]
To simplify the discussion, here we only explain the reasoning about deterministic algorithms and we also ignore the rounds in which both parties listen. Note that by the definition of the model, in those all-listening rounds, the adversary can deliver arbitrary messages to each of the parties at no cost. A complete proof is presented in \Cref{sec:Impossibilities}.
For the sake of contradiction, suppose that there is an algorithm that solves the exchange problem under adversarial error-rate $1/3$, in $N$ rounds. We work simply with $1$-bit inputs. Let $S_{X,Y}$ denote the setting where Alice receives input $X$ and Bob gets input $Y$. The idea is to make either settings $S_{0,0}$ and $S_{0,1}$ look indistinguishable to Alice or settings $S_{0,0}$ and $S_{1,0}$ look indistinguishable to Bob.
Consider setting $S_{0,0}$ and suppose that for the first $2N/3$ rounds, the adversary does not interfere. Without loss of generality, we can assume that in this setting, Alice listens (alone) in less than $N/3$ of these $2N/3$ rounds. We next explain the adversary's strategy for the case that this assumption holds. An illustration of the adversary's strategy is presented in \Cref{fig:OneThird-LB}.
First, we explain the adversary's strategy for setting $S_{0,1}$: The adversary creates a \emph{dummy personality} $\widetilde{Bob}_0$ and simulates it with Alice in setting $S_{0,0}$ where the adversary does not interfere. In the first $2N/3$ rounds of setting $S_{0,1}$, whenever Alice listens (alone), the adversary delivers the transmission of $\widetilde{Bob}_0$ to Alice. As a shorthand for this, we say \emph{Alice is connected to $\widetilde{Bob}_0$}. Since Alice listens less than $N/3$ of the time, the adversary has enough budget to completely fake Bob as $\widetilde{Bob}_0$ (from Alice's viewpoint). Thus, the two settings look identical to Alice for the first $2N/3$ rounds. During the last $N/3$ rounds of the execution in setting $S_{0,1}$, the adversary lets Alice and the real Bob talk without any interference.
Now, we explain the adversary's strategy for setting $S_{0,0}$: The adversary generates another dummy personality $\widetilde{Bob}_1$ by simulating Bob in setting $S_{0,1}$ where alone-listening rounds of Alice in the first $2N/3$ rounds are connected to $\widetilde{Bob}_0$. In setting $S_{0,0}$, the adversary lets Alice and Bob talk freely during the first $2N/3$ rounds but for the last $N/3$ rounds, whenever Alice listens, the adversary connects her to $\widetilde{Bob}_1$.
To conclude, we know that in each of the settings $S_{0,0}$ and $S_{0,1}$, at most $N/3$ rounds get corrupted by the adversary. Furthermore, the two settings look identical to Alice, which means that she cannot know Bob's input. This contradiction completes the proof.
\end{proof}
\medskip
\begin{theorem}[A rephrasing of \Cref{thm:27lowerboundoverview}]\label{thm:xor27impossible}
There is no (deterministic or randomized) adaptive protocol that robustly simulates the Exchange Protocol for an error rate of $2/7$ with an $o(1)$ failure probability even when allowing computationally unbounded protocols that use an arbitrarily large number of rounds and an unbounded alphabet.
\end{theorem}
\begin{proof}
Suppose that there is an algorithm that solves the exchange problem under adversarial error-rate $2/7$, in $N$ rounds. We study this algorithm simply with $1$-bit inputs. Let $S_{X,Y}$ denote the setting where Alice receives input $X$ and Bob gets input $Y$. We specifically work only with settings $S_{0,0}$, $S_{0,1}$, and $S_{1,0}$. Note that if a party has input $1$, it knows in which of these three settings we are. The idea is to present an adversarial strategy that changes the receptions of the party, or parties, that have a $0$ input so as to make that party, or parties, unable to distinguish (between two of) the settings.
For simplicity, we first assume that the algorithm is deterministic and we also ignore the rounds where both parties listen. Note that by the definition of the model for adaptive algorithms (see~\Cref{sec:settingDefinitions}), in these rounds, the adversary can deliver arbitrary messages to the parties at no cost.
For this lower bound, we need to define the party that becomes the base of indistinguishability (whom we confuse by errors) in a more dynamic way, compared to that in \Cref{lem:xor13impossible} or in \cite[Claim 10]{BR11}. For this purpose, we first study the parties that have input $0$ under a particular pattern of received messages (regardless of the setting in which they are), without considering whether the adversary has enough budget to create this pattern or not. Later, we argue that the adversary indeed has enough budget to create this pattern for at least one party and make that party confused.
To determine what should be delivered to each party with input $0$, the adversary cultivates \emph{dummy personalities} $\widetilde{Alice}_0$, $\widetilde{Alice}_1$, $\widetilde{Bob}_0$, $\widetilde{Bob}_1$, by simulating Alice or Bob respectively in settings $S_{0,0}$, $S_{1,0}$, $S_{0,0}$, and $S_{0,1}$, where each of these settings is modified by adversarial interferences (to be specified). Later, when we say that in a given round, e.g., ``the adversary \emph{connects} dummy personality $\widetilde{Bob}_1$ to Alice'', we mean that the adversary delivers the transmission of $\widetilde{Bob}_1$ in that round to Alice\footnote{This assumes that $\widetilde{Bob}_1$ transmits in that round; we discuss the case where both Alice and $\widetilde{Bob}_1$ listen later.}. For each setting, the adversary uses one method of interference, and thus, when we refer to a setting, we always mean the setting with the related adversarial interferences included.
We now explain the said pattern of received messages. Suppose that Alice has input $0$ and consider her in settings $S_{0, 0}$ and $S_{0,1}$, as a priori these two settings are identical to Alice. Using connections to \emph{dummy personalities}, the adversary creates the following pattern: In the first $2N/7$ rounds in which Alice listens alone, her receptions will be made to imply that Bob also has a $0$. This happens with no adversarial interference in setting $S_{0,0}$, but it is enforced to happen in setting $S_{0,1}$ by the adversary via connecting to Alice the dummy personality $\widetilde{Bob}_0$ cultivated in setting $S_{0,0}$. Thus, at the end of those $2N/7$ listening-alone rounds of Alice, the two settings are indistinguishable to Alice. In the rest of the rounds where Alice listens alone, the receptions will be made to look as if Bob has a $1$. That is, the adversary leaves those rounds of setting $S_{0,1}$ intact, but in rounds of setting $S_{0,0}$ in which Alice listens alone, the adversary connects to Alice the dummy personality $\widetilde{Bob}_1$ cultivated in setting $S_{0,1}$ (with the adversarial behavior described above).
The adversary creates a similar pattern of receptions for Bob when he has an input $0$, in settings $S_{0,0}$ and $S_{1,0}$. That is, the first $2N/7$ of his alone receptions are made to imply that Alice has a $0$ but the later alone-receptions imply that Alice has a $1$.
The described reception patterns make Alice unable to distinguish $S_{0,0}$ from $S_{0,1}$, and they make Bob unable to distinguish $S_{0,0}$ from $S_{1,0}$. However, the above discussion ignores the adversary's budget. We now argue that the adversary indeed has enough budget to create this reception pattern and confuse one or both of the parties.
Let $x_A$ be the total number of rounds in which Alice listens when she has input $0$ and her receptions follow the above pattern. Similarly, define $x_B$ for Bob. If $x_A \leq 4N/7$, then the adversary indeed has enough budget to make the receptions of Alice in settings $S_{0,0}$ and $S_{0,1}$ follow the discussed behavior, where the first $2N/7$ alone-receptions of Alice are altered in $S_{0,1}$ and the remaining at most $2N/7$ alone-receptions are altered in $S_{0,0}$. Thus, if $x_A \leq 4N/7$, the adversary has a legitimate strategy to make Alice confused between $S_{0,0}$ and $S_{0,1}$. A similar statement holds for Bob: if $x_B \leq 4N/7$, the adversary has a legitimate strategy to make Bob confused between $S_{0,0}$ and $S_{1,0}$.
Now suppose that $x_A> 4N/7$ and $x_B >4N/7$. In this case, the number of alone-receptions of Alice is at most $x_A - (x_A+x_B-N) = N - x_B \leq 3N/7$ and similarly, the number of alone-receptions of Bob is at most $x_B - (x_A+x_B-N) = N - x_A \leq 3N/7$. This is because $x_A+x_B-N$ is a lower bound on the number of rounds in which both parties listen. In this case, the adversary has enough budget to simultaneously confuse both Alice and Bob in setting $S_{0,0}$; Alice will be confused between $S_{0,0}$ and $S_{0,1}$ and Bob will be confused between $S_{0,0}$ and $S_{1,0}$. For this purpose, in setting $S_{0,0}$, the adversary leaves the first $2N/7$ alone-receptions of each party intact but alters the remaining at most $N/7$ alone-receptions of each party, for a total of at most $2N/7$ errors. On the other hand, in setting $S_{0,1}$, only $2N/7$ errors are used on the first $2N/7$ alone-receptions of Alice and similarly, in setting $S_{1,0}$, only $2N/7$ errors are used on the first $2N/7$ alone-receptions of Bob. Note that these errors make the receptions of each party that has input $0$ follow the pattern explained above.
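The counting step above (alone-receptions of Alice number at most $N - x_B$, hence below $3N/7$ once both $x_A, x_B > 4N/7$) can be verified exhaustively for a small number of rounds. This is an illustrative check, not part of the proof; the helper name is ours.

```python
from itertools import product

def alone_count(listen_self, listen_other):
    """Rounds in which the first party listens while the other does not."""
    return sum(1 for a, b in zip(listen_self, listen_other) if a and not b)

N = 7  # one round per N/7 "block"; exhaustive over all listen patterns
for la, lb in product(product([0, 1], repeat=N), repeat=2):
    xa, xb = sum(la), sum(lb)
    # alone-listens of Alice = x_A - overlap <= x_A - (x_A + x_B - N) = N - x_B
    assert alone_count(la, lb) <= N - xb
    # hence if both x_A, x_B > 4N/7, each party listens alone < 3N/7 times
    if 7 * xa > 4 * N and 7 * xb > 4 * N:
        assert 7 * alone_count(la, lb) < 3 * N
        assert 7 * alone_count(lb, la) < 3 * N
```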
We now go back to the issue of the rounds where both parties listen. The rounds of $S_{0,0}$ in which both parties listen are treated as follows: The adversary delivers the transmission of $\widetilde{Bob}_1$ (cultivated in setting $S_{0, 1}$) to Alice and delivers the transmission of $\widetilde{Alice}_1$ (cultivated in setting $S_{1, 0}$) to Bob. Recall that the adversary does not pay for these interferences. Furthermore, note that these connections make sure that these all-listening rounds do not help Alice to distinguish $S_{0,0}$ from $S_{0,1}$ and also they do not help Bob to distinguish $S_{0,0}$ from $S_{1,0}$.
Finally, we turn to covering randomized algorithms. Note that in this case we only show that the failure probability of the algorithm is not $o(1)$, as just by guessing randomly, the two parties can achieve a success probability of $1/4$.
First suppose that $\Pr[x_A \leq 4N/7] \geq 1/3$. Note that the adversary can easily compute this probability, or even more simply, obtain a $(1+o(1))$-factor estimate of it. If $\Pr[x_A \leq 4N/7] \geq 1/3$, then the adversary gambles on the event $x_A \leq 4N/7$, and thus tries to confuse Alice. In particular, he gives Alice input $0$, tosses a coin, and gives Bob a $0$ or a $1$ accordingly. Depending on whether Bob gets input $0$ or $1$, the adversary uses the dummy personality $\widetilde{Bob}_1$ or $\widetilde{Bob}_0$, respectively. With probability $1/3$, we will in fact have $x_A\leq 4N/7$; in this case, by determining whether Alice hears from the real Bob or the dummy Bob, the adversary makes Alice receive messages with the pattern described above. This means Alice cannot know whether Bob has a $0$ or a $1$. Hence, the algorithm fails with probability at least $1/6$ (Alice can still guess in this case, and the guess is correct with probability $1/2$).
Similarly, if $\Pr[x_B \leq 4N/7] \geq 1/3$, then the adversary makes Bob confused between $S_{0,0}$ and $S_{1,0}$. On the other hand, if $\Pr[x_A \leq 4N/7] < 1/3$ and $\Pr[x_B \leq 4N/7] < 1/3$, then by a union bound we have $\Pr[x_A> 4N/7 \text{ and } x_B >4N/7] \geq 1/3$. In this case, the adversary gambles on the event that $x_A> 4N/7$ and $x_B >4N/7$. This event happens with probability at least $1/3$, and in that case, the adversary makes Alice confused between $S_{0,0}$ and $S_{0,1}$ and Bob confused between $S_{0,0}$ and $S_{1,0}$, simultaneously, using the approach described above. Hence, in all cases regarding the random variables $x_A$ and $x_B$, the adversary can make the algorithm fail with probability at least $1/6$.
\end{proof}
\section{Further Results Supporting the Invariability Hypothesis}\label{sec:appendix:furtherresultssupportingthehypothesis}
In this section we mention several results that further support the Invariability Hypothesis.
We first remark that the lower bounds in this paper are, in all settings and for all properties, already as strong as required by the hypothesis. Our positive results, as summarized in \Cref{cor:weakhyp}, furthermore show that the Invariability Hypothesis holds if one allows either a large alphabet or a quadratic number of rounds. Both assumptions lead to the communication rate being $O(1/n)$. Next we list several results which show that the Invariability Hypothesis provably extends beyond these low-rate settings:
\begin{enumerate}
\item Subsequent results in \cite{GH13} show that:\\
\emph{The IH is true for all eight settings when allowing randomized algorithms with exponentially small failure probability and a round blowup of $(\log^* n)^{O(\log^* n)}$.}
\item Subsequent results in \cite{GH13} show that:\\
\emph{The Invariability Hypothesis is true for all eight settings if one removes points 5. and 6., that is, when one can use randomized protocols that can generate private and public encryption keys which the adversary cannot break.}
\item A subsequent result in \cite{GH13} gives an efficient randomized coding scheme for non-adaptive unique decoding which tolerates the optimal $1/4 - \epsilon$ error rate. This improves over the coding scheme of Brakerski-Kalai \cite{BK12} which requires an error rate below $1/16$. In different terms this result shows that:\\
\emph{The Invariability Hypothesis is true for unique decoding if one weakens point 5. to allow efficient randomized algorithms with exponentially small failure probability.}
\item The result of Braverman and Rao \cite{BR11} shows that:\\
\emph{The Invariability Hypothesis is true for non-adaptive unique decoding if one removes point 4. (which requires protocols to be computationally efficient).}
\item A subsequent result of the authors shows that the $1/10 - \epsilon$ distance parameter of the tree code construction in \cite{Braverman12} can be improved to $1 - \epsilon$, which shows that:\\
\emph{The Invariability Hypothesis is true for unique decoding if one weakens point 4. to allow deterministic sub-exponential time computations.}
\item The improved tree code construction mentioned in point 5. can also be used together with the analysis of Franklin et al. \cite{FGOS13} to show that:\\
\emph{The Invariability Hypothesis is true for non-adaptive unique decoding with shared randomness if one weakens point 4. to allow deterministic sub-exponential time computations.}
\item The improved tree code construction mentioned in point 5. can also be used together with the ideas of \Cref{thm:protshard23} to show that:\\
\emph{The Invariability Hypothesis is true for adaptive unique decoding with shared randomness if one weakens point 4. to allow deterministic sub-exponential time computations.}
\end{enumerate}
Lastly, we remark that the $1/6 - \epsilon$ tolerable error rate of the knowledge preserving coding schemes given by Chung et al. \cite{KnowledgePreservingIC} can be improved to the optimal $1/4 - \epsilon$ tolerable error rate of the unique decoding setting. The communication blowup for knowledge preserving protocols is however inherently super constant.
\section{Impossibility Results for List Decodable Interactive Coding Schemes}\label{sec:appendix:furtherlowerbounds}
In this section we prove that list decodable interactive coding schemes are not possible beyond an error rate of $1/2$. This holds even if adaptivity or shared randomness is allowed (but not both). We remark that for both results we prove a lower bound for the $n$-bit Exchange Problem as list decoding with a constant list size becomes trivial for any protocol with only constantly many different inputs.
The intuitive reason why shared randomness does not help in non-adaptive protocols to go beyond an error rate of $1/2$ is because the adversary can completely corrupt the party that talks less:
\begin{lemma}\label{lem:lowerboundLDsharedrandomness}
There is no (deterministic or randomized) list decodable non-adaptive protocol that robustly simulates the $n$-bit Exchange Protocol for an error rate of $1/2$ with an $o(1)$ failure probability and a list size of $2^{n-1}$ regardless of whether the protocols have access to shared randomness and regardless of whether computationally unbounded protocols that use an arbitrarily large number of rounds and an unbounded alphabet are allowed.
\end{lemma}
\begin{proof}
We recall that non-adaptive protocols specify a sender and a receiver for every round in advance, that is, independently of the history of communication. We remark that the proof that follows continues to hold if these decisions are based on the shared randomness of the protocols.
The adversary's strategy is simple: It gives both Alice and Bob random inputs, then randomly picks one of them, and blocks all symbols sent by this party by replacing them with a fixed symbol from the communication alphabet. With probability at least $1/2$, the randomly chosen player speaks at most half the time, and so the fraction of errors introduced is at most $1/2$. On the other hand, no information about the blocked player's input is revealed to the other player, and so the other player cannot narrow down the list of possibilities in any way. This means that even when allowed a list size of $2^{n-1}$, there is a probability of $1/2$ that the list does not include the randomly chosen input. This results in a failure probability of at least $1/4$.
\end{proof}
A $1/2$ impossibility result also holds for list decodable coding schemes that are adaptive:
\begin{lemma}\label{lem:lowerboundLDadaptive}
There is no (deterministic or randomized) list decodable adaptive protocol that robustly simulates the $n$-bit Exchange Protocol for an error rate of $1/2$ with an $o(1)$ failure probability and a list size of $2^{n-1}$ even when allowing computationally unbounded protocols that use an arbitrarily large number of rounds and an unbounded alphabet.
\end{lemma}
To show that adaptivity is not helpful, one could try to prove that the adversary can imitate one party completely without being detected by the other party. This, however, is not possible with an error rate of $1/2$, because both parties could in principle listen for more than half of the rounds if no error occurs and use these rounds to alert the other party if tampering is detected. Our proof of \Cref{lem:lowerboundLDadaptive} shows that this ``alert'' strategy cannot work. In fact, we argue that it is counterproductive for Alice to have such a hypothetical ``alert mode'' in which she sends in more than half of the rounds. The reason is that the adversary could intentionally trigger this behavior while corrupting less than half of the rounds (since Alice is sending alerts and not listening during most rounds). The adversary can furthermore do this regardless of the input of Bob, which makes it impossible for Alice to decode Bob's input. This guarantees that Alice never sends in more than half of the rounds, and the adversary can therefore simply corrupt all her transmissions. In this case Bob will not be able to learn anything about Alice's input.
\begin{proof}
Suppose there exists a protocol that robustly simulates the $n$-bit Exchange Protocol for an error rate of $1/2$ using $N$ rounds over an alphabet $\Sigma$. We consider pairs of the form $(x,\vec r)$ where $x \in \{0,1\}^n$ is an input to Alice and $\vec r = (r_1,r_2,\ldots,r_N) \in \Sigma^N$ is a string over the channel alphabet with one symbol for each round. We now look at the following hypothetical communication between Alice and the adversary: Alice gets input $x$ and samples her private randomness. In each round she then decides to send or listen. If she listens in round $i$ she receives the symbol $r_i$. For every pair $(x,\vec r)$ we now define $p(x,\vec r)$ to be the probability, taken over the private randomness of Alice, that in this communication Alice sends in at least $N/2$ rounds (and conversely listens in at most $N/2$ rounds). The adversary's strategy now makes the following case distinction:

If there is one pair $(x,\vec r)$ for which $p(x,\vec r)\geq 1/2$, then the adversary picks a random input for Bob, gives Alice input $x$, and during the protocol corrupts any symbol Alice listens to according to $\vec r$. By the definition of $p(x,\vec r)$, there is a probability of at least $1/2$ that the adversary can do this without using more than $N/2$ corruptions. In such an execution, Alice furthermore has no information on Bob's input, and even when allowed a list size of $2^{n-1}$, she has at most a probability of $1/2$ of including Bob's input in her decoding list. Therefore Alice fails to list decode correctly with probability at least $1/4$.

If, on the other hand, $p(x,\vec r)< 1/2$ holds for every pair $(x,\vec r)$, then the adversary picks a random input for Alice and an arbitrary input for Bob, and during the protocol corrupts any symbol Alice sends to a fixed symbol $\sigma \in \Sigma$. Furthermore, in a round in which both Alice and Bob listen, it delivers the same symbol $\sigma$ to Bob. By the definition of $p(x,\vec r)$, there is a probability of at least $1/2$ that the adversary can do this without using more than $N/2$ corruptions. In such an execution, Bob has no information on Alice's input, and even when allowed a list size of $2^{n-1}$, he therefore has at most a probability of $1/2$ of including Alice's input in his decoding list. This leads to a failure probability of at least $1/4$ as well.
\end{proof}
\section{Impossibility Results for Solving the Exchange Problem Adaptively}\label{sec:Impossibilities}
In this appendix we provide the proofs we deferred in \Cref{sec:exchange}.
For completeness and to also cover randomized protocols we first reprove Claim 10 of \cite{BR11} which states that no non-adaptive uniquely decoding protocol can solve the exchange problem for an error rate of $1/4$:
\begin{lemma}[\textbf{Claim 10 of \cite{BR11}}] \label{lem:BRLB4}
There is no (deterministic or randomized) protocol that robustly simulates the Exchange Protocol for an error rate of 1/4 with an $o(1)$ failure probability even when allowing computationally unbounded protocols that use an arbitrarily large number of rounds and an unbounded alphabet.
\end{lemma}
\begin{proof}
Suppose that there is an algorithm $\mathcal{A}$ with no rate adaptation that solves the exchange problem under adversarial error-rate $1/4$, in $N$ rounds. We work simply with $1$-bit inputs. Let $S_{X,Y}$ denote the setting where Alice receives input $X$ and Bob gets input $Y$. For simplicity, we first ignore the rounds in which both parties listen; note that in those rounds the adversary can deliver arbitrary messages to each of the parties at no cost.
First we explain the adversary's strategy for setting $S_{0,1}$: Consider the executions of $\mathcal{A}$ in setting $S_{0,0}$ with no interference from the adversary. Without loss of generality, we can assume that in this execution, with probability at least $1/2$, Alice listens in at most $1/2$ of the rounds. Noting the restriction that $\mathcal{A}$ has no rate-adaptivity, we get that in setting $S_{0,0}$, regardless of the adversary's interferences, with probability at least $1/2$, Alice listens alone in at most $1/2$ of the rounds. The adversary generates a dummy personality $\widetilde{Bob}_0$ by simulating algorithm $\mathcal{A}$ on Bob (and Alice) in setting $S_{0,0}$ with no interferences. This dummy personality is then used in setting $S_{0,1}$. Note that at the start, only Bob can distinguish $S_{0,1}$ from $S_{0,0}$. For the first $N/4$ times that Alice listens alone in $S_{0,1}$, the adversary connects Alice to the dummy $\widetilde{Bob}_0$, that is, Alice receives the transmissions of $\widetilde{Bob}_0$. Thus, up to the point that Alice has listened alone for $N/4$ rounds, Alice receives inputs (with distribution) exactly as if she were in setting $S_{0,0}$ with the real Bob, and hence she cannot distinguish this setting from the setting $S_{0,0}$ with no adversarial interference. After this point, the adversary lets Alice talk freely with Bob with no interference.
We now explain the adversary's strategy for setting $S_{0,0}$: The adversary generates another dummy personality $\widetilde{Bob}_1$ by simulating algorithm $\mathcal{A}$ on Bob (and Alice) in setting $S_{0,1}$ where the first $N/4$ listening-alone rounds of Alice are connected to $\widetilde{Bob}_0$---that is, exactly the situation that the real Bob experiences in setting $S_{0,1}$. For the first $N/4$ rounds of setting $S_{0,0}$ in which Alice listens, the adversary does not interfere in the communications. After that, for the next $N/4$ rounds in which Alice listens, the adversary delivers the transmissions of the dummy personality $\widetilde{Bob}_1$ to Alice.
To conclude the argument, the adversary gives Bob a random input $y\in\{0,1\}$ and gambles on Alice listening alone in at most $1/2$ of the rounds. The adversary also uses the dummy personality $\widetilde{Bob}_i$ for $i=1-y$, and when Alice listens alone, the adversary connects Alice to the real Bob or to this dummy personality according to the rules explained above. With probability at least $1/2$, Alice indeed listens in at most $N/2$ rounds. If this happens, due to the adversarial errors, the two settings look identical to Alice (more formally, she observes the same probability distributions for her receptions) and she cannot distinguish them from each other. This means that algorithm $\mathcal{A}$ has a failure probability of at least $1/4$ (Alice can still guess $y$, but the guess is incorrect with probability $1/2$).
We finally explain the adversary's rules for treating the rounds where both parties listen: In setting $S_{0,0}$, if both Alice and the real Bob are listening, Alice is connected to $\widetilde{Bob}_1$ at no cost. Similarly, in setting $S_{0,1}$, if both Alice and the real Bob are listening, then Alice is connected to $\widetilde{Bob}_0$ at no cost. To make sure that this definition is not circular, if in a round $r$ both parties are listening in both settings, then the adversary delivers a $0$ to Alice in both settings. Note that under these rules, the behavior of the dummy personalities $\widetilde{Bob}_0$ and $\widetilde{Bob}_1$ is defined recursively based on the round number; for example, the simulation that generates the behavior of $\widetilde{Bob}_0$ in round $r$ might use the behavior of $\widetilde{Bob}_1$ in round $r-1$. Because of these rules, we get that in each round that Alice listens (at least until Alice has had $N/2$ listening-alone rounds), the messages that she receives have the same probability distribution in the two settings, and thus the two settings look indistinguishable to Alice. If the execution is such that Alice listens alone in at most $N/2$ rounds, which happens with probability at least $1/2$, the algorithm is bound to fail with probability at least $1/2$ in this case. This means that algorithm $\mathcal{A}$ fails with probability at least $1/4$.
\end{proof}
Next we give the proof of \Cref{lem:xor13impossible}, which shows that no adaptive protocol can robustly simulate the Exchange Protocol for an error rate of $1/3$:
\begin{proof}[Proof of \Cref{lem:xor13impossible}]
We first explain the adversary's strategy and then explain why this strategy leaves at least one of the parties unable to determine the input of the other party with probability greater than $1/2$.
Suppose that there is an algorithm $\mathcal{A}$ that solves the exchange problem under adversarial error-rate $1/3$, in $N$ rounds. We work simply with $1$-bit inputs. Let $S_{X,Y}$ denote the setting where Alice receives input $X$ and Bob gets input $Y$. For simplicity, we first ignore the rounds in which both parties listen; note that in those rounds the adversary can deliver arbitrary messages to each of the parties at no cost.
First we explain the adversary's strategy for setting $S_{0,1}$: Consider setting $S_{0,0}$ and suppose that for the first $2N/3$ rounds in this setting, the adversary does not interfere. Without loss of generality, we can assume that with probability at least $1/2$, Alice listens alone in less than $N/3$ of these rounds. The adversary creates a dummy personality $\widetilde{Bob}_0$ and simulates it with Alice in this $S_{0,0}$ setting. In the first $2N/3$ rounds of setting $S_{0,1}$, whenever Alice listens alone, the adversary connects Alice to $\widetilde{Bob}_0$, that is, Alice receives the transmission of $\widetilde{Bob}_0$. With probability at least $1/2$ over the randomness of Alice and $\widetilde{Bob}_0$, Alice listens less than $N/3$ of the time, which means that the adversary has enough budget to completely fake Bob as $\widetilde{Bob}_0$ (from Alice's viewpoint). In that case, the two settings look identical to Alice for the first $2N/3$ rounds. During the last $N/3$ rounds of the execution in setting $S_{0,1}$, the adversary lets Alice and the real Bob talk without any interference.
We now explain the adversary's strategy for setting $S_{0,0}$: The adversary generates another dummy personality $\widetilde{Bob}_1$ by simulating Bob in setting $S_{0,1}$, where Alice's alone-listening rounds in the first $2N/3$ rounds are connected to $\widetilde{Bob}_0$. In setting $S_{0,0}$, the adversary lets Alice and Bob talk freely during the first $2N/3$ rounds, but for the last $N/3$ rounds, whenever Alice listens (even if not alone), the adversary connects her to $\widetilde{Bob}_1$.
To conclude, the adversary gives a random input $y\in\{0,1\}$ to Bob and gambles on Alice listening alone in less than $N/3$ of the first $2N/3$ rounds. The adversary also uses the dummy personality $\widetilde{Bob}_i$ for $i=1-y$ and, when Alice listens alone, connects Alice to the real Bob or to this dummy personality according to the rules explained above. We know that with probability at least $1/2$, Alice listens alone in less than $N/3$ of the first $2N/3$ rounds. If that happens, the adversary can follow the strategy explained above with at most $N/3$ errors. Therefore, with probability $1/2$, Alice cannot determine Bob's input, and the algorithm thus fails with probability at least $1/4$.
Regarding the rounds where both parties listen, the rule is similar to \Cref{lem:BRLB4} but a little more subtle because of the rate adaptivity of algorithm $\mathcal{A}$: We need to specify what the parties receive when, in the first $2N/3$ rounds of setting $S_{0,0}$, both Alice and Bob are listening. However, at that point it is not yet clear whether we will make an indistinguishability argument for Alice or for Bob; this depends on which of them listens alone less often, which can in turn depend on the receptions during the all-listening rounds of the first $2N/3$ rounds. The simple remedy is to create analogous situations for Alice and Bob, so that we can choose for which party to argue indistinguishability only at the end of the first $2N/3$ rounds. The adversary generates dummy personalities $\widetilde{Alice}_0$ and $\widetilde{Alice}_1$, in settings $S_{0,0}$ and $S_{1,0}$ respectively, analogous to those of Bob. In each all-listening round of the first $2N/3$ rounds, the adversary makes Alice receive the transmission of $\widetilde{Bob}_1$ and Bob receive the transmission of $\widetilde{Alice}_1$. With these connections, whichever party listens alone less often in the first $2N/3$ rounds ---assumed to be Alice without loss of generality in the discussion above--- will, with constant probability, receive messages with exactly the same probability distribution in each round in the two settings. Thus she will not be able to distinguish the two settings.
\end{proof}
\section{Overview} \label{sec:overview}
In this section we state our results and explain the high level ideas and insights behind them.
\subsection{Adaptivity}
A major contribution of this paper is to show that adaptive protocols can tolerate more than the $1/4$ error rate of the non-adaptive setting:
\begin{theorem}\label{thm:27upperboundoverview}
Suppose $\Pi$ is an $n$-round protocol over a constant bit-size alphabet. For any $\epsilon >0$, there is a deterministic computationally efficient protocol $\Pi'$ that robustly simulates $\Pi$ for an error rate of $2/7 - \epsilon$ using $O(n^2)$ rounds and an $O(1)$-bit size alphabet.
\end{theorem}
The proof is presented in \Cref{sec:AlgLargeRounds}. Furthermore, in \Cref{subsec:xor13impossible}, we show a matching impossibility result which applies even to the arguably simplest interactive protocol, namely, the \emph{Exchange Protocol}. In the Exchange Protocol each party gets an input---a single bit in our impossibility results---and sends this input to the other party.
\begin{theorem}\label{thm:27lowerboundoverview}
There is no (deterministic or randomized) protocol that robustly simulates the Exchange Protocol for an error rate of $2/7$ with an $o(1)$ failure probability even when allowing computationally unbounded protocols that use an arbitrarily large number of rounds and an unbounded alphabet.
\end{theorem}
\paragraph{Why Adaptivity is Natural and Helpful}\label{sec:whyadaptivityhelps}
Next, we explain why it should not be surprising that adaptivity leads to a higher robustness. We also give some insights for why the $2/7$ error rate is the natural tolerable error for adaptive protocols.
It is helpful to first understand why the $1/4$ error rate was thought of as a natural barrier. The intuitive argument, presented in \cite{BR11}, for why one should not be able to cope with an error rate of $1/4$ is as follows: During any $N$ round interaction one of the parties, w.l.o.g. Alice, is the designated sender for at most half of the rounds. With an error rate of $1/4$ the adversary can corrupt half of the symbols Alice sends out. This makes it impossible for Alice to (reliably) deliver even a single input bit $x$ because the adversary can always make the first half of her transmissions consistent with $x=0$ and the second half with $x=1$ without Bob being able to know which of the two is real and which one is corrupted.
While this intuition is quite convincing at first glance, it silently assumes that it is a priori clear which of the two parties transmits less often. This in turn essentially only holds for non-adaptive protocols, for which the above argument can also be made into a formal negative result \cite[Claim 10]{BR11}. On the other hand, we show that assuming this a priori knowledge is not just a minor technical assumption but indeed a highly nontrivial restriction which is violated in many natural settings of interaction. For example, imagine a telephone conversation on a connection that is bad/noisy in one direction. One person, say Alice, clearly understands Bob, while whatever Alice says contains so much noise that Bob has a hard time understanding it. In a non-adaptive conversation, Bob would continue to talk half of the time (even though he has nothing to say given the lack of understandable responses from Alice) while Alice continues to talk little enough that she can be completely out-noised. This is of course not how it would go in real life. There, Bob would listen more in trying to understand Alice and, by doing this, give Alice the opportunity to talk more. Of course, as soon as this happens, the adversary cannot completely out-noise Alice anymore and the conversation will be able to progress. In fact, similar \emph{dynamic rate adaptation} mechanisms that adapt the bitrate of a sender to channel conditions and the communication needs of other senders are common in many systems, one prominent example being IEEE 802.11 wireless networks.
Even if one is convinced that adaptive algorithms should be able to beat the $1/4$ error rate, it is less clear at this point what the maximum tolerable error rate should be. In particular, $2/7$ seems like a quite peculiar bound. Next, without going into details of the proofs, we want to give at least some insight why $2/7$ is arguably the right and natural error rate for the adaptive setting.
We first give an intuitive argument why even adaptive protocols cannot deal with an error rate of $1/3$. For this, the adversary runs the same strategy as above, which concentrates all errors on one party only. In particular, given a $3N$-round conversation and a budget of $N$ corruptions, the adversary picks one party, say Alice, and makes her first $N$ transmissions sound as if her input is $x = 1$. The next $N$ transmissions are made to sound like Alice has input $x = 0$. During the first $N$ transmissions, regardless of whether $x=1$ (resulting in Alice talking herself) or $x=0$ (resulting in the adversary imitating the same transmissions), the whole conversation sounds legitimate. This prevents any rate adaptation, in this case on Bob's side, from kicking in before $2N$ rounds of back and forth have passed. Only then does it become apparent to the receiver of the corruptions, in this case Bob, that the adversary is trying to fool him. Knowing that the adversary will only try to fool one party, Bob can then stop talking and listen to Alice for the rest of the conversation. Still, even if Bob listens exclusively from this point on, there are only $N$ rounds left, which is just enough for all of them to be corrupted. Having received $N$ transmissions from Alice claiming $x=1$ and equally many claiming $x=0$, Bob is again left puzzled. This essentially proves the impossibility of tolerating an error rate of $1/3$. But even error rates approaching $1/3$ are not achievable. To explain why a lower fraction of errors, namely $2/7$, already leads to a negative result, we remark that the radical immediate back-off we just assumed for Bob is not possible. The reason is that if both parties are so sensitive and radical in their adjustment, the adversary can fool both parties simultaneously by simply corrupting a few transmissions of both parties after round $2N$. This would lead to both parties assuming that the transmissions of the other side are being corrupted. The result would be both parties staying silent simultaneously, which wastes valuable communication rounds.
Choosing the optimal tradeoff for how swiftly and strongly protocols can adaptively react without falling into this trap naturally leads to an error rate between $1/4$ and $1/3$, and what rate in this range could be more natural than the mediant $2/7$?
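The mediants mentioned here (and in the shared randomness setting below) are easy to verify; a minimal Python check using exact rational arithmetic:

```python
from fractions import Fraction

def mediant(p: Fraction, q: Fraction) -> Fraction:
    """Mediant of p = a/b and q = c/d (in lowest terms) is (a+c)/(b+d)."""
    return Fraction(p.numerator + q.numerator, p.denominator + q.denominator)

# 2/7 sits between the non-adaptive rate 1/4 and the hypothetical 1/3.
assert mediant(Fraction(1, 4), Fraction(1, 3)) == Fraction(2, 7)

# With shared randomness, 2/3 is likewise the mediant of 1/2 and 1.
assert mediant(Fraction(1, 2), Fraction(1, 1)) == Fraction(2, 3)
```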
\paragraph{Other Settings}
We also give results on other settings that have been suggested in the literature, in particular, list decoding and the shared randomness setting of \cite{FGOS13}. We briefly describe these results next.
Franklin et al. \cite{FGOS13} showed that if both parties share some random string not known to the adversary, then non-adaptive protocols can boost the tolerable error rate from $1/4$ to $1/2$. We show that also in this setting adaptivity helps to increase the tolerable error rate. In particular, in \Cref{sec:sharedrand}, we prove that an error rate of $2/3 - \epsilon$ is achievable and best possible\footnote{It is interesting to note that similarly to the $2/7$ bound (see \Cref{sec:whyadaptivityhelps}), $2/3$ is the mediant between $1/2$ and $1$, that is, the mediant between the error rate for non-adaptive protocols and the hypothetical error rate of immediately reacting/adapting protocols.}:
\begin{theorem}\label{thm:23boundsoverview}
In the shared randomness setting of \cite{FGOS13}, there exists an efficient robust coding scheme for an error rate of $2/3 - \epsilon$, while no such scheme exists for an error rate of $2/3$. That is, the equivalents of \Cref{thm:27upperboundoverview} and \Cref{thm:27lowerboundoverview} hold for an error rate of $2/3$. The number of rounds of the robust coding scheme can furthermore be reduced to $O(n)$ if one allows exponential time computations.
\end{theorem}
We also give the first results for list decodable coding schemes (see \Cref{sec:settingDefinitions} for their definition). The notion of list decodability has been a somewhat recent but already widely successful addition to the study of error correcting codes. It is known that for error correcting codes such a relaxation leads to being able to efficiently \cite{SudanListDecoding} tolerate any constant error rate below $1$, which is a factor of two higher than the $1/2 - \epsilon$ error rate achievable with unique decoding. It has been an open question whether list decoding can also lead to higher tolerable error rates in interactive coding schemes (see \cite[Open Problem 9]{BravermanAllerton} and \cite[Conclusion]{BR11}). We show that this is indeed the case. In particular, for the non-adaptive setting the full factor of two improvement can also be realized in the interactive setting:
\begin{theorem}\label{thm:12upperboundlistdecodingoverview}
Suppose $\Pi$ is an $n$-round protocol over a constant bit-size alphabet. For any $\epsilon >0$ there is an $O(1)$-list decodable, non-adaptive, deterministic, and computationally efficient protocol $\Pi'$ that robustly simulates $\Pi$ for an error rate of $1/2 - \epsilon$ using $O(n^2)$ rounds and an $O(1)$-bit size alphabet.
\end{theorem}
The proof of this theorem is presented in \Cref{sec:AlgLargeRounds}. We also show that the $1/2 - \epsilon$ error rate is best possible even for adaptive coding schemes. That is, no adaptive or non-adaptive coding scheme can achieve an error rate of $1/2$. We prove these impossibility results formally in \Cref{sec:appendix:furtherlowerbounds}.
Taken together, our results provide tight negative and matching positive results for any of the eight interactive coding settings given by the three Boolean attributes, \{unique decoding / list decoding\}, \{adaptive / non-adaptive\}, and \{without shared randomness / with shared randomness\} (at least when allowing a linear size alphabet or quadratic number of rounds in the simulation). \Cref{table:bounds} shows the maximum tolerable error rate for each of these settings:\\
\begin{table}[h!]
\begin{tabular}{l|c|c|c|c|}
~ & unique dec. (UD) & UD \& shared rand. & list dec. (LD) & LD \& shared rand. \\ \hline
Non-adaptive & 1/4 \ \cite{BR11} & 1/2 \ \cite{FGOS13} & 1/2 & 1/2 \\ \hline
Adaptive & 2/7 & 2/3 & 1/2 & 2/3 \\ \hline
\end{tabular}
\caption{Unless marked with a citation, all results in this table are new and presented in this paper. Matching positive and negative results for each setting show that the error rates are tight. Even more, the error rates are invariant under assumptions on the communication and computational complexity of the coding scheme and the adversary (see \Cref{hypthesis}, \Cref{cor:weakhyp}, and \Cref{sec:appendix:furtherresultssupportingthehypothesis}).} \label{table:bounds}
\end{table}
\subsection{Invariability Hypothesis: A Path to Natural Interactive Coding Schemes}\label{sec:largerpicture}
In this section, we take a step back and propose a general way to understand the tolerable error rates specific to each setting and to design interactive coding schemes achieving them. We first formulate a strong working hypothesis which postulates that tolerable error rates are invariable regardless of what communication and computational resources are given to the protocol and to the adversary. We then use this hypothesis to determine the tight tolerable error rate for any setting by looking at the simplest setup. Finally, we show how clean insights coming from these simpler setups can lead to designs for intuitive, natural, and easily analyzable interactive coding schemes for the more general setup.
\paragraph{Invariability Hypothesis}
In this section we formulate our invariability hypothesis.
Surveying the literature for what error rates could be tolerated by different interactive coding schemes, the maximum tolerable error rate appears to vary widely depending on the setting and more importantly, depending on what kind of efficiency one strives for. For example, even for the standard setting---that is, for non-adaptive unique decoding coding schemes using a large alphabet---the following error rates apply: for unbounded (or exponential time) computations, Schulman \cite{Schulman} tolerates a $1/240$ error rate; Braverman and Rao \cite{BR11} improved this to $1/4$; for sub-exponential time computations, Braverman \cite{Braverman12} gave a scheme working for any error rate below $1/40$; for randomized polynomial time algorithms, Brakerski and Kalai \cite{BK12} got an error rate of $1/16$; for randomized linear time computations, Brakerski and Naor \cite{BN13} obtained an unspecified constant error rate smaller than $1/32$; lastly, assuming polynomially bounded protocols and adversaries and using a super-linear number of rounds, Chung et al.~\cite{KnowledgePreservingIC} gave coding schemes tolerating an error rate of $1/6$ (with additional desirable properties).
We believe that this variety is an artifact of the current state of knowledge rather than the essential truth. In fact, it appears that any setting comes with exactly one tolerable error rate, which furthermore exhibits a strong threshold behavior: For any setting, there seems to be one error rate for which communication is impossible regardless of the resources available, while for error rates only minimally below it simple and efficient coding schemes exist. In short, the tolerable error rate for a setting seems robust and completely independent of any communication resource or computational complexity restrictions imposed on the protocols or on the adversary.
Taking this observation as a serious working hypothesis was a driving force in obtaining, understanding, and structuring the results obtained in this work. As we will show, it helped to identify the simplest setup for determining the tolerable error rate of a setting, served as a good pointer to open questions, and helped in the design of new, simple, and natural coding schemes. We believe that these insights and schemes will be helpful in future research to obtain the optimal, efficient coding schemes postulated to exist. In fact, we already have a number of subsequent works confirming this (e.g., the results of \cite{Braverman12,BK12,KnowledgePreservingIC} mentioned above can all be extended to have the optimal $1/4$ error rate; see also \Cref{sec:appendix:furtherresultssupportingthehypothesis}). All in all, we believe that identifying and formulating this hypothesis is an important conceptual contribution of this work.
\begin{hypothesis}[Invariability Hypothesis]\label{hypthesis}
Given any of the eight settings for interactive communication (discussed above), the maximum tolerable error rate is invariable regardless of:
\begin{enumerate}
\item whether the protocol to be simulated is an arbitrary $n$-round protocol or the much simpler ($n$-bit) exchange protocol, and
\item whether only $O(1)$-bit size alphabets are allowed or alphabets of arbitrary size, and
\item whether the simulation has to run in $O(n)$ rounds or is allowed to use an arbitrary number of rounds, and
\item whether the parties are restricted to polynomial time computations or are computationally unbounded, and
\item whether the coding schemes have to be deterministic or are allowed to use private randomness (even when only requiring an $o(1)$ failure probability), and
\item whether the adversary is computationally unbounded or is polynomially bounded in its computations (allowing simulation access to the coding scheme if the coding scheme is not computationally bounded).
\end{enumerate}
\end{hypothesis}
We note that our negative results are already as strong as stipulated by the hypothesis, for all eight settings. The next corollary furthermore summarizes how these negative results, combined with the positive results presented in this work (see \Cref{table:bounds}), already imply two weaker versions of the hypothesis:
\begin{corollary}\label{cor:weakhyp}
The Invariability Hypothesis holds if one weakens point 2. to ``2'. whether only $O(n)$-bit size alphabets are allowed or alphabets of arbitrary size''. \
The Invariability Hypothesis also holds if one weakens point 3. to ``3'. whether the simulation has to run in $O(n^2)$ rounds or is allowed to use an arbitrary number of rounds''.
\end{corollary}
We also refer the reader to \cite{GH13} and \Cref{sec:appendix:furtherresultssupportingthehypothesis} for further (subsequent) results supporting the hypothesis, such as a proof that the hypothesis holds if point \emph{3.} is replaced by ``\emph{3'. whether the simulation has to run in $O(n (\log^* n)^{O(\log^* n)})$ rounds\footnote{Here $\log^* n$ stands for the iterated logarithm, which is defined to be the number of times the natural logarithm function needs to be applied to $n$ to obtain a number less than or equal to $1$. We note that this round blowup is smaller than any constant times iterated logarithm applied to $n$, that is, $2^{O(\log^* n \, \cdot \, \log \log^* n)} = o(\overbrace{\log \log \ldots \log}^{\textit{constant times}} n)$.} or is allowed to use an arbitrary number of rounds}''.
\paragraph{Determining and Understanding Tolerable Error Rates in the Simplest Setting}
Next, we explain how we use the invariability hypothesis in finding the optimal tolerable error rates.
Suppose that one assumes, as a working hypothesis, the invariability of tolerable error rates to hold regardless of the computational setup and even the structure of the protocol to be simulated. Under this premise, the easiest way to determine the best error rate is to try to design robust simulations for the simplest possible two-way protocol, the \emph{Exchange Protocol}. This protocol simply gives each party $n$ bits as an input and has both parties learn the other party's input bits by exchanging them (see also \Cref{sec:exchange}). Studying this setup is considerably simpler. For instance, for non-adaptive protocols, it is easy to see that both parties sending their input in an error correcting code (or for $n=1$ simply repeating their input bit) leads to the optimal robust protocol, which tolerates any error rate below $1/4$ but not more. The same coding scheme with an ECC list decoder applied in the end also gives the tight $1/2$ bound for list decoding. For adaptive protocols (both with and without shared randomness), finding the optimal robust $1$-bit exchange protocol was less trivial but clearly still easier than trying to design highly efficient coding schemes for general protocols right away. Interestingly, looking at this simpler setup crystallized what can and cannot be done with adaptivity, and why. These insights, on the one hand, led to the strong lower bounds for the exchange problem and, on the other hand, were also translated in a crucial manner into the same tradeoffs for robustly simulating general $n$-round protocols.
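The non-adaptive $1$-bit exchange scheme just described is plain repetition with majority decoding. A minimal Python sketch, against one hypothetical worst-case adversary that spends its entire budget against Alice, illustrates the $1/4$ threshold:

```python
def majority_decode(received):
    """Decode a repeated bit by majority vote; None on a tie."""
    ones = sum(received)
    zeros = len(received) - ones
    if ones == zeros:
        return None  # the receiver cannot tell which half was corrupted
    return int(ones > zeros)

def exchange_round_trip(bit, n_rounds, error_rate):
    """Alice repeats `bit` in her n_rounds/2 sending turns; the adversary
    spends its whole budget of error_rate * n_rounds corruptions on her."""
    sent = [bit] * (n_rounds // 2)
    budget = int(error_rate * n_rounds)
    corrupted = min(budget, len(sent))
    received = [1 - bit] * corrupted + sent[corrupted:]
    return majority_decode(received)

assert exchange_round_trip(0, 1000, 0.24) == 0     # below 1/4: majority survives
assert exchange_round_trip(0, 1000, 0.25) is None  # at 1/4: a perfect tie
```

The tie at rate exactly $1/4$ is the repetition-code analogue of the \cite{BR11} argument above: half of Alice's transmissions claim each input.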
\paragraph{Natural Interactive Coding Schemes}
The invariability working hypothesis was also helpful in finding and formalizing simple and natural designs for obtaining robust coding schemes for general protocols.
Before describing these natural coding schemes we first discuss the element of ``surprise/magic'' in prior works on interactive coding. The existence of an interactive coding scheme that tolerates a constant error rate is a fairly surprising outcome of the prior works, and remains so even in hindsight. One reason for this is that the simplest way of adding redundancy to a conversation, namely encoding each message via an error correcting code, fails dramatically because the adversary can use its budget non-uniformly and corrupt the first message(s) completely. This derails the interaction and makes all further exchanges useless even if no corruption happens from there on. While prior works, such as \cite{Schulman} or \cite{BR11}, manage to overcome this major challenge, their solutions remain technically intriguing, both in terms of the ingredients they involve (tree codes, whose existence is itself a surprising fact) and the recipe for converting these ingredients into a solution to the interactive coding problem. As a result it would appear that dealing with errors in interactive communication is an inherently complex task.
In contrast to this, we aim to give an intuitive and natural strategy which lends itself nicely to a simple explanation for the possibility of robust interactive coding schemes, and even for why their tolerable error rates are as they are. This strategy simply asserts that if there is no hope to fully trust messages exchanged before, one should find ways to put any response into the assumed common context by (efficiently) referring back to what one said before. Putting this idea into a high-level semi-algorithmic description gives the following natural outline for a robust conversation:
\begin{algorithm}
\caption{Natural Strategy for a Robust Conversation (Alice's Side)}
\begin{algorithmic}[1]
\small
\State Assume nothing about the context of the conversation
\Loop
\State Listen
\State $E'_B \gets$ What you heard Bob say last (or so far)
\State $E_A \gets$ What you said last (or so far)
\If{$E_A$ and $E'_B$ make sense together}
\State Determine the most relevant response $r$
\State Send the response $r$ \emph{but also} include an (efficient) summary of what you said so far ($E_A$)
\Else
\State Repeat what you said last ($E_A$)
\EndIf
\EndLoop
\State Assume / Output the conversation outcome(s) that seem most likely
\end{algorithmic}
\label{alg:highlevelSimulator}
\end{algorithm}
At first glance the algorithm may appear vague. In particular, notions like ``making sense'' and ``most relevant response'' seem ambiguous and subject to interpretation. It turns out that this is not the case. In fact, formalizing this outline into a concrete coding scheme turns out to be straightforward. This is true especially if one accepts the invariability working hypothesis and allows oneself not to be overly concerned with immediately obtaining a highly efficient implementation. In particular, this permits using the simplest (inefficient) summary, namely referring back word by word to everything said before. This straightforward formalization leads to Algorithm \ref{alg:Simulator}. Indeed, a reader who compares the two algorithms side-by-side will find that Algorithm \ref{alg:Simulator} is essentially a line-by-line formalization of Algorithm \ref{alg:highlevelSimulator}.
In addition to being arguably natural, Algorithm \ref{alg:Simulator} is also easy to analyze. Simple counting arguments show that the conversation outcome output by both parties is correct if the adversary interferes with at most a $1/4 - \epsilon$ fraction of the transmissions, proving the tight tolerable error rate for the robust (if still somewhat inefficient) simulation of general $n$-round protocols. Maybe even more surprisingly, the exact same simple counting argument also directly shows our list decoding result, namely, that even with an error rate of $1/2 - \epsilon$ the correct conversation outcome is among the $1/\epsilon$ most likely choices for both parties. Lastly, it is easy to enhance both Algorithm \ref{alg:highlevelSimulator} and similarly Algorithm \ref{alg:Simulator} to be adaptive. For this one simply adds the following three, almost obvious, rules of an adaptive conversation:
\begin{rules}[Rules for a Successful Adaptive Conversation]\label{rules:ForAnAdaptiveConversation} \mbox{}\\
Be fair and take turns talking and listening, unless:
\begin{enumerate}
\item you are sure that your conversation partner already understood you fully and correctly, in which case you should stop talking and instead listen more to also understand him; or reversely
\item you are sure that you already understood your conversation partner fully and correctly, in which case you should stop listening and instead talk more to also inform him.
\end{enumerate}
\end{rules}
Our Algorithm \ref{alg:SimulatorAdaptive} adds exactly the formal equivalent of these rules to Algorithm \ref{alg:Simulator}. A relatively simple proof that draws on the insights obtained from analyzing the optimal robust exchange protocol then shows that this simple and natural coding scheme indeed achieves the optimal $2/7 - \epsilon$ tolerable error rate for adaptive unique decoding. This means that Algorithm \ref{alg:SimulatorAdaptive} is one intuitive and natural algorithm that simultaneously achieves the $1/4 - \epsilon$ error rate (if the adaptivity rules are ignored), the $2/7 - \epsilon$ error rate for adaptive protocols, and the $1/2 - \epsilon$ error rate with optimal list size when list decoding is allowed. Of course, so far, this result comes with the drawback of using a large ($O(n)$-bit) alphabet. Nonetheless, this result together with the invariability hypothesis holds open the promise of such a ``perfect'' algorithm that works even without this drastic communication overhead\footnote{Subsequent works of the authors have already moved towards this constant size alphabet protocol by reducing the alphabet size from $O(n)$ bits to merely $O(\log^{\epsilon} n)$ bits (see \Cref{sec:appendix:furtherresultssupportingthehypothesis}).}.
\section{Adaptivity in the Shared Randomness Setting}\label{sec:sharedrand}
In this section, we present our positive and negative results for adaptive protocols with access to shared randomness.
In the shared randomness setting of Franklin et al. \cite{FGOS13}, Alice and Bob have access to a shared random string that is not known to the adversary. As shown by Franklin et al. \cite{FGOS13}, the main advantage of this shared randomness is that Alice and Bob can use a larger communication alphabet for their protocol and agree on using only a random subset of it, chosen independently from round to round. Since the randomness is not known to the adversary, any attempt at corrupting a transmission results, with good probability, in an unused symbol. This makes most corruptions detectable to Alice and Bob and essentially turns corruptions into erasures. Franklin et al. call this way of encoding the desired transmissions into a larger alphabet to detect errors a \emph{blue-berry code}. It is well known for error correcting codes that erasures are in essence half as harmful as corruptions. Showing that the same holds for the use of tree codes in the algorithm of Braverman and Rao translates this factor-two improvement to the interactive setting. For non-adaptive protocols this yields a tolerable error rate of $1/2 - \epsilon$ instead of $1/4 - \epsilon$. In what follows we show that allowing adaptive protocols in this setting further raises the tolerable error rate to $2/3 - \epsilon$. We also show that no coding scheme can tolerate an error rate of $2/3$:
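The blue-berry code idea can be sketched in a few lines of Python; the alphabet sizes and the modeling of the shared random string as a seeded generator are illustrative assumptions, not the exact construction of \cite{FGOS13}:

```python
import random

SMALL = 4    # size of the actual protocol alphabet (illustrative choice)
LARGE = 64   # size of the larger channel alphabet (illustrative choice)

def fresh_injection(rng):
    """Per-round random injection of the small alphabet into the large one,
    derived from the shared randomness (modeled here as a shared seeded rng)."""
    return rng.sample(range(LARGE), SMALL)

def decode(symbol, injection):
    """Symbols outside the current image are detected as erasures (None = bot)."""
    return injection.index(symbol) if symbol in injection else None

shared = random.Random(42)   # shared random string, hidden from the adversary
inj = fresh_injection(shared)
assert decode(inj[2], inj) == 2   # honest transmissions always decode correctly

# A corruption chosen without knowledge of `inj` evades detection only with
# probability SMALL/LARGE = 1/16: most corruptions become visible erasures.
adversary = random.Random(7)
undetected = sum(decode(adversary.randrange(LARGE), inj) is not None
                 for _ in range(10_000))
assert undetected < 10_000 * 2 * SMALL / LARGE
```

Making the injection fresh in every round is what prevents the adversary from learning the used subset by observing earlier transmissions.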
\begin{theorem}[Second part of \Cref{thm:23boundsoverview}]\label{thm:xorshard23impossible}
In the shared randomness setting there is no (deterministic or randomized) adaptive protocol that robustly simulates the Exchange Protocol for an error rate of $2/3$ with an $o(1)$ failure probability, even when allowing computationally unbounded protocols that use an arbitrarily large number of rounds and an unbounded alphabet.
\end{theorem}
With the intuition that in the shared randomness setting corruptions are essentially equivalent to erasures, we prove \Cref{thm:xorshard23impossible} directly for a strictly weaker adversary, which can only delete transmitted symbols instead of altering them to other symbols. Formally, when the adversary erases the transmission of a party, the other party receives a special symbol $\bot$ which identifies the \emph{erasure} as such.
\begin{proof}
Suppose that there is a coding scheme with $N$ rounds that allows Alice and Bob to exchange their input bits while the adversary erases at most $2N/3$ of the transmissions. Consider each party in the special scenario in which this party receives a $\bot$ symbol whenever it listens. Let $x_A$ and $x_B$ be the (random variables of the) number of rounds in which Alice and Bob, respectively, listen when they are in this special scenario.
Suppose this number is usually small for one party, say Alice. That is, suppose we have $\Pr[x_A\leq 2N/3]\geq 1/3$. In this case the adversary gambles on the condition $x_A\leq 2N/3$ and simply erases Bob's transmission whenever Alice listens and Bob transmits. Furthermore, if both parties listen, the adversary delivers the symbol $\bot$ to both parties at no cost. This way, with probability at least $1/3$, Alice stays in the special scenario while the adversary adheres to its budget of at most $2N/3$ erasures. Since, in this case, Alice only receives the erasure symbol, she cannot know Bob's input. Therefore, if the adversary chooses Bob's input to be a random bit, the output of Alice will be wrong with probability at least $1/2$, leading to a total failure probability of at least $1/6$.
If on the other hand $\Pr[x_A\leq 2N/3]\leq 1/3$ and $\Pr[x_B\leq 2N/3]\leq 1/3$, then a union bound shows that $\Pr[(x_A\geq 2N/3) \wedge (x_B\geq 2N/3)]\geq 1/3$. In this case, the adversary tries to erase all transmissions of both sides. Indeed, if $x_A\geq 2N/3$ and $x_B\geq 2N/3$ this becomes possible, because then there must be at least $N/3$ rounds in which both parties listen simultaneously. For these rounds, the adversary gets to deliver the erasure symbol to both sides at no cost. In this case, which happens with probability at least $1/3$, there remain at most $2N/3$ rounds in which one of the parties listens alone, all of which the adversary can erase without running out of budget. With probability $\Pr[(x_A\geq 2N/3) \wedge (x_B\geq 2N/3)] \geq 1/3$ the adversary can thus prevent both parties from learning the other party's input. Choosing the inputs randomly results in at least one party decoding incorrectly with probability $3/4$, which in total leads to a failure probability of at least $1/3 \cdot 3/4 > 1/6$.
\end{proof}
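The counting step in this argument, namely that $x_A, x_B \geq 2N/3$ forces at least $N/3$ rounds in which both parties listen and leaves at most $2N/3$ rounds with a single listener, is a pigeonhole observation that can be sanity-checked exhaustively for a small round count. The following Python sketch (purely illustrative, with $N=6$) does so:

```python
from itertools import product

def overlap_bounds_hold(N):
    """Exhaustively check the pigeonhole step over all listen patterns
    (1 = listen, 0 = transmit) in which both parties listen >= 2N/3 times."""
    for alice, bob in product(product([0, 1], repeat=N), repeat=2):
        if sum(alice) >= 2 * N / 3 and sum(bob) >= 2 * N / 3:
            both = sum(a & b for a, b in zip(alice, bob))  # both listen: free erasures
            solo = sum(a ^ b for a, b in zip(alice, bob))  # one listens: costs budget
            if both < N / 3 or solo > 2 * N / 3:
                return False
    return True

assert overlap_bounds_hold(6)  # N = 6: small enough for exhaustive search
print("pigeonhole step verified")
```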
Turning to positive results, we first show how to solve the Exchange Problem robustly under an error rate of $2/3 - \epsilon$. While this is a much simpler setting, it already provides valuable insights into the more complicated general case of robustly simulating arbitrary $n$-round protocols. For simplicity we stay with the adversary who can only erase symbols; our general simulation result presented in \Cref{thm:protshard23} works, of course, for the regular adversary.
\begin{lemma}\label{thm:xorshard23prot}
Suppose $\epsilon>0$. In the shared hidden randomness model with rate adaptation there is a protocol that solves the Exchange Problem in $O(1/\epsilon)$ rounds under an adversarial erasure rate of $2/3-\epsilon$.
\end{lemma}
\begin{proof}
The protocol consists of $3/\epsilon$ rounds grouped into three parts containing $1/\epsilon$ rounds each. In the first part Alice sends her input symbol in every round while Bob listens. In the second part Bob sends his input symbol in every round while Alice listens. In the last part each of the two parties sends its input symbol if and only if it has received the other party's input, and not just erasures, during the first two parts; otherwise the party listens during the last part. Note that the adversary has a budget of $3/\epsilon \cdot (2/3 - \epsilon) = 2/\epsilon - 3$ erasures, which is too little to erase all $2/\epsilon$ transmissions during the first two parts. This results in at least one party learning the other party's input symbol. If both parties learn each other's input within the first two parts then the algorithm has succeeded. If, on the other hand, one party, say without loss of generality Alice, only received erasures, then Bob received her input symbol at least once. This results in Bob sending his input symbol during the last part while Alice listens. Bob therefore sends his input symbol a total of $2/\epsilon$ times while Alice listens. Not all of these transmissions can be erased and Alice therefore also knows Bob's input symbol in the end.
\end{proof}
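The case analysis in the proof above can be replayed exhaustively for a concrete parameter choice. The sketch below (illustrative only) fixes $\epsilon = 1/3$, so each part has $3$ rounds and the erasure budget is $3$; enumerating every erasure subset of size at most $3$ covers all deterministic adversaries, since erasing a round in which nothing is transmitted merely wastes budget:

```python
from itertools import combinations

def exchange_succeeds(erased):
    """Simulate the three-part protocol (1/eps = 3 rounds per part, rounds
    0..8) against a fixed set of erased rounds; return True iff both parties
    learn the other party's input."""
    bob_learned = any(r not in erased for r in range(0, 3))    # part 1: Alice sends
    alice_learned = any(r not in erased for r in range(3, 6))  # part 2: Bob sends
    for r in range(6, 9):                                      # part 3: adaptive
        if alice_learned and not bob_learned and r not in erased:
            bob_learned = True                                 # Alice resends, Bob listens
        elif bob_learned and not alice_learned and r not in erased:
            alice_learned = True                               # Bob resends, Alice listens
    return alice_learned and bob_learned

# Erasure budget: 9 * (2/3 - 1/3) = 3 erasures; try every adversary
for k in range(4):
    for erased in combinations(range(9), k):
        assert exchange_succeeds(set(erased))
print("every budget-respecting erasure adversary fails")
```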
\Cref{thm:xorshard23prot}, and even more so the structure of the robust exchange protocol presented in its proof, already give useful insights into how to achieve a general robust simulation result. We combine these insights with the blueberry code idea of \cite{FGOS13} to build what we call an \emph{adaptive exchange block}: a simple three-round protocol which will serve as a crucial building block in \Cref{thm:protshard23}. An adaptive exchange block is designed to transmit one symbol $\sigma_A \in \Sigma$ from Alice to Bob and one symbol $\sigma_B \in \Sigma$ from Bob to Alice. It assumes a detection parameter $\delta < 1$ and works over an alphabet $\Sigma'$ of size $|\Sigma'|=\ceil{|\Sigma|/\delta}$ as follows: First, Alice and Bob use their shared randomness to agree, for each of the three rounds, on an independent random subset of the larger communication alphabet $\Sigma'$ to be used as the encoding of $\Sigma$. Then, in the first round of the adaptive exchange block Alice sends the agreed equivalent of $\sigma_A$ while Bob listens. In the second round Bob sends the equivalent of $\sigma_B$ while Alice listens. Both parties try to translate the received symbol back to the alphabet $\Sigma$ and declare a (detected) corruption if this is not possible. In the last round of the adaptive exchange block a party sends the encoding of its $\Sigma$-symbol if and only if a failure was detected; otherwise the party listens and tries again to decode any received symbol.
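A minimal Python sketch of the codebook mechanism behind the adaptive exchange block may be helpful. The alphabet sizes below are arbitrary toy choices, and the final loop empirically confirms that a blindly substituted symbol escapes detection only with probability about $\delta$:

```python
import random

SIGMA = list(range(4))      # protocol alphabet; |Sigma| = 4 is an arbitrary toy choice
DELTA = 0.25                # detection parameter
SIGMA_PRIME = list(range(int(len(SIGMA) / DELTA)))  # |Sigma'| = |Sigma|/delta = 16

def shared_codebooks(seed, rounds=3):
    """Shared randomness: an independent random embedding of Sigma into Sigma'
    for each round of the exchange block."""
    rng = random.Random(seed)
    return [dict(zip(SIGMA, rng.sample(SIGMA_PRIME, len(SIGMA))))
            for _ in range(rounds)]

def decode(codebook, received):
    """Translate a Sigma'-symbol back to Sigma; None means a detected corruption."""
    inverse = {v: k for k, v in codebook.items()}
    return inverse.get(received)

# A blindly substituted symbol decodes to a meaningful (but wrong) symbol only
# if it happens to hit the hidden image of Sigma, i.e. with probability
# |Sigma|/|Sigma'| = DELTA; otherwise the corruption is detected.
rng = random.Random(1)
trials = 20000
hits = sum(decode(shared_codebooks(seed=t)[0], rng.choice(SIGMA_PRIME)) is not None
           for t in range(trials))
print(hits / trials)  # empirically close to DELTA = 0.25
```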
The following two properties of the adaptive exchange block are easily verified:
\begin{lemma}\label{lem:exchangeblock}
Regardless of how many transmissions an adversary corrupted during an adaptive exchange block, the probability that at least one of the parties decodes a wrong symbol is at most $3\delta$.
In addition, if an adversary corrupted at most one transmission during an adaptive exchange block, then with probability at least $1 - \delta$ both parties receive their $\sigma$-symbols correctly.
\end{lemma}
\begin{proof}
For a party to decode a wrong symbol it must be the case that during one of the three rounds the adversary hit a meaningful symbol in the alphabet $\Sigma'$ during a corruption. Since $|\Sigma|/|\Sigma'|\leq \delta$, this happens with probability at most $\delta$ during any specific round and with probability at most $3\delta$ during any of the three rounds. To prove the second statement we note that in order for a decoding error to happen the adversary must interfere during one of the first two rounds. With probability $1 - \delta$ such an interference is, however, detected, leading the corrupted party to resend in the third round.
\end{proof}
Next we explain how to use the adaptive exchange block together with the ideas of \cite{FGOS13} to prove that adaptive protocols with shared randomness can tolerate any error rate below $2/3$:
\begin{theorem}[The first part of \Cref{thm:23boundsoverview}]\label{thm:protshard23}
Suppose $\Pi$ is an $n$-round protocol over a constant bit-size alphabet. For any $\epsilon >0$, there is an adaptive protocol $\Pi'$ that uses shared randomness and robustly simulates $\Pi$ for an error rate of $2/3 - \epsilon$ with a failure probability of $2^{-\Omega(n)}$.
\end{theorem}
\begin{proof}[Proof Sketch]
The protocol $\Pi'$ consists of $N=\frac{6n}{\epsilon}$ rounds which are grouped into $\frac{2n}{\epsilon}$ adaptive exchange blocks. The adversary has an error budget of $\frac{4n}{\epsilon}-6n$ corruptions. Choosing the detection parameter $\delta$ in the exchange blocks to be at most $\epsilon/(6 \cdot 4 + 1)$ and using \Cref{lem:exchangeblock} together with a Chernoff bound gives that with probability $1 - 2^{-\Omega(n)}$ there are at most $n/4$ adaptive exchange blocks in which a wrong symbol is decoded. Similarly, the number of adaptive exchange blocks in which only one corruption occurred but not both parties received correctly is with probability $1 - 2^{-\Omega(n)}$ at most $n/3$. Lastly, the number of adaptive exchange blocks in which the adversary can force an erasure by corrupting two transmissions is at most $(\frac{4n}{\epsilon}-6n)/2$. We can therefore assume that, regardless of the adversary's actions, at most $n$ corruptions and at most $\frac{2n}{\epsilon} - \frac{8n}{3}$ erasures happen during any execution of $\Pi'$, at least with the required probability of $1 - 2^{-\Omega(n)}$.
We can now apply the arguments given in \cite{FGOS13}. These arguments build on the result in \cite{BR11} and essentially show that detected corruptions or erasures can be counted as half a corruption. Since almost all parts of the lengthy proofs in \cite{BR11} and \cite{FGOS13} stay the same, we restrict ourselves to a proof sketch. For this we first note that the result in \cite{BR11} continues to hold if, instead of taking turns transmitting for $N$ rounds, both Alice and Bob use $N/2$ rounds transmitting their next symbol simultaneously. The extensions described in \cite{FGOS13} furthermore show that this algorithm still performs a successful simulation if the number of effectively corrupted simultaneous rounds is at least $n$ less than half of all simultaneous rounds. Here the number of effectively corrupted simultaneous rounds is simply the number of simultaneous rounds with an (undetected) corruption plus half the number of simultaneous rounds suffering from an erasure (a detected corruption). Using one adaptive exchange block to simulate one simultaneous round leads to $\frac{2n}{\epsilon}$ simultaneous rounds and an effective number of corrupted rounds of $n/4 + (\frac{2n}{\epsilon} - \frac{8n}{3})/2 < \frac{1}{2}\cdot\frac{2n}{\epsilon} - n$. Putting everything together proves that with probability at least $1 - 2^{-\Omega(n)}$ the protocol $\Pi'$ successfully simulates the protocol $\Pi$, as claimed.
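The budget accounting in the last step can be checked numerically. The snippet below (illustrative, with arbitrary parameter choices) verifies that the effective number of corrupted simultaneous rounds indeed stays $n$ below half of all simultaneous rounds:

```python
# Numeric check of the effective-corruption accounting in the proof sketch.
for eps in (0.1, 0.01):
    n = 1200
    blocks = 2 * n / eps                    # adaptive exchange blocks = simultaneous rounds
    corruptions = n / 4                     # undetected wrong decodings (w.h.p.)
    erasures = 2 * n / eps - 8 * n / 3      # detected corruptions / erasures (w.h.p.)
    effective = corruptions + erasures / 2  # erasures count half, as in [FGOS13]
    assert effective < blocks / 2 - n       # the slack required by the simulation
print("effective corruption budget respected")
```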
This way of constructing the adaptive protocol $\Pi'$ leads to an optimal linear number of rounds and a constant-size alphabet but results in exponential-time computations. We remark that using efficiently decodable tree codes, such as the ones described in the postscript to \cite{Schulman} on Schulman's webpage, one can also obtain a computationally efficient coding scheme at the cost of using a large $O(n)$-bit alphabet. Lastly, applying the same ideas as in \Cref{sec:AlgLargeRounds} also allows one to translate this computationally efficient coding scheme with a large alphabet into one that uses a constant-size alphabet but a quadratic number of rounds.
\end{proof}
\section{Natural Interactive Coding Schemes With Large Alphabets}\label{sec:simulators}
We start by presenting a canonical format for interactive communication and then present our natural non-adaptive and adaptive coding schemes.
\subsection{Interactive Protocols in Canonical Form}
We consider the following canonical form of an $n$-round two-party protocol over alphabet $\Sigma$: We call the two parties Alice and Bob. To define the protocol between them, we take a rooted complete $|\Sigma|$-ary tree of depth $n$. Each non-leaf node has $|\Sigma|$ edges to its children, each labeled with a distinct symbol from $\Sigma$. For each node, one of the edges towards its children is \emph{preferred}, and these \emph{preferred} edges determine a unique leaf or, equivalently, a unique path from the root to a leaf. We say that the set $\mathcal{X}$ of the preferred edges at odd levels of the tree is owned by Alice and the set $\mathcal{Y}$ of the preferred edges at even levels of the tree is owned by Bob. This means that at the beginning of the protocol, Alice gets to know the preferred edges on the odd levels and Bob gets to know the preferred edges on the even levels. The knowledge about these preferred edges is considered as the inputs $\mathcal{X}$ and $\mathcal{Y}$ given respectively to Alice and Bob. The output of the protocol is the unique path from the root to a leaf following only preferred edges. We call this path the \emph{common path} and the edges and nodes on this path the \emph{common edges} and the \emph{common nodes}. The goal of the protocol is for Alice and Bob to determine the common path. The protocol succeeds if and only if both Alice and Bob learn the common path. Figure \ref{fig:PointerJumping} illustrates an example: Alice's preferred edges are indicated with blue arrows, Bob's preferred edges are indicated with red arrows, and the common leaf is indicated by a green circle.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{PointerJumping}
\caption{A Binary Interactive Protocol in the Canonical Form}
\label{fig:PointerJumping}
\end{figure}
It is easy to see that if the channel is noiseless, Alice and Bob can determine the common path of a canonical protocol $P$ by performing $n$ rounds of communication. For this, Alice and Bob move down the tree together, simply by following the path of preferred edges; they take turns and exchange one symbol of $\Sigma$ per round, where each symbol indicates the next common node. We call this exchange the execution of the protocol $P$.
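The noiseless execution just described can be sketched in a few lines. The representation below (nodes identified by their root path) is our own illustrative choice, not part of the formal model:

```python
def run_canonical(prefA, prefB, n):
    """Noiseless execution: starting at the root, the owner of the current
    level announces its preferred edge and both parties follow it.  A node is
    identified by the path (tuple of edge symbols) from the root; prefA/prefB
    map a node to the symbol of its preferred outgoing edge."""
    path = ()
    for level in range(1, n + 1):
        owner = prefA if level % 2 == 1 else prefB  # Alice owns the odd levels
        path += (owner[path],)
    return path  # the common leaf

# toy binary instance of depth 4; only the preferences along (and near) the
# common path are spelled out
prefA = {(): 1, (1, 1): 0, (0, 0): 1, (1, 0): 1}        # odd levels 1 and 3
prefB = {(0,): 0, (1,): 1, (1, 1, 0): 1, (0, 0, 1): 0}  # even levels 2 and 4
assert run_canonical(prefA, prefB, 4) == (1, 1, 0, 1)
```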
\subsection{Natural Non-Adaptive Coding Schemes}\label{sec:alg}
In this section, we present a non-adaptive coding scheme which can be viewed as a straightforward formalization of the natural high-level approach presented in \Cref{sec:overview}. This coding scheme tolerates the optimal error rate of $1/4-\epsilon$ when unique decoding and simultaneously the optimal error rate of $1/2-\epsilon$ when list decoding. The coding scheme is furthermore simple, intuitive, and computationally efficient, but it makes use of a large $O(\frac{n}{\epsilon})$-bit alphabet. We note that one can also view this algorithm as a simplified version of the Braverman-Rao algorithm~\cite{BR11} with larger alphabet size and without using tree codes~\cite{Schulman}.
The algorithm, for which pseudo code is presented in Algorithm \ref{alg:Simulator}, works as follows: In the course of the algorithm, Alice and Bob respectively maintain sets $E_A$ and $E_B$, which are subsets of their own preferred tree edges that are considered to be \emph{important}. We call these \emph{important edge-sets} or sometimes simply \emph{edge-sets}. Initially these edge-sets are empty and in each iteration, Alice and Bob add at most one edge to their sets. In each iteration, when a party gets a turn to transmit, it sends its edge-set to the other party. The other party receives either the correct edge-set or a corrupted symbol which represents an edge-set made up by the adversary. In either case, the party combines the received edge-set with its own important edge-set and follows the common path in this set. Then, if this common path can be extended by a new edge $e$ from the party's own set of \emph{preferred edges}, the party adds this edge $e$ to its edge-set, and sends this new edge-set in the next round. If, on the other hand, the common path already ends at a leaf, then the party registers this as a vote for this leaf and simply re-sends its old edge-set. In the end, both parties simply output the leaf (respectively, the $O(1/\epsilon)$ leaves) with the most votes for unique decoding (resp., for list decoding).
\begin{algorithm}[t]
\caption{Natural Non-Adaptive Coding Scheme at Alice's Side}
\begin{algorithmic}[1]
\small
\State $\mathcal{X} \gets$ the set of Alice's preferred edges;
\State $E_A \gets \emptyset$; \Comment{$E_A$ is Alice's set of \emph{important} edges. We preserve that always $E_A \subseteq \mathcal{X}$}
\State $N \gets \frac{2n}{\epsilon}$;
\For{$i=1$ to $N$}
\State Receive edge-set $E'_B$; \Comment{$E'_B$ is the received version of Bob's important edge-set $E_B$}
\State $E \gets E'_B \cup E_A$
\If{$E$ is a valid edgeset}
\State $r \gets \emptyset$
\State follow the common path in $E$
\If{the common path ends at a leaf}
\State Add one vote to this leaf
\Else
\State $r \gets \{e\}$ where $e$ is the next edge in $\mathcal{X}$ continuing the common path in $E$ (if any)
\EndIf
\State $E_A \gets E_A \cup r$
\State Send $E_A$
\Else
\State Send $E_A$
\EndIf
\EndFor
\State Output the leaf with the most votes for unique decoding
\State Output the $O(1/\epsilon)$ leaves with the most votes for list decoding
\end{algorithmic}
\label{alg:Simulator}
\end{algorithm}
\paragraph{Analysis}
We now prove that Algorithm \ref{alg:Simulator} indeed achieves the optimal tolerable error rates for non-adaptive unique decoding and list decoding.
\begin{theorem}\label{thm:nonadaptiveSim}
For any $\epsilon>0$, Algorithm \ref{alg:Simulator} is a deterministic polynomial time non-adaptive simulator with alphabet size of $O(\frac{n}{\epsilon})$-bits and round complexity $\frac{2n}{\epsilon}$ that tolerates an error-rate of $1/4-\epsilon$ for unique decoding, and also tolerates an error-rate of $1/2 - \epsilon$ for list decoding with a list of size $\frac{1}{\epsilon}$.
\end{theorem}
\begin{proof}
Clearly, both $E_A$ and $E_B$ grow by at most one edge per round. Furthermore, the edges always attach to an already present edge and therefore, each of these edge-sets always forms a subtree of size at most $N$ starting at the root of the tree of the canonical form, which has depth $n$. One can easily see that each such subtree can be encoded using $O(N)$ bits, e.g., by encoding each edge of a depth-first traversal of the subtree using an alphabet of size $3$ (indicating ``left", ``right", or ``up"). Hence, the parties can encode their edge-sets using $O(\frac{n}{\epsilon})$-bit symbols, which shows that the alphabet size is indeed as specified.
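One concrete realization of such an encoding is a depth-first Euler tour over the three symbols. The sketch below (with binary $\Sigma$ and nodes represented as root paths, both our own illustrative choices) only demonstrates that $O(N)$ symbols suffice:

```python
def encode_subtree(nodes):
    """Encode a binary subtree (a set of root paths containing the root ())
    as a depth-first Euler tour over {'L', 'R', 'U'}: exactly 2 symbols per
    edge, hence O(|subtree|) symbols in total."""
    out = []
    def walk(path):
        for sym, step in ((0, 'L'), (1, 'R')):
            child = path + (sym,)
            if child in nodes:
                out.append(step)
                walk(child)
                out.append('U')
    walk(())
    return out

def decode_subtree(tour):
    """Invert encode_subtree: replay the tour and collect the visited nodes."""
    nodes, path = {()}, ()
    for step in tour:
        if step == 'U':
            path = path[:-1]
        else:
            path = path + (0 if step == 'L' else 1,)
            nodes.add(path)
    return nodes

tree = {(), (0,), (1,), (1, 0), (1, 0, 0)}
assert decode_subtree(encode_subtree(tree)) == tree
assert len(encode_subtree(tree)) == 2 * (len(tree) - 1)  # 2 symbols per edge
```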
We now prove the correctness of the algorithm, starting with that of unique decoding. Note that in any two consecutive rounds in which Bob's and Alice's transmissions are not corrupted by the adversary, one of the following two good things happens: either the path in $E_A \cup E_B$ gets extended by at least one edge, or both Alice and Bob receive a vote for the correct leaf.
Now suppose that the simulation runs in $N = 2n/\epsilon$ rounds which can be grouped into $N/2 = n/\epsilon$ round pairs. Given the error rate of $1/4 - \epsilon$, at most a $1/2 - 2\epsilon$ fraction of these round pairs can be corrupted, which leaves at least $\frac{N}{2}(1/2 + 2\epsilon)$ uncorrupted round pairs. At most $n$ of these round pairs grow the path while the remaining uncorrupted round pairs vote for the correct leaf. This results in at least $\frac{N}{2}(1/2 + 2\epsilon) - n = \frac{n}{2\epsilon} + 2n - n = \frac{n}{2\epsilon} + n > N/4$ out of $N/2$ votes being correct.
For list decoding, with error rate $1/2-\epsilon$, we get that at most a $1-2\epsilon$ fraction of the round pairs are corrupted, and thus at least $\epsilon N = 2n$ uncorrupted pairs exist. Hence, the correct leaf gets a vote of at least $2n-n = n$. Noting that the total number of votes that one party gives to its leaves is $N/2=\frac{n}{\epsilon}$, we get that the correct leaf has at least an $\epsilon$ fraction of all the votes. Therefore, if we output the $1/\epsilon$ leaves with the most votes, the list will include the correct leaf.
\end{proof}
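The vote counting in this proof can be replayed numerically. The snippet below (with binary-friendly choices of $\epsilon$ so that the tight list-decoding bound is met exactly in floating point) checks both the unique-decoding and the list-decoding counts:

```python
def vote_bounds(n, eps):
    """Replay the vote counting for unique and list decoding."""
    N = 2 * n / eps                            # rounds
    pairs = N / 2                              # round pairs
    # unique decoding, error rate 1/4 - eps:
    uncorrupted = pairs - (0.25 - eps) * N     # pairs free of corruption
    correct_votes = uncorrupted - n            # pairs not spent growing the path
    assert correct_votes > N / 4               # strict majority of the N/2 votes
    # list decoding, error rate 1/2 - eps:
    uncorrupted_ld = pairs - (0.5 - eps) * N
    assert uncorrupted_ld - n >= eps * pairs   # an eps fraction of all votes (tight)

for n in (64, 1000):
    for eps in (0.125, 0.0625):                # exact in binary floating point
        vote_bounds(n, eps)
print("vote bounds verified")
```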
\iffalse
\begin{remark}We should be able to get Mark's new result that there is an algorithm that uniquely decodes if the adversary promises to compromise at most an $e_A$ fraction of Alice's transmissions and an $e_B$ fraction of Bob's transmissions where $e_A, e_B$ satisfy: $e_A + 2 e_B < 1 - \epsilon$ and $2 e_A + e_B < 1 - \epsilon$.
\end{remark}
\fi
\subsection{Natural Adaptive Coding Scheme}
In this section we show the simplest way to introduce adaptation into the natural coding scheme presented in Algorithm \ref{alg:Simulator}. In particular, we use the simple rules specified in \Cref{rules:ForAnAdaptiveConversation} and show that this leads to a coding scheme tolerating an error rate of $2/7 - \epsilon$, the optimal error rate for this setting.
\begin{algorithm}[t]
\caption{Natural Adaptive Coding Scheme at Alice's Side}
\begin{algorithmic}[1]
\small
\State $\mathcal{X} \gets$ the set of Alice's preferred edges;
\State $E_A \gets \emptyset$;
\State $N \gets \Theta(\frac{n}{\epsilon})$;
\For{$i=1$ to $\frac{6}{7}N$}
\State Receive edge-set $E'_B$;
\State $E \gets E'_B \cup E_A$
\If{$E$ is a valid edgeset}
\State $r \gets \emptyset$
\State follow the common path in $E$
\If{the common path ends at a leaf}
\State Add one vote to this leaf
\Else
\State $r \gets \{e\}$ where $e$ is the next edge in $\mathcal{X}$ continuing the common path in $E$ (if any)
\EndIf
\State $E_A \gets E_A \cup r$
\State Send $E_A$
\Else
\State Send $E_A$
\EndIf
\EndFor
\State Let $s$ be the number of votes of the leaf with the most votes and let $t$ be the total number of votes
\If{$s\geq t-\frac{N}{7}$}
\For{$i=1$ to $\frac{N}{7}$}
\State Send $E_A$
\EndFor
\Else
\For{$i=1$ to $\frac{N}{7}$}
\State Receive edge-set $E'_B$; $E = E'_B \cup E_A$
\If{$E$ is a valid edge-set}
\State follow the common path in $E$
\If{the common path ends at a leaf}
\State Add one vote to this leaf
\EndIf
\EndIf
\EndFor
\EndIf
\State Output the leaf with the most votes
\end{algorithmic}
\label{alg:SimulatorAdaptive}
\end{algorithm}
Next we explain how to incorporate the rules specified in \Cref{rules:ForAnAdaptiveConversation} easily and efficiently into Algorithm \ref{alg:Simulator}. For this we note that, for example, if one party has a leaf with more than $(2/7 - \epsilon)N$ votes, then, since the adversary only has a budget of $(2/7 - \epsilon)N$, this leaf is the correct leaf and thus the party can follow the second rule. Generalizing this idea, we use the rule that, if a party has a leaf $v$ such that at most $\frac{N}{7}$ votes are on leaves other than $v$, then the party can safely assume that this is the correct leaf. In our proof we show that this assumption is indeed safe and, furthermore, that at least one party can safely decode at the end of the first $6/7$ fraction of the simulation. Since both parties know this in advance, if a party cannot safely decode after a $6/7$ fraction of the time, it knows that the other party has safely decoded---which corresponds to the condition in the first rule---and in this case, the party only listens for the last $1/7$ fraction of the protocol. The pseudo code for this coding scheme is presented in Algorithm \ref{alg:SimulatorAdaptive}.
\begin{theorem}\label{thm:uniquedecoding}
Algorithm \ref{alg:SimulatorAdaptive} is a deterministic adaptive coding scheme with alphabet size of $O(\frac{n}{\epsilon})$-bits, round complexity of $O(\frac{n}{\epsilon})$, and polynomial computational complexity that tolerates an error-rate of $2/7-\epsilon$ for unique decoding.
\end{theorem}
\begin{proof}
First, we show that if at the end of the first $\frac{6N}{7}$ rounds one party has $t$ votes, $s\geq t-\frac{N}{7}$ of which are dedicated to one leaf $v$, then this party can safely assume that this leaf $v$ is the correct leaf. The proof of this part is by a simple contradiction. Note that if the party has only $t$ votes, then there are at least $\frac{3N}{7}-t$ errors that either stopped the growth of the path or turned an edge-set into an invalid edge-set. Furthermore, if $v$ is not the correct leaf, then the votes for $v$ were created by errors of the adversary, which means that the adversary has invested $s$ errors in turning the edge-sets sent by the other party into other valid-looking edge-sets. Hence, in total, the adversary has spent at least $\frac{3N}{7}-t+s\geq \frac{3N}{7}-t+t-\frac{N}{7} \geq \frac{2N}{7}$ errors, which is a contradiction.
Now that we know that the rule for safely decoding at the end of the first $\frac{6N}{7}$ rounds is indeed safe, we show that at least one party will safely decode at that point in time. Suppose that no party can decode safely. Also assume that Alice has $t_{A}$ votes, $r_{A}$ of which are votes on the correct leaf. That means the adversary has turned at least $t_{A}-r_{A}$ edge-sets sent by Bob into other valid-looking edge-sets. Similarly, $t_{B}-r_{B}$ errors are introduced by the adversary on edge-sets sent by Alice. If neither Alice nor Bob can decode safely, we know that $t_{A}-r_{A}\geq \frac{N}{7}$ and $t_{B}-r_{B}\geq \frac{N}{7}$, which means that in total, the adversary has introduced at least $\frac{2N}{7}$ errors. Since this is not possible given the adversary's budget, we conclude that at the end of the first $\frac{6N}{7}$ rounds, at least one party decodes safely.
Now suppose that only one party, say Alice, decodes safely at the end of the first $\frac{6N}{7}$ rounds. Then, in the last $\frac{N}{7}$ rounds, Bob is listening and Alice is sending. In this case, we claim that the leaf that gets the majority of Bob's votes at the end is the correct leaf. To see this, suppose that Bob has $t_{B}$ votes from the first $\frac{6N}{7}$ rounds and $t'_{B}$ votes from the last $\frac{N}{7}$ rounds. Furthermore, suppose that the correct leaf had $r_{B}$ votes from the first $\frac{6N}{7}$ rounds and $r'_{B}$ votes from the last $\frac{N}{7}$ rounds. Then, the adversary has introduced at least $(\frac{3N}{7}-t_{B})+(\frac{N}{7}-t'_{B})+ (t_{B}-r_{B})+(t'_{B}-r'_{B})= \frac{4N}{7}-r_{B}-r'_{B}$ errors. Since the adversary's budget is at most $(\frac{2}{7}-\epsilon)N$, we get that $r_{B}+r'_{B} > \frac{2N}{7}$. Hence, since Bob clearly has at most $\frac{4N}{7}$ votes in total, the correct leaf has the majority.
\end{proof}
\section{Coding Schemes with Small Alphabet and $O(n^2)$ Rounds}\label{sec:AlgLargeRounds}
In this section, we show how to implement the natural coding schemes presented as Algorithms \ref{alg:Simulator} and \ref{alg:SimulatorAdaptive} over a channel supporting only constant size symbols at the cost of increasing the number of rounds to $O(n^2)$.
To emulate Algorithms \ref{alg:Simulator} and \ref{alg:SimulatorAdaptive} over a finite alphabet, we use error correcting codes (ECCs) and list decoding. In particular, on the transmission side, we encode each edge-set, which will have size at most $O(\frac{n}{\epsilon^2})$, using an ECC with relative distance $1-\epsilon/10$, alphabet size $O(\frac{1}{\epsilon})$, and code length $O(\frac{n}{\epsilon^3})$ symbols, and send this encoding in $O(\frac{n}{\epsilon^3})$ rounds. We call each such group of $O(\frac{n}{\epsilon^3})$ rounds related to the transmission of one edge-set a \emph{block}. On the receiver side, we use a list decoder to produce a list of $O(\frac{1}{\epsilon})$ edge-sets such that, if the error rate in the \emph{block} is at most $1-\epsilon/3$, then one of the edge-sets in the list is indeed equal to the transmitted edge-set. If the error rate is greater than $1-\epsilon/3$, the list provided by the list decoder does not need to provide any guarantee.
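For intuition, a brute-force stand-in for the list decoder can simply scan a small codebook and keep every codeword that agrees with the received block on a sufficient fraction of positions. The actual scheme of course requires an efficiently list-decodable ECC; the codebook below is only a toy repetition code:

```python
def brute_force_list_decode(codebook, received, agreement):
    """Toy list decoder: return every codeword that agrees with the received
    block on at least an `agreement` fraction of positions.  Works only for
    codebooks small enough to scan exhaustively."""
    L = len(received)
    return [cw for cw in codebook
            if sum(a == b for a, b in zip(cw, received)) >= agreement * L]

codebook = [tuple([s] * 12) for s in range(5)]  # toy repetition code, relative distance 1
received = (0, 0) + (1, 2, 3, 4) * 2 + (1, 2)   # codeword 0 corrupted at rate 5/6
result = brute_force_list_decode(codebook, received, agreement=0.1)
assert codebook[0] in result                    # the true codeword survives list decoding
```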
We use $O(\frac{n}{\epsilon^3})$ rounds for each block because, due to list decoding, each edge-set now contains up to $O(\frac{n}{\epsilon^2})$ edges (compared to $O(\frac{n}{\epsilon})$ edges in Algorithms \ref{alg:Simulator} and \ref{alg:SimulatorAdaptive}). Adding the edges corresponding to each of these edge-sets to the set of \emph{important} edges leads to the following list decoding result:
\begin{lemma}
If the error rate is at most $1/2 - \epsilon$, then for both parties, the set of $O(1/\epsilon^2)$ leaves with the highest votes includes the correct leaf.
\end{lemma}
\begin{proof}
This is because, with error rate $1/2-\epsilon$, the adversary can corrupt at most a $1/2-\epsilon/3$ fraction of the blocks beyond a corruption rate of $1-\epsilon/3$. Hence, we are now in a regime where at most a $1/2-\epsilon/3$ fraction of the edge-sets are corrupted, each possibly even completely, and for every other edge-set, the (list) decoding includes the correct transmitted edge-set. Hence, similar to the proof of \Cref{thm:nonadaptiveSim}, we get that the correct leaf gets a vote of at least $\Omega(\epsilon N)$. On the other hand, each block might now give a vote to up to $O({1}/{\epsilon})$ different leaves and thus, the total weight is at most $O({N}/{\epsilon})$. Therefore, the correct leaf is within the top $O(1/\epsilon^2)$ voted leaves.
\end{proof}
We next explain a simple idea that allows us to extend this result to unique decoding; specifically, to non-adaptive unique decoding when the error rate is at most $1/4-\epsilon$, and to adaptive unique decoding when the error rate is at most $2/7-\epsilon$.
The idea is to use a variation of Forney's technique for decoding concatenated codes~\cite{forney1966concatenated}. Recall that each received edge-set might lead to two things: (1) extending the common path, or (2) adding a vote to a leaf. While we keep the first part as above with list decoding, we make the voting weighted. In particular, on the receiver side, we take each edge-set leading to a leaf (when combined with the local important edge-set) as a vote for the related leaf, but we weight this vote according to its Hamming distance to the received block. More precisely, if the edge-set has relative distance $\delta$ from the received block, the related leaf gets a vote of $\max\{1-2\delta, 0\}$.
Using this weighting function, we intuitively have that if the adversary corrupts an edge-set up to a corruption rate of $\frac{1}{2}$, then even though the edge-set gets added to the set of important edges, in the weighting procedure this edge-set can only add a weight of at most $2\epsilon$ to any (incorrect) leaf. Hence, for instance, if the adversary wants to add a unit of weight to the votes of an incorrect leaf, it has to almost completely corrupt the symbols of the related block.
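The weighting rule itself is a one-liner; the helper and the toy values below are merely illustrative:

```python
def relative_distance(a, b):
    """Relative Hamming distance between two equal-length blocks."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def vote_weight(delta):
    """Forney-style soft vote: an edge-set at relative distance `delta` from
    the received block contributes max(1 - 2*delta, 0) to its leaf."""
    return max(1.0 - 2.0 * delta, 0.0)

assert vote_weight(0.0) == 1.0   # clean block: a full vote
assert vote_weight(0.5) == 0.0   # half-corrupted block: a worthless vote
assert vote_weight(relative_distance((0, 1, 1, 0), (0, 1, 1, 1))) == 0.5
```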
\begin{lemma}\label{lem:14constant} If the error rate is at most $1/4 - \epsilon$, then for both parties, the leaf with the highest weighted votes is the correct leaf.
\end{lemma}
\begin{proof}
We show that the correct leaf $u_{g}$ gets strictly more weighted votes than any other leaf $u_b$. For this, we use a potential $\Phi=W(u_g)-W(u_b)$, that is, the total weight added to the good leaf minus that of the bad leaf, and we show this potential to be strictly positive. Let $P_{c}$ be the set of edges of the correct path. Let $t$ be the time at which the common path in $E_A \cup E_B\cup P_{c}$ ends in a leaf. First note that for each two consecutive blocks before time $t$ in which the corruption rate is at most $1-\epsilon/3$, the common path in $E_A \cup E_B\cup P_{c}$ gets extended by (at least) one edge (towards the correct leaf). Hence, at most $n$ such (``not completely corrupted") block pairs are spent on growing the common path in $E_A \cup E_B\cup P_{c}$ and also, $t$ happens after at most $2N(1/2-2\epsilon)(1+\epsilon/3) <N (1-2\epsilon) < N-4n$ blocks. That is, $t$ happens at least $4n$ blocks before the end of the simulation. Before time $t$, we do not add any weight to $W(u_g)$, but each block corrupted with rate $x\geq 1/2$ changes $\Phi$ in the worst case by $1-2x\leq 0$. After time $t$, each block corrupted with rate $x \in [0,1-\epsilon/3]$ changes $\Phi$ in the worst case by $1-2x$ and each block corrupted to a rate $x\in (1-\epsilon/3, 1]$ changes $\Phi$ by at most $-1$. These two cases can be covered as a change of no worse than $1-2(1+\epsilon/3)x$. Since the adversary's error rate is at most $1/4-\epsilon$, in total, before and after time $t$, it can corrupt at most a $1/2-2\epsilon$ fraction of the receptions of one party and thus, $\Phi \geq 1-2(1/2-2\epsilon)(1+\epsilon/3) \geq 3\epsilon >0$.
\end{proof}
For the $2/7-\epsilon$ adaptive algorithm, we first present a lemma about the total weight assigned to the leaves due to one edge-set reception.
\begin{lemma}\label{lem:listweight}For each list decoded block that has corruption rate $\rho$, the summation of the weight given to all the codewords in the list is at most $|1-2\rho|+3\epsilon/5$.
\end{lemma}
\begin{proof} First we show that at most $3$ codewords receive nonzero weight, i.e., lie at relative distance less than $1/2$ from the received string. The proof is by contradiction. Suppose that there are $4$ codewords $x_1$ to $x_4$ with nonzero weight and, for each $x_i$, let $S_i$ be the set of coordinates where $x_i$ agrees with the received string $x$. Let $\ell$ be the length of the string $x$. Note that $|S_i|\geq \ell/2$ for every $i$ and, since the code has relative distance $1-\epsilon/10$, $|S_i \cap S_j| \leq \epsilon\ell/10$ for all $i\neq j \in \{1, 2, 3, 4\}$. Hence, with a simple inclusion-exclusion, we have $$\ell \geq |\cup_{i} S_i |\geq \sum_{i}|S_i| -\sum_{i<j} |S_i \cap S_j|\geq \frac{4\ell}{2} - \frac{6\epsilon\ell}{10} >\ell,$$
which is a contradiction.
Having established that at most $3$ codewords receive nonzero weight, we now conclude the proof. Let $x_1$ to $x_3$ be the codewords that receive nonzero weight and assume that $x_1$ is the closest codeword to $x$. We have $$\forall i> 1:\quad \Delta(x, x_i) \geq \Delta(x_1, x_i)-\Delta(x, x_1)\geq 1-\epsilon/10 - 1/2 = 1/2-\epsilon/10.$$
Thus, the weight that $x_2$ or $x_3$ gets is at most $2\epsilon/10$. On the other hand, $$\Delta(x_1, x) \geq \min\{\rho, 1-\epsilon/10-\rho\}= \frac{1-\epsilon/10}{2} - \left|\frac{1-\epsilon/10}{2} -\rho\right|.$$ Thus, the weight that $x_1$ receives is at most $\epsilon/10+|1-\epsilon/10-2\rho| \leq 2\epsilon/10+|1-2\rho|$. Summing up with the weight given to $x_2$ and $x_3$, we get that the total weight given to all codewords is at most $3\epsilon/5+|1-2\rho|$.
\end{proof}
The algorithm is as in Algorithm \ref{alg:SimulatorAdaptive}, now enhanced with list decoding and weighted voting. At the end of the first $\frac{6N}{7}$ rounds, a party decodes safely (and switches to only transmitting after that) if, for the leaf $u$ that has the most votes, the following holds: $\Psi=\frac{W(u)+W_{\emptyset}-W(v)}{N} > 1/7$, where $W(v)$ is the weighted vote of the leaf with the second most votes and $W_{\emptyset}$ is the total weight of decoded codewords that are inconsistent with the local important edge-set (and thus indicate an error). We call $\Psi$ the party's \emph{confidence}.
The following lemma completes the proof of \Cref{thm:27upperboundoverview}.
\begin{lemma} If the error rate is at most $2/7 - \epsilon$, then in the end, for both parties, the leaf with the highest weighted votes is the correct leaf.
\end{lemma}
\begin{proof}
We first show that at the end of the first $6N/7$ rounds, at least one party has confidence at least $1/7+\epsilon/2$. For this, we show the summation of the confidences of the two parties to be at least $2/7+\epsilon$. The reasoning is similar to that in the proof of \Cref{lem:14constant}. Let $t$ be the time at which the common path in $E_A \cup E_B\cup P_{c}$ ends in a leaf and note that for each two consecutive blocks before time $t$ in which the corruption rate is at most $1-\epsilon/3$, the common path in $E_A \cup E_B\cup P_{c}$ gets extended by (at least) one edge (towards the correct leaf). Hence, at most $n$ such (``not completely corrupted") block pairs are spent on growing the common path in $E_A \cup E_B\cup P_{c}$. Furthermore, $t$ happens after at most $2N(2/7-\epsilon)(1+\epsilon/3) <N (4/7-\epsilon)$ blocks. Specifically, $t$ happens at least $N(2/7+\epsilon)$ blocks before the end of the first $6N/7$ blocks. On the other hand, for each corruption rate $x$ on one block, only the confidence of the party receiving it gets affected and in the worst case it goes down by $\frac{1-2x(1+\epsilon/3)}{N}$. Since the adversary's total budget is $N(2/7-\epsilon)$, at the end of the first $6N/7$ rounds, the summation of the confidences of the two parties is at least $\frac{6N/7 - 2n - 2N(2/7-\epsilon)(1+\epsilon/3)}{N} \geq 2/7+\epsilon$.
Now we argue that if a party decodes at the end of the first $6N/7$ blocks because it has confidence at least $1/7$, then the decoding of this party is indeed correct. The proof is by contradiction. Using \Cref{lem:listweight}, each block with corruption rate $x$ can add a weight of at most $\max\{2x-1 +3\epsilon/5, 0\}$ to the set of bad leaves or to those incorrect codeword edge-sets that do not form a common path with the local edge-sets, i.e., the weight given to $W_{\emptyset}$. Hence, since the adversary has a budget of $N(2/7-\epsilon)$, it can create a weight of at most $2N(2/7-\epsilon) -N/3(1-3\epsilon/5) < N/7 - \epsilon/5 < N/7$, which means that there cannot be a confidence of more than $1/7$ on an incorrect leaf.
The above shows that at least one party decides after $6N/7$ blocks, and that if a party decides then, it has decided on the correct leaf. If a party does not decide, it listens for the next $N/7$ blocks, during which the other party is constantly transmitting. It is easy to see that in this case, the leaf with the maximum vote at this listening party is the correct leaf.
\end{proof}
\section{Introduction}
``Interactive Coding'' or ``Coding for Interactive Communication'' studies the task of protecting an interaction between two parties in the presence of communication errors. This line of work was initiated by Schulman~\cite{Schulman} who showed, surprisingly at the time, that protocols with $n$ rounds of communication
can be protected against a (small) constant fraction of adversarial errors while incurring only a constant overhead in the total communication complexity.
In a recent powerful result that revived this area, Braverman and Rao~\cite{BR11} explored the maximal rate of errors that can be tolerated in an interactive coding setting. They showed the existence of a protocol that handles a $1/4 - \epsilon$ error rate and gave a matching negative result under the assumption that the coding scheme is {\em non-adaptive} in deciding which player transmits (and which one listens) at any point in time. They left open the questions of whether the $1/4$ error rate can be improved by allowing adaptivity (see \cite[Open Problem 7]{BravermanAllerton} and \cite[Conclusion]{BR11}) or by relaxing the decoding requirement to list decoding (see \cite[Open Problem 9]{BravermanAllerton} and \cite[Conclusion]{BR11}), that is, requiring each party only to give a small list of possible outcomes of which one has to be correct.
In this work we answer both questions in the affirmative (in a somewhat different regime of computational and communication resources): We give a rate adaptive coding scheme that tolerates any error rate below $2/7$. We furthermore show a matching impossibility result which strongly rules out any coding scheme achieving an error rate of $2/7$.
Moreover, we also consider adaptive coding schemes in the setting of \cite{FGOS13}, in which both parties share some randomness not known to the adversary. While non-adaptive coding schemes can tolerate any error rate below $1/2$, this bound increases to $2/3$ using adaptivity, which we show is also best possible.
Lastly, we initiate the study of list decodable interactive communication. We show that allowing both parties to output a constant size list of possible outcomes allows non-adaptive coding schemes that are robust against any error rate below $1/2$, which again is best possible, in both the adaptive and non-adaptive settings.
All our coding schemes are deterministic and work with communication and computation being polynomially bounded in the length of the original protocol. We note that most previous works considered the more restrictive setting of a linear amount of communication (often at the cost of exponential time computations). Interestingly, our matching negative results hold even if the communication and computation are unbounded. We show that this sharp threshold behavior extends to many other computational and communication complexity measures and is common to all settings of interactive communication studied in the literature. In fact, an important conceptual contribution of this paper is the formulation of a strong working hypothesis that stipulates that maximum tolerable error rates are invariant under changes in complexity and efficiency restrictions on the coding scheme. Throughout this paper this hypothesis led us to consider the simplest setting for positive results and then to expand on the insights derived to obtain the more general positive results. We believe that in this way the working hypothesis yields a powerful guideline for the design of simple and natural coding schemes, as well as for the search for negative results. This has already been partially substantiated by subsequent results (see \cite{GH13} and \Cref{sec:appendix:furtherresultssupportingthehypothesis}).
\paragraph{Organization}
In what follows,
we briefly introduce the interactive communication model more formally
in \Cref{sec:settingDefinitions}. We also introduce the model
for adaptive interaction there. Then, in \Cref{sec:overview}, we explain our results as well as the underlying high-level ideas and techniques.
In \Cref{sec:exchange} we describe the simple Exchange problem
and give an adaptive protocol tolerating a $2/7$ fraction of errors
in \Cref{sec:27rateadaptationXOR}. (Combined with \Cref{sec:overview},
this section introduces all the principal ideas of the paper; the rest
of the paper may be considered supplemental material.)
In the remainder of \Cref{sec:exchange}, we prove that an error rate of $2/7$ is the
best achievable for the Exchange problem and thus also for the general
case of interactive communication.
In \Cref{sec:simulators}, we give interactive coding schemes over
large alphabets tolerating a $2/7$ error rate for general interactions. In
\Cref{sec:AlgLargeRounds} we then convert these to coding schemes
over constant size alphabets.
Finally, in \Cref{sec:sharedrand}, we give protocols tolerating a $2/3$ error rate
in the presence of shared randomness. The appendix contains some technical proofs, as well as some simple
impossibility results showing tightness of our protocols.
\section{New and Old Settings for Interactive Coding}\label{sec:settingDefinitions}
In this section, we define the classical interactive coding setup as well as all new settings considered in this work, namely, list decoding, the shared randomness setting, and adaptive protocols.
We start with some standard terminology:
An $n$-round {\em interactive protocol} $\Pi$ between two players Alice and Bob is given by two functions $\Pi_A$ and $\Pi_B$. For each {\em round} of communication, these functions map (possibly probabilistically) the history of communication and the player's private input to a decision on whether to listen or transmit, and in the latter case also to a symbol of the {\em communication alphabet}. All protocols studied prior to this work are {\em non-adaptive}\footnote{Braverman and Rao~\cite{BR11} referred to protocols with this property as \emph{robust}.} in that the decision of a player to listen or transmit deterministically depends only on the round number, ensuring that exactly one party transmits in each round. In this case, the {\em channel} delivers the chosen symbol of the transmitting party to the listening party, unless the adversary interferes and alters the symbol arbitrarily. In the adversarial channel model with {\em error rate} $\rho$, the number of such errors is at most $\rho n$. The outcome of a protocol is defined to be the transcript of the interaction.
A protocol $\Pi'$ is said to {\em robustly simulate} a protocol $\Pi$ for an error rate $\rho$ if the following holds: Given any inputs to $\Pi$, both parties can {\em uniquely decode} the transcript of an error free execution of $\Pi$ on these inputs from the transcript of any execution of $\Pi'$ in which at most a $\rho$ fraction of the transmissions were corrupted. This definition extends easily to {\em list decoding} by allowing both parties to produce a small (constant size) list of transcripts that is required to include the correct decoding, i.e., the transcript of $\Pi$. We note that the simulation $\Pi'$ typically uses a larger alphabet and a larger number of rounds. While our upper bounds are all deterministic, we strengthen the scope of our lower bounds by also considering \emph{randomized protocols} in which both parties have access to independent private randomness. We also consider the setting of \cite{FGOS13} in which both parties have access to \emph{shared randomness}. In both cases we assume that the adversary does not know the randomness and we say a randomized protocol robustly simulates a protocol $\Pi$ with \emph{failure probability} $p$ if, for any input and any adversary, the probability that both parties correctly (list) decode is at least $1 - p$.
We now present the notion of an {\em adaptive} protocol. It turns out that defining a formal model for adaptivity leads to several subtle issues. We define the model first and discuss these issues later.
In an {\em adaptive} protocol, the communicating players are allowed to base their decision on whether to transmit or listen (probabilistically) on the communication history. In particular, this allows players to base their decision on estimates of the amount of errors that have happened so far (see \Cref{sec:whyadaptivityhelps} for why this kind of adaptivity is a natural and useful approach). This can lead to rounds in which both parties transmit or listen simultaneously. In the first case no symbols are delivered while in the latter case the symbols received by the two listening parties are chosen by the adversary, without it being counted as an error.
\paragraph{Discussion on the adaptivity model.}
It was shown in \cite{BR11} that protocols which under no circumstances have both parties transmit or listen simultaneously are necessarily non-adaptive. Any model for adaptivity must therefore allow parties to simultaneously transmit or listen and specify what happens in either case. Doing this and also deciding on how to measure the amount of communication and the number of errors leads to several subtle issues.
While it seems pessimistic to assume that the symbols received by two simultaneously listening parties are determined by the adversary, this is a crucial assumption. If, e.g., a listening party could find out without doubt whether the other party transmitted or listened, by receiving silence in the latter case, then uncorrupted communication could be arranged by simply using the listen/transmit state as an incorruptible one-bit communication channel. More subtle points arise when considering how to define the quantity of communication on which the adversary's budget of corruptions is based. The number of transmissions performed by the communicating parties, for example, seems like a good choice. This however would make the adversary's budget a variable (possibly probabilistic) quantity that, even worse, depends non-trivially on when and how this budget is spent. It would furthermore allow parties to time-code, that is, to encode a large number (even an encoding of all answers to all possible queries) in the time between two transmissions. While time-coding strategies do not seem to lead to very efficient algorithms, they would prevent strong lower bounds which show that even over an unbounded number of rounds no meaningful communication is possible (see, e.g., \Cref{thm:27lowerboundoverview} which proves exactly this for an error rate of $2/7$).
Our model avoids all these complications. For non-adaptive protocols that perfectly coordinate a designated sender and receiver in each round, our model matches the standard setting. For the necessary case that adaptive parties fail to coordinate, our model prevents any signaling or time-sharing capabilities and in fact precludes any information exchange. This matches the intuition that in a conversation no advantage can be derived from both parties speaking or listening at the same time. It also guarantees that the product of the number of rounds and the bit-size of the communication alphabet is a clean and tight information theoretic upper bound on the total amount of information or entropy that can be exchanged (in either direction) between the two parties. This makes the number of rounds the perfect quantity on which to base the adversary's budget. All this makes our model, in hindsight, arguably the cleanest and most natural extension of the standard setting to adaptive protocols (see also \Cref{sec:appendix:naturalmodel} for a natural interpretation as a wireless channel model with bounded unpredictable noise). The strongest argument for our model, however, is the fact that it allows us to prove both strong and natural positive \emph{and} negative results, implying that our model does not overly restrict or empower the protocols or the adversary.
\subsection{Quick guide to running \texttt{BeamHNL} in \texttt{GENIE}} \label{appdx: code}
A fork of \verb|GENIE| with the \verb|BeamHNL| module implementation may be found on GitHub \cite{HNLGENIE}.
The configuration file has been designed with ease-of-use in batch jobs, such as might be submitted to a computing grid, in mind.
For this reason, there exists a single file \verb|config/CommonHNL.xml| which houses all the options that the module's components need to know about in order to simulate the HNL production, propagation, and decay.
A sample file may be found in \verb|src/contrib/beamhnl|, and explanations for the various options are available in \verb|config/NHLPrimaryVtxGenerator.xml|.
In this section, we will briefly outline the main components.
\begin{itemize}
\item \verb|ParameterSpace| defines $M_{\textrm{N}4}$ in $\text{GeV}/c^{2}$, $\{\left|U_{\alpha 4}\right|^{2}\}$, and Dirac vs. Majorana nature of the HNL.
\item \verb|InterestingChannels| enumerates the 10 decay channels (the 7 summarised in Table \ref{tab: decayChannels} and $N_{4} \rightarrow \pi^{\pm}\pi^{0}\ell^{\mp}, \pi^{0}\pi^{0}\nu$) kinematically accessible to an HNL below the kaon mass. Entries set to \verb|false| will be inhibited from entering the event record, whereas \verb|true| entries are treated as valid truth signal channels.
\item \verb|CoordinateXForm| defines the unique translation and rotation vectors $\boldsymbol{T}, \boldsymbol{R}$ from NEAR to BEAM and from NEAR to USER coordinates. $\boldsymbol{T}$ is given in metres, with respect to the NEAR system, and $\boldsymbol{R}$ is a vector of 3 Euler angles, following the ``extrinsic $X-Z-X$" convention; that is, the rotation matrix $R$ is given as
\begin{equation}
R(\alpha,\beta,\gamma) = R_{\textrm{X}}(\gamma)R_{\textrm{Z}}(\beta)R_{\textrm{X}}(\alpha),
\end{equation}
where $X,Y,Z$ are the fixed NEAR axes.
\item \verb|FluxCalc| provides switches for the user to enable/disable certain features, namely the module's polarisation accept/reject weight, and whether the simulation should evaluate Eq. \ref{eq: acceptance_correction} assuming the parent to be perfectly focused (setting $\zeta_{+}$ to one-half the detector's angular opening, and $\zeta_{-} = 0$).
\end{itemize}
For the purposes of running this module, an input \verb|dk2nu|-like flux ROOT flat-tree and a ROOT geometry file describing the detector are required.
Sample inputs that are module-compliant are supplied along with the source code in \verb|src/contrib/beamhnl| for the user to be able to run the module immediately.
\par A folder \verb|flatDk2nus| contains two flat ROOT trees, corresponding to one \verb|dk2nu| flux file from NuMI in neutrino mode, and one in antineutrino mode \cite{NuMIBeamFlux}.
It also contains scripts to produce ROOT flat-trees from \verb|dk2nu| flux files.
These flat-trees can then be used as input of the \verb|BeamHNL| module.
Detailed instructions on how to generate these are written in the \verb|README| file located inside the folder. \\
Three ROOT macros \verb|makeBox.C|, \verb|makeCylinder.C|, and \verb|makeHexagon.C| along with three respective outputs are also inside the \verb|contrib/beamhnl| directory, which allow the user to make three different ROOT geometries of arbitrary dimensions and rotation with respect to the USER coordinate system. \\
To enable the \verb|BeamHNL| module, the user must first configure \verb|GENIE| appropriately by adding the following line to the \verb|configure| script (an example can be seen at the \verb|GENIE| website \cite{GENIEWebsite}):
\begin{center}
\verb|--enable-heavy-neutral-lepton \ |
\end{center}
After running \verb|make|, the \verb|BeamHNL| module is ready.
The archetypal run command is
\begin{center}
\verb|gevgen_hnl -n <nSignalEvents>| \verb|-f <path/to/flux/dir> -g <path/to/geom/file.root>|
\end{center}
Detailed instructions can be referred to by passing the \verb|-h| flag.
\subsection{HNL production and decay rates} \label{appdx: QM}
The available production channels for an HNL with mass $M_{\textrm{N}4} \lesssim m_{\textrm{K}}$ are summarised in Table \ref{tab: prodChannels}.\footnote{
Throughout this section, we will engage in a mild abuse of notation and write down such expressions as $K^{\pm} \rightarrow N_{4} + \mu^{\pm}$.
The reader will notice that $N_{4}$ has been defined in Eq. \ref{eq: fourth_nu_mixing} as a \emph{mass eigenstate}, rather than a flavour eigenstate.
In order to shorten notation, we will sacrifice formal correctness (i.e. $K^{\pm} \rightarrow \nu_{\mu} + \mu^{\pm},\,\,\nu_{\mu} = \sum_{i}U_{\mu i}\nu_{i} + U_{\mu 4}N_{4}$) and directly concentrate on the mixed-in element $U_{\alpha 4}N_{4}$, hoping that the implied association does not cause confusion.
}
For each channel, the threshold, kinematic scaling $\mathcal{K}$ and SM branching ratio are listed.
The kinematic scaling $\mathcal{K}$ is defined through the decay width as in Eq. \ref{eq: kineScaling}
\begin{equation}
\Gamma\left(P \rightarrow N_{4} + \ell_{\alpha} + ...\right) = \left|U_{\alpha 4}\right|^{2}\mathcal{K}\cdot\Gamma\left(P \rightarrow \nu + \ell_{\alpha} + ...\right).
\end{equation}
The decay width is then, for a given channel, written as
\begin{equation}
\Gamma\left(M_{\textrm{N}4},|U_{\alpha 4}|^{2}\right) = \mathcal{K}\left(M_{\textrm{N}4}\right)\cdot|U_{\alpha 4}|^{2}.
\end{equation}
For each parent species ($\pi^{\pm}, K^{\pm}, K^{0}_{\textrm{L}}, \mu^{\pm}$) a series of scores $\{s_{i}\}, s_{i+1} = s_{i} + \Delta s$ is constructed, which is then used to determine the production channel for the HNL.
The scores are calculated as
\begin{equation}
\Delta s = \frac{\mathfrak{B}_{\text{channel}}}{\mathfrak{B}_{\text{tot}}},
\end{equation}
where $\mathfrak{B}_{\text{tot}}$ is the sum of $\mathfrak{B}_{\text{channel}}$ over all kinematically accessible HNL production channels (see Table \ref{tab: prodChannels}).
The following definitions are used for the case of 2-body HNL production \cite{Shrock1981}:
\begin{subequations}\label{eq: scores}
\begin{eqnarray}
\delta(m,M) &\equiv& \delta_{M}^{m}:= \frac{m^{2}}{M^{2}},
\\
\lambda(x,y,z) &:=& x^{2} + y^{2} + z^{2} - 2(xy + yz + zx), \label{eq: kallen}
\\
f_{m}(x,y) &:=& x + y - (x-y)^{2},
\\
\rho(x,y) &:=& f_{m}(x,y)\cdot\lambda^{1/2}(x,y,1),
\\
\mathcal{P}_{\ell}(x,y) &:=& \frac{\rho(x,y)}{x(1-x)^{2}}.
\end{eqnarray}
\end{subequations}
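These definitions translate directly into code. Below is a minimal Python sketch (the function names are our own); a convenient sanity check is that $\mathcal{P}_{\ell}(x,0)=1$ identically, since $f_{m}(x,0)\,\lambda^{1/2}(x,0,1)=x(1-x)^{2}$:

```python
import math

def delta(m, M):
    # delta(m, M) = m^2 / M^2
    return m**2 / M**2

def kallen(x, y, z):
    # Kallen triangle function lambda(x, y, z)
    return x**2 + y**2 + z**2 - 2.0 * (x*y + y*z + z*x)

def f_m(x, y):
    return x + y - (x - y)**2

def rho(x, y):
    return f_m(x, y) * math.sqrt(kallen(x, y, 1.0))

def p_ell(x, y):
    # P_ell(x, y) = rho(x, y) / (x (1 - x)^2)
    return rho(x, y) / (x * (1.0 - x)**2)
```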
Calculations of the three-body HNL scaling factor for production have been done in the literature.
We have used the helicity-summed scaling factors reported in \cite{Ballett2020} to construct through interpolation a scaling function $\mathcal{S}_{\textrm{P}\ell}$, with $P = K^{\pm}, K^{0}_{\textrm{L}}$ and $\ell = e,\mu$.
For the special case of muon decay to HNL, we start from the known decay $\mu^{\pm} \rightarrow \nu_{\mu}\nu_{e}e^{\pm}$ and then ``promote" either the $\nu_{\mu}$ or $\nu_{e}$ to HNL, depending on the mixings $|U_{\textrm{e}4}|^{2}, |U_{\upmu 4}|^{2}$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Images/betterFeynman.pdf}
\caption{Feynman diagrams for the decay $N_{4} \rightarrow \pi^{\pm}+\pi^{0}+\ell^{\mp}$.}
\label{fig:pipi0ell}
\end{figure}
\par We adopt the HNL decay widths in the context of an effective field theory describing interactions of HNL with mesons, as detailed in \cite{ColomaDUNEHNL}.
For the double-pion channels $N_{4} \rightarrow \pi^{\pm}\pi^{0}\ell^{\mp}, \pi^{0}\pi^{0}\nu$, whose thresholds for emission lie below the kaon mass, the decay widths reported in the literature have been noted to be dominated by the chain involving the emission of an on-shell $\rho$.
Because $m_{\uprho} > m_{\textrm{K}}$, this argument cannot be applied in the case $M_{\textrm{N}4} < m_{\textrm{K}}$, and it is necessary to perform an explicit calculation to estimate the decay width resulting from double-pion channels.
To that effect, we have used the Lagrangian for Dirac particles, made public by the authors of \cite{ColomaDUNEHNL} in the \verb|FeynRules| model database \cite{FeynRulesWebsite}, and extracted the double-differential decay rate over final-state particle energies using \verb|FeynRules| \cite{FeynRulesManual}, \verb|FeynArts| \cite{FeynArts} and \verb|FeynCalc| \cite{FeynCalc1, FeynCalc2, FeynCalc3}.
The resulting expression for the two-dimensional $\textrm{d}^{2}\Gamma/\textrm{d}E_{\uppi}\textrm{d}E_{\ell}$ depends on the energy of the $\pi^{0}$ due to energy conservation in the HNL rest frame.
Therefore, the integrated decay rate is obtained at runtime by numerically integrating the differential decay rate.
\par As can be seen from the relevant tree-level Feynman diagrams in Fig. \ref{fig:pipi0ell}, the decay $N_{4} \rightarrow \pi^{+}\pi^{0}\ell^{-}$ is mediated by both light and heavy neutrinos.
Compared to the left-hand diagrams, the right-hand diagram where $N_{4}$ enters both pion vertices is suppressed by a factor $\left|U_{\alpha 4}\right|^{2}/\left|U_{\alpha i}\right|^{2} \ll 1$, and is safely ignored.
\par For the $\pi^{0}\pi^{0}\nu$ diagrams, the intermediate propagator is always an $N_{4}$, which renders the entire channel's decay width subleading to $\pi^{\pm}\pi^{0}\ell^{\mp}$.
\par Comparing the channels $N_{4} \rightarrow \pi^{\pm}\pi^{0}\ell_{\alpha}^{\mp}$, the $\alpha = e$ case has a larger decay width due to the larger phase space available to the final state.
The result is
\begin{equation}
\frac{\Gamma\left(N_{4}\rightarrow\pi^{+}\pi^{0}e^{-}\right)}{\sum_{\text{all lighter channels}}\Gamma} \lesssim 10^{-11},
\end{equation}
confirming the argument that the essential process for double-pion production is the decay of an HNL into an on-shell $\rho$.
The calculation proceeds similarly for Majorana HNL.
\par To conserve computing time, we only simulate those channels that the user has explicitly defined in the configuration file as being ``interesting" as a signal.
For each run of the module, a C++ \verb|std::map<HNLDecayMode_t, double>| is constructed which contains the accessible and interesting channels and their decay widths.
These are then used to construct scores similarly to Eq. \ref{eq: scores}, after which a standard Monte Carlo transformation method \cite{NumericalRecipes3rd} maps a uniform random number to a decay channel and fills the appropriate decay product list for use with \verb|GENIE|'s phase-space generator.
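A minimal Python sketch of this transformation method might look as follows (the channel names and data layout are invented for illustration; the actual module stores widths in a C++ \verb|std::map|):

```python
import random

def pick_channel(widths, u=None):
    # Map a uniform random number u in [0, 1) to a decay channel using
    # cumulative scores Delta s = Gamma_channel / Gamma_tot.
    total = sum(widths.values())
    if u is None:
        u = random.random()
    acc = 0.0
    for channel, gamma in widths.items():
        acc += gamma / total
        if u < acc:
            return channel
    return channel  # guard against floating-point rounding at u ~ 1
```

With widths \{``mu pi'': 3.0, ``inv'': 1.0\}, the cumulative scores are $0.75$ and $1.0$, so $u = 0.5$ selects the first channel.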
\par To calculate the decay rates, we make use of the following definitions found in \cite{ColomaDUNEHNL}:
\begin{subequations}
\begin{eqnarray}
&B_{1}& := \frac{1}{4}\left(1 - 4\sin^{2}\theta_{\textrm{W}} + 8\sin^{4}\theta_{\textrm{W}}\right),
\\
&B_{2}& := \frac{1}{2}\sin^{2}\theta_{\textrm{W}}\left(2\sin^{2}\theta_{\textrm{W}} - 1\right),
\\
&\mathcal{C}_{\alpha}\left(M_{\textrm{N}4}\right)& := \sum_{\alpha}|U_{\alpha 4}|^{2}\cdot\big[F_{1}\left(m_{\alpha}/M_{\textrm{N}4}\right)B_{1}
\\
\nonumber &&+ F_{2}\left(m_{\alpha}/M_{\textrm{N}4}\right)B_{2}\big],
\\
&\mathcal{D}_{\alpha}\left(M_{\textrm{N}4}\right)& := |U_{\alpha 4}|^{2}\sin^{2}\theta_{\textrm{W}}
\\
\nonumber &&\times \left[2F_{1}\left(M_{\textrm{N}4}\right) + F_{2}\left(M_{\textrm{N}4}\right)\right],
\\
&\mathcal{G}\left(x,y\right)& := \lambda^{1/2}\left(x,y,1\right)
\\
\nonumber &&\times \left[1-y^{2} - x^{2}(2-x^{2}+y^{2})\right],
\\
&F_{1}(x) =& (1 - 14x^{2} - 2x^{4} - 12x^{6})\sqrt{1 - 4x^{2}}
\\
\nonumber &&+ 12x^{4}(x^{4}-1)L(x),
\\
&F_{2}(x) =& 4x^{2}(2 + 10x^{2} - 12x^{4})\sqrt{1 - 4x^{2}}
\\
\nonumber &&+ 24x^{4}(1 - 2x^{2} + 2x^{4})L(x),
\\
&L(x) =& \ln\left[\frac{1 - 3x^{2} - (1-x^{2})\sqrt{1 - 4x^{2}}}{x^{2}\left(1 + \sqrt{1 - 4x^{2}} \right)}\right],
\end{eqnarray}
\end{subequations}
where $\theta_{\textrm{W}}$ is the Weinberg mixing angle, and $\lambda(x,y,z)$ is the K\"{a}ll\'{e}n function defined in Eq. \ref{eq: kallen}.
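For reference, these loop functions are straightforward to code up; the Python sketch below (with our own naming) is valid on the physical domain $0 < x < 1/2$, where the square roots are real:

```python
import math

def L_func(x):
    # L(x) as defined above, valid for 0 < x < 1/2
    s = math.sqrt(1.0 - 4.0 * x * x)
    return math.log((1.0 - 3.0*x*x - (1.0 - x*x) * s) / (x*x * (1.0 + s)))

def F1(x):
    s = math.sqrt(1.0 - 4.0 * x * x)
    return ((1.0 - 14.0*x**2 - 2.0*x**4 - 12.0*x**6) * s
            + 12.0 * x**4 * (x**4 - 1.0) * L_func(x))

def F2(x):
    s = math.sqrt(1.0 - 4.0 * x * x)
    return (4.0 * x**2 * (2.0 + 10.0*x**2 - 12.0*x**4) * s
            + 24.0 * x**4 * (1.0 - 2.0*x**2 + 2.0*x**4) * L_func(x))
```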
\subsection{Coordinate systems} \label{appdx: coords}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Images/extrinsicEuler.pdf}
\caption{Extrinsic Euler rotations. The fixed system $(X,Y,Z)$ is rotated to the new system $(X_{\textrm{U}}, Y_{\textrm{U}}, Z_{\textrm{U}})$ by applying successive rotations first about $X$ (green), then about $Z$ (blue), and finally about $X$ again (red).}
\label{fig:eulerAngles}
\end{figure}
A general detector can be both displaced and rotated arbitrarily with respect to the NEAR frame.
One can parametrise any such configuration by two vectors: one translation and one rotation.
We have made the choice to use extrinsic Euler angles to write rotations (see Fig. \ref{fig:eulerAngles}): that is, the rotation matrix describing the transition between two coordinate systems $(X, Y, Z), (X_{\textrm{U}}, Y_{\textrm{U}}, Z_{\textrm{U}})$ centred around the same point is written as
\begin{align}
\begin{split}
&R(\alpha, \beta, \gamma) = \\
&= R_{\textrm{X}}(\gamma)R_{\textrm{Z}}(\beta)R_{\textrm{X}}(\alpha) \\
&= \begin{pmatrix} 1 &0 &0 \\ 0 &c_{\gamma} &-s_{\gamma} \\ 0 &s_{\gamma} &c_{\gamma} \end{pmatrix} \begin{pmatrix} c_{\beta} &0 &-s_{\beta} \\ 0 &1 &0 \\ s_{\beta} &0 &c_{\beta} \end{pmatrix} \begin{pmatrix} 1 &0 &0 \\ 0 &c_{\alpha} &-s_{\alpha} \\ 0 &s_{\alpha} &c_{\alpha} \end{pmatrix} \\
&= \begin{pmatrix} c_{\beta} &-s_{\alpha}s_{\beta} &-c_{\alpha}s_{\beta} \\ -s_{\beta}s_{\gamma} &c_{\alpha}c_{\gamma} - c_{\beta}s_{\alpha}s_{\gamma} &-c_{\gamma}s_{\alpha} -c_{\alpha}c_{\beta}s_{\gamma} \\ c_{\gamma}s_{\beta} &c_{\beta}c_{\gamma}s_{\alpha} + c_{\alpha}s_{\gamma} &c_{\alpha}c_{\beta}c_{\gamma} - s_{\alpha}s_{\gamma} \end{pmatrix},
\end{split}
\end{align}
using $c_{\theta}, s_{\theta} \equiv \cos\theta, \sin\theta$.
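A quick numerical cross-check of this composite matrix against the product of the three elementary rotations (reproduced exactly as printed above) can be sketched in Python:

```python
from math import cos, sin

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def r_x(t):
    c, s = cos(t), sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def r_mid(t):
    # middle rotation matrix exactly as printed in the text
    c, s = cos(t), sin(t)
    return [[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def rotation(alpha, beta, gamma):
    # extrinsic composition R = R_X(gamma) R_mid(beta) R_X(alpha)
    return mat_mul(r_x(gamma), mat_mul(r_mid(beta), r_x(alpha)))

def explicit(alpha, beta, gamma):
    # composite matrix as written out in the text
    ca, sa = cos(alpha), sin(alpha)
    cb, sb = cos(beta), sin(beta)
    cg, sg = cos(gamma), sin(gamma)
    return [[cb, -sa*sb, -ca*sb],
            [-sb*sg, ca*cg - cb*sa*sg, -cg*sa - ca*cb*sg],
            [cg*sb, cb*cg*sa + ca*sg, ca*cb*cg - sa*sg]]
```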
\subsection{Bookkeeping} \label{appdx: book}
There are a certain number of quantities that must be kept track of during the simulation in order to correctly estimate the number of signal events, given a detector volume and number of protons on target.\footnote{By ``protons on target'' we mean the mean number of POT corresponding to a single HNL decay to a \emph{signal} event. This information is calculated taking the input detector volume and beamline simulation into account; different inputs will yield different outputs.}
These quantities are summarised in Fig. \ref{fig:POT_map}; we explain each step below.
\par Suppose that the user has selected a channel $C$ as the channel of interest for a particular detector; for example, $N_{4} \rightarrow \mu^{-} \pi^{+}$.
We work our way backwards in order to obtain, \textit{for each signal event}, an estimate of the number of POT $N_{\text{POT}}$ that would result in one signal event occurring in the detector, and return this estimate as a weight attached to the \verb|EventRecord| describing the signal event, which makes POT counting a straightforward loop on the analysis side.
\par First to be obtained is the expected number of \emph{total} HNL decays occurring in the detector, signal or not (for example, invisible decays $N_{4} \rightarrow \nu\nu\nu$ are almost never going to be considered signal events due to the inability to detect neutrinos in the final state directly), as
\begin{equation}
N_{H} = \frac{\sum_{i \in\text{all channels}}\Gamma_{i}}{\Gamma_{C}} \equiv \frac{\Gamma_{\text{tot}}}{\Gamma_{C}}.
\end{equation}
One then takes the detector geometry into account, by considering the HNL's lifetime $\tau = \hbar/\Gamma_{\text{tot}}$ and requiring that the HNL decays inside the detector volume.
Suppose a beam of HNL of rest-frame lifetime $\tau$ and velocity $\beta c$ propagates along the $z$ axis, and the detector volume is in between the planes $z = z_{1}$ and $z = z_{2}$.
The probability distribution for the HNL decay location is
\begin{equation}
p(z) = \frac{1}{\widetilde{N}}\exp\left(-\frac{z}{\beta c \gamma\tau}\right),
\end{equation}
where $\widetilde{N}$ is some normalisation constant.
The probability of decay inside the detector is then
\begin{equation}
P\left(z_{1} \leq z_{\text{decay}} \leq z_{2}\right) = \int_{z_{1}}^{z_{2}}\textrm{d}u\,p(u)
\end{equation}
which yields
\begin{align} \label{eq: survival}
\begin{split}
P &= \frac{\beta\gamma c\tau}{\widetilde{N}}\exp\left(-\frac{z_{1}}{\beta\gamma c\tau}\right)\left[1 - \exp\left(-\frac{z_{2}-z_{1}}{\beta\gamma c\tau}\right)\right] \\
&= P(\text{arrival}) \cdot P(\text{decay} | \text{arrival}).
\end{split}
\end{align}
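Equation \ref{eq: survival} is simple to evaluate numerically. The sketch below (variable names ours) takes the lab-frame decay length $\beta\gamma c\tau$ as a single parameter and uses the normalisation $\widetilde{N} = \beta\gamma c\tau$ appropriate for an exponential law on $[0,\infty)$:

```python
from math import exp

def decay_in_detector_prob(z1, z2, decay_length):
    # P = P(arrival) * P(decay | arrival), with decay_length = beta gamma c tau
    p_arrival = exp(-z1 / decay_length)
    p_decay_given_arrival = 1.0 - exp(-(z2 - z1) / decay_length)
    return p_arrival * p_decay_given_arrival
```

A useful consistency check is additivity over adjacent slabs, $P(0,z_{2}) = P(0,z_{1}) + P(z_{1},z_{2})$.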
\begin{widetext}
\begin{figure*}
\setlength{\belowcaptionskip}{-5pt}
\centering
\includegraphics[width=\textwidth]{Images/POT_map.pdf}
\caption{Sequence for estimating number of POT for each event, working backwards. See text for definitions.}
\label{fig:POT_map}
\end{figure*}
\end{widetext}
\begin{figure}
\setlength{\belowcaptionskip}{-1pt}
\centering
\includegraphics[width=0.48\textwidth]{Images/genie_angdeviation.pdf}
\caption{Calculation of deviation angles $\zeta_{\mp}$. The parent's momentum $\boldsymbol{p}_{\textrm{P}}$ is projected to the point $\textrm{V}_{0}$ such that $z_{\textrm{V}0} = z_{\textrm{C}}$, with $\textrm{C}$ the centre of the detector. The entry and exit points $\textrm{V}_{\mp}$ lie on the line $\epsilon: \boldsymbol{r}(u) = \boldsymbol{r}_{\textrm{V}0} + u\cdot\boldsymbol{\delta}$, where $\boldsymbol{\delta}$ is a sweep direction: $\boldsymbol{\delta} := \boldsymbol{r}_{\textrm{C}} - \boldsymbol{r}_{\textrm{V}0}$. The angles $\zeta_{\mp}$ are $\langle\boldsymbol{r}_{\mp}, \boldsymbol{p}_{\textrm{P}}\rangle$.}
\label{fig:deviation}
\end{figure}
Equation \ref{eq: survival} states that the HNL must first survive long enough to reach the detector, and then decay promptly while inside it.
\par We can generalise this to the full 3D picture.
Let D, E, and X be the HNL production point, and the intersections of its trajectory with the detector at entry and exit, respectively.
Then for every $N_{\textrm{P}}$ HNL emitted that could be accepted, $N_{\textrm{A}}$ will survive until the detector, and $N_{\textrm{H}}$ will decay without exiting the detector.
These quantities are straightforwardly obtained:
\begin{equation}\label{eq: lifetimeWeight}
N_{\textrm{A}} = N_{\textrm{H}}\cdot\exp\left(\frac{\ell}{\beta c \gamma \tau}\right),
\end{equation}
\begin{equation}
N_{\textrm{P}} = N_{\textrm{A}}\cdot\exp\left(\frac{L}{\beta c \gamma \tau}\right),
\end{equation}
where $\ell, L$ are the distances $|\boldsymbol{r}_{\textrm{X}} - \boldsymbol{r}_{\textrm{E}}|, |\boldsymbol{r}_{\textrm{D}} - \boldsymbol{r}_{\textrm{E}}|$.
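The backward chain $N_{\textrm{H}} \rightarrow N_{\textrm{A}} \rightarrow N_{\textrm{P}}$ reproduces the two exponential weights above verbatim; the Python sketch below uses our own variable names:

```python
from math import exp

def backtrack_counts(n_decayed, path_in_det, dist_to_entry, decay_length):
    # Work backwards: from N_H decays inside the detector to N_A arrivals
    # at the entry point, then to N_P HNL emitted in the accepted region.
    # decay_length = beta c gamma tau (lab frame).
    n_arrived = n_decayed * exp(path_in_det / decay_length)
    n_produced = n_arrived * exp(dist_to_entry / decay_length)
    return n_arrived, n_produced
```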
\par Next, we factor in those HNL that decayed before reaching the detector volume to get the total number $N_{\textrm{P}}$ of HNL produced in the accepted region, using once again Eq. \ref{eq: lifetimeWeight} but replacing $\ell = |\boldsymbol{r}_{\textrm{X}} - \boldsymbol{r}_{\textrm{E}}|$ with $L = |\boldsymbol{r}_{\textrm{E}} - \boldsymbol{r}_{\textrm{D}}|$.
\par Further backward, we have the estimate of acceptance; only those HNL emitted in the correct angular region would intersect the detector.
It is assumed that the decays of parents are isotropic in the parents' rest frame, which is correct for pseudoscalar mesons ($\pi^{\pm}, K^{\pm}, K^{0}$).
Furthermore, the detector is assumed to be sufficiently far away that the small angle approximation $\sin\theta \simeq \theta$ is valid.
The size and position of the detector defines a window in the observer's frame, which is then transformed into a rest-frame window using the collimation-effect function $f:[0,\pi]\rightarrow[0,\pi]$ described in Fig. \ref{fig:collimation}.
The lab-frame emission angle can be interpreted as an angular deviation of the HNL's trajectory from the parent's momentum.
The angular size of the detector is given by $\zeta_{+} - \zeta_{-}$, where the angles $\zeta_{\mp}$ are the minimum and maximum deviation angles for which the HNL's trajectory can intersect the detector.
Figure \ref{fig:deviation} shows how these deviation angles are calculated.
A ``sweep direction" $\boldsymbol{\delta}$ is constructed from the parent's momentum $\boldsymbol{p}_{\textrm{P}}$ and the detector centre C, and the points of entry $\textrm{V}_{-}$ and exit $\textrm{V}_{+}$ along this sweep are obtained, giving the deviation angles as the angles between the parent momentum and the vectors $\boldsymbol{r}_{-}, \boldsymbol{r}_{+}$.
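As an illustration of the deviation-angle construction, the sketch below computes $\zeta_{\mp}$ for the simplified case of a spherical detector with the production point at the origin; this spherical stand-in is an assumption made here for brevity, whereas the construction of Fig. \ref{fig:deviation} sweeps along $\boldsymbol{\delta}$ for the true detector shape:

```python
import numpy as np

def deviation_angles(p_parent, r_centre, radius):
    """Minimum/maximum deviation angles (zeta_-, zeta_+) between the parent
    momentum and an HNL trajectory that still intersects the detector,
    for a *spherical* detector of the given radius (simplified stand-in).
    Vectors are taken with the HNL production point at the origin."""
    p = np.asarray(p_parent, dtype=float)
    r = np.asarray(r_centre, dtype=float)
    # angle between the parent momentum and the line of sight to the centre
    cosang = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    theta_c = np.arccos(np.clip(cosang, -1.0, 1.0))
    # angular radius subtended by the detector
    half = np.arcsin(min(radius / np.linalg.norm(r), 1.0))
    return max(theta_c - half, 0.0), theta_c + half
```

For a parent pointing straight at the detector centre, $\zeta_{-} = 0$ and $\zeta_{+}$ is the full angular radius, as expected.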
\par We thus estimate the acceptance correction $\mathcal{A}$ induced by the collimation effect of HNL becoming dominated by their parent's Lorentz boost, as
\begin{equation}\label{eq: acceptance_correction}
\mathcal{A} = \frac{|\mathcal{I}_{\textrm{N}4}|}{|\mathcal{I}_{\nu}|},
\end{equation}
where $\mathcal{I}$ is the pre-image under $f$ of the angular opening $[\zeta_{-},\zeta_{+}]$.
Especially for large HNL masses, the pre-image often splits into a disjoint union $\mathcal{I}_{\textrm{N}4} = \mathcal{I}_{\textrm{F}} \sqcup \mathcal{I}_{\textrm{B}}$, where $\mathcal{I}_{\textrm{F}}, \mathcal{I}_{\textrm{B}}$ are the forward and backward angular regions in which HNL can be accepted.
One can contrast this with the case of light neutrinos, where $f$ increases monotonically and only forward-emitted neutrinos can reach the detector.
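A brute-force numerical sketch illustrates the disjoint pre-image entering Eq. \ref{eq: acceptance_correction}: for a hypothetical $1\,\,\text{GeV}$ kaon decaying to a $300\,\,\text{MeV}/c^{2}$ HNL, and a hand-picked angular window, the accepted rest-frame angles split into a forward and a backward interval:

```python
import math

def lab_angle(Theta, E_star, q, beta_P, gamma_P):
    """Lab-frame emission angle theta for rest-frame angle Theta."""
    num = q * math.sin(Theta)
    den = gamma_P * (beta_P * E_star + q * math.cos(Theta))
    return math.atan2(num, den)

def preimage(zeta_lo, zeta_hi, E_star, q, beta_P, gamma_P, n=20000):
    """Measure (rad) of {Theta : zeta_lo <= theta(Theta) <= zeta_hi},
    plus the accepted grid angles, by brute-force scan of [0, pi]."""
    acc = [math.pi * k / n for k in range(n + 1)
           if zeta_lo <= lab_angle(math.pi * k / n, E_star, q, beta_P, gamma_P) <= zeta_hi]
    return len(acc) * math.pi / n, acc

# K+ -> N4 + mu+ with M_N4 = 0.3 GeV, E_K = 1 GeV (hypothetical example)
mK, mmu, M, EK = 0.493677, 0.105658, 0.300, 1.0
E_star = (mK**2 - mmu**2 + M**2) / (2.0 * mK)
q = math.sqrt(E_star**2 - M**2)
gamma_P = EK / mK
beta_P = math.sqrt(1.0 - 1.0 / gamma_P**2)

# hand-picked narrow window: half of the maximum attainable lab angle
theta_max = max(lab_angle(math.pi * k / 20000, E_star, q, beta_P, gamma_P) for k in range(20001))
measure, acc = preimage(0.0, 0.5 * theta_max, E_star, q, beta_P, gamma_P)
```

The accepted set contains rest-frame angles near both $\Theta = 0$ and $\Theta = \pi$, with a gap in between: the forward/backward split described above.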
In the end, the number of total HNL emitted (``ancestry events") is given by
\begin{equation}
N_{\textrm{C}} = \frac{N_{\textrm{P}}}{\sum_{\alpha=\textrm{e},\upmu,\uptau}\left|U_{\alpha 4}\right|^{2}}\cdot \frac{1}{\omega_{\text{det}}\mathcal{B}^{2}\mathcal{A}},
\end{equation}
where $\mathcal{B}, \mathcal{A}$ were defined in Eqs. \ref{eq: boostFactor}, \ref{eq: acceptance_correction} and $\omega_{\text{det}}$ is the angular size of the detector in the lab frame, with $D$ at the origin.
The angular size of the detector in the parent's rest frame is then $\omega_{\text{det}}\mathcal{B}^{2}$, assuming its face is roughly perpendicular to the parent momentum; the acceptance correction $\mathcal{A}$ parametrises the intrinsic increase of the probability that a randomly chosen direction for HNL emission will be boosted such that the HNL gets accepted.
The prefactor $(\sum |U_{\alpha 4}|^{2})^{-1}$ is inserted to correct for the fact that, to conserve computing resources, we assume all parent decays result in an HNL. \\
Finally, we estimate $N_{\text{POT}} = N_{\textrm{C}}\cdot\widetilde{N}(P)$, $\widetilde{N}(P) = n(\text{all particles})/n(P)$, where $n$ is the number of particles produced in a $p$+target interaction.
For example, if $n(\pi^{+}) = 0.4$, $n(K^{+}) = 0.1$, $n(K^{0}) \simeq n(K^{0}_{L}) = 0.05$, $n(p) = 0.15$, and $n(\mu^{+}+\text{other}) \simeq 0.05$, then the relevant factors are
\begin{equation}
\widetilde{N}\left\{\pi^{+}, K^{+}, K^{0}_{\textrm{L}}, \mu^{+}, p\right\} = \left\{1.875, 7.5, 15, 15, 5\right\}.
\end{equation}
For practical purposes, the code expects the values $\widetilde{N}(P)$ as inputs from the user in the configuration file.
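For concreteness, the example multiplicities above chain together as follows (illustrative numbers only, mirroring the worked example in the text):

```python
# Worked example of the multiplicity factors N~(P) = n(all particles) / n(P),
# using the illustrative per-interaction yields quoted in the text.
yields = {"pi+": 0.4, "K+": 0.1, "K0L": 0.05, "mu+": 0.05, "p": 0.15}
n_all = sum(yields.values())                       # 0.75 particles per interaction
Ntilde = {k: n_all / v for k, v in yields.items()}
```

Note that the rarer the parent species, the larger its factor $\widetilde{N}(P)$.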
\subsection{HNL production in beamline}
The starting point for deriving a neutrino flux is a precalculated spectrum of particles, such as is typically produced from proton interactions on a target in a neutrino beamline.
These particles (such as $\pi^{\pm}, K^{\pm}, K^{0}_{\textrm{L}}, \mu^{\pm}$) are then usually focused by a series of magnets to perform some charge selection.
The particles then propagate downstream, decaying in some suitably long decay volume to neutrinos (or particles that eventually decay to neutrinos).
They are referred to as ``parents" if they decay directly to neutrinos, or more generally ``ancestors" if one of their decay products is a parent.
One or more particle absorbers are normally placed in between the decay volume and the detector, so that the only particles that survive to reach the detector are neutrinos.
This same production philosophy applies to HNL, which by Eq. \ref{eq: fourth_nu_mixing} have a probability $\propto \left| U_{\alpha 4}\right|^{2}$ to be the mass state that corresponds to the $\nu_{\alpha}$ flavour state made during neutrino production.
\par Experiments simulate the spectrum for each parent species using a description of their beamlines and suitable external experimental data (\cite{NA49, NA61}) to constrain the simulation uncertainty \cite{NuMIBeamFlux}.
There are various formats the output of this simulation can be stored in; we have chosen to adapt the \verb|dk2nu| format \cite{Dk2NuProposal} developed for the Fermilab Intensity Frontier experiments to a ``flat dk2nu" format which mirrors the \verb|dk2nu| tree structure, without containing any complex classes.
This was done to minimise the build complexity for \verb|GENIE|.
However, other flux formats can readily be converted into the format required for this simulation, and an example input flux with the necessary structure has been provided in the \verb|$GENIE/src/contrib/beamhnl| directory accompanying our module.
\par One application of this general format is the ability to use this module not only for accelerator neutrino experiments such as \cite{DUNEBSMOverview, SBNReview, DarkQuestHNL}, but also in the context of higher-energy collider neutrino experiments with detectors lying downstream of the neutrino production point \cite{SHiPSensitivity, FASERHNL}.
This is particularly interesting, because at higher energies heavy mesons (such as $D, D_{s}$), which can decay to HNL heavier than the kaon mass, are produced copiously enough to have a strong sensitivity to HNL with $M_{\textrm{N}4} > m_{\textrm{K}}$.
\par We have used the production branching ratios from \cite{Shrock1981, Ballett2020} for the evaluation of HNL production probabilities, for the most commonly produced parent types: $\pi^{\pm}, K^{\pm}, K^{0}_{L}$ (directly from the beamline) and $\mu^{\pm}$ (themselves produced in hadron decays).
Further hadron types are not yet a part of our simulation, but are compelling targets to include in light of the above fact.
\par To conserve computing resources, we force every parent in the simulation to decay to HNL; that is equivalent to using the uncorrected branching ratios from Fig. \ref{fig:prodBR}.
This is corrected in the simulation bookkeeping by applying Eq. \ref{eq: pseudounitarity} on the production probability of the HNL.
\par A salient feature of the HNL production is that, depending on the mass of the HNL, the acceptance of the detector as seen by the HNL at its production point can change significantly, owing to the fact that HNL travel with $\beta_{\textrm{N}4} < 1$.
A more detailed review of massive-particle kinematics can be found in \cite{rubbia_2022}. \\
\par Consider a parent $P$ of energy $E_{\textrm{P}}$, decaying leptonically to a neutral lepton $L$ which can either be light ($\nu$) or heavy ($N_{4}$) and a charged lepton $\ell$.
The energy of $L$ in the rest frame of $P$ is then
\begin{equation}
E_{\textrm{L}}^{*} = \frac{m_{\textrm{P}}^{2} - m_{\ell}^{2} + m_{\textrm{L}}^{2}}{2m_{\textrm{P}}}.
\end{equation}
Suppose, without loss of generality, that an observer sees $P$ propagate along the $z$ axis, $\boldsymbol{p}_{\textrm{P}} = (0, 0, p_{\textrm{P}})$.
Then the angle $\Theta$ at which $L$ is emitted with respect to the $z'$ axis in the rest frame can be related to the emission angle $\theta$ in the observer's frame, as
\begin{equation}\label{eq: collimation_effect}
\tan\theta = \frac{q_{\textrm{L}}\sin\Theta}{\gamma_{\textrm{P}}\left(\beta_{\textrm{P}}E_{\textrm{L}}^{*} + q_{\textrm{L}}\cos\Theta\right)},
\end{equation}
where $q_{\textrm{L}}$ is the rest-frame momentum of $L$, and $\beta,\gamma$ are the relativistic parameters of $P$ in the lab frame.
Figure \ref{fig:collimation} shows $\theta$ as a function of $\Theta$ for a kaon parent, with $E_{\textrm{K}} = 1\,\,\text{GeV}$, for various HNL masses produced in the decay $K^{\pm} \rightarrow N_{4} + \mu^{\pm}$.
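The qualitative content of Eq. \ref{eq: collimation_effect} can be checked with a few lines of code. The function below evaluates the lab-frame angle for the two-body decay $P \rightarrow L + \ell$ and shows that a near-backward rest-frame emission ends up near-forward in the lab frame for a sufficiently massive $L$, but stays backward for a massless one (the $300\,\,\text{MeV}/c^{2}$ mass here is an illustrative choice):

```python
import math

def lab_emission_angle(Theta, m_P, E_P, m_ell, M_L):
    """Lab-frame angle theta for rest-frame angle Theta, in the two-body
    decay P -> L + ell. All masses and energies in GeV."""
    E_star = (m_P**2 - m_ell**2 + M_L**2) / (2.0 * m_P)
    q = math.sqrt(max(E_star**2 - M_L**2, 0.0))
    gamma = E_P / m_P
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    # atan2 keeps theta in [0, pi] and handles a sign change of the denominator
    return math.atan2(q * math.sin(Theta), gamma * (beta * E_star + q * math.cos(Theta)))

mK, mmu, EK = 0.493677, 0.105658, 1.0
theta_sm  = lab_emission_angle(math.pi * 0.999, mK, EK, mmu, 0.0)    # massless neutrino
theta_hnl = lab_emission_angle(math.pi * 0.999, mK, EK, mmu, 0.300)  # 300 MeV/c^2 HNL
```

For the massless case the backward emission stays backward ($\theta$ near $\pi$), while the massive HNL is swept forward ($\theta$ near $0$), exactly the behaviour of Fig. \ref{fig:collimation}.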
\par The cases of massless neutrinos and HNL are quite different; for massless Standard Model neutrinos, $q_{\textrm{L}} = E_{\textrm{L}}^{*}$ and the resulting function $\tan^{-1}\left[\gamma_{\textrm{P}}^{-1}\sin\Theta\left(\beta_{\textrm{P}}+\cos\Theta\right)^{-1}\right]$ is monotonically increasing.
This means that, for any arbitrary emission angle $\theta_{0}$ in the lab frame, there exists some suitable pre-image $\Theta_{0}$ in the rest frame, for any parent velocity.
In a Standard Model calculation, \emph{any} parent can produce a neutrino that is accepted by a detector, \emph{for any} parent momentum and relative position of the neutrino production vertex to the detector.
\par In contrast, for HNL with masses large enough, it is not generically true that any $\theta_{0}$ can be achieved; this places increased importance on the kinematics of the parent $P$.
The greater the angle between $P$'s momentum $\boldsymbol{p}_{\textrm{P}}$ and the relative separation $\boldsymbol{\mathcal{O}}$ between neutrino production vertex and detector, and the greater the HNL mass, the smaller the acceptance becomes, generally speaking; for large enough angles and masses, the HNL could not be accepted at all, regardless of its rest-frame emission angle.
This same collimation effect can, at the same time, account for \emph{increased} acceptance of HNL with respect to a Standard Model neutrino, if $P$ is well-collimated enough.
This happens because Eq. \ref{eq: collimation_effect} now reaches a maximum and tends towards $0$ for large $\Theta$: in other words, backwards-emitted HNL are swept forwards by the Lorentz boost into the lab frame, and end up accepted by the detector.
In general, this \emph{acceptance correction} respective to a Standard Model neutrino is calculated on an event-by-event basis, and applied as an additional weight to each HNL.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Images/largegoodkaon.pdf}
\caption{Collimation effect in $K^{\pm} \rightarrow N_{4}\mu^{\pm}$, $E_{\textrm{K}^{\pm}} = 1\,\,\text{GeV}$. For $M_{\textrm{N}4}$ heavy enough, backwards emission in the rest frame is equivalent to forwards emission in the lab frame.}
\label{fig:collimation}
\end{figure}
One can also obtain an estimate for the neutrino energy at the detector, again setting $p_{\textrm{L}}^{x} = 0$ without loss of generality:
\begin{align}\label{eq: boostFactor}
\begin{split}
&\Rightarrow E_{\textrm{L}}^{*} = \gamma_{\textrm{P}}E_{\textrm{L}} - \gamma_{\textrm{P}}\beta_{\textrm{P}}p_{\textrm{L}}\cos\theta_{\textrm{D}}, \\
&\Rightarrow E_{\textrm{L}} = \frac{E_{\textrm{L}}^{*}}{\gamma_{\textrm{P}}\left(1 - \beta_{\textrm{P}}\beta_{\textrm{L}}\cos\theta_{\textrm{D}}\right)} \equiv \mathcal{B}E_{\textrm{L}}^{*},
\end{split}
\end{align}
where $\mathcal{B}$ is termed the \emph{boost factor}, and $\theta_{\textrm{D}}$ is the viewing angle between $\boldsymbol{p}_{\textrm{P}}$ and $\boldsymbol{\mathcal{O}}$.
From a Standard Model simulation standpoint, this equation simplifies to
\begin{equation}
\mathcal{B}_{\nu} = \frac{1}{\gamma_{\text{P}}\left(1 - \beta_{\textrm{P}}\cos\theta_{\textrm{D}}\right)},
\end{equation}
which uniquely determines the lab-frame energy for the massless neutrino.
For a massive neutrino, there is a complication: Eq. \ref{eq: boostFactor} depends on knowledge of the lab-frame velocity through $\beta_{\textrm{L}}$.
We estimate $\beta_{\textrm{L}}$ by imposing a geometric constraint; using the worldline $(T, \boldsymbol{\mathcal{O}})$ with $T = |\boldsymbol{\mathcal{O}}|/(\beta_{\textrm{L}}c)$, we construct a candidate lab-frame momentum by boosting $(T, \boldsymbol{\mathcal{O}})$ into $P$'s rest frame and forcing the HNL momentum to point to that direction, then boosting the result back into the lab frame.
We check the distance between the point of closest approach and the detector centre; if this is too large, we decrement $\beta_{\textrm{L}}$ and repeat the procedure.
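The full geometric iteration is involved; as a simplified stand-in, one can solve Eq. \ref{eq: boostFactor} self-consistently for the lab-frame energy by fixed-point iteration, since $\beta_{\textrm{L}}$ itself depends on $E_{\textrm{L}}$. The sketch below assumes a fixed viewing angle and is not the module's closest-approach procedure; all numerical inputs are hypothetical:

```python
import math

def lab_energy(E_star, M, beta_P, gamma_P, cos_thetaD, iters=200):
    """Fixed-point solve of E_L = E*/(gamma_P (1 - beta_P beta_L cos theta_D)),
    with beta_L = sqrt(E_L^2 - M^2)/E_L. Starts from the massless limit."""
    E = E_star / (gamma_P * (1.0 - beta_P * cos_thetaD))   # beta_L = 1 seed
    for _ in range(iters):
        E = max(E, M * (1.0 + 1e-12))                      # keep the solution physical
        beta_L = math.sqrt(E * E - M * M) / E
        E = E_star / (gamma_P * (1.0 - beta_P * beta_L * cos_thetaD))
    return E

# pion parent at 2 GeV, pi -> N4 + mu with a hypothetical M_N4 = 20 MeV/c^2,
# viewed at theta_D = 10 mrad
m_pi, m_mu, M, E_pi = 0.13957, 0.105658, 0.020, 2.0
E_star = (m_pi**2 - m_mu**2 + M**2) / (2.0 * m_pi)
gamma_P = E_pi / m_pi
beta_P = math.sqrt(1.0 - 1.0 / gamma_P**2)
E_lab = lab_energy(E_star, M, beta_P, gamma_P, math.cos(0.010))
```

The returned energy satisfies Eq. \ref{eq: boostFactor} to machine precision, which is the natural convergence check for this scheme.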
\par In Fig. \ref{fig:NuMI_acceptance}, we plot the differential geometrical acceptance, defined as the probability a HNL emitted isotropically in the parent's rest frame will be accepted by the detector, for the case of the MINER$\nu$A detector \cite{TheMINERvADetector} in the NuMI Medium-Energy beam \cite{NuMIBeamFlux}.
We have plotted the acceptance for the processes $K^{\pm} \rightarrow N_{4} + \mu^{\pm}$ (left-hand side plots) and $K^{\pm} \rightarrow N_{4} + \pi^{0} + e^{\pm}$ (right-hand side plots).
Panels (a), (b) show the differential acceptance under the assumption the parent kaons are perfectly focused, which is to say an HNL emitted with momentum collinear to the kaon momentum would definitely be accepted.
Generally, the acceptance remains about the same as for Standard Model neutrinos, though the peak shifts first to higher and then to lower energies.
This is caused by the Lorentz boost becoming more efficient for all HNL as the mass initially increases, followed by the decrease associated with the drop in boost factor $\mathcal{B}$ as the HNL's velocity drops significantly.
In panels (c), (d), we have used the kaon spectrum from the NuMI beamline simulation including realistic focusing, but not applied acceptance correction.
The shape of the acceptance remains roughly similar, but the normalisation and integrated acceptance change dramatically.
This is caused by the suppression of the boost factor $\mathcal{B}$ at both high angles and low velocities.
Finally, panels (e), (f) show the full simulation accounting for both realistic focusing and the change in suitable emission regions in the parent rest frame due to acceptance correction.
Peaks of these distributions shift decidedly to lower energy, as the higher energy HNL are more collimated with their parents and, unlike Standard Model neutrinos, are not necessarily able to reach the detector.
This effect becomes increasingly prominent as $M_{\textrm{N}4}$ goes up and the kinematic constraints become more severe.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/allCOption.pdf}
\caption{Acceptance for a $K^{\pm}$ decaying to HNL and either $\mu^{\pm}$ or $e^{\pm} + \pi^{0}$, for the MINER$\nu$A detector \cite{TheMINERvADetector}. Top row: Parents perfectly focused: momentum of parent always points towards detector. Middle row: parents not perfectly focused, but no acceptance correction $\mathcal{A}$ applied. Bottom row: full focusing and acceptance correction applied. \newline
The normalisation of these curves drops considerably when focusing is not perfect; this happens because the boost factor $\mathcal{B}$ drops with opening angle between parent momentum and detector location. The effect of $\mathcal{A}$ is mainly to suppress the high-$E_{\textrm{N}4}$ tails, as hard HNL cannot deviate from their parents enough to reach the detector; also, at low enough HNL mass, $\mathcal{A}$ increases normalisation as backward-emitted HNL are accepted by the detector.}
\label{fig:NuMI_acceptance}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Images/ratioAcc.pdf}
\caption{Effect of focusing on acceptance; flux at MINER$\nu$A (realistic focusing) / flux (perfect focusing). Note the dramatic reduction in acceptance from pion parents at $M_{\textrm{N}4} = 100\,\,\text{MeV}/c^{2}$. Error bands are purely statistical (note for $M_{\textrm{N}4} > 0$ they are scaled up by a factor $5$).}
\label{fig:ratioAcceptance}
\end{figure}
\par We further demonstrate the effect of parent focusing on the acceptance in Fig. \ref{fig:ratioAcceptance}, by simulating the flux under the assumption of perfect parent focusing or using realistic parent focusing as provided from the NuMI beamline simulation.
We show the ratios of realistic over perfect acceptance, as a function of HNL energy, for masses $0, 100, ..., 400\,\,\text{MeV}/c^{2}$.
The effect is striking at $100$ and $400\,\,\text{MeV}/c^{2}$, where the thresholds of HNL production by pions and kaons are almost reached; at these masses, the collimation effect becomes most severe, and proportionally more parents cannot produce HNL that are accepted by the detector, unless the parent happens to be travelling in a direction that would intersect the detector.
The effect is more pronounced at high HNL energy $E_{\textrm{N}4}$, because more energetic HNL have a stronger collimation effect due to the larger Lorentz boost.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Images/OA_pions_emission.pdf}
\caption{Boost factor $\mathcal{B}$ as function of pion energy, for different values of $\theta$. (a): $\beta_{\textrm{L}} = 1$ (Standard Model); (b): $\beta_{\textrm{L}} = 0.9999$; (c): $\beta_{\textrm{L}} = 0.999$; (d): $\beta_{\textrm{L}} = 0.99$.}
\label{fig:pion_OA_emission}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Images/kineAnalysisMultiAxis.pdf}
\caption{Pion energy at which $\mathcal{B}$ is highest, for different values of $\theta$.}
\label{fig:pion_OA_peaks}
\end{figure}
\par Another profound impact can be seen on off-axis neutrino spectra, relevant for PRISM-like searches.
Varying Eq. \ref{eq: boostFactor} over $\cos\theta$ for $\beta_{\textrm{L}} = 1$ yields the celebrated \emph{off-axis effect} for the Standard Model.
In the general case $\beta_{\textrm{L}} \neq 1$, however, the off-axis effect becomes progressively less pronounced.
In Fig. \ref{fig:pion_OA_emission}, we have chosen a pion parent, and four different values of $\beta_{\textrm{L}}$; the Standard Model case is retrieved in (a), and progressively smaller $\beta_{\textrm{L}}$ are shown in (b), (c), (d).
The various curves representing different values of $\theta_{\textrm{D}}$ collapse to a single curve.
Writing out the derivative $\partial E_{\textrm{L}}/\partial E_{\textrm{P}} = E_{\textrm{L}}^{*}\partial\mathcal{B}/\partial E_{\textrm{P}}$, one can find the peak $\mathcal{B}$ to be at
\begin{equation}
E_{P}\big|_{\mathcal{B}\,\,\text{max}} = \frac{m_{P}}{(1 - \beta_{\textrm{L}}^{2}\cos^{2}\theta)^{1/2}}.
\end{equation}
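This closed form can be cross-checked by brute force; the values of $\beta_{\textrm{L}}$ and $\theta$ below are arbitrary illustrative choices for a pion parent:

```python
import math

def boost_factor(E_P, m_P, beta_L, cos_theta):
    """B = 1 / (gamma_P (1 - beta_P beta_L cos theta_D)), Eq. (boostFactor)."""
    gamma = E_P / m_P
    beta_P = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta_P * beta_L * cos_theta))

def peak_energy(m_P, beta_L, cos_theta):
    """Parent energy maximising the boost factor (closed form above)."""
    return m_P / math.sqrt(1.0 - (beta_L * cos_theta)**2)

# brute-force scan of B over parent energy for a pion parent
m_pi, beta_L, cth = 0.13957, 0.99, math.cos(0.0145)
grid = [m_pi * (1.0 + 1e-6) + k * 1e-4 for k in range(30000)]
E_best = max(grid, key=lambda E: boost_factor(E, m_pi, beta_L, cth))
```

The grid-search maximum agrees with the analytic peak to within the grid spacing.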
In Fig. \ref{fig:pion_OA_peaks} we have plotted, for different values of $\theta_{\textrm{D}}$ and for a pion parent, $E_{\textrm{P}}\big|_{\mathcal{B}\,\,\text{max}}$ ($x$ axis) against $\log_{10}\left(1-\beta_{\textrm{L}}\right)$ ($y$ axis).
Standard Model neutrinos correspond to the limit $y \rightarrow -\infty$; additionally, we have drawn two axes for visualisation: red is the mass of a $1\,\,\text{GeV}$ HNL for the given $\beta$, and blue is the energy a $100\,\,\text{MeV}/c^{2}$ HNL would have at that $\beta$.
For very high $\beta$, the position of the peaks varies greatly with $E_{\uppi}$, meaning the off-axis effect is visible; this motivates the PRISM concept \cite{DUNE_ND_CDR} of utilising a narrower neutrino spectrum at high off-axis angles to constrain the neutrino flux.
However, as $\beta$ decreases, so does the variation of the neutrino spectrum (as one can also see in Fig. \ref{fig:pion_OA_emission} with curves converging into one for $\beta = 0.99$): since $\beta$ scales inversely to $M_{N4}$, the HNL flux is expected to be \emph{less} sensitive to the off-axis effect than Standard Model neutrinos.
In principle, the spectrum of HNL is still expected to become softer \cite{DuneNDBSM}, but the effect seen would be much smaller than predicted previously.
In Fig. \ref{fig:prism}, one can see our calculation of the flux shapes at the DUNE PRISM with $|U_{\textrm{e}4}|^{2}:|U_{\upmu 4}|^{2}:|U_{\uptau 4}|^{2} = 1:1:0$, for masses $M_{\textrm{N}4} = 0, 100, ..., 400\,\,\text{MeV}/c^{2}$.
Notice how as the off axis angle $\theta_{\textrm{OA}}$ changes, the massless neutrino flux shifts in accordance with the SM prediction, whereas the $M_{N4} = 100, ..., 400\,\,\text{MeV}/c^{2}$ HNL have a far smaller (but still present) dependence on the off axis angle.
The baseline for the detector is 575 m, which means the reference off axis transverse displacements of $5, 10, 20,$ and $30$ m are equivalent to $\theta_{\textrm{OA}} \simeq 0.5, 1.0, 2.0, 3.0^{\circ}$.
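These reference angles follow directly from the geometry of the baseline and the transverse displacement:

```python
import math

# Off-axis angles for the DUNE PRISM transverse displacements quoted above
baseline = 575.0                          # m
displacements = [5.0, 10.0, 20.0, 30.0]   # m
angles_deg = [math.degrees(math.atan(d / baseline)) for d in displacements]
```

Rounding to one decimal place recovers the quoted $0.5, 1.0, 2.0, 3.0^{\circ}$.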
\par Our HNL simulation produces the flux calculation based on minimal information from the beamline simulation, which is passed as an input.
There are two fundamentally important inputs:
\begin{enumerate}
\item Parent momentum and decay vertex position in NEAR coordinates;
\item ``Importance weight" \cite{Dk2NuProposal, Goodwin2022}: a multiplicity factor for hadrons with very similar kinematics.
\end{enumerate}
Based on this information, the module assigns the appropriate decay channel, calculates the boost factor to obtain the energy of the HNL at the detector under the constraint that the HNL can reach it, and constructs the first particle in \verb|GENIE|'s particle stack that corresponds to an HNL that decays in the detector.
We provide details of the bookkeeping in Appendix \ref{appdx: book}.
\subsection{Decay to Standard Model particles}
Unlike Standard Model neutrinos, HNL are unstable and can decay directly (semi-)leptonically to SM particles.
\begin{widetext}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/combinedPlots.pdf}
\caption{HNL fluxes at DUNE PRISM, as function of HNL energy and off-axis displacement.}
\label{fig:prism}
\end{figure*}
\end{widetext}
\par The lifetime of an HNL is generally inversely proportional to the mixing with the SM leptons $\sum_{\alpha} |U_{\alpha 4}|^{2}$ and to the HNL mass $M$.
Depending on the parameter space point being searched, the branching ratios for HNL decays vary as thresholds for various channels open; a few possibilities can be seen in Fig. \ref{fig:BR}.
Details of how we keep track of the decay channels and select the correct one are given in Appendix \ref{appdx: QM}.
\par An HNL has the same angular momentum as a Standard Model neutrino; however, its mass implies that it is \emph{partially}, rather than completely, polarised \cite{Levy}.
Equivalently, the HNL and the other particle(s) which were produced in the decay of $P$ form a pure $J = 0$ state, but the HNL itself is not a pure state.
Considering the leptonic production mode $P \rightarrow N_{4} + \ell'$, one can write down a polarisation vector $\mathbb{P}$ for the HNL.
$\mathbb{P}$ has magnitude $|\mathbb{P}| < 1$ (partial polarisation) and direction collinear to the momentum $\boldsymbol{q}_{\ell'}$ where $\boldsymbol{q}$ is written in the HNL's rest frame.
Because in $P$'s rest frame the momenta $\boldsymbol{p}_{\textrm{N}4}$ and $\boldsymbol{p}_{\ell'}$ are collinear, this direction is the same as the momentum $\boldsymbol{q}_{\textrm{P}}$ of $P$ in the HNL rest frame.
\par It is important, moreover, to keep track of where the polarisation vector $\mathbb{P}$ of the HNL is pointing, since it defines the only ``privileged direction" in the HNL rest frame, and thus the axis with which an angular dependence of the decay products $N_{4} \rightarrow X$ manifests.
This direction can be tracked at the moment of HNL production, if its parent $P$ is a particle with no angular momentum.
\par The partial polarisation of $N_{4}$ also means that its differential decay rate into the generic final state $X$ is multiplied, again, by the appropriate coefficient determined by $|\mathbb{P}|$.
Essentially, instead of the well-known $1\mp\cos\theta$ dependence expected from fully polarised spin-$1/2$ particles, the angular dependence is modulated by a suitable polarisation modulus $H$ which is \textit{a priori} dependent on both the production and decay modes of the HNL and produces an angular dependence $1 \pm H\cos\theta$.
The reader will notice that $-1 < H < 1$: the polarisation modulus is not (semi-)definite.
Its sign is the same as that of the helicity expectation value, $s_{\textrm{N}4}$.
\begin{widetext}
In the simple case of two-body production and two-body decay $P \rightarrow N_{4} + \ell', N_{4} \rightarrow D + \ell$, $H$ is explicitly given as \cite{Levy}
\begin{equation}
H = - \frac{\left(m_{\ell'}^{2} - M_{\textrm{N}4}^{2}\right)\lambda^{1/2}\left(m_{\textrm{P}}^{2},M_{\textrm{N}4}^{2},m_{\ell'}^{2}\right)}{m_{\textrm{P}}^{2}\left(M_{\textrm{N}4}^{2} + m_{\ell'}^{2}\right) - \left(m_{\ell'}^{2}-M_{\textrm{N}4}^{2}\right)^{2}}\cdot\frac{\left(M_{\textrm{N}4}^{2} - m_{\ell}^{2}\right)\lambda^{1/2}\left(M_{\textrm{N}4}^{2},m_{\ell}^{2},m_{\textrm{D}}^{2}\right)}{\left(M_{\textrm{N}4}^{2}-m_{\ell}^{2}\right)^{2} - m_{\textrm{D}}^{2}\left(M_{\textrm{N}4}^{2} + m_{\ell}^{2}\right)} = - \left(s_{\textrm{N}4}\cdot|\mathbb{P}|\right)\cdot|\mathbb{D}|,
\end{equation}
where $\lambda(x,y,z)$ is the K\"{a}ll\'{e}n function defined in Appendix \ref{appdx: QM}, $s_{\textrm{N}4} = +1(-1)$ if $M_{\textrm{N}4}$ is greater(smaller) than $m_{\ell'}$, and $\mathbb{D}$ is the factor resulting from the decay $N_{4} \rightarrow D + \ell$.
The factorisation of the polarisation modulus from HNL production and decay is thus made apparent.
\par For HNL below the kaon mass, the most prevalent production channels are the two-body production channels $\pi \rightarrow N_{4} + \ell$ and, above the pion threshold, $K \rightarrow N_{4} + \ell$; for HNL above about $100\,\,\text{MeV}/c^{2}$, the main decay channels are the two-body channels $N_{4} \rightarrow \pi + \ell, N_{4} \rightarrow \pi^{0} + \nu$.
\end{widetext}
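As a numerical cross-check of the polarisation modulus (PDG masses; the function below encodes the production and decay factors in their K\"{a}ll\'{e}n-function form), one recovers the value $H \simeq 0.9961$ quoted later in the text for $M_{\textrm{N}4} = 400\,\,\text{MeV}/c^{2}$ in the chain $K^{+} \rightarrow N_{4} + e^{+}$, $N_{4} \rightarrow \pi^{+} + \mu^{-}$:

```python
import math

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def pol_modulus(m_P, M, m_lp, m_l, m_D):
    """Polarisation modulus H for P -> N4 + l' followed by N4 -> D + l
    (two-body production and two-body decay; masses in GeV)."""
    P2, M2, lp2, l2, D2 = m_P**2, M**2, m_lp**2, m_l**2, m_D**2
    prod = (lp2 - M2) * math.sqrt(kallen(P2, M2, lp2)) / (P2 * (M2 + lp2) - (lp2 - M2)**2)
    dec = (M2 - l2) * math.sqrt(kallen(M2, l2, D2)) / ((M2 - l2)**2 - D2 * (M2 + l2))
    return -prod * dec

# K+ -> N4 e+, N4 -> pi mu with M_N4 = 400 MeV/c^2 (PDG masses for K+, e, mu, pi)
H = pol_modulus(0.493677, 0.400, 0.000511, 0.105658, 0.139570)
```

With an electron in the production vertex, the production factor is essentially unity, so $H$ is dominated by the decay factor.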
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/allLab.pdf}
\caption{Lab frame}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/allCm.pdf}
\caption{HNL rest frame}
\end{subfigure}
\caption{Angular distributions of final-state muons in $K^{+} \rightarrow N_{4}+e^{+}$, $N_{4} \rightarrow \pi^{+}+\mu^{-}$ for Dirac HNL with $M_{\textrm{N}4} = 400\,\,\text{MeV}/c^{2}$, using the NuMI flux and for a detector located at MINER$\nu$A's coordinates. See text for details.}
\setlength{\belowcaptionskip}{-5pt}
\label{fig:polarisation}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Images/polz.pdf}
\caption{Polarisation modulus $H$ for the chain $K^{\pm} \rightarrow N_{4} + e^{\pm}, N_{4} \rightarrow \mu^{\mp} + \pi^{\pm}$.}
\setlength{\belowcaptionskip}{-5pt}
\label{fig:polModulus}
\end{figure}
Since the SM weak force couples to left-chiral particles and right-chiral antiparticles, the decays of $N_{4}$ and $\overline{N}_{4}$ have opposite angular dependencies \cite{TastetPol}; the well-known corollary is that in two-body decays only Dirac HNL have a $\cos\theta$ dependence in their decay spectra, whereas Majorana HNL do not (as has been shown explicitly in the case of neutral-mediated decays in \cite{BahaDiracVsMajPol}).
In practical terms, charge-blind detectors that cannot distinguish between leptons and antileptons cannot search for forward-backward asymmetries in the decay distributions of HNL \cite{uBooNEMuPiHNL}.
\par In this work, we have implemented the simple, yet analytically calculable ``two-body-production, two-body-decay" polarisation prescription for all (Dirac) HNL decays, assigning an angular distribution $1 \pm H\cos\theta$ to the spectrum of decay products.
However, we provide a switch that allows the user to turn this simple scheme off if it is desirable to do so, reverting to pure phase-space decays instead.
Because the user has access to the truth-level four-momentum of each of the decay particles, it is in principle possible to reweight the spectra of the final state products with any desired polarisation scheme.
The implementation of a fuller description of polarisation effects, including in the three-body decays of Majorana HNL (see for example \cite{MajoranaPol, Ballett2020} for discussions on Majorana HNL polarisations), remains an appealing avenue for future work.
We comment on this in Section \ref{SECT_discussion}.
\par We have simulated in Fig. \ref{fig:polarisation} the effect of three different polarisation prescriptions for $400\,\,\text{MeV}/c^{2}$ HNL from the kaon decay $K^{+} \rightarrow N_{4} + e^{+}$, decaying as $N_{4} \rightarrow \pi^{+} + \mu^{-}$ at the MINER$\nu$A detector in the NuMI beam \cite{TheMINERvADetector}.
The modulus $H$ is plotted as a function of $M_{\textrm{N}4}$ in Fig. \ref{fig:polModulus}, with the red curve corresponding to the production mode $K^{+} \rightarrow N_{4} + e^{+}$ and the blue curve to $K^{+} \rightarrow N_{4} + \mu^{+}$; for $M_{\textrm{N}4} = 400\,\,\text{MeV}/c^{2}$ and $K^{+} \rightarrow N_{4} + e^{+}$, it is $H \simeq 0.9961$.
The simplest prescription (blue curve) is the absence of any polarisation effect, i.e. the switch set to false; the angular distribution of final-state products is isotropic, as expected.
In the red curve, a ``maximal scenario" of polarisation is implemented.
The \emph{direction} of $\mathbb{P}$ is kept fixed to $\widehat{z}$, which shows up as a distribution $1 + H\cos\theta_{\mathbb{P}}$, where $\theta_{\mathbb{P}}$ is the angle between the muon and $\widehat{z}$.
This shows that, for suitable HNL mass, there could be a significant polarisation effect if one did not account for the variation of the direction of $\mathbb{P}$.
Finally, the magenta curve implements the realistic scenario where the direction of $\mathbb{P}$ is evaluated event-by-event according to the procedure outlined above.
It is immediately apparent that the polarisation is almost completely washed out; this occurs as a consequence of backwards-emitted HNL being accepted by the detector.
For such a heavy mass, there is almost the same chance of a backwards-emitted HNL being accepted as a forwards-emitted one; this ``averages out" the polarisation effect.
\par The proper implementation of polarisation effects will doubtless be crucially important for lower-energy beamlines, or for decays at rest, where the Lorentz transformations into the lab frame are small or trivial; for the $120\,\,\text{GeV}$ NuMI beam, for example, the transformation into the lab frame makes polarisation effects matter very little for the correct description of final-state kinematics, affecting the opening angle $\Theta_{\mathbb{P}}$ between the muon momentum and beam direction only in the region $\Theta_{\mathbb{P}} \lesssim 0.1^{\circ}$.
\begin{widetext}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.38\textwidth}
\centering
\subfloat[$25\,\,\text{MeV}/c^{2}$, unweighted]{%
\includegraphics[clip,width=\textwidth]{Images/offAxis_25MeV_unweighted.pdf}
}
\subfloat[$100\,\,\text{MeV}/c^{2}$, unweighted]{%
\includegraphics[clip,width=\textwidth]{Images/offAxis_100MeV_unweighted.pdf}
}
\subfloat[$250\,\,\text{MeV}/c^{2}$, unweighted]{%
\includegraphics[clip,width=\textwidth]{Images/offAxis_250MeV_unweighted.pdf}
}
\end{subfigure}
%
\begin{subfigure}[b]{0.38\textwidth}
\centering
\subfloat[$25\,\,\text{MeV}/c^{2}$, weighted]{%
\includegraphics[clip,width=\textwidth]{Images/offAxis_25MeV.pdf}
}
\subfloat[$100\,\,\text{MeV}/c^{2}$, weighted]{%
\includegraphics[clip,width=\textwidth]{Images/offAxis_100MeV.pdf}
}
\subfloat[$250\,\,\text{MeV}/c^{2}$, weighted]{%
\includegraphics[clip,width=\textwidth]{Images/offAxis_250MeV.pdf}
}
\end{subfigure}
\caption{Delays of HNL with respect to a SM neutrino produced by the same parent, for the MINER$\nu$A detector. Each circular slice represents one timing bin. The final timing bin is $[1.48,\,1.6]\,\upmu\text{s}$. Panels (a), (b), (c): HNL decay events as a proportion of all simulated events. Panels (d), (e), (f): HNL decay events, weighted for the probability of production, propagation, and decay, as a proportion of the sum of weights of all events. Note the logarithmic scale on the radial axis.}
\label{fig:delays}
\end{figure*}
\end{widetext}
\subsection{Determination of decay vertex}
\par The final task for the description of a single HNL decay event is its location in spacetime, given some origin; frequently, the user will input their own coordinate system, which is just the USER frame defined earlier.
Knowledge of the position in space where the decay happens is important, because coupled with the knowledge of the HNL's velocity it can give information about the spatial distribution and the timing of HNL signal events.
This is particularly interesting for heavier HNL, which tend to ``lag behind" Standard Model neutrinos and which, for long enough baselines and small enough velocities, could decay during a timing window with little to no expected Standard Model background, such as the period between beam spills.
This lends itself primarily to the development of special HNL triggers \cite{Porzio2019, Ballett2017}, and underlines the importance of the accurate determination of the position of the HNL decay, as well as the HNL energy and position relative to the USER frame.
\par Given the HNL three-momentum $\boldsymbol{p}_{\textrm{N}4}$ and its production vertex D, it is possible to construct a line $\epsilon(u) = \boldsymbol{r}_{\textrm{D}} + u \cdot \boldsymbol{p}_{\textrm{N}4}$.
Combining that with a description of the detector geometry and a description of where the detector centre is located allows one to calculate two points E, X where $\epsilon$ intersects the detector, using standard methods.
Another straightforward task, given the velocity $\beta$ and lifetime $\tau$ of the HNL, is the calculation of where in the detector the decay occurs; this is of particular interest for segmented detectors such as the DUNE near detector \cite{DUNE_ND_CDR}, whose submodules may differ in tracking capability, thresholds, geometry, and fiducial volume. \\
Since the HNL is unstable, the probability that it can traverse a length $z$ is given by
\begin{equation}
\mathcal{P}(z) \propto \exp\left(-\frac{z}{\beta c}\cdot\frac{1}{\gamma\tau}\right).
\end{equation}
\par Parametrising the appropriate region of $\epsilon$ as $\epsilon(u) = \boldsymbol{r}_{\textrm{E}} + u\cdot(\boldsymbol{r}_{\textrm{X}} - \boldsymbol{r}_{\textrm{E}})$, where $u = z / |\boldsymbol{r}_{\textrm{X}} - \boldsymbol{r}_{\textrm{E}}|$, one gets the position of the vertex V. \\
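The truncated-exponential sampling of the decay point along the chord from E to X can be sketched as follows (an illustrative Python sketch, not the GENIE implementation; all names and defaults are ours):

```python
import numpy as np

def sample_decay_vertex(r_E, r_X, beta, gamma, tau, rng=None):
    """Sample the decay vertex V on the segment from entry point E to exit
    point X, for an unstable particle with lab-frame decay length
    lam = beta * c * gamma * tau, conditioned on the decay occurring
    inside the detector (i.e. within [E, X])."""
    c = 2.998e8  # speed of light, m/s
    rng = rng or np.random.default_rng()
    r_E, r_X = np.asarray(r_E, float), np.asarray(r_X, float)
    L = np.linalg.norm(r_X - r_E)   # chord length through the detector
    lam = beta * c * gamma * tau    # lab-frame decay length
    # Inverse-CDF sampling of z from exp(-z/lam), truncated to [0, L]:
    u = rng.random()
    z = -lam * np.log(1.0 - u * (1.0 - np.exp(-L / lam)))
    return r_E + (z / L) * (r_X - r_E)
```

The returned point interpolates between E and X, so by construction the sampled vertex lies inside the detector.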
Apart from the spatial information, the timing information that can be gained is useful as well, with detectors in the appropriate baseline potentially able to see HNL events in between proton-beam buckets \cite{Ballett2017}, or after the beam spill.
In both cases, backgrounds from neutrino interactions would drop off significantly, leading to far easier HNL signal identification.
\par In Fig. \ref{fig:delays}, we show the distribution of the delay for HNL arrival at the MINER$\nu$A detector, compared to the arrival time of Standard Model neutrinos.
Each slice of a circular plot corresponds to one bin of delay, i.e. the first bin signifies delay $\Delta t \in [0, 10]\,\,\text{ns}$, and so on.
The span of the timing bins covers the range $\Delta t \in [0, 1.6]\,\,\upmu\text{s}$, which is roughly the length of one beam batch in NuMI.
The left panels (a), (b), (c) are filled with the proportion of HNL events in each bin over all HNL events, whereas the right panels (d), (e), (f) are filled with the HNL events, weighted for the overall probability that each HNL would be produced, propagated, and decayed inside the detector, normalised to all HNL.
Note that the radial direction is on a logarithmic scale, and that bins are only drawn if their content is greater than $10^{-3.5}$.
The mass $M_{\textrm{N}4}$ increases from $25\,\,\text{MeV}/c^{2}$ on the top row to $250\,\,\text{MeV}/c^{2}$ on the bottom row.
\par As expected, heavier HNL travel slower than lighter ones, and end up arriving at the detector appreciably later.
Although a small handful of HNL events have very large delays, of order the length of one NuMI beam batch, such HNL are less likely to survive long enough to reach the detector.
This explains why the largest-delay bins appear to ``drop off" in the weighted right column of plots.
\par We also see that a significant proportion (about $10\%$) of HNL have a delay within the small delay bins $\sim \ord{10\,\,\text{ns}}$.
This implies that, for detectors which support sufficient triggering capability to distinguish beam-spill timing from beam-off timing, it is possible in principle to obtain a trigger for delayed HNL which arrive after the Standard Model neutrinos from the beam have traversed the detector.
Work utilising such a trigger has already been done by the MicroBooNE collaboration \cite{uBooNEMuPiHNL}, and studied for the SBN programme \cite{Ballett2017}.
\par The simulation, much like Standard Model \verb|GENIE| output, returns \verb|EventRecord|s that summarise the HNL decay.
The defining features of the event record are:
\begin{itemize}
\item The particle stack, containing each particle's PDG code and four-momentum;
\item The event vertex, in USER coordinates, with the time component measuring the delay $\Delta t := t\left(\text{HNL reaches}\,\text{V}\right) - t\left(\text{SM}\,\nu\,\text{reaches}\,\text{V}\right)$;
\item The weight, containing the calculated $N_{\text{POT}}$ for this signal event.
\end{itemize}
We present more information about configuration and running of the module in Appendix \ref{appdx: code}.
\end{document}
\section{Introduction} \label{SECT_Introduction}
\subfile{Sections/Introduction}
\section{The HNL model} \label{SECT_model}
\subfile{Sections/Model}
\onecolumngrid
\section{Previous implementations of HNL simulation} \label{SECT_overview}
\twocolumngrid
\subfile{Sections/MC}
\begin{widetext}
\end{widetext}
\section{Implementation in \small{GENIE} v3} \label{SECT_simulation}
\subfile{Sections/Simulation}
\section{Discussion} \label{SECT_discussion}
\subfile{Sections/Discussion}
\section{Summary} \label{SECT_conclusion}
\subfile{Sections/Conclusion}
\section*{Acknowledgments} \label{SECT_acknowledgments}
\subfile{Sections/Acknowledgments}
\section{Introduction}
Single-document summarization methods can be divided into two categories: extractive and abstractive. Extractive summarization systems form summaries by selecting and copying text snippets from the document, while abstractive methods aim to generate concise summaries with paraphrasing. This work is primarily concerned with extractive summarization. Though abstractive summarization methods have made strides in recent years, extractive techniques are still very attractive as they are simpler, faster, and more reliably yield semantically and grammatically correct sentences.
Many extractive summarizers work by selecting sentences from the input document \citep{tra_ext1_luhn1958automatic,tra_ext3_mihalcea2004textrank,tra_ext5_wong2008extractive,ext1_kaageback2014extractive,ext2_2015Yin,ext3_cao2015learning,yasunaga2017graph}. Furthermore, a growing trend is to frame this sentence selection process as a sequential binary labeling problem, where binary inclusion/exclusion labels are chosen for sentences one at a time, starting from the beginning of the document, and decisions about later sentences may be conditioned on decisions about earlier sentences. Recurrent neural networks may be trained with stochastic gradient ascent to maximize the likelihood of a set of ground-truth binary label sequences \citep{ext4_cheng2016neural,ext5_summarunner}.
However, this approach has two well-recognized disadvantages. First, it suffers from exposure bias, a form of mismatch between training and testing data distributions which can hurt performance \citep{Ranzato2015SequenceLT, rl1_bahdanau+al-2017-actorcritic-iclr, abs5_paulus2017deep}. Second, extractive labels must be generated by a heuristic, as summarization datasets do not generally include ground-truth extractive labels; the ultimate performance of models trained on such labels is thus fundamentally limited by the quality of the heuristic.
An alternative to maximum likelihood training is to use reinforcement learning to train the model to directly maximize a measure of summary quality, such as the ROUGE score between the generated summary and a ground-truth abstractive summary \citep{DBLP:conf/aaai/WuH18}. This approach has become popular because it avoids exposure bias, and directly optimizes a measure of summary quality. However, it also has a number of downsides. For one, the search space is quite large: for a document of length $T$, there are $2^T$ possible extractive summaries. This makes the exploration problem faced by the reinforcement learning algorithm during training very difficult. Another issue is that due to the sequential nature of selection, the model is inherently biased in favor of selecting earlier sentences over later ones, a phenomenon which we demonstrate empirically in Section \ref{sec:discussion}. The first issue can be resolved to a degree using either a cumbersome maximum likelihood-based pre-training step (using heuristically-generated labels) \citep{DBLP:conf/aaai/WuH18}, or placing a hard upper limit on the number of sentences selected. The second issue is more problematic, as it is inherent to the sequential binary labeling setting.
In the current work, we introduce \textsc{BanditSum}, a novel method for training neural network-based extractive summarizers with reinforcement learning. This method does away with the sequential binary labeling setting, instead formulating extractive summarization as a contextual bandit. This move greatly reduces the size of the space that must be explored, removes the need to perform supervised pre-training, and prevents systematically privileging earlier sentences over later ones. Although the strong performance of Lead-3 indicates that good sentences often occur early in the source article, we show in Sections \ref{sec:results} and \ref{sec:discussion} that the contextual bandit setting greatly improves model performance when good sentences occur late without sacrificing performance when good sentences occur early.
Under this reformulation, \textsc{BanditSum } takes the document as input and outputs an \textit{affinity} for each of the sentences therein. An affinity is a real number in $[0, 1]$ which quantifies the model's propensity for including a sentence in the summary. These affinities are then used in a process of repeated sampling-without-replacement which does not privilege earlier sentences over later ones. \textsc{BanditSum } is free to process the document as a whole before yielding affinities, which permits affinities for different sentences in the document to depend on one another in arbitrary ways. In our technical section, we show how to apply policy gradient reinforcement learning methods to this setting.
The contributions of our work are as follows:
\begin{itemize}
\item We propose a theoretically grounded method, based on the contextual bandit formalism, for training neural network-based extractive summarizers with reinforcement learning. Based on this training method, we propose the \textsc{BanditSum } system for extractive summarization.
\item We perform experiments demonstrating that \textsc{BanditSum } obtains state-of-the-art performance on a number of datasets and requires significantly fewer update steps than competing approaches.
\item We perform human evaluations showing that in the eyes of human judges, summaries created by \textsc{BanditSum } are less redundant and of higher overall quality than summaries created by competing approaches.
\item We provide evidence, in the form of experiments in which models are trained on subsets of the data, that the improved performance of \textsc{BanditSum } over competitors stems in part from better handling of summary-worthy sentences that come near the end of the document (see Section~\ref{sec:discussion}).
\end{itemize}
\section{Related Work\label{sec:related-work}}
Extractive summarization has been widely studied in the past. Recently, neural network-based methods have been gaining popularity over classical methods \citep{tra_ext1_luhn1958automatic,tra_ext2_gong2001generic, tra_ext4_conroy2001text,tra_ext3_mihalcea2004textrank,tra_ext5_wong2008extractive}, as they have demonstrated stronger performance on large corpora. Central to the neural network-based models is the encoder-decoder structure. These models typically use either a convolution neural network \cite{cnn1_kalchbrenner2014convolutional,cnn2_kim2014convolutional,ext2_2015Yin,ext3_cao2015learning}, a recurrent neural network \cite{rnn2_chung2014gru,ext4_cheng2016neural,ext5_summarunner}, or a combination of the two \cite{DBLP:Narayan/2018,DBLP:conf/aaai/WuH18} to create sentence and document representations, using word embeddings \cite{we1_mikolov2013efficient,we2_pennington2014glove} to represent words at the input level. These vectors are then fed into a decoder network to generate the output summary.
The use of reinforcement learning (RL) in extractive summarization was first explored by \citet{rl_ryang2012frameworkRL}, who proposed to use the TD($\lambda$) algorithm to learn a value function for sentence selection. \citet{rl_rioux2014fear} improved this framework by replacing the learning agent with another TD($\lambda$) algorithm. However, the performance of their methods was limited by the use of shallow function approximators, which required performing a fresh round of reinforcement learning for every new document to be summarized. The more recent work of \citet{abs5_paulus2017deep} and \citet{DBLP:conf/aaai/WuH18} use reinforcement learning in a sequential labeling setting to train abstractive and extractive summarizers, respectively, while \citet{chen2018abstractive} combines both approaches, applying abstractive summarization to a set of sentences extracted by a pointer network \citep{vinyals2015pointer} trained via REINFORCE.
However, pre-training with a maximum likelihood objective is required in all of these models.
The two works most similar to ours are \citet{yao2018deep} and \citet{DBLP:Narayan/2018}. \citet{yao2018deep} recently proposed an extractive summarization approach based on deep Q learning, a type of reinforcement learning. However, their approach is extremely computationally intensive (a minimum of 10 days before convergence), and was unable to achieve ROUGE scores better than the best maximum likelihood-based approach. \citet{DBLP:Narayan/2018} uses a cascade of filters in order to arrive at a set of candidate extractive summaries, which we can regard as an approximation of the true action space. They then use an approximation of a policy gradient method to train their neural network to select summaries from this approximated action space. In contrast, \textsc{BanditSum } samples directly from the true action space, and uses exact policy gradient parameter updates.
\section{Extractive Summarization as a Contextual Bandit \label{sec:technical}}
Our approach formulates extractive summarization as a contextual bandit which we then train an agent to solve using policy gradient reinforcement learning. A bandit is a decision-making formalization in which an agent repeatedly chooses one of several actions, and receives a reward based on this choice. The agent's goal is to quickly learn which action yields the most favorable distribution over rewards, and choose that action as often as possible. In a \textit{contextual} bandit, at each trial, a context is sampled and shown to the agent, after which the agent selects an action and receives a reward; importantly, the rewards yielded by the actions may depend on the sampled context. The agent must quickly learn which actions are favorable in which contexts. Contextual bandits are a subset of Markov Decision Processes in which every episode has length one.
Extractive summarization may be regarded as a contextual bandit as follows. Each document is a context, and each ordered subset of a document's sentences is a different action. Formally, assume that each context is a document $d$ consisting of sentences $s = (s_1, \dots, s_{N_d})$, and that each action is a length-$M$ sequence of unique sentence indices $i = (i_1, \dots, i_M)$ where $i_t \in \{1, \dots, N_d\}$, $i_t \neq i_{t'}$ for $t \neq t'$, and $M$ is an integer hyper-parameter. For each $i$, the extractive summary induced by $i$ is given by $(s_{i_1}, \dots, s_{i_M})$. An action $i$ taken in context $d$ is given a reward $R(i, a)$, where $a$ is the gold-standard abstractive summary that is paired with document $d$, and $R$ is a scalar reward function quantifying the degree of match between $a$ and the summary induced by $i$.
A policy for extractive summarization is a neural network $p_\theta(\cdot | d)$, parameterized by a vector $\theta$, which, for each input document $d$, yields a probability distribution over index sequences. Our goal is to find parameters $\theta$ which cause $p_\theta(\cdot|d)$ to assign high probability to index sequences that induce extractive summaries that a human reader would judge to be of high-quality. We achieve this by maximizing the following objective function with respect to parameters $\theta$:
\begin{align}
J(\theta) = E\left[R(i, a)\right]\label{eqn:objective}
\end{align}
where the expectation is taken over documents $d$ paired with gold-standard abstractive summaries $a$, as well as over index sequences $i$ generated according to $p_\theta(\cdot | d)$.
\subsection{Policy Gradient Reinforcement Learning}
Ideally, we would like to maximize \eqref{eqn:objective} using gradient ascent. However, the required gradient cannot be obtained using usual techniques (e.g. simple backpropagation) because $i$ must be discretely sampled in order to compute $R(i, a)$.
Fortunately, we can use the likelihood ratio gradient estimator from reinforcement learning and stochastic optimization \cite{williams1992simple, sutton2000policy}, which tells us that the gradient of this function can be computed as:
\begin{align}
\nabla_\theta J(\theta) = E\left[ \nabla_\theta \log p_\theta(i | d) R(i, a) \right] \label{eqn:gradient}
\end{align}
where the expectation is taken over the same variables as \eqref{eqn:objective}.
Since we typically do not know the exact document distribution and thus cannot evaluate the expected value in \eqref{eqn:gradient}, we instead estimate it by sampling. We found that we obtained the best performance when, for each update, we first sample one document/summary pair $(d, a)$, then sample $B$ index sequences $i^1, \dots, i^B$ from $p_\theta(\cdot | d)$, and finally take the empirical average:
\begin{align}
\nabla_\theta J(\theta) \approx \frac{1}{B} \sum_{b=1}^{B}\nabla_\theta \log p_\theta(i^b | d) R(i^b, a) \label{eqn:estimated-gradient}
\end{align}
This overall learning algorithm can be regarded as an instance of the REINFORCE policy gradient algorithm \citep{williams1992simple}.
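As a quick numerical sanity check of the likelihood-ratio estimator (a toy example of our own, unrelated to the summarization model), consider a softmax policy over two actions, where the exact gradient of the expected reward is available in closed form:

```python
import numpy as np

# Softmax policy over two actions with rewards r = [1, 0].
# Analytically, E[R] = p0, hence dE[R]/dtheta_0 = p0 * (1 - p0).
rng = np.random.default_rng(0)
theta = np.array([0.3, -0.2])
p = np.exp(theta) / np.exp(theta).sum()
r = np.array([1.0, 0.0])

analytic = p[0] * (1.0 - p[0])

# For a softmax policy, grad_{theta_0} log p(a) = 1[a == 0] - p0, so the
# REINFORCE estimate averages (1[a == 0] - p0) * r[a] over sampled actions.
actions = rng.choice(2, size=50_000, p=p)
estimate = np.mean(((actions == 0).astype(float) - p[0]) * r[actions])
```

With enough samples, `estimate` converges to `analytic`, illustrating that the estimator in \eqref{eqn:estimated-gradient} is unbiased.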
\subsection{Structure of $p_\theta(\cdot|d)$}
There are many possible choices for the structure of $p_\theta(\cdot | d)$; we opt for one that avoids privileging early sentences over later ones. We first decompose $p_\theta(\cdot | d)$ into two parts: $\pi_\theta$, a deterministic function which contains all the network's parameters, and $\mu$, a probability distribution parameterized by the output of $\pi_\theta$. Concretely:
\begin{align}
p_\theta(\cdot | d) = \mu(\cdot | \pi_\theta(d))
\end{align}
Given an input document $d$, $\pi_\theta$ outputs a real-valued vector of \textit{sentence affinities} whose length is equal to the number of sentences in the document (i.e. $\pi_\theta(d) \in \mathbb{R}^{N_d}$) and whose elements fall in the range $[0, 1]$. The $t$-th entry $\pi(d)_t$ may be roughly interpreted as the network's propensity to include sentence $s_t$ in the summary of $d$.
Given sentence affinities $\pi_\theta(d)$, $\mu$ implements a process of repeated sampling-without-replacement. This proceeds by repeatedly normalizing the set of affinities corresponding to sentences that have not yet been selected, thereby obtaining a probability distribution over unselected sentences, and sampling from that distribution to obtain a new sentence to include. This normalize-and-sample step is repeated $M$ times, yielding $M$ unique sentences to include in the summary.
At each step of sampling-without-replacement, we also include a small probability $\epsilon$ of sampling uniformly from all remaining sentences. This is used to achieve adequate exploration during training, and is similar to the $\epsilon$-greedy technique from reinforcement learning.
Under this sampling scheme, we have the following expression for $p_\theta(i|d)$:
\begin{align}
\prod_{j=1}^M \left( \frac{\epsilon}{N_d - j + 1} + \frac{(1-\epsilon)\pi(d)_{i_j}}{z(d) - \sum_{k=1}^{j-1} \pi(d)_{i_k}} \right)
\end{align}
where $z(d) = \sum_t\pi(d)_t$. For index sequences that have length different from $M$, or that contain duplicate indices, we have $p_\theta(i|d) = 0$. Using this expression, it is straightforward to use automatic differentiation software to compute $\nabla_\theta \log p_\theta(i|d)$, which is required for the gradient estimate in \eqref{eqn:estimated-gradient}.
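This normalize-and-sample scheme, together with the log-probability needed for the gradient estimate, can be sketched as follows (an illustrative sketch; the function name and the default for $\epsilon$ are ours, not from the released code):

```python
import numpy as np

def sample_without_replacement(affinities, M, eps=0.1, rng=None):
    """Sample M unique sentence indices via repeated normalize-and-sample
    over the affinities of not-yet-selected sentences, mixing in an
    eps-uniform exploration term at each step. Returns the index
    sequence and its log-probability log p(i | d)."""
    rng = rng or np.random.default_rng()
    pi = np.asarray(affinities, dtype=float)
    remaining = list(range(len(pi)))
    z = pi.sum()               # running normalizer: z(d) minus chosen mass
    chosen, log_p = [], 0.0
    for _ in range(M):
        probs = eps / len(remaining) + (1.0 - eps) * pi[remaining] / z
        probs /= probs.sum()   # guard against floating-point drift
        k = rng.choice(len(remaining), p=probs)
        log_p += np.log(probs[k])
        idx = remaining.pop(k)
        z -= pi[idx]
        chosen.append(idx)
    return chosen, log_p
```

Because the exploration and affinity terms sum to $\epsilon + (1-\epsilon) = 1$ over the remaining sentences, each step's probabilities are already normalized up to rounding.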
\subsection{Baseline for Variance Reduction} \label{subsec:baseline}
\par Our sample-based gradient estimate can have high variance, which can slow learning. One cause of this can be seen by inspecting \eqref{eqn:estimated-gradient}: the update changes the probability of a sampled index sequence to an extent determined by the reward $R(i, a)$. Since ROUGE scores are always positive, the probability of every sampled index sequence is increased, whereas intuitively we would prefer to decrease the probability of sequences that receive a comparatively low, albeit positive, reward. This can be remedied by introducing a so-called baseline, which is subtracted from all rewards.
Using a baseline $\overline{r}$, our sample-based estimate of $\nabla_\theta J(\theta)$ becomes:
\begin{align}
\frac{1}{B} \sum_{b=1}^{B}\nabla_\theta \log p_\theta(i^b | d) (R(i^b, a) - \overline{r})\label{eqn:estimated-gradient-baseline}
\end{align}
It can be shown that the introduction of $\overline{r}$ does not bias the gradient estimator and can significantly reduce its variance if chosen appropriately \citep{sutton2000policy}.
There are several possibilities for the baseline, including the long-term average reward and the average reward across different samples for one document-summary pair. We choose an approach known as self-critical reinforcement learning, in which the test-time performance of the current model is used as the baseline \citep{Ranzato2015SequenceLT,rennie2017self,abs5_paulus2017deep}. More concretely, after sampling the document-summary pair $(d, a)$, we greedily generate an index sequence using the current parameters $\theta$:
\begin{align}
i_\textit{greedy} = \argmax_i p_\theta(i | d)
\end{align}
and calculate the baseline for the current update as $\overline{r} = R(i_\textit{greedy}, a)$. This baseline has the intuitively satisfying property of only increasing the probability of a sampled label sequence when the summary it induces is better than what would be obtained by greedy decoding.
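A self-critical advantage computation might look like the following toy sketch, using a set-overlap stand-in for the ROUGE reward; none of these helper names come from the actual implementation:

```python
import numpy as np

def toy_reward(indices, gold):
    """Stand-in for R(i, a): fraction of a gold sentence set recovered."""
    return len(set(indices) & gold) / len(gold)

def greedy_indices(affinities, M):
    """Greedy decoding: repeatedly take the highest remaining affinity,
    which amounts to the top-M sentences by affinity."""
    return list(np.argsort(affinities)[::-1][:M])

def advantages(affinities, sampled_batch, gold, M):
    """Self-critical advantages R(i^b, a) - R(i_greedy, a) for a batch
    of sampled index sequences."""
    baseline = toy_reward(greedy_indices(affinities, M), gold)
    return [toy_reward(i, gold) - baseline for i in sampled_batch]
```

Sampled sequences that fail to beat the greedy decode receive a negative advantage, so their probability is pushed down, matching the intuition above.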
\subsection{Reward Function}
A final consideration is a concrete choice for the reward function $R(i, a)$. Throughout this work we use:
\begin{multline}
R(i, a) = \frac{1}{3} (\text{ROUGE-1}_f(i, a) + {} \\
\text{ROUGE-2}_f(i, a) +
\text{ROUGE-L}_f(i, a)).
\end{multline}
The above reward function optimizes the average of all the ROUGE variants \cite{eva1_lin:2004:ACLsummarization} while balancing precision and recall.
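For illustration, simplified token-level versions of these F1 scores (a stand-in sketch of our own; the experiments use the standard ROUGE packages) can be written as:

```python
from collections import Counter

def _f1(p, r):
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def rouge_n_f(cand, ref, n):
    """Simplified ROUGE-N F1: clipped n-gram overlap between candidate
    and reference token lists."""
    c = Counter(tuple(cand[k:k + n]) for k in range(len(cand) - n + 1))
    g = Counter(tuple(ref[k:k + n]) for k in range(len(ref) - n + 1))
    if not c or not g:
        return 0.0
    overlap = sum((c & g).values())  # multiset intersection clips counts
    return _f1(overlap / sum(c.values()), overlap / sum(g.values()))

def rouge_l_f(cand, ref):
    """Simplified ROUGE-L F1 via the longest common subsequence."""
    m, n = len(cand), len(ref)
    if not m or not n:
        return 0.0
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for a in range(m):
        for b in range(n):
            dp[a + 1][b + 1] = (dp[a][b] + 1 if cand[a] == ref[b]
                                else max(dp[a][b + 1], dp[a + 1][b]))
    return _f1(dp[m][n] / m, dp[m][n] / n)

def reward(cand, ref):
    """Average of the three ROUGE F1 variants, mirroring R(i, a) above."""
    return (rouge_n_f(cand, ref, 1) + rouge_n_f(cand, ref, 2)
            + rouge_l_f(cand, ref)) / 3.0
```

A perfect extraction scores 1.0 and a disjoint one scores 0.0, so the reward is bounded in $[0, 1]$.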
\section{Model\label{sec:model}}
In this section, we discuss the concrete instantiations of the neural network $\pi_\theta$ that we use in our experiments. We break $\pi_\theta$ up into two components: a document encoder $f_{\theta1}$, which outputs a sequence of sentence feature vectors $(h_1, \dots, h_{N_d})$ and a decoder $g_{\theta2}$ which yields sentence affinities:
\begin{align}
h_1, \dots, h_{N_d} &= f_{\theta1}(d)\\
\pi_\theta(d) &= g_{\theta2}(h_1, \dots, h_{N_d})
\end{align}
\noindent \textbf{Encoder.} Features for each sentence in isolation are first obtained by applying a word-level Bidirectional Recurrent Neural Network (BiRNN) to the embeddings of the words in the sentence and averaging the hidden states over words. A separate sentence-level BiRNN is then used to obtain a representation $h_t$ for each sentence in the context of the document.
\noindent \textbf{Decoder.} A multi-layer perceptron is used to map from the representation $h_t$ of each sentence through a final sigmoid unit to yield sentence affinities $\pi_\theta(d)$.
The use of a bidirectional recurrent network in the encoder is crucial, as it allows the network to process the document as a whole, yielding representations for each sentence that take all other sentences into account. This procedure is necessary to deal with some aspects of summary quality such as redundancy (avoiding the inclusion of multiple sentences with similar meaning), which requires the affinities for different sentences to depend on one another. For example, to avoid redundancy, if the affinity for some sentence is high, then sentences which express similar meaning should have low affinities.
\section{Experiments\label{sec:experiments}}
In this section, we discuss the setup of our experiments. We first discuss the corpora that we used and our evaluation methodology. We then discuss the baseline methods against which we compared, and conclude with a detailed overview of the settings of the model parameters.
\subsection{Corpora}
Three datasets are used for our experiments: CNN, Daily Mail, and the combined CNN/Daily Mail \citep{data1_hermann2015teaching,data2_nallapati2016abstractive}. We use the standard split of \citet{data1_hermann2015teaching} for training, validation, and testing, and the same setting without \textit{anonymization} on all three corpora as \citet{abs4_SeeLM17}. The Daily Mail corpus has 196,557 training documents, 12,147 validation documents, and 10,397 test documents, while the CNN corpus has 90,266/1,220/1,093 documents, respectively.
\subsection{Evaluation}
The models are evaluated based on ROUGE \cite{eva1_lin:2004:ACLsummarization}.
We obtain our ROUGE scores using the standard pyrouge
package\footnote{\url{https://pypi.python.org/pypi/pyrouge/0.1.3}} for the test set evaluation and a faster python implementation of the ROUGE metric\footnote{We use the modified version based on \url{https://github.com/pltrdy/rouge}}
for training and evaluating on the validation set. We report the F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L, which measure the unigram, bigram, and longest-common-subsequence overlap with the reference summaries, respectively.
\subsection{Baselines}
We compare \textsc{BanditSum } with other extractive methods including: the Lead-3 model, SummaRuNNer \citep{ext5_summarunner}, Refresh \citep{DBLP:Narayan/2018}, RNES \citep{DBLP:conf/aaai/WuH18}, DQN \citep{yao2018deep}, and NN-SE \citep{ext4_cheng2016neural}. The Lead-3 model simply produces the leading three sentences of the document as the summary.
\subsection{Model Settings}
We use 100-dimensional Glove embeddings \citep{we2_pennington2014glove} as our embedding initialization. We do not limit the sentence length, nor the maximum number of sentences per document. We use one-layer BiLSTM for word-level RNN, and two-layers BiLSTM for sentence-level RNN. The hidden state dimension is 200 for each direction on all LSTMs. For the decoder, we use a feed-forward network with one hidden layer of dimension 100.
During training, we use Adam \citep{adam_kingma2014adam} as the optimizer with a learning rate of $5 \times 10^{-5}$, beta parameters $(0, 0.999)$, and a weight decay of $1 \times 10^{-6}$, to maximize the objective function defined in equation \eqref{eqn:objective}. We employ gradient clipping of 1 to regularize our model. At each iteration, we sample $B = 20$ times to estimate the gradient defined in equation \eqref{eqn:estimated-gradient}. For our system, the reported performance is obtained within two epochs of training.
At test time, we pick sentences in order of decreasing predicted probability until the length limit is reached. The full-length ROUGE F1 score is used as the evaluation metric. For $M$, the number of sentences selected per summary, we use a value of 3, based on our validation results as well as on the settings described in \citet{ext5_summarunner}.
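This test-time procedure amounts to taking the top-$M$ sentences by predicted probability; a minimal sketch of our own follows (emitting the picks in document order is our assumption, not stated in the released code):

```python
import numpy as np

def extract_summary(sentences, affinities, M=3):
    """Greedily pick the M sentences with the highest affinities and
    return them in their original document order (an assumption made
    here for readability of the output summary)."""
    top = sorted(np.argsort(affinities)[::-1][:M])
    return [sentences[i] for i in top]
```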
\section{Experiment Results\label{sec:results}}
In this section, we present quantitative results from the ROUGE evaluation and qualitative results based on human evaluation. In addition, we demonstrate the stability of our RL model by comparing the validation curve of \textsc{BanditSum } with SummaRuNNer \citep{ext5_summarunner} trained with a maximum likelihood objective.
\subsection{Rouge Evaluation}
\begin{table}[!h]
\footnotesize
\centering
\begin{tabular}{|l|l|l|l|}
\hline
Model & \multicolumn{3}{l|}{ROUGE} \\
\hline
& 1 & 2 & L \\
\hline
Lead \citep{DBLP:Narayan/2018} & 39.6 & 17.7 & 36.2 \\
Lead-3(ours) & 40.0 & 17.5 & 36.2 \\
SummaRuNNer & 39.6 & 16.2 & 35.3 \\
DQN & 39.4 & 16.1 & 35.6 \\
Refresh & 40.0 &18.2 &36.6\\
RNES w/o coherence & 41.3 & \textbf{18.9} & \textbf{37.6}\\
\hline
\textsc{BanditSum } & \textbf{41.5} & 18.7 & \textbf{37.6} \\
\hline
\end{tabular}
\caption{Performance comparison of different extractive summarization models on the combined CNN/Daily Mail test set using full-length F1. }
\label{table:results_cnn}
\end{table}
\begin{table}[!h]
\small
\centering
\setlength\tabcolsep{4 pt}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Model & \multicolumn{3}{l|}{CNN} & \multicolumn{3}{l|}{Daily Mail} \\ \hline
& 1 & 2 & L & 1 & 2 & L \\ \hline
Lead-3 & 28.8 & 11.0 & 25.5 & 41.2 & 18.2 & 37.3 \\
NN-SE & 28.4 & 10.0 & 25.0 & 36.2 & 15.2 & 32.9 \\
Refresh & 30.4 & \textbf{11.7} & 26.9& 41.0 & 18.8 & 37.7 \\
\textsc{BanditSum } & \textbf{30.7} & 11.6 & \textbf{27.4} & \textbf{42.1} & \textbf{18.9} & \textbf{38.3}\\
\hline
\end{tabular}
\caption{The full-length ROUGE F1 scores of various extractive models on the CNN and the Daily Mail test set separately.}
\label{table:daily_mail}
\end{table}
We present the results of comparing \textsc{BanditSum } to several baseline algorithms\footnote{
Due to different pre-processing methods and different numbers of selected sentences, several papers report different Lead scores \citep{DBLP:Narayan/2018,abs4_SeeLM17}.
We use the test set provided by \citet{DBLP:Narayan/2018}. Since their Lead score is a combination of Lead-3 for CNN and Lead-4 for Daily Mail, we recompute the Lead-3 scores for both CNN and Daily Mail with the preprocessing steps used in \citet{abs4_SeeLM17}. Additionally, our results are not directly comparable to results based on the anonymized dataset used by \citet{ext5_summarunner}.}
on the CNN/Daily Mail corpus in Tables~\ref{table:results_cnn} and \ref{table:daily_mail}.
Compared to other extractive summarization systems, \textsc{BanditSum } achieves performance that is significantly better than two RL-based approaches, Refresh \citep{DBLP:Narayan/2018} and DQN \citep{yao2018deep}, as well as SummaRuNNer, the state-of-the-art maximum likelihood-based extractive summarizer \cite{ext5_summarunner}. \textsc{BanditSum } performs slightly better than RNES \citep{DBLP:conf/aaai/WuH18} in terms of ROUGE-1 and slightly worse in terms of ROUGE-2. However, RNES requires pre-training with the maximum likelihood objective on heuristically-generated extractive labels; in contrast, \textsc{BanditSum } is very light-weight and converges significantly faster. We discuss the advantage of framing extractive summarization as a contextual bandit (\textsc{BanditSum}) over the sequential binary labeling setting (RNES) in Section~\ref{sec:discussion}.
We also observed that the choice of policy gradient baseline (see Section~\ref{subsec:baseline}) in \textsc{BanditSum } affects learning speed, but does not significantly affect asymptotic performance. Models trained with an average-reward baseline learned most quickly, while models trained with any of the three baselines (greedy, average reward in a batch, average global reward) performed roughly the same after one epoch of training. Models trained without a baseline under-performed the other choices by about 2 points of ROUGE score on average.
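To make the role of these baselines concrete, here is a minimal sketch of a REINFORCE-style loss with interchangeable baseline choices. This is illustrative only; the variable names and structure are ours, not the authors' implementation, and the greedy baseline is omitted because it requires a decoding step.

```python
def reinforce_loss(log_probs, rewards, baseline="batch_avg", running_avg=0.0):
    """REINFORCE loss with a swappable baseline (illustrative sketch).

    log_probs: per-sample log-probabilities of the sampled summaries
    rewards:   per-sample ROUGE rewards for those summaries
    """
    if baseline == "none":
        b = [0.0] * len(rewards)
    elif baseline == "batch_avg":          # average reward within the batch
        mean = sum(rewards) / len(rewards)
        b = [mean] * len(rewards)
    elif baseline == "global_avg":         # running average over all updates
        b = [running_avg] * len(rewards)
    else:
        raise ValueError(f"unknown baseline: {baseline}")
    # Advantage-weighted negative log-likelihood: subtracting any
    # reward-independent baseline leaves the gradient estimate unbiased
    # but can greatly reduce its variance.
    n = len(rewards)
    return -sum((r - bi) * lp for r, bi, lp in zip(rewards, b, log_probs)) / n
```

The intuition matches the observation above: a good baseline shrinks the variance of the advantage term and thus speeds up early learning, without changing the optimum.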
\subsection{Human Evaluation}
We also conduct a qualitative evaluation to understand the effects of the improvements introduced in \textsc{BanditSum } on human judgments of the generated summaries. To assess the effect of training with RL rather than maximum likelihood, in the first set of human evaluations we compare \textsc{BanditSum } with the state-of-the-art maximum likelihood-based model SummaRuNNer. To evaluate the importance of using an exact, rather than approximate, policy gradient to optimize ROUGE scores, we perform another human evaluation comparing \textsc{BanditSum } and Refresh, an RL-based method that uses an approximation of the policy gradient.
We follow a human evaluation protocol similar to the one used in \citet{DBLP:conf/aaai/WuH18}. Given a set of $N$ documents, we ask $K$ volunteers to evaluate the summaries extracted by both systems. For each document, a reference summary and a pair of randomly ordered extractive summaries (one generated by each of the two models) are presented to the volunteers. They are asked to compare and rank the extracted summaries along three dimensions: overall, coverage, and non-redundancy.
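The scores reported in the tables that follow are mean ranks over all collected judgments (lower is better). As a sketch of that aggregation — the data format here is hypothetical, not the authors' evaluation tooling:

```python
from collections import defaultdict

def average_ranks(judgments):
    """judgments: iterable of dicts mapping model name -> rank (1 = best).
    Returns each model's mean rank across all judgments; lower is better."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for judgment in judgments:
        for model, rank in judgment.items():
            totals[model] += rank
            counts[model] += 1
    return {model: totals[model] / counts[model] for model in totals}
```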
\begin{table}[h!]
\small
\begin{center}
\begin{tabularx}{\columnwidth}{|l|X|X|X|}
\hline
Model & Overall & Coverage & Non-Redundancy \\ \hline
SummaRuNNer & 1.67 &\textbf{1.46} &1.70 \\ \hline
\textsc{BanditSum } & \textbf{1.33} & 1.54 & \textbf{1.30} \\ \hline
\end{tabularx}
\end{center}
\caption{Average rank of human evaluation based on 5 participants who expressed 57 pairwise preferences between the summaries generated by SummaRuNNer and \textsc{BanditSum}. The model with the lower score is better.}
\label{table:human_eval_summarunner}
\end{table}
\begin{table}[h!]
\small
\begin{center}
\begin{tabularx}{\columnwidth}{|l|X|X|X|}
\hline
Model & Overall & Coverage & Non-Redundancy \\ \hline
Refresh & 1.53 &\textbf{1.34} & 1.55 \\ \hline
\textsc{BanditSum } & \textbf{1.50} & 1.58 & \textbf{1.30} \\ \hline
\end{tabularx}
\end{center}
\caption{Average rank of human evaluation based on 4 participants who expressed 20 pairwise preferences between the summaries generated by Refresh and \textsc{BanditSum}. The model with the lower score is better.}
\label{table:human_eval_refresh}
\end{table}
To compare with SummaRuNNer, we randomly sample 57 documents from the Daily Mail test set and ask 5 volunteers to evaluate the extracted summaries.
To compare with Refresh, we present the 20 documents (10 CNN and 10 Daily Mail) provided by \citet{DBLP:Narayan/2018} to 4 volunteers. Tables \ref{table:human_eval_summarunner} and \ref{table:human_eval_refresh} show the results of the human evaluation in these two settings. \textsc{BanditSum } is rated better than both Refresh and SummaRuNNer in terms of overall quality and non-redundancy. These results indicate that using the true policy gradient, rather than the approximation used by Refresh, improves overall quality. Interestingly, even though \textsc{BanditSum } has no explicit redundancy-avoidance mechanism, it outperforms the other systems on non-redundancy.
\subsection{Learning Curve}
Reinforcement learning methods are known for sometimes being unstable during training. However, this seems to be less of a problem for \textsc{BanditSum}, perhaps because it is formulated as a contextual bandit rather than a sequential labeling problem. We show this by comparing the validation curves generated by \textsc{BanditSum } and the state-of-the-art maximum likelihood-based model -- SummaRuNNer \citep{ext5_summarunner} (Figure~\ref{fig:train_efficiency}).
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{figures/dm_nll_rl.png}
\caption[Caption for LOF]{Average of ROUGE-1, 2, and L F1 scores on the Daily Mail validation set within one epoch of training on the Daily Mail training set. The x-axis value, multiplied by 2,000, indicates the number of data examples the algorithms have seen. The supervised labels in SummaRuNNer are used to estimate the \texttt{upper\_bound}.}
\label{fig:train_efficiency}
\end{figure}
From Figure~\ref{fig:train_efficiency}, we observe that \textsc{BanditSum } converges to good results significantly more quickly than SummaRuNNer, and with less variance in performance. One possible reason is that extractive summarization does not have well-defined supervised labels: there is a mismatch between the provided labels and the human-generated abstractive summaries, so the gradient computed from the maximum likelihood loss function does not optimize the evaluation metric of interest. Another important observation is that both models are still far from the estimated upper bound\footnote{The supervised labels for the \texttt{upper\_bound} estimation are obtained using the heuristic described in \citet{ext5_summarunner}.}, which shows that there is still significant room for improvement.
\subsection{Run Time}
On the CNN/Daily Mail dataset, our model takes about 25.5 hours per epoch on a TITAN Xp. We trained the model for 3 epochs, which took about 76 hours in total. For comparison, DQN took about 10 days to train on a GTX 1080 \citep{yao2018deep}, and Refresh took about 12 hours on a single GPU \citep{DBLP:Narayan/2018}; note that the latter figure does not include the significant time Refresh requires for pre-computing ROUGE scores.
\section{Discussion: Contextual Bandit Setting Vs. Sequential Full RL Labeling \label{sec:discussion}}
We conjecture that the contextual bandit (CB) setting is a more suitable framework for modeling extractive summarization than the sequential binary labeling setting, especially when good summary sentences appear later in the document. The intuition is that models based on the sequential labeling setting are affected by the order of the decisions, which biases them towards selecting sentences that appear earlier in the document. By contrast, our CB-based RL model has more freedom to explore the search space, as it samples the sentences without replacement based on the affinity scores. Note that although we do not explicitly make the selection decisions in a sequential fashion, the sequential information about dependencies between sentences is implicitly embedded in the affinity scores, which are produced by bi-directional RNNs.
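A minimal sketch of the sampling step described above — drawing sentences without replacement, each draw proportional to the affinity scores of the not-yet-chosen sentences. This is illustrative only; in the actual model the scores come from a bi-directional RNN and are trained end-to-end.

```python
import random

def sample_without_replacement(affinities, k=3, rng=random):
    """Sample k distinct sentence indices; each draw is proportional to the
    affinity scores of the sentences not yet chosen (illustrative sketch)."""
    chosen = []
    for _ in range(min(k, len(affinities))):
        remaining = [i for i in range(len(affinities)) if i not in chosen]
        total = sum(affinities[i] for i in remaining)
        r = rng.uniform(0.0, total)
        picked = remaining[-1]   # guard against floating-point shortfall
        acc = 0.0
        for i in remaining:
            acc += affinities[i]
            if acc >= r:
                picked = i
                break
        chosen.append(picked)
    return chosen
```

Because every sentence retains some probability of being drawn at every step, a late high-affinity sentence is as reachable as an early one — unlike a left-to-right labeler, whose earlier decisions constrain the later ones.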
We provide empirical evidence for this conjecture by comparing \textsc{BanditSum } to the sequential RL model proposed by~\citet{DBLP:conf/aaai/WuH18} (Figure \ref{fig:model_comparison}) on two subsets of the data: one in which good summary sentences appear early in the article, and one in which they appear late. Specifically, we construct two evaluation datasets by selecting the first 50 documents ($\mathit{D}_{\textit{early}}$, i.e., best summary occurs early) and the last 50 documents ($\mathit{D}_{\textit{late}}$, i.e., best summary occurs late) from a sample of 1000 documents ordered by the average extractive label index \textit{$\overline{idx}$}. Given an article with $n$ sentences indexed $1,\ldots, n$ and a greedy extractive label set with three sentences $(i,j,k)$\footnote{For each document, a length-$3$ extractive summary with near-optimal ROUGE score is selected following the heuristic proposed by \citet{ext5_summarunner}.}, the average extractive label index is computed as \textit{$\overline{idx}$}$ = (i+j+k)/(3n)$.
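The construction of $\mathit{D}_{\textit{early}}$ and $\mathit{D}_{\textit{late}}$ can be sketched in a few lines. The document tuple format here is hypothetical; only the index arithmetic follows the text.

```python
def avg_label_index(label_indices, n_sents):
    """Average extractive-label position, normalized by document length:
    for labels (i, j, k) in an n-sentence article, (i + j + k) / (3 * n)."""
    return sum(label_indices) / (len(label_indices) * n_sents)

def split_early_late(docs, m=50):
    """docs: list of (doc_id, label_indices, n_sents) tuples (hypothetical).
    Sort by average label index; return the first and last m documents."""
    ranked = sorted(docs, key=lambda d: avg_label_index(d[1], d[2]))
    return ranked[:m], ranked[-m:]   # (D_early, D_late)
```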
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{figures/data_comparision.png}
\caption{Model comparisons of the average value for ROUGE-1,2,L F1 scores ($\overline{f}$) on $\mathit{D}_{\textit{early}}$ and $\mathit{D}_{\textit{late}}$. For each model, the results were obtained by averaging $\overline{f}$ across ten trials with 100 epochs in each trial. $\mathit{D}_{\textit{early}}$ and $\mathit{D}_{\textit{late}}$ consist of 50 articles each, such that the good summary sentences appear early and late in the article, respectively. We observe a significant advantage of \textsc{BanditSum } compared to RNES and RNES3 (based on the sequential binary labeling setting) on $\mathit{D}_{\textit{late}}$.}
\label{fig:model_comparison}
\end{figure}
Given these two subsets of the data, three models (\textsc{BanditSum}, RNES, and RNES3) are trained and evaluated on each of the two datasets without extractive labels. Since the original sequential RL model (RNES) is unstable without supervised pre-training, we introduce RNES3, a variant limited to selecting no more than three sentences. Starting from random initializations without supervised pre-training, we train each model ten times for 100 epochs and plot the learning curve of the average ROUGE F1 score in Figure \ref{fig:model_comparison}. We can clearly see that \textsc{BanditSum } finds a better solution more quickly than RNES and RNES3 on both datasets. Moreover, it explores significantly faster and finds the best solution when good summary sentences appear later in the document ($\mathit{D}_{\textit{late}}$).
\section{Conclusion \label{sec:conclusion}}
In this work, we presented \textsc{BanditSum}, a contextual bandit learning framework for extractive summarization based on neural networks and reinforcement learning. \textsc{BanditSum } does not require sentence-level extractive labels and directly optimizes ROUGE scores between the summaries it generates and abstractive reference summaries. Empirical results show that our method performs better than, or comparably to, state-of-the-art extractive summarization models that must be pre-trained on extractive labels, while converging in significantly fewer update steps than competing approaches. In future work, we will explore adding an extra coherence reward \citep{DBLP:conf/aaai/WuH18} to improve the quality of extracted summaries in terms of sentence discourse relations.
\section*{Acknowledgements}
The research was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would like to thank Compute Canada for providing the computational resources.
\section{Credits}
This document has been adapted from the instructions for earlier ACL
and NAACL proceedings. It represents a recent build from
\url{https://github.com/acl-org/acl-pub}, with modifications by Micha
Elsner and Preethi Raghavan, based on the NAACL 2018 instructions by
Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex
suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan,
NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael
White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn,
those for ACL 2008 by Johanna D. Moore, Simone Teufel, James Allan,
and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal
Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and
earlier ACL and EACL formats. Those versions were written by several
people, including John Chen, Henry S. Thompson and Donald
Walker. Additional elements were taken from the formatting
instructions of the {\em International Joint Conference on Artificial
Intelligence} and the \emph{Conference on Computer Vision and
Pattern Recognition}.
\section{Introduction}
The following instructions are directed to authors of papers submitted
to \confname{} or accepted for publication in its proceedings. All
authors are required to adhere to these specifications. Authors are
required to provide a Portable Document Format (PDF) version of their
papers. \textbf{The proceedings are designed for printing on A4
paper.}
\section{General Instructions}
Manuscripts must be in two-column format. Exceptions to the
two-column format include the title, authors' names and complete
addresses, which must be centered at the top of the first page, and
any full-width figures or tables (see the guidelines in
Subsection~\ref{ssec:first}). {\bf Type single-spaced.} Start all
pages directly under the top margin. See the guidelines later
regarding formatting the first page. The manuscript should be
printed single-sided and its length
should not exceed the maximum page limit described in Section~\ref{sec:length}.
Pages are numbered for initial submission. However, {\bf do not number the pages in the camera-ready version}.
By uncommenting {\small\verb|\aclfinalcopy|} at the top of this
document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission.
The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review.
However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The \confname{} \LaTeX\ style will create a titlebox space of 2.5in for you when {\small\verb|\aclfinalcopy|} is commented out.
The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to \confname{} will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to \confname{} after the submission deadline.
\subsection{The Ruler}
The \confname{} style defines a printed ruler which should be presented in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document without the provided
style files, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment the {\small\verb|\aclfinalcopy|} command in the document preamble.)
Reviewers: note that the ruler measurements do not align well with
lines in the paper -- this turns out to be very difficult to do well
when the paper contains many figures and equations, and, when done,
looks ugly. In most cases one would expect that the approximate
location will be adequate, although you can also use fractional
references ({\em e.g.}, the first paragraph on this page ends at mark $108.5$).
\subsection{Electronically-available resources}
\conforg{} provides this description in \LaTeX2e{} ({\small\tt
emnlp2018.tex}) and PDF format ({\small\tt emnlp2018.pdf}), along
with the \LaTeX2e{} style file used to format it ({\small\tt
emnlp2018.sty}) and an ACL bibliography style ({\small\tt
acl\_natbib\_nourl.bst}) and example bibliography ({\small\tt
emnlp2018.bib}). These files are all available at
\url{http://emnlp2018.org/downloads/emnlp18-latex.zip}; a Microsoft
Word template file ({\small\tt emnlp18-word.docx}) and example
submission pdf ({\small\tt emnlp18-word.pdf}) is available at
\url{http://emnlp2018.org/downloads/emnlp18-word.zip}. We strongly
recommend the use of these style files, which have been appropriately
tailored for the \confname{} proceedings.
\subsection{Format of Electronic Manuscript}
\label{sect:pdf}
For the production of the electronic manuscript you must use Adobe's
Portable Document Format (PDF). PDF files are usually produced from
\LaTeX\ using the \textit{pdflatex} command. If your version of
\LaTeX\ produces Postscript files, you can convert these into PDF
using \textit{ps2pdf} or \textit{dvipdf}. On Windows, you can also use
Adobe Distiller to generate PDF.
Please make sure that your PDF file includes all the necessary fonts
(especially tree diagrams, symbols, and fonts with Asian
characters). When you print or create the PDF file, there is usually
an option in your printer setup to include none, all or just
non-standard fonts. Please make sure that you select the option of
including ALL the fonts. \textbf{Before sending it, test your PDF by
printing it from a computer different from the one where it was
created.} Moreover, some word processors may generate very large PDF
files, where each page is rendered as an image. Such images may
reproduce poorly. In this case, try alternative ways to obtain the
PDF. One way on some systems is to install a driver for a postscript
printer, send your document to the printer specifying ``Output to a
file'', then convert the file to PDF.
It is of utmost importance to specify the \textbf{A4 format} (21 cm
x 29.7 cm) when formatting the paper. When working with
{\tt dvips}, for instance, one should specify {\tt -t a4}.
Or using the command \verb|\special{papersize=210mm,297mm}| in the latex
preamble (directly below the \verb|\usepackage| commands). Then using
{\tt dvipdf} and/or {\tt pdflatex} which would make it easier for some.
Print-outs of the PDF file on A4 paper should be identical to the
hardcopy version. If you cannot meet the above requirements about the
production of your electronic submission, please contact the
publication chairs as soon as possible.
\subsection{Layout}
\label{ssec:layout}
Format manuscripts two columns to a page, in the manner these
instructions are formatted. The exact dimensions for a page on A4
paper are:
\begin{itemize}
\item Left and right margins: 2.5 cm
\item Top margin: 2.5 cm
\item Bottom margin: 2.5 cm
\item Column width: 7.7 cm
\item Column height: 24.7 cm
\item Gap between columns: 0.6 cm
\end{itemize}
\noindent Papers should not be submitted on any other paper size.
If you cannot meet the above requirements about the production of
your electronic submission, please contact the publication chairs
above as soon as possible.
\subsection{Fonts}
For reasons of uniformity, Adobe's {\bf Times Roman} font should be
used. In \LaTeX2e{} this is accomplished by putting
\begin{quote}
\begin{verbatim}
\usepackage{times}
\usepackage{latexsym}
\end{verbatim}
\end{quote}
in the preamble. If Times Roman is unavailable, use {\bf Computer
Modern Roman} (\LaTeX2e{}'s default). Note that the latter is about
10\% less dense than Adobe's Times Roman font.
\begin{table}[t!]
\begin{center}
\begin{tabular}{|l|rl|}
\hline \bf Type of Text & \bf Font Size & \bf Style \\ \hline
paper title & 15 pt & bold \\
author names & 12 pt & bold \\
author affiliation & 12 pt & \\
the word ``Abstract'' & 12 pt & bold \\
section titles & 12 pt & bold \\
document text & 11 pt &\\
captions & 10 pt & \\
abstract text & 10 pt & \\
bibliography & 10 pt & \\
footnotes & 9 pt & \\
\hline
\end{tabular}
\end{center}
\caption{\label{font-table} Font guide. }
\end{table}
\subsection{The First Page}
\label{ssec:first}
Center the title, author's name(s) and affiliation(s) across both
columns. Do not use footnotes for affiliations. Do not include the
paper ID number assigned during the submission process. Use the
two-column format only when you begin the abstract.
{\bf Title}: Place the title centered at the top of the first page, in
a 15-point bold font. (For a complete guide to font sizes and styles,
see Table~\ref{font-table}) Long titles should be typed on two lines
without a blank line intervening. Approximately, put the title at 2.5
cm from the top of the page, followed by a blank line, then the
author's names(s), and the affiliation on the following line. Do not
use only initials for given names (middle initials are allowed). Do
not format surnames in all capitals ({\em e.g.}, use ``Mitchell'' not
``MITCHELL''). Do not format title and section headings in all
capitals as well except for proper names (such as ``BLEU'') that are
conventionally in all capitals. The affiliation should contain the
author's complete address, and if possible, an electronic mail
address. Start the body of the first page 7.5 cm from the top of the
page.
The title, author names and addresses should be completely identical
to those entered to the electronical paper submission website in order
to maintain the consistency of author information among all
publications of the conference. If they are different, the publication
chairs may resolve the difference without consulting with you; so it
is in your own interest to double-check that the information is
consistent.
{\bf Abstract}: Type the abstract at the beginning of the first
column. The width of the abstract text should be smaller than the
width of the columns for the text in the body of the paper by about
0.6 cm on each side. Center the word {\bf Abstract} in a 12 point bold
font above the body of the abstract. The abstract should be a concise
summary of the general thesis and conclusions of the paper. It should
be no longer than 200 words. The abstract text should be in 10 point font.
{\bf Text}: Begin typing the main body of the text immediately after
the abstract, observing the two-column format as shown in
the present document. Do not include page numbers.
{\bf Indent}: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title.
\begin{table}
\centering
\small
\begin{tabular}{cc}
\begin{tabular}{|l|l|}
\hline
{\bf Command} & {\bf Output}\\\hline
\verb|{\"a}| & {\"a} \\
\verb|{\^e}| & {\^e} \\
\verb|{\`i}| & {\`i} \\
\verb|{\.I}| & {\.I} \\
\verb|{\o}| & {\o} \\
\verb|{\'u}| & {\'u} \\
\verb|{\aa}| & {\aa} \\\hline
\end{tabular} &
\begin{tabular}{|l|l|}
\hline
{\bf Command} & {\bf Output}\\\hline
\verb|{\c c}| & {\c c} \\
\verb|{\u g}| & {\u g} \\
\verb|{\l}| & {\l} \\
\verb|{\~n}| & {\~n} \\
\verb|{\H o}| & {\H o} \\
\verb|{\v r}| & {\v r} \\
\verb|{\ss}| & {\ss} \\\hline
\end{tabular}
\end{tabular}
\caption{Example commands for accented characters, to be used in, {\em e.g.}, \BibTeX\ names.}\label{tab:accents}
\end{table}
\subsection{Sections}
{\bf Headings}: Type and label section and subsection headings in the
style shown on the present document. Use numbered sections (Arabic
numerals) in order to facilitate cross references. Number subsections
with the section number and the subsection number separated by a dot,
in Arabic numerals.
Do not number subsubsections.
\begin{table*}[t!]
\centering
\begin{tabular}{lll}
output & natbib & previous \conforg{} style files\\
\hline
\citep{Gusfield:97} & \verb|\citep| & \verb|\cite| \\
\citet{Gusfield:97} & \verb|\citet| & \verb|\newcite| \\
\citeyearpar{Gusfield:97} & \verb|\citeyearpar| & \verb|\shortcite| \\
\end{tabular}
\caption{Citation commands supported by the style file.
The citation style is based on the natbib package and
supports all natbib citation commands.
It also supports commands defined in previous \conforg{} style files
for compatibility.
}
\end{table*}
{\bf Citations}: Citations within the text appear in parentheses
as~\cite{Gusfield:97} or, if the author's name appears in the text
itself, as Gusfield~\shortcite{Gusfield:97}.
Using the provided \LaTeX\ style, the former is accomplished using
{\small\verb|\cite|} and the latter with {\small\verb|\shortcite|} or {\small\verb|\newcite|}. Collapse multiple citations as in~\cite{Gusfield:97,Aho:72}; this is accomplished with the provided style using commas within the {\small\verb|\cite|} command, {\em e.g.}, {\small\verb|\cite{Gusfield:97,Aho:72}|}. Append lowercase letters to the year in cases of ambiguities.
Treat double authors as
in~\cite{Aho:72}, but write as in~\cite{Chandra:81} when more than two
authors are involved. Collapse multiple citations as
in~\cite{Gusfield:97,Aho:72}. Also refrain from using full citations
as sentence constituents.
We suggest that instead of
\begin{quote}
``\cite{Gusfield:97} showed that ...''
\end{quote}
you use
\begin{quote}
``Gusfield \shortcite{Gusfield:97} showed that ...''
\end{quote}
If you are using the provided \LaTeX{} and Bib\TeX{} style files, you
can use the command \verb|\citet| (cite in text)
to get ``author (year)'' citations.
If the Bib\TeX{} file contains DOI fields, the paper
title in the references section will appear as a hyperlink
to the DOI, using the hyperref \LaTeX{} package.
To disable the hyperref package, load the style file
with the \verb|nohyperref| option: \\{\small
\verb|\usepackage[nohyperref]{acl2018}|}
\textbf{Digital Object Identifiers}: As part of our work to make ACL
materials more widely used and cited outside of our discipline, ACL
has registered as a CrossRef member, as a registrant of Digital Object
Identifiers (DOIs), the standard for registering permanent URNs for
referencing scholarly materials. \conforg{} has \textbf{not} adopted the
ACL policy of requiring camera-ready references to contain the appropriate
DOIs (or as a second resort, the hyperlinked ACL Anthology
Identifier). But we certainly encourage you to use
Bib\TeX\ records that contain DOI or URLs for any of the ACL
materials that you reference. Appropriate records should be found
for most materials in the current ACL Anthology at
\url{http://aclanthology.info/}.
As examples, we cite \cite{P16-1001} to show you how papers with a DOI
will appear in the bibliography. We cite \cite{C14-1001} to show how
papers without a DOI but with an ACL Anthology Identifier will appear
in the bibliography.
\textbf{Anonymity:} As reviewing will be double-blind, the submitted
version of the papers should not include the authors' names and
affiliations. Furthermore, self-references that reveal the author's
identity, {\em e.g.},
\begin{quote}
``We previously showed \cite{Gusfield:97} ...''
\end{quote}
should be avoided. Instead, use citations such as
\begin{quote}
``\citeauthor{Gusfield:97} \shortcite{Gusfield:97}
previously showed ... ''
\end{quote}
Preprint servers such as arXiv.org and workshops that do not
have published proceedings are not considered archival for purposes of
submission. However, to preserve the spirit of blind review, authors
are encouraged to refrain from posting until the completion of the
review process. Otherwise, authors must state in the online submission
form the name of the workshop or preprint server and title of the
non-archival version. The submitted version should be suitably
anonymized and not contain references to the prior non-archival
version. Reviewers will be told: ``The author(s) have notified us that
there exists a non-archival previous version of this paper with
significantly overlapping text. We have approved submission under
these circumstances, but to preserve the spirit of blind review, the
current submission does not reference the non-archival version.''
\textbf{Please do not use anonymous citations} and do not include
when submitting your papers. Papers that do not
conform to these requirements may be rejected without review.
\textbf{References}: Gather the full set of references together under
the heading {\bf References}; place the section before any Appendices,
unless they contain references. Arrange the references alphabetically
by first author, rather than by order of occurrence in the text.
By using a .bib file, as in this template, this will be automatically
handled for you. See the \verb|\bibliography| commands near the end for more.
Provide as complete a citation as possible, using a consistent format,
such as the one for {\em Computational Linguistics\/} or the one in the
{\em Publication Manual of the American
Psychological Association\/}~\cite{APA:83}. Use of full names for
authors rather than initials is preferred. A list of abbreviations
for common computer science journals can be found in the ACM
{\em Computing Reviews\/}~\cite{ACM:83}.
The \LaTeX{} and Bib\TeX{} style files provided roughly fit the
American Psychological Association format, allowing regular citations,
short citations and multiple citations as described above.
\begin{itemize}
\item Example citing an arxiv paper: \cite{rasooli-tetrault-2015}.
\item Example article in journal citation: \cite{Ando2005}.
\item Example article in proceedings, with location: \cite{borsch2011}.
\item Example article in proceedings, without location: \cite{andrew2007scalable}.
\end{itemize}
See corresponding .bib file for further details.
Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work.
{\bf Appendices}: Appendices, if any, directly follow the text and the
references (but see above). Letter them in sequence and provide an
informative title: {\bf Appendix A. Title of Appendix}.
\subsection{URLs}
URLs can be typeset using the \verb|\url| command. However, very long
URLs cause a known issue in which the URL highlighting may incorrectly
cross pages or columns in the document. Please check carefully for
URLs too long to appear in the column, which we recommend you break,
shorten or place in footnotes. Be aware that actual URL should appear
in the text in human-readable format; neither internal nor external
hyperlinks will appear in the proceedings.
\subsection{Footnotes}
{\bf Footnotes}: Put footnotes at the bottom of the page and use 9
point font. They may be numbered or referred to by asterisks or other
symbols.\footnote{This is how a footnote should appear.} Footnotes
should be separated from the text by a line.\footnote{Note the line
separating the footnotes from the text.}
\subsection{Graphics}
{\bf Illustrations}: Place figures, tables, and photographs in the
paper near where they are first discussed, rather than at the end, if
possible. Wide illustrations may run across both columns. Color
illustrations are discouraged, unless you have verified that
they will be understandable when printed in black ink.
{\bf Captions}: Provide a caption for every illustration; number each one
sequentially in the form: ``Figure 1. Caption of the Figure.'' ``Table 1.
Caption of the Table.'' Type the captions of the figures and
tables below the body, using 11 point text.
\subsection{Accessibility}
\label{ssec:accessibility}
In an effort to accommodate people who are color-blind (as well as those printing
to paper), grayscale readability for all accepted papers will be
encouraged. Color is not forbidden, but authors should ensure that
tables and figures do not rely solely on color to convey critical
distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color.
\section{Translation of non-English Terms}
It is also advised to supplement non-English characters and terms
with appropriate transliterations and/or translations
since not all readers understand all such characters and terms.
Inline transliteration or translation can be represented in
the order of: original-form transliteration ``translation''.
\section{Length of Submission}
\label{sec:length}
The \confname{} main conference accepts submissions of long papers and
short papers.
Long papers may consist of up to eight (8) pages of
content plus unlimited pages for references. Upon acceptance, final
versions of long papers will be given one additional page -- up to nine (9)
pages of content plus unlimited pages for references -- so that reviewers' comments
can be taken into account. Short papers may consist of up to four (4)
pages of content, plus unlimited pages for references. Upon
acceptance, short papers will be given five (5) pages in the
proceedings and unlimited pages for references.
For both long and short papers, all illustrations and tables that are part
of the main text must be accommodated within these page limits, observing
the formatting instructions given in the present document. Supplementary
material in the form of appendices does not count towards the page limit; see Appendix A for further information.
However, note that supplementary material should be supplementary
(rather than central) to the paper, and that reviewers may ignore
supplementary material when reviewing the paper (see Appendix
\ref{sec:supplemental}). Papers that do not conform to the specified
length and formatting requirements are subject to rejection without
review.
Workshop chairs may have different rules for allowed length and
whether supplemental material is welcome. As always, the respective
call for papers is the authoritative source.
\section*{Acknowledgments}
The acknowledgments should go immediately before the references. Do
not number the acknowledgments section. Do not include this section
when submitting your paper for review. \\
\noindent {\bf Preparing References:} \\
Include your own bib file like this:
{\small\verb|\bibliographystyle{acl_natbib_nourl}|}
\section{Introduction}
The H~{\sc i} proximity effect has been widely studied since its
discovery. Of particular
interest is that the study of the H~{\sc i} proximity
effect provides a method of measuring the intensity of the
metagalactic ultraviolet background (MUVB)
at the H~{\sc i} edge. However, the H~{\sc i} proximity effect
does not reveal information on the spectral shape of the MUVB.\par
We demonstrate that the proximity effect
on metal elements may provide a method for constraining
the spectral shape of the MUVB. Songaila and Cowie (1996)
\cite{SonC96} reported that
metal elements are detected in 75\% of clouds with $N_{\hbox{$\scriptstyle\rm HI$}} >3.0\times 10^{14}
\,{\rm cm^{-2}}$. The lack of data for studying the proximity effect
on metal elements will soon be overcome. If future observations
detect what we expect for the proximity effect on metal elements,
this will confirm that the proximity effect is an ionization effect
and that photoionization is the dominant ionization mechanism for the
metal ions in which the proximity effect is seen.
\section{Results}
The electronic configurations of metal elements are complicated
and a large number of ionization stages can coexist simultaneously.
In general, a simple signature does not exist for the metal proximity effect,
unlike for the H~{\sc i} proximity effect. Therefore, we instead study the
dependence of the column density ratios of two ions in successive
ionization stages on $\omega$.\par
We find that [O~{\sc iv}/O~{\sc iii}]$_{\hbox{$\scriptstyle\rm Q/\rm B\ $}}$ is the best
indicator of the spectral shape of the MUVB (we use [X]$_{\hbox{$\scriptstyle\rm Q/\rm B\ $}}$ to denote
the ratio of the value of X in the vicinity of the QSO to that
in the background case). The HST makes it
possible to observe O~{\sc iv} and O~{\sc iii} simultaneously
\cite{TriL97} and \cite{VogR95}.
It is clear from Figure 1 that [O~{\sc iv}/O~{\sc iii}]$_{\hbox{$\scriptstyle\rm Q/\rm B\ $}}$
depends strongly on the assumed spectral shape of the MUVB
and is less sensitive to the assumed $U_{\hbox{$\scriptstyle\rm B$}}$ (the ionization parameter
in the background case)
in the small-$\omega$ region. Therefore, [O~{\sc iv}/O~{\sc iii}]$_{\hbox{$\scriptstyle\rm Q/\rm B\ $}}$
is well suited to constraining the spectral shape even if $U_{\hbox{$\scriptstyle\rm B$}}$ is unknown.
\begin{figure}
\centerline{\vbox{
\psfig{figure=liujmF1.ps,height=7.cm}
}}
\caption[]{[O~{\sc iv}/O~{\sc iii}]$_{\hbox{$\scriptstyle\rm Q/\rm B\ $}}$ vs $\omega$.
The heavy curves are for $U_{\hbox{$\scriptstyle\rm B$}}=10^{-2.7}$; the light
curves are for $U_{\hbox{$\scriptstyle\rm B$}}=10^{-1.5}$.
The solid curves are for $\alpha_{\hbox{$\scriptstyle\rm B$}}=\alpha_{\hbox{$\scriptstyle\rm Q$}}=1.5$ and $b=1$;
the dotted curves are for $\alpha_{\hbox{$\scriptstyle\rm B$}}=0.5$,
$b=12.5$, and $\alpha_{\hbox{$\scriptstyle\rm Q$}}=1.5$; the dashed curves are for
$\alpha_{\hbox{$\scriptstyle\rm B$}}=0.3$, $b=100$, and $\alpha_{\hbox{$\scriptstyle\rm Q$}}=1.5$.
}
\end{figure}
The ratio Si~{\sc iv}/C~{\sc iv}, contrary to previous expectation, is
not a good indicator of the spectral shape, because it
strongly depends on both the ionization parameter $U_{\hbox{$\scriptstyle\rm B$}}$ and
the spectral shape of the MUVB. A small uncertainty in either of them
necessarily results in a large error in the determination of the other.
We also found that the extra ionizing radiation from QSOs does not
always reduce the ratio C~{\sc iv}/H~{\sc i}, but enhances
C~{\sc iv}/H~{\sc i} in the case of $U_{\hbox{$\scriptstyle\rm B$}}\lower.5ex\hbox{\ltsima} 10^{-2}$.
Thus, the observed excess of C~{\sc iv} systems toward luminous
QSOs may be a photoionization effect
rather than a gravitational lensing effect \cite{VanQ96}.
\acknowledgements{We are grateful to Dr Avery Meiksin for his comments.}
\begin{iapbib}{99}{
\bibitem{LiuM97} Liu, J.M., \& Meiksin, A., 1997, in preparation
\bibitem{SonC96} Songaila, A., \& Cowie, L.L. 1996, AJ, 112, 335
\bibitem{TriL97} Tripp, T.M., Lu, L., \& Savage, B.D. 1997,
[astro-ph/9703080]
\bibitem{VanQ96} Vanden Berk, D.E., Quashnock, J.M., York, D.G.,
\& Yanny, B. 1996, 469, 78
\bibitem{VogR95} Vogel, S., \& Reimers, D. 1995, A\&A, 294, 377
}
\end{iapbib}
\vfill
\end{document}
\section{Introduction}\label{sec1}
The temporal and spatial mixing of strong fundamental frequency ($\omega$) and second harmonic ($2\omega$) laser fields in a gas-phase medium can produce an ultrashort terahertz (THz) pulse \cite{Cook00}. Given the absence of emitter damage and absorption, the two-color scheme is particularly promising for intense and broadband THz radiation \cite{Koulouklidis2020, Mitrofanov2020,Jang2019}. These characteristics of the THz source have already been exploited in time-resolved terahertz spectroscopy \cite{Pashkin2011,Valverde-Chavez2015,Wang2016} and transient absorption spectroscopy \cite{Chen2016}. Bright broadband THz radiation has potential applications in broadband spectroscopy \cite{Cossel17}, atomic and molecular ultrafast imaging \cite{Zhang2018a}, advanced accelerator techniques \cite{Zhang2018b}, etc.
Although THz generation in the two-color scheme is already a widely used, well-established technique, efforts to improve it continue. In experimental setups, multiple control knobs have been tuned to increase the THz conversion efficiency and bandwidth. Firstly, appropriate parameters of the driving lasers, including the wavelength and pulse duration, can optimize THz generation. Wavelength scaling investigations \cite{Clerici2013,Nguyen2019} showed that driving lasers with longer wavelengths enhance the THz down-conversion efficiency by one to two orders of magnitude compared with THz generation driven by 800 nm pulses at the same input energy. In practical experiments, mid-infrared fields at 3.9 $\mu$m delivered by optical parametric amplifiers have been used to produce extremely strong THz fields above 100 MV/cm \cite{Koulouklidis2020, Mitrofanov2020,Jang2019}. Likewise, tuning the wavelength ratio of the two-color fields \cite{Vvedenskii2014,Kostin2016,Zhang2017} can shift the central wavelength and effectively extend the THz bandwidth \cite{Thomson2010,Babushkin2011,Balciunas2015}. If more than two colors are involved in the process, multiple-wavelength fields can also improve the THz performance \cite{Martinez2015}.
Short pulses obtained by pulse compression significantly broaden the THz bandwidth, yielding supercontinuum radiation with bandwidth $>$100 THz when driven by 7 fs pulses \cite{Thomson2010,Matsubara2012,Blank2013}.
In addition, controlling the polarization of the two-color fields is also necessary. It is well established that the relative polarization of the two-color lasers should be optimized in the THz conversion process \cite{Xie2006,Zhang2020}. Moreover, two-color co-rotating circularly polarized fields can further amplify the THz emission by a factor of 5 compared with linearly polarized fields \cite{Meng2016}.
Finally, the plasma profile also plays a role in THz generation. By manipulating the focal length \cite{Oh2013}, beam wavefront \cite{Kuk2016,Zhang2018c,Sheng2021}, and gas pressure and species \cite{Yoo2019}, the spatiotemporal dynamics of plasma formation and pulse propagation have been modified to appreciably influence the THz radiation.
In the two-color scheme, the temporal and spatial overlap of the $\omega$ and $2\omega$ beams is a basic premise of beam alignment: the two plasmas induced by the $\omega$ and $2\omega$ beams are expected to spatially merge into one big plasma (the overlapped plasma) as the best experimental condition. However, we found that when the two-color plasmas are concentrically separated along the propagation axis into cascading plasmas, the characteristics of the THz radiation are significantly improved. We therefore introduce a new control knob, the distance between the two-color foci, to considerably increase the yield and bandwidth of the THz emission. In this paper, the THz strength and spectral profile generated by linearly and circularly polarized two-color fields are measured as a function of this distance, showing that the maximum conversion efficiency exceeds $10^{-3}$ with bandwidth $>$100 THz using 800 nm, 35 fs driving lasers. A theory combining the laser pulse propagation equation with the photocurrent model indicates that the spatial redistribution of laser intensity in the cascading plasmas balances THz generation and absorption in the plasma channel, leading to the maximum THz output.
\section{Experiment}\label{sec2}
\begin{figure}[h]%
\centering
\includegraphics[width=0.9\textwidth]{Geometry.jpg}
\caption{Experimental schematics of ultra-broadband THz amplification. The $\omega$ and 2$\omega$ beams are respectively focused by two identical lenses with 10 cm focal length. In the experiment, we fix the lens position of the $\omega$ beam, and the 2$\omega$ lens is moved longitudinally along the propagation direction. The profile of the plasma channel strongly depends on the displacement $d$ of the 2$\omega$ lens, which is introduced as a new knob to amplify the THz radiation.}\label{fig1}
\end{figure}
The experiment is implemented on a Ti:sapphire femtosecond amplification system. The $\omega$ beam with center wavelength of 800 nm is partially converted to the 2$\omega$ beam by a $\beta$-barium borate (BBO) crystal. The $\omega$ and 2$\omega$ beams pass through the two arms of a Michelson interferometer, and the polarization of the two beams can be individually controlled by waveplates in the two arms. The lengths of the two arms are actively stabilized, and the relative phase delay $\tau$ of the $\omega$-2$\omega$ pulses is tunable with sub-femtosecond accuracy. The commonly used focusing geometry, in which one mirror (or lens) focuses both two-color beams, is not applied in our setup; instead, a more elaborate focusing geometry is used. We place two identical lenses with 10 cm focal length in the two arms of the interferometer to focus the $\omega$ and 2$\omega$ beams separately, and the lens position can be moved along the propagation direction to change the focusing condition and the plasma channel profile. As shown in Fig.~\ref{fig1}, the foci of the $\omega$ and 2$\omega$ beams are spatially separated to form cascading plasmas. The strength and bandwidth of the THz radiation strongly depend on the distance between the foci of the $\omega$ and 2$\omega$ beams, which is controlled by the lens displacement $d$ of the 2$\omega$ beam. The THz strength as a function of $d$ is calibrated with electro-optical sampling (EOS) and a pyroelectric detector, and the bandwidth is measured with Fourier transform spectroscopy. The setup for THz generation and detection is described in the \textit{Methods} section.
\section{Experimental Results}\label{sec3}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{IntensityTHz.pdf}
\caption{Terahertz amplification in cascading plasmas. (a) The profile of the cascading plasmas versus lens displacement $d$. The $\omega$ and 2$\omega$ beams each produce a plasma. The two plasmas are spatially overlapped when $d=0 \ \mathrm{mm}$, and concentrically separated when $d$ is varied. The plasmas produced by the 400 nm and 800 nm beams are marked. (b) The THz power versus $d$ and input laser pulse energy, $I_{\mathrm{THz}} (d, I_{\omega},I_{\mathrm{2}\omega})$, measured with the pyroelectric detector. The \textit{x} axis is the lens displacement $d$, and the \textit{y} axis is the THz power. The THz pulse is generated by the co-rotating circularly polarized two-color fields. Surprisingly, the cascading plasmas at $d=2\ \mathrm{mm}$ radiate the most intense THz pulse, significantly stronger than that at $d=0\ \mathrm{mm}$. (c) The \textit{s}-polarized THz electric field versus $d$ and input laser pulse energy, $\boldsymbol{E}_{\mathrm{THz}} (d, I_{\omega},I_{\mathrm{2}\omega})$, measured with electro-optical sampling.}
\label{fig2}
\end{figure}
Firstly, we measured the THz intensity $I_{\mathrm{THz}}$ versus the lens displacement $d$. In our setup, both the $\omega$ and 2$\omega$ beams produce plasmas, and the distance between the plasmas can be changed. The plasma profiles versus $d$ are recorded by a CCD camera, as shown in Fig.~\ref{fig2}(a). In commonly used setups, the two plasmas are spatially overlapped to form a single plasma, which we define as $d=0\ \mathrm{mm}$. The third harmonic generation \cite{He2021} versus $d$ is used to precisely calibrate the position $d=0\ \mathrm{mm}$. As $d$ is varied, the single plasma separates along the propagation direction into two cascading plasmas, and the distance between the cascading plasmas approximately equals the lens displacement $d$. The brightness of the 2$\omega$ plasma varies markedly with $d$: when the 2$\omega$ plasma approaches the $\omega$ plasma, its brightness suddenly decreases, reflecting the complex process of plasma channel formation.
After the spatiotemporal optimization of the two-color beams, the THz power versus $d$ and input laser power, $I_{\mathrm{THz}}(d, I_{\omega},I_{\mathrm{2}\omega})$, is calibrated by a commercial pyroelectric detector, as shown in Fig.~\ref{fig2}(b). Here, co-rotating circularly polarized two-color fields are used to obtain the maximum conversion efficiency; the THz emission $I_{\mathrm{THz}}(d)$ in linearly polarized two-color fields shows a similar $d$-dependent behavior. Surprisingly, the THz radiation is amplified by over one order of magnitude at $d=2\ \mathrm{mm}$, where the two plasmas are spatially well separated along the propagation direction. The maximum conversion efficiency is obtained when the plasma formed by the $\omega$ field is spatially ahead of the 2$\omega$ plasma. When the input laser power is decreased, the maximum of $I_{\mathrm{THz}}(d, I_{\omega},I_{\mathrm{2}\omega})$ moves toward $d=0\ \mathrm{mm}$. This observation conflicts with the common assumption that the spatial overlap of the two-color foci is a prerequisite for the most efficient terahertz generation.
Here, we can approximately estimate the conversion efficiency in the cascading plasmas. Before the pyroelectric detector, two silicon wafers block the scattered light, transmitting $\sim$20\% of the THz beam, and the chopper blocks half of the input laser energy. Thus, the THz pulse energy at $d=2\ \mathrm{mm}$ exceeds 1 $\mu$J, and the conversion efficiency approaches 0.002, one order of magnitude higher than the analogous configuration with an efficiency of $10^{-4}$. This estimate is also confirmed in Fig.~\ref{fig2}(b), where $I_{\mathrm{THz}}(d=2\ \mathrm{mm})$ is $\sim$10 times higher than $I_{\mathrm{THz}}(d=0\ \mathrm{mm})$.
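The efficiency arithmetic above can be reproduced in a few lines. The 0.2 $\mu$J detector reading below is an assumed illustrative value, chosen only to be consistent with the $\sim$1 $\mu$J pulse energy quoted in the text, not a reported measurement:

```python
# Conversion-efficiency estimate for the cascading-plasma geometry.
# The detector reading is an assumed illustrative value; the correction
# factors (Si-wafer transmittance, chopper duty cycle) come from the text.

detector_reading = 0.2e-6        # J, energy seen by the pyroelectric detector (assumed)
T_silicon = 0.2                  # combined transmittance of the two Si wafers
chopper_duty = 0.5               # the chopper blocks half of the input laser energy

E_thz = detector_reading / T_silicon          # THz pulse energy before the wafers
E_laser = (870e-6 + 460e-6) * chopper_duty    # effective input energy per detected pulse

eta = E_thz / E_laser
print(f"THz pulse energy ~ {E_thz*1e6:.1f} uJ, efficiency ~ {eta:.1e}")
```

With these numbers the efficiency comes out near $1.5\times10^{-3}$, i.e. "approaching 0.002" as stated in the text.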
To further confirm the amplification in the cascading plasmas, the THz electric field $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}(d, I_{\omega},I_{\mathrm{2}\omega})$ is measured with EOS, as shown in Fig.~\ref{fig2}(c), and exhibits similar behavior to $I_{\mathrm{THz}}(d, I_{\omega},I_{\mathrm{2}\omega})$. Here, only the \textit{s}-polarized $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}$ is shown in the main text; the \textit{p}-polarized $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}$ behaves similarly (\textit{Supplementary Information}). Compared with the intensity measurement using the pyroelectric detector, the EOS measurement reveals two differences: (1) $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}$ is sensitive to the phase delay $\tau$ of the $\omega$-2$\omega$ fields, whereas $I_{\mathrm{THz}}$ is not $\tau$ dependent. By scanning $\tau$, $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}$ is periodically modulated, as presented in the \textit{Supplementary Information}. (2) The maximum efficiencies appear at different $d$: the maximum $I_{\mathrm{THz}}$ appears at $d=2\ \mathrm{mm}$, whereas $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}$ is optimized at $d=1.25\ \mathrm{mm}$. The discrepancy probably originates from the different detection bandwidths of the two methods: the EOS responds only below 3 THz, while the pyroelectric detector has a linear and relatively flat response from terahertz to infrared frequencies. A possible explanation for the weak $\tau$ dependence of $I_{\mathrm{THz}}$ is that the high-frequency contributions from different spatial positions are averaged in the far field, which washes out the yield modulation versus $\tau$.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{THzfrequency.pdf}
\caption{Terahertz bandwidth versus $d$ between two-color plasmas. The spectral features of THz generation $\tilde{E}(\nu)$ in linearly polarized two-color fields are presented at $d=-2\ \mathrm{mm}$ (red), $d=-1\ \mathrm{mm}$ (blue) and $d=0\ \mathrm{mm}$ (green). The spectral intensities are represented on linear scale. When separating the two-color plasmas, $\tilde{E}(\nu)$ is significantly broadened and shifted to high frequency region. Inset: The interferograms obtained by Fourier transform spectroscopy.}
\label{fig3}
\end{figure}
The THz spectral features $\tilde{I}_{\mathrm{THz}}(\nu)$ at $I_{\omega} = 870\ \mathrm{\mu J},I_{\mathrm{2}\omega} = 460\ \mathrm{\mu J}$ are measured with Fourier transform spectroscopy, as shown in Fig.~\ref{fig3}. The home-built Fourier transform spectrometer has been calibrated with a commercial optical parametric amplifier. Owing to noise, the very low-frequency region ($< 5$ THz) is not reliable, but it can be measured with EOS. The spectral measurement shows that, when $d$ between the two-color plasmas is varied, the THz bandwidth is significantly broadened to above 100 THz. The bandwidth broadening is corroborated by the interferograms (Fig.~\ref{fig3}, inset): the temporal cycle emitted from the cascading plasmas is much shorter than that from the overlapped plasma.
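The spectral retrieval in Fourier transform spectroscopy can be sketched numerically: the interferogram is the field autocorrelation, and its Fourier transform gives the power spectrum (Wiener--Khinchin theorem). The 30 THz center frequency and 50 fs duration below are arbitrary illustrative values, not the measured ones:

```python
import numpy as np

# Fourier-transform spectroscopy sketch: recover the power spectrum from
# the field autocorrelation (the interferogram).  The 30 THz center
# frequency and 50 fs duration are illustrative, not measured values.
dt = 1e-15                                   # 1 fs sampling
t = np.arange(-2e-12, 2e-12, dt)
f0 = 30e12
E = np.exp(-(t / 50e-15) ** 2) * np.cos(2 * np.pi * f0 * t)

# interferogram ~ field autocorrelation versus mirror delay
autocorr = np.correlate(E, E, mode="full") * dt

# Wiener-Khinchin: FFT of the autocorrelation gives the power spectrum
spectrum = np.abs(np.fft.rfft(autocorr))
freqs = np.fft.rfftfreq(len(autocorr), d=dt)
f_peak = freqs[np.argmax(spectrum)]
print(f"recovered peak frequency: {f_peak/1e12:.1f} THz")
```

A shorter pulse (narrower interferogram) maps directly to a broader recovered spectrum, which is the logic behind the inset comparison in Fig.~\ref{fig3}.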
The spectral measurement demonstrates that our method not only boosts the THz strength, achieving a conversion efficiency comparable to strong terahertz generation by tilted-wavefront excitation in lithium niobate crystals, but also extends the THz spectral bandwidth up to the mid-infrared region.
\section{Theory}\label{sec4}
To understand the conversion efficiency enhancement in the cascading plasmas, the THz radiation is numerically investigated with a (2D+1) laser pulse propagation equation combined with the photocurrent model. In the simulation, the free-electron ensemble in the cascading plasmas, accelerated by the asymmetric $\omega$-2$\omega$ fields, induces a residual photocurrent, which produces the THz radiation. The absorption of the THz wave in the plasma channel is also included in the model (\textit{Methods} section).
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Theory.pdf}
\caption{Comparison between measurement and theory. (a) THz generation $I_{\mathrm{THz}}(d)$ (blue dots) and $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}(d)$ (yellow stars) when $I_{\omega} = 870\ \mathrm{\mu J},I_{\mathrm{2}\omega} = 460\ \mathrm{\mu J}$ in linearly polarized $\omega$-2$\omega$ fields, and theoretical prediction (red line). (b) (c) Simulated spatial distributions of electron density in plasma at $d=0\ \mathrm{mm}$ and $d= 2\ \mathrm{mm}$. In panel (c) at $d= 2\ \mathrm{mm}$, the $\omega$ and 2$\omega$ foci are spatially separated into cascading plasmas. The left plasma is formed by the $\omega$ beam, and the right plasma is formed by the 2$\omega$ beam. The simulation well reproduces the fluorescence images of plasmas in Fig.~\ref{fig2} (a), which reflect electron density distributions in plasmas. (d) (e) The theoretical results of spatial distributions of THz generation. (f) (g) THz generation minus absorption (net THz emission). The dense plasma obviously attenuates THz emission. All the values in contour maps are logarithmically scaled and normalized to the maximum.}
\label{fig4}
\end{figure}
The THz radiation $I_{\mathrm{THz}}(d)$ and $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}(d)$ at $I_{\omega} = 870\ \mathrm{\mu J},I_{\mathrm{2}\omega} = 460\ \mathrm{\mu J}$ in linearly polarized $\omega$-2$\omega$ fields are compared with the theoretical simulation, as shown in Fig.~\ref{fig4}(a). In the simulation, we estimate that the waists of the $\omega$ and 2$\omega$ beams at the foci are $20\ \mathrm{\mu m}$ and $10\ \mathrm{\mu m}$, and that the peak intensities of the $\omega$ and 2$\omega$ fields are $100\ \mathrm{TW/cm^2}$ and $150\ \mathrm{TW/cm^2}$.
Fig.~\ref{fig4}(a) shows the simulated THz strength versus $d$ at 1 THz, which approximately agrees with the measured $d$-dependent behaviors of $I_{\mathrm{THz}}(d)$ and $\ensuremath{\boldsymbol{E}}_{\mathrm{THz}}(d)$.
We investigate the spatial distributions of electron density, THz generation, and absorption, and give a preliminary explanation for the THz enhancement in the cascading plasmas. As shown in Fig.~\ref{fig4}(b) and (c), when the foci of the $\omega$ and 2$\omega$ beams spatially overlap at $d=0\ \mathrm{mm}$, the two-color fields produce a very dense plasma, confined to a small volume with high electron density and strong gradients.
In the center of the plasma volume, the electron density is estimated as $n_e \sim 10^{18}\ /\mathrm{cm}^3$.
In contrast, when the two-color foci are separated, the plasma is stretched into cascading plasmas with greater length and lower electron density. The electron density in the cascading plasmas is $n_e \sim 10^{16}\ /\mathrm{cm}^3$, distributed more homogeneously in space than in the overlapped plasma.
The THz generation is highly dependent on the electron density. Although the spatial density of THz emission in the cascading plasmas (Fig.~\ref{fig4}(e)) is smaller than that in the overlapped plasma (Fig.~\ref{fig4}(d)) due to the lower electron density, this is compensated after integration over the full plasma volume. Therefore, the overlapped plasma and the cascading plasmas have a similar total THz emission. This is further verified by the residual photocurrents sampled at $z = 1.5,\ 2.5,\ 3.5\ \mathrm{mm}$ (\textit{Supplementary Information}), which indicate the spatial THz generation in the plasma.
The net THz emission from the plasma depends on both THz generation and absorption in the plasma. Here, we define the absorption length $L_a$ to describe how far the THz wave can propagate in the plasma filament. In the center of the overlapped plasma, the electron density $n_e \sim 10^{18} /\mathrm{cm}^3$ corresponds to an absorption length $L_a \sim 1\ \mathrm{\mu m}$, which is far less than the plasma length. Hence, the THz wave is mostly depleted within the plasma volume, as shown in Fig.~\ref{fig4}(f). Comparatively, the absorption length in the cascading plasmas is estimated as $L_a \sim 6.8\ \mathrm{mm}$ from the average electron density of the plasma volume, indicating that the THz absorption in the cascading plasmas is much weaker than in the overlapped plasma (Fig.~\ref{fig4}(g)). Thus, by manipulating the spatial redistribution of the input laser energy, the THz generation and absorption are self-optimized in the cascading plasmas, finally achieving a one-order-of-magnitude enhancement of the conversion efficiency.
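The density scaling of the absorption length can be illustrated with a simple Drude-model estimate. The $10^{12}\,\mathrm{s^{-1}}$ collision rate below is an assumed value, so the absolute lengths are indicative only; the point is the strong trend with electron density:

```python
import numpy as np

# Drude estimate of the THz field absorption length versus electron density.
# The collision rate nu_c = 1e12 /s is an assumed value; absolute numbers
# are indicative only, the density trend is the point.
e, me, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

def absorption_length(n_e_cm3, f_thz=1e12, nu_c=1e12):
    n_e = n_e_cm3 * 1e6                       # /cm^3 -> /m^3
    wp2 = n_e * e**2 / (eps0 * me)            # plasma frequency squared
    w = 2 * np.pi * f_thz
    eps = 1 - wp2 / (w * (w + 1j * nu_c))     # Drude permittivity
    k = w / c * np.sqrt(eps)                  # complex wavenumber
    return 1.0 / k.imag                       # field 1/e length (m)

L_overlap = absorption_length(1e18)   # dense overlapped plasma
L_cascade = absorption_length(1e16)   # dilute cascading plasmas
print(f"L_a(1e18/cm3) ~ {L_overlap*1e6:.1f} um, L_a(1e16/cm3) ~ {L_cascade*1e3:.2f} mm")
```

Even with this crude model, lowering the density by two orders of magnitude lengthens $L_a$ from the micrometer to the sub-millimeter scale, consistent with the qualitative argument in the text.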
The THz spectral broadening can hardly be explained by plasma absorption. We tentatively attribute it to the spatiotemporal reshaping of the driving fields as the ultrashort pulse propagates in the cascading plasmas. The dispersion and highly nonlinear effects in the plasma filament lead to temporal distortion and compression of the two-color fields. This pulse reshaping induces a rapidly time-varying photocurrent, producing the high-frequency THz components. Since the nonlinear interaction length in the cascading plasmas is longer than the length of the overlapped plasma, the spectral broadening in the cascading plasmas is more pronounced. A more convincing explanation requires further theoretical investigation, which may involve the highly complex and rich spatiotemporal dynamics of pulse propagation in the plasma filament.
\section{Conclusion}\label{sec5}
We introduce a new control knob, the distance between the two-color cascading plasmas, to promote THz radiation in the two-color scheme. With the distance appropriately optimized, the THz conversion efficiency reaches $\sim10^{-3}$ and the bandwidth is broadened to $>$100 THz with 800 nm, 35 fs laser pulses. The conversion efficiency is one order of magnitude higher, and the bandwidth more than 2 times broader, than in the counterpart configuration with an overlapped plasma. The proposed geometry achieves considerable brightness and supercontinuum bandwidth with a fairly simple setup, avoiding sophisticated optical parametric amplifiers and pulse compression techniques.
The proposed method may also be applicable to long-wavelength-driven THz generation for further enhancement of the conversion efficiency, which may break the current records for the strength and bandwidth of THz ultrashort pulses. The ultra-broadband THz radiation is potentially applicable to studying the structure and ultrafast dynamics of complex systems, while its strength can be exploited in nonlinear optics, strong-field physics, and accelerator techniques. The propagation equation combined with the photocurrent model indicates that the THz amplification originates from the interplay between THz generation and plasma absorption in the cascading plasmas. The detailed mechanism of THz radiation in the cascading plasma channel remains an open question calling for further theoretical exploration.
\section{Methods}
\subsection{Experimental setup}
The experimental setup is shown in the \textit{Supplementary Information}. A Ti:sapphire laser delivers \textit{p}-polarized femtosecond pulses of 35 fs, $\sim$1.8 mJ/pulse, centered at 800 nm with 60 nm bandwidth.
The $\omega$ beam passes through a 200 $\mu$m type-I {$\beta$}-barium borate (BBO) crystal, generating an \textit{s}-polarized 2$\omega$ beam (conversion efficiency $\sim$30\%). The co-propagating $\omega$-2$\omega$ beams are separated by a dichroic mirror (DM-2) into the two arms of a Michelson interferometer. The polarization of the $\omega$-2$\omega$ beams can be arbitrarily controlled by quarter-waveplates.
The phase jitter between the $\omega$-2$\omega$ beams is suppressed by an actively stabilized Michelson interferometer. To stabilize the relative phase of the two arms, a continuous green laser (532 nm) co-propagates with the $\omega$-2$\omega$ beams and interferes. The interference fringes are monitored by a CCD camera as a feedback signal, and a mirror mounted on a piezo actuator provides real-time feedback to keep the fringes stable.
After stabilization, the relative phase fluctuation in the system is smaller than $0.02\pi$ during data acquisition. Owing to the difference in the refractive indices of air at the $\omega$ and 2$\omega$ wavelengths, the phase delay $\tau$ can be tuned with sub-femtosecond accuracy by changing the distance between the BBO and the air plasma.
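The sub-femtosecond tuning via air dispersion can be estimated with the Peck--Reeder dispersion formula for standard dry air, which is an assumed model for ambient lab air (exact numbers shift with pressure, temperature, and humidity):

```python
import numpy as np

# Relative omega-2omega delay accumulated per unit propagation distance in
# air, using the Peck-Reeder dispersion formula for standard dry air
# (an assumed model for ambient lab air).
c = 2.998e8

def n_air(lambda_um):
    # Peck-Reeder two-term Sellmeier form for standard air
    s2 = (1.0 / lambda_um) ** 2
    return 1 + (5791817.0 / (238.0185 - s2) + 167909.0 / (57.362 - s2)) * 1e-8

dn = n_air(0.4) - n_air(0.8)          # phase-index difference between 2w and w
dtau_per_mm = dn * 1e-3 / c           # delay slip per mm of air
L_2pi = 0.8e-6 / (2 * dn)             # distance over which the relative phase slips by 2*pi

print(f"dn = {dn:.2e}, slip = {dtau_per_mm*1e18:.1f} as/mm, 2pi period = {L_2pi*100:.1f} cm")
```

With a delay slip of only a few tens of attoseconds per millimeter, centimeter-scale translation of the BBO-to-plasma distance indeed gives the sub-femtosecond control of $\tau$ described above.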
The $\omega$-2$\omega$ beams are respectively focused in the two arms of the interferometer by lenses L-1 and L-2 with a focal length of 10 cm. The L-2 lens is fixed, while the L-1 lens is installed on a translation stage so that its position $d$ can be moved along the propagation direction. The $\omega$-2$\omega$ beams are combined at DM-3 and focused into atmospheric air to produce the plasma. The plasma profile and the THz conversion efficiency depend strongly on $d$, and the plasma profile can be recorded by a CCD camera.
The THz strength is measured by two methods: electro-optical sampling (EOS) and a pyroelectric detector. In the EOS, an 800 $\mu$m thick silicon wafer blocks the $\omega$ and 2$\omega$ beams.
The THz beam is steered by two parabolic mirrors and focused onto a 1 mm thick ZnTe crystal. A metal wire-grid polarizer selects the THz polarization, and the ZnTe crystal is fixed at a special orientation that has the same response for the \textit{s}- and \textit{p}-polarized components. The EOS has a detection bandwidth of $< 3$ THz and is polarization resolved. The THz intensity is also measured with a pyroelectric detector (THZ9B-BL-DZ, GENTEC-EO) at a 25 Hz chopper frequency; here, a 400 $\mu$m thick silicon wafer is glued on the detector to block scattered light. The THz bandwidth is obtained with a home-built Fourier transform spectrometer, in which the pyroelectric detector serves as the detector. The wavelength calibration of the spectrometer in the mid-infrared region is performed with an optical parametric amplifier.
\subsection{Theoretical Model}
The linearly polarized laser pulse propagation model, including the absorption loss due to ionization, the nonlinear Kerr effect, and plasma defocusing, is described by the three-dimensional (2D+1) Maxwell wave equation \cite{Geissler1999}
\begin{equation}
\nabla ^2 E(r,z,t)-\frac{1}{c^2}\frac{\partial ^2 E(r,z,t)}{\partial t^2}=\mu_0 \frac{\partial J(r,z,t)}{\partial t}+\frac{\omega^2_0}{c^2}(1-\eta ^2_{\mathrm{eff}}) E(r,z,t),
\end{equation}
where $E(r,z,t)$ denotes the transverse electric field at radial position $r$ and axial position $z$, and $\mu_0$, $\omega_0$ and $c$ are the vacuum permeability, the central frequency of the electric field, and the speed of light in vacuum. In the source terms, the absorption loss due to ionization is given by \cite{Gaarde08} $J(r,z,t)=\frac{W(t) n_{e}(t) I_p E(r,z,t)}{\vert E(r,z,t) \vert ^{2}}$, where $I_p$, $W(t)$ and $n_e(t)$ are the ionization potential, the ionization rate, and the free-electron density. The effective refractive index is written as $\eta_{\mathrm{eff}}=\eta_0 +\eta_2 I(r,z,t) - \omega^2_p /2\omega^2_0 $, where the refraction and absorption of the neutral gas give $\eta_0 \sim 1$, the nonlinear Kerr index in atmospheric air is $\eta_2=3.2 \times 10^{-19} \mathrm{cm^2/W}$, the plasma defocusing term is $\omega^2_p / 2\omega^2_0 =n_e /2 n_c $, and $n_c[\mathrm{/cm^3}]\sim 1.1 \times 10^{21}/\lambda^2 [\mathrm{\mu m}]$ is the critical density for laser wavelength $ \lambda $. The electron density of the photoionization-induced plasma is calculated with the empirical Ammosov-Delone-Krainov (ADK) formula \cite{Tong05}
\begin{equation}
\frac{d n_e(t)}{dt}=W(t)n_a(t),
\label{eq:IonizationRate}
\end{equation}
where $W(t)$ is the instantaneous tunnel ionization rate. When single ionization dominates,
the time-dependent neutral density is written as $n_a(t)=n_0-n_e(t)$, where $n_0$ is the initial neutral gas density. As the major constituent of air, nitrogen is used to estimate the ionization rate of air.
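With neutral depletion, Eq.~(\ref{eq:IonizationRate}) integrates to $n_e(t)=n_0\,[1-\exp(-\int^t W\,dt')]$. The sketch below uses a simplified quasi-static tunneling exponent in place of the full ADK rate, with illustrative field parameters (not the experimental ones):

```python
import numpy as np

# Sketch of the rate equation dn_e/dt = W(t) (n0 - n_e), using a simplified
# quasi-static tunneling rate instead of the full ADK formula.
# Atomic units throughout; the field amplitude is illustrative.
w = 0.057                                   # 800 nm carrier frequency in a.u.
t = np.linspace(-600.0, 600.0, 60000)
dt = t[1] - t[0]
env = np.exp(-(t / 300.0) ** 2)
E = 0.06 * env * np.cos(w * t)              # illustrative one-color field

absE = np.maximum(np.abs(E), 1e-8)
W = np.exp(-2.0 / (3.0 * absE))             # simplified tunneling rate (Ip = 0.5)

# closed-form solution with neutral depletion (n0 = 1)
ne = 1.0 - np.exp(-np.cumsum(W) * dt)
print(f"final ionization fraction: {ne[-1]:.2e}")
```

The exponential sensitivity of $W$ to $|E|$ means ionization occurs in sub-cycle bursts at the field crests, which is what makes the residual photocurrent of the next subsection phase sensitive.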
The transient photocurrent model \cite{Kre06} is employed to calculate THz generation in the two-color field $E=E_\omega (r,z,t)+E_{2\omega}(r,z,t)$. The THz generation at a given position is estimated as $E_{\mathrm{THz}} \propto \frac{dJ(t)}{dt}$. The transient current formed by the acceleration of free electrons in the two-color field can be expressed as
\begin{equation}
J(t)=-e\int _ 0 ^ t \upsilon (t^{ \prime}, t) dn_e(t^{ \prime}).
\label{eq:NetCurrent}
\end{equation}
According to Eq. \ref{eq:IonizationRate}, the increment of the electron density $dn_e$ depends on the ionization rate. The transverse velocity that an electron acquires from $t^{\prime}$ to $t$ is given by
\begin{equation}
\upsilon (t^{\prime} ,t) = -\frac{e}{m}\int_{t^{\prime}}^t E(\xi)d\xi,
\label{eq:NetVelocity}
\end{equation}
where free electrons are assumed to be born with zero initial velocity, $\upsilon (t^{\prime},t^{\prime}) = 0$. The transverse velocity depends on the laser waveform between $t^{\prime}$ and $t$.
According to Eq. \ref{eq:NetCurrent}, the residual current at the end of the laser pulse can be expressed as $J(t)=-e\int _0 ^ \infty \upsilon_d (t^{ \prime}) dn_e(t^{ \prime})$. Considering Eq. \ref{eq:NetVelocity}, the transverse velocity at the end of the laser pulse, i.e., the drift velocity, is written as $\upsilon_d (t^{\prime}) = -\frac{e}{m}\int_{t^{\prime}}^\infty E(\xi)d\xi$. Splitting the integral, the drift velocity can be written as $\upsilon_d (t^{\prime}) = -\frac{e}{m}(\int_0 ^\infty E(\xi)d\xi-\int_0 ^{t^{\prime}} E(\xi)d\xi)=-\frac{e}{m}(A(\infty)-A(t^{\prime}))$, where $A(t)=\int_0 ^{t} E(\xi)d\xi$. The two vector potentials $A(\infty)$ and $A(t^{\prime})$ are determined by the asymmetry of the two-color field.
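The residual-current picture above is simple to discretize. In the sketch below (with $e=m=1$ and an illustrative two-color waveform; the flat ionization increments stand in for the ADK-weighted ones), $A(t)=\int_0^t E\,d\xi$ is accumulated with the trapezoidal rule, $\upsilon_d(t^{\prime})=-(A(\infty)-A(t^{\prime}))$, and $J=-\sum_{t^{\prime}}\upsilon_d(t^{\prime})\,dn_e(t^{\prime})$.

```python
import math

def vector_potential_like(E, t):
    """A(t) = int_0^t E(xi) dxi, accumulated with the trapezoidal rule."""
    A = [0.0]
    for i in range(1, len(t)):
        A.append(A[-1] + 0.5 * (E[i] + E[i - 1]) * (t[i] - t[i - 1]))
    return A

def residual_current(E, t, dne):
    """J = -e * sum_{t'} v_d(t') dn_e(t'), with
    v_d(t') = -(e/m)(A(inf) - A(t')) and e = m = 1."""
    A = vector_potential_like(E, t)
    return -sum(-(A[-1] - Ai) * dn for Ai, dn in zip(A, dne))

# Illustrative two-color field with relative phase pi/2 (arbitrary units):
w = 1.0
t = [0.01 * i for i in range(1500)]
E = [math.cos(w * ti) + 0.3 * math.cos(2 * w * ti + math.pi / 2)
     for ti in t]

# An electron born at the very end of the pulse acquires no drift
# velocity, so ionization concentrated there yields no residual current:
late_burst = [0.0] * (len(t) - 1) + [1.0]
J_late = residual_current(E, t, late_burst)   # exactly 0
```

Conversely, an electron born at $t^{\prime}=0$ drifts with velocity $-A(\infty)$, consistent with the decomposition above.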
When the THz wave propagates in a dense plasma, the emission at frequency $\Omega$ depends on the absorption length of the plasma medium. Because the coherence length between $\omega$ and $2\omega$ is far longer than the absorption length in a gaseous plasma, the absorption effect dominates \cite{Constant1999}.
The THz intensity, neglecting the dependence on the phase shift, is calculated by
\begin{equation}
I_{\mathrm{THz}}=\vert \iint E_{\mathrm{THz}}(\Omega)\exp( -\frac{L-z}{L_a} ) drdz \vert ^2,
\end{equation}
where $L$ is the plasma length. The absorption length depends on the electron–ion collisional frequency $\nu$ and the plasma frequency $\omega_p$, and is written as $L_a\sim 2c(\Omega^2+\nu^2)/(\omega^2_p \nu)$. The electron–ion collisional frequency is estimated as $\nu \sim 2.9 \times 10^{-6}Z^2N_i [\mathrm{/cm^3}] \ln{\Lambda _{ei}} (T_{\mathrm{eff}}[\mathrm{eV}])^{-3/2}$,
where $N_i$ is the ion density, $\ln {\Lambda _{ei}}$ is the Coulomb logarithm, and the effective temperature is $T_{\mathrm{eff}}=\kappa_{B}T_{e}+2U_p/3$. The ponderomotive potential is written as $U_p=\frac{1}{4}\frac{e^2 E^2_0}{m \omega^2_0} \sim 9.33 \times 10^{-14} I_0 [\mathrm{W/cm^2}] \lambda^2_0 [\mathrm{\mu m}]$ (in eV). In the simulation, local fluctuations of the electron temperature are neglected, and the average temperature $T_e$ corresponds to an average electron–ion collisional frequency $\tilde{\nu} \sim 3 \times 10^{12}\ \mathrm{Hz}$ in the gaseous plasma at a laser intensity of $\sim 100\ \mathrm{TW/cm^2}$. In the inverse bremsstrahlung heating regime, when the heating time scale $\tau^{\ast}=(1/\tilde{\nu})^{3/10} \sim 90\ \mathrm{fs}$ exceeds the pulse duration $\tau \sim 35\ \mathrm{fs}$, the electron temperature in the short-pulse regime \cite{Durfee1995} is estimated as $\kappa_{B}T_{e} \sim 2U_p \tau/(5\tau^{\ast}) \sim 1.7\ \mathrm{eV}$.
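The closing numerical estimates can be reproduced directly from the quoted formulas. In the sketch below, the intensity is the $\sim$100 TW/cm$^2$ value from the text, while the 800 nm wavelength is an assumed illustrative choice; the result therefore lands at the eV scale rather than exactly at the 1.7 eV quoted, which depends on the precise intensity and wavelength.

```python
def ponderomotive_eV(I0_W_cm2, lam_um):
    """U_p ~ 9.33e-14 * I0 [W/cm^2] * lambda0^2 [um^2], in eV."""
    return 9.33e-14 * I0_W_cm2 * lam_um ** 2

def short_pulse_kTe_eV(U_p_eV, tau_fs, tau_star_fs):
    """k_B T_e ~ 2 U_p tau / (5 tau*), short-pulse IB-heating estimate."""
    return 2.0 * U_p_eV * tau_fs / (5.0 * tau_star_fs)

U_p = ponderomotive_eV(1e14, 0.8)             # ~6 eV at 100 TW/cm^2, 800 nm
kTe = short_pulse_kTe_eV(U_p, 35.0, 90.0)     # ~0.9 eV
```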
\bmhead{Acknowledgments}
The work is supported by National Natural Science Foundation of China (NSFC) (12174284, 11827806, 11874368, 11864037, 91850209). We also acknowledge the support from Shanghai-XFEL beamline project (SBP) and Shanghai High repetition rate XFEL and Extreme light facility (SHINE).
| {'timestamp': '2022-02-01T02:26:29', 'yymm': '2201', 'arxiv_id': '2201.12801', 'language': 'en', 'url': 'https://arxiv.org/abs/2201.12801'} |
\section{Introduction}
Certain calculational techniques were utilized in an earlier paper [1] for
working out the twistor equation for contravariant one-index fields in
curved spacetimes. The main aim associated to the completion of the relevant
procedures was to derive one of the simplest sets of wave equations for
conformally invariant spinor fields that should presumably take place in the
frameworks of the Infeld-van der Waerden $\gamma \varepsilon $-formalisms
[2-4]. A striking feature of these wave equations is that they involve no
couplings between the twistor fields and wave functions for gravitons [5-7].
In actuality, the only coupling configurations brought about by the
techniques allowed for thereabout take up appropriate outer products
carrying the fields themselves along with some electromagnetic wave
functions for the $\gamma $-formalism [4, 5]. Loosely speaking, the
non-occurrence of $\varepsilon $-formalism couplings stems even in the case
of charged fields from the applicability of a peculiar property of partially
contracted second-order covariant derivatives of spin-tensor densities which
carry only one type of indices as well as suitable geometric attributes
[8-10]. Indeed, the electromagnetic curvature contributions that normally
enter such derivative expansions really cancel out whenever the
non-vanishing entries of the valences of the differentiated densities are
adequately related to the respective weights and antiweights [4].
The present paper just brings forward the result that the above-mentioned
wave equations possess the same form as the ones for the corresponding
lower-index fields. It shall become clear that the legitimacy of this result
rests upon the fact that the spinor translation of the classical conformal
Killing equation leads to twistor equations which must be formally the same.
Consequently, the conventional covariant devices for keeping track of
valences of spinor differential configurations in the $\gamma $-formalism
[4, 6], turn out not to be useful as regards the attainment of the full
specification of the formal patterns for the field and wave equations being
considered. We mention, in passing, that such devices had originally been
built up in connection with the derivation of a system of sourceless
gravitational and electromagnetic wave equations [5], with the pertinent
construction having crucially been based upon the implementation of the
traditional eigenvalue equations for the $\gamma $-formalism metric spinors
[2, 3]. It may be said that the motivations for elaborating upon the
situation entertained herein rely on our interest in completing the work of
Ref. [1], thereby making up appropriately the set of $\gamma \varepsilon
-wave equations which emerge from the curved-space version of twistor
equations for one-index fields.
The paper has been outlined as follows. In Section 2, we exhibit the twistor
field equations which are of immediate relevance to us at this stage. We
look at the twistor wave equations in Section 3, but the key remarks
concerning the lack of differential correlations between them shall be made
in Section 4. It will be convenient to employ the world-spin index notation
adhered to in Ref. [11]. In particular, the action on an index block of the
symmetry operator will be indicated by surrounding the indices singled out
with round brackets. Without any risk of confusion, we will utilize a
torsion-free operator $\nabla _{a}$ upon taking account of covariant
derivatives in each formalism. Likewise, the D'Alembertian operator for
either $\nabla _{a}$ will be written as $\square $. A horizontal bar will be
used once in Section 4 to denote the ordinary operation of complex
conjugation. Einstein's equations should thus be taken as
\[
2\Xi _{ab}=\kappa (T_{ab}-\frac{1}{4}Tg_{ab}),\text{ }T\doteqdot
T^{ab}g_{ab},
\]
where $T_{ab}$ amounts to the world version of the energy-momentum tensor of
some sources, $g_{ab}$ denotes a covariant spacetime metric tensor and
$\kappa $ stands for the Einstein gravitational constant. By definition, the
quantity $(-2)\Xi _{ab}$ is identified with the trace-free part of the Ricci
tensor $R_{ab}$ for the Christoffel connexion of $g_{ab}$. The cosmological
constant $\lambda $ will be allowed for implicitly through the well-known
trace relation
\[
R=4\lambda +\kappa T,\text{ }R\doteqdot R^{ab}g_{ab}.
\]
Our choice of sign convention for $R_{ab}$ coincides with the one made in
Ref. [11], namely
\[
R_{ab}\doteqdot R_{ahb}{}^{h},
\]
with $R_{abc}{}^{d}$ being the corresponding Riemann tensor. We will
henceforth assume that the local world-metric signature is $(+---)$. The
calculational techniques referred to before shall be taken for granted at
the outset.
\section{Twistor equations}
The differential patterns borne by the original formulation of twistor
equations in a curved spacetime [12-14] may be thought of as arising in
either formalism from
\begin{equation}
\nabla ^{(AA^{\prime }}K^{BB^{\prime })}=\frac{1}{4}(\nabla _{CC^{\prime
}}K^{CC^{\prime }})M^{AB}M^{A^{\prime }B^{\prime }}, \label{e1}
\end{equation}
and\footnote{The symmetry operation involved in Eqs. (1) and (2) must be applied to the
index pairs.}
\begin{equation}
\nabla _{(AA^{\prime }}K_{BB^{\prime })}=\frac{1}{4}(\nabla ^{CC^{\prime
}}K_{CC^{\prime }})M_{AB}M_{A^{\prime }B^{\prime }}, \label{e2}
\end{equation}
where the $K$-objects amount to nothing else but the Hermitian spinor
versions of a null conformal Killing vector, and the kernel letter $M$
accordingly stands for either $\gamma $ or $\varepsilon $.
It should be emphatically observed that the genuineness of (1) and (2) as a
system of equivalent field equations lies behind a general
covariant-constancy property of the Hermitian connecting objects for both
formalisms [2, 3]. Thus, these equations can be obtained from one another on
the basis of the metric-compatibility requirement
\begin{equation}
\nabla _{a}(M^{AB}M^{A^{\prime }B^{\prime }})=0\Leftrightarrow \nabla
_{a}(M_{AB}M_{A^{\prime }B^{\prime }})=0. \label{e3}
\end{equation}
Hence, by putting into effect the elementary outer-product prescription
\begin{equation}
K^{AA^{\prime }}=\xi ^{A}\xi ^{A^{\prime }}, \label{e4}
\end{equation}
along with its lower-index version, after accounting for some manipulations,
we get the statement
\begin{equation}
\nabla ^{A^{\prime }(A}\xi ^{B)}=0,\text{ }\nabla _{A^{\prime }(A}\xi
_{B)}=0, \label{e5}
\end{equation}
which, when combined together with their complex conjugates, bring out the
typical form of twistor equations. We stress that solutions to twistor
equations are generally subject to strong consistency conditions (see, for
instance, Ref. [1]).
Either $\xi $-field of (5) bears conformal invariance [13, 14], regardless
of whether the underlying spacetime background is conformally flat. In
the $\gamma $-formalism, the entries of the pair $(\xi ^{A},\xi _{A})$, and
their complex conjugates, come into play as spin vectors under the action of
the Weyl gauge group of general relativity [15], whereas their $\varepsilon
$-formalism counterparts appear as spin-vector densities of weights
$(+1/2,-1/2)$ and antiweights $(+1/2,-1/2)$, respectively.
\section{Wave equations}
In the $\gamma $-formalism, $\xi ^{A}$ shows up [1] as a solution to the
wave equation
\begin{equation}
(\square -\frac{R}{12})\xi ^{A}=\frac{2i}{3}\phi ^{A}{}_{B}\xi ^{B},
\label{e6}
\end{equation}
with $\phi ^{A}{}_{B}$ denoting a wave function for Infeld-van der Waerden
photons [16-18]. In order to derive in a manifestly transparent manner the
$\gamma $-formalism wave equation for the lower-index field $\xi _{A}$, we
initially recast the second of the statements (5) into
\begin{equation}
2\nabla _{A^{\prime }A}\xi _{B}=\gamma _{AB}\gamma ^{LM}\nabla _{A^{\prime
}L}\xi _{M}, \label{e7}
\end{equation}
and then operate on (7) with $\nabla _{C}^{A^{\prime }}$.\ It follows that,
calling upon the splitting [5]
\begin{equation}
\nabla _{C}^{A^{\prime }}\nabla _{AA^{\prime }}=\frac{1}{2}\gamma
_{AC}\square -\Delta _{AC}, \label{e8}
\end{equation}
together with the definition
\begin{equation}
\Delta _{AC}\doteqdot -\nabla _{(A}^{A^{\prime }}\nabla _{C)A^{\prime }},
\label{e9}
\end{equation}
and the property [4]
\begin{equation}
\nabla _{a}(\gamma _{AB}\gamma ^{LM})=0, \label{e10}
\end{equation}
we arrive at
\begin{equation}
\square \xi _{A}-\frac{2}{3}\Delta _{A}{}^{B}\xi _{B}=0. \label{e11}
\end{equation}
The explicit calculation of the $\Delta $-derivative of (11) gives
\begin{equation}
\Delta _{A}{}^{B}\xi _{B}=\frac{R}{8}\xi _{A}+i\phi _{A}{}^{B}\xi _{B},
\label{e12}
\end{equation}
whence, fitting pieces together suitably, yields
\begin{equation}
(\square -\frac{R}{12})\xi _{A}=\frac{2i}{3}\phi _{A}{}^{B}\xi _{B}.
\label{e13}
\end{equation}
It should be evident that the equality (11) remains formally valid in the
$\varepsilon $-formalism as well. Therefore, since the $\varepsilon
$-formalism field $\xi _{A}$ is a covariant one-index spin-vector density of
weight $-1/2$, the $\varepsilon $-counterpart of the derivative (12) has to
be expressed as the purely gravitational contribution\footnote{For a similar
reason, the $\varepsilon $-formalism version of (6) reads
$(\square -\frac{R}{12})\xi ^{A}=0$. It will become manifest later in Section
4 that the relation (14) is compatible with this assertion.}
\begin{equation}
\Delta _{A}{}^{B}\xi _{B}=\frac{R}{8}\xi _{A}. \label{e14}
\end{equation}
Hence, the $\varepsilon $-formalism statement corresponding to (13) must be
spelt out as
\begin{equation}
(\square -\frac{R}{12})\xi _{A}=0. \label{e15}
\end{equation}
\section{Concluding remarks and outlook}
The formulae shown in Section 3 supply the entire set of wave equations for
one-index conformal Killing spinors that should be tied in with the context
of the $\gamma \varepsilon $-frameworks. It is worth pointing out that the
common overall sign on the right-hand sides of (6) and (13) is due to the
$\gamma $-formalism metric relationship between the differential
configuration (12) and
\[
\Delta ^{A}{}_{B}\xi ^{B}=-\hspace{0.02cm}(\frac{R}{8}\xi ^{A}+i\phi
^{A}{}_{B}\xi ^{B}),
\]
with the aforesaid relationship actually coming about when we invoke the
well-known derivatives [4]
\[
\Delta _{AB}\gamma _{CD}=2i\phi _{AB}\gamma _{CD},\text{ }\Delta _{AB}\gamma
^{CD}=-2i\phi _{AB}\gamma ^{CD}.
\]
What happens with regard to it is, in effect, that the pieces of those
contracted $\Delta \xi $-derivatives somehow compensate for each other while
producing the formal commonness feature of the apposite couplings through
\[
\Delta _{A}{}^{B}\xi _{B}+\Delta {}_{AB}\xi ^{B}=2i\phi _{A}{}^{B}\xi _{B}.
\]
At first sight, one might think that a set of differential correlations
between the $\gamma $-formalism wave equations for $\xi ^{A}$ and $\xi _{A}$
could at once arise out of utilizing the devices [4, 5]
\[
\square \xi ^{A}=\gamma ^{AB}\square \xi _{B}+(\square \gamma ^{AB})\xi
_{B}+2(\nabla ^{h}\gamma ^{AB})\nabla _{h}\xi _{B},
\]
and
\[
\square \xi _{A}=\gamma _{BA}\square \xi ^{B}+(\square \gamma _{BA})\xi
^{B}+2(\nabla ^{h}\gamma _{BA})\nabla _{h}\xi ^{B},
\]
in conjunction with the eigenvalue equations [2-4]
\[
\nabla _{a}\gamma _{AB}=i\beta _{a}\gamma _{AB},\text{ }\nabla _{a}\gamma
^{AB}=(-i\beta _{a})\gamma ^{AB},
\]
and
\[
\square \gamma ^{AB}=-\hspace{0.02cm}\Theta \gamma ^{AB},\text{ }\square
\gamma _{AB}=-\hspace{0.02cm}\overline{\Theta }\gamma _{AB},
\]
where
\[
\Theta \doteqdot \beta ^{h}\beta _{h}+i\nabla _{h}\beta ^{h},
\]
and $\beta _{a}$ is a gauge-invariant real world vector. If any such
raising-lowering device were implemented in a straightforward way, then a
considerable amount of ``strange'' information would thereupon be brought into
the picture whilst some of the contributions involved in the intermediate
steps of the calculations that give rise to the characteristic statements
\[
\nabla ^{(A(A^{\prime }}K^{B^{\prime })B)}=0,\text{ }\nabla _{(A(A^{\prime
}}K_{B^{\prime })B)}=0,
\]
would eventually be ruled out. We can conclude that any attempt at making
use of a metric prescription to recover either of (6) and (13) from the
other, would visibly carry a serious inadequacy in that the twistor
equations (5) could not be brought forth simultaneously. It is obvious that
the property we have deduced ultimately reflects the absence of index
contractions from twistor equations.
It would be worthwhile to derive the $\gamma \varepsilon $-wave equations
for twistor fields of arbitrary valences. This issue will probably be
considered further elsewhere.
\textbf{ACKNOWLEDGEMENT:}
One of us (KW) should like to acknowledge the Brazilian agency CAPES for
financial support.
| {'timestamp': '2013-11-26T02:04:49', 'yymm': '1311', 'arxiv_id': '1311.5983', 'language': 'en', 'url': 'https://arxiv.org/abs/1311.5983'} |
\section{Introduction} \label{sec:intro}
Exploring the build-up of galaxies in the $z>6.5$ universe just a few hundred million years after the Big Bang is a key frontier in extragalactic astronomy. Though hundreds of UV-bright galaxy candidates at redshifts $z>6.5$ \citep[e.g.,][]{mclure2013,Bouwens_2015,Finkelstein_2015,Ishigaki_2018,Bouwens_2021,Harikane2021} are known, characterization of their physical properties has been difficult. Deriving these properties from optical and near-IR photometry is complicated by uncertainties in the redshifts \citep[e.g.,][]{Bouwens_LP,Robertson_2021}, dust extinction \citep[e.g.,][]{Fudamoto2020arXiv200410760F,Schouws_2021,Bowler_2021}, and rest-frame optical nebular emission line properties \citep[e.g.,][]{Smit_2015,Endsley_2021,stefanon_2021a} of $z>6$ galaxies.
Fortunately, with ALMA, it is possible to make great progress on the characterization of galaxies at $z>6$ \citep[e.g.,][]{Hodge2020arXiv200400934H,Robertson_2021}. The [CII]158$\mu$m line is especially interesting as it is the strongest cooling line of warm gas ($T < 10^4$ K) in galaxies. This line has already been detected in a significant number of galaxies out to $z\sim 7$-8 \citep[e.g., ][]{Walter_2009,Wagg_2010,Riechers_2014,Willott_2015,Capak2015_Natur.522..455C,Maiolino_2015_10.1093/mnras/stv1194,Pentericci_2016,Knudsen_2016,Bradac_2017,Smit_2018Natur.553..178S,Matthee_2017,Matthee_2019,harikane_2020,carniani_2020,lefevre_2020,bethermin2020A&A...643A...2B,venemans_2020,Fudamoto_2021}. In addition to the immediate utility of [CII] to spectroscopically confirm galaxies in the reionization era and obtain a precise measurement of their redshift, the strength of this line makes it one of the prime features for studying the kinematics of high-z galaxies \citep[e.g.,][]{Smit_2018Natur.553..178S,Hashimoto_2019_10.1093/pasj/psz049,jones_2021}. Dynamical masses are particularly interesting in that they give some handle on the masses of galaxies, which can be challenging to do from the photometry alone \citep[e.g.,][]{schaerer_2009}.
Despite the great utility of [CII] and the huge interest from the community, only a modest number of $z>6.2$ galaxies had been found to show [CII] emission in the first six years of ALMA operations \citep[e.g.,][]{Maiolino_2015_10.1093/mnras/stv1194,Pentericci_2016,Bradac_2017,Matthee_2017,Smit_2018Natur.553..178S,Carniani_2018,Hashimoto_2019_10.1093/pasj/psz049}. Additionally, even in cases where the [CII] line was detected, only modest luminosities were found, i.e., $L_{[CII]}$$\lesssim\,$2$\times$10$^{8}$ $L_{\odot}$. One potentially important factor in the limited success of earlier probes for [CII] at $z>6$ may have been the almost exclusive targeting of sources with redshifts from the Ly$\alpha$ emission line. At lower redshifts at least, Ly$\alpha$ emission seems to be much more prevalent in lower mass galaxies than it is in higher mass galaxies \citep[e.g.,][]{Stark_2010}. Additionally, Ly$\alpha$ emitters have been found to be systematically low in their [CII] luminosity-to-SFR ratios \citep{Harikane_2018,Matthee_2019}. Both factors would have caused early ALMA observations to miss those galaxies with the brightest [CII] lines.
In a cycle 4 pilot, \citet{Smit_2018Natur.553..178S} demonstrated the effectiveness of spectral scans for the [CII] line in $z>6$ galaxies which are particularly luminous and which also have well-constrained photometric redshifts. One aspect of the $z\sim7$ galaxies from \citet{Smit_2018Natur.553..178S} that allowed for particularly tight constraints on their redshifts was the high EW of their strong [OIII]+H$\beta$ emission lines in the {\it Spitzer}/IRAC bands. This is due to the particular sensitivity of the {\it Spitzer}/IRAC [3.6]$-$[4.5] colors to the redshift of high-EW [OIII]+H$\beta$ emitters \citep{Smit_2015}. Remarkably, despite just $\sim$1 hour of observations becoming available on these targets, the results were striking, with 6-8$\sigma$ [CII] lines found in two targeted sources, with redshifts $z=6.8075$ and $z=6.8540$. Additionally, the [CII] luminosities of the two sources were relatively high, being brighter in [CII] than all ALMA non-QSO targets by factors of $\sim$2-20.
Encouraged by the high efficiency of the \citet{Smit_2018Natur.553..178S} spectral scan results, we successfully proposed for similar observations for 6 other luminous $z\sim7$ galaxies (2018.1.00085.S: PI Schouws) to add to the results available from the \citet{Smit_2018Natur.553..178S} program and further refine the spectral scan strategy. The purpose of this paper is to present results from this second pilot. Results from this pilot program and an earlier program from \citet{Smit_2018Natur.553..178S} served as the motivation for the Reionization Era Bright Emission Line Survey (REBELS) ALMA large program \citep{Bouwens_LP} in cycle 7. The REBELS program significantly expanded the strategy employed in these pilot programs to a much larger set of targets, while extending the scan strategy to even higher redshift.
The paper is structured as follows. In \textsection 2, we detail the selection of targets for this second pilot program, while also describing the set up and reduction of our spectral scan observations. In \textsection 3, we describe the [CII] line search and present our new [CII] detections. We place our detections on the [CII]-SFR relation and examine the [CII]/$L_{IR}$ of our sources. We conclude the Section by examining their kinematics. In \textsection 4, we discuss the prospect of deploying the described spectral scan observations to a much larger set of $z>6$ galaxies. Finally in \textsection 5, we summarize our conclusions.
Throughout this paper we assume a standard cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$. Magnitudes are presented in the AB system \citep{oke_gunn_1983ApJ...266..713O}. For star formation rates and stellar masses we adopt a Chabrier IMF \citep{Chabrier_2003}. Error-bars indicate the $68\%$ confidence interval unless specified otherwise.
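For reference, the luminosity distances implied by this cosmology follow from a short numerical integration (flat $\Lambda$CDM; a sketch, not the code used in the analysis):

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def luminosity_distance_mpc(z, h0=70.0, om=0.3, ol=0.7, n=20000):
    """D_L = (1+z) D_C, with D_C = (c/H0) int_0^z dz'/E(z') and
    E(z) = sqrt(om (1+z)^3 + ol), for a flat LCDM cosmology
    (trapezoidal rule)."""
    dz = z / n
    s = 0.0
    for i in range(n + 1):
        zi = i * dz
        w = 0.5 if i in (0, n) else 1.0
        s += w / math.sqrt(om * (1.0 + zi) ** 3 + ol)
    return (1.0 + z) * (C_KM_S / h0) * s * dz

# A z ~ 7 source sits at a luminosity distance of ~70 Gpc:
d_l_z7 = luminosity_distance_mpc(7.0)
```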
\section{High-Redshift Targets and ALMA Observations}
\subsection{UltraVISTA Search Field and Photometry}
Our source selection is based on ultradeep near-infrared imaging over the
COSMOS field \citep{Scoville_2007} from the third data release (DR3)
of UltraVISTA \citep{McCracken_2012A&A...544A.156M}. UltraVISTA provides imaging
covering 1.4 square degrees \citep{McCracken_2012A&A...544A.156M} in the Y, J, H and
Ks filters to $\sim$24-25 mag (5$\sigma$), with DR3 achieving
fainter limits over 0.7 square degrees in 4 ultradeep strips. The DR3
contains all data taken between December 2009 and July 2014 and
reaches $Y = 26.2$, $J = 26.0$, $H = 25.8$, $K = 25.5$ mag (5$\sigma$ in 1.2$''$-diameter apertures). The nominal depths we measure
in the Y, J, H, and Ks bands for the UltraVISTA DR3 release are
$\sim$0.1-0.2 mag, $\sim$0.6 mag, $\sim$0.8 mag, and $\sim$0.2 mag deeper,
respectively, than in the UltraVISTA DR2 release.
The optical data we use consists of CFHT/Omegacam in g, r, i, z \citep{Erben_2009A&A...493.1197E} from the Canada-France-Hawaii Legacy Survey (CFHTLS) and
Subaru/SuprimeCam in B$_J$+V$_J$+g+r+i+z imaging \citep{Taniguchi_2007}. This analysis uses Spitzer/IRAC 3.6$\mu$m and 4.5$\mu$m
observations from S-COSMOS \citep{Sanders_2007}, the Spitzer Extended
Deep Survey \cite[SEDS:][]{Ashby_2013}, the Spitzer-Cosmic Assembly
Near-Infrared Deep Extragalactic Survey \citep[S-CANDELS:][]{Ashby_2015}, Spitzer Large Area Survey with Hyper-Suprime-Cam (SPLASH,
PI: Peter Capak), and the Spitzer Matching survey of the UltraVISTA
ultra-deep Stripes \citep[SMUVS, PI: K. Caputi:][]{Ashby_2018}. Compared to the original S-COSMOS IRAC data,
SPLASH provides a large improvement in depth over nearly the whole
UltraVISTA area, covering the central 1.2 square degree COSMOS field
to 25.5 mag (AB) at 3.6 and 4.5$\mu$m. SEDS and S-CANDELS cover
smaller areas to even deeper limits. We also make use of data from
SMUVS, which provides substantially deeper Spitzer/IRAC data over the deep
UltraVISTA stripes.
Source catalogs were constructed using SExtractor v2.19.5 \citep{Bertin_1996}, run in dual image mode, with source detection performed
on the square root of a $\chi^2$ image \citep{Szalay_1999}
generated from the UltraVISTA YJHK$_{s}$ images. In creating
our initial catalog of $z\sim7$ candidate galaxies, we started from
simply-constructed catalogs derived from the ground-based
observations. Prior to our photometric measurements, images were first convolved to the
J-band point-spread function (PSF) and carefully registered against the
detection image (mean RMS$\sim$0.05\arcsec). Color measurements were made
in small \citet{Kron1980}-like apertures (SExtractor AUTO with Kron factor
1.2) and typical radii of $\sim$0.35-0.50\arcsec.
We also consider color measurements made in fixed 1.2$''$-diameter
apertures when refining our selection of $z\sim7$ candidate
galaxies. For the latter color measurements, flux from a source and its nearby neighbors (12\arcsec$\times$12\arcsec$\,$region) is carefully modeled; and then flux from the neighbors is subtracted before the aperture photometry is performed. Our careful modeling of the light from neighboring sources improves the overall robustness of our final
candidate list to source confusion. Total magnitudes are derived by correcting the fluxes measured in 1.2\arcsec-diameter apertures for the light lying outside a 1.2$"$-diameter aperture. The relevant correction factors are estimated on a source-by-source basis based on
the spatial profile of each source and the relevant PSF-correction kernel. Photometry on the Spitzer/IRAC \citep{Fazio_2004} observations is more involved due to the much lower resolution FWHM = 1.7\arcsec$\,$compared to the ground-based data (FWHM = 0.7\arcsec). The lower resolution results in source blending where light from foreground sources contaminates measurements of the sources of interest. These issues can largely be overcome by modeling and subtracting the contaminating light using the higher spatial resolution near-IR images as a prior. Measurements
of the flux in the {\it Spitzer}/IRAC observations were performed with the mophongo software \citep{Labbe_2006,Labbe_2010a,Labbe_2010b,Labbe_2013,Labbe2015}. The
positions and surface brightness distributions of the sources in the
coadded JHKs image are taken as appropriate models and, after PSF
matching to the IRAC observations, these models are simultaneously fit
to the IRAC image, leaving only their normalizations as free
parameters. Subsequently, light from all neighbors is subtracted, and
flux densities are measured in 2\arcsec$\,$diameter circular apertures. The IRAC fluxes are corrected to total for missing light outside the
aperture using the model profile for the individual sources. The
procedure employed here is very similar to those of other studies
\citep[e.g.,][]{Skelton_2014,Galametz_2013,Guo_2013,Stefanon_2017,Weaver_2022}.
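The simultaneous template fit described above is, at its core, a linear least-squares problem for the normalizations. The schematic below (pure Python; the actual mophongo implementation is considerably more sophisticated) solves the normal equations for a set of flattened, PSF-matched model images:

```python
def fit_normalizations(models, image):
    """Least-squares normalizations a_k minimizing
    || sum_k a_k * models[k] - image ||^2, via the normal equations
    A a = b with A_kl = <M_k, M_l> and b_k = <M_k, d>, solved by
    Gaussian elimination with partial pivoting."""
    K = len(models)
    A = [[sum(mk * ml for mk, ml in zip(models[k], models[l]))
          for l in range(K)] for k in range(K)]
    b = [sum(mk * di for mk, di in zip(models[k], image)) for k in range(K)]
    for col in range(K):
        piv = max(range(col, K), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, K):
            f = A[r][col] / A[col][col]
            for c in range(col, K):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * K
    for r in range(K - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, K))) / A[r][r]
    return a

# Example: two non-overlapping "templates" and an image that is
# 2x the first plus 3x the second:
norms = fit_normalizations([[1.0, 0.0, 1.0, 0.0],
                            [0.0, 1.0, 0.0, 1.0]],
                           [2.0, 3.0, 2.0, 3.0])   # -> [2.0, 3.0]
```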
\begin{figure}[th!]
\epsscale{1.17}
\plotone{z3645p.pdf}
\caption{Measured [3.6]$-$[4.5] colors for the bright $z\sim 7$ galaxy candidates we have identified within UltraVISTA. The [3.6]$-$[4.5] color is largely driven by how high the EWs of the emission lines ([OIII]+H$\beta$, H$\alpha$) that fall in the [3.6] and [4.5] bands are, with the blue colors at $z\sim6.6$-6.9 driven by the [OIII]+H$\beta$ lines falling in the [3.6] band, and at $z>7$ the [OIII]+H$\beta$ lines falling in the [4.5] band. The most promising $z\sim7$ follow-up targets are indicated by the red squares and correspond to the brightest $H\lesssim 25$ mag sources over COSMOS and show robustly red or blue [3.6]$-$[4.5] colors, significantly narrowing the width of the redshift likelihood distribution over which [CII] searches are required. The filled magenta circles show the [3.6]$-$[4.5] colors measured for the two $z\sim 7$ galaxies with [CII] detections reported in \citet{Smit_2018Natur.553..178S}, while the solid orange triangles are for sources with [CII] detections in the literature \citep{Matthee_2017,Pentericci_2016,Hashimoto_2019_10.1093/pasj/psz049}. The blue triangles correspond to those bright ($H\leq 24.5$ mag) $z\sim7$ galaxies from Appendix A where the redshift is less well constrained based on the [3.6]$-$[4.5] colors (UVISTA-Z-002, 003, 004, 005, 008). The cyan shaded region shows the expected [3.6]$-$[4.5] colors for star-forming galaxies vs. redshift assuming a rest-frame equivalent width for [OIII]+H$\beta$ in the range 400\AA to 2000\AA.}
\label{fig:z3645}
\end{figure}
\subsection{Bright $z\sim7$ Selection\label{sec:z7select}}
In an effort to identify a robust set of $z\sim7$ galaxies from the wide-area UltraVISTA data set for follow-up with ALMA, we require sources to be detected at $>$6$\sigma$, combining the flux in J, H, Ks,
[3.6], and [4.5]-band images (coadding the S/N’s in quadrature). The
combined UltraVISTA + IRAC detection and SNR requirements exclude
spurious sources due to noise, detector artifacts, and diffraction
features. We construct a preliminary catalog of candidate $z \sim 7$
galaxies using those sources that show an apparent Lyman break due to
absorption of UV photons by neutral hydrogen in the IGM blueward of
the redshifted Ly$\alpha$ line. This break is measured using simple
color criteria. At $z > 6.2$, the $z$-band flux is significantly impacted by this absorption of rest-$UV$ photons, while at $z>7.1$, the $Y$-band flux is impacted. The following criteria were applied:
\begin{displaymath}
(z-Y>0.5)\wedge(Y-K<0.7)
\end{displaymath}
where $\wedge$ is the logical AND operator. In case of a
non-detection, the $z$ or $Y$-band flux in these relations is replaced
by the equivalent 1$\sigma$ upper limit.
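In code, the break criteria together with the $1\sigma$ upper-limit replacement amount to the following schematic check (fluxes in arbitrary consistent linear units, so the magnitude zeropoint cancels in the colors):

```python
import math

def color_mag(flux, sigma):
    """Magnitude used for color computation; a non-detection (flux below
    its 1-sigma error) is replaced by the 1-sigma upper limit, as
    described in the text. The zeropoint is omitted since it cancels."""
    return -2.5 * math.log10(max(flux, sigma))

def lyman_break_select(fz, sz, fy, sy, fk, sk):
    """(z - Y > 0.5) AND (Y - K < 0.7), per the selection criteria."""
    z, y, k = color_mag(fz, sz), color_mag(fy, sy), color_mag(fk, sk)
    return (z - y > 0.5) and (y - k < 0.7)

# A strong z-band dropout with a flat Y-to-K continuum is selected:
is_candidate = lyman_break_select(0.05, 1.0, 10.0, 0.5, 10.0, 0.5)
```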
It is worth emphasizing that our final sample of $z > 6$
bright galaxies shows little dependence on the specific limits chosen
here. For each candidate source the redshift probability distribution
P(z) is then determined using the EAZY photometric redshift software \citep{Brammer_2008}, which fits a linear combination of galaxy
spectral templates to the observed spectral energy distribution
(SED).
The template set used here is the standard EAZY v1.0 template set,
augmented with templates from the Galaxy Evolutionary Synthesis Models \citep[GALEV:][]{Kotulla_2009} which include nebular continuum and
emission lines. The implementation of nebular lines follow the
prescription of \citet{Anders_2003}, assuming
0.2$\,$Z$_{\odot}$ metallicity and a large rest-frame EW of H$\alpha$ =
1300\AA. Such extreme EWs reproduce the observed [3.6]$-$[4.5] colors
for many spectroscopically confirmed $z\sim7$-9 galaxies \citep[e.g.,][]{Ono_2012,Finkelstein_2013,Oesch_2015,Zitrin_2015,Roberts_Borsani_2016,Stark_2017}. A flat prior on the redshift is assumed.
To maximize the robustness of candidates selected for our $z\sim7$ samples, we require the integrated probability beyond $z = 6$ to be $>$70\%. The use of a redshift likelihood distribution $P(z)$ is very effective in rejecting faint low-redshift galaxies with a strong
Balmer/4000\AA$\,$break and fairly blue colors redward of the break.
The available optical observations are used to reject other low
redshift sources and Galactic stars by imposing $\chi^2 _{opt} < 4$.
$\chi^2 _{opt}$ is defined as $\chi_{opt} ^2 = \sum_{i}
\textrm{SGN}(f_{i})\, (f_{i}/\sigma_{i})^2$ where $f_{i}$ is the flux in
band $i$ in a consistent aperture, $\sigma_i$ is the uncertainty in
this flux, and SGN($f_{i}$) is equal to 1 if $f_{i}>0$ and $-1$ if
$f_{i}<0$. $\chi^2 _{opt}$ is calculated in both 0.8$''$-diameter
apertures and in scaled elliptical apertures. $\chi_{opt} ^2$ is
effective in excluding $z=1$-3 low-redshift star-forming galaxies
where the Lyman break color selection is satisfied by strong line
emission contributing to one of the broad bands \citep[e.g.][]{vanderWel_2011,Atek_2011}. Sources which show a 2$\sigma$ detection in the
available ground-based imaging bands (weighting the flux in the
different bands according to the inverse square uncertainty in
$f_{\nu}$) are also excluded as potentially corresponding to
lower-redshift contaminants. Finally, to minimize contamination by low-mass stars, we fit the observed SEDs of candidate $z\sim7$ galaxies both with EAZY and with stellar templates from the SpeX prism library \citep{Burgasser_2004}. Sources which are significantly better
fit ($\Delta \chi^2$ $>$ 1) by stellar SED models are excluded. The SED templates
for lower mass stars are extended to 5$\mu$m using the known $J -[3.6]$ or
$J -[4.5]$ colors of these spectral types \citep{Patten_2006,Kirkpatrick_2011} and the nominal spectral types of stars from
the SpeX library. The approach we utilize is identical to the
SED-fitting approach employed by \citet{Bouwens_2015} for
excluding low-mass stars from the CANDELS fields. Combined, these
selection requirements result in very low expected contamination
rates.
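For concreteness, the sign-weighted $\chi^2 _{opt}$ statistic defined above can be sketched in a few lines (a minimal illustration with hypothetical fluxes, not the actual photometric pipeline):

```python
def chi2_opt(fluxes, sigmas):
    """Sign-weighted chi^2 over the optical bands, as defined in the text.

    A genuine z > 6 dropout is consistent with zero optical flux, so
    positive and negative terms cancel and chi2_opt scatters around zero;
    low-z interlopers with real optical flux give large positive values.
    """
    total = 0.0
    for f, s in zip(fluxes, sigmas):
        sgn = 1.0 if f > 0 else -1.0
        total += sgn * (f / s) ** 2
    return total

# Hypothetical fluxes (in units of their 1-sigma uncertainties):
print(chi2_opt([0.5, -0.8, 0.3], [1.0, 1.0, 1.0]))  # ~ -0.3: passes chi2_opt < 4
print(chi2_opt([3.0, 2.5, 2.8], [1.0, 1.0, 1.0]))   # ~ 23: rejected
```

The sign weighting is what distinguishes this from an ordinary $\chi^2$: consistent low-level negative fluctuations pull the statistic down rather than up, so pure noise does not accumulate toward the rejection threshold.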
Using the above selection criteria, we select 32 potential $z\sim7$ candidates for spectral scans over a 1.2 square degree area in the UltraVISTA field. The sources have $H$-band magnitudes ranging from 23.8 to 25.7 mag and photometric redshifts from 6.6 to 7.1. We list the selected sources in Table~\ref{tab:candidates} in Appendix A.
\begin{deluxetable*}{lcccccc}[th!]
\tablecaption{Main parameters of the ALMA observations used for this study. \label{tab:tab1}}
\tablewidth{0pt}
\tablehead{\colhead{\specialcell[c]{Source \\ Name \\ }} & \colhead{\specialcell[c]{RA}} & \colhead{\specialcell[c]{DEC}} & \colhead{\specialcell[c]{Beamwidth$^{a}$ \\ (arcsec)}} & \colhead{\specialcell[c]{Integration\\time$^{b}$\\(min.)} } & \colhead{\specialcell[c]{PWV$^{c}$ \\ (mm)}} & \colhead{\specialcell[c]{Frequency Range of the\\Spectral Scan (GHz),\\Redshift Range, and\\Coverage of $p(z)$$^{d}$}} }
\startdata
UVISTA-Z-001 & 10:00:43.36 & 02:37:51.3 & 1.47\arcsec$\times$1.21\arcsec & 39.31 &3.3 & 228.62 - 239.37 ($z=6.94$-7.31) (71\%) \\
UVISTA-Z-007 & 09:58:46.21 & 02:28:45.8 & 1.40\arcsec$\times$1.19\arcsec & 32.76 &1.9& 240.28 - 251.02 ($z=6.57$-6.91) (82\%) \\
UVISTA-Z-009 & 10:01:52.30 & 02:25:42.3 & 1.38\arcsec$\times$1.20\arcsec & 32.76 &1.9& 240.28 - 251.02 ($z=6.57$-6.91) (65\%)\\
UVISTA-Z-010 & 10:00:28.12 & 01:47:54.5 & 1.44\arcsec$\times$1.18\arcsec & 39.31 &3.3& 228.62 - 239.37 ($z=6.94$-7.31) (90\%) \\
UVISTA-Z-013 & 09:59:19.35 & 02:46:41.3 & 1.45\arcsec$\times$1.18\arcsec & 39.31 &3.3& 228.62 - 239.37 ($z=6.94$-7.31) (99\%) \\
UVISTA-Z-019 & 10:00:29.89 & 01:46:46.4 & 1.39\arcsec$\times$1.18\arcsec & 32.76 &1.9& 240.28 - 251.02 ($z=6.57$-6.91) (95\%)\\
\enddata
\tablecomments{
\textsuperscript{a} Beamsize for the naturally weighted moment-0 images.
\textsuperscript{b} Corresponds to the average on-source integration time for the two tunings.
\textsuperscript{c} Average precipitable water vapour during the observations.
\textsuperscript{d} Percentage of the redshift probability distribution that is covered by the spectral scan.
}
\vspace{-0.5cm}
\end{deluxetable*}
\subsection{Target Selection for the ALMA Observations}
In an effort to further demonstrate the potential of spectral scans with ALMA to characterize massive star-forming galaxies at $z\sim7$, we elected to target six sources from the $z\sim7$ galaxy sample constructed in the previous Section (presented in Appendix A).
We focused on those galaxies which are brightest in the rest-frame $UV$ and have the tightest constraints on their photometric redshifts. $UV$-bright galaxies are particularly useful to target since those sources have the highest apparent SFRs and should contain particularly luminous [CII] lines, assuming the \citet{delooze2014} relation holds out to $z>6$ as \citet{Schaerer_2020} find. If there is an additional contribution from obscured star formation, the luminosity of [CII] should be further enhanced.
Additionally, it is useful to target sources with tight constraints on their redshifts from photometry to minimize the number of spectral scan windows that need to be utilized. For this purpose, a useful set of sources to target are those with particularly strong
constraints on their photometric redshifts from their \textit{Spitzer}/IRAC colors. One particularly interesting opportunity exists for
sources whose broad-band Lyman break places them around a
redshift $z\sim7$, as \citet{Smit_2015} and \citet{Smit_2018Natur.553..178S} have already demonstrated. This is because at $z\sim7$, the
\textit{Spitzer}/IRAC color can reduce the width of the redshift likelihood
window for a source by as much as a factor of 2. Due to the dramatic
impact the [OIII]+H$\beta$ lines have on the [3.6]$-$[4.5] colors for
star-forming galaxies at $z\sim7$, the \textit{Spitzer}/IRAC color places robust
constraints on the redshift of a source. For sources with a robustly
red \textit{Spitzer}/IRAC [3.6]$-$[4.5] color, we can eliminate the $z<7$
solution, while for sources with a robustly blue \textit{Spitzer}/IRAC
[3.6]$-$[4.5] color, we can eliminate the $z>7$ solution.
Following from these arguments, the best targets for the detection of
luminous [CII] line emission at $z>6$ are those sources (1) which are
bright $(H<25)$, (2) have photometric Lyman breaks around $z\sim7$,
and (3) have robustly red or blue colors. We highlight these sources
in a {\it Spitzer}/IRAC [3.6]$-$[4.5] color vs. redshift diagram in
Figure~\ref{fig:z3645} as the large red squares.
\begin{figure*}
\epsscale{1.18}
\plotone{scan_windows.pdf}
\caption{\textit{(Top)} Full spectral scans as observed for the six galaxies targeted in this study. The targets are divided into two samples of three galaxies which were observed with different ALMA tunings. For both samples the ALMA tunings are shown above or below the spectral scans. We detect an emission line for 3 targets, indicated with a red line below the location of the emission line in the scans. For clarity, the spectra are binned by combining 5 spectral channels. \textit{(Bottom)} Photometric redshift probability distributions for the galaxies targeted in this study (on the same scale as the spectral scans). Galaxies for which an emission line is detected are indicated with an asterisk. The two samples with different ALMA tunings are distinguished with blue and green colors.} \label{fig:scan}
\end{figure*}
\begin{deluxetable*}{lccccccccc}[]
\tablecaption{Results - UV and ALMA derived properties of the galaxies targeted in this study.\label{tab:results}}
\tablewidth{0pt}
\tablehead{\colhead{\specialcell[c]{Source\\Name}} & \colhead{\specialcell[c]{$z_{phot}$}} & \colhead{\specialcell[c]{$z_{spec}$}} & \colhead{\specialcell[c]{$L_{UV}$$^\dagger$ \\ ($10^{11}L_{\odot}$)}} & \colhead{\specialcell[c]{$\log M_{*}$$^{\dagger}$\\($M_{\odot}$)}} & \colhead{\specialcell[c]{EW [OIII]+H$\beta^{\ddagger}$\\(\AA)}} & \colhead{\specialcell[c]{$L_{IR}$$^{a}$\\($10^{11}L_{\odot}$)}} & \colhead{\specialcell[c]{$S_{\nu}\Delta v$ \\ (Jy km/s)}} & \colhead{\specialcell[c]{$L_{[CII]}$\\($10^8 L_{\odot}$)}} & \colhead{\specialcell[c]{FWHM$^b$\\(km/s)}}}
\startdata
UVISTA-Z-001 & 7.00$^{+0.05}_{-0.06}$ & 7.0599(3)& 2.9$^{+0.1}_{-0.1}$ & 9.58$^{+0.09}_{-0.35}$ & 1004$^{+442}_{-206}$ & 5.0$^{+2.1}_{-2.1}$ & 0.57$\pm$0.08 & 6.7 $\pm$ 1.2 & 256 $\pm$ 27 \\
UVISTA-Z-007 & 6.72$_{-0.09}^{+0.10}$ & 6.7496(5)& 1.5$^{+0.2}_{-0.2}$ & 9.57$^{+0.35}_{-0.44}$ & 761$^{+530}_{-168}$ & $<$ 2.2$^c$ & 0.51$\pm$0.09 & 5.6 $\pm$ 1.4 & 301 $\pm$ 42 \\
UVISTA-Z-009 & 6.86$_{-0.06}^{+0.07}$ & - & 1.6$^{+0.2}_{-0.2}$ & 9.40$^{+0.32}_{-0.29}$ & 1012$^{+677}_{-257}$ & $<$ 2.4 & - & -$^{d}$ & - \\
UVISTA-Z-010 & 7.06$_{-0.07}^{+0.07}$ & - & 1.1$^{+0.2}_{-0.2}$ & 8.88$^{+0.28}_{-0.09}$ & 1706$^{+780}_{-807}$ & $<$ 2.1 & - & -$^{d}$ & - \\
UVISTA-Z-013 & 7.02$_{-0.03}^{+0.03}$ & - & 1.4$^{+0.4}_{-0.3}$ & 10.72$^{+0.03}_{-0.10}$ & 1821$^{+4364}_{-1142}$ & $<$ 2.2 & - & -$^{d}$ & - \\
UVISTA-Z-019 & 6.80$_{-0.06}^{+0.05}$ & 6.7534(2)& 1.0$^{+0.1}_{-0.1}$ & 9.51$^{+0.19}_{-0.18}$ & 628$^{+226}_{-99}$ & 2.7$^{+0.9}_{-0.9}$ & 0.80$\pm$0.06 & 8.8 $\pm$ 0.9 & 184 $\pm$ 15 \\
\enddata
\tablecomments{
\textsuperscript{$\dagger$} $UV$-Luminosities and stellar masses are taken from \citet{Schouws_2021} and were derived using the methodology described in \citet{Stefanon2019}, assuming a metallicity of 0.2 $Z_{\odot}$, a constant star formation history and a \citet{Calzetti_2000} dust law.
\textsuperscript{$\ddagger$} [OIII]+H$\beta$ equivalent widths are taken from \citet{Bouwens_LP} and Stefanon et al. (2022, in prep).
\textsuperscript{a} Total infrared luminosity integrated from 8-1000$\mu$m assuming a modified black body SED with a dust temperature of 50$\,$K and a dust emissivity index $\beta_{dust}$=1.6 after correcting for CMB effects \citep[we refer to][for details]{Schouws_2021}.
\textsuperscript{b} Observed FWHM of the [CII] emission line as measured in the 1d spectrum.
\textsuperscript{c} UVISTA-Z-007 shows dust continuum emission at a level of $2.5\sigma$ (corresponding to $L_{IR}\sim1.8\times10^{11}L_{\odot}$), but we use the $3\sigma$ upper limit on the luminosity for the remainder of our analysis to be conservative.
\textsuperscript{d} These non-detections indicate either that the [CII] luminosity is lower than our detection limit ($\sim 2\times10^{8} L_{\odot}$, see Figure \ref{fig:efficiency}) or that the redshift falls outside of the range scanned in this study. Because of this degeneracy we cannot provide limits on the [CII] luminosity outside the scanned redshift ranges (see Table \ref{tab:tab1}).
}
\end{deluxetable*}
\subsection{ALMA Observations and Data Reduction} \label{sec:datareduction}
A summary of the ALMA data collected for this second pilot program is presented in Table~\ref{tab:tab1}. ALMA observations were obtained over a contiguous 10.75 GHz range (2 tunings) to scan for the [CII] line. For the three targets with photometric redshifts $z \lesssim 7$, the redshift range $z=6.57$ to 6.91 was scanned (240.28 to 251.02 GHz). For the targets with photometric redshifts $z\gtrsim 7$, the redshift range $z=6.94$ to 7.31 was scanned (228.62 to 239.37 GHz). The scan windows utilized are presented in Figure \ref{fig:scan}, along with the redshift likelihood distribution inferred from our photometry. These scan windows cover between 71\% and 99\% of the estimated redshift likelihood distribution (Table~\ref{tab:tab1}). The required integration times for the observations were set assuming [CII] luminosities and line widths similar to those in the pilot program observations of \citet{Smit_2018Natur.553..178S}, i.e., $\sim$4$\times$10$^{8}$ $L_{\odot}$ and 200 km/s for the FWHM of [CII]. To detect this line at $>$5$\sigma$, we required achieving a sensitivity of 300 $\mu$Jy per 66 km/s channel. To achieve this sensitivity, we required $\sim$33 to 39 minutes of integration time with ALMA.
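The sensitivity requirement can be verified with a back-of-the-envelope calculation. This is a sketch, not the actual exposure-time computation: the luminosity distance of $6.8\times10^{4}$ Mpc at $z\approx7$ is an assumed round number, and the conversion $L_{[CII]}/L_{\odot}=1.04\times10^{-3}\,S\Delta v\,[\mathrm{Jy\,km/s}]\;\nu_{obs}\,[\mathrm{GHz}]\;D_L^2\,[\mathrm{Mpc^2}]$ follows the standard Carilli \& Walter (2013) convention.

```python
# Fiducial line from the pilot program: L_[CII] ~ 4e8 Lsun, FWHM ~ 200 km/s
L_cii = 4e8          # Lsun
fwhm = 200.0         # km/s
nu_obs = 235.0       # GHz, a mid-scan frequency for [CII] at z ~ 7
D_L = 6.8e4          # Mpc, approximate luminosity distance at z ~ 7 (assumed)

# Invert L = 1.04e-3 * S_dv * nu_obs * D_L^2 to get the integrated line flux
S_dv = L_cii / (1.04e-3 * nu_obs * D_L ** 2)   # Jy km/s

# Peak flux density of a Gaussian line: S_dv = 1.064 * S_peak * FWHM
S_peak = S_dv / (1.064 * fwhm) * 1e3           # mJy

snr_per_channel = S_peak / 0.3                 # 300 uJy rms per 66 km/s channel
print(S_dv, S_peak, snr_per_channel)  # roughly 0.35 Jy km/s, ~1.7 mJy, SNR ~ 5.5
```

Since the 200 km/s line spans about three 66 km/s channels, the integrated detection significance is higher still, consistent with the $>$5$\sigma$ requirement in the text.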
The ALMA data were reduced and calibrated using \textsc{Casa} version 5.4.1 following the standard ALMA pipeline procedures. To reduce the data-size of the visibility measurement set, we averaged the data in bins of 30 seconds after carefully verifying that this bin-size does not impact our data through time-average smearing (e.g. Thompson, Moran \& Swenson 2017). We then performed initial imaging of the full data-cube using the \textsc{tclean} task with natural weighting. We clean to a depth of 3$\sigma$ per channel and use automasking\footnote{For the automasking we use the recommended settings for the 12-meter array with compact baselines from the \textsc{Casa} automasking guide: \url{https://casaguides.nrao.edu/index.php/Automasking\_Guide}. We verified that the automasking identifies the same emission regions that we would have selected when masking by hand.} to identify regions that contain emission. This initial data-cube was used to perform an initial search for [CII] line candidates; the details of our line-search procedure are described in the next Section.
If a significant emission line candidate is identified, we use the line properties to carefully mask the channels containing line emission to produce a continuum subtracted visibility data-set using the \textsc{uvcontsub} task. This continuum subtracted measurement set is then used to re-image the full data cube, after which we repeat the line search and verify that the same line candidates are obtained.
\begin{figure*}[ht]
\epsscale{1.125}
\plotone{test.pdf}
\caption{Overview of the galaxies detected in [CII] in this study. (\textit{left panel}) 1d extracted spectra of the [CII] line (\textit{blue}) and a Gaussian fit (\textit{red}) for UVISTA-Z-001 (\textit{top}), UVISTA-Z-007 (\textit{middle}) and UVISTA-Z-019 (\textit{bottom}). (\textit{right panels}) The spatial distribution of the [CII] line emission (\textit{blue contours}) relative to rest-$UV$ images of the sources from HST (\textit{F140W}: \textit{background image}) and dust continuum emission (\textit{orange contours}). Contours correspond to 2, 3, 4, and 5$\times$ the noise level. The dust continuum is significantly detected in UVISTA-Z-001 and UVISTA-Z-019 and marginally detected in UVISTA-Z-007 (at $2.5\sigma$) \citep[see also][for an extensive discussion of the continuum properties of these galaxies]{Schouws_2021}.
\label{fig:morphology}}
\end{figure*}
For each emission line candidate we produce an initial moment-0 image, including channels that fall within 2$\times$ the initial FWHM estimate of the line candidate\footnote{Collapsing over all channels that fall within 2$\times$ the FWHM captures $\sim$98\% of the flux for lines with a Gaussian line-profile.}. Using this moment-0 map, we produce a 1d spectrum where we include all pixel-spectra that correspond to $>3\sigma$ emission in the moment-0 map and we weigh the contribution of each pixel-spectrum by its signal-to-noise level. We then fit a Gaussian line model to this spectrum to extract the central frequency and the FWHM. Next, using this updated estimate for the FWHM, we update the moment-0 image and its associated signal-to-noise weighted 1d spectrum. We perform 10 iterations of this procedure and note that it converges to a stable solution within a few iterations. The line parameters that we derive with this method are also used to carefully exclude line emission from the continuum imaging used in \citet{Schouws_2021}.
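The iterative moment-0/spectrum extraction can be illustrated on a toy cube. This is a simplified sketch under stated assumptions: the "cube" is a synthetic set of Gaussian pixel-spectra, and a second-moment width estimate stands in for the Gaussian fit used in the actual analysis; all numbers are illustrative.

```python
import math, random

random.seed(1)

# Toy "cube": n_pix pixel-spectra of n_chan channels each (pure-python lists)
n_pix, n_chan, dv, noise = 25, 60, 20.0, 0.02
v = [dv * (i - n_chan / 2) for i in range(n_chan)]      # velocity axis, km/s
sig_line = 240.0 / 2.355                                # true FWHM = 240 km/s

def pixel_spectrum(amp):
    return [amp * math.exp(-0.5 * (vi / sig_line) ** 2)
            + random.gauss(0.0, noise) for vi in v]

cube = [pixel_spectrum(0.3 if p < 9 else 0.0) for p in range(n_pix)]  # 9 source pixels

fwhm = 400.0                                            # deliberately poor first guess
for _ in range(10):
    # Moment-0 over channels within 2x FWHM of the line center (~98% of the flux)
    sel = [i for i in range(n_chan) if abs(v[i]) < fwhm]
    mom0 = [sum(spec[i] for i in sel) * dv for spec in cube]
    m0_noise = noise * math.sqrt(len(sel)) * dv
    # SNR-weighted 1d spectrum over pixels detected at >3 sigma in the moment-0 map
    pix = [p for p in range(n_pix) if mom0[p] / m0_noise > 3.0]
    spec1d = [sum(cube[p][i] * mom0[p] for p in pix) for i in range(n_chan)]
    # Update the width from the second moment of the stacked line (in lieu of a fit)
    tot = sum(spec1d)
    var = sum(s * vi ** 2 for s, vi in zip(spec1d, v)) / tot
    fwhm = 2.355 * math.sqrt(max(var, 1.0))

print(round(fwhm))  # converges near the input 240 km/s within a few iterations
```

The key feedback loop is visible here: a better width estimate selects a better channel window, which improves the moment-0 map and pixel weighting, which in turn refines the width, converging quickly just as described above.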
\section{Results\label{sec:results}}
\subsection{[CII] Line Search} \label{sec:linesearch}
We performed a systematic search for emission line candidates using the MF3D line search algorithm \citep{Pavesi_2018}. MF3D finds line candidates using Gaussian template matching, which accounts for both spectrally and spatially extended emission lines. We used MF3D with 10 frequency templates with line-widths ranging from 50 to 600 km/s and 10 spatial templates ranging from 0 to 4.5 arcseconds. To be considered a reliable detection, we require line candidates to be within 1.5\arcsec$\,$of the rest-frame UV position of our sources and to have SNR$>$5.2. These criteria were found to result in $>$95\% purity (Schouws et al. in prep.).
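The matched-filter idea behind such a template search can be illustrated in one dimension. This is a toy sketch, not the MF3D implementation: the spectrum, templates, and line parameters are made up, and only spectral (not spatial) templates are shown.

```python
import math

def snr_matched(data, template, sigma):
    """SNR of a matched-filter template against a 1d spectrum with noise rms sigma."""
    num = sum(t * d for t, d in zip(template, data))
    den = sigma * math.sqrt(sum(t * t for t in template))
    return num / den

# Noiseless toy line of width 3 channels, to be searched with sigma = 1 noise
n = 101
line = [2.0 * math.exp(-0.5 * ((i - 50) / 3.0) ** 2) for i in range(n)]

# Try narrow, matched, and broad Gaussian templates; keep the best (snr, width)
best = max(
    (snr_matched(line, [math.exp(-0.5 * ((i - 50) / w) ** 2) for i in range(n)], 1.0), w)
    for w in (1.0, 3.0, 9.0)
)
print(best)  # the width-3 template maximizes the recovered SNR, as expected
```

Running a bank of templates over all line widths (and, in MF3D, spatial sizes) and keeping the maximum response is exactly what makes the search sensitive to both narrow and extended lines.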
Based on this search, we find reliable emission lines for UVISTA-Z-001 at 12.8$\sigma$ at 235.80 GHz, for UVISTA-Z-007 at 9.4$\sigma$ at 245.24 GHz, and for UVISTA-Z-019 at 18.3$\sigma$ at 245.12 GHz. The other datacubes did not contain any line candidates that meet the SNR requirements discussed above. For these non-detections, either the [CII] luminosity is lower than our detection limit ($\sim 2\times10^{8} L_{\odot}$, see Figure \ref{fig:efficiency}) or the redshift falls outside of the range scanned in this study. The results of the line-search are summarized in Figure \ref{fig:scan}, which shows the layout of the full spectral scan and the corresponding $P(z)$ distributions for all six sources in this study.
For the detected sources we show the contours from the [CII] and dust continuum emission compared to the rest-frame $UV$ morphology and their 1d spectra in Figure \ref{fig:morphology}. The rest-frame $UV$ images are in the F140W band at 1.39$\mu$m and are from GO-13793 \citep[UVISTA-Z-001, PI:Bowler:][]{bowler2017_10.1093/mnras/stw3296} and GO-16506 (UVISTA-Z-007, UVISTA-Z-019, PI: Witstok; Witstok et al. 2022, in prep). For a detailed description of the procedure we used to produce the moment-0 images and 1d spectra we refer to Section \ref{sec:datareduction}.
We measure the integrated flux of the [CII] emission lines from the moment-0 images by fitting the data with a 1-component Gaussian model using the \textsc{imfit} task. The resulting flux measurements are shown in Table \ref{tab:results}. We double-check the measurements from \textsc{imfit} with \textsc{UVMULTIFIT} \citep{uvmultifit2014A&A...563A.136M}, which we use to fit a Gaussian model in the (u,v)-plane instead of the image-plane. Reassuringly, both methods produce results that are consistent within their error bars. Finally, we convert these [CII] fluxes to luminosities following \citet{Solomon_1992} and \citet{Carilli_2013}:
\begin{equation}
L_{[CII]}/L_{\odot} = 1.04\times10^{-3}\times S_{\nu}\Delta v\times\nu_{obs}\times D_{L}^2
\end{equation}
with $S_{\nu}\Delta v$ the integrated flux density in Jy km/s, $\nu_{obs}$ the observing frequency in GHz, and $D_{L}$ the luminosity distance in Mpc.
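As a worked example, this conversion can be checked numerically for UVISTA-Z-019 using its measured flux and redshift. The sketch below uses the Carilli \& Walter (2013) convention $L_{[CII]}/L_{\odot}=1.04\times10^{-3}\,S\Delta v\,[\mathrm{Jy\,km/s}]\;\nu_{obs}\,[\mathrm{GHz}]\;D_L^2\,[\mathrm{Mpc^2}]$, and the flat $\Lambda$CDM parameters are an assumption, since the paper does not state its adopted cosmology.

```python
import math

# Flat LCDM; H0 and Omega_m are assumed values
H0, OM, C = 67.7, 0.31, 299792.458      # km/s/Mpc, -, km/s

def lum_dist(z, n=2000):
    """Luminosity distance in Mpc via midpoint integration of dz/E(z)."""
    dz = z / n
    integ = sum(dz / math.sqrt(OM * (1 + (i + 0.5) * dz) ** 3 + 1.0 - OM)
                for i in range(n))
    return (1 + z) * (C / H0) * integ

def l_cii(s_dv_jykms, nu_obs_ghz, z):
    """[CII] luminosity in Lsun (Solomon 1992; Carilli & Walter 2013 convention)."""
    return 1.04e-3 * s_dv_jykms * nu_obs_ghz * lum_dist(z) ** 2

# UVISTA-Z-019: S dv = 0.80 Jy km/s at 245.12 GHz, z = 6.7534 (Tables 1 and 2)
L = l_cii(0.80, 245.12, 6.7534)
print(f"{L:.2e}")  # ~9e8 Lsun, consistent with Table 2 within the uncertainties
```

The same two-line conversion reproduces the other tabulated luminosities from their fluxes and line frequencies.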
\begin{figure}[t!]
\epsscale{1.25}
\plotone{LCII_SFR.pdf}
\caption{[CII]-SFR relation for the galaxies in this study (\textit{solid blue stars}). Our results are consistent with the results from \citet{delooze2014} for local HII/Starburst galaxies within the scatter (\textit{solid black line and grey shaded region}). For context we show results from previous z$>$6.5 detections and non-detections \citep[from the compilation by][]{Matthee_2019} (\textit{green data-points and upper-limits}). We also show some fits to the $L_{[CII]}$-SFR relation from the literature for observations \citep{Schaerer_2020,harikane_2020}, semi-analytic models \citep[][at $z=7$]{Lagache2018}, and zoom-in simulations by \citet{Olsen_2017} and \citet{Vallini_2015} (with $Z=0.2Z_{\odot}$). All SFRs have been scaled to a Chabrier IMF
\citep{Chabrier_2003} and IR luminosities are calculated assuming a modified black body curve with $T=50K$ and $\beta_{d}=1.6$ \citep[as described in][]{Schouws_2021}.}\label{fig:SFR-CII}
\end{figure}
\begin{figure}[t!]
\epsscale{1.25}
\plotone{CII_deficit.pdf}
\caption{Ratio of the observed [CII] luminosity $L_{[CII]}$ to the IR luminosity $L_{IR}$ as a function of $L_{IR}$ for our small $z\sim7$ galaxy sample (\textit{solid blue stars}, $1\sigma$ uncertainties). IR luminosities $L_{IR}$ are estimated assuming a modified blackbody SED with dust temperature 50K and an emissivity index $\beta_d$ of 1.6. With the grey arrows in the top-right corner we show the effect of changing the assumed dust temperature by $\pm$10K. For context, we also show results from the $z=0$ GOALS sample \citep[][\textit{grey circles}]{Diazsantos2013,Diaz_Santos_2017}, the $z\sim4$-6 ALPINE sample \citep[][\textit{yellow squares}]{lefevre_2020,bethermin2020A&A...643A...2B,Faisst_2020_10.1093/mnras/staa2545}, and other results from the literature (\textit{green circles}). Lower limits on $L_{[CII]}/L_{IR}$ and upper limits on $L_{IR}$ are $3\sigma$. On the right vertical axis, the ratio of integration times required to detect [CII]$_{158}$ at $5\sigma$ to the time required to detect sources in the dust continuum at $3\sigma$ is indicated. Our new measurements show slightly higher $L_{[CII]}/L_{IR}$ ratios than the $z=0$ GOALS results at a given $L_{IR}$ and appear to be qualitatively very similar to the $z\sim4$-6 results obtained by ALPINE.}\label{fig:CII-deficit}
\end{figure}
\begin{figure*}[th]
\epsscale{1.1}
\plotone{test2-cropped.pdf}
\caption{(\textit{left three panels}) Low-resolution [CII] kinematics of our sources overlaid on the rest-UV imaging with the [CII] and dust continuum contours, as in Figure \ref{fig:morphology}. The velocity gradients for all sources are shown on the same scale, ranging from $-$60 km/s (\textit{blue}) to 60 km/s (\textit{red}). (\textit{rightmost panel}) Zoom-in on the central pixel-spectra of UVISTA-Z-019. The spectrum consists of a narrow component (FWHM $\sim$70 km/s) responsible for $\sim20\%$ of the total flux and a broad component (FWHM $\sim$220 km/s) that accounts for $\sim80\%$ of the total flux.} \label{fig:velocity}
\end{figure*}
\subsection{The [CII]-SFR Relation}
The large luminosity and favourable atmospheric opacity of [CII] enable its detection up to very high redshifts. Studies of local galaxies have found a tight correlation between the [CII] luminosity and SFR \citep{deLooze_2011,delooze2014, Kapala_2014, cormier2015A&A...578A..53C, Herrera_Camus_2015, Diaz_Santos_2017}; [CII] has therefore been proposed as an efficient and unbiased probe of the SFR at high redshift.
In the past few years, this correlation between SFR and L$_{[CII]}$ has been observed out to $z\sim8$ with an increasing number of detections and upper-limits. Of particular note are the results from the ALPINE large program, which finds little to no evolution in the [CII]-SFR relation in a large sample of normal galaxies at $4.4<z<5.9$ \citep{Schaerer_2020}. At even higher redshifts the current samples are less uniform, but still seem to be consistent with the local relation, albeit with a larger scatter \citep[e.g.,][]{Carniani_2018,Matthee_2019}.
Nevertheless, an increasing number of galaxies have been observed to fall well below the local relations
\citep[e.g.,][]{Ota_2014,Matthee_2019,Laporte201910.1093/mnrasl/slz094,Bakx2020,Binggeli2021A&A...646A..26B,Uzgil_2021,Rybak_2021,jolly2021}. In particular, lensed galaxies probing lower M$_*$ and SFRs, as well as Lyman-$\alpha$ emitters (LAEs), seem to be deficient in [CII] \citep{harikane_2020,jolly2021}.
We show the position of our sources on the [CII]-SFR relation in Figure \ref{fig:SFR-CII}, and find that the galaxies targeted in this study are also consistent with the local relation from \citet{delooze2014} within the expected scatter. The three sources where [CII] remains undetected are not shown on this Figure, since it is unclear whether the [CII] line is below our detection threshold (i.e., $L_{[CII]} < 2 \times 10^{8} L_{\odot}$) or whether the true redshift is outside the range of our spectral scan. Because these sources have SFRs between 18 and 26 $M_{\odot}yr^{-1}$, our detection limit falls only slightly below the local relation and within its scatter \citep{delooze2014}. From this Figure, it is clear that most of the $z>6.5$ galaxies with SFR$\gtrsim 20 M_{\odot}yr^{-1}$ are consistent with the local relation, while at lower SFRs a significant fraction of currently observed galaxies seems to fall below the relation.
\subsection{[CII] vs. FIR}
It has been found that [CII] can account for up to $\sim1\%$ of the total infrared luminosity of galaxies; however, this fraction decreases by $\sim2$ orders of magnitude with increasing $L_{IR}$, leading to a [CII]-deficit in luminous galaxies \citep[e.g.,][]{genzel2000,Malhotra_2001,Hodge2020arXiv200400934H}. Specifically, observed [CII]/FIR ratios range from $\sim$10$^{-2}$ for normal z$\sim$0 galaxies to $\sim$10$^{-4}$ for the most luminous objects, with a large scatter \citep[e.g.,][]{Diazsantos2013}.
The reason for this observed [CII]-deficit remains a topic of discussion in the literature, with a large range of possible explanations including optically thick [CII] emission, effects from AGN, changes in the IMF, thermal saturation of [CII], positive dust grain charging and dust-dominated HII region opacities \citep[e.g.,][]{Casey_2014b,Smith_2016,Ferrara2019,Rybak_2019,Hodge2020arXiv200400934H}.
The galaxies in this study have far infrared luminosities of $L_{IR} = 1-5\times 10^{11} L_{\odot}$\footnote{To calculate the far infrared luminosities we assume a modified black body dust-SED with $T=50$K and $\beta=1.6$, see \citet{Schouws_2021} for details.}, which puts them in the Luminous Infrared Galaxy (LIRG) class. Comparing their infrared to their [CII] luminosity we find ratios of [CII]/$L_{IR}\sim1-3\times10^{-3}$ (see Figure \ref{fig:CII-deficit}). Compared to local (U)LIRGs from the GOALS survey \citep{Diazsantos2013,Diaz_Santos_2017}, we find that our galaxies are less deficient in [CII] by $\sim$0.3 dex. This result has a minor dependence on the assumed dust temperature, as shown with the grey arrows on Figure \ref{fig:CII-deficit}. When assuming higher or lower dust temperatures the data-points move mostly parallel to the trend. Only for substantially higher dust temperatures ($\gtrsim70$K) would our measurements be consistent with the local results.
Our measured [CII]/FIR ratios are consistent with other studies at high redshifts, which also find that high redshift galaxies tend to be less [CII] deficient \citep[e.g.,][]{Capak2015_Natur.522..455C,Schaerer_2020}. A possible explanation for this lack of [CII]-deficit in high redshift galaxies could be different dust conditions. Specifically, a lower dust-to-gas ratio at a fixed far infrared luminosity could increase the [CII] luminosity with respect to the infrared \citep{Capak2015_Natur.522..455C}.
\subsection{[CII] Kinematics}
Due to its high intrinsic luminosity, [CII] is an efficient tracer of the kinematics of high redshift galaxies \citep[e.g.,][]{Neeleman_2019}. To investigate the kinematics of our sources, we derive the velocity maps of our galaxies by fitting Gaussians to the pixel-spectra in our cube, including all pixels for which the uncertainty on the velocity is less than 50 km/s. The resulting velocity fields are shown in Figure \ref{fig:velocity}. Despite the low ($\sim$1.3\arcsec, see Table \ref{tab:tab1}) resolution of our observations, some of our sources show significant velocity gradients.
In particular, we find that both UVISTA-Z-001 and UVISTA-Z-007 display a significant velocity gradient, with velocity amplitudes of $v_{amp}\sin(i)=108^{+45}_{-36}$ km/s and $v_{amp}\sin(i)=119^{+87}_{-50}$ km/s respectively\footnote{We derive $v_{amp}\sin(i)$ by fitting a rotating thin disk model to the 3D datacube using forward modeling with our kinematics fitting code \textsc{Skitter} (Schouws et al., in prep.). The maximum velocities on the kinematics maps shown in Figure \ref{fig:velocity} are lower than the actual $v_{amp}\sin(i)$ due to beam-smearing effects.}. If we compare the observed rotational velocities assuming no correction for inclination ($i=0$, hence $v_{obs}$ = $v_{amp}\sin(i)$) to the total line-width of the 1d spectrum ($\sigma_{obs}$) (see Table \ref{tab:results} and Figure \ref{fig:velocity}), we find $v_{obs}/\sigma_{obs}$ of $1.0_{-0.4}^{+0.6}$ and $0.9^{+0.9}_{-0.5}$, respectively. This suggests that both sources should most likely be classified as rotation dominated \citep[defined as $v_{obs}/\sigma_{obs}>0.8$ as utilized in e.g.][]{Forster_Schreiber_2009}. This calculation does not apply a correction for the inclination of the system, which would increase $v_{obs}/\sigma_{obs}$. The observed velocity gradient could, however, also be caused by close mergers. At the current resolution, rotating disks are indistinguishable from mergers \citep[e.g.][Schouws et al., in prep]{jones2020}.
In particular, we find indications that UVISTA-Z-007 could be a merger. The HST F140W imaging (Witstok et al. in prep.) shows clearly that this source consists of two distinct components (see Figure \ref{fig:morphology}). The observed velocity gradient is in the same direction as the two UV components (as shown in Figure \ref{fig:velocity}), making it likely that the observed velocity gradient in the [CII] is in fact due to the merger of these two components.
For UVISTA-Z-019 we do not observe a significant velocity gradient and constrain the maximum rotation velocity to $v_{amp}\sin(i)<50$ km/s, implying that this system is either dispersion dominated or viewed close to face-on ($i\approx0$). A more detailed look at the pixel-spectra within the cube indicates that in the central part of this source, the [CII] emission seems to break down into two distinct components (rightmost panel of Figure \ref{fig:velocity}), consisting of a narrow component (FWHM $\sim$70 km/s) responsible for $\sim20\%$ of the total flux and a broad component (FWHM $\sim$220 km/s) that accounts for $\sim80\%$ of the total flux.
This interesting spectral feature could be caused by several processes, such as the effect of an outflow. However, this would imply that the majority of the [CII] luminosity originates from the outflow and that the emission from the galaxy would have a very narrow FWHM, implying a low dynamical mass, lower than the stellar mass ($M_{dyn}\sim9\times10^{8}$ $M_{\odot}$\footnote{We derive dynamical masses following \citet{Wang_2013}: $\frac{M_{dyn}\,\sin^2(i)}{M_{\odot}}=1.94\times10^{5}\cdot\left(\frac{FWHM}{km/s}\right)^{2}\cdot \frac{r_{1/2}}{kpc}$, where $FWHM$ is the full width at half maximum of the [CII] line in $km/s$ and $r_{1/2}$ the [CII] half-light radius in kpc. Because the narrow [CII] component is unresolved, we assume a size of $r_{1/2}\sim$1 kpc \citep[consistent with][]{bowler2017_10.1093/mnras/stw3296}.} versus $M_{*}\sim3\times10^{9}$ $M_{\odot}$). An alternative explanation for the spectral feature could be a minor merger, where the narrow component originates from an in-falling galaxy. However, this would mean that the line-of-sight velocity is rather small at only $\Delta V \sim 50$ km/s, despite this likely being one of the final stages of the merger. Indeed, the HST morphology tentatively shows two close components separated by $\sim$1.5 kpc (see also Figure \ref{fig:morphology}). Finally, the kinematics could also be evidence for a bright clump of intense star formation within a larger system, reflecting the complex structure of high redshift sources \citep[e.g.,][]{kohandel2019,kohandel2020}. Hence an alternative interpretation of the clumps in the HST imaging could be the presence of multiple star-forming regions in a larger system. Higher spatial resolution ALMA observations or deep rest-frame optical observations with JWST would be invaluable to definitively distinguish between these scenarios.
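For reference, the dynamical mass quoted above follows directly from the narrow-component parameters (FWHM $\sim$70 km/s and the assumed $r_{1/2}\sim1$ kpc), before any inclination correction:
\begin{displaymath}
M_{dyn} \approx 1.94\times10^{5}\times(70)^{2}\times 1 \approx 9.5\times10^{8}\,M_{\odot},
\end{displaymath}
indeed below the stellar mass of $\sim3\times10^{9}$ $M_{\odot}$.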
\begin{figure}[t!]
\epsscale{1.125}
\plotone{CII.pdf}
\caption{In this study we have presented a method to identify and efficiently spectroscopically confirm the redshifts of luminous [CII] emitting galaxies (\textit{solid (this study) and open \citep{Smit_2018Natur.553..178S} red stars}). We compare the derived luminosities of our sources to the literature (\textit{grey datapoints and upper-limits}). The present compilation is from \citet{Matthee_2019} and \citet{Bakx2020}. The newly discovered lines are more luminous than most previous detections. The luminosities of those sources without [CII] detections are less clear, but for those sources where our scans cover the full likelihood distribution, it is likely that the lines are fainter and more in line with the typical sources. For context we show the expected limiting luminosities (for a 5$\sigma$ detection and 350 km/s FWHM) that can be achieved with ALMA with 20, 60 and 180 minute integrations (\textit{red dashed lines}) and find that our targets could have been detected in as little as 20 minutes per scan-window. We also indicate the achieved depth of our observations (30-40 minutes) with the \textit{solid blue lines}. This demonstrates the great potential to use ALMA for spectral scans to obtain spectroscopic redshifts of UV-luminous galaxies in the epoch of reionization.} \label{fig:efficiency}
\end{figure}
\section{The Efficiency of Spectral Scans and Future Prospects\label{sec:potential}}
In this study we have obtained redshifts for three galaxies without a prior spectroscopic redshift. In particular, by targeting UV-luminous galaxies with high SFRs, the [CII] lines we detect are also luminous. In Figure \ref{fig:efficiency} we show that the [CII] emission of our sources could have been detected at $>$5$\sigma$ in only $\sim20$ minute integrations per tuning with ALMA.
Our sources benefit from very tight constraints on the photometric redshift due to the large break in the IRAC colors (see Figure \ref{fig:z3645}), which enables us to cover a significant fraction of the $P(z)$ with only 2 tunings. For sources that lack this additional constraint, 3 or 4 tunings would be necessary to cover the $P(z)$ appropriately. Nevertheless, this would still mean that galaxies like the ones targeted in this study could be spectroscopically confirmed in less than 1.5 hours per source.
It should be noted, though, that for sources with a lower SFR (and hence lower [CII] luminosities), spectral scanning quickly becomes expensive. A spectral scan targeting an $L_*$ galaxy ($SFR_{UV}\sim 8 M_{\odot}/yr$ at $z=7$) would cost $\sim12$ hours on source adopting the \citet{delooze2014} $L_{[CII]}$-SFR relation. Therefore, to study the [CII] emission from $L\leq L_*$ galaxies, either targeting lensed galaxies for spectral scans or following up galaxies with a prior spectroscopic redshift (e.g., from JWST or ground-based Lyman-$\alpha$) remain the most suitable options.
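As a rough illustration of where this cost estimate comes from, one can combine an assumed form of the starburst $L_{[CII]}$-SFR relation ($\log_{10} L_{[CII]} = 7.06 + 1.00\,\log_{10} SFR$; these coefficients are an assumption and should be checked against \citealt{delooze2014}) with the radiometer scaling $t \propto L^{-2}$ at fixed signal-to-noise, anchored to our faintest detection ($5.6\times10^{8}\,L_{\odot}$ in $\sim$20 minutes per tuning):

```python
import math

def l_cii_from_sfr(sfr_msun_yr):
    """Assumed De Looze et al. (2014) HII/starburst form:
    log10(L_[CII]/L_sun) = 7.06 + 1.00 * log10(SFR / (M_sun/yr))."""
    return 10 ** (7.06 + 1.00 * math.log10(sfr_msun_yr))

def scan_time_hours(l_cii, l_ref=5.6e8, t_ref_min=20.0):
    """Integration time at fixed S/N scales as (1/L)^2 (radiometer equation),
    anchored to a reference line detected in t_ref_min minutes."""
    return t_ref_min * (l_ref / l_cii) ** 2 / 60.0

l_star = l_cii_from_sfr(8.0)  # an L* galaxy at z~7
print(f"L_[CII] ~ {l_star:.1e} L_sun -> t ~ {scan_time_hours(l_star):.0f} hr on source")
```

Under these assumptions the estimate lands at $\sim$12 hours on source, consistent with the number quoted above.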
One significant advantage of the spectral scan strategy is the time spent integrating on regions of the spectrum that do not contain prominent ISM cooling lines. These integrations allow us to probe the continuum emission from our targets. This is important because the continuum is much harder to detect than [CII], as the [CII]/$L_{IR}$ ratios of our sources show. This is illustrated in Figure \ref{fig:CII-deficit} (right vertical axis), on which we show the ratio of the integration time needed to detect the dust continuum versus the [CII] line. We find that it is necessary to integrate up to $\sim20\times$ longer to obtain a $3\sigma$ detection of the dust continuum. This means that the time spent observing tunings that do not contain a spectral line is not wasted, but contributes to the sensitivity necessary to detect the faint dust continuum.
Based on the results presented in this paper and \citet{Smit_2018Natur.553..178S}, we proposed and were awarded the time to apply the spectral scan method to a significantly larger sample of galaxies, covering a much larger range in galaxy properties and redshift. The result was the on-going Reionization Era Bright Emission Line Survey (REBELS) large program, in which we pursue spectral scans for [CII] or [OIII] in a sample of 40 $z>6.5$ galaxies \citep{Bouwens_LP}.
\section{Summary}
In this paper we present the results of new ALMA spectral scan observations targeting [CII] in a small sample of six luminous Lyman Break Galaxies at
$z\sim 7$. The targeted sources were identified from deep, wide-area near-IR, optical, and \textit{Spitzer}/IRAC observations and are particularly luminous. They also feature tight constraints on their redshifts, leveraging the abrupt changes that occur in the IRAC color around $z\sim7$ (where strong line emission from [OIII]+H$\beta$ shifts from the [3.6] to [4.5] band). This improves the efficiency of the spectral scans by $\sim$2$\times$, since only a small number of tunings is required to cover the inferred $P(z)$. The present results build on the exciting results from \citet{Smit_2018Natur.553..178S}, who previously demonstrated the potential of spectral scans for [CII] with just two sources.
Our main results are summarized below:
\begin{itemize}
\item We detect ($>9\sigma$) [CII] lines for three of the six galaxies we target with our spectral scans (shown in Figure \ref{fig:scan}). The [CII] lines are strong with luminosities between $5.6 \times 10^{8}L_{\odot}$ and $8.8\times10^{8}L_{\odot}$. We also observe that the [CII], dust and rest-frame UV emission are well aligned within the resolution of our observation (see Figure \ref{fig:morphology}).
\item Placing our new detections on the [CII]-SFR relation shows that our sources are consistent with the local relation from \citet{delooze2014} for HII/starburst galaxies (shown in Figure \ref{fig:SFR-CII}), and we find slightly higher [CII]/$L_{IR}\sim1-3\times10^{-3}$ compared to local (U)LIRGs (see Figure \ref{fig:CII-deficit}), which is consistent with previous studies of high redshift galaxies.
\item Although our observations are taken at a relatively low resolution ($\sim$1.3\arcsec), we find that our sources display a broad spectrum of kinematic diversity. One of our sources seems to be rotation dominated, one source is most likely a major merger and one source is dominated by dispersion. We also find possible kinematic evidence for a bright star forming clump within the dispersion dominated source (see Figure \ref{fig:velocity}). However, higher resolution observations are necessary to confirm our interpretation of the kinematics of our sources.
\item We discuss the lack of evolution of the [CII]-SFR relation found for luminous high redshift galaxies by reviewing the literature on the physical effects that drive the [CII] emission in high redshift galaxies. While one would naively expect a trend towards lower [CII]/SFR values with redshift based on the higher ionization parameter, lower metallicities and higher densities of high redshift galaxies, this is not observed. We speculate that a lower dust-to-gas or dust-to-metal ratio, which increases the [CII] emission, could compensate for those effects.
\end{itemize}
These new results illustrate the tremendous potential of spectral scans with ALMA for characterizing luminous galaxies in the epoch of reionization (see Figure \ref{fig:efficiency}): deriving spectroscopic redshifts, probing the kinematics and dynamical masses of sources, and detecting the dust continuum \citep{Schouws_2021}. Results from this data set demonstrated the potential of the method (\S\ref{sec:potential}) and were important for the successful proposal of the REBELS large program in Cycle 7 \citep{Bouwens_LP}. Future studies (Schouws et al. 2022, in prep) will significantly add to the current science using that considerable data set.\\ \\
\acknowledgements
We are very grateful to our ALMA program coordinator Daniel Harsono for support with our ALMA program. This paper makes use of the following ALMA data: ADS/JAO.ALMA 2018.1.00085.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. Sander Schouws and Rychard Bouwens acknowledge support from TOP grant TOP1.16.057 and a NOVA (Nederlandse Onderzoekschool Voor Astronomie) 5 grant. JH acknowledges support of the VIDI research programme with project number 639.042.611, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO). HSBA acknowledges support from the NAOJ ALMA Scientific Research Grant Code 2021-19A. JW and RS acknowledge support from the ERC Advanced Grant 695671, "QUENCH", and the Fondation MERAC. RS acknowledges support from an STFC Ernest Rutherford
Fellowship (ST/S004831/1). The PI acknowledges assistance from Allegro, the European ALMA Regional Center node in the Netherlands.
\newpage
\section{Introduction}
\label{sec:introduction}
Customizable and highly-configurable software systems play a major role in today's software landscape~\cite{Fischer2014, Pohl2005}.
\emph{Software product lines} (SPLs) are a systematic approach to create and maintain such highly-configurable software systems.
With SPLs, customized variants of software systems can be compiled from reusable artifacts, according to the customer's particular requirements.
Thus, SPLs allow mass customization, while still promising reduced development costs for software vendors after initial investments~\cite{Clements2002, Knauber2001, Pohl2005}.
Past research on SPLs has mainly been motivated by industrial needs, while industry has also benefited from developments in research, such as variability modeling techniques~\cite{Berger2013, Rabiser2018}.
For example, it has been well-investigated how companies can transition to an SPL-based approach by extracting an SPL from a monolithic legacy software system~\cite{Kruger2016, Krueger2002, Clements2002a}.
However, the increasing adoption of SPL methodologies in industry introduces new challenges.
In particular, the question arises how an organization may maintain and evolve an SPL after it has been established with one of the aforementioned techniques~\cite{McGregor2003}.
This is particularly relevant because SPLs require increased upfront investments and are intended as long-lasting assets to an organization.
Consequently, the relative importance of SPL \emph{maintenance} and \emph{evolution} in industry as well as research has grown with the increasing maturity of the SPL community.
\begin{figure}
\vspace{4ex}
\includegraphics[width=\linewidth]{diagram}
\caption{Top 10 of 27 research interests among 33 SPL researchers and practitioners as of 2018~\cite{RabiserMaterial}.}
\label{fig:interests}
\end{figure}
In fact, evolution is currently one of the largest research interests in the SPL community:
In~\autoref{fig:interests}, we show a ranking of the most demanded activities in SPL engineering, which is based on a yet unpublished survey carried out at SPLC 2018 by~\citet{RabiserMaterial}.
In this survey, 33 SPL researchers and practitioners, having on average eight years of experience in this field, rated 27 different activities according to whether they were already well-investigated or needed further work in the future.
Using this information, we rank the activities according to the participants' opinions and show the ten activities with most interest in \autoref{fig:interests}.
In particular, 70\% of the participants think that SPL maintenance and evolution warrants further investigation, on par with SPL validation and ecosystems, and only 6\% of the participants judge it as well-investigated.
In this technical report, we survey and discuss selected papers in detail that address evolution of SPLs and have been published at SPLC.
In particular, we contribute the following:
\begin{itemize}
\item We highlight commonalities, differences, and insights regarding SPL evolution from the selected papers.
\item We distinguish between works from academia and industry, compare them, and give directions for future research.
\end{itemize}
Extending the work of \citet{Rabiser2018}, we provide deeper insights regarding product-line research on maintenance and evolution by taking a closer look at actual publications on this topic.
The remainder of this report is organized as follows:
In \autoref{sec:background}, we give background regarding SPLs and their evolution.
In \autoref{sec:method}, we describe the methodology of our survey.
In \autoref{sec:results}, we present the results of our survey, which we discuss and compare in \autoref{sec:discussion}.
We further discuss threats to validity in \autoref{sec:validity} and conclude the report in \autoref{sec:conclusion}.
\section{Background}
\label{sec:background}
We start by giving a short introduction to SPLs and their evolution.
\subsection{Basic Concepts}
A \emph{software product line} (SPL) is ``a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way''~\cite{Clements2002}.
With an SPL, customers' requirements are coded in a \emph{configuration} that is used to automatically derive a concrete \emph{product}, which is thus tailored to the customers' needs.
SPL engineering comprises all activities involved in conceiving, developing, and maintaining an SPL.
One particular activity of interest in SPL engineering is \emph{variability modeling}, in which domain engineers analyze, determine, and model an SPL's variability.
The most common way to do so is by employing \emph{features}, which represent functionalities that can be either present or absent in a product derived from the SPL~\cite{Apel2013a, Berger2013}.
Variability modeling plays an important role in SPL evolution, where features may be added, removed or modified over the course of an SPL's lifetime.
Typically, features are associated with assets, such as code or requirements, which then play a role in deriving a product~\cite{Apel2013a}.
For example, the Linux kernel (which can be considered an SPL) includes a \emph{Networking} feature, which, when selected in a configuration, adds code to the derived Linux kernel that enables networking scenarios.
Thus, the evolution of a feature's assets (i.e., changes to the code) must also be considered for SPL evolution.
\subsection{Maintenance and Evolution}
The terms \emph{maintenance} and \emph{evolution}, with regard to software development in general, are not clearly distinguished in the literature~\cite{bennett2000software, von1995program}.
\emph{Software maintenance}, as defined by IEEE Standard 1219~\cite{IEEE1219}, refers to the ``modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment''.
In this definition, the \emph{post-delivery} aspect is crucial; that is, software is maintained by the vendor's ongoing provision of support, bug fixes etc.
McGregor~\cite{McGregor2003} defines \emph{software evolution} as the ``accumulated effects of change over time'', where ``forces drive the change in a certain direction at a certain point in time, and whether those forces are anticipated or controlled is uncertain''.
Thus, software evolution relates closely to its maintenance.
However, maintaining a software system usually only involves preserving and improving its existing functionality; whereas evolution can also comprise major changes to the software, such as the introduction of new functionality~\cite{bennett2000software}.
The notion of maintenance and evolution is generalizable to entire SPLs~\cite{McGregor2003, Marques2019, svahnberg1999evolution, Botterweck2014}, which has even solicited a dedicated international \emph{VariVolution} workshop~\cite{Linsbauer:2018:IWV:3233027.3241372}.
In particular, new methods are required to address evolution scenarios that explicitly address an SPL's variability, and to provide safety guarantees about such evolutions (discussed in the following section).
Continued neglect of evolutionary issues can lead to an SPL's \emph{erosion}, that is, a deviation of the variability model to a degree where desired key properties of the SPL no longer hold~\cite{johnsson2000quantifying, Marques2019}.
\section{Methodology}
\label{sec:method}
The results of this survey build on a 2018 study by~\citet{Rabiser2018}, where a random subset (140) of all (593) papers published at the \emph{International Systems and Software Product Line Conference} (SPLC) has been analyzed, according to whether their authors are from academia or industry.
The major research question of their work was whether phases, activities, and topics addressed in academic and industry SPL research align with each other.
They find that, in spite of the common assumption that academic SPL research has been drifting apart from the industry in past years, there is little evidence to support this claim.
In their study, they also classify the considered papers according to different SPL phases, activities, and topics.
For details on the methodology and classification, we refer to the original study~\cite{Rabiser2018}.
Out of 140 papers, 20 have been classified under the (single) activity \emph{maintenance/evolution}.
We take these 20 papers as the basis of our survey, as they are well-distributed over 20 years of SPLC and include a representative number of papers from academia (12) as well as industry (8).
Naturally, this selection of papers does not cover the full extent of papers on SPL evolution published at SPLC; nonetheless, it is a representative subset because the papers were sampled uniformly at random.
\section{Survey}
\label{sec:results}
In this section, we present the main results of our survey.
We split our results according to the primary affiliation of the authors with academia or industry as classified originally by~\citet{Rabiser2018}.
\subsection{Insights from Academia}
\label{sec:academia}
\begin{table}
\caption{Reviewed papers classified as academic research.}
\label{tab:academia}
\begin{tabular}{rll}
\toprule
Year & Authors & Topic \\
\midrule
1998 & \citet{Weiderman1998} & Migration towards SPL \\
2001 & \citet{Svahnberg2001} & Industry collaboration \\
2008 & \citet{Dhungana2008} & Industry collaboration \\
& & Evolution operators \\
2012 & \citet{Seidl2012} & Evolution operators \\
2012 & \citet{DeOliveira2012} & Change impact analysis \\
2012 & \citet{Rubin2012} & Change impact analysis \\
2013 & \citet{Linsbauer2013} & Change impact analysis \\
2014 & \citet{Quinton2014} & Evolution operators \\
2015 & \citet{Teixeira2015} & Change impact analysis \\
2016 & \citet{Sampaio2016} & Change impact analysis \\
& & Evolution operators \\
\bottomrule
\end{tabular}
\end{table}
Of the reviewed papers, \citeauthor{Rabiser2018} classify twelve as academic research, that is, the majority of authors are from an academic context (e.g., universities and research institutes).
We omit two papers from our review, a systematic mapping study and an approach for synthesizing attributed feature models, both of which do not address evolution directly and, thus, seem to be misclassified~\cite{Marimuthu2017a, Becan2015}.
In \autoref{tab:academia}, we summarize the academic papers reviewed in this section.
We categorize the papers with regard to the topics addressed, and highlight how they can be used to solve specific evolutionary problems.
We discuss several individual papers in detail, but we also group similar papers according to their topic (cf.~\autoref{tab:academia}) where appropriate.
We begin with the oldest paper of our reviewed academic papers, which motivates the problems encountered in SPL maintenance and evolution.
\paragraph{Migration towards an SPL}
\citet{Weiderman1998} investigated whether it is feasible to extract SPLs from existing stovepipe systems (i.e., legacy systems with limited focus and functionality that do not easily share data).
Although classified as \emph{maintenance/evolution} by \citeauthor{Rabiser2018}, this paper rather focuses on how to evolve \emph{towards} an SPL.
Nonetheless, it is a first effort in this area with some key insights:
First, developers and researchers tend to focus on building new systems, rather than evolving legacy systems.
In particular, the migration of stovepipe systems towards a more structured SPL approach was largely unrecognized at the time.
Second, they noticed that the rising importance of the Internet in the late '90s began to enable interconnection of such systems, which further exposed their weakness in sharing data with other systems.
Third, they recognized that for successful evolution, a high-level understanding of the evolved system is crucial (i.e., software modules may be treated as black boxes with defined interfaces); an insight that also applies to continuous evolution of an SPL.
\paragraph{Evolution operators}
A common pattern found in the reviewed academic papers is the identification of so-called \emph{evolution operators} (also termed templates, edits or simply evolutions)~\cite{Sampaio2016, Seidl2012, Quinton2014, Botterweck2010b, Dhungana2008}.
Such evolution operators specify different situations that can arise in SPL evolution scenarios, usually involving changes in an SPL's variability model.
Commonly specified evolution operators include \emph{adding, removing, splitting} and \emph{merging features} and \emph{asset changes} (e.g., modification of a feature's source code).
It is surprising that such operators are frequently redefined, which is most likely because they are tailored to the specific paper's research question.
For example, \citet{Seidl2012} focus on maintaining the integrity of an SPL's mapping from features to their implementation artifacts, so they adapt their evolution operators accordingly.
A unified analysis and formalization of evolution operators, however, would be beneficial, as it would provide a common, compatible ground for different approaches and facilitate the implementation of tool support.
It should be noted that evolution operators are not the only possibility for encoding change:
Difference or \emph{delta models}~\cite{schaefer2010delta, Botterweck2014} serve a similar purpose, although they are not discussed in the reviewed selection of papers.
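To make the idea concrete, the sketch below models a few common operators over a flat feature-to-assets mapping. All names are hypothetical, and real approaches operate on full feature models with hierarchy and cross-tree constraints; this is only meant to illustrate the kind of change the operators encode.

```python
# Hypothetical, minimal model: a feature model as a dict mapping each
# feature name to the set of implementation assets associated with it.
def add_feature(fm, feature):
    fm[feature] = set()

def remove_feature(fm, feature):
    fm.pop(feature, None)

def split_feature(fm, feature, part_a, part_b):
    # All assets are (arbitrarily) kept on the first part.
    assets = fm.pop(feature)
    fm[part_a], fm[part_b] = set(assets), set()

def merge_features(fm, a, b, merged):
    fm[merged] = fm.pop(a) | fm.pop(b)

fm = {}
add_feature(fm, "Networking")
fm["Networking"].add("net.c")                   # an asset change
split_feature(fm, "Networking", "IPv4", "IPv6")
print(fm)  # {'IPv4': {'net.c'}, 'IPv6': set()}
```

Even this toy version shows why a shared formalization matters: how a split distributes assets, for example, is a design decision that each paper currently answers differently.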
\paragraph{Change impact analyses}
A \emph{change impact analysis} is used to predict which assets a specific change will affect~\cite{McGregor2003}.
Thus, it can support evolution endeavors, as engineers can confirm how a change will affect the SPL as a whole.
Several of the reviewed papers relate to this concept.
First, \citet{Sampaio2016} present an approach for the \emph{partially safe evolution} of SPLs.
To understand this concept, consider \emph{fully safe evolution}~\cite{Neves:2015:SET:2794082.2794113}:
In this notion, an evolution is \emph{fully safe} if, for every product of the (unaltered) SPL, a product with compatible behavior exists in the SPL even after the change has been applied.
Thus, no existing users are inadvertently affected by any performed change.
However, this notion of fully safe evolution is very strict:
For example, suppose there is a (breaking) change to the Linux network stack.
Obviously, this affects all existing users that have chosen to include the networking stack into their kernel; thus, it is not considered a fully safe evolution.
However, users that deselected the networking stack are guaranteed to not be affected.
\citeauthor{Sampaio2016} formalize this concept as a \emph{partially safe evolution}, that is, an evolution that may affect some, but not all products of an SPL.
This approach has more practical applications than fully safe evolution:
For instance, this concept can help to maintain support for existing configurations by only allowing evolution that is partially safe with regard to these configurations.
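A crude syntactic approximation of this idea (much weaker than the behavioral formalization by \citeauthor{Sampaio2016}, and shown here only to fix intuition) is to flag a configuration as potentially affected whenever it selects a feature whose assets were changed:

```python
# Rough syntactic approximation (not Sampaio et al.'s behavioral notion):
# a configuration counts as unaffected by an evolution if it selects
# none of the features whose assets were changed.
def unaffected(config, changed_features):
    return config.isdisjoint(changed_features)

changed = {"Networking"}                               # e.g., a breaking network-stack change
assert not unaffected({"Networking", "USB"}, changed)  # these products may break
assert unaffected({"USB", "Sound"}, changed)           # these products keep working
```

The evolution in the Linux example above is then partially safe with regard to exactly those configurations that deselect the networking stack.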
\citet{Teixeira2015} further extend the safe evolution concept to account for \emph{multi product lines}, that is, a composition of product lines, such as the Linux kernel paired with the Ubuntu distributions.
They describe several concrete bugs that arise from unwanted interactions between evolutionary changes in Linux and Ubuntu and could have been avoided with suitable guidelines.
They present and formalize such guidelines; however, it remains to be seen how these concepts can be implemented in practice to aid in an industrial context.
\citet{Linsbauer2013} take a slightly different approach:
Instead of considering the safety of an evolutionary change, they focus on recovering traceability between features and code.
This is not always a trivial problem; in fact, feature interactions and implementation techniques can obscure the consequences of a change in the code.
Thus, they propose a \emph{traceability mining} approach to recover this information, with promising results (correct identification in 99\% of cases) on several non-trivial case studies.
Note that this approach is primarily aimed at the maintenance of an SPL, that is, preventing new bugs from being introduced into the product line.
In the same vein, \citet{DeOliveira2012} present an approach to analyze the \emph{bug prevalence} in SPLs with a \emph{product genealogy}, that is, a tree that reflects how products are evolved and derived from each other.
In this context, \emph{bug prevalence} refers to the number of products affected by a particular bug, and the genealogy tree may be used to determine which and how many products are \emph{infected} by this bug, making this approach interesting in the context of SPL maintenance.
However, their approach has not been evaluated in an industrial context yet.
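The genealogy itself can be sketched as a simple parent-to-children tree, where a bug introduced in one product is pessimistically assumed to propagate to every derived product; the structure and names below are illustrative only:

```python
# Illustrative genealogy: parent -> directly derived products. A bug
# introduced in `origin` is assumed to propagate to all descendants;
# the size of the infected set is the bug's prevalence.
def infected(genealogy, origin):
    reached, stack = set(), [origin]
    while stack:
        product = stack.pop()
        if product not in reached:
            reached.add(product)
            stack.extend(genealogy.get(product, []))
    return reached

tree = {"v1": ["v1.1", "v2"], "v2": ["v2.1", "v2.2"]}
print(sorted(infected(tree, "v2")))  # prevalence 3: ['v2', 'v2.1', 'v2.2']
```

In practice the analysis would also have to account for bug fixes along a branch, which prune the infected subtree; the traversal above ignores this for simplicity.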
\citet{Rubin2012} address the problem of maintaining forked product variants (also known as \emph{clone-and-own}).
As part of their approach, they propose the \emph{product line changeset dependency model} (PL-CDM), which is a meta-model that encodes information about an SPL, its products, features, and relationships between those features.
This PL-CDM may be integrated with software configuration management (SCM) systems; for example, to notify the developer of a certain product that another developer's changes affect this product.
Thus, their approach can be used to assist in change impact analysis when maintaining forked product variants.
However, their approach is visionary and has, to the best of our knowledge, neither been implemented nor evaluated.
\paragraph{Industry collaborations}
The last two papers discussed in this section each address a collaboration with an industry partner.
Nevertheless, we include them in this section, as \citet{Rabiser2018} have classified them as academic research according to their authors' affiliations; that is, they are written from an academic point of view.
\citet{Dhungana2008} report about their experience with SPLs at Siemens VAI.
They found that motivating factors for SPL evolution include new customer requirements, technological changes, and internal improvements of the SPL's assets.
Further, they identified several issues their industry partner faced:
First, the knowledge about variability in an SPL may be scattered over multiple teams, which hampers efficient collaboration on evolutionary changes~\cite{Yi2012}.
Second, different parts of the system may evolve at different speeds.
Third, meta-models (i.e., models that determine the structure of an actual variability model) may also co-evolve in parallel with variability models.
Their proposed solution involves dividing variability models into fragments, which can be evolved independently and finally merged together.
Their approach has been used and refined successfully at Siemens VAI using real-world models.
\citet{Svahnberg2001} describe an industrial example in the automatic guided vehicle domain, where the company Danaher Motion Särö AB migrated from their previous, hardware-centered generation of an SPL to a new, software-centered one.
The motivation for this was that the hardware in the domain was subsequently being replaced with software solutions, which posed some new challenges the old SPL could not meet.
This represents a rather large evolutionary change, so that they had to plan their course of action carefully.
Thus, they employed an iterative process, where the old SPL is subsequently replaced by the new generation.
Notably, they place large emphasis on continuous support, as customary in their domain:
Customers' configurations should be actively maintained and supported for at least ten years.
To achieve this, the exact configurations of all customers need to be tracked, and it must be carefully investigated whether and how evolutionary changes may affect them.
For the latter, the partially safe evolution approach proposed by \citet{Sampaio2016} may be of use.
Similar to \citeauthor{Dhungana2008}, \citeauthor{Svahnberg2001} further struggled with finding the right granularity for their evolution, that is, whether to migrate a given component completely, split it into different parts or eliminate it entirely from their architecture.
However, at the time the paper was written, the migration process had only been partly concluded, so it is not known whether this large evolution eventually succeeded.
In later work, Svahnberg et al.~\cite{doi:10.1002/spe.652, Svahnberg2003} have further compiled a taxonomy of variability realization techniques based on this and other industrial collaborations, which is intended to guide the selection of a software architecture suitable for graceful evolution of SPLs.
\subsection{Insights from Industry}
\begin{table}
\caption{Reviewed papers classified as industrial research.}
\label{tab:industry}
\begin{tabular}{rll}
\toprule
Year & Authors & Topic \\
\midrule
2000 & \citet{Dager2000} & Migration towards SPL \\
2001 & \citet{VanOmmering2001} & Development process \\
2002 & \citet{VanDerLinden2002} & Project description \\
2003 & \citet{Karhinen2003} & Knowledge management \\
2009 & \citet{Pech2009} & Variability management \\
2011 & \citet{Vierhauser2011} & Deployment \\
2012 & \citet{Martini2012} & Speed and reuse \\
2013 & \citet{Zhang2013} & Variability erosion \\
\bottomrule
\end{tabular}
\end{table}
The remaining eight reviewed papers are motivated mainly from industry, that is, companies and their research departments, as classified by \citet{Rabiser2018}.
In \autoref{tab:industry}, we summarize the industrial papers reviewed in this section.
Similar to the academic papers, we categorize the reviewed papers regarding their addressed topic.
In the following, we discuss the papers individually in chronological order.
We further connect the dots between these industrial papers, and correlate them with discussed academic papers, where possible.
Similar to above, we start with an early paper that examines the SPL maintenance and evolution problem from an industrial perspective.
\paragraph{Cummins}
\citet{Dager2000} describes the development and maintenance of an SPL architecture at Cummins, a large manufacturer of diesel and gas engines.
Up until the early '90s, Cummins had a practice of starting from scratch for every new developed product.
However, management soon realized that to preserve and consolidate Cummins' market position, new ideas were needed, as the old approach was becoming uneconomical and could no longer serve the growing diversity in customer requirements.
In addition, software engineering gradually became more important for developing embedded systems, to the point that software is now as much an asset to Cummins as the actual hardware.
These factors motivated a big effort at Cummins to transition to an SPL approach, which \citeauthor{Dager2000} describes in detail; here, we focus on Cummins' experiences related to SPL maintenance and evolution.
The initial switch to an SPL architecture at Cummins went very well:
A core team was established to develop central software artifacts, which could then be reused throughout the company by the market and fuel systems teams for different products.
This concept was simple, but immediately paid off, resulting in faster time-to-market, improved portability, and more predictable product behavior for the end customers.
However, the concept began to falter under the huge amount of product requirements added onto the system, and within five years, software reuse deteriorated by more than 50\%.
The core team traced this observation back to several issues.
Among the identified problems were a lack of a proper evolution process and architecture documentation to keep the end products maintainable.
In particular, Cummins realized that adopting SPLs would not lead to optimized product reuse on its own; instead, this transition should be accompanied by institutional changes as well as new roles and responsibilities matching the SPL approach.
A new effort was made to improve the first SPL approach by addressing these problems.
The reconsideration of Cummins' business needs showed that software reusability is only one driver of profitable product development; however, neglecting other drivers would prevent Cummins from unlocking the full potential of their products.
Among these other drivers, \citeauthor{Dager2000} notes, are easy maintenance and evolution of the SPL, which had not been a focus in the initial SPL approach.
In particular, stable software components are expected not to break due to maintenance efforts, which is an example of partially safe evolution as proposed by \citet{Sampaio2016}.
Because the identified cost drivers may be opposed, Cummins needed to carefully weigh and study these factors for each developed product in the future.
According to \citeauthor{Dager2000}, this new approach can be expected to be more successful, due to increased awareness of other requirements and key drivers for successful SPL engineering---however, the paper provides no information on the outcome of this endeavor.
\paragraph{Philips}
Van Ommering~\cite{VanOmmering2001} proposes an approach for planning the architecture of \emph{product populations} ahead of time by combining a top-down SPL approach with a bottom-up reusable component approach.
At Philips, a product population is considered a family of product lines; for example, TV and VCR devices each form a product line, while together they form a product population.
The problem with applying SPL concepts directly to product populations is that SPLs are usually organized around a common architecture with individual variation points, a top-down design that lends itself well to products with high similarities and few differences.
However, TV and VCR devices, for example, are not represented easily in this design, as they have many differences.
In this context, reusable software components are better-suited, as they are designed to be used in any context.
This bottom-up approach, however, lacks the holistic view on the product population provided by an SPL approach, so that components can end up too generic and therefore costly to develop and maintain.
Van Ommering suggests combining the SPL and software component approaches by contextualizing all developed software components in their usage context according to the SPL.
To this end, \citeauthor{VanOmmering2001} introduces a planning scheme to bring together component developers and product engineers, where components offered by the developers are associated with the requirements of product engineers.
Notably, the planning scheme involves a time component, which makes it possible to plan for maintenance and evolution.
This way, evolution steps (such as adding and removing functionality) are modeled explicitly, which is an important requirement for Philips.
In particular, future evolution steps can be planned ahead, and automatic consistency checks are possible.
In more recent work, this idea has resurfaced in the form of \emph{temporal feature models}, which consider evolution a first-class dimension in variability modeling~\cite{Nieke2016}.
According to an extension paper~\cite{DBLP:conf/icse/Ommering02}, Philips successfully applied this planning scheme to four business groups for several types of TVs, which also necessitated some organizational changes, similar to what \citet{Dager2000} observed at Cummins.
\paragraph{ESAPS}
From 1999 to 2001, the \emph{Engineering Software Architectures, Processes and Platforms for System Families} (ESAPS) project was carried out as a cooperation between 22 research institutes and companies (including Philips, Nokia, Siemens, Thales, and Telvent).
Van der Linden~\cite{VanDerLinden2002} summarizes the goals of ESAPS, that is, improving both the development paradigms and the level of reuse in software development.
The paper also describes the results of the project, where we again focus on maintenance and evolution.
Notably, a distinction of variability arose within ESAPS, namely \emph{variability-in-the-large} and \emph{variability-in-the-long-term}:
The first term refers to SPLs with many products existing at any given time (e.g., consumer products), while the second term refers to SPLs with few products at a time, but many over the course of time (e.g., professional products).
The project identified differences between these two cases, for instance, in their software architecture.
The latter, variability-in-the-long-term, usually involves evolution, as products are continuously enhanced.
The project recognized this, and particular attention was given to change management (similar to change impact analysis, cf.~\autoref{sec:academia}).
Guidelines, processes, and hierarchies for change management were proposed and introduced accordingly, but they are not described further by \citeauthor{VanDerLinden2002}, and the detailed project report is no longer available.
The success of ESAPS led to a follow-up project, CAFÉ, in which change impact analysis was also to be investigated further~\cite{van2002software}.
\paragraph{Help desk}
\citet{Karhinen2003} emphasize the importance of a well-designed software architecture in a high-quality SPL.
They argue that together with an SPL's software architecture, the needs for communication evolve as well, because the long-term evolution of an SPL can also involve organizational and personal changes (e.g., newly recruited employees).
This mirrors the experiences of \citet{Dager2000} and \citet{VanOmmering2001} described above.
Thus, they propose to install a \emph{software architecture help desk} in the organization, which is its focal point of communication regarding software architecture.
This help desk is intended to assist not only in initial development, but also in the subsequent maintenance and evolution of an SPL.
During these phases, the help desk is a point of reference for all employees:
It guides developers with the design and evolution of an SPL's architecture, for instance by giving courses and answering questions.
Thus, the help desk steers the software architecture and prevents divergence from organization-wide policies (e.g., due to local developer communities).
\citeauthor{Karhinen2003} have put this concept into practice in cooperation with two industry partners.
However, they only report early results concerning the conception phase of the developed SPLs.
\paragraph{Wikon}
\citet{Pech2009} describe how Wikon, a small company in the embedded systems domain, switched to a decision modeling approach for one of their variability-intensive software systems.
Initially, Wikon maintained a variable software system based on conditional compilation with C preprocessor directives (e.g., \texttt{\#ifdef} and \texttt{\#define}), coupled with custom header files for each product to be derived.
To manage variability, these header files had to be edited manually, which was tedious and error-prone, so that the three responsible developers spent much of their time on maintenance alone.
An increasing number of products and variation points necessitated more explicit variability modeling, which Wikon implemented with assistance from Fraunhofer IESE.
One particular goal of this transition was to facilitate maintenance and evolution of the modeled variability.
The implementation of a decision modeling approach ensured this in two ways:
First, the usage of one common decision model ensures that all derived products (in the form of custom header files) are syntactically valid.
This prevents an evolutionary change from accidentally overlooking one of the custom header files; for example, when removing some functionality from the SPL.
Second, the decision model comprises several \emph{constraints}, which explicitly model domain knowledge that had been tacit before.
Thus, an automated system can check the internal consistency of a product; that is, whether the product actually conforms to the modeled domain.
This was helpful because only one of the three responsible developers at Wikon had the necessary domain knowledge, and the others could now rely on the automatic consistency checking.
\citeauthor{Pech2009} successfully introduced this approach:
A new product was easily derived, and according to the developers, the maintenance effort for the entire SPL after the transition was reduced to the maintenance effort for a single product before the transition.
\paragraph{Siemens}
\citet{Vierhauser2011} describe a deployment infrastructure for product line models and tools, which is used in a family of electrode control systems at Siemens VAI.
In their product line, sales documents like customer-specific offers (the products) can be automatically derived from document templates (the artifacts) to speed up the process of creating an offer and also prevent legal issues.
This product line, however, is frequently updated and must be re-deployed to all sales people, which includes updating the actual product line models, but also the surrounding tooling.
To this end, \citeauthor{Vierhauser2011} contribute a technical approach based on \emph{product line bundles}.
Notably, their approach involves setting an expiry date for outdated product line models, so that sales people (who often work offline) are forced to use the latest, updated model.
Furthermore, a mechanism is provided to migrate a product based on an outdated model to the newest model version.
However, \citeauthor{Vierhauser2011} do not report on experiences with this approach.
\paragraph{Speed and reuse}
\citet{Martini2012} analyze how systematic software reuse can be combined with an agile software development process.
They identify two aspects with profound impact on the success of an agile process, namely \emph{speed} and \emph{reuse}.
High speed is required to compete with other contenders, while high reuse has positive effects on productivity and thus profit.
In a case study with three organizations, \citeauthor{Martini2012} identify and categorize 114 factors that influence speed and reuse in different ways.
They distinguish three kinds of speed: \emph{first deployment}, \emph{replication}, and \emph{evolution speed}.
Evolution speed, which is relevant for SPL maintenance and evolution, is described as the ``time used to decide whether an incoming evolution request should be addressed or not, and the time for actually developing it''~\cite{Martini2012}.
A high evolution speed is necessary shortly after introducing a new product to roll out urgent updates.
While the product is on sale, evolution speed is also important to be able to compete on the market.
\citeauthor{Martini2012} find that the studied organizations focused first and foremost on first deployment speed (the time until the product is first released), rather than replication or evolution speed.
They also found, when categorizing the most harmful factors regarding speed, that the key areas for improvement in the studied cases are communication and knowledge management.
This again reflects the findings of \citet{Dager2000}, \citet{VanOmmering2001}, and \citet{Karhinen2003} in the context of agile software development.
Unfortunately, \citeauthor{Martini2012} did not look further into evolution speed in their paper or technical report~\cite{martini2012factors}.
\paragraph{Danfoss}
In the final paper reviewed in our survey, \citet{Zhang2013} investigate the long-term evolution of a large-scale industrial SPL at Danfoss Power Electronics.
Their research is motivated by the observation that uncontrolled evolution (in particular, adding new variability and, thus, variants) can lead to a reduction of productivity, which manifests in the erosion of an SPL (i.e., artifacts that get much more complex and inconsistent over time).
Based on their industrial experiences, they claim that there are no proper tools and methods for detecting, removing, and preventing such erosion---thus, organizations tend to create a new SPL instead, which is expensive and contradicts, for instance, the ten-year support requirement of \citet{Svahnberg2001}.
Thus, \citeauthor{Zhang2013} aim to identify tactics for detecting and forecasting SPL erosion to sustain the productivity of an evolving SPL even under economic constraints.
For their actual analysis, \citeauthor{Zhang2013} consider 31 versions (each 3.6 MLOC on average) of an SPL of frequency converters, covering a period of four years of maintenance and evolution history.
They identify several metrics for measuring SPL erosion based on the conditional compilation technique in C (i.e., \texttt{\#ifdef} annotations), which they mined from the SPL's source code with the \emph{VITAL} tool suite.
Among the examined metrics are the number of features, variability nesting, tangling, and scattering, as well as the complexity of variable source code files.
\citeauthor{Zhang2013} find that \texttt{\#ifdef} nesting, tangling (files that contain code for many different features), and scattering (features that are referred to from many different files) lead to a high maintenance cost if the affected features are likely to evolve.
They propose to solve these problems by refactoring such code, for example, towards an aspect-oriented approach.
By calculating metrics for all versions with VITAL, \citeauthor{Zhang2013} also determine that nesting, tangling, and scattering are often introduced gradually during the evolution of the SPL.
Consequently, an erosion trend can be calculated from the history, so that erosion metrics for future SPL versions can be predicted.
Thus, developers are able to identify potential erosion hotspots in advance and take measures to avert erosion.
However, the approach of \citeauthor{Zhang2013} is only applicable in the context of the conditional compilation technique.
\section{Discussion}
\label{sec:discussion}
In this section, we discuss and compare the results from the surveyed papers. We focus on the relationship between academic and industrial papers, as well as key insights and possible future directions for SPL evolution research.
\paragraph{Academia and industry}
\begin{figure}
\includegraphics[width=0.7\linewidth]{classification}
\caption{Classification of academic and industrial papers.}
\label{fig:classification}
\end{figure}
In many of the reviewed academic papers, there is a clear focus on concepts, methods, and technical advancements for facilitating SPL evolution, such as the definition and study of evolution operators and change impact analyses.
On the other hand, many papers from an industrial context (unsurprisingly) tend to focus on the creation of value, economic and organizational constraints as well as human factors (such as communication and knowledge management).
We investigate this in more detail in \autoref{fig:classification}, where we show the classification of all reviewed papers into academic and industrial as performed by \citet{Rabiser2018}.
In addition, we include our assessment of this classification's accuracy.
Of ten reviewed academic papers, seven were written exclusively by authors from academia~\cite{Weiderman1998, Seidl2012, Rubin2012, Linsbauer2013, Quinton2014, Teixeira2015, Sampaio2016}.
These papers contribute novel concepts and ideas for SPL evolution and have no particular connection to industry.
Similarly, four of eight industrial papers are exclusively from an industrial context, often written by a single author---these papers mostly share experiences and measures taken by organizations to solve evolutionary issues~\cite{Dager2000, VanOmmering2001, VanDerLinden2002, Karhinen2003}.
The remaining seven papers, however, are not as easily classified, as they are collaborations between industry and academia:
Six papers (four classified as industrial, two as academic) were mostly written by authors from academia, but address industrial motivations, examples or experiences~\cite{Svahnberg2001, Dhungana2008, Pech2009, Vierhauser2011, Martini2012, Zhang2013}; also, a single paper (classified as academic) has a purely conceptual contribution although written by authors from industry~\cite{DeOliveira2012}.
From these results and the survey of \citet{RabiserMaterial} (cf.~\autoref{fig:interests}) we infer that both academia and industry are clearly aware of the importance of SPL maintenance and evolution.
Further, there is a fair amount of collaboration between academia and industry in our subset of reviewed papers, to a degree that the binary distinction between academic and industrial papers made by \citet{Rabiser2018} is only partially applicable.
\paragraph{Key insights}
Finally, we share some key insights from our survey that shed some light on specific challenges and possible future work in the field of SPL maintenance and evolution.
\begin{itemize}
\item Successful long-term SPL evolution and maintenance is a driver of profitable product development and necessitates organizational changes (e.g., new roles and responsibilities)~\cite{Dager2000, VanOmmering2001, Karhinen2003, Martini2012}.
Particular areas of improvement include communication and knowledge management~\cite{Karhinen2003, Martini2012}.
\item Safe evolution of SPLs is requested by the industry to satisfy stability and support requirements~\cite{Dager2000, Svahnberg2001, VanDerLinden2002} and has been investigated by academia~\cite{Sampaio2016, Teixeira2015, DeOliveira2012}, although these techniques are not widely adopted by industry yet.
More change impact analyses and traceability approaches have been proposed by academia~\cite{Linsbauer2013, Rubin2012}---in principle, this line of work has been acknowledged as important by the industry, although not adopted yet~\cite{VanDerLinden2002, Zhang2013}.
\item There is a lack of publicly available industrial-sized case studies and long-term evolution histories~\cite{Marques2019}.
Further, experience reports typically only report early-stage results~\cite{Svahnberg2001, Dager2000, Karhinen2003, Vierhauser2011}; only one of the reviewed papers performs an empirical investigation on an evolving SPL~\cite{Zhang2013}.
This may be due to the increased effort of a long-term study or because industry partners are reluctant to share their SPLs.
\item SPL erosion is a known, but rarely investigated problem~\cite{Marques2019}. The term is not clearly defined, although it usually relates to a reduction in productivity as software is aging~\cite{Dager2000, Zhang2013}.
Among others, erosion is promoted by a lack of evolution planning~\cite{VanOmmering2001, Zhang2013, Nieke2016}.
\item SPL evolution research tends to focus on adding and maintaining variability, while removal of variability is rarely considered~\cite{Berger2014, Marques2019}.
However, only adding variability can lead to a gradual increase of complexity and, thus, to SPL erosion~\cite{Zhang2013}.
Instead, variation points should be continuously reconsidered and removed as soon as they become obsolete to prevent erosion in the first place.
Awareness for this issue still has to be raised~\cite{Zhang2013, Marques2019}---for instance, future research may investigate whether the concept of (partially) safe evolutions is reconcilable with the notion of removing variability.
\item The adoption of more advanced tooling for variability management (such as \emph{pure::variants}) can significantly accelerate the growth of code size and variability~\cite{Zhang2013}.
On the one hand, such improvements in tooling allow more efficient development and, thus, a faster time-to-market.
However, this also promotes SPL erosion in the long run and must therefore be carefully considered~\cite{Zhang2013}.
\end{itemize}
\section{Threats to Validity}
\label{sec:validity}
To discuss threats to the validity of our survey, we distinguish internal and external validity~\cite{Wohlin2003, campbell2015experimental}.
To ensure internal validity, we considered a random subset of papers published at SPLC (140 of 593 total), which has been studied by \citet{Rabiser2018}.
In said study, seven researchers (each with more than 15 years of experience in the field) classified 20 papers as concerning SPL maintenance and evolution.
The identified papers are well-distributed over 20 years of SPLC and originate from academia (12) as well as industry (8), illustrating several key issues of the field.
Thus, the conclusions we draw from the surveyed papers can be considered sound in the context of the studied subset of 140 SPLC papers.
However, our survey suffers from a lack of external validity, as we only consider papers published at a single conference (SPLC).
Further, although the random selection of studied SPLC papers may paint a representative picture of the state of SPL evolution, key contributions to the field could have been overlooked.
Thus, our survey is by no means comprehensive, but rather an in-depth insight into specific issues of SPL maintenance and evolution.
\section{Conclusion}
\label{sec:conclusion}
In this technical report, we surveyed and discussed selected papers about SPL maintenance and evolution that were published at SPLC over a period of 20 years.
We selected the papers according to an existing classification by \citet{Rabiser2018} and made use of their differentiation between academic and industrial papers.
We further extended the work of \citeauthor{Rabiser2018} with an in-depth discussion of evolutionary issues covered in the reviewed papers.
We found that 7 of 18 reviewed papers can be considered collaborations between industry and academia, which suggests that both are rather in line with each other.
Although both practitioners and researchers are aware of the importance of SPL evolution, our survey suggests that several problems found in industrial applications are still unsolved.
In particular, future work should look further into the erosion of SPLs, its long-term effects on an SPL, and how to remove or even prevent it.
Further, the field would benefit from a larger number of publicly available case studies originating in industry.
\balance
\bibliographystyle{ACM-Reference-Format}
\section{Introduction and main results}
The main purpose of this paper is to investigate
the well-posedness of $L^p$ Neumann problems
for nonhomogeneous elliptic systems arising in homogenization theory.
More precisely, we continue to consider the following operators depending on a parameter $\varepsilon > 0$,
\begin{eqnarray*}
\mathcal{L}_{\varepsilon} =
-\text{div}\big[A(x/\varepsilon)\nabla +V(x/\varepsilon)\big] + B(x/\varepsilon)\nabla +c(x/\varepsilon) +\lambda I
\end{eqnarray*}
where $\lambda\geq 0$ is a constant, and $I$ is an identity matrix.
Let $d\geq 3$, $m\geq 1$, and $1 \leq i,j \leq d$ and $1\leq \alpha,\beta\leq m$.
Suppose that $A = (a_{ij}^{\alpha\beta})$, $V=(V_i^{\alpha\beta})$, $B=(B_i^{\alpha\beta})$, $c=(c^{\alpha\beta})$ are real measurable functions,
satisfying the following conditions:
\begin{itemize}
\item the uniform ellipticity condition
\begin{equation}\label{a:1}
\mu |\xi|^2 \leq a_{ij}^{\alpha\beta}(y)\xi_i^\alpha\xi_j^\beta\leq \mu^{-1} |\xi|^2
\quad \text{for}~y\in\mathbb{R}^d,~\text{and}~\xi=(\xi_i^\alpha)\in \mathbb{R}^{md},~\text{where}~ \mu>0;
\end{equation}
(The summation convention for repeated indices is used throughout.)
\item the periodicity condition
\begin{equation}\label{a:2}
A(y+z) = A(y),~~ V(y+z) = V(y),
~~B(y+z) = B(y),~~ c(y+z) = c(y)
\end{equation}
for $y\in\mathbb{R}^d$ and $z\in \mathbb{Z}^d$;
\item the boundedness condition
\begin{equation}\label{a:3}
\max\big\{\|V\|_{L^{\infty}(\mathbb{R}^d)},
~\|B\|_{L^{\infty}(\mathbb{R}^d)},~\|c\|_{L^{\infty}(\mathbb{R}^d)}\big\}
\leq \kappa;
\end{equation}
\item the regularity condition
\begin{equation}\label{a:4}
\max\big\{ \|A\|_{C^{0,\tau}(\mathbb{R}^d)},~ \|V\|_{C^{0,\tau}(\mathbb{R}^d)},
~\|B\|_{C^{0,\tau}(\mathbb{R}^d)}\big\} \leq \kappa,
\qquad \text{where}~\tau\in(0,1)~\text{and}~\kappa > 0.
\end{equation}
\end{itemize}
Although we do not seek the operator $\mathcal{L}_\varepsilon$ to be a self-adjoint operator,
the symmetry condition on its leading term, i.e.,
\begin{equation*}
A^* = A \quad \big(\text{i.e.,}~ a_{ij}^{\alpha\beta} = a_{ji}^{\beta\alpha}\big)
\end{equation*}
is necessary in the later discussion. To ensure the solvability,
the following constant is crucial,
\begin{equation*}
\lambda_0 = \frac{c(m,d)}{\mu}\Big\{\|V\|_{L^\infty(\mathbb{R}^d)}^2 +
\|B\|_{L^\infty(\mathbb{R}^d)}^2+ \|c\|_{L^\infty(\mathbb{R}^d)}\Big\}.
\end{equation*}
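To indicate the role of $\lambda_0$, the following sketch (with constants not tracked precisely, and assuming $c(m,d)$ is chosen suitably) shows why $\lambda\geq\lambda_0$ is the natural threshold for the bilinear form $a_\varepsilon$ associated with $\mathcal{L}_\varepsilon$. Testing with $u$ itself and using the ellipticity condition $\eqref{a:1}$ together with Young's inequality $ab\leq\frac{\mu}{2}a^2+\frac{1}{2\mu}b^2$,

```latex
\begin{equation*}
\begin{aligned}
a_\varepsilon(u,u)
&= \int_\Omega \big[A(x/\varepsilon)\nabla u + V(x/\varepsilon)u\big]\cdot\nabla u \,dx
 + \int_\Omega \big[B(x/\varepsilon)\nabla u + c(x/\varepsilon)u + \lambda u\big]\cdot u \,dx\\
&\geq \mu\|\nabla u\|_{L^2(\Omega)}^2
 - \big(\|V\|_{L^\infty}+\|B\|_{L^\infty}\big)\|\nabla u\|_{L^2(\Omega)}\|u\|_{L^2(\Omega)}
 + \big(\lambda-\|c\|_{L^\infty}\big)\|u\|_{L^2(\Omega)}^2\\
&\geq \frac{\mu}{2}\|\nabla u\|_{L^2(\Omega)}^2
 + \big(\lambda-\lambda_0\big)\|u\|_{L^2(\Omega)}^2,
\end{aligned}
\end{equation*}
```

so that $a_\varepsilon$ controls the gradient whenever $\lambda\geq\lambda_0$; this is the reason the condition $\lambda\geq\max\{\mu,\lambda_0\}$ appears in the solvability results below.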
Throughout the paper, we always assume $\Omega\subset\mathbb{R}^d$ is a bounded Lipschitz domain, and $r_0$ denotes
the diameter of $\Omega$, unless otherwise stated.
In order to state the Neumann boundary value problem,
the conormal derivative associated with $\mathcal{L}_\varepsilon$ is defined as
\begin{equation*}
\frac{\partial}{\partial\nu_\varepsilon}
= n\cdot\big[A(\cdot/\varepsilon)\nabla + V(\cdot/\varepsilon)\big]
\qquad \text{on}~\partial\Omega,
\end{equation*}
where $n=(n_1,\cdots,n_d)$ is the outward unit normal
vector to $\partial\Omega$.
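This is the natural boundary operator in the sense that it is the one produced by integration by parts: as a sketch, if $u_\varepsilon$ is a sufficiently smooth solution of $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $\Omega$, then for every test function $\varphi\in C^1(\overline{\Omega};\mathbb{R}^m)$,

```latex
\begin{equation*}
\int_{\partial\Omega} \frac{\partial u_\varepsilon}{\partial\nu_\varepsilon}\cdot\varphi \,dS
= \int_\Omega \big[A(x/\varepsilon)\nabla u_\varepsilon
+ V(x/\varepsilon)u_\varepsilon\big]\cdot\nabla\varphi \,dx
+ \int_\Omega \big[B(x/\varepsilon)\nabla u_\varepsilon
+ c(x/\varepsilon)u_\varepsilon + \lambda u_\varepsilon\big]\cdot\varphi \,dx,
\end{equation*}
```

so prescribing $\partial u_\varepsilon/\partial\nu_\varepsilon$ on $\partial\Omega$ is the weak formulation of the Neumann boundary condition.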
\begin{thm}[nontangential maximal function estimates]\label{thm:1.1}
Let $1<p<\infty$.
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy $\eqref{a:1}$, $\eqref{a:2}$, $\eqref{a:3}$ and
$\eqref{a:4}$ with $\lambda\geq\max\{\mu,\lambda_0\}$ and $A^*=A$. Let $\Omega$
be a bounded $C^{1,\eta}$ domain with some $\eta\in(0,1)$.
Then for any $g\in L^p(\partial\Omega;\mathbb{R}^m)$,
the weak solution $u_\varepsilon\in H^1(\Omega;\mathbb{R}^m)$ to
\begin{equation}\label{pde:1}
(\mathbf{NH_\varepsilon})\left\{
\begin{aligned}
\mathcal{L}_\varepsilon(u_\varepsilon) &= 0 &\quad &\emph{in}~~\Omega, \\
\frac{\partial u_\varepsilon}{\partial\nu_\varepsilon} &= g &\emph{n.t.~}&\emph{on} ~\partial\Omega, \\
(\nabla u_\varepsilon)^* &\in L^p(\partial\Omega), &\quad &
\end{aligned}\right.
\end{equation}
satisfies the uniform estimate
\begin{equation}\label{pri:1.1}
\|(\nabla u_\varepsilon)^*\|_{L^p(\partial\Omega)}
+ \|(u_\varepsilon)^*\|_{L^p(\partial\Omega)} \leq C\|g\|_{L^p(\partial\Omega)}
\end{equation}
where $C$ depends on $\mu,\kappa,\tau,\lambda,m,d$ and $\Omega$.
\end{thm}
Note that the second line of $(\mathbf{NH_\varepsilon})$
means that the conormal derivative of $u_\varepsilon$ converges to $g$ in a nontangential way
instead of in the sense of a trace; the abbreviation ``n.t.'' indicates this difference.
The notation $(\nabla u_\varepsilon)^*$ in the third line
represents the nontangential maximal function of
$\nabla u_\varepsilon$ on $\partial\Omega$ (see Definition $\ref{def:1}$).
The main strategy in the proof of the above theorem has been well developed in \cite{KFS1}.
Roughly speaking, the proof should be divided into two parts: (i) $2\leq p<\infty$
and (ii) $1<p<2$. By means of a real-variable method given by Z. Shen in \cite{S5},
originally inspired by L. Caffarelli and I. Peral in \cite{CP},
case (i) will be reduced to a reverse H\"older inequality. For case (ii), one may
derive the estimate $\|(\nabla u_\varepsilon)^*\|_{L^1(\partial\Omega)}\leq
C\|g\|_{H^1_{at}(\partial\Omega)}$ as in \cite{KFS1,DK}, where the right-hand side means that the given data
$g$ lies in the atomic $H^1$ space (see for example \cite[pp.438]{DK}), and then one may obtain the desired estimate by an interpolation argument.
However, to complete the whole proof of Theorem $\ref{thm:1.1}$ is not as easy as it appears.
In terms of layer potential methods, we first establish the estimate $\eqref{pri:1.1}$ for $p=2$
in Lipschitz domains (see \cite[Theorem 1.6]{X2}). Then, applying the real method
(see Lemma $\ref{lemma:2.1}$) to the nonhomogeneous operators, one may derive the following result.
\begin{thm}\label{thm:2.1}
Let $p>2$ and $\Omega$ be a bounded Lipschitz domain. Assume that
\begin{equation}\label{a:2.1}
\bigg(\dashint_{B(Q,r)\cap\partial\Omega}|(\nabla u_\varepsilon)^*|^p dS\bigg)^{1/p}
\leq C\bigg(\dashint_{B(Q,2r)\cap\partial\Omega}|(\nabla u_\varepsilon)^*|^2dS\bigg)^{1/2}
+ C\bigg(\dashint_{B(Q,2r)\cap\partial\Omega}
|(u_\varepsilon)^*|^2dS\bigg)^{1/2},
\end{equation}
whenever $u_\varepsilon\in H^1(B(Q,3r)\cap\Omega;\mathbb{R}^m)$ is a weak solution to
$\mathcal{L}_\varepsilon(u_\varepsilon) = 0$ in $B(Q,3r)\cap\Omega$ with
$\partial u_\varepsilon/\partial\nu_\varepsilon = 0$ on $B(Q,3r)\cap\partial\Omega$ for some
$Q\in\partial\Omega$ and $0<r<r_0$. Then the weak solutions to
$\mathcal{L}_\varepsilon(u_\varepsilon) = 0$ in $\Omega$ and
$\partial u_\varepsilon/\partial\nu_\varepsilon = g \in L^p(\partial\Omega;\mathbb{R}^m)$
satisfy the estimate
$\|(\nabla u_\varepsilon)^*\|_{L^p(\partial\Omega)}\leq C\|g\|_{L^p(\partial\Omega)}$.
\end{thm}
Compared to the homogeneous case, here we need to treat the quantity
``$\nabla u_\varepsilon+u_\varepsilon$'' as a whole.
The reason is that the solution $u_\varepsilon$ is uniquely determined by the given data (there is no free additive
constant), so we cannot use Poincar\'e's inequality as freely as in the homogeneous case.
This point leads to the main technical difficulties in the paper.
In view of the above theorem, the problem is reduced to show the estimate
$\eqref{a:2.1}$, and it will be done by the following boundary estimate.
\begin{thm}[boundary Lipschitz estimates]\label{thm:1.0}
Let $\Omega$ be a bounded $C^{1,\eta}$ domain.
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy the conditions
$\eqref{a:1}$, $\eqref{a:2}$, $\eqref{a:3}$ with $\lambda\geq\lambda_0$ and $A,V$ additionally satisfy $\eqref{a:4}$.
Let $u_\varepsilon\in H^1(B(Q,r)\cap\Omega;\mathbb{R}^m)$ be a weak solution
to $\mathcal{L}_\varepsilon(u_\varepsilon) = \emph{div}(f)+F$ in $B(Q,r)\cap\Omega$ with
$\partial u_\varepsilon/\partial\nu_\varepsilon = g-n\cdot f$ on $B(Q,r)\cap\partial\Omega$
for some $Q\in\partial\Omega$ and $0<r\leq 1$. Assume that
\begin{equation*}
\begin{aligned}
\mathcal{R}(F,f,g;r)& = r\Big(\dashint_{B(Q,r)\cap\Omega}|F|^p\Big)^{1/p}
+ \|f\|_{L^\infty(B(Q,r)\cap\partial\Omega)}
+ r^\sigma [f]_{C^{0,\sigma}(B(Q,r)\cap\partial\Omega)}\\
&+ \|g\|_{L^\infty(B(Q,r)\cap\partial\Omega)}
+ r^\sigma [g]_{C^{0,\sigma}(B(Q,r)\cap\partial\Omega)} <\infty,
\end{aligned}
\end{equation*}
where $p>d$ and $0<\sigma\leq\eta<1$. Then we have
\begin{equation}\label{pri:1.0}
\sup_{B(Q,r/2)\cap\Omega}|\nabla u_\varepsilon|
\leq C\bigg\{\frac{1}{r}\Big(\dashint_{B(Q,r)\cap\Omega}|u_\varepsilon|^2\Big)^{1/2}
+ \mathcal{R}(F,f,g;r)\bigg\},
\end{equation}
where $C$ depends on $\mu,\kappa,\tau,\lambda,m,d$ and the character of $\Omega$.
\end{thm}
In fact, the first author has developed the global Lipschitz estimate
in \cite[Theorem 1.2]{X1}. The main idea is to construct the connection between the solutions
corresponding to $L_\varepsilon=\text{div}(A(x/\varepsilon)\nabla)$ and $\mathcal{L}_\varepsilon$ via
the Neumann boundary corrector (see \cite[pp.4371]{X1}), such that the regularity results on $L_\varepsilon$ can be applied to
$\mathcal{L}_\varepsilon$ directly. Thus, his proof of the global Lipschitz estimate avoids
the stated estimate $\eqref{pri:1.0}$.
Generally speaking, if there are the global estimates in our hand,
the corresponding boundary estimates will be obtained simply by
using the localization technique as in \cite[Lemma 2.17]{X1}.
Unfortunately, the estimate $\eqref{pri:1.0}$ cannot easily be achieved in this way,
even for the homogeneous operator $L_\varepsilon$, because
\begin{equation*}
L_\varepsilon(w_\varepsilon) = - \text{div}\big[A(x/\varepsilon)\nabla\phi u_\varepsilon\big]
- A(x/\varepsilon)\nabla u_\varepsilon\nabla\phi \quad\text{in~} \Omega,
\end{equation*}
where $w_\varepsilon = u_\varepsilon\phi$,
and $u_\varepsilon$ satisfies $L_{\varepsilon}(u_\varepsilon) = 0$ in $\Omega$ with
$\phi\in C_0^1(\mathbb{R}^d)$ being a cut-off function. It is clear that the first term on the
right-hand side involves ``$A(x/\varepsilon)$'', which will produce a factor $\varepsilon^{-\sigma}$
in a H\"older semi-norm with the index $\sigma\in(0,1)$.
Obviously, additional effort is needed to absorb this factor, and we do not pursue the related
techniques in this direction. Instead, we want to prove the estimate $\eqref{pri:1.0}$ based upon
a convergence rate coupled with the so-called Campanato iteration.
This method has been well studied in \cite{AM,ASC,ASZ,S5} for periodic and nonperiodic settings.
Compared to the compactness argument shown in \cite{MAFHL,MAFHL3}, we are spared from estimating
the boundary correctors, which is usually quite delicate.
The main idea in the proof of Theorem $\ref{thm:1.0}$ is similar to that in \cite{AM,S5},
but the nonhomogeneous operator $\mathcal{L}_\varepsilon$ will cause
some critical differences and technical difficulties.
For example, the solution $u_\varepsilon$ to $(\mathbf{NH_\varepsilon})$ is determined by the given data.
This forces us to employ the quantity
\begin{equation*}
\inf_{M\in\mathbb{R}^{d\times d}}\dashint_{B(Q,r)\cap\Omega}|u_\varepsilon - Mx - \tilde{c}|^2
\quad \text{instead of} \quad
\inf_{M\in\mathbb{R}^{d\times d}\atop
c\in\mathbb{R}^d}\dashint_{B(Q,r)\cap\Omega}|u_\varepsilon - Mx - c|^2
\end{equation*}
to carry out the iteration program, where $Q\in\partial\Omega$ and $\varepsilon\leq r<1$.
Moreover, $\tilde{c}$ may be taken as $u_0(Q)$, where $u_0$ is the approximating solution to
$\mathcal{L}_0(u_0) = \mathcal{L}_\varepsilon(u_\varepsilon)$ in $B(Q,r)\cap\Omega$ with $\partial u_0/\partial\nu_0
= \partial u_\varepsilon/\partial\nu_\varepsilon$ on $\partial (B(Q,r)\cap\Omega)$.
Saying that the solution is determined by the given data then means that
\begin{equation*}
|\tilde{c}| \leq C\Big(\dashint_{B(Q,r)\cap\Omega}|u_0|^2\Big)^{1/2}
\leq C\Big(\dashint_{B(Q,2r)\cap\Omega}|u_\varepsilon|^2\Big)^{1/2} + \text{given~data},
\end{equation*}
where we also use the following approximating result (see Lemma $\ref{lemma:4.1}$)
\begin{equation*}
\begin{aligned}
\Big(\dashint_{B(Q,r)\cap\Omega} |u_\varepsilon - u_0|^2 \Big)^{1/2}
\leq C\left(\frac{\varepsilon}{r}\right)^{\rho}
\bigg\{ \Big(\dashint_{B(Q,2r)\cap\Omega}|u_\varepsilon|^2 \Big)^{1/2}
+ \text{given~data}\bigg\}
\end{aligned}
\end{equation*}
with some $\rho\in(0,1)$. In order to continue the iteration,
let $v_\varepsilon = u_\varepsilon - \tilde{c} - \varepsilon\chi_0(x/\varepsilon)\tilde{c}$ and
$v_0 = u_0 - \tilde{c}$, and then we give a revised approximating lemma (see Lemma $\ref{lemma:4.4}$), which says
\begin{equation*}
\begin{aligned}
\Big(\dashint_{B(Q,r)\cap\Omega} |v_\varepsilon - v_0|^2 \Big)^{1/2}
\leq C\left(\frac{\varepsilon}{r}\right)^{\rho}
\bigg\{
\Big(\dashint_{B(Q,2r)\cap\Omega}|u_\varepsilon-\tilde{c}|^2 \Big)^{1/2}
+ r|\tilde{c}| + \text{given~data}\bigg\}.
\end{aligned}
\end{equation*}
Here we remark that the constant $\tilde{c}$ may be regarded as part of the given data,
playing the same role as $F$ and $g$ (see for example Remark $\ref{re:4.1}$).
Thus, it is comparable to $|\nabla u_\varepsilon|$ or $|\nabla^2 u_\varepsilon|$
in the sense of rescaling, which is why the factor ``$r$'' appears
in front of $|\tilde{c}|$; this factor is crucial in the later iterations.
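To indicate where the factor ``$r$'' comes from, we sketch the standard rescaling for the principal part only (the lower-order terms of $\mathcal{L}_\varepsilon$ merely produce additional positive powers of $r$): if $L_\varepsilon(u_\varepsilon) = F$ in $D_{2r}$ with $\partial u_\varepsilon/\partial\nu_\varepsilon = g$ on $\Delta_{2r}$, then $u(x) = u_\varepsilon(rx)$ satisfies
\begin{equation*}
L_{\varepsilon/r}(u) = r^2 F(rx) \quad\text{in}~D_2, \qquad
\frac{\partial u}{\partial\nu_{\varepsilon/r}} = r g(rx) \quad\text{on}~\Delta_2.
\end{equation*}
Hence a zeroth-order datum such as $\tilde{c}$, which enters the equations through terms like $\big[c_\varepsilon+\lambda I\big]\tilde{c}$, behaves under this rescaling like boundary data of size $r|\tilde{c}|$.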
Also, we make a few modifications to the iteration lemma (see Lemma $\ref{lemma:4.2}$),
which was proved by Z. Shen in \cite{S5},
and originally by S. Armstrong and C. Smart in \cite{ASC}. Then
a routine computation leads to a large scale estimate,
\begin{equation*}
\Big(\dashint_{B(Q,r)\cap\Omega}|\nabla u_\varepsilon|^2\Big)^{1/2}
\leq C\bigg\{\Big(\dashint_{B(Q,1)\cap\Omega}|u_\varepsilon|^2\Big)^{1/2}
+ \Big(\dashint_{B(Q,2r)\cap\Omega}|u_\varepsilon|^2\Big)^{1/2}+\text{given~data}\bigg\}
\end{equation*}
for any $\varepsilon\leq r<1$. Obviously, the second term on the right-hand side requires a uniform
control with respect to the scale $r$, which is achieved by a local $W^{1,p}$ estimate with $p>2$
via the so-called bootstrap argument.
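Schematically, the iteration runs as follows; this is only a sketch, and the precise quantities appear in Lemma $\ref{lemma:4.2}$ and its application. Writing
\begin{equation*}
H(r) = \frac{1}{r}\inf_{M\in\mathbb{R}^{d\times d}}
\Big(\dashint_{B(Q,r)\cap\Omega}|u_\varepsilon - Mx - \tilde{c}|^2\Big)^{1/2},
\end{equation*}
one establishes a decay of the type
\begin{equation*}
H(\theta r) \leq \frac{1}{2}H(r)
+ C\Big(\frac{\varepsilon}{r}\Big)^{\rho}
\bigg\{\Big(\dashint_{B(Q,2r)\cap\Omega}|u_\varepsilon|^2\Big)^{1/2} + \text{given~data}\bigg\}
\end{equation*}
for some fixed $\theta\in(0,1/4)$; iterating from $r=1$ down to $r=\varepsilon$ and summing the resulting geometric series yields the large-scale bound.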
Consequently, the proof of $\eqref{pri:1.0}$ will be completed by a blow-up argument.
However, there is a gap between the desired estimate $\eqref{a:2.1}$
and the stated estimate $\eqref{pri:1.0}$, and here our only recourse is the Neumann boundary corrector.
We refer the reader to Lemma $\ref{lemma:6.1}$ for the details. Also, we mention that
if the symmetry condition $A=A^*$ is additionally assumed, then the Neumann boundary corrector will have
a better estimate (see Remark $\ref{re:6.1}$).
Up to now, we have specified the key points in the proof of Theorem $\ref{thm:1.1}$
for $p\geq 2$. We mention that the proof in the case $1<p<2$ cannot be derived by duality arguments.
For given boundary atom data $g$ in the $L^2$ Neumann problem, we need to establish the following estimate
\begin{equation*}
\int_{\partial\Omega} (\nabla u_\varepsilon)^* \leq C
\end{equation*}
(see Theorem $\ref{thm:6.1}$), which is based upon the decay estimates of Neumann functions.
Since we have investigated the fundamental solutions of $\mathcal{L}_\varepsilon$ in \cite{X2},
this part of the proof may follow from those in \cite{KFS1,DK} without any real difficulty.
In terms of Lipschitz domains, the well-posedness of $(\mathbf{NH_\varepsilon})$ may be known
whenever $p$ is close to $2$. For a $C^1$ domain,
whether Theorem $\ref{thm:1.1}$ is correct or not is still an open question,
while it is true for the homogenized system $(\mathbf{NH_0})$,
and the reader may find a clue in \cite[Section 3]{X2}.
We mention that the $L^p$ Dirichlet problem for $\mathcal{L}_\varepsilon$ has already been treated
in \cite[Theorem 1.4]{X0} in $C^{1,\eta}$ domains. The assumption $d\geq 3$ is not essential, but it is
convenient for organizing the paper.
Finally, without attempting to be exhaustive, we refer the reader to
\cite{ABJLGP,CP,DK,GF,GNF,GX,HMT,VSO,KS2,aKS1,NSX,S4,S1,ST1,ZVVPSE} and references therein for more results.
This paper is organized as follows. Some definitions,
known lemmas, and the proof of Theorem $\ref{thm:2.1}$ are
introduced in section 2. We show a convergence rate in section 3.
Section 4 is devoted to the study of boundary estimates, and we prove some decay estimates of
Neumann functions in section 5. The proof of Theorem $\ref{thm:1.1}$ is
consequently given in the last section.
\section{Preliminaries}
Define the correctors $\chi_k = (\chi_{k}^{\alpha\beta})$ with $0\leq k\leq d$, related to $\mathcal{L}_\varepsilon$ as follows:
\begin{equation}
\left\{ \begin{aligned}
&L_1(\chi_k) = \text{div}(V) \quad \text{in}~ \mathbb{R}^d, \\
&\chi_k\in H^1_{per}(Y;\mathbb{R}^{m^2})~~\text{and}~\int_Y\chi_k dy = 0
\end{aligned}
\right.
\end{equation}
for $k=0$, and
\begin{equation}
\left\{ \begin{aligned}
&L_1(\chi_k^\beta + P_k^\beta) = 0 \quad \text{in}~ \mathbb{R}^d, \\
&\chi_k^\beta \in H^1_{per}(Y;\mathbb{R}^m)~~\text{and}~\int_Y\chi_k^\beta dy = 0
\end{aligned}
\right.
\end{equation}
for $1\leq k\leq d$, where $P_k^\beta = x_k(0,\cdots,1,\cdots,0)$ with 1 in the
$\beta^{\text{th}}$ position, $Y = (0,1]^d \cong \mathbb{R}^d/\mathbb{Z}^d$, and $H^1_{per}(Y;\mathbb{R}^m)$ denotes the closure
of $C^\infty_{per}(Y;\mathbb{R}^m)$ in $H^1(Y;\mathbb{R}^m)$.
Note that $C^\infty_{per}(Y;\mathbb{R}^m)$ is the subset of $C^\infty(Y;\mathbb{R}^m)$
consisting of all $Y$-periodic vector-valued functions. By asymptotic expansion arguments
(see \cite[pp.103]{ABJLGP} or \cite[pp.31]{VSO}), we obtain the homogenized operator
\begin{equation}\label{eq:2.1}
\mathcal{L}_0 = -\text{div}(\widehat{A}\nabla+ \widehat{V}) + \widehat{B}\nabla + \widehat{c} + \lambda I,
\end{equation}
where $\widehat{A} = (\widehat{a}_{ij}^{\alpha\beta})$, $\widehat{V}=(\widehat{V}_i^{\alpha\beta})$,
$\widehat{B} = (\widehat{B}_i^{\alpha\beta})$ and $\widehat{c}= (\widehat{c}^{\alpha\beta})$ are given by
\begin{equation}\label{eq:2.2}
\begin{aligned}
\widehat{a}_{ij}^{\alpha\beta} = \int_Y \big[a_{ij}^{\alpha\beta} + a_{ik}^{\alpha\gamma}\frac{\partial\chi_j^{\gamma\beta}}{\partial y_k}\big] dy, \qquad
\widehat{V}_i^{\alpha\beta} = \int_Y \big[V_i^{\alpha\beta} + a_{ij}^{\alpha\gamma}\frac{\partial\chi_0^{\gamma\beta}}{\partial y_j}\big] dy, \\
\widehat{B}_i^{\alpha\beta} = \int_Y \big[B_i^{\alpha\beta} + B_j^{\alpha\gamma}\frac{\partial\chi_i^{\gamma\beta}}{\partial y_j}\big] dy, \qquad
\widehat{c}^{\alpha\beta} = \int_Y \big[c^{\alpha\beta} + B_i^{\alpha\gamma}\frac{\partial\chi_0^{\gamma\beta}}{\partial y_i}\big] dy.
\end{aligned}
\end{equation}
\begin{remark}
\emph{It is well known that $u_\varepsilon\to u_0$ strongly in $L^2(\Omega;\mathbb{R}^m)$,
where $u_0\in H^1(\Omega;\mathbb{R}^m)$ satisfies the equation
\begin{equation*}
(\mathbf{NH_0})\left\{
\begin{aligned}
\mathcal{L}_0(u_0) &= 0 &\quad &\text{in}~~\Omega, \\
\frac{\partial u_0}{\partial\nu_0} &= g & \quad &\text{on} ~\partial\Omega,
\end{aligned}\right.
\end{equation*}
where $\partial/\partial\nu_0 = n\cdot\big(\widehat{A}\nabla +\widehat{V}\big)$,
(see for example \cite[pp.4374-4375]{X1}).}
\end{remark}
\begin{definition}\label{def:1}
\emph{The nontangential maximal function of $u$ is defined by
\begin{equation*}
(u)^*(Q) = \sup\big\{|u(x)|:x\in\Gamma_{N_0}(Q)\big\}
\qquad \forall Q\in\partial\Omega,
\end{equation*}
where $\Gamma_{N_0}(Q) = \big\{x\in\Omega:|x-Q|\leq N_0\delta(x)\big\}$ is the cone with vertex
$Q$ and aperture $N_0$, and $N_0>1$ is sufficiently large.}
\end{definition}
\begin{lemma}\label{lemma:2.3}
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy $\eqref{a:1}$ and $\eqref{a:3}$
with $A\in \emph{VMO}(\mathbb{R}^d)$.
Let $u_\varepsilon$ be the solution of $\mathcal{L}_\varepsilon(u_\varepsilon) = 0$ in $\Omega$.
Then we have the following estimate
\begin{equation}\label{pri:2.1}
(u_\varepsilon)^*(Q) \leq C\mathrm{M}_{\partial\Omega}(\mathcal{M}(u_\varepsilon))(Q)
\end{equation}
for any $Q\in\partial\Omega$,
where $C$ depends only on $\mu,\kappa,\lambda,m,d$ and $\|A\|_{\emph{VMO}}$.
\end{lemma}
\begin{remark}
\emph{The definition of $\text{VMO}(\mathbb{R}^d)$ may be found in \cite[pp.43]{S4}, and
the radial maximal function operator $\mathcal{M}$ is defined in \cite[Remark 2.21]{X1}.}
\end{remark}
\begin{proof}
Fix $x\in\Gamma_{N_0}(Q)$;
the estimate $\eqref{pri:2.1}$ is based upon the interior estimate (see \cite[Corollary 3.5]{X0})
\begin{equation*}
\begin{aligned}
|u_\varepsilon(x)|
&\leq C\Big(\dashint_{B(x,r)}|u_\varepsilon|^2\Big)^{1/2} \\
& \leq C\dashint_{B(Q,c_0 r)\cap\partial\Omega}|\mathcal{M}(u_\varepsilon)|
\leq C\mathrm{M}_{\partial\Omega}(\mathcal{M}(u_\varepsilon))(Q),
\end{aligned}
\end{equation*}
where $r=\text{dist}(x,\partial\Omega)$, and $c_0>0$ is determined by $N_0$.
\end{proof}
\begin{lemma}\label{lemma:2.4}
Let $\Omega\subset\mathbb{R}^d$ be a bounded
Lipschitz domain, and let $\mathcal{M}$ be
the radial maximal function operator. Then
for any $h\in W^{1,p}(\Omega)$, we have the following estimate
\begin{equation}\label{pri:2.2}
\|\mathcal{M}(h)\|_{L^p(\partial\Omega)}
\leq C\|h\|_{W^{1,p}(\Omega)},
\end{equation}
where $C$ depends only on $d$ and
the character of $\Omega$.
\end{lemma}
\begin{proof}
This follows with a few modifications from the proof of \cite[Lemma 2.24]{X1}.
\end{proof}
\begin{remark}
\emph{For ease of statement, we introduce the following notation.
\begin{equation*}
\begin{aligned}
D(Q,r) &= B(Q,r)\cap \Omega = \big\{(x^\prime,x_d)\in\mathbb{R}^d:|x^\prime|<r
~\text{and}~ \psi(x^\prime) < x_d < C_0r\big\},\\
\Delta(Q,r) &= B(Q,r)\cap \partial\Omega = \big\{(x^\prime,x_d)\in\mathbb{R}^d:|x^\prime|<r\big\},
\end{aligned}
\end{equation*}
where $\psi:\mathbb{R}^{d-1}\to\mathbb{R}$ is a Lipschitz or $C^{1,\eta}$ function.
We usually denote $D(Q,r)$ and $\Delta(Q,r)$ by $D_r$ and $\Delta_r$.}
\end{remark}
\begin{lemma}[A real method]\label{lemma:2.1}
Let $S_0$ be a cube of $\partial \Omega$ and $F\in L^2(2S_0)$. Let $p>2$ and $f\in L^q(2S_0)$ for some
$2<q<p$. Suppose that for each dyadic subcube $S$ of $S_0$ with $|S|\leq \beta|S_0|$, there
exist two functions $F_S$ and $R_S$ on $2S$ such that $|F|\leq |F_S|+|R_S|$ on $2S$, and
\begin{equation}\label{pri:2.6}
\bigg\{\dashint_{2S}|R_S|^p\bigg\}^{1/p}
\leq C_1\bigg\{\Big(\dashint_{\alpha S}|F|^2\Big)^{1/2}
+\sup_{S^\prime\supset\supset S}\Big(\dashint_{S^\prime}|f|^2\Big)^{1/2}\bigg\},
\end{equation}
\begin{equation}\label{pri:2.7}
\dashint_{2S}|F_S|^2 \leq C_2 \sup_{S^\prime\subset S}\dashint_{S^\prime}|f|^2,
\end{equation}
where $C_1,C_2>0$ and $0<\beta<1<\alpha$. Then
\begin{equation}\label{pri:2.8}
\bigg\{\dashint_{S_0}|F|^q\bigg\}^{1/q}
\leq C\bigg\{\Big(\dashint_{2S_0}|F|^2\Big)^{1/2}+\Big(\dashint_{2S_0}|f|^q\Big)^{1/q}\bigg\},
\end{equation}
where $C>0$ depends only on $p,q,C_1,C_2,\alpha,\beta,d$ and the character of $\Omega$.
\end{lemma}
\begin{proof}
See for example \cite[Lemma 2.2]{aKS1}.
\end{proof}
\noindent\textbf{Proof of Theorem $\ref{thm:2.1}$}.
The main idea may be found in \cite[Lemma 9.2]{KFS2}, and
we make some modifications in the original proof to fit the case of nonhomogeneous operators.
To show the stated result, on account of a covering argument, it suffices to prove the following estimate
\begin{equation}\label{pri:2.5}
\bigg\{\dashint_{\Delta(Q,r)} |(\nabla u_\varepsilon)^*|^p dS\bigg\}^{1/p}
\leq C\bigg\{\dashint_{\Delta(Q,2r)}\Big(|(\nabla u_\varepsilon)^*|^2
+|(u_\varepsilon)^*|^2\Big)dS\bigg\}^{1/2}
+ C\bigg\{\dashint_{\Delta(Q,2r)}
|g|^pdS\bigg\}^{1/p}
\end{equation}
for any $0<r<r_0$, and it will be accomplished by a real-variable method originating in \cite{CP}
and further developed in \cite{S1,S2,S3}. Precisely speaking, we will apply Lemma $\ref{lemma:2.1}$ to our case.
Let $\chi_{\Delta_{8r}}$ represent the characteristic function of a set $\Delta_{8r}\subset\partial\Omega$,
where $r\in(0,r_0/100)$. Define $f = g\chi_{\Delta_{8r}}$, and
then we consider $u_\varepsilon = v_\varepsilon + w_\varepsilon$, in which
$v_\varepsilon$ and $w_\varepsilon$ satisfy $L^2$ Neumann problems
\begin{equation*}
(\text{I})\left\{
\begin{aligned}
\mathcal{L}_\varepsilon(v_\varepsilon) &=0 &\quad& \text{in}~\Omega,\\
\frac{\partial v_\varepsilon}{\partial\nu_\varepsilon} &= f &\quad& \text{on}~\partial\Omega,
\end{aligned}
\right.\qquad
(\text{II})\left\{\begin{aligned}
\mathcal{L}_\varepsilon(w_\varepsilon) &=0 &\quad& \text{in}~\Omega,\\
\frac{\partial w_\varepsilon}{\partial\nu_\varepsilon} &= (1-\chi_{\Delta_{8r}})g
&\quad& \text{on}~\partial\Omega,
\end{aligned}
\right.
\end{equation*}
respectively.
For (\text{I}), it follows from the $L^2$ solvability (see \cite[Theorem 1.5]{X2}) that
\begin{equation*}
\dashint_{\Delta_{r}} |(\nabla v_\varepsilon)^*|^2
\leq \frac{C}{r^{d-1}}\int_{\partial\Omega} |(\nabla v_\varepsilon)^*|^2 dS
\leq \frac{C}{r^{d-1}}\int_{\partial\Omega} |f|^2 dS
\leq C\dashint_{\Delta_{8r}} |g|^2.
\end{equation*}
On the other hand, in view of the estimates $\eqref{pri:2.1}$ and $\eqref{pri:2.2}$, we have
\begin{equation*}
\dashint_{\Delta_{r}} |(v_\varepsilon)^*|^2
\leq C\dashint_{\Delta_{r}} |\mathcal{M}(v_\varepsilon)|^2
\leq \frac{C}{r^{d-1}}\int_{\Omega}\big(|\nabla v_\varepsilon|^2 + |v_\varepsilon|^2\big)dx
\leq C\dashint_{\Delta_{8r}} |g|^2.
\end{equation*}
Let $F_S = (\nabla v_\varepsilon)^* + (v_\varepsilon)^*$; combining the above two inequalities leads to
\begin{equation}\label{f:2.1}
\dashint_{\Delta_{r}}|F_S|^2 \leq C \dashint_{\Delta_{8r}} |g|^2.
\end{equation}
This gives the estimate $\eqref{pri:2.7}$ in Lemma $\ref{lemma:2.1}$.
Observing (\text{II}), we have that $w_\varepsilon\in H^1(D_{3r};\mathbb{R}^m)$ satisfies
$\mathcal{L}_\varepsilon(w_\varepsilon) = 0$ in $D_{3r}$ with
$\partial w_\varepsilon/\partial\nu_\varepsilon = 0$ on $\Delta_{3r}$. Hence, it follows from the
reverse H\"older assumption $\eqref{a:2.1}$ that
\begin{equation}\label{f:2.2}
\begin{aligned}
\bigg(\dashint_{\Delta_r} |(\nabla w_\varepsilon)^*|^p \bigg)^{1/p}
&\leq C\bigg\{\dashint_{\Delta_{2r}} |(\nabla w_\varepsilon)^*
+(w_\varepsilon)^*|^2 \bigg\}^{1/2}\\
&\leq C\bigg\{\dashint_{\Delta_{2r}} |(\nabla u_\varepsilon)^*
+(u_\varepsilon)^*|^2 \bigg\}^{1/2} + C\bigg\{\dashint_{\Delta_{2r}} |F_S|^2\bigg\}^{1/2}\\
&\leq C\bigg\{\dashint_{\Delta_{2r}} |(\nabla u_\varepsilon)^*
+(u_\varepsilon)^*|^2 \bigg\}^{1/2} + C\bigg\{\dashint_{\Delta_{8r}} |g|^2\bigg\}^{1/2}.
\end{aligned}
\end{equation}
Meanwhile, by the boundary $L^\infty$ estimate $\eqref{pri:2.3}$ and \cite[Corollary 3.5]{X0}, one may have
\begin{equation}\label{f:2.3}
\begin{aligned}
\bigg\{\dashint_{\Delta_r}|(w_\varepsilon)^*|^p\bigg\}^{1/p}
&\leq C\bigg\{\dashint_{D_{2r}}|w_\varepsilon|^2\bigg\}^{1/2}
\leq C\bigg\{\dashint_{\Delta_{2r}}|(w_\varepsilon)^*|^2\bigg\}^{1/2} \\
&\leq C\bigg\{\dashint_{\Delta_{2r}}|(u_\varepsilon)^*|^2\bigg\}^{1/2}
+ C\bigg\{\dashint_{\Delta_{2r}}|(v_\varepsilon)^*|^2\bigg\}^{1/2} \\
&\leq C\bigg\{\dashint_{\Delta_{2r}} |(\nabla u_\varepsilon)^*
+(u_\varepsilon)^*|^2 \bigg\}^{1/2} + C\bigg\{\dashint_{\Delta_{8r}} |g|^2\bigg\}^{1/2},
\end{aligned}
\end{equation}
where we also use the estimate $\eqref{f:2.1}$ in the last inequality.
Let $R_S = (\nabla w_\varepsilon)^* + (w_\varepsilon)^*$, and it follows from the estimates
$\eqref{f:2.2}$ and $\eqref{f:2.3}$ that
\begin{equation}
\bigg\{\dashint_{\Delta_r}|R_S|^p\bigg\}^{1/p}
\leq C\bigg\{\dashint_{\Delta_{2r}} |F|^2 \bigg\}^{1/2} + C\bigg\{\dashint_{\Delta_{8r}} |g|^2\bigg\}^{1/2},
\end{equation}
where $F=(\nabla u_\varepsilon)^*+(u_\varepsilon)^*$, and this gives
the estimate $\eqref{pri:2.6}$. It is clear that
$F\leq F_S + R_S$ on $\partial\Omega$, and in terms of Lemma $\ref{lemma:2.1}$ we may have
\begin{equation}
\bigg\{\dashint_{\Delta_r}|F|^q\bigg\}^{1/q}
\leq C\bigg\{\dashint_{\Delta_{2r}} |F|^2 \bigg\}^{1/2} + C\bigg\{\dashint_{\Delta_{2r}} |g|^2\bigg\}^{1/2}
\end{equation}
for any $2<q<p$, where we also employ a simple covering argument.
This implies the stated estimate $\eqref{pri:2.5}$, and we have completed the proof.
\qed
\begin{lemma}
Let $\Omega\subset\mathbb{R}^d$ be a bounded Lipschitz domain.
Suppose $A$ satisfies $\eqref{a:1}$. Let $u_\varepsilon\in H^1(\Omega;\mathbb{R}^m)$ be a weak solution
to $\mathcal{L}_\varepsilon(u_\varepsilon) = F$ in $\Omega$ and
$\partial u_\varepsilon /\partial\nu_\varepsilon = g$ on $\partial\Omega$, where
$F\in L^2(\Omega;\mathbb{R}^m)$ and $g\in L^2(\partial\Omega;\mathbb{R}^m)$. Then there exists
$p>2$, depending on $\mu,d$ and the character of $\Omega$, such that
\begin{equation}\label{pri:2.13}
\|\nabla u_\varepsilon\|_{L^p(\Omega)}
\leq C\Big\{\|F\|_{L^2(\Omega)}+\|g\|_{L^2(\partial\Omega)}\Big\},
\end{equation}
where $C$ depends on $\mu,\kappa,d,m$ and $\Omega$.
\end{lemma}
\begin{proof}
If $A\in\text{VMO}(\mathbb{R}^d)$ additionally satisfies $\eqref{a:2}$, one may show
\begin{equation}\label{f:2.7}
\|\nabla u_\varepsilon\|_{L^p(\Omega)}
\leq C\Big\{\|F\|_{L^q(\Omega)}+\|g\|_{B^{-1/p,p}(\partial\Omega)}\Big\}
\end{equation}
for $2\leq p<\infty$, with $1/q = 1/p+1/d$, which has been proved in \cite[Lemma 3.3]{X1}. Clearly,
we can choose $p>2$ close to $2$ such that $L^2(\Omega)\subset L^q(\Omega)$ and
$L^2(\partial\Omega)\subset B^{-1/p,p}(\partial\Omega)$, and this gives the estimate $\eqref{pri:2.13}$.
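The choice of $p$ here rests on elementary exponent arithmetic: since $d\geq 3$ and $\Omega$ is bounded,
\begin{equation*}
L^2(\Omega)\subset L^q(\Omega) \iff q\leq 2
\iff \frac{pd}{d+p}\leq 2 \iff p\leq \frac{2d}{d-2},
\end{equation*}
so any $2<p\leq 2d/(d-2)$ serves for the first inclusion, while the embedding $L^2(\partial\Omega)\subset B^{-1/p,p}(\partial\Omega)$ holds for $p>2$ sufficiently close to $2$, by duality and the Sobolev embedding on the $(d-1)$-dimensional boundary.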
Note that without the periodicity and $\text{VMO}$ conditions on $A$,
the estimate $\eqref{f:2.7}$ still holds for $|1/p-1/2|<\delta$,
where $\delta$ depends on $\mu,d$ and the character of $\Omega$; we do not reproduce
the proof, which is based upon a real-variable method and the reverse H\"older inequality (see \cite[Theorem 1.1.4]{S4}).
\end{proof}
\section{Convergence rates in Lipschitz domains}
\begin{thm}[convergence rates]\label{thm:3.1}
Let $\Omega\subset\mathbb{R}^d$ be a bounded Lipschitz domain. Suppose that
the coefficients satisfy $\eqref{a:1}$, $\eqref{a:2}$ and $\eqref{a:3}$.
Given $F\in L^{2}(\Omega;\mathbb{R}^m)$ and $g\in L^2(\partial\Omega;\mathbb{R}^m)$,
we assume that $u_\varepsilon,u_0\in H^1(\Omega;\mathbb{R}^m)$ satisfy
\begin{equation*}
(\emph{NH}_\varepsilon)\left\{\begin{aligned}
\mathcal{L}_\varepsilon(u_\varepsilon) & = F &\quad&\emph{in}~\Omega,\\
\frac{\partial u_\varepsilon}{\partial\nu_\varepsilon}
& = g &\quad & \emph{on}~\partial\Omega,
\end{aligned}\right.
\qquad
(\emph{NH}_0)\left\{\begin{aligned}
\mathcal{L}_0(u_0) & = F &\quad&\emph{in}~\Omega,\\
\frac{\partial u_0}{\partial\nu_0}
& = g &\quad & \emph{on}~\partial\Omega,
\end{aligned}\right.
\end{equation*}
respectively. Then we have
\begin{equation}\label{pri:3.1}
\|u_\varepsilon - u_0\|_{L^2(\Omega)}
\leq C\varepsilon^\rho
\Big\{\|F\|_{L^2(\Omega)}+\|g\|_{L^2(\partial\Omega)}\Big\},
\end{equation}
where $\rho>0$ and $C>0$ depend only on $\mu,\kappa,\lambda,m,d$ and $\Omega$.
\end{thm}
\begin{remark}
\emph{We mention that the results in this theorem do not depend on the symmetry condition $A=A^*$.
If it is assumed, then we have the convergence rate $O(\varepsilon\ln(r_0/\varepsilon))$
(see \cite[Theorem 1.2]{X3}). We introduce the following notation.
The co-layer set is $\Sigma_{r}=\{x\in\Omega:\text{dist}(x,\partial\Omega)>r\}$ and
$\Omega\setminus\Sigma_r$ is referred to as the layer part of $\Omega$.
We define the cut-off function $\psi_r\in C^1_0(\Omega)$ such that
$\psi_r = 1$ in $\Sigma_{2r}$, $\psi_r = 0$ outside $\Sigma_{r}$ and
$|\nabla \psi_r|\leq C/r$.}
\end{remark}
\begin{lemma}\label{lemma:3.1}
Assume the same conditions as in Theorem $\ref{thm:3.1}$. Suppose that
the weak solution $u_\varepsilon\in H^1(\Omega;\mathbb{R}^m)$ satisfies
$\mathcal{L}_\varepsilon(u_\varepsilon) = \mathcal{L}_0(u_0)$ in $\Omega$, and
$\partial u_\varepsilon/ \partial\nu_\varepsilon = \partial u_0/\partial\nu_0$ on $\partial\Omega$
with $u_0\in H^2(\Omega;\mathbb{R}^m)$. Let the first approximating corrector be defined by
\begin{equation}
w_\varepsilon = u_\varepsilon - u_0 - \varepsilon\chi_0(\cdot/\varepsilon)S_\varepsilon(\psi_{4\varepsilon} u_0)
-\varepsilon\chi_k(\cdot/\varepsilon)S_\varepsilon(\psi_{4\varepsilon}\nabla_k u_0),
\end{equation}
where $\psi_{4\varepsilon}$ is the cut-off function and
$S_\varepsilon$ is the smoothing operator (see \cite[Definition 2.10]{X1}).
Then we have
\begin{equation}\label{pri:3.4}
\|w_\varepsilon\|_{H^1(\Omega)}
\leq C\Big\{\|u_0\|_{H^1(\Omega\setminus\Sigma_{8\varepsilon})}
+\varepsilon\|u_0\|_{H^2(\Sigma_{4\varepsilon})}\Big\},
\end{equation}
where $C$ depends on $\mu,\kappa,\lambda,m,d$ and $\Omega$.
\end{lemma}
\begin{proof}
In fact, the desired result $\eqref{pri:3.4}$ has been shown in \cite[Lemma 5.3]{X1},
and its proof is too long to be reproduced here. We refer the reader to
\cite[Lemmas 5.2, 5.3]{X1} for the details.
\end{proof}
\begin{lemma}[layer $\&$ co-layer type estimates]\label{lemma:3.2}
Assume the same conditions as in Theorem $\ref{thm:3.1}$.
Let $u_0\in H^1(\Omega;\mathbb{R}^m)$ be the weak solution to $(\emph{NH}_0)$. Then
there exists $p>2$ such that
\begin{equation}\label{pri:3.2}
\|u_0\|_{H^1(\Omega\setminus\Sigma_{p_1\varepsilon})}
\leq C\varepsilon^{\frac{1}{2}-\frac{1}{p}}\Big\{\|F\|_{L^2(\Omega)}
+\|g\|_{L^2(\partial\Omega)}\Big\}
\end{equation}
and
\begin{equation}\label{pri:3.3}
\|\nabla^2 u_0\|_{L^2(\Sigma_{p_2\varepsilon})}
\leq C\varepsilon^{-\frac{1}{2}-\frac{1}{p}}\Big\{\|F\|_{L^2(\Omega)}
+\|g\|_{L^2(\partial\Omega)}\Big\}
\end{equation}
where $p_1,p_2>0$ are fixed real numbers, and $C$ depends on $\mu,d,p_1,p_2,\sigma,p$ and $\Omega$.
\end{lemma}
\begin{proof}
The main ideas may be found in \cite[Lemma 5.1.5]{S4},
and we provide a proof for the sake of completeness. We first handle the layer type estimate
$\eqref{pri:3.2}$, and it follows from H\"older's inequality and the estimate $\eqref{pri:2.13}$ that
\begin{equation*}
\begin{aligned}
\|u_0\|_{H^1(\Omega\setminus\Sigma_{p_1\varepsilon})}
\leq C\varepsilon^{\frac{1}{2}-\frac{1}{p}}\|u_0\|_{W^{1,p}(\Omega)}
\leq C\varepsilon^{\frac{1}{2}-\frac{1}{p}}\Big\{\|F\|_{L^2(\Omega)}+\|g\|_{L^2(\partial\Omega)}\Big\}.
\end{aligned}
\end{equation*}
On account of the interior estimate for $\mathcal{L}_0$, we have
\begin{equation}\label{f:3.20}
\dashint_{B(x,\delta(x)/4)}|\nabla^2 u_0|^2 dy
\leq \frac{C}{[\delta(x)]^{2}}\dashint_{B(x,\delta(x)/2)}|\nabla u_0|^2 dy +
C\dashint_{B(x,\delta(x)/2)}|u_0|^2 dy
+ C\dashint_{B(x,\delta(x)/2)}|F|^2 dy
\end{equation}
for any $x\in\Sigma_{p_2\varepsilon}$, where $\delta(x)=\text{dist}(x,\partial\Omega)$.
Since $|y-x|\leq \delta(x)/4$, it is not hard to see that $|\delta(x)-\delta(y)|\leq |x-y|\leq \delta(x)/4$, and
this implies $(4/5)\delta(y)< \delta(x)< (4/3)\delta(y)$. Therefore, by Fubini's theorem,
\begin{equation*}
\begin{aligned}
\int_{\Sigma_{p_2\varepsilon}}|\nabla^2 u_0|^2 dx
\leq C\int_{\Sigma_{p_2\varepsilon}}\dashint_{B(x,\delta(x)/4)}|\nabla^2 u_0|^2dy\, dx
\leq C\int_{\Sigma_{(p_2\varepsilon)/2}}|\nabla^2 u_0|^2 dx.
\end{aligned}
\end{equation*}
Then integrating both sides of $\eqref{f:3.20}$ over the co-layer set $\Sigma_{p_2\varepsilon}$ leads to
\begin{equation*}
\begin{aligned}
\int_{\Sigma_{p_2\varepsilon}}|\nabla^2 u_0|^2 dx
&\leq C\int_{\Sigma_{(p_2\varepsilon)/2}}|\nabla u_0|^2 [\delta(x)]^{-2} dx
+ C\int_\Omega |F|^2 dx + C\int_\Omega |u_0|^2 dx \\
&\leq C\varepsilon^{-1-\frac{2}{p}}\Big(\int_{\Omega}
|\nabla u_0|^pdx\Big)^{\frac{2}{p}}
+ C\int_\Omega |F|^2 dx + C\int_\Omega |u_0|^2 dx,
\end{aligned}
\end{equation*}
and this together with $\eqref{pri:2.13}$ and the $H^1$ estimate (see \cite[Lemma 3.1]{X1}) gives the
stated estimate $\eqref{pri:3.3}$. We have completed the proof.
\end{proof}
\noindent\textbf{Proof of Theorem $\ref{thm:3.1}$.}
On account of Lemmas $\ref{lemma:3.1},\ref{lemma:3.2}$, it is not hard to see that
\begin{equation}\label{f:3.1}
\begin{aligned}
\|u_\varepsilon - u_0\|_{L^2(\Omega)}
&\leq C\Big\{\|u_0\|_{H^1(\Omega\setminus\Sigma_{8\varepsilon})}
+ \varepsilon\|\nabla^2 u_0\|_{L^2(\Sigma_{4\varepsilon})} + \varepsilon\|u_0\|_{H^1(\Omega)}\Big\}\\
&\leq C\varepsilon^{\frac{1}{2}-\frac{1}{p}}\Big\{\|F\|_{L^2(\Omega)}+\|g\|_{L^2(\partial\Omega)}
\Big\},
\end{aligned}
\end{equation}
where we also employ the $H^1$ estimate. Let $\rho = 1/2-1/p$, and we have completed the proof.
\qed
\begin{corollary}\label{cor:3.1}
Assume the same conditions as in Theorem $\ref{thm:3.1}$.
For any $\xi\in\mathbb{R}^m$,
let $v_\varepsilon = u_\varepsilon - \xi - \varepsilon\chi_0(x/\varepsilon)\xi$ and
$v_0 = u_0 -\xi$, where $u_\varepsilon$ and $u_0$ satisfy
$(\emph{NH}_\varepsilon)$ and $(\emph{NH}_0)$, respectively. Then we have
\begin{equation}
\|v_\varepsilon - v_0\|_{L^2(\Omega)}
\leq C\varepsilon^\rho
\Big\{\|F\|_{L^2(\Omega)}+\|g\|_{L^2(\partial\Omega)}+|\xi|\Big\},
\end{equation}
where $\rho>0$ and $C>0$ depend only on $\mu,\kappa,\lambda,m,d$ and $\Omega$.
\end{corollary}
\begin{remark}
\emph{Let $v_\varepsilon$ and $v_0$ be given in Corollary $\ref{cor:3.1}$.
Then one may have the following equations}
\begin{equation}\label{eq:3.1}
\left\{\begin{aligned}
\mathcal{L}_\varepsilon (v_\varepsilon) & = F
+ \varepsilon\emph{div}\big(V_\varepsilon\chi_{0,\varepsilon}\xi\big)
- \big[B_\varepsilon(\nabla\chi_0)_\varepsilon + c_\varepsilon+\lambda I\big]\xi
-\varepsilon\chi_{0,\varepsilon}\big[c_\varepsilon+\lambda I\big]\xi
\quad &\emph{in}&~\Omega, \\
\frac{\partial v_\varepsilon}{\partial \nu_\varepsilon}
& = g -\varepsilon n\cdot V_\varepsilon\chi_{0,\varepsilon}\xi
-n\cdot\big[A_\varepsilon(\nabla\chi_0)_\varepsilon+V_\varepsilon\big]\xi
\quad &\emph{on}&~\partial\Omega,
\end{aligned}\right.
\end{equation}
\emph{and}
\begin{equation}\label{eq:3.2}
\left\{\begin{aligned}
\mathcal{L}_0 (v_0) & = F - (\widehat{c} + \lambda I)\xi
\quad &\emph{in}&~\Omega, \\
\frac{\partial v_0}{\partial \nu_0}
& = g
-n\cdot\widehat{V}\xi
\quad &\emph{on}&~\partial\Omega,
\end{aligned}\right.
\end{equation}
\emph{
in which the notation $V_\varepsilon = V(x/\varepsilon)$
and $\chi_{0,\varepsilon} = \chi_0(x/\varepsilon)$
follows the same abbreviation convention as in \cite[Remark 2.15]{X1}.
Here we plan to give some simple computations as a preparation. Recalling the form of $\widehat{c}$ in
$\eqref{eq:2.2}$, let $\Delta \vartheta_0 = \widehat{c} - c - B\nabla\chi_0$ in $Y$ and
$\int_Y\vartheta_0(y)dy = 0$. This implies $\vartheta_0\in H^2_{loc}(\mathbb{R}^d)$
(see \cite[Remark 2.7]{X1}). Also, set $b_{0} = \widehat{V} - V(y) - A(y)\nabla\chi_0(y)$, and
$n\cdot b_0(y) = \frac{\varepsilon}{2}\big[n_i\frac{\partial}{\partial x_j} - n_j\frac{\partial}{\partial x_i}
\big]E_{ji0}(y)$, where the $E_{ji0}$ are referred to as the dual correctors and $y=x/\varepsilon$
(see \cite[Lemma 4.4]{X1}).
Hence, there hold
\begin{equation}\label{eq:3.3}
\left\{\begin{aligned}
\mathcal{L}_0(v_0)
&= \mathcal{L}_\varepsilon(v_\varepsilon) - \varepsilon\text{div}\big[V(y)\chi_{0}(y)
+ (\nabla \vartheta_0)(y)\big]\xi
+\varepsilon\chi_{0}(y)\big[ c(y)+\lambda I\big]\xi
\quad &\text{in}&~~\Omega, \\
\frac{\partial v_0}{\partial\nu_0}
& = \frac{\partial v_\varepsilon}{\partial\nu_\varepsilon} + \varepsilon n\cdot V(y)\chi_{0}(y)\xi
- \frac{\varepsilon}{2}\Big[n_i\frac{\partial}{\partial x_j}
-n_j\frac{\partial}{\partial x_i}\Big]E_{ji0}(y) \xi
\quad &\text{on}&~\partial\Omega,
\end{aligned}\right.
\end{equation}}
\emph{and it will benefit the later discussion in the approximating lemma.}
\end{remark}
\section{Local boundary estimates}
\begin{thm}[Lipschitz estimates at large scales]\label{thm:4.1}
Let $\Omega$ be a bounded $C^{1,\eta}$ domain.
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy
$\eqref{a:1}$, $\eqref{a:2}$, and $\eqref{a:3}$.
Let $u_\varepsilon\in H^1(D_5;\mathbb{R}^m)$ be a weak solution of
$\mathcal{L}_\varepsilon(u_\varepsilon) = F$ in $D_5$ and
$\partial u_\varepsilon/\partial\nu_\varepsilon = g$ on $\Delta_5$,
where $F\in L^p(D_5;\mathbb{R}^m)$ with $p>d$,
and $g\in C^{0,\sigma}(\Delta_5;\mathbb{R}^m)$ with $0<\sigma\leq\eta<1$.
Then there holds
\begin{equation}\label{pri:4.6}
\begin{aligned}
\Big(\dashint_{D_r} |\nabla u_\varepsilon|^2dx\Big)^{\frac{1}{2}}
\leq C\bigg\{\Big(\dashint_{D_1} |u_\varepsilon|^2dx\Big)^{\frac{1}{2}}
+ \Big(\dashint_{D_{2r}} |u_\varepsilon|^2dx\Big)^{\frac{1}{2}}
+ \Big(\dashint_{D_1} |F|^pdx\Big)^{\frac{1}{p}}
+ \|g\|_{C^{0,\sigma}(\Delta_1)}\bigg\}
\end{aligned}
\end{equation}
for any $\varepsilon\leq r<(1/4)$,
where $C$ depends only on $\mu, \lambda, \kappa, d, m, p$ and the character of $\Omega$.
\end{thm}
\begin{lemma}[boundary Caccioppoli's inequality]\label{lemma:4.3}
Let $\Omega\subset\mathbb{R}^d$ be a bounded Lipschitz domain. Suppose that
the coefficients of $\mathcal{L}_\varepsilon$ satisfy $\eqref{a:1}$ and $\eqref{a:3}$ with
$\lambda\geq\lambda_0$. Let $u_\varepsilon\in H^1(D_2;\mathbb{R}^m)$ be a weak
solution of $\mathcal{L}_\varepsilon(u_\varepsilon) = \emph{div}(f)+F$ in $D_2$ with
$\partial u_\varepsilon/\partial\nu_\varepsilon = g-n\cdot f$ on $\Delta_2$. Then there holds
\begin{equation}\label{pri:2.12}
\Big(\dashint_{D_r}|\nabla u_\varepsilon|^2\Big)^{1/2}
\leq C_{\mu}\bigg\{\frac{1}{r}\Big(\dashint_{D_{2r}}|u_\varepsilon|^2\Big)^{1/2}
+ \Big(\dashint_{D_{2r}}|f|^2\Big)^{1/2}
+ r\Big(\dashint_{D_{2r}}|F|^2\Big)^{1/2}
+ \Big(\dashint_{\Delta_{2r}}|g|^2\Big)^{1/2}\bigg\}
\end{equation}
for any $0<r\leq 1$, where $C_\mu$ depends only on $\mu,d,m$, and the character of $\Omega$.
\end{lemma}
\begin{remark}
\emph{The condition $\lambda\geq\lambda_0$ guarantees that the constant $C_\mu$ in $\eqref{pri:2.12}$
does not depend on $\kappa$, which may lead to a scaling-invariant estimate even for the case $r>1$
(see \cite[Lemma 2.7]{X2}). However, we do not seek
such convenience here. Also, we mention that the range $0<r\leq 1$ is necessary in our proof.}
\end{remark}
\begin{proof}
By a rescaling argument, it suffices to prove the result for $r=1$.
The proof is quite similar to that given for \cite[Lemma 2.7]{X0}, and it is not hard to
derive that
\begin{equation*}
\begin{aligned}
\frac{\mu}{2}\int_{D_2} & \phi^2|\nabla u_\varepsilon|^2 dx
+ (\lambda - \lambda_0)\int_{D_2} \phi^2 |u_\varepsilon|^2 dx \\
&\leq C_\mu\int_{D_2} |\nabla\phi|^2|u_\varepsilon|^2 dx
+C_{\mu}\int_{D_2} \phi^2|f|^2 dx + \int_{D_2} \phi^2|F||u_\varepsilon| dx
+ \underbrace{\int_{\Delta_2} \phi^2g u_\varepsilon dS}_{I},
\end{aligned}
\end{equation*}
where $\phi\in C_0^1(\mathbb{R}^d)$ is a cut-off function satisfying
$\phi = 1$ in $D_1$ and $\phi = 0$ outside $D_{3/2}$ with $|\nabla\phi|\leq C$, and
the ellipticity condition $\eqref{a:1}$ coupled with integration by parts has been used in the computations.
Note that the last term $I$ is new compared to the proof in \cite[Lemma 2.7]{X0}, and
the remainder of the proof is standard. Thus, we have that
\begin{equation*}
I \leq \frac{\mu}{10}\int_{D_2}|\phi\nabla u_\varepsilon|^2 dx
+ C\int_{D_2}|\phi u_\varepsilon|^2 dx + C\int_{\Delta_2} |\phi g|^2 dS.
\end{equation*}
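More precisely, a standard route for handling $I$ combines Cauchy's inequality with $\delta$ and the
trace theorem: for any $\delta>0$,
\begin{equation*}
I \leq \delta\int_{\Delta_2}|\phi u_\varepsilon|^2 dS + C_\delta\int_{\Delta_2}|\phi g|^2 dS,
\qquad
\int_{\Delta_2}|\phi u_\varepsilon|^2 dS
\leq C\|\phi u_\varepsilon\|_{L^2(D_2)}\|\phi u_\varepsilon\|_{H^1(D_2)},
\end{equation*}
and choosing $\delta$ sufficiently small allows the gradient part to be absorbed into the left-hand
side, which produces the factor $\mu/10$ above.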
Note that the constant $C$ actually depends on $\mu,m,d$ and the character of $\Omega$.
Thus we cannot use $\lambda_0$ to absorb this constant, which also means we cannot handle
the case $r>1$ by simply using the rescaling argument. We have completed the proof.
\end{proof}
\begin{remark}\label{re:4.1}
\emph{Assume the same conditions and $u_\varepsilon$ as in Lemma $\ref{lemma:4.3}$. Let
$v_\varepsilon = u_\varepsilon - \xi - \varepsilon\chi_0(x/\varepsilon)\xi$ satisfy
$\eqref{eq:3.1}$ in $D_2$. Then there holds
\begin{equation}\label{pri:4.7}
\Big(\dashint_{D_r}|\nabla v_\varepsilon|^2\Big)^{1/2}
\leq C_{\mu}\bigg\{\frac{1}{r}\Big(\dashint_{D_{2r}}|v_\varepsilon|^2\Big)^{1/2}
+ \Big(\dashint_{D_{2r}}|f|^2\Big)^{1/2}
+ r\Big(\dashint_{D_{2r}}|F|^2\Big)^{1/2}
+ \Big(\dashint_{\Delta_{2r}}|g|^2\Big)^{1/2} + |\xi|\bigg\}
\end{equation}
for any $0<r\leq 1$, where $C_\mu$ depends only on $\mu,d,m$, and the character of $\Omega$.}
\end{remark}
\begin{lemma}[local $W^{1,p}$ boundary estimate]\label{lemma:2.2}
Let $\Omega\subset\mathbb{R}^d$ be a bounded $C^1$ domain, and $2< p<\infty$.
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy $\eqref{a:1}$ and $\eqref{a:3}$, and
$A\in\emph{VMO}(\mathbb{R}^d)$ additionally satisfies $\eqref{a:2}$.
Given $f\in L^p(D_2;\mathbb{R}^{md})$, $F\in L^q(D_2;\mathbb{R}^m)$ with $q=\frac{pd}{d+p}$ and
$g\in L^\infty(\Delta_2;\mathbb{R}^m)$, define a local source quantity as
\begin{equation*}
\mathcal{R}_p(f,F,g;r) = \Big(\dashint_{D_r}|f|^p\Big)^{1/p} + r\Big(\dashint_{D_r}|F|^q\Big)^{1/q}
+ \|g\|_{L^\infty(\Delta_r)}
\end{equation*}
for any $0<r\leq 1$.
Let $u_\varepsilon\in H^1(D_2;\mathbb{R}^m)$ be the weak solution to
$\mathcal{L}_\varepsilon(u_\varepsilon) = \emph{div}(f)+ F$ in $D_2$ and
$\partial u_\varepsilon/\partial\nu_\varepsilon = g -n\cdot f$ on $\Delta_2$ with the local boundedness
assumption
\begin{equation}\label{a:2.2}
\|u_\varepsilon\|_{W^{1,2}(D_1)}+ \mathcal{R}_p(f,F,g;1) \leq 1.
\end{equation}
Then, there exists $C_p>0$, depending on $\mu,\kappa,\lambda,m,d,p,\|A\|_{\emph{VMO}}$ and
the character of $\Omega$, such that
\begin{equation}\label{pri:2.10}
\|u_\varepsilon\|_{W^{1,p}(D_{1/2})} \leq C_p.
\end{equation}
\end{lemma}
\begin{proof}
The proof is based upon the localization technique coupled with a bootstrap argument which may be found
in \cite[Lemma 2.19]{X1} and \cite[Theorem 3.3]{X0}. Let $w_\varepsilon =\phi u_\varepsilon$, where
$\phi\in C_0^1(\mathbb{R}^d)$ is a cut-off function satisfying $\phi = 1$ in $D_{1/2}$ and
$\phi = 0$ outside $D_{1}$ with $|\nabla\phi|\leq C$. Then we have
\begin{equation*}
\left\{\begin{aligned}
\mathcal{L}_\varepsilon(w_\varepsilon) &= \text{div}(\tilde{f}) + \tilde{F} &\quad&\text{in} ~~D_2,\\
\frac{\partial w_\varepsilon}{\partial\nu_\varepsilon}
&= \big(\frac{\partial u_\varepsilon}{\partial\nu_\varepsilon}\big)\phi - n\cdot \tilde{f}
&\quad&\text{on}~\partial D_2,
\end{aligned}\right.
\end{equation*}
where
\begin{equation*}
\tilde{f} = f\phi - A(x/\varepsilon)\nabla\phi u_\varepsilon,\quad
\tilde{F} = F\phi - f\cdot\nabla\phi -A(x/\varepsilon)\nabla u_\varepsilon\nabla\phi
+\big[B(x/\varepsilon)-V(x/\varepsilon)\big]\nabla\phi u_\varepsilon.
\end{equation*}
Thus, according to the global $W^{1,p}$ estimate (see \cite[Theorem 3.1]{X1}), we may obtain
\begin{equation}\label{f:2.5}
\|u_\varepsilon\|_{W^{1,p}(D_{1/2})}\leq \|w_\varepsilon\|_{W^{1,p}(D_{1/2})}
\leq C\Big\{\|u_\varepsilon\|_{W^{1,q}(D_1)}+\mathcal{R}_p(f,F,g;1)\Big\}
\end{equation}
where we use the Sobolev embedding theorem
$\|u_\varepsilon\|_{L^p(D_{1})}\leq C\|u_\varepsilon\|_{W^{1,q}(D_1)}$ with $q=\frac{pd}{p+d}$.
The interval $[1/2,1]$ may be divided into $1/2\leq r_1<\cdots <r_i<r_{i+1}<\cdots<r_{k_0}\leq 1$
with $i=1,\cdots,k_0$, where $k_0=\big[\frac{d}{2}\big]+1$ denotes the number of iterations, and
$[\frac{d}{2}]$ represents the integer part of $d/2$. By choosing the cut-off function
$\phi_i\in C_0^1(\mathbb{R}^d)$ such that $\phi_i =1$ in $D_{r_i}$ and
$\phi_i =0$ outside $D_{r_{i+1}}$ with $|\nabla\phi_i|\leq C/(r_{i+1}-r_i)$, one may derive
that
\begin{equation}\label{f:2.6}
\begin{aligned}
\|u_\varepsilon\|_{W^{1,p}(D_{1/2})}
\leq \cdots
&\leq \| w_\varepsilon\|_{W^{1,p_i}(D_{r_i})} + \cdots \\
&\leq C_{p_i}\Big\{\|u_\varepsilon\|_{W^{1,p_{i-1}}(D_{r_{i+1}})}
+\mathcal{R}_{p_i}(f,F,g;r_{i+1})\Big\}
+ C(d)\mathcal{R}_p(f,F,g;1) \\
\leq \cdots
&\leq
C\Big\{\|u_\varepsilon\|_{W^{1,2}(D_{1})}+\mathcal{R}_p(f,F,g;1)\Big\}\leq C
\end{aligned}
\end{equation}
where $p_i = 2d/(d-2i)$, and we note that two cases, $p>p_{k_0} = \frac{2d}{d-2k_0}$
and $p\in(2,p_{k_0}]$, need to be discussed. We refer the reader to \cite[Theorem 3.3]{X0} for the details.
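The choice of $p_i$ is dictated by the Sobolev exponent relation. Indeed, a direct computation shows that
\begin{equation*}
\frac{p_i d}{p_i + d} = \frac{2d}{d-2(i-1)} = p_{i-1},
\end{equation*}
so each application of $\eqref{f:2.5}$ with exponent $p_i$ requires exactly the $W^{1,p_{i-1}}$ bound
produced in the previous step, and the iteration starts from $p_0 = 2$.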
Also, to obtain the second line of $\eqref{f:2.6}$ we use the fact that
\begin{equation*}
\mathcal{R}_{p_i}(f,F,g;r_i) \leq C(d)\mathcal{R}_{p}(f,F,g;1)
\end{equation*}
for any $2<p_i\leq p$ and $r_i\in[1/2,1]$. We end the proof here.
\end{proof}
\begin{corollary}
Assume the same conditions as in Lemma $\ref{lemma:2.2}$. Let $0<\sigma<1$, and $p=d/(1-\sigma)$.
Suppose that $u_\varepsilon\in H^1(D_2;\mathbb{R}^m)$ is a weak solution of
$\mathcal{L}_\varepsilon(u_\varepsilon) = \emph{div}(f)+F$ in $D_2$ and
$\partial u_\varepsilon/\partial\nu_\varepsilon = g-n\cdot f$ on $\Delta_2$ with the local boundedness
assumption $\eqref{a:2.2}$. Then we have the boundary H\"older estimate
\begin{equation}\label{pri:2.11}
\|u_\varepsilon\|_{C^{0,\sigma}(D_{1/2})} \leq C_\sigma,
\end{equation}
where $C_\sigma$ depends on $\mu,\kappa,\lambda,m,d,\sigma,\|A\|_{\emph{VMO}}$
and the character of $\Omega$. In particular, for any $s>0$ there holds
\begin{equation}\label{pri:2.3}
\|u_\varepsilon\|_{L^\infty(D_{r/2})}\leq C\bigg\{\Big(\dashint_{D_{r}}|u_\varepsilon|^s\Big)^{1/s}
+ r\mathcal{R}_p(f,F,g;r)\bigg\}
\end{equation}
for any $0<r\leq 1$, where $C$ depends on $s$ and
$C_\sigma$.
\end{corollary}
\begin{proof}
The estimate $\eqref{pri:2.11}$ directly follows from the Sobolev embedding theorem and the estimate
$\eqref{pri:2.10}$. To show the estimate $\eqref{pri:2.3}$ we also employ Caccioppoli's inequality
$\eqref{pri:2.12}$ and a rescaling argument. The details may be found in \cite[Corollary 3.5]{X0} and we
do not reproduce them here.
\end{proof}
\begin{lemma}[approximating lemma]\label{lemma:4.1}
Let $\varepsilon\leq r<1$.
Assume the same conditions as in Theorem $\ref{thm:4.1}$.
Let $u_\varepsilon\in H^1(D_{2r};\mathbb{R}^m)$ be a weak solution of
$\mathcal{L}_\varepsilon(u_\varepsilon) = F$ in $D_{2r}$
and $\partial u_\varepsilon/\partial\nu_\varepsilon = g$ on $\Delta_{2r}$. Then there exists
$w\in H^1(D_{r};\mathbb{R}^m)$ such that
$\mathcal{L}_0(w) = F$ in $D_r$ and
$\partial w/\partial\nu_0 = g$ on $\Delta_{r}$,
and there holds
\begin{equation}\label{pri:4.1}
\begin{aligned}
\Big(\dashint_{D_r} |u_\varepsilon - w|^2 \Big)^{1/2}
\leq C\left(\frac{\varepsilon}{r}\right)^{\rho}
\bigg\{
&\Big(\dashint_{D_{2r}}|u_\varepsilon|^2 \Big)^{1/2}
+ r^2\Big(\dashint_{D_{2r}} |F|^2\Big)^{1/2}
+ r\Big(\dashint_{\Delta_{2r}}|g|^2\Big)^{1/2}\bigg\},
\end{aligned}
\end{equation}
where $\rho\in(0,1/2)$
and $C>0$ depend only on $\mu,\lambda,\kappa,d,m$ and the character of $\Omega$.
\end{lemma}
\begin{proof}
The idea may be found in \cite[Theorem 5.1.1]{S4}.
By a rescaling argument one may assume $r=1$. For any $t\in (1,3/2)$,
there exists $w\in H^1(D_t;\mathbb{R}^m)$ satisfying
$\mathcal{L}_0(w)= F$ in $D_t$,
and $\partial w/\partial\nu_0 = \partial u_\varepsilon/\partial\nu_\varepsilon$ on $\partial D_t$.
In view of Theorem $\ref{thm:3.1}$, we have
\begin{equation}\label{f:4.5}
\begin{aligned}
\|u_\varepsilon - w\|_{L^2(D_t)}
\leq C\varepsilon^{\rho}\bigg\{
\|F\|_{L^{2}(D_t)}
+\|g\|_{L^{2}(\Delta_2)}
+\|u_\varepsilon\|_{W^{1,2}(\partial D_t\setminus\Delta_2)}\bigg\},
\end{aligned}
\end{equation}
and it remains to estimate the last term on the right-hand side of $\eqref{f:4.5}$. Due to
the estimate $\eqref{pri:2.12}$ and co-area formula, we have
\begin{equation}\label{f:4.1}
\|u_\varepsilon\|_{W^{1,2}(\partial D_t\setminus\Delta_2)} \leq C\Big\{\|u_\varepsilon\|_{L^2(D_2)}
+ \|F\|_{L^2(D_2)} + \|g\|_{L^{2}(\Delta_2)}\Big\}
\end{equation}
for some $t\in(1,3/2)$.
Hence, combining $\eqref{f:4.5}$ and $\eqref{f:4.1}$ we acquire
\begin{equation*}
\begin{aligned}
\big\|u_\varepsilon - w\big\|_{L^2(D_1)}
&\leq C\varepsilon^{\rho}
\bigg\{\Big(\dashint_{D_{2}}|u_\varepsilon|^2 \Big)^{1/2}
+ \Big(\dashint_{D_{2}}|F|^2 \Big)^{1/2}
+ \Big(\dashint_{\Delta_{2}}|g|^2\Big)^{1/2} \bigg\}.
\end{aligned}
\end{equation*}
By a rescaling argument we can derive
the desired estimate $\eqref{pri:4.1}$, and we complete the proof.
\end{proof}
\begin{lemma}[revised approximating lemma]\label{lemma:4.4}
Let $\varepsilon\leq r<1$.
Assume the same conditions as in Theorem $\ref{thm:4.1}$.
Let $u_\varepsilon\in H^1(D_{2r};\mathbb{R}^m)$ be a weak solution of
$\mathcal{L}_\varepsilon(u_\varepsilon) = F$ in $D_{2r}$
and $\partial u_\varepsilon/\partial\nu_\varepsilon = g$ on $\Delta_{2r}$.
Let $v_\varepsilon = u_\varepsilon - \xi - \varepsilon\chi_0(x/\varepsilon)\xi$
for some $\xi\in\mathbb{R}^m$.
Then there exists
$v_0 = u_0-\xi\in H^1(D_{r};\mathbb{R}^m)$ such that
the equation $\eqref{eq:3.3}$ holds in $D_r$, and we have
\begin{equation}\label{pri:4.8}
\begin{aligned}
\Big(\dashint_{D_r} |v_\varepsilon - v_0|^2 \Big)^{1/2}
\leq C\left(\frac{\varepsilon}{r}\right)^{\rho}
\bigg\{
&\Big(\dashint_{D_{2r}}|u_\varepsilon-\xi|^2 \Big)^{1/2}
+ r^2\Big(\dashint_{D_{2r}} |F|^2\Big)^{1/2}
+ r\Big(\dashint_{\Delta_{2r}}|g|^2\Big)^{1/2} + r|\xi|\bigg\},
\end{aligned}
\end{equation}
where $\rho\in(0,1/2)$
and $C>0$ depend only on $\mu,\lambda,\kappa,d,m$ and the character of $\Omega$.
\end{lemma}
\begin{proof}
Here we need to employ Caccioppoli's inequality $\eqref{pri:4.7}$ for $v_\varepsilon$, and Corollary $\ref{cor:3.1}$.
The rest of the proof is the same as that of the previous lemma, and we omit it.
\end{proof}
Before we proceed further, we define $G(r,v)$ as follows:
\begin{equation}
\begin{aligned}
G(r,v) &= \frac{1}{r}\inf_{M\in\mathbb{R}^{m\times d}}
\Bigg\{\Big(\dashint_{D_r}|v-Mx-\tilde{c}|^2dx\Big)^{\frac{1}{2}}
+ r^2\Big(\dashint_{D_r}|F|^p\Big)^{\frac{1}{p}}
+ r^2\Big(\dashint_{D_r}|Mx+\tilde{c}|^p\Big)^{\frac{1}{p}} \\
&\qquad + r^2 |M| + r\Big\|g-\frac{\partial}{\partial\nu_0}\big(Mx+\tilde{c}\big)\Big\|_{L^\infty(\Delta_{r})}
+r^{1+\sigma}\Big[g-\frac{\partial}{\partial\nu_0}\big(Mx+\tilde{c}\big)\Big]_{C^{0,\sigma}(\Delta_r)}\Bigg\},
\end{aligned}
\end{equation}
where we set $\tilde{c} = u_0(0)$.
\begin{lemma}\label{lemma:5.5}
Let $u_0\in H^1(D_2;\mathbb{R}^m)$ be a solution of
$\mathcal{L}_0(u_0) = F$ in $D_2$ and
$\partial u_0/\partial\nu_0 = g$ on $\Delta_2$, where $g\in C^{0,\sigma}(\Delta_2;\mathbb{R}^m)$. Then
there exists $\theta\in(0,1/4)$, depending on $\mu,\kappa,\lambda,m,d$ and the character of $\Omega$,
such that
\begin{equation}\label{pri:5.10}
G(\theta r,u_0)\leq \frac{1}{2} G(r,u_0)
\end{equation}
holds for any $r\in(0,1)$.
\end{lemma}
\begin{proof}
We may assume $r=1$ by a rescaling argument. By the definition of $G(\theta,u_0)$, we see that
\begin{equation*}
\begin{aligned}
G(\theta,u_0) &\leq \frac{1}{\theta}
\bigg\{\Big(\dashint_{D_\theta}|u_0-M_0x-\tilde{c}|^2\Big)^{\frac{1}{2}}
+ \theta^2\Big(\dashint_{D_\theta}|F|^p\Big)^{\frac{1}{p}}
+ \theta^2\Big(\dashint_{D_\theta}|M_0 x+\tilde{c}|^p\Big)^{\frac{1}{p}}+\theta^2 |M_0|\\
&\quad
+ \theta\Big\|\frac{\partial}{\partial\nu_0}\big(u_0-M_0x-\tilde{c}\big)\Big\|_{L^\infty(\Delta_{\theta})}
+\theta^{1+\sigma}\Big[\frac{\partial}{\partial\nu_0}\big(u_0-M_0x-\tilde{c}\big)\Big]_{C^{0,\sigma}(\Delta_\theta)}\bigg\} \\
&\leq C\theta^{\sigma}\bigg\{\|u_0\|_{C^{1,\sigma}(D_{1/2})}
+\Big(\dashint_{D_{1/2}}|F|^pdx\Big)^{\frac{1}{p}}
\bigg\},
\end{aligned}
\end{equation*}
where we choose $M_0 = \nabla u_0(0)$.
For any
$M\in \mathbb{R}^{m\times d}$, we let $\tilde{u}_0 = u_0 - Mx-\tilde{c}$.
Obviously, it satisfies the equation:
\begin{equation*}
\mathcal{L}_0(\tilde{u}_0) = F -\mathcal{L}_0(Mx+\tilde{c})
\quad\text{in}~~D_2,\qquad \frac{\partial \tilde{u}_0}{\partial\nu_0}
= g - \frac{\partial}{\partial\nu_0}\big(Mx+\tilde{c}\big)\quad \text{on}~\Delta_2.
\end{equation*}
Hence, it follows from boundary Schauder estimates (see for example \cite[Lemma 2.19]{X1}) that
\begin{equation*}
\begin{aligned}
\big\|\tilde{u}_0\big\|_{C^{1,\sigma}(D_{1/2})}
\leq CG(1,u_0).
\end{aligned}
\end{equation*}
Note that
\begin{equation*}
\begin{aligned}
\|u_0\|_{C^{1,\sigma}(D_{1/2})} &\leq \big\|\tilde{u}_0\big\|_{C^{1,\sigma}(D_{1/2})}
+ |M| + \|Mx+\tilde{c}\|_{L^\infty(D_{1/2})}\\
&\leq \big\|\tilde{u}_0\big\|_{C^{1,\sigma}(D_{1/2})}
+ |M| + C\Big(\dashint_{D_1}|Mx+\tilde{c}|^p\Big)^{\frac{1}{p}}
\end{aligned}
\end{equation*}
where we use the fact that $Mx + \tilde{c}$ is harmonic in $\mathbb{R}^d$.
Combining the above estimates gives $G(\theta,u_0)\leq C\theta^{\sigma}G(1,u_0)$. Hence, choosing
$\theta\in(0,1/4)$ so small that $C\theta^{\sigma}\leq\frac{1}{2}$ yields
$G(\theta,u_0)\leq \frac{1}{2}G(1,u_0)$.
The desired result $\eqref{pri:5.10}$ then follows by a rescaling argument.
\end{proof}
For simplicity, we also denote $\Phi(r)$ by
\begin{equation*}
\begin{aligned}
\Phi(r) = \frac{1}{r}\bigg\{
\Big(\dashint_{D_{r}}|u_\varepsilon - \tilde{c}|^2 \Big)^{1/2}
+ r^2\Big(\dashint_{D_{r}} |F|^p\Big)^{1/p}
+ r\|g\|_{L^\infty(\Delta_r)} + r|\tilde{c}|\bigg\}.
\end{aligned}
\end{equation*}
\begin{lemma}\label{lemma:5.6}
Let $\rho$ be given in Lemma $\ref{lemma:4.1}$.
Assume the same conditions as in Theorem $\ref{thm:4.1}$.
Let $u_\varepsilon$ be the solution of
$\mathcal{L}_\varepsilon(u_\varepsilon) = F$ in $D_2$ with
$\partial u_\varepsilon/\partial\nu_\varepsilon= g$ on $\Delta_2$.
Then we have
\begin{equation}
G(\theta r, u_\varepsilon) \leq \frac{1}{2}G(r,u_\varepsilon)
+ C\left(\frac{\varepsilon}{r}\right)^\rho\Phi(2r)
\end{equation}
for any $r\in[\varepsilon,1/2]$, where $\theta\in(0,1/4)$ is given in Lemma $\ref{lemma:5.5}$.
\end{lemma}
\begin{proof}
Fix $r\in[\varepsilon,1/2]$ and let $w$ be a solution to
$\mathcal{L}_0(w) = F$ in $D_r$,
and $\partial w/\partial\nu_0 = \partial u_\varepsilon/\partial\nu_\varepsilon$ on
$\partial D_r$. Also, let $v_\varepsilon = u_\varepsilon - \tilde{c}
- \varepsilon\chi_0(x/\varepsilon)\tilde{c}$ and $v_0 = w - \tilde{c}$.
Then we obtain
\begin{equation*}
\begin{aligned}
G(\theta r,u_\varepsilon)
&\leq \frac{1}{\theta r}\Big(\dashint_{D_{\theta r}}|u_\varepsilon - w|^2 \Big)^{\frac{1}{2}}
+ G(\theta r, w) \\
&\leq \frac{C}{r}\Big(\dashint_{D_{r}}|u_\varepsilon - w|^2\Big)^{\frac{1}{2}}
+ \frac{1}{2}G(r, w)\\
&\leq \frac{1}{2}G(r, u_\varepsilon)
+ \frac{C}{r}\Big(\dashint_{D_{r}}|u_\varepsilon - w|^2\Big)^{\frac{1}{2}}\\
&\leq \frac{1}{2}G(r, u_\varepsilon)
+ \frac{C}{r}\Big(\dashint_{D_{r}}|v_\varepsilon - v_0|^2\Big)^{\frac{1}{2}}
+ C(\varepsilon/r)|\tilde{c}|\\
&\leq \frac{1}{2}G(r, u_\varepsilon) + C(\varepsilon/r)^\rho
\bigg\{
\frac{1}{r}\Big(\dashint_{D_{2r}}|u_\varepsilon-\tilde{c}|^2 \Big)^{1/2}
+ r\Big(\dashint_{D_{2r}} |F|^p\Big)^{1/p}
+ \|g\|_{L^\infty(\Delta_{2r})}+|\tilde{c}|\bigg\},
\end{aligned}
\end{equation*}
where we use the estimate $\eqref{pri:5.10}$ in the second inequality,
and $\eqref{pri:4.8}$ in the last one. The proof is complete.
\end{proof}
\begin{lemma}\label{lemma:4.2}
Let $\Psi(r)$ and $\psi(r)$ be two nonnegative continuous functions on the interval $(0,1]$.
Let $0<\varepsilon<\frac{1}{4}$. Suppose that there exists a constant $C_0$ such that
\begin{equation}\label{pri:4.2}
\left\{\begin{aligned}
&\max_{r\leq t\leq 2r} \Psi(t) \leq C_0 \Psi(2r),\\
&\max_{r\leq s,t\leq 2r} |\psi(t)-\psi(s)|\leq C_0 \Psi(2r),
\end{aligned}\right.
\end{equation}
and $0\leq c(2r) \leq C_0 c(1)$ for any $r\in[\varepsilon,1/2]$. We further assume that
\begin{equation}\label{pri:4.3}
\Psi(\theta r)\leq \frac{1}{2}\Psi(r) + C_0w(\varepsilon/r)\Big\{\Psi(2r)+\psi(2r)+c(2r)\Big\}
\end{equation}
holds for any $r\in[\varepsilon,1/2]$, where $\theta\in(0,1/4)$ and $w$ is a nonnegative
increasing function in $[0,1]$ such that $w(0)=0$ and
\begin{equation}\label{pri:4.5}
\int_0^1 \frac{w(t)}{t} dt <\infty.
\end{equation}
Then, we have
\begin{equation}\label{pri:4.4}
\max_{\varepsilon\leq r\leq 1}\Big\{\Psi(r)+\psi(r)\Big\}
\leq C\Big\{\Psi(1)+\psi(1)+c(1)\Big\},
\end{equation}
where $C$ depends only on $C_0, \theta$ and $w$.
\end{lemma}
\begin{proof}
Here we refer the reader to \cite[Lemma 8.5]{S5}. Although
we make a few modifications to it, the proof is almost the same.
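We only point out the role of the condition $\eqref{pri:4.5}$: for $w$ nonnegative and increasing,
$\eqref{pri:4.5}$ is equivalent to the summability
\begin{equation*}
\sum_{j=0}^{\infty} w(\theta^{j}) <\infty \qquad\text{for any fixed}~~\theta\in(0,1),
\end{equation*}
since $w(\theta^{j+1})\ln(1/\theta)\leq \int_{\theta^{j+1}}^{\theta^{j}}\frac{w(t)}{t}dt
\leq w(\theta^{j})\ln(1/\theta)$. This summability is precisely what is needed to absorb the error
terms accumulated when $\eqref{pri:4.3}$ is iterated along the scales $r=\theta^{j}$.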
\end{proof}
\noindent\textbf{Proof of Theorem $\ref{thm:4.1}$}.
We may assume $0<\varepsilon<1/4$; otherwise the result follows from the classical theory.
In view of Lemma $\ref{lemma:4.2}$,
we set $\Psi(r) = G(r,u_\varepsilon)$ and $w(t) =t^{\rho}$,
where $\rho>0$ is given in Lemma $\ref{lemma:4.1}$. In order to prove the desired estimate
$\eqref{pri:4.4}$, it is sufficient to verify $\eqref{pri:4.2}$ and $\eqref{pri:4.3}$.
Let $\psi(r) = |M_r|$, where $M_r$ is the matrix associated with $\Psi(r)$ in the sense that
\begin{equation*}
\begin{aligned}
\Psi(r) &= \frac{1}{r}
\Bigg\{\Big(\dashint_{D_r}|u_\varepsilon-M_r x - \tilde{c}|^2\Big)^{\frac{1}{2}}
+ r^2\Big(\dashint_{D_r}|F|^p\Big)^{\frac{1}{p}}+r^2|M_r|
+ r^2\Big(\dashint_{D_r}|M_rx+\tilde{c}|^p\Big)^{\frac{1}{p}}\\
&\qquad\quad + r\Big\|g-\frac{\partial}{\partial\nu_0}\big(M_rx+\tilde{c}\big)\Big\|_{L^\infty(\Delta_{r})}
+r^{1+\sigma}\Big[g-\frac{\partial}{\partial\nu_0}\big(M_r x + \tilde{c}\big)\Big]_{C^{0,\sigma}(\Delta_r)}\Bigg\},
\end{aligned}
\end{equation*}
Then we have
\begin{equation*}
\Phi(r) \leq C\Big\{\Psi(2r) + \psi(2r) + c(2r)\Big\},
\end{equation*}
where $c(2r) = \|g\|_{L^\infty(\Delta_{2r})} + |\tilde{c}|$.
This together with Lemma $\ref{lemma:5.6}$ gives
\begin{equation*}
\Psi(\theta r)\leq \frac{1}{2}\Psi(r) + C_0 w(\varepsilon/r)\Big\{\Psi(2r)+\psi(2r)+c(2r)\Big\},
\end{equation*}
which satisfies the condition $\eqref{pri:4.3}$ in Lemma $\ref{lemma:4.2}$.
Let $t,s\in [r,2r]$, and $v(x)=(M_t-M_s)x$. Clearly, $v$ is harmonic in $\mathbb{R}^d$.
Since $D_r$ satisfies the interior ball condition, we arrive at
\begin{equation}\label{f:5.15}
\begin{aligned}
|M_t-M_s|&\leq \frac{C}{r}\Big(\dashint_{D_r}|(M_t-M_s)x|^2\Big)^{\frac{1}{2}}\\
&\leq \frac{C}{t}
\Big(\dashint_{D_t}|u_\varepsilon - M_tx-\tilde{c}|^2\Big)^{\frac{1}{2}}
+ \frac{C}{s}\Big(\dashint_{D_s}|u_\varepsilon - M_sx-\tilde{c}|^2\Big)^{\frac{1}{2}}\\
&\leq C\Big\{\Psi(t)+\Psi(s)\Big\}\leq C\Psi(2r),
\end{aligned}
\end{equation}
where the second and the last steps use the fact that $s,t\in[r,2r]$. For the same reason, it
is easy to obtain $\Psi(r)\leq C\Psi(2r)$, where we use the assumption $p>d$.
Hence the estimate $\eqref{f:5.15}$ verifies the condition
$\eqref{pri:4.2}$. Besides, $w$ obviously satisfies the condition $\eqref{pri:4.5}$.
Hence, according to Lemma $\ref{lemma:4.2}$, for any $r\in[\varepsilon,1/4]$,
we have the following estimate
\begin{equation}\label{f:4.6}
\frac{1}{r}\Big(\dashint_{D_{2r}}|u_\varepsilon - \tilde{c}|^2 \Big)^{\frac{1}{2}}
\leq C\Big\{\Psi(2r) + \psi(2r)\Big\}
\leq C\Big\{\Psi(1) + \psi(1) + c(1)\Big\}.
\end{equation}
Hence, for $\varepsilon\leq r<1/4$, the desired estimate $\eqref{pri:4.6}$ follows from
$\eqref{f:4.6}$ and Caccioppoli's inequality $\eqref{pri:4.7}$:
\begin{equation*}
\begin{aligned}
\Big(\dashint_{D_r}|\nabla u_\varepsilon|^2\Big)^{1/2}
&\leq \Big(\dashint_{D_r}|\nabla v_\varepsilon|^2\Big)^{1/2} + C|\tilde{c}| \\
&\leq C\bigg\{\frac{1}{r}\Big(\dashint_{D_{2r}}|u_\varepsilon - \tilde{c}|^2\Big)^{1/2}
+ r\Big(\dashint_{D_{2r}}|F|^p\Big)^{1/p}
+ \|g\|_{L^\infty(\Delta_{2r})} + |\tilde{c}|\bigg\} \\
&\leq C\bigg\{\Big(\dashint_{D_{1}}|u_\varepsilon|^2\Big)^{1/2}
+ \Big(\dashint_{D_{2r}}|u_\varepsilon|^2\Big)^{1/2}
+ \Big(\dashint_{D_{1}}|F|^p\Big)^{1/p}
+ \|g\|_{C^{0,\sigma}(\Delta_{1})}\bigg\},
\end{aligned}
\end{equation*}
where $v_\varepsilon = u_\varepsilon - \tilde{c} - \varepsilon\chi_0(x/\varepsilon)\tilde{c}$,
and we also use the following estimate
\begin{equation}
\begin{aligned}
|\tilde{c}| = |u_0(0)| \leq C\Big(\dashint_{D_r}|u_0|^2\Big)^{1/2}
&\leq C\Big(\dashint_{D_r}|u_\varepsilon|^2\Big)^{1/2}
+ C\Big(\dashint_{D_r}|u_\varepsilon - u_0|^2\Big)^{1/2} \\
&\leq C\bigg\{\Big(\dashint_{D_{2r}}|u_\varepsilon|^2\Big)^{1/2}
+ \Big(\dashint_{D_{1}}|F|^p\Big)^{1/p}
+ \|g\|_{L^{\infty}(\Delta_{1})}\bigg\}
\end{aligned}
\end{equation}
in the last step, which is due to the estimate $\eqref{pri:4.1}$ and the fact $r\geq\varepsilon$.
We have completed the proof.
\qed
\noindent\textbf{Proof of Theorem $\ref{thm:1.0}$}.
By a rescaling argument we may prove $\eqref{pri:1.0}$ for $r=1$.
Let $u_\varepsilon = v_\varepsilon + w_\varepsilon$, where $v_\varepsilon,w_\varepsilon$ satisfy
\begin{equation*}
(1)\left\{\begin{aligned}
\mathcal{L}_\varepsilon(v_\varepsilon) &= F &\text{in}&~D_1,\\
\frac{\partial v_\varepsilon}{\partial\nu_\varepsilon} &= g
&\text{on}&~\Delta_1,
\end{aligned}\right.
\qquad
(2)\left\{\begin{aligned}
\mathcal{L}_\varepsilon(w_\varepsilon) &= \text{div}(f) &\text{in}&~D_1,\\
\frac{\partial w_\varepsilon}{\partial\nu_\varepsilon} &= -n\cdot f
&\text{on}&~\partial D_1,
\end{aligned}\right.
\end{equation*}
respectively. For (1), we claim that
\begin{equation}\label{f:4.30}
\|\nabla v_\varepsilon\|_{L^\infty(D_{1/2})}
\leq C\bigg\{\|v_\varepsilon\|_{L^2(D_{1})}+ \|F\|_{L^p(D_1)} + \|g\|_{C^{0,\sigma}(\Delta_1)}\bigg\},
\end{equation}
where $C$ depends on $\mu,\tau,\kappa,\lambda,p,\sigma$ and the character of $\Omega$.
In terms of $(2)$, it follows from the global Lipschitz estimate \cite[Theorem 1.2]{X2} that
\begin{equation}\label{f:4.31}
\|\nabla w_\varepsilon\|_{L^\infty(D_{1})}
\leq C\|f\|_{C^{0,\sigma}(D_1)}.
\end{equation}
Combining the estimates $\eqref{f:4.30}$ and $\eqref{f:4.31}$ leads to
the stated estimate $\eqref{pri:1.0}$,
in which we also need $H^1$ estimate for $w_\varepsilon$ (see for example \cite[Lemma 3.1]{X2}).
We now turn to prove the estimate $\eqref{f:4.30}$.
Let $\Delta_{1/2}\subset\cup_{i=1}^{N_0} B(Q_i,r)\subset \Delta_{2/3}$ for $Q_i\in\partial\Omega$ and
some $0<r<1$.
Let $\tilde{v}_\varepsilon = v_\varepsilon - \xi -\varepsilon\chi_0(x/\varepsilon)\xi$,
which satisfies the equation $\eqref{eq:3.1}$ in $D(Q_i,r)$.
By translation we may assume $Q_i = 0$. Then it follows from the
classical boundary Lipschitz estimate (see \cite[Lemma 2.19]{X1}) that
\begin{equation}
\begin{aligned}
\|\nabla \tilde{v}_\varepsilon\|_{L^\infty(D_{\varepsilon/2})}
&\leq C\bigg\{\frac{1}{\varepsilon}\Big(\dashint_{D_\varepsilon}|v_\varepsilon
- \xi|^2\Big)^{1/2}
+ |\xi| + \mathcal{R}(F,0,g;\varepsilon)\bigg\} \\
&\leq C\bigg\{\Big(\dashint_{D_\varepsilon}|\nabla v_\varepsilon|^2\Big)^{1/2}
+ \Big(\dashint_{D_\varepsilon}|v_\varepsilon|^2\Big)^{1/2} + \mathcal{R}(F,0,g;1)\bigg\} \\
&\leq C\bigg\{\Big(\dashint_{D_{1}}|v_\varepsilon|^2\Big)^{1/2}
+ \mathcal{R}(F,0,g;1)\bigg\},
\end{aligned}
\end{equation}
where we choose $\xi = \dashint_{D_\varepsilon} v_\varepsilon$ in the first line, and the
second step follows from Poincar\'e's inequality and the fact that $p>d$. In the last step,
the estimate $\eqref{pri:4.6}$ and the uniform H\"older estimate $\eqref{pri:2.3}$ have been employed.
Finally, a simple covering argument proves the stated estimate $\eqref{f:4.30}$,
and the whole proof is complete.
\qed
\section{Neumann functions}
Let $\mathbf{\Gamma}_\varepsilon(x,y)$ denote the matrix of fundamental solutions of
$\mathcal{L}_\varepsilon$ in $\mathbb{R}^d$, with pole at $y$.
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy
$\eqref{a:1}$, $\eqref{a:2}$, $\eqref{a:3}$ and $\eqref{a:4}$ with $\lambda\geq\max\{\mu,\lambda_0\}$.
Then one may use \cite[Theorem 1.1]{X2} to show that for $d\geq 3$,
\begin{equation}\label{pri:5.2}
\begin{aligned}
\big|\mathbf{\Gamma}_\varepsilon(x,y)\big|&\leq C|x-y|^{2-d} \\
\big|\nabla_x\mathbf{\Gamma}_\varepsilon(x,y)\big|
+ \big|\nabla_y\mathbf{\Gamma}_\varepsilon(x,y)\big| &\leq C|x-y|^{1-d},
\end{aligned}
\end{equation}
where $C$ depends only on $\mu,\kappa,\lambda,\tau,m,d$. Let $U_\varepsilon(x,y)$ be the solution of
\begin{equation}\label{pde:5.1}
\left\{\begin{aligned}
\mathcal{L}_\varepsilon(U_\varepsilon^\beta(\cdot,y)) &= 0 &\quad&\text{in}~~\Omega,\\
\frac{\partial}{\partial\nu_\varepsilon}\big(U_\varepsilon^\beta(\cdot,y)\big)
& = \frac{\partial}{\partial\nu_\varepsilon}\big(\mathbf{\Gamma}_\varepsilon^\beta(\cdot,y)\big)
&\quad&\text{on}~\partial\Omega,
\end{aligned}
\right.
\end{equation}
where $\mathbf{\Gamma}_\varepsilon^\beta(x,y)
= \big(\mathbf{\Gamma}_\varepsilon^{1\beta}(x,y),\cdots,\mathbf{\Gamma}_\varepsilon^{m\beta}(x,y)\big)$.
We now define
\begin{equation}
\mathbf{N}_\varepsilon(x,y) = \mathbf{\Gamma}_\varepsilon(x,y) - U_\varepsilon(x,y)
\end{equation}
for $x,y\in\Omega$. Note that, with
$\mathbf{N}_\varepsilon^\beta(x,y) = \mathbf{\Gamma}_\varepsilon^\beta(x,y) - U_\varepsilon^\beta(x,y)$, we have
\begin{equation}
\left\{\begin{aligned}
\mathcal{L}_\varepsilon\big(\mathbf{N}_\varepsilon^\beta(\cdot,y)\big) &= e^\beta\delta_y
&\quad&\text{in}~~\Omega,\\
\frac{\partial}{\partial\nu_\varepsilon}\big(\mathbf{N}_\varepsilon^\beta(\cdot,y)\big)
& = 0 &\quad&\text{on}~\partial\Omega,
\end{aligned}\right.
\end{equation}
where $\delta_y$ denotes the Dirac delta function with pole at $y$.
We will call $\mathbf{N}_\varepsilon(x,y)$ the matrix of Neumann functions for
$\mathcal{L}_\varepsilon$ in $\Omega$.
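We remark that, at least formally, the Neumann function yields the representation
\begin{equation*}
u_\varepsilon(x) = \int_{\Omega}\mathbf{N}_\varepsilon(x,y)F(y)dy
+ \int_{\partial\Omega}\mathbf{N}_\varepsilon(x,y)g(y)dS(y)
\end{equation*}
for the weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) = F$ in $\Omega$ with
$\partial u_\varepsilon/\partial\nu_\varepsilon = g$ on $\partial\Omega$; no compatibility condition
on $F$ and $g$ is required here, since the assumption $\lambda\geq\max\{\mu,\lambda_0\}$ guarantees
the solvability of the Neumann problem.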
\begin{lemma}
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy $\eqref{a:1}$,
$\eqref{a:2}$, $\eqref{a:3}$ and $\eqref{a:4}$ with $\lambda\geq\max\{\mu,\lambda_0\}$.
Let $U_\varepsilon(x,y)$ be defined by $\eqref{pde:5.1}$. Then there holds
\begin{equation}\label{pri:5.5}
|U_\varepsilon(x,y)|\leq C\big[\delta(x)\big]^{\frac{2-d}{2}}\big[\delta(y)\big]^{\frac{2-d}{2}}
\end{equation}
for any $x,y\in\Omega$, where $\delta(x)= \emph{dist}(x,\partial\Omega)$ and $C$ depends on
$\mu,d,m$ and $\Omega$.
\end{lemma}
\begin{proof}
Fix $y\in\Omega$, and let $w_\varepsilon(x) = U_\varepsilon(x,y)$. In view of $\eqref{pde:5.1}$ we have
\begin{equation}\label{f:5.1}
\|w_\varepsilon\|_{H^1(\Omega)}
\leq C\big\|\frac{\partial w_\varepsilon}{\partial\nu_\varepsilon}\big\|_{H^{-\frac{1}{2}}(\partial\Omega)}
\leq C\big\|\frac{\partial w_\varepsilon}{\partial\nu_\varepsilon}\big\|_{L^{p}(\partial\Omega)},
\end{equation}
where $p= \frac{2(d-1)}{d}$. On account of the estimates $\eqref{pri:5.2}$,
\begin{equation}\label{f:5.2}
\begin{aligned}
\big\|\frac{\partial w_\varepsilon}{\partial\nu_\varepsilon}\big\|_{L^{p}(\partial\Omega)}
&\leq C\Big\{\int_{\partial\Omega}\frac{dS(x)}{|x-y|^{p(d-1)}}\Big\}^{1/p} \\
&\leq C\bigg\{\int_{\delta(y)}^{\infty}s^{p(1-d)+d-2}ds
+ \int_{B(y,2\delta(y))\cap\partial\Omega}\frac{dS(x)}{|x-y|^{p(d-1)}}\bigg\}^{1/p}
\leq C\big[\delta(y)\big]^{\frac{2-d}{2}}.
\end{aligned}
\end{equation}
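Let us verify the exponent in $\eqref{f:5.2}$: since $p=\frac{2(d-1)}{d}$ and $d\geq 3$, we have
\begin{equation*}
p(1-d)+d-1 = \frac{(d-1)(2-d)}{d} < 0,
\end{equation*}
so the first integral converges and is bounded by $C[\delta(y)]^{\frac{(d-1)(2-d)}{d}}$; the second
term admits the same bound, since $|x-y|\geq\delta(y)$ for $x\in\partial\Omega$ and the surface
measure of $B(y,2\delta(y))\cap\partial\Omega$ is at most $C[\delta(y)]^{d-1}$. Raising to the power
$1/p = \frac{d}{2(d-1)}$ gives exactly $[\delta(y)]^{\frac{2-d}{2}}$.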
Also, it follows from interior estimate \cite[Corollary 3.5]{X0} that
\begin{equation*}
\begin{aligned}
|w_\varepsilon(x)|&\leq C\Big(\dashint_{B(x,\delta(x))}|w_\varepsilon|^{2^*}\Big)^{1/2^*}
\leq C\big[\delta(x)\big]^{\frac{2-d}{2}}\big\|w_\varepsilon\big\|_{H^1(\Omega)}
\leq C\big[\delta(x)\big]^{\frac{2-d}{2}}\big[\delta(y)\big]^{\frac{2-d}{2}},
\end{aligned}
\end{equation*}
where we use the estimates $\eqref{f:5.1}$ and $\eqref{f:5.2}$ in the last inequality,
and this ends the proof.
\end{proof}
\begin{thm}
Let $\Omega\subset\mathbb{R}^d$ be a bounded $C^{1,\tau}$ domain.
Suppose that the coefficients of $\mathcal{L}_\varepsilon$ satisfy $\eqref{a:1}$,
$\eqref{a:2}$, $\eqref{a:3}$ and $\eqref{a:4}$ with $\lambda\geq\max\{\mu,\lambda_0\}$. Then
\begin{equation}\label{pri:5.3}
\big|\mathbf{N}_\varepsilon(x,y)\big|\leq C|x-y|^{2-d}
\end{equation}
and for any $\sigma\in(0,1)$,
\begin{equation}\label{pri:5.4}
\begin{aligned}
\big|\mathbf{N}_\varepsilon(x,y)-\mathbf{N}_\varepsilon(z,y)\big|
&\leq C_\sigma\frac{|x-z|^\sigma}{|x-y|^{d-2+\sigma}},\\
\big|\mathbf{N}_\varepsilon(y,x)-\mathbf{N}_\varepsilon(y,z)\big|
&\leq C_\sigma\frac{|x-z|^\sigma}{|x-y|^{d-2+\sigma}},
\end{aligned}
\end{equation}
where $|x-z|<|x-y|/4$.
\end{thm}
\begin{proof}
Due to the boundary H\"older's estimate $\eqref{pri:2.12}$, it suffices to prove
the estimate $\eqref{pri:5.3}$. By the estimate $\eqref{pri:5.5}$,
\begin{equation}\label{f:3.5}
\big|\mathbf{N}_\varepsilon(x,y)\big|\leq C\Big\{|x-y|^{2-d}
+\big[\delta(x)\big]^{2-d}+\big[\delta(y)\big]^{2-d}\Big\}.
\end{equation}
Then let $r=|x-y|$, and it follows from the estimates $\eqref{pri:2.3}$, $\eqref{f:3.5}$ that
\begin{equation*}
\begin{aligned}
\big|\mathbf{N}_\varepsilon(x,y)\big|
&\leq C\bigg\{\dashint_{B(x,r/4)\cap\Omega}|\mathbf{N}_\varepsilon(z,y)|^sdz\bigg\}^{1/s} \\
&\leq C\Big\{|x-y|^{2-d}+[\delta(y)]^{2-d}\Big\},
\end{aligned}
\end{equation*}
where we choose $s>0$ such that $s(d-2)<1$. Using the estimate $\eqref{pri:2.3}$ again, the
above estimate leads to
\begin{equation*}
\big|\mathbf{N}_\varepsilon(x,y)\big|
\leq C\bigg\{\dashint_{B(y,r/4)\cap\Omega}|\mathbf{N}_\varepsilon(x,z)|^sdz\bigg\}^{1/s}
\leq C|x-y|^{2-d},
\end{equation*}
and we have completed the proof.
\end{proof}
\begin{thm}
Let $\Omega\subset\mathbb{R}^d$ be a bounded $C^{1,\tau}$ domain. Suppose that
the coefficients of $\mathcal{L}_\varepsilon$ satisfy $\eqref{a:1}$, $\eqref{a:2}$,
$\eqref{a:3}$ and $\eqref{a:4}$ with $\lambda\geq\max\{\mu,\lambda_0\}$.
Let $x_0,y_0,z_0\in\Omega$ be such that $|x_0-z_0|<|x_0-y_0|/4$. Then for any $\sigma\in(0,1)$,
\begin{equation}\label{pri:5.1}
\bigg(\dashint_{B(y_0,\rho/4)\cap\Omega}
\big|\nabla_y\big\{\mathbf{N}_\varepsilon(x_0,y)-\mathbf{N}_\varepsilon(z_0,y)\big\}\big|^2dy\bigg)^{1/2}
\leq C\rho^{1-d}\Big(\frac{|x_0-z_0|}{\rho}\Big)^{\sigma},
\end{equation}
where $\rho = |x_0-y_0|$ and $C$ depends only on $\mu,\kappa,\lambda,\tau,m,d,\sigma$
and the character of $\Omega$.
\end{thm}
\begin{proof}
Let $f\in C_0^\infty(B(y_0,\rho/2)\cap\Omega)$, and
\begin{equation*}
u_\varepsilon(x) = \int_{\Omega}\mathbf{N}_\varepsilon(x,y)f(y)dy.
\end{equation*}
Then $\mathcal{L}_\varepsilon(u_\varepsilon) = f$ in $\Omega$ and
$\partial u_\varepsilon/\partial\nu_\varepsilon = 0$ on $B(x_0,\rho/2)\cap\partial\Omega$. It then
follows from the boundary H\"older estimate and interior estimates that
\begin{equation}
\big|u_\varepsilon(x_0)-u_\varepsilon(z_0)\big|
\leq C\Big(\frac{|x_0-z_0|}{\rho}\Big)^{\sigma}
\bigg\{\dashint_{B(x_0,\rho/2)\cap\Omega}|u_\varepsilon|^2\bigg\}^{1/2}.
\end{equation}
On the other hand, for any $x\in B(x_0,\rho/2)\cap\Omega$, we obtain
\begin{equation}
\big|u_\varepsilon(x)\big|\leq C\rho^2 \bigg\{\dashint_{B(y_0,\rho/2)\cap\Omega}|f|^2 dy\bigg\}^{1/2},
\end{equation}
and this implies
\begin{equation}
\Big\{\dashint_{B(x_0,\rho/2)\cap\Omega}|u_\varepsilon|^2 dx\Big\}^{1/2}
\leq C\rho^{2-d/2}\big\|f\big\|_{L^2(\Omega)}.
\end{equation}
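To pass from these bounds to a bound on the Neumann function itself, note the duality relation
\begin{equation*}
u_\varepsilon(x_0)-u_\varepsilon(z_0)
= \int_{\Omega}\big\{\mathbf{N}_\varepsilon(x_0,y)-\mathbf{N}_\varepsilon(z_0,y)\big\}f(y)dy,
\end{equation*}
which holds by the definition of $u_\varepsilon$; taking the supremum over
$f\in C_0^\infty(B(y_0,\rho/2)\cap\Omega)$ with $\|f\|_{L^2(\Omega)}\leq 1$ then converts the
estimates above into an $L^2$ bound on $\mathbf{N}_\varepsilon(x_0,\cdot)-\mathbf{N}_\varepsilon(z_0,\cdot)$.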
Thus we obtain that
\begin{equation}\label{f:5.3}
\bigg(\dashint_{B(y_0,\rho/2)\cap\Omega}
\big|\mathbf{N}_\varepsilon(x_0,y)-\mathbf{N}_\varepsilon(z_0,y)\big|^2dy\bigg)^{1/2}
\leq C\rho^{2-d}\Big(\frac{|x_0-z_0|}{\rho}\Big)^{\sigma},
\end{equation}
and the stated estimate $\eqref{pri:5.1}$ follows from Caccioppoli's inequality $\eqref{pri:2.12}$. We
have completed the proof.
\end{proof}
\section{The proof of Theorem $\ref{thm:1.1}$}
In the case of $p=2$, the estimate $\eqref{pri:1.1}$ has been established
in \cite[Theorem 1.6]{X2}. For $2<p<\infty$, according to Theorem $\ref{thm:2.1}$, it suffices to
establish the following reverse H\"older inequality.
\begin{lemma}\label{lemma:6.1}
Let $\Omega$ be a bounded $C^{1,\eta}$ domain. Suppose the coefficients of $\mathcal{L}_\varepsilon$
satisfy $\eqref{a:1}$, $\eqref{a:2}$, $\eqref{a:3}$ and $\eqref{a:4}$ with $\lambda\geq\lambda_0$ and
$A=A^*$.
For any $Q\in\partial\Omega$ and $0<r<1$,
let $u_\varepsilon\in H^1(B(Q,3r)\cap\partial\Omega;\mathbb{R}^m)$ be the weak solution of
$\mathcal{L}_\varepsilon(u_\varepsilon) = 0$ in $B(Q,3r)\cap\Omega$, and
$\partial u_\varepsilon/\partial\nu_\varepsilon = 0$ on $B(Q,3r)\cap\partial\Omega$.
Then we have
\begin{equation}\label{pri:6.1}
\sup_{B(Q,r)\cap\partial\Omega}|(\nabla u_\varepsilon)^*|
\leq C\bigg\{\Big(\dashint_{B(Q,2r)\cap\partial\Omega}|(\nabla u_\varepsilon)^*|^2\Big)^{1/2}
+\Big(\dashint_{B(Q,2r)\cap\partial\Omega}|(u_\varepsilon)^*|^2\Big)^{1/2}\bigg\},
\end{equation}
where $C$ depends on $\mu,\tau,\kappa,\lambda,m,d$ and the character of $\Omega$.
\end{lemma}
\begin{proof}
The main idea may be found in \cite[Lemma 9.1]{KFS1}, but we need some
new techniques to derive $\eqref{pri:6.1}$, which additionally involve the so-called Neumann correctors defined
in \cite{X1}, i.e.,
\begin{equation*}
-\text{div}\big[A(x/\varepsilon)\nabla \Psi_{\varepsilon,0}\big] = \text{div}(V(x/\varepsilon))
\quad \text{in}~\Omega,
\qquad n\cdot A(x/\varepsilon)\nabla\Psi_{\varepsilon,0} = n\cdot\big(\widehat{V}-V(x/\varepsilon)\big)
\quad \text{on}~\partial\Omega.
\end{equation*}
The purpose is to establish the following boundary estimate
\begin{equation}\label{pri:6.2}
\|\nabla u_\varepsilon\|_{L^\infty(D_{r/2})}
\leq C\bigg\{\Big(\dashint_{D_r}|\nabla u_\varepsilon|^2\Big)^{1/2}
+\Big(\dashint_{D_r}|u_\varepsilon|^2\Big)^{1/2}\bigg\},
\end{equation}
where $C$ depends on $\mu,\tau,\kappa,\lambda,m,d$ and $\Omega$.
First of all, we consider $\varepsilon\leq r<1$.
Let $v_\varepsilon = u_\varepsilon - \Psi_{\varepsilon,0}\xi$ for some $\xi\in\mathbb{R}^m$, and then we have
\begin{equation}\label{pde:6.1}
\left\{\begin{aligned}
\mathcal{L}_\varepsilon(v_\varepsilon) &= \text{div}\big[V_\varepsilon(\Psi_{\varepsilon,0}-I)\xi\big]
-B_\varepsilon\nabla\Psi_{\varepsilon,0}\xi - (c_\varepsilon+\lambda I)\Psi_{\varepsilon,0}\xi
\quad&\text{in}& ~ B(Q,3r)\cap\Omega,\\
\frac{\partial v_\varepsilon}{\partial\nu_\varepsilon}
& = n\cdot V_\varepsilon(I-\Psi_{\varepsilon,0})\xi - n\cdot\widehat{V}\xi
\quad &\text{on}&~B(Q,3r)\cap\partial\Omega,
\end{aligned}\right.
\end{equation}
where $V_\varepsilon(x) = V(x/\varepsilon)$ and $c_\varepsilon(x)=c(x/\varepsilon)$. Note that
\begin{equation}\label{f:6.1}
\|\Psi_{\varepsilon,0}-I\|_{L^\infty(\Omega)}\leq C\varepsilon,
\qquad \|\Psi_{\varepsilon,0}\|_{W^{1,\infty}(\Omega)}\leq C,
\end{equation}
where $C$ depends on $\mu,\kappa,\tau,m,d$ and $\Omega$. The above results have been proved in
\cite[Theorem 4.2]{X1} and Remark $\ref{re:6.1}$.
Applying the boundary estimate $\eqref{pri:1.0}$ to the equation $\eqref{pde:6.1}$, we have
\begin{equation*}
\begin{aligned}
\|\nabla v_\varepsilon\|_{L^\infty(D_{r/2})}
&\leq C\bigg\{\frac{1}{r}\Big(\dashint_{D_r}|v_\varepsilon|^2\Big)^{1/2}
+|\xi|\bigg\} \\
&\leq C\bigg\{\frac{1}{r}\Big(\dashint_{D_r}|u_\varepsilon-\xi|^2\Big)^{1/2}
+\frac{\varepsilon}{r}|\xi|+|\xi|\bigg\}\\
&\leq C\bigg\{\frac{1}{r}\Big(\dashint_{D_r}|u_\varepsilon-\xi|^2\Big)^{1/2}+|\xi|\bigg\}\\
&\leq C\bigg\{\Big(\dashint_{D_r}|\nabla u_\varepsilon|^2\Big)^{1/2}
+\Big(\dashint_{D_r}|u_\varepsilon|^2\Big)^{1/2}\bigg\}
\end{aligned}
\end{equation*}
where we use the estimate $\eqref{f:6.1}$ in the first and second inequalities, and
the fact $r\geq \varepsilon$ in the third one. In the last step, we choose
$\xi=\dashint_{D_r} u_\varepsilon$ and employ Poincar\'e's inequality.
The above estimate implies $\eqref{pri:6.2}$ for $\varepsilon\leq r<1$.
In the case of $0<r<\varepsilon$,
let $v_\varepsilon = u_\varepsilon - \xi - \varepsilon\chi_0(x/\varepsilon)\xi$
for some $\xi\in\mathbb{R}^m$, and $v_\varepsilon$ satisfies the equation
$\eqref{eq:3.1}$ by setting $F=g=0$. Again, by choosing $\xi=\dashint_{D_r} u_\varepsilon$,
and the estimate $\eqref{pri:1.0}$, one may derive the estimate $\eqref{pri:6.2}$
for $0<r<\varepsilon$.
Recall the definition of the nontangential maximal function; we have
\begin{equation*}
(\nabla u_\varepsilon)^*(P) = \max\{\mathcal{M}_{r,1}(\nabla u_\varepsilon)(P),
\mathcal{M}_{r,2}(\nabla u_\varepsilon)(P)\},
\end{equation*}
where
\begin{equation}\label{def:6.1}
\begin{aligned}
\mathcal{M}_{r,1}(\nabla u_\varepsilon)(P)
&=\sup\big\{|\nabla u_\varepsilon(x)|:x\in\Omega,~|x-P|\leq c_0r,
~|x-P|\leq N_0\text{dist}(x,\partial\Omega)\big\},\\
\mathcal{M}_{r,2}(\nabla u_\varepsilon)(P)
&=\sup\big\{|\nabla u_\varepsilon(x)|:x\in\Omega,~|x-P|> c_0r,
~|x-P|\leq N_0\text{dist}(x,\partial\Omega)\big\}.
\end{aligned}
\end{equation}
Here $c_0>0$ is a small constant.
We first handle the estimate for $\mathcal{M}_{r,1}(\nabla u_\varepsilon)$.
It follows from the estimate $\eqref{pri:6.2}$ that
\begin{equation}\label{f:6.2}
\begin{aligned}
\sup_{B(Q,r)\cap\partial\Omega}\mathcal{M}_{r,1}(\nabla u_\varepsilon)
&\leq \sup_{B(Q,3r/2)\cap\partial\Omega}|\nabla u_\varepsilon| \\
&\leq C\bigg\{\Big(\dashint_{D_{2r}}|\nabla u_\varepsilon|^2\Big)^{1/2}
+\Big(\dashint_{D_{2r}}|u_\varepsilon|^2\Big)^{1/2}\bigg\} \\
&\leq C\bigg\{\Big(\dashint_{\Delta_{2r}}|(\nabla u_\varepsilon)^*|^2\Big)^{1/2}
+\Big(\dashint_{\Delta_{2r}}|(u_\varepsilon)^*|^2\Big)^{1/2}\bigg\}.
\end{aligned}
\end{equation}
Similarly, one may derive the following interior Lipschitz estimate
\begin{equation*}
|\nabla u_\varepsilon(x)|\leq
C\bigg\{\Big(\dashint_{B(x,\delta(x)/10)}|\nabla u_\varepsilon|^2\Big)^{1/2}
+\Big(\dashint_{B(x,\delta(x)/10)}|u_\varepsilon|^2\Big)^{1/2}\bigg\}
\end{equation*}
from \cite[Theorem 4.4]{X0}, and this will give
\begin{equation}\label{f:6.3}
\begin{aligned}
\sup_{B(Q,r)\cap\partial\Omega}\mathcal{M}_{r,2}(\nabla u_\varepsilon)
\leq C\bigg\{\Big(\dashint_{\Delta_{2r}}|(\nabla u_\varepsilon)^*|^2\Big)^{1/2}
+\Big(\dashint_{\Delta_{2r}}|(u_\varepsilon)^*|^2\Big)^{1/2}\bigg\}.
\end{aligned}
\end{equation}
Combining the estimates $\eqref{f:6.2}$ and $\eqref{f:6.3}$ consequently leads to
the stated estimate $\eqref{pri:6.1}$. This completes the proof.
\end{proof}
\begin{remark}\label{re:6.1}
\emph{Here we give a sketch of the proof of the first estimate in $\eqref{f:6.1}$.
Let $H_{\varepsilon,0} = \Psi_{\varepsilon,0} - I -\varepsilon\chi_0(x/\varepsilon)$. Then it satisfies
$L_\varepsilon(H_{\varepsilon,0}) = 0$ in $\Omega$ and
$\partial H_{\varepsilon,0}/\partial n_\varepsilon = \sum_{ij}
\big(n_i\frac{\partial}{\partial x_j}-n_j\frac{\partial}{\partial x_i}\big)g_{ij}$ on $\partial\Omega$
with $\|g_{ij}\|_{L^\infty(\Omega)}\leq C\varepsilon$ (the computation may be found in
\cite[Lemma 4.4]{X1}),
where $\partial/\partial n_\varepsilon = n\cdot A(x/\varepsilon)\nabla$ denotes
the conormal derivative operator associated with $L_\varepsilon$. According to the proof of \cite[Lemma 4.3]{X1}, one may have
\begin{equation*}
|H_{\varepsilon,0}(x) - H_{\varepsilon,0}(y)|\leq C\varepsilon
\end{equation*}
for any $x,y\in\Omega$. Thus, it is not hard to see that
\begin{equation*}
\|H_{\varepsilon,0}\|_{L^\infty(\Omega)}\leq C\varepsilon + C\|H_{\varepsilon,0}\|_{L^2(\Omega)}.
\end{equation*}
Note that
\begin{equation*}
\|H_{\varepsilon,0}\|_{L^2(\Omega)}
\leq C\|(H_{\varepsilon,0})^*\|_{L^2(\partial\Omega)}
\leq C\|H_{\varepsilon,0}\|_{L^2(\partial\Omega)} \leq C\varepsilon,
\end{equation*}
and the last inequality is due to a duality argument (see \cite{KFS2}).
Let $\phi_\varepsilon\in H^1(\Omega;\mathbb{R}^m)$ be a solution of
$L_\varepsilon(\phi_\varepsilon) = 0$ in $\Omega$, and
$\partial\phi_\varepsilon/\partial n_\varepsilon = f$ on $\partial\Omega$ with
$\int_{\partial\Omega}f\,dS = 0$. Then
\begin{equation*}
\Big|\int_{\partial\Omega}H_{\varepsilon,0}f dS\Big|
=\Big|\int_{\partial\Omega} g_{ij}\Big(n_i\frac{\partial}{\partial x_j}
-n_j\frac{\partial}{\partial x_i}\Big)\phi_\varepsilon dS\Big|
\leq C\varepsilon\|f\|_{L^2(\partial\Omega)},
\end{equation*}
where we use the Rellich estimate
$\|\nabla_{\text{tan}}\phi_\varepsilon\|_{L^2(\partial\Omega)}\leq C\|f\|_{L^2(\partial\Omega)}$
(see \cite{KS1}). Thus the above estimates imply
\begin{equation*}
\|\Psi_{\varepsilon,0} - I\|_{L^\infty(\Omega)}
\leq \|H_{\varepsilon,0}\|_{L^\infty(\Omega)} + C\varepsilon \leq C\varepsilon.
\end{equation*}
We mention that the above estimate additionally relies on the symmetry condition $A=A^*$, which
actually improves the corresponding result in \cite[Theorem 4.2]{X1}.}
\end{remark}
One may study the solutions of the $L^2$ Neumann problem with atomic data
$\partial u_\varepsilon/\partial \nu_\varepsilon = a$ on $\partial\Omega$,
where $\int_{\partial\Omega} a(x) dS = 0$, and $\text{supp}(a)\subset B(Q,r)\cap\partial\Omega$
for some $Q\in\partial\Omega$ and $0<r<r_0$, and $\|a\|_{L^\infty(\partial\Omega)}\leq r^{1-d}$.
In fact, the stated estimate $\eqref{pri:1.1}$ for $1<p<2$ follows by interpolation from the following
result.
\begin{thm}\label{thm:6.1}
Let $a$ be an atom on $\Delta_r$ with $0<r<r_0$. Suppose that
$u_\varepsilon$ is a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) = 0$ in $\Omega$
with $\partial u_\varepsilon/\partial \nu_\varepsilon = a$ on $\partial\Omega$. Then we have
the following estimate
\begin{equation}\label{pri:5.6}
\int_{\partial\Omega} (\nabla u_\varepsilon)^* dS\leq C,
\end{equation}
where $C$ depends only on $\mu,\kappa,\lambda,d,m$ and $\Omega$.
\end{thm}
\begin{proof}
The main ideas of the proof may be found in \cite[pp.932-933]{S1},
as well as in \cite[Lemma 2.7]{DK}. Clearly, the integral on the left-hand side of
$\eqref{pri:5.6}$ may be divided into
\begin{equation*}
\int_{\partial\Omega} (\nabla u_\varepsilon)^* dS
= \bigg\{
\int_{B(Q,Cr)\cap\partial\Omega} + \int_{\partial\Omega\setminus B(Q,Cr)}\bigg\}
(\nabla u_\varepsilon)^*dS,
\end{equation*}
and it follows from the $L^2$ estimate (\cite[Theorem 1.6]{X2}) and H\"older's inequality that
\begin{equation}\label{f:5.6}
\int_{B(Q,Cr)\cap\partial\Omega} (\nabla u_\varepsilon)^* dS
\leq Cr^{\frac{d-1}{2}}\big\|(\nabla u_\varepsilon)^*\big\|_{L^2(\partial\Omega)}
\leq Cr^{\frac{d-1}{2}}\|a\|_{L^2(\partial\Omega)} \leq C,
\end{equation}
where we also use the assumption $\|a\|_{L^\infty(\partial\Omega)}\leq r^{1-d}$.
Let $\rho = |P_0-Q|\geq Cr$, and one may show
\begin{equation}\label{f:5.5}
\int_{B(P_0,c\rho)\cap\partial\Omega} (\nabla u_\varepsilon)^* dS
\leq C\Big(\frac{r}{\rho}\Big)^{\sigma}
\end{equation}
for some $\sigma>0$. Since $\int_{\partial\Omega} a(x)dS = 0$, we have the formula
\begin{equation*}
u_\varepsilon(x) = \int_{B(Q,r)\cap\partial\Omega}
\Big\{\mathbf{N}_\varepsilon(x,y)-\mathbf{N}_\varepsilon(x,Q)\Big\}a(y)dS(y),
\end{equation*}
and it follows that
\begin{equation*}
|\nabla u_\varepsilon(x)|
\leq C\dashint_{B(Q,r)\cap\partial\Omega}
\big|\nabla_x\big\{\mathbf{N}_\varepsilon(x,y)-\mathbf{N}_\varepsilon(x,Q)\big\}\big|dS(y).
\end{equation*}
Note that for any $z\in\Omega$ satisfying $c\rho\leq |z-P|<N_0\delta(z)$ for some
$P\in B(P_0,c\rho)\cap\partial\Omega$, it follows from interior
Lipschitz estimates (based upon \cite[Theorem 4.4]{X0} coupled with the techniques in
the proof of Lemma $\ref{lemma:6.1}$), together with
$\eqref{pri:5.1}$ and $\eqref{f:5.3}$, that
\begin{equation*}
\begin{aligned}
|\nabla u_\varepsilon(z)|
&\leq C \bigg(\dashint_{B(z,c\delta(z))}|\nabla u_\varepsilon|^2 dx\bigg)^{1/2}
+ C\delta(z) \dashint_{B(z,c\delta(z))} |u_\varepsilon| dx \\
&\leq C\dashint_{B(Q,r)\cap\partial\Omega}
\bigg(\dashint_{B(z,c\delta(z))}
\big|\nabla_x\big\{\mathbf{N}_\varepsilon(x,y)-\mathbf{N}_\varepsilon(x,Q)\big\}\big|^2
dx\bigg)^{1/2}dS(y) \\
& + C\delta(z)
\dashint_{B(Q,r)\cap\partial\Omega}
\bigg(\dashint_{B(z,c\delta(z))}
\big|\mathbf{N}_\varepsilon(x,y)-\mathbf{N}_\varepsilon(x,Q)\big|^2
dx\bigg)^{1/2}dS(y) \\
& \leq C\rho^{1-d}\Big(\frac{r}{\rho}\Big)^{\sigma}
+ C\delta(z)\rho^{2-d}\Big(\frac{r}{\rho}\Big)^{\sigma}
\leq C\rho^{1-d}\Big(\frac{r}{\rho}\Big)^{\sigma},
\end{aligned}
\end{equation*}
where we use
Minkowski's inequality in the second step and the fact that $(c\rho)/N_0<\delta(z)<r_0$ in the last one.
According to the definition of $\mathcal{M}_{\rho,2}$ in $\eqref{def:6.1}$, we have
\begin{equation}\label{f:5.4}
\int_{B(P_0,c\rho)\cap\partial\Omega}\mathcal{M}_{\rho,2}(\nabla u_\varepsilon) dS
\leq C\Big(\frac{r}{\rho}\Big)^{\sigma}.
\end{equation}
For any $\theta\in[1,3/2]$, note that
$\mathcal{L}_\varepsilon(u_\varepsilon) = 0$ in $\Omega$ and
$\partial u_\varepsilon/\partial\nu_\varepsilon = 0$ on $B(P_0,\theta c\rho)\cap\partial\Omega$.
In terms of the $L^2$ nontangential maximal function estimate \cite[Theorem 1.7]{X2}, we have
\begin{equation*}
\int_{B(P_0,\theta c\rho)\cap\partial\Omega} |\mathcal{M}_{\rho,1}(\nabla u_\varepsilon)|^2 dS
\leq C\int_{\partial D_{\theta c\rho}\setminus\partial\Omega}|\nabla u_\varepsilon|^2 dS.
\end{equation*}
Integrating in $\theta$ on $[1,3/2]$ yields
\begin{equation*}
\int_{B(P_0,c\rho)\cap\partial\Omega}
|\mathcal{M}_{\rho,1}(\nabla u_\varepsilon)|^2 dS
\leq \frac{C}{\rho}
\int_{D_{2c\rho}}|\nabla u_\varepsilon|^2 dx,
\end{equation*}
and
\begin{equation*}
\int_{B(P_0,c\rho)\cap\partial\Omega}
\mathcal{M}_{\rho,1}(\nabla u_\varepsilon) dS
\leq C\rho^{d-1}\bigg(\dashint_{B(P_0,2c\rho)\cap\Omega}|\nabla u_\varepsilon|^2\bigg)^{1/2}
\leq C\Big(\frac{r}{\rho}\Big)^\sigma.
\end{equation*}
This together with $\eqref{f:5.4}$ gives the estimate $\eqref{f:5.5}$.
Consequently, the desired estimate $\eqref{pri:5.6}$ follows from
the estimates $\eqref{f:5.6}$ and $\eqref{f:5.5}$ by a covering argument. This completes the proof.
\end{proof}
\noindent\textbf{Proof of Theorem $\ref{thm:1.1}$}.
The desired estimate
for $\|(\nabla u_\varepsilon)^*\|_{L^p(\partial\Omega)}$ in $\eqref{pri:1.1}$
is based upon Theorem $\ref{thm:2.1}$,
Lemma $\ref{lemma:6.1}$ and
Theorem $\ref{thm:6.1}$. In view of Lemmas $\ref{lemma:2.3}$ and $\ref{lemma:2.4}$, one may derive
\begin{equation*}
\|(u_\varepsilon)^*\|_{L^p(\partial\Omega)}
\leq C\|\mathcal{M}(u_\varepsilon)\|_{L^p(\partial\Omega)}
\leq C\|u_\varepsilon\|_{W^{1,p}(\Omega)}
\leq C\|g\|_{L^{p}(\partial\Omega)},
\end{equation*}
where we use $W^{1,p}$ estimate (see \cite[Theorem 1.1]{X1}) in the last step.
The proof is complete.
\qed
\begin{center}
\textbf{Acknowledgements}
\end{center}
The authors thank Prof. Zhongwei Shen for very helpful discussions
regarding this work when he visited Peking University.
The first author also appreciates his constant and illuminating instruction.
The first author was supported by the China Postdoctoral Science Foundation (Grant No. 2017M620490), and
the second author was supported by the National Natural Science Foundation of China
(Grant No. 11571020).
\section{Introduction}
The circulation of money is generally studied in an abstract sense, for example as the extent to which monetary policy, productivity improvements, supply disruptions, or other shocks affect aggregate indicators of economic output~\cite{nakamura_identification_2018,mcnerney_how_2022,carvalho_supply_2016,acemoglu_network_2012}. Detailed observation has long been impractical for lack of empirical data. However, modern payment infrastructure is increasingly digital~\cite{adrian_rise_2019}, and the circulation of money is leaving real-time records on the servers of financial institutions worldwide. These transaction records offer especially high granularity in time and in space, and open up the possibility of fine-grained data-driven studies of financial ecosystems~\cite{frankova_transaction_2014,alessandretti_anticipating_2018,aladangady_transactions_2019,bouchaud_trades_2018,mattsson_trajectories_2021,bardoscia_physics_2021,carvalho_tracking_2021}. In this paper we consider the question of how best to study the \emph{circulation} of money as observed in transaction records. We argue that \emph{networks of monetary flow} are a suitable representation for patterns of circulation over a period of time. Our findings show that techniques in network science --- in particular walk-based community detection, measures of cyclic structure, network mixing patterns, and walk-based centrality metrics --- together capture key aspects of circulation within a real-world currency system. We demonstrate that important practical and theoretical questions around the circulation of money can be studied using networks of monetary flow.
The main focus of this paper is on complementary currencies whose modern implementations produce comprehensive digital records---cases where transaction records are available for an entire currency system. Complementary currencies circulate in parallel to national currencies in that tokens are \textit{not} legal tender, nor necessarily exchangeable for legal tender~\cite{stodder_complementary_2009,lietaer_complementary_2004,ussher_complementary_2021}; they are used under mutual agreements that come in many forms, from local community currencies~\cite{muralt_woergl_1934,kichiji_network_2008,frankova_transaction_2014} to global cryptocurrencies~\cite{nakamoto_bitcoin_2008,kondor_rich_2014,elbahrawy_evolutionary_2017}. Sardex, for example, is a digital complementary currency used among businesses in Sardinia. Digital records of transactions in Sardex have been studied to show that cycle motifs are related to performance and stability of the currency system~\cite{iosifidis_cyclic_2018,fleischman_balancing_2020}. The full transaction histories of Bitcoin and other cryptocurrencies can be reconstructed from public ledgers~\cite{ober_structure_2013,meiklejohn_fistful_2016,zhang_heuristic-based_2020}. Bitcoin transactions reveal a currency system that supports substantial trade outside centralized marketplaces, but where inequality has been increasing over time~\cite{nadini_emergence_2022,kondor_rich_2014}. Sarafu, the currency considered in this work, is a ``Community Inclusion Currency'' (CIC) that incorporates elements of both community currencies and cryptocurrencies~\cite{mattsson_sarafu_2022}.
Digital administrative records of the Sarafu CIC from January 2020 to June 2021 have been published by Grassroots Economics (GE)~\cite{ruddick_sarafu_2021}. GE is a non-profit foundation based in Kenya that operates Sarafu and leads related economic development projects in marginalized and food-insecure areas of the country. What began as several local, physical community currencies was progressively digitized and then brought together onto a single platform, as Sarafu. The observation period began as this consolidation occurred, at which point Sarafu was available throughout Kenya. Mimicking the well-developed mobile payment infrastructure of the national currency~\cite{mbogo_impact_2010,stuart_cash_2011,mbiti_mobile_2011,suri_mobile_2017,koblanck_digital_2018,baah_state_2021}, each Sarafu account was tied to a Kenyan mobile number and accessible over a mobile interface. An account could be created with an activation code sent to a particular mobile number, then used and managed via a series of simple menus. The resulting digital records became a dataset that includes anonymized account information for tens of thousands of users and records of hundreds of thousands of Sarafu transactions. Previously, the published dataset has been described in raw form~\cite{mattsson_sarafu_2022} and used in a case study introducing CICs as a modality for humanitarian aid~\cite{ussher_complementary_2021}.
In the context of community currencies, circulation is a crucial measure of economic impact---these currencies are typically created with the aim to support local economic activity~\cite{muralt_woergl_1934,kichiji_network_2008,ruddick_eco-pesa_2011,iosifidis_cyclic_2018}. We detail transaction volumes in Sarafu over time and then study the resulting circulation of Sarafu as a network of monetary flow among around 40,000 regular users. This weighted, directed, time-aggregated network captures the patterns of circulation in intricate detail, allowing us to study what shapes the Sarafu currency system as a whole. Anonymized information on account holders allows us to label each node with a geographic area, livelihood category, registration date, and reported gender. We apply network analysis techniques to the Sarafu flow network to answer three research questions with important implications:
\textit{Among whom is Sarafu circulating?} The Sarafu user base grew rapidly over the observation period, especially as the COVID-19 pandemic disrupted regular economic activities. We summarize the resulting patterns of circulation using a community detection method developed especially for flow networks. Specifically, the map equation framework and the associated Infomap algorithm~\cite{rosvall_maps_2008,ding_community_2014} group nodes into modules that capture as much volume as possible. Since the link weights of the Sarafu flow network reflect observed flows of money, the discovered modules signal sub-populations within which Sarafu was \emph{circulating}. We go on to investigate the composition of these sub-populations.
\textit{What network structures support the circulation of Sarafu?} Degree disassortativity has been noted in a variety of economic networks~\cite{fujiwara_large-scale_2010,kondor_rich_2014,mattsson_functional_2021,campajola_microvelocity_2022} in that high-degree nodes generally transact with low-degree nodes. It has also been noted that network cycles may be key to the `health' of currency systems and of individual accounts\cite{iosifidis_cyclic_2018}. Indeed, detecting cycles and brokering `missing' financial connections is seen by private actors as a promising credit clearing and risk management service~\cite{fleischman_balancing_2020,fleischman_liquidity-saving_2020}. In a similar vein, Ussher et al.\cite{ussher_complementary_2021} argue that community currencies compare favorably to cash assistance as an economic development intervention because they help establish economic connections that keep money local. We study the network structure underlying the observed circulation of Sarafu using several suitable network analysis techniques. Specifically, network assortativity measures and the density of cycles.
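To make the role of cycles concrete, the following stdlib-Python sketch (on invented toy data, not the Sarafu records, and not the cycle census of the cited work) flags the volume able to participate in circulation: an edge can return money toward its source only if its endpoints are mutually reachable, so the weight on such edges bounds the volume that can flow back to where it came from.

```python
from collections import defaultdict, deque

# Toy directed flow network; weights are monetary volumes (hypothetical).
edges = {("A", "B"): 80.0, ("B", "C"): 20.0, ("C", "A"): 10.0, ("C", "D"): 40.0}

adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)

def reachable(src):
    """Set of nodes reachable from src by a directed walk (BFS)."""
    seen, queue = {src}, deque([src])
    while queue:
        for v in adj[queue.popleft()]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

reach = {n: reachable(n) for n in {x for e in edges for x in e}}

# An edge u -> v lies on a directed cycle iff u is reachable from v.
cyclic_volume = sum(w for (u, v), w in edges.items() if u in reach[v])
total_volume = sum(edges.values())
print(cyclic_volume, total_volume)  # 110.0 150.0
```

Here the edge into the sink node D carries volume that can never circulate back, illustrating why `missing' connections matter for currency health.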
\textit{What characterizes the most prominent Sarafu users?} We would like to understand patterns in who holds Sarafu accounts that are especially prominent, or perhaps even systematically important. Prominent users are identified by means of a network centrality measure that is directly related to the circulation of Sarafu, as captured by a network of monetary flow. Specifically, weighted PageRank~\cite{page_pagerank_1999} computes a metric that corresponds to the share of funds a given account would control, at any given time, if the observed dynamics were to continue indefinitely. We calibrate this measure against empirical account balances and use it to investigate the account features most associated with prominent users.
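The walk-based metric can be sketched in a few lines of stdlib Python (the network and weights below are invented for illustration): a walker follows out-edges with probability proportional to monetary flow, with the usual damping.

```python
# Weighted PageRank by power iteration on a toy flow network.
# Every node here has at least one out-edge, so this sketch omits
# the dangling-node correction a general implementation would need.
edges = {("A", "B"): 80.0, ("B", "C"): 20.0, ("C", "A"): 10.0, ("C", "B"): 30.0}
nodes = sorted({n for e in edges for n in e})
d = 0.85  # damping factor

out_strength = {n: 0.0 for n in nodes}
for (u, _), w in edges.items():
    out_strength[u] += w

rank = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(200):
    new = {n: (1 - d) / len(nodes) for n in nodes}
    for (u, v), w in edges.items():
        new[v] += d * rank[u] * w / out_strength[u]
    rank = new

print({n: round(r, 3) for n, r in rank.items()})  # B ranks above C above A
```

The resulting scores sum to one and can be read as long-run shares of walker time, i.e., the share of funds each account would control if the observed dynamics continued indefinitely.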
Our results indicate that circulation was modular and geographically localized, occurring within particular areas and among users with diverse livelihoods. Moreover, using network analysis, we confirm the intuitive notion that circulation requires cycles. This implies that community currencies can help support specific areas during periods of economic stress, so long as local economic activities are sufficiently diverse and adoption is sufficiently coordinated as to allow cycles to emerge. This has concrete implications for humanitarian policy in marginalized areas, in that rapid deployment may be necessary and impact can be expected to be higher in areas with a mix of economic activities already present. Community currencies also support localized economic development over longer periods of time~\cite{stodder_complementary_2009,iosifidis_cyclic_2018,ussher_complementary_2021}. We find that community-based financial institutions, and, in a few cases, faith leaders, are especially prominent among Sarafu users. Furthermore, these local ``hubs'' play a key structural role in that the network underlying Sarafu is consistently degree-disassortative.
The findings presented in this paper provide a fine-grained understanding of the circulation of Sarafu over a highly dynamic period that includes the arrival of the COVID-19 pandemic to Kenya. Our work demonstrates how networks of monetary flow capture key features of circulation. Moreover, walk-based and cycle-based network analysis are interpretable methods for understanding the underlying currency system. Noteworthy is that the methodology presented in this paper can be applied to study any currency system where digital transaction records are available. Indeed, there appear to be important regularities in the network structure underlying the circulation of money in such systems, and these would be well worth exploring further.
The remainder of this paper is organized as follows. The \nameref{sec:data} section briefly describes the Sarafu system over this especially tumultuous period. The \nameref{sec:results} section presents our findings on patterns of circulation, prominent users, and the network structure underlying circulation. We synthesize these contributions and discuss the implications of our findings in the \nameref{sec:discussion} section. Finally, the \nameref{sec:methods} section details the data preparation, network analysis measures, and statistical methods used in this study and provides references to facilitate data, code, and software availability.
\section{Data}\label{sec:data}
Sarafu expanded dramatically as the COVID-19 pandemic arrived in Kenya, growing from 8,354 registered accounts in January 2020 to almost 55,000 in June 2021. Figure~\ref{fig:sarafu} shows the transaction volumes for each of the complete months over the observation period. Beginning in April 2020 and continuing through the second wave of COVID-19 in Kenya, Sarafu saw transaction volumes almost ten times higher than in February 2020. This dramatic expansion occurred primarily in particular regions, described below, and we see a return towards the baseline in these areas by the end of the observation period. The overall pattern is perhaps best explained by the counter-cyclical nature of complementary currencies, which are known to see spikes in usage levels during periods of economic disruption~\cite{stodder_complementary_2009,stodder_macro-stability_2016,zeller_economic_2020}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\textwidth]{figures/volume_monthly_detail.pdf}
\caption{Monthly transaction volumes in total, and in each geographic area (shown at two different scales).}
\label{fig:sarafu}
\end{figure}
The figures in this work employ a consistent color scheme for geographic area. Purple corresponds to \emph{Kinango Kwale}, a rural area where GE has had a substantial presence for many years; this area saw much growth during the COVID-19 pandemic due largely to word of mouth. Light green is \emph{Mukuru Nairobi}, an urban area that was the site of a targeted introduction beginning in March 2020. For details we refer to the \nameref{sec:methods:data} section. Accounts located elsewhere in \emph{Nairobi} are shown in dark green. In light blue are accounts in \emph{Kisauni Mombasa}, the site of a second introduction beginning in early 2021. Accounts located elsewhere in \emph{Mombasa} are shown in dark blue. \emph{Kilifi}, in dark grey, is the county where GE is headquartered. Users with an unknown location (the largest category within \emph{other}) are in light grey. In Figure~\ref{fig:sarafu}, \emph{other} closely tracks the remote rural county of \emph{Turkana}, in orange. Teal and red correspond to locations in \emph{Nyanza} county or elsewhere in rural Kenya, respectively.
\section{Results} \label{sec:results}
The Sarafu system supported over 400,000 transactions among more than 40,000 regular accounts between January 2020 and June 2021. This resulted in the circulation of 293.7 million Sarafu, visualized in Figure~\ref{fig:network} as a network of monetary flow. The \emph{nodes} are registered accounts, for which we know attributes such as the geographic area, livelihood category, and reported gender of the account holder. An \emph{edge} from one account to another indicates that at least one transaction occurred across that link. The \emph{edge weight} corresponds to the observed monetary flow along an edge, i.e., the total sum of transaction amounts across that link. The Sarafu flow network is a \emph{weighted, directed, time-aggregated network representation} of the total circulation over the observation period, excluding system-run accounts. For details on the construction of the network, we refer to the \nameref{sec:methods:data} section of \nameref{sec:methods}. The network visualization employs the same colors for geographic area as does Figure~\ref{fig:sarafu}, revealing patterns suggestive of modular and geographically localized circulation.
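As a minimal sketch of this aggregation step (the transaction tuples below are invented, not Sarafu records), edge weights are obtained by summing amounts over each ordered pair of accounts:

```python
from collections import defaultdict

# Hypothetical transaction log: (sender, receiver, amount).
transactions = [
    ("A", "B", 50.0),
    ("B", "C", 20.0),
    ("A", "B", 30.0),  # repeat transfers along a link are summed
    ("C", "A", 10.0),
]

# Edge weight = total monetary flow across each directed link.
flow = defaultdict(float)
for sender, receiver, amount in transactions:
    flow[(sender, receiver)] += amount

print(dict(flow))  # {('A', 'B'): 80.0, ('B', 'C'): 20.0, ('C', 'A'): 10.0}
```

Note that the direction of each edge follows the money, so the resulting weighted digraph is exactly a time-aggregated network of monetary flow.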
\begin{figure}[ht!]
\centering
\includegraphics[width=6in]{figures/viz_textres.png}
\caption{Visualization of the Sarafu flow network. Nodes are colored by the geographic area of the location reported for the account (see Figure~\ref{fig:sarafu} for legend), and node size is proportional to the value of unweighted PageRank as computed for that node.}
\label{fig:network}
\end{figure}
In the remainder of this section, we share findings resulting from network analysis of the Sarafu flow network. The \nameref{sec:results:flow} section considers sub-populations within which Sarafu was circulating, and their composition along lines of geographic area and livelihood category. In the \nameref{sec:results:structure} section, we consider the network structure that supports this circulation, including analyses of cyclic density and network mixing patterns. The \nameref{sec:results:hubs} section compares relevant network centrality measures and describes the most prominent users of Sarafu.
\subsection{Modular circulation} \label{sec:results:flow}
To more precisely understand the patterns of circulation present in the Sarafu flow network, we apply an especially suitable community detection method. The map equation~\cite{rosvall_maps_2008} is defined in terms of flow networks and the associated Infomap algorithm~\cite{ding_community_2014} groups nodes into hierarchical \emph{modules}. Specifically, Infomap assigns nodes to modules (and sub-modules) within which a ``random walker'' on the network would stay for relatively long periods of time. In our case, the weights on the edges of the Sarafu flow network reflect real, observed flows of Sarafu and so the Infomap algorithm will seek to discover modules that contain especially high transaction volume. This identifies sub-populations within which Sarafu tended to \emph{circulate}. For details about these methods, we refer to the \nameref{sec:methods:analysis:circ} section.
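The intuition behind this criterion can be illustrated with a small stdlib-Python sketch (toy weights, two planted modules joined by deliberately weak links): a weight-proportional random walker, once inside a well-chosen module, should rarely leave it in a single step.

```python
from collections import defaultdict

# Toy flow network: two three-node modules bridged by weak links C->D and F->A.
edges = {
    ("A", "B"): 30, ("B", "A"): 10, ("B", "C"): 25, ("C", "A"): 30, ("C", "D"): 5,
    ("D", "E"): 30, ("E", "D"): 10, ("E", "F"): 25, ("F", "D"): 30, ("F", "A"): 5,
}
nodes = sorted({n for e in edges for n in e})
out_w = defaultdict(float)
for (u, _), w in edges.items():
    out_w[u] += w

# Stationary distribution of the weight-proportional walk (power iteration).
pi = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(2000):
    nxt = {n: 0.0 for n in nodes}
    for (u, v), w in edges.items():
        nxt[v] += pi[u] * w / out_w[u]
    pi = nxt

def persistence(module):
    """Probability that a walker currently inside `module` is still
    inside after one more step of the walk."""
    inside = sum(pi[n] for n in module)
    stay = sum(pi[u] * w / out_w[u]
               for (u, v), w in edges.items() if u in module and v in module)
    return stay / inside

p_left = persistence({"A", "B", "C"})
p_right = persistence({"D", "E", "F"})
print(round(p_left, 3), round(p_right, 3))  # both about 0.96
```

High per-step persistence is what makes a partition cheap to encode under the map equation, which is why Infomap's modules correspond to pools of circulating volume.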
The Infomap algorithm recovers a hierarchical, nested, modular structure to the Sarafu flow network. The hierarchical structure consists of top-level modules, sub-modules and sub-sub-modules at respectively the first, second and third level of the community hierarchy. Circulation of the Sarafu community currency was highly modular in that activity occurred almost exclusively within distinct sub-populations. At the first hierarchical level, 99.7\% of the total transaction volume was contained within the five largest so-called \emph{top-level modules}. Moreover, there are 37 \emph{sub-modules} composed of 100 or more accounts and these contained 96.5\% of the total transaction volume. Only a small share of the overall circulation took place between the sub-populations defined at the second hierarchical level, and circulation within these sub-populations itself had a nested, modular structure. Indeed, the 455 \emph{sub-sub-modules} composed of 10 or more accounts capture 80\% of the total transaction volume. Altogether, these findings suggest that the circulation of Sarafu was extremely modular over the observed period.
\subsubsection*{Geographic localization} \label{sec:results:geog}
We investigate the extent to which the distinct sub-populations discovered above correspond to geographic location, as reported in the account dataset described in the \nameref{sec:data} section. Figure~\ref{fig:matrix} shows the geographic composition of the top-level modules---four of the five map directly onto one of the main areas labeled in the data: \textit{Kinango Kwale}, \textit{Mukuru Nairobi}, \textit{Kisauni Mombasa}, or \textit{Turkana}. Only one of the modules has substantial membership from several regions; its sub-modules are, however, also geographically delineated. This top-level module combines several less prominent localities, including in \emph{Kilifi}, in \emph{Nyanza}, and in two localities elsewhere in \emph{Nairobi}. We conclude that the circulation of Sarafu was geographically localized over the observed period.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figures/modules.pdf}
\caption{Geographic composition of the five largest top-level modules and relevant numbered sub-modules.}
\label{fig:matrix}
\end{figure}
The top-level modules are amalgamations of circulation within sub-modules, which appear to correspond to geographic areas more granular than those labeled in the data. Indeed, raw reported locations were often quite precise and were converted to broader area labels in the anonymization that occurred prior to the publication of the data~\cite{mattsson_sarafu_2022}. Several of the sub-modules highlighted in Figure~\ref{fig:matrix} coincide with areas where early, physical community currencies were operating in the years before Sarafu became all-digital~\cite{marion_voucher_2018}. Within \emph{Kinango Kwale}, moreover, the sub-modules likely correspond to individual rural villages or clusters of villages~\cite{ussher_complementary_2021}. Thus, circulation was predominantly geographically local. We will further consider the sub-populations delineated by the Infomap sub-modules in subsequent analyses.
\subsubsection{Diversity of economic activities}
\label{sec:results:uses}
Now that we understand the modular structure and geographic localization of circulation, we consider the composition of the localized sub-populations with respect to economic activity. This is of particular interest to practitioners as it helps illustrate \emph{among whom} Sarafu was circulating. There are 14 categories of economic activities into which user-reported livelihoods were grouped, the most common of which are \emph{labour} in urban areas and \emph{farming} in rural areas. Many other users (in both urban and rural areas) report selling \emph{food}, running a \emph{shop}, or providing \emph{transport}.
Most notably, we see a mix of the different economic activities within the largest second-level sub-populations. Figure~\ref{fig:uses} illustrates the livelihood category given for each account in the 15 largest sub-modules identified by the Infomap algorithm. To give a sense of how this diversity is experienced within sub-populations, we compute and report the view from the average user. The average user participates in a sub-module with around 2000 other users; of these others, 66\% report a category of work different from their own. Little diversity is lost as we consider even finer scales. The average user appears in a sub-sub-module with around 250 other users, 59\% of whom do not share the same livelihood category. We conclude that the circulation of Sarafu involves a diversity of economic activities, even at the scale of a single village.
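The ``view from the average user'' described above can be computed directly from a module partition and each member's livelihood label. The following is a minimal illustrative sketch (the function name and input format are our own, not taken from the paper's published code):

```python
from collections import Counter

def avg_outside_share(modules):
    """Average, over all users, of the share of *other* members in a
    user's module who report a different livelihood label.

    `modules` is a list of modules; each module is a list of the
    livelihood labels of its members."""
    total, n_users = 0.0, 0
    for members in modules:
        counts = Counter(members)
        size = len(members)
        if size < 2:          # no "others" to compare against
            continue
        for label in members:
            # others with a different label / all others
            total += (size - counts[label]) / (size - 1)
            n_users += 1
    return total / n_users
```

For example, a module `["farming", "farming", "labour", "shop"]` yields an average outside share of 5/6, since each farmer sees 2 of 3 others differing while the labourer and shopkeeper each see all 3 others differing.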
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{figures/mod_2-business_type.pdf}
\caption{Composition of discovered sub-modules (bars) in terms of user livelihoods (colors, as shown in legend).}
\label{fig:uses}
\end{figure}
We also see that the composition of the sub-populations using Sarafu differs substantively between urban and rural areas. In Figure~\ref{fig:uses}, the sub-modules where \emph{farming} or \emph{fuel/energy} are prominent are rural and composed of users reporting a location within \textit{Kinango Kwale}, almost exclusively. Those where \emph{labour} is prominent correspond to sub-populations localized primarily in urban or peri-urban areas including \textit{Mukuru Nairobi}, \textit{Kisauni Mombasa}, and \textit{Kilifi}. Thus, the type of geographic area further refines the geographic pattern of circulation.
\subsection{Underlying network structure} \label{sec:results:structure}
In this section, we consider the network structure underlying the circulation of Sarafu. Each of the sub-modules considered above in the~\nameref{sec:results:flow} section is associated with not just a sub-population of 100 or more accounts, but also a sub-network of 100 or more nodes. An (unweighted) edge from one account to another indicates that at least one transaction occurred across that edge. Node degree corresponds to an account's number of unique transaction partners, incoming and outgoing, in their same sub-population. In the \nameref{sec:results:structure:cycles} section we count the cycles present in the sub-networks, relating the presence of cycles to the notion of circulation and the sustainable operation of complementary currency systems. Next, the \nameref{sec:results:structure:mixing} section quantifies network mixing patterns, relating degree disassortativity to the structural importance of local ``hubs'' in the sub-networks.
\subsubsection{Cyclic density} \label{sec:results:structure:cycles}
Network cycles may be key to understanding the conditions under which an area is, or over time becomes, able to sustain local circulation\cite{iosifidis_cyclic_2018,fleischman_balancing_2020,ussher_complementary_2021}. We explore the presence of cycles in the Sarafu sub-networks using $k$-cycle density\cite{iosifidis_cyclic_2018}. This measure quantifies, on a log scale, how much the number of cycles in the empirical network exceeds the expectation under a null model. We use two of the most common null models, as in prior work: Erdős-Rényi (ER) networks and randomized degree-preserving (RD) networks. ER networks have the same number of nodes and edges as the empirical network, but are wired randomly. RD networks preserve the indegree and outdegree sequences. For details we refer to the \nameref{sec:methods:analysis:cycles} section of \nameref{sec:methods}.
Figure~\ref{fig:kcycle_submod} shows the cycle densities computed for each of the Sarafu sub-networks. The $k$-cycle density has values mostly in the range from $3$ to $6$ for cycles of length $2$ and $3$, indicating that the empirical networks have orders of magnitude more cycles than do the null models. Moreover, the $k$-cycle density appears to be even higher for longer cycles of length $4$ and $5$. These findings are closely in line with prior results computed for the Sardex currency in Sardinia\cite{iosifidis_cyclic_2018}. Notably, this is the case even though the currency management practices followed by the two providers are quite distinct\cite{iosifidis_cyclic_2018,mattsson_sarafu_2022}. Based on these findings, we can conclude that cycles are a prominent network connectivity pattern in the circulation of (community) currencies.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{figures/scatter_complete_reduced.pdf}
\caption{Values of $k$-cycle density for each sub-module, at different cycle lengths. Points correspond to sub-modules, and are colored by the dominant geographic area of users in the top-level module to which each sub-module belongs.}
\label{fig:kcycle_submod}
\end{figure}
\subsubsection{Structural correlations}
\label{sec:results:structure:mixing}
Degree disassortativity is an expected feature of currency networks specifically~\cite{kondor_rich_2014,campajola_microvelocity_2022} and of economic networks more broadly~\cite{fujiwara_large-scale_2010,mattsson_functional_2021}. In networks with this property, high-degree nodes generally interact with low-degree nodes, not other high-degree nodes, and local ``hubs'' play a key structural role. Recall also that Sarafu sub-modules are diverse with respect to the livelihood reported by accounts (\nameref{sec:results:uses} section). Here we consider these and other structural correlations that help us better understand the circulation of Sarafu. Since the overall influence of account attributes on the Sarafu flow network is limited by the constraints of geography, and may be heterogeneous across sub-populations, we consider degree and attribute assortativities across the Sarafu sub-networks. For details we refer to the \nameref{sec:methods:analysis:mix} section of \nameref{sec:methods}.
Table~\ref{tab:assortativity} reports the average, the median, and the range for each property as computed on the undirected version of each sub-network. We find substantial disassortativity in degree across nearly all sub-networks. As expected, we also find that attribute assortativity is consistently low along the dimension of livelihood category. The consistency of these observations across sub-populations suggests that there may be important regularities in the structural correlations of networks that support the circulation of money.
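Degree assortativity is the Pearson correlation between the degrees found at the two endpoints of each edge; negative values indicate disassortative mixing. A minimal sketch of this computation on an undirected edge list (illustrative code, not the paper's implementation):

```python
import math

def degree_assortativity(edges):
    """Pearson correlation of degrees at the two ends of each undirected
    edge. Each edge contributes both orientations, as is standard."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)
```

A star graph, in which one hub connects only to degree-one leaves, is maximally disassortative and returns $-1$; the strongly negative degree assortativities in Table~\ref{tab:assortativity} point toward hub-like structure of this kind.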
\begin{table}[h]
\centering
\caption{Network statistics and feature assortativity across sub-modules with 100 or more nodes.}
\label{tab:assortativity}
\begin{tabular}{l|rrr|rrrrr}
& \multicolumn{3}{|l}{Network Statistics} & \multicolumn{5}{|l}{Assortativity} \\
& Nodes & Edges & Volume & Business & Gender & Registration & Degree & W. Degree \\ \hline
mean & 1021 & 2636 & 7.67m & 0.032 & 0.146 & 0.154 & -0.215 & -0.066 \\
std & 1082 & 3789 & 10.83m & 0.047 & 0.188 & 0.273 & 0.119 & 0.143 \\
min & 136 & 170 & 0.01m & -0.104 & -0.081 & -0.323 & -0.448 & -0.428 \\
25\% & 222 & 544 & 0.59m & 0.003 & 0.017 & -0.072 & -0.265 & -0.168 \\
50\% & 537 & 1151 & 2.99m & 0.029 & 0.121 & 0.147 & -0.221 & -0.096 \\
75\% & 1522 & 3513 & 11.05m & 0.058 & 0.208 & 0.322 & -0.152 & 0.012 \\
max & 4286 & 20458 & 43.64m & 0.121 & 1.000 & 0.845 & 0.247 & 0.269
\end{tabular}
\end{table}
Correlations with respect to \emph{gender} and \emph{registration date} in the structure of the sub-networks can also be substantial, although these effects are not consistent across sub-populations. Again from Table~\ref{tab:assortativity}, attribute assortativity on gender is present in about half of the 37 sub-populations and substantial in several. This may be related to the activity of community-based savings and investment groups, where women's participation is high\cite{avanzo_relational_2019,rasulova_impact_2017}. Within Sarafu, such groups provide opportunities to transact assortatively on gender. Gender assortativity in payment networks may also reflect, for instance, gendered economic roles in ways that deserve further study. Strong correlations in registration date also appear in several sub-networks, indicating a cohort effect. For example, during targeted introductions as described in the \nameref{sec:methods:data} section, groups of users who share latent economic ties would together adopt Sarafu over a relatively short period of time. Correlations by cohort are likely to appear in any digital payment system where adoption and use are voluntary.
\subsection{Prominent Sarafu users} \label{sec:results:hubs}
Local hubs play a key structural role in the circulation of Sarafu, and it is important to understand who takes on such prominent positions. We ask what features are especially consistent among accounts with high network centrality, now across the entire Sarafu flow network. In the \nameref{sec:results:hubs:centrality} section we consider an account's number of unique transaction partners, transaction volumes, and additional measures for computing network centrality. Next, the \nameref{sec:results:hubs:users} section asks what features of Sarafu accounts are strongly and consistently associated with high network centrality.
\subsubsection{Identifying prominent users} \label{sec:results:hubs:centrality}
As a first step towards understanding prominent Sarafu users, we consider distributions of relevant account statistics. Figure~\ref{fig:degrees} (left) shows smoothed empirical distributions of node degree on a logarithmic scale. We note that values are highly heterogeneous across accounts; the tail of the right-skewed distribution indicates that a small share of accounts has orders of magnitude more unique transaction partners than do most accounts. Transaction volumes into and out of accounts are spread over an even wider range, also exhibiting a ``heavy tail''. Figure~\ref{fig:degrees} (right) shows smoothed empirical distributions of weighted degree. A relatively small number of account holders spend orders of magnitude more (or less) Sarafu than do the bulk of the users. As expected, the Sarafu flow network has so-called ``hubs'' in that a small share of nodes are especially prominent.
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{figures/degrees.pdf}
\includegraphics[width=0.45\textwidth]{figures/strengths.pdf}
\caption{Distribution of degree (left) and weighted degree (right) for the Sarafu flow network. Probability densities are scaled such that nodes with a degree value of zero shrink the distribution total, as zero cannot be plotted on a logarithmic scale.}
\label{fig:degrees}
\end{figure}
Figure~\ref{fig:correlations} shows the Pearson correlation between node degree, weighted degree, and the centrality metrics discussed in the \nameref{sec:methods:analysis:centrality} section in \nameref{sec:methods}. First we note that degree and weighted degree are not interchangeable, capturing different notions of prominence in the network. Weighted in- and out-degree are themselves exceptionally highly correlated, because accounts must receive large amounts of Sarafu in order to spend large amounts of Sarafu. This empirically confirms the underlying accounting consistency present in networks of monetary flow. The unweighted PageRank algorithm produces a non-zero value for each node that is correlated with both the in- and out-degree; this makes it a practical centrality metric for downstream tasks involving the unweighted network. Most interestingly, weighted PageRank captures something distinct from the in- or out-degree, the weighted in- or out-degree, and unweighted PageRank. Notably, values of weighted PageRank are interpretable as the share of system funds that the accounts would eventually control, if the observed dynamics were to continue. An empirical calibration to account balances is presented in the \nameref{sec:methods:analysis:centrality} section in \nameref{sec:methods}.
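PageRank's interpretation as the long-run share of a random walker's visits can be seen directly from power iteration. The following self-contained sketch handles both the unweighted and weighted variants (parameter names are illustrative; the paper's analyses rely on established implementations):

```python
def pagerank(edges, n, weight=None, alpha=0.85, iters=100):
    """Power-iteration PageRank on a directed graph with nodes 0..n-1.

    `edges` is a list of (u, v) pairs; `weight`, if given, is a parallel
    list of edge weights. With weights, each node splits its rank among
    out-neighbours in proportion to outgoing flow."""
    out = [[] for _ in range(n)]
    wsum = [0.0] * n
    for i, (u, v) in enumerate(edges):
        w = 1.0 if weight is None else weight[i]
        out[u].append((v, w))
        wsum[u] += w
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1 - alpha) / n] * n
        for u in range(n):
            if wsum[u] == 0:               # dangling node: spread evenly
                for v in range(n):
                    nxt[v] += alpha * r[u] / n
            else:
                for v, w in out[u]:
                    nxt[v] += alpha * r[u] * w / wsum[u]
        r = nxt
    return r
```

The scores always sum to one, which is what allows the weighted variant to be read as a share of system funds under continued dynamics.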
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{figures/centralities.pdf}
\caption{Pearson correlation between values for degree, weighted degree, and centrality metrics.}
\label{fig:correlations}
\end{figure}
\subsubsection{Characterizing prominent users} \label{sec:results:hubs:users}
To characterize prominent users of the Sarafu system, we ask what features are especially consistent among accounts with high network centrality. Figure~\ref{fig:regression} illustrates the regression coefficients on account properties when PageRank and weighted, inflow-adjusted PageRank are used as outcome variables. Ordinary least squares (OLS) provides an estimated statistical contribution for each account feature, while Elastic Net (EN) incorporates regularization to highlight only those features most consistently associated with centrality. For details about this methodology, we refer to the \nameref{sec:methods:analysis:regression} section.
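The OLS baseline amounts to a least-squares fit of centrality on the (encoded) account features; Elastic Net adds a weighted L1/L2 penalty to the same objective, shrinking inconsistent coefficients toward zero. A minimal OLS illustration, assuming numeric feature columns (our own sketch, not the paper's code):

```python
import numpy as np

def ols_coefs(X, y):
    """Fit y ~ intercept + X @ beta by least squares.

    Returns (intercept, coefficient array). X is an (n_samples,
    n_features) array-like; y is the outcome (e.g. PageRank)."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[0], beta[1:]
```

Categorical predictors such as location or livelihood would first be one-hot encoded against a reference category, matching the reference categories stated in the caption of Figure~\ref{fig:regression}.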
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{figures/coefs_pr_a085.pdf}
\caption{Regression coefficients for linear models fitting account features to centrality measures, using Ordinary least squares (OLS) and Elastic Net (EN). For the three categorical predictors, the reference categories are accounts that report a location in \textit{Kinango Kwale}, report selling \textit{food}, and do not report a gender.}
\label{fig:regression}
\end{figure}
PageRank and weighted inflow-adjusted PageRank capture distinct aspects of node importance, but are positively and negatively associated with many of the same account features. Most strongly and consistently associated with high network centrality are accounts held by \textit{savings} groups. Indeed, community-based savings and investment groups are a key feature of local economies in Kenya and of the Sarafu system (as noted in the \nameref{sec:data} section). The size of this category is quite small, however, containing only 264 accounts. The number of \textit{faith} leaders is even smaller, and some appear to play an especially prominent role in the local circulation supported by this community currency.
The other regression coefficients in Figure~\ref{fig:regression} reveal additional nuances among some of the largest categories of users. Accounts that were created \textit{prior} to the consolidation of Sarafu, which occurred as the data collection period began, are consistently associated with high network centrality; early adopters show a higher tendency to be prominent users. We also find that account holders reporting their gender as \textit{female} are associated with higher centrality in the Sarafu flow network---this prominence of women is remarkable. In fact, it conforms to qualitative accounts from field studies in Kinango, Kwale that report strong participation of women, and women's leadership, within community-based savings and investment groups that use Sarafu\cite{avanzo_relational_2019}. This has also been noted about savings groups in Kenya, more generally\cite{rasulova_impact_2017}. With respect to geography, recall that \textit{Mukuru Nairobi} and \textit{Kisauni Mombasa} refer to the site of targeted introductions in spring 2020 and early 2021, respectively. Timing appears to have made a substantial difference: the second intervention did not spur large transaction volumes, while those reporting a location within \textit{Mukuru Nairobi} have higher network centrality (on average) than users in \textit{Kinango Kwale} (the reference category). Perhaps most interestingly, \textit{farming} is associated with lower centrality than other reported economic activities. Non-farming activities (e.g. selling \textit{food}, running a \textit{shop}, or providing \textit{labour}) appear to be ``central'' to local economies even in areas of rural Kenya, such as \textit{Kinango Kwale}, that rely heavily on subsistence agriculture.
\section{Discussion} \label{sec:discussion}
With respect to the circulation of money, this work has demonstrated how a network approach can unveil meaningful patterns and extract relevant insights from individual transaction records. Coinciding with the arrival of the COVID-19 pandemic to Kenya, the Sarafu community currency saw dramatic growth in its user base and accommodated large spikes in transaction volumes. We find that Sarafu remained a community currency wherein circulation was very modular, happening predominantly within distinct sub-populations constrained to a large extent by geography. Circulation within these localized sub-populations occurred among users with diverse livelihoods over networks with many short cycles. The underlying sub-networks are also consistently disassortative, indicating that local ``hubs'' play a key structural role in the circulation of Sarafu. Savings and investment groups, and perhaps other community-based institutions, appear to take on these prominent positions in the underlying network.
Our results shed new light on the conditions under which community currencies might form part of successful humanitarian or development interventions. In response to acute economic stress, rapid deployment appears to be possible in areas where local economic activities are already diverse. It may be that coordinated adoption helps to quickly establish the cycles needed for circulation to take hold. Over longer periods of time, and in more peripheral areas, community currencies support economic development to the extent that they encourage diverse productive activities and strengthen short, local supply chains that keep money within a community~\cite{ussher_complementary_2021}. Practically speaking, it may be possible to identify ``missing links'' in local economic or financial networks such that policymakers and organizers might intervene to close cycles by brokering among local businesses~\cite{fleischman_balancing_2020,fleischman_liquidity-saving_2020}. Our findings complement and corroborate a growing body of work informing policy on alternative interventions in marginalized areas~\cite{ruddick_complementary_2015,mauldin_local_2015,fuders_smarter_2016,gomez_monetary_2018,ussher_complementary_2021}.
Methodologically, our conclusions demonstrate the explanatory power of representing the circulation of money as a network of monetary flow. Walk-based methods applied to such networks, specifically PageRank and Infomap, produce readily interpretable results that can provide clear answers to context-rich research questions about currency systems. Notably, these methods rely on scalable algorithms meaning that our approach can be applied to study sizeable currency systems where transaction data is recorded in digital form. This includes other community currencies~\cite{muralt_woergl_1934,kichiji_network_2008,frankova_transaction_2014,iosifidis_cyclic_2018} as well as major global cryptocurrencies~\cite{kondor_rich_2014,elbahrawy_evolutionary_2017,nadini_emergence_2022,campajola_microvelocity_2022}. Recent methodological advances~\cite{mattsson_trajectories_2021,kawamoto_single-trajectory_2022} promise to extend applicability also to payment systems that are not themselves full currency systems, such as mobile money systems~\cite{blumenstock_airtime_2016,economides_mobile_2017,mattsson_trajectories_2021}, large value payment systems~\cite{soramaki_topology_2007,iori_network_2008,kyriakopoulos_network_2009,bech_illiquidity_2012,barucca_organization_2018,rubio_classifying_2020,bianchi_longitudinal_2020}, major banks~\cite{zanin_topology_2016,rendon_de_la_torre_topologic_2016,carvalho_tracking_2021,ialongo_reconstructing_2021}, and, in an exciting development, centralized national payment infrastructures~\cite{triepels_detection_2018,sabetti_shallow_2021,arevalo_identifying_2022} or central bank digital currencies~\cite{bank_of_canada_central_2021}. Modern economic infrastructure makes detailed observation possible, and the circulation of money can be studied as (interconnected) networks of monetary flow.
Finally, the structural features we identify in the Sarafu network---degree disassortativity and an elevated cycle density---are likely to be general features of the economic networks underlying currency systems. Indeed, degree disassortativity has been found across many economic networks~\cite{fujiwara_large-scale_2010,kondor_rich_2014,mattsson_functional_2021,campajola_microvelocity_2022}. And our results regarding the presence of cycles are closely in line with prior analysis of the Sardex currency system~\cite{iosifidis_cyclic_2018}. This is despite considerable contextual differences. Kenya and Sardinia differ in their level of economic development, Sardex is aimed at businesses whereas Sarafu is aimed at individuals, and pandemic times are certainly different from regular times. Moreover, the two currency systems are operated differently~\cite{iosifidis_cyclic_2018,mattsson_sarafu_2022}. There appear to be important network-structural regularities underlying the circulation of money, which deserve to be further explored across currency systems large and small.
\section{Methods} \label{sec:methods}
The \nameref{sec:methods:data} section provides a detailed description of the portion of the raw Sarafu data used in constructing our time series and our network of monetary flow, plus three peculiarities of the Sarafu currency system that are of relevance to the implementation or interpretation of our analyses. Network analysis methods are used to quantitatively analyze the Sarafu flow network. The \nameref{sec:methods:analysis:circ} section articulates how the map equation framework captures and quantifies the circulation of money given a network of monetary flow. The \nameref{sec:methods:analysis:centrality} section describes walk-based measures of network centrality for characterizing prominent users. The \nameref{sec:methods:analysis:cycles} and \nameref{sec:methods:analysis:mix} sections introduce cyclic density and assortativity as tools to analyse the structure of the underlying, unweighted network.
\subsection{Data preparation}
\label{sec:methods:data}
The Sarafu CIC data\cite{ruddick_sarafu_2021} includes a transaction dataset and an account dataset collected from January 25th, 2020 to June 15th, 2021. The raw form of this data has previously been described in detail\cite{mattsson_sarafu_2022}. The transaction records are labeled with a transaction type, and we consider the \textsc{standard} transactions. Figure~\ref{fig:sarafu} shows the total volume of such transactions for each complete month. Note that the value of one Sarafu was understood by users to be about that of a Kenyan Shilling, though actual exchange was facilitated only in very limited instances. The Sarafu flow network is constructed from the \textsc{standard} transactions that occurred within the Sarafu system over the observation period. Basic network statistics are shown in Table~\ref{tab:network}. As noted in the main text, the \emph{nodes} are registered accounts, for which the account dataset includes relevant account features (detailed below). An \emph{edge} from one account to another indicates that at least one \textsc{standard} transaction occurred across that link. The \emph{edge weight} corresponds to the total sum of all \textsc{standard} transaction amounts across that link. Then, system-run accounts are filtered out from the Sarafu flow network. Regular accounts that neither sent nor received a single \textsc{standard} transaction to or from another regular account are isolates, which we also exclude from the network. Note that the giant connected component (GCC) encompasses nearly all the nodes, meaning that the majority of users are indirectly connected through their transactions.
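The network construction step amounts to aggregating transaction amounts over ordered account pairs while dropping system-run accounts. A minimal sketch with a hypothetical record format (not the published preprocessing code):

```python
from collections import defaultdict

def build_flow_network(transactions, system_accounts):
    """Aggregate STANDARD transactions into a weighted, directed flow
    network, excluding edges touching system-run accounts.

    `transactions` is an iterable of (sender, receiver, amount) records;
    returns a mapping (sender, receiver) -> total amount sent."""
    weight = defaultdict(float)
    for sender, receiver, amount in transactions:
        if sender in system_accounts or receiver in system_accounts:
            continue
        weight[(sender, receiver)] += amount
    return weight
```

Isolates then fall out naturally: any account that appears in no remaining edge has no node in the resulting flow network.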
\begin{table}[htb]
\centering
\begin{tabular}{l|rrrr}
& Nodes & Edges & Transactions & Volume (Sarafu) \\
\textsc{standard} transactions & 40,767 & 146,615 & 422,721 & 297.0 million \\
Sarafu flow network & 40,657 & 145,661 & 421,329 & 293.7 million \\
GCC & 38,653 & 143,724 & 418,675 & 293.4 million
\end{tabular}
\caption{\label{tab:network}Basic statistics for the network of aggregated \textsc{standard} transactions, the Sarafu flow network, and its giant connected component (GCC).}
\end{table}
\medskip
\noindent
\textbf{Account features}.
The account dataset includes the registration date and reported gender of the account holder as well as categorical labels derived from reported information on home location and livelihood. Mattsson, Criscione, \& Ruddick\cite{mattsson_sarafu_2022} provide a descriptive overview of each account feature. Notably, each geographic area is a combination of user-reported localities that could be quite precise. Ussher et al.\cite{ussher_complementary_2021} present an overview of the user-reported work activities that make up the livelihood categories. System-run accounts are those labeled with \textit{system} in place of the node attribute indicating the user's livelihood, or assigned a formal role as an \textsc{admin} or \textsc{vendor} account.
\medskip
\noindent
\textbf{Savings \& investment groups}.
Community-based savings and investment groups are common in Kenya~\cite{biggart_banking_2001,central_bank_of_kenya_inclusive_2019} and a key feature of many localities that use Sarafu, specifically~\cite{marion_voucher_2018,ussher_complementary_2021}. Several hundred so-called ``chamas'' are present in the data, many with the label \textit{savings} in place of the node attribute indicating the user's livelihood. For a time, Sarafu operator Grassroots Economics also had a program whereby field staff would verify the operation of community-based groups and provide additional support to verified chamas\cite{mattsson_sarafu_2022}. Notably, verified groups were conduits for development initiatives and humanitarian aid on several occasions. Some of these initiatives involved payments made to system-run accounts, in Sarafu, in exchange for donated food, items, or Kenyan Shillings.
\medskip
\noindent
\textbf{Targeted introductions}.
There were two so-called targeted introductions during the observation period, conducted by the Kenyan Red Cross in collaboration with Grassroots Economics~\cite{mattsson_sarafu_2022}. These consisted of outreach efforts and training programs in specific areas. The Mukuru kwa Njenga slum in Nairobi was the site of the first; educational and outreach programs began in April 2020. Soon thereafter, this intervention was scaled up in response to the COVID-19 pandemic and related economic disruptions. Again, community currencies tend to gain in popularity during times of economic and financial crisis~\cite{stodder_complementary_2009,stodder_macro-stability_2016,zeller_economic_2020}. A second Red Cross intervention began in Kisauni, Mombasa in early 2021. This resulted in a wave of account creations~\cite{mattsson_sarafu_2022} and rising activity by accounts with location \textit{Kisauni Mombasa}~\cite{ussher_complementary_2021}. However, as we can see in Figure~\ref{fig:sarafu}, overall transaction volumes did not rise as dramatically during this targeted introduction as they did during the first.
\medskip
\noindent
\textbf{Currency creation}.
The digital payment system, as a whole, saw inflows when new units of Sarafu were created. For instance, newly-created accounts would receive an initial disbursement of 400 Sarafu, later reduced to 50 Sarafu. New users could receive an additional sum if and when they verified their account information with staff at the non-profit currency operator Grassroots Economics. Existing users could also receive newly created funds, such as rewards for transaction activity and bonuses for referring others. These and other non-\textsc{standard} inflows are summarized as an aggregated value in the account dataset. We refer to prior work for a full account of currency management and system administration over the data collection period\cite{mattsson_sarafu_2022}.
\subsection{Circulation} \label{sec:methods:analysis:circ}
To study circulation we turn to the map equation framework~\cite{rosvall_maps_2008} and the associated Infomap algorithm~\cite{ding_community_2014}. This approach is based on computations involving a walk process over a given network, which is relevant in that financial transactions describe a real-world walk process\cite{mattsson_trajectories_2021}. Infomap takes a weighted, directed network as input and outputs a hierarchical mapping of nodes grouped into discovered \emph{modules}. This grouping is done via computational optimization. Specifically, the map equation defines a notion of entropy whose value is higher the more of the flow over the given network occurs between rather than within modules. The Infomap algorithm exploits meso-scale network structure to minimize that value, grouping nodes with much flow among themselves (and little outside). These are discovered sub-populations among whom a ``random walker'' would tend to stay for relatively long. We refer to top-level modules, sub-modules and sub-sub-modules at respectively the first, second and third level of the discovered hierarchy. The composition of these sub-populations can then be understood by quantifying their heterogeneity along the node attributes: geography, livelihood, and gender. Implementation details for running Infomap and analyzing the resulting module mapping are included in Supplementary File 2.
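For reference, the two-level version of the map equation quantifies the description length $L$ of a random walk under a module partition $\mathsf{M}$ (written here in simplified notation, following Rosvall and Bergstrom~\cite{rosvall_maps_2008}):
\[
L(\mathsf{M}) = q\, H(\mathcal{Q}) + \sum_{i=1}^{m} p^{i} H(\mathcal{P}^{i}),
\]
where $q$ is the rate at which the walker switches between the $m$ modules, $H(\mathcal{Q})$ is the entropy of the module-switching codebook, $p^{i}$ is the fraction of the walker's movements within (or exiting) module $i$, and $H(\mathcal{P}^{i})$ is the entropy of the within-module codebook of module $i$. Minimizing $L(\mathsf{M})$ favors partitions in which the walker, and hence transaction flow, stays within modules for long stretches.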
\subsection{Network cycles} \label{sec:methods:analysis:cycles}
To describe the network connectivity patterns underlying the circulation of Sarafu, we consider cycles. A \textit{cycle} is a simple path starting and ending at the same node. In the context of complementary currencies, cycles ensure the flow of liquidity throughout the system. For cycles to occur, users must be willing to both spend and earn in complementary currency. Following this observation, we analyze cyclic structures in the Sarafu network using $k$-cycle density\cite{iosifidis_cyclic_2018}. The measure of $k$-cycle density is defined as the logarithm of the ratio between the number of cycles of length $k$ detected in an empirical network and the number expected from a relevant null model. Equation~\ref{eqn:cycles} reproduces the definition of the $k$-cycle density for an empirical network $G$ and the chosen null model for $G$, $G_{null}$.
\begin{equation}
C_k(G)=\log \left( \frac{|P_k(G)|}{\mathbb{E} \left( |P_k(G_{null})| \right)} \right)
\label{eqn:cycles}
\end{equation}
where $P_k(G)$ is the set of cycles of length $k$ for the network $G$ and $|P_k(G)|$ is its cardinality, that is, the number of unique cycles of length $k$ in the network $G$. We use $\mathbb{E}$ to denote an expected value. Hence, $\mathbb{E}(|P_k(G_{null})|)$ is the expected number of cycles of length $k$ for the chosen null model for $G$, $G_{null}$.
We consider two common null models: Erdős-Rényi (ER) networks and randomized degree-preserving (RD) networks. ER networks have the same number of nodes and edges as the empirical network, but are wired randomly. RD networks preserve the indegree and outdegree sequences of the empirical network, but assign edges to link endpoints at random.
The cycle density is computed separately for each sub-module identified by the Infomap algorithm described in the \nameref{sec:methods:analysis:circ} section. Directed cycles are detected and counted using an existing approach~\cite{butts_cycle_2006}. This is done for each empirical sub-network, and for ER graphs generated with the same number of nodes and edges as the empirical sub-network. We use 30 realizations, and the expected number of cycles in the ER null model is the average over these realizations. As in prior work, the expected number of cycles in the RD null model (for each sub-network) is computed analytically\cite{bianconi_local_2008,iosifidis_cyclic_2018}. The $k$-cycle density is computed using cycles of length 2, 3, 4, and 5. An implementation is provided in Supplementary Files 4 and 5.
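As an illustration of Equation~\ref{eqn:cycles}, the sketch below counts directed $k$-cycles by brute force and compares against the closed-form ER expectation $\binom{n}{k}(k-1)!\,p^k$ with $p = m/(n(n-1))$. This is a simplification on two counts: the analysis in this paper averages over 30 sampled ER realizations rather than using the analytic expectation, and brute-force counting is only feasible for small sub-networks.

```python
from itertools import combinations, permutations
from math import comb, factorial, log

def count_k_cycles(edges, k):
    # Brute-force count of directed simple cycles of length k.
    adj = set(edges)
    nodes = {u for e in edges for u in e}
    count = 0
    for group in combinations(sorted(nodes), k):
        # Fix the smallest node first so each cycle is counted once.
        first, rest = group[0], group[1:]
        for order in permutations(rest):
            cycle = (first,) + order
            if all((cycle[i], cycle[(i + 1) % k]) in adj for i in range(k)):
                count += 1
    return count

def er_cycle_density(edges, k):
    # C_k = log(|P_k(G)| / E[|P_k(G_ER)|]) with the analytic ER expectation
    # for a digraph with the same n and m as the empirical network.
    n = len({u for e in edges for u in e})
    m = len(set(edges))
    p = m / (n * (n - 1))
    expected = comb(n, k) * factorial(k - 1) * p ** k
    return log(count_k_cycles(edges, k) / expected)
```

A positive density indicates more $k$-cycles than the null model predicts, i.e., users both spending and earning along closed loops.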
\subsection{Network mixing patterns} \label{sec:methods:analysis:mix}
To characterize the mixing patterns underlying the network structure of the Sarafu community currency, we consider degree and attribute assortativity~\cite{newman_mixing_2003,foster_edge_2010}. Values are computed separately for each sub-network delineated by the sub-modules identified by the Infomap algorithm described in the \nameref{sec:methods:analysis:circ} section. The categorical attribute assortativity is calculated along dimensions of livelihood category and reported gender, using the undirected version of the networks. These measures compare the number of links between accounts with the same livelihood or gender to that which would be expected at random, and can range from -1 to 1. A value of 0 corresponds to the random expectation; a value of 1 corresponds to a network where transactions only occurred between accounts with the same attribute value. When there is no variation within a sub-population, the sub-network is given an assortativity value of 1. Similarly, we calculate the numerical attribute assortativity to quantify mixing patterns with respect to registration date, in-degree, and out-degree. Implementation details are reported in Supplementary File 2.
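For concreteness, categorical assortativity can be computed from the edge mixing matrix as in Newman~\cite{newman_mixing_2003}. The sketch below is our minimal pure-Python version (not the implementation used in the analysis); undirected networks are represented by listing each edge in both directions, and the no-variation convention of assigning 1 is included.

```python
def attribute_assortativity(edges, attr):
    # Newman's categorical assortativity from the mixing matrix e:
    # r = (sum_i e_ii - sum_i a_i * b_i) / (1 - sum_i a_i * b_i),
    # where a and b are the row and column sums of e.
    cats = sorted({attr[u] for e in edges for u in e})
    idx = {c: i for i, c in enumerate(cats)}
    n = len(cats)
    e = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        e[idx[attr[u]]][idx[attr[v]]] += 1.0
    total = sum(sum(row) for row in e)
    e = [[x / total for x in row] for row in e]
    a = [sum(e[i][j] for j in range(n)) for i in range(n)]  # row sums
    b = [sum(e[i][j] for i in range(n)) for j in range(n)]  # column sums
    trace = sum(e[i][i] for i in range(n))
    ab = sum(a[i] * b[i] for i in range(n))
    # Convention from the text: no attribute variation -> assortativity 1.
    return 1.0 if ab == 1.0 else (trace - ab) / (1 - ab)
```

A fully same-attribute network returns 1, and a fully cross-attribute network returns -1, matching the stated range.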
\subsection{Network centrality}
\label{sec:methods:analysis:centrality}
To characterize prominent users of Sarafu, we employ network centrality measures on the Sarafu flow network. Purely structural node-based metrics such as degree and weighted degree correspond to straightforward statistics about accounts. We also use walk-based methods for node centrality as these are especially interpretable with respect to monetary flow; the well-known PageRank algorithm is flexible and computationally tractable. These centrality measures are computed for our network, and then interpreted in the context of node attributes of the account-holders using linear regression. This lets us characterize prominent users without highlighting individual account holders, which is neither our goal nor desirable for privacy reasons (see the \nameref{sec:methods:data_availability} section). Below, we briefly discuss each employed measure.
\textbf{Indegree and outdegree} in the Sarafu flow network correspond to an account's number of unique incoming and outgoing transaction partners, respectively, over the observation period. It is possible for nodes to have zero indegree \emph{or} outdegree, but accounts with neither incoming nor outgoing transaction partners would be isolates and hence do not appear in the network.
\textbf{Weighted indegree and weighted outdegree} in the Sarafu flow network correspond to total transaction volumes into and out of accounts over the observation period.
\textbf{PageRank} is an algorithm that produces a walk-based metric for node centrality given a directed network~\cite{page_pagerank_1999,frankova_transaction_2014}. The obtained centrality values approximate the probability of finding a random walker at a given node at any given moment. More specifically, PageRank computes the stationary probability of a random walk process with restarts on a given network. A single parameter $\alpha$ is used to control the propensity for the simulated walkers to restart. An $\alpha$ value of $0.85$ is the long-established default, meaning that $15\%$ of the time a random walker will restart rather than follow an out-link from the node where it currently resides. By default, restarts are uniformly random across the nodes. However, it is also possible to specify the probability of restarting at any particular node using a so-called personalization vector.
\textbf{Weighted PageRank} is an analogous centrality metric for weighted, directed networks. Over a weighted network, the random walkers choose among available out-links in proportion to their edge weights. These are flows of money, in our case, and so the stationary probability then corresponds to the share of the total balance held by each account at equilibrium. This is especially applicable in a currency system context, since it means that the values obtained by the Weighted PageRank algorithm are interpretable as the share of system funds that an account would eventually control if observed dynamics continued. Within this intuition, \textbf{Weighted Inflow-adjusted PageRank} employs a personalization vector to better capture idiosyncratic patterns of currency creation; real-world currency systems may be poorly represented by the default assumption of uniformly random restarts. Recall from the \nameref{sec:methods:data} section that Sarafu users could receive disbursements and rewards in addition to inflows from regular transactions. We use the aggregated values of non-\textsc{standard} inflows, available in the account dataset, to set the PageRank personalization vector. The simulated random walk process is then constrained to reproduce the observed pattern of currency creation, on average.
\subsubsection{Empirical calibration}
\label{sec:methods:analysis:centrality:calibration}
Running the Weighted PageRank algorithm requires specifying the aforementioned parameter $\alpha$. We would like to understand whether this parameter affects the suitability of these values as a centrality measure for networks of monetary flow. Recall that Weighted PageRank extrapolates the observed patterns of circulation towards a future where an equilibrium is reached. This means that the output values can be understood as a prediction for hypothetical future account balances (as a fraction of the total balance). While we cannot expect such strong modeling assumptions to produce especially accurate estimates, it is nonetheless instructive to compare to empirical account balances. In particular, we can determine whether this centrality metric is sensitive to $\alpha$ and whether modeling non-random currency creation, via the PageRank personalization vector, matters for this particular real-world system.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figures/pageranks.pdf}
\caption{Pearson correlation of Weighted and Weighted Inflow-adjusted PageRank with final account balances.}
\label{fig:alphas}
\end{figure}
We consider the correlation of our centrality metrics computed on the Sarafu flow network with Sarafu balances observed at the time of data collection on June 15th, 2021. Figure~\ref{fig:alphas} plots the correlation between these final balances and the values given by the Weighted PageRank algorithm, with and without adjusting the simulated walk process to account for currency creation. The resulting correlations are at most $R^2 = 0.57$ and $R^2 = 0.56$, respectively. Taking the perspective that Weighted PageRank estimates hypothetical future account balances, it is encouraging to note that these values correlate more closely with final balances than do the in- or out-degree ($R^2 = 0.28$, $R^2 = 0.21$), and the weighted in- or out-degree ($R^2 = 0.52$, $R^2 = 0.47$). Moreover, both versions of Weighted PageRank produce values that are consistently correlated with final balances over a wide range of parameter values that includes the long-established default ($\alpha = 0.85$); our centrality metrics are not overly sensitive to the propensity for restarts. We conclude that Weighted PageRank, especially Weighted Inflow-adjusted PageRank, is a highly suitable centrality metric for downstream analyses of networks of monetary flow.
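The reported $R^2$ values are squared Pearson correlations between a centrality vector and the final balances; a minimal stdlib version of that computation, for illustration:

```python
def r_squared(x, y):
    # Squared Pearson correlation between two equal-length sequences,
    # e.g. PageRank values vs. final account balances.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)
```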
\subsection{Linear regression} \label{sec:methods:analysis:regression}
To assess what recorded features of the account holders associate with higher prominence, as measured by network centrality, we use linear regression. Ordinary least squares (OLS) is used to fit a linear model to an outcome, in our case a network centrality measure, providing an estimated contribution for each input feature~\cite{montgomery_introduction_2012}. Regularization is a fitting technique that introduces a penalty term to the optimization limiting the number of regressors and/or their magnitude~\cite{friedman_regularization_2010}. So-called Elastic Net (EN) regularization, as we use it, penalizes the number of regressors and their magnitude equally. The penalty weight is selected using five-fold cross validation, just before the point where additional features begin entering the model without qualitatively improving the statistical fit. Further implementation details are noted in Supplementary File 3, alongside the code that replicates the analysis.
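The equally weighted Elastic Net penalty can be sketched with a short coordinate-descent routine. This is our illustration, not the implementation used in the analysis (see Supplementary File 3); the fixed penalty weight `lam` stands in for the weight that the paper selects by five-fold cross-validation, and `rho=0.5` weights the L1 and L2 terms equally, as in the text.

```python
def soft(z, t):
    # Soft-thresholding operator used in the coordinate-descent update.
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def elastic_net(X, y, lam, rho=0.5, iters=200):
    """Coordinate descent for
    (1/2n)||y - Xb||^2 + lam * (rho * ||b||_1 + (1 - rho)/2 * ||b||_2^2)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j's own contribution.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho_j = sum(X[i][j] * r[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n + lam * (1 - rho)
            b[j] = soft(rho_j, lam * rho) / denom
    return b
```

The soft-thresholding step is what drives uninformative coefficients exactly to zero, which is the sense in which the penalty "limits the number of regressors."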
\subsection{Data availability}
\label{sec:methods:data_availability}
The dataset analyzed in this study is available via the UK Data Service~\cite{ruddick_sarafu_2021} under the UK Data Service End User License, which stipulates suitable data-privacy protections. An extensive description of the raw data is available~\cite{mattsson_sarafu_2022}.
\subsection{Software availability} All software used in this study is available under an open-source licence:
\begin{itemize}
\setlength\itemsep{0em}
\item \texttt{infomap v1.6.0}~\cite{edler_mapequation_2021}
\item \texttt{networkx v2.6.3}~\cite{hagberg_exploring_2008}
\item \texttt{netdiffuseR v1.22.3}~\cite{vega_yon_netdiffuser_2021}
\item \texttt{sna v2.6}~\cite{butts_sna_2020}
\item \texttt{statsmodels v0.13.2}~\cite{seabold_statsmodels_2010}
\item \texttt{seaborn v0.11.2}~\cite{waskom_seaborn_2021}
\item \texttt{matplotlib v3.5.2}~\cite{hunter_matplotlib_2007}
\item \texttt{pandas v1.4.2}~\cite{reback_pandas-devpandas_2022}
\end{itemize}
\subsection{Code availability}
The code required to reproduce each analysis is included in the Supplementary Information:
\begin{itemize}
\item \textbf{Supplementary File 1} is a Jupyter Notebook containing the code to construct the network from the raw data.
\item \textbf{Supplementary File 2} is a Jupyter Notebook containing the code to reproduce the analysis in the \nameref{sec:results:flow} and \nameref{sec:results:structure:mixing} sections.
\item \textbf{Supplementary File 3} is a Jupyter Notebook containing the code to reproduce the analysis in the \nameref{sec:results:hubs} section.
\item \textbf{Supplementary File 4} is an R Notebook containing the code to reproduce the analysis in the \nameref{sec:results:structure:cycles} section.
\item \textbf{Supplementary File 5} is a Jupyter Notebook containing the code to reproduce the figures in the \nameref{sec:results:structure:cycles} section.
\item \textbf{Supplementary File 6} is a high-resolution version of Figure~\ref{fig:network}.
\end{itemize}
\section{Conclusions and Future Directions}
\label{sec:conclusion}
In this work, we presented the leakage profile definitions for searchable encrypted relational databases, and investigated the leakage-based attacks proposed in the literature.
We also proposed \textsl{\mbox{P-McDb}}\xspace, a dynamic searchable encryption scheme for multi-cloud outsourced databases.
\textsl{\mbox{P-McDb}}\xspace does not leak information about search, access, and size patterns.
It also achieves both forward and backward privacy, where the CSPs cannot reuse cached queries for checking if new records have been inserted or if records have been deleted.
\textsl{\mbox{P-McDb}}\xspace has minimal leakage, making it resistant to existing leakage-based attacks.
As future work, first, we will conduct a performance analysis by deploying the scheme in a real multi-cloud setting.
Second, we will try to address the limitations of \textsl{\mbox{P-McDb}}\xspace.
Specifically, \textsl{\mbox{P-McDb}}\xspace protects the search, access, and size patterns from the CSPs.
However, it suffers from the collusion attack among CSPs.
In \textsl{\mbox{P-McDb}}\xspace, the SSS knows the search result for each query, and the other two know how the records are shuffled and re-randomised.
If the SSS colludes with the IWS or RSS, they could learn the search and access patterns.
We will also consider techniques to defend against collusion among CSPs.
Moreover, in this work, we assume all the CSPs are honest.
Yet, in order to learn more useful information, compromised CSPs might deviate from the protocol, contrary to the assumption made in the security analysis.
For instance, the SSS might not search all the records indexed by $IL$, and the RSS might not shuffle the records properly.
In the future, we will design a mechanism to detect whether the CSPs honestly follow the designated protocols.
\section{Notations and Definitions}
\label{sec:notation}
\begin{table}
\centering
\caption{Notation and description}
\scriptsize
\label{tbl:notation}
\begin{tabular}{|l|l|} \hline
\textbf{Notation} & \textbf{Description} \\ \hline
$e$ & Data element \\ \hline
$|e|$ &The length of data element \\ \hline
$F$ & Number of attributes or fields \\ \hline
$rcd_{id}=(e_{id, 1}, \ldots, e_{id, F})$ & The $id$-th record \\ \hline
$N$ & Number of records in the database \\ \hline
$DB=\{rcd_1, \ldots, rcd_N\}$ & Database \\ \hline
$DB(e)=\{rcd_{id} | e \in rcd_{id}\}$ & Records containing $e$ in $DB$ \\ \hline
$O(e)=|DB(e)|$ & Occurrence of $e$ in $DB$ \\ \hline
$U_f=\cup \{e_{id, f}\}$ & The set of distinct elements in field $f$ \\ \hline
$U=\{U_{1}, ..., U_F\}$ & All the distinct elements in $DB$ \\ \hline
$e^*$ & Encrypted element \\ \hline
$Ercd$ & Encrypted record \\ \hline
$EDB$ & Encrypted database \\ \hline
$Q=(type, f, e)$ & Query \\ \hline
$Q.type$ & `select' or `delete' \\ \hline
$Q.f$ & Identifier of interested field \\ \hline
$Q.e$ & Interested keyword \\ \hline
$EQ$ & Encrypted query \\ \hline
$EDB(EQ)$ or $EDB(Q)$ & Search result of $Q$ \\ \hline
$(f, g)$ & Group $g$ in field $f$ \\ \hline
$\bm{E}_{f,g}$ & Elements included in group $(f,g)$ \\ \hline
$\tau_{f,g}=\max \{O(e)\}_{e \in \bm{E}_{f, g}}$ & Threshold of group $(f, g)$ \\ \hline
$(\bm{E}_{f,g}, \tau_{f,g})^*$ & Ciphertext of $(\bm{E}_{f,g}, \tau_{f,g})$ \\ \hline
\end{tabular}
We say $EQ(Ercd)=1$ when $Ercd$ matches $EQ$.
Thus, the search result $EDB(EQ) = \{Ercd_{id} | EQ(Ercd_{id})=1\}$.
\end{table}
In this section, we give formal definitions for the search, access, and size patterns, as well as for forward and backward privacy.
Before that, in Table~\ref{tbl:notation}, we define the notations used throughout this article.
\begin{mydef}[\textbf{Search Pattern}]
Given a sequence of $q$ queries $\bm{Q}=(Q_1, \ldots, Q_q)$, the search pattern of $\bm{Q}$ represents the correlation between any two queries $Q_i$ and $Q_j$, \textit{i.e.,}\xspace $\{Q_i\stackrel{?}{=}Q_j\}_{Q_i, Q_j \in \bm{Q}}$\footnote{$Q_i=Q_j$ only when $Q_i.type=Q_j.type$, $Q_i.f=Q_j.f$, and $Q_i.e=Q_j.e$}, where $1 \leq i, j \leq q$.
\end{mydef}
In previous works, access pattern is generally defined as the records matching each query \cite{Curtmola:2006:Searchable}, \textit{i.e.,}\xspace the search result.
In fact, in leakage-based attacks, such as \cite{Zhang:2016:All,Islam:2012:Access,Cash:2015:leakage}, the attackers leverage the intersection between search results (explained in Section \ref{sec:attack}) to recover queries, rather than each single search result.
Therefore, in this work, we define the intersection between search results as access pattern.
\begin{mydef}[\textbf{Access Pattern}]
The access pattern of $\bm{Q}$ represents the intersection between any two search results, \textit{i.e.,}\xspace $\{EDB(Q_i) \cap EDB(Q_j)\}_{Q_i, Q_j \in \bm{Q}}$.
\end{mydef}
\begin{mydef}[\textbf{Size Pattern}]
The size pattern of $\bm{Q}$ represents the number of records matching each query, \textit{i.e.,}\xspace $\{|DB(Q_i)|\}_{Q_i\in \bm{Q}}$.
\end{mydef}
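The three patterns defined above can be made concrete with a toy model (our illustration): given a query log and the corresponding search results, the leakage observable by the CSP is:

```python
def patterns(queries, results):
    """Search, access, and size patterns for a query sequence.

    queries: list of (type, field, keyword) tuples, one per query Q_i.
    results: list of record-id sets, with results[i] = EDB(Q_i).
    """
    q = len(queries)
    # Search pattern: which query pairs are equal (same type, field, keyword).
    search = {(i, j): queries[i] == queries[j]
              for i in range(q) for j in range(i + 1, q)}
    # Access pattern: intersection of search results for each query pair.
    access = {(i, j): results[i] & results[j]
              for i in range(q) for j in range(i + 1, q)}
    # Size pattern: number of records matching each query.
    size = [len(r) for r in results]
    return search, access, size
```

Note that even when queries and records are individually encrypted, these three observables can still be collected by the CSP, which is what the attacks reviewed next exploit.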
\begin{mydef}[\textbf{Forward Privacy}]
Let $Ercd^t$ be an encrypted record inserted or updated at time $t$. A dynamic SE scheme achieves forward privacy if $EQ(Ercd^t)=0$ always holds for any query $EQ$ issued at time $t^*$, where $t^* < t$.
\end{mydef}
\begin{mydef}[\textbf{Backward Privacy}]
Let $Ercd^t$ be an encrypted record deleted at time $t$. A dynamic SE scheme achieves backward privacy if $EQ(Ercd^t)=0$ always holds for any query $EQ$ issued at time $t'$, where $t < t'$.
\end{mydef}
\section{Introduction}
\label{sec:introduction}
Cloud computing is a successful paradigm offering companies and individuals virtually unlimited
data storage and computational power at very attractive costs.
However, uploading sensitive data, such as medical, social, and financial information, to public cloud environments is still a challenging issue due to security concerns.
In particular, once such data sets and related operations are uploaded to cloud environments, the tenants must trust the Cloud Service Providers (CSPs).
Yet, due to possible cloud infrastructure bugs~\cite{GunawiHLPDAELLM14}, misconfigurations~\cite{dropboxleaks} and external attacks~\cite{verizonreport}, the data could be disclosed or corrupted.
Searchable Encryption (SE) is an effective approach that allows organisations to outsource their databases and search operations to untrusted CSPs, without compromising the confidentiality of records and queries.
Since the seminal SE paper by Song \textit{et al.}\xspace \cite{Song:2000:Practical}, a long line of work has investigated SE schemes with flexible functionality and better performance \cite{Asghar:2013:CCSW,Curtmola:2006:Searchable,Popa:2011:Cryptdb,Sarfraz:2015:DBMask}.
These schemes are proved to be secure in certain models under various cryptographic assumptions.
Unfortunately, a series of more recent work \cite{Islam:2012:Access,Naveed:2015:Inference,Cash:2015:leakage,Zhang:2016:All,Kellaris:ccs16:Generic,Abdelraheem:eprint17:record}
illustrates that they are still vulnerable to inference attacks, where malicious CSPs could recover the content of queries and records by (i) observing the data directly from the encrypted database and (ii) learning about the results and queries when users access the database.
From the encrypted database, the CSP might learn the frequency information of the data.
From the search operation, the CSP is able to know the \emph{access pattern}, \textit{i.e.,}\xspace the records returned to users in response to given queries.
The CSP can also infer if two or more queries are equivalent, referred to as the \emph{search pattern}, by comparing the encrypted queries or matched data.
Last but not least, the CSP can simply log the number of matched records or files returned by each query, referred to as the \emph{size pattern}.
When an SE scheme supports insert and delete operations, it is referred to as a \emph{dynamic} SE scheme.
Dynamic SE schemes might leak extra information if they do not support \emph{forward privacy} and \emph{backward privacy} properties.
Lacking forward privacy means that the CSP can learn if newly inserted data or updated data matches previously executed queries.
Missing backward privacy means that the CSP learns if deleted data matches new queries.
Supporting forward and backward privacy is fundamental to limit the power of the CSP to collect information on how the data evolves over time.
However, only a few schemes \cite{BostMO17,ZuoSLSP18,ChamaniPPJ18,AmjadKM19} ensure both properties simultaneously.
Initiated by Islam \textit{et al.}\xspace (IKK)\cite{Islam:2012:Access}, more recent works \cite{Cash:2015:leakage,Naveed:2015:Inference,Zhang:2016:All,Kellaris:ccs16:Generic} have shown that such leakage can be exploited to learn sensitive information and break the scheme.
Naveed \textit{et al.}\xspace \cite{Naveed:2015:Inference} recover more than $60\%$ of the data in CryptDB \cite{Popa:2011:Cryptdb} using frequency analysis only.
Zhang \textit{et al.}\xspace \cite{Zhang:2016:All} further investigate the consequences of leakage by injecting chosen files into the encrypted storage.
Based on the access pattern, they could recover a very high fraction of searched keywords by injecting a small number of known files.
Cash \textit{et al.}\xspace \cite{Cash:2015:leakage} give a comprehensive analysis of the leakage in SE solutions for file collections and introduce the \emph{count attack}, where an adversary could recover queries by counting the number of matched records even if the encrypted records are semantically secure.
In this article, we investigate the leakage and attacks against relational databases\footnote{In the rest of this article, we use the term \emph{database} to refer to a relational database.} and present a \underline{P}rivacy-preserving \underline{M}ulti-\underline{c}loud based dynamic SSE scheme for \underline{D}ata\underline{b}ases (\textsl{\mbox{P-McDb}}\xspace).
\textsl{\mbox{P-McDb}}\xspace can effectively resist attacks based on the search, size or/and access patterns.
Our key technique is to use three non-colluding cloud servers: one server stores the data and performs the search operation, and the other two manage re-randomisation and shuffling of the database for protecting the access pattern.
A user with access to all servers can perform an encrypted search without leaking the search, access, or size pattern.
When updating the database, \textsl{\mbox{P-McDb}}\xspace also ensures both forward and backward privacy.
We give full proof of security against honest-but-curious adversaries and show how \textsl{\mbox{P-McDb}}\xspace can hide these patterns effectively.
The contributions of this article can be summarised as follows:
\begin{itemize}
\item
We provide leakage definitions specific to searchable encrypted databases, and then review how existing attacks leverage this leakage to recover queries and records.
\item We propose a privacy-preserving SSE database \textsl{\mbox{P-McDb}}\xspace, which protects the search, access, and size patterns, and achieves both forward and backward privacy, thus ensuring protection from leakage-based attacks.
\item
We give full proof of security against honest-but-curious adversaries and show how \textsl{\mbox{P-McDb}}\xspace can effectively hide these patterns and resist leakage-based attacks.
\item
Finally, we implement a prototype of \textsl{\mbox{P-McDb}}\xspace and show its practical efficiency by evaluating its performance on the TPC-H dataset.
\end{itemize}
The rest of this article is organised as follows.
In Section~\ref{sec:notation}, we define notations.
We present the leakage levels in SE schemes and review leakage-based attacks in Section~\ref{sec:leakage}.
In Section~\ref{sec:overview}, we provide an overview of \textsl{\mbox{P-McDb}}\xspace.
Solution details can be found in Section~\ref{sec:MCDB-details}.
In Section~\ref{sec:security}, we analyse the security of \textsl{\mbox{P-McDb}}\xspace.
Section~\ref{sec:MCDB-perf} reports the performance of \textsl{\mbox{P-McDb}}\xspace.
Finally, we conclude this article in Section~\ref{sec:conclusion}.
\subsection{Attacks against SE Solutions}
\label{sec:attack}
In recent years, leakage-based attacks against SE schemes have been investigated in the literature.
Table \ref{tbl:summary} summarises the existing SE solutions for relational databases and the attacks applicable to them.
In the following, we illustrate how the existing leakage-based attacks could recover the data and queries.
Specifically, for each attack, we analyse its leveraged leakage, required knowledge, process, and consequences.
\subsubsection{Frequency Analysis Attack}
In \cite{Naveed:2015:Inference}, Naveed \textit{et al.}\xspace describe an attack on PPE-based SE schemes, where the CSP could recover encrypted records by analysing the leaked frequency information, \textit{i.e.,}\xspace data distribution.
To succeed in this attack, in addition to the encrypted database, the CSP also requires some auxiliary information, such as the application background, publicly available statistics, and prior versions of the targeted database.
In PPE-based SE schemes, the frequency information of an encrypted database is equal to that of the database in plaintext.
By comparing the leaked frequency information with the obtained statistics relevant to the application, the CSP could recover the encrypted data elements stored in encrypted databases.
In \cite{Naveed:2015:Inference}, Naveed \textit{et al.}\xspace recovered more than $60\%$ of records when evaluating this attack with real electronic medical records using CryptDB.
We stress that this attack does not require any queries or interaction with users.
The encrypted databases with $\mathcal{L}_3$ leakage profile, \textit{i.e.,}\xspace PPE-based databases, such as CryptDB and DBMask, are vulnerable to this attack.
\subsubsection{IKK Attack}
IKK attack proposed by Islam \textit{et al.}\xspace \cite{Islam:2012:Access} is the first attack exploiting the access pattern leakage.
The goal of the IKK attack is to recover encrypted queries in encrypted file collection systems, \textit{i.e.,}\xspace recover the plaintext of searched keywords.
Note that this attack can also be used to recover queries in encrypted databases since it does not leverage the leakage specific to file collections.
In this attack, the CSP needs to know possible keywords in the dataset and the expected probability of any two keywords appearing in a file (\textit{i.e.,}\xspace co-occurrence probability).
Formally, the CSP guesses $m$ potential keywords and builds an $m\times m$ matrix $\tilde{C}$ whose element is the co-occurrence probability of each keyword pair.
The CSP mounts the IKK attack by observing the access pattern revealed by the encrypted queries.
Specifically, by checking if any two queries match the same files or not, the number of files containing any two searched keywords (\textit{i.e.,}\xspace the co-occurrence rate) can be reconstructed.
Assume the CSP observes $n$ queries.
It constructs an $n \times n$ matrix $C$ with their co-occurrence rates.
By using the simulated annealing technique \cite{KirkpatrickGV83}, the CSP can find the best match between $\tilde{C}$ and $C$ and map the encrypted keywords to the guesses.
In \cite{Islam:2012:Access}, Islam \textit{et al.}\xspace mounted the IKK attack over the Enron email dataset \cite{eronemail:2017} and recovered $80\%$ of the queries with certain vocabulary sizes.
The encrypted relational databases with leakage profile $\mathcal{L}_2$ or $\mathcal{L}_1$, such as Arx \cite{Poddar:arx:eprint16}, Blind Seer \cite{Pappas:BlindSeer:SP14}, and PPQED \cite{Samanthula:2014:Privacy}, are also vulnerable to the IKK attack.
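The observed matrix $C$ that the attacker reconstructs from the access pattern alone can be sketched as follows (a toy illustration; the full attack then matches $C$ against the guessed matrix $\tilde{C}$ via simulated annealing):

```python
def observed_cooccurrence(results):
    """Co-occurrence matrix C reconstructed from the access pattern.

    results: list of record-id sets, results[i] = EDB(Q_i).
    C[i][j] is the fraction of all observed records matching both
    query i and query j, obtained purely from result intersections.
    """
    n = len(results)
    total = len(set().union(*results))  # records seen across all queries
    return [[len(results[i] & results[j]) / total for j in range(n)]
            for i in range(n)]
```

The key point is that $C$ requires no knowledge of plaintexts: intersecting encrypted search results suffices, which is why hiding the access pattern defeats this reconstruction.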
\subsubsection{File-injection and Record-injection Attack}
The file-injection attack \cite{Zhang:2016:All} is an active attack mounted on encrypted file collections, also referred to as the \emph{chosen-document attack} in \cite{Cash:2015:leakage}.
The file-injection attack attempts to recover encrypted queries by exploiting access pattern in encrypted file storage.
More recently, Abdelraheem \textit{et al.}\xspace \cite{Abdelraheem:eprint17:record} extended this attack to encrypted databases and defined it as \emph{record-injection attack}.
Compared with the IKK and count attacks (the latter discussed in Section \ref{subsec:conut}), much less auxiliary knowledge is required: the CSP only needs to know the keyword universe of the system.
In \cite{Zhang:2016:All}, Zhang \textit{et al.}\xspace presented the first concrete file-injection attack and showed that the encrypted queries can be revealed with a small set of injected files.
Specifically, in this attack, the CSP (acting as an active attacker) sends files composed of the keywords of its choice, such as emails, to users who then encrypt and upload them to the CSP, which are called \emph{injected files}.
If no other files are uploaded simultaneously, the CSP can easily know the storage location of each injected file.
Moreover, the CSP can check which injected files match the subsequent queries.
Given enough injected files with different keyword combinations, the CSP could recover the keyword included in a query by checking the search result.
The encrypted databases with $\mathcal{L}_2$ or $\mathcal{L}_3$ leakage profiles are vulnerable to this attack.
Although some works \cite{BostMO17,ZuoSLSP18,ChamaniPPJ18,AmjadKM19} ensure both forward and backward privacy, they are still vulnerable to the file-injection attack due to the leakage of access pattern.
That is, after searching, the attacker could still learn the intersections between previous insert queries and the search result of current queries.
\subsubsection{Count and Relational-count Attack}
\label{subsec:conut}
The count attack is proposed by Cash \textit{et al.}\xspace in \cite{Cash:2015:leakage} to recover encrypted queries in file storage systems based on the access and size patterns leakage.
In \cite{Abdelraheem:2017:Seachable}, Abdelraheem \textit{et al.}\xspace have applied this attack to databases and named it a \emph{relational-count attack}.
As in the IKK attack scenario, the CSP is also assumed to know an $m \times m$ matrix $\tilde{C}$, where its entry $\tilde{C}[w_i, w_j]$ holds the co-occurrence rate of keyword $w_i$ and $w_j$ in the targeted dataset.
In order to improve the attack efficiency and accuracy, the CSP is assumed to know, for each keyword $w$, the number of matching files $count(w)$ in the targeted dataset.
The CSP mounts the count attack by counting the number of files matching each encrypted query.
For an encrypted query, if the number of its matching files is unique and equals a known $count(w)$, the searched keyword must be $w$.
However, if the result size of a query $EQ$ is not unique, all the keywords with $count(w)=|EDB(EQ)|$ could be the candidates.
Recall that the CSP can construct another matrix $C$ that represents the observed co-occurrence rate between any two queries based on the leakage of access pattern.
By comparing $C$ with $\tilde{C}$, the candidates for the queries with non-unique result sizes can be reduced.
With enough recovered queries, it is possible to determine the keyword of $EQ$.
In \cite{Cash:2015:leakage}, Cash \textit{et al.}\xspace tested the count attack against the Enron email dataset and successfully recovered almost all the queries.
The SE solutions for databases with leakage profiles above $\mathcal{L}_1$ are vulnerable to this attack.
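As a toy illustration (not the attack implementation from \cite{Cash:2015:leakage}), the core counting step can be sketched as follows; the keyword counts and observed query result sizes are made up:

```python
from collections import Counter

# Hypothetical background knowledge: keyword -> number of matching files.
known_counts = {"alice": 3, "bob": 5, "carol": 5, "dave": 7}

# Observed leakage: encrypted query -> size of its result set.
observed = {"EQ1": 7, "EQ2": 3, "EQ3": 5}

count_freq = Counter(known_counts.values())
recovered, candidates = {}, {}
for eq, size in observed.items():
    matches = [w for w, c in known_counts.items() if c == size]
    if count_freq[size] == 1:
        recovered[eq] = matches[0]   # unique result size -> keyword recovered
    else:
        candidates[eq] = matches     # ambiguous; refined with co-occurrences

print(recovered)    # {'EQ1': 'dave', 'EQ2': 'alice'}
print(candidates)   # {'EQ3': ['bob', 'carol']}
```

Queries left with several candidates are then disambiguated by comparing the observed co-occurrence matrix $C$ with the known matrix $\tilde{C}$, as described above.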
\subsubsection{Reconstruction Attack}
In ORAM-based systems, such as SisoSPIR proposed by Ishai \textit{et al.}\xspace \cite{Ishai:2016:Private}, the size and access patterns are concealed.
Unfortunately, Kellaris \textit{et al.}\xspace \cite{Kellaris:ccs16:Generic} observe that the ORAM-based systems have fixed communication overhead between the CSP and users, where the length of the message sent from the CSP to the user as the result of a query is proportional to the number of records matching the query.
That is, for a query $Q$, the size of the communication sent from the CSP to the user is $\alpha |DB(Q)|+ \beta$, where $\alpha$ and $\beta$ are two constants.
In theory, given two (query, result) pairs, the CSP can derive $\alpha$ and $\beta$, and then infer the result sizes of other queries.
In \cite{Kellaris:ccs16:Generic}, Kellaris \textit{et al.}\xspace present the \emph{reconstruction attack}, which exploits the leakage of communication volume and can reconstruct the attribute values in encrypted databases supporting range queries.
In this attack, the CSP only needs to know the underlying query distribution prior to the attack.
Their experiments illustrate that, after observing a certain number of queries, all the attribute values can be recovered in a few seconds.
Since we focus on equality queries in this work, we do not give the attack details here.
Nonetheless, after recovering the size pattern for each query, the CSP could also mount the count attack on equality queries.
The SE schemes with $\mathcal{L}_1$ leakage profile are vulnerable to this attack.
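The linear relationship $\alpha |DB(Q)| + \beta$ can be sketched numerically; the observation values below are hypothetical:

```python
# Two observed (result size, message length) pairs, assumed known to the CSP,
# where message_length = alpha * |DB(Q)| + beta.
(r1, l1), (r2, l2) = (10, 1064), (25, 2564)

alpha = (l2 - l1) / (r2 - r1)   # slope
beta = l1 - alpha * r1          # intercept

def result_size(length):
    # Any later query's result size follows from its message length alone.
    return round((length - beta) / alpha)

print(alpha, beta)          # 100.0 64.0
print(result_size(4264))    # 42
```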
\section{Leakage and Attacks}
\label{sec:leakage}
\subsection{Leakage Definition}
In \cite{Cash:2015:leakage}, Cash \textit{et al.}\xspace define four different levels of leakage profiles for encrypted file collections according to the method of encrypting files and the data structure supporting encrypted search.
Yet, we cannot apply these definitions to databases directly, since the structure of a file is different from that of a record in the database.
In particular, a file is a collection of related words arranged in a semantic order and tagged with a set of keywords for searching; whereas, a record consists of a set of keywords with predefined attributes.
Moreover, a keyword may occur more than once in a file, and different keywords may have different occurrences; whereas, a keyword of an attribute generally occurs only once in a record.
Inspired by the leakage levels defined in \cite{Cash:2015:leakage}, in this section, we provide our own layer-based leakage definition for encrypted databases.
Specifically, we use the terminology \emph{leakage} to refer to the information the CSP can learn about the data directly from the encrypted database and the information about the results and queries when users are accessing the database.
The simplest type of SE scheme for databases encrypts both the records and queries with Property-Preserving Encryption (PPE), such as DETerministic (DET) encryption.
In DET-based schemes, the same data has the same ciphertext once encrypted.
In this type of SE schemes, the CSP can check whether each record matches the query efficiently by just comparing the corresponding ciphertext; however, these solutions result in information leakage.
Specifically, in DET-based schemes, such as CryptDB \cite{Popa:2011:Cryptdb} (where the records are protected only with the PPE layer), DBMask \cite{Sarfraz:2015:DBMask}, and Cipherbase \cite{Arasu:CIDR13:Cipherbase}, before executing any query, the CSP can learn the data distribution, \textit{i.e.,}\xspace the number of distinct elements and the occurrence of each element, directly from the ciphertext of the database.
Formally, we say the data distribution of $DB$ is leaked if, for each $e \in U$, its ciphertext $e^*$ has the same occurrence as $e$, \textit{i.e.,}\xspace $O(e^*)=O(e)$.
We define this leakage profile set as $\mathcal{L}_3$:
\begin{itemize}
\item $\mathcal{L}_3=\{O(e)\}_{e \in U}$.
\end{itemize}
The second type of SE for databases encrypts the data with semantically secure primitives, but still encrypts the queries with DET encryption.
By doing so, the data distribution is protected, and the CSP can still search the encrypted database efficiently by repeating the randomisation over the DET query and then comparing it with the randomised data, as done in \cite{Hahn:2014:Searchable}, Arx \cite{Poddar:arx:eprint16}, and most of the Public-key Encryption with Keyword Search (PEKS) systems, such as \cite{BonehCOP04} and BlindSeer \cite{Fisch:SP15:BlindSeer}.
However, after executing a query, the CSP could still learn the access and size patterns.
Moreover, due to the DET encryption for queries, the search pattern is also leaked.
Given a sequence of $q$ queries $\textbf{Q}=(Q_1, \ldots, Q_q)$, we define the leakage profile as:
\begin{itemize}
\item $\mathcal{L}_2=\{|DB(Q_i)|, \{EDB(Q_i)\cap EDB(Q_j), Q_i\stackrel{?}{=}Q_j\}_{Q_j \in \bm{Q}}\}_{Q_i \in \bm{Q}}$.
\end{itemize}
Note that after executing queries, PPE-based databases also leak the profiles included in $\mathcal{L}_2$.
A more secure SE solution leverages Oblivious RAM (ORAM) \cite{Goldreich:1996:SPS,Stefanov:2013:PathORAM} or combines Homomorphic Encryption (HE) \cite{Paillier:1999:Public,Gentry:2009:FHE} with oblivious data retrieval to hide the search and access patterns.
For instance, the HE-based $PPQED_a$ proposed by Samanthula \textit{et al.}\xspace \cite{Samanthula:2014:Privacy} and the ORAM-based SisoSPIR given by Ishai \textit{et al.}\xspace \cite{Ishai:2016:Private} hide both the search and access patterns.
Unfortunately, in both schemes, the CSP can still learn how many records are returned to the user after executing a query, \textit{i.e.,}\xspace \emph{the communication volume}.
According to \cite{Kellaris:ccs16:Generic}, the HE-based and ORAM-based SE schemes have fixed communication overhead between the CSP and users.
Specifically, the length of the message sent from the CSP to the user as the result of query execution is proportional to the number of records matching the query.
Based on this observation, the CSP can still infer the size pattern.
Thus, the HE-based and ORAM-based SE schemes are vulnerable to size pattern-based attacks, \textit{e.g.,}\xspace count attack \cite{Cash:2015:leakage}.
The profile leaked in HE-based and ORAM-based SE schemes can be summarised below:
\begin{itemize}
\item $\mathcal{L}_1=\{|DB(Q_i)|\}_{Q_i \in \bm{Q}}$.
\end{itemize}
\section{Overview of \textsl{\mbox{P-McDb}}\xspace}
\label{sec:overview}
In this work, we propose \textsl{\mbox{P-McDb}}\xspace, a multi-cloud based dynamic SSE scheme for databases that can resist the aforementioned leakage-based attacks.
Specifically, our scheme not only hides the frequency information of the database, but also protects the size, search, and access patterns.
Moreover, it ensures both forward and backward privacy when involving insert and delete queries.
Compared with existing SE solutions, \textsl{\mbox{P-McDb}}\xspace has the smallest leakage.
In this section, we define the system and threat model and illustrate the techniques used in \textsl{\mbox{P-McDb}}\xspace at a high level.
\subsection{System Model}
\label{subsec:sm}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{figs/arch.pdf}\\
\caption{An overview of \textsl{\mbox{P-McDb}}\xspace:
Users can upload records and issue queries.
The SSS, IWS, and RSS represent independent CSPs.
The SSS stores encrypted records and executes queries.
The IWS stores index and auxiliary information, and provides witnesses to the SSS for performing encrypted search.
After executing each query, the SSS sends the searched records to the RSS for shuffling and re-randomising to protect pattern privacy.}
\label{Fig:arch}
\end{figure}
In the following, we define our system model to describe the entities involved in \textsl{\mbox{P-McDb}}\xspace, as shown in Fig.~\ref{Fig:arch}:
\begin{itemize}
\item \textbf{Admin}: An admin is responsible for the setup and maintenance of databases, user management as well as specification and deployment of access control policies.
\item \textbf{User}: A user can issue insert, select, delete, and update queries to read and write the database according to the deployed access control policies.
\textsl{\mbox{P-McDb}}\xspace allows multiple users to read and write the database.
\item \textbf{Storage and Search Service (SSS)}:
It provides encrypted data storage, executes encrypted queries, and returns matching records in an encrypted manner.
\item \textbf{Index and Witness Service (IWS)}:
It stores the index and auxiliary information, and provides witnesses to the SSS for retrieving data.
The IWS has no access to the encrypted data.
\item \textbf{Re-randomise and Shuffle Service (RSS)}:
After executing each query, it re-randomises and shuffles searched records to achieve the privacy of access pattern.
The RSS does not store any data.
\end{itemize}
Each of the SSS, IWS, and RSS is deployed on the infrastructure managed by CSPs that are in conflict of interest.
According to the latest report given by RightScale \cite{RightScale:2016:report}, organisations are using more than three public CSPs on average, which means the schemes based on multi-cloud are feasible for most organisations.
The CSPs must support two-way communication between any two of them, but our model assumes there is no collusion between the CSPs.
\subsection{Threat Model}
\label{subsec:tm}
We assume the admin is fully trusted.
Users are only assumed to store their keys and data securely.
The CSPs hosting the SSS, IWS, and RSS are modelled as honest-but-curious.
More specifically, they honestly perform the operations requested by users according to the designated protocol specification.
However, as mentioned in the above leakage-based attacks, they are curious to gain knowledge of records and queries by 1) analysing the outsourced data, 2) analysing the information leaked when executing queries, and 3) injecting malicious records.
As far as we know, \textsl{\mbox{P-McDb}}\xspace is the first SE scheme that considers active CSPs that could inject malicious records.
Moreover, as assumed in \cite{Samanthula:2014:Privacy,Hoang:2016:practical,Stefanov:CCS2013:Multi-cloud}, we also assume the CSPs do not collude.
In other words, we assume an attacker could only compromise one CSP.
In practice, any three cloud providers in conflict of interest, such as Amazon S3, Google Drive, and Microsoft Azure, could be considered since they may be less likely to collude in an attempt to gain information from their customers.
We assume there are mechanisms in place for ensuring data integrity and availability of the system.
\subsection{Approach Overview}
\label{subsec:appo}
\textsl{\mbox{P-McDb}}\xspace aims at hiding the search, access, and size patterns.
\textsl{\mbox{P-McDb}}\xspace also achieves both backward and forward privacy.
We now give an overview of our approach.
To protect the search pattern, \textsl{\mbox{P-McDb}}\xspace XORs the query with a nonce, making identical queries look different once encrypted (\textit{i.e.,}\xspace the encrypted query is semantically secure).
However, the CSP may still infer the search pattern by looking at the access pattern.
Specifically, the CSP can infer that two queries are equivalent if the same records are returned.
To address this issue, after executing each query, we shuffle the locations of the searched records.
Moreover, we re-randomise their ciphertexts, making them untraceable.
In this way, even if a query equivalent to the previous one is executed, the CSP will see a new set of records being searched and returned, and cannot easily infer the search and access pattern.
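The re-randomisation step can be sketched as a one-time-pad refresh; the byte strings below are illustrative and omit the scheme's actual key and nonce management:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

record = b"secret-rec"        # stands in for the encrypted record content
n1 = os.urandom(len(record))
c1 = xor(record, n1)          # ciphertext stored before the query

# After each query, the RSS strips the old nonce and applies a fresh one,
# so the new ciphertext cannot be linked to the old one.
n2 = os.urandom(len(record))
c2 = xor(xor(c1, n1), n2)

assert xor(c2, n2) == record  # still decrypts to the same record
```

Together with shuffling the storage locations, this makes the records searched by a repeated query look like a fresh, unrelated set.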
Another form of leakage is the size pattern, where the CSP can learn the number of records returned after performing a query, even after shuffling and re-randomisation.
Moreover, the CSP can guess the search pattern from the size pattern.
Specifically, the queries matching different numbers of records must be different, and the queries matching the same number of records could be equivalent.
To protect the size pattern, we introduce a number of dummy records that look exactly like the real ones and could match queries.
Consequently, the search result for each query will contain a number of dummy records making it difficult for the CSP to identify the actual number of real records returned by a query.
To break the link between the size and search patterns, our strategy is to ensure that all queries always match the same number of records; concretely, we use dummy records to pad all the data elements in each field to the same occurrence.
By doing so, the size pattern is also protected from the communication volume since there is no fixed relationship between them.
However, a large number of dummy records might be required for the padding.
To reduce the number of required dummy records and preserve \textsl{\mbox{P-McDb}}\xspace's performance, we virtually divide the distinct data elements into groups and only pad the elements within the same group to the same occurrence.
By doing so, the queries searching values in the same group will always match the same number of records.
Then, the CSP cannot infer their search pattern.
Here we clarify that the search pattern is not fully protected in \textsl{\mbox{P-McDb}}\xspace.
Specifically, the CSP can still tell the queries are different if their search results are in different groups.
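A minimal sketch of the group-based padding, with made-up element occurrences, shows how many dummy records each element needs so that every element in a group reaches the group's threshold (its maximum occurrence):

```python
# Occurrences of the distinct elements of one field, split into two groups.
groups = {
    "g1": {"alice": 4, "bob": 2, "carol": 3},
    "g2": {"dave": 9, "eve": 7},
}

# Pad each element up to its group's threshold tau = max occurrence,
# so all queries within a group return the same number of records.
dummies = {}
for g, elems in groups.items():
    tau = max(elems.values())
    dummies[g] = {e: tau - c for e, c in elems.items()}

print(dummies["g1"])   # {'alice': 0, 'bob': 2, 'carol': 1}
print(dummies["g2"])   # {'dave': 0, 'eve': 2}
```

In this example, grouping needs 5 dummy records in total, whereas padding every element to the global maximum of 9 would need 20, which illustrates why grouping reduces the padding overhead.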
\textsl{\mbox{P-McDb}}\xspace also achieves forward and backward privacy.
Our strategy is to blind records also with nonces and re-randomise them using fresh nonces after executing each query.
Only queries that include the current nonce could match records.
In this way, even if a malicious CSP tries to reuse previously executed queries with old nonces, they will not match the records in the dataset, ensuring forward privacy.
Similarly, deleted records (with old nonces) will not match newly issued queries because they use different nonces.
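A rough sketch of why refreshing nonces yields forward privacy, assuming the XOR-and-hash matching used in our search algorithm (SHA-256 stands in for the hash $H'$; real key and nonce management is omitted):

```python
import hashlib
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def H(x):
    return hashlib.sha256(x).digest()

enc_e = os.urandom(16)        # stands in for the encrypted element
n_old = os.urandom(16)
cell = xor(enc_e, n_old)      # stored cell: element blinded with a nonce

eta = os.urandom(16)
eq = xor(enc_e, eta)          # encrypted query for the same element
w = H(xor(n_old, eta))        # witness built from the *current* nonce

# The query matches while the current nonce is in place:
# cell XOR eq = n_old XOR eta, so the hashes agree.
assert H(xor(cell, eq)) == w

# Re-randomise the record with a fresh nonce after the query.
n_new = os.urandom(16)
cell = xor(xor(cell, n_old), n_new)

# The old query/witness pair no longer matches -> forward privacy.
assert H(xor(cell, eq)) != w
```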
The details and algorithms of our scheme will be discussed in the following section.
\section{Security Analysis}
\label{sec:security}
In this section, we first analyse the leakage in \textsl{\mbox{P-McDb}}\xspace.
Second, we prove the patterns and forward and backward privacy are protected against the CSPs.
\subsection{Leakage of \textsl{\mbox{P-McDb}}\xspace}
Roughly speaking, given an initial database $DB$ and a sequence of queries $\bm{Q}$, the information leaked to each CSP in \textsl{\mbox{P-McDb}}\xspace can be defined as:
\begin{align*}
\mathcal{L} = \{\mathcal{L}_{\rm Setup}(DB), \{\mathcal{L}_{\rm Query}(Q_i)~or~\mathcal{L}_{\rm Update}(Q_i)\}_{Q_i \in \bm{Q}}\}
\end{align*}
where $\mathcal{L}_{\rm Setup}$, $\mathcal{L}_{\rm Query}$, and $\mathcal{L}_{\rm Update}$ represent the profiles leaked when setting up the system, executing queries and updating the database, respectively.
Specifically, $\mathcal{L}_{\rm Update}$ could be $\mathcal{L}_{\rm Insert}$ or $\mathcal{L}_{\rm Delete}$.
In the following, we analyse the specific information each CSP can learn from the received messages in each phase.
\paragraph{$\mathcal{L}_{\rm Setup}$}
When setting up the system, for the initial database $DB$, as shown in Algorithm \ref{alg:mcdb-setup}, the SSS gets the encrypted database $EDB$, and the IWS gets the group information $GDB$ and nonce information $NDB$.
In this phase, no data is sent to the RSS.
From $EDB$, the SSS learns the number of encrypted records $|EDB|$, the number of fields $F$, the length of each element $|e|$, and the length of tag $|tag|$.
From $NDB$ and $GDB$, the IWS learns $|NDB|$ ($|NDB|=|EDB|$), the length of each seed $|seed|$, the length of each nonce $|\bm{n}|$ ($|\bm{n}|=F|e|+|tag|$), the number of groups $|GDB|$, and the record identifiers $IL$ and $|(\bm{E}, \tau)^*|$ of each group.
In other words, the IWS learns the group information of each record in $EDB$.
Therefore, in this phase, the leakage $\mathcal{L}_{\rm Setup}(DB)$ learned by the SSS, IWS, and RSS can be respectively defined as:
\begin{align*}
\mathcal{L}^{SSS}_{\rm Setup}(DB) = &\{|EDB|, \mathcal{L}_{rcd}\} \\
\mathcal{L}^{IWS}_{\rm Setup}(DB) = &\{|NDB|, |\bm{n}|, |seed|, |GDB|,\\ & \{IL_{f, g}, |(\bm{E}_{f, g}, \tau)^*|\}_{(f, g) \in GDB}\} \\
\mathcal{L}^{RSS}_{\rm Setup}(DB) = &\emptyset
\end{align*}
where $\mathcal{L}_{rcd}=\{F, |e|, |tag|\}$.
\paragraph{$\mathcal{L}_{\rm Query}$}
When processing queries, as mentioned in Algorithm \ref{alg:macdb-search} and \ref{alg:mcdb-shuffle}, the SSS gets the encrypted query $EQ$, $IL$ and encrypted nonces $EN$.
Based on them, the SSS can search over $EDB$ and gets the search result $SR$.
After shuffling, the SSS also gets the shuffled records $Ercds$ from the RSS.
From $\{EQ, IL, EN, SR, Ercds\}$, the SSS learns $\{Q.type, Q.f, |Q.e|, IL, |w|, |t|\}$, where $|t|=|seed|$.
In addition, the SSS can also infer the threshold $\tau$ ($\tau=|SR|$) and the number of distinct elements $|\bm{E}|$ ($|\bm{E}|=\frac{|IL|}{\tau}$) of the searched group.
The IWS only gets $(EQ.f, g, \eta)$ from the user, from which the IWS learns the searched field and group information of each query, and $|\eta|$ ($|\eta|=|Q.e|$).
The RSS gets the searched records $Ercds$, the shuffled record identifiers $IL'$, and new nonces $NN$ for shuffling and re-randomising.
From them, the RSS only learns $|Ercd|$ ($|Ercd|=|\bm{n}|$), $IL$ and $IL'$.
In summary,
\begin{align*}
\mathcal{L}^{SSS}_{\rm Query}(Q) = &\{Q.f, Q.type, |Q.e|, Q.\bm{G}, |t|, |w|\} \\
\mathcal{L}^{IWS}_{\rm Query}(Q) = &\{Q.f, g, |\eta|, |t|, |w|\} \\
\mathcal{L}^{RSS}_{\rm Query}(Q) = &\{|Ercd|, IL, IL'\}
\end{align*}
where the group information $Q.\textbf{G}=\{g, IL, \tau, |\bm{E}|\}$.
\paragraph{$\mathcal{L}_{\rm Update}$}
Since different types of queries are processed in different manners, the SSS can learn if users are inserting, deleting or updating records, \textit{i.e.,}\xspace $Q.type$.
As mentioned in Section \ref{subsec:update}, when inserting a real record, the user generates $W$ dummy ones, encrypts both the real and dummy records with $RcdEnc$, and sends them and their group information to the SSS and IWS, respectively.
Consequently, the SSS learns $W$, which represents the threshold or the number of elements of a group, and the IWS also learns the group information of each record.
Moreover, both the SSS and IWS can learn whether the insert query introduces new elements that do not belong to $U$, based on $|\bm{E}_{f, g}|$ or $|(\bm{E}_{f, g}, \tau_{f, g})^*|$.
The RSS only performs the shuffle operation.
Therefore, $\mathcal{L}_{\rm Insert}(Q)$ learned by each CSP is
\begin{align*}
\mathcal{L}^{SSS}_{\rm Insert}(Q) = &\{W, \mathcal{L}_{rcd}\} \\
\mathcal{L}^{IWS}_{\rm Insert}(Q) = &\{Grcd, \{|(\bm{E}_{f, g}, \tau_{f, g})^*|\}_{g_f \in Grcd}, W\} \\
\mathcal{L}^{RSS}_{\rm Insert}(Q) = &\mathcal{L}^{RSS}_{\rm Query}(Q)
\end{align*}
Delete queries are performed as select queries in \textsl{\mbox{P-McDb}}\xspace, thus $\mathcal{L}_{\rm Delete} = \mathcal{L}_{\rm Query}$ for each CSP.
Above all,
\begin{align*}
\mathcal{L}^{SSS} = &\{|EDB|, F, |e|, |tag|, |t|, |w|\\ &\{\{Q_i.f, Q_i.type, |Q_i.e|, Q_i.\bm{G}\} ~or~ W\}_{Q_i \in \textbf{Q}}\} \\
\mathcal{L}^{IWS} = &\{|NDB|, |GDB|, |\bm{n}|, |seed|, |w|, \\ &\{IL_{f, g}, |(\bm{E}_{f, g}, \tau)^*|\}_{(f, g) \in GDB}, \{\{Q_i.f, Q_i.g, |Q_i.e|\} \\ & ~or~ \{Grcd, \{|(\bm{E_{f, g}}, \tau_{f, g})^*|\}_{g_f \in Grcd}, W\}\}_{Q_i \in \textbf{Q}} \} \\
\mathcal{L}^{RSS} = &\{|Ercd|, \{IL, IL'\}_{Q_i \in \bm{Q}}\}
\end{align*}
\subsection{Proof of Security}
Given the above leakage definition for each CSP, in this part, we prove adversaries do not learn anything beyond $\mathcal{L}^{csp}$ by compromising the CSP $csp$, where $csp$ could be the SSS, IWS, or RSS.
It is clear that, from $\mathcal{L}^{csp}$, adversaries cannot infer the search, access, or size patterns of queries within a group, nor break forward and backward privacy.
Therefore, proving \textsl{\mbox{P-McDb}}\xspace only leaks $\mathcal{L}^{csp}$ to $csp$ indicates \textsl{\mbox{P-McDb}}\xspace protects the patterns and ensures forward and backward privacy within groups.
To prove \textsl{\mbox{P-McDb}}\xspace indeed only leaks $\mathcal{L}^{csp}$ to $csp$, we follow the typical method of using a real-world versus ideal-world paradigm \cite{Bost:2017:forward,CashJJJKRS14,KamaraPR12}.
The idea is that first we assume the CSP $csp$ is compromised by a Probabilistic Polynomial-Time (PPT) honest-but-curious adversary $\mathcal{A}$ who follows the protocol honestly as done by $csp$, but wants to learn more information by analysing the received messages and injecting malicious records.
Second, we build two experiments: $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$, where $\Pi$ represents \textsl{\mbox{P-McDb}}\xspace.
In $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$, all the messages sent to $\mathcal{A}$ are generated as specified in \textsl{\mbox{P-McDb}}\xspace.
Whereas, in $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$, all the messages are generated by a PPT simulator $\mathcal{S}$ that only has access to $\mathcal{L}^{csp}$.
That is, $\mathcal{S}$ ensures $\mathcal{A}$ only learns the information defined in $\mathcal{L}^{csp}$ from received messages in $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$.
In the game, $\mathcal{A}$ chooses an initial database, triggers $Setup$, and adaptively issues \emph{select}, \emph{insert}, and \emph{delete} queries of its choice.
In response, either $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ or $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$ is invoked to process the database and queries.
Based on the received messages, $\mathcal{A}$ distinguishes if they are generated by $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ or $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$.
If $\mathcal{A}$ cannot distinguish them with non-negligible advantage, it indicates $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ has the same leakage profile as $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$.
\begin{mydef}
We say the dynamic SSE scheme is $\mathcal{L}$-adaptively-secure against the CSP $csp$, with respect to the leakage function $\mathcal{L}^{csp}$, if for any PPT adversary issuing a polynomial number of queries, there exists a PPT simulator $\mathcal{S}$, such that $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$ are distinguishable with at most negligible probability $\textbf{negl}({k})$.
\end{mydef}
Herein, we acknowledge that \textsl{\mbox{P-McDb}}\xspace leaks the group information of queries and records and leaks whether the elements involved in select, insert and delete queries belong to $U$ or not. For clarity, in the proof we assume there is only one group in each field, and omit the group processing. Moreover, we assume all the queries issued by $\mathcal{A}$ only involve elements in $U$.
In this case, the leakage learned by each CSP can be simplified into:
\begin{align*}
\mathcal{L}^{SSS} = &\{|EDB|, F, |e|, |tag|, |t|, |w|\\ &\{\{Q_i.f, Q_i.type, |Q_i.e|\} ~or~ W\}_{Q_i \in \textbf{Q}}\} \\
\mathcal{L}^{IWS} = &\{|NDB|, |\bm{n}|, |seed|, |w|, \{|(\bm{E}_{f}, \tau)^*|\}_{f \in [1, F]}, \\ &\{Q_i.f~or~W\}_{Q_i \in \textbf{Q}} \} \\
\mathcal{L}^{RSS} = &\{|Ercd|, \{IL'\}_{Q_i \in \bm{Q}}\}
\end{align*}
\begin{theorem}\label{the::SSS}
If $\Gamma$ is a secure PRF, $\pi$ is a secure PRP, and $H'$ is a random oracle, \textsl{\mbox{P-McDb}}\xspace is an $\mathcal{L}$-adaptively-secure dynamic SSE scheme against the SSS.
\end{theorem}
\begin{proof}
To argue security, as done in \cite{Bost:2017:forward,CashJJJKRS14,KamaraPR12}, we proceed through a sequence of games.
The proof begins with $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$, which is exactly the real protocol, and constructs a sequence of games, each differing slightly from the previous one; we show that each pair of adjacent games is indistinguishable.
Eventually we reach the last game $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}}(k)$, which is simulated by a simulator $\mathcal{S}$ based on the defined leakage profile $\mathcal{L}^{SSS}$.
By the transitive property of the indistinguishability, we conclude that $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ is indistinguishable from $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}}(k)$ and complete our proof.
Since $RcdDec$ is unrelated to the CSPs, it is omitted from the games.
\begin{algorithm}[!htp]
\scriptsize
\caption{$\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k).RcdEnc(rcd, flag)$ $\|$ \fbox{$\mathcal{G}_1$}, \fbox{$\mathcal{G}_2$}, \fbox{$\mathcal{G}_3$}}
\label{proof::h1::enc}
\begin{algorithmic}[1]
\STATE $seed \stackrel{\$}{\leftarrow} \{0,1\}^{|seed|}$
\STATE $\bm{n} \leftarrow \Gamma_{s_2} (seed)$ $\vartriangleleft$ \fbox{$\mathcal{G}_1$: $\bm{n} \leftarrow \bm{Nonce}[seed]$}, where $\bm{n}= \ldots \parallel n_f \parallel \ldots \parallel n_{F+1}$, $|n_f|=|e|$ and $|n_{F+1}|=|tag|$
\FOR {each element $e_f \in rcd$}
\STATE ${e^*_f} \leftarrow Enc_{s_1}(e_f) \oplus n_f$ $\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $e^*_f \leftarrow \{0, 1\}^{|e|}$}}
\ENDFOR
\IF {$flag=1$}
\STATE $S \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $tag \leftarrow (H_{s_1}(S)||S )\oplus n_{F+1}$
$\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $tag \leftarrow \{0, 1\}^{|tag|}$}}
\ELSE
\STATE $tag \stackrel{\$}{\leftarrow} \{0,1\}^{|tag|}$
\ENDIF
\RETURN $Ercd=(e^*_1, \ldots, e^*_F, tag)$ and $(seed, \bm{n})$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!htp]
\scriptsize
\caption{$\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k).Query(Q)$ $\|$ \fbox{$\mathcal{G}_1$}, \fbox{$\mathcal{G}_2$}, \fbox{$\mathcal{G}_3$}}
\label{h1:macdb-search}
\begin{algorithmic}[1]
\STATE \underline{User: $QueryEnc(Q)$}
\STATE $EQ.type \leftarrow Q.type$, $EQ.f \leftarrow Q.f$
\STATE $\eta \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $EQ.e^* \leftarrow Enc_{s_1}(Q.e) \oplus \eta$ $\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $EQ.e^* \leftarrow \{0, 1\}^{|e|}$}}
\STATE Send $EQ=(type, f, op, e^*)$ to the SSS
\STATE Send $(EQ.f, \eta)$ to the IWS
~\\
\STATE \underline{IWS: $NonceBlind(EQ.f, \eta)$}
\STATE $EN \leftarrow \emptyset$
\STATE \fbox{$\mathcal{G}_2$: Randomly put $\tau_f$ record identifiers into $\bm{I}$}
\FOR {each $id \in NDB$ }
\STATE $(seed, \bm{n}) \leftarrow NDB(id)$, where $\bm{n}= \ldots ||n_{EQ.f}|| \ldots $ and $|n_{EQ.f}|=|\eta|$ $\vartriangleleft$ \fbox{Deleted in $\mathcal{G}_2$}
\STATE ${w} \leftarrow H'(n_{EQ.f} \oplus \eta)$ $\vartriangleleft$ \fbox{$\mathcal{G}_2$:~~\begin{minipage}[c][1.0cm][t]{4.5cm}{\textbf{if} $id \in \bm{I}$ \\ $w_{id} \leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*)$ \\ \textbf{else} \\ $w_{id} \leftarrow \{0, 1\}^{|w|}$ } \end{minipage}}
\STATE $t \leftarrow \eta \oplus seed$ $\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $t \leftarrow \{0, 1\}^{|seed|}$}}
\STATE $EN(id) \leftarrow (w, t)$
\ENDFOR
\STATE Send the encrypted nonce set $EN=((w, t), \ldots)$ to the SSS
~\\
\STATE \underline{SSS: $Search(EQ, EN)$}
\STATE $SR \leftarrow \emptyset$
\FOR {each $id \in EDB$}
\IF {$H'(EDB(id, EQ.f) \oplus EQ.e^*) = EN(id).w$}
\STATE $SR \leftarrow SR \cup (EDB(id), EN(id).t)$
\ENDIF
\ENDFOR
\STATE Send the search result $SR$ to the user
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!ht]
\scriptsize
\caption{$\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k).Shuffle()$ $\|$ \fbox{$\mathcal{G}_1$}, \fbox{$\mathcal{G}_2$}, \fbox{$\mathcal{G}_3$}}
\label{h1:mcdb-shuffle}
\begin{algorithmic}[1]
\STATE \underline{IWS: $PreShuffle()$}
\STATE {$IL' \leftarrow \pi(NDB)$}
\FOR {each $id \in IL'$}
\STATE $seed \stackrel{\$}{\leftarrow} \{0, 1\}^{|seed|}$
\STATE $\bm{n}' \leftarrow \Gamma_{s_2}(seed)$ $\vartriangleleft$ \fbox{$\mathcal{G}_1$: $\bm{n}' \leftarrow \bm{Nonce}[seed]$}
\STATE $NN(id) \leftarrow NDB(id).\bm{n} \oplus \bm{n}' $ $\vartriangleleft$ \fbox{$\mathcal{G}_3$: $NN(id) \leftarrow \{0, 1\}^{|\bm{n}|}$}
\STATE $NDB(id)\leftarrow (seed, \bm{n}')$
\ENDFOR
\STATE Send $(IL', NN)$ to the RSS.
~\\~
\STATE{\underline{RSS: $Shuffle(Ercds, IL', NN)$}}
\STATE Shuffle $Ercds$ based on $IL'$
\FOR {each $id \in IL'$}
\STATE {$Ercds(id) \leftarrow Ercds(id) \oplus NN(id)$} $\vartriangleleft$ \fbox{$\mathcal{G}_3$: $Ercds(id) \leftarrow \{0, 1\}^{|\bm{n}|}$}
\ENDFOR
\STATE Send $Ercds$ to the SSS.
\end{algorithmic}
\end{algorithm}
\noindent\textbf{Game $\mathcal{G}_1$}: Comparing with $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$, the difference in $\mathcal{G}_1$ is that the PRF $\Gamma$ for generating nonces, in $RcdEnc$ and $PreShuffle$ algorithms, is replaced with a mapping \textbf{Nonce}.
Specifically, as shown in Algorithms \ref{proof::h1::enc} and \ref{h1:mcdb-shuffle}, for each unused $seed$ (the seed space is large enough), a random string of length $F|e|+|tag|$ is generated as the nonce, stored in \textbf{Nonce}, and reused thereafter.
This means that all of the $\bm{n}$ are uniform and independent strings.
In this case, the adversarial distinguishing advantage between $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\mathcal{G}_1$ is exactly the distinguishing advantage between a truly random function and a PRF.
Thus, this change makes a negligible difference between $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\mathcal{G}_1$, \textit{i.e.,}\xspace
\[
\centerline { $|{\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1] - {\rm Pr}[\mathcal{G}_1=1]| \leq \textbf{negl}({k})$}
\]
where ${\rm Pr}[\mathcal{G}=1]$ represents the probability that the messages received by $\mathcal{A}$ are generated by $\mathcal{G}$.
\noindent\textbf{Game $\mathcal{G}_2$}:
From $\mathcal{G}_1$ to $\mathcal{G}_2$, $w$ is replaced with a random string, rather than generated via $H'$.
However, it is necessary to ensure $\mathcal{A}$ gets $\tau_f$ matched records after searching over $EDB$, since that is the leakage $\mathcal{A}$ learns, where $\tau_f$ is the threshold of the searched field.
To achieve that, the experiment randomly picks $\tau_f$ witnesses and programs their values.
Specifically, as shown in Algorithm \ref{h1:macdb-search}, the experiment first randomly picks a set of record identifiers $\bm{I}$, where $|\bm{I}|=\tau_f$.
Second, for each identifier $id \in \bm{I}$, the experiment programs $w_{id} \leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*)$.
By doing so, the records identified by $\bm{I}$ will match the query.
For the identifier $id \notin \bm{I}$, $w_{id} \leftarrow \{0, 1\}^{|w|}$.
The only difference between $\mathcal{G}_2$ and $\mathcal{G}_1$ is the generation of $w$.
In the following, we examine whether $\mathcal{A}$ can distinguish the two games based on $w$.
In $\mathcal{G}_2$,
\begin{align}\notag
For~id \in \bm{I}, ~w_{id} &\leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*) \\ \notag
For~id \notin \bm{I}, ~w_{id} &\leftarrow \{0, 1\}^{|w|}
\end{align}
Recall that in $\mathcal{G}_1$,
\begin{equation}\notag
w_{id} \leftarrow H'(n_{EQ.f} \oplus \eta)
\end{equation}
In $\mathcal{G}_1$, $n_{EQ.f}$ and $\eta$ are random strings.
In $\mathcal{G}_2$, due to the one-time pad encryption in $RcdEnc$ and $QueryEnc$, $EDB(id, EQ.f)$ and $EQ.e^*$ are indistinguishable from random strings.
Thus, for $id \in \bm{I}$, $w_{id}$ is generated in the same way as in $\mathcal{G}_1$.
For $id \notin \bm{I}$, $w_{id}$ is a random string in $\mathcal{G}_2$, whereas in $\mathcal{G}_1$ $w_{id}$ is generated by deterministic $H'$.
It might seem that $\mathcal{A}$ could easily distinguish $\mathcal{G}_2$ and $\mathcal{G}_1$, since $\mathcal{G}_1$ outputs the same $w$ for the same input, whereas $\mathcal{G}_2$ does not.
However, in $\mathcal{G}_1$ the inputs to $H'$, namely $n_{EQ.f}$ and $\eta$, are random strings, so the probability of $H'$ receiving the same input twice is negligible, making $H'$ indistinguishable from uniform sampling.
Thus, in both cases $w_{id}$ in $\mathcal{G}_2$ is indistinguishable from $w_{id}$ in $\mathcal{G}_1$.
Next, we discuss if $\mathcal{A}$ can distinguish the two games based on $SR$.
The leakage of $SR$ includes the identifier of each matched record and $|SR|$.
Due to the padding, $|SR|=|\bm{I}|=\tau_f$, which means the two games are indistinguishable based on $|SR|$.
In $\mathcal{G}_1$, the identifiers of matched records are determined by the shuffle operations performed for the previous query.
In $\mathcal{G}_2$, the identifiers of matched records are randomly picked.
Thus, the advantage in distinguishing $\mathcal{G}_1$ from $\mathcal{G}_2$ based on the identifiers is exactly the advantage in distinguishing a truly random permutation from a PRP, which is negligible.
Combining the above, we have
\[
\centerline { $|{\rm Pr}[\mathcal{G}_2=1] - {\rm Pr}[\mathcal{G}_1=1]| \leq \textbf{negl}(k)$}
\]
\noindent\textbf{Game $\mathcal{G}_3$}:
The difference between $\mathcal{G}_2$ and $\mathcal{G}_3$ is that all the XOR-based encryptions, such as the generation of $e^*$, $Q.e^*$, and $t$, are replaced with randomly sampled strings (the details are shown in Algorithms \ref{proof::h1::enc}, \ref{h1:macdb-search}, and \ref{h1:mcdb-shuffle}).
Since sampling a fixed-length random string is indistinguishable from the one-time pad encryption,
we have
\[
\centerline { ${\rm Pr}[\mathcal{G}_3=1] = {\rm Pr}[\mathcal{G}_2=1] $}
\]
\begin{algorithm}[!htp]
\scriptsize
\caption{$\mathcal{S}.RcdEnc(\mathcal{L}_{rcd}$)}
\label{proof::ideal::enc}
\begin{algorithmic}[1]
\STATE \tgrey{$seed \stackrel{\$}{\leftarrow} \{0,1\}^{|seed|}$}
\STATE \tgrey{$\bm{n} \leftarrow \bm{Nonce}[seed]$}
\FOR {each $f \in [1, F]$}
\STATE ${e^*_f} \leftarrow \{0, 1\}^{|e|}$
\ENDFOR
\STATE {$tag \stackrel{\$}{\leftarrow} \{0,1\}^{|tag|}$}
\RETURN {$Ercd=(e^*_1, \ldots, e^*_F, tag)$} and \tgrey{$(seed, \bm{n})$}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!htp]
\scriptsize
\caption{$\mathcal{S}.Query(\mathcal{L}^{SSS}_{Query})$}
\label{ideal:macdb-search}
\begin{algorithmic}[1]
\STATE \underline{User: $QueryEnc(\mathcal{L}^{SSS}_{Query})$}
\STATE $EQ.type \leftarrow Q.type$, $EQ.f \leftarrow Q.f$
\STATE \tgrey{$\eta \stackrel{\$}{\leftarrow} \{0,1\}^{| e |}$}
\STATE $EQ.e^* \leftarrow \{0, 1\}^{|e|}$
\STATE Send $EQ=(type, f, e^*)$ to the SSS
\STATE \tgrey{Send $(EQ.f, \eta)$ to the IWS}
~\\
\STATE \underline{IWS: $NonceBlind(\mathcal{L}^{SSS}_{Query})$}
\STATE $EN \leftarrow \emptyset$
\STATE Randomly put $\tau_f$ record identifiers into $\bm{I}$
\FOR {each $id \in [1, |EDB|]$ }
\IF{$id \in \bm{I}$}
\STATE ${w} \leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*)$
\ELSE
\STATE ${w} \leftarrow \{0, 1\}^{|w|}$
\ENDIF
\STATE $t \leftarrow \{0, 1\}^{|seed|}$
\STATE $EN(id) \leftarrow (w, t)$
\ENDFOR
\STATE Send the encrypted nonce set $EN=((w, t), \ldots)$ to the SSS
~\\
\STATE \underline{SSS: $Search(EQ, EN)$}
\STATE $SR \leftarrow \emptyset$
\FOR {each $id \in EDB$}
\IF {$H'(EDB(id, EQ.f) \oplus EQ.e^*) = EN(id).w$ }
\STATE $SR \leftarrow SR \cup (EDB(id), EN(id).t)$
\ENDIF
\ENDFOR
\STATE Send the search result $SR$ to the user
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\scriptsize
\caption{$\mathcal{S}.Shuffle(\mathcal{L}^{SSS}_{Query})$}
\label{proof::ideal::shuffle}
\begin{algorithmic}[1]
\STATE \underline{IWS: $PreShuffle()$}
\STATE \tgrey{$IL' \leftarrow RP (NDB)$}
\FOR {\tgrey{each $id \in IL'$}}
\STATE \tgrey{$seed \stackrel{\$}{\leftarrow} \{0, 1\}^{|seed|}$}
\STATE \tgrey{$\bm{n'} \leftarrow \bm{Nonce}[seed]$}
\STATE \tgrey{$NN(id) \leftarrow \{0, 1\}^{|\bm{n}|}$}
\STATE \tgrey{$NDB(id)\leftarrow (seed, \bm{n'})$}
\ENDFOR
\STATE \tgrey{Send $(IL', NN)$ to the RSS.}
~\\
\STATE{\underline{RSS: $Shuffle(\mathcal{L}^{SSS}_{Query})$}}
\STATE \tgrey{Shuffle $Ercds$ based on $IL'$}
\FOR {each $id \in IL$}
\STATE $Ercds(id) \leftarrow \{0, 1\}^{|\bm{n}|} $
\ENDFOR
\STATE Send $Ercds$ to the SSS.
\end{algorithmic}
\end{algorithm}
\noindent\textbf{$\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$}:
From $\mathcal{G}_3$ to the final game, we replace the inputs to the $RcdEnc$, $Query$, and $Shuffle$ algorithms with $\mathcal{L}^{SSS}$.
Moreover, for clarity, we fade the operations unrelated to the SSS.
From Algorithms \ref{proof::ideal::enc}, \ref{ideal:macdb-search}, and \ref{proof::ideal::shuffle}, it is easy to observe that the messages sent to $\mathcal{A}$, \textit{i.e.,}\xspace $\{Ercd, EQ, EN, Ercds\}$, can be easily simulated by only relying on $\mathcal{L}^{SSS}$.
Here we have:
\[
{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1]={\rm Pr}[\mathcal{G}_3=1]
\]
By combining all the distinguishing advantages above, we get:
\[
|{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1]- {\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1]| < \textbf{negl}(k)
\]
In $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)$, $\mathcal{A}$ only learns $\mathcal{L}_{rcd}$ and $\mathcal{L}^{SSS}_{Query}$.
The negligible advantage of a distinguisher between $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)$ indicates that \textsl{\mbox{P-McDb}}\xspace also only leaks $\mathcal{L}_{rcd}$ and $\mathcal{L}^{SSS}_{Query}$.
Although the above simulation does not cover the $Setup$ and updating phases, the two phases mainly rely on the $RcdEnc$ algorithm, which has been proven to leak only $\mathcal{L}_{rcd}$ to the SSS.
From the $Setup$ phase, $\mathcal{A}$ only gets $EDB$, and each record in $EDB$ is encrypted with $RcdEnc$.
Thus, $\mathcal{A}$ only learns $|EDB|$ and $\mathcal{L}_{rcd}$ in this phase.
Similarly, $\mathcal{A}$ only gets $W+1$ encrypted records in $Insert$ algorithm.
Therefore, in addition to $\mathcal{L}_{rcd}$, it only learns $W$, which is equal to $|\bm{E}|$ or $\tau$ of a group.
For delete queries, $\mathcal{A}$ learns $\mathcal{L}^{SSS}_{Query}$ since they are first performed as select queries.
As proved above, the tags are indistinguishable from random strings, meaning the returned tags do not leak additional information.
\end{proof}
\begin{theorem}\label{the::IWS}
If $ENC$ is semantically secure, \textsl{\mbox{P-McDb}}\xspace is a $\mathcal{L}$-adaptively-secure dynamic SSE scheme against the IWS.
\end{theorem}
\begin{proof}
Herein, we also assume all the records are in one group.
In this case, the IWS gets $(seed, \bm{n})$ for each record and $(\bm{E}_f, \tau_f)^*$ for each field when setting up the database, gets $(Q.f, \eta)$ when executing queries, and gets updated $(\bm{E}_f, \tau_f)^*$ when inserting records.
Note that the IWS can generate $\bm{n}$ by itself since it has the key $s_2$.
Considering $seed$ and $\eta$ are random strings, from $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ to $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$ we just need one step.
Specifically, given $\mathcal{L}^{IWS}$, in $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$, $\mathcal{S}$ just needs to simulate $(\bm{E}_f, \tau_f)^*$ with $|(\bm{E}_f, \tau_f)^*|$-bit random strings in $Setup$ and $Insert$ algorithms.
Given $ENC$ is semantically secure, we have
\[
|{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1]- {\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1]| < \textbf{negl}(k)
\]
\end{proof}
\begin{theorem}\label{the::RSS}
\textsl{\mbox{P-McDb}}\xspace is a $\mathcal{L}$-adaptively-secure dynamic SSE scheme against the RSS.
\end{theorem}
\begin{proof}
In \textsl{\mbox{P-McDb}}\xspace, the RSS is only responsible for shuffling and re-randomising records after each query.
For the shuffling and re-randomising, it gets encrypted records, $IL'$ and $NN$.
Here we also just need one step to reach $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$.
Given $\mathcal{L}^{RSS}$, as done in the above \textbf{Game $\mathcal{G}_3$}, $\mathcal{S}$ needs to replace $e^*$ and $tag$ in $RcdEnc$ with $|e|$-bit and $|tag|$-bit random strings respectively and simulate each element in $NN$ with a $|Ercd|$-bit random string in $PreShuffle$.
As mentioned, since sampling a fixed-length random string is indistinguishable from the one-time pad encryption, we have
\[
{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1] = {\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1]
\]
\end{proof}
\section{Solution details}
\label{sec:MCDB-details}
\begin{table}[htp]
\scriptsize
\centering
\caption{Data representation in \textsl{\mbox{P-McDb}}\xspace}
\subtable[\emph{Staff}]
{
\label{Tbl:mcdb-staff}
\begin{tabular}{|c|c|}\hline
\textbf{Name} & \textbf{Age} \\\hline
Alice & 27 \\\hline
Anna & 30 \\\hline
Bob & 27 \\\hline
Bill & 25 \\\hline
Bob & 33 \\\hline
\end{tabular}
}
\subtable[GDB on the IWS]
{
\label{Tbl:mcdb-engid}
\begin{tabular}{|c|c|c|}\hline
\textbf{GID} & \textbf{IL} &$(\bm{E}, \tau)^*$ \\ \hline
$(1, g_{1})$ & $\{1, 2\}$ & $(\{Alice, Anna\}, 1)^*$ \\\hline
$(1, g_{2})$ & $\{3, 4, 5, 6\}$ & $(\{Bob, Bill\}, 2)^*$ \\\hline
$(2, g'_{1})$ & $\{1, 3, 4, 6\}$ & $(\{25, 27\}, 2)^*$ \\\hline
$(2, g'_{2})$ & $\{2, 5\}$ & $(\{30, 33\}, 1)^*$ \\\hline
\end{tabular}
}
\subtable[NDB on the IWS]
{
\label{Tbl:mcdb-nonceI}
\begin{tabular}{|c|c|c|}\hline
\textbf{id} &$\bm{seed}$ & $\bm{nonce}$ \\ \hline
1 & $seed_{1}$ & $\bm{n_1}$ \\\hline
2 & $seed_{2}$ & $\bm{n_2}$ \\\hline
3 & $seed_{3}$ & $\bm{n_3}$ \\\hline
4 & $seed_{4}$ & $\bm{n_4}$ \\\hline
5 & $seed_{5}$ & $\bm{n_5}$ \\\hline
6 & $seed_{6}$ & $\bm{n_6}$ \\\hline
\end{tabular}
}
\subtable[EDB on the SSS]
{
\label{Tbl:mcdb-enstaff}
\begin{tabular}{|c|c|c|c|}\hline
\textbf{ID} & \textbf{1} & \textbf{2} & \textbf{Tag} \\\hline
1 & $SE(Alice)$ & $SE(27)$ & $tag_1$ \\\hline
2 & $SE(Anna)$ & $SE(30)$ & $tag_2$ \\\hline
3 & $SE(Bob)$ & $SE(27)$ & $tag_3$ \\\hline
4 & $SE(Bill)$ & $SE(25)$ & $tag_4$ \\\hline
5 & $SE(Bob)$ & $SE(33)$ & $tag_5$ \\\hline
6 & $SE(Bill)$ & $SE(25)$ & $tag_6$ \\\hline
\end{tabular}
}
\label{Tbl:mcdb-store}
\subref{Tbl:mcdb-staff} A sample \emph{Staff} table.
\subref{Tbl:mcdb-engid} GDB, the group information, is stored on the IWS.
\subref{Tbl:mcdb-nonceI} NDB contains the seeds used to generate nonces.
It might contain the nonces directly.
NDB is also stored on the IWS.
\subref{Tbl:mcdb-enstaff} EDB, the encrypted \emph{Staff} table, is stored on the SSS.
Each encrypted data element $SE(e_f)=Enc_{s_1}(e_f)\oplus n_f$.
Each record has a tag, enabling users to distinguish dummy and real records.
In this example, the last record in Table~\subref{Tbl:mcdb-enstaff} is dummy.
The RSS does not store any data.
\end{table}
\begin{algorithm}[tp]
\scriptsize
\caption{$Setup(k, DB)$}
\label{alg:mcdb-setup}
\begin{algorithmic}[1]
\STATE \underline{Admin: $KeyGen$}
\STATE $s_1, s_2 \leftarrow \{0, 1\}^k$
~\\~
\STATE $GDB \leftarrow \emptyset$, $EDB \leftarrow \emptyset$, $NDB \leftarrow \emptyset$
\STATE \underline{Admin: $GroupGen$} \label{code:setup-grpgen-be}
\FOR{each field $f$}
\STATE Collect $U_f$ and $\{O(e)\}_{e \in U_f}$, and compute $\Psi_f=\{g \leftarrow GE_{s_1}(e)\}_{e \in U_f}$
\FOR{each $g \in \Psi_f$}
\STATE $IL_{f, g} \leftarrow \emptyset$, $\bm{E}_{f, g} \leftarrow \{e\}_{e\in U_f \& GE_{s_1}(e)=g}$ \label{code:setup-grpgen-e}
\STATE $\tau_{f, g} \leftarrow \max\{|O(e)|\}_{e \in \bm{E}_{f, g}}$ \label{code:setup-grpgen-t}
\STATE $(\bm{E}_{f, g}, \tau_{f, g})^* \leftarrow ENC_{s_1}(\bm{E}_{f, g}, \tau_{f, g})$
\label{code:setup-gdb}
\STATE $GDB(f, g) \leftarrow (IL_{f, g}, (\bm{E}_{f, g}, \tau_{f, g})^*)$
\ENDFOR
\ENDFOR \label{code:setup-grpgen-end}
~\\~
\STATE \underline{Admin: DummyGen}\label{code:setup-dummygen-be}
\FOR{each field $f$}
\STATE $\Sigma_f \leftarrow \Sigma_{g \in \Psi_f}\Sigma_{e \in \bm{E}_{f, g}} (\tau_{f, g} - O(e))$
\ENDFOR
\STATE $\Sigma_{max} \leftarrow \max\{\Sigma_1, \ldots, \Sigma_F\}$
\STATE Add $\Sigma_{max}$ dummy records with values $(NULL, \ldots, NULL)$ into $DB$
\FOR {each field $f$}
\FOR {each $e \in U_f$}
\STATE Assign $e$ to $\tau_{f, GE_{s_1}(e)}-O(e)$ dummy records in field $f$
\ENDFOR
\ENDFOR
\STATE Mark real and dummy records with $flag=1$ and $flag=0$, respectively
\STATE Shuffle $DB$ \label{code:setup-dummygen-end}
~\\~
\STATE \underline{Admin: DBEnc} \label{code:setup-DBenc-be}
\STATE $id \leftarrow 0$
\FOR{each $rcd \in DB$}
\STATE $(Ercd, seed, \bm{n}, Grcd)\leftarrow RcdEnc(rcd, flag)$
\STATE $EDB(id) \leftarrow Ercd$, $NDB(id) \leftarrow (seed, \bm{n})$
\FOR{each $g_f \in Grcd$}
\STATE $IL_{f, g} \leftarrow IL_{f, g} \cup id$ \label{code:setup-DBenc-IL}
\ENDFOR
\STATE $id++$ \label{code:setup-DBenc-end}
\ENDFOR
\end{algorithmic}
\end{algorithm}
In this section, we give the details for setting up, searching, and updating the database.
\subsection{Setup}
\label{subsec:boot}
The system is set up by the admin by generating the secret keys $s_1$ and $s_2$ based on the security parameter $k$.
$s_1$ is only known to users and is used to protect queries and records from CSPs.
$s_2$ is introduced to save storage; it is known to both users and the IWS and is used to generate nonces for record and query encryption.
The admin also defines the cryptographic primitives used in \textsl{\mbox{P-McDb}}\xspace.
We assume the initial database $DB$ is not empty.
The admin bootstraps $DB$ with Algorithm \ref{alg:mcdb-setup}, $Setup(k, DB) \rightarrow (EDB, GDB, NDB)$.
Roughly speaking, the admin divides the records into groups (Lines \ref{code:setup-grpgen-be}-\ref{code:setup-grpgen-end}), pads the elements in the same group into the same occurrence by generating dummy records (Lines \ref{code:setup-dummygen-be}-\ref{code:setup-dummygen-end}), and encrypts each record (Lines \ref{code:setup-DBenc-be}-\ref{code:setup-DBenc-end}).
The details of each operation are given below.
\paragraph{Group Generation}
As mentioned, inserting dummy records is necessary to protect the size and search patterns, and grouping the data aims at reducing the number of required dummy records.
Indeed, dividing the data into groups could also reduce the number of records to be searched.
However, padding the data only within each group to the same occurrence leaks the group information.
In particular, the SSS can learn from the size pattern whether records and queries are in the same group.
Considering the group information will be inevitably leaked after executing queries, \textsl{\mbox{P-McDb}}\xspace allows the SSS to know the group information in advance, and only search a group of records for each query rather than the whole database.
By doing so, the query can be processed more efficiently without leaking additional information.
Yet, in this case, the SSS needs to know which group of records should be searched for each query.
Considering the SSS only gets encrypted records and queries, the group should be determined by the admin and users.
To avoid imposing a heavy storage overhead on users, \textsl{\mbox{P-McDb}}\xspace divides data into groups with a Pseudo-Random Function (PRF) $GE: \{0, 1\}^* \times \{0, 1\}^k \rightarrow \{0, 1\}^*$.
The elements in field $f$ ($1\leq f \leq F$) with the same $g \leftarrow GE_{s_1}(e)$ value are in the same group, and $(f, g)$ is the group identifier.
In this way, the admin and users can easily know the group identifiers of records and queries just by performing $GE$.
The implementation of $GE$ function affects the security level of the search pattern.
Let $\lambda$ stand for the number of distinct elements contained in a group.
Since the elements in the same group will have the same occurrence, the queries involving those elements (defined as \emph{the queries in the same group}) will match the same number of records.
Then, the adversary cannot tell their search patterns from their size patterns.
Formally, for any two queries matching the same number of records, the probability they involve the same keyword is $\frac{1}{\lambda}$.
Thus, $\lambda$ also represents the security level of the search pattern.
Given $\lambda$, the implementation of $GE$ should ensure each group contains at least $\lambda$ distinct elements.
For instance, the admin could generate the group identifier of $e$ by computing $LSB_b(H_{s_1}(e))$, where $LSB_b$ returns the least significant $b$ bits of its input.
To ensure each group contains at least $\lambda$ distinct elements, $b$ should be chosen sufficiently small, since a smaller $b$ yields fewer and thus larger groups.
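As an illustration, the $LSB_b$-based grouping can be sketched in Python, with HMAC-SHA256 standing in for the keyed hash $H$ (an assumed instantiation; key and element names are illustrative):

```python
# Sketch of the GE group-assignment function: the group identifier of
# element e is the b least-significant bits of H_{s1}(e).
import hashlib
import hmac

def GE(s1: bytes, e: str, b: int) -> int:
    digest = hmac.new(s1, e.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") & ((1 << b) - 1)

# A smaller b yields at most 2^b groups, so each group collects more
# distinct elements (a larger lambda).
s1 = b"demo-key"
groups = {GE(s1, name, 2) for name in ("Alice", "Anna", "Bob", "Bill")}
```

Since $GE$ is deterministic under $s_1$, the admin and users derive the same group identifier independently.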
The details of grouping $DB$ are shown in Lines \ref{code:setup-grpgen-be}-\ref{code:setup-grpgen-end} of Algorithm \ref{alg:mcdb-setup}.
Formally, we define group $(f, g)$ as $(IL_{f, g}, \bm{E}_{f, g}, \tau_{f, g})$, where $IL_{f, g}$ stores the identifiers of the records in this group (Line \ref{code:setup-DBenc-IL}), $\bm{E}_{f, g}$ is the set of distinct elements in this group (Line \ref{code:setup-grpgen-e}), and $\tau_{f, g}=\max\{O(e) | e \in \bm{E}_{f, g}\}$ is the occurrence threshold for padding (Line \ref{code:setup-grpgen-t}).
Since the group information will be stored in the CSP, $(\bm{E}_{f, g}, \tau_{f, g})$ is encrypted into $(\bm{E}_{f, g}, \tau_{f, g})^*$ with $s_1$ and a semantically secure symmetric encryption primitive $ENC: \{0, 1\}^* \times \{0, 1\}^k \rightarrow \{0, 1\}^*$.
$(\bm{E}_{f, g}, \tau_{f, g})^*$ is necessary for insert queries (The details are given in Section~\ref{subsec:update}).
Note that if the initial database is empty, the admin can pre-define a possible $U_f$ for each field and group its elements in the same way.
In this case, $IL=\emptyset$ and $\tau=0$ for each group after the bootstrapping.
\paragraph{Dummy Records Generation}
Once the groups are determined, the next step is to generate dummy records.
The details for generating dummy records are given in Lines \ref{code:setup-dummygen-be}-\ref{code:setup-dummygen-end}, Algorithm \ref{alg:mcdb-setup}.
Specifically, the admin first needs to know how many dummy records are required for the padding.
Since the admin will pad the occurrence of each element in $\bm{E}_{f, g}$ into $\tau_{f, g}$, $\tau_{f, g}-O(e)$ dummy records are required for each $e \in \bm{E}_{f, g}$.
Assume there are $M$ groups in field $f$; then $\Sigma_f=\sum_{i=1}^{M}\sum_{e \in \bm{E}_{f, g^i}}(\tau_{f, g^i}-O(e))$ dummy records are required in total to pad field $f$.
For the database with multiple fields, different fields might require different numbers of dummy records.
Assume $\Sigma_{max}=\max\{\Sigma_1, \ldots, \Sigma_F\}$.
To ensure all fields can be padded properly, $\Sigma_{max}$ dummy records are required.
Consequently, $\Delta_f=\Sigma_{max} - \Sigma_f$ of these dummy records are redundant for field $f$.
The admin assigns them a meaningless string, such as `NULL', in field $f$.
After encryption, `NULL' will be indistinguishable from other elements.
Thus, the CSP cannot distinguish between real and dummy records.
Note that users and the admin can search the records with `NULL'.
In this work, we do not consider queries with conjunctive predicates, so we also do not pad element pairs to the same occurrence.
After padding, each record $rcd$ is appended with a $flag$ to mark if it is real or dummy.
Specifically, $flag=1$ when $rcd$ is real, otherwise $flag=0$.
The admin also shuffles the dummy and real records.
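The dummy-record count can be checked with a small sketch over the sample \emph{Staff} table (the per-group occurrence maps below are illustrative):

```python
# Toy computation of Sigma_f and Sigma_max: each element e in a group
# is padded up to the group's threshold tau, so the group needs
# sum(tau - O(e)) dummy values in that field.
def dummies_per_field(groups):
    total = 0
    for occ in groups:              # occ: element -> occurrence O(e)
        tau = max(occ.values())     # tau_{f,g}
        total += sum(tau - o for o in occ.values())
    return total

# Field Name: groups {Alice:1, Anna:1} and {Bob:2, Bill:1}
sigma_1 = dummies_per_field([{"Alice": 1, "Anna": 1}, {"Bob": 2, "Bill": 1}])
# Field Age: groups {25:1, 27:2} and {30:1, 33:1}
sigma_2 = dummies_per_field([{25: 1, 27: 2}, {30: 1, 33: 1}])
sigma_max = max(sigma_1, sigma_2)   # one dummy record suffices here
```

This matches Table~\ref{Tbl:mcdb-enstaff}, where a single dummy record (record 6, with values Bill and 25) pads both fields at once.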
\paragraph{Record Encryption}
\begin{algorithm}[tp]
\scriptsize
\caption{$RcdEnc(rcd, flag)$}
\label{alg:mcdb-enc}
\begin{algorithmic}[1]
\STATE $seed \stackrel{\$}{\leftarrow} \{0,1\}^{|seed|}$
\STATE $\bm{n} \leftarrow \Gamma_{s_2} (seed)$, where $\bm{n}=\ldots \| n_f \| \ldots \| n_{F+1}$, $|n_f|=|e|$ and $|n_{F+1}|=|H|+|e|$ \label{code:mcdb-enc-seed}
\FOR {each element $e_f \in rcd$}
\STATE $g_f \leftarrow GE_{s_1}(e_f)$ \label{code:mcdb-enc-gid}
\STATE $e^*_f \leftarrow Enc_{s_1}(e_f) \oplus n_f$ \label{code:mcdb-enc-se}
\ENDFOR
\IF {$flag=1$}
\STATE $S \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $tag \leftarrow (H_{s_1}(S)||S )\oplus n_{F+1}$ \label{code:mcdb-enc-tag-re}
\ELSE
\STATE $tag \stackrel{\$}{\leftarrow} \{0,1\}^{|H|+|e|}$ \label{code:mcdb-enc-tag-du}
\ENDIF
\RETURN $Ercd=(e^*_1, \ldots, e^*_F, tag)$, $(seed, \bm{n})$, and $Grcd=(g_1, \ldots, g_F)$
\end{algorithmic}
\end{algorithm}
The admin encrypts each record before uploading it to the SSS.
The details of record encryption are provided in Algorithm~\ref{alg:mcdb-enc}, $RcdEnc (rcd, flag) \rightarrow (Ercd, seed, \bm{n}, Grcd)$.
To ensure the dummy records could match queries, they are encrypted in the same way as real ones.
Specifically, first the admin generates a random string as a $seed$ for generating a longer nonce $\bm{n}$ with a Pseudo-Random Generator (PRG) $\Gamma: \{0, 1\}^{|seed|} \times \{0, 1\}^{k} \rightarrow \{0, 1\}^{*}$ (Line~\ref{code:mcdb-enc-seed}, Algorithm~\ref{alg:mcdb-enc}).
Second, the admin generates $g_f$ for each $e_f \in rcd$ by computing $GE_{s_1}(e_f)$ (Line \ref{code:mcdb-enc-gid}).
Moreover, $e_f$ is encrypted by computing $SE(e_f): e^*_f \leftarrow Enc_{s_1}(e_f) \oplus n_f$ (Line~\ref{code:mcdb-enc-se}), where $Enc:\{0,1\}^* \times \{0,1\}^k \rightarrow \{0,1\}^* $ is a deterministic symmetric encryption primitive, such as AES-ECB.
On the one hand, using $n_f$ ensures the semantic security of $e^*_f$.
On the other hand, it ensures the forward and backward privacy of \textsl{\mbox{P-McDb}}\xspace (as explained in Section \ref{subsec:shuffle}).
$e^*_f$ will be used for encrypted search and data retrieval.
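A minimal sketch of this blinding layer follows, with HMAC-SHA256 standing in for the deterministic cipher $Enc$ (a non-invertible stand-in for demonstration only; the scheme itself uses an invertible primitive such as AES-ECB):

```python
# The nonce makes SE probabilistic: the same element encrypted under
# different nonces looks different, yet stripping the nonce recovers
# the same deterministic ciphertext.
import hashlib
import hmac
import os

def enc_det(s1: bytes, e: str) -> bytes:
    # stand-in for the deterministic Enc (demo only)
    return hmac.new(s1, e.encode(), hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

s1 = b"key-s1"
n1, n2 = os.urandom(32), os.urandom(32)
c1 = xor(enc_det(s1, "Bob"), n1)    # SE(Bob) under nonce n1
c2 = xor(enc_det(s1, "Bob"), n2)    # SE(Bob) under nonce n2
```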
The dummy records are meaningless items, and the user does not need to decrypt them.
Thus, we need a way for users to filter out dummy records.
Since the CSPs are untrusted, we cannot mark real and dummy records in cleartext.
Instead, we use a keyed hash value to achieve that.
Specifically, as shown in Lines~\ref{code:mcdb-enc-tag-re} and \ref{code:mcdb-enc-tag-du}, a tag $tag$ is generated using a keyed hash function $H: \{0,1\}^* \times \{0,1\}^k \rightarrow \{0,1\}^*$ and the secret key $s_1$ if the record is real, otherwise $tag$ is a random bit string.
With the secret key $s_1$, the dummy records can be efficiently filtered out by users before decrypting the search result by checking if:
\begin{align}\label{eq:check1}\notag
tag_{l} &\stackrel{?}{=} H_{s_1}(tag_{r}), \text{ where } tag_{l} \| tag_{r} = tag \oplus n_{F+1}
\end{align}
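The tag generation and this check can be sketched as follows, assuming HMAC-SHA256 for the keyed hash $H$ (so $|H|=32$ bytes) and a 16-byte $S$:

```python
# Real records carry tag = (H_{s1}(S) || S) XOR n_{F+1}; dummy records
# carry a random string of the same length. Only key holders can tell
# them apart.
import hashlib
import hmac
import os

def H(s1: bytes, m: bytes) -> bytes:
    return hmac.new(s1, m, hashlib.sha256).digest()

def make_tag(s1: bytes, nonce: bytes, real: bool) -> bytes:
    if real:
        S = os.urandom(16)
        return bytes(a ^ b for a, b in zip(H(s1, S) + S, nonce))
    return os.urandom(32 + 16)

def is_real(s1: bytes, tag: bytes, nonce: bytes) -> bool:
    t = bytes(a ^ b for a, b in zip(tag, nonce))
    tag_l, tag_r = t[:32], t[32:]
    return tag_l == H(s1, tag_r)

s1 = b"key-s1"
nonce = os.urandom(48)              # n_{F+1}, |H| + |S| bytes long
real_tag = make_tag(s1, nonce, True)
dummy_tag = make_tag(s1, nonce, False)
```

A dummy tag passes the check only with negligible probability ($2^{-256}$ in this sketch).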
Once all the real and dummy records are encrypted, the admin uploads the auxiliary information, \textit{i.e.,}\xspace the set of group information $GDB$ and the set of nonce information $NDB$, to the IWS, and uploads encrypted records $EDB$ to the SSS.
$GDB$ contains $(IL, (\bm{E}, \tau)^*)$ for each group.
$NDB$ contains a $(seed, \bm{n})$ pair for each record stored in $EDB$.
To reduce the storage overhead on the IWS, $NDB$ could store only the seeds and recover $\bm{n}$ by computing $\Gamma_{s_2}(seed)$ when required.
However, saving the $(seed, \bm{n})$ pairs reduces the computation overhead on the IWS.
In the rest of this article, we assume $NDB$ contains the $(seed, \bm{n})$ pairs.
$EDB$ contains the encrypted record $Ercd$.
Note that to ensure the correctness of the search functionality, it is necessary to store the encrypted records and their respective $(seed, \bm{n})$ pairs in the same order in $EDB$ and $NDB$ (the search operation is explained in Section \ref{subsec:mcdb-select}).
In Table~\ref{Tbl:mcdb-store}, we take the \emph{Staff} table (\textit{i.e.,}\xspace Table \ref{Tbl:mcdb-staff}) as an example and show the details stored in $GDB$, $NDB$, and $EDB$ in Tables~\ref{Tbl:mcdb-engid},~\ref{Tbl:mcdb-nonceI}, and~\ref{Tbl:mcdb-enstaff}, respectively.
\subsection{Select Query}
\label{subsec:mcdb-select}
\begin{algorithm}[htp]
\scriptsize
\caption{Query$(Q)$}
\label{alg:macdb-search}
\begin{algorithmic}[1]
\STATE \underline{User: QueryEnc$(Q)$} \label{code:mcdb-query-user-be}
\STATE $g \leftarrow GE_{s_1}( Q.e )$ \label{code:mcdb-query-user-gid}
\STATE $EQ.type \leftarrow Q.type$, $EQ.f \leftarrow Q.f$
\STATE $\eta \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $EQ.e^* \leftarrow Enc_{s_1}(Q.e) \oplus \eta$ \label{code:mcdb-query-se}
\STATE Send $EQ=(type, f, e^*)$ to the SSS
\STATE Send $(EQ.f, \eta, g)$ to the IWS \label{code:mcdb-query-user-end}
~\\~
\STATE \underline{IWS: $NonceBlind(EQ.f, \eta, g)$} \label{code:mcdb-query-IWS-be}
\STATE $EN \leftarrow \emptyset$
\STATE $IL \leftarrow GDB(EQ.f, g)$ \COMMENT{If $(EQ.f, g) \notin GDB$, return $IL$ of the closest group(s).}\label{code:mcdb-query-IL}
\FOR {each $id \in IL$ }
\STATE $(seed, \bm{n}) \leftarrow NDB(id)$, where $\bm{n}= \ldots ||n_{EQ.f}|| \ldots$ and $|n_{EQ.f}|=|\eta|$
\label{code:mcdb-query-get-nonce}
\STATE $w \leftarrow H'(n_{EQ.f} \oplus \eta)$
\STATE $t \leftarrow \eta \oplus seed$
\STATE $EN(id) \leftarrow (w, t)$ \label{code:mcdb-query-enc-nonce}
\ENDFOR
\STATE Send $IL=(id, \ldots)$ and the encrypted nonce set $EN=((w, t), \ldots)$ to the SSS \label{code:mcdb-query-IWS-end}
~\\
\STATE \underline{SSS: $Search(EQ, EN, IL)$} \label{code:mcdb-query-SSS-be}
\STATE $SR \leftarrow \emptyset$
\FOR {each $id \in IL$}
\IF {$H'(EDB(id, EQ.f) \oplus EQ.e^*) = EN(id).w$ } \label{code:mcdb-query-checke}
\STATE $SR \leftarrow SR \cup (EDB(id), EN(id).t)$ \label{code:mcdb-query-sr}
\ENDIF
\ENDFOR
\STATE Send the search result $SR$ to the user \label{code:mcdb-query-SSS-end}
~\\
\STATE \underline{User: RcdDec$(SR, \eta)$} \label{code:mcdb-query-dec-be}
\FOR{each $(Ercd, t) \in SR$}
\STATE $\bm{n} \leftarrow \Gamma_{s_2} (t \oplus \eta)$ \label{code:mcdb-query-dec-seed}
\STATE $(Enc_{s_1}(rcd), tag) \leftarrow Ercd \oplus \bm{n}$
\STATE $tag_{l} || tag_{r} \leftarrow tag$, where $|tag_r|=|e|$
\IF{$tag_{l} = H_{s_1}(tag_{r})$} \label{code:mcdb-query-ducheck}
\STATE $rcd \leftarrow Enc^{-1}_{s_1} (Enc_{s_1}(rcd))$ \label{code:mcdb-query-dec}
\ENDIF
\ENDFOR \label{code:mcdb-query-dec-end}
\end{algorithmic}
\end{algorithm}
In this work, we focus on the simple query which only has one single equality predicate.
Complex queries with multiple predicates can be performed by issuing multiple simple queries and combining their results on the user side.
To support range queries, the technique used in \cite{Asghar:Espoon:IRC13} can be adopted.
For performing a select query, \textsl{\mbox{P-McDb}}\xspace requires the cooperation between the IWS and SSS.
The details of the steps performed by the user, IWS, and SSS are shown in Algorithm~\ref{alg:macdb-search}, $Query(Q) \rightarrow SR$, which consists of four components: $QueryEnc$, $NonceBlind$, $Search$, and $RcdDec$.
\paragraph{\bm{$QueryEnc(Q)\rightarrow (EQ, \eta, g)$}}
First, the user encrypts the query $Q=(type, f, e)$ using $QueryEnc$ (Lines \ref{code:mcdb-query-user-be} - \ref{code:mcdb-query-user-end}, Algorithm \ref{alg:macdb-search}).
Specifically, to determine the group to be searched, the user first generates $g$ (Line \ref{code:mcdb-query-user-gid}).
We do not aim to protect the query type and the searched field from the CSPs; thus, the user does not encrypt $Q.type$ and $Q.f$.
The queried keyword $Q.e$ is encrypted into $EQ.e^*$ by computing $Enc_{s_1}(Q.e)\oplus \eta$ (Line \ref{code:mcdb-query-se}).
The nonce $\eta$ ensures that $EQ.e^*$ is semantically secure.
Finally, the user sends $EQ=(type, f, e^*)$ to the SSS and sends $(EQ.f$, $\eta$, $g)$ to the IWS.
\paragraph{$\bm{NonceBlind(EQ.f, \eta, g)\rightarrow (IL, EN)}$}
Second, the IWS provides $IL$ and the witnesses $EN$ of group $(EQ.f, g)$ to the SSS by running $NonceBlind$ (Lines \ref{code:mcdb-query-IWS-be}-\ref{code:mcdb-query-IWS-end}).
Specifically, for each $id \in IL$, the IWS generates $EN(id)=(w, t)$ (Lines~\ref{code:mcdb-query-get-nonce}-\ref{code:mcdb-query-enc-nonce}),
where $w=H'(n_{EQ.f} \oplus \eta)$ will be used by the SSS to find the matching records, and $t= \eta \oplus seed$ will be used by the user to decrypt the result.
Here $H': \{0, 1\}^* \rightarrow \{0, 1\}^k$ is a hash function.
Note that when $(EQ.f, g)$ is not contained in $GDB$, $IL$ of the \emph{closest} group(s) will be used, \textit{i.e.,}\xspace the group in field $EQ.f$ whose identifier shares the most bits with $g$\footnote{This can be found by comparing the Hamming weight of $g' \oplus g$ for all $(EQ.f, g') \in GDB$.}.
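The closest-group rule from the footnote can be sketched as follows (the $b$-bit identifiers below are illustrative):

```python
# Among candidate group identifiers in field EQ.f, pick the one whose
# XOR with g has the smallest Hamming weight, i.e. shares the most bits.
def closest_group(g: int, candidates):
    return min(candidates, key=lambda gp: bin(gp ^ g).count("1"))

best = closest_group(0b0110, [0b0000, 0b0111, 0b1001])  # -> 0b0111
```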
\paragraph{$\bm{Search(EQ, IL, EN)\rightarrow SR}$}
Third, the SSS traverses the records indexed by $IL$ and finds the records matching $EQ$ with the assistance of $EN$ (Lines \ref{code:mcdb-query-SSS-be} - \ref{code:mcdb-query-SSS-end}).
Specifically, for each record indexed by $IL$, the SSS checks if $H'(EDB(id, EQ.f) \oplus EQ.e^*) \stackrel{?}{=} EN(id).w$ (Line \ref{code:mcdb-query-checke}).
More specifically, the operation is:
\begin{equation}\notag
H'(Enc_{s_1}(e_{EQ.f})\oplus n_{EQ.f} \oplus Enc_{s_1}(Q.e) \oplus \eta ) \stackrel{?}{=} H'(n_{EQ.f} \oplus \eta)
\end{equation}
It is clear that there is a match only when $Q.e=e_{EQ.f}$, up to the negligible collision probability of $H'$.
The SSS sends each matched record $EDB(id)$ and its corresponding $EN(id).t$ to the user as the search result $SR$, \textit{i.e.,}\xspace $EDB(EQ)$.
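The cancellation argument above can be replayed concretely, with HMAC-SHA256 standing in for $Enc$ and SHA-256 for $H'$ (assumed instantiations):

```python
# With e* = Enc(e) XOR n and EQ.e* = Enc(q) XOR eta, XORing the two
# cancels the deterministic ciphertexts exactly when e = q, leaving
# n XOR eta, whose hash the IWS supplied as the witness w.
import hashlib
import hmac
import os

def prf(k: bytes, m: str) -> bytes:          # stand-in for Enc
    return hmac.new(k, m.encode(), hashlib.sha256).digest()

def Hp(m: bytes) -> bytes:                   # the hash H'
    return hashlib.sha256(m).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

s1 = b"key-s1"
n, eta = os.urandom(32), os.urandom(32)
e_star = xor(prf(s1, "Bob"), n)              # stored field ciphertext
q_star = xor(prf(s1, "Bob"), eta)            # encrypted query keyword
w = Hp(xor(n, eta))                          # witness from the IWS
match = Hp(xor(e_star, q_star)) == w         # True: keywords are equal
```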
\paragraph{\bm{$RcdDec(SR) \rightarrow rcds$}}
To decrypt an encrypted record $Ercd$, both the secret key $s_1$ and nonce $\bm{n}$ are required.
The nonce $\bm{n}$ can be recovered from the returned $t$.
Only the user issuing the query knows $\eta$ and is able to recover $\bm{n}$ by computing $\Gamma_{s_2}(t \oplus \eta)$ (Line \ref{code:mcdb-query-dec-seed}).
With $\bm{n}$, the user can check if each returned record is real or dummy (Line \ref{code:mcdb-query-ducheck}), and decrypt each real record by computing
$Enc^{-1}_{s_1}(Ercd \oplus \bm{n})$ (Line \ref{code:mcdb-query-dec}), where $Enc^{-1}$ is the inverse of $Enc$.
\subsection{Shuffling and Re-randomisation}
\label{subsec:shuffle}
\begin{algorithm}
\scriptsize
\caption{$Shuffle(IL, Ercds)$}
\label{alg:mcdb-shuffle}
\begin{algorithmic}[1]
\STATE \underline{IWS: $PreShuffle(IL)$} \label{code:mcdb-preshuffle-be}
\STATE $IL' \leftarrow \pi (IL)$
\STATE Shuffle the $(seed, \bm{n})$ pairs indexed by $IL$ based on $IL'$
\STATE Update the indices of affected groups in $GDB$ \label{code:mcdb-shuffle-update-groups}
\FOR {each $id \in IL'$} \label{code:mcdb-shuffle-begin}
\STATE $seed \stackrel{\$}{\leftarrow} \{0, 1\}^{|seed|}$ \label{code:mcdb-seed'}
\STATE $\bm{n'} \leftarrow \Gamma_{s_2}(seed)$ \label{code:mcdb-gn'}
\STATE $NN(id) \leftarrow NDB(id).\bm{n} \oplus \bm{n'}$ \label{code:mcdb-nn}
\STATE $NDB(id) \leftarrow (seed, \bm{n'})$ \label{code:mcdb-n'}
\ENDFOR
\STATE Send $(IL', NN)$ to the RSS. \label{code:mcdb-preshuffle-end}
~\\~
\STATE{\underline{RSS: $Shuffle(Ercds, IL', NN)$}}
\STATE Shuffle $Ercds$ based on $IL'$
\FOR {each $id \in IL'$}
\STATE $Ercds(id) \leftarrow Ercds(id) \oplus NN(id) $ \label{code:mcdb-shuffle-reenc}
\ENDFOR
\STATE Send $Ercds$ to the SSS.
\end{algorithmic}
\end{algorithm}
To protect the access pattern and ensure the forward and backward privacy, \textsl{\mbox{P-McDb}}\xspace shuffles and re-randomises searched records after executing each query, and this procedure is performed by the IWS and RSS.
The details are shown in Algorithm \ref{alg:mcdb-shuffle}, consisting of $PreShuffle$ and $Shuffle$.
\paragraph{$\bm{PreShuffle(IL) \rightarrow (IL', NN)}$}
In \textsl{\mbox{P-McDb}}\xspace, the searched records are re-randomised by renewing the nonces.
Recall that $SE$ encryption is semantically secure due to the nonce.
However, the IWS stores the nonces.
If the IWS has access to encrypted records, it could observe deterministically encrypted records by removing the nonces.
To avoid this leakage, \textsl{\mbox{P-McDb}}\xspace does not allow the IWS to access any records and involves the RSS to shuffle and re-randomise the records.
Yet, the IWS still needs to shuffle $NDB$ and generate new nonces for the re-randomisation by executing $PreShuffle$.
Specifically, as shown in Algorithm \ref{alg:mcdb-shuffle}, Lines \ref{code:mcdb-preshuffle-be}-\ref{code:mcdb-preshuffle-end}, the IWS first shuffles the $id$s in $IL$ with a Pseudo-Random Permutation (PRP) $\pi$ and gets the re-ordered indices list $IL'$.
In our implementation, we leverage the modern version of the Fisher-Yates shuffle algorithm \cite{Knuth73}: from the first $id$ to the last, each $id$ in $IL$ is exchanged with a randomly chosen $id$ positioned after it.
After that, the IWS shuffles $(seed, \bm{n})$ pairs based on $IL'$.
Note that the shuffling operation affects the list of indices of the groups in other fields.
Thus, the IWS also needs to update the index lists of other groups accordingly (Line \ref{code:mcdb-shuffle-update-groups}).
For re-randomising records, the IWS samples a new seed and generates a new nonce $\bm{n'}$.
To ensure the records will be blinded with the respective new nonces stored in $NDB$ after shuffling and re-randomising, the IWS generates $NN=(\bm{n}\oplus\bm{n'}, \ldots)$ for RSS (Line \ref{code:mcdb-nn}).
Afterwards, the IWS updates the seed and nonce stored in $NDB(id)$ with the new values (Line \ref{code:mcdb-n'}).
Finally, $(IL', NN)$ is sent to the RSS.
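For concreteness, the $PreShuffle$ steps above can be sketched in Python as follows. This is an illustrative model, not the C implementation: \texttt{prg} is a hypothetical stand-in for the PRG $\Gamma_{s_2}$, and the permutation is the modern Fisher-Yates shuffle mentioned above.

```python
import hashlib
import os
import random

def prg(seed: bytes, length: int) -> bytes:
    # Hypothetical stand-in for the PRG Gamma_{s2}: expand a seed into a nonce.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pre_shuffle(il, ndb, rcd_len, rng):
    # Permute IL with the modern Fisher-Yates shuffle: each position is
    # swapped with a uniformly random position at or after it.
    il_new = list(il)
    for i in range(len(il_new) - 1):
        j = rng.randrange(i, len(il_new))
        il_new[i], il_new[j] = il_new[j], il_new[i]
    # Renew the (seed, nonce) pair of every searched record and emit
    # NN(id) = old_nonce XOR new_nonce for the RSS.
    nn = {}
    for rid in il_new:
        new_seed = os.urandom(16)
        new_nonce = prg(new_seed, rcd_len)
        _, old_nonce = ndb[rid]
        nn[rid] = xor(old_nonce, new_nonce)
        ndb[rid] = (new_seed, new_nonce)
    return il_new, nn
```

Note that the IWS never touches the encrypted records here: it only outputs the permuted index list and the XOR of the old and new nonces.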
\paragraph{$\bm{Shuffle(Ercds, IL', NN) \rightarrow Ercds}$}
After searching, the SSS sends the searched records $Ercds$ to the RSS.
Given $IL'$ and $NN$, the RSS starts to shuffle and re-randomise $Ercds$.
Specifically, the RSS first shuffles $Ercds$ based on $IL'$, and then re-randomises each record by computing $Ercds(id) \oplus NN(id)$ (Line \ref{code:mcdb-shuffle-reenc}).
In detail, the operation is:
\begin{equation}\notag
(Enc_{s_1}(rcd_{id})\oplus \bm{n}) \oplus (\bm{n'} \oplus \bm{n}) = Enc_{s_1}(rcd_{id})\oplus \bm{n'}
\end{equation}
That is, $Ercds(id)$ is blinded with the latest nonce stored in $NDB(id)$.
Finally, the re-randomised and shuffled records $Ercds$ are sent back to the SSS.
By using a new set of seeds for the re-randomisation, \textsl{\mbox{P-McDb}}\xspace achieves both forward and backward privacy.
If the SSS tries to execute an old query individually, it will not be able to match any records without the new witness $w$, which can only be generated by the IWS with new nonces.
Similarly, the SSS cannot learn if deleted records match new queries.
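The RSS side, and the re-randomisation identity above, can be sketched as follows in a minimal model where records are byte strings and $Enc_{s_1}$ is abstracted away:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def rss_shuffle(ercds, il_new, nn):
    # Sketch of Shuffle: the RSS reorders the searched records according to
    # IL' and XORs each one with NN(id); it never sees any plaintext nonce.
    reordered = {rid: ercds[rid] for rid in il_new}
    for rid in il_new:
        reordered[rid] = xor(reordered[rid], nn[rid])
    return reordered
```

After this step, each record is blinded with the new nonce even though neither nonce was revealed to the RSS individually.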
\subsection{User Revocation}
\label{sec:revoke}
\textsl{\mbox{P-McDb}}\xspace supports flexible multi-user access in such a way that the issued queries and search results of one user are protected from all the other entities. Moreover, revoking a user does not require key regeneration or data re-encryption, even when one of the CSPs colludes with revoked users.
As mentioned in Section \ref{subsec:mcdb-select}, for filtering dummy records and recovering returned real records, both $s_1$ and the nonce are required.
After shuffling, the nonce is only known to the IWS.
Thus, without the assistance of the IWS and SSS, a user is unable to recover records with $s_1$ alone.
Therefore, for user revocation, it suffices to maintain a revoked user list at both the IWS and the SSS.
Once a user is revoked, the admin informs the IWS and SSS to add this user into their revoked user lists.
When receiving a query, the IWS and the SSS will first check if the user has been revoked.
If yes, they will reject the query.
Even if revoked users collude with either the SSS or the IWS, they cannot obtain search results, since doing so requires the cooperation of the user issuing the query, the IWS, and the SSS.
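A minimal sketch of this revocation check, assuming each CSP keeps its own revoked-user list:

```python
def serve_query(user, query, revoked_iws, revoked_sss):
    # Sketch: the IWS and SSS each consult their own revoked-user list and
    # reject queries from revoked users before any search work is done.
    if user in revoked_iws:
        return None  # rejected by the IWS
    if user in revoked_sss:
        return None  # rejected by the SSS
    return ("search", query)
```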
\subsection{Database Updating}
\label{subsec:update}
\begin{algorithm}
\scriptsize
\caption{$Insert(rcd)$}
\label{alg:mcdb-insert}
\begin{algorithmic}[1]
\STATE \underline{$User(rcd)$}:
\STATE $(Ercd, seed, \bm{n}, Grcd) \leftarrow RcdEnc(rcd, 1)$ \label{code:mcdb-insert-user-enc}
\STATE $INS_{IWS} \leftarrow (seed, \bm{n}, Grcd)$, $INS_{SSS} \leftarrow Ercd$
\FOR {each $g_f \in Grcd$} \label{code:mcdb-insert-user-be}
\STATE $(\bm{E}_{f, g_f}, \tau_{f, g_f})^* \leftarrow GDB(f, g_f)$\COMMENT{If $(f, g_f) \notin GDB$, $g_f \leftarrow g_f'$, where $(f, g_f')$ is the closest group to $(f, g_f)$.}
\STATE $(\bm{E}_{f, g_f}, \tau_{f, g_f}) \leftarrow ENC^{-1}_{s_1}((\bm{E}_{f, g_f}, \tau_{f, g_f})^*)$ \label{code:mcdb-insert-user-ef}
\ENDFOR
\FOR{each $e_f \in rcd$} \label{code:mcdb-insert-gendum-be}
\IF{$e_f \in \bm{E}_{f, g_f}$}
\STATE $\gamma_{f} \leftarrow |\bm{E}_{f, g_f}| -1 $
\ELSE
\STATE $\gamma_{f} \leftarrow \tau_{f, g_f} -1 $
\ENDIF
\ENDFOR
\STATE $W=\max\{\gamma_{f}\}_{1 \leq f \leq F}$ \label{code:mcdb-insert-encdum-be}
\STATE Generate $W$ dummy records with values $(NULL, \ldots, NULL)$
\FOR{each $e_f \in rcd$}
\IF{$e_f \in \bm{E}_{f, g_f}$}
\STATE Assign $\bm{E}_{f, g_f} \setminus e_f$ to $\gamma_f$ dummy records in field $f$
\STATE $\tau_{f, g_f} ++$
\ELSE
\STATE Assign $e_f$ to $\gamma_f$ dummy records in field $f$
\STATE $\bm{E}_{f, g_f} \leftarrow \bm{E}_{f, g_f} \cup e_f$
\ENDIF
\STATE $(\bm{E}_{f, g_f}, \tau_{f, g_f})^* \leftarrow ENC_{s_1}(\bm{E}_{f, g_f}, \tau_{f, g_f})$ \label{code:mcdb-insert-gendum-end}
\ENDFOR
\FOR{each dummy record $rcd'$}
\STATE $(Ercd, seed, \bm{n}, Grcd) \leftarrow RcdEnc(rcd', 0)$ \label{code:mcdb-insert-enc-du}
\STATE $INS_{IWS} \leftarrow INS_{IWS} \cup (seed, \bm{n}, Grcd)$
\STATE $INS_{SSS} \leftarrow INS_{SSS} \cup Ercd$ \label{code:mcdb-insert-user-end}
\ENDFOR
\STATE Send $INS_{IWS}$ and $((\bm{E}_{f, g_f}, \tau_{f, g_f})^*)_{1 \leq f \leq F}$ to the IWS
\STATE Send $INS_{SSS}$ to the SSS
~\\
\STATE \underline{$SSS(INS_{SSS})$}: \label{code:mcdb-insert-csp-be}
\STATE $IDs \leftarrow \emptyset$
\FOR{each $Ercd \in INS_{SSS}$}
\STATE $EDB(++id) \leftarrow Ercd$ \label{code:mcdb-insert-csp-insert}
\STATE $IDs \leftarrow IDs \cup id$
\ENDFOR
\STATE Send $IDs$ to the IWS \label{code:mcdb-insert-csp-end}
~\\
\STATE \underline{$IWS(INS_{IWS}, ((\bm{E}_{f, g_f}, \tau_{f, g_f})^*)_{1 \leq f \leq F}, IDs)$}: \label{code:mcdb-insert-wss-be}
\FOR{each $(seed, \bm{n}, Grcd) \in INS_{IWS}$ and $id \in IDs$} \label{code:mcdb-insert-wss-seed}
\STATE $NDB(id) \leftarrow (seed, \bm{n})$
\FOR{$f=1$ to $F$}
\STATE $GDB (f, g_f) \leftarrow (GDB(f, g_f).IL_{f, g_f} \cup id, (\bm{E}_{f, g_f}, \tau_{f, g_f})^*)$
\label{code:mcdb-insert-wss-group}
\ENDFOR
\ENDFOR \label{code:mcdb-insert-wss-end}
\end{algorithmic}
\end{algorithm}
\textsl{\mbox{P-McDb}}\xspace allows users to update the database after bootstrapping.
However, after updating, the occurrences of involved elements will change.
To effectively protect the search pattern, we should ensure the elements in the same group always have the same occurrence.
\textsl{\mbox{P-McDb}}\xspace achieves that by updating dummy records.
\paragraph{Insert Query}
In \textsl{\mbox{P-McDb}}\xspace, the insert query is also performed with the cooperation of the user, IWS, and SSS.
The idea is to generate a number of dummy records and insert them along with the real one, ensuring all the elements in the same group always have the same occurrence.
The details are shown in Algorithm \ref{alg:mcdb-insert}.
Assume the real record to be inserted is $rcd=(e_1, \ldots, e_F)$.
The user encrypts it with \emph{RcdEnc}, and gets $(Ercd, seed, \bm{n}, Grcd)$ (Line \ref{code:mcdb-insert-user-enc}, Algorithm \ref{alg:mcdb-insert}).
For each $g_f \in Grcd$, the user gets $(\bm{E}_{f, g_f}, \tau_{f, g_f})^*$ of group $(f, g_f)$ and decrypts it.
Note that if $(f, g_f) \notin GDB$, the IWS returns $(\bm{E}_{f, g_f}, \tau_{f, g_f})^*$ of the closest group(s), instead of adding a new group.
That is, $e_f$ will belong to its closest group in this case.
The problem with adding new groups is that, as long as a new group contains fewer than $\lambda$ elements, adversaries could easily infer the search and access patterns within it.
The next step is to generate dummy records (Lines \ref{code:mcdb-insert-gendum-be}-\ref{code:mcdb-insert-gendum-end}).
The approach to generating dummy records depends on whether $e_f \in \bm{E}_{f, g_f}$, \textit{i.e.,}\xspace whether $rcd$ introduces a new element that does not belong to $U$.
If $e_f \in \bm{E}_{f, g_f}$, after inserting $rcd$, $O(e_f)$ will increase to $\tau_{f, g_f}+1$ automatically.
In this case, the occurrence of other elements in $\bm{E}_{f, g_f}$ should also be increased to $\tau_{f, g_f}+1$.
Otherwise, $O(e_f)$ will be unique in the database, and adversaries can tell if users are querying $e_f$ based on the size pattern.
To achieve that, $\gamma_f=|\bm{E}_{f, g_f}| -1$ dummy records are required for field $f$, and each of them contains an element in $\bm{E}_{f, g_f} \setminus e_f$.
If $e_f \notin \bm{E}_{f, g_f}$, $O(e_f)=0$ in $EDB$.
After inserting, we should ensure $O(e_f)=\tau_{f, g_f}$ since it belongs to the group $(f, g_f)$.
Thus, this case needs $\gamma_f=\tau_{f, g_f} -1$ dummy records for field $f$, and all of them are assigned with $e_f$ in field $f$.
Assume $W$ dummy records are required for inserting $rcd$, where $W=\max\{\gamma_f\}_{1 \leq f \leq F}$.
The user generates $W$ dummy records as mentioned above (`NULL' is used if necessary), and encrypts them with $RcdEnc$ (Lines \ref{code:mcdb-insert-encdum-be}-\ref{code:mcdb-insert-gendum-end}).
Meanwhile, the user adds each new element into the respective $\bm{E}_{f, g_f}$ if there are any, updates $\tau_{f, g_f}$, and re-encrypts $(\bm{E}_{f, g_f}, \tau_{f, g_f})$.
All the encrypted records are sent to the SSS and added into $EDB$ (Lines \ref{code:mcdb-insert-csp-be}-\ref{code:mcdb-insert-csp-end}).
All the $(seed, \bm{n})$ pairs and $(\bm{E}_{f, g_f}, \tau_{f, g_f})^*$ are sent to the IWS and inserted into $NDB$ and $GDB$ accordingly (Lines \ref{code:mcdb-insert-wss-be}-\ref{code:mcdb-insert-wss-end}).
Finally, to protect the access pattern, the shuffling and re-randomising operations over the involved groups will be performed between the IWS and RSS.
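The dummy-record budget of an insert, \textit{i.e.,}\xspace the per-field counts $\gamma_f$ and $W=\max\{\gamma_f\}_{1 \leq f \leq F}$, can be sketched as follows. For illustration, the group metadata is modelled as plaintext pairs $(\bm{E}, \tau)$; in the scheme they are first decrypted from $GDB$.

```python
def dummies_for_insert(rcd, groups):
    # groups[f] = (elements, tau): the element set and threshold of the group
    # that rcd's field-f element falls into (after mapping to the closest
    # group when necessary).
    gammas = []
    for f, e in enumerate(rcd):
        elements, tau = groups[f]
        if e in elements:
            # O(e) rises to tau+1, so every other element of the group needs
            # one more occurrence: |E| - 1 dummies for this field.
            gammas.append(len(elements) - 1)
        else:
            # A new element must reach the group threshold tau; the real
            # record already contributes one occurrence: tau - 1 dummies.
            gammas.append(tau - 1)
    return gammas, max(gammas)
```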
\paragraph{Delete Query}
Processing delete queries is straightforward.
Instead of removing records from the database, the user turns them into dummy records by replacing their $tag$s with random strings.
In this way, the occurrences of involved elements are not changed.
Moreover, the correctness of the search result is guaranteed.
However, only updating the tags of matched records leaks the access pattern of delete queries to the RSS.
Particularly, the RSS could keep a view of the latest database and check which records' tags were modified when the searched records are sent back for shuffling.
To avoid such leakage, in \textsl{\mbox{P-McDb}}\xspace, the user modifies the tags of all the searched records for each delete query.
Specifically, the SSS returns the identifiers of matched records and the tags of all searched records to the user.
For matched records, the user changes their tags to random strings directly.
Whereas, for each unmatched record, the user first checks whether it is real or dummy, and then generates a proper new tag as done in Algorithm~\ref{alg:mcdb-enc}.
Likewise, the \emph{PreShuffle} and \emph{Shuffle} algorithms are performed between the IWS and RSS after updating all the tags.
However, if the system never removes records, the database will grow rapidly.
To avoid this, the admin periodically removes the records consisting of $F$ `NULL' from the database.
Specifically, the admin periodically checks whether each element in a group is contained in at least one dummy record.
If so, for each element, the admin sets that element to `NULL' in one dummy record containing it.
As a consequence, the occurrence of every element in the group decreases by one, but all occurrences remain equal.
Once a dummy record consists only of `NULL' values, the admin removes it from the database.
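The tag-update step of a delete can be sketched as follows; \texttt{new\_tag\_for} is a hypothetical helper standing in for the type-preserving re-tagging that Algorithm~\ref{alg:mcdb-enc} would perform for real and dummy records.

```python
import os

def delete_matched(searched, matched_ids, new_tag_for):
    # Sketch of a delete: matched records become dummy via a random tag,
    # while every other searched record is also re-tagged so the RSS cannot
    # tell which tags actually changed.
    for rid, rec in searched.items():
        if rid in matched_ids:
            rec["tag"] = os.urandom(16)   # now indistinguishable from dummy
        else:
            rec["tag"] = new_tag_for(rec)  # rebuild a valid real/dummy tag
    return searched
```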
\paragraph{Update query}
In \textsl{\mbox{P-McDb}}\xspace, update queries can be performed by deleting the records with old elements and inserting new records with new elements.
\section{Performance Analysis}
\label{sec:MCDB-perf}
We implemented \textsl{\mbox{P-McDb}}\xspace in C using the MIRACL 7.0 library for cryptographic primitives.
The performance of all the entities was evaluated on a desktop machine with Intel i5-4590 3.3 GHz 4-core processor and 8GB of RAM.
We evaluated the performance using the TPC-H benchmark \cite{TPC:2017:h}, and tested equality queries with a single predicate over the `O\_CUSTKEY' field in the `ORDERS' table.
In the following, all the results are averaged over $100$ trials.
\subsection{Group Generation}
\begin{table}[h]
\scriptsize
\centering
\caption{The storage overhead with different numbers of groups}
\begin{tabular}{|l|l|l|l|}
\hline
\#Groups &\#Dummy records &\#Elements in a group & \#Records in a group \\ \hline
1 & 2599836 & 99996 & =4099836 \\
10 & 2389842 & 10000 & $\approx$38000 \\
100 & 1978864 & 1000 & $\approx$35000 \\
1000 & 1567983 & 100 & $\approx$3000 \\
10000 & 1034007 & 10 & $\approx$240 \\
\hline
\end{tabular}
\label{Tbl:oblidb-storage-perf}
\end{table}
In `ORDERS' table, all the `O\_CUSTKEY' elements are integers.
For simplicity, we divided the records into groups by computing $e \bmod b$ for each element $e$ in the `O\_CUSTKEY' field.
Specifically, we divide the records into 1, 10, 100, 1000, and 10000 groups by setting $b=1$, $10$, $100$, $1000$, and $10000$, respectively.
Table \ref{Tbl:oblidb-storage-perf} shows the number of required dummy records and the numbers of elements and records in a group for each setting.
In particular, when all the records are in one group, $2599836$ dummy records are required in total, $\lambda=99996$, and the CSP has to search $4099836$ records for each query.
When we divide the records into more groups, fewer dummy records are required and fewer records are searched by the CSP per query, but each group also contains fewer elements.
When there are $10000$ groups, only $1034007$ dummy records are required in total, $\lambda=10$, and the CSP only needs to search around $240$ records for each query.
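The grouping and padding cost can be reproduced with a short sketch: the distinct elements of a field are grouped by $e \bmod b$, and each group is padded so that every element reaches the group threshold $\tau$.

```python
from collections import Counter, defaultdict

def padding_cost(field_values, b):
    # Sketch: group the distinct elements of one field by e mod b, then count
    # the dummies needed to lift every element to its group threshold tau,
    # i.e. sum_{e in E}(tau - O(e)) per group.
    occ = Counter(field_values)
    groups = defaultdict(list)
    for e in occ:
        groups[e % b].append(e)
    total = 0
    for members in groups.values():
        tau = max(occ[e] for e in members)
        total += sum(tau - occ[e] for e in members)
    return total
```

On a toy field, more groups means less padding, mirroring the trend in Table \ref{Tbl:oblidb-storage-perf}.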
\subsection{Query Latency}
\begin{figure}[htp]
\centering
\includegraphics[width=0.32\textwidth]{figs/entity.pdf}\\
\caption{The overhead on different entities with different group numbers}
\label{Fig:mcdb-entity}
\end{figure}
An important aspect of an outsourced service is that most of the intensive computations should be off-loaded to the CSPs.
To quantify the workload on each of the entities, we measured the latency on the user, IWS, SSS, and RSS for processing the query with different numbers of groups.
The results are shown in Fig.~\ref{Fig:mcdb-entity}.
We can notice that the computation times taken on the IWS, SSS, and RSS are much higher than that on the user side when there are fewer than 10000 groups.
In the following, we discuss the performance of the CSPs and the user in detail.
\subsubsection{Overhead on CSPs}
\begin{figure}[htp]
\centering
\includegraphics[width=0.32\textwidth]{figs/CSPs.pdf}\\
\caption{The overhead on CSPs with different group numbers}
\label{Fig:mcdb-csps}
\end{figure}
Fig. \ref{Fig:mcdb-csps} shows the performance of the operations running in the CSPs when increasing the number of groups.
Specifically, in \textsl{\mbox{P-McDb}}\xspace, the IWS runs \emph{NonceBlind} and \emph{PreShuffle}, the SSS runs \emph{Search}, and the RSS runs \emph{Shuffle} algorithms.
We can notice that the running times of all four operations decrease when increasing the number of groups.
The reason is that \textsl{\mbox{P-McDb}}\xspace only searches and shuffles a group of records for each query.
The more groups there are, the fewer records each group contains for searching and shuffling.
Thanks to the efficient XOR operation, even when $g=1$, \textit{i.e.,}\xspace searching the whole database ($4099836$ records in total), \emph{NonceBlind}, \emph{Search}, and \emph{Shuffle} can be finished in around $2$ seconds.
\emph{PreShuffle} is the most expensive operation in \textsl{\mbox{P-McDb}}\xspace, which takes about $11$ seconds when $g=1$.
Fortunately, in \emph{PreShuffle}, the generation of new nonces (\textit{i.e.,}\xspace Lines \ref{code:mcdb-seed'}-\ref{code:mcdb-gn'} in Algorithm \ref{alg:mcdb-shuffle}) is not affected by the search operation, and thus the nonces can be pre-generated.
By doing so, \emph{PreShuffle} can be finished in around $2.4$ seconds when $g=1$.
\subsubsection{Overhead on Users}
\begin{figure}[htp]
\centering
\includegraphics[width=0.32\textwidth]{figs/user_group.pdf}\\
\caption{The overhead on the user with different group numbers}
\label{Fig:mcdb-user}
\end{figure}
In \textsl{\mbox{P-McDb}}\xspace, the user only encrypts queries and decrypts results.
In Fig. \ref{Fig:mcdb-user}, we show the effect on the two operations when we change the number of groups.
The time for encrypting the query does not change with the number of groups.
However, the time taken by the result decryption decreases slowly when increasing the number of groups.
For recovering the required records, in \textsl{\mbox{P-McDb}}\xspace, the user first filters out the dummy records and then decrypts the real records.
Therefore, the result decryption time is affected by the number of returned real records as well as the dummy ones.
In this experiment, the tested query always matches 32 real records.
However, when changing the number of groups, the number of returned dummy records will be changed.
Recall that the number of dummy records required for a group is $\sum_{e \in \bm{E}_{f, g}}(\tau_{f, g}-O(e))$, where the threshold $\tau_{f, g}=\max\{O(e)\}_{e \in \bm{E}_{f, g}}$.
When the records are divided into more groups, fewer elements will be included in each group.
As a result, the occurrence of the searched element tends to be closer to $\tau_{f, g}$, and fewer dummy records are required for its padding.
Thus, the result decryption time decreases with the increase of the group number.
In the tested dataset, the elements have very close occurrences, ranging from $1$ to $41$.
The numbers of matched dummy records are $9$, $9$, $2$, $1$, and $0$ when there are $1$, $10$, $100$, $1000$, and $10000$ groups, respectively.
For a dataset with a larger gap between element occurrences, the result decryption time will vary more markedly with the number of groups.
\subsubsection{End-to-end Latency}
Fig. \ref{Fig:mcdb-user} also shows the end-to-end latency on the user side when issuing a query.
In this test, we did not simulate the network latency, so the end-to-end latency shown here consists of the query encryption time, the nonce blinding time, the searching time and the result decrypting time.
The end-to-end latency is dominated by the nonce blinding and searching times, thus it decreases when increasing the number of groups.
Specifically, the end-to-end latency decreases from $2.16$ to $0.0006$ seconds when the number of groups increases from $1$ to $10000$.
In this test, we used one trick to improve the performance.
As described in Algorithm \ref{alg:macdb-search}, the SSS is idle before getting $(IL, EN)$.
Indeed, the IWS can send $IL$ to the SSS first, so that the SSS can pre-compute $temp_{id}=H'(EDB(id, EQ.f) \oplus EQ.e^*)$ while the IWS generates $EN$.
After getting $EN$, the SSS just needs to check if $temp_{id}=EN(id).w$.
By computing $(w, t)$ and $temp$ simultaneously, the user can get the search result more efficiently.
In this test, the SSS computed $temp$ while the IWS was generating $EN$.
Note that the shuffle operation does not affect the end-to-end latency on the user side since it is performed after returning search results to users.
\subsection{Insert and Delete Queries}
\begin{figure}
\centering
\includegraphics[width=.32\textwidth]{figs/insdelsel.pdf}\\
\caption{The execution times of the insert, delete and select queries with different numbers of groups}
\label{Fig:mcdb-insert}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{figs/insdelsel_result.pdf}\\
\caption{The execution times of the insert, delete, and select queries with different result sizes}
\label{Fig:mcdb-insert-result}
\end{figure}
Since \textsl{\mbox{P-McDb}}\xspace is a dynamic SE scheme, we also tested its performance for insert and delete queries.
In Fig. \ref{Fig:mcdb-insert}, we show the execution times of insert and delete when changing the number of groups\footnote{The times taken by the $PreShuffle$ and $Shuffle$ algorithms are not included.}.
Moreover, we take the end-to-end latency of select queries as the baseline.
Fig. \ref{Fig:mcdb-insert} shows both insert and delete queries execute faster when there are more groups.
For insert queries, as mentioned in Section \ref{subsec:update}, $W=\max\{\gamma_{f}\}_{1 \leq f \leq F}$ dummy records should be inserted when inserting a real record.
Thus, the performance of insert queries is affected by the number of elements in involved groups.
When the database is divided into more groups, fewer elements will be included in each group.
In this experiment, when there are 1, 10, 100, 1000, and 10000 groups, the user has to generate and encrypt 99996, 10000, 1000, 100, and 10 dummy records, respectively.
Specifically, when there is only one group, \textsl{\mbox{P-McDb}}\xspace takes only around $1.5$ seconds to encrypt and insert $99997$ records, which is even slightly faster than a select query.
For delete queries, \textsl{\mbox{P-McDb}}\xspace first performs the search operation to get the matched records, and then turns them into dummy records by changing their tags.
Moreover, to hide the access pattern from the RSS, the user also needs to change the tags of all the other searched records.
The more groups there are, the fewer records need to be searched and the fewer tags need to be changed.
Therefore, the performance of delete queries also gets better when there are more groups.
However, compared with select queries, delete queries take much longer owing to the tag changes.
Specifically, it takes around $20$ seconds to execute a delete query when there is only one group.
We also tested how the result size affects the performance of select and delete queries.
For this test, we divided the database into 10 groups, and the searched group contains $360000$ records.
Moreover, we manually changed the data distribution in the group to be searched to ensure that we could execute queries matching $36$, $360$, $3600$, $36000$, and $360000$ records.
From Fig. \ref{Fig:mcdb-insert-result}, we can see that the performance of delete queries is better when the result size is bigger.
The reason is that the tags of matched records are processed much more efficiently than those of unmatched records.
Specifically, as mentioned in Section \ref{subsec:update}, the user directly changes the tags of matched records to random strings.
However, for each unmatched record, the user first has to detect whether it is dummy or not, and then update its tag accordingly.
When all the searched records match the delete query, it takes only $0.6$ seconds to turn them into dummy records.
Nonetheless, select queries take longer when more records match the query, since more records have to be processed on the user side.
\iffalse
\subsection{Comparison with Other Schemes}
To better investigate the performance of our approach, here we roughly compare the search time of \textsl{\mbox{P-McDb}}\xspace with $PPQED_a$ and SisoSPIR.
Although we did not access their implementation, our experiments were conducted on Linux machines with approximate power\footnote{$PPQED_a$ was tested on a Linux machine with an
Intel Xeon 6-Core CPU 3.07 GHz processor and 12 GB RAM and SisoSPIR was tested on a machine with an Intel i7-2600K 3.4GHz 4-core CPU 8GB RAM.}.
Searching over 1 million records takes more than 10 seconds in SisoSPIR.
In $PPQED_a$, it takes 1 second to check if a predicate matches with a record when the data size is $10$ bits.
However, \textsl{\mbox{P-McDb}}\xspace only takes less than 2 seconds when searching over 4.1 million records, which is much more efficient than the other two schemes.
\fi
\section{Conclusions and Future Directions}
\label{sec:conclusion}
In this work, we presented leakage profile definitions for searchable encrypted relational databases, and investigated the leakage-based attacks proposed in the literature.
We also proposed \textsl{\mbox{P-McDb}}\xspace, a dynamic searchable encryption scheme for multi-cloud outsourced databases.
\textsl{\mbox{P-McDb}}\xspace does not leak information about search, access, and size patterns.
It also achieves both forward and backward privacy, where the CSPs cannot reuse cached queries for checking if new records have been inserted or if records have been deleted.
\textsl{\mbox{P-McDb}}\xspace has minimal leakage, making it resistant to existing leakage-based attacks.
As future work, we will first extend our performance analysis by deploying the scheme in a real multi-cloud setting.
Second, we will try to address the limitations of \textsl{\mbox{P-McDb}}\xspace.
Specifically, \textsl{\mbox{P-McDb}}\xspace protects the search, access, and size patterns from the CSPs.
However, it suffers from the collusion attack among CSPs.
In \textsl{\mbox{P-McDb}}\xspace, the SSS knows the search result for each query, and the other two know how the records are shuffled and re-randomised.
If the SSS colludes with the IWS or RSS, they could learn the search and access patterns.
We will also consider techniques to defend against collusion among the CSPs.
Moreover, in this work, we assume all the CSPs are honest.
Yet, in order to learn more useful information, the compromised CSPs might not behave honestly as assumed in the security analysis.
For instance, the SSS might not search all the records indexed by $IL$, and the RSS might not shuffle the records properly.
In the future, we will design a mechanism to detect whether the CSPs honestly follow the designated protocols.
\section{Notations and Definitions}
\label{sec:notation}
\begin{table}
\centering
\caption{Notation and description}
\scriptsize
\label{tbl:notation}
\begin{tabular}{|l|l|} \hline
\textbf{Notation} & \textbf{Description} \\ \hline
$e$ & Data element \\ \hline
$|e|$ &The length of data element \\ \hline
$F$ & Number of attributes or fields \\ \hline
$rcd_{id}=(e_{id, 1}, \ldots, e_{id, F})$ & The $id$-th record \\ \hline
$N$ & Number of records in the database \\ \hline
$DB=\{rcd_1, \ldots, rcd_N\}$ & Database \\ \hline
$DB(e)=\{rcd_{id} | e \in rcd_{id}\}$ & Records containing $e$ in $DB$ \\ \hline
$O(e)=|DB(e)|$ & Occurrence of $e$ in $DB$ \\ \hline
$U_f=\cup \{e_{id, f}\}$ & The set of distinct elements in field $f$ \\ \hline
$U=\{U_{1}, ..., U_F\}$ & All the distinct elements in $DB$ \\ \hline
$e^*$ & Encrypted element \\ \hline
$Ercd$ & Encrypted record \\ \hline
$EDB$ & Encrypted database \\ \hline
$Q=(type, f, e)$ & Query \\ \hline
$Q.type$ & `select' or `delete' \\ \hline
$Q.f$ & Identifier of interested field \\ \hline
$Q.e$ & Interested keyword \\ \hline
$EQ$ & Encrypted query \\ \hline
$EDB(EQ)$ or $EDB(Q)$ & Search result of $Q$ \\ \hline
$(f, g)$ & Group $g$ in field $f$ \\ \hline
$\bm{E}_{f,g}$ & Elements included in group $(f,g)$ \\ \hline
$\tau_{f,g}=\max \{O(e)\}_{e \in \bm{E}_{f, g}}$ & Threshold of group $(f, g)$ \\ \hline
$(\bm{E}_{f,g}, \tau_{f,g})^*$ & Ciphertext of $(\bm{E}_{f,g}, \tau_{f,g})$ \\ \hline
\end{tabular}
We say $EQ(Ercd)=1$ when $Ercd$ matches $EQ$.
Thus, the search result $EDB(EQ) = \{Ercd_{id} | EQ(Ercd_{id})=1\}$.
\end{table}
In this section, we give formal definitions for the search, access, and size patterns, as well as for forward and backward privacy.
Before that, in Table~\ref{tbl:notation}, we define the notations used throughout this article.
\begin{mydef}[\textbf{Search Pattern}]
Given a sequence of $q$ queries $\bm{Q}=(Q_1, \ldots, Q_q)$, the search pattern of $\bm{Q}$ represents the correlation between any two queries $Q_i$ and $Q_j$, \textit{i.e.,}\xspace $\{Q_i\stackrel{?}{=}Q_j\}_{Q_i, Q_j \in \bm{Q}}$\footnote{$Q_i=Q_j$ only when $Q_i.type=Q_j.type$, $Q_i.f=Q_j.f$, $Q_i.op=Q_j.op$ and $Q_i.e=Q_j.e$}, where $1 \leq i, j \leq q$.
\end{mydef}
In previous works, access pattern is generally defined as the records matching each query \cite{Curtmola:2006:Searchable}, \textit{i.e.,}\xspace the search result.
In fact, in leakage-based attacks, such as \cite{Zhang:2016:All,Islam:2012:Access,Cash:2015:leakage}, the attackers leverage the intersection between search results (explained in Section \ref{sec:attack}) to recover queries, rather than each single search result.
Therefore, in this work, we define the intersection between search results as access pattern.
\begin{mydef}[\textbf{Access Pattern}]
The access pattern of $\bm{Q}$ represents the intersection between any two search results, \textit{i.e.,}\xspace $\{EDB(Q_i) \cap EDB(Q_j)\}_{Q_i, Q_j \in \bm{Q}}$.
\end{mydef}
\begin{mydef}[\textbf{Size Pattern}]
The size pattern of $\bm{Q}$ represents the number of records matching each query, \textit{i.e.,}\xspace $\{|DB(Q_i)|\}_{Q_i\in \bm{Q}}$.
\end{mydef}
\begin{mydef}[\textbf{Forward Privacy}]
Let $Ercd^t$ be an encrypted record inserted or updated at time $t$. A dynamic SE scheme achieves forward privacy if $EQ(Ercd^t)=0$ always holds for any query $EQ$ issued at time $t^*$, where $t^* < t$.
\end{mydef}
\begin{mydef}[\textbf{Backward Privacy}]
Let $Ercd^t$ be an encrypted record deleted at time $t$. A dynamic SE scheme achieves backward privacy if $EQ(Ercd^t)=0$ always holds for any query $EQ$ issued at time $t'$, where $t < t'$.
\end{mydef}
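As a sanity check, the three patterns can be computed directly from a query trace; the following sketch models queries as tuples and search results as sets of record identifiers.

```python
def search_pattern(queries):
    # Pairwise query equality, as in the Search Pattern definition.
    q = len(queries)
    return {(i, j): queries[i] == queries[j]
            for i in range(q) for j in range(i + 1, q)}

def access_pattern(results):
    # Pairwise intersections of search results, as in the Access Pattern
    # definition (sets of record identifiers).
    q = len(results)
    return {(i, j): results[i] & results[j]
            for i in range(q) for j in range(i + 1, q)}

def size_pattern(results):
    # Number of records matching each query.
    return [len(r) for r in results]
```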
\section{Introduction}
\label{sec:introduction}
Cloud computing is a successful paradigm offering companies and individuals virtually unlimited
data storage and computational power at very attractive costs.
However, uploading sensitive data, such as medical, social, and financial information, to public cloud environments is still a challenging issue due to security concerns.
In particular, once such data sets and related operations are uploaded to cloud environments, the tenants must trust the Cloud Service Providers (CSPs).
Yet, due to possible cloud infrastructure bugs~\cite{GunawiHLPDAELLM14}, misconfigurations~\cite{dropboxleaks} and external attacks~\cite{verizonreport}, the data could be disclosed or corrupted.
Searchable Encryption (SE) is an effective approach that allows organisations to outsource their databases and search operations to untrusted CSPs, without compromising the confidentiality of records and queries.
Since the seminal SE paper by Song \textit{et al.}\xspace \cite{Song:2000:Practical}, a long line of work has investigated SE schemes with flexible functionality and better performance \cite{Asghar:2013:CCSW,Curtmola:2006:Searchable,Popa:2011:Cryptdb,Sarfraz:2015:DBMask}.
These schemes are proved to be secure in certain models under various cryptographic assumptions.
Unfortunately, a series of more recent work \cite{Islam:2012:Access,Naveed:2015:Inference,Cash:2015:leakage,Zhang:2016:All,Kellaris:ccs16:Generic,Abdelraheem:eprint17:record}
illustrates that they are still vulnerable to inference attacks, where malicious CSPs could recover the content of queries and records by (i) observing the data directly from the encrypted database and (ii) learning about the results and queries when users access the database.
From the encrypted database, the CSP might learn the frequency information of the data.
From the search operation, the CSP is able to know the \emph{access pattern}, \textit{i.e.,}\xspace the records returned to users in response to given queries.
The CSP can also infer if two or more queries are equivalent, referred to as the \emph{search pattern}, by comparing the encrypted queries or matched data.
Last but not least, the CSP can simply log the number of matched records or files returned by each query, referred to as the \emph{size pattern}.
When an SE scheme supports insert and delete operations, it is referred to as a \emph{dynamic} SE scheme.
Dynamic SE schemes might leak extra information if they do not support \emph{forward privacy} and \emph{backward privacy} properties.
Lacking forward privacy means that the CSP can learn if newly inserted data or updated data matches previously executed queries.
Missing backward privacy means that the CSP learns if deleted data matches new queries.
Supporting forward and backward privacy is fundamental to limit the power of the CSP to collect information on how the data evolves over time.
However, only a few schemes \cite{BostMO17,ZuoSLSP18,ChamaniPPJ18,AmjadKM19} ensure both properties simultaneously.
Initiated by Islam \textit{et al.}\xspace (IKK)~\cite{Islam:2012:Access}, more recent works \cite{Cash:2015:leakage,Naveed:2015:Inference,Zhang:2016:All,Kellaris:ccs16:Generic} have shown that such leakage can be exploited to learn sensitive information and break these schemes.
Naveed \textit{et al.}\xspace \cite{Naveed:2015:Inference} recover more than $60\%$ of the data in CryptDB \cite{Popa:2011:Cryptdb} using frequency analysis only.
Zhang \textit{et al.}\xspace \cite{Zhang:2016:All} further investigate the consequences of leakage by injecting chosen files into the encrypted storage.
Based on the access pattern, they could recover a very high fraction of searched keywords by injecting a small number of known files.
Cash \textit{et al.}\xspace \cite{Cash:2015:leakage} give a comprehensive analysis of the leakage in SE solutions for file collections and introduce the \emph{count attack}, where an adversary could recover queries by counting the number of matched records even if the encrypted records are semantically secure.
In this article, we investigate the leakage and attacks against relational databases\footnote{In the rest of this article, we use the term \emph{database} to refer to a relational database.} and present a \underline{P}rivacy-preserving \underline{M}ulti-\underline{c}loud based dynamic SSE scheme for \underline{D}ata\underline{b}ases (\textsl{\mbox{P-McDb}}\xspace).
\textsl{\mbox{P-McDb}}\xspace can effectively resist attacks based on the search, size, and/or access patterns.
Our key technique is to use three non-colluding cloud servers: one server stores the data and performs the search operation, and the other two manage re-randomisation and shuffling of the database for protecting the access pattern.
A user with access to all servers can perform an encrypted search without leaking the search, access, or size pattern.
When updating the database, \textsl{\mbox{P-McDb}}\xspace also ensures both forward and backward privacy.
We give full proof of security against honest-but-curious adversaries and show how \textsl{\mbox{P-McDb}}\xspace can hide these patterns effectively.
The contributions of this article can be summarised as follows:
\begin{itemize}
\item
We provide a leakage definition specific to searchable encrypted databases, and then review how existing attacks leverage this leakage to recover queries and records.
\item We propose a privacy-preserving SSE database \textsl{\mbox{P-McDb}}\xspace, which protects the search, access, and size patterns, and achieves both forward and backward privacy, thus ensuring protection from leakage-based attacks.
\item
We give full proof of security against honest-but-curious adversaries and show how \textsl{\mbox{P-McDb}}\xspace can effectively hide these patterns and resist leakage-based attacks.
\item
Finally, we implement a prototype of \textsl{\mbox{P-McDb}}\xspace and show its practical efficiency by evaluating its performance on the TPC-H dataset.
\end{itemize}
The rest of this article is organised as follows.
In Section~\ref{sec:notation}, we define notations.
We present the leakage levels in SE schemes and review leakage-based attacks in Section~\ref{sec:leakage}.
In Section~\ref{sec:overview}, we provide an overview of \textsl{\mbox{P-McDb}}\xspace.
Solution details can be found in Section~\ref{sec:MCDB-details}.
In Section~\ref{sec:security}, we analyse the security of \textsl{\mbox{P-McDb}}\xspace.
Section~\ref{sec:MCDB-perf} reports the performance of \textsl{\mbox{P-McDb}}\xspace.
Finally, we conclude this article in Section~\ref{sec:conclusion}.
\subsection{Attacks against SE Solutions}
\label{sec:attack}
In recent years, leakage-based attacks against SE schemes have been investigated in the literature.
Table \ref{tbl:summary} summarises the existing SE solutions for relational databases and the attacks applicable to them.
In the following, we illustrate how the existing leakage-based attacks could recover the data and queries.
Specifically, for each attack, we analyse its leveraged leakage, required knowledge, process, and consequences.
\subsubsection{Frequency Analysis Attack}
In \cite{Naveed:2015:Inference}, Naveed \textit{et al.}\xspace describe an attack on PPE-based SE schemes, where the CSP could recover encrypted records by analysing the leaked frequency information, \textit{i.e.,}\xspace data distribution.
To succeed in this attack, in addition to the encrypted database, the CSP also requires some auxiliary information, such as the application background, publicly available statistics, and prior versions of the targeted database.
In PPE-based SE schemes, the frequency information of an encrypted database is equal to that of the database in plaintext.
By comparing the leaked frequency information with the obtained statistics relevant to the application, the CSP could recover the encrypted data elements stored in encrypted databases.
In \cite{Naveed:2015:Inference}, Naveed \textit{et al.}\xspace recovered more than $60\%$ of records when evaluating this attack with real electronic medical records using CryptDB.
We stress that this attack does not require any queries or interaction with users.
The encrypted databases with $\mathcal{L}_3$ leakage profile, \textit{i.e.,}\xspace PPE-based databases, such as CryptDB and DBMask, are vulnerable to this attack.
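The core of this attack is a simple frequency-rank matching, which can be sketched as follows; the column values, ciphertext tokens, and auxiliary statistics below are fabricated purely for illustration.

```python
from collections import Counter

def frequency_attack(ciphertexts, public_stats):
    """Match DET ciphertexts to plaintext guesses by frequency rank."""
    # Rank observed ciphertexts by occurrence, most frequent first.
    ct_ranked = [ct for ct, _ in Counter(ciphertexts).most_common()]
    # Rank plaintext candidates by their publicly known frequency.
    pt_ranked = sorted(public_stats, key=public_stats.get, reverse=True)
    # Pair them off rank-for-rank.
    return dict(zip(ct_ranked, pt_ranked))

# A DET-encrypted column: equal plaintexts yield equal tokens.
column = ["x9", "x9", "x9", "a1", "a1", "q7"]
# Auxiliary statistics, e.g. drawn from public medical data.
stats = {"O+": 3, "A+": 2, "AB-": 1}
guess = frequency_attack(column, stats)
```

The actual attack refines this rank matching with distance minimisation over full distributions, but the intuition is the same.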
\subsubsection{IKK Attack}
IKK attack proposed by Islam \textit{et al.}\xspace \cite{Islam:2012:Access} is the first attack exploiting the access pattern leakage.
The goal of the IKK attack is to recover encrypted queries in encrypted file collection systems, \textit{i.e.,}\xspace recover the plaintext of searched keywords.
Note that this attack can also be used to recover queries in encrypted databases since it does not leverage the leakage specific to file collections.
In this attack, the CSP needs to know possible keywords in the dataset and the expected probability of any two keywords appearing in a file (\textit{i.e.,}\xspace co-occurrence probability).
Formally, the CSP guesses $m$ potential keywords and builds an $m\times m$ matrix $\tilde{C}$ whose element is the co-occurrence probability of each keyword pair.
The CSP mounts the IKK attack by observing the access pattern revealed by the encrypted queries.
Specifically, by checking if any two queries match the same files or not, the number of files containing any two searched keywords (\textit{i.e.,}\xspace the co-occurrence rate) can be reconstructed.
Assume the CSP observes $n$ queries.
It constructs an $n \times n$ matrix $C$ with their co-occurrence rates.
By using the simulated annealing technique \cite{KirkpatrickGV83}, the CSP can find the best match between $\tilde{C}$ and $C$ and map the encrypted keywords to the guesses.
In \cite{Islam:2012:Access}, Islam \textit{et al.}\xspace mounted the IKK attack over the Enron email dataset \cite{eronemail:2017} and recovered $80\%$ of the queries with certain vocabulary sizes.
The encrypted relational databases with leakage profile $\mathcal{L}_2$ or $\mathcal{L}_1$, such as Arx \cite{Poddar:arx:eprint16}, Blind Seer \cite{Pappas:BlindSeer:SP14}, and PPQED \cite{Samanthula:2014:Privacy}, are also vulnerable to the IKK attack.
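The matrix-matching idea behind the IKK attack can be sketched as follows. For a tiny fabricated example, an exhaustive search over keyword assignments stands in for the simulated annealing step.

```python
from itertools import permutations

def cooccurrence(access_patterns):
    """C[i][j] = number of files matched by both query i and query j."""
    n = len(access_patterns)
    return [[len(access_patterns[i] & access_patterns[j]) for j in range(n)]
            for i in range(n)]

def ikk_match(C, C_known, keywords):
    """Find the keyword assignment minimising the distance between the
    observed matrix C and the guessed matrix C_known.  IKK uses simulated
    annealing; exhaustive search suffices for this tiny example."""
    n = len(C)
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(keywords)), n):
        cost = sum((C[i][j] - C_known[perm[i]][perm[j]]) ** 2
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return [keywords[p] for p in best]

# Access patterns (sets of matched file ids) of 3 encrypted queries.
ap = [{1, 2, 3}, {2, 3}, {3}]
C = cooccurrence(ap)
# Guessed co-occurrence counts for candidate keywords (fabricated).
keywords = ["tax", "merger", "bonus"]
C_known = [[3, 2, 1], [2, 2, 1], [1, 1, 1]]
recovered = ikk_match(C, C_known, keywords)
```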
\subsubsection{File-injection and Record-injection Attack}
The file-injection attack \cite{Zhang:2016:All} is an active attack mounted on encrypted file collections, also referred to as the \emph{chosen-document attack} in \cite{Cash:2015:leakage}.
The file-injection attack attempts to recover encrypted queries by exploiting the access pattern in encrypted file storage.
More recently, Abdelraheem \textit{et al.}\xspace \cite{Abdelraheem:eprint17:record} extended this attack to encrypted databases and defined it as \emph{record-injection attack}.
Compared with the IKK and count attacks (the latter is discussed in Section \ref{subsec:conut}), much less auxiliary knowledge is required: the CSP only needs to know the keyword universe of the system.
In \cite{Zhang:2016:All}, Zhang \textit{et al.}\xspace presented the first concrete file-injection attack and showed that the encrypted queries can be revealed with a small set of injected files.
Specifically, in this attack, the CSP (acting as an active attacker) sends files composed of keywords of its choice, \textit{e.g.,}\xspace emails, to users, who then encrypt and upload them to the CSP; these are called \emph{injected files}.
If no other files are uploaded simultaneously, the CSP can easily know the storage location of each injected file.
Moreover, the CSP can check which injected files match the subsequent queries.
Given enough injected files with different keyword combinations, the CSP could recover the keyword included in a query by checking the search result.
The encrypted databases with $\mathcal{L}_2$ or $\mathcal{L}_3$ leakage profiles are vulnerable to this attack.
Although some works \cite{BostMO17,ZuoSLSP18,ChamaniPPJ18,AmjadKM19} ensure both forward and backward privacy, they are still vulnerable to the file-injection attack due to the leakage of access pattern.
That is, after searching, the attacker could still learn the intersections between previous insert queries and the search result of current queries.
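The basic version of this attack uses a binary encoding of the keyword universe: with $\lceil \log_2 |K| \rceil$ injected files, where file $i$ contains exactly the keywords whose index has bit $i$ set, the subset of injected files matching a query spells out the index of the searched keyword. A minimal sketch over a fabricated keyword universe:

```python
import math

def injected_files(keywords):
    """File i contains every keyword whose index has bit i set, so
    ceil(log2(len(keywords))) injected files cover the whole universe."""
    bits = max(1, math.ceil(math.log2(len(keywords))))
    return [{w for idx, w in enumerate(keywords) if idx >> i & 1}
            for i in range(bits)]

def recover_keyword(keywords, matched):
    """matched: indices of the injected files returned for the query."""
    return keywords[sum(1 << i for i in matched)]

keywords = ["alpha", "bravo", "charlie", "delta"]  # known keyword universe
files = injected_files(keywords)
# If only injected file 1 appears in the result, the index is 0b10 = 2:
keyword = recover_keyword(keywords, [1])
```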
\subsubsection{Count and Relational-count Attack}
\label{subsec:conut}
The count attack was proposed by Cash \textit{et al.}\xspace in \cite{Cash:2015:leakage} to recover encrypted queries in file storage systems based on the access and size pattern leakage.
In \cite{Abdelraheem:2017:Seachable}, Abdelraheem \textit{et al.}\xspace applied this attack to databases and named it the \emph{relational-count attack}.
As in the IKK attack scenario, the CSP is also assumed to know an $m \times m$ matrix $\tilde{C}$, where its entry $\tilde{C}[w_i, w_j]$ holds the co-occurrence rate of keyword $w_i$ and $w_j$ in the targeted dataset.
In order to improve the attack efficiency and accuracy, the CSP is assumed to know, for each keyword $w$, the number of matching files $count(w)$ in the targeted dataset.
The CSP mounts the count attack by counting the number of files matching each encrypted query.
For an encrypted query, if the number of its matching files is unique and equal to a known $count(w)$, the searched keyword must be $w$.
However, if the result size of a query $EQ$ is not unique, all the keywords with $count(w)=|EDB(EQ)|$ are candidates.
Recall that the CSP can construct another matrix $C$ that represents the observed co-occurrence rate between any two queries based on the leakage of access pattern.
By comparing $C$ with $\tilde{C}$, the candidates for the queries with non-unique result sizes can be reduced.
With enough recovered queries, it is possible to determine the keyword of $EQ$.
In \cite{Cash:2015:leakage}, Cash \textit{et al.}\xspace tested the count attack against Enron email dataset and successfully recovered almost all the queries.
The SE solutions for databases with leakage profiles above $\mathcal{L}_1$ are vulnerable to this attack.
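The first, counting-only phase of this attack is straightforward to sketch; the result sizes and auxiliary counts below are fabricated.

```python
def count_attack(result_sizes, known_counts):
    """Recover every query whose observed result size is unique."""
    recovered = {}
    for eq, size in result_sizes.items():
        candidates = [w for w, c in known_counts.items() if c == size]
        if len(candidates) == 1:  # a unique size pins the keyword down
            recovered[eq] = candidates[0]
    return recovered

# Result sizes observed by the CSP, and count(w) from auxiliary data.
sizes = {"EQ1": 5, "EQ2": 2, "EQ3": 2}
known = {"anna": 5, "bob": 2, "carl": 2, "dora": 7}
partial = count_attack(sizes, known)
# EQ1 is recovered outright; EQ2 and EQ3 keep {bob, carl} as candidates,
# which the co-occurrence matrix comparison then narrows down.
```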
\subsubsection{Reconstruction Attack}
In ORAM-based systems, such as SisoSPIR proposed by Ishai \textit{et al.}\xspace \cite{Ishai:2016:Private}, the size and access patterns are concealed.
Unfortunately, Kellaris \textit{et al.}\xspace \cite{Kellaris:ccs16:Generic} observe that ORAM-based systems have a fixed communication overhead between the CSP and users: the length of the message the CSP sends to the user as the result of a query is proportional to the number of records matching the query.
That is, for a query $Q$, the size of the communication sent from the CSP to the user is $\alpha |DB(Q)|+ \beta$, where $\alpha$ and $\beta$ are two constants.
In theory, given two (query, result) pairs, the CSP can derive $\alpha$ and $\beta$, and then infer the result sizes of other queries.
In \cite{Kellaris:ccs16:Generic}, Kellaris \textit{et al.}\xspace present the \emph{reconstruction attack} that exploits the leakage of communication volume, and could reconstruct the attribute names in encrypted databases supporting range queries.
In this attack, the CSP only needs to know the underlying query distribution prior to the attack.
Their experiment illustrated that after a certain number of queries, all the attributes can be recovered in a few seconds.
Since we focus on equality queries in this work, we do not give the attack details here.
Nonetheless, after recovering the size pattern for each query, the CSP could also mount the count attack on equality queries.
The SE schemes with $\mathcal{L}_1$ leakage profile are vulnerable to this attack.
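The parameter-recovery step above is elementary linear algebra; the message lengths in this sketch are fabricated.

```python
def derive_volume_params(pair1, pair2):
    """Solve length = alpha * size + beta from two (size, length) pairs."""
    (s1, l1), (s2, l2) = pair1, pair2
    alpha = (l1 - l2) / (s1 - s2)
    return alpha, l1 - alpha * s1

def infer_size(length, alpha, beta):
    """Invert the relation to recover the hidden result size."""
    return round((length - beta) / alpha)

# Two observed (result size, message length) pairs, fabricated.
alpha, beta = derive_volume_params((10, 1064), (25, 2564))
hidden = infer_size(4064, alpha, beta)  # result size of a later query
```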
\section{Leakage and Attacks}
\label{sec:leakage}
\subsection{Leakage Definition}
In \cite{Cash:2015:leakage}, Cash \textit{et al.}\xspace define four different levels of leakage profiles for encrypted file collections according to the method of encrypting files and the data structure supporting encrypted search.
Yet, we cannot apply these definitions to databases directly, since the structure of a file is different from that of a record in the database.
In particular, a file is a collection of related words arranged in a semantic order and tagged with a set of keywords for searching; whereas, a record consists of a set of keywords with predefined attributes.
Moreover, a keyword may occur more than once in a file, and different keywords may have different occurrences; whereas, a keyword of an attribute generally occurs only once in a record.
Inspired by the leakage levels defined in \cite{Cash:2015:leakage}, in this section, we provide our own layer-based leakage definition for encrypted databases.
Specifically, we use the terminology \emph{leakage} to refer to the information the CSP can learn about the data directly from the encrypted database and the information about the results and queries when users are accessing the database.
The simplest type of SE scheme for databases encrypts both the records and the queries with Property-Preserving Encryption (PPE), such as DETerministic (DET) encryption.
In DET-based schemes, the same data has the same ciphertext once encrypted.
In this type of SE scheme, the CSP can efficiently check whether each record matches the query by simply comparing the corresponding ciphertexts; however, these solutions result in information leakage.
Specifically, in DET-based schemes, such as CryptDB \cite{Popa:2011:Cryptdb} (where the records are protected only with the PPE layer), DBMask \cite{Sarfraz:2015:DBMask}, and Cipherbase \cite{Arasu:CIDR13:Cipherbase}, before executing any query, the CSP can learn the data distribution, \textit{i.e.,}\xspace the number of distinct elements and the occurrence of each element, directly from the ciphertext of the database.
Formally, we say the data distribution of $DB$ is leaked if $e^*$ and $e$ have the same occurrence, \textit{i.e.,}\xspace $O(e)=O(e^*)$, for each $e \in U$.
We define this leakage profile set as $\mathcal{L}_3$:
\begin{itemize}
\item $\mathcal{L}_3=\{O(e)\}_{e \in U}$.
\end{itemize}
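The following toy snippet illustrates the $\mathcal{L}_3$ profile: under any deterministic encryption, the occurrence profile $\{O(e)\}_{e \in U}$ survives encryption unchanged. The keyed hash below is a stand-in for a real DET scheme, used only to show the leakage.

```python
import hashlib
from collections import Counter

def det_encrypt(key, value):
    """Toy DET 'encryption': same (key, value) always gives the same token.
    Illustration only -- not a secure construction."""
    return hashlib.sha256(key + value.encode()).hexdigest()[:16]

key = b"secret-key"
column = ["flu", "flu", "cold", "flu"]       # plaintext column
enc = [det_encrypt(key, v) for v in column]  # what the CSP stores
# Without any decryption, the CSP reads off the occurrence profile O(e):
profile = sorted(Counter(enc).values(), reverse=True)
```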
The second type of SE for databases encrypts the data with semantically secure primitives, but still encrypts the queries with DET encryption.
By doing so, the data distribution is protected, and the CSP can still search the encrypted database efficiently by repeating the randomisation over the DET query and then comparing it with the randomised data, as done in \cite{Hahn:2014:Searchable}, Arx \cite{Poddar:arx:eprint16}, and most of the Public-key Encryption with Keyword Search (PEKS) systems, such as \cite{BonehCOP04} and BlindSeer \cite{Fisch:SP15:BlindSeer}.
However, after executing a query, the CSP could still learn the access and size patterns.
Moreover, due to the DET encryption for queries, the search pattern is also leaked.
Given a sequence of $q$ queries $\bm{Q}=(Q_1, \ldots, Q_q)$, we define the leakage profile as:
\begin{itemize}
\item $\mathcal{L}_2=\{|DB(Q_i)|, \{EDB(Q_i)\cap EDB(Q_j), Q_i\stackrel{?}{=}Q_j\}_{Q_j \in \bm{Q}}\}_{Q_i \in \bm{Q}}$
\end{itemize}
Note that after executing queries, PPE-based databases also leak the profiles included in $\mathcal{L}_2$.
A more secure SE solution leverages Oblivious RAM (ORAM) \cite{Goldreich:1996:SPS,Stefanov:2013:PathORAM} or combines Homomorphic Encryption (HE) \cite{Paillier:1999:Public,Gentry:2009:FHE} with oblivious data retrieval to hide the search and access patterns.
For instance, the HE-based $PPQED_a$ proposed by Samanthula \textit{et al.}\xspace \cite{Samanthula:2014:Privacy} and the ORAM-based SisoSPIR given by Ishai \textit{et al.}\xspace \cite{Ishai:2016:Private} hide both the search and access patterns.
Unfortunately, in both schemes, the CSP can still learn how many records are returned to the user after executing a query, \textit{i.e.,}\xspace \emph{the communication volume}.
According to \cite{Kellaris:ccs16:Generic}, the HE-based and ORAM-based SE schemes have fixed communication overhead between the CSP and users.
Specifically, the length of the message sent from the CSP to the user as the result of query execution is proportional to the number of records matching the query.
Based on this observation, the CSP can still infer the size pattern.
Thus, the HE-based and ORAM-based SE schemes are vulnerable to size pattern-based attacks, \textit{e.g.,}\xspace count attack \cite{Cash:2015:leakage}.
The profile leaked in HE-based and ORAM-based SE schemes can be summarised below:
\begin{itemize}
\item $\mathcal{L}_1=\{|DB(Q_i)|\}_{Q_i \in \bm{Q}}$.
\end{itemize}
\section{Overview of \textsl{\mbox{P-McDb}}\xspace}
\label{sec:overview}
In this work, we propose \textsl{\mbox{P-McDb}}\xspace, a multi-cloud based dynamic SSE scheme for databases that can resist the aforementioned leakage-based attacks.
Specifically, our scheme not only hides the frequency information of the database, but also protects the size, search, and access patterns.
Moreover, it ensures both forward and backward privacy when involving insert and delete queries.
Compared with existing SE solutions, \textsl{\mbox{P-McDb}}\xspace has the smallest leakage.
In this section, we define the system and threat model, and illustrate the techniques used in \textsl{\mbox{P-McDb}}\xspace at high-level.
\subsection{System Model}
\label{subsec:sm}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{figs/arch.pdf}\\
\caption{An overview of \textsl{\mbox{P-McDb}}\xspace:
Users can upload records and issue queries.
The SSS, IWS, and RSS represent independent CSPs.
The SSS stores encrypted records and executes queries.
The IWS stores index and auxiliary information, and provides witnesses to the SSS for performing encrypted search.
After executing each query, the SSS sends the searched records to the RSS for shuffling and re-randomisation to protect pattern privacy.}
\label{Fig:arch}
\end{figure}
In the following, we define our system model to describe the entities involved in \textsl{\mbox{P-McDb}}\xspace, as shown in Fig.~\ref{Fig:arch}:
\begin{itemize}
\item \textbf{Admin}: An admin is responsible for the setup and maintenance of databases, user management, and the specification and deployment of access control policies.
\item \textbf{User}: A user can issue insert, select, delete, and update queries to read and write the database according to the deployed access control policies.
\textsl{\mbox{P-McDb}}\xspace allows multiple users to read and write the database.
\item \textbf{Storage and Search Service (SSS)}:
It provides encrypted data storage, executes encrypted queries, and returns matching records in an encrypted manner.
\item \textbf{Index and Witness Service (IWS)}:
It stores the index and auxiliary information, and provides witnesses to the SSS for retrieving data.
The IWS has no access to the encrypted data.
\item \textbf{Re-randomise and Shuffle Service (RSS)}:
After executing each query, it re-randomises and shuffles searched records to achieve the privacy of access pattern.
The RSS does not store any data.
\end{itemize}
Each of the SSS, IWS, and RSS is deployed on the infrastructure managed by CSPs that are in conflict of interest.
According to a report by RightScale \cite{RightScale:2016:report}, organisations use more than three public CSPs on average, which means multi-cloud based schemes are feasible for most organisations.
The CSPs have to ensure two-way communication between any two of them, but our model assumes there is no collusion between the CSPs.
\subsection{Threat Model}
\label{subsec:tm}
We assume the admin is fully trusted.
Users are assumed only to store their keys and data securely.
The CSPs hosting the SSS, IWS, and RSS are modelled as honest-but-curious.
More specifically, they honestly perform the operations requested by users according to the designated protocol specification.
However, as in the leakage-based attacks discussed above, they are curious and try to gain knowledge of records and queries by 1) analysing the outsourced data, 2) analysing the information leaked when executing queries, and 3) injecting malicious records.
As far as we know, \textsl{\mbox{P-McDb}}\xspace is the first SE scheme that considers active CSPs that could inject malicious records.
Moreover, as assumed in \cite{Samanthula:2014:Privacy,Hoang:2016:practical,Stefanov:CCS2013:Multi-cloud}, we also assume the CSPs do not collude.
In other words, we assume an attacker could only compromise one CSP.
In practice, any three cloud providers in conflict of interest, such as Amazon S3, Google Drive, and Microsoft Azure, could be considered since they may be less likely to collude in an attempt to gain information from their customers.
We assume there are mechanisms in place for ensuring data integrity and availability of the system.
\subsection{Approach Overview}
\label{subsec:appo}
\textsl{\mbox{P-McDb}}\xspace aims at hiding the search, access, and size patterns.
\textsl{\mbox{P-McDb}}\xspace also achieves both backward and forward privacy.
We now give an overview of our approach.
To protect the search pattern, \textsl{\mbox{P-McDb}}\xspace XORs the query with a nonce, making identical queries look different once encrypted (\textit{i.e.,}\xspace the encrypted query is semantically secure).
However, the CSP may still infer the search pattern by looking at the access pattern.
Specifically, the CSP can infer that two queries are equivalent if the same records are returned.
To address this issue, after executing each query, we shuffle the locations of the searched records.
Moreover, we re-randomise their ciphertexts, making them untraceable.
In this way, even if a query equivalent to the previous one is executed, the CSP will see a new set of records being searched and returned, and cannot easily infer the search and access pattern.
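The blinding step can be sketched as follows. This is an illustrative XOR-with-nonce sketch under our own simplifications, not \textsl{\mbox{P-McDb}}\xspace's exact query encryption.

```python
import secrets

def blind(data, nonce):
    """XOR data with a one-time nonce of the same length."""
    return bytes(d ^ n for d, n in zip(data, nonce))

query = b"age=42"
n1 = secrets.token_bytes(len(query))
n2 = secrets.token_bytes(len(query))
eq1 = blind(query, n1)  # first submission of the query
eq2 = blind(query, n2)  # the same query again, under a fresh nonce
# Unblinding with the matching nonce recovers the query; without the
# nonces, the two submissions are unlinkable to the CSP.
```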
Another form of leakage is the size pattern, where the CSP can learn the number of records returned after performing a query, even after shuffling and re-randomisation.
Moreover, the CSP can guess the search pattern from the size pattern.
Specifically, the queries matching different numbers of records must be different, and the queries matching the same number of records could be equivalent.
To protect the size pattern, we introduce a number of dummy records that look exactly like the real ones and could match queries.
Consequently, the search result for each query will contain a number of dummy records making it difficult for the CSP to identify the actual number of real records returned by a query.
To break the link between the size and search patterns, our strategy is to ensure that all queries always match the same number of records; concretely, we pad all the data elements in each field to the same occurrence using dummy records.
By doing so, the size pattern can no longer be inferred from the communication volume, since there is no longer a fixed relationship between the two.
However, a large number of dummy records might be required for the padding.
To reduce the number of required dummy records and maintain \textsl{\mbox{P-McDb}}\xspace's performance, we virtually divide the distinct data elements into groups and only pad the elements within the same group to the same occurrence.
By doing so, the queries searching values in the same group will always match the same number of records.
Then, the CSP cannot infer their search pattern.
Here we clarify that the search pattern is not fully protected in \textsl{\mbox{P-McDb}}\xspace.
Specifically, the CSP can still tell the queries are different if their search results are in different groups.
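The per-group padding can be sketched as follows; the field names and records are fabricated, and real dummy records would of course be encrypted indistinguishably from real ones.

```python
from collections import Counter

def pad_group(records, field, group_elems):
    """Pad every element of one group to the group's maximum occurrence
    with dummy records, so every equality query on the group matches the
    same number of (real + dummy) records."""
    occ = Counter(r[field] for r in records if r[field] in group_elems)
    target = max(occ[e] for e in group_elems)
    dummies = [{field: e, "dummy": True}
               for e in group_elems for _ in range(target - occ[e])]
    return records + dummies

rows = [{"dept": "HR"}, {"dept": "HR"}, {"dept": "IT"}]
padded = pad_group(rows, "dept", {"HR", "IT"})
# Both 'dept = HR' and 'dept = IT' now match exactly 2 records.
```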
\textsl{\mbox{P-McDb}}\xspace also achieves forward and backward privacy.
Our strategy is to blind records also with nonces and re-randomise them using fresh nonces after executing each query.
Only queries that include the current nonce could match records.
In this way, even if a malicious CSP tries to replay previously executed queries with old nonces, it will not be able to match the records in the dataset, ensuring forward privacy.
Similarly, deleted records (with old nonces) will not match newly issued queries because they use different nonces.
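The role of the nonce refresh can be sketched with hash-based search tags; this toy tag construction is an assumption made for illustration and is not \textsl{\mbox{P-McDb}}\xspace's actual record encryption.

```python
import hashlib
import secrets

def tag(keyword, nonce):
    """Search tag bound to the database's current nonce (toy sketch)."""
    return hashlib.sha256(nonce + keyword.encode()).digest()

nonce_old = secrets.token_bytes(16)
stored = tag("salary", nonce_old)     # record blinded with current nonce
old_query = tag("salary", nonce_old)  # query issued under the same nonce
# After the query, the record is re-randomised with a fresh nonce:
nonce_new = secrets.token_bytes(16)
refreshed = tag("salary", nonce_new)
# The replayed old query no longer matches the refreshed record (forward
# privacy); symmetrically, a record deleted under nonce_old cannot match
# queries issued under nonce_new (backward privacy).
```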
The details and algorithms of our scheme will be discussed in the following section.
\section{Security Analysis}
\label{sec:security}
In this section, we first analyse the leakage in \textsl{\mbox{P-McDb}}\xspace.
Second, we prove the patterns and forward and backward privacy are protected against the CSPs.
\subsection{Leakage of \textsl{\mbox{P-McDb}}\xspace}
Roughly speaking, given an initial database $DB$ and a sequence of queries $\bm{Q}$, the information leaked to each CSP in \textsl{\mbox{P-McDb}}\xspace can be defined as:
\begin{align*}
\mathcal{L} = \{\mathcal{L}_{\rm Setup}(DB), \{\mathcal{L}_{\rm Query}(Q_i)~or~\mathcal{L}_{\rm Update}(Q_i)\}_{Q_i \in \bm{Q}}\}
\end{align*}
where $\mathcal{L}_{\rm Setup}$, $\mathcal{L}_{\rm Query}$, and $\mathcal{L}_{\rm Update}$ represent the profiles leaked when setting up the system, executing queries and updating the database, respectively.
Specifically, $\mathcal{L}_{\rm Update}$ could be $\mathcal{L}_{\rm Insert}$ or $\mathcal{L}_{\rm Delete}$.
In the following, we analyse the specific information each CSP can learn from the received messages in each phase.
\paragraph{$\mathcal{L}_{\rm Setup}$}
When setting up the system, for the initial database $DB$, as shown in Algorithm \ref{alg:mcdb-setup}, the SSS gets the encrypted database $EDB$, and the IWS gets the group information $GDB$ and nonce information $NDB$.
In this phase, no data is sent to the RSS.
From $EDB$, the SSS learns the number of encrypted records $|EDB|$, the number of fields $F$, the length of each element $|e|$, and the length of tag $|tag|$.
From $NDB$ and $GDB$, the IWS learns $|NDB|$ ($|NDB|=|EDB|$), the length of each seed $|seed|$, the length of each nonce $|\bm{n}|$ ($|\bm{n}|=F|e|+|tag|$), the number of groups $|GDB|$, and the record identifiers $IL$ and $|(\bm{E}, \tau)^*|$ of each group.
In other words, the IWS learns the group information of each record in $EDB$.
Therefore, in this phase, the leakage $\mathcal{L}_{\rm Setup}(DB)$ learned by the SSS, IWS, and RSS can be respectively defined as:
\begin{align*}
\mathcal{L}^{SSS}_{\rm Setup}(DB) = &\{|EDB|, \mathcal{L}_{rcd}\} \\
\mathcal{L}^{IWS}_{\rm Setup}(DB) = &\{|NDB|, |\bm{n}|, |seed|, |GDB|,\\ & \{IL_{f, g}, |(\bm{E}_{f, g}, \tau)^*|\}_{(f, g) \in GDB}\} \\
\mathcal{L}^{RSS}_{\rm Setup}(DB) = &\emptyset
\end{align*}
where $\mathcal{L}_{rcd}=\{F, |e|, |tag|\}$.
\paragraph{$\mathcal{L}_{\rm Query}$}
When processing queries, as shown in Algorithms \ref{alg:macdb-search} and \ref{alg:mcdb-shuffle}, the SSS gets the encrypted query $EQ$, $IL$, and the encrypted nonces $EN$.
Based on them, the SSS can search over $EDB$ and obtain the search result $SR$.
After shuffling, the SSS also gets the shuffled records $Ercds$ from the RSS.
From $\{EQ, IL, EN, SR, Ercds\}$, the SSS learns $\{Q.type, Q.f, |Q.e|, IL, |w|, |t|\}$, where $|t|=|seed|$.
In addition, the SSS can also infer the threshold $\tau$ ($\tau=|SR|$) and the number of distinct elements $|\bm{E}|$ ($|\bm{E}|=\frac{|IL|}{\tau}$) of the searched group.
The IWS only gets $(EQ.f, g, \eta)$ from the user, from which the IWS learns the searched field and group information of each query, and $|\eta|$ ($|\eta|=|Q.e|$).
The RSS gets the searched records $Ercds$, the shuffled record identifiers $IL'$, and new nonces $NN$ for shuffling and re-randomising.
From them, the RSS only learns $|Ercd|$ ($|Ercd|=|\bm{n}|$), $IL$ and $IL'$.
In summary,
\begin{align*}
\mathcal{L}^{SSS}_{\rm Query}(Q) = &\{Q.f, Q.type, |Q.e|, Q.\bm{G}, |t|, |w|\} \\
\mathcal{L}^{IWS}_{\rm Query}(Q) = &\{Q.f, g, |\eta|, |t|, |w|\} \\
\mathcal{L}^{RSS}_{\rm Query}(Q) = &\{|Ercd|, IL, IL'\}
\end{align*}
where the group information $Q.\bm{G}=\{g, IL, \tau, |\bm{E}|\}$.
\paragraph{$\mathcal{L}_{\rm Update}$}
Since different types of queries are processed in different manners, the SSS can learn if users are inserting, deleting or updating records, \textit{i.e.,}\xspace $Q.type$.
As mentioned in Section \ref{subsec:update}, when inserting a real record, the user generates $W$ dummy ones, encrypts both the real and dummy records with $RcdEnc$, and sends them and their group information to the SSS and IWS, respectively.
Consequently, the SSS learns $W$, which represents the threshold or the number of elements of a group, and the IWS also learns the group information of each record.
Moreover, both the SSS and IWS can learn if the insert query introduces new elements that do not belong to $U$, based on $|\bm{E}_{f, g}|$ or $|(\bm{E}_{f, g}, \tau_{f, g})^*|$.
The RSS only performs the shuffle operation.
Therefore, $\mathcal{L}_{\rm Insert}(Q)$ learned by each CSP is
\begin{align*}
\mathcal{L}^{SSS}_{\rm Insert}(Q) = &\{W, \mathcal{L}_{rcd}\} \\
\mathcal{L}^{IWS}_{\rm Insert}(Q) = &\{Grcd, \{|(\bm{E}_{f, g}, \tau_{f, g})^*|\}_{g_f \in Grcd}, W\} \\
\mathcal{L}^{RSS}_{\rm Insert}(Q) = &\mathcal{L}^{RSS}_{\rm Query}(Q)
\end{align*}
Delete queries are performed as select queries in \textsl{\mbox{P-McDb}}\xspace, thus $\mathcal{L}_{\rm Delete} = \mathcal{L}_{\rm Query}$ for each CSP.
Above all,
\begin{align*}
\mathcal{L}^{SSS} = &\{|EDB|, F, |e|, |tag|, |t|, |w|,\\ &\{\{Q_i.f, Q_i.type, |Q_i.e|, Q_i.\bm{G}\} ~or~ W\}_{Q_i \in \bm{Q}}\} \\
\mathcal{L}^{IWS} = &\{|NDB|, |GDB|, |\bm{n}|, |seed|, |w|, \\ &\{IL_{f, g}, |(\bm{E}_{f, g}, \tau)^*|\}_{(f, g) \in GDB}, \{\{Q_i.f, Q_i.g, |Q_i.e|\} \\ & ~or~ \{Grcd, \{|(\bm{E}_{f, g}, \tau_{f, g})^*|\}_{g_f \in Grcd}, W\}\}_{Q_i \in \bm{Q}} \}
\mathcal{L}^{RSS} = &\{|Ercd|, \{IL, IL'\}_{Q_i \in \bm{Q}}\}
\end{align*}
\subsection{Proof of Security}
Given the above leakage definition for each CSP, in this part, we prove adversaries do not learn anything beyond $\mathcal{L}^{csp}$ by compromising the CSP $csp$, where $csp$ could be the SSS, IWS, or RSS.
It is clear that adversaries cannot infer the search, access and size patterns, and forward and backward privacy of queries within a group from $\mathcal{L}^{csp}$.
Therefore, proving \textsl{\mbox{P-McDb}}\xspace only leaks $\mathcal{L}^{csp}$ to $csp$ indicates \textsl{\mbox{P-McDb}}\xspace protects the patterns and ensures forward and backward privacy within groups.
To prove \textsl{\mbox{P-McDb}}\xspace indeed only leaks $\mathcal{L}^{csp}$ to $csp$, we follow the typical method of using a real-world versus ideal-world paradigm \cite{Bost:2017:forward,CashJJJKRS14,KamaraPR12}.
The idea is that first we assume the CSP $csp$ is compromised by a Probabilistic Polynomial-Time (PPT) honest-but-curious adversary $\mathcal{A}$ who follows the protocol honestly as done by $csp$, but wants to learn more information by analysing the received messages and injecting malicious records.
Second, we build two experiments: $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$, where $\Pi$ represents \textsl{\mbox{P-McDb}}\xspace.
In $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$, all the messages sent to $\mathcal{A}$ are generated as specified in \textsl{\mbox{P-McDb}}\xspace.
In contrast, in $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$, all the messages are generated by a PPT simulator $\mathcal{S}$ that only has access to $\mathcal{L}^{csp}$.
That is, $\mathcal{S}$ ensures $\mathcal{A}$ only learns the information defined in $\mathcal{L}^{csp}$ from received messages in $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$.
In the game, $\mathcal{A}$ chooses an initial database, triggers $Setup$, and adaptively issues \emph{select}, \emph{insert}, and \emph{delete} queries of its choice.
In response, either $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ or $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$ is invoked to process the database and queries.
Based on the received messages, $\mathcal{A}$ distinguishes if they are generated by $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ or $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$.
If $\mathcal{A}$ cannot distinguish them with non-negligible advantage, then $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ has the same leakage profile as $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$.
\begin{mydef}
We say the dynamic SSE scheme is $\mathcal{L}$-adaptively-secure against the CSP $csp$, with respect to the leakage function $\mathcal{L}^{csp}$, if for any PPT adversary issuing a polynomial number of queries, there exists a PPT simulator $\mathcal{S}$ such that $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A, S}, \mathcal{L}}(k)$ can be distinguished with at most negligible probability $\textbf{negl}({k})$.
\end{mydef}
Herein, we acknowledge that \textsl{\mbox{P-McDb}}\xspace leaks the group information of queries and records and leaks whether the elements involved in select, insert and delete queries belong to $U$ or not. For clarity, in the proof we assume there is only one group in each field, and omit the group processing. Moreover, we assume all the queries issued by $\mathcal{A}$ only involve elements in $U$.
In this case, the leakage learned by each CSP can be simplified into:
\begin{align*}
\mathcal{L}^{SSS} = &\{|EDB|, F, |e|, |tag|, |t|, |w|,\\ &\{\{Q_i.f, Q_i.type, |Q_i.e|\} ~or~ W\}_{Q_i \in \textbf{Q}}\} \\
\mathcal{L}^{IWS} = &\{|NDB|, |\bm{n}|, |seed|, |w|, \{|(\bm{E}_{f}, \tau)^*|\}_{f \in [1, F]}, \\ &\{Q_i.f~or~W\}_{Q_i \in \textbf{Q}} \} \\
\mathcal{L}^{RSS} = &\{|Ercd|, \{IL'\}_{Q_i \in \bm{Q}}\}
\end{align*}
\begin{theorem}\label{the::SSS}
If $\Gamma$ is a secure PRF, $\pi$ is a secure PRP, and $H'$ is a random oracle, then \textsl{\mbox{P-McDb}}\xspace is an $\mathcal{L}$-adaptively-secure dynamic SSE scheme against the SSS.
\end{theorem}
\begin{proof}
To argue security, as done in \cite{Bost:2017:forward,CashJJJKRS14,KamaraPR12}, we proceed through a sequence of games.
The proof begins with $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$, which is exactly the real protocol, and constructs a sequence of games, each differing slightly from the previous one; we show that adjacent games are indistinguishable.
Eventually, we reach the last game $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}}(k)$, which is simulated by a simulator $\mathcal{S}$ based only on the defined leakage profile $\mathcal{L}^{SSS}$.
By the transitivity of indistinguishability, we conclude that $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ is indistinguishable from $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}}(k)$, which completes the proof.
Since $RcdDec$ does not involve the CSPs, it is omitted from the games.
\begin{algorithm}[!htp]
\scriptsize
\caption{$\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k).RcdEnc(rcd, flag)$ $\|$ \fbox{$\mathcal{G}_1$}, \fbox{$\mathcal{G}_2$}, \fbox{$\mathcal{G}_3$}}
\label{proof::h1::enc}
\begin{algorithmic}[1]
\STATE $seed \stackrel{\$}{\leftarrow} \{0,1\}^{|seed|}$
\STATE $\bm{n} \leftarrow \Gamma_{s_2} (seed)$ $\vartriangleleft$ \fbox{$\mathcal{G}_1$: $\bm{n} \leftarrow \bm{Nonce}[seed]$}, where $\bm{n}= \ldots \parallel n_f \parallel \ldots \parallel n_{F+1}$, $|n_f|=|e|$ and $|n_{F+1}|=|tag|$
\FOR {each element $e_f \in rcd$}
\STATE ${e^*_f} \leftarrow Enc_{s_1}(e_f) \oplus n_f$ $\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $e^*_f \leftarrow \{0, 1\}^{|e|}$}}
\ENDFOR
\IF {$flag=1$}
\STATE $S \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $tag \leftarrow (H_{s_1}(S)||S )\oplus n_{F+1}$
$\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $tag \leftarrow \{0, 1\}^{|tag|}$}}
\ELSE
\STATE $tag \stackrel{\$}{\leftarrow} \{0,1\}^{|tag|}$
\ENDIF
\RETURN $Ercd=(e^*_1, \ldots, e^*_F, tag)$ and $(seed, \bm{n})$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!htp]
\scriptsize
\caption{$\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k).Query(Q)$ $\|$ \fbox{$\mathcal{G}_1$}, \fbox{$\mathcal{G}_2$}, \fbox{$\mathcal{G}_3$}}
\label{h1:macdb-search}
\begin{algorithmic}[1]
\STATE \underline{User: $QueryEnc(Q)$}
\STATE $EQ.type \leftarrow Q.type$, $EQ.f \leftarrow Q.f$
\STATE $\eta \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $EQ.e^* \leftarrow Enc_{s_1}(Q.e) \oplus \eta$ $\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $EQ.e^* \leftarrow \{0, 1\}^{|e|}$}}
\STATE Send $EQ=(type, f, op, e^*)$ to the SSS
\STATE Send $(EQ.f, \eta)$ to the IWS
~\\
\STATE \underline{IWS: $NonceBlind(EQ.f, \eta)$}
\STATE $EN \leftarrow \emptyset$
\STATE \fbox{$\mathcal{G}_2$: Randomly put $\tau_f$ record identifiers into $\bm{I}$}
\FOR {each $id \in NDB$ }
\STATE $(seed, \bm{n}) \leftarrow NDB(id)$, where $\bm{n}= \ldots ||n_{EQ.f}|| \ldots $ and $|n_{EQ.f}|=|\eta|$ $\vartriangleleft$ \fbox{Deleted in $\mathcal{G}_2$}
\STATE ${w} \leftarrow H'(n_{EQ.f} \oplus \eta)$ $\vartriangleleft$ \fbox{$\mathcal{G}_2$:~~\begin{minipage}[c][1.0cm][t]{4.5cm}{\textbf{if} $id \in \bm{I}$ \\ $w_{id} \leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*)$ \\ \textbf{else} \\ $w_{id} \leftarrow \{0, 1\}^{|w|}$ } \end{minipage}}
\STATE $t \leftarrow \eta \oplus seed$ $\vartriangleleft$ {\fbox{$\mathcal{G}_3$: $t \leftarrow \{0, 1\}^{|seed|}$}}
\STATE $EN(id) \leftarrow (w, t)$
\ENDFOR
\STATE Send the encrypted nonce set $EN=((w, t), \ldots)$ to the SSS
~\\
\STATE \underline{SSS: $Search(EQ, EN)$}
\STATE $SR \leftarrow \emptyset$
\FOR {each $id \in EDB$}
\IF {$H'(EDB(id, EQ.f) \oplus EQ.e^*) = EN(id).w$}
\STATE $SR \leftarrow SR \cup (EDB(id), EN(id).t)$
\ENDIF
\ENDFOR
\STATE Send the search result $SR$ to the user
\end{algorithmic}
\end{algorithm}
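The correctness of the witness check in $Search$ rests on an XOR cancellation: when the queried element equals the stored one, $EDB(id, EQ.f) \oplus EQ.e^* = (Enc_{s_1}(e) \oplus n_f) \oplus (Enc_{s_1}(e) \oplus \eta) = n_f \oplus \eta$, which is exactly the value the IWS hashes into $w$. The following Python sketch illustrates this identity; the deterministic cipher and the hash are simple stand-ins for $Enc$ and $H'$, not the primitives used by \textsl{\mbox{P-McDb}}\xspace.

```python
import hashlib
import secrets

def h_prime(b: bytes) -> bytes:
    # Random-oracle stand-in for H'
    return hashlib.sha256(b).digest()

def enc_det(key: bytes, e: bytes) -> bytes:
    # Deterministic-encryption stand-in for Enc (illustration only, not AES)
    return hashlib.sha256(key + e).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)   # plays the role of s_1
n_f = secrets.token_bytes(32)   # record nonce for the searched field
eta = secrets.token_bytes(32)   # query nonce

stored = xor(enc_det(key, b"Bob"), n_f)   # EDB(id, EQ.f)
query = xor(enc_det(key, b"Bob"), eta)    # EQ.e*

w = h_prime(xor(n_f, eta))                # witness computed by the IWS
assert h_prime(xor(stored, query)) == w   # the SSS check succeeds on a match

query_miss = xor(enc_det(key, b"Bill"), eta)
assert h_prime(xor(stored, query_miss)) != w  # a mismatch fails (w.h.p.)
```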
\begin{algorithm}[!ht]
\scriptsize
\caption{$\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k).Shuffle()$ $\|$ \fbox{$\mathcal{G}_1$}, \fbox{$\mathcal{G}_2$}, \fbox{$\mathcal{G}_3$}}
\label{h1:mcdb-shuffle}
\begin{algorithmic}[1]
\STATE \underline{IWS: $PreShuffle()$}
\STATE {$IL' \leftarrow \pi(NDB)$}
\FOR {each $id \in IL'$}
\STATE $seed \stackrel{\$}{\leftarrow} \{0, 1\}^{|seed|}$
\STATE $\bm{n}' \leftarrow \Gamma_{s_2}(seed)$ $\vartriangleleft$ \fbox{$\mathcal{G}_1$: $\bm{n}' \leftarrow \bm{Nonce}[seed]$}
\STATE $NN(id) \leftarrow NDB(id).\bm{n} \oplus \bm{n}' $ $\vartriangleleft$ \fbox{$\mathcal{G}_3$: $NN(id) \leftarrow \{0, 1\}^{|\bm{n}|}$}
\STATE $NDB(id)\leftarrow (seed, \bm{n}')$
\ENDFOR
\STATE Send $(IL', NN)$ to the RSS.
~\\~
\STATE{\underline{RSS: $Shuffle(Ercds, IL', NN)$}}
\STATE Shuffle $Ercds$ based on $IL'$
\FOR {each $id \in IL'$}
\STATE {$Ercds(id) \leftarrow Ercds(id) \oplus NN(id)$} $\vartriangleleft$ \fbox{$\mathcal{G}_3$: $Ercds(id) \leftarrow \{0, 1\}^{|\bm{n}|}$}
\ENDFOR
\STATE Send $Ercds$ to the SSS.
\end{algorithmic}
\end{algorithm}
\noindent\textbf{Game $\mathcal{G}_1$}: Compared with $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$, the difference in $\mathcal{G}_1$ is that the PRF $\Gamma$ used to generate nonces in the $RcdEnc$ and $PreShuffle$ algorithms is replaced with a mapping \textbf{Nonce}.
Specifically, as shown in Algorithms \ref{proof::h1::enc} and \ref{h1:mcdb-shuffle}, for each unused $seed$ (the seed space being large enough), a random string of length $F|e|+|tag|$ is generated as the nonce, stored in \textbf{Nonce}, and reused thereafter.
This means that all of the $\bm{n}$ are uniform and independent strings.
In this case, the adversarial distinguishing advantage between $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\mathcal{G}_1$ is exactly the distinguishing advantage between a truly random function and a PRF.
Thus, this change makes only a negligible difference between $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\mathcal{G}_1$, \textit{i.e.,}\xspace
\[
\centerline { $|{\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1] - {\rm Pr}[\mathcal{G}_1=1]| \leq \textbf{negl}({k})$}
\]
where ${\rm Pr}[\mathcal{G}=1]$ denotes the probability that the messages received by $\mathcal{A}$ are generated by $\mathcal{G}$.
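The hop from $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ to $\mathcal{G}_1$ is the standard replacement of a PRF by a lazily sampled random function: the first time a seed is queried, a fresh uniform string is drawn and memoised, and repeated queries return the stored value. A minimal Python sketch of this proof device (the output length is an arbitrary placeholder):

```python
import secrets

class LazyRandomFunction:
    """Memoised uniform sampling: plays the role of the mapping Nonce in G1."""
    def __init__(self, out_len: int):
        self.out_len = out_len
        self.table = {}

    def __call__(self, seed: bytes) -> bytes:
        if seed not in self.table:
            # First query on this seed: draw a fresh uniform string
            self.table[seed] = secrets.token_bytes(self.out_len)
        return self.table[seed]

nonce = LazyRandomFunction(out_len=64)       # placeholder for F|e| + |tag| bits
assert nonce(b"seed-1") == nonce(b"seed-1")  # consistent across repeated seeds
assert len(nonce(b"seed-1")) == 64
```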
\noindent\textbf{Game $\mathcal{G}_2$}:
From $\mathcal{G}_1$ to $\mathcal{G}_2$, $w$ is replaced with a random string, rather than generated via $H'$.
However, it is necessary to ensure $\mathcal{A}$ gets $\tau_f$ matched records after searching over $EDB$, since that is the leakage $\mathcal{A}$ learns, where $\tau_f$ is the threshold of the searched field.
To achieve that, the experiment randomly picks $\tau_f$ witnesses and programs their values.
Specifically, as shown in Algorithm \ref{h1:macdb-search}, the experiment first randomly picks a set of record identifiers $\bm{I}$, where $|\bm{I}|=\tau_f$.
Second, for each identifier $id \in \bm{I}$, the experiment programs $w_{id} \leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*)$.
By doing so, the records identified by $\bm{I}$ will match the query.
For the identifier $id \notin \bm{I}$, $w_{id} \leftarrow \{0, 1\}^{|w|}$.
The only difference between $\mathcal{G}_2$ and $\mathcal{G}_1$ is the generation of $w$.
In the following, we examine whether $\mathcal{A}$ can distinguish the two games based on $w$.
In $\mathcal{G}_2$,
\begin{align}\notag
For~id \in \bm{I}, ~w_{id} &\leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*) \\ \notag
For~id \notin \bm{I}, ~w_{id} &\leftarrow \{0, 1\}^{|w|}
\end{align}
Recall that in $\mathcal{G}_1$:
\begin{equation}\notag
w_{id} \leftarrow H'(n_{EQ.f} \oplus \eta)
\end{equation}
In $\mathcal{G}_1$, $n_{EQ.f}$ and $\eta$ are random strings.
In $\mathcal{G}_2$, due to the one-time pad encryption in $RcdEnc$ and $QueryEnc$, $EDB(id, EQ.f)$ and $EQ.e^*$ are indistinguishable from random strings.
Thus, for $id \in \bm{I}$, $w_{id}$ is generated in the same way as in $\mathcal{G}_1$.
For $id \notin \bm{I}$, $w_{id}$ is a random string in $\mathcal{G}_2$, whereas in $\mathcal{G}_1$ it is generated by the deterministic $H'$.
It might seem that $\mathcal{A}$ could easily distinguish $\mathcal{G}_2$ from $\mathcal{G}_1$, since $\mathcal{G}_1$ outputs the same $w$ for the same input, whereas $\mathcal{G}_2$ does not.
However, in $\mathcal{G}_1$ the inputs to $H'$, namely $n_{EQ.f}$ and $\eta$, are random strings, so the probability of repeating an input to $H'$ is negligible, making $H'$ indistinguishable from uniform sampling.
Thus, in both cases $w_{id}$ in $\mathcal{G}_2$ is indistinguishable from $w_{id}$ in $\mathcal{G}_1$.
Next, we discuss if $\mathcal{A}$ can distinguish the two games based on $SR$.
The leakage of $SR$ includes the identifier of each matched record and $|SR|$.
Due to the padding, $|SR|=|\bm{I}|=\tau_f$, which means the two games are indistinguishable based on $|SR|$.
In $\mathcal{G}_1$, the identifiers of matched records are determined by the shuffle operations performed for the previous query.
In $\mathcal{G}_2$, the identifiers of matched records are randomly picked.
Thus, the distinguishing advantage between $\mathcal{G}_1$ and $\mathcal{G}_2$ based on the identifiers is exactly the distinguishing advantage between a truly random permutation and PRP, which is negligible.
Above all, we have
\[
\centerline { $|{\rm Pr}[\mathcal{G}_2=1] - {\rm Pr}[\mathcal{G}_1=1]| \leq \textbf{negl}(k)$}
\]
\noindent\textbf{Game $\mathcal{G}_3$}:
The difference between $\mathcal{G}_2$ and $\mathcal{G}_3$ is that all values produced by XOR operations, such as $e^*$, $Q.e^*$, and $t$, are replaced with randomly sampled strings (the details are shown in Algorithms \ref{proof::h1::enc}, \ref{h1:macdb-search}, and \ref{h1:mcdb-shuffle}).
Since a one-time-pad ciphertext is distributed identically to a freshly sampled random string of the same length,
we have
\[
\centerline { ${\rm Pr}[\mathcal{G}_3=1] = {\rm Pr}[\mathcal{G}_2=1] $}
\]
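The equality above holds because, for any fixed plaintext, XOR with a uniform pad is a bijection on bitstrings, so the resulting ciphertext is itself uniform. This can be checked exhaustively on a toy 8-bit domain:

```python
# For a fixed 8-bit message m, the map r -> m XOR r is a bijection, so as the
# pad r ranges uniformly over {0,1}^8, the ciphertext covers every value once.
m = 0b10110010
ciphertexts = {m ^ r for r in range(256)}
assert ciphertexts == set(range(256))
```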
\begin{algorithm}[!htp]
\scriptsize
\caption{$\mathcal{S}.RcdEnc(\mathcal{L}_{rcd}$)}
\label{proof::ideal::enc}
\begin{algorithmic}[1]
\STATE \tgrey{$seed \stackrel{\$}{\leftarrow} \{0,1\}^{|seed|}$}
\STATE \tgrey{$\bm{n} \leftarrow \bm{Nonce}[seed]$}
\FOR {each $f \in [1, F]$}
\STATE ${e^*_f} \leftarrow \{0, 1\}^{|e|}$
\ENDFOR
\STATE {$tag \stackrel{\$}{\leftarrow} \{0,1\}^{|tag|}$}
\RETURN {$Ercd=(e^*_1, \ldots, e^*_F, tag)$} and \tgrey{$(seed, \bm{n})$}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!htp]
\scriptsize
\caption{$\mathcal{S}.Query(\mathcal{L}^{SSS}_{Query})$}
\label{ideal:macdb-search}
\begin{algorithmic}[1]
\STATE \underline{User: $QueryEnc(\mathcal{L}^{SSS}_{Query})$}
\STATE $EQ.type \leftarrow Q.type$, $EQ.f \leftarrow Q.f$
\STATE \tgrey{$\eta \stackrel{\$}{\leftarrow} \{0,1\}^{| e |}$}
\STATE $EQ.e^* \leftarrow \{0, 1\}^{|e|}$
\STATE Send $EQ=(type, f, op, e^*)$ to the SSS
\STATE \tgrey{Send $(EQ.f, \eta)$ to the IWS}
~\\
\STATE \underline{IWS: $NonceBlind(\mathcal{L}^{SSS}_{Query})$}
\STATE $EN \leftarrow \emptyset$
\STATE Randomly put $\tau_f$ record identifiers into $\bm{I}$
\FOR {each $id \in [1, |EDB|]$ }
\IF{$id \in \bm{I}$}
\STATE ${w} \leftarrow H'(EDB(id, EQ.f) \oplus EQ.e^*)$
\ELSE
\STATE ${w} \leftarrow \{0, 1\}^{|w|}$
\ENDIF
\STATE $t \leftarrow \{0, 1\}^{|seed|}$
\STATE $EN(id) \leftarrow (w, t)$
\ENDFOR
\STATE Send the encrypted nonce set $EN=((w, t), \ldots)$ to the SSS
~\\
\STATE \underline{SSS: $Search(EQ, EN)$}
\STATE $SR \leftarrow \emptyset$
\FOR {each $id \in EDB$}
\IF {$H'(EDB(id, EQ.f) \oplus EQ.e^*) = EN(id).w$ }
\STATE $SR \leftarrow SR \cup (EDB(id), EN(id).t)$
\ENDIF
\ENDFOR
\STATE Send the search result $SR$ to the user
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\scriptsize
\caption{$\mathcal{S}.Shuffle(\mathcal{L}^{SSS}_{Query})$}
\label{proof::ideal::shuffle}
\begin{algorithmic}[1]
\STATE \underline{IWS: $PreShuffle()$}
\STATE \tgrey{$IL' \leftarrow RP (NDB)$}
\FOR {\tgrey{each $id \in IL'$}}
\STATE \tgrey{$seed \stackrel{\$}{\leftarrow} \{0, 1\}^{|seed|}$}
\STATE \tgrey{$\bm{n'} \leftarrow \bm{Nonce}[seed]$}
\STATE \tgrey{$NN(id) \leftarrow \{0, 1\}^{|\bm{n}|}$}
\STATE \tgrey{$NDB(id)\leftarrow (seed, \bm{n'})$}
\ENDFOR
\STATE \tgrey{Send $(IL', NN)$ to the RSS.}
~\\
\STATE{\underline{RSS: $Shuffle(\mathcal{L}^{SSS}_{Query})$}}
\STATE \tgrey{Shuffle $Ercds$ based on $IL'$}
\FOR {each $id \in IL'$}
\STATE $Ercds(id) \leftarrow \{0, 1\}^{|\bm{n}|} $
\ENDFOR
\STATE Send $Ercds$ to the SSS.
\end{algorithmic}
\end{algorithm}
\noindent\textbf{$\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$}:
From $\mathcal{G}_3$ to the final game, we just replace the inputs to the $RcdEnc$, $Query$, and $Shuffle$ algorithms with $\mathcal{L}^{SSS}$.
Moreover, for clarity, we grey out the operations unrelated to the SSS.
From Algorithms \ref{proof::ideal::enc}, \ref{ideal:macdb-search}, and \ref{proof::ideal::shuffle}, it is easy to observe that the messages sent to $\mathcal{A}$, \textit{i.e.,}\xspace $\{Ercd, EQ, EN, Ercds\}$, can be easily simulated by only relying on $\mathcal{L}^{SSS}$.
Here we have:
\[
{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1]={\rm Pr}[\mathcal{G}_3=1]
\]
By combining all the distinguishing advantages above, we get:
\[
|{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1]- {\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1]| < \textbf{negl}(k)
\]
In $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)$, $\mathcal{A}$ only learns $\mathcal{L}_{rcd}$ and $\mathcal{L}^{SSS}_{Query}$.
The negligible advantage of a distinguisher between $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ and $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)$ indicates that \textsl{\mbox{P-McDb}}\xspace also only leaks $\mathcal{L}_{rcd}$ and $\mathcal{L}^{SSS}_{Query}$.
Although the above simulation does not cover the $Setup$ and update phases, these phases mainly rely on the $RcdEnc$ algorithm, which has been proven to leak only $\mathcal{L}_{rcd}$ to the SSS.
From $Setup$ phase, $\mathcal{A}$ only gets $EDB$, and each record in $EDB$ is encrypted with $RcdEnc$.
Thus, $\mathcal{A}$ only learns $|EDB|$ and $\mathcal{L}_{rcd}$ in this phase.
Similarly, $\mathcal{A}$ only gets $W+1$ encrypted records from the $Insert$ algorithm.
Therefore, in addition to $\mathcal{L}_{rcd}$, it only learns $W$, which is equal to $|\bm{E}|$ or $\tau$ of a group.
For delete queries, $\mathcal{A}$ learns $\mathcal{L}^{SSS}_{Query}$ since they are first performed as select queries.
As proved above, the tags are indistinguishable from random strings, meaning the returned tags do not leak additional information.
\end{proof}
\begin{theorem}\label{the::IWS}
If $ENC$ is semantically secure, then \textsl{\mbox{P-McDb}}\xspace is an $\mathcal{L}$-adaptively-secure dynamic SSE scheme against the IWS.
\end{theorem}
\begin{proof}
Herein, we also assume all the records are in one group.
In this case, the IWS gets $(seed, \bm{n})$ for each record and $(\bm{E}_f, \tau_f)^*$ for each field when setting up the database, gets $(Q.f, \eta)$ when executing queries, and gets updated $(\bm{E}_f, \tau_f)^*$ when inserting records.
Note that the IWS can generate $\bm{n}$ by itself since it has the key $s_2$.
Considering $seed$ and $\eta$ are random strings, from $\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)$ to $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$ we just need one step.
Specifically, given $\mathcal{L}^{IWS}$, in $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$, $\mathcal{S}$ just needs to simulate $(\bm{E}_f, \tau_f)^*$ with $|(\bm{E}_f, \tau_f)^*|$-bit random strings in $Setup$ and $Insert$ algorithms.
Given $ENC$ is semantically secure, we have
\[
|{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1]- {\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1]| < \textbf{negl}(k)
\]
\end{proof}
\begin{theorem}\label{the::RSS}
\textsl{\mbox{P-McDb}}\xspace is an $\mathcal{L}$-adaptively-secure dynamic SSE scheme against the RSS.
\end{theorem}
\begin{proof}
In \textsl{\mbox{P-McDb}}\xspace, the RSS is only responsible for shuffling and re-randomising records after each query.
For the shuffling and re-randomising, it gets encrypted records, $IL'$ and $NN$.
Here we also just need one step to reach $\textbf{Ideal}^{\rm \Pi}_{\mathcal{A}, \mathcal{S}, \mathcal{L}}(k)$.
Given $\mathcal{L}^{RSS}$, as done in the above \textbf{Game $\mathcal{G}_3$}, $\mathcal{S}$ needs to replace $e^*$ and $tag$ in $RcdEnc$ with $|e|$-bit and $|tag|$-bit random strings respectively and simulate each element in $NN$ with a $|Ercd|$-bit random string in $PreShuffle$.
As mentioned, since a one-time-pad ciphertext is distributed identically to a freshly sampled random string of the same length, we have
\[
{\rm Pr}[\textbf{Ideal}^{\rm \Pi}_{\mathcal{A},\mathcal{S}, \mathcal{L}}(k)=1] = {\rm Pr}[\textbf{Real}^{\rm \Pi}_{\mathcal{A}}(k)=1]
\]
\end{proof}
\section{Solution details}
\label{sec:MCDB-details}
\begin{table}[htp]
\scriptsize
\centering
\caption{Data representation in \textsl{\mbox{P-McDb}}\xspace}
\subtable[\emph{Staff}]
{
\label{Tbl:mcdb-staff}
\begin{tabular}{|c|c|}\hline
\textbf{Name} & \textbf{Age} \\\hline
Alice & 27 \\\hline
Anna & 30 \\\hline
Bob & 27 \\\hline
Bill & 25 \\\hline
Bob & 33 \\\hline
\end{tabular}
}
\subtable[GDB on the IWS]
{
\label{Tbl:mcdb-engid}
\begin{tabular}{|c|c|c|}\hline
\textbf{GID} & \textbf{IL} &$(\bm{E}, \tau)^*$ \\ \hline
$(1, g_{1})$ & $\{1, 2\}$ & $(\{Alice, Anna\}, 1)^*$ \\\hline
$(1, g_{2})$ & $\{3, 4, 5, 6\}$ & $(\{Bob, Bill\}, 2)^*$ \\\hline
$(2, g'_{1})$ & $\{1, 3, 4, 6\}$ & $(\{25, 27\}, 2)^*$ \\\hline
$(2, g'_{2})$ & $\{2, 5\}$ & $(\{30, 33\}, 1)^*$ \\\hline
\end{tabular}
}
\subtable[NDB on the IWS]
{
\label{Tbl:mcdb-nonceI}
\begin{tabular}{|c|c|c|}\hline
\textbf{id} &$\bm{seed}$ & $\bm{nonce}$ \\ \hline
1 & $seed_{1}$ & $\bm{n_1}$ \\\hline
2 & $seed_{2}$ & $\bm{n_2}$ \\\hline
3 & $seed_{3}$ & $\bm{n_3}$ \\\hline
4 & $seed_{4}$ & $\bm{n_4}$ \\\hline
5 & $seed_{5}$ & $\bm{n_5}$ \\\hline
6 & $seed_{6}$ & $\bm{n_6}$ \\\hline
\end{tabular}
}
\subtable[EDB on the SSS]
{
\label{Tbl:mcdb-enstaff}
\begin{tabular}{|c|c|c|c|}\hline
\textbf{ID} & \textbf{1} & \textbf{2} & \textbf{Tag} \\\hline
1 & $SE(Alice)$ & $SE(27)$ & $tag_1$ \\\hline
2 & $SE(Anna)$ & $SE(30)$ & $tag_2$ \\\hline
3 & $SE(Bob)$ & $SE(27)$ & $tag_3$ \\\hline
4 & $SE(Bill)$ & $SE(25)$ & $tag_4$ \\\hline
5 & $SE(Bob)$ & $SE(33)$ & $tag_5$ \\\hline
6 & $SE(Bill)$ & $SE(25)$ & $tag_6$ \\\hline
\end{tabular}
}
\label{Tbl:mcdb-store}
\subref{Tbl:mcdb-staff} A sample \emph{Staff} table.
\subref{Tbl:mcdb-engid} GDB, the group information, is stored on the IWS.
\subref{Tbl:mcdb-nonceI} NDB, also stored on the IWS, contains the seeds used to generate nonces; it may also store the nonces themselves.
\subref{Tbl:mcdb-enstaff} EDB, the encrypted \emph{Staff} table, is stored on the SSS.
Each encrypted data element $SE(e_f)=Enc_{s_1}(e_f)\oplus n_f$.
Each record has a tag, enabling users to distinguish dummy and real records.
In this example, the last record in Table~\subref{Tbl:mcdb-enstaff} is dummy.
The RSS does not store any data.
\end{table}
\begin{algorithm}[tp]
\scriptsize
\caption{$Setup(k, DB)$}
\label{alg:mcdb-setup}
\begin{algorithmic}[1]
\STATE \underline{Admin: $KeyGen$}
\STATE $s_1, s_2 \leftarrow \{0, 1\}^k$
~\\~
\STATE $GDB \leftarrow \emptyset$, $EDB \leftarrow \emptyset$, $NDB \leftarrow \emptyset$
\STATE \underline{Admin: $GroupGen$} \label{code:setup-grpgen-be}
\FOR{each field $f$}
\STATE Collect $U_f$ and $\{O(e)\}_{e \in U_f}$, and compute $\Psi_f=\{g \leftarrow GE_{s_1}(e)\}_{e \in U_f}$
\FOR{each $g \in \Psi_f$}
\STATE $IL_{f, g} \leftarrow \emptyset$, $\bm{E}_{f, g} \leftarrow \{e\}_{e\in U_f \& GE_{s_1}(e)=g}$ \label{code:setup-grpgen-e}
\STATE $\tau_{f, g} \leftarrow \max\{|O(e)|\}_{e \in \bm{E}_{f, g}}$ \label{code:setup-grpgen-t}
\STATE $(\bm{E}_{f, g}, \tau_{f, g})^* \leftarrow ENC_{s_1}(\bm{E}_{f, g}, \tau_{f, g})$
\label{code:setup-gdb}
\STATE $GDB(f, g) \leftarrow (IL_{f, g}, (\bm{E}_{f, g}, \tau_{f, g})^*)$
\ENDFOR
\ENDFOR \label{code:setup-grpgen-end}
~\\~
\STATE \underline{Admin: DummyGen}\label{code:setup-dummygen-be}
\FOR{each field $f$}
\STATE $\Sigma_f \leftarrow \Sigma_{g \in \Psi_f}\Sigma_{e \in \bm{E}_{f, g}} (\tau_{f, g} - O(e))$
\ENDFOR
\STATE $\Sigma_{max} \leftarrow \max\{\Sigma_1, \ldots, \Sigma_F\}$
\STATE Add $\Sigma_{max}$ dummy records with values $(NULL, \ldots, NULL)$ into $DB$
\FOR {each field $f$}
\FOR {each $e \in U_f$}
\STATE Assign $e$ to $\tau_{f, GE_{s_1}(e)}-O(e)$ dummy records in field $f$
\ENDFOR
\ENDFOR
\STATE Mark real and dummy records with $flag=1$ and $flag=0$, respectively
\STATE Shuffle $DB$ \label{code:setup-dummygen-end}
~\\~
\STATE \underline{Admin: DBEnc} \label{code:setup-DBenc-be}
\STATE $id \leftarrow 0$
\FOR{each $rcd \in DB$}
\STATE $(Ercd, seed, \bm{n}, Grcd)\leftarrow RcdEnc(rcd, flag)$
\STATE $EDB(id) \leftarrow Ercd$, $NDB(id) \leftarrow (seed, \bm{n})$
\FOR{each $g_f \in Grcd$}
\STATE $IL_{f, g} \leftarrow IL_{f, g} \cup id$ \label{code:setup-DBenc-IL}
\ENDFOR
\STATE $id++$ \label{code:setup-DBenc-end}
\ENDFOR
\end{algorithmic}
\end{algorithm}
In this section, we give the details for setting up, searching, and updating the database.
\subsection{Setup}
\label{subsec:boot}
The system is set up by the admin, who generates the secret keys $s_1$ and $s_2$ based on the security parameter $k$.
$s_1$ is known only to users and is used to protect queries and records from the CSPs.
$s_2$ is introduced to save storage: it is known to both users and the IWS, and is used to generate the nonces for record and query encryption.
The admin also defines the cryptographic primitives used in \textsl{\mbox{P-McDb}}\xspace.
We assume the initial database $DB$ is not empty.
The admin bootstraps $DB$ with Algorithm \ref{alg:mcdb-setup}, $Setup(k, DB) \rightarrow (EDB, GDB, NDB)$.
Roughly speaking, the admin divides the records into groups (Lines \ref{code:setup-grpgen-be}-\ref{code:setup-grpgen-end}), pads the elements in the same group into the same occurrence by generating dummy records (Lines \ref{code:setup-dummygen-be}-\ref{code:setup-dummygen-end}), and encrypts each record (Lines \ref{code:setup-DBenc-be}-\ref{code:setup-DBenc-end}).
The details of each operation are given below.
\paragraph{Group Generation}
As mentioned, inserting dummy records is necessary to protect the size and search patterns, and grouping the data aims at reducing the number of required dummy records.
Indeed, dividing the data into groups could also reduce the number of records to be searched.
However, padding only the data within the same group to the same occurrence leaks group information.
In particular, the SSS can learn from the size pattern whether records and queries are in the same group.
Since the group information is inevitably leaked once queries are executed, \textsl{\mbox{P-McDb}}\xspace lets the SSS know the group information in advance and search only one group of records per query, rather than the whole database.
By doing so, the query can be processed more efficiently without leaking additional information.
Yet, in this case, the SSS needs to know which group of records should be searched for each query.
Considering the SSS only gets encrypted records and queries, the group should be determined by the admin and users.
To avoid putting heavy storage overhead on users, \textsl{\mbox{P-McDb}}\xspace divides data into groups with a Pseudo-Random Function (PRF) $GE: \{0, 1\}^* \times \{0, 1\}^k \rightarrow \{0, 1\}^*$.
The elements in field $f$ ($1\leq f \leq F$) with the same $g \leftarrow GE_{s_1}(e)$ value are in the same group, and $(f, g)$ is the group identifier.
In this way, the admin and users can easily know the group identifiers of records and queries just by performing $GE$.
The implementation of $GE$ function affects the security level of the search pattern.
Let $\lambda$ stand for the number of distinct elements contained in a group.
Since the elements in the same group will have the same occurrence, the queries involving those elements (defined as \emph{the queries in the same group}) will match the same number of records.
Then, the adversary cannot tell their search patterns from their size patterns.
Formally, for any two queries matching the same number of records, the probability they involve the same keyword is $\frac{1}{\lambda}$.
Thus, $\lambda$ also represents the security level of the search pattern.
Given $\lambda$, the implementation of $GE$ should ensure each group contains at least $\lambda$ distinct elements.
For instance, the admin could generate the group identifier of $e$ by computing $LSB_b(H_{s_1}(e))$, where $LSB_b$ returns the least significant $b$ bits of its input.
To ensure each group contains at least $\lambda$ distinct elements, $b$ can be decreased: fewer bits yield fewer, and hence larger, groups.
The details of grouping $DB$ are shown in Lines \ref{code:setup-grpgen-be}-\ref{code:setup-grpgen-end} of Algorithm \ref{alg:mcdb-setup}.
Formally, we define group $(f, g)$ as $(IL_{f, g}, \bm{E}_{f, g}, \tau_{f, g})$, where $IL_{f, g}$ stores the identifiers of the records in this group (Line \ref{code:setup-DBenc-IL}), $\bm{E}_{f, g}$ is the set of distinct elements in this group (Line \ref{code:setup-grpgen-e}), and $\tau_{f, g}=\max\{O(e) | e \in \bm{E}_{f, g}\}$ is the occurrence threshold for padding (Line \ref{code:setup-grpgen-t}).
Since the group information will be stored in the CSP, $(\bm{E}_{f, g}, \tau_{f, g})$ is encrypted into $(\bm{E}_{f, g}, \tau_{f, g})^*$ with $s_1$ and a semantically secure symmetric encryption primitive $ENC: \{0, 1\}^* \times \{0, 1\}^k \rightarrow \{0, 1\}^*$.
$(\bm{E}_{f, g}, \tau_{f, g})^*$ is necessary for insert queries (The details are given in Section~\ref{subsec:update}).
Note that if the initial database is empty, the admin can pre-define a possible $U_f$ for each field and group its elements in the same way.
In this case, $IL=\emptyset$ and $\tau=0$ for each group after the bootstrapping.
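As an illustration, the $LSB_b(H_{s_1}(e))$ construction can be sketched in Python with HMAC-SHA256 standing in for the keyed hash $H$; the key, the element universe, and $b$ are hypothetical. With six elements and $b=1$ there are at most two groups, so by pigeonhole some group holds at least three distinct elements:

```python
import hashlib
import hmac

def ge(key: bytes, e: str, b: int) -> int:
    # Group identifier: least significant b bits of a keyed hash of e
    digest = hmac.new(key, e.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") & ((1 << b) - 1)

key = b"demo-s1-key"  # hypothetical key
universe = ["Alice", "Anna", "Bob", "Bill", "Carol", "Dave"]
b = 1                 # at most 2^b = 2 groups
groups = {}
for e in universe:
    groups.setdefault(ge(key, e, b), []).append(e)

# Six elements in at most two groups: some group holds >= 3 distinct
# elements, i.e. lambda can be raised by shrinking b.
assert len(groups) <= 2
assert max(len(v) for v in groups.values()) >= 3
```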
\paragraph{Dummy Records Generation}
Once the groups are determined, the next step is to generate dummy records.
The details for generating dummy records are given in Lines \ref{code:setup-dummygen-be}-\ref{code:setup-dummygen-end}, Algorithm \ref{alg:mcdb-setup}.
Specifically, the admin first needs to know how many dummy records are required for the padding.
Since the admin will pad the occurrence of each element in $\bm{E}_{f, g}$ into $\tau_{f, g}$, $\tau_{f, g}-O(e)$ dummy records are required for each $e \in \bm{E}_{f, g}$.
Assume there are $M$ groups in field $f$, then $\Sigma_f=\sum_{i=1}^{M}\sum_{e \in \bm{E}_{f, g^i}}(\tau_{f, g^i}-O(e))$ dummy records are required totally for padding field $f$.
For the database with multiple fields, different fields might require different numbers of dummy records.
Assume $\Sigma_{max}=\max\{\Sigma_1, \ldots, \Sigma_F\}$.
To ensure all fields can be padded properly, $\Sigma_{max}$ dummy records are required.
However, $\Delta_f=\Sigma_{max} - \Sigma_f$ of the dummy records are redundant for field $f$; the admin assigns them a meaningless string, such as `NULL', in field $f$.
After encryption, `NULL' will be indistinguishable from other elements.
Thus, the CSP cannot distinguish between real and dummy records.
Note that users and the admin can still search for the records with `NULL'.
In this work, we do not consider queries with conjunctive predicates, so we do not pad element pairs to the same occurrence.
After padding, each record $rcd$ is appended with a $flag$ to mark if it is real or dummy.
Specifically, $flag=1$ when $rcd$ is real, otherwise $flag=0$.
The admin also shuffles the dummy and real records.
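The dummy-record count can be sketched as follows, using the occurrences of the sample \emph{Staff} table (Table~\ref{Tbl:mcdb-store}): each group needs $\tau_{f,g}-O(e)$ dummies per element, $\Sigma_f$ sums these over a field, and $\Sigma_{max}$ dummies are actually inserted. Here both fields need one dummy, matching the single dummy record in the example.

```python
# Occurrences from the sample Staff table, per field and group:
# Name: {Alice, Anna} with tau=1, {Bob, Bill} with tau=2
# Age:  {25, 27} with tau=2,      {30, 33} with tau=1
db_occ = {
    "Name": {"g1": {"Alice": 1, "Anna": 1}, "g2": {"Bob": 2, "Bill": 1}},
    "Age": {"g1'": {"25": 1, "27": 2}, "g2'": {"30": 1, "33": 1}},
}

def dummies_per_field(field_groups):
    total = 0
    for occ in field_groups.values():
        tau = max(occ.values())  # padding threshold of the group
        total += sum(tau - o for o in occ.values())
    return total

sigma = {f: dummies_per_field(g) for f, g in db_occ.items()}
sigma_max = max(sigma.values())  # number of dummy records actually inserted
assert sigma == {"Name": 1, "Age": 1}
assert sigma_max == 1  # one dummy record, as in the example table
```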
\paragraph{Record Encryption}
\begin{algorithm}[tp]
\scriptsize
\caption{$RcdEnc(rcd, flag)$}
\label{alg:mcdb-enc}
\begin{algorithmic}[1]
\STATE $seed \stackrel{\$}{\leftarrow} \{0,1\}^{|seed|}$
\STATE $\bm{n} \leftarrow \Gamma_{s_2} (seed)$, where $\bm{n}=\ldots \| n_f \| \ldots \| n_{F+1}$, $|n_f|=|e|$ and $|n_{F+1}|=|H|+|e|$ \label{code:mcdb-enc-seed}
\FOR {each element $e_f \in rcd$}
\STATE $g_f \leftarrow GE_{s_1}(e_f)$ \label{code:mcdb-enc-gid}
\STATE $e^*_f \leftarrow Enc_{s_1}(e_f) \oplus n_f$ \label{code:mcdb-enc-se}
\ENDFOR
\IF {$flag=1$}
\STATE $S \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $tag \leftarrow (H_{s_1}(S)||S )\oplus n_{F+1}$ \label{code:mcdb-enc-tag-re}
\ELSE
\STATE $tag \stackrel{\$}{\leftarrow} \{0,1\}^{|H|+|e|}$ \label{code:mcdb-enc-tag-du}
\ENDIF
\RETURN $Ercd=(e^*_1, \ldots, e^*_F, tag)$, $(seed, \bm{n})$, and $Grcd=(g_1, \ldots, g_F)$
\end{algorithmic}
\end{algorithm}
The admin encrypts each record before uploading it to the SSS.
The details of record encryption are provided in Algorithm~\ref{alg:mcdb-enc}, $RcdEnc (rcd, flag) \rightarrow (Ercd, seed, \bm{n}, Grcd)$.
To ensure that dummy records can match queries, they are encrypted in the same way as real ones.
Specifically, first the admin generates a random string as a $seed$ for generating a longer nonce $\bm{n}$ with a Pseudo-Random Generator (PRG) $\Gamma: \{0, 1\}^{|seed|} \times \{0, 1\}^{k} \rightarrow \{0, 1\}^{*}$ (Line~\ref{code:mcdb-enc-seed}, Algorithm~\ref{alg:mcdb-enc}).
Second, the admin generates $g_f$ for each $e_f \in rcd$ by computing $GE_{s_1}(e_f)$ (Line \ref{code:mcdb-enc-gid}).
Moreover, $e_f$ is encrypted by computing $SE(e_f): e^*_f \leftarrow Enc_{s_1}(e_f) \oplus n_f$ (Line~\ref{code:mcdb-enc-se}), where $Enc:\{0,1\}^* \times \{0,1\}^k \rightarrow \{0,1\}^* $ is a deterministic symmetric encryption primitive, such as AES-ECB.
Using $n_f$, on the one hand, ensures the semantic security of $e^*_f$.
On the other hand, it ensures the forward and backward privacy of \textsl{\mbox{P-McDb}}\xspace (as explained in Section \ref{subsec:shuffle}).
$e^*_f$ will be used for encrypted search and data retrieval.
Dummy records are meaningless, and users do not need to decrypt returned dummy records.
Thus, we need a way for users to filter out dummy records.
Since the CSPs are untrusted, we cannot mark records as real or dummy in cleartext.
Instead, we use a keyed hash value for this purpose.
Specifically, as shown in Lines~\ref{code:mcdb-enc-tag-re} and \ref{code:mcdb-enc-tag-du}, a tag $tag$ is generated using a keyed hash function $H: \{0,1\}^* \times \{0,1\}^k \rightarrow \{0,1\}^*$ and the secret key $s_1$ if the record is real, otherwise $tag$ is a random bit string.
With the secret key $s_1$, users can efficiently filter out dummy records before decrypting the search result by checking if:
\begin{align}\label{eq:check1}\notag
tag_{l} \stackrel{?}{=}& H_{s_1}(tag_{r}), \text{ where } tag_{l}||tag_{r} = tag \oplus n_{F+1}
\end{align}
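The encryption and filtering steps above can be sketched in Python. This is a minimal illustration under stated assumptions, not the paper's C implementation: SHAKE-256 stands in for the PRG $\Gamma_{s_2}$, HMAC-SHA256 for both the deterministic primitive $Enc_{s_1}$ (the paper suggests AES-ECB) and the keyed hash $H_{s_1}$, and the helper names `rcd_enc` and `is_real` are ours.

```python
import hmac, hashlib, os

ELEN = 16  # |e|, the element length in bytes (an assumption for this sketch)

def prg(seed, length):
    # PRG Gamma: expand a short seed into a long nonce (SHAKE-256 stand-in)
    return hashlib.shake_256(seed).digest(length)

def keyed_hash(s1, data):
    # Keyed hash H used for the real/dummy tag (HMAC-SHA256 stand-in)
    return hmac.new(s1, data, hashlib.sha256).digest()

def rcd_enc(s1, elements, flag):
    # Deterministic stand-in for Enc_{s1}: an HMAC-based PRF truncated to ELEN
    det = lambda e: hmac.new(s1, e, hashlib.sha256).digest()[:ELEN]
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    seed = os.urandom(16)
    F = len(elements)
    n = prg(seed, F * ELEN + 32 + ELEN)      # nonces n_1..n_F and n_{F+1}
    parts = [xor(det(e), n[f*ELEN:(f+1)*ELEN]) for f, e in enumerate(elements)]
    n_last = n[F*ELEN:]
    if flag == 1:                             # real record: verifiable tag
        S = os.urandom(ELEN)
        raw = keyed_hash(s1, S) + S
    else:                                     # dummy record: random tag
        raw = os.urandom(32 + ELEN)
    tag = xor(raw, n_last)
    return parts, tag, seed, n

def is_real(s1, tag, n_last):
    # User-side filter: unblind the tag and re-check the keyed hash
    raw = bytes(a ^ b for a, b in zip(tag, n_last))
    tl, tr = raw[:32], raw[32:]
    return hmac.compare_digest(tl, keyed_hash(s1, tr))
```

A dummy record passes the check only with negligible probability, since its random tag would have to hit a valid HMAC by chance.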
Once all the real and dummy records are encrypted, the admin uploads the auxiliary information, \textit{i.e.,}\xspace the set of group information $GDB$ and the set of nonce information $NDB$, to the IWS, and uploads encrypted records $EDB$ to the SSS.
$GDB$ contains $(IL, (\bm{E}, \tau)^*)$ for each group.
$NDB$ contains a $(seed, \bm{n})$ pair for each record stored in $EDB$.
To reduce the storage overhead on the IWS, $NDB$ could store only the seed and recover $\bm{n}$ by computing $\Gamma_{s_2}(seed)$ when required.
However, storing the $(seed, \bm{n})$ pairs reduces the computation overhead on the IWS.
In the rest of this article, we assume $NDB$ contains $(seed, \bm{n})$ pairs.
$EDB$ contains the encrypted record $Ercd$.
Note that to ensure the correctness of the search functionality, it is necessary to store the encrypted records and their respective $(seed, \bm{n})$ pairs in the same order in $EDB$ and $NDB$ (the search operation is explained in Section \ref{subsec:mcdb-select}).
In Table~\ref{Tbl:mcdb-store}, we take the \emph{Staff} table (\textit{i.e.,}\xspace Table \ref{Tbl:mcdb-staff}) as an example and show the details stored in $GDB$, $NDB$, and $EDB$ in Tables~\ref{Tbl:mcdb-engid},~\ref{Tbl:mcdb-nonceI}, and~\ref{Tbl:mcdb-enstaff}, respectively.
\subsection{Select Query}
\label{subsec:mcdb-select}
\begin{algorithm}[htp]
\scriptsize
\caption{Query$(Q)$}
\label{alg:macdb-search}
\begin{algorithmic}[1]
\STATE \underline{User: QueryEnc$(Q)$} \label{code:mcdb-query-user-be}
\STATE $g \leftarrow GE_{s_1}( Q.e )$ \label{code:mcdb-query-user-gid}
\STATE $EQ.type \leftarrow Q.type$, $EQ.f \leftarrow Q.f$
\STATE $\eta \stackrel{\$}{\leftarrow} \{0,1\}^{|e|}$
\STATE $EQ.e^* \leftarrow Enc_{s_1}(Q.e) \oplus \eta$ \label{code:mcdb-query-se}
\STATE Send $EQ=(type, f, e^*)$ to the SSS
\STATE Send $(EQ.f, \eta, g)$ to the IWS \label{code:mcdb-query-user-end}
~\\~
\STATE \underline{IWS: $NonceBlind(EQ.f, \eta, g)$} \label{code:mcdb-query-IWS-be}
\STATE $EN \leftarrow \emptyset$
\STATE $IL \leftarrow GDB(EQ.f, g)$ \COMMENT{If $(EQ.f, g) \notin GDB$, return $IL$ of the closest group(s).}\label{code:mcdb-query-IL}
\FOR {each $id \in IL$ }
\STATE $(seed, \bm{n}) \leftarrow NDB(id)$, where $\bm{n}= \ldots ||n_{EQ.f}|| \ldots$ and $|n_{EQ.f}|=|\eta|$
\label{code:mcdb-query-get-nonce}
\STATE $w \leftarrow H'(n_{EQ.f} \oplus \eta)$
\STATE $t \leftarrow \eta \oplus seed$
\STATE $EN(id) \leftarrow (w, t)$ \label{code:mcdb-query-enc-nonce}
\ENDFOR
\STATE Send $IL=(id, \ldots)$ and the encrypted nonce set $EN=((w, t), \ldots)$ to the SSS \label{code:mcdb-query-IWS-end}
~\\
\STATE \underline{SSS: $Search(EQ, EN, IL)$} \label{code:mcdb-query-SSS-be}
\STATE $SR \leftarrow \emptyset$
\FOR {each $id \in IL$}
\IF {$H'(EDB(id, EQ.f) \oplus EQ.e^*) = EN(id).w$ } \label{code:mcdb-query-checke}
\STATE $SR \leftarrow SR \cup (EDB(id), EN(id).t)$ \label{code:mcdb-query-sr}
\ENDIF
\ENDFOR
\STATE Send the search result $SR$ to the user \label{code:mcdb-query-SSS-end}
~\\
\STATE \underline{User: RcdDec$(SR, \eta)$} \label{code:mcdb-query-dec-be}
\FOR{each $(Ercd, t) \in SR$}
\STATE $\bm{n} \leftarrow \Gamma_{s_2} (t \oplus \eta)$ \label{code:mcdb-query-dec-seed}
\STATE $(Enc_{s_1}(rcd), tag) \leftarrow Ercd \oplus \bm{n}$
\STATE $tag_{l} || tag_{r} \leftarrow tag$, where $|tag_r|=|e|$
\IF{$tag_{l} = H_{s_1}(tag_{r})$} \label{code:mcdb-query-ducheck}
\STATE $rcd \leftarrow Enc^{-1}_{s_1} (Enc_{s_1}(rcd))$ \label{code:mcdb-query-dec}
\ENDIF
\ENDFOR \label{code:mcdb-query-dec-end}
\end{algorithmic}
\end{algorithm}
In this work, we focus on simple queries with a single equality predicate.
Complex queries with multiple predicates can be performed by issuing multiple simple queries and combining their results on the user side.
To support range queries, the technique used in \cite{Asghar:Espoon:IRC13} can be adopted.
For performing a select query, \textsl{\mbox{P-McDb}}\xspace requires cooperation between the IWS and the SSS.
The details of the steps performed by the user, IWS, and SSS are shown in Algorithm~\ref{alg:macdb-search}, $Query(Q) \rightarrow SR$, which consists of four components: $QueryEnc$, $NonceBlind$, $Search$, and $RcdDec$.
\paragraph{\bm{$QueryEnc(Q)\rightarrow (EQ, \eta, g)$}}
First, the user encrypts the query $Q=(type, f, e)$ using $QueryEnc$ (Lines \ref{code:mcdb-query-user-be} - \ref{code:mcdb-query-user-end}, Algorithm \ref{alg:macdb-search}).
Specifically, to determine the group to be searched, the user first generates $g$ (Line \ref{code:mcdb-query-user-gid}).
We do not aim to protect the query type and the searched field from the CSPs.
Thus, the user does not encrypt $Q.type$ and $Q.f$.
The queried keyword $Q.e$ is encrypted into $EQ.e^*$ by computing $Enc_{s_1}(Q.e)\oplus \eta$ (Line \ref{code:mcdb-query-se}).
The nonce $\eta$ ensures that $EQ.e^*$ is semantically secure.
Finally, the user sends $EQ=(type, f, e^*)$ to the SSS and sends $(EQ.f$, $\eta$, $g)$ to the IWS.
\paragraph{$\bm{NonceBlind(EQ.f, \eta, g)\rightarrow (IL, EN)}$}
Second, the IWS provides $IL$ and the witnesses $EN$ of group $(EQ.f, g)$ to the SSS by running $NonceBlind$ (Lines \ref{code:mcdb-query-IWS-be}-\ref{code:mcdb-query-IWS-end}).
Specifically, for each $id \in IL$, the IWS generates $EN(id)=(w, t)$ (Lines~\ref{code:mcdb-query-get-nonce}-\ref{code:mcdb-query-enc-nonce}),
where $w=H'(n_{EQ.f} \oplus \eta)$ will be used by the SSS to find the matching records, and $t= \eta \oplus seed$ will be used by the user to decrypt the result.
Here $H': \{0, 1\}^* \rightarrow \{0, 1\}^k$ is a hash function.
Note that when $(EQ.f, g)$ is not contained in $GDB$, the $IL$ of the \emph{closest} group(s) will be used, \textit{i.e.,}\xspace the group in field $EQ.f$ whose identifier shares the most bits with $g$\footnote{This can be obtained by comparing the Hamming weight of $g' \oplus g$ for all $(EQ.f, g') \in GDB$.}.
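The closest-group selection in the footnote can be illustrated with a short Python helper. This is a sketch with hypothetical names; in the scheme the group identifiers are the bit strings output by $GE$.

```python
def hamming_weight(b):
    # number of set bits in a byte string
    return sum(bin(x).count("1") for x in b)

def closest_groups(g, group_ids):
    # Among all group identifiers g' for field EQ.f, pick those sharing
    # the most bits with g, i.e. minimising the Hamming weight of g' XOR g.
    def dist(gp):
        return hamming_weight(bytes(a ^ b for a, b in zip(gp, g)))
    best = min(dist(gp) for gp in group_ids)
    return [gp for gp in group_ids if dist(gp) == best]
```

Note that several groups can tie for the minimum, which is why the text speaks of the closest group(s).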
\paragraph{$\bm{Search(EQ, IL, EN)\rightarrow SR}$}
Third, the SSS traverses the records indexed by $IL$ and finds the records matching $EQ$ with the assistance of $EN$ (Lines \ref{code:mcdb-query-SSS-be} - \ref{code:mcdb-query-SSS-end}).
Specifically, for each record indexed by $IL$, the SSS checks if $H'(EDB(id, EQ.f) \oplus EQ.e^*) \stackrel{?}{=} EN(id).w$ (Line \ref{code:mcdb-query-checke}).
More specifically, the operation is:
\begin{equation}\notag
H'(Enc_{s_1}(e_{EQ.f})\oplus n_{EQ.f} \oplus Enc_{s_1}(Q.e) \oplus \eta ) \stackrel{?}{=} H'(n_{EQ.f} \oplus \eta)
\end{equation}
It is clear that there is a match only when $Q.e=e_{EQ.f}$.
The SSS sends each matched record $EDB(id)$ and its corresponding $EN(id).t$ to the user as the search result $SR$, \textit{i.e.,}\xspace $EDB(EQ)$.
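To see why this check succeeds exactly when the keywords match, consider the following Python sketch (HMAC-SHA256 stands in for $Enc_{s_1}$, SHA-256 for $H'$, and all variable names are ours): the deterministic ciphertexts cancel under XOR, leaving $H'(n_{EQ.f} \oplus \eta)$ on both sides.

```python
import hashlib, hmac, os

def hprime(x):
    # hash function H' used by the SSS
    return hashlib.sha256(x).digest()

def det_enc(key, e):
    # stand-in deterministic PRF for Enc_{s1}
    return hmac.new(key, e, hashlib.sha256).digest()[:16]

xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

s1 = os.urandom(16)
n_f = os.urandom(16)   # nonce of the searched field, held by the IWS
eta = os.urandom(16)   # query nonce, chosen by the user

# stored ciphertext cell and encrypted query keyword
e_star = xor(det_enc(s1, b"alice"), n_f)
q_star = xor(det_enc(s1, b"alice"), eta)

# SSS-side check: H'(e* XOR q*) == w, where w = H'(n_f XOR eta) from the IWS
w = hprime(xor(n_f, eta))
assert hprime(xor(e_star, q_star)) == w       # match: keywords equal

q_star2 = xor(det_enc(s1, b"bob"), eta)
assert hprime(xor(e_star, q_star2)) != w      # no match: keywords differ
```

The SSS never sees $n_f$, $\eta$, or $s_1$; it only compares hash values, so a non-match reveals nothing beyond inequality.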
\paragraph{\bm{$RcdDec(SR, \eta) \rightarrow rcds$}}
To decrypt an encrypted record $Ercd$, both the secret key $s_1$ and nonce $\bm{n}$ are required.
The nonce $\bm{n}$ can be recovered from the returned $t$.
Only the user issuing the query knows $\eta$ and is able to recover $\bm{n}$ by computing $\Gamma_{s_2}(t \oplus \eta)$ (Line \ref{code:mcdb-query-dec-seed}).
With $\bm{n}$, the user can check if each returned record is real or dummy (Line \ref{code:mcdb-query-ducheck}), and decrypt each real record by computing
$Enc^{-1}_{s_1}(Ercd \oplus \bm{n})$ (Line \ref{code:mcdb-query-dec}), where $Enc^{-1}$ is the inverse of $Enc$.
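The nonce recovery in $RcdDec$ relies only on the identity $t \oplus \eta = seed$. A minimal sketch, with SHAKE-256 standing in for $\Gamma_{s_2}$ and our own variable names:

```python
import hashlib, os

prg = lambda seed, ln: hashlib.shake_256(seed).digest(ln)
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

seed = os.urandom(16)   # held by the IWS in NDB
eta = os.urandom(16)    # known only to the querying user
t = xor(eta, seed)      # returned alongside each matching record

# User side: t XOR eta recovers the seed, and the PRG re-derives the nonce n
assert prg(xor(t, eta), 64) == prg(seed, 64)
```

Any party without $\eta$ sees $t$ as a one-time pad of the seed, which is why only the querying user can decrypt the result.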
\subsection{Shuffling and Re-randomisation}
\label{subsec:shuffle}
\begin{algorithm}
\scriptsize
\caption{$Shuffle(IL, Ercds)$}
\label{alg:mcdb-shuffle}
\begin{algorithmic}[1]
\STATE \underline{IWS: $PreShuffle(IL)$} \label{code:mcdb-preshuffle-be}
\STATE $IL' \leftarrow \pi (IL)$
\STATE Shuffle the $(seed, \bm{n})$ pairs indexed by $IL$ based on $IL'$
\STATE Update the indices of affected groups in $GDB$ \label{code:mcdb-shuffle-update-groups}
\FOR {each $id \in IL'$} \label{code:mcdb-shuffle-begin}
\STATE $seed \stackrel{\$}{\leftarrow} \{0, 1\}^{|seed|}$ \label{code:mcdb-seed'}
\STATE $\bm{n'} \leftarrow \Gamma_{s_2}(seed)$ \label{code:mcdb-gn'}
\STATE $NN(id) \leftarrow NDB(id).\bm{n} \oplus \bm{n'}$ \label{code:mcdb-nn}
\STATE $NDB(id) \leftarrow (seed, \bm{n'})$ \label{code:mcdb-n'}
\ENDFOR
\STATE Send $(IL', NN)$ to the RSS. \label{code:mcdb-preshuffle-end}
~\\~
\STATE{\underline{RSS: $Shuffle(Ercds, IL', NN)$}}
\STATE Shuffle $Ercds$ based on $IL'$
\FOR {each $id \in IL'$}
\STATE $Ercds(id) \leftarrow Ercds(id) \oplus NN(id) $ \label{code:mcdb-shuffle-reenc}
\ENDFOR
\STATE Send $Ercds$ to the SSS.
\end{algorithmic}
\end{algorithm}
To protect the access pattern and ensure the forward and backward privacy, \textsl{\mbox{P-McDb}}\xspace shuffles and re-randomises searched records after executing each query, and this procedure is performed by the IWS and RSS.
The details are shown in Algorithm \ref{alg:mcdb-shuffle}, consisting of $PreShuffle$ and $Shuffle$.
\paragraph{$\bm{PreShuffle(IL) \rightarrow (IL', NN)}$}
In \textsl{\mbox{P-McDb}}\xspace, the searched records are re-randomised by renewing the nonces.
Recall that $SE$ encryption is semantically secure due to the nonce.
However, the IWS stores the nonces.
If the IWS had access to the encrypted records, it could observe deterministically encrypted records by removing the nonces.
To avoid this leakage, \textsl{\mbox{P-McDb}}\xspace does not allow the IWS to access any records and involves the RSS to shuffle and re-randomise them.
Yet, the IWS still needs to shuffle $NDB$ and generate new nonces for the re-randomisation by executing $PreShuffle$.
Specifically, as shown in Algorithm \ref{alg:mcdb-shuffle}, Lines \ref{code:mcdb-preshuffle-be}-\ref{code:mcdb-preshuffle-end}, the IWS first shuffles the $id$s in $IL$ with a Pseudo-Random Permutation (PRP) $\pi$ and gets the re-ordered indices list $IL'$.
In our implementation, we leverage the modern version of the Fisher-Yates shuffle algorithm \cite{Knuth73}, where, from the first $id$ to the last, each $id$ in $IL$ is exchanged with a randomly chosen $id$ located after it.
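A sketch of this modern Fisher-Yates shuffle in Python; in the scheme, the randomness would be driven by the keyed PRP $\pi$ rather than Python's `random` module.

```python
import random

def fisher_yates(ids, rng=None):
    # Modern Fisher-Yates: walk left to right, swapping each position
    # with a uniformly chosen position at or after it. Every permutation
    # is produced with equal probability.
    rng = rng or random.Random()
    ids = list(ids)
    for i in range(len(ids) - 1):
        j = rng.randrange(i, len(ids))
        ids[i], ids[j] = ids[j], ids[i]
    return ids
```

Running it with a seeded generator makes the permutation reproducible, which is how a PRP-keyed shuffle can be mirrored across parties.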
After that, the IWS shuffles $(seed, \bm{n})$ pairs based on $IL'$.
Note that the shuffling operation affects the list of indices of the groups in other fields.
Thus, the IWS also needs to update the index lists of other groups accordingly (Line \ref{code:mcdb-shuffle-update-groups}).
For re-randomising records, the IWS samples a new seed and generates a new nonce $\bm{n'}$.
To ensure that the records will be blinded with the respective new nonces stored in $NDB$ after shuffling and re-randomisation, the IWS generates $NN=(\bm{n}\oplus\bm{n'}, \ldots)$ for the RSS (Line \ref{code:mcdb-nn}).
Afterwards, the IWS updates the seed and nonce stored in $NDB(id)$ with the new values.
Finally, $(IL', NN)$ is sent to the RSS.
\paragraph{$\bm{Shuffle(Ercds, IL', NN) \rightarrow Ercds}$}
After searching, the SSS sends the searched records $Ercds$ to the RSS.
Given $IL'$ and $NN$, the RSS starts to shuffle and re-randomise $Ercds$.
Specifically, the RSS first shuffles $Ercds$ based on $IL'$, and then re-randomises each record by computing $Ercds(id) \oplus NN(id)$ (Line \ref{code:mcdb-shuffle-reenc}).
In detail, the operation is:
\begin{equation}\notag
(Enc_{s_1}(rcd_{id})\oplus \bm{n}) \oplus (\bm{n'} \oplus \bm{n}) = Enc_{s_1}(rcd_{id})\oplus \bm{n'}
\end{equation}
That is, $Ercds(id)$ is blinded with the latest nonce stored in $NDB(id)$.
Finally, the re-randomised and shuffled records $Ercds$ are sent back to the SSS.
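The re-randomisation step is a single XOR. The following sketch (our own variable names) checks the identity above: blinding with $NN = \bm{n} \oplus \bm{n'}$ replaces the old nonce with the new one without ever exposing $Enc_{s_1}(rcd)$ to the RSS.

```python
import os

xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

ct = os.urandom(32)       # Enc_{s1}(rcd), never seen in the clear by the RSS
n_old = os.urandom(32)    # current nonce n
n_new = os.urandom(32)    # fresh nonce n'

ercd = xor(ct, n_old)     # record as stored, blinded with the old nonce
nn = xor(n_old, n_new)    # NN sent by the IWS; reveals neither nonce alone

# RSS side: one XOR swaps the blinding nonce
assert xor(ercd, nn) == xor(ct, n_new)
```

Since $NN$ is the XOR of two one-time nonces, the RSS learns nothing about either nonce or the underlying ciphertext.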
By using a new set of seeds for the re-randomisation, \textsl{\mbox{P-McDb}}\xspace achieves both forward and backward privacy.
If the SSS tries to execute an old query individually, it will not be able to match any records without the new witness $w$, which can only be generated by the IWS with new nonces.
Similarly, the SSS cannot learn if deleted records match new queries.
\subsection{User Revocation}
\label{sec:revoke}
\textsl{\mbox{P-McDb}}\xspace supports flexible multi-user access in a way that the issued queries and search results of one user are protected from all the other entities. Moreover, revoking a user does not require key regeneration or data re-encryption, even when one of the CSPs colludes with revoked users.
As mentioned in Section \ref{subsec:mcdb-select}, for filtering dummy records and recovering returned real records, both $s_1$ and the nonce are required.
After shuffling, the nonce is only known to the IWS.
Thus, without the assistance of the IWS and SSS, a user is unable to recover records with $s_1$ alone.
Therefore, for user revocation, we just need to maintain a revoked user list at both the IWS and the SSS.
Once a user is revoked, the admin informs the IWS and SSS to add this user into their revoked user lists.
When receiving a query, the IWS and the SSS first check whether the user has been revoked.
If so, they reject the query.
Even if revoked users collude with either the SSS or the IWS, they cannot obtain search results, since doing so requires the cooperation of the querying user, the IWS, and the SSS.
\subsection{Database Updating}
\label{subsec:update}
\begin{algorithm}
\scriptsize
\caption{$Insert(rcd)$}
\label{alg:mcdb-insert}
\begin{algorithmic}[1]
\STATE \underline{$User(rcd)$}:
\STATE $(Ercd, seed, \bm{n}, Grcd) \leftarrow RcdEnc(rcd, 1)$ \label{code:mcdb-insert-user-enc}
\STATE $INS_{IWS} \leftarrow (seed, \bm{n}, Grcd)$, $INS_{SSS} \leftarrow Ercd$
\FOR {each $g_f \in Grcd$} \label{code:mcdb-insert-user-be}
\STATE $(\bm{E}_{f, g_f}, \tau_{f, g_f})^* \leftarrow GDB(f, g_f)$\COMMENT{If $(f, g_f) \notin GDB$, $g_f \leftarrow g_f'$, where $(f, g_f')$ is the closest group of $(f, g_f)$.}
\STATE $(\bm{E}_{f, g_f}, \tau_{f, g_f}) \leftarrow ENC^{-1}_{s_1}((\bm{E}_{f, g_f}, \tau_{f, g_f})^*)$ \label{code:mcdb-insert-user-ef}
\ENDFOR
\FOR{each $e_f \in rcd$} \label{code:mcdb-insert-gendum-be}
\IF{$e_f \in \bm{E}_{f, g_f}$}
\STATE $\gamma_{f} \leftarrow |\bm{E}_{f, g_f}| -1 $
\ELSE
\STATE $\gamma_{f} \leftarrow \tau_{f, g_f} -1 $
\ENDIF
\ENDFOR
\STATE $W=\max\{\gamma_{f}\}_{1 \leq f \leq F}$ \label{code:mcdb-insert-encdum-be}
\STATE Generate $W$ dummy records with values $(NULL, \ldots, NULL)$
\FOR{each $e_f \in rcd$}
\IF{$e_f \in \bm{E}_{f, g_f}$}
\STATE Assign $\bm{E}_{f, g_f} \setminus e_f$ to $\gamma_f$ dummy records in field $f$
\STATE $\tau_{f, g_f} ++$
\ELSE
\STATE Assign $e_f$ to $\gamma_f$ dummy records in field $f$
\STATE $\bm{E}_{f, g_f} \leftarrow \bm{E}_{f, g_f} \cup e_f$
\ENDIF
\STATE $(\bm{E}_{f, g_f}, \tau_{f, g_f})^* \leftarrow ENC_{s_1}(\bm{E}_{f, g_f}, \tau_{f, g_f})$ \label{code:mcdb-insert-gendum-end}
\ENDFOR
\FOR{each dummy record $rcd'$}
\STATE $(Ercd, seed, \bm{n}, Grcd) \leftarrow RcdEnc(rcd', 0)$ \label{code:mcdb-insert-enc-du}
\STATE $INS_{IWS} \leftarrow INS_{IWS} \cup (seed, \bm{n}, Grcd)$
\STATE $INS_{SSS} \leftarrow INS_{SSS} \cup Ercd$ \label{code:mcdb-insert-user-end}
\ENDFOR
\STATE Send $INS_{IWS}$ and $((\bm{E}_{f, g_f}, \tau_{f, g_f})^*)_{1 \leq f \leq F}$ to the IWS
\STATE Send $INS_{SSS}$ to the SSS
~\\
\STATE \underline{$SSS(INS_{SSS}$}): \label{code:mcdb-insert-csp-be}
\STATE $IDs \leftarrow \emptyset$
\FOR{each $Ercd \in INS_{SSS}$}
\STATE $EDB(++id) \leftarrow Ercd$ \label{code:mcdb-insert-csp-insert}
\STATE $IDs \leftarrow IDs \cup id$
\ENDFOR
\STATE Send $IDs$ to the IWS \label{code:mcdb-insert-csp-end}
~\\
\STATE \underline{$IWS(INS_{IWS}, (\bm{E}_{f, g_f}, \tau_{f, g_f})^*)_{1 \leq f \leq F}, IDs)$}: \label{code:mcdb-insert-wss-be}
\FOR{each $(seed, \bm{n}, Grcd) \in INS_{IWS}$ and $id \in IDs$} \label{code:mcdb-insert-wss-seed}
\STATE $NDB(id) \leftarrow (seed, \bm{n})$
\FOR{$f=1$ to $F$}
\STATE $GDB (f, g_f) \leftarrow (GDB(f, g_f).IL_{f, g_f} \cup id, (\bm{E}_{f, g_f}, \tau_{f, g_f})^*)$
\label{code:mcdb-insert-wss-group}
\ENDFOR
\ENDFOR \label{code:mcdb-insert-wss-end}
\end{algorithmic}
\end{algorithm}
\textsl{\mbox{P-McDb}}\xspace allows users to update the database after bootstrapping.
However, after updating, the occurrences of involved elements will change.
To effectively protect the search pattern, we should ensure the elements in the same group always have the same occurrence.
\textsl{\mbox{P-McDb}}\xspace achieves that by updating dummy records.
\paragraph{Insert Query}
In \textsl{\mbox{P-McDb}}\xspace, the insert query is also performed with the cooperation of the user, IWS, and SSS.
The idea is that a number of dummy records are generated and inserted along with the real one to ensure that all the elements in the same group always have the same occurrence.
The details are shown in Algorithm \ref{alg:mcdb-insert}.
Assume the real record to be inserted is $rcd=(e_1, \ldots, e_F)$.
The user encrypts it with \emph{RcdEnc}, and gets $(Ercd, seed, \bm{n}, Grcd)$ (Line \ref{code:mcdb-insert-user-enc}, Algorithm \ref{alg:mcdb-insert}).
For each $g_f \in Grcd$, the user gets $(\bm{E}_{f, g_f}, \tau_{f, g_f})^*$ of group $(f, g_f)$ and decrypts it.
Note that if $(f, g_f) \notin GDB$, the IWS returns $(\bm{E}_{f, g_f}, \tau_{f, g_f})^*$ of the closest group(s), instead of adding a new group.
That is, $e_f$ will belong to its closest group in this case.
The problem with adding new groups is that, when a new group contains fewer than $\lambda$ elements, adversaries could easily infer the search and access patterns within it.
The next step is to generate dummy records (Lines \ref{code:mcdb-insert-gendum-be}-\ref{code:mcdb-insert-gendum-end}).
The approach to generating dummy records depends on whether $e_f \in \bm{E}_{f, g_f}$, \textit{i.e.,}\xspace whether $rcd$ introduces new element(s) not already in the database.
If $e_f \in \bm{E}_{f, g_f}$, inserting $rcd$ automatically increases $O(e_f)$ to $\tau_{f, g_f}+1$.
In this case, the occurrences of the other elements in $\bm{E}_{f, g_f}$ should also be increased to $\tau_{f, g_f}+1$.
Otherwise, $O(e_f)$ would be unique in the database, and adversaries could tell whether users are querying $e_f$ based on the size pattern.
To achieve that, $\gamma_f=|\bm{E}_{f, g_f}| -1$ dummy records are required for field $f$, and each of them contains an element in $\bm{E}_{f, g_f} \setminus e_f$.
If $e_f \notin \bm{E}_{f, g_f}$, $O(e_f)=0$ in $EDB$.
After inserting, we should ensure $O(e_f)=\tau_{f, g_f}$ since it belongs to the group $(f, g_f)$.
Thus, this case needs $\gamma_f=\tau_{f, g_f} -1$ dummy records for field $f$, and all of them are assigned with $e_f$ in field $f$.
Assume $W$ dummy records are required for inserting $rcd$, where $W=\max\{\gamma_f\}_{1 \leq f \leq F}$.
The user generates $W$ dummy records as mentioned above (`NULL' is used if necessary), and encrypts them with $RcdEnc$ (Lines \ref{code:mcdb-insert-encdum-be}-\ref{code:mcdb-insert-gendum-end}).
Meanwhile, the user adds each new element into the respective $\bm{E}_{f, g_f}$ if there are any, updates $\tau_{f, g_f}$, and re-encrypts $(\bm{E}_{f, g_f}, \tau_{f, g_f})$.
All the encrypted records are sent to the SSS and added into $EDB$ (Lines \ref{code:mcdb-insert-csp-be}-\ref{code:mcdb-insert-csp-end}).
All the $(seed, \bm{n})$ pairs and $(\bm{E}_{f, g_f}, \tau_{f, g_f})^*$ are sent to the IWS and inserted into $NDB$ and $GDB$ accordingly (Lines \ref{code:mcdb-insert-wss-be}-\ref{code:mcdb-insert-wss-end}).
Finally, to protect the access pattern, the shuffling and re-randomising operations over the involved groups will be performed between the IWS and RSS.
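As a small illustration of the counting logic above, the following Python sketch (our own helper name; each group is given as an $(\bm{E}_{f, g_f}, \tau_{f, g_f})$ pair) computes $\gamma_f$ for each field and the total $W$:

```python
def dummies_needed(rcd, groups):
    # rcd[f] is the element of field f; groups[f] = (E_fg, tau_fg) for the
    # group that element falls into. gamma_f follows the two cases in the
    # text; W is the maximum over all fields.
    gammas = []
    for f, e in enumerate(rcd):
        E, tau = groups[f]
        if e in E:
            gammas.append(len(E) - 1)   # raise every other element's count
        else:
            gammas.append(tau - 1)      # raise O(e) from 1 up to tau
    return max(gammas)
```

For instance, inserting a record whose first field hits an existing element of a 3-element group and whose second field introduces a new element into a group with $\tau = 3$ requires $W = \max\{2, 2\} = 2$ dummy records.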
\paragraph{Delete Query}
Processing delete queries is straightforward.
Instead of removing records from the database, the user sets them to dummy by replacing their $tag$s with random strings.
In this way, the occurrences of involved elements are not changed.
Moreover, the correctness of the search result is guaranteed.
However, only updating the tags of matched records leaks the access pattern of delete queries to the RSS.
Particularly, the RSS could keep a view of the latest database and check which records' tags were modified when the searched records are sent back for shuffling.
To avoid such leakage, in \textsl{\mbox{P-McDb}}\xspace, the user modifies the tags of all the searched records for each delete query.
Specifically, the SSS returns the identifiers of matched records and the tags of all searched records to the user.
For matched records, the user changes their tags to random strings directly.
Whereas, for each unmatched record, the user first checks whether it is real or dummy, and then generates a proper new tag as done in Algorithm~\ref{alg:mcdb-enc}.
Likewise, the \emph{PreShuffle} and \emph{Shuffle} algorithms are performed between the IWS and RSS after updating all the tags.
However, if the system never removes records, the database will grow rapidly.
To avoid this, the admin periodically removes records consisting of $F$ `NULL' values from the database.
Specifically, the admin periodically checks whether each element in each group is contained in at least one dummy record.
If so, for each element, the admin updates one dummy record containing it to `NULL'.
As a consequence, the occurrences of all the elements in the same group decrease but remain equal.
When a dummy record consists only of `NULL' values, the admin removes it from the database.
\paragraph{Update query}
In \textsl{\mbox{P-McDb}}\xspace, update queries can be performed by deleting the records with old elements and inserting new records with new elements.
\section{Performance Analysis}
\label{sec:MCDB-perf}
We implemented \textsl{\mbox{P-McDb}}\xspace in C using the MIRACL 7.0 library for cryptographic primitives.
The performance of all the entities was evaluated on a desktop machine with Intel i5-4590 3.3 GHz 4-core processor and 8GB of RAM.
We evaluated the performance using the TPC-H benchmark \cite{TPC:2017:h}, and tested equality queries with a single predicate over the `O\_CUSTKEY' field of the `ORDERS' table.
In the following, all the results are averaged over $100$ trials.
\subsection{Group Generation}
\begin{table}[h]
\scriptsize
\centering
\caption{The storage overhead with different numbers of groups}
\begin{tabular}{|l|l|l|l|}
\hline
\#Groups &\#Dummy records &\#Elements in a group & \#Records in a group \\ \hline
1 & 2599836 & 99996 & 4099836 \\
10 & 2389842 & 10000 & $\approx$38000 \\
100 & 1978864 & 1000 & $\approx$35000 \\
1000 & 1567983 & 100 & $\approx$3000 \\
10000 & 1034007 & 10 & $\approx$240 \\
\hline
\end{tabular}
\label{Tbl:oblidb-storage-perf}
\end{table}
In `ORDERS' table, all the `O\_CUSTKEY' elements are integers.
For simplicity, we divided the records into groups by computing $e \bmod b$ for each element $e$ in the `O\_CUSTKEY' field.
Specifically, we divided the records into 1, 10, 100, 1000, and 10000 groups by setting $b=1$, $10$, $100$, $1000$, and $10000$, respectively.
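This grouping is simple residue bucketing; a sketch with a hypothetical helper name:

```python
from collections import defaultdict

def group_by_mod(keys, b):
    # b = 1, 10, 100, ... yields 1, 10, 100, ... groups over integer keys,
    # mirroring the e mod b partition used in the experiment.
    groups = defaultdict(list)
    for k in keys:
        groups[k % b].append(k)
    return groups
```

With $b=1$ every key lands in the single group $0$, which corresponds to the one-group row of the table.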
Table \ref{Tbl:oblidb-storage-perf} shows the number of required records and the number of elements included in a group when dividing the database into 1, 10, 100, 1000, and 10000 groups.
In particular, when all the records are in one group, $2599836$ dummy records are required in total, $\lambda=99996$, and the CSP has to search $4099836$ records for each query.
When we divide the records into more groups, fewer dummy records are required and fewer records are searched by the CSP for each query, but fewer elements are contained in each group.
When there are $10000$ groups, only $1034007$ dummy records are required in total, $\lambda=10$, and the CSP just needs to search around $240$ records for each query.
\subsection{Query Latency}
\begin{figure}[htp]
\centering
\includegraphics[width=0.32\textwidth]{figs/entity.pdf}\\
\caption{The overhead on different entities with different group numbers}
\label{Fig:mcdb-entity}
\end{figure}
An important aspect of an outsourced service is that most of the intensive computations should be off-loaded to the CSPs.
To quantify the workload on each of the entities, we measured the latency on the user, IWS, SSS, and RSS for processing the query with different numbers of groups.
The results are shown in Fig.~\ref{Fig:mcdb-entity}.
We can notice that the computation time on the IWS, SSS, and RSS is much higher than that on the user side when there are fewer than 10000 groups.
In the following, we discuss the performance on the CSPs and the user in detail.
\subsubsection{Overhead on CSPs}
\begin{figure}[htp]
\centering
\includegraphics[width=0.32\textwidth]{figs/CSPs.pdf}\\
\caption{The overhead on CSPs with different group numbers}
\label{Fig:mcdb-csps}
\end{figure}
Fig. \ref{Fig:mcdb-csps} shows the performance of the operations running in the CSPs when increasing the number of groups.
Specifically, in \textsl{\mbox{P-McDb}}\xspace, the IWS runs \emph{NonceBlind} and \emph{PreShuffle}, the SSS runs \emph{Search}, and the RSS runs \emph{Shuffle}.
We can notice that the running times of all four operations decrease as the number of groups increases.
The reason is that \textsl{\mbox{P-McDb}}\xspace only searches and shuffles a group of records for each query.
The more groups, the fewer records in each group for searching and shuffling.
Thanks to the efficient XOR operation, even when $g=1$, \textit{i.e.,}\xspace when searching the whole database ($4099836$ records in total), \emph{NonceBlind}, \emph{Search}, and \emph{Shuffle} can be finished in around $2$ seconds.
\emph{PreShuffle} is the most expensive operation in \textsl{\mbox{P-McDb}}\xspace, which takes about $11$ seconds when $g=1$.
Fortunately, in \emph{PreShuffle}, the generation of new nonces (\textit{i.e.,}\xspace Lines \ref{code:mcdb-seed'}-\ref{code:mcdb-gn'} in Algorithm \ref{alg:mcdb-shuffle}) is not affected by the search operation, thus they can be pre-generated.
By doing so, \emph{PreShuffle} can be finished in around $2.4$ seconds when $g=1$.
\subsubsection{Overhead on Users}
\begin{figure}[htp]
\centering
\includegraphics[width=0.32\textwidth]{figs/user_group.pdf}\\
\caption{The overhead on the user with different group numbers}
\label{Fig:mcdb-user}
\end{figure}
In \textsl{\mbox{P-McDb}}\xspace, the user only encrypts queries and decrypts results.
In Fig. \ref{Fig:mcdb-user}, we show the effect on the two operations when we change the number of groups.
The time for encrypting the query does not change with the number of groups.
However, the time taken by the result decryption decreases slowly when increasing the number of groups.
For recovering the required records, in \textsl{\mbox{P-McDb}}\xspace, the user first filters out the dummy records and then decrypts the real records.
Therefore, the result decryption time is affected by the number of returned real records as well as the dummy ones.
In this experiment, the tested query always matches 32 real records.
However, when changing the number of groups, the number of returned dummy records will be changed.
Recall that, the required dummy records for a group is $\sum_{e \in \bm{E}_{f, g}}(\tau_{f, g}-O(e))$, and the threshold $\tau_{f, g}=\max\{O(e)\}_{e \in \bm{E}_{f, g}}$.
When the records are divided into more groups, fewer elements will be included in each group.
As a result, the occurrence of the searched element tends to be closer to $\tau_{f, g}$, and then fewer dummy records are required for its padding.
Thus, the result decryption time decreases with the increase of the group number.
In the tested dataset, the elements have very close occurrences, ranging from $1$ to $41$.
The number of matched dummy records is $9$, $9$, $2$, $1$, and $0$ when there are $1$, $10$, $100$, $1000$, and $10000$ groups, respectively.
For a dataset with a larger gap between element occurrences, the result decryption time will change more noticeably with the number of groups.
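The padding formula recalled above can be checked with a few lines of Python (our own helper name; `occurrences` lists the field's elements with repetition):

```python
from collections import Counter

def padding_for_group(occurrences):
    # Dummies needed for one group/field: sum over elements e of
    # (tau - O(e)), where tau is the maximum occurrence in the group.
    counts = Counter(occurrences)
    tau = max(counts.values())
    return sum(tau - c for c in counts.values())
```

When all occurrences are equal, no padding is needed; the wider the spread, the more dummy records the group requires.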
\subsubsection{End-to-end Latency}
Fig. \ref{Fig:mcdb-user} also shows the end-to-end latency on the user side when issuing a query.
In this test, we did not simulate network latency, so the end-to-end latency shown here consists of the query encryption, nonce blinding, search, and result decryption times.
The end-to-end latency is dominated by the nonce blinding and search times; thus, it decreases when increasing the number of groups.
Specifically, the end-to-end latency decreases from $2.16$ to $0.0006$ seconds when the number of groups increases from $1$ to $10000$.
In this test, we used one trick to improve the performance.
As described in Algorithm \ref{alg:macdb-search}, the SSS is idle before getting $(IL, EN)$.
Indeed, the IWS can send $IL$ to the SSS first, and the SSS can then pre-compute $temp_{id}=H'(EDB(id, EQ.f) \oplus EQ.e^*)$ while the IWS generates $EN$.
After getting $EN$, the SSS just needs to check whether $temp_{id}=EN(id).w$.
By computing $(w, t)$ and $temp_{id}$ simultaneously, the user gets the search result sooner.
In this test, the SSS computed $temp_{id}$ while the IWS generated $EN$.
Note that the shuffle operation does not affect the end-to-end latency on the user side since it is performed after returning search results to users.
\subsection{Insert and Delete Queries}
\begin{figure}
\centering
\includegraphics[width=.32\textwidth]{figs/insdelsel.pdf}\\
\caption{The execution times of the insert, delete and select queries with different numbers of groups}
\label{Fig:mcdb-insert}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{figs/insdelsel_result.pdf}\\
\caption{The execution times of the insert, delete, and select queries with different result sizes}
\label{Fig:mcdb-insert-result}
\end{figure}
Since \textsl{\mbox{P-McDb}}\xspace is a dynamic SE scheme, we also tested its performance for insert and delete queries.
In Fig. \ref{Fig:mcdb-insert}, we show the execution times of insert and delete when changing the number of groups\footnote{The times taken by the $PreShuffle$ and $Shuffle$ algorithms are not included.}.
Moreover, we take the end-to-end latency of select queries as the baseline.
Fig. \ref{Fig:mcdb-insert} shows that both insert and delete queries execute faster when there are more groups.
For insert queries, as mentioned in Section \ref{subsec:update}, $W=\max\{\gamma_{f}\}_{1 \leq f \leq F}$ dummy records should be inserted when inserting a real record.
Thus, the performance of insert queries is affected by the number of elements in the involved groups.
When the database is divided into more groups, fewer elements are included in each group.
In this experiment, when there are 1, 10, 100, 1000, and 10000 groups, the user has to generate and encrypt 99996, 10000, 1000, 100, and 10 dummy records, respectively.
Specifically, when there is only one group, \textsl{\mbox{P-McDb}}\xspace takes only around $1.5$ seconds to encrypt and insert $99997$ records, which is slightly faster than a select query.
For delete queries, \textsl{\mbox{P-McDb}}\xspace first performs the search operation to get the matched records, and then turns them into dummies by changing their tags.
Moreover, to hide the access pattern from the RSS, the user also needs to change the tags of all the other searched records.
The more groups, the fewer records should be searched, and the fewer tags should be changed.
Therefore, the performance of delete queries also gets better when there are more groups.
However, compared with select queries, delete queries take much longer because of the time needed to change the tags.
Specifically, it takes around $20$ seconds to execute a delete query when there is only one group.
We also tested how the result size affects the performance of select and delete queries.
For this test, we divided the database into 10 groups, and the searched group contains $360000$ records.
Moreover, we manually changed the data distribution in the group to be searched to ensure that we can execute queries matching $36$, $360$, $3600$, $36000$, $360000$ records.
From Fig. \ref{Fig:mcdb-insert-result}, we can see that the performance of delete queries is better when the result size is bigger.
The reason is that the tags of matched records are processed much more efficiently than those of unmatched records.
Specifically, as mentioned in Section \ref{subsec:update}, the user directly changes the tags of matched records to random strings.
However, for each unmatched record, the user first has to check whether it is a dummy record, and then update its tag accordingly.
When all the searched records match the delete query, it takes only $0.6$ seconds to turn them into dummy records.
In contrast, select queries take longer when more records match the query, since more records have to be processed on the user side.
\iffalse
\subsection{Comparison with Other Schemes}
To better investigate the performance of our approach, here we roughly compare the search time of \textsl{\mbox{P-McDb}}\xspace with $PPQED_a$ and SisoSPIR.
Although we did not access their implementation, our experiments were conducted on Linux machines with approximate power\footnote{$PPQED_a$ was tested on a Linux machine with an
Intel Xeon 6-Core CPU 3.07 GHz processor and 12 GB RAM and SisoSPIR was tested on a machine with an Intel i7-2600K 3.4GHz 4-core CPU 8GB RAM.}.
Searching over 1 million records takes more than 10 seconds in SisoSPIR.
In $PPQED_a$, it takes 1 second to check if a predicate matches with a record when the data size is $10$ bits.
However, \textsl{\mbox{P-McDb}}\xspace only takes less than 2 seconds when searching over 4.1 million records, which is much more efficient than the other two schemes.
\fi
\section{Introduction}
\IEEEPARstart{T}{ext} in images contains
valuable information and is exploited in many
content-based image and video applications, such as content-based web
image search, video information retrieval, mobile based text analysis and
recognition~\cite{Zhong2000, Doermann2000, Weinman2009, Yin2011, Chew2011}.
Due to complex background, variations of font,
size, color and orientation, text in natural scene images
has to be robustly detected before being recognized or retrieved.
Existing methods for scene text detection can roughly be
categorized into three groups: sliding window based
methods~\cite{derivative_feature, adaboost_text, Kim2003},
connected component based methods~\cite{Epshtein, structure-partition, color-clustering},
and hybrid methods~\cite{pan}.
Sliding window based methods, also called region based methods, use a sliding window to search
for possible text in the image and then use machine learning
techniques to identify text.
These methods tend to be slow as the image has to be
processed in multiple scales.
Connected component based methods extract character candidates from
images by connected component analysis followed by grouping
character candidates into text; additional checks may be
performed to remove false positives.
The hybrid method presented by Pan et al.~\cite{pan} exploits a
region detector to detect text candidates and extracts
connected components as character candidates by local
binarization; non-characters are eliminated with a Conditional Random Fields~(CRFs)~\cite{crf} model,
and characters can finally be grouped into text.
More recently, Maximally Stable Extremal Region (MSER) based methods, which actually fall into the family of
connected component based methods but use MSERs~\cite{mser} as character candidates, have
become the focus of several recent projects~\cite{icdar2011, edge_mser, head_mounted,
real_time, pruned_search, neumann_method, mser2013}.
Although the MSER based method won the benchmark competition, i.e., the ICDAR
2011 Robust Reading Competition~\cite{icdar2011},
and has reported promising
performance, several problems remain to be addressed.
First, as the MSER algorithm detects a large number of non-characters,
most of the character candidates need to be removed
before further processing. The existing methods for MSERs
pruning~\cite{head_mounted, real_time}, on one hand, still leave room for improvement in terms of accuracy; on the other hand, they tend to be slow because
of the computation of complex features.
Second, current approaches~\cite{head_mounted, real_time, pan} for
text candidates construction, which can be categorized as
rule based and clustering based methods, work well but
are still not sufficient;
rule based methods generally require hand-tuned parameters,
which is time consuming and error prone;
the clustering based method~\cite{pan} shows good performance but it is
complicated by incorporating a second stage processing
after minimum spanning tree clustering.
In this paper, we propose a robust and accurate MSER based scene text detection
method.
First, by exploring the hierarchical structure of MSERs and
adopting simple features, we designed a fast and accurate MSERs
pruning algorithm;
the number of character candidates to be processed is
significantly reduced with a high accuracy.
Second, we propose a novel self-training distance metric learning algorithm
that can learn distance weights and clustering threshold simultaneously and automatically;
character candidates are clustered into text candidates
by the single-link clustering algorithm using the learned parameters.
Third, we propose to use a character classifier to
estimate the posterior probabilities of text candidates
corresponding to non-text and remove text candidates with
high probabilities.
Such elimination helps to train a more powerful text
classifier for identifying text.
By integrating the above ideas, we built an accurate and robust
scene text detection system.
The system is evaluated on the benchmark ICDAR 2011 Robust Reading
Competition dataset and achieves an $f$ measure of 76\%. To the best of our knowledge, this result ranks first among all reported results and is much higher than the current best performance of 71\%.
We also validate our method on the multilingual (including Chinese and English) dataset used in~\cite{pan}. With an $f$ measure of 74.58\%, our system significantly outperforms the competing method~\cite{pan}, which achieves only 65.2\%.
An online demo of our proposed scene text detection system
is available at \emph{\url{http://kems.ustb.edu.cn/learning/yin/dtext}}.
The rest of this paper is organized as follows. Recent MSER
based scene text detection methods
are reviewed in Section~\ref{section:related_work}.
Section~\ref{section:system} describes the proposed scene text detection method.
Section~\ref{section:experimental_results} presents the experimental results of the proposed system
on ICDAR 2011 Robust Reading Competition dataset and a
multilingual (including Chinese and English) dataset.
Final remarks are presented in Section~\ref{section:conclusion}.
\section{Related Work}
\label{section:related_work}
As described above, MSER based methods have demonstrated very promising performance in many real projects. However, current MSER based methods still have two
key limitations: they may suffer from a large number of non-character candidates in detection and from
insufficient text candidates construction algorithms. In this section, we review the MSER
based methods with the focus on these two problems. Other scene text detection methods can be referred to in some
survey papers~\cite{survey04,survey05,survey08}. A recently published MSER based method can be found in
Shi et al.~\cite{mser2013}.
The main advantage of MSER based methods over traditional
connected component based methods may lie in the use of MSERs as
character candidates.
Although the MSER algorithm can detect most characters even when
the image is in low quality (low resolution, strong noises,
low contrast, etc.), most of the detected character candidates correspond
to non-characters.
Carlos et al.~\cite{head_mounted} presented a MSERs pruning
algorithm that contains two steps: (1) reduction of linear
segments and (2) hierarchical filtering.
The first stage reduces linear segments in the MSER tree into
one node by maximizing the \emph{border energy} function;
the second stage walks through the tree in a depth-first
manner and eliminates nodes by checking them against a
cascade of filters: \emph{size, aspect ratio, complexity,
border energy and texture}.
Neumann and Matas~\cite{real_time} presented a two stage algorithm for
Extremal Regions (ERs) pruning.
In the first stage, a classifier trained from
incrementally computable descriptors (\emph{area, bounding
box, perimeter, Euler number and horizontal crossing}) is used to
estimate the class-conditional probabilities
$p(r|\mbox{character})$ of ERs; ERs corresponding to
local maximum of probabilities in the
ER inclusion relation are selected.
In the second stage, ERs that pass the first stage are
classified as characters and non-characters using more
complex features. As most of the MSERs correspond to
non-characters, the purpose of using cascading filters and
incrementally computable descriptors in the above two methods is
to deal with the computational complexity caused by the high
false positive rate.
Another challenge of MSER based methods, or more generally,
CC-based methods and hybrid methods, is how to group
character candidates into text candidates.
The existing methods for text candidates construction fall
into two general approaches: rule-based~\cite{edge_mser,
head_mounted, real_time} and clustering-based methods~\cite{pan}.
Neumann and Matas~\cite{real_time} grouped character
candidates using text line constraints, whose basic
assumption is that characters in a word can be fitted by one or
more top and bottom lines.
Carlos et al.~\cite{head_mounted} constructed a fully connected
graph over character candidates; they filtered edges by
running a set of tests (edge angle, relative position
and size difference of adjacent character candidates) and
used the remaining connected subgraphs as text candidates.
Chen et al.~\cite{edge_mser} paired character candidates into
clusters by putting constraints on stroke width and height
difference; they then
exploited a straight line to fit to the centroids of
clusters and declared a line as text candidate if it
connected three or more character candidates.
The clustering-based method presented by Pan et al.~\cite{pan} clusters character
candidates into a tree using the minimum spanning tree
algorithm with a learned distance metric~\cite{yin-liu:metric-2009};
text candidates are constructed by cutting off between-text
edges with an energy minimization model.
The above rule-based methods generally require hand-tuned
parameters, while the clustering-based method
is complicated by the incorporation of a
post-processing stage, where one has to specify the
energy model.
\section{Robust Scene Text Detection}
\label{section:system}
In this paper, by incorporating several key improvements over
traditional MSER based methods, we propose a novel MSER based
scene text detection method, which leads to significant performance improvements over other competing methods.
The structure of the proposed system, as well as the sample result
of each stage is presented in
Figure~\ref{fig:system_overview}.
The proposed scene text detection method includes the
following stages:
1) \emph{Character candidates extraction}.
Character candidates are extracted using the MSER algorithm;
most of the non-characters are removed by the proposed MSERs
pruning algorithm using the strategy of minimizing
regularized variations. More details are presented in Section~\ref{section:mser_extraction}.
2) \emph{Text candidates construction}.
Distance weights and the threshold are learned simultaneously
using the proposed metric learning algorithm; character
candidates are clustered into text candidates by the
single-link clustering algorithm using the learned
parameters. More details are presented in Section~\ref{section:region_construction}.
3) \emph{Text candidates elimination}. The
posterior probabilities of text candidates corresponding to
non-text are measured using the character classifier, and
text candidates with high probabilities of being non-text are removed.
More details are presented in Section~\ref{section:character_classifier}.
4) \emph{Text candidates classification}. Text
candidates corresponding to true text are identified by the
text classifier. An AdaBoost
classifier is trained to decide whether a text candidate corresponds
to true text or not~\cite{Yin12}.
As characters in the same text tend to have similar features,
the uniformity of the character candidates'
features is used as the text candidate's features to train the classifier.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{system-crop.pdf}
\caption{Flowchart of the proposed system and the corresponding experimental
results after each step of a sample image.
Text candidates are labeled by blue bounding rectangles;
character candidates identified as characters are
colored green, others red.
}
\label{fig:system_overview}
\end{figure}
In order to measure the performance of the proposed system
using the ICDAR 2011 competition dataset, text candidates
identified as text are further partitioned into words
by classifying inner character distances into character
spacings and word spacings using an AdaBoost classifier~\cite{Yin12}.
The following features are adopted: spacing aspect ratio,
relative width difference between left and right
neighbors, number of character candidates in the text
candidate.
\subsection{Character Candidates Extraction}
\label{section:mser_extraction}
\subsubsection{Pruning Algorithm Overview}
The MSER algorithm is able to detect almost all
characters even when the image is in low quality.
However, as shown in Figure~\ref{fig:mser_tree_origin}, most of the detected
character candidates correspond to non-characters and
should be removed before further processing.
Figure~\ref{fig:mser_tree_origin} also shows that the detected characters form
a tree, which is quite useful for designing the
pruning algorithm.
In real world situations, as characters cannot be
``included'' by or ``include''
other characters, it is safe to remove children once the
parent is known to be a character, and vice versa.
The parent-children elimination is a safe operation
because characters are preserved after the operation.
By induction, if the MSER tree is pruned by applying the
parent-children elimination operation
recursively in a depth-first manner, we remain safe
and characters are preserved.
As shown in Figure~\ref{fig:mser_tree_tree_accumulation},
the above algorithm will end up with a set of
disconnected nodes containing all the characters.
The problem with the above algorithm is that it is expensive to
identify characters.
Fortunately, rather than identifying the character,
the choice between parent and children can be
made by simply choosing the one that is more likely to be
characters,
which can be estimated by
the proposed regularized variation scheme.
Considering different situations in MSER trees,
we design two versions of the parent-children
elimination method,
namely the \emph{linear reduction} and
\emph{tree accumulation} algorithm.
Non-character regions are eliminated by the linear
reduction and tree accumulation algorithm using the strategy
of minimizing regularized variations.
Our experiment on ICDAR 2011 competition training set shows that
more than 80\% of character candidates are eliminated using
the proposed pruning algorithm.
In the following sections, we first introduce the
concept of variation and explain why variations need to be
regularized. Then we introduce the linear reduction and
tree accumulation algorithm. Finally we present the
complexity analysis for the proposed algorithms.
\subsubsection{Variation and Its Regularization }
According to Matas et al.~\cite{mser},
an ``extremal region'' is a connected component of an image
whose pixels have either higher or lower intensity than its
outer boundary pixels~\cite{vlfeat, detector_compare}.
Extremal regions are extracted by applying a set of
increasing intensity levels to the gray scale image.
When the intensity level increases,
a new extremal region is extracted by
accumulating pixels of current level and joining
lower level extremal regions~\cite{head_mounted};
when the top level is reached, extremal regions of the whole image are extracted as
a rooted tree.
The variation of an extremal region is defined as follows.
Let $R_l$ be an extremal region and $B(R_l) = (R_l, R_{l+1}, \ldots,
R_{l+\Delta})$
($\Delta$ is a parameter) be the branch of the tree rooted at $R_l$;
the variation (instability) of $R_l$ is defined as
\begin{equation}
v(R_l) = \frac{|R_{l+\Delta} - R_l|}{|R_l|}.
\end{equation}
An extremal region $R_l$ is a maximally stable extremal region if its
variation is lower than (i.e., more stable than) those of its parent $R_{l-1}$ and
child $R_{l+1}$~\cite{vlfeat, mser}.
Informally, a maximally stable extremal region is an extremal region
whose size remains virtually unchanged over a range of
intensity levels~\cite{real_time}.
As MSERs with lower variations have sharper
borders
and are more likely to be characters,
one possible strategy for the
parent-children elimination operation is to select the parent or the children
based on which has the lower variation.
However, this strategy alone will not work because
MSERs corresponding to characters do not
necessarily have the lowest variations.
Consider a very common situation depicted in Figure~\ref{fig:situations}.
The children of the MSER tree in Figure~\ref{fig:situation1} correspond to
characters, while the parent of the MSER tree in
Figure~\ref{fig:situation2} corresponds to a
character.
The ``minimize variation'' strategy cannot deal with this
situation because either parent or children may
have the lowest variations.
However, our experiments show that this limitation can easily be fixed by variation
regularization, whose basic idea is to penalize the variations of
MSERs with too large or too small aspect ratios.
Note that we do not require characters to have the lowest
variations globally; a lower variation in a parent-children
relationship suffices for our algorithm.
Let $\mathcal{V}$ be the variation and $a$ the aspect
ratio of a MSER.
Assuming the aspect ratios of characters
fall in $[a_{min}, a_{max}]$, the
regularized variation is defined as
\begin{equation}
\mathcal{V} =
\begin{cases}
\mathcal{V} + \theta_1 (a - a_{max}) & \text{if } a > a_{max} \\
\mathcal{V} + \theta_2 (a_{min} - a) & \text{if } a < a_{min} \\
\mathcal{V} & \text{otherwise}
\end{cases}\;,
\end{equation}
where $\theta_1$ and $\theta_2$ are penalty parameters.
Based on experiments on the training dataset,
these parameters are set as
$\theta_1 = 0.03, \theta_2 = 0.08, a_{max} = 1.2 \text{ and }
a_{min} = 0.7$.
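As a concrete sketch, the regularization step can be written as a small function (an illustrative Python sketch, not the authors' implementation; the penalty is added to the variation so that regions with implausible aspect ratios become less preferred under the minimize-variation strategy, and the aspect ratio is assumed to be bounding-box width over height):

```python
# Parameter values follow the paper.
THETA1, THETA2 = 0.03, 0.08
A_MIN, A_MAX = 0.7, 1.2

def regularized_variation(variation: float, width: float, height: float) -> float:
    """Penalize the variation of an MSER whose bounding-box aspect ratio
    falls outside the expected character range [A_MIN, A_MAX]."""
    a = width / height  # assumed aspect-ratio convention: width over height
    if a > A_MAX:
        return variation + THETA1 * (a - A_MAX)
    if a < A_MIN:
        return variation + THETA2 * (A_MIN - a)
    return variation
```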
\begin{figure}[htb!]
\centering
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[width=\textwidth]{situation1.png}
\caption{}
\label{fig:situation1}
\end{subfigure}
\begin{subfigure}[b]{0.11\textwidth}
\centering
\includegraphics[width=\textwidth]{situation2.png}
\caption{}
\label{fig:situation2}
\end{subfigure}
\caption{Character correspondence in MSER trees.
(a) A MSER tree whose children correspond to characters;
(b) a MSER tree whose parent corresponds to a character.
}
\label{fig:situations}
\end{figure}
\begin{figure}[htb!]
\centering
\begin{subfigure}[b]{0.09\textwidth}
\centering
\includegraphics[trim = 60mm 14mm 0mm 107mm, clip,
width=\textwidth]{mser_tree_origin.png}
\caption{}
\label{fig:mser_tree_origin}
\end{subfigure}
\begin{subfigure}[b]{0.09\textwidth}
\centering
\includegraphics[trim = 60mm 14mm 0mm 107mm, clip, width=\textwidth]{mser_tree_var.png}
\caption{}
\label{fig:mser_tree_var}
\end{subfigure}
\begin{subfigure}[b]{0.09\textwidth}
\centering
\includegraphics[trim = 60mm 14mm 0mm 107mm, clip, width=\textwidth]{mser_tree_var_recalc.png}
\caption{}
\label{fig:mser_tree_regularized_var}
\end{subfigure}
\begin{subfigure}[b]{0.09\textwidth}
\centering
\includegraphics[trim = 60mm 0mm 0mm 35mm, clip, width=\textwidth]{mser_tree_linear_reduction.png}
\caption{}
\label{fig:mser_tree_linear_reduction}
\end{subfigure}
\begin{subfigure}[b]{0.09\textwidth}
\centering
\includegraphics[width=\textwidth]{mser_tree_tree_accumulation2.png}
\caption{}
\label{fig:mser_tree_tree_accumulation}
\end{subfigure}
\caption{MSERs pruning.
(a) MSER tree of a text segment;
(b) MSERs colored according to variations, as
variations increase, MSERs are colored from green to
yellow then to red;
(c) MSERs colored according to regularized
variations;
(d) MSER tree after linear reduction;
(e) character candidates after tree accumulation.
}\label{fig:mser_reduction}
\end{figure}
Figure~\ref{fig:mser_tree_var} shows a MSER tree colored
according to variation.
As variation increases, the color changes from green to
yellow then to red.
The same tree colored according to regularized variation is
shown in Figure~\ref{fig:mser_tree_regularized_var}.
The MSER tree in Figure~\ref{fig:mser_tree_regularized_var}
is used in our linear reduction (result presented in Figure~\ref{fig:mser_tree_linear_reduction}) and tree accumulation
algorithms (result presented in Figure~\ref{fig:mser_tree_tree_accumulation}).
Notice that ``variation'' in the following sections refers to
``regularized variation''.
\subsubsection{Linear Reduction}
The linear reduction algorithm is used in situations where
a MSER has only one child.
The algorithm chooses from parent and child the one with the
minimum variation and discards the other.
This procedure is applied across the whole tree recursively.
The detailed algorithm is presented in
Figure~\ref{fig:linear_reduction}.
Given a MSER tree, the procedure
returns the root of the processed
tree whose linear segments are reduced.
The procedure works as follows.
Given a node $t$, the procedure checks the number of
children of $t$. If $t$ has no children, it returns $t$ immediately.
If $t$ has only one child,
it obtains the root $c$ of the reduced child tree by first applying
the linear reduction procedure to the child tree; if $t$ has a
lower variation than $c$, it links $c$'s children to $t$
and returns $t$; otherwise it returns $c$. If $t$ has more than
one child, it processes these children using linear reduction
and links the resulting trees to $t$ before returning $t$.
Figure~\ref{fig:mser_tree_linear_reduction} shows the
resulting MSER tree
after applying linear reduction to the tree shown in
Figure~\ref{fig:mser_tree_regularized_var}.
Note that in the resulting tree
all linear segments are reduced and non-leaf nodes always
have more than one child.
\begin{figure}[htb!]
\begin{algorithmic}[1]
\Procedure{Linear-Reduction}{$T$}
\If{nchildren[$T$] = 0}
\State \textbf{return} $T$
\ElsIf{nchildren[$T$] = 1}
\State $c$ $\gets$ {\Call{Linear-Reduction}{child[$T$]}}
\If{var[$T$] $\leq$ var[$c$]}
\State{link-children($T$, children[$c$])}
\State \textbf{return} $T$
\Else
\State \textbf{return} $c$
\EndIf
\Else\Comment{nchildren[$T$] $\geq$ 2}
\For{ \textbf{each} $c$ $\in$ children[$T$]}
\State link-children($T$, {\Call{Linear-Reduction}{$c$}})
\EndFor
\State \textbf{return} $T$
\EndIf
\EndProcedure
\end{algorithmic}
\caption{The linear reduction algorithm.}
\label{fig:linear_reduction}
\end{figure}
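The pseudocode above translates almost directly into code. The following is a minimal Python sketch, where \texttt{Node} is a hypothetical MSER-tree node holding its regularized variation \texttt{var} and its list of \texttt{children}:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    var: float                                   # regularized variation
    children: list = field(default_factory=list)

def linear_reduction(t: Node) -> Node:
    """Reduce linear segments of an MSER tree, keeping at each step the
    node with the lower (regularized) variation."""
    if len(t.children) == 0:
        return t
    if len(t.children) == 1:
        c = linear_reduction(t.children[0])
        if t.var <= c.var:
            t.children = c.children   # link c's children to t, drop c
            return t
        return c                      # drop t, keep the more stable c
    # two or more children: reduce each subtree and relink it to t
    t.children = [linear_reduction(c) for c in t.children]
    return t
```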
\subsubsection{Tree Accumulation}
The tree accumulation algorithm is used when a MSER has
more than one child.
Given a MSER tree, the procedure returns a set of
disconnected nodes.
The algorithm works as follows.
For a given node $t$, tree accumulation checks the number of
$t$'s children. If $t$ has no children, it returns $t$ immediately; if $t$
has two or more children, it creates an empty set $C$ and
appends the results of applying tree accumulation to
$t$'s children to $C$;
if one of the nodes in $C$ has a lower variation
than $t$, it returns $C$; otherwise it
discards $t$'s children and returns $t$.
Figure~\ref{fig:mser_tree_tree_accumulation} shows the result of
applying tree accumulation to the tree shown in
Figure~\ref{fig:mser_tree_linear_reduction}.
Note that the final result is a set of disconnected nodes
containing all the characters in the original MSER tree.
\begin{figure}[htb!]
\begin{algorithmic}[1]
\Procedure{Tree-Accumulation}{$T$}
\If{nchildren[$T$] $\geq$ 2}
\State $C \gets \emptyset$
\For{ \textbf{each} $c$ $\in$ children[$T$]}
\State $C \gets C \ \cup $ {\Call{Tree-Accumulation}{$c$}}
\EndFor
\If{var[$T$] $\leq$ min-var[$C$]}
\State discard-children($T$)
\State \textbf{return} $T$
\Else
\State \textbf{return} $C$
\EndIf
\Else\Comment{nchildren[$T$] = 0}
\State \textbf{return} $T$
\EndIf
\EndProcedure
\end{algorithmic}
\caption{The tree accumulation algorithm.}
\label{fig:tree-accumulation}
\end{figure}
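The procedure above can likewise be sketched in Python. \texttt{Node} is again a hypothetical tree node (assumed to come from linear reduction, so non-leaf nodes have at least two children) with a regularized variation \texttt{var}:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    var: float                                   # regularized variation
    children: list = field(default_factory=list)

def tree_accumulation(t: Node) -> list:
    """Return the surviving character candidates as a flat list of nodes."""
    if len(t.children) >= 2:
        c = []
        for child in t.children:
            c.extend(tree_accumulation(child))
        if t.var <= min(n.var for n in c):
            t.children = []   # parent is more stable: discard its children
            return [t]
        return c              # some child is more stable: keep the children
    return [t]                # leaf node
```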
\subsubsection{Complexity Analysis}
\label{section:complexity_analysis}
The linear reduction and tree accumulation algorithms
effectively visit each node in the MSER tree and perform simple
comparisons and pointer manipulations; thus the complexity
is linear in the number of tree nodes.
The computational complexity of the variation regularization
is mostly due to the calculation of MSERs' bounding rectangles,
which is upper-bounded by the number of pixels in the image.
\subsection{Text Candidates Construction}
\label{section:region_construction}
\subsubsection{Text Candidates Construction Algorithm
Overview}
Text candidates are constructed by clustering character
candidates using the single-link
clustering algorithm~\cite{clustering}.
Intuitively, single-link clustering produces clusters that
are elongated~\cite{clustering_review} and is thus
particularly suitable for the text candidates construction task.
Single-link clustering belongs to the family of hierarchical
clustering; in hierarchical clustering,
each data point is initially treated as a
singleton cluster and clusters are successively merged
until all points have been merged into a single remaining
cluster.
In the case of single-link clustering, the two clusters
whose two closest
members have the smallest distance are merged in each step.
A distance threshold can be specified such that the clustering
process is terminated when the distance between the nearest
clusters exceeds the threshold.
The resulting clusters of the single-link algorithm form
a hierarchical cluster tree, or a cluster forest if a termination
threshold is specified.
In the above algorithm, each data point represents a
character candidate, and \emph{top level} clusters in the
final hierarchical cluster tree (forest) correspond to
text candidates.
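As an implementation aside, the top level clusters produced by single-link clustering with termination threshold $\epsilon$ are exactly the connected components of the graph joining point pairs at distance at most $\epsilon$. A minimal union-find sketch in Python (function and parameter names are illustrative, not from the paper):

```python
def single_link(points, dist, eps):
    """points: list of items; dist(u, v) -> float; eps: termination
    threshold. Returns the top-level clusters as lists of points."""
    parent = list(range(len(points)))

    def find(i):
        # find the representative of i's component, with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # union every pair whose distance does not exceed the threshold
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())
```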
The problem is of course to determine the distance
function and threshold for the single-link algorithm.
We use the weighted sum of features as the distance
function.
Given two data points $u,v$, let $x_{u,v}$ be the feature
vector characterizing the similarity between $u$ and $v$, the distance
between $u$ and $v$ is defined as
\begin{equation}
d(u,v;w) = w^T x_{u,v},
\label{eq:metric00}
\end{equation}
where $w$, the feature weight vector together with the
distance threshold, can be learned using the proposed distance metric learning
algorithm.
In the following subsections, we first introduce the feature
space $x_{u,v}$, then detail the proposed metric learning algorithm
and finally present the empirical analysis on the proposed algorithm.
\subsubsection{Feature Space}
The feature vector $x_{u,v}$ is used to describe
the similarities between data points $u$ and $v$.
Let $x_u, y_u$ be the coordinates of the top left corner of $u$'s bounding rectangle,
$h_u, w_u$ be the height and width of the bounding
rectangle of $u$,
$s_u$ be the stroke width of $u$,
and $c1_u, c2_u, c3_u$ be the average color values of the three channels of $u$;
the feature vector ${x}_{u,v}$ includes the
following features:
\begin{itemize}
\item Spatial distance
\begin{align*}
abs(x_u + 0.5 w_u - x_v - 0.5 w_v) / \max(w_u , w_v).
\end{align*}
\item Width and height differences
\begin{align*}
& abs(w_u - w_v) / \max(w_u , w_v), \\
& abs(h_u - h_v) / \max(h_u , h_v).
\end{align*}
\item Top and bottom alignments
\begin{align*}
& \arctan(\frac{abs(y_u - y_v)} { abs(x_u + 0.5 w_u - x_v - 0.5 w_v)}), \\
& \arctan(\frac{abs(y_u + h_u - y_v - h_v)} { abs(x_u + 0.5 w_u - x_v - 0.5 w_v)}).
\end{align*}
\item Color difference
\begin{align*}
\sqrt{(c1_u - c1_v)^2 + (c2_u -
c2_v)^2 + (c3_u - c3_v)^2}.
\end{align*}
\item Stroke width difference
\begin{align*}
abs(s_u - s_v)
/ \max(s_u, s_v).
\end{align*}
\end{itemize}
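These features can be computed directly from the bounding boxes. The following Python sketch assumes each candidate is a dict with top-left corner \texttt{x, y}, size \texttt{w, h}, stroke width \texttt{s}, and mean color \texttt{c} (a 3-tuple), and measures the horizontal distance between box centers:

```python
import math

def pair_features(u: dict, v: dict) -> list:
    """Similarity feature vector x_{u,v} for two character candidates."""
    dx = abs(u["x"] + 0.5 * u["w"] - v["x"] - 0.5 * v["w"])  # center distance
    return [
        dx / max(u["w"], v["w"]),                                # spatial distance
        abs(u["w"] - v["w"]) / max(u["w"], v["w"]),              # width difference
        abs(u["h"] - v["h"]) / max(u["h"], v["h"]),              # height difference
        math.atan2(abs(u["y"] - v["y"]), dx),                    # top alignment
        math.atan2(abs(u["y"] + u["h"] - v["y"] - v["h"]), dx),  # bottom alignment
        math.dist(u["c"], v["c"]),                               # color difference
        abs(u["s"] - v["s"]) / max(u["s"], v["s"]),              # stroke width diff.
    ]
```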
\subsubsection{Distance Metric Learning}
There are a variety of distance metric learning methods~\cite{huang1, huang2, huang3}.
More specifically, many clustering algorithms rely on a good distance metric
over the input space.
One task of semi-supervised clustering is to learn a
distance metric that satisfies the labels or constraints in the
supervised data given the clustering
algorithm~\cite{integrating,xing_metric,klein}.
The strategy of metric learning is to learn the distance
function by minimizing the
distance between point pairs in $\mathcal{M}$ while maximizing the
distance between point pairs in $\mathcal{C}$,
where $\mathcal{C}$ specifies pairs of points in different
clusters and $\mathcal{M}$ specifies pairs of points in
the same cluster.
In single-link clustering, because clusters are formed by merging
smaller clusters, the final resulting clusters will form a
binary hierarchical cluster tree, in which non-singleton
clusters have exactly two direct subclusters.
It is not hard to see that the following
property holds for \emph{top level} clusters: given the termination threshold $\epsilon$,
the distances between each top level cluster's subclusters
are less than or equal to $\epsilon$,
and the distances between data pairs in different top level clusters are
greater than $\epsilon$, where the distance between clusters
is that of the two closest members in each cluster.
This property of single-link clustering enables us to design
a learning algorithm that can learn the distance function and
threshold simultaneously.
Given the top level cluster
set $\{C_k\}_{k=1}^{m}$, we randomly initialize feature weights
$w$ and set $\mathcal{C}$ and $\mathcal{M}$ as
\begin{align}
& \mathcal{C} = \{(\hat{u}_k,
\hat{v}_k)= \operatornamewithlimits{\arg min}_{u \in C_k, v \in C_{-k}}d(u,v;w)\}_{k=1}^{m},
\label{eq:cannot-link} \\
& \mathcal{M} = \{(u_k^*, v_k^*) =
\operatornamewithlimits{\arg min}_{u \in C_k^1, v \in C_k^2} d(u,v;w)\}_{k=1}^{m},
\label{eq:must-link}
\end{align}
where $C_{-k}$ is the set of points excluding points in
$C_k$, $C_k^1$ and $C_k^2$ are direct subclusters of $C_k$.
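For illustration, $\mathcal{C}$ and $\mathcal{M}$ can be assembled by brute force over the current top level clusters (a Python sketch with hypothetical structures: each cluster is a dict holding its member \texttt{points} and its two direct subclusters \texttt{subs}; singleton clusters, which have no subclusters, are assumed to be excluded):

```python
def build_pair_sets(clusters, dist):
    """clusters: list of top-level clusters; dist(u, v) -> float.
    Returns (C, M): the nearest between-cluster pairs and the nearest
    pairs between each cluster's two direct subclusters."""
    C, M = [], []
    for k, ck in enumerate(clusters):
        # nearest pair between C_k and all points outside C_k
        others = [p for j, cj in enumerate(clusters) if j != k
                  for p in cj["points"]]
        C.append(min(((u, v) for u in ck["points"] for v in others),
                     key=lambda uv: dist(*uv)))
        # nearest pair between C_k's two direct subclusters
        s1, s2 = ck["subs"]
        M.append(min(((u, v) for u in s1 for v in s2),
                     key=lambda uv: dist(*uv)))
    return C, M
```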
Suppose $\epsilon$ is specified as the single-link
clustering termination threshold. By the definition of
single-link clustering, we must have
\begin{align}
& d(u,v;w) > \epsilon \text{ for all } (u,v) \in
\mathcal{C},
\label{eq:cannot-link-constain}\\
& d(u,v;w) \leq \epsilon \text{ for all } (u,v) \in
\mathcal{M}.
\label{eq:must-link-constain}
\end{align}
The above equations show that $\mathcal{C}$ and
$\mathcal{M}$ can be regarded as the positive and
negative sample sets of a classification problem, such that the
feature weights and threshold can be learned by minimizing
the classification error.
As we know, the logistic regression loss is the traditional loss used in classification
with a high and stable performance.
By adopting the objective function of logistic regression,
we define the following objective function
\begin{align}
\label{eq:logistic_regression1}
J(\theta: \mathcal{C}, \mathcal{M}) = & \frac{-1}{2m} ( \sum_{(u,v) \in \mathcal{C}} \log(h_{\theta}(x'_{u,v})) \\ \notag
& + \sum_{(u,v) \in \mathcal{M}} \log(1-h_{\theta}(x'_{u,v})) ),
\end{align}
where
\begin{align}
& h_{\theta}(x'_{u,v}) = 1/(1+\exp(-\theta^T x'_{u,v})), \nonumber \\
& \theta = \left(
\begin{array}{c}
-\epsilon \\
w
\end{array}
\right),
\qquad
x'_{u,v} =
\left(
\begin{array}{c}
1 \\
x_{u,v}
\end{array}
\right). \nonumber
\end{align}
The feature weights $w$ and
the threshold $\epsilon$ can be learned simultaneously
by minimizing the objective function $J(\theta: \mathcal{C},
\mathcal{M})$ with respect to the current assignment of $\mathcal{C}$ and $\mathcal{M}$:
\begin{equation}
\label{eq:logistic_regression2}
\theta^* = \operatornamewithlimits{\arg min}_{\theta} J(\theta: \mathcal{C}, \mathcal{M}).
\end{equation}
Minimization of the above objective function is a
typical nonlinear optimization problem and can be solved by
classic gradient optimization methods~\cite{elements_sl_book}.
Note that in the above learning scheme,
initial values for $w$ have to be
specified in order to generate the sets $\mathcal{C}$ and
$\mathcal{M}$ according to Equations~\eqref{eq:cannot-link}
and~\eqref{eq:must-link}.
For this reason, we design an iterative optimization algorithm in
which each iteration involves two successive steps:
assignment of $\mathcal{C}$ and $\mathcal{M}$,
and optimization with respect to $\mathcal{C}$ and $\mathcal{M}$.
We call our algorithm ``\emph{self-training distance metric learning}''.
Pseudocode for this learning algorithm is presented in
Figure~\ref{fig:metric-learning}.
Given the top level cluster set $\{C_k\}_{k=1}^{m}$, the
learning algorithm finds an optimized $\theta$
such that the objective function $J(\theta:\mathcal{C},
\mathcal{M})$ is minimized with respect to $\mathcal{C}$ and $\mathcal{M}$.
In this algorithm, the initial value of $\theta$ is set before
the iteration begins; in the first stage of each iteration, $\mathcal{M}$ and
$\mathcal{C}$ are updated according to Equations~\eqref{eq:cannot-link}
and~\eqref{eq:must-link} with respect to the current assignment
of $\theta$; in the second stage, $\theta$ is updated by
minimizing the objective function with respect to the
current assignment of $\mathcal{C}$ and $\mathcal{M}$.
This two-stage optimization is repeated until convergence
or until the maximum number of iterations is exceeded.
\begin{figure}[htb!]
\begin{algorithmic}
\State{\textbf{Input:} labeled clusters set $\{C_k\}_{k=1}^{m}$}
\State{\textbf{Output:} optimized $\theta$ such that
objective function $J$ is minimized}
\State{\textbf{Method:}}
\State{randomly initialize $\theta$}
\Repeat
\State{\textbf{stage1}: update $\mathcal{M}$ and
$\mathcal{C}$ with respect to $\theta$ }
\State{\textbf{stage2}: $\theta \gets
\operatornamewithlimits{\arg min}_{\theta}J(\theta: \mathcal{C}, \mathcal{M})$}
\Until{convergence or the iteration limit is reached}
\end{algorithmic}
\caption{The self-training distance metric learning algorithm.}
\label{fig:metric-learning}
\end{figure}
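The two-stage scheme can be sketched as follows; this is a toy illustration with a single scalar distance feature, synthetic one-dimensional points, and plain gradient descent (the paper optimizes stage two with L-BFGS), not the authors' implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dist(u, v, w):
    # toy distance d(u, v; w) = w * |u - v|, a single scalar feature
    return w * abs(u - v)

def build_pairs(clusters, w):
    """Stage 1: assign the cannot-link set C and must-link set M."""
    C, M = [], []
    for k, (sub1, sub2) in enumerate(clusters):
        inside = sub1 + sub2
        outside = [p for j, (s1, s2) in enumerate(clusters) if j != k
                   for p in s1 + s2]
        # closest inside/outside pair (cannot-link) ...
        C.append(min(((u, v) for u in inside for v in outside),
                     key=lambda uv: dist(uv[0], uv[1], w)))
        # ... and closest pair across the two direct subclusters (must-link)
        M.append(min(((u, v) for u in sub1 for v in sub2),
                     key=lambda uv: dist(uv[0], uv[1], w)))
    return C, M

def J_and_grad(theta, C, M):
    """Logistic objective J(theta: C, M) and its gradient."""
    m = len(C)
    J, g = 0.0, [0.0, 0.0]
    for pairs, y in ((C, 1.0), (M, 0.0)):   # cannot-link = positive class
        for u, v in pairs:
            xp = [1.0, abs(u - v)]          # x' = (1, x_{u,v})
            h = sigmoid(theta[0] * xp[0] + theta[1] * xp[1])
            J -= (y * math.log(h) + (1.0 - y) * math.log(1.0 - h)) / (2 * m)
            for i in range(2):
                g[i] += (h - y) * xp[i] / (2 * m)
    return J, g

def self_train(clusters, iters=5, gd_steps=200, lr=1.0):
    theta = [0.0, 0.1]                      # theta = (-eps, w)
    for _ in range(iters):
        C, M = build_pairs(clusters, theta[1])     # stage 1
        for _ in range(gd_steps):                  # stage 2
            _, g = J_and_grad(theta, C, M)
            theta = [t - lr * gi for t, gi in zip(theta, g)]
    return theta

# two top-level clusters, each with two tight direct subclusters
clusters = [([0.0, 0.1], [0.5, 0.6]), ([5.0, 5.1], [5.5, 5.6])]
theta = self_train(clusters)
eps, w = -theta[0], theta[1]
```

On this toy data the learned boundary $\epsilon/w$ ends up between the must-link distance ($0.4$) and the cannot-link distance ($4.4$), so the learned threshold separates the two pair sets.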
Similar to most self-training algorithms, convergence of the proposed algorithm is not guaranteed
because the objective function is not assured to decrease in
stage one. However, self-training algorithms have demonstrated success in many applications. In our case, we find that the algorithm usually achieves very good performance
after a small number of iterations, typically within 5. This phenomenon is investigated in the next subsection.
\subsubsection{Empirical Analysis}
We perform an empirical analysis of the proposed distance metric learning algorithm.
We labeled $466$ text candidates corresponding to true
text in the ICDAR 2011 competition training set, 70\% of which are used as
training data and 30\% as validation data.
In each iteration of the algorithm,
the cannot-link set $\mathcal{C}$ and must-link set $\mathcal{M}$ are updated
in stage one by generating cannot-link point pairs and must-link
point pairs from true text candidates
in every image in the training dataset;
in stage two, the objective function is optimized using the L-BFGS
method~\cite{LBFGS} and the parameters are updated.
The performance of the distance weights and threshold learned in
stage two is evaluated on the validation dataset in each
iteration.
As discussed in the previous section, the algorithm may
or may not converge depending on the initial values
of the parameters.
Our experiments show that the learned parameters almost always
have a very low error rate on the validation set after the first several iterations, and no major
improvement is observed in subsequent iterations.
As a result, whether the algorithm converges or not has little
impact on the performance of the learned parameters.
We plot the value of the objective
function after stage one and stage two in each iteration of
two instances (a converged one and a non-converged one) of the
algorithm in Figure~\ref{fig:objective}.
The corresponding error rates of the learned parameters on
the validation set in each iteration are plotted in
Figure~\ref{fig:error-rate}.
Notice that the value of the objective function and the
validation set error rate drop immediately after the first several iterations.
Figure~\ref{fig:error-rate} shows that the learned
parameters have different error rates due to different
initial values, which suggests running
the algorithm several times to obtain satisfactory parameters.
The parameters for the single-link clustering algorithm in our scene text detection system are chosen
based on performance on the validation set.
\begin{figure}[htb!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{objective-crop.pdf}
\caption{}
\label{fig:objective}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{error_rate-crop.pdf}
\caption{}
\label{fig:error-rate}
\end{subfigure}
\caption{
Objective function value (a) and validation set error
rate of the learned parameters (b)
after stage one and stage two in each iteration of two instances of the
metric learning algorithm; the red line corresponds to the
non-converged instance and the blue line to the converged instance.
}
\label{situations}
\end{figure}
\subsection{Text Candidates Elimination}
\label{section:character_classifier}
Using the text candidates construction algorithm proposed in
Section~\ref{section:region_construction},
our experiment on the ICDAR 2011 competition training set shows
that only 9\%
of the text candidates correspond to true
text.
As it is hard to train an effective text classifier on such
an unbalanced dataset, most of the non-text candidates need to
be removed before training the classifier.
We propose to use a character classifier to estimate the
posterior probability that a text candidate corresponds to
non-text and to remove text candidates with a high probability of being non-text.
The following features are used to train the character classifier:
smoothness, defined as the average difference of adjacent
boundary pixels' gradient directions; stroke width features, including the average stroke width and the stroke width variation;
and height, width, and aspect ratio.
Characters with small aspect ratios such as ``i'', ``j''
and ``l'' are treated as
negative samples, as it is very uncommon for a word to
comprise many small aspect ratio characters.
Given a text candidate $T$, let $O(m,n;p)$ be the observation
that there are $m$ ($m \in \mathbb{N}, m \geq 2$) character
candidates in $T$, of which $n$ ($n \in \mathbb{N}, n \leq m$)
are classified as non-characters by a character classifier of
precision $p$ ($0 < p < 1$).
The probabilities of the observation conditioned on $T$
corresponding to text and to non-text are $P(O(m,n;p) |
\text{text}) = p^{m - n} (1 - p)^{n}$ and $P(O(m,n;p) |
\text{non-text}) = (1 - p)^{m - n} p^{n}$, respectively.
Let $P(\text{text})$ and $P(\text{non-text})$ be the prior
probabilities of $T$ corresponding to text and non-text.
By applying Bayes' rule, the posterior probability of
$T$ corresponding to non-text given the observation is
\begin{align}
P(\text{non-text} | &O(m,n;p)) = \nonumber \\
& \frac{P(O(m,n;p) | \text{non-text}) P(\text{non-text})}{P(O(m,n;p))},
\end{align}
where $P(O(m,n;p))$ is the probability of the observation
\begin{align}
P(O(m,n;p)) &= P(O(m,n;p) | \text{text}) P(\text{text})
\nonumber \\
&\qquad {} + P(O(m,n;p) | \text{non-text})
P(\text{non-text}).
\end{align}
The candidate region is rejected if
\begin{equation}
P(\text{non-text} |O(m,n;p)) \geq \varepsilon,
\end{equation}
where $\varepsilon$ is the threshold.
Our experiments show that text candidates of different sizes
tend to have different probabilities of being text.
For example, on the ICDAR training set, 1.25\% of text candidates
of size two correspond to text, while 30.67\% of text
candidates of size seven correspond to text, which suggests
setting different priors for text candidates of different
sizes.
Given a text candidate $T$ of size $s$, let $N_s$ be the total
number of text candidates of size $s$ and $N_s^*$ be the number
of text candidates of size $s$ that correspond to text;
we estimate the prior of $T$ being text as $P_s(\text{text}) =
{N_s^*} / {N_s}$, and the prior of $T$ being non-text as
$P_s(\text{non-text}) = 1 - P_s(\text{text})$.
These priors are computed based on statistics on the ICDAR
training dataset.
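The elimination rule can be sketched as follows (our illustration, not the authors' code); the size-dependent priors below simply reuse the fractions reported above (1.25\% for size two, 30.67\% for size seven) as example values:

```python
def posterior_nontext(m, n, p, prior_text):
    """Posterior P(non-text | O(m, n; p)) by Bayes' rule."""
    prior_nontext = 1.0 - prior_text
    like_text = p ** (m - n) * (1.0 - p) ** n        # P(O | text)
    like_non = (1.0 - p) ** (m - n) * p ** n         # P(O | non-text)
    evidence = like_text * prior_text + like_non * prior_nontext
    return like_non * prior_nontext / evidence

def should_reject(m, n, p, prior_text, threshold=0.995):
    return posterior_nontext(m, n, p, prior_text) >= threshold

# illustrative size-dependent priors P_s(text) = N_s^* / N_s
prior_by_size = {2: 0.0125, 7: 0.3067}

# both members of a size-2 candidate rejected by a 90%-precision classifier
print(should_reject(2, 2, 0.9, prior_by_size[2]))   # -> True
# no members of a size-7 candidate rejected
print(should_reject(7, 0, 0.9, prior_by_size[7]))   # -> False
```

A candidate whose members are mostly rejected by an accurate character classifier thus accumulates overwhelming posterior mass on non-text and is eliminated.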
\begin{figure}[htb!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{RegionEliminationStatF-crop.pdf}
\caption{}
\label{fig:precision_recall}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{RegionEliminationStat-crop.pdf}
\caption{}
\label{fig:elimination}
\end{subfigure}
\caption{Performance of different $\varepsilon$ on the
validation set.
(a) Precision, recall and $f$ measure of text
classification task;
(b) ratio of preserved text samples, ratio of eliminated
non-text samples and ratio of text samples.
}\label{fig:varepsilon_perfomance}
\end{figure}
To find an appropriate $\varepsilon$, we used 70\% of the ICDAR
training dataset to train the character classifier and the text classifier,
and the remaining 30\% as a validation set to test the performance
of different values of $\varepsilon$.
Figure~\ref{fig:precision_recall} shows the precision,
recall and $f$ measure of the text candidates classification
task on the validation set.
As $\varepsilon$ increases, text candidates are less
likely to be eliminated, which results in an increase in
recall. In the scene text detection task, recall is
preferred over precision, until $\varepsilon = 0.995$ is
reached, where a major decrease of the $f$ measure occurs,
which can be explained by the sudden decrease of the ratio of
text samples (see Figure~\ref{fig:elimination}).
Figure~\ref{fig:elimination} shows that at $\varepsilon =
0.995$, $92.95\%$ of text are preserved, while $95.25\%$ of
non-text are eliminated.
\section{Experimental Results}
\label{section:experimental_results}
In this section, we present the experimental results of
the proposed scene text detection method\footnote{
An online demo of the proposed scene text detection system
is available at \emph{\url{http://kems.ustb.edu.cn/learning/yin/dtext}}.
} on two publicly
available benchmark datasets, the ICDAR 2011 Robust
Reading Competition dataset
\footnote{The ICDAR 2011 Robust
Reading Competition dataset is available at
\emph{\url{http://robustreading.opendfki.de/wiki/SceneText}}.}
and the multilingual dataset
\footnote{The multilingual dataset is available at \emph{\url{http://liama.ia.ac.cn/wiki/projects:pal:member:yfpan}}.}
provided by Pan et al.~\cite{pan}.
\subsection{Experiments on ICDAR 2011 Competition Dataset }
\label{section:icdar2011}
The ICDAR 2011 Robust Reading Competition (Challenge 2: Reading Text in
Scene Images) dataset~\cite{icdar2011} is a widely used
dataset for benchmarking scene text detection algorithms.
The dataset contains $229$ training images
and $255$ testing images.
The proposed system is trained on the
training set and evaluated on the testing set.
It is worth noting that the evaluation scheme of the ICDAR 2011
competition is not the same as that of ICDAR 2003 and ICDAR
2005.
The new scheme, the \emph{object count/area} scheme
proposed by Wolf et al.~\cite{Wolf_Jolion_2006}, is more
complicated but offers
several enhancements over the old scheme.
Both schemes use the notions of precision,
recall and $f$ measure, defined as
\begin{align}
& recall = \frac{\sum_{i=1}^{|G|} match_G(G_i)}{|G|}, \\
& precision = \frac{\sum_{j=1}^{|D|} match_D(D_j)}{|D|}, \\
& f = 2 \frac{recall \cdot precision}{recall + precision},
\end{align}
where $G$ is the set of groundtruth rectangles and $D$ is
the set of detected rectangles.
In the old evaluation scheme, the matching functions are defined
as
\begin{align}
& match_G(G_i) = \max_{j = 1...|D|} \frac{2 \cdot area(G_i
\cap D_j)}{area(G_i) + area(D_j)}, \\
& match_D(D_j) = \max_{i = 1...|G|} \frac{2 \cdot area(D_j
\cap G_i)}{area(D_j) + area(G_i)}.
\end{align}
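These old-scheme matching functions translate directly into code; the sketch below (our illustration, assuming axis-aligned rectangles given as $(x_1, y_1, x_2, y_2)$) computes recall, precision and $f$:

```python
def area(r):
    # rectangle r = (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def inter(a, b):
    # area of the intersection rectangle (empty intersection -> 0)
    return area((max(a[0], b[0]), max(a[1], b[1]),
                 min(a[2], b[2]), min(a[3], b[3])))

def match(r, others):
    # best one-to-one match score of rectangle r against a rectangle set
    return max((2.0 * inter(r, o) / (area(r) + area(o)) for o in others),
               default=0.0)

def evaluate(G, D):
    recall = sum(match(g, D) for g in G) / len(G)
    precision = sum(match(d, G) for d in D) / len(D)
    f = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f

# a perfect detection and a half-height detection
print(evaluate([(0, 0, 2, 2)], [(0, 0, 2, 2)]))  # (1.0, 1.0, 1.0)
print(evaluate([(0, 0, 2, 2)], [(0, 0, 2, 1)]))
```

The half-height example scores $2 \cdot 2 / (4 + 2) = 2/3$ on both sides, illustrating how partial overlap is credited.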
The above matching functions only consider one-to-one matches
between groundtruth and detected rectangles, leaving room
for ambiguity between detection quantity and
quality~\cite{Wolf_Jolion_2006}.
In the new evaluation scheme, the
matching functions are redesigned to consider detection quality
and different matching situations (one-to-one,
one-to-many and many-to-one matchings) between groundtruth
rectangles and detected rectangles, so that both detection
quantity and quality can be observed under the new
evaluation scheme.
The evaluation software DetEval\footnote
{DetEval is available at \emph{\url{http://liris.cnrs.fr/christian.wolf/software/deteval/index.html}}.}
used by the ICDAR 2011 competition is available online and free to use.
The performance of our system, together with Neumann and Matas'
method~\cite{real_time}, a very recent MSER based method by
Shi et al.~\cite{mser2013}
and some of the top
scoring methods
(Kim's method, Yi's method, TH-TextLoc system and Neumann's method)
from ICDAR 2011 Competition
are presented in
Table~\ref{table:performance}.
As can be seen from Table~\ref{table:performance},
our method produced much better recall, precision and
$f$ measure than the other methods on this dataset.
It is worth noting that the first four methods in
Table~\ref{table:performance} are all MSER based methods and
Kim's method is the winning method of ICDAR 2011 Robust
Reading Competition.
Apart from the detection quality, the proposed system offers
speed advantage over some of the listed methods.
The average processing speed of the proposed system on a
Linux laptop with an Intel(R) Core(TM)2 Duo 2.00GHz CPU is 0.43s
per image.
The processing speed of Shi et al.'s method~\cite{mser2013}
on a PC with an Intel(R) Core(TM)2 Duo 2.33GHz CPU is 1.5s
per image.
The average processing speed of Neumann and Matas'
method~\cite{real_time} is 1.8s per image on a ``standard
PC''.
Figure~\ref{fig:samples_icdar} shows some text detection
examples by our system on ICDAR 2011 dataset.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{samples_icdar.pdf}
\caption{Text detection examples on the ICDAR 2011
dataset.
Detected text by our system are labeled using red rectangles.
Notice the robustness against low contrast, complex
background and font variations.}
\label{fig:samples_icdar}
\end{figure}
\begin{table}[htb!]
\centering
\caption{Performance ($\%$) comparison of text detection algorithms on ICDAR 2011 Robust Reading Competition dataset.}
\label{table:performance}
\begin{tabular} {|c|c|c|c|}
\hline
Methods & Recall & Precision & $f$ \\ \hline
\textbf{Our Method} & \textbf{68.26} & \textbf{86.29} & \textbf{76.22} \\ \hline
Shi et al.'s method~\cite{mser2013} & 63.1 & 83.3 & 71.8 \\ \hline
Kim's Method (not published) & 62.47 & 82.98 & 71.28 \\ \hline
Neumann and Matas~\cite{real_time} & 64.7 & 73.1 & 68.7 \\ \hline
Yi's Method & 58.09 & 67.22 & 62.32 \\ \hline
TH-TextLoc System & 57.68 & 66.97 & 61.98 \\ \hline
Neumann's Method & 52.54 & 68.93 & 59.63 \\ \hline
\end{tabular}
\end{table}
To fully appreciate the benefits of
\emph{text candidates elimination}
and \emph{the MSERs pruning algorithm},
we further profiled the proposed system on this dataset using the following
schemes (see Table~\ref{table:component_profile}).
1) \textbf{Scheme-I}: no text candidates elimination is
performed. As can be seen from
Table~\ref{table:component_profile}, the absence of text
candidates elimination results in a major decrease in
precision.
The degradation can be explained by the fact that a
large number of non-text candidates are
passed to the text candidates classification stage without
being eliminated.
2) \textbf{Scheme-II}: the default parameter setting~\cite{vlfeat} is used for the
MSER extraction algorithm.
The MSER extraction algorithm is controlled by several
parameters~\cite{vlfeat}: $\Delta$ controls how the variation is
calculated; the maximal variation $v_{+}$ excludes too unstable MSERs;
the minimal diversity $d_{+}$ removes duplicate MSERs by measuring the size difference
between an MSER and its parent.
As can be seen from Table~\ref{table:component_profile},
compared with our parameter setting ($\Delta = 1, v_+=0.5, d_+=0.1$),
the default parameter setting ($\Delta = 5, v_+=0.25,
d_+=0.2$) results in a major decrease in recall value.
The degradation can be explained by two reasons:
(1) the MSER algorithm is not able to detect some
low contrast characters (due to $v_+$), and
(2) the MSER algorithm tends to miss some regions that are
more likely to be characters (due to $\Delta$ and $d_+$).
Note that the speed loss (from 0.36 seconds to 0.43 seconds)
is mostly due to the MSER detection algorithm itself.
\begin{table}[htb!]
\centering
\caption{Performance (\%) of the proposed method under the
different profiling schemes}
\label{table:component_profile}
\begin{tabular} {|c|c|c|c|c|}
\hline
Component & Recall & Precision & $f$ & Speed (s) \\ \hline
Overall system & 68.26 & 86.29 & 76.22 & 0.43 \\ \hline
\textbf{Scheme-I} & 65.57 & 77.49 & 71.03 & 0.41 \\ \hline
\textbf{Scheme-II} & 61.63 & 85.78 & 71.72 & 0.36 \\ \hline
\end{tabular}
\end{table}
\subsection{Experiments on Multilingual Dataset}
The multilingual dataset (including Chinese and English, see
Figure~\ref{fig:samples_multilingual}) was
initially published by Pan et al.~\cite{pan} to evaluate the
performance of their scene text detection system.
The training dataset contains $248$ images and the testing
dataset contains $239$ images.
As there is no apparent spacing between Chinese words,
this multilingual dataset only provides groundtruths for
text lines.
We hence evaluate the text line detection performance of
our system without further partitioning text into words.
Figure~\ref{fig:samples_multilingual} shows some scene text
detection examples by our system on this dataset.
\begin{table}[htb!]
\centering
\caption{Performance ($\%$) comparison of text detection
algorithms on the multilingual dataset.
Speed of Pan et al.'s method is profiled on a
PC with Pentium D 3.4GHz CPU.
}
\label{table:performance_multilingual}
\begin{tabular} {|c|c|c|c|c|}
\hline
Methods & Recall & Precision & $f$ & Speed (s) \\ \hline
\textbf{Scheme-III} & {63.23} & {79.38} & {70.39} & {0.22} \\ \hline
\textbf{Scheme-IV} & {68.45} & {82.63} & {74.58} & {0.22} \\ \hline
Pan et al.'s method~\cite{pan} & 65.9 & 64.5 & 65.2 & 3.11 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{samples_multilingual.pdf}
\caption{Text detection examples on the multilingual
dataset. Detected text by our system are labeled using red rectangles.
}
\label{fig:samples_multilingual}
\end{figure}
The performance of our system (\textbf{Scheme-III} and
\textbf{Scheme-IV}) and Pan et al.'s method~\cite{pan} is presented in
Table~\ref{table:performance_multilingual}.
The evaluation scheme of the ICDAR 2003 competition (see
Section~\ref{section:icdar2011}) is used for fair comparison.
The main difference between \textbf{Scheme-III} and
\textbf{Scheme-IV} is that
the character classifier in the first scheme is trained on
the ICDAR 2011 training set while the character classifier in
the second scheme is trained on the multilingual training set
(the character features for training the classifier are the same).
The comparison between \textbf{Scheme-III} and
\textbf{Scheme-IV} in
Table~\ref{table:performance_multilingual}
shows that the performance of the proposed system is
significantly improved by incorporating the
Chinese-friendly character classifier.
The basic implication of this improvement is that the
character classifier has a significant
impact on the performance of the overall system, which
reveals another advantage of the proposed system: the
character classifier can be trained on a desired dataset until
it is accurate enough, then plugged into the system to
improve the overall performance.
Table~\ref{table:performance_multilingual} also shows the
advantages of the proposed method over Pan et al.'s method
in both detection quality and speed.
\section{Conclusion}
\label{section:conclusion}
This paper presents a new MSER based scene text
detection method.
Several key improvements over traditional methods have been
proposed.
We propose a fast and accurate MSERs pruning algorithm that
enables us to detect most of the characters even when the image
is of low quality.
We propose a novel self-training distance metric learning algorithm that can
learn distance weights and a threshold simultaneously;
text candidates are constructed by clustering character
candidates with the single-link algorithm using the learned
parameters.
We propose to use a character classifier to estimate the
posterior probability of each text candidate corresponding to non-text and
eliminate text candidates with a high probability of being non-text, which helps
to build a more powerful text classifier.
By integrating the above ideas, we built a robust scene text
detection system that exhibits superior performance over
state-of-the-art methods on both the ICDAR 2011 Competition dataset and a multilingual dataset.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
The research was partly supported by National Basic Research Program of China (2012CB316301)
and National Natural Science Foundation of China (61105018, 61175020).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Network models of stock markets have attracted considerable attention in theoretical and applied
research. Different graph structures related to stock market networks are considered in the literature \cite{Boginski_2006}. One such graph structure, the maximum spanning tree (MST), is a popular tool in market network analysis. Many papers are devoted to the use of the MST for particular stock markets (see the recent papers \cite{Walle_2018}, \cite{Sensoya_2014}, \cite{Wang_2012} and the exhaustive bibliography in \cite{Marti_Arxive}). However, much less attention is paid in the literature to the estimation of the uncertainty of the obtained results. One particular way to measure uncertainty is the bootstrap technique applied to the observed data \cite{Tumminello_2007}. In the present paper we suggest a general theoretical framework to measure the uncertainty of MST identification. We study uncertainty in the framework of the concept of a random variable network \cite{Kalyagin_2020}.
A random variable network (RVN) is a pair $(X,\gamma)$, where $X=(X_1,X_2,\ldots,X_N)$ is a random vector and $\gamma$ is a measure of similarity between pairs of random variables. This concept allows us to introduce the {\it true MST} associated with an RVN. We call the true MST the maximum spanning tree in the complete weighted graph $(V,\Gamma)$, where $V=\{1,2,\ldots,N\}$ is the set of nodes (vertices), and $\Gamma=(\gamma_{i,j})$ is the matrix of weights, $\gamma_{i,j}=\gamma(X_i,X_j)$, $i,j=1,2,\ldots,N$, $i \neq j$, $\gamma_{i,j}=0$ for $i=j$. To model the distribution of the vector $X$ we use the large class of elliptical distributions, which is widely used in applied finance \cite{Gupta_2013}. To measure similarity between stocks we consider different correlation networks for the stock returns: the Pearson correlation network, the Fechner correlation network, and the Kendall correlation network. The Pearson correlation is the most used in market network analysis. We show in the paper that for elliptical distributions the true MSTs in the Fechner and Kendall correlation networks are the same as the true MST in the Pearson correlation network for the Gaussian distribution. This fact gives a theoretical basis for a correct comparison of the uncertainty of MST identification algorithms in different networks.
The uncertainty of MST identification in our setting is related to the difference between the true MST and the MST identified from observations. To assess the uncertainty of MST identification we analyze different error rates known in multiple testing and binary classification. We argue that the most appropriate error rate for MST identification is the well known False Discovery Rate (FDR). In our case the FDR is the proportion of false edges (incorrectly identified edges) in the MST. We investigate the FDR of the Kruskal algorithm for MST identification and show that the reliability of MST identification is different in the three correlation networks. We emphasize that for the Pearson correlation network the FDR essentially depends on the distribution of stock returns. We prove that for the Fechner correlation network the FDR is insensitive to the assumption on the stock return distribution. New and surprising phenomena are discovered for the Kendall correlation network. Our experiments show that the FDR of the Kruskal algorithm for MST identification in the Kendall correlation network depends weakly on the distribution and at the same time its value is almost the best in comparison with MST identification in the other networks. These facts are important in practical applications.
The paper is organized as follows. In Section \ref{Basic definitions and notations} we present the necessary definitions and notations. In Section \ref{Connection} we prove that the MST is the same in the three correlation networks for a large class of elliptical distributions. In Section \ref{Uncertainty of MST} we discuss measures of uncertainty of MST identification. Section \ref{Kruskal algorithm for MST identification} is devoted to the description of the Kruskal algorithm for MST identification in different correlation networks. In Section \ref{Robustness of Kruskal algorithm} we prove the robustness of the Kruskal algorithm for MST identification in the Fechner correlation network. In Section \ref{Reliability of Kruskal algorithm} we present the results of a numerical investigation of the reliability of the Kruskal algorithm in different correlation networks. Section \ref{Concluding remarks} summarizes the main results of the paper and discusses further research.
\section{Basic definitions and notations.}\label{Basic definitions and notations}
Random variable network is a pair $(X,\gamma)$, where $X=(X_1,\ldots,X_N)$ is a random vector, and $\gamma$ is a pairwise measure of similarity between random variables.
One can consider different random variable networks associated with different distributions of the random vector $X$ and different measures of similarity $\gamma$. For example, the Gaussian Pearson correlation network is the random variable network where $X$ has a multivariate Gaussian distribution and $\gamma$ is the Pearson correlation. In the same way one can consider the Gaussian partial correlation network, the Gaussian Kendall correlation network, the Student Pearson correlation network, and so on.
The random variable network generates a network model. The network model for a random variable network $(X, \gamma)$ is the complete weighted graph $(V,\Gamma)$ with $N$ nodes, where $V=\{1,2,\ldots,N\}$ is the set of nodes and $\Gamma=(\gamma_{i,j})$ is the matrix of weights, $\gamma_{i,j}=\gamma(X_i,X_j)$. A spanning tree in the network model $(V,\Gamma)$ is a connected subgraph $(V,E)$ without cycles. The weight of the spanning tree $(V,E)$ is the sum of the weights of its edges, $\sum_{(i,j) \in E} \gamma_{i,j}$. The maximum spanning tree (MST) is the spanning tree with maximal weight. In what follows we consider the MST as an unweighted graph. The MST obtained in this way will be called the {\it true MST} or the {\it MST in the true network model}. Many algorithms are known to calculate the minimum spanning tree in an undirected weighted graph \cite{Gross_2006}. All of them can easily be transformed into algorithms to calculate the maximum spanning tree. In this paper we use the classical Kruskal algorithm:
\vskip 2mm
\noindent
{\bf Kruskal algorithm for calculation of the true MST:} the Kruskal algorithm computes the edge set $MST$ of a maximum spanning tree in the network model $(V,\Gamma)$ by the following steps:
\begin{itemize}
\item Sort the edges of the complete weighted graph $(V,\Gamma)$ into decreasing order by weights $\gamma_{i, j}$.
\item Add the first edge to $MST$.
\item Add the next edge to $MST$ if and only if it does not form a cycle in the current $MST$.
\item If $MST$ has $(N-1)$ edges, stop and output $MST$. Otherwise go to the previous step.
\end{itemize}
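The steps above can be sketched in a few lines; a union-find structure implements the cycle test, and the node labels and weights are illustrative:

```python
def max_spanning_tree(n, weights):
    """Kruskal's algorithm on a graph with nodes 0..n-1.

    weights: dict mapping an edge (i, j), i < j, to its weight gamma_{i,j}.
    Returns the edge list of a maximum spanning tree.
    """
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    mst = []
    # sort the edges into decreasing order by weight
    for (i, j), _ in sorted(weights.items(), key=lambda e: -e[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                         # adding (i, j) creates no cycle
            parent[ri] = rj
            mst.append((i, j))
            if len(mst) == n - 1:            # spanning tree complete
                break
    return mst

# toy network model on 4 nodes
gamma = {(0, 1): 0.9, (0, 2): 0.2, (0, 3): 0.4,
         (1, 2): 0.8, (1, 3): 0.1, (2, 3): 0.7}
print(max_spanning_tree(4, gamma))  # [(0, 1), (1, 2), (2, 3)]
```

On this toy input the three heaviest edges happen to form a tree, so they are accepted in order and the algorithm stops after $N-1 = 3$ edges.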
We consider three correlation networks, the Pearson correlation network, the Fechner correlation network, and the Kendall correlation network, with an elliptical distribution of the vector $X$.
Pearson correlation network is a random variable network with Pearson correlation as the measure of similarity $\gamma=\gamma^P$
\begin{equation}\label{Pearson measure}
\gamma^P_{i,j}=\gamma^P(X_i,X_j)=\frac{Cov(X_i,X_j)}{\sqrt{Cov(X_i,X_i)}\sqrt{Cov(X_j,X_j)}}
\end{equation}
Fechner correlation network is a random variable network with the Fechner correlation as the measure of similarity, $\gamma=\gamma^{Fh}=2 \gamma^{Sg}-1$,
where $\gamma^{Sg}$ is the so-called sign similarity
\begin{equation}\label{Sign similarity}
\gamma^{Sg}_{i,j}=\gamma^{Sg}(X_i,X_j)=P\{(X_i-E(X_i))(X_j-E(X_j))>0\}
\end{equation}
\begin{equation}\label{Fechner correlation}
\gamma^{Fh}_{i,j}=2\gamma^{Sg}_{i,j}-1
\end{equation}
Kendall correlation network is a random variable network with Kendall correlation as the measure of similarity $\gamma=\gamma^{Kd}$
\begin{equation}\label{Kendall correlation}
\gamma^{Kd}_{i,j}=\gamma^{Kd}(X_i,X_j)=2P\{(X_i^{(1)}-X_i^{(2)})(X_j^{(1)}-X_j^{(2)})>0\}-1
\end{equation}
where $(X_i^{(1)}, X_j^{(1)})$, $(X_i^{(2)}, X_j^{(2)})$ are two independent random vectors with the same distribution as the vector $(X_i,X_j)$ (see \cite{Kruskal_1958}).
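These sign-based measures can be estimated from observations by replacing probabilities with sample frequencies. A sketch of the plug-in estimators (our illustration, assuming no ties in the data; the sample values are illustrative):

```python
def fechner(x, y):
    """Sample Fechner correlation: 2 * sign-coincidence frequency - 1."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    agree = sum((xi - mx) * (yi - my) > 0 for xi, yi in zip(x, y))
    return 2.0 * agree / len(x) - 1.0

def kendall(x, y):
    """Sample Kendall correlation over all pairs of observations."""
    n = len(x)
    concordant = sum((x[a] - x[b]) * (y[a] - y[b]) > 0
                     for a in range(n) for b in range(a + 1, n))
    return 4.0 * concordant / (n * (n - 1)) - 1.0

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]       # y is increasing in x
print(fechner(x, y))           # 1.0
print(kendall(x, y))           # 1.0
```

For perfectly monotone data every observation agrees in sign with the means and every pair is concordant, so both estimates attain their maximal value $1$.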
A random vector $X$ belongs to the class of elliptically contoured distributions (elliptical distributions) if its density function has the form \cite{Anderson_2003}:
\begin{equation}\label{density_of_elliptical_distribution}
f(x; \mu, \Lambda)=|\Lambda|^{-\frac{1}{2}}g\{(x-\mu)'\Lambda^{-1}(x-\mu)\}
\end{equation}
where $\Lambda=(\lambda_{i,j})_{i,j=1,2,\ldots,N}$ is positive definite symmetric matrix, $g(x)\geq 0$, and
$$
\int_{-\infty}^{\infty}\ldots\int_{-\infty}^{\infty}g(y'y)dy_1 dy_2\cdots dy_N=1
$$
This class includes in particular multivariate Gaussian distribution
$$
f_{Gauss}(x)=\frac{1}{(2\pi)^{N/2}|\Lambda|^{\frac{1}{2}}}e^{-\frac{1}{2}(x-\mu)'\Lambda^{-1}(x-\mu)}
$$
and the multivariate Student distribution with $\nu$ degrees of freedom
$$
f_{Student}(x)=\frac{\Gamma\left(\frac{\nu+N}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\nu^{N/2}\pi^{N/2}}|\Lambda|^{-\frac{1}{2}}\left[1+\frac{(x-\mu)'\Lambda^{-1}(x-\mu)}{\nu}\right]^{-\frac{\nu+N}{2}}
$$
The class of elliptical distributions is a natural generalization of the class of Gaussian distributions. Many properties of Gaussian distributions have analogs for elliptical distributions, but the class is much larger; in particular, it includes distributions with heavy tails. For a detailed investigation of elliptical distributions see \cite{Fang_1990}, \cite{Anderson_2003}, \cite{Gupta_2013}. It is known that if $E(X)$ exists, then $E(X)=\mu$. An important property of elliptical distributions is the connection between the covariance matrix of the vector $X$ and the matrix $\Lambda$. Namely, if the covariance matrix exists, one has
\begin{equation}\label{Covariance_Lambda}
\sigma_{i,j}=Cov(X_i, X_j)=C \cdot \lambda_{i,j}
\end{equation}
where
$$
C=\frac{2\pi^{\frac{1}{2}N}}{N\,\Gamma(\frac{1}{2}N)}\int_0^{+\infty}r^{N+1}g(r^2)dr
$$
In particular, for the Gaussian distribution one has $Cov(X_i, X_j)=\lambda_{i,j}$. For the multivariate Student distribution with $\nu$ degrees of freedom ($\nu>2$) one has $\sigma_{i,j}=\frac{\nu}{\nu-2}\lambda_{i,j}$.
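The covariance scaling for the Student case can be checked by simulation: a multivariate Student vector with $\nu$ degrees of freedom can be generated as a Gaussian vector with covariance $\Lambda$ divided by an independent $\sqrt{\chi^2_\nu/\nu}$ factor. The following sketch (the matrix $\Lambda$, seed, and sample size are arbitrary choices for illustration) verifies that the sample covariance is close to $\frac{\nu}{\nu-2}\Lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 5, 200_000
Lam = np.array([[1.0, 0.5],
                [0.5, 2.0]])               # an arbitrary positive definite Lambda

# Multivariate Student sample: a Gaussian vector scaled by sqrt(nu / chi^2_nu)
Z = rng.multivariate_normal(np.zeros(2), Lam, size=n)
W = rng.chisquare(nu, size=n)
X = Z / np.sqrt(W / nu)[:, None]

C_hat = np.cov(X, rowvar=False)
print(C_hat / Lam)  # each entry close to nu/(nu-2) = 5/3
```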
\section{Connection between random variable networks}\label{Connection}
There is a connection between the three networks for a vector $X$ with an elliptical distribution with the same matrix $\Lambda$. Let $\Lambda$ be a fixed positive definite matrix of dimension $(N \times N)$. Denote by $K(\Lambda)$ the class of distributions whose density functions have the form (\ref{density_of_elliptical_distribution}). The following statement holds.
\begin{thm}
Let $X \in K(\Lambda)$. If the covariance matrix of $X$ exists, then the true MST in the Pearson, Fechner, and Kendall correlation networks is the same for every network and every distribution of the vector $X$. This MST coincides with the true MST for the multivariate Gaussian distribution with covariance matrix $\Lambda$.
\end{thm}
\noindent
{\bf Proof.} Let $X$ be a random vector with elliptical distribution (\ref{density_of_elliptical_distribution}) with the matrix $\Lambda$ and the function $g(u)$. The relation (\ref{Covariance_Lambda}) implies that
$$
\gamma^P(X_i,X_j)=\displaystyle \frac{\lambda_{i,j}}{\sqrt{\lambda_{i,i}\lambda_{j,j}}},
$$
that is, $\gamma^P(X_i,X_j)$ does not depend on the function $g(u)$ and is defined by the matrix $\Lambda$ only. We will prove that the same is true for the Fechner and Kendall correlations. For the sign similarities $\gamma_{i,j}^{Sg}$ this fact is proved in \cite{Kalyagin_2017}, Lemmas 1 and 2; therefore it also holds for the Fechner correlations $\gamma^{Fh}_{i,j}=2\gamma_{i,j}^{Sg}-1$. Moreover, it is proved in \cite{Kalyagin_2017} that
$$
\gamma_{i,j}^{Sg}=\displaystyle \frac{1}{2}+\frac{1}{\pi}\arcsin (\gamma^P_{i,j})
$$
For the Kendall correlations, consider two independent random vectors $X^{(1)}$, $X^{(2)}$ with the same distribution as the vector $X$. It can be easily proved that in this case the random vector $(X^{(1)}-X^{(2)})$ has an elliptical distribution \cite{Lindskog_2003}. Calculation of the covariance matrix of this vector gives
$$
Cov(X^{(1)}_i-X^{(2)}_i, X^{(1)}_j-X^{(2)}_j)=2 Cov(X_i,X_j)=2C\lambda_{i,j}
$$
Therefore
$$
\gamma_{i,j}^{Kd}=2\gamma^{Sg}(X^{(1)}_i-X^{(2)}_i,X^{(1)}_j-X^{(2)}_j)-1=\displaystyle \frac{2}{\pi}\arcsin\left(\frac{\lambda_{i,j}}{\sqrt{\lambda_{i,i}\lambda_{j,j}}}\right)
$$
This implies that the Kendall correlations do not depend on the function $g(u)$.
Moreover, the following relations hold for any distribution from the class $K(\Lambda)$:
$$
\gamma^{Fh}_{i,j}=\gamma^{Kd}_{i,j}=\displaystyle \frac{2}{\pi}\arcsin(\gamma^P_{i,j}).
$$
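These relations are easy to check numerically for the Gaussian member of $K(\Lambda)$; the sketch below (with an arbitrary $\rho$, seed, and sample size of our own choosing) compares the sample Fechner correlation with $\frac{2}{\pi}\arcsin(\gamma^P)$.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.6, 100_000                      # illustrative correlation and sample size
X = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Sample sign similarity (the means are known to be 0) and Fechner correlation
sign_sim = np.mean(X[:, 0] * X[:, 1] > 0)
fechner = 2.0 * sign_sim - 1.0
print(fechner, 2.0 / np.pi * np.arcsin(rho))  # both close to 0.41
```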
To calculate the true MST in each of the three networks one can use the Kruskal algorithm. The first step of the algorithm is to sort the edges of the complete weighted graph $(V,\Gamma)$ into decreasing order by the weights $\gamma_{i,j}$. Note that $\gamma^{Fh}_{i,j}$ and $\gamma^{Kd}_{i,j}$ are obtained from $\gamma^P_{i,j}$ by an increasing function, so the first step of the Kruskal algorithm gives the same edge ordering in all three networks. The subsequent steps of the algorithm depend only on this ordering and not on the particular values of the edge weights. Therefore the true maximum spanning tree (true MST) is the same in all networks for any distribution of the vector $X \in K(\Lambda)$. The true MST for the multivariate Gaussian distribution with covariance matrix $\Lambda$ is a particular case of such an MST.
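The invariance of the Kruskal output under an increasing transformation of the weights can be illustrated directly. The snippet below is a minimal union-find implementation of the Kruskal maximum-spanning-tree step (our own illustration, not the authors' code), applied to a random symmetric weight matrix and to its $\frac{2}{\pi}\arcsin$ transform.

```python
import numpy as np

def kruskal_mst_edges(G):
    """Edge set of the maximum spanning tree of a symmetric weight matrix G."""
    N = G.shape[0]
    edges = sorted(((G[i, j], i, j) for i in range(N) for j in range(i + 1, N)),
                   reverse=True)
    parent = list(range(N))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    mst = set()
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # adding (i, j) does not create a cycle
            parent[ri] = rj
            mst.add((i, j))
    return mst

rng = np.random.default_rng(2)
A = rng.uniform(-1.0, 1.0, (6, 6))
G = (A + A.T) / 2                    # symmetric weights in (-1, 1)
print(kruskal_mst_edges(G) == kruskal_mst_edges(2 / np.pi * np.arcsin(G)))  # True
```

Since $\frac{2}{\pi}\arcsin$ is strictly increasing, the sorted edge order, and hence the resulting tree, is identical for both weight matrices.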
This statement gives a basis for a correct comparison of reliability of Kruskal algorithm for MST identification in different correlation networks.
\noindent
{\bf Remark:} The relation between the Kendall and Pearson correlations for elliptical distributions is known \cite{Fang_2002}, \cite{Lindskog_2003}. We give a sketch of the proof here for the sake of completeness.
\section{Uncertainty of MST identification in random variable network}\label{Uncertainty of MST}
The main problem under discussion in this paper is the reliability of the identification of the true MST from observations. Let $(X,\gamma)$ be a random variable network and $(V,\Gamma)$ the associated network model. The true maximum spanning tree (true MST) is the spanning tree in $(V,\Gamma)$ with maximal weight. Let $X(t)$, $t=1,2,\ldots, n$ be a sample from the distribution of $X$, and denote by $x(t)$ the observed value of the random vector $X(t)$. The sample space consists of the matrices $x=(x_j(t)) \in R^{N \times n}$. We define the decision space as the space of all adjacency matrices $S$ of the spanning trees in $(V,\Gamma)$:
$$
{\cal{D}}=\{S : \ S \in R^{N \times N}, S \ \mbox{ is adjacency matrix for a spanning tree in} \ (V,\Gamma) \}
$$
Any MST identification algorithm $\delta=\delta(x)$ is a map from the sample space $R^{N \times n}$ to the decision space $\cal{D}$.
The quality of an identification algorithm $\delta$ is related to the difference between the true maximum spanning tree and the spanning tree given by $\delta$, which can be evaluated by a loss function $w(S,Q)$, where $S=(s_{i,j})$ is the true decision and $Q=(q_{i,j})$ is the decision given by $\delta$.
The uncertainty of an identification algorithm $\delta$ is then measured by the expected value of the loss function, known as the risk function
\begin{equation}\label{risk_function}
Risk(S; \delta)= \sum_{Q \in \cal{D}} w(S,Q)P(\delta=Q)
\end{equation}
The choice of the loss function is an important point for uncertainty evaluation. To discuss an appropriate choice of the loss function for MST identification we consider the following tables, familiar from binary classification. Table \ref{s_ij_q_ij} illustrates Type I and Type II errors for an individual edge $(i,j)$; it represents all possible combinations of the values of $s_{i,j}$ and $q_{i,j}$. The value $0$ means that the edge $(i,j)$ is not included in the MST,
and the value $1$ means that it is included. We associate the case $s_{i,j}=0$, $q_{i,j}=1$ with a Type I error (false edge inclusion), and the case $s_{i,j}=1$, $q_{i,j}=0$ with a Type II error (false edge exclusion).
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& & \\
$q_{i,j} \backslash s_{i,j}$ & 0 & 1 \\
& & \\
\hline
& & \\
0 & \ edge is not included correctly & Type II error \ \\
& & \\
\hline
& & \\
1 & \ Type I error & edge is included correctly \ \\
& & \\
\hline
\end{tabular}
\end{center}
\caption{Type I (false edge inclusion) and Type II (false edge exclusion) errors for the edge $(i,j)$}\label{s_ij_q_ij}
\end{table}
Table \ref{TN_TP} presents the number of Type I errors (False Positive, FP), the number of Type II errors (False Negative, FN), and the numbers of correct decisions (True Positive, TP, and True Negative, TN). This table has specific properties for MST identification. First, the number of edges in any spanning tree is equal to $(N-1)$,
so $FP+TP=N-1$ and $FP+TN=M-(N-1)$, where $M=C^2_N$. Second, each falsely included edge implies a falsely excluded edge and vice versa, that is, $FP=FN$. In addition, $0 \leq FP \leq N-1$ and $M-2(N-1) \leq TN \leq M-(N-1)$.
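The identity $FP=FN$ follows from the fact that both trees contain exactly $N-1$ edges; with hypothetical edge sets (our own small example, $N=5$) it reads:

```python
# Hypothetical edge sets of the true MST S and the identified MST Q (N = 5)
S = {(0, 1), (1, 2), (2, 3), (3, 4)}
Q = {(0, 1), (1, 2), (2, 4), (0, 4)}

FP = len(Q - S)      # falsely included edges
FN = len(S - Q)      # falsely excluded edges
TP = len(S & Q)
print(FP, FN, FP + TP)  # 2 2 4: FP = FN and FP + TP = N - 1
```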
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& & & \\
$Q \backslash S$ & 0 in $S$ & 1 in $S$ & Total\\
& & & \\
\hline
& & & \\
0 in $Q$ & TN & FN & number of 0 in $Q$ \\
& & & \\
\hline
& & & \\
1 in $Q$ & FP & TP & number of 1 in $Q$ \ \\
& & & \\
\hline
& & & \\
Total & number of 0 in $S$ & number of 1 in $S$ & $N(N-1)/2$ \\
& & & \\
\hline
\end{tabular}
\end{center}
\caption{Numbers of Type I and Type II errors for MST identification}\label{TN_TP}
\end{table}
Now we discuss the choice of loss and risk functions appropriate for MST identification from observations. The simplest loss function is
$$
w_{Simple}(S,Q)=
\left\{
\begin{array}{ll}
1, & \mbox{if } S \neq Q \\
0, & \mbox{if } S = Q
\end{array}
\right.
$$
The associated risk is the probability of a false decision, $Risk(S; \delta)=P(\delta(x) \neq S)$. For MST identification this coincides with the
FWER (Family-Wise Error Rate) known in multiple hypothesis testing, the probability of at least one Type I error \cite{Hochberg_1987}, \cite{Bretz_2011}, \cite{Lehmann_2005}. Indeed, the true MST is correctly identified if and only if $FP=FN=0$. This measure of uncertainty takes into account only the fact of correct identification of the MST (no errors), not the number of errors. Moreover, one can show by simulations that the probability of a correct decision in MST identification is very small even when the number of observations is large \cite{Kalyagin_2014_1}.
Other error rates, such as the Conjunctive Power (CPOWER) and Disjunctive Power (DPOWER) known in multiple hypothesis testing, are related to Type II errors \cite{Bretz_2011}. In the case of MST identification these error rates are connected with the FWER; in particular, $CPOWER=FWER$. Therefore they do not give a new measure of uncertainty.
The measures of uncertainty considered so far do not take into account the numbers of errors. In multiple hypothesis testing there are error rates that do: the Per-Family Error Rate (PFER), the Per-Comparison Error Rate (PCER), the Average Power (AVE), and the True Positive Rate (TPR). PFER is defined as the expected number of Type I errors; the associated loss function is $w_{PFER}=FP$. PCER is defined by $PCER=PFER/M$, $M=N(N-1)/2$. The loss function for the Average Power is $w_{AVE}=TP/(FN+TP)$. In binary classification $Risk_{AVE}$ is related to the True Positive Rate (TPR), also called Sensitivity or Recall.
For MST identification all these uncertainty characteristics are related to the False Discovery Rate (FDR),
defined by the loss function $w_{FDR}=FP/(FP+TP)$. In our case one has $PFER=(N-1)FDR$, $PCER=2FDR/N$, $AVE=1-FDR$, and $TPR=Recall=Precision=1-FDR$.
In addition, one has
$$
0 \leq TPR \leq 1, \ \ 0 \leq FPR = \frac{FP}{M-(N-1)} \leq \frac{2}{N-2}.
$$
Another measure of error in binary classification is the Accuracy (ACC), the proportion of correct decisions. It is defined by the loss function $w_{ACC}=(TP+TN)/M$, $M=N(N-1)/2$. This measure is related to the FDR by the formula
$$
ACC= \displaystyle 1- \frac{4FDR}{N}
$$
ACC is not appropriate for MST identification because for large $N$ the ACC is close to 1 regardless of the number of errors.
Taking the above discussion into account, we argue that the FDR is an appropriate measure of uncertainty for MST identification. Note that for MST identification the FDR is the proportion of false (incorrectly identified) edges in the MST.
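As a quick consistency check, the loss $w_{FDR}$ and the relation $ACC=1-4\,FDR/N$ can be evaluated on a pair of hypothetical spanning trees of our own choosing (a star as true MST and a path as identified MST, $N=5$):

```python
def fdr(S, Q):
    """w_FDR = FP/(FP+TP): the fraction of false edges in the identified tree Q."""
    return len(Q - S) / len(Q)

N, M = 5, 10                               # M = N(N-1)/2 possible edges
S = {(0, 1), (0, 2), (0, 3), (0, 4)}       # true MST: a star
Q = {(0, 1), (1, 2), (2, 3), (3, 4)}       # identified MST: a path
TP = len(S & Q)
TN = M - (N - 1) - len(Q - S)
acc = (TP + TN) / M                        # direct accuracy
print(fdr(S, Q), acc)  # 0.75 0.4
```

Here $ACC=1-4\cdot 0.75/5=0.4$, matching the direct count.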
\section{Kruskal algorithm for MST identification}\label{Kruskal algorithm for MST identification}
Let $(X,\gamma)$ be a random variable network, where $X=(X_1,\ldots,X_N)$ is the random vector and $\gamma$ is the pairwise measure of dependence. Let $x_i(t)$, $i=1,\ldots,N$; $t=1,\ldots,n$ be observations of $X$, let $\hat{\gamma}_{i,j}$ be the estimates of $\gamma_{i,j}$ constructed from these observations, and set $\hat{\Gamma}=(\hat{\gamma}_{i,j})$. The Kruskal algorithm for MST identification can be described as follows.
\vskip 2mm
\noindent
{\bf Kruskal algorithm for MST identification by observations:} the algorithm computes the edge set $\hat{MST}$ of the maximum spanning tree in the network model $(V,\hat{\Gamma})$ by the following steps:
\begin{itemize}
\item Sort the edges of the complete weighted graph $(V,\hat{\Gamma})$ into decreasing order by weights $\hat{\gamma}_{i,j}$.
\item Add the first edge to $\hat{MST}$.
\item Add the next edge to $\hat{MST}$ if and only if it does not form a cycle in the current $\hat{MST}$.
\item If $\hat{MST}$ has $(N-1)$ edges, stop and output $\hat{MST}$. Otherwise go to the previous step.
\end{itemize}
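A compact way to implement this pipeline for the Pearson network is to feed $1-\hat{\gamma}^{P}_{i,j}$ (decreasing in the correlation) to a minimum-spanning-tree routine, since the minimum spanning tree of $1-\hat{\gamma}$ is the maximum spanning tree of $\hat{\gamma}$. The sketch below uses SciPy and chain-structured toy data of our own construction, whose identified MST should be the chain $0$--$1$--$2$.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def identify_mst(x):
    """x: (N, n) observation matrix; edge set of the MST identified from
    the sample Pearson correlations."""
    gamma_hat = np.corrcoef(x)        # sample Pearson correlation matrix
    D = 1.0 - gamma_hat               # decreasing in correlation: min ST of D = max ST of gamma
    np.fill_diagonal(D, 0.0)          # csgraph treats zero entries as absent edges
    rows, cols = minimum_spanning_tree(D).nonzero()
    return {tuple(sorted((int(i), int(j)))) for i, j in zip(rows, cols)}

# Chain-structured toy data: x1 tracks x0, and x2 tracks x1
rng = np.random.default_rng(3)
x0 = rng.standard_normal(1000)
x1 = x0 + 0.3 * rng.standard_normal(1000)
x2 = x1 + 0.3 * rng.standard_normal(1000)
print(identify_mst(np.vstack([x0, x1, x2])))  # the chain: {(0, 1), (1, 2)}
```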
The Kruskal algorithm for MST identification in the Pearson correlation network uses the classical estimates of the Pearson correlations (sample Pearson correlations):
\begin{equation}\label{sample_pearson_measure}
\hat{\gamma}^P_{i,j}=r_{i,j}=\frac{\sum_{t=1}^n(x_i(t)-\overline{x_i})(x_j(t)-\overline{x_j})}{\sqrt{\sum_{t=1}^n(x_i(t)-\overline{x_i})^2\sum_{t=1}^n(x_j(t)-\overline{x_j})^2}}
\end{equation}
\vskip 2mm
The Kruskal algorithm for MST identification in the Fechner correlation network uses the following estimates of the Fechner correlations (sample Fechner correlations):
$$
\displaystyle \hat{\gamma}^{Fh}_{i,j}=2 \hat{\gamma}^{Sg}_{i,j}-1
$$
where $\hat{\gamma}^{Sg}_{i,j}$ are estimates of the sign similarities, given by
\begin{equation}\label{sample_sign_measure}
\hat{\gamma}^{Sg}_{i,j}=\frac{1}{n}\sum_{t=1}^nI_{i,j}(t)
\end{equation}
with
$$
I_{i,j}(t)=\left\{\
\begin{array}{ll}
0,& (x_i(t)-\overline{x_i})(x_j(t)-\overline{x_j})\leq 0\\
1,& (x_i(t)-\overline{x_i})(x_j(t)-\overline{x_j})>0\\
\end{array}
\right.
$$
where
$$
\overline{x_i}=\displaystyle \frac{1}{n}\sum_{t=1}^n x_i(t), \ \ i=1,2,\ldots,N
$$
In the case when the vector of means $\mu$ is known one can calculate $I_{i,j}(t)$ by
$$
I_{i,j}(t)=\left\{\
\begin{array}{ll}
0,& (x_i(t)-\mu_i)(x_j(t)-\mu_j)\leq 0\\
1,& (x_i(t)-\mu_i)(x_j(t)-\mu_j)>0\\
\end{array}
\right.
$$
\vskip 2mm
The Kruskal algorithm for MST identification in the Kendall correlation network uses the following estimates of the Kendall correlations:
\begin{equation}\label{sample_kendall_measure}
\hat{\gamma}^{Kd}_{i,j}=\frac{1}{n(n-1)}\sum_{t=1}^n\sum_{\substack{s=1\\ s\neq t}}^n I^{Kd}_{i,j}(t,s)
\end{equation}
where
$$
I^{Kd}_{i,j}(t,s)=\left\{\
\begin{array}{ll}
1, & (x_i(t)-x_i(s))(x_j(t)-x_j(s))\geq 0\\
-1, & (x_i(t)-x_i(s))(x_j(t)-x_j(s))<0
\end{array}
\right.
$$
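The sample Fechner and Kendall estimators above translate directly into code; the following sketch implements both (the monotone test data are our own illustration, for which both estimates equal $1$ exactly).

```python
import numpy as np

def fechner_hat(xi, xj):
    """Sample Fechner correlation: 2 * (sample sign similarity) - 1."""
    ci, cj = xi - xi.mean(), xj - xj.mean()
    return 2.0 * np.mean(ci * cj > 0) - 1.0

def kendall_hat(xi, xj):
    """Sample Kendall correlation: average of I^Kd over all pairs t != s."""
    di = xi[:, None] - xi[None, :]
    dj = xj[:, None] - xj[None, :]
    sgn = np.where(di * dj >= 0, 1.0, -1.0)   # +1 if the differences agree in sign
    np.fill_diagonal(sgn, 0.0)                # exclude the s = t terms
    n = len(xi)
    return sgn.sum() / (n * (n - 1))

xi = np.arange(10.0)
print(kendall_hat(xi, xi ** 3), fechner_hat(xi, 2.0 * xi + 1.0))  # 1.0 1.0
```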
\section{Robustness of Kruskal algorithm in Fechner correlation network}\label{Robustness of Kruskal algorithm}
The uncertainty of the Kruskal algorithm for MST identification depends on the chosen correlation network. On the one hand, for any $X \in K(\Lambda)$ the Kruskal algorithms in the different correlation networks identify the same true MST. On the other hand, the identification errors can differ. In this section we state and prove an interesting property of the Kruskal algorithm for MST identification in the Fechner correlation network, which can be regarded as robustness of the algorithm. Indeed, robustness is generally associated with insensitivity of an algorithm to changes of some parameters. This is the case for the Kruskal algorithm for MST identification in the Fechner correlation network. More precisely, the following statement is true:
\begin{thm}
Let $X \in K(\Lambda)$ and let the vector of means $\mu$ be known. Then the FDR of the Kruskal algorithm for MST identification in the Fechner correlation network is the same for any distribution of the vector $X$.
\end{thm}
This means that the FDR as a risk function does not depend on the distribution from the class $K(\Lambda)$, i.e., the risk function is distribution free.
\noindent
{\bf Proof.} The proof is based on results from our publication \cite{Kalyagin_2017}. The first step of the Kruskal algorithm for MST identification in the Fechner correlation network is to sort the edges of the complete weighted graph $(V,\hat{\Gamma}^{Fh})$ into decreasing order by the weights $\hat{\gamma}^{Fh}_{i,j}$.
One has
$$
\displaystyle \hat{\gamma}^{Fh}_{i,j}=2 \hat{\gamma}^{Sg}_{i,j}-1
$$
Therefore the first step of the algorithm is equivalent to sorting the $\hat{\gamma}^{Sg}_{i,j}$ in decreasing order.
It is proved in \cite{Kalyagin_2017} (Theorem 2) that the joint distribution of the statistics $\hat{\gamma}^{Sg}_{i,j}$ (denoted there by $T^{Sg}_{i,j}$) is the same for any $X \in K(\Lambda)$. This implies that the probability of any ordering of the $\hat{\gamma}^{Sg}_{i,j}$ does not depend on the distribution of the vector $X \in K(\Lambda)$. The tree $\hat{MST}$ obtained by the Kruskal identification algorithm is completely determined by the ordering of the $\hat{\gamma}^{Sg}_{i,j}$. Therefore any such ordering generates the same numbers FP, FN, TP, and TN. It follows that the distribution of the loss function $w_{FDR}=FP/(FP+TP)$ is the same for any $X \in K(\Lambda)$, and the theorem follows.
\section{Reliability of Kruskal algorithm in different correlation networks}\label{Reliability of Kruskal algorithm}
In this section we study by numerical simulations the reliability (uncertainty) of the Kruskal algorithm for MST identification in three correlation networks for stock market returns: the Pearson, Fechner, and Kendall correlation networks. The results of the numerical experiments show that the reliability of MST identification differs across the three networks, despite the fact that the true MST is the same. For the Pearson correlation network the FDR of MST identification depends essentially on the distribution of the stock returns. For the Fechner correlation network the FDR is insensitive to the assumption on the distribution of the stock returns, in accordance with the theoretical robustness result for the Kruskal algorithm. A new and surprising phenomenon is observed for the Kendall correlation network: our experiments show that the FDR of the Kruskal algorithm in this network depends only weakly on the distribution of the vector $X \in K(\Lambda)$, while at the same time its value is almost the best in comparison with MST identification in the other networks. This needs further investigation.
Our experiments are organized as follows. We take real data on stock returns from a stock market and use them to estimate the vector of means and the correlation matrix of the returns. These estimates are then fixed as the true vector of means $\mu$ and the matrix $\Lambda$ for the random vectors $X$ from the class $K(\Lambda)$ of elliptical distributions (in our experiments we use the correlation matrix as the matrix $\Lambda$). To make our conclusions more general we consider networks of different sizes.
To study how the FDR of the Kruskal algorithm for MST identification depends on the distribution from the class $K(\Lambda)$, we consider the family of distributions from this class with densities
$$
f_{\epsilon}(x)=(1-\epsilon) f_{Gauss}(\mu,\Lambda)+\epsilon f_{Student}(\mu,\Lambda), \ \ \epsilon \in[0,1]
$$
Here for $f_{Student}(\mu,\Lambda)$ we fix the parameter $\nu=3$. For $\epsilon=0$ we have the multivariate Gaussian distribution, for $\epsilon=1$ the multivariate Student distribution, and intermediate values of $\epsilon$ give a mixture of the two. The computational scheme is the following:
\begin{itemize}
\item For the given covariance (correlation) matrix $\Lambda$, calculate the true MST.
\item Generate a sample of size $n$ from the distribution with density $f_{\epsilon}(x)$.
\item For each correlation network, use the Kruskal algorithm to identify the MST from the observations.
\item Compare the true MST with the MST identified by the Kruskal algorithm and calculate the FDR for the sample.
\item Repeat the last three steps $S$ times and average the FDR values to estimate the expected value of the FDR.
\end{itemize}
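A sketch of the sampling step of this scheme: each observation is drawn from the Gaussian component with probability $1-\epsilon$ and from the Student component with probability $\epsilon$ (the helper function, seed, and parameters are our own illustration, not the authors' code).

```python
import numpy as np

def sample_mixture(eps, mu, Lam, n, nu=3, rng=None):
    """Draw n observations from (1 - eps) f_Gauss + eps f_Student (illustrative helper)."""
    rng = rng if rng is not None else np.random.default_rng()
    mu = np.asarray(mu, dtype=float)
    Z = rng.multivariate_normal(np.zeros(len(mu)), Lam, size=n)
    scale = np.ones(n)
    is_t = rng.random(n) < eps              # which observations come from the Student part
    if is_t.any():
        scale[is_t] = 1.0 / np.sqrt(rng.chisquare(nu, is_t.sum()) / nu)
    return mu + Z * scale[:, None]

Lam = np.array([[1.0, 0.7], [0.7, 1.0]])
x = sample_mixture(0.5, [0.0, 0.0], Lam, n=5000, rng=np.random.default_rng(4))
print(x.shape)  # (5000, 2)
```

Each generated sample would then be passed to the Kruskal identification step, and the resulting FDR values averaged over $S$ repetitions.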
\vskip 2mm
\noindent
{Experiment 1.} Consider the following $N=10$ stocks from the USA stock market: A (Agilent Technologies Inc), AA (Alcoa Inc), AAP (Advance Auto Parts Inc), AAPL (Apple Inc), AAWW (Atlas Air Worldwide Holdings Inc), ABAX (Abaxis Inc), ABD (ACCO Brands Corp), ABG (Asbury Automotive Group Inc), ACWI (iShares MSCI ACWI Index Fund), ADX (Adams Express Company). We estimate the parameters $\mu$ and $\Lambda$ from 250 observations starting in November 2010. The associated matrix of Pearson correlations is
$$
\left(
\begin{array} {cccccccccc} \label{arr:weightMatrix}
1.0000& 0.7220& 0.4681& 0.4809& 0.6209& 0.5380& 0.6252& 0.6285& 0.7786& 0.7909\\
0.7220& 1.0000& 0.4395& 0.5979& 0.6381& 0.5725& 0.6666& 0.6266& 0.8583& 0.8640\\
0.4681& 0.4395& 1.0000& 0.3432& 0.3468& 0.2740& 0.4090& 0.4016& 0.4615& 0.4832\\
0.4809& 0.5979& 0.3432& 1.0000& 0.4518& 0.4460& 0.4635& 0.4940& 0.6447& 0.6601\\
0.6209& 0.6381& 0.3468& 0.4518& 1.0000& 0.5640& 0.5994& 0.5369& 0.7170& 0.7136\\
0.5380& 0.5725& 0.2740& 0.4460& 0.5640& 1.0000& 0.4969& 0.4775& 0.6439& 0.6242\\
0.6252& 0.6666& 0.4090& 0.4635& 0.5994& 0.4969& 1.0000& 0.6098& 0.7161& 0.7158\\
0.6285& 0.6266& 0.4016& 0.4940& 0.5369& 0.4775& 0.6098& 1.0000& 0.6805& 0.6748\\
0.7786& 0.8583& 0.4615& 0.6447& 0.7170& 0.6439& 0.7161& 0.6805& 1.0000& 0.9523\\
0.7909& 0.8640& 0.4832& 0.6601& 0.7136& 0.6242& 0.7158& 0.6748& 0.9523& 1.0000
\end{array}
\right)
$$
The true MST obtained from this matrix is shown in Fig.~\ref{True_MST_N_10}.
\begin{figure}[h!]
\centering
\includegraphics*[width=1.00\textwidth]{MST.png}
\caption{True MST, $N=10$}\label{True_MST_N_10}
\end{figure}
The results of the FDR evaluation for the Kruskal algorithm of MST identification in the three networks are presented in Tables \ref{FDR_N10_n10}, \ref{FDR_N10_n100}, and \ref{FDR_N10_n1000}. Analysis of the results shows that for $n=10$ and $n=100$ all algorithms for MST identification have weak reliability in terms of FDR. The results of Table \ref{FDR_N10_n1000} show that 1-2 edges of the identified MST differ from the edges of the true MST.
Interesting results were obtained for the Kendall correlation network: the quality of MST identification there is close to that in the Pearson network for the Gaussian distribution, and better than that in the Pearson network for the Student distribution; this holds for $n=1000$ too. One can also see a strong dependence of the FDR on the distribution for the Kruskal algorithm in the Pearson correlation network, and stability of the FDR for the Kruskal algorithm in the Fechner correlation network. On the other hand, the FDR of the Kruskal algorithm in the Kendall correlation network is almost stable, depending only weakly on the distribution.
\vskip 2mm
\noindent
{Experiment 2.} Consider the $N=50$ stocks from the NASDAQ stock market with the largest trade volume in 2014. The parameters $\mu$ and $\Lambda$ are estimated from 250 observations for 2014.
Tables \ref{FDR_N50_n1000} and \ref{FDR_N50_n10000} present the results of the FDR evaluation for $n=1000$ and $n=10000$ in the three networks. The results are almost the same as in Experiment 1: a strong dependence of the FDR on the distribution for the Kruskal algorithm in the Pearson correlation network, stability of the FDR in the Fechner correlation network, and an almost stable FDR, depending only weakly on the distribution, in the Kendall correlation network. The reliability of MST identification in the Kendall network is nearly the best among the three networks.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
measure,$\epsilon $ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline
Pearson & 0.66 & 0.67 & 0.67 & 0.67 & 0.68 & 0.68 & 0.67 & 0.68 & 0.69 & 0.68 & 0.69 \\ \hline
Fechner & 0.65 & 0.64 & 0.64 & 0.64 & 0.64 & 0.64 & 0.64 & 0.64 & 0.64 & 0.63 & 0.64 \\ \hline
Kendall & 0.65 & 0.65 & 0.66 & 0.66 & 0.66 & 0.67 & 0.67 & 0.67 & 0.67 & 0.66 & 0.66 \\ \hline
\end{tabular}
\end{center}
\caption{False discovery rate. N=10, n=10}\label{FDR_N10_n10}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
measure,$\epsilon $ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline
Pearson & 0.37 & 0.40 & 0.40 & 0.41 & 0.43 & 0.45 & 0.46 & 0.48 & 0.48 & 0.50 & 0.52 \\ \hline
Fechner & 0.52 & 0.53 & 0.54 & 0.53 & 0.53 & 0.53 & 0.53 & 0.54 & 0.53 & 0.53 & 0.53 \\ \hline
Kendall & 0.41 & 0.40 & 0.41 & 0.41 & 0.42 & 0.42 & 0.42 & 0.44 & 0.44 & 0.44 & 0.44 \\ \hline
\end{tabular}
\end{center}
\caption{False discovery rate. N=10, n=100}\label{FDR_N10_n100}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
measure,$\epsilon $ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline
Pearson & 0.15 & 0.17 & 0.19 & 0.21 & 0.22 & 0.23 & 0.26 & 0.29 & 0.30 & 0.33 & 0.34 \\ \hline
Fechner & 0.34 & 0.34 & 0.33 & 0.33 & 0.33 & 0.33 & 0.33 & 0.33 & 0.33 & 0.34 & 0.33 \\ \hline
Kendall & 0.17 & 0.17 & 0.17 & 0.17 & 0.18 & 0.18 & 0.18 & 0.18 & 0.19 & 0.20 & 0.20 \\ \hline
\end{tabular}
\end{center}
\caption{False discovery rate. N=10, n=1000}\label{FDR_N10_n1000}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
measure,$\epsilon $ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline
Pearson & 0.23 & 0.26 & 0.29 & 0.34 & 0.37 & 0.34 & 0.42 & 0.44 & 0.46 & 0.50 & 0.52 \\ \hline
Fechner & 0.41 & 0.39 & 0.40 & 0.40 & 0.39 & 0.40 & 0.40 & 0.41 & 0.41 & 0.40 & 0.41 \\ \hline
Kendall & 0.25 & 0.25 & 0.25 & 0.26 & 0.27 & 0.27 & 0.28 & 0.27 & 0.28 & 0.29 & 0.28 \\ \hline
\end{tabular}
\end{center}
\caption{False discovery rate. N=50, n=1000}\label{FDR_N50_n1000}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
measure,$\epsilon $ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline
Pearson & 0.08 & 0.10 & 0.13 & 0.14 & 0.16 & 0.18 & 0.21 & 0.25 & 0.24 & 0.28 & 0.32 \\ \hline
Fechner & 0.14 & 0.13 & 0.14 & 0.13 & 0.14 & 0.14 & 0.14 & 0.14 & 0.13 & 0.13 & 0.14 \\ \hline
Kendall & 0.09 & 0.08 & 0.08 & 0.09 & 0.08 & 0.08 & 0.09 & 0.09 & 0.09 & 0.08 & 0.10 \\ \hline
\end{tabular}
\end{center}
\caption{False discovery rate. N=50, n=10000}\label{FDR_N50_n10000}
\end{table}
\section{Concluding remarks}\label{Concluding remarks}
The main advantage of the proposed framework for measuring the uncertainty of algorithms for MST identification is that it allows a correct comparison of the uncertainty across different networks and for a large class of distributions. The peculiarities of the Pearson, Fechner, and Kendall correlation networks for elliptical distributions were emphasized in the paper on the basis of this approach. It was observed that the Kendall correlation network appears the most appropriate for MST identification. This phenomenon will be a subject of further investigation.
\section{Introduction}
Several models for stochastic motion involve the Langevin
and generalized Langevin equation (GLE) in so far as the environment
is assumed to be in a steady-state equilibrium throughout the
dynamical event, {\it vis-a-vis} the linear responding bath
is stationary and obeys a fluctuation-dissipation
theorem.\cite{kram40,zwan60b,zwan61b,prig61,ford65,mori65,kubo66,hynes85a,hynes85b,nitzan88,berne88,rmp90,tucker91,tucker93,pollak96}
In recent work,\cite{hern99a,hern99b,hern99c,hern99d,hern99e}
it was suggested that in many instances, the environment is
not stationary, and as such a nonstationary version of the
GLE would be desirable.
In principle, it is easy to write a nonstationary GLE as
\begin{subequations}\label{eq:nGLE}
\begin{equation}
{\dot v} =
- \int^t\!dt'\, \gamma(t,t')\,{v}(t')
+\xi(t) + F(t),\label{gle}
\end{equation}
where $F(t)$ ($\equiv -\nabla_qV(q(t))$) is the external force,
$v$ ($=\dot q$) is the velocity, $q$ is the mass-weighted
position, $\xi(t)$ is the random force
due to a solvent, and $\gamma(t,t')$
represents the nonstationary response of the solvent.
To completely specify this system of equations, a closure connecting
the random force to the friction kernel is needed.
The fluctuation-dissipation relation (FDR) provides such a
closure for the GLE.\cite{kubo66}
An obvious generalization of the FDR for nonstationary
friction kernels is the requirement,
\begin{equation}
\gamma(t,t') = \langle \xi(t) \xi(t') \rangle\;.
\label{eq:ngle_fdr}
\end{equation}\end{subequations}%
Unfortunately, such a construction will not necessarily be
satisfied for an arbitrary nonstationary friction kernel,
nor will it necessarily be associated with the dynamics of
a larger mechanical system.
The GLE model with space-dependent friction (xGLE)
developed by Lindenberg and coworkers,\cite{lind81,lind84}
and Carmeli and Nitzan\cite{carm83}
does exhibit the structure of Eq.~(\ref{eq:nGLE})
and as such is an example of nonstationary stochastic dynamics.
In recent work,\cite{hern99a,hern99b,hern99c,hern99d,hern99e}
a generalization of this model was developed
which includes arbitrary nonstationary changes in the strength
of the friction, but like the xGLE model does not include
changes in the response time.
As such it is not quite the ultimately desired nonstationary GLE.
Avoiding redundancy in the term ``generalized GLE,'' the
new formalism has been labeled the {\it irreversible}
generalized Langevin equation (iGLE), thereby emphasizing the
irreversibility---vis-a-vis nonstationarity---in
the response of the quasi-equilibrium environment.
But this ``irreversibility'' may not persist at long times, and therefore
Drozdov and Tucker chose to call such an equation of motion
the multiple time-scale generalized Langevin equation (MTSGLE) in
their application of the iGLE to study local density enhancements
in supercritical fluids.\cite{tucker01}
In this paper, the iGLE model is first summarized in Sec.~\ref{sec:iGLE},
explicitly indicating the limit in which position-dependent friction may be
recovered.
In earlier work,\cite{hern99e} (Paper I) it was shown that the iGLE may
be obtained as a projection of an open Hamiltonian system,
in analogy with the similar construction for the
GLE.\cite{ford65,zwan73,caldeira81,cort85,pollak86b}
In Sec.~\ref{sec:iGLE}, the connections between the projection of
the Hamiltonian of Paper I onto a chosen dynamical variable
and that obtained by the xGLE are further illustrated,
and the possibly troubling nonlocal term it contains is
also further explored.
The results of several numerical simulations of the Hamiltonian system
are presented in Sec.~\ref{sec:sim} in order to illustrate
the effect of the nonlocal term on the dynamics, and to
verify that the constructed random force does obey the
FDR.
\section{The \lowercase{i}GLE\ and Space-Dependent Friction}\label{sec:iGLE}
\subsection{Stochastic Dynamics}
The iGLE may be written as
\begin{equation}
{\dot v}(t) =
- \int_0^t\!dt'\, g(t) g(t') \gamma_0(t-t') {v}(t')
+g(t) \xi_0(t) + F(t)\;,
\label{eq:ggle}
\end{equation}
where $g(t)$ characterizes the irreversibility in the
equilibrium environment, and there exists a FDR
between the Gaussian random force $\xi_0(t)$ and
the stationary friction kernel $\gamma_0(t-t')$.
Through the identities,
\begin{subequations}\begin{eqnarray}
\gamma(t,t') &\equiv& g(t) g(t') \gamma_0(t-t')\\
\xi(t) &\equiv& \xi_0(t)\;,
\end{eqnarray}\end{subequations}%
the iGLE is a particular construction of the nonstationary GLE, Eq.~(\ref{eq:nGLE}).
One possible interpretation of the role of $g(t)$ in the
iGLE is that it corresponds to the strength of the
environmental response as the reactive system traverses
the environment through an {\it a priori}
specified trajectory, call it $y(t)$.
Assuming that one also knows the field $f(y)$, which
is the strength of the environmental friction over this configuration
space, then the irreversibility may be written as,
\begin{equation}
g(t) = f(y(t))\;.\label{eq:xGLE}
\end{equation}
In the case that the chosen coordinate is itself the configuration
space over which the friction varies ---{\it i.e.}, $y=x$---
the GLE with space dependent friction of Carmeli and Nitzan is
formally recovered.\cite{carm83}
But the iGLE is more general than the xGLE
because it allows for a variety of ``irreversible" time-dependent
environments.
For example, in the WiGLE model,\cite{hern99d}
each particle ---labeled by $n$---
is in an environment induced by the average of
the properties of itself and $w$ neighbors,
\begin{subequations}\begin{eqnarray}\label{eq:wigle-ga}
g_n(t) & \equiv & \left< \left| R(t) \right| \right>_n^\zeta \\
\left< \left| R(t) \right| \right>_n & \equiv & \frac{1}{w+1}
\sum_{i\in S_{w,n}} \left| R_i(t) \right| \;, \label{eq:wigle-gb}
\end{eqnarray}\end{subequations}%
where $S_{w,n}$ is the set of $w$ neighbors.
In the limit that $w\to0$, the chosen coordinate is dissipated
only by a function of its position, which is precisely the
limit of space-dependent friction.
In the limit that $w\to\infty$, the chosen coordinate is
instead dissipated by a macroscopic average of the motion of all
the reacting systems in the sample.
The contribution of a particular particle to this average is
infinitesimally small, and hence the
friction contains no space-dependent component.
In between these limits, there is a balance between self-dissipation
due to a space-dependent friction term, and heterogeneous
dissipation due to the average of the motion of the
$w$ neighbors.
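To make the neighborhood averaging concrete, a minimal numerical sketch follows. The index-based periodic neighborhood, the sample values, and the exponent $\zeta=1$ are illustrative assumptions for this sketch, not the WiGLE implementation itself:

```python
def wigle_coupling(R, n, w, zeta):
    """Coupling strength g_n of the WiGLE averaging for particle n.

    R    : list of auxiliary variables; |R_i(t)| is averaged over
    n    : index of the tagged particle
    w    : number of neighbors averaged over (w = 0 -> self only)
    zeta : exponent applied to the neighborhood average
    """
    N = len(R)
    # Hypothetical neighborhood S_{w,n}: particle n together with its
    # w nearest neighbors by index, wrapping around periodically.
    neighbors = [(n + k) % N for k in range(-(w // 2), w - w // 2 + 1)]
    avg = sum(abs(R[i]) for i in neighbors) / (w + 1)
    return avg ** zeta

R = [1.0, -2.0, 3.0, -4.0]
print(wigle_coupling(R, 1, 0, 1.0))   # w = 0: self-dissipation limit, |R_1| = 2.0
print(wigle_coupling(R, 1, 3, 1.0))   # w = N-1: mean-field limit, mean(|R|) = 2.5
```

In the $w\to0$ limit the coupling reduces to a function of the particle's own variable alone, while for $w\to N-1$ every particle sees the same macroscopic average, in line with the two limits discussed above.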
\subsection{Mechanical Systems}
In recent work, a Hamiltonian has been obtained whose projection
is the iGLE when $g(t)$ depends exclusively on time
---{\it i.e.}, it includes neither explicit space-dependence
nor the WiGLE dependence.\cite{hern99e}
This so-called iGLE Hamiltonian may be written as
\begin{subequations}\label{eq:iGLEall}\begin{eqnarray}
\mathcal{H}_{\rm iGLE}
&=& \frac{1}{2}p_q^2 +
\left\{ V(q) + \delta V_1(q,t) + \delta V_2[q(\cdot),t] \right\}
\nonumber\\ &&\quad
- g(t) \left[\sum_{i=1}^Nc_i x_i\right]q
\nonumber\\ &&\quad
+ \sum_{i=1}^N\left[\tfrac{1}{2}p_i^2 + \tfrac{1}{2} \omega_i^2x_i^2\right]\;,
\label{eq:iGLEham}
\end{eqnarray}
where
\begin{eqnarray}
\delta V_1(q,t) &\equiv&
\tfrac{1}{2} g(t)^2\sum_{i=1}^N
\frac{c_i^2 }{ \omega_i^{2}} q^2
\label{eq:Vzero}\\
\delta V_2[q(\cdot),t] &\equiv&
\tfrac{1}{2} \int_0^t\!dt'\,a(t,t')\left[q(t')-q(t)\right]^2
\nonumber\\ &&\quad
- \tfrac{1}{2} \left[\int_0^t\!dt'\,a(t,t')\right] q(t)^2
\;,
\label{eq:Vone}
\end{eqnarray}\end{subequations}%
where
\begin{equation}
a(t,t')\equiv g(t)\dot g(t')\gamma_0(t-t')\;,
\end{equation}
and the time dependence in $q(t)$ is explicitly included in the
definition of the $\delta V_2[q(\cdot),t]$ functional for clarity.
Ignoring the $\delta V_2$ term and identifying $g$ as in Eq.~(\ref{eq:xGLE}),
this Hamiltonian is similar to the xGLE Hamiltonian for
space-dependent friction.
This result is not surprising in the sense that the iGLE has a similar
generalized structure.
However, the xGLE Hamiltonian gives rise to an additional dependence
on $q$ whereas the iGLE Hamiltonian gives rise to an additional
dependence on time $t$. The projections are thus analogous but
not exactly the same.
\subsection{Equation of Motion}
The nonlocality in the $\delta V_2[q(\cdot),t]$
term does present some difficulties
which are worth considering.
In the absence of this term, the extremization of the action
readily leads to the usual Hamilton's equations.
In general, the presence of the $\delta V_2$ term
contributes to the time evolution of the momentum,
$\dot p$, by way of the functional derivative,
\begin{equation}
-\frac{\delta S_2 }{ \delta q(t)}\;,
\end{equation}
where the contribution to the action due to $\delta V_2$
may be written as
\begin{eqnarray}
S_2 &\equiv&
\frac{1}{2} \int_0^T\!dt\,
\int_0^t\!dt'\,a(t,t')q(t')^2
\nonumber\\&&
- \int_0^T\!dt\,
\int_0^t\!dt'\,a(t,t')q(t)q(t')\;,
\end{eqnarray}
and $T$ is the arbitrary final time to which the action is evaluated.
A simple calculation readily leads to
\begin{eqnarray}
-\frac{\delta S_2 }{ \delta q(t)} &=&
\int_0^t\!dt'\,a(t,t')q(t')
\nonumber\\&&
+ \int_t^T\!dt'\,a(t',t)\left[q(t')-q(t)\right]\;.
\end{eqnarray}
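This result can be checked by carrying out the variation explicitly.
Varying $S_2$ with respect to $q(s)$ (a brief sketch; note that the
kernel $a$ is not symmetric in its arguments) gives
\begin{eqnarray}
\frac{\delta S_2}{\delta q(s)} &=&
\int_s^T\!dt\,a(t,s)\,q(s)
- \int_0^s\!dt'\,a(s,t')\,q(t')
\nonumber\\&&
- \int_s^T\!dt\,a(t,s)\,q(t)\;,
\end{eqnarray}
where the first term arises from the $q(t')^2$ integral in $S_2$ and
the remaining two from the cross term.
Collecting the terms with common integration limits and relabeling
$s\to t$ recovers the expression above.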
The first term on the RHS precisely cancels the contribution
due to the nonstationarity in the friction kernel.
However, the remaining second term depends on the arbitrary
final time $T$.
Its presence cannot be physically correct
because it leads to different dynamics depending
on the choice of $T$.
In the limit that $T$ is near ---though greater than--- $t$, this term
vanishes, however.
This suggests that an additional approximation
ignoring the second term, thereby eliminating the transient
effects from a term that depends arbitrarily on $T$, is warranted.
(And this is consistent with the Carmeli and Nitzan derivation, in
that they too need to remove transient terms.)
Within this approximation, the projection in Ref.~\onlinecite{hern99e}
then leads to the iGLE.
Thus the projection of the iGLE Hamiltonian
leading to the iGLE
with a purely time-dependent friction
is analogous (and complementary) to the Carmeli and Nitzan projection
to a GLE with space-dependent friction.
The construction of such a Hamiltonian for an iGLE with arbitrary
nonstationary friction, as manifested in WiGLE, is still an open
problem.
In the next section, the dynamics of the iGLE
Hamiltonian is explored
through numerical simulations in order to observe
the degree of energy conservation ---as it is affected by
$\delta V_2$--- and the correlation of the
constructed forces.
\begin{figure}
\includegraphics*[width=2.0in,angle=-90]{./switch.ps}
\caption[Switching function for the system friction.]%
{The square of the irreversible change in the environment
$g(t)^2$ is shown here as a function of time for the three cases
examined in this study.}
\label{fig:switch}
\end{figure}
\begin{figure}
\includegraphics*[width=2.5in]{./gle.ps}
\caption
{The average correlation of the random forces on the tagged particle $\langle \xi (t) \xi(t^{\prime}) \rangle$
as a function of time $t$ (solid line) for the GLE case with coupling
constants calculated as in Eq.~(\ref{tuckercrazy}). The dashed line represents
the friction kernel as calculated in Eq.~(\ref{eq:freq}).
}
\label{fig:gleforces}
\end{figure}
\section{Numerical Results}\label{sec:sim}
In this section, numerical simulations of the Hamiltonian
equivalent of the iGLE are presented. It is shown that the
inclusion of the non-local term,
$\delta V_2[q(\cdot),t]$, known to be small in the quasi-equilibrium
regime is actually necessary generally. That is,
throughout this work, the value of
$\delta V_2[q(\cdot),t]$ is non-zero during the
increase in system coupling.
The latter is specified through the function
$g(t)$ chosen to satisfy
\begin{subequations}\label{eq:switch}
\begin{eqnarray}
g(t)^2 &=& g(-\infty)^2 + \tfrac{1}{2}[g(\infty)^2 - g(-\infty)^2]
\nonumber\\ &&\quad
\times \biggl( 1 + \frac{e^{t/\tau_{g}} - 1}{e^{t/\tau_{g}} + 1} \biggr),
\end{eqnarray}
as illustrated in Fig.~\ref{fig:switch},
or equivalently as
\begin{eqnarray}
g(t)^2 &=& \tfrac{1}{2}[g(\infty)^2 + g(-\infty)^2]
\nonumber\\
&&+ \tfrac{1}{2}[g(\infty)^2 - g(-\infty)^2]\tanh\bigl(t/(2\tau_{g})\bigr).
\end{eqnarray}
\end{subequations}
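Since $(e^{x}-1)/(e^{x}+1)=\tanh(x/2)$, the two parameterizations above coincide provided the argument of the hyperbolic tangent is read as $t/(2\tau_g)$. A quick numerical check of this identity follows (the values $g(-\infty)^2=0$, $g(\infty)^2=100$, and $\tau_g=0.2$ are taken from the simulations below; the snippet is illustrative only):

```python
import math

def g2_exp(t, g2_minus, g2_plus, tau_g):
    """Switching function, exponential form."""
    x = t / tau_g
    return g2_minus + 0.5 * (g2_plus - g2_minus) * (
        1.0 + (math.exp(x) - 1.0) / (math.exp(x) + 1.0))

def g2_tanh(t, g2_minus, g2_plus, tau_g):
    """Equivalent hyperbolic-tangent form, with argument t/(2 tau_g)."""
    return (0.5 * (g2_plus + g2_minus)
            + 0.5 * (g2_plus - g2_minus) * math.tanh(t / (2.0 * tau_g)))

# The two forms agree to machine precision over the switching region.
for t in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    a = g2_exp(t, 0.0, 100.0, 0.2)
    b = g2_tanh(t, 0.0, 100.0, 0.2)
    assert abs(a - b) < 1e-9, (t, a, b)
print("forms agree; g(0)^2 =", g2_exp(0.0, 0.0, 100.0, 0.2))
```

At $t=0$ the switch sits at the midpoint of the two asymptotic values, as expected for a sigmoidal ramp.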
\subsection{Coupling Constants}
The values of the coupling constants
in Eq.~(\ref{eq:iGLEham}) can be obtained from
the inverse cosine transform of Eq.~(\ref{eq:freq}),
\begin{eqnarray}
\frac{c_{i}^2}{\omega_{i}^2} =
\frac{2 \omega_c}{\pi}
\int_{0}^{\infty}dt \cos(\omega_i t)\gamma_{0} e^{-t/\tau},
\end{eqnarray}
such that an effective (discretized) friction kernel of the form,
\begin{equation}
\gamma_{0}(t) = \sum_{i=1}^{N} \frac{c_{i}^2}{\omega_{i}^2} \cos(\omega_{i}t)\,,
\label{eq:freq}
\end{equation}
is used to approximate $\gamma_{0} e^{-t/\tau}$.
The coupling constants $c_i$ in Eq.~(\ref{eq:iGLEham})
are readily obtained on a discrete (and finite) grid of
$N$ nontrivial frequencies, $\omega_i\equiv i\omega_c$, by
equating the spectral density of the exponential friction to
that of the discretization.
The smallest frequency $\omega_c\equiv1/(M\tau)$ can be characterized
in terms of the characteristic integer $M$, which effectively sets
the longest time scale that can be measured before false recurrences
appear due to the discretization.
The coefficients can be written (as, for example, also obtained
by Topaler and Makri\cite{makri94}) as
\begin{eqnarray}
c_{i}=\sqrt{\frac{2 \gamma_{0} \tau \omega_{c}}{\pi}\cdot
\frac{\omega_{i}^2}{1+\omega_{i}^2 \tau^2}}\;.
\label{eq:coupl}
\end{eqnarray}
The quantity $1/(N\omega_{c})$ represents the shortest time scale of interest.
This connection between the continuum stationary friction
kernel and the discrete number of frequencies and coupling strengths
(Eq.~\ref{eq:freq})
is exact in the continuum limit ($N \rightarrow \infty$).
However, $M$ must also be large enough so as to
ensure the decay in correlations between the
bath modes; typically $M\ge4$.
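The reconstruction implied by Eqs.~(\ref{eq:freq}) and (\ref{eq:coupl}) can be checked directly by summing the discretized kernel and comparing it with $\gamma_0 e^{-t/\tau}$. The following sketch uses the parameter values quoted below ($\gamma_0=10$, $\tau=0.5$, $M=4$, $N=200$); the modest deviation at short times reflects the spectral weight missing below $\omega_c$ in the finite grid:

```python
import math

gamma0, tau = 10.0, 0.5
M, N = 4, 200
omega_c = 1.0 / (M * tau)              # smallest bath frequency

# Coupling weights from Eq. (coupl):
#   c_i^2 / omega_i^2 = (2 gamma0 tau omega_c / pi) / (1 + omega_i^2 tau^2)
omegas = [i * omega_c for i in range(1, N + 1)]
c2_over_w2 = [2.0 * gamma0 * tau * omega_c / math.pi / (1.0 + (w * tau) ** 2)
              for w in omegas]

def gamma_discrete(t):
    """Discretized kernel, Eq. (freq): sum_i (c_i/omega_i)^2 cos(omega_i t)."""
    return sum(a * math.cos(w * t) for a, w in zip(c2_over_w2, omegas))

for t in [0.0, tau, 2.0 * tau]:
    print(t, gamma_discrete(t), gamma0 * math.exp(-t / tau))
```

The discrete sum tracks the exponential decay while falling somewhat below it, which is consistent with the finite-$N$, finite-$\omega_c$ truncation discussed in the text.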
Simulations of the GLE (with constant friction) were performed to determine
a suitable number of bath particles for which convergence of
the relationship in Eq.~(\ref{eq:freq}) is observed.
It was found that as few as 20 harmonic bath modes
can yield acceptable convergence of the
velocity correlation function of the chosen coordinate
because its decay is much faster than the recurrence time in the
bath modes.
Nonetheless, to ensure that there are enough modes
to approximately satisfy the continuum limit at the longest
and shortest times of interest while also limiting the
requisite computing power, the number of harmonic bath modes, $N$,
used in the present work has been taken to be 200.
Although the coupling constants have been calculated
as per Eq.~\ref{eq:coupl} in the simulations
of the mechanical projection of the iGLE presented in this work,
it is beneficial to examine some other choices of the
coupling constants, $c_i$.
The main question is how to best equate the continuum (left hand side)
and discrete (right hand side) representations of Eqn.~\ref{eq:freq} in
the frequency domain.
One alternative method is that of Tucker and coworkers,\cite{reese2}
in which
the coupling constants are obtained, not by integrating the spectral function
over an infinite domain as above, but over a domain bounded by the
longest time scale $1/{\omega_c^{'}}$.
The resulting coefficients,
\begin{eqnarray}
c_i = \sqrt{(\tfrac{2}{t_c} \omega_{i}^{2} \gamma_{0})
\Biggl[
\frac{1/{\tau}}{\omega_{i}^{2} + 1/{{\tau}^{2}}} +
\frac{e^{-t_{c}/{\tau}}\omega_{i}\sin{(\omega_{i}t_{c})}}%
{\omega_{i}^{2}+ 1/{{\tau}^{2}}}
\Biggr]},
\label{tuckercrazy}
\end{eqnarray}
are associated with a corresponding discrete frequency
$\omega_i\equiv i\omega_c^{'}$ as before, but the smallest
frequency is redefined as $\omega_{c}^{'} = \pi /(p \tau)$
for some characteristic integer $p$.
The use of the coupling constants of Eq.~\ref{tuckercrazy}
in the Hamiltonian representation of the GLE
yields a very good match between the friction kernel
as specified by Eqn.~\ref{eq:freq} and the correlation function for the random
forces on the tagged particle as shown in Fig.~\ref{fig:gleforces}.
Unfortunately, the system still retains a long-time periodicity associated
with the mode described by the largest inverse frequency.
In an attempt to sidestep this issue, the frequencies
could instead be made incommensurate by drawing one randomly within each
window, $I_i\equiv ( \{i-1\}\omega_c, i\omega_c]$.
Given a random sequence of frequencies $\{\omega_i^{\rm R}\}$
in which $\omega_i^{\rm R}\in I_i$ for each $i$,
the coefficients can then be re-evaluated leading to the result,
\begin{eqnarray}
c_{i} &=& \Biggl\{\frac{2 \gamma_{0}}{\pi}
\left( \omega_{i}^{\rm R}
\right)^{2}
\Bigl[\tan^{-1}( i\tau\omega_c )
\Bigr.\Biggr.\nonumber\\ \Biggl.\Bigl.
&&\quad - \tan^{-1}( \{i-1\}\tau\omega_c)
\Bigr]\Biggr\}^{\frac{1}{2}}
\;.
\end{eqnarray}
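In practice, these window-integrated coefficients can be generated as sketched below. The window integral of the Lorentzian spectral density is written here as an arctangent difference (an assumption consistent with the exponential kernel); a useful consequence is that the sum of the weights telescopes, so the total spectral weight $(2\gamma_0/\pi)\tan^{-1}(N\omega_c\tau)$ is recovered exactly, independent of the random draw:

```python
import math
import random

random.seed(1)                         # reproducible illustrative draw
gamma0, tau = 10.0, 0.5
M, N = 4, 200
omega_c = 1.0 / (M * tau)

c, omega_R = [], []
for i in range(1, N + 1):
    # Incommensurate frequency drawn uniformly in the window ((i-1)wc, i*wc].
    w = random.uniform((i - 1) * omega_c, i * omega_c)
    omega_R.append(w)
    # Window integral of the Lorentzian spectral density (arctan difference).
    weight = (2.0 * gamma0 / math.pi) * (
        math.atan(i * tau * omega_c) - math.atan((i - 1) * tau * omega_c))
    c.append(w * math.sqrt(weight))

# Telescoping check: the total weight is independent of the random draw.
total = sum((ci / wi) ** 2 for ci, wi in zip(c, omega_R))
print(total, (2.0 * gamma0 / math.pi) * math.atan(N * tau * omega_c))
```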
This choice of coefficients was also tested on the GLE but it
led to similar results for the
correlation function of the projected (random) forces $\xi(t)$,
both in terms of the accuracy
and in reproducing the long-time periodicity.
It was therefore determined that the results for the GLE
(and thereby the iGLE)
are not highly sensitive to the limiting form of the coupling constants
so long as the choice satisfies the appropriate friction kernel,
while the random choice of frequencies did not remove the false long-time
periodicity in the autocorrelation of the force $\xi(t)$.
This discussion, though somewhat pedantic, does offer a critical warning:
any numerically measured behavior in the chosen particle that is
correlated for times longer than the period of the false long-time
periodicity in the discrete representation is susceptible to error.
This, in turn, places an upper bound on the slowest time scale
---{\it viz.}~$\tau_g$ in Eq.~(\ref{eq:switch})--- that
can be imposed on the nonstationary behavior of the bath coupling
for a given choice of discrete oscillators in the Hamiltonian representation.
In the simulations that follow, this bound is indeed satisfied.
\subsection{The Free Particle}
The numerical equivalence between the
iGLE and the Hamiltonian system of Eq.~\ref{eq:iGLEall}
can be illustrated using the
same model of the nonequilibrium change in the environment
originally investigated in the context of the
phenomenological iGLE.\cite{hern99a}
To this end, numerical results are presented for the Hamiltonian
system of a tagged free particle ($V(q) = 0$) bilinearly coupled to
a bath of 200 harmonic modes whose smallest
characteristic bath frequency is $\omega_c = 1/(M \tau)$
where $M = 4$.
Individual bath frequencies are taken at discrete values,
$\omega_{i} = (i- 1/2) \omega_{c}$, and coefficients as per
Eq.~\ref{eq:coupl}.
All other parameters have identical values to those used in the
numerical integration studies of the iGLE in Ref.~\onlinecite{hern99a}
with the exception that 100,000
trajectories were used in this study, a 10-fold increase.
In summary, all simulations share the following
set of parameters: $100{,}000$ trajectories, $k_{B}T = 2.0$, $\gamma_{0} = 10.0$,
$\tau = 0.5$, $\tau_{g} = 0.2$, $\Delta t = 1\times10^{-4}$ for $t \geq -8$,
and $\Delta t = 1\times10^{-3}$ for $t < -8$.
The time dependent friction is modulated
through the switching function $g(t)$ as shown in Fig.~\ref{fig:switch}.
Each individual trajectory is first equilibrated from $t=-20$ to
ensure that the observed dynamics are influenced only by the
irreversible change in the friction and not the dynamics of
equilibration. The system is propagated using the velocity-Verlet
method with smaller timesteps during the regime of friction increase
than for the constant friction regimes.\cite{allen87}
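In the stationary limit ($g$ constant, so that $\dot g=0$ and $\delta V_2$ vanishes), the Hamiltonian of Eq.~(\ref{eq:iGLEham}) is an ordinary bilinearly coupled oscillator system, and velocity-Verlet then conserves the total energy up to the usual $O(\Delta t^2)$ fluctuations. The following sketch verifies this with a small illustrative bath (a free tagged particle and $N=20$ modes, not the production parameters):

```python
import math

gamma0, tau, g = 10.0, 0.5, 1.0
M, N = 4, 20
omega_c = 1.0 / (M * tau)
omegas = [i * omega_c for i in range(1, N + 1)]
cs = [math.sqrt(2.0 * gamma0 * tau * omega_c / math.pi
                * w ** 2 / (1.0 + (w * tau) ** 2)) for w in omegas]

def forces(q, x):
    # dV1/dq counter-term plus bilinear coupling; V(q) = 0 (free particle).
    fq = g * sum(ci * xi for ci, xi in zip(cs, x)) \
         - g * g * sum((ci / wi) ** 2 for ci, wi in zip(cs, omegas)) * q
    fx = [-wi * wi * xi + g * ci * q for ci, wi, xi in zip(cs, omegas, x)]
    return fq, fx

def energy(q, p, x, px):
    v1 = 0.5 * g * g * sum((ci / wi) ** 2 for ci, wi in zip(cs, omegas)) * q * q
    bath = sum(0.5 * pi ** 2 + 0.5 * wi ** 2 * xi ** 2
               for pi, wi, xi in zip(px, omegas, x))
    coupling = -g * sum(ci * xi for ci, xi in zip(cs, x)) * q
    return 0.5 * p ** 2 + v1 + coupling + bath

# Arbitrary (illustrative) initial conditions.
q, p = 0.3, 1.0
x = [0.1 * math.sin(i) for i in range(N)]
px = [0.1 * math.cos(i) for i in range(N)]

dt, steps = 1.0e-3, 2000
e0 = energy(q, p, x, px)
fq, fx = forces(q, x)
for _ in range(steps):                 # velocity-Verlet: kick-drift-kick
    p += 0.5 * dt * fq
    px = [pi + 0.5 * dt * fi for pi, fi in zip(px, fx)]
    q += dt * p
    x = [xi + dt * pi for xi, pi in zip(x, px)]
    fq, fx = forces(q, x)
    p += 0.5 * dt * fq
    px = [pi + 0.5 * dt * fi for pi, fi in zip(px, fx)]
e1 = energy(q, p, x, px)
print(abs(e1 - e0) / abs(e0))          # relative energy drift
```

The relative drift stays far below unity over the run, illustrating the baseline against which the nonstationary ($\dot g\neq0$) simulations must be compared.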
In order to determine whether or not the
$\delta V_2[q(\cdot),t]$ term is negligible,
simulations were first performed with $\delta V_2[q(\cdot),t] = 0$.
For all three cases of the change in friction, the non-local
$\delta V_2[q(\cdot),t]$ term was found to be non-negligible
as can be seen in Fig.~\ref{fig:vsqnoV2} by the fact that
\begin{figure}
\includegraphics*[width=2.5in]{./vsqnoV2.ps}
\caption
{The mean square velocity $\langle v^2(t) \rangle$ is
displayed for each of the cases as a function of time t
and case I, II or III with the non-local term
$\delta V_2[q(\cdot),t]$ set equal to zero.
}
\label{fig:vsqnoV2}
\end{figure}
the system does not obey equipartition of energy during times
near $t=0$.
The larger the friction increase, the larger the deviation
from equipartition. This can be seen in Fig.~\ref{fig:vsqnoV2}
as the largest deviations are seen for case I, where the switching function
$g(t)$ increases from 0 to 10. The friction increases over a similar range,
and the average square velocity $\langle v(t)^2 \rangle$ peaks
near 8000 around $t=0$. The system does not conserve
total energy in these cases.
However, upon introducing the nontrivial terms,
$\delta V_2[q(\cdot),t]$ and its derivative (Eq.~\ref{eq:Vone}),
within the Hamiltonian equations of motion,
equipartition for the system is preserved, as shown in Fig.~\ref{fig:vsq}.
\begin{figure}
\includegraphics*[width=2.5in]{./vsq.ps}
\caption
{The mean square velocity $\langle v^2(t) \rangle$ is
displayed for each of the cases as a function of time t
and case I, II or III with the non-local term
$\delta V_2[q(\cdot),t]$ explicitly considered.
}
\label{fig:vsq}
\end{figure}
The $\delta V_2[q(\cdot),t]$ term is therefore not negligible in the extreme
test cases examined in this work in which all the interesting
time scales $1/\gamma_0$, $\tau_g$, and $\tau$ are
comparable.
Explicit integration
of $\delta V_2[q(\cdot),t]$ is computationally expensive.
This expense can be reduced when the
stationary friction kernel has an exponential form.
In this case, the derivative,
$\frac{\partial{\delta V_2[q(\cdot),t]}}{\partial t}$,
can be calculated using the auxiliary variable
$z$, where
\begin{eqnarray}
\frac{\partial z}{\partial t} = \dot{g}(t) q(t) - \tfrac{1}{\tau} z(t).
\end{eqnarray}
The value of $\frac{\partial{\delta V_2[q(\cdot),t]}}{\partial t}$
can then be obtained at each timestep,
\begin{eqnarray}
\frac{\partial{\delta V_2[q(\cdot),t]}}{\partial t} = \gamma_{0}g(t)z(t).
\end{eqnarray}
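The accuracy of this auxiliary-variable propagation can be checked against direct quadrature of the memory integral it replaces, $z(t)=\int_{t_0}^{t}dt'\,\dot g(t')\,e^{-(t-t')/\tau}\,q(t')$. The sketch below uses smooth illustrative choices for $\dot g(t)$ (the derivative of a tanh switch) and $q(t)$, not the simulation trajectories:

```python
import math

tau, tau_g = 0.5, 0.2

def gdot(t):
    # Illustrative modulation rate: d/dt tanh(t / (2 tau_g)).
    return 1.0 / (2.0 * tau_g * math.cosh(t / (2.0 * tau_g)) ** 2)

def q(t):
    return math.cos(3.0 * t)           # illustrative trajectory

# Propagate dz/dt = gdot(t) q(t) - z / tau from t0 to t1 (forward Euler).
dt, t0, t1 = 1.0e-4, -2.0, 2.0
nsteps = int(round((t1 - t0) / dt))
z, t = 0.0, t0
for _ in range(nsteps):
    z += dt * (gdot(t) * q(t) - z / tau)
    t += dt

# Direct quadrature of the equivalent memory integral at t1.
z_direct = sum(gdot(t0 + k * dt) * math.exp(-(t1 - (t0 + k * dt)) / tau)
               * q(t0 + k * dt) * dt for k in range(nsteps))
print(z, z_direct)
```

The two evaluations agree to within the discretization error, while the auxiliary-variable route costs $O(1)$ work per timestep instead of rebuilding the full history integral.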
All of the results shown in this paper have been calculated using the
integration of the auxiliary variable $z$ because it is
formally equivalent to the explicit integration of Eq.~(\ref{eq:Vone})
while requiring fewer computing cycles.
As can be seen from Fig.~\ref{fig:vsq},
all three test cases lead to the same level of fluctuations in the long time regime
even though they begin from different initial states. In the limit
of an infinite number of
trajectories, all cases would exhibit no fluctuations, but these
fluctuations due to finite size effects are indicative of the system
dynamics. For example, in case I the particle obeys equipartition
perfectly for the early regime, because its motion is ballistic,
whereas the early time dynamics for cases II and III clearly show the influence
of coupling to the harmonic bath particles. It can be seen
from the plot of the average square velocity that
the system conserves energy in these calculations
when the $\delta V_2[q(\cdot),t]$ term is included.
Unfortunately, the additional test for the conservation of energy
cannot be computed directly in these cases because
$\delta V_2[q(\cdot),t]$ depends on the entire history $q(t)$,
which cannot be known {\it a priori}.
The construction of the iGLE also requires that the correlation
function of the random forces satisfies a nonstationary extension
of the FDR,
\begin{eqnarray}
k_{B}Tg(t)g(t^{\prime})\gamma_{0}(t-t') = \langle \xi(t) \xi(t') \rangle,
\end{eqnarray}
with respect to the explicit forces $\xi(t)$ on the tagged particle
seen in a microscopic system,
\begin{eqnarray}
\xi(t)= \dot{v} + g(t)\int_0^t\!dt'\, g(t') \gamma_0(t-t') {v}(t') - F(t).
\end{eqnarray}
That the mechanical system satisfies this relationship is
illustrated in
Fig.~\ref{fig:fric1}, Fig.~\ref{fig:fric2} and Fig.~\ref{fig:fric3}
for cases I, II and III, respectively.
\begin{figure}
\includegraphics*[width=2.5in]{./fric1.ps}
\caption
{The nonstationary friction kernel $\gamma (t,t^{\prime})$
in case I at several different times $t$
(-4.0 thick solid line, -0.5 solid line, 0.0 dashed line,
1.0 long dashed line and 4.0 dot dashed line)
as a function of the
previous times $t-t^{\prime}$. The friction kernel is normalized
by the value $\gamma (t,t)$ for illustrative
and comparative purposes.
}
\label{fig:fric1}
\end{figure}
\begin{figure}
\includegraphics*[width=2.5in]{./fric2.ps}
\caption
{The nonstationary friction kernel $\gamma (t,t^{\prime})$
in case II at several different times $t$
(-4.0 thick solid line, -0.5 solid line, 0.0 dashed line,
1.0 long dashed line and 4.0 dot dashed line)
as a function of the
previous times $t-t^{\prime}$. The friction kernel is normalized
by the value $\gamma (t,t)$ for illustrative
and comparative purposes.
}
\label{fig:fric2}
\end{figure}
\begin{figure}
\includegraphics*[width=2.5in]{./fric3.ps}
\caption
{The nonstationary friction kernel $\gamma (t,t^{\prime})$
in case III at several different times $t$
(-4.0 thick solid line, -0.5 solid line, 0.0 dashed line,
1.0 long dashed line and 4.0 dot dashed line)
as a function of the
previous times $t-t^{\prime}$. The friction kernel is normalized
by the value $\gamma (t,t)$ for illustrative
and comparative purposes.
}
\label{fig:fric3}
\end{figure}
The integral can be simplified
using an auxiliary variable $z_{2}$ akin to the method for replacing
${\delta V_2[q(\cdot),t]}$ with $z$.
The auxiliary variable satisfies,
\begin{eqnarray}
\frac{\partial z_{2}}{\partial t} = g(t) v(t) - \tfrac{1}{\tau} z_{2}(t).
\end{eqnarray}
The explicit expression for
the forces on the tagged particle can then be obtained
at each timestep by substitution,
\begin{eqnarray}
\xi(t)= \dot{v} + \gamma_{0}g(t)z_{2}(t) - F(t).
\end{eqnarray}
The autocorrelation function of the force due to the
bath ---{\it viz.}, the random force in the projected variables---
obeys the FDR for all cases and times.
It should be emphasized that this remarkable agreement would not have been
found if the nonlocal term were omitted.
The velocity autocorrelation functions for the tagged free
particle are shown in
Fig.~\ref{fig:vauto}.
\begin{figure}
\includegraphics*[width=2.5in]{./vels.ps}
\caption
{The autocorrelation function of the velocity
of the free tagged particle, $\langle v(t)v(t^{\prime}) \rangle$,
is displayed for each of the three cases of the change in friction
at initial times $t$=
(-4.0 thick solid line, -0.5 solid line, 0.0 dashed line,
1.0 long dashed line and 4.0 dot dashed line). The panels
indicate cases I,II and III from top to bottom.
}
\label{fig:vauto}
\end{figure}
The dynamics of the chosen coordinate change in the
same manner as in the earlier numerical integration studies of the iGLE.
The curves match the previous results exactly, with the exception
that the $t=0$ and $t=-0.5$ curves were mislabeled in Ref.~\onlinecite{hern99a}.
The long time ($t= 4.0$) autocorrelation functions
are all the same, indicating that all cases reach the same equilibrium,
as would be expected. In case I, the early time ($t=-4.0$)
autocorrelation function is a straight line since the particle
is in the ballistic regime. All curves start at approximately 2.0
for $t-t^{\prime} = 0$ since the system satisfies equipartition.
\subsection{The Particle in a Harmonic Potential}
The same simulations were run for a tagged particle in a harmonic well
characterized by a frequency $\omega=1$.
The results are essentially all the same with the following exceptions:
The system with tagged particle in a harmonic well is less
sensitive to the increase in friction and can be simulated accurately
with larger timesteps than the free particle case due to the confining
effect of the harmonic well. Similarly, the system does not yield
such large spikes in the average square velocity for simulations
with $\delta V_2[q(\cdot),t] = 0$, although the explicit calculation of that
term is still a necessity for accurate results.
Clearly the early time dynamics will differ between the free particle
and harmonic particle cases as can be seen from the velocity autocorrelation
functions for the harmonic tagged particle case in
Fig.~\ref{fig:vautoh}.
\begin{figure}
\includegraphics*[width=2.5in]{./velsh.ps}
\caption
{The autocorrelation function of the velocity
of the harmonic tagged particle, $\langle v(t)v(t^{\prime}) \rangle$,
is displayed for each of the three cases of the change in friction
at initial times $t$=
(-4.0 thick solid line, -0.5 solid line, 0.0 dashed line,
1.0 long dashed line and 4.0 dot dashed line). The panels
indicate cases I,II and III from top to bottom.
}
\label{fig:vautoh}
\end{figure}
Interestingly, though, this is only the case
for case I. In cases II and III the dynamics are essentially
identical to the free particle case. This is due to the fact that
the initial friction, and thus the coupling, is so strong
(10.0 for case II and 50.0 for case III) that the tagged particle
potential makes an insignificant contribution to the
energy as compared to the contribution from the interaction
with the bath.
\section{Concluding Remarks}
The equivalence of the stochastic
iGLE and the deterministic Hamiltonian system has been
demonstrated by numerical integration of the equations of motion
corresponding to the Hamiltonian in Eqn.~\ref{eq:iGLEham}.
The Hamiltonian system with 200 bath particles
has been shown to exhibit dynamics equivalent to those seen in
the numerical integration of the iGLE. This is expected to be the
case in the limit of an infinitely large bath. The equivalence
is contingent upon the explicit evaluation of the non-local
dissipative term which is non-zero for all the cases
of interest in this study. Although the free particle and harmonic
oscillator cases do not contain a reactant/product boundary,
they should be sufficient to verify the general agreement between the iGLE
and the mechanical oscillator system.
The deterministic mechanical system
serves as further
evidence that the stochastic equation of motion is not purely a
fiction (e.g., phenomenological),
but rather is equivalent to a physical system with
an explicit energy function or Hamiltonian.
\section{Acknowledgments}
We gratefully acknowledge
Dr.~Eli Hershkovitz for insightful discussions
and Dr.~Alexander Popov for a critical reading of the manuscript.
This work has been partially supported by a National Science Foundation
Grant, No.~NSF 02-123320.
The Center for Computational Science and Technology
is supported through a Shared University Research (SUR)
grant from IBM and Georgia Tech.
Additionally, RH is the Goizueta Foundation Junior Professor.
\section{Introduction}
Quality assessment of hog carcasses has long been practiced in Canada and many other countries \cite{Fredeen:68,Pomar:09}. The quality of a pork carcass can be determined based on its overall body composition by measuring the amount of muscle, fat, skin and bone, or according to the quantity of these tissues inside the primary and commercial cuts. In the literature, there have been different research objectives to evaluate carcasses' quality and cuts. Gispert \textit{et al.} \cite{Gispert:07} characterized pork carcasses based on their genotypes information, and measurements were taken using a ruler and the Fat-O-Meat’er. Marcoux \textit{et al.} \cite{Marcoux:05} employed dual-energy X-ray absorptiometry (DXA) technology to predict carcass composition of three genetic lines with a wide range of varying compositions. Pomar \textit{et al.} \cite{Pomar:03} compared two grading systems based on Destron (DPG) and Hennessy (HGP) probe measurements to verify if both grading approaches result in similar lean yields and grading indices in actual pork carcasses. Engel \textit{et al.} \cite{Engel:03} proposed a different sampling scheme by considering some of the predictive variables to check the accuracy and the approval of new grading systems in slaughterhouses. Picouet \textit{et al.} \cite{Picouet:10} suggested a predictive model based on a density correction equation to determine weight and lean content. In an effort to replace traditional procedures such as dissection, Vester-Christensen \textit{et al.} \cite{Vester-Christensen:09} took advantage of computed tomography (CT)-scans and a contextual Bayesian classification scheme to classify pork carcasses into three types of tissues. The cutout and dissection procedure proposed by Nissen \textit{et al.} \cite{Nissen:06} is a well-recognized reference method to assess the quality of pork carcasses. 
However, this approach is time-consuming and financially expensive, and it requires attention, space and qualified personnel, in addition to carrying a risk of bias between butchers \cite{Vester-Christensen:09,Picouet:10}.
Hence, the pork industry, including all stakeholders from production to meat sale, is seeking a way to make the most profitable decisions. One solution is to carry out carcass quality evaluations to know the results coming from a choice of genetic lines, a diet or a breeding method. However, due to the difficulties in conducting the cutting and dissection procedure by butchers, the commercial environment has more constraints than the research environment. Therefore, it becomes more important to develop a simple, fast and precise method to replace the traditional approaches in the commercial environment.
In this paper, we digitize carcasses in three dimensions using a 3D scanner and then make a triangular mesh model of each pork half-carcass to develop a framework for weight prediction of the different cuts and their tissue composition. Unlike images, triangular meshes have irregular connectivity, which demands an efficient and concise design to capture the intrinsic information of the object while staying robust against different triangulations \cite{qiao:19}. This requires the design of a descriptor (signature) that is invariant to isometric deformation of a meshed object while keeping discriminative geometric information \cite{Wang:20}. To this end, we employ a compact signature based on spectral analysis of the Laplace-Beltrami Operator (LBO) to capture the intrinsic geometric properties of shapes. This compact representation of 3D objects simplifies the problem of shape comparison to the problem of signature comparison and provides a relatively accurate prediction of pork cut weights.
The spectral signatures can be employed in a broad range of applications including medical shape analysis \cite{Masoumi:18b}, 3D object analysis \cite{Bronstein:11,Rodola:SHREC17,Masoumi:17}, shape matching \cite{Melzi:19}, and segmentation \cite{YI:17}. In the literature, there has been a surge of interest in eigenmodes (eigenvalues and eigenvectors) of LBO to build local or global spectral signatures. The power of spectral signatures is mainly due to the spectrum related to the natural frequencies and the associated eigenvectors that yield the wave pattern \cite{Levy:06,Atasoy:16}.
The local spectral signatures are defined on each vertex of a mesh and provide information about the neighborhood around a vertex \cite{Masoumi:19a}. Intuitively, points around a neighborhood share similar geometric information, hence their corresponding local descriptors should represent similar patterns. The local spectral signatures include the heat kernel signature (HKS) \cite{Sun:09}, the wave kernel signature (WKS) \cite{Aubry:11}, and the global point signature (GPS) \cite{Rustamov:07}. From the graph Fourier view, HKS captures information related to the low-frequency components, which reflect the macroscopic information of a 3D object. Moreover, WKS allows access to the information of the high-frequency components, which correspond to the microscopic properties of a 3D model. Furthermore, in GPS one may face the problem of eigenvector switching when the corresponding eigenvalues are close to each other.
Global signatures, on the other hand, encode information about the geometry of the entire 3D object. Shape-DNA~\cite{Reuter:06} was introduced by Reuter \textit{et al.} as a global signature defined by a non-trivial truncated sequence of eigenvalues of the LBO, normalized by mesh area and arranged in ascending order. Gao \textit{et al.} proposed the compact Shape-DNA \cite{Gao:14} by applying the discrete Fourier transform to the eigenvalues of the LBO. A new version of GPS developed by Chaudhari \textit{et al.} \cite{Chaudhari:14}, called the GPS embedding, is a global descriptor defined as a truncated sequence of inverse square roots of the eigenvalues of the LBO. However, global spectral signatures provide a limited representation and fail to recognize fine-grained patterns in a 3D model.
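To make the notion of a global spectral signature concrete: in its simplest graph-based form, a Shape-DNA-style descriptor reduces to the sorted spectrum of a mesh Laplacian. The sketch below uses a combinatorial graph Laplacian of a tetrahedral mesh as a stand-in for the discretized LBO (actual pipelines typically use cotangent-weighted operators on the scanned meshes):

```python
import numpy as np

# Edges of a tetrahedron: the complete graph on 4 mesh vertices.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n = 4
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian

# Shape-DNA-style signature: eigenvalues sorted in ascending order,
# discarding the trivial zero mode.
eigvals = np.linalg.eigvalsh(L)
signature = eigvals[1:]
# For the complete graph K4: eigenvalue 0 once, eigenvalue 4 three times.
print(signature)
```

Comparing two shapes then amounts to comparing two such (truncated) spectra, which is the simplification that motivates the spectral approach adopted here.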
Recently, the spectral graph wavelet signature (SGWS) has been developed by Masoumi \textit{et al.} \cite{Masoumi:16} as an efficient and informative local spectral signature, which allows analysis of the 3D mesh at different frequencies. Unlike GPS, HKS, and WKS, SGWS leverages the power of wavelets to provide information on both the macroscopic and microscopic geometry of a shape, leading to a more discriminative feature. In this paper, we introduce \textit{SpectralWeight}, in which each 3D model is represented by SGWS to automate the estimation of the weight of pork cuts. Our objective in this study is to verify the accuracy of prediction for different variables of interest and possibly integrate the calculation method into a complete tool that can be used in a commercial environment. To the best of our knowledge, this is the first study on employing SGWS for weight prediction of pork carcasses.
The contribution of this paper is twofold: (1) we propose SpectralWeight, a framework that precisely models a pork half-carcass by harnessing the power of spectral graph wavelets; and (2) we exploit SpectralWeight as a predictive model to weigh different cuts of pork.
\section{Material and Methods}\label{Method}
\subsection{Sampling scheme}
To meet the objectives of this project, we selected $195$ pork carcasses, including $100$ barrows and $95$ females, from commercial slaughterhouses in Quebec, Canada. To obtain high variability of conformation, carcasses were sampled in a weight range of $83.8$ kg to $116.2$ kg and a backfat thickness range of $7.6$ mm to $30.6$ mm. Backfat thickness was first measured using a ruler at the cleft at the level of the fourth-last thoracic vertebra. The official backfat measurement was then retaken using a digital caliper on a chop cut at the same thoracic level (fourth-last vertebra), $7$ cm from the cleft and perpendicular to the skin. The conformation of the carcasses was divided into four classes represented by the letters C, B, A and AA. Class C represents a long carcass with a thin-looking leg, while class AA represents a stocky carcass with a highly-rounded leg shape (Figure \ref{Carcass conformation}). At the time of weighing, the hot carcass was presented with the head, tail, leaf fat, hanging tender and kidneys. We retained only carcasses properly split in the middle of the spinal column and without tissue ablation; therefore, each carcass side was considered to be bilaterally symmetric. The scale of variation within each sampling criterion is intended to provide a more robust estimate of the predictive model parameters at the extremes of weight and backfat thickness \cite{Daumas:96}. Only the left half-carcasses were transported to the Sherbrooke Research and Development Center (RDC) of Agriculture and Agri-Food Canada (AAFC). The carcasses were stored in a cooler at $2 \degree C$ in a plastic bag to minimize water loss. The 3D scanning, cutout, dissection and determination of meat cut fat content were completed within days of receipt of the carcasses at the AAFC RDC.
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{cc}
\includegraphics[scale=.7]{./Figures/carcass_conformation.pdf}
\end{tabular}
\caption{Pork carcass conformation classes}
\label{Carcass conformation}
\end{figure}
\medskip
\subsection{Half-carcass preparation and modeling}
Before being digitized in 3D, the half-carcasses were prepared in a standard way by removing the tail, the hanging tender and the remains of leaf fat present in the carcass cavity. The jowl was shortened to a uniform length of $15$ cm from the base of the shoulder. The final weight of the half-carcass was subsequently recorded. The total length of the half-carcass was measured using a tape measure from the tip of the rear hooves to the first cervical vertebra. This length was used to determine the cutting sites for the shoulder and ham. Three-dimensional scanning of each half-carcass was performed using the Go!SCAN 3D\texttrademark{} scanner (Model $50$, Creaform, Levis, Quebec, Canada) and post-processed by 3D software (Vxelement, Version 6.3 SR1, Creaform, Levis, Quebec, Canada). The 3D scanner uses white structured-light technology and requires neither targets affixed to the carcass nor additional lighting. Quality control was performed at the beginning of each day by scanning a target provided by the company; all quality controls were passed during the project. The resolution between the mesh points was set to $0.2$ cm, and the 3D models were saved and used in OBJ format.
\subsection{Cutout and dissection}
Once the half-carcass was scanned, the four primal cuts (leg, shoulder, loin, and belly) were prepared. The leg and shoulder were cut at proportional distances of $40.90\%$ and $85.54\%$ of the total length of the half-carcass, respectively. These proportions were determined in a previous cutout study (unpublished results). The primal loin and belly were separated by a straight cut passing $1.5$ cm from the tenderloin and $10$ cm from the base of the ribs, opposite the fourth-last thoracic vertebra.
The primal cuts were then prepared into commercial cuts according to different standards. The commercial cuts are presented with or without the skin, more or less defatted, and with or without bone, as appropriate. The skin and ribs from the primal belly were removed. Subsequently, the mammary glands and a portion at the posterior end of the belly (belly trimmings) were cut to create a rectangular appearance. Specifications for the preparation of commercial cuts and their identification codes are presented in the Canadian Pork Handbook and the Distributor Education Program (DEP) \cite{CPI:11}. The cuts illustrated and described in this manual correspond to the basic specifications followed by the Canadian pork industry. It is worth noting that there are no reference numbers for the four primal cuts (Leg, Loin, Shoulder, and Belly) in \cite{CPI:11}. Figure \ref{primal cuts} illustrates the four primal cuts, the belly commercial trim $C400$, and the belly trimmings. The cutout work resulted in the following parts: Pork leg $C100$, Shoulder blade $C320$, Boneless shoulder blade $C325$, Hock $C355$, Shoulder picnic $C311$, Loin $C200$, Boneless loin $C201$, Skinless tenderloin $C228$, Back ribs $C505$, Belly commercial trim $C400$, Side ribs $C500$, and Belly trimmings. The amounts of bone, skin, and meat (muscle and fat not separated) contained in the primal and commercial cuts were obtained by a dissection procedure, and weights were recorded. The meat contained in the main commercial cuts (Pork leg $C100$, Boneless loin $C201$, Loin $C200$, Belly commercial trim $C400$, Shoulder picnic $C311$, Boneless shoulder blade $C325$, and Belly trimmings) was minced, and a representative sample was taken to determine lipid, protein and dry matter content using near-infrared transmittance \cite{Shirley:07}. It should be noted that the lipid content was used in this study to calculate the weight of fat in the meat of the main commercial cuts.
To convert the lipid content to dissected fat weight, a sample of pure muscle and pure fat from each meat mass was also analyzed for lipid content using the same method. Using the data collected from the muscle and fat samples for each cut, an equation was developed to convert the meat lipid content to the dissected fat content. This procedure allows an equivalent amount of fat to be obtained without physically separating the muscle and fat from the meat from the entire mass using a knife.
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{cc}
\includegraphics[scale=1.2]{./Figures/primal_cuts.pdf}
\end{tabular}
\caption{Four primal cuts, belly commercial trim C400, and belly trimmings (illustrated in the box): (1) primal ham, (2) primal loin, (3) primal shoulder, (4) primal belly, (5) belly commercial trim C400, (6) belly trimmings.}
\label{primal cuts}
\end{figure}
\subsection{Problem statement and method}
We model a pork half-carcass $\mathbb{T}$ as a triangulated mesh $(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{\bm{v}_{i}\mid i=1,\ldots,N\}$ is the set of vertices and $\mathcal{E}=\{e_{ij}\}$ is the set of edges. Given the vertex coordinate function $\mathcal{P}=(p_{1},p_{2},p_{3}):\,\mathcal{V}\to\mathbb{R}^{3}$, our objective is to build a local descriptor $f(\bm{v}_{i}) \in \mathbb{R}^{d}$ for each vertex $\bm{v}_{i}$. Figure \ref{Triangulated carcass} (left) shows an example of a triangulated mesh of a pork half-carcass.
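Since each model is stored as a triangulated OBJ mesh, the pair $(\mathcal{V},\mathcal{E})$ can be read directly from the file. The following minimal Python sketch is illustrative only (the study's pipeline used Creaform software and MATLAB); it parses the \texttt{v} and \texttt{f} records of a tiny OBJ into vertex and edge sets:

```python
import io

def load_obj(stream):
    # Parse 'v' (vertex) and 'f' (triangular face) records of a Wavefront
    # OBJ file into the vertex set V and edge set E described in the text.
    vertices, edges = [], set()
    for line in stream:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":  # vertex coordinate p = (p1, p2, p3)
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":  # triangular face contributes three edges
            idx = [int(p.split("/")[0]) - 1 for p in parts[1:4]]  # OBJ indices are 1-based
            for a, b in ((0, 1), (1, 2), (2, 0)):
                edges.add((min(idx[a], idx[b]), max(idx[a], idx[b])))
    return vertices, edges

# A single triangle: 3 vertices and 3 edges.
V, E = load_obj(io.StringIO("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"))
```

Each triangular face contributes three undirected edges; storing them as sorted index pairs avoids duplicates across adjacent faces.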
\medskip
We build the SpectralWeight framework on the eigensystem of the LBO, which is invariant to isometric deformations of non-rigid shapes. To obtain the eigenvalues and eigenvectors, we discretize the LBO using the cotangent weight scheme proposed in \cite{Meyer:03} and construct the Laplacian matrix as:
\begin{equation}
\bm{L}=\bm{A}^{-1}(\bm{D-W}),
\end{equation}
where $\bm{A}=\mathrm{diag}(a_{i})$ is the mass matrix, $\bm{D}=\mathrm{diag}(d_{i})$ is the degree matrix with $d_{i}=\sum_{j=1}^{n}w_{ij}$, and $\bm{W}=(w_{ij})$ is the sparse weight matrix with $w_{ij}=\left(\cot\alpha_{ij} + \cot\beta_{ij}\right)/2$ if $\bm{v}_{i}\sim \bm{v}_{j}$ and $w_{ij}=0$ otherwise. Here, $\alpha_{ij}$ and $\beta_{ij}$ are the angles $\angle(\bm{v}_{i}\bm{v}_{k_1}\bm{v}_{j})$ and $\angle(\bm{v}_{i}\bm{v}_{k_2}\bm{v}_{j})$ of the two triangles $\bm{t}^{\alpha}=\{\bm{v}_{i},\bm{v}_{j},\bm{v}_{k_1}\}$ and $\bm{t}^{\beta}=\{\bm{v}_{i},\bm{v}_{j},\bm{v}_{k_2}\}$ adjacent to the edge $e_{ij}$, and $a_i$ is the area of the Voronoi cell at vertex $\bm{v}_{i}$ (the shaded region in Figure \ref{Triangulated carcass}, right). Finally, the eigensystem of the LBO is obtained by solving the \textit{generalized eigenvalue problem}:
\begin{equation}
\bm{C}\bg{\xi}_{\ell}=\lambda_{\ell}\bm{A}\bg{\xi}_{\ell},
\end{equation}
where $\bm{C}=\bm{D-W}$, and $\lambda_{\ell}$ and $\bg{\xi}_{\ell}$ are the eigenvalues and eigenfunctions of the LBO, respectively. Following \cite{Masoumi:16}, we define the spectral graph wavelet signature at vertex $j$ and resolution level $L$ as:
\begin{equation}
\bm{s}_{L}(j)=\{W_{\delta_j}(t_k,j)\mid k=1,\dots,L\}\cup\{S_{\delta_j}(j)\},
\label{Eq:SGWSignatureLevel}
\end{equation}
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{cc}
\includegraphics[scale=.3]{./Figures/TriangleMeshRep.pdf}&
\hspace{1.5cm}
\includegraphics[scale=.3]{./Figures/cotangent_weight.pdf}
\end{tabular}
\caption{Triangulated mesh model of a pork half-carcass (Left); illustration of cotangent weight scheme (Right).}
\label{Triangulated carcass}
\end{figure}
where $W_{\delta_j}(t_k,j)$ and $S_{\delta_j}(j)$ are the spectral graph wavelet and scaling function coefficients, respectively, computed as follows (readers are referred to \cite{Masoumi:16} for a detailed description):
\begin{equation}
W_{\delta_j}(t,j)=\langle \bg{\delta}_{j},\bg{\psi}_{t,j} \rangle=\sum_{\ell=1}^{m}g(t\lambda_\ell)\xi_{\ell}^{2}(j),
\label{DeltaW_coefficients}
\end{equation}
and
\begin{equation}
S_{\delta_j}(j)=
\langle \bg{\delta}_{j},\bg{\phi}_{t} \rangle=
\sum_{\ell=1}^{m}h(\lambda_\ell)\xi_{\ell}^{2}(j).
\label{DeltaS_coefficients}
\end{equation}
We adopt the Mexican hat wavelet as the generating filter, which treats all frequencies as equally important and improves the discriminative power of SpectralWeight. SpectralWeight benefits from desirable properties such as insensitivity to isometric deformations and computational efficiency. Moreover, it merges the advantages of both band-pass and low-pass filters when building the local descriptor. Figure \ref{SGWT representation} depicts the $\chi^{2}$-distance between a highlighted point on the belly and all other points on the carcass. As can be observed, regions with geometrical structure similar to that of the specified point share the same color, while dissimilar regions bear different colors.
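The construction above (the cotangent Laplacian, the generalized eigenproblem, and the wavelet and scaling-function coefficients) can be sketched numerically as follows. This is an illustrative Python sketch, not the authors' MATLAB implementation, and it makes two simplifying assumptions flagged in the comments: barycentric vertex areas stand in for Voronoi cell areas, and a simple band-pass kernel $g(x)=xe^{-x}$ stands in for the Mexican hat filter.

```python
import numpy as np
from scipy.linalg import eigh

def sgws(verts, faces, scales):
    # Cotangent-weight Laplacian, generalized eigenproblem C xi = lambda A xi,
    # then wavelet / scaling-function coefficients at every vertex.
    n = len(verts)
    W = np.zeros((n, n))
    area = np.zeros(n)
    for i, j, k in faces:
        for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
            u, v = verts[a] - verts[c], verts[b] - verts[c]
            cot = (u @ v) / np.linalg.norm(np.cross(u, v))  # cot of angle at c
            W[a, b] += 0.5 * cot
            W[b, a] += 0.5 * cot
        t_area = 0.5 * np.linalg.norm(np.cross(verts[j] - verts[i], verts[k] - verts[i]))
        area[[i, j, k]] += t_area / 3.0  # barycentric mass (simpler than Voronoi)
    C = np.diag(W.sum(axis=1)) - W            # C = D - W
    A = np.diag(area)                         # mass matrix
    lam, xi = eigh(C, A)                      # generalized eigenvalue problem
    g = lambda x: x * np.exp(-x)              # band-pass kernel (stand-in for Mexican hat)
    h = lambda x: np.exp(-x)                  # low-pass scaling kernel
    sq = xi ** 2                              # xi_l(j)^2 for every vertex j
    rows = [sq @ g(t * lam) for t in scales]  # W_{delta_j}(t_k, j)
    rows.append(sq @ h(lam))                  # S_{delta_j}(j)
    return np.vstack(rows)                    # (L + 1) x n signature matrix

# Tetrahedron example: 4 vertices, 4 faces, two wavelet scales.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
S = sgws(verts, faces, scales=[1.0, 2.0])
```

Each row of the returned matrix is one resolution of the signature; stacking the scaling-function coefficients as the last row mirrors the union in the signature definition.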
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{ccccc}
\includegraphics[scale=.7]{./Figures/SGWT1.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT2.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT3.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT4.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT5.pdf}
\end{tabular}
\caption{Visualization of different resolutions (left to right: 1st, 2nd, 3rd, 4th, and 5th) of the spectral graph wavelet signature from a specified point (shown as a pink sphere) on a random pork half-carcass. The cooler and warmer colors represent lower and higher distance values, respectively.}
\label{SGWT representation}
\end{figure}
\medskip
The SpectralWeight framework comprises the following steps. We first compute an SGWS matrix $\bm{D}=(\bm{d}_{1},\dots,\bm{d}_{m})\in\mathbb{R}^{p\times m}$ for each half-carcass in the dataset $\mathcal{S}$, where $\bm{d}_i$ is the $p$-dimensional point signature at vertex $i$ and $m$ is the number of mesh points. In the second step, we construct a $p\times k$ dictionary matrix $\bm{V}=(\bm{v}_{1},\dots,\bm{v}_{k})$ through an unsupervised learning algorithm, i.e. clustering, by assigning each of the $m$ local descriptors to the cluster with the nearest mean. Next, we employ soft-assignment coding to map the SGWSs $\bm{d}_{i}$ to high-dimensional mid-level feature vectors, yielding a $k\times m$ matrix $\bm{C}=(\bm{c}_{1},\dots,\bm{c}_{m})$ whose columns are the $k$-dimensional mid-level feature codes. To aggregate the learned high-dimensional local features, we build a $k \times 1$ histogram $h_{r}=\sum_{i=1}^{m}c_{ri}$ for each half-carcass by sum-pooling the code assignment matrix $\bm{C}$. Then, we concatenate the SpectralWeight vectors $\bm{x}_i$ of all $n$ half-carcasses in the dataset $\mathcal{S}$ into a $k\times n$ data matrix $\bm{X}=(\bm{x}_{1},\dots,\bm{x}_{n})$. Afterward, we compute the geodesic distance \cite{kimmel:98} to extract the diameter $g$ of the 3D mesh, as well as the volume $v$ of each half-carcass $\mathbb{T}_{i}$, and append them to $\bm{X}$ to provide further discriminative power for SpectralWeight. Finally, partial least-squares (PLS) regression is performed on the data matrix $\bm{X}$ to find the equation that best fits the observations. The main steps of the SpectralWeight framework are outlined in Algorithm \ref{algo:1}.
\begin{algorithm}
\caption{SpectralWeight algorithmic steps}\label{algo:1}
\begin{algorithmic}[1]
\REQUIRE Set of triangular meshes of $n$ pork half-carcasses $\mathcal{S}=\{\mathbb{T}_1,\dots,\mathbb{T}_n\}$ and their weights $\mathbf{w}$
\STATE Simplify each model to have a uniform number of vertices.
\FOR{$j=1$ to $n$}
\STATE Compute SGWS matrix $\bm{D}_{j}$ of size $p\times m$ for each half-carcass $\mathbb{T}_{j}$.
\STATE Employ soft-assignment coding to determine the $k\times m$ code assignment matrix $\bm{C}_{j}$, where $k>p$.
\STATE Represent each half-carcass $\mathbb{T}_{j}$ as a $k \times 1$ histogram $\bm{h}$ by pooling of code assignment matrix $\bm{C}_{j}$.
\STATE Calculate volume $\bm{v}_{j}$ and diameter of the mesh through geodesic distance $\bm{g}_{j}$ for each 3D model in $\mathcal{S}$.
\ENDFOR
\STATE Arrange all $n$ histograms $\bm{h}$ into an $n\times k$ data matrix $\bm{X}=(\bm{x}_1,\dots,\bm{x}_n)^{T}$.
\STATE Append the mesh diameter $\bm{g}$, volume $\bm{v}$ and half-carcass weight $\bm{w}$ to form an $n\times (k+3)$ data matrix $\bm{X}$.
\STATE Perform partial least-squares regression on $\bm{X}$ to find the $n$-dimensional vector $\hat{\bm{y}}$ of predicted cut weights.
\ENSURE $n$-dimensional vector $\hat{\bm{y}}$ containing predicted weights of pork composition.
\end{algorithmic}
\end{algorithm}
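Steps 4--5 of Algorithm \ref{algo:1} (soft-assignment coding followed by sum-pooling) might be sketched as follows; the Gaussian kernel and its bandwidth $\sigma$ are illustrative assumptions, since the text does not fix the coding kernel:

```python
import numpy as np

def soft_assign_histogram(D, Vd, sigma=1.0):
    # D: p x m matrix of point signatures; Vd: p x k dictionary of cluster
    # centers (e.g., k-means means). Gaussian soft-assignment coding yields a
    # k x m code matrix C whose columns sum to one; sum-pooling its rows
    # gives the k x 1 histogram h_r = sum_i c_{ri}.
    d2 = ((D[:, None, :] - Vd[:, :, None]) ** 2).sum(axis=0)  # k x m squared distances
    C = np.exp(-d2 / (2.0 * sigma ** 2))
    C /= C.sum(axis=0, keepdims=True)  # soft assignment: columns sum to one
    return C.sum(axis=1)               # sum-pooled histogram

rng = np.random.default_rng(0)
D = rng.normal(size=(5, 10))    # m = 10 signatures of dimension p = 5
Vd = rng.normal(size=(5, 8))    # k = 8 dictionary words
h = soft_assign_histogram(D, Vd)
```

Because each column of the code matrix is normalized to sum to one, the pooled histogram entries always sum to $m$, the number of mesh points.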
We carried out the experiments in MATLAB on a laptop with a $2.00$ GHz Intel Core i7 processor and $16$ GB of RAM, using $301$ eigenvalues and corresponding eigenvectors of the LBO.
In this study, we set the resolution parameter to $R = 2$, leading to an SGWS matrix of size $5\times m$, where $m$ is the number of points in the 3D half-carcass model, and used four components for the PLS regression. It is noteworthy that the dictionary is learned offline by applying the $k$-means algorithm to the concatenated $p\times mn$ SGWS matrices of the $n$ meshes in dataset $\mathcal{S}$.
\begin{table}[b]
\caption{Descriptive characteristics and predicted weight of primal cuts.}
\label{Table:primal cuts}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{llllllll}
\toprule
Dependent variables\\ $n = 195$ & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\midrule
Ham & \quad $12.032$ & \quad $0.844$ & \quad $9.371$ & \quad $14.003$ & \quad $0.80$ & \quad $0.377$ & \quad $3.13$\\
\midrule
Shoulder & \quad $12.507$ & \quad $0.879$ & \quad $9.976$ & \quad $14.778$ & \quad $0.79$ & \quad $0.399$ & \quad $3.19$\\
\midrule
Loin & \quad $12.318$ & \quad $0.984$ & \quad $9.448$ & \quad $15.041$ & \quad $0.73$ & \quad $0.507$ & \quad $4.12$\\
\midrule
Belly & \quad $8.692$ & \quad $0.863$ & \quad $6.288$ & \quad $10.986$ & \quad $0.78$ & \quad $0.406$ & \quad $4.67$\\
\bottomrule
\end{tabular}}
\end{table}
\section{Results and Discussion}\label{experiment}
We assessed the performance of our proposed SpectralWeight framework for measuring hog carcass quality via extensive experiments. We created 3D models of $195$ half-carcasses using a 3D scanner, then downsampled the mesh surfaces to roughly $3000$ vertices per model. We subsequently applied SpectralWeight to extract geometric features of the 3D models and employed PLS regression to find the best parameters for the weight prediction of pork compositions. The basic idea behind PLS \cite{Wold:84} is to project high-dimensional features into a subspace of lower dimension. The features in the new subspace, so-called latent features, are linear combinations of the original features. PLS is useful in cases where the number of variables $(k+3)$ in the data matrix $\bm{X}$ is substantially greater than the number of observations $n$. We took advantage of PLS regression since multiple linear regression fails due to multicollinearity among the $\bm{X}$ variables; the regression is consequently performed on the latent variables.
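To make the latent-variable idea concrete, here is a bare-bones PLS1 (NIPALS) regression on synthetic low-rank data. It is a sketch of the technique, not the MATLAB implementation used in the study, and the data dimensions are illustrative:

```python
import numpy as np

def pls1(X, y, n_components):
    # Minimal PLS1 (NIPALS): extract latent components that maximize
    # covariance with y, deflate, then form the regression coefficients.
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    Ws, Ps, qs = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)   # weight vector
        t = Xc @ w                  # latent scores
        tt = t @ t
        p = Xc.T @ t / tt           # X loadings
        q = (yc @ t) / tt           # y loading
        Xc = Xc - np.outer(t, p)    # deflation
        yc = yc - q * t
        Ws.append(w); Ps.append(p); qs.append(q)
    Wm, Pm, qv = np.column_stack(Ws), np.column_stack(Ps), np.array(qs)
    B = Wm @ np.linalg.solve(Pm.T @ Wm, qv)   # coefficients in the original X space
    return lambda Xnew: y_mean + (Xnew - x_mean) @ B

# Synthetic data: 195 observations, 63 features driven by 4 latent factors.
rng = np.random.default_rng(0)
T = rng.normal(size=(195, 4))
X = T @ rng.normal(size=(4, 63))
y = T @ np.array([1.0, -2.0, 0.5, 3.0])
predict = pls1(X, y, n_components=4)
r2 = 1.0 - np.sum((predict(X) - y) ** 2) / np.sum((y - y.mean()) ** 2)
```

With four latent components and data driven by exactly four latent factors, the fit is essentially exact; on real carcass data the number of components trades off bias against overfitting.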
\begin{table}[t]
\caption{Descriptive characteristics and predicted weight of commercial cuts.}
\label{Table:commercial cuts}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{llllllll}
\toprule
Dependent variables\\ $n = 195$ & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\midrule
Pork leg C100 & \quad $11.314$ & \quad $0.816$ & \quad $8.745$ & \quad $13.317$ & \quad $0.80$ & \quad $0.365$ & \quad $3.22$\\
\midrule
Shoulder picnic C311 & \quad $4.510$ & \quad $0.446$ & \quad $3.529$ & \quad $5.789$ & \quad $0.54$ & \quad $0.304$ & \quad $6.73$\\
\midrule
Shoulder blade C320 & \quad $4.912$ & \quad $0.457$ & \quad $3.695$ & \quad $5.974$ & \quad $0.64$ & \quad $0.274$ & \quad $5.57$\\
\midrule
Shoulder blade boneless C325 & \quad $4.068$ & \quad $0.380$ & \quad $3.088$ & \quad $5.034$ & \quad $0.60$ & \quad $0.240$ & \quad $5.90$\\
\midrule
Loin C200 & \quad $10.065$ & \quad $0.820$ & \quad $6.849$ & \quad $12.034$ & \quad $0.70$ & \quad $0.446$ & \quad $4.43$\\
\midrule
Loin boneless C201 & \quad $7.370$ & \quad $0.661$ & \quad $4.686$ & \quad $9.392$ & \quad $0.64$ & \quad $0.395$ & \quad $5.36$\\
\midrule
Loin back ribs C505 & \quad $0.675$ & \quad $0.076$ & \quad $0.501$ & \quad $0.914$ & \quad $0.42$ & \quad $0.057$ & \quad $8.52$\\
\midrule
Tenderloin (skinless) C228 & \quad $0.477$ & \quad $0.065$ & \quad $0.262$ & \quad $0.669$ & \quad $0.56$ & \quad $0.043$ & \quad $8.97$\\
\midrule
Belly C400 & \quad $4.636$ & \quad $0.596$ & \quad $3.136$ & \quad $6.369$ & \quad $0.70$ & \quad $0.324$ & \quad $6.98$\\
\midrule
Belly trimmings & \quad $1.805$ & \quad $0.227$ & \quad $1.187$ & \quad $2.425$ & \quad $0.54$ & \quad $0.154$ & \quad $8.53$\\
\midrule
Side ribs (regular trim) C500 & \quad $1.855$ & \quad $0.214$ & \quad $1.226$ & \quad $2.434$ & \quad $0.51$ & \quad $0.150$ & \quad $8.06$\\
\midrule
Hock C355 & \quad $1.050$ & \quad $0.114$ & \quad $0.606$ & \quad $1.363$ & \quad $0.41$ & \quad $0.088$ & \quad $8.35$\\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[t]
\caption{Descriptive characteristics and predicted weight of tissue composition in major commercial cuts.}
\label{Table:major commercial cuts}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{lllllllll}
\toprule
Items\\ $n = 195$ & \quad Tissue composition & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\cmidrule(r){2-9}
\multirow{3}{*}{Pork leg C100} & \quad Muscle & \quad $8.014$ & \quad $0.704$ & \quad $5.860$ & \quad $10.124$ & \quad $0.68$ & \quad $0.396$ & \quad $4.95$\\
&
\quad Fat & \quad $1.934$ & \quad $0.427$ & \quad $1.115$ & \quad $3.102$ & \quad $0.67$ & \quad $0.243$ & \quad $12.59$\\
&
\quad Bone & \quad $0.941$ & \quad $0.080$ & \quad $0.757$ & \quad $1.172$ & \quad $0.56$ & \quad $0.053$ & \quad $5.62$\\
&
\quad Skin & \quad $0.391$ & \quad $0.061$ & \quad $0.257$ & \quad $0.619$ & \quad $0.28$ & \quad $0.052$ & \quad $13.22$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Shoulder picnic C311} & \quad Muscle & \quad $3.016$ & \quad $0.390$ & \quad $1.965$ & \quad $4.053$ & \quad $0.42$ & \quad $0.297$ & \quad $9.83$\\
&
\quad Fat & \quad $0.937$ & \quad $0.200$ & \quad $0.484$ & \quad $1.572$ & \quad $0.56$ & \quad $0.133$ & \quad $14.18$\\
&
\quad Bone & \quad $0.380$ & \quad $0.042$ & \quad $0.313$ & \quad $0.563$ & \quad $0.38$ & \quad $0.033$ & \quad $8.66$\\
&
\quad Skin & \quad $0.164$ & \quad $0.024$ & \quad $0.100$ & \quad $0.230$ & \quad $0.23$ & \quad $0.021$ & \quad $12.93$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Shoulder blade boneless C325} & \quad Muscle & \quad $3.178$ & \quad $0.315$ & \quad $2.263$ & \quad $4.026$ & \quad $0.57$ & \quad $0.207$ & \quad $6.51$\\
&
\quad Fat & \quad $0.890$ & \quad $0.169$ & \quad $0.472$ & \quad $1.372$ & \quad $0.53$ & \quad $0.116$ & \quad $13.01$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Loin boneless C201} & \quad Muscle & \quad $5.847$ & \quad $0.645$ & \quad $3.503$ & \quad $7.516$ & \quad $0.61$ & \quad $0.404$ & \quad $6.91$\\
&
\quad Fat & \quad $1.523$ & \quad $0.255$ & \quad $0.757$ & \quad $2.205$ & \quad $0.63$ & \quad $0.155$ & \quad $10.17$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Belly C400} & \quad Muscle & \quad $2.689$ & \quad $0.374$ & \quad $1.835$ & \quad $3.620$ & \quad $0.62$ & \quad $0.229$ & \quad $8.53$\\
&
\quad Fat & \quad $1.947$ & \quad $0.474$ & \quad $0.778$ & \quad $3.077$ & \quad $0.70$ & \quad $0.260$ & \quad $13.36$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Belly Trimmings} & \quad Muscle & \quad $0.811$ & \quad $0.124$ & \quad $0.297$ & \quad $1.154$ & \quad $0.29$ & \quad $0.104$ & \quad $12.80$\\
&
\quad Fat & \quad $0.994$ & \quad $0.204$ & \quad $0.512$ & \quad $1.699$ & \quad $0.45$ & \quad $0.151$ & \quad $15.18$\\
\bottomrule
\end{tabular}}
\end{table}
To evaluate the performance of SpectralWeight, we used the coefficient of determination ($R^{2}$), the root mean square error (RMSE), and the coefficient of variation error (CVe). It is worth noting that, since CVe incorporates the average weight of the cut, it is a more reliable and fair evaluation metric. To circumvent overfitting, we performed leave-one-out cross-validation over the pork shapes, in which each half-carcass is in turn held out for testing while the model is trained on the remaining instances. Tables \ref{Table:primal cuts} to \ref{Table:tissue composition} present the descriptive characteristics and predicted weights of primal cuts, commercial cuts, tissue composition in major commercial cuts, and tissue composition in half-carcasses, respectively.
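The three metrics can be computed as in the sketch below; the CVe formula, $100\times$RMSE divided by the mean cut weight, is consistent with the tabulated values (e.g., $0.377/12.032\approx 3.13\%$ for the ham in Table \ref{Table:primal cuts}):

```python
import numpy as np

def scores(y_true, y_pred):
    # R^2, RMSE, and coefficient of variation error CVe = 100 * RMSE / mean.
    resid = y_true - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    cve = 100.0 * rmse / float(y_true.mean())
    return r2, rmse, cve

# Toy example: predictions off by exactly 1 kg for three cuts.
y_true = np.array([10.0, 12.0, 14.0])
r2, rmse, cve = scores(y_true, y_true + 1.0)
```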
Table \ref{Table:primal cuts} shows the accuracy of weight prediction for the primal cuts, together with the mean and standard deviation of each cut. As can be seen, the lowest prediction error belongs to the ham cut with $CVe=3.13\%$, while the highest prediction error corresponds to the belly cut with $CVe=4.67\%$.
Table \ref{Table:commercial cuts} shows the performance of our algorithm in predicting the weights of commercial cuts. As shown, pork leg $C100$ achieved the highest prediction accuracy with $CVe=3.22\%$, while tenderloin (skinless) $C228$ had the lowest accuracy with $CVe=8.97\%$.
We extended our experiments to further evaluate the major commercial cuts by predicting their tissue composition. The two major commercial cuts of pork leg $C100$ and shoulder picnic $C311$ consist of four tissues, i.e. muscle, fat, bone and skin. As can be observed from Table \ref{Table:major commercial cuts}, the best weight predictions correspond to the muscle tissue of pork leg $C100$ and the bone tissue of shoulder picnic $C311$, with coefficients of variation error of $4.95\%$ and $8.66\%$, respectively. Shoulder blade boneless $C325$, loin boneless $C201$, belly $C400$ and belly trimmings are the other major commercial cuts, and they are composed of only muscle and fat. As shown in Table \ref{Table:major commercial cuts}, for all four of these cuts, muscle tissue achieved the highest prediction accuracy, with $CVe$ of $6.51\%$, $6.91\%$, $8.53\%$, and $12.80\%$, respectively.
To investigate the total tissue composition of muscle, fat, bone, and skin in the half-carcasses, we present Table \ref{Table:tissue composition}, which reports the characteristic information of each tissue for the $195$ half-carcasses separately. More precisely, the amount of muscle is obtained by summing the muscle of the major commercial cuts, including $C100$, $C311$, $C325$, $C201$, $C400$ and the belly trimmings. For fat, we likewise summed over the major commercial cuts $C100$, $C311$, $C325$, $C201$, $C400$, and the belly trimmings. To calculate the amount of bone, we considered the sum of bone tissue from the half-carcass, except the bone contained in the feet, the hock, and the ribs. Similarly, the amount of skin is obtained by adding the skin from the half-carcass, except the skin on the feet, the hock, and the jowl.
As can be seen, our proposed framework predicts the weight of muscle tissue with a lower coefficient of variation error ($CVe=4.11\%$) and a higher coefficient of determination ($R^{2}=0.77$) than the other tissues. Since muscle is the most commercially valuable tissue, our results for estimating muscle tissue make our algorithm a potential candidate for replacing traditional methods of carcass quality assessment.
\begin{table}[t]
\caption{Descriptive characteristics and predicted weight of tissue composition in half-carcass.}
\label{Table:tissue composition}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{llllllll}
\toprule
Items of composition\\ $n = 195$ & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\midrule
Muscle & \quad $23.553$ & \quad $2.041$ & \quad $16.394$ & \quad $28.451$ & \quad $0.77$ & \quad $0.968$ & \quad $4.11$\\
\midrule
Fat & \quad $10.575$ & \quad $2.158$ & \quad $5.076$ & \quad $16.152$ & \quad $0.73$ & \quad $0.771$ & \quad $9.37$\\
\midrule
Bone & \quad $2.968$ & \quad $0.257$ & \quad $2.377$ & \quad $3.641$ & \quad $0.68$ & \quad $0.145$ & \quad $4.88$\\
\midrule
Skin & \quad $1.541$ & \quad $0.173$ & \quad $1.100$ & \quad $2.053$ & \quad $0.33$ & \quad $0.141$ & \quad $9.15$\\
\bottomrule
\end{tabular}}
\end{table}
\section{Conclusions}
In this study, we introduced SpectralWeight to estimate the quality of pork carcasses through weight prediction of pork cuts. We first built the spectral graph wavelet signature for every mesh point locally and then aggregated the signatures into a global feature via the bag-of-geometric-words paradigm. To further improve the discriminative power of SpectralWeight, we incorporated the mesh diameter and volume into our pipeline. As the results show, our proposed method can predict the weight of different cuts and tissues of a pork half-carcass with high accuracy and is therefore practical for deployment in the pork industry.
\section{Acknowledgements}
This work was supported by Swine Innovation Porc within the Swine Cluster 2: Driving Results Through Innovation research program. Funding is provided by Agriculture and Agri-Food Canada through the AgriInnovation Program, provincial producer organizations and industry partners.
\bibliographystyle{splncs04}
\section{Introduction}
Quality assessment of hog carcasses has long been practiced in Canada and many other countries \cite{Fredeen:68,Pomar:09}. The quality of a pork carcass can be determined from its overall body composition, by measuring the amount of muscle, fat, skin and bone, or from the quantity of these tissues inside the primal and commercial cuts. In the literature, carcass quality and cuts have been evaluated with a variety of research objectives. Gispert \textit{et al.} \cite{Gispert:07} characterized pork carcasses based on their genotype information, with measurements taken using a ruler and the Fat-O-Meat’er. Marcoux \textit{et al.} \cite{Marcoux:05} employed dual-energy X-ray absorptiometry (DXA) to predict the carcass composition of three genetic lines with widely varying compositions. Pomar \textit{et al.} \cite{Pomar:03} compared two grading systems based on Destron (DPG) and Hennessy (HGP) probe measurements to verify whether both grading approaches yield similar lean yields and grading indices on actual pork carcasses. Engel \textit{et al.} \cite{Engel:03} proposed a different sampling scheme that considers some of the predictive variables to check the accuracy and approval of new grading systems in slaughterhouses. Picouet \textit{et al.} \cite{Picouet:10} suggested a predictive model based on a density correction equation to determine weight and lean content. In an effort to replace traditional procedures such as dissection, Vester-Christensen \textit{et al.} \cite{Vester-Christensen:09} took advantage of computed tomography (CT) scans and a contextual Bayesian classification scheme to classify pork carcasses into three types of tissues. The cutout and dissection procedure proposed by Nissen \textit{et al.} \cite{Nissen:06} is a well-recognized reference method for assessing the quality of pork carcasses.
However, this approach is time-consuming and expensive, and it requires attention, space and qualified personnel, in addition to carrying a risk of bias between butchers \cite{Vester-Christensen:09,Picouet:10}.
Hence, the pork industry, including all stakeholders from production to meat sale, is seeking a way to make the most profitable decisions. One solution is to carry out carcass quality evaluations to know the results coming from a choice of genetic lines, a diet or a breeding method. However, due to the difficulties in conducting the cutting and dissection procedure by butchers, the commercial environment has more constraints than the research environment. Therefore, it becomes more important to develop a simple, fast and precise method to replace the traditional approaches in the commercial environment.
In this paper, we digitize carcasses in three dimensions using a 3D scanner and then build a triangular mesh model of each pork half-carcass to develop a framework for weight prediction of the different cuts and their tissue composition. Unlike images, triangular meshes have irregular connectivity, which demands an efficient and concise design to capture the intrinsic information of the object while staying robust against different triangulations \cite{qiao:19}. This requires the design of a descriptor (signature) that is invariant to isometric deformations of a meshed object while retaining discriminative geometric information \cite{Wang:20}. To this end, we employ a compact signature based on spectral analysis of the Laplace-Beltrami Operator (LBO) to capture the intrinsic geometric properties of shapes. This compact representation of 3D objects reduces the problem of shape comparison to one of signature comparison and provides a relatively accurate prediction of pork cut weights.
Spectral signatures can be employed in a broad range of applications, including medical shape analysis \cite{Masoumi:18b}, 3D object analysis \cite{Bronstein:11,Rodola:SHREC17,Masoumi:17}, shape matching \cite{Melzi:19}, and segmentation \cite{YI:17}. In the literature, there has been a surge of interest in using the eigenmodes (eigenvalues and eigenvectors) of the LBO to build local or global spectral signatures. The power of spectral signatures stems mainly from the spectrum, which relates to the natural frequencies, and the associated eigenvectors, which yield the wave patterns \cite{Levy:06,Atasoy:16}.
Local spectral signatures are defined at each vertex of a mesh and provide information about the neighborhood of that vertex \cite{Masoumi:19a}. Intuitively, points within a neighborhood share similar geometric information, so their local descriptors should exhibit similar patterns. Local spectral signatures include the heat kernel signature (HKS) \cite{Sun:09}, the wave kernel signature (WKS) \cite{Aubry:11}, and the global point signature (GPS) \cite{Rustamov:07}. From the graph Fourier viewpoint, HKS captures low-frequency information, which corresponds to the macroscopic structure of a 3D object, whereas WKS gives access to high-frequency information, which corresponds to its microscopic properties. GPS, on the other hand, may suffer from eigenvector switching when the corresponding eigenvalues are close to each other.
Global signatures, by contrast, encode information about the geometry of the entire 3D object. Shape-DNA~\cite{Reuter:06} was introduced by Reuter \textit{et al.} as a global signature defined by a non-trivial truncated sequence of eigenvalues, normalized by mesh area and arranged in ascending order. Gao \textit{et al.} proposed compact Shape-DNA \cite{Gao:14} by applying the discrete Fourier transform to the eigenvalues of the LBO. A newer variant of GPS developed by Chaudhari \textit{et al.} \cite{Chaudhari:14}, called the GPS embedding, is a global descriptor defined as a truncated sequence of inverse square roots of the eigenvalues of the LBO. However, global spectral signatures provide only a limited representation and fail to capture fine-grained patterns in a 3D model.
Recently, the spectral graph wavelet signature (SGWS) was developed by Masoumi \textit{et al.} \cite{Masoumi:16} as an efficient and informative local spectral signature that allows analysis of a 3D mesh at different frequencies. Unlike GPS, HKS, and WKS, SGWS leverages the power of wavelets to capture both the macroscopic and microscopic geometry of a shape, leading to more discriminative features. In this paper, we introduce \textit{SpectralWeight}, in which each 3D model is represented by SGWS to automate the estimation of pork cut weights. Our objective in this study is to verify the prediction accuracy for different variables of interest and, ultimately, to integrate the calculation method into a complete tool that can be used in a commercial environment. To the best of our knowledge, this is the first study employing SGWS for weight prediction of pork carcasses.
The contribution of this paper is twofold: (1) we propose a framework, called SpectralWeight, to precisely model a pork half-carcass by harnessing the power of spectral graph wavelets; and (2) we exploit SpectralWeight as a predictive model to weigh different cuts of pork.
\section{Material and Methods}\label{Method}
\subsection{Sampling scheme}
To meet the objectives of this project, we selected $195$ pork carcasses, including $100$ barrows and $95$ females, from commercial slaughterhouses in Quebec, Canada. To obtain a high variability of conformation, carcasses were sampled in a weight range of $83.8$ kg to $116.2$ kg and a backfat thickness range of $7.6$ mm to $30.6$ mm. Backfat thickness was measured using a ruler at the cleft and the level of the fourth-last thoracic vertebra. However, the official backfat measurement was retaken using a digital caliper on a chop cut at the same thoracic level (fourth-last vertebra), $7$ cm from the cleft and perpendicular to the skin. The conformation of the carcasses is divided into four classes represented by the letters C, B, A and AA. Class C represents a long carcass with a thin-looking leg, while class AA represents a stocky carcass with a highly-rounded leg shape (Figure \ref{Carcass conformation}). At the time of weighing, the hot carcass was presented with the head, tail, leaf fat, hanging tender and kidneys. We retained only carcasses properly split in the middle of the spinal column and without tissue ablation. Therefore, each carcass side was considered to be bilaterally symmetric. The scale of variation within each sampling criterion is intended to provide a more robust estimate of the predictive model parameters at the extremes of weight and backfat thickness \cite{Daumas:96}. Only the left half-carcasses were transported to the Sherbrooke Research and Development Center (RDC) of Agriculture and Agri-Food Canada (AAFC). The carcasses were stored in a cooler at $2 \degree C$ in a plastic bag to minimize water loss. The 3D scanning, cutout, dissection and determination of meat cut fat content were completed within days of receipt of the carcasses at the AAFC RDC.
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{cc}
\includegraphics[scale=.7]{./Figures/carcass_conformation.pdf}
\end{tabular}
\caption{Pork carcass conformation classes}
\label{Carcass conformation}
\end{figure}
\medskip
\subsection{Half-carcass preparation and modeling}
Before being digitized in 3D, the half-carcasses were prepared in a standard way by removing the tail, the hanging tender and the remains of leaf fat present in the carcass cavity. The jowl was shortened to a uniform length of $15$ cm from the base of the shoulder. The final weight of the half-carcass was subsequently recorded. The total length of the half-carcass was measured using a tape measure from the tip of the rear hooves to the first cervical vertebra. This length was used to determine the cutting site for the shoulder and ham. Three-dimensional scanning of each half-carcass was performed using the Go!SCAN 3D\textsuperscript{TM} scanner (Model $50$, Creaform, Levis, Quebec, Canada) and post-processed by 3D software (Vxelement, Version 6.3 SR1, Creaform, Levis, Quebec, Canada). The 3D scanner uses white structured light technology without requiring targets affixed to the carcass or additional lighting. Quality control was performed at the beginning of each day by scanning a target provided by the company. All quality controls were passed during the project. The resolution between the mesh points was set to $0.2$ cm, and 3D models were saved in OBJ format.
\subsection{Cutout and dissection}
Once the half-carcass was scanned, the four primal cuts (leg, shoulder, loin, and belly) were prepared. The leg and shoulder were cut at proportional distances of $40.90\%$ and $85.54\%$ of the total length of the half-carcass, respectively. These proportions were determined in a previous cutting study (unpublished results). The primal loin and flank were separated by applying a straight cut passing $1.5$ cm from the tenderloin and $10$ cm from the base of the ribs opposite the fourth-last thoracic vertebra.
The primal cuts were then prepared into commercial cuts according to different standards. The commercial cuts are presented with or without the skin, more or less defatted, and with or without bone, as appropriate. The skin and ribs from the primal belly were removed. Subsequently, the mammary glands and a portion at the posterior end of the belly (belly trimmings) were cut to create a rectangular appearance. Specifications for the preparation of commercial cuts and their identification codes are presented in the Canadian Pork Handbook and the Distributor Education Program (DEP) \cite{CPI:11}. The cuts illustrated and described in this manual correspond to the basic specifications followed by the Canadian pork industry. It is worth noting that there are no reference numbers for the four primal cuts (Leg, Loin, Shoulder, and Belly) in \cite{CPI:11}. Figure \ref{primal cuts} clearly illustrates the four primal cuts, belly commercial trim $C400$, and belly trimmings. The cutout work resulted in the following parts: Pork leg $C100$, Shoulder blade $C320$, Boneless shoulder blade $C325$, Hock $C355$, Shoulder picnic $C311$, Loin $C200$, Boneless loin $C201$, Skinless tenderloin $C228$, Back ribs $C505$, Belly commercial trim $C400$, Side ribs $C500$, and Belly trimmings. The amounts of bone, skin, and meat (muscle and fat not separated) contained in the primal and commercial cuts were obtained by a dissection procedure, and weights were recorded. The meat contained in the main commercial cuts (Pork leg $C100$, Boneless loin $C201$, Loin $C200$, Belly commercial trim $C400$, Shoulder picnic $C311$, Boneless shoulder blade $C325$, Belly trimmings) was minced, and a representative sample was taken to determine lipid, protein and dry matter content using near-infrared transmittance \cite{Shirley:07}. It should be noted that the lipid content was used in this study to calculate the weight of fat in the meat of the main commercial cuts.
To convert the lipid content to dissected fat weight, a sample of pure muscle and pure fat from each meat mass was also analyzed for lipid content using the same method. Using the data collected from the muscle and fat samples for each cut, an equation was developed to convert the meat lipid content to the dissected fat content. This procedure yields an equivalent amount of fat without physically separating the muscle and fat of the entire meat mass using a knife.
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{cc}
\includegraphics[scale=1.2]{./Figures/primal_cuts.pdf}
\end{tabular}
\caption{Four primal cuts, belly commercial trim C400, and belly trimmings (illustrated in the box). (1) primal ham, (2) primal loin (3) primal shoulder (4) primal belly (5) belly commercial trim C400 (6) belly trimmings.}
\label{primal cuts}
\end{figure}
\subsection{Problem statement and method}
We model a pork half-carcass $\mathbb{T}$ as a triangulated mesh $(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{\bm{v}_{i}\mid i=1,\ldots,N\}$ is the set of vertices and $\mathcal{E}=\{e_{ij}\}$ is the set of edges. Given the vertex coordinates $\mathcal{P}=(p_{1},p_{2},p_{3}):\,\mathcal{V}\to\mathbb{R}^{3}$, our objective is to build a local descriptor $f(\bm{v}_{i}) \in \mathbb{R}^{d}$ for each vertex $\bm{v}_{i}$. Figure \ref{Triangulated carcass} (left) shows an example of a triangulated mesh of a pork half-carcass.
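As a minimal illustration of this representation, the sketch below parses a toy mesh in the OBJ format (in which our scans are saved) into a vertex array $\mathcal{V}$ and a triangle list, and collects the edge set $\mathcal{E}$. The parser is a simplified, hypothetical stand-in for a full OBJ loader and handles only `v` and `f` records.

```python
import numpy as np

def parse_obj(text):
    """Parse a minimal OBJ string into an N x 3 vertex array and a face list."""
    verts, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append([float(c) for c in parts[1:4]])
        elif parts[0] == "f":
            # OBJ indices are 1-based; keep only the vertex index of each corner
            faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    return np.array(verts), np.array(faces, dtype=int)

def edge_set(faces):
    """Collect the undirected edge set E = {e_ij} from the triangle list."""
    edges = set()
    for i, j, k in faces:
        edges.update(tuple(sorted(e)) for e in [(i, j), (j, k), (k, i)])
    return edges

# A single triangle stands in for a scanned half-carcass mesh
obj_text = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
V, F = parse_obj(obj_text)
E = edge_set(F)
```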
\medskip
We build the SpectralWeight framework on the eigensystem of the LBO, which is invariant to isometric deformations of non-rigid shapes. To obtain the eigenvalues and eigenvectors, we discretize the LBO using the cotangent weight scheme proposed in \cite{Meyer:03}. We build our Laplacian matrix as:
\begin{equation}
\bm{L}=\bm{A}^{-1}(\bm{D-W}),
\end{equation}
where $\bm{A}=\mathrm{diag}(a_{i})$ is a mass matrix, $\bm{D}=\mathrm{diag}(d_{i})$ is a degree matrix with $d_{i}=\sum_{j=1}^{n}w_{ij}$, and $\bm{W}=(w_{ij})$ is a sparse weight matrix with $w_{ij}=\left(\cot\alpha_{ij} + \cot\beta_{ij}\right)/2$ if $\bm{v}_{i}\sim \bm{v}_{j}$ and $w_{ij}=0$ otherwise. Here, $\alpha_{ij}$ and $\beta_{ij}$ are the angles $\angle(\bm{v}_{i}\bm{v}_{k_1}\bm{v}_{j})$ and $\angle(\bm{v}_{i}\bm{v}_{k_2}\bm{v}_{j})$ of the two adjacent triangles $\bm{t}^{\alpha}=\{\bm{v}_{i},\bm{v}_{j},\bm{v}_{k_1}\}$ and $\bm{t}^{\beta}=\{\bm{v}_{i},\bm{v}_{j},\bm{v}_{k_2}\}$, and $a_i$ is the area of the Voronoi cell at vertex $\bm{v}_{i}$, shown as the shaded region in Figure \ref{Triangulated carcass} (right). Finally, the eigensystem of the LBO is obtained by solving the \textit{generalized eigenvalue problem}:
\begin{equation}
\bm{C}\bg{\xi}_{\ell}=\lambda_{\ell}\bm{A}\bg{\xi}_{\ell},
\end{equation}
where $\bm{C}=\bm{D-W}$, and $\lambda_{\ell}$ and $\bg{\xi}_{\ell}$ are the eigenvalues and eigenfunctions of the LBO, respectively. We define the spectral graph wavelet signature of vertex $j$ at resolution level $L$ as \cite{Masoumi:16}:
\begin{equation}
\bm{s}_{L}(j)=\{W_{\delta_j}(t_k,j)\mid k=1,\dots,L\}\cup\{S_{\delta_j}(j)\},
\label{Eq:SGWSignatureLevel}
\end{equation}
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{cc}
\includegraphics[scale=.3]{./Figures/TriangleMeshRep.pdf}&
\hspace{1.5cm}
\includegraphics[scale=.3]{./Figures/cotangent_weight.pdf}
\end{tabular}
\caption{Triangulated mesh model of a pork half-carcass (Left); illustration of cotangent weight scheme (Right).}
\label{Triangulated carcass}
\end{figure}
where $W_{\delta_j}(t_k,j)$ and $S_{\delta_j}(j)$ are the spectral graph wavelet and scaling function coefficients at resolution level $L$, respectively, given as follows (readers are referred to \cite{Masoumi:16} for a detailed description):
\begin{equation}
W_{\delta_j}(t,j)=\langle \bg{\delta}_{j},\bg{\psi}_{t,j} \rangle=\sum_{\ell=1}^{m}g(t\lambda_\ell)\xi_{\ell}^{2}(j),
\label{DeltaW_coefficients}
\end{equation}
and
\begin{equation}
S_{\delta_j}(j)=
\langle \bg{\delta}_{j},\bg{\phi}_{t} \rangle=
\sum_{\ell=1}^{m}h(\lambda_\ell)\xi_{\ell}^{2}(j).
\label{DeltaS_coefficients}
\end{equation}
We adopt the Mexican hat wavelet as the generating filter; it treats all frequencies as equally important and improves the discriminative power of SpectralWeight. SpectralWeight enjoys desirable properties such as insensitivity to isometric deformations and computational efficiency. Moreover, it merges the advantages of both band-pass and low-pass filters when building the local descriptor. Figure \ref{SGWT representation} depicts the SGWT by showing the $\chi^{2}$-distance from a highlighted point on the belly to every other point on the carcass. As can be observed, regions with a geometric structure similar to that of the specified point share the same color, while dissimilar regions bear different colors.
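To make this construction concrete, the following sketch evaluates the signature on a toy graph: it diagonalizes a small Laplacian and computes, per vertex, one wavelet coefficient per scale plus a scaling-function coefficient. The kernel forms $g$ and $h$ below are common spectral graph wavelet choices and are assumptions for illustration, not necessarily those of our implementation.

```python
import numpy as np

def g_mexican_hat(x):
    # Band-pass generating kernel g(x) = x * exp(-x), a common SGWT choice
    return x * np.exp(-x)

def h_scaling(x, gamma=1.3):
    # Low-pass scaling kernel h(x) = gamma * exp(-x^4) (assumed form)
    return gamma * np.exp(-x**4)

def sgws(eigvals, eigvecs, scales):
    """Per-vertex signature: one wavelet coefficient per scale t plus one
    scaling-function coefficient, i.e. sum_l g(t*lambda_l) * xi_l(j)^2."""
    xi2 = eigvecs**2                                  # xi_l(j)^2 per vertex
    cols = [xi2 @ g_mexican_hat(t * eigvals) for t in scales]
    cols.append(xi2 @ h_scaling(eigvals))
    return np.column_stack(cols)                      # shape (n, len(scales)+1)

# Toy "mesh": the unnormalized Laplacian of a 4-cycle graph stands in for
# the cotangent LBO discretization of a real half-carcass mesh.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
lam, xi = np.linalg.eigh(L)                           # eigenvalues/eigenvectors
S = sgws(lam, xi, scales=[2.0, 0.5])                  # two wavelet scales
```

Because the 4-cycle is vertex-transitive, every vertex receives the same signature, which is a quick sanity check on the implementation.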
\begin{figure}[t]
\setlength{\tabcolsep}{.3em}
\centering
\begin{tabular}{ccccc}
\includegraphics[scale=.7]{./Figures/SGWT1.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT2.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT3.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT4.pdf}&
\includegraphics[scale=.7]{./Figures/SGWT5.pdf}
\end{tabular}
\caption{Visualization of different resolutions (left to right: 1st, 2nd, 3rd, 4th, and 5th) of the spectral graph wavelet signature from a specified point (shown as a pink sphere) on a random pork half-carcass. The cooler and warmer colors represent lower and higher distance values, respectively.}
\label{SGWT representation}
\end{figure}
\medskip
The SpectralWeight framework comprises the following steps. We first compute an SGWS matrix $\bm{D}$ for each half-carcass in the dataset $\mathcal{S}$, where $\bm{D}=(\bm{d}_{1},\dots,\bm{d}_{m})\in\mathbb{R}^{p\times m}$, $\bm{d}_i$ is the $p$-dimensional point signature at vertex $i$, and $m$ is the number of mesh points. In the second step, we construct a $p\times k$ dictionary matrix $\bm{V}=(\bm{v}_{1},\dots,\bm{v}_{k})$ through an unsupervised learning algorithm, i.e. clustering, by assigning each of the $m$ local descriptors to the cluster with the nearest mean. In the next step, we employ soft-assignment coding to map the SGWSs $\bm{s}_{i}$ to high-dimensional mid-level feature vectors. This yields a $k\times m$ matrix $\bm{C}=(\bm{c}_{1},\dots,\bm{c}_{m})$ whose columns are the $k$-dimensional mid-level feature codes. To aggregate the learned high-dimensional local features, we build a $k \times 1$ histogram $h_{r}=\sum_{i=1}^{m}c_{ri}$ for each half-carcass by sum-pooling the cluster assignment matrix $\bm{C}$. Then, we concatenate the SpectralWeight vectors $\bm{x}_i$ of all $n$ half-carcasses in the dataset $\mathcal{S}$ into a $k\times n$ data matrix $\bm{X}=(\bm{x}_{1},\dots,\bm{x}_{n})$. Afterward, we compute the geodesic distance $g$ \cite{kimmel:98} to extract the diameter of the 3D mesh as well as the volume $v$ of each half-carcass $\mathbb{T}_{i}$, and aggregate them into $\bm{X}$ to give SpectralWeight further discriminative power. Finally, partial least-squares (PLS) regression is performed on the data matrix $\bm{X}$ to find the equation that best fits the observations. The main steps of the SpectralWeight framework are outlined in Algorithm \ref{algo:1}.
\begin{algorithm}
\caption{SpectralWeight algorithmic steps}\label{algo:1}
\begin{algorithmic}[1]
\REQUIRE Set of triangular meshes of $n$ pork half-carcasses $\mathcal{S}=\{\mathbb{T}_1,\dots,\mathbb{T}_n\}$ and their weights $\mathbf{w}$
\STATE Simplify each model to have a uniform number of vertices.
\FOR{$j=1$ to $n$}
\STATE Compute SGWS matrix $\bm{D}_{j}$ of size $p\times m$ for each half-carcass $\mathbb{T}_{j}$.
\STATE Employ soft-assignment coding to determine the $k\times m$ code assignment matrix $\bm{C}_{j}$, where $k>p$.
\STATE Represent each half-carcass $\mathbb{T}_{j}$ as a $k \times 1$ histogram $\bm{h}$ by pooling of code assignment matrix $\bm{C}_{j}$.
\STATE Calculate volume $\bm{v}_{j}$ and diameter of the mesh through geodesic distance $\bm{g}_{j}$ for each 3D model in $\mathcal{S}$.
\ENDFOR
\STATE Arrange all $n$ histograms $\bm{h}$ into an $n\times k$ data matrix $\bm{X}=(\bm{x}_1,\dots,\bm{x}_n)^{T}$.
\STATE Aggregate the mesh diameters, volumes $\bm{v}$ and weights $\bm{w}$ into an $n\times (k+3)$ data matrix $\bm{X}$.
\STATE Perform partial least-squares regression on $\bm{X}$ to find the $n$-dimensional vector $\hat{\bm{y}}$ of predicted cut weights.
\ENSURE $n$-dimensional vector $\hat{\bm{y}}$ containing predicted weights of pork composition.
\end{algorithmic}
\end{algorithm}
We carried out the experiments on a laptop with an Intel Core i$7$ processor at $2.00$ GHz and $16$ GB of RAM; our implementation was done in MATLAB. We used $301$ eigenvalues and corresponding eigenvectors of the LBO.
In this study, we set the resolution parameter to $R = 2$, leading to an SGWS matrix of size $5\times m$, where $m$ is the number of points in our 3D half-carcass model. We used four components in the PLS regression. Note that the dictionary is built offline by applying the $k$-means algorithm to the concatenated $p\times mn$ SGWS matrix of the $n$ meshes in dataset $\mathcal{S}$.
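The coding and pooling steps can be sketched as follows. This is only an illustration of soft-assignment coding and sum-pooling: the dictionary is drawn at random instead of being learned by $k$-means, and the Gaussian affinity with bandwidth $\sigma$ is an assumed form.

```python
import numpy as np

def soft_assign(D, V, sigma=1.0):
    """Soft-assignment coding: column i of the k x m matrix C holds the
    normalized Gaussian affinities of descriptor d_i to the k atoms of V."""
    # D: p x m local descriptors, V: p x k dictionary
    d2 = ((D[:, None, :] - V[:, :, None]) ** 2).sum(axis=0)   # k x m distances
    C = np.exp(-d2 / (2.0 * sigma ** 2))
    return C / C.sum(axis=0, keepdims=True)

def spectral_weight_vector(D, V, volume, diameter):
    """Sum-pool the code matrix into a k-bin histogram and append the
    mesh volume and diameter, as in the pipeline described above."""
    h = soft_assign(D, V).sum(axis=1)
    return np.concatenate([h, [volume, diameter]])

rng = np.random.default_rng(0)
D = rng.normal(size=(5, 100))   # p = 5 SGWS entries per vertex, m = 100 vertices
V = rng.normal(size=(5, 8))     # k = 8 atoms (random here; k-means in practice)
x = spectral_weight_vector(D, V, volume=0.05, diameter=1.2)
```

Since each code column is normalized to sum to one, the pooled histogram always sums to the number of vertices $m$, regardless of the dictionary.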
\begin{table}[b]
\caption{Descriptive characteristics and predicted weight of primal cuts.}
\label{Table:primal cuts}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{llllllll}
\toprule
Dependent variables\\ $n = 195$ & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\midrule
Ham & \quad $12.032$ & \quad $0.844$ & \quad $9.371$ & \quad $14.003$ & \quad $0.80$ & \quad $0.377$ & \quad $3.13$\\
\midrule
Shoulder & \quad $12.507$ & \quad $0.879$ & \quad $9.976$ & \quad $14.778$ & \quad $0.79$ & \quad $0.399$ & \quad $3.19$\\
\midrule
Loin & \quad $12.318$ & \quad $0.984$ & \quad $9.448$ & \quad $15.041$ & \quad $0.73$ & \quad $0.507$ & \quad $4.12$\\
\midrule
Belly & \quad $8.692$ & \quad $0.863$ & \quad $6.288$ & \quad $10.986$ & \quad $0.78$ & \quad $0.406$ & \quad $4.67$\\
\bottomrule
\end{tabular}}
\end{table}
\section{Results and Discussion}\label{experiment}
We assessed the performance of our proposed SpectralWeight framework for measuring hog carcass quality via extensive experiments. We created 3D models of $195$ half-carcasses using a 3D scanner and downsampled the mesh surfaces to roughly $3000$ vertices per model. We then applied SpectralWeight to extract geometric features of the 3D models and employed PLS regression to find the best parameters for predicting the weight of pork compositions. The basic idea behind PLS \cite{Wold:84} is to project high-dimensional features into a lower-dimensional subspace. The features in the new subspace, the so-called latent features, are linear combinations of the original features. PLS is useful when the number of variables $(k+3)$ in the data matrix $\bm{X}$ is substantially greater than the number of observations $n$. We chose PLS regression because multiple linear regression fails due to multicollinearity among the $\bm{X}$ variables; the regression is consequently performed on the latent variables.
\begin{table}[t]
\caption{Descriptive characteristics and predicted weight of commercial cuts.}
\label{Table:commercial cuts}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{llllllll}
\toprule
Dependent variables\\ $n = 195$ & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\midrule
Pork leg C100 & \quad $11.314$ & \quad $0.816$ & \quad $8.745$ & \quad $13.317$ & \quad $0.80$ & \quad $0.365$ & \quad $3.22$\\
\midrule
Shoulder picnic C311 & \quad $4.510$ & \quad $0.446$ & \quad $3.529$ & \quad $5.789$ & \quad $0.54$ & \quad $0.304$ & \quad $6.73$\\
\midrule
Shoulder blade C320 & \quad $4.912$ & \quad $0.457$ & \quad $3.695$ & \quad $5.974$ & \quad $0.64$ & \quad $0.274$ & \quad $5.57$\\
\midrule
Shoulder blade boneless C325 & \quad $4.068$ & \quad $0.380$ & \quad $3.088$ & \quad $5.034$ & \quad $0.60$ & \quad $0.240$ & \quad $5.90$\\
\midrule
Loin C200 & \quad $10.065$ & \quad $0.820$ & \quad $6.849$ & \quad $12.034$ & \quad $0.70$ & \quad $0.446$ & \quad $4.43$\\
\midrule
Loin boneless C201 & \quad $7.370$ & \quad $0.661$ & \quad $4.686$ & \quad $9.392$ & \quad $0.64$ & \quad $0.395$ & \quad $5.36$\\
\midrule
Loin back ribs C505 & \quad $0.675$ & \quad $0.076$ & \quad $0.501$ & \quad $0.914$ & \quad $0.42$ & \quad $0.057$ & \quad $8.52$\\
\midrule
Tenderloin (skinless) C228 & \quad $0.477$ & \quad $0.065$ & \quad $0.262$ & \quad $0.669$ & \quad $0.56$ & \quad $0.043$ & \quad $8.97$\\
\midrule
Belly C400 & \quad $4.636$ & \quad $0.596$ & \quad $3.136$ & \quad $6.369$ & \quad $0.70$ & \quad $0.324$ & \quad $6.98$\\
\midrule
Belly trimmings & \quad $1.805$ & \quad $0.227$ & \quad $1.187$ & \quad $2.425$ & \quad $0.54$ & \quad $0.154$ & \quad $8.53$\\
\midrule
Side ribs (regular trim) C500 & \quad $1.855$ & \quad $0.214$ & \quad $1.226$ & \quad $2.434$ & \quad $0.51$ & \quad $0.150$ & \quad $8.06$\\
\midrule
Hock C355 & \quad $1.050$ & \quad $0.114$ & \quad $0.606$ & \quad $1.363$ & \quad $0.41$ & \quad $0.088$ & \quad $8.35$\\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[t]
\caption{Descriptive characteristics and predicted weight of tissue composition in major commercial cuts.}
\label{Table:major commercial cuts}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{lllllllll}
\toprule
Items\\ $n = 195$ & \quad Tissue composition & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\cmidrule(r){2-9}
\multirow{3}{*}{Pork leg C100} & \quad Muscle & \quad $8.014$ & \quad $0.704$ & \quad $5.860$ & \quad $10.124$ & \quad $0.68$ & \quad $0.396$ & \quad $4.95$\\
&
\quad Fat & \quad $1.934$ & \quad $0.427$ & \quad $1.115$ & \quad $3.102$ & \quad $0.67$ & \quad $0.243$ & \quad $12.59$\\
&
\quad Bone & \quad $0.941$ & \quad $0.080$ & \quad $0.757$ & \quad $1.172$ & \quad $0.56$ & \quad $0.053$ & \quad $5.62$\\
&
\quad Skin & \quad $0.391$ & \quad $0.061$ & \quad $0.257$ & \quad $0.619$ & \quad $0.28$ & \quad $0.052$ & \quad $13.22$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Shoulder picnic C311} & \quad Muscle & \quad $3.016$ & \quad $0.390$ & \quad $1.965$ & \quad $4.053$ & \quad $0.42$ & \quad $0.297$ & \quad $9.83$\\
&
\quad Fat & \quad $0.937$ & \quad $0.200$ & \quad $0.484$ & \quad $1.572$ & \quad $0.56$ & \quad $0.133$ & \quad $14.18$\\
&
\quad Bone & \quad $0.380$ & \quad $0.042$ & \quad $0.313$ & \quad $0.563$ & \quad $0.38$ & \quad $0.033$ & \quad $8.66$\\
&
\quad Skin & \quad $0.164$ & \quad $0.024$ & \quad $0.100$ & \quad $0.230$ & \quad $0.23$ & \quad $0.021$ & \quad $12.93$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Shoulder blade boneless C325} & \quad Muscle & \quad $3.178$ & \quad $0.315$ & \quad $2.263$ & \quad $4.026$ & \quad $0.57$ & \quad $0.207$ & \quad $6.51$\\
&
\quad Fat & \quad $0.890$ & \quad $0.169$ & \quad $0.472$ & \quad $1.372$ & \quad $0.53$ & \quad $0.116$ & \quad $13.01$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Loin boneless C201} & \quad Muscle & \quad $5.847$ & \quad $0.645$ & \quad $3.503$ & \quad $7.516$ & \quad $0.61$ & \quad $0.404$ & \quad $6.91$\\
&
\quad Fat & \quad $1.523$ & \quad $0.255$ & \quad $0.757$ & \quad $2.205$ & \quad $0.63$ & \quad $0.155$ & \quad $10.17$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Belly C400} & \quad Muscle & \quad $2.689$ & \quad $0.374$ & \quad $1.835$ & \quad $3.620$ & \quad $0.62$ & \quad $0.229$ & \quad $8.53$\\
&
\quad Fat & \quad $1.947$ & \quad $0.474$ & \quad $0.778$ & \quad $3.077$ & \quad $0.70$ & \quad $0.260$ & \quad $13.36$\\
\cmidrule(r){2-9}
\multirow{3}{*}{Belly Trimmings} & \quad Muscle & \quad $0.811$ & \quad $0.124$ & \quad $0.297$ & \quad $1.154$ & \quad $0.29$ & \quad $0.104$ & \quad $12.80$\\
&
\quad Fat & \quad $0.994$ & \quad $0.204$ & \quad $0.512$ & \quad $1.699$ & \quad $0.45$ & \quad $0.151$ & \quad $15.18$\\
\bottomrule
\end{tabular}}
\end{table}
To evaluate the performance of SpectralWeight, we used three performance measures: the coefficient of determination ($R^{2}$ score), the root mean square error ($RMSE$), and the coefficient of variation error ($CVe$). It is worth noting that since $CVe$ takes the average weight of each cut into account, it is the more reliable and fair evaluation metric. To avoid overfitting, we performed leave-one-out cross-validation over the pork shapes, in each round training on all but one carcass from our pork carcass dataset and testing on the held-out carcass. Tables \ref{Table:primal cuts} to \ref{Table:tissue composition} present the descriptive characteristics and predicted weights of the primal cuts, the commercial cuts, the tissue composition of the major commercial cuts, and the tissue composition of the half-carcasses, respectively.
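These measures can be computed as in the sketch below; the toy weights are hypothetical and serve only to illustrate the definitions.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """R^2, RMSE, and CVe; CVe expresses the RMSE as a percentage of the
    mean observed weight, which is why it is comparable across cuts."""
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    cve = 100.0 * rmse / y_true.mean()
    return r2, rmse, cve

# Hypothetical ham weights (kg): observed vs. predicted
y_true = np.array([12.0, 11.5, 12.8, 10.9])
y_pred = np.array([11.8, 11.9, 12.5, 11.1])
r2, rmse, cve = evaluation_metrics(y_true, y_pred)
```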
Table \ref{Table:primal cuts} shows the accuracy of weight prediction for the primal cuts, along with the mean and standard deviation of each cut. As can be seen, the lowest prediction error belongs to the ham cut with $CVe=3.13$, while the highest prediction error corresponds to the belly cut with $CVe=4.67$.
Table \ref{Table:commercial cuts} shows the performance of our algorithm for predicting the weights of commercial cuts. As shown, pork leg $C100$ achieved the highest prediction accuracy with $CVe=3.22$, while tenderloin (skinless) $C228$ has the lowest accuracy with $CVe=8.97$.
We extended our experiments to further evaluate the major commercial cuts by predicting their tissue composition. The two major commercial cuts of pork leg $C100$ and shoulder picnic $C311$ consist of four tissues: muscle, fat, bone and skin. As can be observed from Table \ref{Table:major commercial cuts}, the best weight predictions correspond to the muscle tissue of pork leg $C100$ and the bone tissue of shoulder picnic $C311$, with coefficients of variation error of $4.95$ and $8.66$, respectively. Shoulder blade boneless $C325$, loin boneless $C201$, belly $C400$ and belly trimmings are the other major commercial cuts, each composed of muscle and fat only. As shown in Table \ref{Table:major commercial cuts}, for all four of these cuts, muscle tissue attained the highest prediction accuracy, with $CVe$ of $6.51$, $6.91$, $8.53$, and $12.80$, respectively.
To investigate the total tissue composition of muscle, fat, bone, and skin in the half-carcasses, we present Table \ref{Table:tissue composition}, which reports the characteristic information of each tissue for the $195$ half-carcasses separately. More precisely, the amount of muscle is obtained by summing over the major commercial cuts $C100$, $C311$, $C325$, $C201$, $C400$ and the belly trimmings; the same cuts are used for the fat. To calculate the amount of bone, we summed the bone tissue of the half-carcass except for that contained in the feet, the hock, and the ribs. Likewise, the amount of skin is obtained by adding the skin from the half-carcass except for the skin on the feet, the hock, and the jowl.
As can be seen, our proposed framework predicts the weight of muscle tissue with a lower coefficient of variation error ($CVe=4.11$) and a higher coefficient of determination ($R^{2}=0.77$) than the other tissues. Since muscle is the most valuable tissue commercially, these results make our algorithm a potential candidate for replacing the traditional methods of carcass quality assessment.
\begin{table}[t]
\caption{Descriptive characteristics and predicted weight of tissue composition in half-carcass.}
\label{Table:tissue composition}
\centering
\resizebox{12cm}{!}{
\begin{tabular}{llllllll}
\toprule
Items of composition\\ $n = 195$ & \quad Mean (kg) &\quad S. D. & \quad Min & \quad Max & \quad $R^{2}$ & \quad RMSE & \quad CVe (\%)\\
\midrule
Muscle & \quad $23.553$ & \quad $2.041$ & \quad $16.394$ & \quad $28.451$ & \quad $0.77$ & \quad $0.968$ & \quad $4.11$\\
\midrule
Fat & \quad $10.575$ & \quad $2.158$ & \quad $5.076$ & \quad $16.152$ & \quad $0.73$ & \quad $0.771$ & \quad $9.37$\\
\midrule
Bone & \quad $2.968$ & \quad $0.257$ & \quad $2.377$ & \quad $3.641$ & \quad $0.68$ & \quad $0.145$ & \quad $4.88$\\
\midrule
Skin & \quad $1.541$ & \quad $0.173$ & \quad $1.100$ & \quad $2.053$ & \quad $0.33$ & \quad $0.141$ & \quad $9.15$\\
\bottomrule
\end{tabular}}
\end{table}
\section{Conclusions}
In this study, we introduced SpectralWeight to estimate the quality of pork carcasses by predicting the weight of pork cuts. We first built the spectral graph wavelet signature for every mesh point locally and then aggregated the signatures into a global feature through the bag-of-geometric-words paradigm. To further improve the discriminative power of SpectralWeight, we merged mesh diameter and volume information into our pipeline. As the results show, our proposed method can predict the weight of different cuts and tissues of a pork half-carcass with high accuracy, and is hence practical for deployment in the pork industry.
\section{Acknowledgements}
This work was supported by Swine Innovation Porc within the Swine Cluster 2: Driving Results Through Innovation research program. Funding is provided by Agriculture and Agri-Food Canada through the AgriInnovation Program, provincial producer organizations and industry partners.
\bibliographystyle{splncs04}
\section{Introduction}
The conjugacy problem in groups has two versions:
the conjugacy decision problem (CDP) is to decide
whether given two elements are conjugate or not;
the conjugacy search problem (CSP) is to find a conjugating element
for a given pair of conjugate elements.
The conjugacy problem is of great interest for the Artin braid group $B_n$,
which has the well-known Artin presentation~\cite{Art47}:
$$
B_n = \left\langle \sigma_1,\ldots,\sigma_{n-1} \left|
\begin{array}{ll}
\sigma_i \sigma_j = \sigma_j \sigma_i & \mbox{if } |i-j| > 1, \\
\sigma_i \sigma_j \sigma_i = \sigma_j \sigma_i \sigma_j & \mbox{if } |i-j| = 1.
\end{array}
\right.\right\rangle.
$$
In the late sixties, Garside~\cite{Gar69} first solved the conjugacy problem
in the braid groups, and his theory has since been generalized
and enriched by many mathematicians.
The algorithms for solving the conjugacy problem, provided by Garside's theory,
involve computation of finite nonempty subsets of the conjugacy class
such as the summit set~\cite{Gar69},
the super summit set~\cite{EM94, BKL98, FG03},
the ultra summit set~\cite{Geb05},
the stable super summit set~\cite{LL06c},
the stable ultra summit set~\cite{BGG06b}
and the reduced super summit set~\cite{KL06}.
Let us call these sets \emph{conjugacy representative sets}.
The conjugacy representative sets depend not only on the group itself
but also on the Garside structure on it,
which is a pair of a positive monoid
(equivalently, a lattice order invariant under left multiplication)
and a Garside element.
The braid group $B_n$ admits two well-known
Garside structures, the Artin Garside structure
and the BKL Garside structure.
Let $B_n^{\operatorname{[Artin]}}$ and $B_n^{\operatorname{[BKL]}}$ denote the braid group $B_n$
endowed with the Artin and the BKL Garside structure, respectively.
The best known upper bound of the complexity for computing a conjugacy
representative set $S$ is of the form $\mathcal O(|S|\cdot p(n))$,
where $|S|$ denotes the size of the set $S$ and $p(n)$ is a polynomial in $n$.
Franco and Gonz\'alez-Meneses~\cite{FG03} and Gebhardt~\cite{Geb05}
showed this when $S$ is the super summit set
and the ultra summit set, respectively,
and it follows easily from their results that
the complexities for computing the other conjugacy representative
sets have upper bounds of the same form.
All of the known algorithms for solving the CSP in braid groups,
and more generally in Garside groups, use a sort of \emph{exhaustive search}
in conjugacy representative sets.
The most popular method is as follows:
given two elements $\alpha$ and $\beta$, one computes
the conjugacy representative set $S(\alpha)$ of $\alpha$
and an element $\beta'$ in the conjugacy representative set of $\beta$,
and then checks whether $\beta'$ belongs to $S(\alpha)$.
Therefore, the complexity of algorithms of this kind for the CSP
is at least that of computing conjugacy representative sets,
even when there exists a very short conjugating element.
Recently, Birman, Gebhardt and Gonz\'alez-Meneses~\cite{BGG06b}
introduced two classes of subsets, called black and grey components,
of an ultra summit set, and showed that the conjugacy problem can be solved
by computing one black component and one grey component.
This new algorithm is faster than the previous ones;
however, it does not improve the theoretical complexity.
The size of a conjugacy representative set is exponential
in the braid index $n$ in some cases,
especially for reducible and periodic braids.
See~\cite[\S2]{BGG06c} or Remark~\ref{rmk:size}.
(An $n$-braid is said to be {\em periodic}
if some power of it belongs to $\langle\Delta^2\rangle$, the center of $B_n$,
and \emph{reducible} if there is an essential curve system in the
punctured disk which is invariant under the action of the braid.)
Therefore, a possible way to solve the CSP more efficiently
would be either to use different Garside structures and Garside groups
such that the conjugacy representative set in question is small enough,
or to develop an algorithm for computing a conjugating element
not in an exhaustive way but in a deterministic way.
For periodic braids, Birman, Gebhardt and Gonz\'alez-Meneses showed
in~\cite{BGG06b} that their new algorithm is no better than the previous
one in~\cite{Geb05}.
Hence they used in~\cite{BGG06c} different Garside structures and Garside groups
in order to get a small super summit set.
Using this method, they have proposed
a polynomial-time algorithm for the CSP for periodic braids.
It is well-known that the CDP for periodic braids is very easy:
the results of Eilenberg~\cite{Eil34} and Ker\'ekj\'art\'o~\cite{Ker19}
imply that an $n$-braid is periodic if and only if
it is conjugate to a power of either $\delta$ or $\epsilon$,
where $\delta =\delta_{(n)} = \sigma_{n-1}\sigma_{n-2}\cdots\sigma_1$ and
$\epsilon = \epsilon_{(n)}=\delta\sigma_1$, hence
\begin{itemize}
\item an $n$-braid $\alpha$ is periodic if and only if
either $\alpha^n$ or $\alpha^{n-1}$ belongs to $\langle\Delta^2\rangle$;
\item two periodic braids are conjugate if and only if
they have the same exponent sum.
\end{itemize}
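As a small illustration (not taken from the paper), consider $B_3$, where
$\delta=\sigma_2\sigma_1$ and $\epsilon=\delta\sigma_1=\sigma_2\sigma_1^2$.
The braid $\alpha=\sigma_1\sigma_2\sigma_1=\Delta$ satisfies
$\alpha^{n-1}=\alpha^2=\Delta^2\in\langle\Delta^2\rangle$, so $\alpha$ is
periodic, and since its exponent sum $3$ equals that of $\epsilon$, the two
braids are conjugate; explicitly,

```latex
\sigma_1^{-1}(\sigma_1\sigma_2\sigma_1)\sigma_1
  = \sigma_2\sigma_1^2 = \epsilon.
```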
In~\cite{BGG06c}, Birman, Gebhardt and Gonz\'alez-Meneses showed the following.
\begin{itemize}
\item
The sizes of the ultra summit sets of $\delta$ and $\epsilon$
in $B_n^{\operatorname{[Artin]}}$ are exponential in the braid index $n$,
hence the complexity of usual algorithms for the CSP is not
polynomial for periodic braids.
\item
The super summit set of $\delta^k$ in $B_n^{\operatorname{[BKL]}}$ is $\{\delta^k\}$,
hence the CSP for $\delta$-type periodic $n$-braids
is solvable in polynomial time (in the braid index $n$ and the input word length)
by using the BKL Garside structure on $B_n$.
\item
The CSP for $\epsilon$-type periodic $n$-braids
is solvable in polynomial time (in the braid index $n$ and the input word length)
by using an algorithm for the CSP
for $\delta$-type periodic braids in $B_{2n-2}^{\operatorname{[BKL]}}$,
together with algorithms for computing an isomorphism from
$P_{n,2}$ to $\operatorname{\mathit{Sym}}_{2n-2}$ and its inverse,
where $P_{n,2}$ is the subgroup of $B_n$ consisting of $n$-braids
that fix the second puncture and $\operatorname{\mathit{Sym}}_{2n-2}$ is the centralizer
of $\delta_{(2n-2)}^{n-1}$ in $B_{2n-2}$.
\end{itemize}
In this paper we develop a new polynomial-time
(in the braid index and the input word length) algorithm for solving
the CSP for periodic braids, exploiting only the BKL Garside structure
on $B_n$, and we study how to improve the efficiency of the algorithms.
Compared to the algorithm of Birman et~al.\ in~\cite{BGG06c},
ours has lower complexity and, moreover,
uses a single Garside group $B_n^{\operatorname{[BKL]}}$, hence the implementation
is simpler.
First, we study periodic elements in Garside groups.
An element of a Garside group is said to be \emph{periodic} if
some power of it belongs to the cyclic group generated by the Garside
element.
For periodic elements in Garside groups, the super summit set
is the same as the ultra summit set.
From the results of Bestvina~\cite{Bes99} and
Charney, Meier and Whittlesey~\cite{CMW04},
we can see that every periodic element in a Garside group has
a special type of power, which we will call a \emph{BCMW-power}.
We characterize BCMW-powers and establish the following useful property:
the CSP for two periodic elements $g$ and $h$ in a Garside group
is equivalent to the CSP for $g^r$ and $h^r$, where $g^r$ is a
BCMW-power of $g$.
Then we study super summit sets of periodic elements in Garside groups.
In particular, we show that super summit sets of a certain type of
periodic elements are closed under arbitrary partial cycling.
We show that, for some integers $k$,
the super summit set of $\epsilon^k$ in $B_n^{\operatorname{[BKL]}}$
is exponentially large in the braid index $n$.
Using the results on periodic elements in Garside groups and
using the characteristics of the BKL Garside structure on braid groups,
we present an explicit method to transform an arbitrary braid in
the super summit set of $\epsilon^k$ to $\epsilon^k$
by applying partial cycling polynomially many times,
even when the super summit set is exponentially large.
On the other hand, we discuss concrete methods for improving the efficiency
of the algorithms.
In particular, using a known algorithm for powering integers~\cite{Coh93,Sho05}
and our recent result on
abelian subgroups of Garside groups~\cite{LL06c}, we propose a more efficient
algorithm for computing a super summit element of a power of a periodic
element in a Garside group.
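The integer-powering algorithm cited here is, in essence, repeated squaring, which applies verbatim to powering in any monoid with an associative operation. A minimal sketch (illustrative only, with a stand-in monoid; this is not the paper's Garside-group algorithm):

```python
def power(x, n, op, identity):
    """Compute x^n using O(log n) applications of the associative
    operation `op`, by scanning the binary digits of n (repeated squaring)."""
    result = identity
    base = x
    while n > 0:
        if n & 1:                  # current binary digit of n is 1
            result = op(result, base)
        base = op(base, base)      # square the running power of x
        n >>= 1
    return result

# Stand-in monoid: integers under multiplication modulo a prime.
mod_mul = lambda a, b: (a * b) % 1000003
print(power(7, 128, mod_mul, 1))
```

In a Garside group one would replace `op` by the group multiplication (with normal-form reduction), which is exactly why fast powering reduces the number of expensive multiplications.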
We hope that the results of this paper on periodic elements
will also be useful in studying periodic elements in Garside groups
other than the braid groups.
This paper is organized as follows.
\S2 gives a brief review of Garside groups and of
braid groups with the BKL Garside structure.
\S3 studies periodic elements in Garside groups and their super
summit sets.
\S4 studies super summit sets of $\epsilon$-type periodic braids in $B_n^{\operatorname{[BKL]}}$,
and shows a method to find a conjugating element for any two braids
in the super summit set of $\epsilon^k$.
\S5 constructs algorithms for the conjugacy problem for periodic braids
in $B_n^{\operatorname{[BKL]}}$.
\section{Preliminaries}
\subsection{Garside monoids and groups}
The class of Garside groups, first introduced by Dehornoy and Paris~\cite{DP99},
provides a lattice-theoretic generalization of the braid groups
and the Artin groups of finite type.
For a monoid $M$, let $e$ denote the identity element.
An element $a\in M\setminus \{e\}$ is called an \emph{atom} if
$a=bc$ for $b,c\in M$ implies either $b=e$ or $c=e$.
For $a\in M$, let $\Vert a\Vert$ be the supremum
of the lengths of all expressions of
$a$ in terms of atoms. The monoid $M$ is said to be \emph{atomic}
if it is generated by its atoms and $\Vert a\Vert<\infty$
for any element $a$ of $M$.
In an atomic monoid $M$, there are partial orders $\le_L$ and $\le_R$:
$a\le_L b$ if $ac=b$ for some $c\in M$;
$a\le_R b$ if $ca=b$ for some $c\in M$.
\begin{definition}
An atomic monoid $M$ is called a \emph{Garside monoid} if
\begin{enumerate}
\item[(i)] $M$ is finitely generated;
\item[(ii)] $M$ is left and right cancellative;
\item[(iii)] $(M,\le_L)$ and $(M,\le_R)$ are lattices;
\item[(iv)] $M$ contains an element $\Delta$, called a
\emph{Garside element}, satisfying the following:\\
(a) for each $a\in M$, $a\le_L\Delta$ if and only if $a\le_R\Delta$;\\
(b) the set $\{a\in M: a \le_L\Delta\}$ generates $M$.
\end{enumerate}
\end{definition}
Recall that a partially ordered set $(P,\le)$ is called a
\emph{lattice} if there exist the gcd $a\wedge b$ and the lcm $a\vee b$
for any $a,b\in P$.
The gcd $a\wedge b$ is the unique element such that
(i) $a\wedge b\le a$ and $a\wedge b\le b$;
(ii) if $c$ is an element satisfying $c\le a$ and $c\le b$,
then $c\le a\wedge b$.
Similarly, the lcm $a\vee b$ is the unique element such that
(i) $a\le a\vee b$ and $b\le a\vee b$;
(ii) if $c$ is an element satisfying $a\le c$ and $b\le c$,
then $a\vee b\le c$.
Let $\wedge_L$ and $\vee_{\!L}$ (resp. $\wedge_R$ and $\vee_{\!R}$) denote
the gcd and lcm with respect to $\le_L$ (resp. $\le_R$).
An element $a$ of $M$ is called a \emph{simple element} if $a\le_L\Delta$.
Let $\mathcal D$ denote the set of all simple elements.
A \emph{Garside group} is defined as the group of fractions
of a Garside monoid.
When $M$ is a Garside monoid and $G$ is the group of fractions of $M$,
we identify the elements of $M$ and their images in $G$
and call them \emph{positive elements} of $G$.
$M$ is called the \emph{positive monoid} of $G$,
often denoted $G^+$.
The triple $(G, G^+, \Delta)$ is called a
\emph{Garside structure} on $G$.
We remark that a given group $G$ may
admit more than one Garside structure.
The partial orders $\le_L$ and $\le_R$, and thus the lattice structures
in the positive monoid $G^+$ can be extended to the Garside group $G$.
For $g, h\in G$, $g\le_L h$ (resp. $g\le_R h$) means $g^{-1}h\in G^+$
(resp. $hg^{-1}\in G^+$), in which
$g$ is called a {\em prefix} (resp. {\em suffix}) of $h$.
For an element $g\in G$, the \emph{(left) normal form} of $g$ is
the unique expression
$$
g=\Delta^u a_1\cdots a_\ell,
$$
where $u=\max\{r\in\mathbf Z:\Delta^r\le_L g\}$,
$a_1,\ldots,a_\ell\in \mathcal D\setminus\{e,\Delta\}$ and
$(a_i a_{i+1}\cdots a_\ell)\wedge_L \Delta=a_i$ for $i=1,\ldots,\ell$.
In this case, $\inf(g)=u$, $\sup(g)=u+\ell$ and $\operatorname{len}(g)=\ell$ are
called the {\em infimum}, {\em supremum} and {\em canonical length} of $g$,
respectively.
Let $\tau : G\to G$ be the inner automorphism of $G$
defined by $\tau(g)=\Delta^{-1}g\Delta$ for all $g\in G$.
The {\em cycling} and {\em decycling} of $g$ are defined as
\begin{eqnarray*}
\mathbf c(g)&=& \Delta^u a_2\cdots a_\ell\tau^{-u}(a_1),\\
\mathbf d(g)&=&\Delta^u \tau^{u}(a_\ell)a_1\cdots a_{\ell-1}.
\end{eqnarray*}
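As a quick illustration (not taken from the paper): in $B_3$ with the Artin
Garside structure, the braid $g=\sigma_2\sigma_1^2$ has normal form
$\Delta^0(\sigma_2\sigma_1)(\sigma_1)$, so $u=0$, $a_1=\sigma_2\sigma_1$ and
$a_2=\sigma_1$, and a single cycling gives

```latex
\mathbf c(g) = a_2\,\tau^{0}(a_1) = \sigma_1\sigma_2\sigma_1 = \Delta,
```

which raises the infimum from $0$ to $1$; this illustrates how iterated
cycling can increase the infimum toward the summit infimum.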
We denote the conjugacy class $\{ h^{-1}gh : h\in G\}$ of $g$ in $G$
by $[g]$.
Define $\inf{\!}_ s(g)=\max\{\inf(h):h\in [g]\}$,
$\sup{\!}_s(g)=\min\{\sup(h):h\in [g]\}$ and $\len{\!}_s(g)=\sup{\!}_s(g)-\inf{\!}_ s(g)$,
which are called the {\em summit infimum}, {\em summit supremum} and
{\em summit length} of $g$, respectively.
The \emph{super summit set} $[g]^S$,
the \emph{ultra summit set} $[g]^U$
and the \emph{stable super summit set} $[g]^{St}$
are finite, nonempty subsets of the conjugacy class of $g$
defined as follows (see~\cite{FG03,Geb05,LL06c} for more detail):
\begin{eqnarray*}
[g]^S&=&\{h\in [g]:\inf(h)=\inf{\!}_ s(g) \ \mbox{ and } \sup(h)=\sup{\!}_s(g)\};\\{}
[g]^{U}&=&\{h\in [g]^S: \mathbf c^k(h)=h \ \mbox{ for some positive integer $k$}\};\\{}
[g]^{St}&=&\{h\in [g]^S:h^k\in[g^k]^S \ \mbox{ for all positive integers $k$}\}.
\end{eqnarray*}
Elements of super summit sets are called \emph{super summit elements}.
For every element $g\in G$, the following limits are well-defined:
$$
\t_{\inf}(g)=\lim_{n\to\infty}\frac{\inf(g^n)}n;\quad
\t_{\sup}(g)=\lim_{n\to\infty}\frac{\sup(g^n)}n;\quad
\t_{\len}(g)=\lim_{n\to\infty}\frac{\operatorname{len}(g^n)}n.
$$
These limits were introduced in~\cite{LL06a}
to investigate translation numbers in Garside groups.
We will exploit the following properties especially in \S\ref{sec:partial_decycling}.
\begin{proposition}[{\cite{LL06a,LL06b}}]
For $g, h\in G$,
\begin{enumerate}
\item[(i)]
$\t_{\inf}(h^{-1}gh)=\t_{\inf}(g)$ and $\t_{\sup}(h^{-1}gh)=\t_{\sup}(g)$;
\item[(ii)]
$\t_{\inf}(g^n)= n\cdot\t_{\inf}(g)$ and $\t_{\sup}(g^n)= n\cdot\t_{\sup}(g)$
for all integers $n\ge 1$;
\item[(iii)]
$\inf{\!}_ s(g)=\lfloor \t_{\inf}(g)\rfloor$ and $\sup{\!}_s(g)=\lceil \t_{\sup}(g)\rceil$;
\item[(iv)]
$\t_{\inf}(g)$ and $\t_{\sup}(g)$ are rational of the form $p/q$,
where $p$ and $q$ are relatively prime integers and
$1\le q\le\Vert\Delta\Vert$.
\end{enumerate}
\end{proposition}
\subsection{The BKL Garside structure on braid groups}
Birman, Ko and Lee~\cite{BKL98} introduced
a new monoid, with the following explicit presentation, whose
group of fractions is the braid group $B_n$:
$$
B_n = \left\langle a_{ij}, \ 1\le j < i\le n \left|
\begin{array}{ll}
a_{kl}a_{ij}=a_{ij}a_{kl} & \mbox{if $(k-i)(k-j)(l-i)(l-j)>0$}, \\
a_{ij}a_{jk}=a_{jk}a_{ik}=a_{ik}a_{ij} & \mbox{if $1\le k<j<i \le n$}.
\end{array}
\right.\right\rangle.
$$
The generators $a_{ij}$ are called {\em band generators}.
They are related to the classical generators by
$a_{ij}=\sigma_{i-1}\sigma_{i-2}\cdots\sigma_{j+1}\sigma_j
\sigma_{j+1}^{-1}\cdots\sigma_{i-2}^{-1}\sigma_{i-1}^{-1}$.
The BKL Garside structure of $B_n$ is determined by
the positive monoid which consists of the elements
represented by positive words in the band generators
and the Garside element
$$
\delta = a_{n,n-1}\cdots a_{3,2} a_{2,1}.
$$
The simple elements in the BKL Garside structure are in one-to-one
correspondence with non-crossing partitions.
(Note that the simple elements in $B_n^{\operatorname{[Artin]}}$ are
in one-to-one correspondence with $n$-permutations.)
Let $P_1,\ldots,P_n$ be the points in
the complex plane given by $P_k=\exp(\frac{2k\pi}{n}i)$.
See Figure~\ref{fig:ncp}.
Recall that a partition of a set is a collection of pairwise
disjoint subsets whose union is the entire set.
Those subsets (in the collection) are called {\em blocks}.
A partition of $\{P_1,\ldots,P_n\}$
is called a \emph{non-crossing partition}
if the convex hulls of the respective blocks are pairwise disjoint.
\begin{figure}
\includegraphics[scale=1.1]{per-ncp.eps}
\caption{The shaded regions show the blocks in the non-crossing
partition corresponding to the simple element
$[12,10,1]\,[9,8,2]\,[7,6,4,3]$ in $B_{12}$.}\label{fig:ncp}
\end{figure}
A positive word of the form
$a_{i_k i_{k-1}} \cdots a_{i_3 i_2} a_{i_{2}i_1}$,
$1\le i_1 < i_2 < \cdots < i_k\le n$, is called a \emph{descending cycle}
and denoted $[i_k,\ldots,i_2,i_1]$.
Two descending cycles, $[i_k,\ldots,i_1]$ and $[j_l,\ldots,j_1]$,
are said to be \emph{parallel}
if the convex hulls of $\{P_{i_1},\ldots,P_{i_k}\}$
and of $\{P_{j_1},\ldots,P_{j_l}\}$ are disjoint.
A simple element is a product of parallel descending cycles.
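For example (an illustration not in the paper), in $B_4$ the product
$[4,3]\,[2,1]=a_{43}a_{21}=\sigma_3\sigma_1$ is a simple element, since the
convex hulls of $\{P_3,P_4\}$ and $\{P_1,P_2\}$ are disjoint; in contrast,
$[4,2]$ and $[3,1]$ are not parallel because the chords $P_2P_4$ and $P_1P_3$
cross, so $a_{42}a_{31}$ is not simple. The Garside element itself is a single
descending cycle:

```latex
\delta = [n,n-1,\ldots,2,1] = a_{n,n-1}\cdots a_{3,2}\,a_{2,1}.
```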
\section{Super summit sets of periodic elements in Garside groups}
\label{sec:partial_decycling}
In this section $G$ denotes a Garside group with Garside element $\Delta$.
Let $m$ denote the smallest positive integer such that
$\Delta^m$ is central in $G$,
and let $G_\Delta$ denote the central quotient $G/\myangle{\Delta^m}$
where $\myangle{\Delta^m}$ is the cyclic group generated by $\Delta^m$.
For an element $g\in G$, let $\bar g$ denote the image of $g$
under the natural projection from $G$ to $G_\Delta$.
An element $g$ of $G$ is said to be \emph{periodic} if
$\bar g$ has a finite order in $G_\Delta$ or, equivalently,
if some power of $g$ belongs to $\langle\Delta^m\rangle$.
In this section we study periodic elements in Garside groups.
\subsection{BCMW powers of periodic elements}
Due to the work of Dehornoy~\cite{Deh98}, we know that
Garside groups are torsion-free,
hence they have no non-trivial finite subgroups.
We start this section with the results of Bestvina~\cite{Bes99} and
Charney, Meier and Whittlesey~\cite{CMW04}
for finite subgroups of $G_\Delta$.
The following theorem was first proved
by Bestvina~\cite[Theorem 4.5]{Bes99} for
Artin groups of finite type and then generalized to Garside groups by
Charney, Meier and Whittlesey~\cite[Corollary 6.8]{CMW04}.
\begin{theorem}
[Bestvina~\cite{Bes99}, Charney, Meier and Whittlesey~\cite{CMW04}]
\label{thm:CMW}
The finite subgroups of\/ $G_\Delta$ are, up to conjugacy,
one of the following two types:
\begin{itemize}
\item[(i)]
the cyclic group generated by $\Delta^u a$ for an integer $u$ and
a simple element $a\in\mathcal D\backslash\{\Delta\}$
such that if $a\neq e$, then for some integer $q$
with $2\le q\le\Vert\Delta\Vert$,
\begin{equation}\label{eq:CMW}
\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a=\Delta ;
\end{equation}
\item[(ii)]
the direct product of a cyclic group of type (i) and
$\langle\Delta^k\rangle$ where $\Delta^k$ commutes with $a$.
\end{itemize}
\end{theorem}
Notice that the element $\Delta^u a$ in Theorem~\ref{thm:CMW}~(i)
is clearly a periodic element because
$(\Delta^u a)^q
=\Delta^{qu}\,\tau^{(q-1)u}(a)\cdots \tau^{u}(a)\,a
=\Delta^{qu+1}$.
However, not every periodic element satisfies the conditions
in Theorem~\ref{thm:CMW}~(i).
Namely, if $\Delta^ua $ is periodic, then
$\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a=\Delta^t$
for some positive integers $q$ and $t$, but $t$ is not necessarily 1.
Motivated by this observation,
we define the following notions for periodic elements.
\begin{definition}\label{def:BCMW}
Let $g$ be a periodic element of a Garside group $G$ such that $\t_{\inf}(g)=p/q$
for relatively prime integers $p$ and $q$ with $q\ge 1$.
\begin{itemize}
\item[(i)]
The periodic element $g$ is said to be \emph{P-minimal} if $p\equiv 1\bmod q$.
\item[(ii)]
A power $h=g^r$ is called a \emph{BCMW-power} of $g$
if $h$ is P-minimal and
$\bar h$ generates the same cyclic subgroup of $G_\Delta$ as $\bar g$ does.
\item[(iii)]
The periodic element $g$ is said to be \emph{C-tight}
if $p\equiv 0\bmod m$.
\end{itemize}
\end{definition}
It will be shown in Lemma~\ref{lem:Primi_equiv}
that a periodic element $g$ is P-minimal
if and only if $g$ is conjugate to an element of the form $\Delta^u a$
satisfying the conditions in Theorem~\ref{thm:CMW}~(i).
(It will be also shown that if $\Delta^u a$ is P-minimal and belongs to
its stable super summit set, then the simple element $a$ is minimal
among the positive parts of the powers of $\Delta^u a$ other than
the identity.)
Therefore, if restricted to finite cyclic subgroups of
$G_\Delta$, Theorem~\ref{thm:CMW} is equivalent to the existence
of a BCMW-power for any periodic element.
For the periodic element $g$ in Definition~\ref{def:BCMW},
$q$ is the smallest positive integer such that $g^q$ belongs
to the cyclic group $\myangle{\Delta}$ up to conjugacy.
If $g$ is C-tight, then $g^q$ belongs to $\myangle{\Delta^m}$.
This notion is not related to Theorem~\ref{thm:CMW}, but it will
be used later in Theorem~\ref{thm:pa-cy}.
\begin{example}\label{eg:periodic}
Table~\ref{ta:ex} shows $\t_{\inf}(\delta^k)$ and $\t_{\inf}(\epsilon^k)$
for some braid index $n$
in $B_n^{\operatorname{[Artin]}}$, the $n$-braid group with the Artin Garside structure,
and in $B_n^{\operatorname{[BKL]}}$, the $n$-braid group with the BKL Garside structure.
In the table, $(*)$ indicates that the periodic braid is
P-minimal and $(**)$ indicates that it is P-minimal and C-tight.
Recall that $\Delta^2$ and $\delta^n$ are the smallest
central powers of $\Delta$ and $\delta$ in $B_n$, respectively.
Observe the following.
\begin{itemize}
\item
Let $G=B_n^{\operatorname{[Artin]}}$.
Since $\Delta$ is the Garside element
and $\delta^n = \epsilon^{n-1} = \Delta^2$,
$$
\t_{\inf}(\delta)=2/n
\quad\mbox{and}\quad
\t_{\inf}(\epsilon)=2/(n-1).
$$
Therefore, a $\delta$-type periodic $n$-braid is C-tight
if $n$ is odd, and
an $\epsilon$-type periodic $n$-braid is C-tight
if $n$ is even.
\item
Let $G=B_n^{\operatorname{[BKL]}}$.
Note that $\delta$ is the Garside element.
The super summit set of $\delta^k$ is $\{\delta^k\}$ for all integers $k$,
hence the conjugacy search problem is easy for $\delta$-type periodic braids.
Hence we are interested only in $\epsilon$-type periodic braids.
Since $\epsilon^{n-1}=\delta^n$,
$$
\t_{\inf}(\epsilon)=n/(n-1).
$$
Since $n$ is relatively prime to $n-1$, every $\epsilon$-type periodic braid
is C-tight.
\end{itemize}
\end{example}
\begin{table}
$$\def\arraystretch{1.1}
\begin{array}{|c||l|l||l|l|}\hline
k
& \textstyle\t_{\inf}(\delta^k)~\mbox{in $B_{9}^{\operatorname{[Artin]}}$}
\atop \textstyle\t_{\inf}(\epsilon^k)~\mbox{in $B_{10}^{\operatorname{[Artin]}}$}
& \textstyle\t_{\inf}(\delta^k)~\mbox{in $B_{10}^{\operatorname{[Artin]}}$}
\atop \textstyle\t_{\inf}(\epsilon^k)~\mbox{in $B_{11}^{\operatorname{[Artin]}}$}
& ~\t_{\inf}(\epsilon^k)~\mbox{in $B_{10}^{\operatorname{[BKL]}}$}
& ~\t_{\inf}(\epsilon^k)~\mbox{in $B_{11}^{\operatorname{[BKL]}}$}\\\hline
1 & \qquad\frac29
& (*)~{}~\frac{2}{10}=\frac15
& (**)\,\frac{10}{9}=1+\frac19
& (**)\,\frac{11}{10}=1+\frac{1}{10} \\\hline
2 & \qquad\frac 49
& \qquad\frac{4}{10}=\frac25
& \qquad\frac{20}{9}=2+\frac29
& (**)\,\frac{22}{10}=\frac{11}5=2+\frac{1}{5}\\\hline
3 & \qquad\frac 69=\frac23
& \qquad\frac{6}{10}=\frac35
& (**)\,\frac{30}{9}=\frac{10}3=3+\frac13
& \qquad\frac{33}{10}=3+\frac{3}{10} \\\hline
4 & \qquad\frac 89
& \qquad\frac{8}{10}=\frac45
& \qquad\frac{40}{9}=4+\frac49
& \qquad\frac{44}{10}=\frac{22}5=4+\frac{2}{5} \\\hline
5 & (**)\,\frac {10}9=1+\frac19
& (*)~{}~\frac{10}{10}=1
& \qquad\frac{50}{9}=5+\frac59
& (**)\,\frac{55}{10}=\frac{11}2=5+\frac{1}{2}\\\hline
6 & (**)\,\frac {12}9=\frac43=1+\frac13
& (**)\,\frac{12}{10}=\frac65=1+\frac15
& \qquad\frac{60}{9}=\frac{20}3=6+\frac23
& \qquad\frac{66}{10}=\frac{33}5=6+\frac{3}{5} \\\hline
7 & \qquad\frac{14}9=1+\frac59
& \qquad\frac{14}{10}=\frac75=1+\frac25
& \qquad\frac{70}{9}=7+\frac79
& \qquad\frac{77}{10}=7+\frac{7}{10} \\\hline
8 & \qquad\frac{16}9=1+\frac 79
& \qquad\frac{16}{10}=\frac85=1+\frac35
& \qquad\frac{80}{9}=8+\frac89
& \qquad\frac{88}{10}=\frac{44}5=8+\frac{4}{5} \\\hline
9 & (**)\,\frac{18}{9}=2
& \qquad\frac{18}{10}=\frac95=1+\frac45
& (**)\,\frac{90}{9}=10
& \qquad\frac{99}{10}=9+\frac{9}{10} \\\hline
\end{array}
$$
\caption{In the table, $(*)$ indicates that the periodic braid is
P-minimal and $(**)$ indicates that it is P-minimal and C-tight.}
\label{ta:ex}
\end{table}
As mentioned earlier, it follows from Theorem~\ref{thm:CMW} that
every periodic element has a BCMW-power.
We prove this in Theorem~\ref{thm:LL-CMW} in a different way
using only the properties of the invariants $\t_{\inf}(\cdot)$,
$\t_{\sup}(\cdot)$ and $\t_{\len}(\cdot)$, and the stable super summit set.
A new fact provided by Theorem~\ref{thm:LL-CMW} is that
the exponent of a BCMW-power of a periodic element $g$
is completely determined by $\t_{\inf}(g)$ and $m$ alone; moreover,
it can be computed easily from those values.
Before proving the theorem, we establish the necessary lemmas.
\begin{lemma}\label{lem:per-equiv}
An element $g$ in $G$ is periodic if and only if\/
$\t_{\len}(g)=0$, that is, $\t_{\inf}(g)=\t_{\sup}(g)$.
\end{lemma}
\begin{proof}
Let $\t_{\inf}(g)=p/q$, where $p$ and $q$ are relatively prime and $q\ge 1$.
Suppose that $g$ is periodic.
Since $g^k=\Delta^l$ for some integers $l$ and $k\ge 1$,
$$
\t_{\len}(g)=\frac1k\cdot\t_{\len}(g^k)=\frac1k\cdot\t_{\len}(\Delta^l)=\frac1k\cdot 0=0.
$$
Conversely, assume that $\t_{\len}(g)=0$, that is, $\t_{\inf}(g)=\t_{\sup}(g)=p/q$.
Then
$$
\inf{\!}_ s(g^q)=\lfloor \t_{\inf}(g^q)\rfloor
=\lfloor q\cdot\t_{\inf}(g)\rfloor =p
\quad\mbox{and}\quad
\sup{\!}_s(g^q)=\lceil \t_{\sup}(g^q)\rceil
=\lceil q\cdot\t_{\sup}(g)\rceil = p.
$$
Therefore $g^q$ is conjugate to $\Delta^p$, hence $g$ is periodic.
\end{proof}
\begin{lemma}\label{lem:per-elt}
Let $g$ be a periodic element of\/ $G$ with $\t_{\inf}(g)=p/q$ for relatively prime
integers $p$ and $q$ with $q\ge 1$. Then the following hold.
\begin{itemize}
\item[(i)]
For all integers $n$,
$\t_{\inf}(g^n) = n\cdot\t_{\inf}(g)$.
\item[(ii)]
$\len{\!}_s(g^k)$ is $0$ if\/ $k\equiv 0\bmod q$ and $1$ otherwise.
In particular, $[g^k]^S =[g^k]^U$ for all $k$.
\item[(iii)]
$q=1$ if and only if $g$ is conjugate to $\Delta^p$.
\end{itemize}
\end{lemma}
\begin{proof}
\smallskip\noindent
(i) \ \
We know that $\t_{\inf}(g^n) = n\cdot\t_{\inf}(g)$ holds if $n$ is nonnegative.
Let $n$ be negative, then $n=-k$ for some positive integer $k$.
Since $\t_{\inf}(g)=\t_{\sup}(g)$ by Lemma~\ref{lem:per-equiv}
and $\t_{\inf}(h^{-1})=-\t_{\sup}(h)$ for all $h\in G$,
we have
$$\t_{\inf}(g^n) =\t_{\inf}((g^{-1})^k)
=k\cdot\t_{\inf}(g^{-1})
=k\cdot(-\t_{\sup}(g))=n\cdot\t_{\inf}(g).
$$
\smallskip\noindent
(ii) \ \
For any positive integer $k$,
$$
\len{\!}_s(g^{-k})
=\len{\!}_s(g^k)
=\sup{\!}_s(g^k)-\inf{\!}_ s(g^k)
=\lceil \t_{\sup}(g^k)\rceil -\lfloor\t_{\inf}(g^k)\rfloor
=\lceil kp/q\rceil -\lfloor kp/q\rfloor.
$$
Therefore, for any integer $k$, $\len{\!}_s(g^k)$ is either 0 or 1,
and $\len{\!}_s(g^k)=0$ if and only if $kp/q$ is an integer.
Because $p$ and $q$ are relatively prime,
$kp/q$ is an integer if and only if $k\equiv 0\bmod q$.
\smallskip\noindent
(iii) \ \
Let $q=1$. Then $\t_{\inf}(g)=\t_{\sup}(g)=p$ by Lemma~\ref{lem:per-equiv}.
Since $\inf{\!}_ s(g)=\myfloor{\t_{\inf}(g)}=p=\myceil{\t_{\sup}(g)}=\sup{\!}_s(g)$,
the element $g$ is conjugate to $\Delta^p$.
The converse is obvious.
\end{proof}
\begin{lemma}\label{lem:Primi_equiv}
Let $g$ be a periodic element of\/ $G$ with $\t_{\inf}(g)=p/q$ for relatively
prime integers $p$ and $q\ge2$.
Then the following conditions are equivalent.
\begin{itemize}
\item[(i)]
$g$ is P-minimal (i.e. $p\equiv 1\bmod q$).
\item[(ii)]
Every element $h\in[g]^{St}$ is of the form $\Delta^u a$
for an integer $u$ and a simple element $a$ such that
the simple element $a$ is minimal in the set
$S = \{ \Delta^{-\inf(h^n)}h^n : n\in\mathbf Z\}\setminus\{e\}$
under the order relation $\le_R$.
\item[(iii)]
Every element $h\in [g]^{St}$ is of the form $\Delta^u a$
for an integer $u$ and a simple element $a$
such that $\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a = \Delta$.
\item[(iv)]
$g$ is conjugate to an element of the form $\Delta^u a$
for an integer $u$ and a simple element $a$
such that $\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a = \Delta$.
\end{itemize}
\end{lemma}
\begin{proof}
We prove the equivalences by showing the implications
(i) $\Rightarrow$ (iii) $\Rightarrow$ (iv) $\Rightarrow$ (i)
and (ii) $\Leftrightarrow$ (iii).
Note that the implication (iii) $\Rightarrow$ (iv) is obvious.
\smallskip\noindent
(i) $\Rightarrow$ (iii)\ \
Suppose that $\t_{\inf}(g)=(uq+1)/q=u+1/q$ for an integer $u$.
Because $g$ is periodic, $\t_{\sup}(g)=\t_{\inf}(g)=u+1/q$ by Lemma~\ref{lem:per-equiv}.
Let $h$ be an element of the stable super summit set of $g$.
Then
$$
\begin{array}{ll}
\inf(h)=\inf{\!}_ s(g)=\lfloor \t_{\inf}(g)\rfloor=u,\quad
&\inf(h^q)=\inf{\!}_ s(g^q)=\lfloor q\cdot \t_{\inf}(g)\rfloor=uq+1,\\
\sup(h)=\sup{\!}_s(g)=\lceil \t_{\sup}(g)\rceil=u+1,\quad
&\sup(h^q)=\sup{\!}_s(g^q)=\lceil q\cdot \t_{\sup}(g)\rceil=uq+1.
\end{array}
$$
Therefore, $h=\Delta^u a$ for $a\in\mathcal D\backslash\{ e, \Delta\}$
and $h^q=\Delta^{uq+1}$.
Note that
$$
\Delta^{uq+1}=h^q=(\Delta^u a)\cdots(\Delta^u a)
=\Delta^{uq}\, \tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a.
$$
This implies that
$\Delta=\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a$.
\smallskip\noindent
(iv) $\Rightarrow$ (i)\ \
Suppose that $g$ is conjugate to $h=\Delta^u a$ for $a\in\mathcal D\backslash\{ e, \Delta\}$
satisfying
$$\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a = \Delta.$$
Then $h^q=\Delta^{uq+1}$,
hence one has $\t_{\inf}(h)=(1/q)\cdot\t_{\inf}(h^q)=(uq+1)/q$.
Since $p/q = \t_{\inf}(g)=\t_{\inf}(h)=(uq+1)/q$, one has $p=uq+1\equiv 1\bmod q$.
\smallskip\noindent
(ii) $\Rightarrow$ (iii)\ \
For $j\ge 1$, let $a_j$ be the positive element defined by
$$
a_j=\tau^{(j-1)u}(a)\,\tau^{(j-2)u}(a)\cdots \tau^{u}(a)\,a.
$$
Note that $h^j=(\Delta^u a)^j=\Delta^{ju} a_j$, hence
$\operatorname{len}(a_j)=\operatorname{len}(h^j)=\len{\!}_s(g^j)$ and it is 0 if $j\equiv 0\bmod q$
and 1 otherwise.
In particular,
$\operatorname{len}(a_q)=\sup(a_q)-\inf(a_q)=0$, hence
$$
\inf(a_q)=\sup(a_q)\ge \sup(a)\ge 1.
$$
Since $\inf(a_1)=\inf(a)=0$ and
$0\le \inf(a_{j+1})\le \inf(a_j)+1$ for all $j\ge 1$,
there exists $2\le j\le q$ such that $\inf(a_j)=1$.
Let $k$ be the smallest positive integer such that $\inf(a_k)=1$,
then $2\le k\le q$.
Since $\inf(a_{k-1})=0$ and $\operatorname{len}(a_{k-1})=1$,
$a_{k-1}$ is a simple element.
Since
$$
\Delta\le_L a_k
=\tau^{(k-1)u}(a)\cdots \tau^{u}(a)\,a
=\tau^u\bigl(\tau^{(k-2)u}(a)\cdots \tau^{u}(a)a\bigr)\,a
=\tau^u(a_{k-1}) a,
$$
we have $a=a'a''$ for some simple elements $a'$ and $a''$ such that
$\tau^u(a_{k-1})a'=\Delta$, that is, $a_k=\Delta a''$.
Since $\inf(a_{k-1})=0$, we have $a'\ne e$, hence
$a''\ne a$.
Note that $a''\le_R a$ and $h^k=\Delta^{uk+1}a''$.
If $a''\ne e$, then it contradicts the minimality of $a$ under $\le_R$.
Therefore $a''=e$, hence $a_k=\Delta$.
Notice that $\operatorname{len}(a_j)=0$ if and only if $j\equiv 0\bmod q$.
Since $2\le k\le q$ by the construction,
one has $k=q$ and hence
$$
\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a=a_q=a_k=\Delta
$$
as desired.
\smallskip\noindent
(iii) $\Rightarrow$ (ii)\ \
It is obvious since $S=\{e\}\cup\{ \tau^{(k-1)u}(a)\,\tau^{(k-2)u}(a)\cdots
\tau^{u}(a)\,a: k=1,\ldots,q-1\}$.
\end{proof}
\begin{lemma}\label{lem:SubGp_equiv}
Let $g$ be a periodic element of\/ $G$ with $\t_{\inf}(g)=p/q$ for relatively
prime integers $p$ and $q\ge 1$, and let $r$ be a nonzero integer.
Then, the elements $\bar g$ and $\bar g^r$ generate the same cyclic subgroup of $G_\Delta$
if and only if $r$ is relatively prime to both
$q$ and $m/\gcd(p,m)$.
\end{lemma}
\begin{proof}
Let $\langle\bar g\rangle$ be the cyclic subgroup
of $G_\Delta$ generated by $\bar g$.
For positive integers $k$,
let $\mathbf Z/k\mathbf Z$ denote the additive group of residue classes modulo $k$.
Define a map $\phi:\langle\bar g\rangle \to \mathbf Z/(qm)\mathbf Z$ by
$$
\phi(\bar g^n)=q\cdot\t_{\inf}(g^n)=q\cdot (np/q)=pn \bmod {qm},
\qquad n\in\mathbf Z.
$$
We first show that $\phi$ is an injective homomorphism.
The map $\phi$ is well-defined because $q\cdot\t_{\inf}(\Delta^m)=qm\equiv 0\bmod qm$,
and it is a homomorphism because $\t_{\inf}(g^n)=n\cdot\t_{\inf}(g)$ for all integers $n$.
If $\phi(\bar g^n)\equiv 0\bmod {qm}$,
then $\t_{\inf}(g^n)=km$ for some integer $k$.
Because $g^n$ is periodic, we have $\t_{\sup}(g^n)=\t_{\inf}(g^n)=km$,
hence $\inf{\!}_ s(g^n)=\sup{\!}_s(g^n)=km$.
Therefore $g^n$ is conjugate to $\Delta^{km}$.
Because $\Delta^m$ is central, one has $g^n=\Delta^{km}$.
This shows that $\phi$ is injective.
Let $r$ be a nonzero integer.
Since $\phi(\bar g^n)=pn \bmod{qm}$ for all integers $n$,
the images of $\langle\bar g\rangle$ and $\langle\bar g^r\rangle$ under $\phi$
are generated by the residue classes of $\gcd(p,qm)$ and $\gcd(pr,qm)$,
respectively.
Therefore it suffices to show that
$\gcd(p,qm)=\gcd(pr,qm)$ if and only if
$r$ is relatively prime to both $q$ and $m/\gcd(p,m)$.
For nonzero integers $a$, $b$ and $c$, the following equality holds:
$$
\gcd(ab,c)=\gcd(a,c)\cdot\gcd\left(b,{c}/{\gcd(a,c)}\right).
$$
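This identity is elementary; as a quick numerical sanity check (not part of the argument), the following Python sketch, with a function name of our own choosing, verifies it exhaustively over a small range of positive integers.

```python
from math import gcd

def gcd_product_identity(a: int, b: int, c: int) -> bool:
    """Check gcd(a*b, c) == gcd(a, c) * gcd(b, c / gcd(a, c))."""
    d = gcd(a, c)
    return gcd(a * b, c) == d * gcd(b, c // d)

# exhaustive check over a small range of positive integers
assert all(gcd_product_identity(a, b, c)
           for a in range(1, 25) for b in range(1, 25) for c in range(1, 25))
```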
Note that $\gcd(p,qm)=\gcd(p,m)$ because $p$ and $q$ are relatively prime.
Applying the above equality to $a=p$, $b=r$ and $c=qm$,
\begin{eqnarray*}
\gcd(pr,qm)
&=&\gcd(p,qm)\cdot\gcd (r,qm/\gcd(p,qm) )\\
&=&\gcd(p,qm)\cdot\gcd (r,qm/\gcd(p,m) ).
\end{eqnarray*}
Therefore, $\gcd(pr,qm)=\gcd(p,qm)$ if and only if
$\gcd(r, qm/\gcd(p,m))=1$.
Since $qm/\gcd(p,m)=q\cdot (m/\gcd(p,m))$ and $m/\gcd(p,m)$ is
an integer, $\gcd(r, qm/\gcd(p,m))=1$
if and only if
$r$ is relatively prime to both $q$ and $m/\gcd(p,m)$.
\end{proof}
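Via $\phi$, the content of the lemma is purely arithmetic: it amounts to comparing the subgroups of $\mathbf Z/(qm)\mathbf Z$ generated by the residues of $p$ and $pr$. The following Python sketch (our own illustration, not part of the proof) confirms the criterion by brute force for small parameters.

```python
from math import gcd

def same_subgroup(p, q, m, r):
    """Do p and p*r generate the same subgroup of Z/(q*m)Z?"""
    n = q * m
    return {p * k % n for k in range(n)} == {p * r * k % n for k in range(n)}

def criterion(p, q, m, r):
    """r relatively prime to both q and m/gcd(p, m), as in the lemma."""
    return gcd(r, q) == 1 and gcd(r, m // gcd(p, m)) == 1

# check the equivalence for small p, q, m with gcd(p, q) == 1 and r != 0
for q in range(1, 6):
    for m in range(1, 6):
        for p in range(1, 12):
            if gcd(p, q) != 1:
                continue
            for r in range(1, q * m + 1):
                assert same_subgroup(p, q, m, r) == criterion(p, q, m, r)
```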
\begin{theorem}\label{thm:LL-CMW}
Let $g$ be a periodic element of a Garside group\/ $G$ with $\t_{\inf}(g)=p/q$ for relatively
prime integers $p$ and $q$ with $q\ge 1$.
A power $g^r$ is a BCMW-power of $g$ if and only if\/
$pr\equiv 1 \bmod q$ and $r$ is relatively prime to $m/\gcd(p,m)$.
In particular, if\/ $q=1$ then $g$ itself is a BCMW-power, and
if\/ $q\ge 2$ then there is an integer $r$ with $1\le r<qm$ such that
$g^r$ is a BCMW-power of $g$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:per-elt}, one has $\t_{\inf}(g^r)=pr/q$ for any integer $r$.
Suppose that $g^r$ is a BCMW-power of $g$.
Since $\myangle{\bar g}=\myangle{\bar g^r}$ in $G_\Delta$,
$r$ is relatively prime to $q$ and $m/\gcd(p,m)$
by Lemma~\ref{lem:SubGp_equiv}.
Since $q$ is relatively prime to $p$ by assumption,
$q$ is relatively prime to $pr$.
Therefore, $pr\equiv 1 \bmod q$ because $g^r$ is P-minimal.
Conversely, suppose that
$pr\equiv 1 \bmod q$ and $r$ is relatively prime to $m/\gcd(p,m)$.
Since $pr$ and $q$ are relatively prime, so are $r$ and $q$.
Hence $\myangle{\bar g}=\myangle{\bar g^r}$ in $G_\Delta$ by Lemma~\ref{lem:SubGp_equiv}.
Since $pr\equiv 1 \bmod q$, $g^r$ is P-minimal.
Therefore $g^r$ is a BCMW-power of $g$.
\smallskip
If $q=1$, then $g$ itself is obviously a BCMW-power
because it is P-minimal.
\smallskip
We now show that for $q\ge 2$ there exists a BCMW-power $g^r$ with $1\le r<qm$.
Since $p$ and $q$ are relatively prime, there exists an integer $r_0$ such that
$1\le r_0 < q$ and
$$
pr_0\equiv 1\bmod q.
$$
In particular, $r_0$ is relatively prime to $q$.
Suppose that all of the prime divisors of $m/\gcd(p,m)$
are also divisors of $q$.
In this case, let $r=r_0$.
Then $r$ is relatively prime to $m/\gcd(p,m)$
since it is relatively prime to $q$.
Moreover $1\le r < q\le qm$ and $pr\equiv 1\bmod q$, as desired.
Otherwise, let $p_1,\ldots,p_k$ be the distinct prime divisors
of $m/\gcd(p,m)$ that do not divide $q$.
For each $p_i$, take an integer $r_i$ relatively prime to $p_i$.
By the Chinese remainder theorem, there exists an integer $r$
with $1\le r<q p_1 p_2 \cdots p_k$ such that
$$
r\equiv r_0\bmod {q},\qquad
r\equiv r_i\bmod {p_i}\quad\mbox{for $i=1,\ldots,k$}.
$$
By the construction, $r$ is relatively prime to $m/\gcd(p,m)$
and $pr\equiv pr_0\equiv 1\bmod q$.
Moreover $1\le r<q p_1 p_2 \cdots p_k
\le q\cdot m/\gcd(p,m)\le qm$, as desired.
\end{proof}
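The existence statement of the theorem is easy to test computationally. The sketch below (our own illustration, with our own naming) finds, by brute force rather than by the Chinese remainder construction of the proof, the smallest exponent $r$ with $pr\equiv 1\bmod q$ and $\gcd(r,m/\gcd(p,m))=1$, and checks the bound $r<qm$ for $q\ge 2$.

```python
from math import gcd

def bcmw_exponent(p, q, m):
    """Smallest r >= 1 with p*r == 1 (mod q) and gcd(r, m // gcd(p, m)) == 1.

    The theorem guarantees such an r with 1 <= r < q*m when q >= 2,
    so the search below terminates."""
    assert q >= 1 and gcd(p, q) == 1
    M = m // gcd(p, m)
    r = 1
    while not ((p * r) % q == 1 % q and gcd(r, M) == 1):
        r += 1
    return r

# verify the bound r < q*m on a range of parameters with q >= 2
for q in range(2, 8):
    for m in range(1, 8):
        for p in range(1, 20):
            if gcd(p, q) == 1:
                assert bcmw_exponent(p, q, m) < q * m
```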
Combining with Lemmas~\ref{lem:Primi_equiv} and \ref{lem:SubGp_equiv},
the above result implies the following.
\begin{corollary}\label{cor:LL-CMW}
Let $g$ be a periodic element of\/ $G$ with $\t_{\inf}(g)=p/q$ for relatively
prime integers $p$ and $q$ with $q\ge 2$.
Then there exists an integer $r$ such that the following hold.
\begin{itemize}
\item
$g^r$ generates the same cyclic subgroup of $G_\Delta$ as $g$ does.
\item
$g^r$ is conjugate to $\Delta^u a$ for a simple element
$a\in\mathcal D\backslash\{e, \Delta\}$ such that
$$
\tau^{(q-1)u}(a)\,\tau^{(q-2)u}(a)\cdots \tau^{u}(a)\,a=\Delta.
$$
\end{itemize}
The exponent $r$ depends only on the integers $p$, $q$ and $m$,
and it is characterized by the property that
$r$ is relatively prime to $m/\gcd(p,m)$ and $pr\equiv 1\bmod q$.
In particular, one can take $r$ such that $1\le r<qm$.
\end{corollary}
\begin{remark}
Theorem~\ref{thm:CMW} shows the structure of finite subgroups of $G_\Delta$.
In particular, every finite subgroup of $G_\Delta$ is abelian.
Once we know that finite subgroups of $G_\Delta$ are abelian,
then it is easy to prove Theorem~\ref{thm:CMW}
using only the properties of the invariants $\t_{\inf}(\cdot)$, $\t_{\sup}(\cdot)$ and
$\t_{\len}(\cdot)$, and the stable super summit set.
However, it seems difficult to prove that finite subgroups
of $G_\Delta$ are abelian without considering the action of $G$
on Bestvina's normal form complex, as done in~\cite{Bes99} and~\cite{CMW04}.
\end{remark}
For $g, h, x\in G$ and $n\in\mathbf Z$, the equality $x^{-1}gx=h$ implies $x^{-1}g^nx=h^n$.
The converse fails in general: even if $g^n$ is conjugate to $h^n$,
the elements $g$ and $h$ need not be conjugate.
Interestingly, the converse does hold for BCMW-powers.
\begin{proposition}\label{prop:BCMW_power}
Let $g$ be a periodic element of\/ $G$ and $g^r$ a BCMW-power.
Then for elements $h$ and $x$ of\/ $G$,
$x$ conjugates $g$ to $h$
if and only if it conjugates $g^r$ to $h^r$.
Therefore, the conjugacy decision problem and
the conjugacy search problem for $(g,h)$
are equivalent to those for $(g^r,h^r)$,
and the centralizer of\/ $g$ in $G$ is the same as
the centralizer of\/ $g^r$ in $G$.
\end{proposition}
\begin{proof}
If $x^{-1}gx=h$, then it is obvious that $x^{-1}g^r x=h^r$.
Conversely, suppose that $x^{-1}g^r x=h^r$.
We claim that there exist integers $s$ and $t$ such that
$$
g^{rs}=\Delta^{mt}g\quad\mbox{and}\quad
h^{rs}=\Delta^{mt}h.
$$
Since $\langle \bar g\rangle=\langle\bar g^r\rangle$ in $G_\Delta$,
there exist integers $s$ and $t$ such that
$g^{rs}=\Delta^{mt}g$.
Since $h^r$ and $g^r$ are conjugate, $h^r$ is periodic
(hence $h$ is periodic) and $\t_{\inf}(h^r)=\t_{\inf}(g^r)$.
Since $g^{rs-1}=\Delta^{mt}$ and $h$ is periodic,
$$
\t_{\sup}(h^{rs-1})=\t_{\inf}(h^{rs-1})
=\frac{rs-1}r\cdot \t_{\inf}(h^r)
=\frac{rs-1}r\cdot \t_{\inf}(g^r)
=\t_{\inf}(g^{rs-1})=mt.
$$
Since $mt$ is an integer, we have $\inf{\!}_ s(h^{rs-1})=\sup{\!}_s(h^{rs-1})=mt$,
hence $h^{rs-1}$ is conjugate to $\Delta^{mt}$.
Since $\Delta^{mt}$ is central, $h^{rs-1}=\Delta^{mt}$.
Therefore $h^{rs}=\Delta^{mt}h$.
Since $x^{-1}g^r x=h^r$, we have
$x^{-1}(\Delta^{mt}g)x=x^{-1}g^{rs}x=h^{rs}=\Delta^{mt}h$.
Since $\Delta^{mt}$ is central,
it follows that $x^{-1}g x=h$.
\end{proof}
\subsection{Super summit sets of P-minimal and C-tight periodic elements}
Recall the definition of partial cycling introduced by Birman, Gebhardt
and Gonz\'alez-Meneses~\cite{BGG06b}.
\begin{definition}
Let $g=\Delta^u a_1a_2\cdots a_\ell \in G$ be in the normal form.
Let $b\in\mathcal D$ be a prefix of $a_1$, i.e.\ $a_1=ba_1'$ for a simple element $a_1'$.
The conjugation
$$
\tau^{-u}(b)^{-1} g \tau^{-u}(b) = \Delta^u a_1'a_2\cdots a_\ell\tau^{-u}(b)
$$
is called a \emph{partial cycling} of $g$ by $b$.
\end{definition}
\begin{remark}
For an ultra summit element $g=\Delta^u a_1a_2\cdots a_\ell \in G$ in the normal form
with $\ell > 0$,
a simple element $b\neq e$ is called a {\em minimal simple element} for $g$
with respect to $[g]^U$ if $b^{-1}gb\in [g]^U$ and no proper prefix of $b$ satisfies
this property.
When partial cycling was first defined in~\cite{BGG06b},
it was used only for conjugating $g$
by a minimal simple element, which is a special prefix of $\tau^{-u}(a_1)$.
Unlike this previous usage, we consider partial cycling of $g$
by an arbitrary prefix of $\tau^{-u}(a_1)$.
\end{remark}
We now establish the main result of this section (Theorem~\ref{thm:pa-cy}):
the super summit set of a P-minimal, C-tight
periodic element is closed under any partial cycling.
Combined with Theorem~\ref{thm:LL-CMW}, which guarantees that every periodic element
has a BCMW-power, Theorem~\ref{thm:pa-cy} yields that the super summit set
of a BCMW-power of a C-tight periodic element is closed under any partial cycling.
Notice that if either of the hypotheses in Theorem~\ref{thm:pa-cy} is dropped,
the super summit set is not necessarily closed under partial cycling.
We will show this in Example~\ref{eg:conditions}.
\begin{theorem}\label{thm:pa-cy}
Let $g$ be a periodic element of a Garside group $G$.
If $g$ is P-minimal and C-tight, then
the following conditions are equivalent for an element $h$ conjugate to $g$.
\begin{itemize}
\item[(i)] $\inf(h)=\inf{\!}_ s(g)$.
\item[(ii)] $h\in [g]^{S}$, that is, $\inf(h)=\inf{\!}_ s(g)$ and $\sup(h)=\sup{\!}_s(g)$.
\item[(iii)] $h\in[g]^{St}$, that is, $\inf(h^k)=\inf{\!}_ s(g^k)$
and $\sup(h^k)=\sup{\!}_s(g^k)$ for all $k\ge 1$.
\end{itemize}
In particular, $[g]^{S}$ is closed under any partial cycling.
\end{theorem}
\begin{proof}
Let $\t_{\inf}(g)=p/q$ for relatively prime integers $p$ and $q$ with $q\ge 1$.
The claim is trivial when $q=1$, because $g$ is conjugate to $\Delta^p$.
Thus we may assume that $q\ge 2$.
Because the implications (iii) $\Rightarrow$ (ii) $\Rightarrow$ (i)
are obvious, we will show that (i) $\Rightarrow$ (iii).
Suppose that $h$ is conjugate to $g$ and $\inf(h)=\inf{\!}_ s(g)$.
Because $g$ is P-minimal and C-tight, $p=uq+1=ml$ for some integers $u$ and $l$.
Therefore
\begin{eqnarray*}
\t_{\inf}(h)&=&\t_{\sup}(h)=p/q=u+1/q,\\
\inf{\!}_ s(h^k)&=& \lfloor k\t_{\inf}(h)\rfloor = \lfloor k(uq+1)/q\rfloor=ku+\lfloor k/q\rfloor,\\
\sup{\!}_s(h^k)&=& \lceil k\t_{\sup}(h)\rceil = \lceil k(uq+1)/q\rceil = ku+\lceil k/q\rceil
\end{eqnarray*}
for all integers $k\ge 1$.
Because $\inf{\!}_ s(h^q)=\sup{\!}_s(h^q)=uq+1=ml$,
$h^q$ is conjugate to $\Delta^{ml}$.
Because $\Delta^{ml}$ is central,
$$
h^q=\Delta^{ml}=\Delta^p=\Delta^{uq+1}.
$$
Because $\inf(h)=\inf{\!}_ s(h)=u$, there exists a positive element $a$ such that
$$
h=\Delta^u a.
$$
Because $\sup(h)\ge \sup{\!}_s(h)=u+1$, $a$ is not the identity.
Let $\psi$ denote $\tau^{u}$. Then
$$
h^k=\Delta^{ku}\psi^{k-1}(a)\psi^{k-2}(a)\cdots\psi(a) a
$$
for all integers $k\ge 2$.
Because $h^q=\Delta^{qu+1}$,
$$
\Delta^{qu}\cdot\Delta=\Delta^{qu+1}=h^q
=\Delta^{qu}\psi^{q-1}(a)\psi^{q-2}(a)\cdots\psi(a) a.
$$
Therefore $\Delta=\psi^{q-1}(a)\psi^{q-2}(a)\cdots\psi(a) a$.
In particular,
$\psi^{k-1}(a)\psi^{k-2}(a)\cdots a \in \mathcal D\backslash\{ e, \Delta\}$
for all integers $k$ with $1\le k < q$.
Therefore
$$
\inf(h^k)=uk=\inf{\!}_ s(h^k)
\quad\mbox{and}\quad
\sup(h^k)=uk+1=\sup{\!}_s(h^k)
\qquad\mbox{for $k=1,\ldots,q-1$}.
$$
Because $h^q=\Delta^{ml}$, this proves that
$h$ belongs to the stable super summit set.
\medskip
Now, let us show that $[g]^S$ is closed under any partial cycling.
Let $h'$ be the result of an arbitrary partial cycling
of an element $h$ in $[g]^S$.
Partial cycling does not decrease the infimum by definition.
Therefore $\inf(h')=\inf(h)=\inf{\!}_ s(g)$, whence
$h'$ belongs to $[g]^S$.
\end{proof}
For $g\in G$, Garside~\cite{Gar69} called the set
$\{h\in[g]: \inf(h)=\inf{\!}_ s(h)\}$ the \emph{summit set} of $g$.
Theorem~\ref{thm:pa-cy} shows that for P-minimal, C-tight
periodic elements, the notions of summit set, super summit set
and stable super summit set are all equivalent.
We already know that the summit length of a periodic element is at most 1, hence
its ultra summit set is nothing other than its super summit set.
The following example shows that the hypotheses of
Theorem~\ref{thm:pa-cy}, that the periodic element $g$
be P-minimal and C-tight, are necessary for the conclusion.
\begin{example}\label{eg:conditions}
Recall that $B_n^{\operatorname{[Artin]}}$ and $B_n^{\operatorname{[BKL]}}$ denote
the $n$-braid group with the Artin Garside structure
and the BKL Garside structure, respectively.
We will observe the following.
\begin{itemize}
\item $\epsilon_{(5)}=\sigma_4\sigma_3\sigma_2\sigma_1\sigma_1\in B_5^{\operatorname{[Artin]}}$
is P-minimal but not C-tight because $\t_{\inf}(\epsilon_{(5)})=1/2$.
\item $\epsilon_{(6)}=\sigma_5\sigma_4\sigma_3\sigma_2\sigma_1\sigma_1\in B_6^{\operatorname{[Artin]}}$
is C-tight but not P-minimal because $\t_{\inf}(\epsilon_{(6)})=2/5$.
\item
$\epsilon_{(6)}^3=\delta_{(6)}^3[4,3,2,1]\in B_6^{\operatorname{[BKL]}}$
is C-tight but not P-minimal
because $\t_{\inf}(\epsilon_{(6)}^3)=18/5$.
\item
For all of these examples, the super summit sets
are not closed under partial cycling
and are different from the stable super summit sets.
\end{itemize}
Note that $\epsilon_{(n)}^{n-1}=\Delta^2=\delta_{(n)}^n$,
hence $\t_{\inf}(\epsilon_{(n)})=2/(n-1)$ in $B_n^{\operatorname{[Artin]}}$
and $\t_{\inf}(\epsilon_{(n)})=n/(n-1)$ in $B_n^{\operatorname{[BKL]}}$.
Therefore
$$
\begin{array}{ll}
\t_{\inf}(\epsilon_{(5)}) = \frac24=\frac12
& \mbox{in $B_5^{\operatorname{[Artin]}}$},\\
\t_{\inf}(\epsilon_{(6)}) = \frac25
& \mbox{in $B_6^{\operatorname{[Artin]}}$},\\
\t_{\inf}(\epsilon_{(6)}^3) = 3\cdot\frac{6}{5}=\frac{18}5=3+\frac35
& \mbox{in $B_6^{\operatorname{[BKL]}}$}.
\end{array}
$$
Consider the elements
$g_1=\sigma_1\sigma_4\sigma_3\sigma_2\sigma_1$ in $[\epsilon_{(5)}]^S$
and $g_2=\sigma_1\sigma_5\sigma_4\sigma_3\sigma_2\sigma_1$
in $[\epsilon_{(6)}]^S$, under the Artin Garside structure.
The partial cycling on $g_1$ and $g_2$ by $\sigma_1$ yields
$\epsilon_{(5)}=(\sigma_4\sigma_3\sigma_2\sigma_1)\sigma_1$ and
$\epsilon_{(6)}=(\sigma_5\sigma_4\sigma_3\sigma_2\sigma_1)\sigma_1$, respectively.
Neither $\epsilon_{(5)}$ nor $\epsilon_{(6)}$ is a super summit element,
because they have canonical length 2.
Hence, $[\epsilon_{(5)}]^S$ and $[\epsilon_{(6)}]^S$
are not closed under partial cycling.
The normal forms of $g_1^2$ and $g_2^2$ are the right-hand sides
of the following equations:
\begin{eqnarray*}
g_1^2
&=&(\sigma_1\sigma_4\sigma_3\sigma_2\sigma_1)
(\sigma_1\sigma_4\sigma_3\sigma_2\sigma_1)
=(\sigma_1\sigma_4\sigma_3\sigma_2\sigma_1\sigma_4\sigma_3\sigma_2)
\cdot(\sigma_1\sigma_2),\\
g_2^2
&=&(\sigma_1\sigma_5\sigma_4\sigma_3\sigma_2\sigma_1)
(\sigma_1\sigma_5\sigma_4\sigma_3\sigma_2\sigma_1)
=(\sigma_1\sigma_5\sigma_4\sigma_3\sigma_2\sigma_1\sigma_5\sigma_4\sigma_3\sigma_2)
\cdot(\sigma_1\sigma_2).
\end{eqnarray*}
In particular, $g_1^2$ and $g_2^2$ have canonical length 2, hence
they do not belong to their super summit sets.
Hence $[\epsilon_{(5)}]^S\ne[\epsilon_{(5)}]^{St}$
and $[\epsilon_{(6)}]^S\ne[\epsilon_{(6)}]^{St}$.
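The word identities above for $g_1^2$ and $g_2^2$ can be verified by hand with the braid relations $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ and $\sigma_i\sigma_j=\sigma_j\sigma_i$ for $|i-j|\ge 2$. As a weaker, machine-checkable sanity check (our own illustration; the induced permutation does not determine the braid, so this is a necessary condition only), the Python sketch below verifies that both sides of each identity induce the same permutation.

```python
def braid_perm(word, n):
    """Permutation (list of 0-indexed images) induced by an Artin word.

    Each entry i of `word` stands for sigma_i (1 <= i <= n-1); letters act
    on strand endpoints left to right, each swapping the values i-1 and i."""
    p = list(range(n))
    for i in word:
        a, b = i - 1, i
        p = [b if x == a else a if x == b else x for x in p]
    return p

# g_1^2 in B_5: two expressions for the same braid
lhs = [1, 4, 3, 2, 1] * 2
rhs = [1, 4, 3, 2, 1, 4, 3, 2] + [1, 2]
assert braid_perm(lhs, 5) == braid_perm(rhs, 5)

# g_2^2 in B_6
lhs6 = [1, 5, 4, 3, 2, 1] * 2
rhs6 = [1, 5, 4, 3, 2, 1, 5, 4, 3, 2] + [1, 2]
assert braid_perm(lhs6, 6) == braid_perm(rhs6, 6)
```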
\smallskip
Now we consider $\epsilon_{(6)}^3 \in B_6^{\operatorname{[BKL]}}$.
Clearly, $\epsilon_{(6)}^3$ belongs to its super summit set
because it has canonical length 1.
Note that
$$
\epsilon_{(6)}^3=\delta_{(6)}^3[4,3,2,1]=\delta_{(6)}^3[4,1][4,3,2]
=\delta_{(6)}^3[4,2][4,3][2,1].
$$
Partial cycling of $\epsilon_{(6)}^3$ by $[4,1]$ gives
$$
\delta_{(6)}^3 [4,3,2] \tau^{-3}([4,1])
= \delta_{(6)}^3 [4,3,2]\,[4,1].
$$
Note that $[4,3,2][4,1]$ is not a simple element,
hence $[\epsilon_{(6)}^3]^S$ is not closed under partial cycling.
Let $g_3$ be the result of partial cycling of $\epsilon_{(6)}^3$ by $[4,2]$,
that is,
$$
g_3
=\delta_{(6)}^3 [4,3][2,1] \tau^{-3}([4,2])
=\delta_{(6)}^3 [4,3][2,1] [5,1]
=\delta_{(6)}^3 [4,3][5,2,1].
$$
Then $g_3\in [\epsilon_{(6)}^3]^S$ because $[4,3][5,2,1]$ is a simple element.
On the other hand, the normal form of $g_3^2$ is the right-hand side
of the following equation.
$$
g_3^2
= \delta_{(6)}^3 [4,3][5,2,1] \delta_{(6)}^3 [4,3][5,2,1]
= \delta_{(6)}^6 [6,1][5,4,2] [4,3][5,2,1]
= \delta_{(6)}^6 [6,1][5,4,3,2] \cdot [5,2,1].
$$
Because $g_3^2$ has canonical length 2,
it does not belong to its super summit set.
Hence $[\epsilon_{(6)}^3]^S\ne[\epsilon_{(6)}^3]^{St}$.
\end{example}
From Theorems~\ref{thm:LL-CMW} and~\ref{thm:pa-cy}, we have the following result.
\begin{corollary}\label{cor:pa-cy}
Let $g$ be a periodic element of a Garside group $G$.
If $g$ is C-tight, then the super summit set of a BCMW-power of $g$
is closed under any partial cycling.
\end{corollary}
\begin{proof}
Let $\t_{\inf}(g)=p/q$ for relatively prime integers $p$ and $q$ with $q\ge 1$.
The claim is trivial if $q=1$, so we may assume that $q\ge 2$.
Let $g^r$ be an arbitrary BCMW-power of $g$.
Then $\t_{\inf}(g^r)=pr/q$ (by Lemma~\ref{lem:per-elt}), where
$pr$ is relatively prime to $q$ (by Theorem~\ref{thm:LL-CMW}).
Since $g$ is C-tight, one has $p\equiv 0\bmod m$
and hence $pr\equiv 0\bmod m$, which means that $g^r$
is a C-tight periodic element.
On the other hand, $g^r$ is P-minimal (by definition),
hence $[g^r]^{S}$ is closed under partial cycling
by Theorem~\ref{thm:pa-cy}.
\end{proof}
\section{Super summit sets of $\epsilon$-type periodic braids}
\label{sec:SSS_of_e}
In this section we consider the BKL Garside structure for the braid group $B_n$.
In this Garside group, the braid $\delta\ (= a_{n,n-1}\cdots a_{3,2}a_{2,1})$
is the Garside element.
This means that the super summit set
of $\delta^k$ consists of the single element $\delta^k$ for every integer $k$.
Therefore, the conjugacy search problem for $\delta$-type periodic braids is easy.
However, the analogous statement does not hold for $\epsilon$-type periodic braids.
Notice that, for all integers $k$, $\operatorname{len}(\epsilon^k)\le 1$ and
$\epsilon^k\in [\epsilon^k]^S = [\epsilon^k]^U$ in $B_n^{[{\operatorname{BKL}}]}$.
The main results of this section are Proposition~\ref{prop:size}
and Proposition~\ref{prop:main}.
\begin{itemize}
\item
In Proposition~\ref{prop:size} we show that the size of the ultra summit set
of $\delta^u[k,k-1,\ldots,1]$ is at least $\mathcal C_k$, the $k$th Catalan number,
provided $u$ and $k$ satisfy certain constraints.
Asymptotically, the Catalan numbers grow as
$$
\mathcal C_k=\frac1{k+1}{2k\choose k}\sim\frac{4^k}{k^{\frac32}\sqrt\pi}.
$$
This implies that the ultra summit set of $\epsilon^k$ in $B_n^{\operatorname{[BKL]}}$
is exponentially large with respect to $n$ for some $k$.
\item
In Proposition~\ref{prop:main} we show that,
for any proper divisor $d$ of $n-1$, applying polynomially many partial cyclings
to an arbitrary super summit element of $\epsilon^d$ yields $\epsilon^d$.
\end{itemize}
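The closed form and the asymptotics of the Catalan numbers quoted above can be checked numerically; the following Python sketch (our own illustration) compares the binomial formula with the standard recurrence $\mathcal C_{k+1}=\sum_{i=0}^{k}\mathcal C_i\,\mathcal C_{k-i}$ and with the asymptotic estimate.

```python
from math import comb, sqrt, pi

def catalan(k):
    """k-th Catalan number via the closed form C_k = binom(2k, k) / (k + 1)."""
    return comb(2 * k, k) // (k + 1)

# agreement with the recurrence C_{k+1} = sum_i C_i * C_{k-i}
for k in range(12):
    assert catalan(k + 1) == sum(catalan(i) * catalan(k - i) for i in range(k + 1))

# the estimate 4^k / (k^{3/2} sqrt(pi)) is accurate to about 1% by k = 200,
# illustrating the exponential growth used in the size bound above
approx = 4.0 ** 200 / (200 ** 1.5 * sqrt(pi))
assert abs(approx / catalan(200) - 1) < 0.01
```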
\begin{proposition}\label{prop:size}
Let $\alpha$ be an $n$-braid such that
$$
\alpha=\delta^u[k,k-1,\ldots,1],\qquad
2\le k\le u\le\frac n2\quad\mbox{or}\quad 2\le k=u+1\le\frac{n}2.
$$
Then the cardinality of\/ $[\alpha]^U$ is at least
$\mathcal C_k$, the $k$th Catalan number.
\end{proposition}
\begin{proof}
By~\cite{BKL98}, the simple element $[k,k-1,\ldots,1]$ has exactly $\mathcal C_k$
left divisors; denote them by $a_i$, $i=1,\ldots,\mathcal C_k$.
For each $i$, let $b_i$ be the simple element satisfying
$a_ib_i=[k,k-1,\ldots,1]$, hence
$$
\alpha=\delta^u[k,k-1,\ldots,1]=\delta^u a_ib_i,\qquad i=1,2,\ldots,\mathcal C_k.
$$
For $i=1,2,\ldots,\mathcal C_k$, define
$$
\alpha_i=b_i\alpha b_i^{-1}=b_i\delta^u a_i=\delta^u\tau^u(b_i)a_i.
$$
First, we show that each $\alpha_i$ is an ultra summit element.
Observe that, for all $i$,
$$
\tau^u(b_i)a_i
\le_L \tau^u(b_i)a_ib_i
\le_R \tau^u(a_ib_i) a_ib_i
= [u+k,\ldots,u+2,u+1]\,[k,\ldots,2,1].
$$
Notice that $2\le k\le u+1< u+k\le n$.
If $k=u+1$, then
$$
[u+k,\ldots,u+1]\,[k,\ldots,2,1]
=[u+k,\ldots,u+1]\,[u+1,\ldots,2,1]
=[u+k,\ldots,2,1].
$$
If $k\le u$, then the descending cycles $[u+k,\ldots,u+2,u+1]$ and
$[k,\ldots,2,1]$ are parallel.
In either case, $[u+k,\ldots,u+2,u+1]\,[k,\ldots,2,1]$ is a simple element.
Therefore $\tau^u(b_i)a_i$ is a simple element, hence
$\alpha_i$ belongs to the ultra summit set $[\alpha]^U$ for all $i$.
\smallskip
Next, we show that the elements $\alpha_i$ are all distinct.
Suppose that $\alpha_i=\alpha_j$ for some $i$ and $j$.
Because $\tau^u(b_i)a_i=\tau^u(b_j)a_j$,
$$
\tau^u(b_i^{-1}b_j)=a_ia_j^{-1}.
$$
Note that $a_ia_j^{-1}$ belongs to the subgroup
$\langle \sigma_1,\ldots,\sigma_{k-1}\rangle$
whereas $\tau^u(b_i^{-1}b_j)$ belongs to the subgroup
$\langle \sigma_{u+1},\ldots,\sigma_{u+k-1}\rangle$.
Because
$\langle \sigma_1,\ldots,\sigma_{k-1}\rangle
\cap \langle \sigma_{u+1},\ldots,\sigma_{u+k-1}\rangle
=\{e\}$, we obtain
$\tau^u(b_i^{-1}b_j)=a_ia_j^{-1}=e$.
Therefore $a_i=a_j$, hence $i=j$.
\end{proof}
\begin{remark}\label{rmk:size}
Let $\alpha\in B_{n}^{[{\operatorname{BKL}}]}$ be as in the above proposition.
Notice that if $k$ is proportional to $n$, say $k\sim n/2$,
then the size of the ultra summit set of $\alpha$
is exponential in the braid index $n$.
Hence Proposition~\ref{prop:size} shows that the ultra summit sets of
the following braids are huge in $B_{n}^{[{\operatorname{BKL}}]}$.
\begin{itemize}
\item
If $k=u+1$, then $\alpha=\delta^{k-1}[k,k-1,\ldots,1]=\epsilon^{k-1}$.
(See Figure~\ref{fig:ex}~(a).)
\item If $k\le u$ and $u$ is a divisor of $n$,
then $\alpha$ is reducible.
(See Figure~\ref{fig:ex}~(b).)
\item If $k\le u$ and $u$ is not a divisor of $n$, then
$\alpha$ appears to be a pseudo-Anosov braid;
however, proving this is beyond the scope of this paper.
\end{itemize}
\end{remark}
\begin{figure}
\begin{tabular}{ccc}
\includegraphics{per_e2inB6.eps} &\qquad&
\includegraphics{per_redbr1.eps} \\
(a)\ \ $\delta^2[3,2,1]\in B_6^{\operatorname{[BKL]}}$ &&
(b)\ \ $\delta^3[3,2,1]\in B_6^{\operatorname{[BKL]}}$
\end{tabular}
\caption{}
\label{fig:ex}
\end{figure}
\begin{lemma}\label{lem:e^d}
For any positive divisor $d$ of\/ $n-1$,
the element $\epsilon^d$ is P-minimal and C-tight.
In particular, $[\epsilon^d]^{S}$ is closed under any partial cycling.
\end{lemma}
\begin{proof}
Notice that $n$ is the smallest positive integer such that $\delta^n$
is central.
Let $q=(n-1)/d$, hence $n=dq+1$.
Then $q$ and $n$ are relatively prime and
$$
\t_{\inf}(\epsilon^d) = \frac{nd}{n-1}=\frac{n}{q}=\frac{dq+1}{q}.
$$
Therefore $\epsilon^d$ is P-minimal because $dq+1\equiv 1\bmod q$,
and C-tight because $dq+1=n\equiv 0\bmod n$.
By Theorem~\ref{thm:pa-cy}, the super summit set
$[\epsilon^d]^{S}$ is closed under partial cycling.
\end{proof}
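The arithmetic in this proof is easy to check mechanically. The Python sketch below (our own illustration; recall that the relevant modulus is $m=n$ for $B_n^{\operatorname{[BKL]}}$) verifies, for small $n$ and every proper positive divisor $d$ of $n-1$, that $\t_{\inf}(\epsilon^d)=nd/(n-1)$ in lowest terms $p/q$ satisfies $p\equiv 1\bmod q$ (P-minimality) and $p\equiv 0\bmod n$ (C-tightness).

```python
from fractions import Fraction

def p_minimal_and_c_tight(n, d):
    """Check P-minimality and C-tightness of eps^d in B_n (BKL structure),
    where t_inf(eps^d) = n*d/(n-1) and the relevant modulus is m = n."""
    t = Fraction(n * d, n - 1)          # automatically reduced to lowest terms p/q
    p, q = t.numerator, t.denominator
    return p % q == 1 % q and p % n == 0

for n in range(3, 40):
    for d in range(1, n - 1):
        if (n - 1) % d == 0:            # d a proper positive divisor of n-1
            assert p_minimal_and_c_tight(n, d)
```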
The following lemma shows how to find a C-tight BCMW-power of an arbitrary
$\epsilon$-type periodic braid.
\begin{lemma}\label{lem:exp_red}
Let $\alpha\in B_n$ be conjugate to $\epsilon^k$ for $k\ne 0$.
Let $d=\gcd(k,n-1)$, and $r$ and $s$ be integers such that $kr+(n-1)s=d$.
\begin{itemize}
\item[(i)] $\alpha^r$ is a C-tight BCMW-power of\/ $\alpha$,
being conjugate to $\delta^{-ns}\epsilon^d$.
\item[(ii)] Let $\alpha_1=\delta^{ns}\alpha^r$.
An $n$-braid $\gamma$ conjugates $\alpha$ to $\epsilon^k$
if and only if\/ it conjugates $\alpha_1$ to $\epsilon^d$.
\end{itemize}
\end{lemma}
\begin{proof}
(i)\ \
As we have seen in Example~\ref{eg:periodic},
every $\epsilon$-type periodic braid
is C-tight under the BKL Garside structure.
We now show that $\alpha^r$ is a BCMW-power of $\alpha$
using Theorem~\ref{thm:LL-CMW}.
Let $q=(n-1)/d$ and $k'=k/d$, then
$$
\t_{\inf}(\alpha)=\t_{\inf}(\epsilon^k)=\frac{n}{n-1}\cdot k
=\frac{nk'}q.
$$
Since $q$ is relatively prime to both $n$ and $k'$,
$nk'$ is relatively prime to $q$.
Because $\frac{n}{\gcd(nk',n)}=\frac{n}{n}=1$,
it suffices to show that $nk'\cdot r\equiv 1\bmod q$.
This follows from the computation below.
$$
nk'r=\frac{nkr}d=\frac{n(d-(n-1)s)}d=
\frac{n(d-dqs)}d=n(1-qs)=(1+dq)(1-qs)\equiv 1\bmod q.
$$
Since $\alpha$ is conjugate to $\epsilon^k$, the power
$\alpha^r$ is conjugate to
$
\epsilon^{kr}=\epsilon^{d-(n-1)s}
=(\epsilon^{n-1})^{-s} \epsilon^d
=\delta^{-ns}\epsilon^d$.
\smallskip\noindent
(ii)\ \
It is an easy consequence of (i).
We just use Proposition~\ref{prop:BCMW_power} that
$\gamma^{-1}\alpha\gamma=\epsilon^k$ if and only if
$\gamma^{-1}\alpha^r\gamma=\epsilon^{kr}$.
\end{proof}
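The congruence at the heart of part (i) can be tested for many values of $n$ and $k$ at once. The Python sketch below (our own illustration) computes B\'ezout coefficients $r,s$ with $kr+(n-1)s=d$ by the extended Euclidean algorithm and verifies that $nk'r\equiv 1\bmod q$.

```python
from math import gcd

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def check_congruence(n, k):
    """Verify n*k'*r == 1 (mod q) for d = gcd(k, n-1), q = (n-1)/d, k' = k/d."""
    d = gcd(k, n - 1)
    g, r, s = ext_gcd(k, n - 1)
    assert g == d and k * r + (n - 1) * s == d
    q = (n - 1) // d
    kp = k // d
    return (n * kp * r) % q == 1 % q

for n in range(3, 25):
    for k in list(range(-30, 0)) + list(range(1, 31)):
        assert check_congruence(n, k)
```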
\begin{corollary}
Let both $\alpha$ and $\beta$ be conjugate to $\epsilon^k$ for $k\ne 0$.
Let $d=\gcd(k,n-1)$, and $r$ and $s$ be integers such that $kr+(n-1)s=d$.
Let $\alpha_1=\delta^{ns}\alpha^r$ and $\beta_1=\delta^{ns}\beta^r$.
Then an $n$-braid $\gamma$ conjugates $\alpha$ to $\beta$
if and only if\/ it conjugates $\alpha_1$ to $\beta_1$.
In other words, the conjugacy search problem for $(\alpha,\beta)$
is equivalent to the conjugacy search problem for $(\alpha_1,\beta_1)$.
\end{corollary}
\begin{proof}
There is $\gamma_1\in B_n$ with $\beta=\gamma_1^{-1}\epsilon^k\gamma_1$,
which implies $\beta_1=\gamma_1^{-1}\epsilon^d\gamma_1$
by applying Lemma~\ref{lem:exp_red} to $(\beta, \beta_1)$.
Now, applying Lemma~\ref{lem:exp_red} to $(\alpha, \alpha_1)$,
we obtain
$$
\alpha=\gamma^{-1}\beta\gamma = \gamma^{-1}\gamma_1^{-1}\epsilon^k\gamma_1\gamma
\quad\mbox{if and only if}\quad
\alpha_1=\gamma^{-1}\gamma_1^{-1}\epsilon^d\gamma_1\gamma
=\gamma^{-1}\beta_1\gamma.
$$
\vskip-1.56\baselineskip
\end{proof}
By the above observation, to solve the conjugacy search
problem for periodic braids, it suffices to consider
the conjugacy classes of $\epsilon^d$ in $B_n$
only for the divisors $d$ of $n-1$ with $0<d<n-1$,
instead of $\epsilon^k$ for all integers $k$.
Before proving Proposition~\ref{prop:main},
we need to show a property of $\delta$.
\begin{lemma}\label{lem:delta}
Let $\alpha$ and $\beta$ be $n$-braids
such that $\delta=\alpha\beta=\beta\alpha$.
Then $\alpha$ is a power of $\delta$.
\end{lemma}
\begin{proof}
Since $\alpha\delta=\alpha(\beta\alpha)=(\alpha \beta)\alpha=\delta \alpha$,
the braid $\alpha$ belongs to the centralizer of $\delta$.
It is well-known that the centralizer of $\delta$ is the infinite cyclic group
generated by $\delta$ (see~\cite{BDM02} or~\cite{GW04}),
hence $\alpha$ is a power of $\delta$.
\end{proof}
\def\temp{
Recall that a simple element is a product of parallel descending cycles.
A descending cycle $[i_k, \ldots, i_2, i_1]$ in $B_n$ is defined originally
for the indices $i_1, \ldots, i_k$ with $1\le i_1 < i_2 < \cdots < i_k\le n$,
and indicates the positive word $a_{i_k i_{k-1}}\cdots a_{i_3 i_2} a_{i_2 i_1}$.
For notational simplicity, we will allow indices before making them congruent modulo $n$.
Namely, the form $[i_{j}+n,\ldots, i_1 + n, i_k, \ldots, i_{j+1} ]$
for any $j$ means the descending cycle $[i_k, \ldots, i_2, i_1]$.
For example, the form $[12, 11, 10, 9]$ in $B_{10}$ means
the descending cycle $[10,9,2,1]$.
}
For an $n$-braid $\alpha$,
let $\pi(\alpha)$ denote the induced permutation of $\alpha$.
We assume that the $n$-permutation $\pi(\alpha)$ acts
on $\{1,2,\ldots,n\}$ on the right, and we write
$k*\pi(\alpha)$ for the image of $k$ under this action.
For $0<d<n-1$,
notice that $\inf{\!}_ s(\epsilon^d)= \inf(\epsilon^d)= d$ and
$\len{\!}_s(\epsilon^d)= \operatorname{len}(\epsilon^d)= 1$ (by Lemma~\ref{lem:per-elt}),
and that the fixed point set of $\pi(\epsilon^d)$ is $\{1\}$,
hence every conjugate of $\epsilon^d$ has exactly one pure strand.
The next proposition shows how to find, very efficiently,
a conjugating element from $\epsilon^d$ to any given element
of the super summit set of $\epsilon^d$.
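The claim about the fixed points of $\pi(\epsilon^d)$ is easy to check computationally. Using $\epsilon=\delta\sigma_1$ and $\pi(\delta):i\mapsto i+1\pmod n$, the Python sketch below (our own illustration, 0-indexed so that strand 1 is index 0) verifies that $\pi(\epsilon^d)$ fixes exactly one point for $0<d<n-1$, and that $\epsilon^{n-1}$ is pure.

```python
def eps_perm(n):
    """Induced permutation of eps = delta * sigma_1, as 0-indexed images."""
    delta = [(i + 1) % n for i in range(n)]          # strand i -> i+1 (mod n)
    swap = lambda x: 1 if x == 0 else 0 if x == 1 else x
    return [swap(delta[i]) for i in range(n)]        # apply delta, then sigma_1

def perm_power(p, d):
    q = list(range(len(p)))
    for _ in range(d):
        q = [p[x] for x in q]
    return q

for n in range(3, 16):
    e = eps_perm(n)
    assert perm_power(e, n - 1) == list(range(n))    # eps^{n-1} is pure
    for d in range(1, n - 1):
        fixed = {i for i, x in enumerate(perm_power(e, d)) if i == x}
        assert fixed == {0}                          # only strand 1 is pure
```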
\begin{proposition}\label{prop:main}
Let $0< d<n-1$ be a divisor of\/ $n-1$, and let $q=(n-1)/d$.
Let $\alpha=\delta^d a$ be an $n$-braid conjugate to $\epsilon^d$,
having the $t$-th strand pure,
where $a \in\mathcal D\backslash\{ e, \delta\}$ and $1\le t\le n$.
Then the following hold.
\begin{enumerate}
\item[(i)] If the simple element $a$ has only one descending cycle, then
$\epsilon^d = \delta^{t-1}\alpha\delta^{-(t-1)}$.
\item[(ii)] If the simple element $a$ has more than one parallel descending cycles,
then at most $q-1$ iterations of partial cycling on a descending cycle of it
reduce the number of parallel descending cycles.
\end{enumerate}
\end{proposition}
\begin{proof}
(i)\ \
Suppose the simple element $a$ has only one descending cycle.
First, suppose $t=1$, that is, the first strand of $\alpha$ is pure.
Let $\pi(\delta^d)$ and $\pi(a)$ be the induced permutations of
$\delta^d$ and $a$.
Because $1*\pi(\delta^d a)=1$ and $1*\pi(\delta^d)=d+1$, we have
$$
1\stackrel{\pi(\delta^d)}{\longrightarrow}
d+1\stackrel{\pi(a)}{\longrightarrow}
1.
$$
Since $d+1\ne 1$, the number 1 is contained in the descending cycle of $a$.
On the other hand, because $\alpha=\delta^d a$ is conjugate to
$\epsilon^d=\delta^d [d+1,d,\ldots,1]$,
the exponent sum of $a$ is equal to that of $[d+1,d,\ldots,1]$.
This means that the descending cycle of the simple element $a$
has $d+1$ numbers.
Therefore $a$ is of the form
$$
[n_d,n_{d-1},\ldots,n_1,1],\qquad 1<n_1<\cdots<n_{d-1}<n_d \le n.
$$
Since $n_d*\pi(a)=1$, we obtain $n_d=d+1$ and hence $a=[d+1,d,\ldots,1]$.
This means that $\alpha=\epsilon^d$.
Now consider general cases.
If the $t$-th strand of $\alpha$ is pure,
then the first strand of $\delta^{t-1}\alpha\delta^{-(t-1)}$
is pure, hence, by the above argument,
$\epsilon^d =\delta^{t-1}\alpha\delta^{-(t-1)}$.
\medskip\noindent
(ii)\ \
Notice that $\alpha\in[\epsilon^d]^S$ because $\operatorname{len}(\alpha)=1$.
Suppose the simple element $a$ has more than one parallel descending cycle.
Among them, take any descending cycle, say $a_1$.
Then $a=a_1a_2$ for a non-identity simple element $a_2$.
We will show that at most $q-1$ iterations of partial cycling on $a_1$
reduce the number of parallel descending cycles.
Let $\psi=\tau^{d}$.
The partial cycling of $\alpha$ by $a_1$ is
$$
\alpha_1=\delta^{d}a_2\psi^{-1}(a_1).
$$
By Lemma~\ref{lem:e^d}, we know that
$[\epsilon^d]^S$ is closed under partial cycling.
This implies that $\alpha_1\in [\epsilon^d]^S$ and hence
$a_2\psi^{-1}(a_1)$ is a simple element.
If the number of parallel descending cycles is not changed, then
$$
a_2\psi^{-1}(a_1) = \psi^{-1}(a_1) a_2.
$$
Now we do partial cycling on $\alpha_1$ by $\psi^{-1}(a_1)$
and obtain
$$
\alpha_2=\delta^{d} a_2\psi^{-2}(a_1).
$$
For the same reason as above, $a_2\psi^{-2}(a_1)$ is a simple element.
If the number of parallel descending cycles is not changed, then
$$
a_2\psi^{-2}(a_1)= \psi^{-2}(a_1) a_2.
$$
Now assume that the first $q-1$ iterations of partial cycling on $a_1$ do not
decrease the number of parallel descending cycles of $a$.
Then
$\psi^{-k}(a_1)a_2 = a_2\psi^{-k}(a_1)$ for all $k=0,1,\ldots,q-1$,
which implies that
\begin{equation}\label{eq:comm}
\psi^j(a_2)\psi^i(a_1)=\psi^i(a_1)\psi^j(a_2),
\qquad \mbox{for all $i,j\in\{0,1,\ldots,q-1\}$}.
\end{equation}
We know that $\epsilon^d$ is P-minimal and C-tight (by Lemma~\ref{lem:e^d}),
hence $\alpha\in [\epsilon^d]^S=[\epsilon^d]^{St}$ (by Theorem~\ref{thm:pa-cy}).
Notice that $\t_{\inf}(\epsilon^d)=nd/(n-1)=n/q$
and that $n$ and $q$ are relatively prime.
Therefore $\delta=\psi^{q-1}(a)\psi^{q-2}(a)\cdots\psi(a) a$
(by Lemma~\ref{lem:Primi_equiv}).
Combining with Equation~(\ref{eq:comm}), we have
$$
\delta=AB=BA
$$
where $A=\psi^{q-1}(a_1)\psi^{q-2}(a_1)\cdots\psi(a_1)a_1$
and $B=\psi^{q-1}(a_2)\psi^{q-2}(a_2)\cdots\psi(a_2)a_2$.
Since $a_1$ and $a_2$ are not the identity, we have $A,B\in\mathcal D\setminus\{e,\delta\}$.
In particular, neither $A$ nor $B$ is a power of $\delta$,
which contradicts Lemma~\ref{lem:delta}.
\end{proof}
\section{Algorithms for the conjugacy problem for periodic braids}
Using the results in the previous sections,
we construct algorithms for solving the conjugacy problem
for periodic braids in $B_n^{\operatorname{[BKL]}}$.
The following is an overview of the algorithms we will describe
in this section.
\begin{itemize}
\item
Algorithms I and III are basic algorithms from which
the other algorithms are constructed.
Algorithm I provides an efficient method for powering
periodic elements in Garside groups,
and Algorithm III solves the CSP for
periodic $n$-braids conjugate to $\epsilon^d$,
where $d$ is a proper divisor of $n-1$.
\item
Algorithm II solves the CDP for periodic braids
and the CSP for $\delta$-type periodic braids.
Our solution to the CDP for periodic braids is
more efficient than Algorithm A of Birman,
Gebhardt and Gonz\'alez-Meneses~\cite{BGG06c}
because we use Algorithm I, the power conjugacy algorithm for periodic elements.
Our solution to the CSP for $\delta$-type periodic braids
is the same as Algorithm B of Birman,
Gebhardt and Gonz\'alez-Meneses~\cite{BGG06c}.
\item
Algorithm IV solves the CSP for $\epsilon$-type periodic braids.
\item
Algorithm V is a complete algorithm for the CDP and the CSP for
periodic braids.
\end{itemize}
Because Algorithm I works for periodic elements in
any Garside group, we describe it separately in \S5.1.
The other algorithms, which work for the braid groups with
the BKL Garside structure, are described in \S5.2.
In \S5.3, we compare the complexities and
implementation requirements of our algorithms
with those of Birman, Gebhardt and Gonz\'alez-Meneses in~\cite{BGG06c}.
Given a Garside group $G$, let $G^{+}$, $\Delta$ and $\mathcal D$ denote
the positive monoid, the Garside element
and the set of simple elements, respectively, of $G$.
Let $m$ be the smallest positive integer such that $\Delta^m$ is central in $G$.
\smallskip
Before describing the algorithms,
let us first discuss how elements of Garside groups
are represented as inputs.
The following two types of words are commonly used:
a word in the atoms and a word in the simple elements.
For example, an element in $B_n^{\operatorname{[BKL]}}$
can be represented by a word in the band generators
or by a word in the simple elements which
are products of parallel descending cycles.
In the following, we define words in the simple elements
in a slightly unusual way.
We explain the motivation briefly with an example.
Let $g$ be an element of $G$ represented by the word
$$
W=a_1^{k_1} \underbrace{\Delta\cdots\Delta}_u a_2^{k_2},
$$
where $u\ge 1$, $a_1,a_2\in\mathcal D$ and $k_1,k_2\in\{-1,1\}$.
Assume that $u$ is very large.
A natural algorithm for computing the normal form of $g$
would be as follows:
(i) collect $\Delta$'s in $W$ and obtain
$g=a_1^{k_1}\Delta^u a_2^{k_2}
=\Delta^u \tau^u(a_1)^{k_1}a_2^{k_2}$;
(ii) compute $\tau^u(a_1)$;
(iii) compute the normal form of $\tau^u(a_1)^{k_1}a_2^{k_2}$;
(iv) output the normal form of $g$, which is
the concatenation of $\Delta^u$ and the normal form of $\tau^u(a_1)^{k_1}a_2^{k_2}$.
Here we remark on two things.
First, the word length of $W$ is $u+2$, hence the usual complexity bound
for computing normal forms gives $\mathcal O(u^2T)$
for some constant $T$, which is unnecessarily large.
Second, if $\Delta$'s are already collected so that $g$ is represented
by $a_1^{k_1}\Delta^u a_2^{k_2}$ then the complexity for computing
the normal form of $g$ is independent of $u$.
Therefore, in the following definition, we allow powers of
$\Delta$ to be contained in a word $W$ in the simple elements,
and we discard them when measuring the word length of $W$.
\begin{definition}
By a \emph{word in the simple elements} in a Garside group $G$, we mean the
following type of word $W$:
$$
W=\Delta^{u_0} a_1^{k_1} \Delta^{u_1} a_2^{k_2} \Delta^{u_2}
\cdots a_r^{k_r}\Delta^{u_r},\qquad
u_i\in\mathbf Z,~a_i\in\mathcal D,~k_i\in\{-1,1\}.
$$
Define $|W|_{\operatorname{simple}}=r$, the number of simple elements $a_i$.
We use the notation $W^{\operatorname{atom}}$ (resp. $W^{\operatorname{simple}}$)
to indicate that $W$ is a word in the atoms
(resp. in the simple elements), when we want to make this explicit.
\end{definition}
For example, if a word $W$ is the normal form of an element $g$ in a Garside group,
then $|W|_{\operatorname{simple}}$ is the same as the canonical length of $g$.
Observe the following.
\begin{itemize}
\item
Every atom is a simple element.
Therefore, a word in the atoms can be regarded
as a word in the simple elements with the same word length.
\item
A word $W$ in the simple elements, $W=\Delta^{u_0} a_1^{k_1} \Delta^{u_1}
\cdots a_r^{k_r}\Delta^{u_r}$, can be transformed
to a word in the atoms by replacing each simple element with a product of atoms.
Let $V^{\operatorname{atom}}$ be such a transformed word.
Then we have the following inequalities:
$$
|W|_{\operatorname{simple}}
\le |V^{\operatorname{atom}}|
\le \Vert\Delta\Vert\cdot (|W|_{\operatorname{simple}} + \sum |u_i|),
$$
where $|V^{\operatorname{atom}}|$ denotes the word length of $V^{\operatorname{atom}}$.
The above inequalities show that words in the simple elements
provide a more efficient way to represent
elements of Garside groups than words in the atoms.
\end{itemize}
For the algorithms in \S\ref{sec:Alg_power} and \S\ref{sec:Alg_BKL},
we assume that the elements of Garside groups are represented by
words in the simple elements, and we analyze their complexities
with respect to $|\cdot|_{\operatorname{simple}}$.
\subsection{Power conjugacy algorithm for periodic elements in Garside groups}
\label{sec:Alg_power}
In this subsection, we discuss complexities for algorithms in Garside groups,
and then give an efficient method for powering periodic elements.
We first recall the following notions in~\cite{Deh02}.
For simple elements $a$ and $b$, there is a
unique simple element $c$ such that $ac=a\vee_L b$.
Such an element $c$ is called the \emph{right complement} of $a$ in $b$.
Similarly, the \emph{left complement} of $a$ in $b$
is the unique simple element $c$ such that $ca=a\vee_R b$.
For a simple element $a$, let ${}^*a$ and $a^*$ denote
the left and right complements of $a$ in $\Delta$, respectively.
Therefore ${}^*a$ and $a^*$ are the unique simple elements satisfying
${}^*aa=\Delta=aa^*$, that is, ${}^*a=\Delta a^{-1}$ and $a^*=a^{-1}\Delta$.
\begin{definition}
Let $T_{\operatorname{lattice}}$ be the maximal time for computing the following simple elements:
\begin{itemize}
\item $a\wedge_L b$ and $a\vee_L b$ from simple elements $a$ and $b$;
\item ${}^*a$ and $a^*$ from a simple element $a$;
\item $\tau^u(a)$ from a simple element $a$ and integer $0< u<m$.
\end{itemize}
\end{definition}
\begin{remark}
Because $\tau(a) = \Delta^{-1}a\Delta =(a^{-1}\Delta)^{-1}\Delta =(a^*)^*$,
we can compute $\tau(a)$ by computing the right complements twice.
Therefore, for $0<u<m$, we can compute $\tau^u(a)$
by computing right complements at most $2u$ times.
However, there are usually more efficient methods
for computing $\tau^u(a)$.
In $B_n^{\operatorname{[Artin]}}$, the simple elements are
in one-to-one correspondence with the $n$-permutations.
If $\theta$ is the $n$-permutation corresponding to a simple element $a$,
then the permutation corresponding to $\tau(a)$ is
$\theta'$ defined by $\theta'(i)= n+1-\theta(n+1-i)$ for $1\le i\le n$.
Moreover $\tau^2$ is the identity, hence for any integer $u$,
$\tau^u(a)$ can be computed in time $\mathcal O(n)$.
Note that $a\wedge_L b$ for simple elements $a$ and $b$
can be computed in time $\mathcal O(n\log n)$~\cite{Thu92}.
In $B_n^{\operatorname{[BKL]}}$, the simple elements are products of parallel descending cycles.
If $[i_\ell,\ldots,i_1]$ is a descending cycle, then
$\tau^u([i_\ell,\ldots,i_1])=[i_\ell+u,\ldots,i_1+u]$, hence
for a simple element $a$, $\tau^u(a)$ can be computed in time
$\mathcal O(n)$.
\end{remark}
\begin{lemma}\label{lem:lt-op}
For simple elements $a$ and $b$ in a Garside group $G$,
the following operations can be done in time $\mathcal O(T_{\operatorname{lattice}})$.
\begin{itemize}
\item[(i)] $a\wedge_R b$ and $a\vee_R b$.
\item[(ii)] $\tau^u(a)$ for an integer $u$.
\item[(iii)] The left and the right complements of $a$ in $b$.
\item[(iv)] The normal form of\/ $ab$.
\end{itemize}
\end{lemma}
\begin{proof}
(i)\ \
It is known by~\cite[Lemma 2.5~(ii)]{Deh02} that
$$
a\wedge_R b={}^*(a^*\vee_L b^*)\quad\mbox{and}\quad
a\vee_R b =({}^*a\wedge_L {}^*b)^*.
$$
Therefore $a\wedge_R b$ can be computed by computing
two right complements, one lcm and then one left complement,
hence it can be computed in time $\mathcal O(T_{\operatorname{lattice}})$.
Similarly for $a\vee_R b$.
\smallskip\noindent
(ii)\ \
For any integer $u$, there is an integer $u_0$ such that
$u\equiv u_0\bmod m$ and $0\le u_0<m$.
Because $\tau^m$ is the identity, $\tau^u(a)=\tau^{u_0}(a)$.
\smallskip\noindent
(iii)\ \
Let $c$ be the right complement of $a$ in $b$, that is, $ac=a\vee_L b$.
Since
$$
{}^*(a\vee_L b) ac= {}^*(a\vee_L b) (a\vee_L b)=\Delta,
$$
we obtain that ${}^*(a\vee_L b) a$ is a simple element and
$$
c
=\bigl({}^*(a\vee_L b) a\bigr)^{-1}\Delta
=\bigl({}^*(a\vee_L b) a\bigr)^*.
$$
Therefore the element $c$ can be computed in time $\mathcal O(T_{\operatorname{lattice}})$.
Similarly for the left complement.
\smallskip\noindent
(iv)\ \
Let $a'b'$ be the normal form of $ab$. Then
$$
a'=\Delta\wedge_L (ab)=(aa^*)\wedge_L (ab)=a(a^*\wedge_L b)
\quad\mbox{and}\quad
b'=(a^*\wedge_L b)^{-1}b.
$$
It is obvious that $a'$ can be computed in time $\mathcal O(T_{\operatorname{lattice}})$.
Note that $b'$ is the right complement of $a^*\wedge_L b$ in $b$,
hence it can be computed in time $\mathcal O(T_{\operatorname{lattice}})$ by (iii).
\end{proof}
Recall that, for a positive element $g$ in $G$,
$\Vert g\Vert$ denotes the maximal word length of $g$ in the atoms in $G^{+}$.
The following lemma is well-known.
See~\cite{DP99} and \cite{BKL01}.
\begin{lemma}\label{lem:time}
Let $g$ be an element of a Garside group $G$.
\begin{itemize}
\item[(i)]
Let $g$ be given as a word $W$ in the simple elements with $l=|W|_{\operatorname{simple}}$.
Then the normal form of $g$ can be obtained in time $\mathcal O(l^2 \cdot T_{\operatorname{lattice}})$.
\item[(ii)]
Let $g$ be in the normal form.
Then the normal forms of the cycling and the decycling of\/ $g$
can be obtained in time $\mathcal O(\operatorname{len}(g) \cdot T_{\operatorname{lattice}})$.
\item[(iii)]
Let $g$ be in the normal form.
Then the total number of cyclings and decyclings
in order to obtain a super summit element is
$\mathcal O(\operatorname{len}(g) \cdot \Vert\Delta\Vert)$.
Therefore we can compute a pair $(g_1,h_1)$ such that
$g_1\in[g]^S$ is in its normal form and $h_1^{-1}g_1h_1=g$
in time $\mathcal O(\operatorname{len}(g)^2 \cdot\Vert\Delta\Vert \cdot T_{\operatorname{lattice}})$.
\end{itemize}
\end{lemma}
Note that in $B_n^{\operatorname{[Artin]}}$ one has $\Vert\Delta\Vert=\Vert\Delta_{(n)}\Vert=n(n-1)/2$ and
$T_{\operatorname{lattice}}=\mathcal O(n\log n)$ by~\cite{Thu92},
and that in $B_n^{\operatorname{[BKL]}}$ one has
$\Vert\Delta\Vert=\Vert\delta_{(n)}\Vert=n-1$ and $T_{\operatorname{lattice}}=\mathcal O(n)$ by~\cite{BKL98}.
\medskip
\begin{definition}
If an element $g$ in a Garside group $G$ is periodic and belongs to
its super summit set, we call it a {\em periodic super summit element}.
\end{definition}
In order to solve the CDP/CSP for periodic elements in Garside groups,
we need an algorithm for powering a periodic element
and then computing a super summit element of that power.
\medskip
\underline{Power conjugacy algorithm}
\begin{itemize}
\item[]
INPUT: an integer $r\ge 1$ and a periodic super summit element $g$ in $G$.
\item[]
OUTPUT: a pair $(h,x)$ of elements in $G$ such that
$h\in[g^r]^S$ and $x^{-1}h x=g^r$.
\end{itemize}
A naive algorithm would be the following.
\begin{itemize}
\item[1.] Compute the normal form of $g^r$.
\item[2.] Apply iterated cycling and decycling to $g^r$
until a super summit element $h$ is obtained.
Let $x$ be the conjugating element obtained in this process
such that $x^{-1}hx=g^r$.
\item[3.] Return $(h,x)$.
\end{itemize}
Note that $\operatorname{len}(g)=\len{\!}_s(g)\le 1$,
because $g$ is a periodic super summit element.
Because $\operatorname{len}(g^r)\le r\operatorname{len}(g)\le r$, with equality in the worst case,
the complexity of the above algorithm
when $g$ is given in the normal form is
$$
\mathcal O(r^2\cdot T_{\operatorname{lattice}})+\mathcal O(r^2\cdot\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})
=\mathcal O(r^2\cdot\Vert\Delta\Vert\cdot T_{\operatorname{lattice}}).
$$
We will improve this algorithm so as to have complexity
$\mathcal O(\log r \cdot\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$.
Our idea is based on the repeated squaring algorithm in $\mathbf Z/n\mathbf Z$,
also known as exponentiation by squaring, square-and-multiply algorithm,
binary exponentiation or double-and-add algorithm.
We remark that our algorithm is interesting not only because
it gives an efficient method for powering,
but also because it exploits a recent result on
abelian subgroups of Garside groups~\cite{LL06c}.
Let us explain the repeated squaring algorithm in $\mathbf Z/n\mathbf Z$ briefly.
See~\cite[Page 8]{Coh93} or~\cite[Page 48]{Sho05} for more detail.
Let $a$, $n$ and $r$ be given large positive integers
from which we want to compute $a^r\bmod n$.
A naive algorithm for computing $a^r$ is to iteratively multiply by $a$
a total of $r$ times.
We can do better.
Let $r=k_0+2 k_1+2^2 k_2+\cdots+2^t k_t$ be the binary
expansion of $r$. Then we have the formula
$$
a^r=\prod_{k_i\ne 0} \left(a^{2^i}\right).
$$
The right hand side is a product of at most
$t+1=\lfloor \log_2 r\rfloor +1$ terms
and $a^{2^i}$ can be obtained by squaring $i$ times
as $a^{2^i}=(\cdots((a^2)^2)^2\cdots)^2$.
Therefore $a^r$ can be computed by $\mathcal O(\log r)$ multiplications.
This idea is implemented as follows.
\begin{algorithm}{Algorithm} (Repeated squaring algorithm in $\mathbf Z/n\mathbf Z$)\\
INPUT: positive integers $a$, $n$ and $r$.\\
OUTPUT: $a^r\bmod n$.
\begin{itemize}
\item[1.]
Compute the binary expansion
$r=k_0+2k_1+2^2k_2+\cdots+2^tk_t$, $k_i\in\{0,1\}$, of $r$.
\item[2.]
Set $x\leftarrow 1\bmod n$.
\item[3.]
For $i\leftarrow t$ down to 0, do the following.
\begin{itemize}
\item[] If $k_i=1$, set $x\leftarrow x^2a\bmod n$.
Otherwise, set $x\leftarrow x^2\bmod n$.
\end{itemize}
\item[4.] Return $x$.
\end{itemize}
\end{algorithm}
Let $r_i=\lfloor r/2^i\rfloor=k_i+2k_{i+1}+\cdots+2^{t-i}k_t$.
Then $r_i=2r_{i+1}+k_i$, hence
$a^{r_i}=(a^{r_{i+1}})^2a$ if $k_i=1$ and $a^{r_i}=(a^{r_{i+1}})^2$
if $k_i=0$.
At step 3 we are computing $a^{r_i}$
from $a^{r_{i+1}}$ for $i=t, t-1,\ldots,0$.
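The loop just described can be sketched in a few lines of Python (a minimal sketch of the repeated squaring algorithm; Python's built-in \texttt{pow(a, r, n)} computes the same value):

```python
def repeated_squaring(a, r, n):
    """Compute a**r mod n by scanning the binary expansion
    k_t, k_{t-1}, ..., k_0 of r from the top bit down."""
    x = 1 % n
    for bit in bin(r)[2:]:      # most significant bit first
        x = (x * x) % n         # squaring step: a^{r_{i+1}} -> a^{2 r_{i+1}}
        if bit == "1":
            x = (x * a) % n     # extra multiplication when k_i = 1
    return x
```

For $r>0$ this performs $\mathcal O(\log r)$ multiplications, matching the count in the discussion above.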
\begin{proposition}\label{prop:power}
Let $g$ be a periodic super summit element of a Garside group $G$,
given in its normal form.
For a positive integer $r$, there is an algorithm of complexity
$\mathcal O(\log r\cdot\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$
that computes a pair $(h,x)$ of elements of\/ $G$
such that $h\in[g^r]^S$ is in the normal form and $x^{-1}hx=g^r$.
\end{proposition}
\begin{proof}
Let $t=\lfloor \log_2 r\rfloor$; then the binary expansion of $r$ is as follows:
$$
r=k_0+2 k_1+2^2 k_2+\cdots+2^t k_t,\qquad
\mbox{$k_i\in\{0,1\}$ for $i=0,\ldots,t$.}
$$
For $i\ge 0$, let
$$
r_i=\lfloor r/2^i\rfloor=k_i+2 k_{i+1}+2^2 k_{i+2}+\cdots+2^{t-i} k_t.
$$
Using reverse induction on $i$, we show that,
for each $i=t+1,t,\ldots,1,0$,
we can compute a triple $(g_i, h_i,x_i)$ such that
\begin{equation}\label{eq:ind}
g_i\in[g]^S, \qquad
h_i\in[g^{r_i}]^S, \qquad
x_i^{-1}g_ix_i=g, \qquad
x_i^{-1}h_ix_i=g^{r_i}.
\end{equation}
Notice that $(h_0,x_0)$ is the desired pair because $r_0=r$.
First, define
$$
g_{t+1}=g,\qquad
h_{t+1}=e,\qquad
x_{t+1}=e.
$$
It is obvious that $(g_{t+1},h_{t+1},x_{t+1})$
satisfies Equation~(\ref{eq:ind}) because $g^{r_{t+1}}=g^0=e$.
Assume that we have computed $(g_{i+1},h_{i+1},x_{i+1})$
for $0\le i\le t$.
Define
\begin{equation}\label{eq:len3}
h_i'=
\left\{\begin{array}{ll}
(h_{i+1})^2g_{i+1} & \mbox{if $k_i=1$,}\\
(h_{i+1})^2 & \mbox{otherwise.}
\end{array}\right.
\end{equation}
Note that $x_{i+1}$ conjugates $h_i'$ to $g^{r_i}$ because
$$x_{i+1}^{-1}h_i'x_{i+1} = \left\{
\begin{array}{ll}
(x_{i+1}^{-1}h_{i+1}x_{i+1})^2
(x_{i+1}^{-1}g_{i+1}x_{i+1})
= (g^{r_{i+1}})^2g
=g^{1+2r_{i+1}}
=g^{r_i} & \mbox{if } k_i=1,\\
(x_{i+1}^{-1}h_{i+1}x_{i+1})^2
= (g^{r_{i+1}})^2 =g^{2r_{i+1}}
=g^{r_i}
& \mbox{if } k_i=0.
\end{array}\right.$$
Apply iterated cycling and decycling to $h_i'$ until
a super summit element $h_i$ is obtained.
Let $y_i$ be the conjugating element obtained in this process such that
$h_i=y_ih_i'y_i^{-1}$.
Let
$$
x_i=y_ix_{i+1}
\quad\mbox{and}\quad
g_i=y_ig_{i+1}y_i^{-1}.
$$
Now we claim that $g_i$ is a super summit element.
Notice that $x_{i+1}$ conjugates $g_{i+1}$ and $h_i'$
to $g$ and $g^{r_i}$ respectively,
and that $g$ and $g^{r_i}$ commute with each other.
Therefore $g_{i+1}$ and $h_i'$ commute with each other,
that is, $g_{i+1}h_i'=h_i'g_{i+1}$.
In Lemma 3.2 of~\cite{LL06c}, the following is proved.
\begin{quote}
Let $g$ and $h$ be elements of a Garside group such that $gh=hg$.
Let $x$ be the conjugating element obtained in the process of applying
arbitrary iteration of cycling and decycling to $h$.
If $g$ is a super summit element, then so is $x^{-1}gx$.
\end{quote}
Therefore $g_i$ is a super summit element.
Note that $h_i$ is a super summit element by construction.
The element $x_i$ conjugates $g_i$ and $h_i$ to
$g$ and $g^{r_i}$ respectively, since
\begin{eqnarray*}
x_i^{-1}g_ix_i
&=& (y_ix_{i+1})^{-1}(y_ig_{i+1}y_i^{-1})(y_ix_{i+1})
= x_{i+1}^{-1}g_{i+1}x_{i+1}
= g,\\
x_i^{-1}h_ix_i
&=& (y_ix_{i+1})^{-1}(y_ih_i'y_i^{-1})(y_ix_{i+1})
= x_{i+1}^{-1}h_i'x_{i+1}
= g^{r_i}.
\end{eqnarray*}
Therefore $(g_i,h_i,x_i)$ satisfies Equation~(\ref{eq:ind}).
\medskip
Now we analyze the complexity of the above algorithm.
By definition, both $h_{t+1} (=e)$ and $g_{t+1} (=g)$ are already in the normal form.
Assume that $h_{i+1}$ and $g_{i+1}$ are in the normal form for some $0\le i\le t$.
First we will show that one can compute the normal form of $h'_i$
from $(h_{i+1}, g_{i+1})$ in time $\mathcal O(T_{\operatorname{lattice}})$.
By the definition of $h'_i$ in (\ref{eq:len3}),
$h'_i$ is either $(h_{i+1})^2 g_{i+1}$ or $(h_{i+1})^2$.
Since both $h_{i+1}$ and $g_{i+1}$ are periodic super summit elements,
both $\operatorname{len}(h_{i+1})$ and $\operatorname{len}(g_{i+1})$ are at most 1.
Hence the number of non-$\Delta$ factors in the word representing $h'_i$
is at most 3, from which it follows that
we can compute the normal form of $h_i'$ in time $\mathcal O(T_{\operatorname{lattice}})$
and that $\operatorname{len}(h'_i)\le 3$.
Next we will show that one can compute the normal forms of $h_i$ and $g_i$
from $h'_i$ in time $\mathcal O(\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$.
Recall that $y_i$ is the conjugating element such that $h_i=y_ih_i'y_i^{-1}$
obtained in the process of applying iterated cycling and decycling to $h_i'$ until
a super summit element $h_i$ is obtained.
If $h'_i$ is already a super summit element, then $y_i=e$ hence we are done because
$h_i=h'_i$ and $g_i=g_{i+1}$.
Otherwise, $y_i\neq e$ and it is in fact given as a product
$y_i=y_{i,\ell} y_{i,\ell-1}\cdots y_{i,1}$
for some $\ell \ge 1$,
where each $y_{i,j}$ is a simple element or its inverse obtained in the process of \emph{each}
cycling or decycling from $h'_i$ to $h_i$.
Then $\ell\le\operatorname{len}(h'_i)\cdot\Vert\Delta\Vert\le 3\Vert\Delta\Vert$.
Using $y_{i,1}, y_{i,2}, \ldots, y_{i,\ell}$,
construct $h_{i,0},h_{i,1},\ldots,h_{i,\ell}$ and $g_{i,0},g_{i,1},\ldots,g_{i,\ell}$
recursively as
$$
h_{i, j+1} = y_{i,j+1}h_{i,j}y_{i,j+1}^{-1}\quad\mbox{and}\quad
g_{i, j+1} = y_{i,j+1}g_{i,j}y_{i,j+1}^{-1}\quad\mbox{for } 0\le j <\ell
$$
initializing $h_{i,0}=h'_i$ and $g_{i,0}=g_{i+1}$.
Then $h_{i,\ell}=h_i$ and $g_{i,\ell}=g_{i}$.
Notice that
$\operatorname{len}(h_{i,\ell})\le\operatorname{len}(h_{i,\ell-1})\le\cdots\le\operatorname{len}(h_{i,0})\le 3$
by the definition of $y_{i,j}$.
Notice also that
$\operatorname{len}(g_{i,\ell})=\operatorname{len}(g_{i,\ell-1})=\cdots =\operatorname{len}(g_{i,0})\le 1$
because $g_{i,j}h_{i,j}=h_{i,j}g_{i,j}$.
Since $h_{i,0}(=h'_i)$ is in the normal form and $y_{i,1}$ is a simple element or its inverse,
the number of non-$\Delta$ factors in the word representing
$h_{i,1}(=y_{i,1}h_{i,0}y_{i,1}^{-1})$ is at most 5.
Hence we can compute the normal form of $h_{i,1}$ from $h_{i,0}$
in time $\mathcal O(T_{\operatorname{lattice}})$.
In a recursive way,
we can compute the normal form of $h_{i,j+1}$ from $h_{i,j}$
in time $\mathcal O(T_{\operatorname{lattice}})$ for $j=0,\ldots,\ell-1$.
Since $\ell\le 3\Vert\Delta\Vert$,
we can compute the normal form of $h_{i}(=h_{i,\ell})$ from $h'_{i}(=h_{i,0})$
in time $\mathcal O(\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$.
The analogous proof works for computing the normal form of $g_{i,j+1}$ from
$(g_{i,j}, y_{i,j+1})$
in time $\mathcal O(T_{\operatorname{lattice}})$ for $j=0,\ldots,\ell-1$.
We just need to notice that the number of non-$\Delta$ factors in the word representing
$g_{i,j+1}(=y_{i,j+1}g_{i,j}y_{i,j+1}^{-1})$ is at most 3.
Thus we can compute the normal form of $g_{i}(=g_{i,\ell})$ from $g_{i+1}(=g_{i,0})$
in time $\mathcal O(\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$.
Therefore we can compute $(g_i,h_i,x_i)$ from $(g_{i+1},h_{i+1},x_{i+1})$
in time $\mathcal O(\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$ for $i$ from $t$ to $0$,
where $g_i$ and $h_i$ are in the normal form.
Since $t=\lfloor\log_2r\rfloor$, the whole complexity of the algorithm
is $\mathcal O(\log r\cdot \Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$.
\end{proof}
The following is the algorithm discussed in Proposition~\ref{prop:power}.
\begin{algorithm}{Algorithm I}
(Power conjugacy algorithm for periodic elements in a Garside group $G$)\\
INPUT: a pair $(g,r)$ of a periodic super summit element $g\in G$
and a positive integer $r$, where $g$ is in the normal form.\\
OUTPUT: a pair $(h,x)$ of elements in $G$ such that $h\in[g^r]^S$ and $x^{-1}h x=g^r$,
where $h$ is in the normal form.
\begin{itemize}
\item[1.]
Compute the binary expansion
$r=k_0+2k_1+2^2k_2+\cdots+2^tk_t$, $k_i\in\{0,1\}$, of $r$.
\item[2.]
Set $g'\leftarrow g$, $h\leftarrow e$ and $x\leftarrow e$.
\item[3.]
For $i\leftarrow t$ down to 0, do the following.
\begin{itemize}
\item[3-1.]
If $k_i=1$, set $h'\leftarrow h^2g'$.
Otherwise, set $h'\leftarrow h^2$.
\item[3-2.]
Apply iterated cycling and decycling to $h'$ until
a super summit element $h$ is obtained.
Let $y$ be the conjugating element obtained in this process such
that $h=yh'y^{-1}$.
\item[3-3.]
If $\operatorname{len}(h)>1$, return ``\emph{$g$ is not a periodic element}''.
\item[3-4.]
Set $g'\leftarrow yg'y^{-1}$ and $x\leftarrow yx$.
\end{itemize}
\item[4.] Return $(h,x)$.
\end{itemize}
\end{algorithm}
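For illustration, the bookkeeping of Steps 2--3 of Algorithm I can be sketched generically. In the following Python sketch, the group operations \texttt{mul} and \texttt{inv} and the reduction \texttt{reduce\_to\_sss} (standing in for the iterated cycling and decycling of Step 3-2) are caller-supplied placeholders, not part of the source material; the toy instantiation below uses an abelian group, where every element is already a super summit element and every conjugator is trivial.

```python
def power_conjugacy(g, r, mul, inv, identity, reduce_to_sss):
    """Skeleton of the triple bookkeeping: returns (h, x) with
    x^{-1} h x = g^r, scanning the binary expansion of r from the
    top bit down.  reduce_to_sss(h') must return (h, y) with
    h = y h' y^{-1} (the cycling/decycling step, stubbed here)."""
    gp, h, x = g, identity, identity     # g', h, x as in Step 2
    for bit in bin(r)[2:]:
        hp = mul(h, h)                   # Step 3-1: h' <- h^2
        if bit == "1":
            hp = mul(hp, gp)             # Step 3-1: h' <- h^2 g'
        h, y = reduce_to_sss(hp)         # Step 3-2 (placeholder)
        gp = mul(mul(y, gp), inv(y))     # Step 3-4: g' <- y g' y^{-1}
        x = mul(y, x)                    # Step 3-4: x <- y x
    return h, x

# Toy instantiation: units modulo 1000, so reduce_to_sss is trivial.
n = 1000
h, x = power_conjugacy(
    7, 123,
    mul=lambda a, b: (a * b) % n,
    inv=lambda a: pow(a, -1, n),
    identity=1,
    reduce_to_sss=lambda a: (a, 1),
)
```

In this abelian instantiation the conjugator $x$ stays trivial and $h$ is simply $g^r$; in a genuine Garside group the non-trivial \texttt{reduce\_to\_sss} keeps $h$ of canonical length at most $3$ throughout, which is the source of the $\mathcal O(\log r\cdot\Vert\Delta\Vert\cdot T_{\operatorname{lattice}})$ bound.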
\begin{remark}\label{rmk:Alg1}
Actually, Algorithm I returns the desired pair $(h,x)$
whenever $\len{\!}_s(g^k)\le 1$ for all $k$, even for a non-periodic element $g$.
In Algorithm I, Step 3-3 is used for the CDP when called by Algorithm II.
In any case, regardless of the summit length of $g^k$,
the complexity of Algorithm I is the same as the one in Proposition~\ref{prop:power}.
\end{remark}
\subsection{Algorithms in the braid groups with the BKL Garside structure}
Now we construct an algorithm in $B_n^{\operatorname{[BKL]}}$ for solving the CDP for periodic braids.
In~\cite{BGG06c}, Birman, Gebhardt and Gonz\'alez-Meneses proposed
the following algorithm.
\begin{algorithm}{Algorithm}
(Algorithm A in~\cite{BGG06c} of Birman, Gebhardt and Gonz\'alez-Meneses)\\
INPUT: a word $W$ in the Artin generators representing an $n$-braid $\alpha$.\\
SUMMARY: determine whether $\alpha$ is periodic or not.
\begin{itemize}
\item[1.]
Compute the normal form of $\alpha^{n-1}$.\\
If it is equal to $\Delta^{2k}$,
return ``\emph{$\alpha$ is periodic and conjugate to $\epsilon^k$}''.
\item[2.]
Compute the normal form of $\alpha^n$.\\
If it is equal to $\Delta^{2k}$,
return ``\emph{$\alpha$ is periodic and conjugate to $\delta^k$}''.
\item[3.]
Return ``\emph{$\alpha$ is not periodic}''.
\end{itemize}
\end{algorithm}
\noindent
The word $W^{n-1}$ has word length $(n-1)l$ in the worst case,
where $l$ is the word length of $W$.
Therefore the complexity of the above algorithm is
$\mathcal O((ln)^2\cdot n\log n)=\mathcal O(l^2n^3\log n)$
as shown in~\cite[Proposition 5]{BGG06c}.
If one uses the BKL Garside structure in the above algorithm,
the complexity is reduced to $\mathcal O((ln)^2\cdot n)=\mathcal O(l^2n^3)$.
Using Algorithm~I and the fact that
every periodic braid has summit length at most 1,
we get a more efficient algorithm.
\begin{algorithm}{Algorithm II}
(Solving the CDP for periodic braids and the CSP for $\delta$-type periodic braids.)\\
INPUT: $\alpha\in B_n^{\operatorname{[BKL]}}$.\\
OUTPUT: ``\emph{$\alpha$ is not periodic}'' if $\alpha$ is not periodic;
``\emph{$\alpha$ is conjugate to $\epsilon^k$}''
if $\alpha$ is conjugate to $\epsilon^k$;
``\emph{$\alpha$ is conjugate to $\delta^k$ by $\gamma$}''
if $\gamma^{-1}\alpha\gamma=\delta^k$.
\begin{itemize}
\item[1.]
Compute the normal form of $\alpha$.
\item[2.]
Apply iterated cycling and decycling to $\alpha$ until
a super summit element $\beta$ is obtained.
Let $\gamma$ be the conjugating element obtained in this process such that
$\beta=\gamma^{-1}\alpha\gamma$.\\
If $\beta=\delta^k$,
return ``\emph{$\alpha$ is conjugate to $\delta^k$ by $\gamma$}''.\\
If $\operatorname{len}(\beta)>1$, return ``\emph{$\alpha$ is not periodic}''.
\item[3.]
Apply Algorithm I to $(\beta,n-1)$.\\
If it returns $\delta^{nk}$,
return ``\emph{$\alpha$ is conjugate to $\epsilon^k$}''.
\item[4.]
Return ``\emph{$\alpha$ is not periodic}''.
\end{itemize}
\end{algorithm}
\begin{theorem}\label{thm:CDP}
Let $\alpha$ be an $n$-braid represented by a word $W$ in the simple elements
of $B_n^{\operatorname{[BKL]}}$ with $|W|_{\operatorname{simple}} =l$.
Then there is an algorithm
of complexity $\mathcal O(l^2 n^2+n^2\log n)$
that decides whether $\alpha$ is periodic or not.
Further, if $\alpha$ is a $\delta$-type periodic braid,
then it decides that $\alpha$ is a $\delta$-type periodic braid
and computes a conjugating element $\gamma$ such that
$\gamma^{-1}\alpha\gamma=\delta^k$ in time $\mathcal O(l^2n^2)$.
\end{theorem}
\begin{proof}
Consider Algorithm II.
Step 1 computes the normal form of $\alpha$,
hence its complexity is $\mathcal O(l^2n)$ by Lemma~\ref{lem:time}~(i).
Step 2 computes a super summit element $\beta$,
hence its complexity is $\mathcal O(l^2n^2)$ by Lemma~\ref{lem:time}~(iii).
If $\alpha$ is a $\delta$-type periodic braid,
then Algorithm II stops here,
returning the conjugating element that conjugates $\alpha$ to $\delta^k$.
Therefore, at Step 3, we may assume that either
$\alpha$ is an $\epsilon$-type periodic braid
or it is not periodic.
In either case, Algorithm~I runs in time $\mathcal O(n^2 \log n)$ by Proposition~\ref{prop:power}
and Remark~\ref{rmk:Alg1}.
Therefore the total complexity of Algorithm~II is
$\mathcal O(l^2n^2+n^2\log n)$.
\end{proof}
Before considering the CSP for $\epsilon$-type periodic braids,
we study the case of periodic braids conjugate to $\epsilon^d$
for proper divisors $d$ of $n-1$.
Recall that a simple element in $B_n^{\operatorname{[BKL]}}$ is a product of parallel descending cycles.
A descending cycle $[i_k, \ldots, i_2, i_1]$ in $B_n$ is defined originally
for the indices $i_1, \ldots, i_k$ with $1\le i_1 < i_2 < \cdots < i_k\le n$,
and indicates the positive word $a_{i_k i_{k-1}}\cdots a_{i_3 i_2} a_{i_2 i_1}$.
For convenience, we will allow indices congruent modulo $n$.
Namely, the form $[i_{j}+n,\ldots, i_1 + n, i_k, \ldots, i_{j+1} ]$
for any $j$ means the descending cycle $[i_k, \ldots, i_2, i_1]$.
For example, the form $[12, 11, 10, 9]$ in $B_{10}$ means
the descending cycle $[10,9,2,1]$.
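Under this convention, reducing a shifted form back to a genuine descending cycle amounts to taking the indices modulo $n$ into the range $1,\ldots,n$ and listing them in decreasing order. A minimal Python sketch (the function name is ours):

```python
def normalize_cycle(indices, n):
    """Reduce the indices of a (possibly shifted) descending cycle
    modulo n into the range 1..n and list them in decreasing order."""
    reduced = [(i - 1) % n + 1 for i in indices]   # map to 1..n
    return sorted(reduced, reverse=True)
```

For example, the form $[12,11,10,9]$ in $B_{10}$ normalizes to $[10,9,2,1]$, as in the text.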
\begin{proposition}\label{prop:CSPe^d}
Let $\alpha$ be an $n$-braid in the normal form in $B_n^{{\operatorname{[BKL]}}}$.
If $\alpha\in[\epsilon^d]^S$ for a divisor $0<d< n-1$ of\/ $n-1$, then
there exists an algorithm of complexity $\mathcal O(n^2)$ that
computes a conjugating element $\gamma$
such that $\gamma^{-1}\alpha\gamma=\epsilon^d$.
\end{proposition}
\begin{proof}
Since $\len{\!}_s(\epsilon^d)=1$ and $\alpha\in[\epsilon^d]^S$,
$\alpha=\delta^d a$ for some $a\in\mathcal D\backslash\{e,\delta\}$.
We will inductively construct sequences $\{\alpha_i\}_{i=0,\ldots,r}$
and $\{\gamma_i\}_{i=0,\ldots,r-1}$ of $n$-braids
for some $0\le r<d$ satisfying the following conditions.
\begin{itemize}
\item
$\alpha_i=\delta^d a_i\in [\epsilon^d]^S$
for a simple element $a_i\in\mathcal D\backslash\{e,\delta\}$ for each $i=0,\ldots,r$.
\item
$\gamma_i^{-1}\alpha_i\gamma_i=\alpha_{i+1}$ for $i=0,\ldots,r-1$.
\item
$a_0=a$ (and hence $\alpha_0=\alpha$).
The number of parallel descending cycles in $a_{i+1}$
is smaller than that in $a_i$ for $i=0,\ldots,r-1$.
The simple element $a_r$ has only one descending cycle.
\end{itemize}
Clearly, we can construct $\alpha_0$ by definition.
Suppose that we have constructed $\alpha_0,\ldots,\alpha_i$
and $\gamma_0,\ldots,\gamma_{i-1}$ for some $i$.
If $a_i$ has only one descending cycle,
then we already have constructed the desired sequences.
Therefore assume that $a_i$ has more than one parallel descending cycle.
Let $q=(n-1)/d$.
By Proposition~\ref{prop:main}~(ii),
at most $q-1$ iterations of partial cycling on
a descending cycle of $a_i$ reduce the number of parallel descending cycles in $a_i$.
Let $\alpha_{i+1}=\delta^d a_{i+1}$ denote the result
and let $\gamma_i$ be the conjugating element obtained in this process
such that $\gamma_i^{-1}\alpha_i\gamma_i=\alpha_{i+1}$.
Since $\alpha_i\in [\epsilon^d]^S$ and $[\epsilon^d]^S$ is closed under
partial cycling (by Lemma~\ref{lem:e^d}), $\alpha_{i+1}\in [\epsilon^d]^S$.
Notice that if one writes the simple element $a$ as a word in the band generators,
then the length is $d$, from which it follows that
there are at most $d$ parallel descending cycles in $a$.
Hence this process terminates in less than $d$ steps, that is, $r<d$.
Now we have the desired sequences, and $a_r$ has only one descending cycle.
By Proposition~\ref{prop:main}~(i), one has
$\alpha_r= \delta^{-(t-1)} \epsilon^d \delta^{t-1}$ for some $1\le t\le n$,
which means $a_r=[t+d,t+d-1,\ldots,t]$.
Let $\gamma=\gamma_0\gamma_1\cdots\gamma_{r-1}\delta^{1-t}$.
Then $\gamma^{-1}\alpha\gamma=\epsilon^d$.
\medskip
Because $r<d$ and we perform at most $q-1$ partial cyclings in order to obtain
$\alpha_{i+1}$ from $\alpha_i$ for $i=0,\ldots,r-1$,
the total number of partial cyclings in the whole process
is at most $d(q-1)<n$.
Because a partial cycling can be done in time $\mathcal O(n)$,
the complexity of this algorithm is $\mathcal O(n^2)$.
\end{proof}
The following is the algorithm discussed in Proposition~\ref{prop:CSPe^d}.
\begin{algorithm}{Algorithm III}
(Solving the CSP for $n$-braids conjugate to $\epsilon^d$.)\\
INPUT: the normal form $\delta^d a$ of an $n$-braid
$\alpha\in[\epsilon^d]^S$,
where $0<d<n-1$ is a divisor of $n-1$.\\
OUTPUT: an $n$-braid $\gamma$
such that $\gamma^{-1}\alpha\gamma=\epsilon^d$.
\begin{itemize}
\item[1.]
Set $\gamma \leftarrow e$.
\item[2.]
While $a$ has more than one parallel descending cycle, do the following.
\begin{itemize}
\item[2-1.]
Apply iterated partial cycling to $\alpha$ by a descending cycle of $a$
until we obtain a braid $\delta^d a'$
such that the number of parallel descending cycles in $a'$ is smaller than that in $a$.\\
Let $\gamma'$ be the conjugating element in this process such that
$\gamma'^{-1}(\delta^d a)\gamma' =\delta^d a'$.
\item[2-2.]
Set $a\leftarrow a'$ and $\gamma\leftarrow \gamma\gamma'$.
\end{itemize}
\item[3.]
If $a$ has only one descending cycle,
say $[t+d,t+d-1,\ldots,t]$, set $\gamma\leftarrow \gamma \delta^{1-t}$.
\item[4.]
Return $\gamma$.
\end{itemize}
\end{algorithm}
\begin{example}\label{ex:largeUSS}
This example shows how Algorithm III transforms
an arbitrary periodic element
$\alpha\in [\epsilon^d]^S$ to $\epsilon^d$, where $0<d<n-1$ is a divisor of $n-1$.
See Figures~\ref{fig:e3conjB13} and~\ref{fig:move}.
Consider a 13-braid
$$
\alpha=\delta^3 [13,10][12,11][6,4].
$$
It is easy to see that $\alpha^4=\delta^{13} (=\epsilon^{12})$,
hence $\alpha$ is conjugate to $\epsilon^3=\delta^3[4,3,2,1]$.
Note that the simple element $[13,10][12,11][6,4]$ has three parallel descending cycles.
\begin{figure}
\includegraphics{per_e3conjB13.eps}
\caption{The 13-braid $\alpha=\delta^3 [13,10][12,11][6,4]$}\label{fig:e3conjB13}
\end{figure}
\begin{itemize}
\item[(i)]
Iterate partial cycling on $[13,10]$ until it intersects another
descending cycle as follows:
$$
[13,10]\to [10,7]\to[7,4].
$$
Then the result is
$$
\alpha_1=\delta^3[12,11][6,4][7,4]=\delta^3[12,11][7,6,4].
$$
Note that $\alpha_1=b_1^{-1}\alpha b_1$, where $b_1=[10,7][7,4]=[10,7,4]$.
\item[(ii)]
Iterate partial cycling on $[12,11]$ until it intersects another
descending cycle as follows:
$$
[12,11]\to [9,8]\to[6,5].
$$
Then the result is
$$
\alpha_2=\delta^3[7,6,4][6,5]=\delta^3[7,6,5,4].
$$
Note that $\alpha_2=b_2^{-1}\alpha_1 b_2$, where $b_2=[9,8][6,5]$.
\item[(iii)]
Note that $\tau^{-3}(\alpha_2)=\delta^3[4,3,2,1]=\epsilon^3$.
Therefore $\beta^{-1}\alpha\beta=\epsilon^3$, where
\begin{eqnarray*}
\beta
&=&b_1b_2\delta^{-3}
=[10,7,4] [9,8][6,5]\delta^{-3}
=\delta^{-3}[7,4,1] [6,5][3,2].
\end{eqnarray*}
\end{itemize}
\end{example}
\begin{figure}\tabcolsep=2pt
\begin{tabular}{*7c}
\includegraphics[scale=.55]{per_move1.eps} & \raisebox{13mm}{$\rightarrow$} &
\includegraphics[scale=.55]{per_move2.eps} & \raisebox{13mm}{$\rightarrow$} &
\includegraphics[scale=.55]{per_move3.eps} & \raisebox{13mm}{$=$} &
\includegraphics[scale=.55]{per_move4.eps}\\[5mm]
\includegraphics[scale=.55]{per_move5.eps} & \raisebox{13mm}{$\rightarrow$} &
\includegraphics[scale=.55]{per_move6.eps} & \raisebox{13mm}{$\rightarrow$} &
\includegraphics[scale=.55]{per_move7.eps} & \raisebox{13mm}{$=$} &
\includegraphics[scale=.55]{per_move8.eps}\\
\end{tabular}
\caption{The first row shows partial cyclings on the descending cycle
$[13,10]$ that is represented by dotted line.
After the partial cyclings $[13,10]\to [10,7]\to[7,4]$, the number
of parallel descending cycles is reduced by one.
Similarly, the second row shows partial cyclings
on the descending cycle $[12,11]$.
}\label{fig:move}
\end{figure}
\begin{proposition}\label{prop:e^k}
Let $\alpha$ be an $n$-braid in the normal form in $B_n^{{\operatorname{[BKL]}}}$.
If $\alpha\in [\epsilon^k]^S$ for an integer $k$, then
there is an algorithm of complexity $\mathcal O(n^2\log n)$
that computes $\gamma$ such that $\gamma^{-1}\alpha\gamma=\epsilon^k$.
\end{proposition}
\begin{proof}
Let $u$ and $0\le v< n-1$ be integers satisfying $k=(n-1)u+v$,
and let $\alpha_0=\delta^{-nu}\alpha$.
Then $\alpha_0\in [\epsilon^v]^S$.
Since $\delta^n$ is central, for $\gamma\in B_n$
\begin{equation}\label{eq:n-1}
\gamma^{-1}\alpha\gamma=\epsilon^k
\quad\mbox{if and only if}\quad
\gamma^{-1}\alpha_0\gamma=\epsilon^v.
\end{equation}
If $v=0$, then $\gamma =e$, hence we may assume that $0< v< n-1$.
Compute $d=\gcd(v,n-1)$ and integers $r$ and $s$ such that
$$
d=vr+(n-1)s\quad\mbox{and}\quad 0\le r < n-1.
$$
Using Algorithm~I, compute $\gamma_1$ and the normal form of $\alpha_1$
such that $\alpha_1\in[\alpha_0^r]^S$ and
$$
\gamma_1^{-1}\alpha_1\gamma_1=\alpha_0^r.
$$
Let $\alpha_2=\delta^{ns}\alpha_1$, then $\alpha_2\in [\epsilon^d]^S$.
Apply Algorithm~III to $\alpha_2$ and obtain an $n$-braid $\gamma_2$ such that
$\gamma_2^{-1}\alpha_2\gamma_2=\epsilon^d$.
Lemma~\ref{lem:exp_red} shows that, for $\gamma\in B_n$,
\begin{equation}\label{eq:BCMW_power}
\gamma^{-1}\alpha_0\gamma=\epsilon^v
\quad\mbox{if and only if}\quad
\gamma^{-1}(\delta^{ns}\alpha_0^r)\gamma=\epsilon^d.
\end{equation}
Since $\delta^n$ is central,
$$
\epsilon^d = \gamma_2^{-1}\alpha_2\gamma_2
= \gamma_2^{-1}(\delta^{ns}\alpha_1)\gamma_2
= \gamma_2^{-1}\delta^{ns}(\gamma_1\alpha_0^r\gamma_1^{-1})\gamma_2
= \gamma_2^{-1}\gamma_1(\delta^{ns}\alpha_0^r)\gamma_1^{-1}\gamma_2.
$$
By Equations~(\ref{eq:n-1}) and~(\ref{eq:BCMW_power}),
$\gamma=\gamma_1^{-1}\gamma_2$ is the desired conjugating element.
\medskip
Now, let us analyze the complexity.
We can compute the integers $r$ and $s$ using the extended Euclidean algorithm,
which runs in time $\mathcal O(\log v\log (n-1))=
\mathcal O((\log n)^2)$~\cite[Theorem 4.4 in page 60]{Sho05}.
By Proposition~\ref{prop:power}, Algorithm~I with input $(\alpha_0,r)$ runs
in time $\mathcal O(n^2\log r)$.
Algorithm~III with $\alpha_2$ runs in time $\mathcal O(n^2)$ by
Proposition~\ref{prop:CSPe^d}.
Therefore the total complexity is
$$
\mathcal O((\log n)^2+n^2\log r+n^2)
=\mathcal O(n^2\log r)
=\mathcal O(n^2\log n).
$$
\vskip-\baselineskip
\end{proof}
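The Bézout normalization used in the proof---computing $d=\gcd(v,n-1)$ together with integers $r$ and $s$ such that $d=vr+(n-1)s$ and $0\le r<n-1$---can be sketched as follows. This is an illustrative Python sketch only, not part of any braid implementation.

```python
def ext_gcd(a, b):
    """Extended Euclid: return (d, x, y) with d = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    d, x, y = ext_gcd(b, a % b)
    return d, y, x - (a // b) * y

def normalize_bezout(v, m):
    """Return (d, r, s) with d = gcd(v, m) = v*r + m*s and 0 <= r < m."""
    d, x, _ = ext_gcd(v, m)
    r = x % m                     # shift the Bezout coefficient into [0, m)
    s = (d - v * r) // m          # recompute s so the identity stays exact
    return d, r, s

# Example matching the text: n = 13, v = 3, m = n - 1 = 12
# gives d = 3, r = 1, s = 0.
```

The shift of $r$ into $[0,m)$ costs only $\mathcal O(1)$ arithmetic operations on top of the extended Euclidean algorithm, so the $\mathcal O((\log n)^2)$ bound is unaffected.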
The following is the algorithm discussed in Proposition~\ref{prop:e^k}.
\begin{algorithm}{Algorithm IV}
(Solving the CSP for $\epsilon$-type periodic braids)\\
INPUT: a pair $(\alpha,k)$, where $k$ is an integer and
$\alpha\in[\epsilon^k]^S$ is an $n$-braid in the normal form in $B_n^{\operatorname{[BKL]}}$. \\
OUTPUT: an $n$-braid $\gamma$ such that $\gamma^{-1}\alpha\gamma=\epsilon^k$.
\begin{itemize}
\item[1.]
Compute integers $u$ and $0\le v< n-1$ such that $k=(n-1)u+v$.
\item[2.]
Set $k \leftarrow v$ and
$\alpha\leftarrow \delta^{-nu}\alpha$.
\item[3.]
If $k=0$, return $\gamma=e$.
\item[4.]
Compute $d=\gcd(k,n-1)$ and integers $0< r < n-1$ and $s$ such that $d=kr+(n-1)s$.
\item[5.]
Apply Algorithm I to $(\alpha, r)$.\\
Let $(\alpha_1,\gamma_1)$ be the output.
Then $\alpha_1\in[\alpha^r]^S$ is in the normal form,
and $\gamma_1^{-1}\alpha_1\gamma_1=\alpha^r$.\\
Set $\alpha_2\leftarrow\delta^{ns}\alpha_1$.
Then $\alpha_2$ belongs to the super summit set of $\epsilon^d$.
\item[6.]
Apply Algorithm III to $\alpha_2$.\\
Let $\gamma_2$ be the output, then $\gamma_2^{-1}\alpha_2\gamma_2=\epsilon^d$.
\item[7.]
Return $\gamma = \gamma_1^{-1}\gamma_2$.
\end{itemize}
\end{algorithm}
The following is the complete algorithm for the conjugacy problem
for periodic braids.
\begin{algorithm}{Algorithm V}
(The complete algorithm for the conjugacy problem for periodic braids)\\
INPUT: $\alpha\in B_n^{\operatorname{[BKL]}}$.\\
OUTPUT: ``\emph{$\alpha$ is not periodic}'' if $\alpha$ is not periodic;
``\emph{$\alpha$ is conjugate to $\epsilon^k$ by $\gamma$}''
if $\gamma^{-1}\alpha\gamma=\epsilon^k$;
``\emph{$\alpha$ is conjugate to $\delta^k$ by $\gamma$}''
if $\gamma^{-1}\alpha\gamma=\delta^k$.
\begin{itemize}
\item[1.]
Compute the normal form of $\alpha$.
\item[2.]
Apply iterated cycling and decycling to $\alpha$ until a super summit
element $\alpha_1$ is obtained.\\
Let $\gamma_0$ be the conjugating element in this process such that
$\gamma_0^{-1}\alpha\gamma_0=\alpha_1$.\\
If $\operatorname{len}(\alpha_1)>1$, return ``\emph{$\alpha$ is not periodic}''.
\item[3.]
Apply Algorithm II to $\alpha_1$.
\begin{itemize}
\item[3-1.]
If $\alpha_1$ is not periodic, return ``\emph{$\alpha$ is not periodic}''.
\item[3-2.]
If $\alpha_1$ is conjugate to $\delta^k$, Algorithm II gives
an element $\gamma_1$ such that $\gamma_1^{-1}\alpha_1\gamma_1=\delta^k$.\\
Set $\gamma\leftarrow\gamma_0\gamma_1$.\\
Return ``\emph{$\alpha$ is conjugate to $\delta^k$ by $\gamma$}''.
\item[3-3.]
If $\alpha_1$ is conjugate to $\epsilon^k$ for some $k$,
Algorithm II gives the exponent $k$.
\end{itemize}
\item[4.]
Apply Algorithm IV to $(\alpha_1,k)$. Let $\gamma_2$ be its output.\\
Set $\gamma\leftarrow \gamma_0\gamma_2$.\\
Return ``\emph{$\alpha$ is conjugate to $\epsilon^k$ by $\gamma$}''.
\end{itemize}
\end{algorithm}
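The dispatch logic of Algorithm V can be sketched as follows. This is a control-flow illustration only: the callables \texttt{normal\_form}, \texttt{to\_super\_summit}, \texttt{length}, \texttt{algorithm\_II} and \texttt{algorithm\_IV} are hypothetical stand-ins for the Garside-structure routines described above; none of the actual braid machinery is implemented here.

```python
def algorithm_V(alpha, normal_form, to_super_summit, length,
                algorithm_II, algorithm_IV):
    """Control-flow sketch of Algorithm V; all braid operations are
    injected stand-ins, not real Garside-structure implementations."""
    alpha = normal_form(alpha)                    # step 1: normal form
    alpha1, gamma0 = to_super_summit(alpha)       # step 2: iterated (de)cycling
    if length(alpha1) > 1:
        return "not periodic"
    verdict = algorithm_II(alpha1)                # step 3
    if verdict["type"] == "not periodic":         # step 3-1
        return "not periodic"
    if verdict["type"] == "delta":                # step 3-2
        return ("delta", verdict["k"], gamma0 * verdict["gamma1"])
    gamma2 = algorithm_IV(alpha1, verdict["k"])   # step 4: epsilon-type CSP
    return ("epsilon", verdict["k"], gamma0 * gamma2)
```

Note that the conjugating element returned in each branch is the product of $\gamma_0$ from step 2 with the conjugator produced by the relevant sub-algorithm, exactly as in steps 3-2 and 4 above.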
\begin{proposition}\label{prop:CSP}
Let $\alpha$ be an $n$-braid given as a word $W$ in the simple elements
of $B_n^{\operatorname{[BKL]}}$ with $|W|_{\operatorname{simple}} =l$.
Then there is an algorithm of complexity $\mathcal O(l^2 n^2+n^2\log n)$
that decides whether $\alpha$ is periodic or not
and, if periodic, computes $\gamma\in B_n^{\operatorname{[BKL]}}$ such that
$\gamma^{-1}\alpha\gamma=\delta^k$ or
$\gamma^{-1}\alpha\gamma=\epsilon^k$.
\end{proposition}
\begin{proof}
It is not difficult to see that Algorithm V is the desired algorithm.
Now, let us analyze the complexity.
Step 1 computes the normal form,
hence its complexity is $\mathcal O(l^2n)$ by Lemma~\ref{lem:time}~(i).
Step 2 computes a super summit element,
hence its complexity is $\mathcal O(l^2n^2)$ by Lemma~\ref{lem:time}~(iii).
Step 3 applies Algorithm II to $\alpha_1$ whose word length is $\le 1$.
Therefore, its complexity is $\mathcal O(n^2\log n)$ by Theorem~\ref{thm:CDP}.
Step 4 uses Algorithm IV, hence the complexity is $\mathcal O(n^2\log n)$
by Proposition~\ref{prop:e^k}.
Therefore the total complexity is
$$
\mathcal O(l^2n+l^2n^2+n^2\log n)=
\mathcal O(l^2n^2+n^2\log n).
$$
\vskip-1.56\baselineskip
\end{proof}
\subsection{Remarks on efficiency of algorithms}
Here we compare our algorithms with the algorithms
of Birman, Gebhardt and Gonz\'alez-Meneses in~\cite{BGG06c}.
See Table~\ref{tab:BGG-alg} for their complexities,
the form of input words and necessary implementations.
Notice that the complexity of Algorithm~IV in Table~\ref{tab:BGG-alg}
differs from the one given in Proposition~\ref{prop:e^k}:
the table covers the case where the input braid is neither a super summit
element nor in normal form, as in Algorithm~C.
\begin{table}
{\small\tabcolsep=2pt
\def1.2{1.2}
\begin{tabular}{|c||c|c|c|c|}
\multicolumn{5}{c}{(a) Our algorithms ($l = |W|_{{\operatorname{simple}}}$)}\\\hline
Problems & Algorithms & Complexity & Input word
& Necessary implementations \\\hline\hline
CDP & Algorithm II & $\mathcal O(l^2n^2+n^2\log n)$ & &\\\cline{1-3}
CSP for $\delta$-type & Algorithm II & $\mathcal O(l^2n^2)$
& $W^{\operatorname{simple}}$ & Garside structure $B_n^{\operatorname{[BKL]}}$ \\\cline{1-3}
CSP for $\epsilon$-type & Algorithm IV
& $\mathcal O(l^2n^2+n^2\log n)$ & & \\\hline
\multicolumn{5}{c}{}\\
\multicolumn{5}{c}{(b) Algorithms of Birman, Gebhardt
and Gonz\'alez-Meneses in~\cite{BGG06c} ($l = |W^{\operatorname{atom}}|$)}\\\hline
Problems & Algorithms & Complexity & Input word
& Necessary implementations \\\hline\hline
CDP & Algorithm A & $\mathcal O(l^2n^3\log n)$
& & Garside structure $B_n^{\operatorname{[Artin]}}$\\\cline{1-3}\cline{5-5}
CSP for $\delta$-type & Algorithm B & $\mathcal O(l^3n^2)$
& $W^{\operatorname{atom}}$ & Garside structures $B_n^{\operatorname{[Artin]}}$ \&\ $B_n^{\operatorname{[BKL]}}$ \\\cline{1-3}\cline{5-5}
CSP for $\epsilon$-type & Algorithm C & $\mathcal O(l^3n^2)$
& & \begin{tabular}{c} Garside structures $B_n^{\operatorname{[Artin]}}$ \&\ $B_n^{\operatorname{[BKL]}}$\\[-.4em]
bijections $P_{n,2}\leftrightarrows Sym_{2n-2}$\end{tabular}\\\hline
\end{tabular}}
\bigskip
\caption{Comparison of our algorithms with those in~\cite{BGG06c}}
\label{tab:BGG-alg}
\end{table}
In the paper~\cite{BGG06c}, Birman et~al.\
proposed three algorithms for the conjugacy problem for periodic braids:
Algorithm A solves the CDP for periodic braids;
Algorithms B and C solve the CSP for $\delta$-type
and $\epsilon$-type periodic braids, respectively.
In this paper,
Algorithm II solves the CDP for periodic braids
and the CSP for $\delta$-type periodic braids,
and Algorithm IV solves the CSP for $\epsilon$-type periodic braids.
The main difference between the solutions of Birman et~al.\
and ours is the way of solving the CSP for $\epsilon$-type periodic braids.
Algorithm C of Birman et~al.\ needs implementations
for the bijections between $P_{n,2}$ and $Sym_{2n-2}$, where
$P_{n,2}$ is a subgroup of $B_n$ consisting of all 2-pure braids, that is,
the $n$-braids whose induced permutations fix 2, and
$Sym_{2n-2}$ is the centralizer of $\delta_{(2n-2)}^{n-1}$ in $B_{2n-2}$.
It is known that both $P_{n,2}$ and $Sym_{2n-2}$ are isomorphic
to the Artin group of type $\mathbf B_{n-1}$, hence there exist
bijections from $P_{n,2}$ to $Sym_{2n-2}$ and vice versa.
Birman et~al.\ constructed the bijections explicitly
in~\cite{BGG06c}.
From Table~\ref{tab:BGG-alg}, our algorithms have
the following advantages.
\begin{itemize}
\item
The inputs of our algorithms are given as
words in the simple elements, while those of Birman et~al.\ are given as words in the atoms.
As we discussed at the beginning of this section,
it is more natural and more efficient
to represent elements in Garside groups as words in the simple elements.
For example, in the experiment in \S4 of~\cite{BGG06c},
Birman et~al.\ generate a word in the simple
elements, not a word in the atoms.
\item
The complexities of our algorithms are lower than those of Birman et~al.
In the complexity of Algorithm C in Table~\ref{tab:BGG-alg}, $l \ge n$
unless the input braid is the identity element.
\item
Our algorithms require only implementations for Garside structure,
while the algorithms of Birman et~al.\ additionally require
implementations of the bijections between $P_{n,2}$ and $Sym_{2n-2}$.
\end{itemize}
We remark that Algorithms A and B of Birman et~al.\ can be revised as follows.
\begin{itemize}
\item
We can allow words in the simple elements
as the inputs of Algorithms A and B
without changing the complexity.
Let Algorithms A$'$ and B$'$ be the ones revised in this way.
\item
If we use the BKL Garside structure in Algorithm A$'$,
then the complexity is reduced from $\mathcal O(l^2n^3\log n)$
to $\mathcal O(l^2n^3)$.
\item
The complexity of Algorithm B$'$ is $\mathcal O(l^2n^2)$.
Algorithm B$'$ is the same as Algorithm II.
When analyzing the complexity of Algorithm B in~\cite{BGG06c},
the authors assumed that the time for computing the normal form after cycling
or decycling an element of canonical length $l$ is $\mathcal O(l^2n)$;
however, it can be done in time $\mathcal O(ln)$.
\end{itemize}
\begin{center}
\small\bigskip
\begin{tabular}{|c||c|c|c|c|}\hline
Problems & Algorithms & Complexity & Input word
& Necessary implementations \\\hline\hline
CDP & Algorithm A$'$ & $\mathcal O(l^2n^3)$
& \vbox to 0pt{\vss\hbox{$W^{\operatorname{simple}} $}\vskip -.8em}
& \vbox to 0pt{\vss\hbox{Garside structure $B_n^{\operatorname{[BKL]}}$}\vskip -.8em}\\\cline{1-3}
CSP for $\delta$-type & Algorithm B$'$ & $\mathcal O(l^2n^2)$
& & \\\hline
\end{tabular}
\bigskip
\end{center}
However, because of the transformations between $P_{n,2}$ and $Sym_{2n-2}$,
Algorithm C does not allow words in the simple elements as input without increasing
the complexity.
\section{Introduction}
Magnetic fields exist in all plasmas: in laser-driven ablation fronts \cite{gao2015,campbell2020,manuel2012}, ICF capsules \cite{igumenshchev2014,hill2017,walsh2017}, shock fronts \cite{PhysRevLett.123.055002}, hohlraums \cite{farmer2017,sherlock2020} and Z-pinches \cite{gomez2020}. Magnetic fields can also be purposefully applied to ICF implosions to improve fusion performance \cite{chang2011,perkins2017,walsh2019,slutz2010}. The transport of these magnetic fields within the plasma is critically important to the design and interpretation of experiments. Non-local VFP simulations capture the transport processes innately by resolving the distribution of electron energies \cite{sherlock2020,hill2017,joglekar2014}. However, these calculations are prohibitively expensive and unstable, so simplified models with prescribed transport coefficients are used for spatial and temporal scales of interest. These transport coefficients are calculated by fitting to VFP data. The seminal work in this area is that of Braginskii, who laid the groundwork for the anisotropic transport of thermal energy and magnetic fields within the magneto-hydrodynamic (MHD) framework \cite{braginskii1965}. Following on from Braginskii, Epperlein \& Haines improved the transport coefficients, allowing for closer agreement to VFP calculations \cite{epperlein1986}. For 35 years the transport coefficients of Epperlein \& Haines were the agreed-upon standard to be implemented into extended-MHD codes \cite{walsh2017,farmer2017,perkins2017,watkins2018,igumenshchev2014}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Figures/Gamma_Wedge.png}
\caption{The cross-gradient-Nernst coefficient as a function of Hall Parameter calculated using different sources. All coefficients are attempting to replicate those given by VFP calculations. \label{fig:gamma}}
\end{figure}
Recently, simultaneous work by two research teams demonstrated errors in two of the transport coefficients at low electron magnetization \cite{sadler2021,davies2021}. While Epperlein \& Haines stated low errors in fitting the transport coefficients to VFP data, they did not consider that their transport coefficients were ill-formed. From their perspective the magnetic field evolution was captured by:
\begin{align}
\frac{\partial\underline{B}}{\partial{}t}=\nabla\times(\underline{v}\times\underline{B})-\nabla\times\frac{\underline{j}\times\underline{B}}{n_{e}e}-\nabla\times\frac{\underline{\underline{\alpha}}\cdot\underline{j}}{n_{e}^{2}e^{2}}\nonumber \\
+\nabla\times\frac{\underline{\underline{\beta}}\cdot\nabla{}T_{e}}{e}+ \nabla \times \frac{\nabla P_{e}}{n_{e} e}\label{eq:mag_transport_old}
\end{align}
where $\underline{\underline{\alpha}}$ and $\underline{\underline{\beta}}$ are tensor transport coefficients calculated by fitting to 0D VFP simulations. While accurate, equation \ref{eq:mag_transport_old} does not make clear the impact of each term on the movement or generation of magnetic fields. Recent work re-arranged this equation to make the different physical processes more obvious\cite{walsh2020,davies2015,sadler2020}:
\begin{align}
\begin{split}
\frac{\partial \underline{B}}{\partial t} = & - \nabla \times \frac{\alpha_{\parallel}}{\mu_0 e^2 n_e ^2} \nabla \times \underline{B} + \nabla \times (\underline{v}_B \times \underline{B} ) \\
&+ \nabla \times \Bigg( \frac{\nabla P_e}{e n_e} - \frac{\beta_{\parallel} \nabla T_e}{e}\Bigg) \label{eq:mag_trans_new}
\end{split}
\end{align}
where the first term is diffusive, the second term is advection of magnetic field at a velocity $\underline{v}_B$ and the final term is a source of magnetic flux. The advection velocity is then a combination of the bulk plasma motion, thermally-driven terms and current-driven terms:
\begin{equation}
\underline{v}_B = \underline{v} - \gamma_{\bot} \nabla T_e - \gamma_{\wedge}(\underline{\hat{b}} \times \nabla T_e) - \frac{\underline{j}}{e n_e}(1 + \delta_{\bot}^c) + \frac{\delta_{\wedge}^c}{e n_e} (\underline{j} \times \underline{\hat{b}}) \label{eq:mag_trans_new_velocity}
\end{equation}
where the transport coefficients have been re-written from the form given by Braginskii. The dimensionless forms (denoted by a super-script $^c$) depend only on Hall Parameter and effective Z~\cite{walsh2020}:
\begin{equation}
\gamma_{\bot}^c = \frac{\beta_{\wedge}^c}{\omega_e \tau_e}
\end{equation}
\begin{equation}
\gamma_{\wedge}^c = \frac{\beta_{\parallel}^c - \beta_{\bot}^c}{\omega_e \tau_e}
\end{equation}
\begin{equation}
\delta_{\bot}^c = \frac{\alpha_{\wedge}^c}{\omega_e \tau_e}
\end{equation}
\begin{equation}
\delta_{\wedge}^c = \frac{\alpha_{\bot}^c - \alpha_{\parallel}^c}{\omega_e \tau_e}
\end{equation}
The dimensional forms of the thermally-driven coefficients are given by:
\begin{equation}
\gamma = \gamma^c \frac{\tau_e}{m_e}
\end{equation}
Once the transport coefficients have been re-written into this physically-motivated form, the fits of Epperlein \& Haines to the VFP simulations have large errors in $\gamma_{\wedge}^c$ and $\delta_{\wedge}^c$ at small $\omega_e \tau_e$. While the errors in fitting to $\beta_{\bot}^c$ are small, the $\gamma_{\wedge}$ numerator ($\beta_{\parallel}^c - \beta_{\bot}^c$) and denominator ($\omega_e \tau_e$) both tend to zero for small $\omega_e \tau_e$. This resulted in both $\gamma_{\wedge}^c$ and $\delta_{\wedge}^c$ being maximum for small $\omega_e \tau_e$, while the VFP simulations calculated them as exactly zero. Figure \ref{fig:gamma} shows how $\gamma_{\wedge}^c$ calculated using the values from Epperlein \& Haines deviate from the desired VFP data.
New fits to the VFP data incorporate the knowledge of how the transport coefficients should be formulated to capture their limiting behavior at low Hall Parameter \cite{sadler2021,davies2021}. It should be reiterated that this newer work did not modify the VFP calculations, only changing how the transport coefficients were fit to the VFP data. Figure \ref{fig:gamma} includes $\gamma_{\wedge}^c$ calculated by Sadler, Walsh \& Li as well as by Davies, Wen, Ji \& Held; note that these two references give near-identical results.
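The sensitivity described above can be illustrated numerically. The fit functions below are toy models (not the actual Epperlein \& Haines polynomials): a small constant error $\epsilon$ in the fitted difference $\beta_\parallel^c-\beta_\perp^c$ at $\omega_e\tau_e=0$ produces a $\gamma_\wedge^c$ error of order $\epsilon/(\omega_e\tau_e)$, which diverges at low magnetization even though the fit to $\beta_\perp^c$ itself is accurate.

```python
import numpy as np

# Toy model: suppose the true coefficient difference behaves like
#   beta_par - beta_perp ~ a * chi**2 / (1 + chi**2),
# vanishing like chi**2, so the true gamma_wedge = diff/chi -> 0 as chi -> 0.
# A fit with a small constant offset eps at chi = 0 instead gives
# gamma_wedge ~ eps/chi, which blows up at low magnetization.
a, eps = 1.0, 1e-3
chi = np.logspace(-4, 1, 6)        # Hall parameter omega_e * tau_e
true_diff = a * chi**2 / (1 + chi**2)
fitted_diff = true_diff + eps      # tiny absolute fitting error
gamma_true = true_diff / chi       # -> 0 as chi -> 0
gamma_fit = fitted_diff / chi      # -> infinity as chi -> 0
```

A well-posed fit therefore has to constrain the difference to vanish at least as fast as $\omega_e\tau_e$, which is exactly what the updated coefficient forms enforce.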
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Figures/Coeff_ratios.png}
\caption{Ratio of thermal to magnetic transport coefficients as a function of electron Hall Parameter for the old and new coefficients. These are plotted for $Z=1$.\label{fig:coeff}}
\end{figure}
The updated transport coefficients make clear the similarities between thermal and magnetic transport. Looking specifically at the transport due to temperature gradients:
\begin{align}
&\underline{v}_{N} &= & & -\gamma_{\bot}\nabla_{\bot} T_e &- \gamma_{\wedge} \underline{\hat{b}} \times \nabla_{} T_e \label{eq:thermally_driven_transport1}\\
&\underline{q}_{\kappa} &= &-\kappa_{\parallel} \nabla_{\parallel} T_e &-\kappa_{\bot}\nabla_{\bot} T_e &- \kappa_{\wedge} \underline{\hat{b}} \times \nabla_{} T_e \label{eq:thermally_driven_transport2}
\end{align}
where equation \ref{eq:thermally_driven_transport1} is magnetic transport and equation \ref{eq:thermally_driven_transport2} is thermal conduction. These equations are split into components parallel ($\parallel$) to the magnetic field, perpendicular to the field ($\bot$) and perpendicular to both the magnetic field and temperature gradient ($\wedge$). In equation \ref{eq:thermally_driven_transport1} the $\bot$ term is Nernst advection and the $\wedge$ term is cross-gradient-Nernst. For thermal conduction the terms are the unrestricted thermal conduction along field lines, suppressed thermal conduction perpendicular to the field and the Righi-Leduc heat-flow.
The analogy between Nernst ($\gamma_{\bot}^c$) and perpendicular heat-flow ($\kappa_{\bot}^c$) was originally noted by Haines \cite{haines1986}. Haines found that if the kinetic collision operator uses an artificial $v^{-2}$ dependence, where $v$ is the electron speed, there is an exact equivalence between heat flow and magnetic field motion, such that $\kappa_\perp^c/\gamma_\perp^c = \kappa_\wedge^c/\gamma_\wedge^c = 5/2$. The physical interpretation is that magnetic field is only frozen into the faster electrons in the distribution, since it can easily diffuse through the slower, more collisional electrons. The field therefore follows the faster electrons, which tend to move down temperature gradients with the heat flow. For the artificial collision operator, this intuition holds exactly for all $Z$ and $\omega_e\tau_e$.
However, this physical picture is unaffected by changing the $v$ dependence of the collision operator, so the character of magnetic field movement with the electron heat flow should still approximately hold. For example, if instead the collisions use the realistic Fokker-Planck $v^{-3}$ dependence, magnetic field should still move with the heat flow, although they are no longer directly proportional. With the Fokker-Planck collision operator, the ratios are no longer fixed at $5/2$, and they have dependence on $Z$ and $\omega_e\tau_e$. However, for fixed $Z$, the ratios remain approximately constant. Figure \ref{fig:coeff} shows the ratio $\kappa_{\bot}^c/\gamma_{\bot}^c$ against Hall Parameter for $Z=1$; this result is similar using the old or new coefficients, giving little variation.
Figure \ref{fig:coeff} also plots the ratio $\kappa_{\wedge}^c/\gamma_{\wedge}^c$ against Hall Parameter for $Z=1$. Using the old Epperlein \& Haines coefficients the Righi-Leduc ($\kappa_{\wedge}^c$) coefficient is much smaller than cross-gradient-Nernst ($\gamma_{\wedge}^c$) at low magnetization. With the new transport coefficients, however, it can be seen that the cross-gradient-Nernst term is analogous to advection of magnetic field with the Righi-Leduc heat-flow; for $Z=1$, the ratio $\kappa_{\wedge}^c/\gamma_{\wedge}^c$ varies by less than 3\% across magnetization.
\begin{figure*}
\centering
\includegraphics[scale=0.8]{./Figures/Underdense.jpg}
\caption{ \label{fig:underdense} Azimuthal magnetic field component generated by cross-gradient-Nernst advection for an under-dense magnetized plasma \cite{walsh2020} using the Epperlein \& Haines transport coefficients \cite{epperlein1986} (left) and the updated coefficients (right) \cite{sadler2021,davies2021}. These profiles are 1.0ns after the laser turns on. }
\end{figure*}
The ratio of thermal to magnetic transport coefficients does vary with plasma ionization. For low Hall Parameter, $\kappa_{\bot}^c/\gamma_{\bot}^c$ ranges from 3.63 for $Z=1$ to 1.39 for $Z=100$. This ratio is fundamental for systems where thermal conduction drives plasma ablation: equation \ref{eq:thermally_driven_transport1} moves magnetic fields into the colder regions, while ablation driven by equation \ref{eq:thermally_driven_transport2} counteracts that transport \cite{betti2001}. The balance of Nernst demagnetization to plasma ablation has been found to be critically important in magnetized plasma conditions \cite{manuel2015,hill2017,walsh2017,farmer2017,campbell2020,sherlock2020,walsh2020a,hill2021}. The decrease of $\kappa_{\bot}^c/\gamma_{\bot}^c$ with $Z$ suggests that low-$Z$ plasmas will be less impacted by Nernst demagnetization than high-$Z$ plasmas.
The new fits make intuitive sense. The $\wedge$ terms come from electrons having curved orbits; if the magnetization is low, the electron trajectories should be straight on average. The Epperlein \& Haines fits captured this behavior for the heat-flow ($\kappa_{\wedge}$) but not for the magnetic transport ($\gamma_{\wedge}$). At large magnetization the electrons go through many orbits before colliding, meaning that there is no preferential direction for transport; this is captured in both the old and new coefficients, with $\kappa_{\wedge}$ and $\gamma_{\wedge}$ going to zero at large magnetization. For the wedge terms to be important, the plasma must be in a moderately magnetized regime ($\omega_e \tau_e \approx 1$).
The error in the Epperlein \& Haines transport coefficients can alternatively be explained by their focus on the electric field. They did not consider the coefficients in terms of the induction equation, where the gradients of the coefficients are important. The fitting function in the Hall parameter that Epperlein \& Haines chose has a derivative that does not tend to zero as the magnetic field tends to zero, which is a physical requirement. The errors in the fits become more apparent when the coefficients are reformulated in a manner that more clearly demonstrates their physical effects in the induction equation.
This paper demonstrates the importance of the new transport coefficients in laboratory plasmas, focusing on the thermally-driven cross-gradient-Nernst term. In section \ref{sec:underdense} the new coefficient is shown to reduce magnetic field twisting in under-dense systems relevant to MagLIF preheat \cite{gomez2020}. This setup was proposed in order to make the first measurements of cross-gradient-Nernst transport \cite{walsh2020}. Section \ref{sec:premag} then shows how the cross-gradient-Nernst term can result in field twisting in pre-magnetized ICF capsules \cite{walsh2018a}; again, the new coefficients result in reduced twisting at low plasma magnetizations. However, the new coefficients do not universally reduce the impact of cross-gradient-Nernst transport. Section \ref{sec:selfgen} compares ICF hot-spot simulations without MHD, with MHD but no cross-gradient-Nernst, with full extended-MHD using the Epperlein \& Haines coefficients, and with the new coefficients. Cross-gradient-Nernst transport is shown to be important in hot-spot cooling, increasing the penetration depth of a cold spike. An example of how the new $\gamma_{\wedge}$ coefficient affects perturbation growth in direct-drive ablation fronts was already given in one of the original updated-coefficients publications \cite{sadler2021}.
Simulations in this paper use the Gorgon extended-MHD code \cite{ciardi2007,chittenden2004,walsh2017}. The code can use either the old Epperlein \& Haines coefficients \cite{epperlein1986} or the updated coefficients (see the supplementary material of reference \cite{sadler2021}). The coefficients calculated by Sadler, Walsh \& Li are used in this paper, although the coefficients from Davies, Wen, Ji \& Held have been implemented and give no noticeable differences \cite{davies2021} (as expected from figure \ref{fig:gamma}). Epperlein \& Haines provided tabulated data for the coefficients at specific values of Z; these values are interpolated in the simulations. The new coefficients provide a polynomial fit to ionization, which is preferable for reducing computations and eliminating discontinuities when gradients are taken in Z.
\section{Underdense Plasmas\label{sec:underdense}}
This section investigates the impact of cross-gradient-Nernst on magnetized under-dense gases heated by a laser. This setup is relevant to MagLIF \cite{slutz2010} and mini-MagLIF preheat \cite{barnak2017}, but was also investigated as a means of measuring specific extended-MHD transport terms for the first time \cite{walsh2020}. Particularly relevant was the suggestion that quantification of a twisted magnetic field component could be used to measure the cross-gradient-Nernst advection velocity \cite{walsh2020}. Simulations in that experiment design paper used the erroneous Epperlein \& Haines coefficients \cite{epperlein1986}, which are shown here to give excessive magnetic field twisting at low magnetization.
The configuration used as demonstration here is as follows. A low density ($5 \times 10^{19}$ atoms/cm$^3$) deuterium gas is irradiated by a $5J$ beam with a $0.5ns$ square temporal pulse. The beam has a Gaussian spatial profile with a standard deviation $\sigma=100\mu$m at best focus. The beam is tapered along its propagation direction such that at $r=\sigma$ the angle of the rays to the axis satisfies $\sin\theta = 1/12$. A 1T magnetic field is applied along the laser propagation axis. The beam tapering is purposefully used to induce magnetic field twisting by cross-gradient-Nernst transport \cite{walsh2020}.
2-D $\underline{r}$,$\underline{z}$ simulations are used, with the laser drive and applied magnetic field along $\underline{z}$. The laser is treated as individual rays that are traced through the plasma and deposit their energy by inverse Bremsstrahlung. A $460\mu$m axial extent is used, with radial and axial resolution of $1\mu$m.
As the gas density is low, very little energy couples to the system. At $0.1$ns the total absorption along the $460\mu$m axial extent is 2.5\%, reducing to 1\% by the end of the laser pulse when the plasma temperature is higher ($>100$eV). As the Hall Parameter is low for the chosen setup ($\omega_e \tau_e < 0.2$ everywhere) thermal conduction effectively transports energy radially. Simultaneously, the Nernst term advects magnetic field away from the beam center, reducing the field strength to 0.3T by 0.5ns. The magnetic flux piles up at the thermal conduction front, peaking at 2T.
The impact of cross-gradient-Nernst can be seen by decomposing the advection term in equation \ref{eq:mag_trans_new}:
\begin{equation}
\bigg[ \frac{\partial \underline{B}}{\partial t} \bigg] _{N \wedge} + (\underline{v}_{N\wedge} \cdot \nabla) \underline{B} = -\underline{B}(\nabla \cdot \underline{v}_{N\wedge}) + (\underline{B} \cdot \nabla) \underline{v}_{N\wedge} \label{eq:theory_velocity_B}
\end{equation}
As the temperature gradient is predominantly radial and the magnetic field is predominantly axial, the cross-gradient-Nernst velocity is purely in $\underline{\theta}$. Therefore, equation \ref{eq:theory_velocity_B} reduces to $\big[ \frac{\partial B_{\theta}}{\partial t} \big] _{N \wedge} = B_z \frac{\partial v_{N\wedge\theta}}{\partial z} $. This term represents magnetic field twisting. Following a single axial magnetic field line, if $\underline{v}_{N\wedge} = 0$ at the bottom and $\underline{v}_{N\wedge} \ne 0$ at the top, a $B_{\theta}$ field will be generated by twisting in between. By tapering the laser beam, this setup purposefully makes this happen; near the laser edge a magnetic field line passes between an unheated and a heated region, resulting in twisting of the magnetic field \cite{walsh2020}. This process can be viewed as a dynamo driven by electron heat-flow.
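A minimal 1-D toy model of this twisting term (illustrative only; not the Gorgon discretization, and all parameter values are arbitrary) shows that $B_{\theta}$ grows only where a field line crosses between heated and unheated regions:

```python
import numpy as np

# Toy 1-D sketch: evolve the twisted component B_theta along z from
#   dB_theta/dt = B_z * dv_theta/dz,
# with an azimuthal cross-gradient-Nernst velocity that is nonzero only
# on the heated half of the field line.
nz, dz, dt, nsteps = 100, 1.0, 0.1, 50
z = np.arange(nz) * dz
B_z = np.ones(nz)                       # uniform axial field (arbitrary units)
v_theta = np.where(z > 50.0, 1.0, 0.0)  # heated top half only
B_theta = np.zeros(nz)
for _ in range(nsteps):
    B_theta += dt * B_z * np.gradient(v_theta, dz)
# B_theta grows only at the boundary between heated and unheated regions;
# it stays zero where dv_theta/dz = 0.
```

In the full simulations the role of the step in $v_{\theta}$ is played by the tapered edge of the laser spot, where a field line passes from unheated into heated plasma.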
Figure \ref{fig:underdense} compares the twisted magnetic field component at $t=1.0$ns using the old Epperlein \& Haines transport coefficients and the updated coefficients \cite{sadler2021,davies2021}. The Biermann Battery generation has been neglected in these simulations, which means that $B_{\theta}$ can only be generated by cross-gradient-Nernst twisting. As the experiment here is in the low magnetization regime, the difference in the $\gamma_{\wedge}$ coefficient is substantial; at the laser spot edge, where much of the twisting takes place, $\omega_e \tau_e \approx 0.05$, resulting in a factor of 4 reduction in the cross-gradient-Nernst coefficient. By 1.0ns the peak $B_{\theta} \approx$0.08T for the old coefficients, and $\approx$0.01T using the new coefficients.
The new coefficients change the regime where significant $B_{\theta}$ is generated. Designs using the old coefficients aimed to keep the underdense plasma at as low a magnetization as possible. Now, however, it is clear that the plasma should instead sit in the moderately magnetized regime, with the cross-gradient-Nernst velocity peaking at around $\omega_e \tau_e \approx 0.8$ for deuterium \cite{sadler2021}. This is still realizable, with the density, laser power and applied field all acting as free variables. An experiment with the plasma parameters defined here but scanning applied field strength is expected to observe the true $\gamma_{\wedge}$ dependence.
However, with the effect of cross-gradient-Nernst reduced, the Biermann Battery contribution to $B_{\theta}$ becomes more important. At 0.1ns in the simulations shown here, the $B_{\theta}$ field from the Biermann Battery is of the same order as that from cross-gradient-Nernst. Biermann requires a density gradient in the plasma, which only develops later in time, on time-scales of the order of the laser radius divided by the sound speed. Therefore, probing early in time, before the plasma expands hydrodynamically, is important to distinguish Biermann from cross-gradient-Nernst.
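The hydrodynamic timescale argument can be quantified with a rough estimate. The following sketch uses assumed values for a $\sim$1keV deuterium plasma and a $100\mu$m laser spot; none of these numbers are quoted above, so the result is an order-of-magnitude illustration only:

```python
import math

# Order-of-magnitude estimate of the hydrodynamic timescale r_laser / c_s that
# sets when Biermann-generated fields become comparable.  All input values are
# assumptions for a ~1 keV deuterium plasma; the paper does not quote them.

Te_J = 1.0e3 * 1.602e-19      # electron temperature: 1 keV in joules (assumed)
m_D = 3.344e-27               # deuteron mass [kg]
Z, gamma = 1.0, 5.0 / 3.0

# ion sound speed c_s = sqrt((Z*k*Te + gamma*k*Ti) / m_i), taking Ti ~ Te
c_s = math.sqrt((Z + gamma) * Te_J / m_D)

r_laser = 100e-6              # assumed laser spot radius [m]
t_hydro = r_laser / c_s
print(f"c_s ~ {c_s:.2e} m/s, hydrodynamic timescale ~ {t_hydro * 1e9:.2f} ns")
```

The resulting timescale is a few tenths of a nanosecond, consistent with the need to probe well before 1ns to isolate the cross-gradient-Nernst contribution.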
\section{Pre-magnetized ICF Capsules\label{sec:premag}}
\begin{figure}
\centering
\includegraphics[scale=0.6]{./Figures/Premag_XNernst_Biermann.jpg}
\caption{ \label{fig:Premag_XNernst_Biermann} $B_{\phi}$ at 6.3ns for a warm indirect-drive implosion with an applied axial field of 5T. Left is a simulation with cross-gradient-Nernst included (using the new coefficients), while right shows a case with both cross-gradient-Nernst and Biermann included. }
\end{figure}
Axial magnetic fields can be applied to ICF capsules to reduce thermal cooling of the fuel during stagnation \cite{chang2011,perkins2017,walsh2019}. Throughout the implosion the magnetic field compresses along with the plasma; this happens dominantly at the capsule waist, where the implosion velocity is perpendicular to the magnetic field lines. At the poles the magnetic field remains uncompressed, as the implosion velocity is along field lines.
It has been suggested that the cross-gradient-Nernst term should twist the applied magnetic field during the implosion \cite{davies2015}. Preliminary simulations showed significant twisting by bang-time, with the yield increased by as much as 10\% due to the extra path length required for heat to travel along field lines out of the hot-spot \cite{walsh2018a}. Again, these calculations used the old Epperlein \& Haines coefficients. Here, as with the twisting seen in the underdense configuration in section \ref{sec:underdense}, the twisting is found to be lower when the new transport coefficients are used.
The simulated configuration is a $D_2$ gas-filled HDC indirect-drive capsule that is used on the National Ignition Facility, although these results broadly apply to all magnetized spherical implosions. A 5T magnetic field is applied axially. $1\mu$m radial resolution is used throughout the implosion and 180 polar cells are used in the simulation range $\theta = 0,\pi$. Before the first shock converges on axis the simulations are transferred to a cylindrical mesh.
Moderate HDC shell thickness variations are initialized \cite{casey2021}, such that the capsule is mildly perturbed by neutron bang-time. 1600 perturbations with random amplitude and mode are used, with each chosen in the range $\epsilon=0,0.5$nm and $k=0,180$ respectively.
\begin{figure}
\centering
\includegraphics[scale=0.6]{./Figures/Premag_XNernst.jpg}
\caption{ \label{fig:Premag_XNernst} Density and $B_{\phi}$ at neutron bang-time ($t=7.3$ns) using the old Epperlein \& Haines coefficients (left) and the updated versions (right). }
\end{figure}
While for the underdense configuration a tapered beam was used to induce twisting, a spherical implosion naturally has twisting peaking at $\theta = \pi/4,3\pi/4$. This happens because the cross-gradient-Nernst velocity $- \underline{\hat{b}}\times \nabla T_e$ is maximum at the capsule waist and zero at the poles. Therefore, the axial magnetic field at the waist is displaced in $\phi$ but at the poles it is stationary; in between the waist and the poles the magnetic field lines are twisted out of the plane.
Twisting of magnetic field lines introduces a closed field line component, $B_{\phi}$. This is of great interest to the magneto-inertial fusion community, as this component is effectively compressed during the implosion and results in additional thermal energy containment. Schemes have been considered to apply a purely $B_{\phi}$ component to a capsule \cite{hohenberger2011}, although none have been demonstrated for spherical implosions.
The cross-gradient-Nernst velocity is significant both in the in-flight shock-compressed gas and in the stagnating hot-spot. For the configuration chosen here the in-flight phase is not changed significantly by the new coefficients, as the low density and high temperature gas is magnetized even with $B_0 = 5$T. For lower applied fields the coefficients may make a significant difference, but these regimes are not of interest for magneto-inertial fusion.
Figure \ref{fig:Premag_XNernst_Biermann} shows the $B_{\phi}$ field profile induced by the cross-gradient-Nernst effect at 6.3ns, which is before the first shock converges onto the axis. While $B_{\phi}$ only reaches 3T at this time, the field twisting is significant: $B_{\phi}/|\underline{B}|=1/3$ in locations near the shock-front.
Also shown in figure \ref{fig:Premag_XNernst_Biermann} is a simulation with the Biermann Battery self-generated magnetic fields included. This term requires an implosion asymmetry to be important; there are two sources of asymmetry in this implosion. First, the applied field results in anisotropy of the heat-flow, which here results in a hotter compressed gas at the waist compared with the poles. Second, the HDC thickness variations generate magnetic fields. The interest here is in the fuel $B_{\phi}$ component, which is dominated by cross-gradient-Nernst twisting for $B_0 = 5$T. Larger applied fields result in suppression of cross-gradient-Nernst and enhancement of the thermal conductivity anisotropy, enhancing the impact of Biermann.
Figure \ref{fig:Premag_XNernst} shows the density and out-of-plane magnetic field component at bang-time. Biermann Battery generation of magnetic field has been turned off in the code for these simulations, which means that all of $B_{\phi}$ is from cross-gradient-Nernst twisting. The case on the left uses the old cross-gradient-Nernst coefficients from Epperlein \& Haines, while the case on the right uses the updated coefficients. The new coefficients lower the peak $B_{\phi}$ from 500T to 250T. Nonetheless, the twisting is significant, with $B_{\phi}/|\underline{B}|$ up to 0.3 throughout the hot-spot.
The twisting introduces several additional effects. Firstly, thermal conduction and Nernst are moderately reduced, as these components become partially out of the simulation plane; cross-gradient-Nernst and Righi-Leduc then have components within the simulation plane. If the magnetic tension became significant, the twisted field would also be expected to induce plasma motion in $\phi$; however, as cross-gradient-Nernst is suppressed for the large applied fields required for tension to be important \cite{perkins2017,walsh2019}, this may not be a realizable regime. For $B_0 = 5$T the plasma $\beta$ is too large for significant motion.
While the simulations here focused on indirect-drive implosions, cross-gradient-Nernst also twists magnetic fields in pre-magnetized direct-drive ablation fronts \cite{walsh2020a}, where the temperature gradients drive extreme heat-flows. However, the $B_{\phi}$ component is effectively advected by the plasma and Nernst velocities, lowering its impact.
\section{Self-Generated Fields in ICF Hot-spots\label{sec:selfgen}}
\begin{figure}
\centering
\includegraphics[scale=0.5]{./Figures/Singlespike_B.jpg}
\caption{ \label{fig:ss_B} Self-generated magnetic fields at bang-time around a cold spike pushing into a hot-spot. 3 cases are shown. Left: a simulation without cross-gradient-Nernst advection included. Middle: a simulation using the Epperlein \& Haines transport coefficients \cite{epperlein1986}. Right: a simulation using the updated transport coefficients \cite{sadler2021,davies2021}.}
\end{figure}
This section looks into the impact of cross-gradient-Nernst on self-generated magnetic field profiles in regular ICF hot-spots, where magnetic fields have not been externally applied.
Magnetic fields are generated by the Biermann Battery mechanism around perturbations, with estimated field strengths up to 10kT \cite{walsh2017}. These fields are predominantly generated in the stagnation phase, when the temperature and density gradients are largest \cite{walsh2021}. Recent advances have been made in the theoretical understanding of magnetic flux generation in these systems, with more flux being generated around high mode and large amplitude perturbations \cite{walsh2021}.
While it was expected that self-generated magnetic fields would reduce heat loss from ICF hot-spots, research found that magnetization of the electron population introduced Righi-Leduc heat-flow, which enhanced cooling \cite{walsh2017}. As cross-gradient-Nernst is the magnetic transport analogue of Righi-Leduc heat-flow, it is important in these systems.
The simulations here use the indirect-drive HDC design N170601, although the physics is also applicable to direct-drive hot-spots. An isolated 200nm HDC shell thickness variation is imposed at the capsule pole, causing a cold spike to push into the hot-spot. Shell thickness variations have been found to be a significant degradation mechanism for HDC implosions \cite{casey2021}; the variation applied here is not based on any target fabrication specifications, and is instead used as a demonstration of magnetized heat-flow in ICF implosions.
The simulations here are 2-D and are in spherical geometry until the first shock converges onto the axis ($t\approx 7.6$ns). The resolution up to this time is $1\mu$m radially, with 180 cells in the polar direction from $\theta = 0,\pi$. The simulations are then remapped into cylindrical geometry for the stagnation phase, when the resolution is $\frac{1}{2}\mu$m.
Figure \ref{fig:ss_B} shows the magnetic field distribution at neutron bang-time ($t= 8.5$ns) for simulations with different cross-gradient-Nernst physics included. The magnetic field profiles are in green (into the page) and purple (out of the page), plotted over the density so that the proximity to the spike can be seen. The dense fuel is at the top of the figure and the hot-spot at the bottom.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Figures/Singlespike_Te.jpg}
\caption{ \label{fig:ss_Te} Electron temperature along the axis of a hot-spot spike at bang-time. Temperature profiles are shown for cases: without any self-generated fields included; with extended-MHD but no cross-gradient-Nernst advection; using the Epperlein \& Haines transport coefficients \cite{epperlein1986}; using the updated coefficients \cite{sadler2021,davies2021}.}
\end{figure}
First in figure \ref{fig:ss_B} is a case with no cross-gradient-Nernst advection. The Righi-Leduc heat-flow direction is shown with white arrows. Righi-Leduc acts to cool the spike tip, which results in regular Nernst compressing the field onto the simulation axis. This process causes numerical issues, advecting all of the magnetic field into a single line of cells. Once the cross-gradient-Nernst is included, the magnetic field moves in the same direction as Righi-Leduc, preventing the field profile from compressing onto the axis. It can be seen in figure \ref{fig:ss_B} that the new cross-gradient-Nernst coefficient does not advect as rapidly as the Epperlein \& Haines version.
It is posited that the updated coefficients will always be important in systems with self-generated magnetic fields and thermal conduction; there is always a null-point of magnetic field, resulting in no electron magnetization. Even if the peak magnetization is large enough to be in a regime where the Epperlein \& Haines coefficients are valid, the magnetization will decrease to zero in the surrounding regions, passing through the regime where the old coefficients are invalid.
Figure \ref{fig:ss_Te} shows the impact of the imposed spike on the hot-spot temperature for simulations with various MHD packages included. The electron temperature is plotted along the spike axis. Without any MHD included the spike does not propagate as far; this suggests that current design calculations of ICF implosions (both directly and indirectly driven) are underestimating the impact of perturbations.
Simulations with the new coefficients included result in the greatest discrepancy with the estimates that do not include Biermann generation; at the $T_e = 3.5$keV contour the spike has penetrated $7 \mu$m further due to the magnetization of heat-flow. The simulations without cross-gradient-Nernst are less affected by the magnetic fields, as the fields are transported by the standard Nernst term into a region one cell wide (see figure \ref{fig:ss_B}).
The new coefficients give a greater impact of MHD on spike penetration compared with the old coefficients. This can be understood from the fact that the new coefficients result in the magnetic field moving with the Righi-Leduc heat-flow. In contrast, the old coefficients allowed the magnetic flux to move ahead of the Righi-Leduc heat-flow into regions where it would have a lower impact on the plasma magnetization. This can be seen in figure \ref{fig:coeff}, which shows the ratio of the Righi-Leduc coefficient to the cross-gradient-Nernst coefficient decreasing to zero for low magnetization.
Closer analysis of the impact of MHD on capsule perturbation growth for a variety of mode numbers and amplitudes will be the subject of a future publication.
\section{Conclusions}
In summary, updated magnetic transport coefficients that accurately replicate kinetic simulations at low electron magnetizations \cite{sadler2021,davies2021} have been shown to be important across a range of laboratory plasma conditions. These coefficients are the new standard for implementation into extended-MHD codes. While the two references give different fits for the coefficients, they are found to be practically equivalent.
With an external magnetic field applied, the cross-gradient-Nernst term tends to twist the magnetic field \cite{davies2017,walsh2018a}; using the new coefficients reduces twisting in the low magnetization ($\omega_e \tau_e < 1$) regime. This result impacts attempts to measure cross-gradient-Nernst for the first time \cite{walsh2020} as well as the design of pre-magnetized capsule implosions \cite{chang2011,perkins2017,walsh2018a,walsh2019}. Twisting is still possible in the moderate magnetization regime ($\omega_e \tau_e \approx 1$).
For systems with significant self-generated magnetic fields \cite{farmer2017,walsh2017,sadler2020a} the new coefficients result in the magnetic fields moving with the Righi-Leduc heat-flow. This is found to enhance the impact of Righi-Leduc, as shown in direct-drive ablation fronts (where Righi-Leduc reduces perturbation growth) \cite{sadler2021} and in ICF hot-spots (where Righi-Leduc enhances perturbation growth).
In addition to being more physically accurate, the new coefficients have been found to increase numerical stability, as they introduce fewer discontinuities into the simulations.
\section*{Acknowledgements}
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and by the LLNL-LDRD program under Project Number 20-SI-002. The simulation results were obtained using the Imperial College High Performance Computer Cx1.
This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.
Research presented in this article was also supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory, under the Center for Nonlinear Studies project number 20190496CR. This research was supported by the Los Alamos National Laboratory (LANL) through its Center for Space and Earth Science (CSES). CSES is funded by LANL’s Laboratory Directed Research and Development (LDRD) program under project number 20180475DR.
Finally, the information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award No. DE-AR0001272, by the Department of Energy Office of Science, under Award No. DE-FG02-04ER54746, by the Department of Energy National Nuclear Security Administration under Award No. DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority.
\section*{References}
\bibliographystyle{plainnat}
\ifdefined\DeclarePrefChars\DeclarePrefChars{'’-}\else\fi
\section{Introduction}
Traditionally, radio transceivers are subject to a half-duplex (HD) constraint
because of the crosstalk between the transmit and receive chains.
With full-duplex (FD) transmission, the self-interference caused by the
transmitter at its own receiver is much stronger than the desired signal
from the partner node and overwhelms it. Therefore, current radios all use orthogonal signaling dimensions, i.e., time division
duplexing (TDD) or frequency division duplexing (FDD), to achieve bidirectional
communication.
FD communication can potentially double the throughput if the self-interference
can be well mitigated. FD radios have been successfully implemented in the industrial, scientific and medical (ISM) radio bands in laboratory environments in the past few years~\cite{ChoJai10Mobicom, DuaSab10Asilomar, JaiCho11Mobicom, SahPat11X}. Key
to the success are novel analog and digital self-interference cancellation
techniques and/or spatially separated transmit and receive antennas.
A FD system with only one antenna has also been implemented in \cite{Knox12WAMICON}
by using a specially designed circulator, and a FD WiFi radio with one antenna and one circulator has been prototyped in \cite{Bharadia13}. In general, the main idea of FD transmission
is to let the receive chain of a node remove the self-interference
caused by the known signal from its transmit chain, so that reception
can be concurrent with transmission. A novel signaling technique
was proposed in \cite{Guo10Allerton} to achieve virtual FD with applications
in neighbor discovery \cite{Guo2-13} and mutual broadcasting \cite{Guo13} with its prototyping presented in \cite{Tong14RODD}.
From a theoretical perspective, the two-way transmission capacity
of wireless ad hoc networks has been studied in \cite{Vaze11} for
a FDD model. A FD cellular system has been analyzed in \cite{Goyal2013}
where the throughput gain has been illustrated via extensive simulation
for a cellular system with FD
base station and HD mobile users. The throughput gain of
single-cell multiple-input and multiple-output (MIMO) wireless systems with FD radios has been
quantified in \cite{Barghi12}. A capacity analysis of FD and HD transmissions with bounded
radio resources has been presented in \cite{Aggarwal12ITW} with focus only
on a single-link system. \cite{Ju12,Kim14} evaluate the capacity of FD ad hoc networks and alleviate the capacity
degradation due to the extra interference of FD by using beamforming and an ARQ protocol, respectively. Both
capacity analyses in \cite{Ju12,Kim14} are based on perfect self-interference cancellation and the approximation
that the distances of the two interfering nodes of a FD link to the desired receiver are the same.
In this paper, the impacts of FD transmission on the network throughput
are explored. On the one hand, FD transmission allows bidirectional communication
between two nodes simultaneously and therefore potentially doubles the throughput.
On the other hand, the extra interference caused by FD transmissions and imperfect self-interference cancellation can
degrade the throughput gain over HD, which makes it unclear whether FD
can actually outperform HD.
This paper utilizes the powerful analytical tools from stochastic geometry
to study the throughput performance of a wireless network of nodes with both FD and HD capabilities. Our results analytically
show that for an ALOHA MAC protocol, FD always outperforms HD in terms of throughput if perfect self-interference cancellation is assumed. However, for a path loss exponent $\alpha$, the achievable throughput gain is upper bounded by $\frac{2\alpha}{\alpha + 2}$, i.e., it ranges from $0$ to $33\%$ for the practical range $\alpha \in (2,4]$.
This result holds for arbitrary node densities, link distances and SIR regimes. Moreover, we model imperfect self-interference cancellation and quantify its effects on the throughput. Imperfect self-interference cancellation causes a SIR loss in the FD transmission and thus reduces the throughput gain between the FD network and HD network. Tight bounds on the SIR loss are obtained using the concept of horizontal shifts of the SIR distribution. The amount of self-interference cancellation determines if HD or FD is preferable in the networks.
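As a quick numerical check of the bound (a sketch, not taken from the paper's own code), the maximum FD-over-HD throughput ratio $2\alpha/(\alpha+2)$ can be tabulated over the practical range of path-loss exponents:

```python
# The paper's upper bound on the FD-over-HD throughput ratio, 2*alpha/(alpha+2),
# tabulated over the practical range of path-loss exponents alpha in (2, 4].

def max_fd_gain(alpha: float) -> float:
    """Upper bound on the FD/HD throughput ratio for path-loss exponent alpha."""
    return 2.0 * alpha / (alpha + 2.0)

for alpha in (2.0, 2.5, 3.0, 3.5, 4.0):
    ratio = max_fd_gain(alpha)
    print(f"alpha = {alpha:.1f}: FD/HD <= {ratio:.3f} ({(ratio - 1) * 100:.0f}% gain)")
```

At $\alpha=2$ the bound is 1 (no gain), rising monotonically to $4/3$ (a $33\%$ gain) at $\alpha=4$.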
\section{Network Model\label{sec:Network-Model}}
Consider an independently marked Poisson point process (PPP) \cite{Haenggi12book}
$\hat{\Phi}=\left\{ \left(x_{i},m(x_{i}),s(x_{i})\right)\right\} $
on $\mathbb{R}^{2}\times\mathbb{R}^{2}\times\left\{ 0,1,2\right\} $
where the ground process $\Phi=\left\{ x_{i}\right\} $ is a PPP with density $\lambda$
and $m(x_{i})$ and $s({x_{i}})$ are the marks of point
$x_{i}$. The mark $m(x_{i})$ is the location of the node that $x_{i}$
communicates with. Here, we fix $\left\Vert x-m(x)\right\Vert =R$, $\forall x\in\Phi$,
i.e., $R$ is the distance of all links. Therefore, $m(x_{i})$ can also be written as $m(x_{i})=x_{i}+R\left(\cos\varphi_{i},\sin\varphi_{i}\right)$,
where the angles $\varphi_{i}$ are independent and uniformly distributed on $\left[0,2\pi\right]$. The link distance $R$ can also be random without affecting the main conclusions since we can always derive the results by first conditioning on $R$ and then averaging over $R$. We define $m(\Phi)=\{m(x): x\in\Phi\}$, which is also a PPP of density $\lambda$.
The mark $s(x_{i})$ indicates the independently chosen state of the link that consists
of $x_{i}$ and $m(x_{i})$: $s(x_{i})=0$ means the link is silent,
$s(x_{i})=1$ means the link is in HD mode, and $s(x_{i})=2$ means it is in FD mode. HD means that in a given
time slot the transmission is unidirectional, i.e., only from $x_{i}$
to $m(x_{i})$, while FD means that $x_{i}$ and $m(x_{i})$
are transmitting to each other concurrently. Therefore, for any link there are three
states: silence, HD, and FD. Assume that a link is in the state of
silence with probability $p_{0}$, HD with probability $p_{1}$ and
FD with probability $p_{2}$, where $p_{0}+p_{1}+p_{2}=1$. $p_{1}$
and $p_{2}$ are the medium access probabilities (MAPs) for HD and
FD modes, respectively. As a result, $\Phi=\bigcup_{i=0}^{2}\Phi_{[i]}$,
where $\Phi_{[i]}=\left\{ x\in\Phi: s(x)=i\right\} $ with density $\lambda p_{i}$
and $i\in\left\{ 0,1,2\right\}$. From the marking theorem \cite[Thm. 7.5]{Haenggi12book},
these three node sets $\Phi_{[i]}$ are independent. We refer to the link consisting of a node $x_{0}$ and its mark $m(x_{0})$ as the \textit{typical link}.
The marked point process $\hat{\Phi}$ can be used to
model a wireless network of nodes with both FD and HD capabilities.
The self-interference in the FD links is assumed to be cancelled
imperfectly with residual self-interference-to-power ratio (SIPR) $\beta$, i.e., when the transmit power of a node is $P$, the residual self-interference is $\beta P$. The parameter $\beta$ quantifies the amount of self-interference cancellation, and $-10\log_{10} \beta$ is the self-interference cancellation in dB. When $\beta =0$, there is perfect self-interference cancellation, while for $\beta=1$, there is no self-interference cancellation. An example of a realization of such a wireless
network is illustrated in Figure \ref{Fig:0}. In the following, we will use this model to study the performance
of wireless networks with FD radios.
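The marked PPP model can be sampled directly. The following minimal sketch generates one realization on a square window; all parameter values ($\lambda$, $R$, the MAPs) are illustrative assumptions, not values from the paper:

```python
import math
import random

# One realization of the marked PPP: each point x gets a partner m(x) at
# distance R in a uniform random direction, and each link draws a state
# s in {0: silent, 1: HD, 2: FD} with probabilities p0, p1, p2.
# All parameter values below are illustrative assumptions.

random.seed(1)
LAM, R, WIN = 0.1, 1.0, 30.0      # density, link distance, window side
P0, P1, P2 = 0.2, 0.4, 0.4        # state probabilities (sum to 1)

def poisson(mean):
    """Knuth's method for a Poisson random variate."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

links = []
for _ in range(poisson(LAM * WIN * WIN)):
    x = (random.uniform(0, WIN), random.uniform(0, WIN))
    phi = random.uniform(0, 2 * math.pi)
    m = (x[0] + R * math.cos(phi), x[1] + R * math.sin(phi))
    u = random.random()
    s = 0 if u < P0 else (1 if u < P0 + P1 else 2)
    links.append((x, m, s))

print(len(links), "links; FD fraction:",
      sum(1 for _, _, s in links if s == 2) / len(links))
```

By the marking theorem, thinning by state in this way yields the three independent PPPs $\Phi_{[i]}$ of densities $\lambda p_i$ used in the analysis.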
\begin{figure}[h]
\vspace{-3mm}
\begin{centering}
\includegraphics[width=\figwidth]{NetworkIllustration}
\par\end{centering}
\caption{An example of the class of wireless networks considered in this paper. The dashed lines indicate that a link is silent, the single arrows that it is in HD mode, and the double arrows that it is in FD mode. The $\times$'s form $\Phi$ while the $\circ$'s form $m(\Phi)$.}
\label{Fig:0}
\end{figure}
In this network setup, we use the SIR model where a transmission
attempt from $x$ to $y$ is considered successful if
\begin{equation}
\mbox{SIR}_{y}=\frac{P_{xy}Kh_{xy}l(x,y)}{\sum_{z\in\tilde{\Phi}\backslash\left\{ x\right\} }P_{zy}Kh_{zy}l(z,y)+ \beta P_{xy}\mathbbm{1}_{xy}^{\rm FD}}>\theta,\label{SIR}
\end{equation}
where $\tilde{\Phi}$ is the set of transmitting nodes in a given time slot, $\theta$ is the SIR threshold, $h_{xy}$ and $h_{zy}$ are the fading power coefficients with
mean $1$ from the desired transmitter $x$ and the interferer $z$
to $y$ respectively, and $\mathbbm{1}_{xy}^{\rm FD}$ is the indicator function that the link $xy$ is in FD mode. The inclusion of $\mathbbm{1}_{xy}^{\rm FD}$ means that the interference of FD links has an extra term due to the imperfect self-interference cancellation. The transmit powers $P_{xy} = P$ when link $xy$ is active. We focus on the Rayleigh fading case for both
the desired link and interferers. $K$ is a unitless constant that depends on the antenna characteristics and the average channel attenuations. $K=G_{\rm tx} G_{\rm rx}\left(\frac{c_{\rm L}}{4\pi f_c}\right)^2$, where $c_{\rm L}$ is the speed of light, $f_c$ is the carrier frequency, and $G_{\rm tx}$ and $G_{\rm rx}$ are the antenna gain at the transmitter and receiver, respectively. The path loss function $l(x,y)$
between node $x$ and $y$ is $l(x,y)=\left\Vert x-y\right\Vert ^{-\alpha}$,
where $\alpha>2$ is the path-loss exponent. If $y$ is at the origin, the index $y$ will be omitted, i.e., $l(x,\mathbf{0})\equiv l(x)$. Also, we
call a given set of system parameters $(\lambda,\theta,R,\alpha)$
a \textit{network configuration}. We will show that
some conclusions hold regardless of the network configuration.
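The SIR success criterion can be made concrete with a small deterministic example. In the sketch below (all values illustrative), the fading coefficients are set to their mean of 1, and the common factor $PK$ is divided out of numerator and denominator, so the residual self-interference enters as $\beta/K$:

```python
# Toy evaluation of the SIR success criterion (1) at a single receiver.
# Illustrative values only; fading coefficients are fixed at their mean of 1
# for determinism, and the common P*K factor is divided out, so the residual
# self-interference appears as beta/K.

ALPHA = 4.0           # path-loss exponent
R = 1.0               # desired link distance
BETA_OVER_K = 0.0     # perfect self-interference cancellation in this example

def sir(interferer_dists, fd_link=False):
    signal = R ** -ALPHA
    interference = sum(d ** -ALPHA for d in interferer_dists)
    if fd_link:
        interference += BETA_OVER_K   # extra term only for FD links
    return signal / interference

# Two interferers at distances 2 and 3: SIR = 1 / (2^-4 + 3^-4) ~ 13.4
s = sir([2.0, 3.0])
print(f"SIR = {s:.2f}; success at theta = 10: {s > 10}")
```

Setting \texttt{fd\_link=True} with a nonzero \texttt{BETA\_OVER\_K} reproduces the FD-specific SIR loss analyzed in the next section.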
\section{Success Probability\label{sec:Success-Probability}}
Our first metric of interest is the success probability, defined as
\begin{equation}
p_{s}\triangleq\mathbb{P}(\mbox{SIR}_{y}>\theta),\label{eq:ps}
\end{equation}
which is also the complementary cumulative distribution function (ccdf) of the SIR. Without changing the distribution of the point process, we may assume that
the receiver $y$ is at the origin. This implies there is a transmitter at fixed distance
$R$ from the origin. The success probability plays an
important role in determining the throughput, as will be described
in the following section.
\subsection{Derivation of the success probability and its bounds}
Before obtaining the unconditional success probability given in (\ref{eq:ps}), we first derive the conditional success probabilities given that a link is HD or FD\footnote{When the link is inactive, the conditional success probability is obviously zero by (\ref{SIR}).}. We denote the success probabilities conditioned on the typical link being HD or FD by $p_{s}^{\rm HD}$ and $p_{s}^{\rm FD}$, respectively. The following theorem gives the conditional success probabilities $p_{s}^{\rm HD}$ and $p_{s}^{\rm FD}$ of the FD/HD-mixed
wireless network modeled by the marked PPP:
\begin{thm}\label{sucProb}
In a wireless network described by the marked PPP $\hat{\Phi}$, the conditional success
probability $p_{s}^{\rm HD}$ is given by
\begin{equation}
p_{s}^{\rm HD}=\exp(-\lambda p_{1}H(\theta R^{\alpha},\alpha))\exp(-\lambda p_{2}F(\theta R^{\alpha},\alpha,R)),\label{eq:ps-2}
\end{equation}
where $H(s,\alpha)\triangleq\frac{\pi^{2}\delta s^{\delta}}{\sin(\pi\delta)}$
with $\delta\triangleq2/\alpha$ and
\ifCLASSOPTIONonecolumn
\begin{equation}
F(s,\alpha,R)\triangleq\int_{0}^{\infty}\left(2\pi-\frac{1}{1+sr^{-\alpha}}\int_{0}^{2\pi}\frac{d\varphi}{1+s\left(r^{2}+R^{2}+2rR\cos\varphi\right)^{-\alpha/2}}\right)rdr,\label{F}
\end{equation}
\else
\begin{equation}F(s,\alpha,R)\triangleq \int_{0}^{\infty}\left(2\pi-\frac{1}{1+sr^{-\alpha}}K(s,r,R,\alpha)\right)rdr \label{F}\end{equation}
with $K(s,r,R,\alpha)\triangleq\int_{0}^{2\pi}\frac{d\varphi}{1+s\left(r^{2}+R^{2}+2rR\cos\varphi\right)^{-\alpha/2}},$
\fi
and the conditional success
probability $p_{s}^{\rm FD}$ is given by
\begin{equation}
p_{s}^{\rm FD}=\kappa p_{s}^{\rm HD},\label{eq:ps-3}
\end{equation}
where $\kappa \triangleq e^{-\frac{\theta R^{\alpha}\beta}{K}}$.
\end{thm}
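As a numerical illustration of the theorem (a sketch with assumed parameter values, restricted to $p_2=0$ so the double integral $F$ is not needed):

```python
import math

# Numerical illustration of Theorem 1 for an HD-only network (p2 = 0),
# plus the FD penalty factor kappa = exp(-theta * R^alpha * beta / K).
# All parameter values below are assumptions for illustration only.

ALPHA, THETA, R = 4.0, 10.0, 1.0   # path-loss exponent, SIR threshold, link distance
LAM, P1 = 0.01, 0.5                # density and HD medium access probability

def H(s, alpha):
    """H(s, alpha) = pi^2 * delta * s^delta / sin(pi * delta), delta = 2/alpha."""
    delta = 2.0 / alpha
    return math.pi ** 2 * delta * s ** delta / math.sin(math.pi * delta)

s = THETA * R ** ALPHA
ps_hd = math.exp(-LAM * P1 * H(s, ALPHA))   # eq. (3) with p2 = 0
print(f"H = {H(s, ALPHA):.3f}, p_s^HD = {ps_hd:.4f}")

# With 60 dB cancellation (beta = 1e-6) and K ~ 1e-4 (roughly unity antenna
# gains at 2.4 GHz), the FD SIR loss factor kappa is mild:
BETA, K = 1e-6, 1e-4
kappa = math.exp(-s * BETA / K)
print(f"kappa = {kappa:.4f}")
```

For these assumed values, $p_{s}^{\rm HD}\approx 0.92$ and $\kappa\approx 0.90$, i.e., imperfect cancellation costs roughly a $10\%$ relative reduction in the FD success probability.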
\begin{IEEEproof}
Conditioned on the link being active, the SIR from (\ref{SIR}) can be rewritten as
\begin{equation}
\mbox{SIR}_{y}=\frac{h_{xy}l(x,y)}{\sum_{z\in\tilde{\Phi}\backslash\left\{ x\right\} }h_{zy}l(z,y)+ \frac{\beta \mathbbm{1}_{xy}^{\rm FD}}{K}}>\theta,\label{SIR2}
\end{equation}
by dividing both the numerator and the denominator by $PK$.
As a result, the network is equivalent to one where each node transmits with unit power and the SIPR $\beta$ is replaced by $\beta/K$. Hence, with Rayleigh fading, the desired signal strength $S$ at the
receiver at the origin is exponential, i.e., $S=hR^{-\alpha}$. Conditioned on the link being HD, the interference $I$ consists of two parts: the
interference from the HD nodes $\Phi_{[1]}$ and the interference from the FD nodes $\Phi_{[2]}$. Hence, it can be expressed as:
\[
I=\sum_{x\in\Phi_{[1]}}h_{x}l(x)+\sum_{x\in\Phi_{[2]}}\left(h_{x}l(x)+h_{m(x)}l(m(x))\right).
\]
The Laplace transform of the interference follows as
\ifCLASSOPTIONonecolumn
\begin{eqnarray}
L_{I}(s) & = & \mathbb{E}e^{-s\left(\sum_{x\in\Phi_{[1]}}h_{x}l(x)+\sum_{x\in\Phi_{[2]}}\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)\right)}\nonumber \\
& = & \mathbb{E}\left(\prod_{x\in\Phi_{[1]}}e^{-sh_{x}l(x)}\prod_{x\in\Phi_{[2]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right)\nonumber \\
& \overset{\left(a\right)}{=} & \mathbb{E}\left(\prod_{x\in\Phi_{[1]}}e^{-sh_{x}l(x)}\right)\mathbb{E}\left(\prod_{x\in\Phi_{[2]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right),\label{eq:Two}
\end{eqnarray}
\else
\begin{align}
L_{I}(s)
& = \mathbb{E}\left(\prod_{x\in\Phi_{\left[1\right]}}e^{-sh_{x}l(x)} \prod_{x\in\Phi_{\left[2\right]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right)\nonumber \\
& \overset{\left(a\right)}{=} \mathbb{E}\left(\prod_{x\in\Phi_{\left[1\right]}}e^{-sh_{x}l(x)}\right)\cdot\nonumber\\
& \qquad \mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right),\label{eq:Two}
\end{align}
\fi
where (a) follows from the fact that $\Phi_{[1]}$ and
$\Phi_{\left[2\right]}$ are independent PPPs from the marking theorem
\cite[Thm. 7.5]{Haenggi12book}. The first term in the
product of (\ref{eq:Two}) is the Laplace transform of the interference
of the PPP $\Phi_{[1]}$, given by \cite[page 103]{Haenggi12book}:
\begin{align*}
L_{I_{1}}(s) & = \mathbb{E}\left(\prod_{x\in\Phi_{\left[1\right]}}e^{-sh_{x}l(x)}\right)\\
& = \exp(-\lambda p_{1}H(s,\alpha)).
\end{align*}
The second term in the product of (\ref{eq:Two}) can be written as
follows:\ifCLASSOPTIONonecolumn
\begin{align}
L_{I_{2}}(s) & = \mathbb{E}\left(\prod_{x\in\Phi_{[2]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right)\nonumber \\
& = \mathbb{E}\left(\prod_{x\in\Phi_{[2]}}\frac{1}{1+sl(x)}\frac{1}{1+sl(m(x))}\right)\label{eq:I2}\\
& \overset{\left(a\right)}{=} \exp\left(-\lambda p_{2}\int_{\mathbb{R}^{2}}\left(1-\frac{1}{1+sl(x)}\frac{1}{1+sl(m(x))}\right)dx\right)\label{eq:I2-1}\\
& = \exp(-\lambda p_{2}F(s,\alpha,R)),\label{L2}
\end{align}
where (a) follows from the probability generating functional of the
PPP.
\else
\begin{align}
L_{I_{2}}(s) & = \mathbb{E}\left(\prod_{x\in\Phi_{[2]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right)\nonumber\\
& = \mathbb{E}\left(\prod_{x\in\Phi_{[2]}}\frac{1}{1+sl(x)}\frac{1}{1+sl(m(x))}\right)\label{eq:I2}\\
& \overset{\left(a\right)}{=} \exp\left(-\lambda p_{2}\int_{\mathbb{R}^{2}}\left(1-v(x)\right)\right)\label{eq:I2-1}\\
& = \exp(-\lambda p_{2}F(s,\alpha,R)),\label{L2}
\end{align}
where (a) follows from the probability generating functional of the
PPP with $v(x)=\frac{1}{1+sl(x)}\frac{1}{1+sl(m(x))}.$
\fi
As a result, the conditional success probability for an HD link is
\begin{align}
p_{s}^{\rm HD} & = L_{I_{1}}(\theta R^{\alpha})L_{I_{2}}(\theta R^{\alpha})\label{HDps}\\
& = \exp(-\lambda p_{1}H(\theta R^{\alpha},\alpha))\exp(-\lambda p_{2}F(\theta R^{\alpha},\alpha,R)),
\end{align}
which completes the proof for $p_{s}^{\rm HD}$.
Conditioned on an FD link, the interference contains an extra term, namely the residual self-interference $\beta/K$. Hence, the interference for an FD link consists of three parts:
\[
I=\sum_{x\in\Phi_{\left[1\right]}}h_{x}l(x)+\sum_{x\in\Phi_{\left[2\right]}}\left(h_{x}l(x)+h_{m(x)}l(m(x))\right) + \frac{\beta}{K}.
\]
The first two terms are the same as in the proof of $p_s^{\rm HD}$ while the third term is the residual self-interference. Hence, the Laplace transform of the interference follows as
\begin{eqnarray}
L_{I}(s) & = & L_{I_{1}}(s) L_{I_{2}}(s) e^{-\frac{s\beta}{K}}.\label{eq:Two2}
\end{eqnarray}
As a result, the conditional success probability $p_{s}^{\rm FD}$ is
\begin{align*}
p_{s}^{\rm FD} & = L_{I_{1}}(\theta R^{\alpha})L_{I_{2}}(\theta R^{\alpha})e^{-\frac{\theta R^{\alpha}\beta}{K}}\\
& = \kappa p_{s}^{\rm HD},
\end{align*}
where the last step is from (\ref{HDps}).
\end{IEEEproof}
Alternatively, $p_s^{\rm HD}$ can also be derived using the results for the Gauss-Poisson process \cite{Guo14ISIT}.
The fact that the conditional success probability $p_s^{\rm HD}$ (and the Laplace transform of the interference)
is a product of two terms follows from the
independence of the point processes $\Phi_{\left[i\right]}$. The names of the functions $H$ and $F$ are chosen to reflect the fact that they represent the half- and full-duplex case, respectively.
The residual self-interference for FD links simply adds an exponential factor to the success probability for HD links, similar to the effect of noise as in \cite[page 105]{Haenggi12book}. Theorem \ref{sucProb} also reveals the connection between the two conditional success probabilities $p_{s}^{\rm FD}$ and $p_{s}^{\rm HD}$. As expected, $p_{s}^{\rm FD}\leq p_{s}^{\rm HD}$, with equality for perfect self-interference cancellation.
The unconditional success probability can be easily obtained from the results in Theorem \ref{sucProb}.
\begin{cor}\label{sucProb_FD2}
In a wireless network described by the marked PPP $\hat{\Phi}$, the unconditional success
probability $p_{s}$ is given by
\begin{equation}
p_{s}=\left(p_1+\kappa p_2 \right) e^{-\lambda p_{1}H(\theta R^{\alpha},\alpha)}e^{-\lambda p_{2}F(\theta R^{\alpha},\alpha,R)}.\label{eq:ps-4}
\end{equation}
\end{cor}
\begin{IEEEproof}
Since a link is HD with probability $p_1$ and FD with probability $p_2$, the unconditional success probability from (\ref{eq:ps}) is the average
\begin{equation}
p_s = p_1 p_{s}^{\rm HD} + p_2 p_{s}^{\rm FD}.\label{ps0}
\end{equation}
Inserting the results from (\ref{eq:ps-2}) and (\ref{eq:ps-3}), we have (\ref{eq:ps-4}).
\end{IEEEproof}
The (un)conditional success probabilities are not in strict closed form due to the integral form of $F(\theta R^{\alpha},\alpha,R)$. However,
simple yet tight bounds can be obtained.
\begin{thm}
The conditional success probability $p_{s}^{\rm HD}$ is lower and upper bounded by
\begin{equation}
\underline{p}_{s}=\exp(-\lambda(p_{1}+2p_{2})H(\theta R^{\alpha},\alpha))\label{lb}
\end{equation}
and
\begin{equation}
\overline{p}_{s}=\exp(-\lambda(p_{1}+p_{2}(1+\delta))H(\theta R^{\alpha},\alpha)),\label{ub}
\end{equation}
and, similarly, $p_{s}^{\rm FD}$ is bounded as
\begin{equation}
\kappa \underline{p}_{s} \leq p_{s}^{\rm FD} \leq \kappa \overline{p}_{s}.\label{b1}
\end{equation}
The unconditional success probability is lower and upper bounded as
\begin{equation}
(p_1 + \kappa p_2)\underline{p}_{s} \leq p_s \leq (p_1 + \kappa p_2)\overline{p}_{s}.\label{b2}
\end{equation}
\end{thm}
\begin{IEEEproof}
Bounds only need to be established for the second factor in the conditional success
probability, i.e., the one containing the integral $F(\theta R^{\alpha},\alpha,R)$.
Lower bound: from the second factor in (\ref{eq:Two}),
\ifCLASSOPTIONonecolumn
\begin{eqnarray}
L_{I_{2}}(s) & = & \mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right)\nonumber \\
& \overset{\left(a\right)}{\geq} & \mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-sh_{x}l(x)}\right)\mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-sh_{m(x)}l(m(x))}\right)\label{eq:lb2}\\
& \overset{\left(b\right)}{=} & \exp(-2\lambda p_{2}H(s,\alpha)),\label{K1}
\end{eqnarray}
\else
\begin{align}
L_{I_{2}}(s) & = \mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-s\left(h_{x}l(x)+h_{m(x)}l(m(x))\right)}\right)\nonumber \\
& \overset{\left(a\right)}{\geq} \mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-sh_{x}l(x)}\right)\mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-sh_{m(x)}l(m(x))}\right)\label{eq:lb2}\\
& \overset{\left(b\right)}{=} \exp(-2\lambda p_{2}H(s,\alpha)),\label{K1}
\end{align}
\fi
where (a) follows from the FKG inequality \cite[Thm. 10.13]{Haenggi12book} since
both $\prod_{x\in\Phi}e^{-sh_{x}l(x)}$ and \ifCLASSOPTIONonecolumn \\ \else \fi $\prod_{x\in\Phi}e^{-sh_{m(x)}l(m(x))}$
are decreasing random variables. In (\ref{eq:lb2}), the first term
is similar to the calculation of $L_{I_{1}}(s)$ with $\Phi_{\left[1\right]}$
replaced by $\Phi_{\left[2\right]}$ while in the second term, $m(\Phi_{\left[2\right]}) $ is a PPP with the same density as $\Phi_{\left[2\right]}$
due to the displacement theorem \cite[page 35]{Haenggi12book}. As a result, the two factors in (\ref{eq:lb2}) are equal, and
\begin{align*}
p_{s}^{\rm HD}
& \geq L_{I_{1}}(\theta R^{\alpha})\exp(-2\lambda p_{2}H(\theta R^{\alpha},\alpha))
= \underline{p}_{s}.
\end{align*}
Upper bound: from (\ref{eq:I2}),
\begin{align*}
L_{I_{2}}(s) & = \mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}\frac{1}{1+sl(x)}\frac{1}{1+sl(m(x))}\right)\\
& \leq \left\{ K_{1}(s,\alpha)K_{2}(s,\alpha)\right\} ^{\frac{1}{2}},
\end{align*}
which follows from the Cauchy-Schwarz inequality with $K_{1}(s,\alpha)=\mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}\frac{1}{\left(1+sl(x)\right)^{2}}\right)$
and $K_{2}(s,\alpha)=\mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}\frac{1}{\left(1+sl(m(x))\right)^{2}}\right)$. We have
\begin{align*}
K_{1}(s,\alpha)
& = \exp\left(-2\pi\lambda p_{2}\int_{0}^{\infty}\left(1-\frac{1}{\left(1+sr^{-\alpha}\right)^{2}}\right)rdr\right)\\
& = \exp\left(-\pi\lambda p_{2}(1+\delta)\Gamma(1+\delta)\Gamma(1-\delta)s^{\delta}\right)\\
& = \exp(-\lambda p_{2}(1+\delta)H(s,\alpha)),
\end{align*}where $\Gamma(\cdot)$ is the gamma function.
$K_{2}(s,\alpha)=K_{1}(s,\alpha)$ because $m(\Phi_{\left[2\right]})$
is a PPP with the same density as $\Phi_{\left[2\right]}$. As
a result,
\begin{align}
L_{I_{2}}(s) & \leq \left\{ K_{1}(s,\alpha)K_{2}(s,\alpha)\right\} ^{\frac{1}{2}}\nonumber\\
& = \exp(-\lambda p_{2}(1+\delta)H(s,\alpha)).\label{K2}
\end{align}
Therefore,
\begin{align*}
p_{s}^{\rm HD}
& \leq e^{-\lambda p_{1}H(\theta R^{\alpha},\alpha)}e^{-\lambda p_{2}(1+\delta)H(\theta R^{\alpha},\alpha)}
= \overline{p}_{s}.
\end{align*}
The lower and upper bounds of $p_{s}^{\rm FD}$ and $p_{s}$ simply follow from (\ref{eq:ps-3}) and (\ref{eq:ps-4}).
\end{IEEEproof}
The lower bound can be intuitively
understood as replacing the interference of the FD nodes (which
form two {\em dependent} PPPs) by that of two {\em independent} PPPs with the same
density; by the FKG inequality, this overestimates the interference.
The upper bound turns out to be the same as the result obtained
by assuming $l(x)=l(m(x))$, $\forall x \in \Phi$, i.e., that the distances from the receiver
at the origin to the two nodes of an interfering FD pair are equal.
Indeed, assuming $l(x)=l(m(x))$, we have
\begin{align*}
\tilde{L}_{I_{2}}(s) & = \mathbb{E}\left(\prod_{x\in\Phi_{\left[2\right]}}e^{-s\left(h_{x}+h_{m(x)}\right)l(x)}\right)\\
& = \exp\left(-\pi\lambda p_{2}\mathbb{E}\left[\left(h_{x}+h_{m(x)}\right)^{\delta}\right]\Gamma\left(1-\delta\right)s^{\delta}\right)\\
& = \exp(-\pi\lambda p_{2}\Gamma(2+\delta)\Gamma(1-\delta)s^{\delta})\\
& = \exp(-\pi\lambda p_{2}\left(1+\delta\right)\Gamma(1+\delta)\Gamma(1-\delta)s^{\delta})\\
& = \exp(-\lambda p_{2}(1+\delta)H(s,\alpha))
\end{align*}
where $\mathbb{E}\left[\left(h_{x}+h_{y}\right)^{\delta}\right]=\Gamma(2+\delta)$
since $h_{x}+h_{y}$ has an Erlang distribution with shape parameter $2$,
and $\Gamma(2+\delta)=(1+\delta)\Gamma(1+\delta)$. Hence, the approximate
success probability under the assumption $l(x)=l(m(x))$ is $\tilde{p}_{s} = L_{I_{1}}(\theta R^{\alpha})\tilde{L}_{I_{2}}(\theta R^{\alpha})= \overline{p}_{s}$. This result is not surprising: equality holds in the Cauchy-Schwarz
inequality if $\prod_{x\in\Phi_{[2]}}\frac{1}{(1+sl(x))^{2}}$
and $\prod_{x\in\Phi_{[2]}}\frac{1}{(1+sl(m(x)))^{2}}$
are linearly dependent, which is clearly the case when $l(x)=l(m(x))$.
The horizontal gap between two success probability curves (or SIR distributions) is often quite insensitive to the success probability at which it is evaluated and to the path loss model, as pointed out in \cite{Haenggi14, Guo15TC}. The horizontal gap is defined as
\begin{equation}
G(p) \triangleq \frac{p_{s_1}^{-1}(p)}{p_{s_2}^{-1}(p)}, \quad p\in (0,1),\label{hg}
\end{equation}
where $p_{s}^{-1}(p)$ is the inverse of the success probability and $p$ is the target success probability.
The sharpness of the upper and lower bounds of the success probabilities is established by the following corollary.
\begin{cor}
\label{cor:HGain} The horizontal gap between the upper and lower bound of the conditional success probability $p_s^{\rm HD}$ does not depend on the target success probability and is given by
\begin{equation}
G = \left(\frac{p_1+2p_2 }{p_1+p_2(1+\delta)}\right)^{{1}/{\delta}}.\label{HG}
\end{equation}
Furthermore, the horizontal gap between the lower and upper bound of the conditional success probability $p_s^{\rm FD}$ and the bounds of the unconditional success probability $p_s$ under perfect self-interference cancellation is also $G$.
\end{cor}
\begin{IEEEproof}
The horizontal gap can be obtained by setting $\overline{p}_{s}(\overline{\theta})= \underline{p}_{s}(\underline{\theta})$ and calculating $G = {\overline{\theta}}/{\underline{\theta}}$. From (\ref{lb}) and (\ref{ub}), we have
\ifCLASSOPTIONonecolumn
\begin{equation}
\exp(-\lambda(p_{1}+2p_{2})H(\overline{\theta} R^{\alpha},\alpha))= \exp(-\lambda(p_{1}+p_{2}(1+\delta))H(\underline{\theta}R^{\alpha},\alpha)).
\end{equation}
\else
\begin{equation}
e^{-\lambda(p_{1}+2p_{2})H(\overline{\theta} R^{\alpha},\alpha)}= e^{-\lambda(p_{1}+p_{2}(1+\delta))H(\underline{\theta}R^{\alpha},\alpha)}.
\end{equation}
\fi
Solving the above equation for the ratio ${\overline{\theta}}/{\underline{\theta}}$, using the fact that $H(\theta R^{\alpha},\alpha)\propto\theta^{\delta}$, we obtain (\ref{HG}). When the self-interference cancellation is perfect, $\kappa = 1$, and from (\ref{b1}) and (\ref{b2}), we obtain the same horizontal gap $G$ for $p_s^{\rm FD}$ and $p_s$.
\end{IEEEproof}
\begin{figure}[h]
\begin{centering}
\includegraphics[width=\figwidth]{Success_Prob_with_bounds5}
\par\end{centering}
\caption{Comparison of unconditional success probability between the theoretical result from (\ref{eq:ps-4}) and its bounds
as a function of the SIR threshold $\theta$ in dB: $\alpha=4$, $\lambda=0.1$, $R=1$,
$p_{0}=0$, $p_{1}=p_{2}=0.5$, $\beta = 0$. The horizontal gap from (\ref{HG}) is $G=36/25$ or $1.58$ dB for these parameters.}
\label{Fig:1}
\end{figure}
From the above corollary, it is apparent that the gap $G$ is independent of the SIR threshold $\theta$ and the target success probability; it only depends on $\delta$ and the transmit probabilities. Also, for $\alpha \downarrow 2$, $G \downarrow 1$. Figure \ref{Fig:1} plots the unconditional success probability from (\ref{eq:ps-4})
and its closed-form upper and lower bounds under perfect self-interference cancellation as a function of the SIR threshold in dB. To obtain the exact curve, the double integral $F(\theta R^{\alpha},\alpha,R)$ is numerically evaluated.
As can be seen, both bounds are tight, with the constant horizontal gap $G=1.58$ dB for $p_1=p_2 = 0.5$ and $\delta = 2/\alpha = 1/2$.
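As a sanity check, inserting $p_1=p_2=0.5$ and $\delta=1/2$ into (\ref{HG}) gives
\begin{equation*}
G=\left(\frac{0.5+2\cdot 0.5}{0.5+0.5\cdot\frac{3}{2}}\right)^{2}=\left(\frac{1.5}{1.25}\right)^{2}=\frac{36}{25},
\end{equation*}
i.e., $10\log_{10}(36/25)\approx 1.58$ dB, in agreement with the gap stated in the caption of Figure \ref{Fig:1}.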
Furthermore, we can examine the relationship between the two key functions ${F(\theta R^{\alpha},\alpha,R)}$ and ${H(\theta R^{\alpha},\alpha)}$. The following corollary bounds their ratio:
\begin{cor}
\label{cor:F}The ratio ${F}/{H}$ is bounded as follows:
\begin{equation}
1+\delta \leq \frac{F(\theta R^{\alpha},\alpha,R)}{H(\theta R^{\alpha},\alpha)}\leq 2, \quad\forall\;\theta>0, R>0, \alpha>2.\label{F2}
\end{equation}
Moreover, the ratio is independent of the link distance $R$ and
\begin{equation}
\lim_{\theta \rightarrow \infty}\frac{F(\theta R^{\alpha},\alpha,R)}{H(\theta R^{\alpha},\alpha)} = {1+\delta}.\label{FH2}
\end{equation}
\end{cor}
\begin{IEEEproof}
From the proof of the upper and lower bounds of the conditional success probability $p_s^{\rm HD}$, i.e., (\ref{K1}) and (\ref{K2}),
we have
\begin{equation}
\exp(-2\lambda p_{2}H(s,\alpha)) \leq L_{I_{2}}(s) \leq \exp(-\lambda p_{2}(1+\delta)H(s,\alpha)),
\end{equation}
where $L_{I_{2}}(s)=\exp(-\lambda p_{2}F(s,\alpha,R))$ from (\ref{L2}).
By taking the logarithm on both sides of the above, we have
\begin{equation}
-2\lambda p_{2}H(s,\alpha)\leq -\lambda p_{2}F(s,\alpha,R) \leq -\lambda p_{2}(1+\delta)H(s,\alpha),
\end{equation}
which leads to (\ref{F2}).
Regarding the independence of the link distance $R$: since $H(\theta R^{\alpha},\alpha)=\frac{\pi^{2}\delta {\theta}^{\delta}R^2}{\sin\left(\pi\delta\right)}$, it suffices to show that $F(\theta R^{\alpha},\alpha,R)$ is also proportional to $R^2$. By the change of variable $r_1 = {r}/{R}$, we can express (\ref{F}) as
\ifCLASSOPTIONonecolumn
\begin{align}
F(\theta R^{\alpha},\alpha,R)
&= R^2\int_{0}^{\infty}\left(2\pi-\frac{1}{1+\theta r_{1}^{-\alpha}}\int_{0}^{2\pi}\frac{d\varphi}{1+\theta \left(r_{1}^{2}+1+2r_1\cos\varphi\right)^{-\alpha/2}}\right) r_1dr_1,\label{F_R}
\end{align}
\else
\begin{align}
F(\theta R^{\alpha},\alpha,R)
&= R^2\int_{0}^{\infty}\left(2\pi-\frac{K(\theta, r_1, 1, \alpha)}{1+\theta r_{1}^{-\alpha}}\right) r_1dr_1,\label{F_R}
\end{align}
\fi
which establishes the independence of the link distance $R$. For the limit, the change of variable $r_2 = r_1 \theta^{-\frac{1}{\alpha}}$ yields
\ifCLASSOPTIONonecolumn
\begin{align}
F(\theta R^{\alpha},\alpha,R)
&= \theta^{\delta}R^2\int_{0}^{\infty}\left(2\pi-\frac{1}{1+ r_{2}^{-\alpha}}\int_{0}^{2\pi}\frac{d\varphi}{1+ \left(r_{2}^{2}+\theta^{-\delta}+2r_2\theta^{-\delta/2}\cos\varphi\right)^{-\alpha/2}}\right) r_2dr_2.\label{F_R2}
\end{align}
\else
\begin{align}
F(\theta R^{\alpha},\alpha,R)
&= \theta^{\delta}R^2\int_{0}^{\infty}\left(2\pi-\frac{K(1, r_2, \theta^{-\delta/2}, \alpha)}{1+ r_{2}^{-\alpha}}\right) r_2dr_2.\label{F_R2}
\end{align}
\fi
Therefore,
\ifCLASSOPTIONonecolumn
\begin{align}
\lim_{\theta \rightarrow \infty}\frac{F(\theta R^{\alpha},\alpha,R)}{H(\theta R^{\alpha},\alpha)} &= \lim_{\theta \rightarrow \infty}\frac{\int_{0}^{\infty}\left(2\pi-\frac{1}{1+ r_{2}^{-\alpha}}\int_{0}^{2\pi}\frac{d\varphi}{1+ \left(r_{2}^{2}+\theta^{-\frac{2}{\alpha}}+2r_2\theta^{-\frac{1}{\alpha}}\cos\varphi\right)^{-\alpha/2}}\right) r_2dr_2}{\frac{\pi^{2}\delta }{\sin(\pi\delta)}}\\
&= \frac{\int_{0}^{\infty}\left(2\pi-\frac{2\pi}{\left(1+ r_{2}^{-\alpha}\right)^2}\right) r_2dr_2}{\frac{\pi^{2}\delta }{\sin(\pi\delta)}}\\
&=1+\delta.
\end{align}
\else
\begin{align}
\lim_{\theta \rightarrow \infty}\frac{F(\theta R^{\alpha},\alpha,R)}{H(\theta R^{\alpha},\alpha)} &= \lim_{\theta \rightarrow \infty}\frac{\int_{0}^{\infty}\left(2\pi-\frac{K(1, r_2, \theta^{-\delta/2}, \alpha)}{1+ r_{2}^{-\alpha}}\right) r_2dr_2}{\frac{\pi^{2}\delta }{\sin(\pi\delta)}}\\
&= \frac{\int_{0}^{\infty}\left(2\pi-\frac{2\pi}{\left(1+ r_{2}^{-\alpha}\right)^2}\right) r_2dr_2}{\frac{\pi^{2}\delta }{\sin(\pi\delta)}}\\
&=1+\delta.
\end{align}
\fi
\end{IEEEproof}
This corollary is useful for calculating the SIR loss, the maximal throughput, and
their bounds in the following. We can also conclude that the upper bound on the success probability is asymptotically exact as $\theta\to \infty$, as illustrated in \figref{Fig:1}.
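Corollary \ref{cor:F} lends itself to a quick numerical sanity check. The sketch below (Python with NumPy/SciPy; the parameter choices $\alpha=4$, $\theta\in\{1,10,100\}$, $R=1$ and the truncation of the outer integral at $r=500$ are illustrative assumptions, not values from the paper) evaluates $F$ via the double-integral form (\ref{F_R}) and compares the ratio $F/H$ to the bounds $[1+\delta,2]$; it also verifies the closed form of $H$ against the standard integral $2\pi\int_{0}^{\infty}\bigl(1-\frac{1}{1+s r^{-\alpha}}\bigr)r\,dr$ \cite[page 103]{Haenggi12book}.

```python
import numpy as np
from scipy.integrate import quad

alpha = 4.0
delta = 2.0 / alpha  # delta = 2/alpha

def H_closed(theta):
    # Closed form H(theta R^alpha, alpha) = pi^2 delta theta^delta R^2 / sin(pi delta), R = 1
    return np.pi**2 * delta * theta**delta / np.sin(np.pi * delta)

def H_numeric(theta):
    # Standard PPP functional 2*pi*int_0^inf (1 - 1/(1+theta r^-alpha)) r dr,
    # rewritten as theta*r/(theta + r^alpha) to avoid negative powers at r = 0
    val, _ = quad(lambda r: theta * r / (theta + r**alpha), 0, np.inf)
    return 2 * np.pi * val

def F_numeric(theta):
    # Double-integral form of F(theta R^alpha, alpha, R) with R = 1 (after r1 = r/R)
    def outer(r):
        d2 = lambda phi: r * r + 1.0 + 2.0 * r * np.cos(phi)  # squared distance to the partner node
        K, _ = quad(lambda phi: d2(phi)**(alpha / 2) / (d2(phi)**(alpha / 2) + theta),
                    0, 2 * np.pi)
        return (2 * np.pi - K * r**alpha / (r**alpha + theta)) * r
    val, _ = quad(outer, 0.0, 500.0, limit=200)  # truncation at 500: tail is negligible for alpha = 4
    return val

h_rel_err = abs(H_numeric(10.0) - H_closed(10.0)) / H_closed(10.0)
ratios = {th: F_numeric(th) / H_closed(th) for th in (1.0, 10.0, 100.0)}
```

The assertions encode (\ref{F2}): for $\alpha=4$, the ratio $F/H$ must lie between $1+\delta=1.5$ and $2$ for every $\theta$.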
\subsection{SIR loss due to FD operation}
In this subsection, we investigate the SIR loss caused by FD operation in wireless networks described by $\hat{\Phi}$. Consider two extreme cases: one where all concurrently
transmitting nodes operate in HD mode, i.e., $p_1=1$, and one
where all concurrently transmitting nodes operate in FD mode, i.e., $p_2 = 1$. The success probabilities of HD-only and FD-only networks follow from (\ref{eq:ps-2}) and (\ref{eq:ps-3}) as
\begin{equation}
p_{s, \;p_1=1} = \exp(-\lambda H(\theta R^{\alpha},\alpha))\label{ps_HD}
\end{equation}
and \begin{equation}
p_{s, \;p_2=1} = \kappa\exp(-\lambda F(\theta R^{\alpha},\alpha,R)).\label{ps_FD}\end{equation}
\begin{figure}[h]
\begin{centering}
\includegraphics[width=\figwidth]{Success_Prob_FD_vs_HD}
\par\end{centering}
\caption{Comparison of success probabilities of FD-only networks, its bounds, and HD-only networks
as a function of the SIR threshold $\theta$ in dB under perfect self-interference cancellation: $\alpha=4$, $\lambda=0.1$, $R=1$,
$p_{0}=0$, $\beta = 0$.}
\label{Fig:1-1}
\end{figure}
Figure \ref{Fig:1-1} plots the success probability of FD-only networks
and its upper and lower bounds, as well as the success probability of HD-only networks, as a function of the SIR threshold in dB. Clearly, FD transmission in an FD-only wireless network leads to an SIR loss in the success probability (the ccdf of the SIR) compared to the corresponding HD-only network. The SIR loss can be defined as the horizontal gap between the two SIR distributions, in analogy to (\ref{hg}):
\begin{definition} The SIR loss between FD-only and HD-only networks is defined as
\begin{equation}
G(p) \triangleq \frac{\theta_{\rm HD}(p)}{\theta_{\rm FD}(p)}=\frac{p_{s, \;p_1=1}^{-1}(p)}{p_{s, \;p_2=1}^{-1}(p)},
\end{equation}
where $p$ is the target success probability and $p_{s}^{-1}$ is the inverse of the ccdf of the SIR. $\theta_{\rm HD}(p)$ is the SIR threshold when the target success probability is $p$, i.e., $\theta_{\rm HD}(p)=p_{s, \;p_1=1}^{-1}(p)$. Similarly, $\theta_{\rm FD}(p)=p_{s, \;p_2=1}^{-1}(p)$.
\end{definition}
The following theorem bounds this SIR loss.
\begin{thm} \label{SIR loss}The SIR loss $G(p)$ between the FD-only network and HD-only network is bounded as
\begin{equation}
(1+\delta + \gamma(\theta_{\rm FD}(p)))^{{1}/{\delta}} \leq G(p) \leq (2 + \gamma(\theta_{\rm FD}(p)))^{{1}/{\delta}},\label{SIR_loss}
\end{equation}
where $\gamma(x)= x^{1-\delta}\frac{R^{\alpha-2}\beta \sin(\pi \delta)}{\lambda \pi^2 \delta K}$.
\end{thm}
\begin{IEEEproof}The proof follows by equating
\[p_{s, \;p_1=1} (\theta_{\rm HD}) = p_{s, \;p_2=1}(\theta_{\rm FD})\]
and solving for the ratio $\frac{\theta_{\rm HD}}{\theta_{\rm FD}}$.
From (\ref{ps_HD}) and (\ref{ps_FD}), we obtain
\[\lambda H(\theta_{\rm HD} R^{\alpha},\alpha) = \lambda F(\theta_{\rm FD} R^{\alpha},\alpha,R) + \theta_{\rm FD} R^{\alpha} \beta/K,\]
and from (\ref{F2}) in Corollary \ref{cor:F}, we have
\ifCLASSOPTIONonecolumn
\begin{equation}
(1+\delta) \lambda H(\theta_{\rm FD} R^{\alpha},\alpha) \leq \lambda H(\theta_{\rm HD} R^{\alpha},\alpha) - \theta_{\rm FD} R^{\alpha} \beta/K \leq 2\lambda H(\theta_{\rm FD} R^{\alpha},\alpha).\label{F_in}
\end{equation}
\else
\begin{align}
(1+\delta) \lambda H(\theta_{\rm FD} R^{\alpha},\alpha) \leq \lambda H(\theta_{\rm HD} R^{\alpha},\alpha) - \theta_{\rm FD} R^{\alpha} \beta/K \nonumber\\
\leq 2\lambda H(\theta_{\rm FD} R^{\alpha},\alpha).\label{F_in}
\end{align}
\fi
By inserting $H(s,\alpha)=\frac{\pi^{2}\delta s^{\delta}}{\sin\left(\pi\delta\right)}$ into (\ref{F_in}) and after elementary manipulations, (\ref{F_in}) leads to
\begin{equation}
\left(1+\delta + \gamma(\theta_{\rm FD})\right)^{{1}/{\delta}} \leq \frac{\theta_{\rm HD}}{\theta_{\rm FD}} \leq (2 + \gamma(\theta_{\rm FD}))^{{1}/{\delta}}.\label{theta_in}
\end{equation}
Hence, we have (\ref{SIR_loss}).
\end{IEEEproof}
The bounds on the SIR loss depend on the SIR threshold $\theta_{\rm FD}$ whenever the self-interference cancellation is imperfect, since then $\gamma(\theta_{\rm FD})>0$. This means that imperfect self-interference cancellation introduces an extra SIR loss compared to perfect cancellation, and that the SIR loss grows as the residual self-interference increases, as shown in Figure \ref{Fig:1-3}.
\begin{figure}[h]
\begin{centering}
\includegraphics[width=\figwidth]{Success_Prob_at_various_beta3}
\par\end{centering}
\caption{Success probabilities of FD-only networks ($p_{s, \;p_2=1}(\theta)$) and HD-only networks ($p_{s, \;p_1=1}(\theta)$) for the two SIPRs $\beta = 10^{-4}$ and $\beta=0$. The other parameters are $\alpha=4$, $\lambda=0.1$, $R=1$, and $K=-34$ dB (assuming $G_{\rm tx}=G_{\rm rx}=2$, i.e., $3$ dBi, and $f_c=2.4$ GHz).}
\label{Fig:1-3}
\end{figure}
\begin{cor}\label{cor:G}The SIR loss $G(p)$ between the FD-only network and HD-only network under perfect self-interference cancellation ($\beta =0$) is bounded as
\begin{equation}
(1+\delta)^{{1}/{\delta}} \leq G(p) \leq {2}^{{1}/{\delta}}.\label{SIR_loss2}
\end{equation}
\end{cor}
\begin{IEEEproof}
Follows from Theorem \ref{SIR loss} since $\gamma(\theta_{\rm FD})=0$ for $\beta=0$.
\end{IEEEproof}
For perfect self-interference cancellation ($\beta=0$), the bounds of the SIR loss only depend on the path loss exponent $\alpha=2/\delta$, i.e., they are independent of the SIR threshold, the target success probability $p$, and the link distance $R$.
Corollary \ref{cor:G} can also be proven in the following way. Under perfect self-interference cancellation, from (\ref{ps_HD}), we have \[p_{s, \;p_1=1}(\theta_{\rm HD})=e^{-c\theta_{\rm HD}^{\delta}},\]
where $c = \frac{\lambda\pi^{2}\delta R^2}{\sin(\pi\delta)}$.
From (\ref{ps_FD}) and (\ref{F2}), we have
\begin{align*}
\underline{p}_{s, \;p_2=1}(\theta_{\rm FD})&=e^{-2c\theta_{\rm FD}^{\delta}},\\
\overline{p}_{s, \;p_2=1}(\theta_{\rm FD})&=e^{-(1+\delta)c\theta_{\rm FD}^{\delta}}.
\end{align*}
By solving
\begin{align*}
\underline{p}_{s, \;p_2=1}(\theta_{\rm FD})&=p_{s, \;p_1=1}(\theta_{\rm FD} \overline{G}),\\
\overline{p}_{s, \;p_2=1}(\theta_{\rm FD})&=p_{s, \;p_1=1}(\theta_{\rm FD} \underline{G}),
\end{align*}
we obtain the upper bound $\overline{G}=2^{\frac{1}{\delta}}$ and the lower bound $\underline{G}=(1+\delta)^{\frac{1}{\delta}}$.
Therefore, the upper (lower) bound of the SIR loss is the constant horizontal gap between the lower (upper) bound of the success probability of the FD-only network and the success probability of the HD-only network. Under perfect self-interference cancellation, the upper bound of the success probability of the FD-only network is the success probability curve of the HD-only network left-shifted by $\underline{G}^{\rm dB}=\frac{10\log_{10}{(1+\delta)}}{\delta}$ dB, whereas the lower bound is that curve left-shifted by $\overline{G}^{\rm dB}=\frac{10\log_{10}{2}}{\delta}$ dB. For $\alpha=4$, the upper bound in Figure \ref{Fig:1-1} is thus equivalent to the HD curve left-shifted by $3.5$ dB, while the lower bound is equivalent to the HD curve left-shifted by $6.0$ dB.
To summarize, FD-only operation can result in an SIR loss of up to $6.0$ dB compared to HD-only operation, even under perfect self-interference cancellation. Remarkably, this bound only depends on the path loss exponent. The above analysis accurately quantifies the SIR loss caused by the extra interference introduced by FD transmissions. Imperfect self-interference cancellation further adds to the SIR loss, especially when the SIR threshold is high, as shown in \figref{Fig:1-3}.
\section{Throughput Analysis\label{sec:Throughput-Performance-Analysis}}
\subsection{Problem statement}
The purpose of FD transmission in a network is to increase
the network throughput. While FD increases the throughput of an isolated link, it also causes additional interference to the other links; as analyzed in the previous section, FD transmission leads to an SIR loss. Hence, there is a tradeoff between link throughput and interference when the nodes in the network choose between FD and HD. Given a network
consisting of nodes with FD and HD capability,
how should a node choose between FD and HD operation in order to maximize the network-wide throughput as the network configuration varies? In particular, it is important to determine under which conditions one should choose FD.
First, we need to define the throughput. In a
random wireless network described by $\hat{\Phi}$, we consider the throughput of
the \textit{typical link}, as introduced in the network model.
The typical link is in HD mode with probability $p_{1}$ and
in FD mode with probability $p_{2}$. Therefore, its throughput can be defined as follows:
\begin{defn}\label{linkT}
For a wireless network described by $\hat{\Phi}$, the throughput is defined as
\begin{equation}
T\triangleq \lambda \left(p_{1}p_{s}^{\rm HD}\log(1+ \theta)+2p_{2}p_{s}^{\rm FD}\log(1+ \theta)\right),\label{eq:T}
\end{equation}
assuming that a spectral efficiency of $\log(1+\theta)$ is achievable for a SIR threshold $\theta$.
\end{defn}
Let $\lambda_{1} \triangleq \lambda p_{1}$ and $\lambda_{2} \triangleq \lambda p_{2}$ be the densities of HD links and FD links, respectively. For a fixed node density $\lambda$, $\lambda_{1}$ and $\lambda_{2}$ can be tuned by changing the transmit probabilities $p_1$ and $p_2$, or vice versa. This allows us to optimize the throughput over the densities of HD and FD links ($\lambda_{1}$ and $\lambda_{2}$) instead of the transmit probabilities $p_1$ and $p_2$, which also reduces the number of variables by one. Hence, (\ref{eq:T}) can be rewritten as
\begin{equation}
T= \lambda_{1}p_{s}^{\rm HD}\log(1+ \theta)+2\lambda_{2}p_{s}^{\rm FD}\log(1+ \theta).\label{eq:T2}
\end{equation}
Given the definition of throughput, there are two extreme cases that
are particularly relevant: HD-only networks and FD-only networks, as mentioned earlier. Their throughputs are given as
\begin{equation}
T^{\mbox{\scriptsize{\rm HD}}}=\lambda_{1}\log(1+ \theta)\exp(-\lambda_{1}H(\theta R^{\alpha},\alpha))\label{HD FD}
\end{equation}
and
\begin{equation}
T^{\mbox{\scriptsize{\rm FD}}}=2\lambda_{2}\kappa\log(1+ \theta)\exp(-\lambda_{2}F(\theta R^{\alpha},\alpha,R)).\label{HD FD1}
\end{equation}
With the above setup, the goal is to optimize the throughput over the densities $\lambda_1$ and $\lambda_2$:
\begin{equation}
T_{\max} = \max_{\lambda_1,\lambda_2}T(\lambda_{1},\lambda_{2}).
\end{equation}
It is also interesting to find the relationship between the maxima
of $T^{\mbox{\scriptsize{\rm HD}}}$, $T^{\mbox{\scriptsize{\rm FD}}}$ and $T$, denoted as $T_{\max}^{\mbox{\scriptsize{\rm HD}}}$,
$T_{\max}^{\mbox{\scriptsize{\rm FD}}}$ and $T_{\max}$.
\subsection{Throughput optimization}
Inserting $p_{s}^{\rm HD}$ and $p_{s}^{\rm FD}$ from (\ref{eq:ps-2}) and (\ref{eq:ps-3}) into (\ref{eq:T2}), we have
\begin{equation}
T(\lambda_{1},\lambda_{2})= \left(\lambda_{1}+2\lambda_{2}\kappa\right)\exp(-\lambda_{1}H)\exp(-\lambda_{2}F)\log(1+ \theta).\label{eq:T1}
\end{equation}
From now on, we
use $H$ to denote $H(\theta R^{\alpha},\alpha)$ and $F$
to denote $F(\theta R^{\alpha},\alpha,R)$ for brevity. $T_{\max}^{\mbox{\scriptsize{\rm HD}}}$ and $T_{\max}^{\mbox{\scriptsize{\rm FD}}}$
are given by the following lemma.
\begin{lem}
\label{lem:6}For an HD-only network described
by $\hat{\Phi}$ with $p_{1}=1$, $T_{\max}^{\mbox{\scriptsize{\rm HD}}}$ is given
by
\begin{equation}
T_{\max}^{\mbox{\scriptsize{\rm HD}}}=
T^{\mbox{\scriptsize{\rm HD}}}\!\left(\frac{1}{H}\right)=\frac{1}{eH}\log(1+\theta),\label{eq:hdmax}
\end{equation}
with optimal density of HD links
\begin{equation}
\lambda_{1}^{{\scriptsize{\rm opt}}}=\frac{1}{H}.\label{eq:p1opt}
\end{equation}
For a FD-only network, described by
$\hat{\Phi}$ with $p_{2}=1$, $T_{\max}^{\mbox{\scriptsize{\rm FD}}}$ is given by
\begin{equation}
T_{\max}^{\mbox{\scriptsize{\rm FD}}}=
T^{\mbox{\scriptsize{\rm FD}}}\!\left(\frac{1}{F}\right)=\frac{2\kappa}{eF}\log(1+\theta),\label{eq:fdmax}
\end{equation}
with optimal density of FD links
\begin{equation}
\lambda_{2}^{{\scriptsize{\rm opt}}}=\frac{1}{F}.\label{eq:p2opt}
\end{equation}
\end{lem}
\begin{IEEEproof}
The proof is straightforward by taking the derivatives of $T^{\mbox{\scriptsize{\rm HD}}}$
and $T^{\mbox{\scriptsize{\rm FD}}}$ with respect to $\lambda_{1}$ and $\lambda_{2}$, respectively.
\end{IEEEproof}
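For completeness, a direct calculation (using the shorthand $H$ and $F$) gives
\begin{align*}
\frac{dT^{\mbox{\scriptsize{\rm HD}}}}{d\lambda_{1}} &= \log(1+\theta)\,e^{-\lambda_{1}H}\left(1-\lambda_{1}H\right),\\
\frac{dT^{\mbox{\scriptsize{\rm FD}}}}{d\lambda_{2}} &= 2\kappa\log(1+\theta)\,e^{-\lambda_{2}F}\left(1-\lambda_{2}F\right),
\end{align*}
which vanish at $\lambda_{1}=1/H$ and $\lambda_{2}=1/F$, respectively, giving $T_{\max}^{\mbox{\scriptsize{\rm HD}}}=\frac{\log(1+\theta)}{eH}$ and $T_{\max}^{\mbox{\scriptsize{\rm FD}}}=\frac{2\kappa\log(1+\theta)}{eF}$.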
A similar result for HD-only networks has been presented in \cite[Proposition 4]{Haenggi09twc}. In fact, ${1}/{H}$ and ${1}/{F}$ are the spatial efficiencies~\cite{Haenggi09twc} of HD-only networks and FD-only networks, respectively. The spatial efficiency quantifies how efficiently a wireless network uses space as a resource; a large spatial efficiency indicates high spatial reuse.
In the following theorem, we show that $T_{\max}$ is achieved by
setting all concurrently transmitting nodes to FD mode, to HD mode, or to a mixed FD/HD mode, depending on one simple condition.
\begin{thm} Let
\begin{equation} L \triangleq\{(\lambda_1, \lambda_2) \in (\mathbb{R}^+)^2\colon \lambda_1 + 2\kappa\lambda_2 = H^{-1}\}.\label{condition}\end{equation}
For a wireless network described by $\hat{\Phi}$, the maximal throughput
is given by
\begin{equation}
T_{\max} = \begin{cases}
T_{\max}^{\mbox{\scriptsize{\rm FD}}} & \mbox{if } F < 2\kappa H\\
T_{\max}^{\mbox{\scriptsize{\rm HD}}} & \mbox{if } F > 2\kappa H\\
T_{\max}^{\mbox{\scriptsize{\rm HD}}}= T_{\max}^{\mbox{\scriptsize{\rm FD}}} & \mbox{if } F = 2\kappa H
\end{cases}\label{eq:equ}
\end{equation}
with the optimal densities of HD and FD links
\begin{equation}
\left(\lambda_{1}^{{\scriptsize{\rm opt}}},\lambda_{2}^{{\scriptsize{\rm opt}}}\right)\begin{cases}
=\left(0, \frac{1}{F}\right) & \mbox{if } F < 2\kappa H\\
=\left(\frac{1}{H}, 0\right) & \mbox{if } F > 2\kappa H\\
\in L & \mbox{if } F = 2\kappa H. \end{cases}
\end{equation}
\end{thm}
\begin{IEEEproof}
Taking the derivative of $T$ w.r.t. $\lambda_{1}$ and $\lambda_{2}$ leads to
\begin{equation}
\frac{\partial T}{\partial \lambda_{1}}=\exp(-\lambda_{1}H-\lambda_{2}F)\log(1+\theta)[1- H(2\kappa \lambda_{2}+\lambda_{1})],\label{eq:p1}
\end{equation}
\begin{equation}
\frac{\partial T}{\partial \lambda_{2}}=\exp(-\lambda_{1}H-\lambda_{2}F)\log(1+\theta)[2\kappa-F(2\kappa\lambda_{2}+\lambda_{1})].\label{eq:p2}
\end{equation}
Setting $\frac{\partial T}{\partial \lambda_{1}}=0$ and $\frac{\partial T}{\partial \lambda_{2}}=0$, we have
\begin{equation}
\lambda_{1}+2\kappa\lambda_{2} = \frac{1}{H}\label{condition1}
\end{equation}
\begin{equation}
\lambda_{1}+2\kappa\lambda_{2} = \frac{2\kappa}{F}.\label{condition2}
\end{equation}
\leavevmode
\begin{enumerate} \item $F= 2\kappa H$: Both partial derivatives are zero as long as $(\lambda_1, \lambda_2)\in L$. Both $({1}/{H}, 0)$ and $(0, {1}/{F})$ lie in $L$, since $2\kappa/F = 1/H$ in this case. Hence, $T_{\max} = T({1}/{H}, 0) = T_{\max}^{\mbox{\scriptsize{HD}}}$ and also $T_{\max} = T(0, {1}/{F}) = T_{\max}^{\mbox{\scriptsize{FD}}}$. This case is the break-even point where FD and HD have the same throughput, i.e., the typical link achieves the same throughput regardless of whether it is in FD or HD mode.
\item $F > 2\kappa H$: Setting $\frac{\partial T}{\partial \lambda_{1}}=0$ yields (\ref{condition1}), along which $\frac{\partial T}{\partial \lambda_{2}}<0.$ Hence the maximum of $T$ under (\ref{condition1}) is achieved at $\left(\lambda_{1},\lambda_{2}\right)=\left({1}/{H}, 0\right)$, and $T({1}/{H}, 0)=T^{\mbox{\scriptsize{HD}}}({1}/{H})=T^{\mbox{\scriptsize{HD}}}_{\max}$. On the other hand, setting $\frac{\partial T}{\partial \lambda_{2}}=0$ yields (\ref{condition2}), along which $\frac{\partial T}{\partial \lambda_{1}}>0,$ so the maximum of $T$ under (\ref{condition2}) is achieved at $\left(\lambda_{1},\lambda_{2}\right)=({2\kappa}/{F}, 0).$ Since $T({2\kappa}/{F}, 0)<T({1}/{H}, 0)$, we conclude that $T_{\max} = T^{\mbox{\scriptsize{HD}}}_{\max}$ in this case.
\item $F < 2\kappa H$: By similar reasoning as in the second case, we conclude that $T_{\max} = T^{\mbox{\scriptsize{FD}}}_{\max}$ in this case.
\end{enumerate}\end{IEEEproof}
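The case analysis above can be checked numerically. The following sketch (an illustration of ours, not part of the analysis) reconstructs the throughput from the partial derivatives (\ref{eq:p1}) and (\ref{eq:p2}) as $T(\lambda_{1},\lambda_{2})=(\lambda_{1}+2\kappa\lambda_{2})\log(1+\theta)\exp(-\lambda_{1}H-\lambda_{2}F)$ and locates its maximizer by brute force; the parameter values are placeholders chosen so that $F>2\kappa H$ (the HD case of the theorem).

```python
import math

def throughput(l1, l2, H, F, kappa, theta):
    # T reconstructed from its partial derivatives (eq:p1)-(eq:p2):
    # T = (lambda_1 + 2*kappa*lambda_2) * log(1+theta) * exp(-lambda_1*H - lambda_2*F)
    return (l1 + 2 * kappa * l2) * math.log(1 + theta) * math.exp(-l1 * H - l2 * F)

# Placeholder parameters with F > 2*kappa*H, so HD should win (second case).
H, F, kappa, theta = 1.0, 3.0, 1.0, math.e - 1.0

# Brute-force grid search over (lambda_1, lambda_2) in [0, 2]^2.
best = max(
    (throughput(i / 100, j / 100, H, F, kappa, theta), i / 100, j / 100)
    for i in range(201) for j in range(201)
)
print(best)  # maximizer should be (1/H, 0) = (1, 0)
```

The grid maximizer agrees with the theorem: all density is placed on HD links at $\lambda_{1}=1/H$.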
Under perfect self-interference cancellation ($\beta = 0$), we have $\kappa = 1$ and $F < 2\kappa H$, and thus always
\[
T_{\max}=T_{\max}^{\mbox{\scriptsize{FD}}},
\] which means $T_{\max}$ is always achieved by
setting all transmitting nodes to work in FD mode, despite the extra interference
caused by the FD nodes. This conclusion holds for all network
configurations $(\lambda, \theta, R, \alpha)$.
On the other hand, the following corollary quantifies how the imperfect self-interference cancellation affects the throughput in a wireless network with full-duplex radios.
\begin{cor}\label{beta_c}
Given an SIR threshold $\theta$, path loss exponent $\alpha$, and link distance $R$, there exists a critical SIPR value $\beta_{c}$ in the wireless network described by $\hat{\Phi}$: when $\beta < \beta_{c}$, FD is preferable in terms of throughput while HD has better throughput when $\beta > \beta_{c}$, where
\begin{equation}
\beta_{c} = \frac{K\log({2H}/{F})}{\theta R^{\alpha}}.\label{beta}
\end{equation}
\end{cor}
\begin{IEEEproof}
$\beta_{c}$ can be obtained by solving $F=2\kappa H = 2e^{-\theta R^{\alpha} \beta/K}H$.
By Corollary \ref{cor:F}, the ratio $F/H$ does not depend on $R$, and hence $\beta_{c}$ scales as $R^{-\alpha}$.
\end{IEEEproof}
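To illustrate (\ref{beta}) and the $R^{-\alpha}$ scaling, the sketch below (ours) evaluates $\beta_{c}$ and the corresponding cancellation requirement in dB. The value of the ratio $2H/F$ is a placeholder, since $H$ and $F$ depend on the network parameters; $K=-34$ dB is taken from the caption of Fig.~\ref{Fig:2}.

```python
import math

def beta_critical(K, ratio_2H_over_F, theta, R, alpha):
    # Critical SIPR from (eq. beta): beta_c = K * log(2H/F) / (theta * R^alpha).
    return K * math.log(ratio_2H_over_F) / (theta * R ** alpha)

K = 10 ** (-34 / 10)     # -34 dB, as in the figure caption
ratio = 1.5              # placeholder value for 2H/F (must be > 1)
theta, alpha = 1.0, 4.0  # theta = 0 dB, path loss exponent 4

b1 = beta_critical(K, ratio, theta, R=1.0, alpha=alpha)
b10 = beta_critical(K, ratio, theta, R=10.0, alpha=alpha)

# beta_c scales as R^(-alpha): increasing R tenfold multiplies beta_c by 10^-alpha,
# i.e. the required cancellation (-10*log10(beta_c) dB) grows by 10*alpha dB.
cancel_1 = -10 * math.log10(b1)
cancel_10 = -10 * math.log10(b10)
print(cancel_1, cancel_10)
```

This reproduces the slope $-1/\alpha$ of the log-log curves in Fig.~\ref{Fig:2}: every decade of link distance costs $10\alpha$ dB of additional cancellation.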
\begin{figure}[h]
\begin{centering}
\includegraphics[width=\figwidth]{beta_th_Vs_R}
\par\end{centering}
\caption{The link distance vs the critical SIPR $\beta_{c}$ from (\ref{beta}) with $K=-34$ dB for $G_{\rm tx}=G_{\rm rx}=2$, i.e., $3$ dBi, and $f_c=2.4$ GHz, which corresponds to the carrier frequency of a WiFi signal. Below the curves, FD provides a higher throughput, while above the curves, HD does.}
\label{Fig:2}
\end{figure}
Figure \ref{Fig:2} plots the relationship between the link distance and the critical SIPR $\beta_{c}$. It provides valuable insight for system design. The curves are linear in this log-log plot with slope $-1/\alpha$, and the region under the curves is where FD transmission achieves a higher network throughput. For example, assume that the self-interference cancellation is limited to $80$ dB due to hardware imperfections. In this case, FD transmission is preferable only if the link distance is smaller than $10$ when $\theta=0$ dB and $\alpha = 4$ under the wireless network model used in this paper. To achieve a link distance of up to $100$, the self-interference cancellation needs to be at least $100$ dB ($\alpha = 3$) or $120$ dB ($\alpha = 4$) when $\theta =0$ dB. When the link distance is greater than $100$ and the self-interference cancellation is no greater than $120$ dB, it is better to use HD. Thus the amount of self-interference cancellation determines the maximal transmission range for which FD has better throughput than HD.
\subsection{Comparison of FD with HD}
Since the mixed FD/HD network achieves its maximal throughput in
the extreme cases of a FD-only or HD-only network, we can focus on FD-only
and HD-only networks and compare their maximal throughputs
using the results in Lemma \ref{lem:6}.
The throughput gain of a FD network over a HD network is of
great interest. It is defined as follows.
\begin{defn}
The throughput gain (TG) is defined as the ratio between the maximal
throughput of FD-only networks and HD-only networks given the same network parameters
$\left(\theta,R,\alpha\right)$:
\[
{\rm TG}\triangleq\frac{T_{\max}^{\mbox{\scriptsize{FD}}}}{T_{\max}^{\mbox{\scriptsize{HD}}}}.
\]
\end{defn}
The following corollary quantifies and bounds ${\rm TG}$ in
terms of $F$ and $H$. Note that $F$ and $H$ are constants for given
$\left(\theta,R,\alpha\right)$.
\begin{cor}
The throughput gain is given by
\begin{equation}
{\rm TG}=\frac{2\kappa H}{F}\label{eq:TG}
\end{equation} and bounded as
\begin{equation}
\kappa < {\rm TG} < \frac{2\kappa}{1+\delta}.
\label{eq:TG_ub}
\end{equation}
Moreover, for any $\beta \geq 0$,
\begin{equation}{\rm TG}(\theta) \sim \frac{2\kappa}{1+\delta}=\frac{2}{1+\delta}\exp(-\theta R^\alpha\beta/K), \quad\theta\to\infty.\label{TGinf}\end{equation}
\end{cor}
\begin{IEEEproof}
From (\ref{eq:hdmax}) and (\ref{eq:fdmax}), we have (\ref{eq:TG}). The upper and lower bounds are easily obtained from Corollary \ref{cor:F}.
For $\beta>0$,
\begin{equation}\lim_{\theta \rightarrow \infty} \kappa = \lim_{\theta \rightarrow \infty} e^{-\frac{\theta R^{\alpha} \beta}{K}}=0.\end{equation}
Therefore, both the lower bound and upper bound of ${\rm TG}$ converge to $0$ as $\theta$ goes to $\infty$. As a result, \begin{equation}\lim_{\theta \rightarrow \infty} {\rm TG}(\theta)=0.\label{TGinf1}\end{equation} $\beta=0$ implies $\kappa = 1$, which leads to \begin{equation}\lim_{\theta \rightarrow \infty} {\rm TG}(\theta)=\lim_{\theta \rightarrow \infty} \frac{2H}{F}=\frac{2}{1+\delta}\label{TGinf2}\end{equation}
from (\ref{FH2}). Combining (\ref{TGinf1}) and (\ref{TGinf2}), we obtain (\ref{TGinf}).
\end{IEEEproof}
(\ref{TGinf}) indicates that the throughput gain converges to its upper bound as $\theta$ goes to infinity.
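The asymptote in (\ref{TGinf}) is easy to evaluate. The sketch below (ours, assuming the standard notation $\delta=2/\alpha$) computes the limiting gain $\frac{2}{1+\delta}\exp(-\theta R^{\alpha}\beta/K)$; for perfect cancellation and $\alpha=4$ it equals $2\alpha/(\alpha+2)=4/3$, while for $\beta>0$ it decays with the SIR threshold.

```python
import math

def tg_asymptote(alpha, theta, R, beta, K):
    # Asymptotic throughput gain (eq. TGinf): (2/(1+delta)) * exp(-theta*R^alpha*beta/K),
    # assuming delta = 2/alpha as is standard in stochastic geometry.
    delta = 2.0 / alpha
    return 2.0 / (1.0 + delta) * math.exp(-theta * R ** alpha * beta / K)

K = 10 ** (-34 / 10)   # -34 dB, as in the figure captions
# Perfect cancellation (beta = 0): gain approaches 2*alpha/(alpha+2), e.g. 4/3 for alpha = 4.
g_perfect = tg_asymptote(alpha=4.0, theta=10.0, R=1.0, beta=0.0, K=K)
# Imperfect cancellation: the asymptote decays as the SIR threshold theta grows.
g_lo = tg_asymptote(alpha=4.0, theta=1.0, R=1.0, beta=1e-5, K=K)
g_hi = tg_asymptote(alpha=4.0, theta=10.0, R=1.0, beta=1e-5, K=K)
print(g_perfect, g_lo, g_hi)
```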
\ifCLASSOPTIONonecolumn
\begin{figure}[h]
\begin{centering}
\subfigure[$50$ dB self-interference cancellation: $\beta=10^{-5}$]{\includegraphics[width=7cm]{TGvsSIR_at_various_beta}\label{beta40}
}
\subfigure[$70$ dB self-interference cancellation: $\beta=10^{-7}$]{\includegraphics[width=7cm]{TGvsSIR_at_various_beta2}\label{beta20}
}
\par\end{centering}
\begin{centering}
\subfigure[Perfect self-interference cancellation: $\beta=0$]{\includegraphics[width=7cm]{TGvsSIR_at_various_beta3}\label{beta0}
}
\par\end{centering}
\caption{Throughput gain as a function of the SIR threshold $\theta$ and its bounds at different SIPRs for
$\alpha=4$, $R=1$.}
\label{Fig:4}
\end{figure}
\else
\begin{figure}[h]
\begin{centering}
\subfigure[$50$ dB self-interference cancellation: $\beta=10^{-5}$]{\includegraphics[width=7cm]{TGvsSIR_at_various_beta}\label{beta40}
}
\par\end{centering}
\begin{centering}
\subfigure[$70$ dB self-interference cancellation: $\beta=10^{-7}$]{\includegraphics[width=7cm]{TGvsSIR_at_various_beta2}\label{beta20}
}
\par\end{centering}
\begin{centering}
\subfigure[Perfect self-interference cancellation: $\beta=0$]{\includegraphics[width=7cm]{TGvsSIR_at_various_beta3}\label{beta0}
}
\par\end{centering}
\caption{Throughput gain as a function of the SIR threshold $\theta$ and its bounds at different SIPRs for
$\alpha=4$, $R=1$.}
\label{Fig:4}
\end{figure}
\fi
\figref{Fig:4} illustrates the throughput gain as a function
of the SIR threshold together with its upper and lower bounds given in (\ref{eq:TG_ub}). As seen, the throughput gain
always stays below $\frac{2}{1+\delta}$, since by (\ref{eq:TG_ub}) it is smaller than the upper bound $\frac{2\kappa}{1+\delta}\le\frac{2}{1+\delta}$. For perfect self-interference cancellation, the throughput gain increases as the SIR threshold gets larger, as shown in \figref{beta0}. Under imperfect self-interference cancellation, the throughput gain decreases in the high-SIR regime, as shown in \figref{beta40} and \figref{beta20}. When the self-interference cancellation is not sufficient, the throughput gain falls below $1$, which means that the HD-only network has a higher throughput; e.g., in \figref{beta40} the curve is below $1$ for $\theta>10$ dB. These figures illustrate the throughput gain under different conditions, especially the impact of imperfect self-interference cancellation on the throughput.
\section{Conclusion\label{sec:Conclusion}}
In this paper, we analyzed the throughput of wireless networks with
FD radios using tools from stochastic geometry. Given a wireless network of radios
with both FD and HD capabilities, we showed that FD transmission is always preferable
compared to HD transmission in terms of throughput when the self-interference cancellation is perfect. It turns out that the throughput of HD transmission cannot be doubled: the actual gain is $\frac{2\alpha}{\alpha+2}$ for an ALOHA protocol, where $\alpha$ is the path loss exponent. Under imperfect self-interference cancellation, the network has a break-even point where FD and HD have the same throughput. The break-even point depends on the amount of self-interference cancellation and the link distance. Given a fixed SIR threshold and path loss exponent, the necessary amount of self-interference cancellation in dB is proportional to the logarithm of the link distance. Hence, the residual self-interference determines the maximal link distance within which FD is beneficial compared to HD, which provides useful guidance for the design of networks with FD radios. We also analyzed and quantified the effects of imperfect self-interference cancellation on the network throughput and SIR loss. The SIR loss of a FD-only network over a HD-only network is quantified within tight bounds; the horizontal gap is utilized to determine the SIR loss. Moreover, the throughput gain of FD over HD is presented under imperfect self-interference cancellation.
In our network model, we consider the interference-limited case where the thermal noise is ignored. This is not a restriction, however: the throughput gain is independent of the thermal noise, since the noise adds the same exponential factor $\exp({-\theta R^{\alpha} W})$ \cite[Page 105]{Haenggi12book}, where $W$ is the thermal noise power, to the throughput expressions of both HD and FD networks, and this factor cancels in the throughput gain.
In general, FD is a
very powerful technique that can be adapted for the next-generation
wireless networks. The throughput gain may be larger if more advanced
MAC protocols other than ALOHA are used or the interference management
can be used for the pairwise interferers in the FD links. A FD-friendly MAC scheme should let each node decide whether to use FD or HD based on its surrounding interference in order to maximize the overall network throughput. There is a strong need for a MAC protocol tailored to wireless networks of radios with both FD and HD capabilities and for an intelligent, adaptive scheme to switch between FD and HD under different network configurations.
\section*{Acknowledgment}
This work has been partially supported by the U.S. NSF (grants ECCS-1231806, CNS 1016742 and CCF 1216407).
\bibliographystyle{IEEEtran}
\section{Introduction\label{sec:Introduction}}
Traveling wave solutions to partial differential equations (PDEs)
are often used to study the collective migration of a population of
cells during wound healing \citep{cai_multi-scale_2007,denman_analysis_2006,landman_diffusive_2005,landman_travelling_2007,maini_traveling_2004},
tumorigenesis \citep{kuang_data-motivated_2015}, and angiogenesis
\citep{pettet_model_1996,sherratt_new_2001}. R.A. Fisher introduced
what is now referred to as Fisher's Equation in 1937 to model the
advance of an advantageous gene in a population \citep{fisher_wave_1937}.
Since then, it has been used extensively in math biology literature
to model the migration of a monolayer of cells during experimental
wound healing assays \citep{cai_multi-scale_2007,jin_reproducibility_2016,maini_traveling_2004}.
Fisher's Equation is written as
\begin{equation}
u_{t}=Du_{xx}+\lambda u(K-u)\label{eq:fisher}
\end{equation}
with subscripts denoting differentiation with respect to that variable
and $u=u(t,x)$ representing a population of cells over time $t$
at spatial location $x$. The first term on the right hand side of
\eqref{eq:fisher} represents diffusion in space with rate of diffusion,
$D$, and the second term represents logistic growth of the population
with proliferation rate, $\lambda,$ and carrying capacity, $K$.
As shown in \citep[$\S$ 11.2]{murray_mathematical_2002}, \eqref{eq:fisher}
admits traveling wave solutions of the form
\[
u(t,x)=U(z),\ z=x-ct
\]
where $c$ denotes the speed of the traveling wave solution and $U(z)$
denotes the traveling wave profile. Traveling wave solutions to \eqref{eq:fisher}
thus maintain a constant profile, $U(z),$ over time that moves leftward
if $c<0$ or rightward if $c>0$ with speed $|c|$. It is also shown
that \eqref{eq:fisher} has a positive and monotonic profile for $|c|\ge2\sqrt{D\lambda K}$,
which is biologically relevant when $u(t,x)$ denotes a population
of cells. Kolmogoroff proved in 1937 that any solution to \eqref{eq:fisher}
with a compactly-supported initial condition will converge to a traveling
wave solution with minimum wavespeed $c=2\sqrt{D\lambda K}$ \citep{kolmogoroff_etude_1937}.
See \citep[$\S$ 5.4]{murray_lectures_1977} for a proof of this. There
is also a wide literature on studies into extensions of Fisher's Equation,
such as Fisher's Equation coupled with chemotaxis \citep{ai_reaction_2015,landman_diffusive_2005},
time-dependent rates of proliferation and diffusion \citep{hammond_analytical_2011},
and space-dependent rates of diffusion \citep{curtis_propagation_2012}.
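The minimum-speed selection described above can be illustrated numerically. The sketch below (our own illustration, not from the cited works) integrates the nondimensional Fisher's Equation $u_{t}=u_{xx}+u(1-u)$ with an explicit finite-difference scheme, starting from a compactly-supported step, and estimates the front speed, which should be close to the minimum wavespeed $c=2$ in these units.

```python
import math

# Explicit finite differences for the nondimensional Fisher's Equation
#   u_t = u_xx + u(1 - u)
# on [0, 300], starting from a step; the front should travel at
# (approximately) the minimum wavespeed c = 2.
dx, dt, N = 0.5, 0.02, 600            # dt/dx^2 = 0.08 <= 1/2: stable

def front_position(u):
    # Interpolated position where the profile crosses u = 1/2.
    for i in range(len(u) - 1):
        if u[i] >= 0.5 > u[i + 1]:
            return (i + (u[i] - 0.5) / (u[i] - u[i + 1])) * dx
    return None

u = [1.0 if i * dx < 10.0 else 0.0 for i in range(N)]
positions = {}
for n in range(3001):                 # integrate to t = 60
    if n in (1500, 3000):             # record the front at t = 30 and t = 60
        positions[n] = front_position(u)
    new = [0.0] * N
    for i in range(1, N - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        new[i] = u[i] + dt * (lap + u[i] * (1.0 - u[i]))
    new[0], new[-1] = 1.0, 0.0        # fixed ends: u = 1 behind, u = 0 ahead
    u = new

speed = (positions[3000] - positions[1500]) / 30.0
print(speed)                          # close to 2
```

The measured speed sits slightly below $2$ because the convergence to the minimum-speed wave is slow (logarithmic front shift) and the scheme introduces a small discretization error.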
Structured population models, or PDE models with independent variables
to distinguish individuals by some continuously-varying properties,
were first investigated via age-structured models in the early 20th
century \citep{mckendrick_applications_1927,sharpe_problem_1911}.
The 1970s saw a revival in structured population modeling after the
introduction of methods to investigate nonlinear structured population
models \citep{gurtin_non-linear_1974}, which led to our current understanding
of semigroup theory for linear and nonlinear operators on Banach spaces
\citep{webb_population_2008}. Several recent biological studies have
demonstrated the existence of traveling wave solutions to structured
population models \citep{ducrot_travelling_2011,ducrot_travelling_2009,gourley_vector_2007,so_reaction-diffusion_2001},
and another study used an independent variable representing subcellular
$\beta$-catenin concentration to investigate how signaling mutations
can cause intestinal crypts to invade healthy neighboring crypts \citep{murray_modelling_2010}.
Recent biological research has focused on the influence of biochemical
signaling pathways on the migration of a population of cells during
wound healing. Particular emphasis has been placed on the mitogen-activated
protein kinase (MAPK) signaling cascade, which elicits interesting
patterns of activation and migration in response to different types
of cytokines and growth factors in various cell lines \citep{chapnick_leader_2014,matsubayashi_erk_2004}.
For example, experimental wounding assays of Madin-Darby canine kidney
(MDCK) cells in \citep{matsubayashi_erk_2004} yielded a transient
pulse of ERK 1/2 (a specific MAPK protein) activity in the cell sheet
that only lasted for a few minutes. This pulse of activity was followed
by a slow wave of activity that propagated from the wound margin to
submarginal cells over the course of several hours. The second wave
was determined to be crucial for regulating MDCK sheet migration.
The authors of \citep{matsubayashi_erk_2004} proposed that these
fast and slow waves of ERK 1/2 activity could be caused by the production
of reactive oxygen species (ROS) and epidermal growth factor (EGF),
respectively. Similar experiments with fibroblasts also demonstrated
this first transient wave of ERK 1/2 activity, but not the following
slow wave. The authors of \citep{chapnick_leader_2014} found that
human keratinocyte (HaCaT) cells exhibit ERK 1/2 activity primarily
at the wound margin in response to treatment with transforming growth
factor-$\beta$ (TGF-$\beta$) during similar high-density experimental
wound healing assays.
In this study, we detail an approach to investigate a structured version
of Fisher's Equation that is motivated by the above experimental observations.
Previous structured population models have been restricted to traits
that primarily increase over time, such as age or size, but our analysis
allows for both activation and deactivation along the biochemical
activity dimension.
In Section \ref{sec:Model-Development}, we develop our structured
population model and devote Section \ref{sec:Characteristic-equations-from}
to a review of relevant material from size-structured population models.
We demonstrate the existence of self-similar traveling wave solutions
to the model in Section \ref{sec:Traveling-waves-analysis}. We then
study a more complicated version of our model where migration and
proliferation of the population depend on MAPK activity levels in
Section \ref{sec:Numerical-simulations} before making final conclusions
and discussing future work in Section \ref{sec:Discussion-and-future}.
\section{Model Development\label{sec:Model-Development}}
We model a cell population during migration into a wound, denoted
by $u(t,x,m)$, for
\[
u:[0,\infty)\times\mathbb{R}\times\left[m_{0},m_{1}\right]\rightarrow\mathbb{R}
\]
where $t$ denotes time, $x$ denotes spatial location, and $m$ denotes
activation along a biochemical signaling pathway with minimum and
maximum levels $m_{0}$ and $m_{1}$, respectively. As a first pass,
we assume that any cells of the same MAPK activity level will activate
identically over time in the same environment. This assumption allows
us to model the activation distribution of the population over time
deterministically by considering how cells of all possible MAPK activity
levels activate and deactivate over time. We note that biochemical
signaling is an inherently heterogeneous process, so our approach
would benefit from a further investigation with stochastic differential
equations.
As discussed in \citep{de_roos_gentle_1996}, crucial aspects of a
structured population model include the individual state, the environmental
state, external forcing factors, and feedback functions. The \emph{individual
state} is a dimension used to distinguish between individuals of a
population and is typically based on physiological properties such
as age or size. As activation of biochemical signaling pathways influences
cell migration through diffusive and proliferative properties of cells,
we incorporate the biochemical activity dimension, $m$, as an individual
state for our model.
The \emph{environmental state} of a population is the external factors
that influence individual behavior. Recall that external cytokines
and growth factors, such as ROS, TGF-$\beta$, and EGF, influence
activation of the MAPK signaling cascade and promote migration during
wound healing. The cell population will not directly affect the level
of external growth factor in this work, so an \emph{external forcing
factor} will be used to represent treatment with these chemicals here.
The external chemical concentration at time $t$ will be denoted by
$s(t),$ and the activation response of cells to this chemical will
be given by the function $f(s).$
A \emph{feedback function }included in our work will be the inhibition
of individual cell proliferation in response to a confluent density.
As proliferation is hindered by contact inhibition, we introduce a
new variable,
\begin{equation}
w(t,x):=\int_{m_{0}}^{m_{1}}u(t,x,m)dm\label{eq:w_def}
\end{equation}
to represent the population of cells at time $t$ and spatial location
$x$. Proliferation of the population will accordingly vanish as $w(t,x)$
approaches the carrying capacity, $K$.
Our model, which we term a \emph{structured Fisher's Equation},
is given by the PDE:
\begin{eqnarray}
u_{t}+\underbrace{\left(f(s(t))g(m)u\right)_{m}}_{activation} & = & \underbrace{D(m)u_{xx}}_{dif\negthinspace fusion}+\underbrace{\lambda(m)u\left(K-w(t,x)\right)}_{population\ growth}\label{eq:structured_fisher_eqn}\\
w(t,x) & = & \int_{m_{0}}^{m_{1}}u(t,x,m)dm\nonumber \\
u(t=0,x,m) & = & \phi(x,m)\nonumber \\
u(t,x,m=m_{1}) & = & 0\nonumber \\
w(t,x=-\infty)=K & & w(t,x=+\infty)=0\nonumber
\end{eqnarray}
The function $g(m)\in C^{1}\left([m_{0},m_{1}]\right)$ denotes the
rate of biochemical activation in the population, $s(t)\in L^{\infty}(\mathbb{R}^{+})$
denotes the external chemical concentration in the population, $f(s)\in L_{loc}^{1}(0,\infty)$
denotes the activation response of cells to the level of signaling
factor present, $D(m)$ and $\lambda(m)$ denote biochemically-dependent
rates of cell diffusion and proliferation, and $\phi(x,m)$ denotes
the initial condition of $u$. The spatial boundary conditions specify
that the cell density has a confluent density at $x=-\infty$ and
an empty wound space at $x=+\infty.$ We use a no flux boundary condition
at $m=m_{1}$ so that cells cannot pass this boundary. In the remainder
of this study, we will write $f(s(t))$ as $f(t)$ for simplicity,
though we note that this function will differ between cell lines that
respond differently to the same chemical during wound healing\footnote{Note that an extension for modeling the dynamics governing $s(t)$
will be considered in a future study.}.
The solution space of \eqref{eq:structured_fisher_eqn}, $\mathcal{D}$,
is defined with inspiration from \citep{webb_population_2008} and
\citep[$\S$ 1.1]{volpert_traveling_1994}. If we let $Z$ denote the
space of bounded and twice continuously differentiable functions on
$\mathbb{R}$, then we define
\[
\mathcal{D}:=\left\{ u(t,x,m)\left|\int_{m_{0}}^{m_{1}}u(t,x,m)dm\in Z\right.\right\} ,
\]
i.e., $u(t,x,m)\in\mathcal{D}$ if $w(t,x)\in Z$ for all $t>0.$
We note that $\int_{m_{0}}^{m_{1}}\phi(x,m)dm$ need only be bounded
and piecewise continuous with a finite number of discontinuities \citep{volpert_traveling_1994}.
If $\phi(x,m)$ is not sufficiently smooth in $m,$ we obtain generalized
solutions of \eqref{eq:structured_fisher_eqn} \citep{webb_population_2008}.
In Section \ref{sec:Traveling-waves-analysis}, we will investigate
\eqref{eq:structured_fisher_eqn} with constant rates of diffusion
and proliferation (i.e., $D(m)=D,\lambda(m)=\lambda$) and $f(t)=1$.
By substituting
\begin{eqnarray*}
u^{*}=u/K,\ \ & & t^{*}=\lambda Kt,\\
x^{*}=x\sqrt{\lambda K/D},\ \ & & m^{*}=(m-m_{0})/(m_{1}-m_{0}),\\
g^{*}(m^{*})= & & \ g(m^{*}(m_{1}-m_{0})+m_{0})/(\lambda K(m_{1}-m_{0})),
\end{eqnarray*}
and dropping asterisks for simplicity, \eqref{eq:structured_fisher_eqn}
can be non-dimensionalized to
\begin{eqnarray}
u_{t}+\underbrace{(f(t)g(m)u)_{m}}_{activation} & = & \underbrace{u_{xx}}_{diffusion}+\underbrace{u\left(1-\int_{0}^{1}u(t,x,m)dm\right)}_{population\ growth}\label{eq:structured_fisher_eqn_norm}\\
w(t,x) & = & \int_{0}^{1}u(t,x,m)dm\nonumber \\
u(t=0,x,m) & = & \phi(x,m)\nonumber \\
u(t,x,m=1) & = & 0\nonumber \\
w(t,x=-\infty)=1 & & w(t,x=+\infty)=0.\nonumber
\end{eqnarray}
In Section \ref{sec:Numerical-simulations}, we will consider the
full model \eqref{eq:structured_fisher_eqn} when the rates of cellular
diffusion and proliferation are piece-wise constant functions of $m$
and numerically investigate how different functions for $f(t)$ lead
to increased and decreased levels of population migration.
\section{Background Material from Size-Structured Population Modeling\label{sec:Characteristic-equations-from}}
Before investigating the existence of traveling-wave solutions to
\eqref{eq:structured_fisher_eqn_norm}, it is useful to review some
key topics used to solve size-structured population models, as discussed
in \citep{webb_population_2008}. These topics will be useful in analyzing
\eqref{eq:structured_fisher_eqn} in later sections. A reader familiar
with using the method of characteristics to solve size-structured
population models may briefly skim over this section to pick up on
the notation used throughout our study.
As an example, we consider the size-structured model given by
\begin{eqnarray}
u_{t}+(g(y)u)_{y} & = & Au\label{eq:size_example}\\
u(t=0,y) & = & \phi(y)\nonumber
\end{eqnarray}
where $u=u(t,y):[0,\infty)\times[y_{0},y_{1}]\rightarrow\mathbb{R}$
denotes the size distribution over $y$ of a population at time $t$,
$y_{0}$ and $y_{1}$ denote the minimum and maximum population sizes
respectively, and $g(y)\in C^{1}([y_{0},y_{1}])$ denotes the physical
growth rate\footnote{Note that in this section, $g(y)$ denotes a growth rate with respect
to size, $y$, whereas throughout the rest of our study, $g(m)$ denotes
an activation rate with respect to biochemical activity, $m$.} of individuals of size $y$. In this section, we will work in the
Banach space $\mathbb{X}=L^{1}\left((y_{0},y_{1});\mathbb{R}\right),$
and assume $A\in\mathbb{\mathcal{B}}(\mathbb{X})$, the space of bounded,
linear operators on $\mathbb{X}$. The method of characteristics will
facilitate solving \eqref{eq:size_example}.
For a fixed size $\underline{y}\in[y_{0},y_{1}],$ the function
\begin{equation}
\sigma(y;\underline{y}):=\int_{\underline{y}}^{y}\frac{1}{g(y')}dy'\label{eq:sigma}
\end{equation}
provides \emph{the time it takes for an individual to grow from the
fixed size $\underline{y}$ to arbitrary size $y$.} If $g(y)$ is
positive and uniformly continuous on $[y_{0},y_{1}]$, then $\sigma(y;\underline{y})$
is invertible. We denote the inverse function, $\sigma^{-1}(t;\underline{y}),$
as the \emph{growth curve}, and it computes \emph{the size of an individual
over time that starts at size $\underline{y}$ at time $t=0$}. For
instance, if an individual has size $\underline{y}$ at $t=0,$ then
that individual will have size $\sigma^{-1}(t_{1};\underline{y})$
at time $t=t_{1}$. Some helpful properties of the growth curve are
that $\sigma^{-1}(0;\underline{y})=\underline{y}$ and
\begin{equation}
\frac{d}{dt}\sigma^{-1}(t;\underline{y})=g(\sigma^{-1}(t;\underline{y})).\label{eq:dsigma/dt}
\end{equation}
See Section \ref{sec:Properties-of} in the appendix for the derivation
of \eqref{eq:dsigma/dt}.
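As a concrete example (ours, not from the references), take $g(y)=y$: then $\sigma(y;\underline{y})=\ln(y/\underline{y})$ and the growth curve is $\sigma^{-1}(t;\underline{y})=\underline{y}e^{t}$. The sketch below checks the stated properties numerically.

```python
import math

def g(y):                 # example growth rate g(y) = y
    return y

def sigma(y, ybar):       # time to grow from ybar to y: integral of 1/g = ln(y/ybar)
    return math.log(y / ybar)

def sigma_inv(t, ybar):   # growth curve: size at time t of an individual starting at ybar
    return ybar * math.exp(t)

ybar, t, h = 0.3, 1.2, 1e-6

# Property 1: sigma^{-1}(0; ybar) = ybar.
p1 = sigma_inv(0.0, ybar)

# Property 2: d/dt sigma^{-1}(t; ybar) = g(sigma^{-1}(t; ybar))  (eq. dsigma/dt),
# checked with a central finite difference.
deriv = (sigma_inv(t + h, ybar) - sigma_inv(t - h, ybar)) / (2 * h)

# Consistency: sigma and sigma^{-1} invert one another.
roundtrip = sigma(sigma_inv(t, ybar), ybar)
print(p1, deriv, g(sigma_inv(t, ybar)), roundtrip)
```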
In order to solve \eqref{eq:size_example} with the method of characteristics,
we set $y=\sigma^{-1}(t;\underline{y})$ to define the variable $v(t;\underline{y})$:
\begin{equation}
v(t;\underline{y}):=u(t,y=\sigma^{-1}(t;\underline{y})).\label{eq:size_characteristic}
\end{equation}
As shown in Section \ref{sec:Derivation-of} of the appendix, substitution
of \eqref{eq:size_characteristic} into \eqref{eq:size_example} yields
the characteristic equation
\begin{equation}
v_{t}=-g'(\sigma^{-1}(t;\underline{y}))v+Av,\label{eq:v_t_diffeq}
\end{equation}
where primes denote differentiation with respect to $y$. This characteristic
equation has size $\underline{y}$ at time $t=0$ and can be solved
explicitly as\footnote{To derive this, use separation of variables and with the help of \eqref{eq:dsigma/dt}
note that $\int_{0}^{t}g'(\sigma^{-1}(\tau;\underline{y}))d\tau=\ln[g(\sigma^{-1}(t;\underline{y}))/g(\underline{y})]$.}
\begin{equation}
v(t;\underline{y})=\frac{g(\underline{y})}{g(\sigma^{-1}(t;\underline{y}))}e^{At}\phi(\underline{y}).\label{eq:soln_size_characteristic}
\end{equation}
As \eqref{eq:soln_size_characteristic} provides the solution to \eqref{eq:size_example}
along the arbitrary characteristic curve with initial size $\underline{y},$
we use it to solve the whole equation with the substitution $y=\sigma^{-1}(t;\underline{y})$,
in which we find
\begin{equation}
u(t,y)=\left\{ \begin{array}{cc}
\frac{g(\sigma^{-1}(-t;y))}{g(y)}e^{At}\phi(\sigma^{-1}(-t;y)) & \sigma^{-1}(t;y_{0})\le y\le y_{1}\\
0 & y_{0}\le y<\sigma^{-1}(t;y_{0}).
\end{array}\right.\label{eq:soln_to_size_example}
\end{equation}
If $\phi(y)\notin C^{1}(y_{0},y_{1}),$ then \eqref{eq:soln_to_size_example}
is viewed as a generalized solution. Note that a piecewise form is
needed for \eqref{eq:soln_to_size_example} because we do not have
any individuals below the minimum size, $y_{0},$ and thus the minimum
possible size at time $t$ is given by $\sigma^{-1}(t;y_{0}).$ If
the population is assumed to give birth to individuals of size $y_{0}$
over time, then the appropriate renewal equation representing population
birth would replace the zero term in the piecewise function (see \citep[$\S$ 9.5]{banks_mathematical_2009}
for an example in size-structured populations and \citep{gourley_vector_2007}
for an example in age-structured populations).
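As a sanity check of \eqref{eq:soln_to_size_example} under simplifying assumptions ($g(y)=y$ and $A$ replaced by a scalar $a$), the closed form reduces to $u(t,y)=e^{(a-1)t}\phi(ye^{-t})$ on the region above the minimum growth curve; the sketch below (ours) confirms by finite differences that this expression satisfies $u_{t}+(yu)_{y}=au$.

```python
import math

a = 0.7                                   # scalar stand-in for the operator A

def phi(y):                               # smooth example initial size distribution
    return math.exp(-(y - 2.0) ** 2)

def u(t, y):
    # Closed form (eq. soln_to_size_example) for g(y) = y, A = a:
    # u(t, y) = [g(sigma^{-1}(-t; y)) / g(y)] e^{at} phi(sigma^{-1}(-t; y))
    #         = e^{(a-1) t} phi(y e^{-t}).
    return math.exp((a - 1.0) * t) * phi(y * math.exp(-t))

def residual(t, y, h=1e-5):
    # Residual of u_t + (y u)_y - a u via central finite differences.
    u_t = (u(t + h, y) - u(t - h, y)) / (2 * h)
    flux_y = ((y + h) * u(t, y + h) - (y - h) * u(t, y - h)) / (2 * h)
    return u_t + flux_y - a * u(t, y)

checks = [abs(residual(0.5, 3.0)), abs(residual(1.0, 4.5)), abs(residual(0.2, 1.5))]
print(checks)                             # all close to zero
```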
\section{Existence of Traveling Wave Solutions to the Structured Fisher's
Equation\label{sec:Traveling-waves-analysis}}
\subsection{Existence of traveling wave solutions to \eqref{eq:structured_fisher_eqn_norm}}
We now incorporate topics from the previous section to show the existence
of traveling wave solutions to \eqref{eq:structured_fisher_eqn_norm}.
After taking the time derivative of $w(t,x)$, which was defined
in \eqref{eq:w_def}, we can rewrite \eqref{eq:structured_fisher_eqn_norm}
as a system of two coupled PDEs\footnote{Note that either $g(m_{0})=0$ or $u(t,x,m=m_{0})=0$ for $t>0$,
so that the activation term drops out when integrating over $m$ for
$w$.}:
\begin{eqnarray}
u_{t}+(g(m)u)_{m} & = & u_{xx}+u(1-w)\nonumber \\
w_{t} & = & w_{xx}+w(1-w).\label{eq:coupled_fisher_uw}
\end{eqnarray}
Note that in this section, $g(m)$ is a function of biochemical activity
level and $\sigma^{-1}(t;\underline{m})$ computes \emph{the activity
level of an individual over time that starts at level $\underline{m}$
at time $t=0$}. We will thus now refer to $\sigma^{-1}(t;\underline{m})$
as the \emph{activation curve}. We next set up the characteristic
equation for $u$ by setting $m=\sigma^{-1}(t;\underline{m})$ for
a fixed value of $\underline{m}$:
\begin{equation}
v(t,x;\underline{m}):=u(t,x,m=\sigma^{-1}(t;\underline{m})).\label{eq:v_characteristic}
\end{equation}
Substituting \eqref{eq:v_characteristic} into \eqref{eq:coupled_fisher_uw}
simplifies to our characteristic equation
\begin{eqnarray}
v_{t} & = & v_{xx}+v[1-w-g'(\sigma^{-1}(t;\underline{m}))]\nonumber \\
w_{t} & = & w_{xx}+w(1-w),\label{eq:coupled_fisher_vw}
\end{eqnarray}
a nonautonomous system of two coupled PDEs in time and space. Note
that the bottom equation for \eqref{eq:coupled_fisher_vw} is Fisher's
Equation, which has positive monotonic traveling wave solutions for
any speed $c\ge2$ (see \citep[$\S$ 11.2]{murray_mathematical_2002}).
We next aim to derive traveling wave solutions to \eqref{eq:coupled_fisher_vw},
however, we are not aware of any traveling wave solutions to nonautonomous
systems such as this one. From our knowledge of size-structured population
models from Section \ref{sec:Characteristic-equations-from}, we instead
intuit the ansatz of a self-similar traveling wave solution, which
we write as
\begin{eqnarray}
v(t,x;\underline{m}) & = & \frac{g(\underline{m})}{g(\sigma^{-1}(t;\underline{m}))}V(z),\ z=x-ct\label{eq:self_similar_TW_ansatz}\\
w(t,x) & = & W(z).\nonumber
\end{eqnarray}
In this ansatz, $V(z)$ will define a traveling wave profile for $v$
and $\frac{g(\underline{m})}{g(\sigma^{-1}(t;\underline{m}))}$ will
provide the height of the function over time. With the aid of the
chain rule, we observe that:
\begin{eqnarray*}
v_{t}(t,x;\underline{m}) & = & -g'(\sigma^{-1}(t;\underline{m}))\frac{g(\underline{m})}{g(\sigma^{-1}(t;\underline{m}))}V-c\frac{g(\underline{m})}{g(\sigma^{-1}(t;\underline{m}))}V_{z}\\
v_{xx}(t,x;\underline{m}) & = & \frac{g(\underline{m})}{g(\sigma^{-1}(t;\underline{m}))}V_{zz},
\end{eqnarray*}
where subscripts denote differentiation with respect to $t$, $x$, or
$z$, primes denote differentiation with respect to $m$, and we have
used $\frac{d}{dt}\sigma^{-1}(t;\underline{m})=g(\sigma^{-1}(t;\underline{m}))$. Substituting
\eqref{eq:self_similar_TW_ansatz} into \eqref{eq:coupled_fisher_vw}
reduces to the autonomous system
\begin{eqnarray}
-cV_{z} & = & V_{zz}+V(1-W)\nonumber \\
-cW_{z} & = & W_{zz}+W(1-W).\label{eq:coupled_fisher_VW}
\end{eqnarray}
It is now useful to rewrite \eqref{eq:coupled_fisher_VW} as the first
order system
\begin{equation}
\frac{d}{dz}\boldsymbol{\mathcal{V}}=\left(\begin{array}{c}
V_{z}\\
-cV_{z}-V(1-W)\\
W_{z}\\
-cW_{z}-W(1-W)
\end{array}\right)\label{eq:first_order_VW}
\end{equation}
for $\boldsymbol{\mathcal{V}}(z)=[V(z),V_{z}(z),W(z),W_{z}(z)]^{T}$.
Recall that profiles to traveling wave solutions can be constructed
with heteroclinic orbits between equilibria for a given dynamical
system (or homoclinic orbits for a traveling pulse) \citep[$\S$ 6.2]{keener_mathematical_2009}.
We observe two types of equilibria for \eqref{eq:coupled_fisher_VW},
given by $\boldsymbol{\mathcal{V}}_{1}^{*}=(V,0,1,0)^{T}$
and $\boldsymbol{\mathcal{V}}_{2}^{*}=\vec{0}$, where the
former represents a confluent cell density and the latter represents
an empty wound space. We accordingly search for heteroclinic orbits
from $\boldsymbol{\mathcal{V}}_{1}^{*}$ to $\boldsymbol{\mathcal{V}}_{2}^{*}$
for some $c>0.$ We choose to focus on the characteristic equations
$v(t,x;\underline{m})$ for values of $\underline{m}$
for which $\phi(\underline{m},x=-\infty)>0$ to represent the population
of cells migrating into the empty wound space. We thus denote $\boldsymbol{\mathcal{V}}_{1}^{*}=(\nu,0,1,0)^{T}$
for $\nu>0$.
Note that $\boldsymbol{\mathcal{V}}_{1}^{*}$ is an equilibrium
for any value of $V,$ as $W=1$ will guarantee the existence of an
equilibrium. Such a ``continuum'' of equilibria was also observed
in \citep{perumpanani_traveling_2000}. This structure of $\boldsymbol{\mathcal{V}}_{1}^{*}$
yields a zero eigenvalue after linearizing \eqref{eq:first_order_VW}
about $\boldsymbol{\mathcal{V}}_{1}^{*}$, so we cannot
use linear theory to study the local behavior of \eqref{eq:first_order_VW}
near $\boldsymbol{\mathcal{V}}_{1}^{*}$. While we could
construct the unstable manifold of \eqref{eq:first_order_VW} using
a power series representation to study its local behavior around $\boldsymbol{\mathcal{V}}_{1}^{*}$
(see \citep[Section 5.6]{meiss_differential_2007}), we find it more
insightful to define a trapping region in the $(V,V_{z})$-plane as
has been done in previous traveling wave studies \citep{ai_reaction_2015,kuang_data-motivated_2015}.
We will then use asymptotically autonomous phase-plane theory to describe
the $\omega$-limit set of our flow, which will show the existence
of a heteroclinic orbit from $\boldsymbol{\mathcal{V}}_{1}^{*}$
to $\boldsymbol{\mathcal{V}}_{2}^{*}$. \emph{Trapping regions}
are positively invariant regions with respect to the flow of a dynamical
system, and the \emph{$\omega$-limit set} of a flow is the collection
of all limit points of that flow \citep[$\S$ 4.9-10]{meiss_differential_2007}.
We study the trajectory of $\boldsymbol{\mathcal{V}}$ in
the $(V,V_{z})$-plane by defining the triangular region bounded by
the lines $\{V=\nu,V_{z}=0,V_{z}=-\frac{c}{2}V\}$ and denoting this
region as $\Delta.$ The following lemma will demonstrate that $\Delta$
is a trapping region for the flow of \eqref{eq:first_order_VW} in
the $(V,V_{z})$-plane.
\textbf{Lemma: }Let $\nu>0$ and $c\ge2$. Then the region $\Delta$
is positively invariant with respect to \eqref{eq:first_order_VW}
so long as $0<W(z)<1$ for all $z\in\mathbb{R}.$
\emph{Proof:}
We prove this lemma by investigating the vector field along each of
the lines specifying the boundary of our region and showing that it
points into the interior of $\Delta$.
i.) Along the line $V_{z}=0,$ $\frac{d}{dz}V_{z}=-V(1-W)$, which
is nonpositive because $W(z)<1$ for all $z\in\mathbb{R}$ and our
region is defined for $V(z)\ge0.$ If $V=0,$ then we are at the equilibrium
point $(V,V_{z})=(0,0)$.
ii.) Along $V=\nu,$ $\frac{d}{dz}V=V_{z},$ which is negative in
our defined region. The only point of concern here is $(V,V_{z})=(\nu,0)$,
as then $\frac{d}{dz}V=0$. However, we see from part i.) that $\frac{d}{dz}V_{z}<0$
here, so a flow starting at $(\nu,0)$ will initially move perpendicular
to the $V$-axis in the negative $V_{z}$ direction, after which $\frac{d}{dz}V<0$,
so the flow enters $\Delta.$
iii.) Note that the inner normal vector to the line $V_{z}=-\frac{c}{2}V$
is $\hat{n}=\left(\frac{c}{2},1\right).$ Then
\begin{eqnarray*}
\hat{n}\cdot\frac{d}{dz}(V,V_{z}) & = & \left(\frac{c}{2},1\right)\cdot(V_{z},-cV_{z}-V+VW)\\
& = & \left(\frac{c}{2},1\right)\cdot(-\frac{c}{2}V,\frac{c^{2}}{2}V-V+VW)\\
& = & -\frac{c^{2}}{4}V+\frac{c^{2}}{2}V-V+VW\\
& = & V\left(\frac{c^{2}}{4}-1\right)+VW,
\end{eqnarray*}
which is positive for $V>0$, as $c\ge2$ and $W>0$. $\square$
This proof is visually demonstrated in the top row of Figure \ref{fig:invariant_region}.
As $W(z)$ has a heteroclinic orbit with $W(-\infty)=1$ and $W(\infty)=0$
for any $c\ge2$ \citep[$\S$ 11.2]{murray_mathematical_2002}, we
conclude that $\Delta$ is a positively invariant set for the flow
of \eqref{eq:first_order_VW} in the $(V,V_{z})$-plane. The following
corollary describes the $\omega$-limit set of \eqref{eq:first_order_VW}.
\textbf{Corollary: }The $\omega$-limit set of the flow of \eqref{eq:first_order_VW}
starting at $\boldsymbol{\mathcal{V}}_{1}^{*}$, $\omega(\boldsymbol{\mathcal{V}}_{1}^{*})$,
is $\boldsymbol{\mathcal{V}}_{2}^{*}.$
\emph{Proof:}
Since $W(z)\rightarrow0$ as $z\rightarrow+\infty,$ the vector
field for \eqref{eq:first_order_VW} in the $(V,V_{z})$-plane is
asymptotically autonomous to the vector field
\begin{equation}
\frac{d}{dz}\left(\begin{array}{c}
V\\
V_{z}
\end{array}\right)=\left(\begin{array}{c}
V_{z}\\
-cV_{z}-V
\end{array}\right),\label{eq:asymptotical_flow}
\end{equation}
a linear system whose only equilibrium is the origin. As $c\ge2,$
the origin is a stable node with real, negative eigenvalues, and the
flow of the limiting system remains in $\Delta$, and hence the fourth
quadrant, for all time.\\
As $\nicefrac{d}{dz}V=V_{z}<0$ in $\Delta$, no periodic or homoclinic
orbits can exist for the limiting system. We thus conclude from the
asymptotically autonomous Poincar\'e--Bendixson Theorem presented in
\citep{markus_asymptotically_1956}\footnote{The relevant theorem statement is given in Appendix \ref{sec:Statement-of-AA_markus}.
We note that while the results of \citep{markus_asymptotically_1956}
are sufficient for our study, asymptotically autonomous systems have
been more extensively studied in \citep{blythe_autonomous_1991,castillo-chavez_asymptotically_1994,thieme_convergence_1992,thieme_asymptotically_1993}
and a more comprehensive result in describing the $\omega$-limit
set of the asymptotically autonomous flow is given in \citep{thieme_asymptotically_1993}.} that our flow in the $(V,V_{z})$-plane starting at $(\nu,0)$ will
limit to the origin. We conclude that $\omega(\boldsymbol{\mathcal{V}}_{1}^{*})=\boldsymbol{\mathcal{V}}_{2}^{*}.$ $\square$
\begin{figure}
\includegraphics[width=0.45\textwidth]{0_home_john_fall_2016_stucture_manuscript_SIAM_final_invariant_G_0.pdf}\hfill{}\includegraphics[width=0.45\textwidth]{1_home_john_fall_2016_stucture_manuscript_SIAM_final_invariant_G_1.pdf}\vspace{.1cm}
\includegraphics[width=0.45\textwidth]{2_home_john_fall_2016_stucture_manuscript_SIAM_final_W_flow_in_trap_Region.pdf}\hfill{}\includegraphics[width=0.45\textwidth]{3_home_john_fall_2016_stucture_manuscript_SIAM_final_flow_in_trap_Region.pdf}
\protect\caption{Construction of the heteroclinic orbit between $\boldsymbol{\mathcal{V}}_{1}^{*}$
and $\boldsymbol{\mathcal{V}}_{2}^{*}$ for \eqref{eq:first_order_VW}.
In (a) and (b), we depict the trapping region in the $(V,V_{z})$-plane,
$\Delta$, and the vector field along its boundary for $\alpha=0.5,c=2.3,W_{z}=0$
and $W$ near 0 and 1, respectively. In (c) and (d), we depict numerical
simulations of \eqref{eq:first_order_VW} in the $(W,W_{z})$-plane
and in the $(V,V_{z})$-plane, respectively. Arrows denote the direction
of the flow and the black dots mark equilibria of \eqref{eq:first_order_VW}.
\label{fig:invariant_region}}
\end{figure}
\subsection{Summary of results}
We have demonstrated the existence of self-similar traveling wave
solutions to \eqref{eq:structured_fisher_eqn_norm} in this section
of the form
\begin{eqnarray*}
u(t,x,m) & = & \frac{g(\underline{m})}{g(\sigma^{-1}(t;\underline{m}))}V(z;\underline{m}),\ z=x-ct
\end{eqnarray*}
for $c\ge2$ and values of $\underline{m}$ where the initial condition
satisfies $\phi(x=-\infty,\underline{m})>0$, and where $V(z;\underline{m})$
is the traveling wave profile for the characteristic $v(t,x;\underline{m})=u(t,x,m=\sigma^{-1}(t;\underline{m})).$
Setting $m=\sigma^{-1}(t;\underline{m}),$ this can be written
more explicitly as
\[
u(t,x,m)=\left\{ \begin{array}{cc}
\frac{g(\sigma^{-1}(-t;m))}{g(m)}V(z;\sigma^{-1}(-t;m)), & \sigma^{-1}(t;0)\le m<1\\
0 & \mbox{otherwise}
\end{array}\right.
\]
and
\[
\int_{0}^{1}u(t,x,m)dm=w(t,x)=W(z)
\]
where $[V(z;\underline{m}),W(z)]^{T}$ satisfies \eqref{eq:coupled_fisher_VW}.
An example height function $\frac{g(\sigma^{-1}(-t;m))}{g(m)}$ for
trajectories along the activation curves $m=\sigma^{-1}(t;\underline{m})$
will be demonstrated later in Figure \ref{fig:p(t,m)_ex1}.
\section{Structured Fisher's Equation with MAPK-dependent Phenotype\label{sec:Numerical-simulations}}
We now study a version of Fisher's Equation where cellular migration
and proliferation depend on biochemical activity, $m$. Various cell
lines have reduced rates of proliferation and increased migration
in response to MAPK activation \citep{chapnick_leader_2014,clark_molecular_1995,matsubayashi_erk_2004},
so we let $m$ denote activity along the MAPK signaling cascade in
this section. We consider a model with two subpopulations: one with
a high rate of diffusion in response to MAPK activation and the other
with a high rate of proliferation when MAPK levels are low. MAPK activation
will depend on an external forcing factor to represent the presence
of an extracellular signaling chemical, such as ROS, TGF-$\beta$,
or EGF. While the method of characteristics is not applicable to spatial
activation patterning here due to the parabolic nature of \eqref{eq:structured_fisher_eqn}
in space, we can investigate temporal patterns of activation and deactivation.
\emph{We will exhibit simple scenarios that give rise to three ubiquitous
patterns of biochemical activity: 1.) a sustained wave of activation,
2.) a single pulse of activation, and 3.) periodic pulses of activation.}
Before describing these examples, we first introduce some tools to
facilitate our study of \eqref{eq:structured_fisher_eqn}. We will
detail some assumptions that simplify our analysis in Section \ref{sub:Model-Description},
solve and compute the population activation profile over time and
use it to define some activation criteria in Section \ref{sub:Activation-level-in},
and discuss numerical issues and the derivation of a nonautonomous
averaged Fisher's Equation in Section \ref{sub:Derivation-of-the}
before illustrating the different activation patterns and their effects
on migration in Section \ref{sub:Three-biologically-motivated-exa}.
\subsection{Model Description\label{sub:Model-Description}}
Recall that the full structured Fisher's Equation is given by
\begin{eqnarray}
u_{t}+(f(t)g(m)u)_{m} & = & D(m)u_{xx}+\lambda(m)u\left(1-w\right)\label{eq:nonautonomous_fisher_eqn}\\
w & = & \int_{m_{0}}^{m_{1}}u(t,x,m)dm\nonumber \\
u(t=0,x,m) & = & \phi_{1}(m)\phi_{2}(x)\nonumber \\
u(t,x,m=m_{1}) & = & 0\nonumber \\
w(t,x=+\infty)=0 & & w(t,x=-\infty)=1.\nonumber
\end{eqnarray}
We have chosen the separable initial condition $u(t=0,x,m)=\phi_{1}(m)\phi_{2}(x)$
for simplicity. Given some $m_{crit}\in(m_{0},m_{1})$, we define
two subsets of $[m_{0},m_{1}]$ as $M_{inact}:=[m_{0},m_{crit}],M_{act}:=(m_{crit},m_{1}]$,
and the rates of diffusion and proliferation by
\begin{equation}
D(m):=\left\{ \begin{array}{cc}
D_{1} & m\in M_{inact}\\
D_{2} & m\in M_{act}
\end{array}\right.,\ \lambda(m):=\left\{ \begin{array}{cc}
\lambda_{1} & m\in M_{inact}\\
\lambda_{2} & m\in M_{act}
\end{array}\right.\label{eq:nonatuonomous_D_lambda}
\end{equation}
for $D_{1}<D_{2}$ and $\lambda_{2}<\lambda_{1}.$ Hence for $m\in M_{inact},$
the population is termed \emph{inactive} and primarily proliferates,
whereas for $m\in M_{act},$ the population is termed \emph{active}
and primarily diffuses.
We let $\mbox{supp}(\phi_{1}(m))=[\underline{m}_{\min},\underline{m}_{\max}]$
for $\underline{m}_{\max}<m_{crit}$ and assume that $\int_{m_{0}}^{m_{1}}\phi_{1}(m)dm=1$
so that $\phi_{1}(m)$ represents a probability density function for the
initial distribution of cells in $m$. We accordingly denote
\[
\Phi_{1}(m):=\left\{ \begin{array}{cc}
0 & m\le m_{0}\\
\int_{m_{0}}^{m}\phi_{1}(m')dm' & m_{0}<m\le m_{1}\\
1 & m_{1}<m
\end{array}\right.
\]
as the cumulative distribution function for $\phi_{1}(m).$
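For concreteness, $\phi_{1}$ and $\Phi_{1}$ can be sketched for the
indicator density $\phi_{1}(m)=\nicefrac{10}{3}I_{[0.05,0.35]}(m)$ used
in the examples below (the function names here are our own):

```python
# CDF Phi_1 for the uniform indicator density phi_1 = (10/3) I_[0.05, 0.35]
# used in the examples below; helper names are ours, not from the text.
m0, m1 = 0.0, 1.0          # activity domain
a, b = 0.05, 0.35          # support of phi_1, with b < m_crit = 0.5

def phi1(m):
    return 10.0 / 3.0 if a <= m <= b else 0.0

def Phi1(m):
    if m <= m0:
        return 0.0
    if m > m1:
        return 1.0
    # linear ramp on the support, clamped to [0, 1] outside it
    return min(max((m - a) / (b - a), 0.0), 1.0)
```

Since $\phi_{1}$ is uniform on its support, $\Phi_{1}$ is a clamped linear ramp.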
\subsection{Activation Profile and Activation Criteria\label{sub:Activation-level-in}}
An interesting question is how the distribution of \eqref{eq:nonautonomous_fisher_eqn}
along $m$ changes over time. To answer this question, we consider
\eqref{eq:nonautonomous_fisher_eqn} in terms of $t$ and $m$, which
we will write as $p(t,m)$ and call the \emph{activation profile}:
\begin{eqnarray}
p_{t}+(f(t)g(m)p)_{m} & = & 0\label{eq:dist_p_tm}\\
p(0,m) & = & \phi_{1}(m).\nonumber
\end{eqnarray}
Following the analysis from Section \ref{sec:Characteristic-equations-from},
we can solve \eqref{eq:dist_p_tm} analytically. We integrate \eqref{eq:dist_p_tm}
along the \emph{activation curves}, $h(t;\underline{m}),$ which
are now given by
\begin{equation}
m=h(t;\underline{m}):=\sigma^{-1}\left(F(t);\underline{m}\right),\label{eq:h_characteristic}
\end{equation}
where $F(t):=\int_{0}^{t}f(\tau)d\tau$ denotes a cumulative activation
function. We find the activation profile to be:
\begin{equation}
p(t,m)=\left\{ \begin{array}{cc}
\frac{g(\sigma^{-1}(-F(t);m))}{g(m)}\phi_{1}(\sigma^{-1}(-F(t);m)) & h(t;m_{0})\le m\le m_{1}\\
0 & m_{0}\le m<h(t;m_{0}).
\end{array}\right.\label{eq:u(tm)_soln_exact_fs}
\end{equation}
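As a consistency check, \eqref{eq:u(tm)_soln_exact_fs} formally conserves
the total mass of the population: substituting $\underline{m}=\sigma^{-1}(-F(t);m)$,
for which $dm=\frac{g(m)}{g(\underline{m})}d\underline{m}$ along the
activation curves, gives

```latex
\[
\int_{m_{0}}^{m_{1}}p(t,m)dm=\int_{m_{0}}^{m_{1}}\frac{g(\underline{m})}{g(m)}\phi_{1}(\underline{m})\frac{g(m)}{g(\underline{m})}d\underline{m}=\int_{m_{0}}^{m_{1}}\phi_{1}(\underline{m})d\underline{m}=1
\]
```

for all $t$, as expected for a pure transport equation.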
Now we can derive a condition for a cell population starting in the
inactive population to enter the active population. We see from \eqref{eq:u(tm)_soln_exact_fs}
that the population will enter the active population if
\begin{equation}
h(t;\underline{m}_{\max})>m_{crit}\iff F(t)>\sigma(m_{crit};\underline{m}_{\max})\label{eq:transition_condition}
\end{equation}
for some values of $t$. By standard calculus arguments, \eqref{eq:transition_condition}
will occur if
\begin{equation}
F(t_{\max})>\sigma(m_{crit};\underline{m}_{\max})\label{eq:transition_criterion}
\end{equation}
where a local maximum for $F(t)$ occurs at $t=t_{\max}$. Hence,
$f(t_{\max})=0$, $f(t_{\max}^{-})>0$, and $f(t_{\max}^{+})<0$.
We denote \eqref{eq:transition_criterion} as the \emph{activation
criterion} for \eqref{eq:nonautonomous_fisher_eqn}. By the same argument,
for the entire population to activate at some point, we can derive
the \emph{entire activation criterion} as
\begin{equation}
F(t_{\max})>\sigma(m_{crit};\underline{m}_{\min}).\label{eq:full_transition_criterion}
\end{equation}
\subsection{Numerical Simulation Issues and Derivation of an Averaged Nonautonomous
Fisher's Equation\label{sub:Derivation-of-the}}
We depict the $u=1$ isocline for a numerical simulation of \eqref{eq:nonautonomous_fisher_eqn}
in Figure \ref{fig:Numerical-simulations-foru(tmx)} with $g(m)=\alpha m(1-m)$,
$f(t)=\beta\sin(\gamma t)$, $\alpha=1/2,\beta=1,$ and $\gamma=1.615$.
These terms will be detailed more in Example 3 below. For numerical
implementation, we use a standard central difference scheme for numerical
integration along the $x$-dimension, an upwind scheme with flux limiters
(similar to those described in \citep{thackham_computational_2008})
to integrate along the $m$ dimension, and a Crank--Nicolson scheme
to integrate along time. From \eqref{eq:transition_criterion}, we
see that this simulation should not enter the active population with
an initial condition of $\phi_{1}(m)=\nicefrac{10}{3}I_{[.05,0.35]}(m),$
where $I_{M}(m)$ denotes an indicator function with support for $m\in M$.
In Figure \ref{fig:Numerical-simulations-foru(tmx)}, however, we
observe that the numerical simulation does enter the active population,
which causes a significant portion of the population to incorrectly
diffuse into the wound at a high rate.
Numerical simulations of advection-driven processes have been described
as an ``embarrassingly difficult'' task, and one such problem is
the presence of numerical diffusion \citep{leonard_ultimate_1991,thackham_computational_2008}.
Numerical diffusion along the $m$-dimension is hard to avoid and
here causes a portion of the cell population to enter the active population
in situations where it should approach the $m=m_{crit}$ plane
but not pass it. Numerical diffusion can be reduced with a finer grid,
but this can lead to excessively long computation times. With the
aid of the activation curves given by \eqref{eq:h_characteristic},
however, we can track progression of cells in the $m$-dimension analytically
and avoid the problems caused by numerical diffusion completely.
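To make the effect concrete, the following minimal sketch integrates the
transport equation \eqref{eq:dist_p_tm} with a plain first-order upwind
scheme (deliberately without flux limiters, and with a coarser time step
than the figure; this is not the scheme used for our figures). The
parameters match Figure \ref{fig:Numerical-simulations-foru(tmx)}, where
$\gamma$ is tuned so the exact activation curves only just reach
$m_{crit}=0.5$; essentially all of the mass deposited beyond $m_{crit}$
is therefore numerical diffusion.

```python
import math

# First-order upwind (flux form) for p_t + (f(t) g(m) p)_m = 0 with
# g(m) = alpha*m*(1-m) and f(t) = beta*sin(gamma*t).
alpha, beta, gamma = 0.5, 1.0, 1.615
m_crit = 0.5
n = 80
dm = 1.0 / n
dt, t_end = 0.005, 2.0                    # coarser dt than the figure's 1e-3
centers = [(i + 0.5) * dm for i in range(n)]
g_edge = [alpha * e * (1.0 - e) for e in ((i + 1) * dm for i in range(n - 1))]

# phi_1 = (10/3) I_[0.05, 0.35], renormalized to unit mass on the grid
p = [10.0 / 3.0 if 0.05 <= mi <= 0.35 else 0.0 for mi in centers]
mass0 = sum(p) * dm
p = [pi / mass0 for pi in p]

t = 0.0
max_mass_above = 0.0
while t < t_end - 1e-12:
    f_t = beta * math.sin(gamma * t)
    flux = [0.0] * (n + 1)                # zero flux at m = 0 and m = 1
    for i in range(n - 1):
        v = f_t * g_edge[i]
        flux[i + 1] = v * (p[i] if v >= 0.0 else p[i + 1])
    p = [p[i] - dt / dm * (flux[i + 1] - flux[i]) for i in range(n)]
    t += dt
    mass_above = dm * sum(p[i] for i in range(n) if centers[i] > m_crit)
    max_mass_above = max(max_mass_above, mass_above)

total_mass = sum(p) * dm
```

The flux form conserves total mass to machine precision, yet an appreciable fraction of the population leaks past the $m=m_{crit}$ plane, mirroring the behavior in the figure; the flux-limited scheme reduces, but does not eliminate, this leak.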
\begin{figure}
\subfloat[]{\protect\includegraphics[width=0.45\textwidth]{4_home_john_fall_2016_stucture_manuscript_SIAM_final_isocline1_1.png}}\subfloat[]{\protect\includegraphics[width=0.45\textwidth]{5_home_john_fall_2016_stucture_manuscript_SIAM_final_isocline1_2.png}}
\centering{}\protect\caption{Two views of the isocline for $u=1$ from a numerical simulation of
\eqref{eq:nonautonomous_fisher_eqn} with $g(m)=\alpha m(1-m)$ and
$f(t)=\beta\sin(\gamma t)$ for $\alpha=0.5,\beta=1,$ $\gamma=1.615,D_{1}=0.01,D_{2}=1,\lambda_{1}=0.25,$
and $\lambda_{2}=0.0025$ and an initial condition of $\phi_{1}(m)=\nicefrac{10}{3}I_{[.05,0.35]}(m)$
and $\phi_{2}(x)=I_{(-\infty,5]}(x)$. The numerical scheme is discussed
in Section \ref{sub:Derivation-of-the} and the step sizes used are
$\Delta m=\nicefrac{1}{80},\Delta x=\nicefrac{1}{5},\Delta t=10^{-3}.$
From \eqref{eq:transition_criterion}, the simulation should not cross
the $m=m_{crit}$ plane, which is given by the red plane. We see in
frame (a) that the simulation does cross the $m=m_{crit}$ plane due
to numerical diffusion, which causes the high rate of diffusion along
$x$ seen in frame (b). \label{fig:Numerical-simulations-foru(tmx)}}
\end{figure}
To avoid the problems caused by numerical diffusion, we derive a nonautonomous
Fisher's Equation for $w(t,x)$ that represents the average behavior
along $m$ with time-dependent diffusion and proliferation terms.
To investigate the averaged cell population behavior along $m$ over
time, we integrate \eqref{eq:nonautonomous_fisher_eqn} over $m$,
formally splitting the integral over $M_{inact}$ and $M_{act}$, to find
\begin{eqnarray}
w_{t}(t,x) & = & \left(D_{1}w_{xx}+\lambda_{1}w(1-w)\right)I_{[M_{inact}]}(m)\nonumber \\
& & +\left(D_{2}w_{xx}+\lambda_{2}w(1-w)\right)I_{[M_{act}]}(m).\label{eq:fisher_two_subpop}
\end{eqnarray}
An explicit form for \eqref{eq:fisher_two_subpop} thus requires determining
how much of the population is in the active and inactive populations
over time. This is determined with the activation curves by calculating
\begin{eqnarray}
h(t;\underline{m})<m_{crit} & \iff & F(t)<\sigma(m_{crit};\underline{m})\nonumber \\
& \iff & \underline{m}<\sigma^{-1}\left(-F(t);m_{crit}\right)=:\psi(t).\label{eq:psi}
\end{eqnarray}
Thus, $\underline{m}=\sigma^{-1}(-F(t);m)$ maps the distribution
along $m$ at time $t$ back to the initial distribution, $\phi_{1}(\underline{m}),$
and $\psi(t)$ denotes the threshold value in $\underline{m}$ between
the active and inactive populations over time. $\Phi_{1}(\psi(t))$
thus denotes the portion of the population that is inactive at time
$t$, and $1-\Phi_{1}(\psi(t))$ the portion that is active.
We thus derive a nonautonomous PDE for $w$, which we will term the
\emph{averaged nonautonomous Fisher's Equation}, as:
\begin{eqnarray}
w_{t} & = & D(t)w_{xx}+\lambda(t)w(1-w),\label{eq:w_t_phenotype}\\
w(t=0,x) & = & \phi_{2}(x)\nonumber \\
w(t,x=-\infty)=1 & & w(t,x=\infty)=0\nonumber
\end{eqnarray}
where
\begin{eqnarray*}
D(t) & = & D_{2}+(D_{1}-D_{2})\Phi_{1}(\psi(t))\\
\lambda(t) & = & \lambda_{2}+(\lambda_{1}-\lambda_{2})\Phi_{1}(\psi(t)).
\end{eqnarray*}
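As an illustration (a sketch under the assumptions of Example 1 below:
$g(m)=\alpha m(1-m)$ and $f(t)=1$, so that $\psi(t)=(1+e^{\alpha t})^{-1}$
in closed form), $D(t)$ and $\lambda(t)$ interpolate from the inactive
rates $(D_{1},\lambda_{1})$ to the active rates $(D_{2},\lambda_{2})$
as the population crosses $m_{crit}$:

```python
import math

# Time-dependent coefficients of the averaged nonautonomous Fisher's
# Equation, using the closed-form psi(t) of Example 1 (f(t) = 1, logistic g).
alpha = 0.5
D1, D2 = 0.01, 1.0
lam1, lam2 = 0.25, 0.0025
a, b = 0.05, 0.35          # support of phi_1

def Phi1(m):               # CDF of phi_1 = (10/3) I_[a, b]
    return min(max((m - a) / (b - a), 0.0), 1.0)

def psi(t):                # inactive/active threshold in underline-m
    return 1.0 / (1.0 + math.exp(alpha * t))

def D(t):
    return D2 + (D1 - D2) * Phi1(psi(t))

def lam(t):
    return lam2 + (lam1 - lam2) * Phi1(psi(t))
```

At $t=0$, $\psi(0)=0.5>\underline{m}_{\max}$, so the population is entirely inactive and $D(0)=D_{1}$; as $t\rightarrow\infty$, $\psi(t)\rightarrow0$ and $D(t)\rightarrow D_{2}$.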
\subsection{Three biologically-motivated examples\label{sub:Three-biologically-motivated-exa}}
We next consider three examples of \eqref{eq:nonautonomous_fisher_eqn}
that pertain to common patterns of biochemical activity during wound
healing. We will use numerical simulations of \eqref{eq:w_t_phenotype}
to investigate how different patterns of activation and deactivation
over time affect the averaged cell population profile. We will also
investigate how the profile changes when crossing the activation and
entire activation thresholds derived in \eqref{eq:transition_criterion}
and \eqref{eq:full_transition_criterion}. In each example, we fix
$m_{crit}=0.5,D_{1}=0.01,D_{2}=1,\lambda_{1}=.25$, $\lambda_{2}=0.0025,\phi_{1}(m)=\nicefrac{10}{3}I_{[.05,0.35]}(m),\phi_{2}(x)=I_{(-\infty,5]}(x)$
and $g(m)=\alpha m(1-m),$ and use a different form of $f(t)$ in each
example to mimic different biological situations. The choice for $g(m)$
ensures that the distribution along $m$ stays between $m=0$ and $m=1$.
A standard central difference scheme is used for numerical simulations
of \eqref{eq:w_t_phenotype}.
\subsubsection*{Example 1: Single Sustained MAPK activation wave: $f(t)=1$\label{exa:no_forcing_threshold}}
In this example, we consider a case where we observe the entire cell
population approach a level of $m=1$ over time. Such a scenario may
represent the sustained wave of ERK 1/2 activity observed in MDCK
cells from \citep{matsubayashi_erk_2004}. The authors of \citep{posta_mathematical_2010}
proposed that the autocrine production of EGF caused this activation
in the population. We use $f(t)=1$ to observe this behavior.
Using \eqref{eq:sigma} and \eqref{eq:psi}, we find
\begin{eqnarray*}
\sigma(m;\underline{m}) & = & \frac{1}{\alpha}\log\left(\frac{m}{1-m}\frac{1-\underline{m}}{\underline{m}}\right);\ \underline{m},m\in(0,1)\\
h(t;\underline{m}) & = & \sigma^{-1}(t;\underline{m})\ =\ \underline{m}\left((1-\underline{m})e^{-\alpha t}+\underline{m}\right)^{-1}\\
\psi(t) & = & \left(1+e^{\alpha t}\right)^{-1}.
\end{eqnarray*}
These functions demonstrate that the distribution is always activating
along $m$ but never reaches the $m=1$ line, as $\sigma(m;\underline{m})\rightarrow\infty$
as $m\rightarrow1^{-}$ for any $\underline{m}\in(0,1).$ The entire
population (excluding $\underline{m}=0)$ approaches $m=1$ asymptotically,
however, as $\lim_{t\rightarrow\infty}\sigma^{-1}(t;\underline{m})=1$.
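These closed forms are easy to sanity-check numerically (a quick sketch;
the function names are ours): $h(t;\underline{m})$ should invert $\sigma$,
and the threshold $\psi(t)$ should satisfy $h(t;\psi(t))=m_{crit}=0.5$.

```python
import math

alpha = 0.5

def sigma(m, mbar):
    # sigma(m; m_) = (1/alpha) log[(m/(1-m)) ((1-m_)/m_)]
    return (1.0 / alpha) * math.log((m / (1.0 - m)) * ((1.0 - mbar) / mbar))

def h(t, mbar):
    # h(t; m_) = sigma^{-1}(t; m_) for the logistic g(m) = alpha m (1-m)
    return mbar / ((1.0 - mbar) * math.exp(-alpha * t) + mbar)

def psi(t):
    # threshold in underline-m between inactive and active populations
    return 1.0 / (1.0 + math.exp(alpha * t))
```

The identity $h(t;\psi(t))=\nicefrac{1}{2}$ holds exactly, since $(1-\psi)e^{-\alpha t}=\psi$.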
In Figure \ref{fig:p(t,m)_ex1}, we use \eqref{eq:u(tm)_soln_exact_fs}
to depict the activation profile, $p(t,m)$, over time to show the
activation behavior of the population. As expected, we observe the
entire population converging to $m=1$. We include some specific plots
of the activation curves, $h(t;\underline{m}),$ for this example.
Note that the density changes along these curves by the height function
$\frac{g(\sigma^{-1}(-F(t),m))}{g(m)}$, which is equivalent to the
height function of the self-similar traveling wave ansatz made in
\eqref{eq:self_similar_TW_ansatz}.
\begin{figure}
\centering{}\includegraphics[width=0.45\textwidth]{6_home_john_fall_2016_stucture_manuscript_SIAM_final_ex1_tm.pdf}\protect\caption{The analytical solution for the activation profile, $p(t,m),$ for
Example 1 for $\alpha=0.5,$ and $\phi_{1}(m)=I_{(0.05,0.35)}(m).$
The solid black curves denote $h(t;\underline{m})$ for $\underline{m}=0.05,0.15,$
and $0.35$ and the dashed line denotes $m=m_{crit}.$ Note that a
log scale is used along $p$ for visual ease. \label{fig:p(t,m)_ex1}}
\end{figure}
In Figure \ref{fig:ex1_nonaut_sims}(a), we depict a numerical simulation
of $w(t,x)$ over time using \eqref{eq:w_t_phenotype}. The slices
denoted as ``P'' and ``D'' denote when the population is primarily
proliferating ($\Phi_{1}(\psi(t))>1/2$) or diffusing $(\Phi_{1}(\psi(t))\le1/2)$
over time. The profile maintains a high cell density but limited migration
into the wound during the proliferative phase and then migrates into
the wound quickly during the diffusive phase but can not maintain
a high cell density throughout the population. In Figure \ref{fig:ex1_nonaut_sims}(b),
we investigate how the profile of $w(t=40,x)$ changes as $\alpha$
varies from $\alpha=0$ to $\alpha=0.2$. In the slice denoted ``No
activation'', the entire population is still in the inactive population
at $t=40$ and thus does not progress far into the wound or change
with $\alpha$. In the slice denoted ``Activation,'' the population
is split between the active and inactive populations at $t=40.$ The
profiles here are sensitive to increasing values of $\alpha$, as
they migrate further into the wound while maintaining a high density
near $x=0$. The slice denoted as ``Entire Activation'' denotes
simulations that are entirely in the active population by $t=40.$
As $\alpha$ increases, these simulations do not migrate much further
into the wound but do have decreasing densities at $x=0.$ These results
suggest that a combination of proliferation and diffusion must be
used to maximize population migration while maintaining a high cellular
density behind the population front. The optimal combination appears
to occur at the entire activation threshold.
\begin{figure}
\includegraphics[width=0.45\textwidth]{7_home_john_fall_2016_stucture_manuscript_SIAM_final_nonaut_fisher_profile_ex1.pdf}
\hfill{}\includegraphics[width=0.45\textwidth]{8_home_john_fall_2016_stucture_manuscript_SIAM_final_Ex1_nonaut_range_alpha.pdf}
\protect\caption{Numerical simulations of the averaged nonautonomous Fisher's equation
for Example 1. In (a), we depict a simulation of $w(t,x)$ over time
for $\alpha=0.05$. The letters ``P'' and ``D'' denote when the
population is primarily proliferating or diffusing, respectively.
In (b), we depict how the profile for $w(t=40,x)$ changes for various
values of $\alpha$. The descriptions ``No Activation'', ``Activation'',
and ``Entire Activation'' denote values of $\alpha$ for which the
population is entirely in the inactive population, split between the
active and inactive populations, or entirely in the active population
at $t=40$, respectively.\label{fig:ex1_nonaut_sims}}
\end{figure}
\subsubsection*{Example 2: Single pulse of MAPK activation: $f(t)=\beta e^{\gamma t}-1$}
We now detail an example that exhibits a pulse of activation in the
$m$ dimension, which may represent the transient wave of ERK 1/2
activation observed in MDCK cells in \citep{matsubayashi_erk_2004}.
The authors of \citep{posta_mathematical_2010} proposed that this
wave may be caused by the rapid production of ROS in response to the
wound, followed by the quick decay of ROS or its consumption by cells.
We now let $f(t)=\beta e^{\gamma t}-1$. This forcing function arises
if ROS is present but decaying exponentially over time and modeled
by $s(t)=\beta e^{\gamma t},\beta>0,\gamma<0$ and cells activate
linearly in response to the presence of ROS but have a baseline level
of deactivation, which may be given by $f(s)=s-1.$
We see that $\sigma(m;\underline{m})$ and $\sigma^{-1}(t;\underline{m})$
are the same as in Example 1 and now compute
\begin{eqnarray*}
h(t;\underline{m}) & = & \underline{m}\left(\underline{m}+(1-\underline{m})\exp\left[\alpha t-\frac{\alpha\beta}{\gamma}(\exp(\gamma t)-1)\right]\right)^{-1}\\
\psi(t) & = & \left(1+\exp\left[-\alpha t+\frac{\alpha\beta}{\gamma}(\exp(\gamma t)-1)\right]\right)^{-1}.
\end{eqnarray*}
In Figure \ref{fig:p(t,m)_ex2}, we use \eqref{eq:u(tm)_soln_exact_fs}
to depict the activation profile, $p(t,m),$ over time to show the
activation behavior of the population. We also include some specific
plots of the activation curves, $h(t;\underline{m})$, which show
a pulse of MAPK activity in the population that starts decreasing
around $t=5.$ Note that $h(t;0.35)$ crosses the $m=m_{crit}$ line
but $h(t;0.05)$ does not, so \eqref{eq:transition_criterion} is
satisfied for this parameter set (the population becomes activated)
but \eqref{eq:full_transition_criterion} is not (the entire population
does not become activated).
\begin{figure}
\centering{}\includegraphics[width=0.45\textwidth]{9_home_john_fall_2016_stucture_manuscript_SIAM_final_ex2_tm.pdf}\protect\caption{The analytical solution for the activation profile, $p(t,m)$, for
Example 2 for $\alpha=0.5,\beta=3,\gamma=-1/4$ and $\phi_{1}(m)=I_{(0.05,0.35)}(m).$
The solid black curves denote $h(t;\underline{m})$ for $\underline{m}=0.05,0.15,$
and $0.35$ and the dashed line denotes $m=m_{crit}.$ Note that a
log scale is used along $p$ for visual ease. \label{fig:p(t,m)_ex2}}
\end{figure}
Using \eqref{eq:transition_criterion}, we determine our activation
criterion for this example as
\[
\frac{1-\beta+\log\beta}{\gamma}>\frac{1}{\alpha}\log\left(\frac{m_{crit}}{1-m_{crit}}\frac{1-\underline{m}_{\max}}{\underline{m}_{\max}}\right).
\]
If we fix $\gamma=-1,\alpha=1,m_{crit}=0.5,\underline{m}_{\max}=0.35,$
and $\underline{m}_{\min}=0.05$, we find that the above inequality
is satisfied for $\beta$ approximately greater than 2.55. This may
represent a scenario in which we know the decay rate of the ROS through
$\gamma,$ the activation rate of the MAPK signaling cascade through
$\alpha$, the MAPK activation distribution before ROS release with
$\underline{m}_{\min}$ and $\underline{m}_{\max},$ and the activation
threshold with $m_{crit}.$ The values of $\beta$ denote the concentration
of released ROS, which should be at least 2.55 to see the population
activate. We similarly find that the entire population will activate
at some time for $\beta>5.68.$
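The thresholds quoted above can be recovered numerically; the sketch
below bisects the activation criterion, which with $\gamma=-1$, $\alpha=1$,
and $m_{crit}=0.5$ reads $\beta-1-\log\beta>\log[(1-\underline{m})/\underline{m}]$:

```python
import math

def lhs(beta):
    # (1 - beta + log(beta))/gamma with gamma = -1; increasing for beta > 1
    return beta - 1.0 - math.log(beta)

def threshold(mbar, lo=1.0, hi=50.0):
    # smallest beta satisfying the criterion for a given underline-m
    rhs = math.log((1.0 - mbar) / mbar)   # m_crit/(1 - m_crit) = 1 drops out
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > rhs:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

beta_act = threshold(0.35)    # activation: underline-m_max = 0.35
beta_full = threshold(0.05)   # entire activation: underline-m_min = 0.05
```

This recovers $\beta\approx2.55$ for activation and $\beta\approx5.68$ for entire activation, matching the values quoted above.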
In Figure \ref{fig:ex2_nonaut_sims}(a), we depict a numerical simulation
of \eqref{eq:w_t_phenotype} for this example. The population quickly
transitions to a diffusing stage due to the pulse of MAPK activation
and shows the smaller densities ($w$ approximately less than $0.2$)
migrating into the wound rapidly while the density behind the population
front drops. As the pulse of MAPK activation ends and the population
transitions back to a proliferating phenotype, the population restores
a high density behind the cell front and begins to develop a traveling
wave profile, as suggested by the parallel contour lines. In Figure
\ref{fig:ex2_nonaut_sims}(b), we investigate how the profile for
$w(t=30,x)$ changes as $\beta$ varies from $\beta=2$ to $\beta=9$
while keeping all other parameters fixed. We observe that the profile
is the same for all values of $\beta<2.55,$ as \eqref{eq:transition_criterion}
is not satisfied. As $\beta$ increases past the activation threshold,
the profile shows increased rates of migration into the wound. After
passing the entire activation threshold \eqref{eq:full_transition_criterion},
the profile continues to migrate further as $\beta$ increases, but
appears less sensitive to $\beta$. This increased migration is likely
due to the population spending more time in the active population
for larger values of $\beta$. Note that for all simulations shown,
the pulse of MAPK activation has finished by $t=30.$
\begin{figure}
\includegraphics[width=0.45\textwidth]{10_home_john_fall_2016_stucture_manuscript_SIAM_final_nonaut_fisher_profile_ex2.pdf}\hfill{}\includegraphics[width=0.45\textwidth]{11_home_john_fall_2016_stucture_manuscript_SIAM_final_Ex2_nonaut_range_beta.pdf}\protect\caption{Numerical simulations of the averaged nonautonomous Fisher's equation
for Example 2. In (a), we depict a simulation of $w(t,x)$ over time
for $\alpha=1,\beta=8,\gamma=-1$. Slices denoted with a ``P'' or
``D'' denote when the population is primarily proliferating or diffusing,
respectively. In (b), we depict how the profile for $w(t=30,x)$ changes
for various values of $\beta$. The descriptions ``No activation'',
``Activation'', and ``Entire Activation'' denote values of $\beta$
for which the population is entirely in the inactive population, split
between the active and inactive populations, or entirely in the active
population at $t=t_{\max}$. \label{fig:ex2_nonaut_sims}}
\end{figure}
\subsubsection*{Example 3: Periodic pulses of MAPK activation: $f(t)=\beta\sin(\gamma t)$}
As a last example, we exhibit a scenario with periodic waves of activity.
Such behavior was observed in some of the experiments performed in
\citep{zi_quantitative_2011}, in which cell cultures of the HaCaT
cell line were periodically treated with TGF-$\beta$ to investigate
how periodic treatment with TGF-$\beta$ affects activation of the
SMAD pathway (the canonical pathway for TGF-$\beta$, which also influences
cell proliferation and migration). We let $f(t)=\beta\sin(\gamma t),\beta,\gamma>0$,
which occurs if the concentration of TGF-$\beta$ over time is given
by $s(t)=1+\sin(\gamma t),$ and cells activate linearly in response
to $s$ and have a baseline rate of deactivation, given by $f(s)=\beta(s-1)$.
We now calculate
\[
h(t;\underline{m})=\underline{m}\left(\underline{m}+(1-\underline{m})\exp\left[\frac{\alpha\beta}{\gamma}\left(\cos(\gamma t)-1\right)\right]\right)^{-1}
\]
\[
\psi(t)=\left(1+\exp\left[\frac{\alpha\beta}{\gamma}(1-\cos(\gamma t))\right]\right)^{-1}.
\]
In Figure \ref{fig:p(t,m)_ex3}, we use \eqref{eq:u(tm)_soln_exact_fs}
to depict the activation profile, $p(t,m),$ over time to show the
activation behavior of the population. We also include some specific
plots of the activation curves $h(t;\underline{m})$, which demonstrate
periodic waves of activation along $m$. Note that $h(t;0.05)$ crosses
the $m=m_{crit}$ line, so \eqref{eq:full_transition_criterion} is
satisfied, and the entire population becomes activated at some points
during the simulation.
\begin{figure}
\centering{}\includegraphics[width=0.45\textwidth]{12_home_john_fall_2016_stucture_manuscript_SIAM_final_ex3_tm.pdf}\protect\caption{The analytical solution for the activation profile, $p(t,m)$, for
Example 3 for $\alpha=1/2,\beta=4,\gamma=1$ and $\phi_{1}(m)=I_{(0.05,0.35)}(m).$
The solid black curves denote $h(t;\underline{m})$ for $\underline{m}=0.05,0.15,$
and $0.35$ and the dashed line denotes $m=m_{crit}.$ Note that a
log scale is used along $p$ for visual ease. \label{fig:p(t,m)_ex3}}
\end{figure}
The activation criterion \eqref{eq:transition_criterion} can be solved
as
\[
\frac{2\beta}{\gamma}>\frac{1}{\alpha}\log\left(\frac{m_{crit}}{1-m_{crit}}\frac{1-\underline{m}_{\max}}{\underline{m}_{\max}}\right).
\]
We thus calculate that if we fix $\beta=1,\alpha=1/2,\underline{m}_{\max}=0.35,\underline{m}_{\min}=0.05,$
and $m_{crit}=0.5,$ then the activation criterion \eqref{eq:transition_criterion}
is satisfied for $\gamma<1.615$ and the entire activation criterion
\eqref{eq:full_transition_criterion} is satisfied for $\gamma<0.34.$
These estimates would tell us how frequently signaling factor treatment
is needed to see different patterns of activation in the population.
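Since the criterion for this example is an explicit inequality in $\gamma$, the quoted thresholds follow in closed form, $\gamma < 2\alpha\beta/\log\bigl(\frac{m_{crit}}{1-m_{crit}}\frac{1-\underline{m}}{\underline{m}}\bigr)$. A short numerical check (illustrative function name, not part of the original analysis):

```python
import math

def gamma_threshold(m_crit, m_bar, alpha=0.5, beta=1.0):
    """Activation occurs for gamma below this value in Example 3:
    2*beta/gamma > (1/alpha)*log(...)  <=>  gamma < 2*alpha*beta/log(...)."""
    rhs = math.log(m_crit / (1.0 - m_crit) * (1.0 - m_bar) / m_bar)
    return 2.0 * alpha * beta / rhs

print(round(gamma_threshold(0.5, 0.35), 3))  # -> 1.615 (activation)
print(round(gamma_threshold(0.5, 0.05), 3))  # -> 0.34  (entire activation)
```
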
In Figure \ref{fig:ex3_nonaut_sims}(a), we depict a numerical simulation
of \eqref{eq:w_t_phenotype} for this example. The population phenotype
has a period of $4\pi$, and we see that the lower densities migrate
into the wound most during the diffusive stages, whereas all densities
appear to migrate into the wound at similar speeds during the proliferative
stages. In Figure \ref{fig:ex3_nonaut_sims}(b), we investigate how
the profile for $w(t=40,x)$ changes as $\gamma$ varies between $\gamma=0$
and $\gamma=1.9$ while keeping all other parameters fixed. All profiles
appear the same for $\gamma>1.615$ as \eqref{eq:transition_criterion}
is not satisfied. As $\gamma$ decreases below this threshold, more
of the population becomes activated during the simulation, culminating
in a maximum propagation of the population at the entire activation
threshold, $\gamma\approx0.34.$ As $\gamma$ falls below $\gamma=0.34,$
the population tends to migrate less, although the population does
migrate far for $\gamma$ near $0.2$. For $\gamma<0.2,$ the population
appears to spend too much time in the active population and diffuses
excessively with limited proliferation. These simulations lead to
shallow profiles that do not migrate far into the wound. As $\gamma$
approaches zero, the population would eventually become entirely
activated, but not before $t=40$; these simulations thus stay in the
inactive population for the duration of the simulation and do not
migrate far into the wound.
\begin{figure}
\includegraphics[width=0.45\textwidth]{13_home_john_fall_2016_stucture_manuscript_SIAM_final_nonaut_fisher_profile_ex3.pdf}\hfill{}\includegraphics[width=0.45\textwidth]{14_home_john_fall_2016_stucture_manuscript_SIAM_final_Ex3_nonaut_range_beta_transition.pdf}\protect\caption{Numerical simulations of the averaged nonautonomous Fisher's equation
for Example 3. In (a), we depict a simulation of $w(t,x)$ over time
for $\alpha=0.5,\beta=1,$ and \protect \\
$\gamma=1/2$. Slices denoted with a ``P'' or ``D'' denote when
the population is primarily proliferating or diffusing, respectively.
In (b), we depict $w(t=40,x)$ for various values of $\gamma$. The
descriptions ``No activation'', ``Activation'', and ``Entire
Activation'' denote values of $\gamma$ for which the population
is entirely in the inactive population, split between the active and
inactive populations, or entirely in the active population at $t=t_{\max}$.\label{fig:ex3_nonaut_sims}}
\end{figure}
\section{Discussion and Future work\label{sec:Discussion-and-future}}
We have investigated a structured Fisher's Equation that incorporates
an added dimension for biochemical activity that influences population
migration and proliferation. The method of characteristics proved
to be a useful way to track the progression along the population activity
dimension over time. With the aid of a phase plane analysis and an
asymptotically autonomous Poincar\'e--Bendixson Theorem, we were able
to prove the existence of a self-similar traveling wave solution to
the equation when diffusion and proliferation do not depend on MAPK
activity. The height function of the self-similar traveling wave ansatz
along characteristic curves is demonstrated in Figures \ref{fig:p(t,m)_ex1},
\ref{fig:p(t,m)_ex2}, and \ref{fig:p(t,m)_ex3}. We believe our analysis
could be extended to investigate structured versions of other nonlinear
PDEs.
Activation of the MAPK signaling cascade is known to influence collective
migration during wound healing through its effects on cellular migration
and proliferation. For this reason, we also considered a structured PDE model
in which the rates of cellular diffusion and proliferation depend
on the levels of MAPK activation in the population. We also extended
the model to allow for the presence of an external cytokine or growth
factor that regulates activation and deactivation along the MAPK signaling
cascade. We derived two activation criteria for the model to establish
conditions under which the population will become activated during
simulations. As numerical simulations of the structured equation are
prone to error via numerical diffusion, we derived a nonautonomous
equation in time and space to represent the average population behavior
along the biochemical activity dimension. Using this nonautonomous
equation, we exhibited three simple examples that demonstrate biologically
relevant activation levels and their effects on population migration:
a sustained wave of activity, a pulse of activity, and periodic pulses
of activity. We found that the population tends to migrate farthest
while maintaining a high cell density at the entire activation threshold
value, \eqref{eq:full_transition_criterion}, for the sustained wave
and periodic pulse patterns of activation. The single pulse case continued
migrating further into the wound after passing the entire activation
threshold but appeared less sensitive after doing so.
A natural next step for this analysis is to use a structured population
model of this sort in combination with biological data to thoroughly
investigate the effects of MAPK activation and deactivation on cell
migration and proliferation during wound healing. Previous mathematical
models have focused on either collective migration during wound healing
assays in response to EGF treatment (while neglecting the MAPK signaling
cascade) \citep{johnston_estimating_2015,nardini_modeling_2016} or
MAPK propagation during wound healing assays (while neglecting cell
migration) \citep{posta_mathematical_2010}. To the best of our knowledge,
no mathematical models have been able to reliably couple signal propagation
and its effect on cell migration during wound healing. The examples
detailed in this work intentionally used the simplest terms possible
as a means to focus on the underlying mathematical aspects. With a
separate in-depth study into the biochemistry underlying the MAPK
signaling cascade and its relation with various cytokines or growth
factors, more complicated and biologically relevant terms for $g(m),f(s),$
and $s(t)$ can be determined to help elucidate the effects of MAPK
activation on cell migration during wound healing.
The analytical techniques used in this study cannot be used to investigate
spatial patterns of biochemical activity due to the parabolic nature
of \eqref{eq:structured_fisher_eqn} in space. Cell populations also
migrate via chemotaxis during wound healing, in which cells migrate
up a concentration gradient of some chemical stimulus \citep{ai_reaction_2015,keller_traveling_1971,landman_diffusive_2005,newgreen_chemotactic_2003}.
Chemotactic equations are hyperbolic in space, which may facilitate
spatial patterns of MAPK activation during wound healing, such as
those described experimentally in \citep{chapnick_leader_2014}. As
various pathways become activated and cross-talk during wound healing
to influence migration \citep{guo_signaling_2009}, future studies
could also investigate a population structured along multiple signaling
pathways, $u(t,x,\vec{m})$ for the vector $\vec{m}=(m_{1},m_{2},\dots,m_{n})^{T}$.
Because the cell population also produces cytokines and growth factors
for paracrine and autocrine signaling during wound healing, these
models would also benefit from unknown variables representing ROS,
TGF-$\beta$, EGF, etc.
While the main motivation for this study is epidermal wound healing,
there are potential applications in other areas of biology. Fisher's
equation has also been used to study population dynamics in ecology
and epidemiology \citep{ai_travelling_2005,hastings_spatial_2005,shigesada_biological_1997}.
Our framework could be extended to a case where an environmental effect,
such as seasonal forcing, impacts species migration or susceptibility
of individuals to disease. The results presented here may thus aid
in a plethora of mathematical biology studies.
\bibliographystyle{my_siam}
\section{Introduction
\label{sec:Introduction}}
The large-scale structure of the Universe is an inexhaustible fountain
of cosmological information and has long played an important role
in cosmology. The various patterns in the spatial distribution of
galaxies have emerged from the initial density fluctuations of the
Universe, whose origin is believed to lie in quantum
fluctuations during the inflationary phase of the very early Universe
\cite{Lyth:2009zz}. Considerable information on the evolution of the
Universe is also contained in the large-scale structure. For example,
the spatial clustering of galaxies is a promising probe to reveal the
nature of dark energy \cite{SDSS:2005xqv} and the initial condition in
the early Universe such as the primordial spectrum and non-Gaussianity
etc.~\cite{Dalal:2007cu}. While the large-scale structure is used as a
probe of the density field in the relatively late Universe, the
distributions of observable objects, such as galaxies, quasars, and
21cm lines, are biased tracers of the underlying matter
distribution of the Universe. The difference between the distributions of
galaxies and matter is referred to as ``galaxy bias''
\cite{Desjacques:2016bnm}. The phenomenon of biasing is not restricted
to galaxies, but is unavoidable whenever we use astronomical objects
as tracers of the underlying matter distribution.
The number density of galaxies is a primary target for probing the
properties of the large-scale structure. Theoretical predictions for the
spatial distribution of galaxies are investigated by various methods,
including numerical simulations, analytic modeling, and the
perturbation theories. On one hand, the dynamical evolution of the
mass density field on relatively large scales is analytically
described by the perturbation theory
\cite{Hunter1964,Tomita1967,Peebles1980,Bernardeau:2001qr}, which is
valid as long as the density fluctuations are small and in the
quasi-linear regime. On the other hand, the dynamical structure
formation on relatively small scales, including the formation of
galaxies and other astronomical objects, is not analytically tractable
because of the full nonlinearity of the problem. One should resort to
numerical simulations and nonlinear modeling to describe and
understand the small-scale structure formation. For example, the halo
model \cite{Mo:1995cs,Mo:1996cn,Cooray:2002dia} statistically predicts
spatial distributions of halos depending on the initial conditions,
using simple assumptions based on the Press-Schechter theory
\cite{Press:1973iz} and its extensions \cite{Bond:1990iw,Sheth:1999mn}
of the nonlinear structure formation.
The biasing of tracers affects not only spatial distributions on small
scales, but also those on large scales. However, the biasing effects
on large scales are not as complicated as those on small scales. For
example, the density contrast $\delta_\mathrm{g}$ of biased tracers on
sufficiently large scales, where the dynamics are described by linear
theory, is proportional to the linear density contrast
$\delta_\mathrm{L}$, and we have a simple relation
$\delta_\mathrm{g} = b\,\delta_\mathrm{L}$, where a proportional
constant $b$ is called the linear bias parameter
\cite{Kaiser:1984sw,Bardeen:1985tr}. While nonlinear dynamics
complicate the situation, the nonlinear perturbation theory to
describe the spatial clustering of biased tracers has also been
developed \cite{Bernardeau:2001qr,Desjacques:2016bnm}. Since the
perturbation theory cannot describe the nonlinear structure formation
which takes place on small scales, it is inevitable to introduce some
unknown parameters or functions whose values strongly depend on the
small-scale nonlinear physics which cannot be predicted from the first
principle. These nuisance parameters are called bias parameters or
bias functions in the nonlinear perturbation theory. When a model of
bias is defined in some way, one can calculate the evolution of the
biased tracers by applying the nonlinear perturbation theory.
There are several different ways of characterizing the bias model in
the nonlinear perturbation theory \cite{Desjacques:2016bnm}. Among
others, the integrated perturbation theory (iPT)
\cite{Matsubara:2011ck,Matsubara:2012nc,Matsubara:2013ofa,Matsubara:2016wth},
which is based on the Lagrangian perturbation theory
\cite{Buchert:1989xx,Moutarde:1991evx} and orthogonal decompositions
of the bias \cite{Matsubara:1995wd}, provides a method to generically
include any model of bias formulated in Lagrangian space, irrespective
of whether the bias is local or nonlocal. In this formalism, the
concept of the renormalized bias functions \cite{Matsubara:2011ck} is
introduced. Dynamical evolutions on large scales which are described
by perturbation theory are separated from complicated nonlinear
processes of structure formation on small scales. The latter
complications are all encoded in the renormalized bias functions in
the formalism of iPT. The renormalized bias functions are given for
any models of bias, which can be nonlocal in general. For example, the
halo bias is one of the typical models of Lagrangian bias, and the
renormalized bias functions are uniquely determined and calculated
from the model \cite{Matsubara:2012nc}.
The number density of biased objects is not the only quantity we can
observe. For example, the position and shape of galaxies are
simultaneously observed in imaging surveys of galaxies, which are
essential in observations of weak lensing fields to probe the nature
of dark matter and dark energy \cite{Bartelmann:1999yn}. Before the
lensing effects, the galaxy shapes are more or less correlated to the
mass density field through, e.g., gravitational tidal fields and other
environmental structures in the underlying density field. The shapes
of galaxies, i.e., sizes and intrinsic alignments
\cite{Catelan:2000vm,Okumura:2008du,Joachimi:2015mma,Kogai:2018nse,Okumura:2019ned,Okumura:2019ozd,Okumura:2020hhr,Taruya:2020tdi,Okumura:2021xgc}
are expected to be promising probes of cosmological information
\cite{Chisari:2013dda} in the near future when unprecedentedly large
imaging surveys are going to take place. For example, the shape
statistics of galaxies offer a new probe of particular features in
primordial non-Gaussianity generated during the inflation in the
presence of higher-spin fields \cite{Arkani-Hamed:2015bza}, which
features are dubbed ``cosmological collider physics.''
Motivated by recent progress in observational techniques of imaging
surveys, analytical modelings of galaxy shape statistics by the
nonlinear perturbation theory have recently been considered
\cite{Blazek:2015lfa,Blazek:2017wbz,Schmitz:2018rfw,Vlah:2019byq,Vlah:2020ovg,Taruya:2021jhg}.
Among others, Ref.~\cite{Vlah:2019byq} developed a formalism based on
spherical tensors to describe the galaxy shapes, with which rotation
and parity symmetries in the shape statistics are respected. They use
an approach of what is called Effective Field Theory of the
Large-Scale Structure (EFTofLSS) with the Eulerian perturbation
theory, and galaxy biasing is modeled by introducing a set of terms
that are generally allowed by conceivable symmetries of the problem;
the coefficients of these terms are free parameters in the theory.
The galaxy shape statistics are usually characterized by the
second-order moments of galaxy images in the two-dimensional sky,
which correspond to the projection of the three-dimensional
second-order moments of galaxy shapes to the two-dimensional sky. In
principle, the galaxy images contain information about other moments
higher than the second order. The higher-order moments in galaxy
images turn out to be possible probes for the higher-spin fields in
the context of cosmological collider physics through angle-dependent
primordial non-Gaussianity. Ref.~\cite{Kogai:2020vzz} showed that the
shape statistics of the order-$s$ moment turn out to be a probe for
spin-$s$ fields.
In this paper, inspired by the above developments, we consider a
generalization of the iPT formalism to predict correlations of tensor
fields of rank $l$ in general with the perturbation theory. In order
to have rotational symmetry apparent, we decompose the tensor fields
into irreducible tensors on the basis of spherical tensors, just
similarly in the pioneering work of
Refs.~\cite{Vlah:2019byq,Vlah:2020ovg}. However, unlike the previous
work, we do not fix the coordinate system to the directions of
wavevectors of Fourier modes, so as to make the theory fully covariant
with respect to rotational symmetry in three-dimensional space. It
turns out that generalizing the iPT formalism on the basis of
irreducible tensors results in an elegant way of describing the
perturbation theory only with rotationally invariant quantities in the
presence of tensor-valued bias. Various methods developed for the
theory of angular momentum in quantum mechanics, such as the
$3nj$-symbols and their orthogonality relations, sum rules, the
Wigner-Eckart theorem, and so on, are effectively utilized in the
course of calculations for predicting statistical quantities of tensor
fields.
This paper defines the basic formalism of the tensor generalization of
the iPT methods. Several applications of the fully general
formalism to relatively simple examples of calculating statistics are
also illustrated, i.e., the lowest-order predictions of the power
spectrum of tensor fields in real space and in redshift space,
lowest-order effects of primordial non-Gaussianity in the power
spectrum, and tree-level predictions of the bispectrum of tensor
fields in real space. The main purpose of this paper is to show the
fundamental formulation of tensor-valued iPT, and to give useful
methods and formulas for future applications to calculate statistics
of concrete tensor fields, such as galaxy spins, shapes, etc. Further
developments for calculating loop corrections in the
higher-order perturbation theory are described in a separate paper
in the series, Paper~II \cite{PaperII}, which follows this paper.
This paper is organized as follows. In Sec.~\ref{sec:SphericalBasis},
various notations regarding the spherical basis are introduced and
defined, and their fundamental properties are derived and summarized.
In Sec.~\ref{sec:iPTTensor}, the iPT formalism originally for
scalar-valued fields is generalized to those for tensor-valued fields,
and rotationally invariant components of renormalized bias functions
and higher-order propagators are identified. Additional symmetries in
them are also described. Explicit forms of several lower-order
propagators are calculated and presented. In
Sec.~\ref{sec:TensorSpectrum}, the calculation of the power spectrum
and bispectrum of tensor fields by our formalism is illustrated and
presented in several relatively simple cases. In
Sec.~\ref{sec:Semilocal}, we define a class of bias models, semi-local
models, in which the Lagrangian bias of the tensor field is determined
only by derivatives of the gravitational potential at the same
position in Lagrangian space. Conclusions are given in
Sec.~\ref{sec:Conclusions}. In Appendix~\ref{app:SphericalBasis},
explicit expressions of higher-order spherical tensor basis are
derived and given. Formulas regarding spherical harmonics which are
repeatedly used throughout this paper are summarized in
Appendix~\ref{app:SphericalHarmonics} for readers' convenience.
Appendix~\ref{app:3njSymbols} summarizes some properties of
Wigner's $3j$-, $6j$-, and $9j$-symbols which are useful in this
paper.
\section{Decomposition of tensors by spherical basis
\label{sec:SphericalBasis}
}
For a given set of orthonormal basis vectors $\hat{\mathbf{e}}_i$
($i=1,2,3$) in a flat, three-dimensional space, the spherical basis
($\mathbf{e}_0$, $\mathbf{e}_\pm$) is defined by
\cite{Edmonds:1955fi,Khersonskii:1988krb}
\begin{equation}
\mathbf{e}_0 = \hat{\mathbf{e}}_3, \quad
\mathbf{e}_\pm = \mp
\frac{\hat{\mathbf{e}}_1 \pm i\hat{\mathbf{e}}_2}{\sqrt{2}}.
\label{eq:2-1}
\end{equation}
It is convenient to introduce the dual basis with an upper index by
taking complex conjugates of the spherical basis:
\begin{equation}
\mathbf{e}^0 \equiv \mathbf{e}_0^* = \hat{\mathbf{e}}_3, \quad
\mathbf{e}^\pm \equiv \mathbf{e}_\pm^*
= \mp \frac{\hat{\mathbf{e}}_1 \mp i\hat{\mathbf{e}}_2}{\sqrt{2}}.
\label{eq:2-2}
\end{equation}
The orthogonality relations among these bases are given by
\begin{equation}
\mathbf{e}_m\cdot\mathbf{e}_{m'} = g^{(1)}_{mm'}, \quad
\mathbf{e}_m\cdot\mathbf{e}^{m'} = \delta_m^{m'}, \quad
\mathbf{e}^m\cdot\mathbf{e}^{m'} = g_{(1)}^{mm'},
\label{eq:2-3}
\end{equation}
where $m,m'$ are azimuthal indices of the spherical basis which
represent one of $(-,0,+)$,
\begin{equation}
g^{(1)}_{mm'} = g_{(1)}^{mm'} = (-1)^m\delta_{m,-m'},
\label{eq:2-4}
\end{equation}
and $\delta_m^{m'}=\delta_{mm'}$ is the Kronecker delta symbol, which
is unity for $m=m'$ and zero for $m\ne m'$. The matrices
$g_{(1)}^{mm'}$ and $g^{(1)}_{mm'}$ are inverse matrices of each
other,
\begin{equation}
g_{(1)}^{mm''} g^{(1)}_{m''m'} = \delta^m_{m'},
\label{eq:2-5}
\end{equation}
where the repeated index $m''$ is assumed to be summed over, omitting
summation symbols as commonly adopted in the convention of tensor
analysis (Einstein summation convention). Throughout this paper, the
Einstein convention of summation is always assumed, unless otherwise
stated. They act as metric tensors of the spherical basis,
\begin{equation}
\mathbf{e}^m = g_{(1)}^{mm'} \mathbf{e}_{m'}, \quad
\mathbf{e}_m = g^{(1)}_{mm'} \mathbf{e}^{m'}.
\label{eq:2-6}
\end{equation}
Any vector can be represented by
$\bm{V} \propto V^m\mathbf{e}_m = V_m\mathbf{e}^m$ up to a normalization
constant, and thus $\mathbf{e}_m$ is considered as the basis of {\em
contravariant} spherical vectors, and $\mathbf{e}^m$ is considered
as the basis of {\em covariant} spherical vectors.
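The orthogonality relations of Eq.~(\ref{eq:2-3}) are easy to verify numerically. A minimal sketch with NumPy (the dictionary layout and variable names are illustrative, not from the paper), checking both $\mathbf{e}_m\cdot\mathbf{e}_{m'}=(-1)^m\delta_{m,-m'}$ and $\mathbf{e}_m\cdot\mathbf{e}^{m'}=\delta_m^{m'}$:

```python
import numpy as np

e1, e2, e3 = np.eye(3)  # Cartesian orthonormal basis
# spherical basis, Eq. (2.1): e_0 = e3, e_pm = -/+ (e1 +/- i e2)/sqrt(2)
e = {0: e3 + 0j,
     +1: -(e1 + 1j * e2) / np.sqrt(2),
     -1: +(e1 - 1j * e2) / np.sqrt(2)}

for m in (-1, 0, 1):
    for mp in (-1, 0, 1):
        # plain dot product, no conjugation: e_m . e_{m'} = g^{(1)}_{mm'}
        g = (-1.0) ** m if m == -mp else 0.0
        assert np.allclose(np.dot(e[m], e[mp]), g)
        # dual basis e^{m'} = (e_{m'})^*, so e_m . e^{m'} = delta_m^{m'}
        assert np.allclose(np.dot(e[m], np.conj(e[mp])), 1.0 * (m == mp))
print("orthogonality relations of Eq. (2.3) verified")
```
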
One of the advantages of the spherical basis over the Cartesian basis
is that the Cartesian tensors are reduced into irreducible tensors on
the spherical basis under a rotation of the coordinate system
\cite{Sakurai:2011zz}. We define the spherical basis of traceless,
symmetric tensors \cite{Desjacques:2010gz,Vlah:2019byq}
as\footnote{The normalization of the spherical tensors in this paper
is different from that in
Refs.~\cite{Desjacques:2010gz,Vlah:2019byq}. Our normalization is
chosen so that Eq.~(\ref{eq:2-13}) below should hold with the usual
normalization of the spherical harmonics. In addition, in the
previous literature, the axis $\mathbf{e}^0$ of the coordinate
system is specifically chosen to be parallel to the separation
vector $\bm{r}$ of the correlation function
\cite{Desjacques:2010gz}, or to the wavevector $\bm{k}$ of Fourier
modes \cite{Vlah:2019byq}, while our axis can be arbitrarily chosen,
so that full rotational symmetry is explicitly kept in our
formalism without choosing a special coordinate system.}
\begin{equation}
\mathsf{Y}^{(0)} = \frac{1}{\sqrt{4\pi}}, \quad
\mathsf{Y}^{(m)}_i = \sqrt{\frac{3}{4\pi}}\, {\mathrm{e}^m}_i
\label{eq:2-8}
\end{equation}
for tensors of rank 0 and 1, and
\begin{align}
&
\mathsf{Y}^{(0)}_{ij} = \frac{1}{4} \sqrt{\frac{5}{\pi}}
\left(
3 {\mathrm{e}^0}_i\,{\mathrm{e}^0}_j - \delta_{ij}
\right),
\label{eq:2-9a}\\
&
\mathsf{Y}^{(\pm 1)}_{ij} = \frac{1}{2} \sqrt{\frac{15}{\pi}}
{\mathrm{e}^0}_{(i}\,{\mathrm{e}^\pm}_{j)},
\label{eq:2-9b}\\
&
\mathsf{Y}^{(\pm 2)}_{ij} = \frac{1}{2} \sqrt{\frac{15}{2\pi}}\,
{\mathrm{e}^\pm}_i\,{\mathrm{e}^\pm}_j
\label{eq:2-9c}
\end{align}
for tensors of rank 2, where ${\mathrm{e}^m}_i = [\mathbf{e}^m]_i$
denotes the Cartesian components of the spherical basis, and round brackets in
the indices of the right-hand side (rhs) of
Eq.~(\ref{eq:2-9b}) indicate symmetrization with respect to the
indices inside the brackets, e.g.,
$x_{(i}y_{j)} = (x_iy_j + x_jy_i)/2$. These bases satisfy
orthogonality relations,
\begin{equation}
\mathsf{Y}^{(0)}\mathsf{Y}^{(0)} = \frac{1}{4\pi}, \
\mathsf{Y}^{(m)}_i \mathsf{Y}^{(m')}_i =
\frac{3}{4\pi} g_{(1)}^{mm'},\
\mathsf{Y}^{(m)}_{ij} \mathsf{Y}^{(m')}_{ij} =
\frac{15}{8\pi} g_{(2)}^{mm'},
\label{eq:2-10}
\end{equation}
where $g_{(2)}^{mm'} = (-1)^m \delta_{m,-m'}$ is the $5\times 5$
spherical metric, and the indices $m,m'$ run over the integers
$-2,-1,0,+1,+2$ in this case. The Einstein summation convention is also
applied for the Cartesian indices in the above, so that $i$ and/or $j$
are summed over on the left-hand side (lhs) of the above equations.
Taking complex conjugates of the spherical basis virtually lowers the
azimuthal indices,
\begin{equation}
\mathsf{Y}^{(0)*} = \mathsf{Y}^{(0)},\quad
\mathsf{Y}^{(m)*}_i = g^{(1)}_{mm'}\mathsf{Y}^{(m')}_i,\quad
\mathsf{Y}^{(m)*}_{ij} = g^{(2)}_{mm'}\mathsf{Y}^{(m')}_{ij}.
\label{eq:2-11}
\end{equation}
For a unit vector $\bm{n}$ whose direction is given by spherical
coordinates $(\theta,\phi)$, the spherical tensors are related to the
spherical harmonics $Y_{lm}(\theta,\phi)$ as
\begin{equation}
Y_{00}(\theta,\phi) = \mathsf{Y}^{(0)*}, \
Y_{1m}(\theta,\phi) = \mathsf{Y}^{(m)*}_i n_i, \
Y_{2m}(\theta,\phi) = \mathsf{Y}^{(m)*}_{ij} n_i n_j.
\label{eq:2-12}
\end{equation}
We have chosen the normalization of the spherical basis so that the
coefficients in the relations above are unity. In this paper, the
normalization convention of the spherical harmonics is given by a
standard one,
\begin{equation}
Y_{lm}(\theta,\phi) =
\sqrt{\frac{2l+1}{4\pi}}
\sqrt{\frac{(l-m)!}{(l+m)!}}
P_l^m(\cos\theta)\,e^{im\phi},
\label{eq:2-12-1}
\end{equation}
where $P_l^m(x)$ is the associated Legendre polynomial.
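The rank-2 relation in Eq.~(\ref{eq:2-12}) can also be checked numerically: building $\mathsf{Y}^{(m)}_{ij}$ from Eqs.~(\ref{eq:2-9a})--(\ref{eq:2-9c}) and contracting $\mathsf{Y}^{(m)*}_{ij}n_in_j$ reproduces the standard $Y_{2m}(\theta,\phi)$, and the contraction $\mathsf{Y}^{(m)*}_{ij}\mathsf{Y}^{(m')}_{ij}=\frac{15}{8\pi}\delta_m^{m'}$ follows as well. A sketch with NumPy (function and variable names are illustrative):

```python
import numpy as np

e1, e2, e3 = np.eye(3)
# dual spherical basis e^m = (e_m)^*, Eq. (2.2)
eup = {0: e3 + 0j,
       +1: -(e1 - 1j * e2) / np.sqrt(2),
       -1: +(e1 + 1j * e2) / np.sqrt(2)}

def Y2_tensor(m):
    """Rank-2 spherical tensor basis, Eqs. (2.9a)-(2.9c)."""
    if m == 0:
        return 0.25 * np.sqrt(5 / np.pi) * (
            3 * np.outer(eup[0], eup[0]) - np.eye(3))
    s = int(np.sign(m))
    if abs(m) == 1:
        sym = 0.5 * (np.outer(eup[0], eup[s]) + np.outer(eup[s], eup[0]))
        return 0.5 * np.sqrt(15 / np.pi) * sym
    return 0.5 * np.sqrt(15 / (2 * np.pi)) * np.outer(eup[s], eup[s])

theta, phi = 0.7, 1.3  # arbitrary direction
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

# closed-form Y_{2m} in the standard normalization of Eq. (2.12-1)
st, ct = np.sin(theta), np.cos(theta)
Y2m = {0: 0.25 * np.sqrt(5 / np.pi) * (3 * ct**2 - 1),
       +1: -0.5 * np.sqrt(15 / (2 * np.pi)) * st * ct * np.exp(1j * phi),
       -1: +0.5 * np.sqrt(15 / (2 * np.pi)) * st * ct * np.exp(-1j * phi),
       +2: 0.25 * np.sqrt(15 / (2 * np.pi)) * st**2 * np.exp(2j * phi),
       -2: 0.25 * np.sqrt(15 / (2 * np.pi)) * st**2 * np.exp(-2j * phi)}

for m in (-2, -1, 0, 1, 2):
    lhs = np.einsum('ij,i,j->', np.conj(Y2_tensor(m)), n, n)
    assert np.allclose(lhs, Y2m[m]), m  # Eq. (2.12)

# orthogonality, Eq. (2.18a) for l = 2: (2l+1)!!/(4 pi l!) = 15/(8 pi)
G = np.array([[np.einsum('ij,ij->', np.conj(Y2_tensor(m)), Y2_tensor(mp))
               for mp in (-2, -1, 0, 1, 2)] for m in (-2, -1, 0, 1, 2)])
assert np.allclose(G, 15 / (8 * np.pi) * np.eye(5))
print("Eqs. (2.12) and (2.18a) verified for l = 2")
```
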
Similarly, spherical bases for higher-rank symmetric tensors are
uniquely defined by imposing the similar properties of
Eq.~(\ref{eq:2-12}),
\begin{equation}
Y_{lm}(\theta,\phi) =
\mathsf{Y}^{(m)*}_{i_1i_2\cdots i_l} n_{i_1} n_{i_2}\cdots n_{i_l}.
\label{eq:2-13}
\end{equation}
Explicit forms of the spherical bases of ranks 0 to 4 are derived
and given in Appendix~\ref{app:SphericalBasis}. The spherical bases
are irreducible representations of the rotation group SO(3) in the
same way as the spherical harmonics are. We consider a passive
rotation of the Cartesian basis,
\begin{equation}
\hat{\mathbf{e}}_i \rightarrow \hat{\mathbf{e}}_i'
= R_{ij}\hat{\mathbf{e}}_j,
\label{eq:2-14}
\end{equation}
where $R_{ij} = \hat{\mathbf{e}}_i' \cdot \hat{\mathbf{e}}_j$ is
a real orthogonal matrix with $R^\mathrm{T}R = I$. Cartesian
components of a unit vector $\bm{n}$, where $|\bm{n}|=1$,
transform as
\begin{equation}
n_i \rightarrow n_i' = (R^{-1})_{ij} n_j = n_jR_{ji},
\label{eq:2-14-1}
\end{equation}
because we have $n_i \hat{\mathbf{e}}_i = n_i' \hat{\mathbf{e}}_i'$.
Denoting the spherical coordinates of $n_i$ and $n_i'$ as
$(\theta,\phi)$ and $(\theta',\phi')$, respectively, the spherical
harmonics transform as
\begin{equation}
Y_{lm}(\theta',\phi') = Y_{lm'}(\theta,\phi)D_{(l)m}^{m'}(R),
\label{eq:2-14-2}
\end{equation}
where $D_{(l)m}^{m'}(R)$ is the Wigner's rotation matrix of the
passive rotation $R$.
Therefore, the construction of the tensor bases,
Eq.~(\ref{eq:2-13}), indicates that they transform as
\begin{align}
\mathsf{Y}^{(m)}_{i_1i_2\cdots i_l} \rightarrow
\mathsf{Y}^{(m)\prime}_{i_1i_2\cdots i_l}
&=
R_{i_1j_1} R_{i_2j_2}\cdots R_{i_lj_l}
\mathsf{Y}^{(m)}_{j_1j_2\cdots j_l}
\nonumber\\
&=
D^{m}_{(l)m'}(R^{-1}) \mathsf{Y}^{(m')}_{i_1i_2\cdots i_l},
\label{eq:2-15}
\end{align}
where Wigner's matrix of the inverse rotation is given by
\begin{equation}
D^{m}_{(l)m'}(R^{-1}) = D^{m'*}_{(l)m}(R).
\label{eq:2-15-1}
\end{equation}
The orthogonality relations of Eq.~(\ref{eq:2-10}) are generalized for
higher-rank tensors, and are given by
\begin{equation}
\mathsf{Y}^{(m)}_{i_1i_2\cdots i_l}
\mathsf{Y}^{(m')}_{i_1i_2\cdots i_l} =
\frac{(2l+1)!!}{4\pi l!} g_{(l)}^{mm'},
\label{eq:2-16}
\end{equation}
where the $(2l+1) \times (2l+1)$ matrix $g_{(l)}^{mm'}$ is defined just as
in Eq.~(\ref{eq:2-4}), but with the indices $m,m'$ running over integers
from $-l$ to $+l$. The inverse matrix is similarly denoted by
$g^{(l)}_{mm'}$, whose elements are the same as those of $g_{(l)}^{mm'}$.
Thus, relations similar to Eqs.~(\ref{eq:2-4}) and (\ref{eq:2-5}) hold
for $g_{(l)}^{mm'}$ and $g^{(l)}_{mm'}$:
\begin{equation}
g_{(l)}^{mm'} = g^{(l)}_{mm'} = (-1)^m \delta_{m,-m'}, \quad
g_{(l)}^{mm''} g^{(l)}_{m''m'} = \delta^m_{m'}.
\label{eq:2-16-1}
\end{equation}
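The algebraic content of Eq.~(\ref{eq:2-16-1}) is easy to check numerically; the short Python sketch below (illustrative, with an index offset mapping $m \mapsto m+l$) builds $g_{(l)}^{mm'}$ and verifies that it is symmetric and equal to its own inverse.

```python
import numpy as np

def g_matrix(l):
    """The metric g_(l)^{mm'} = (-1)^m delta_{m,-m'} of Eq. (2-16-1),
    stored so that row m and column m' run over -l, ..., +l."""
    g = np.zeros((2 * l + 1, 2 * l + 1))
    for m in range(-l, l + 1):
        g[m + l, -m + l] = (-1) ** m
    return g

for l in range(5):
    g = g_matrix(l)
    assert np.allclose(g, g.T)                    # symmetric in m, m'
    assert np.allclose(g @ g, np.eye(2 * l + 1))  # g_(l) g^(l) = identity
```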
The relations to the $1jm$-symbol introduced by Wigner \cite{Wigner93}
are given by
\begin{equation}
(-1)^lg^{(l)}_{mm'} =
\begin{pmatrix}
l\\
m\ \ m'
\end{pmatrix}
= (-1)^lg_{(l)}^{mm'}.
\label{eq:2-16-2}
\end{equation}
Taking the complex conjugate of the basis virtually lowers the
azimuthal index as in Eq.~(\ref{eq:2-11}), i.e., we have
\begin{equation}
\mathsf{Y}^{(m)*}_{i_1i_2\cdots i_l}
= g^{(l)}_{mm'}\mathsf{Y}^{(m')}_{i_1i_2\cdots i_l}, \quad
\mathsf{Y}^{(m)}_{i_1i_2\cdots i_l}
= g_{(l)}^{mm'}\mathsf{Y}^{(m')*}_{i_1i_2\cdots i_l}.
\label{eq:2-17}
\end{equation}
The same properties also apply to spherical harmonics:
\begin{equation}
Y_{lm}^{*}(\theta,\phi)
= g_{(l)}^{mm'} Y_{lm'}(\theta,\phi), \quad
Y_{lm}(\theta,\phi)
= g^{(l)}_{mm'} Y_{lm'}^{*}(\theta,\phi).
\label{eq:2-17-1}
\end{equation}
Consequently, the orthogonality relations of Eq.~(\ref{eq:2-16}) are
also equivalent to
\begin{align}
\mathsf{Y}^{(m)*}_{i_1i_2\cdots i_l}
\mathsf{Y}^{(m')}_{i_1i_2\cdots i_l}
&=
\frac{(2l+1)!!}{4\pi l!} \delta_m^{m'},
\label{eq:2-18a}\\
\mathsf{Y}^{(m)*}_{i_1i_2\cdots i_l}
\mathsf{Y}^{(m')*}_{i_1i_2\cdots i_l}
&=
\frac{(2l+1)!!}{4\pi l!} g^{(l)}_{mm'}.
\label{eq:2-18b}
\end{align}
We decompose a scalar function $S$ and a vector function $V_i$ into
spherical components as
\begin{equation}
S = 4\pi S_{00} \mathsf{Y}^{(0)}, \quad
V_i = \frac{4\pi i}{3} V_{1m} \mathsf{Y}^{(m)}_i.
\label{eq:2-19}
\end{equation}
The prefactors $4\pi$ and $4\pi i/3$ on the rhs of Eq.~(\ref{eq:2-19})
are just our normalization conventions that turn out to be convenient
for our purpose below. Due to the orthogonality relations of
Eq.~(\ref{eq:2-10}), the inverted relations of the above are given by
\begin{equation}
S_{00} = S \mathsf{Y}^{(0)*}, \quad
V_{1m} = -i V_i \mathsf{Y}^{(m)*}_i.
\label{eq:2-20}
\end{equation}
Since the spherical basis is a traceless tensor, a symmetric rank-2
tensor function $T_{ij}$, which is not necessarily traceless, is
decomposed into spherical components of the trace part and traceless
part as
\begin{equation}
T_{ij}
= 4\pi T_{00} \frac{\delta_{ij}}{3} \mathsf{Y}^{(0)}
- \frac{8\pi}{15} T_{2m} \mathsf{Y}^{(m)}_{ij},
\label{eq:2-21}
\end{equation}
where
\begin{equation}
T_{00} = T_{ii} \mathsf{Y}^{(0)*}, \quad
T_{2m} = - T_{ij} \mathsf{Y}^{(m)*}_{ij}.
\label{eq:2-22}
\end{equation}
The prefactor $-8\pi/15$ in front of the second term on the rhs of
Eq.~(\ref{eq:2-21}) is again our normalization convention.
Higher-rank tensors can be similarly decomposed into spherical
tensors. We first note that any symmetric tensor of rank $l$ can be
decomposed into \cite{Spencer1970}
\begin{equation}
T_{i_1i_2\cdots i_l} =
T^{(l)}_{i_1i_2\cdots i_l}
+ \frac{l(l-1)}{2(2l-1)}
\delta_{(i_1i_2} T^{(l-2)}_{i_3\cdots i_l)}
+ \cdots,
\label{eq:2-23}
\end{equation}
where $T^{(l)}_{i_1i_2\cdots i_l}$ is a symmetric traceless tensor of
rank $l$, $T^{(l-2)}_{i_3\cdots i_l}$ is the traceless part of
$T_{jji_3i_4\cdots i_l}$, and so forth (one should note that the
Einstein summation convention is applied also to Cartesian indices,
and thus $j=1,2,3$ is summed over in the last tensor). The traceless
part of $T_{i_1\cdots i_l}$ is generally given by a formula
\cite{Spencer1970,Applequist1989}
\begin{multline}
T^{(l)}_{i_1\cdots i_l} =
\frac{l!}{(2l-1)!!}
\sum_{k=0}^{[l/2]}
\frac{(-1)^k(2l-2k-1)!!}{2^k k! (l-2k)!}
\\ \times
\delta_{(i_1i_2}\cdots\delta_{i_{2k-1}i_{2k}}
T^{(l:k)}_{i_{2k+1}\cdots i_l)},
\label{eq:2-23-1}
\end{multline}
where $[l/2]$ is the Gauss symbol, i.e., $[l/2] = l/2$ if $l$ is even
and $[l/2] = (l-1)/2$ if $l$ is odd, and
\begin{align}
T^{(l:k)}_{i_{2k+1}\cdots i_l}
&= \delta_{i_1i_2} \delta_{i_3i_4} \cdots \delta_{i_{2k-1}i_{2k}}
T_{i_1\cdots i_{2k}i_{2k+1}\cdots i_l}
\nonumber \\
&=T_{j_1j_1j_2j_2\cdots j_kj_ki_{2k+1}\cdots i_l}
\label{eq:2-23-2}
\end{align}
is the $k$-fold trace of the original tensor. Tensors of rank
0 and 1 correspond to a scalar and a vector, respectively, and are
already traceless, as they do not have any trace component in the
first place. The traceless part of the tensor components forms an
irreducible representation of the rotation group SO(3). Decomposition
of higher-rank tensors can be similarly obtained according to the
procedure described in Ref.~\cite{Spencer1970}. Recursively using
Eq.~(\ref{eq:2-23-1}), one obtains explicit expressions of
Eq.~(\ref{eq:2-23}). Up to the fourth rank, they are given by
\begin{align}
T_{ij}
&= T^{(2)}_{ij} + \frac{1}{3} \delta_{ij} T^{(0)},
\label{eq:2-23-1a}\\
T_{ijk}
&= T^{(3)}_{ijk} + \frac{3}{5} \delta_{(ij}T^{(1)}_{k)},
\label{eq:2-23-1b}\\
T_{ijkl}
&= T^{(4)}_{ijkl} + \frac{6}{7} \delta_{(ij} T^{(2)}_{kl)}
+ \frac{1}{5} \delta_{(ij}\delta_{kl)} T^{(0)}.
\label{eq:2-23-1c}
\end{align}
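The coefficients in Eqs.~(\ref{eq:2-23-1a}) and (\ref{eq:2-23-1b}) can be confirmed numerically: subtracting the stated trace terms from a random symmetric tensor must leave a traceless remainder. The Python sketch below is illustrative; the symmetrization over round-bracketed indices is implemented as the normalized average over all index permutations.

```python
import numpy as np
from itertools import permutations

def sym(T):
    """Symmetrize a tensor over all permutations of its indices
    (normalized, matching the round-bracket convention)."""
    perms = list(permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

rng = np.random.default_rng(0)
d = np.eye(3)

# rank 2, Eq. (2-23-1a): T_ij = T^(2)_ij + (1/3) delta_ij T^(0)
T2 = sym(rng.normal(size=(3, 3)))
tl2 = T2 - d * np.trace(T2) / 3
assert np.isclose(np.trace(tl2), 0)
assert np.allclose(T2, tl2 + d * np.trace(T2) / 3)

# rank 3, Eq. (2-23-1b): T_ijk = T^(3)_ijk + (3/5) delta_(ij T^(1)_k)
T3 = sym(rng.normal(size=(3, 3, 3)))
t1 = np.einsum('jjk->k', T3)                      # single trace T_jjk
tl3 = T3 - (3 / 5) * sym(np.einsum('ij,k->ijk', d, t1))
assert np.allclose(np.einsum('jjk->k', tl3), 0)   # T^(3) is traceless
```

The factor $3/5$ is exactly what makes the rank-3 remainder traceless, since the trace of $\delta_{(ij}T^{(1)}_{k)}$ is $(5/3)\,T^{(1)}_k$.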
The traceless part of a tensor is decomposed by spherical bases as
\begin{align}
T^{(l)}_{i_1i_2\cdots i_l}
&= i^l \alpha_l T_{lm} \mathsf{Y}^{(m)}_{i_1i_2\cdots i_l},
\label{eq:2-24a}\\
T_{lm}
&= (-i)^l
T^{(l)}_{i_1i_2\cdots i_l} \mathsf{Y}^{(m)*}_{i_1i_2\cdots i_l},
\label{eq:2-24b}
\end{align}
where
\begin{equation}
\alpha_l \equiv
\frac{4\pi\,l!}{(2l+1)!!}.
\label{eq:2-24-0}
\end{equation}
The prefactor $i^l\alpha_l$ on the rhs of Eq.~(\ref{eq:2-24a}) is our
normalization convention. The traceless part of a traced tensor is
similarly decomposed as
\begin{align}
T^{(l-2k)}_{i_{2k+1}\cdots i_l}
&= i^{l-2k} \alpha_{l-2k}T_{l-2k,m} \mathsf{Y}^{(m)}_{i_{2k+1}\cdots i_l},
\label{eq:2-24-1a}\\
T_{l-2k,m}
&= (-i)^{l-2k}
T^{(l-2k)}_{i_{2k+1}\cdots i_l} \mathsf{Y}^{(m)*}_{i_{2k+1}\cdots i_l}.
\label{eq:2-24-1b}
\end{align}
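The rank-2 decomposition of Eqs.~(\ref{eq:2-21}) and (\ref{eq:2-22}) can likewise be checked end to end: project a random symmetric tensor onto the spherical components and reconstruct it. The explicit rank-2 basis matrices below are an assumption consistent with $Y_{2m}=\mathsf{Y}^{(m)*}_{ij}n_in_j$ and the conventions of this section (cf.\ Appendix~\ref{app:SphericalBasis}).

```python
import numpy as np

# rank-2 spherical basis Y^{(m)}_{ij} (assumed forms)
M = np.array([[1, 1j, 0], [1j, -1, 0], [0, 0, 0]])        # (n_x + i n_y)^2 form
A = np.array([[0, 0, .5], [0, 0, .5j], [.5, .5j, 0]])     # n_z (n_x + i n_y) form
Y2 = {+2: np.sqrt(15 / (32 * np.pi)) * M.conj(),
      +1: -np.sqrt(15 / (8 * np.pi)) * A.conj(),
       0: np.sqrt(5 / (16 * np.pi)) * np.diag([-1., -1., 2.]).astype(complex),
      -1: np.sqrt(15 / (8 * np.pi)) * A,
      -2: np.sqrt(15 / (32 * np.pi)) * M}

rng = np.random.default_rng(2)
T = rng.normal(size=(3, 3))
T = (T + T.T) / 2                       # symmetric, generally with a trace

# Eq. (2-22): spherical components of the trace and traceless parts
Y0 = 1 / np.sqrt(4 * np.pi)             # rank-0 basis
T00 = np.trace(T) * Y0
T2m = {m: -np.einsum('ij,ij->', T, Y2[m].conj()) for m in Y2}

# Eq. (2-21): reconstruction from the spherical components
rec = 4 * np.pi * T00 * Y0 * np.eye(3) / 3 \
    - (8 * np.pi / 15) * sum(T2m[m] * Y2[m] for m in Y2)
assert np.allclose(rec.imag, 0)
assert np.allclose(rec.real, T)
```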
Under the passive rotation of the basis, Eq.~(\ref{eq:2-14}),
Cartesian tensors of rank $l$ transform as
\begin{equation}
T^{(l)}_{i_1i_2\cdots i_l} \rightarrow
{T'}^{(l)}_{i_1i_2\cdots i_l} =
T^{(l)}_{j_1j_2\cdots j_l} R_{j_1i_1}\cdots R_{j_li_l},
\label{eq:2-25}
\end{equation}
and the corresponding transformation of spherical tensors is given by
\begin{equation}
T_{lm} \rightarrow
T'_{lm} =
T_{lm'} D^{m'}_{(l)m}(R),
\label{eq:2-26}
\end{equation}
as derived from the transformation of the spherical base,
Eq.~(\ref{eq:2-15}). The contravariant components of the spherical
tensors are defined by
\begin{equation}
T_{l}^{\,m} = g_{(l)}^{mm'} T_{lm'},
\label{eq:2-27}
\end{equation}
and transform as
\begin{equation}
T_{l}^{\,m} \rightarrow
{T'}_{l}^{\,m} =
D^{m}_{(l)m'}(R^{-1}) T_{l}^{\,m'}.
\label{eq:2-28}
\end{equation}
\section{\label{sec:iPTTensor}
The integrated perturbation theory of tensor fields
}
\subsection{\label{subsec:iPTTFormulation}
Formulating the iPT of tensor fields
}
The integrated perturbation theory (iPT) is a systematic method of
perturbation theory to describe the quasi-nonlinear evolution of the
large-scale structure. This method is based on the Lagrangian scheme
of the biasing and nonlinear structure formation. The formalism of
this method is explicitly developed to describe the scalar fields,
such as the number density fields of galaxies or other biased objects
in the Universe \cite{Matsubara:2011ck,Matsubara:2013ofa}. In this
section, the formalism is generalized to be able to describe
quasi-nonlinear evolutions of objects which have generally tensor
values, such as the angular momentum of galaxies (vector),
the second moment of intrinsic alignment of galaxy shapes (rank-2
tensor), or higher moments of intrinsic alignment (rank-3 or higher
tensors), etc.
The fundamental formalism of iPT to calculate spatial correlations of
biased objects is described in Ref.~\cite{Matsubara:2011ck}. Basically
we can just follow the formalism with a generalization of assigning the
tensor values to the objects. We denote the observable objects $X$,
where $X$ is the class of objects selected in a given cosmological
surveys, such as a certain types of galaxies in galaxy surveys. We
consider each object of $a\in X$ has a tensor value
$F^a_{i_1i_2\cdots}$ in general, and define the tensor field by
\begin{equation}
F_{X\,i_1i_2\cdots}(\bm{x}) =
\frac{1}{\bar{n}_X}
\sum_{a\in X} F^a_{i_1i_2\cdots}
\delta_\mathrm{D}^3\left(\bm{x} - \bm{x}_a\right),
\label{eq:3-1}
\end{equation}
where $\bm{x}_a$ is the location of a particular object $a$,
$\bar{n}_X$ is the mean number density of objects $X$, and
$\delta_\mathrm{D}^3$ is the three-dimensional Dirac delta function.
It is crucial to note that the above definition of the tensor field is
weighted by the number density of the objects $X$. In the above,
although the field really depends on the time
$F_{X\,i_1i_2\cdots}(\bm{x},t)$, the argument of time is omitted in
the notation for simplicity. Similarly, every field in this paper
depends on time, although they are not explicitly written in the
arguments.
In general, any type of tensor can be decomposed into symmetric
tensors \cite{Spencer1970}, and therefore we assume that the tensor
field $F^a_{i_1i_2\cdots}$ is a symmetric tensor without loss of
generality. We decompose the symmetric tensor of each object
$F^a_{i_1i_2\cdots}$ into irreducible tensors $F^a_{lm}$ according to
the scheme described in the previous section. Accordingly,
corresponding components of Eq.~(\ref{eq:3-1}) are given by
\begin{equation}
F_{Xlm}(\bm{x}) =
\frac{1}{\bar{n}_X}
\sum_{a\in X} F^a_{lm}
\delta_\mathrm{D}^3\left(\bm{x} - \bm{x}_a\right).
\label{eq:3-1-1}
\end{equation}
As this tensor field is a number-density weighted quantity by
construction, we also consider the unweighted field $G_{Xlm}(\bm{x})$
defined by
\begin{equation}
F_{Xlm}(\bm{x}) = \left[1 + \delta_X(\bm{x})\right] G_{Xlm}(\bm{x}),
\label{eq:3-2}
\end{equation}
where $\delta_X(\bm{x})$ is the number density contrast of the objects
$X$, defined by
\begin{equation}
1 + \delta_X(\bm{x}) =
\frac{1}{\bar{n}_X}
\sum_{a\in X} \delta_\mathrm{D}^3\left(\bm{x} - \bm{x}_a\right).
\label{eq:3-3}
\end{equation}
Substituting Eq.~(\ref{eq:3-3}) into Eq.~(\ref{eq:3-2}) and comparing
with Eq.~(\ref{eq:3-1-1}), the unweighted field satisfies
\begin{equation}
G_{Xlm}(\bm{x}_a) = F^a_{lm}
\label{eq:3-3-1}
\end{equation}
at each position of object $a$, as naturally expected. The function
$G_{Xlm}(\bm{x})$ can arbitrarily be interpolated where $\bm{x} \ne
\bm{x}_a$ for all $a \in X$, because the field $F_{Xlm}(\bm{x})$ is
not supported at those points.
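A minimal numerical sketch of the density-weighted field of Eq.~(\ref{eq:3-1-1}) is given below, with the Dirac delta discretized on a grid by nearest-grid-point assignment. The function name, the assignment scheme, and all parameter values are illustrative choices, not part of the formalism; the test only checks the trivial consistency that the volume average of $F_{Xlm}$ recovers the mean of $F^a_{lm}$.

```python
import numpy as np

def weighted_tensor_field(pos, F_a, boxsize, ngrid):
    """Nearest-grid-point sketch of Eq. (3-1-1): the number
    density-weighted field F_{Xlm} on a grid, with the Dirac delta
    replaced by a cell indicator of volume (boxsize/ngrid)**3."""
    cell = boxsize / ngrid
    nbar = len(pos) / boxsize ** 3
    F = np.zeros((ngrid,) * 3, dtype=complex)
    for x, f_a in zip(pos, F_a):
        i, j, k = np.floor(x / cell).astype(int) % ngrid
        F[i, j, k] += f_a / (nbar * cell ** 3)
    return F

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(500, 3))              # object positions x_a
f = rng.normal(size=500) + 1j * rng.normal(size=500)      # one (l, m) component F^a_lm
F = weighted_tensor_field(pos, f, 100.0, 8)
# the volume average of F_{Xlm} recovers the mean of F^a_{lm}
assert np.isclose(F.mean(), f.mean())
```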
The iPT utilizes the Lagrangian perturbation theory (LPT) for the
dynamical nonlinear evolutions (see Ref.~\cite{Matsubara:2015ipa} for
notations of LPT employed in this paper). The Eulerian coordinates
$\bm{x}$ and the Lagrangian coordinates $\bm{q}$ of a mass element are
related by
\begin{equation}
\bm{x} = \bm{q} + \bm{\Psi}(\bm{q}),
\label{eq:3-4}
\end{equation}
where $\bm{\Psi}(\bm{q})$ is the displacement field. The number
density of the objects in Lagrangian space, $n^\mathrm{L}_X(\bm{q})$, is
related to its Eulerian counterpart by a continuity relation,
\begin{equation}
n_X(\bm{x}) d^3\!x = n^\mathrm{L}_X(\bm{q}) d^3\!q,
\label{eq:3-5}
\end{equation}
i.e., the Lagrangian number density is defined so that the Eulerian
positions of objects are displaced back into the Lagrangian positions,
and the number density in Lagrangian space also depends on the time of
evaluating $n_X(\bm{x})$. The mean number density of objects
$\bar{n}_X$ is the same in Eulerian and Lagrangian spaces by
construction, thus we have
$1+\delta_X(\bm{x}) = [1+\delta^\mathrm{L}_X(\bm{q})]/J(\bm{q})$,
where $J(\bm{q})=|\partial(\bm{x})/\partial(\bm{q})|$ is the Jacobian
of the mapping from Lagrangian space to Eulerian space. The density
contrasts between these spaces are therefore related by
\begin{equation}
1 + \delta_X(\bm{x}) =
\int d^3\!q
\left[1 + \delta^\mathrm{L}_X(\bm{q})\right]
\delta_\mathrm{D}^3\left[\bm{x} - \bm{q} - \bm{\Psi}(\bm{q})\right].
\label{eq:3-6}
\end{equation}
As in the Eulerian space, the tensor field in Lagrangian space is also
naturally defined with the weight of the number density of objects.
The unweighted tensor field $G^\mathrm{L}_{Xlm}(\bm{q})$ in Lagrangian
space is simply defined by taking the same value displaced back from
the Eulerian position of Eq.~(\ref{eq:3-2}),
\begin{equation}
G^\mathrm{L}_{Xlm}(\bm{q}) \equiv G_{Xlm}(\bm{x}),
\label{eq:3-7}
\end{equation}
where $\bm{x}$ on the right-hand side (rhs) is just given by
Eq.~(\ref{eq:3-4}) and the number density-weighted tensor field in
Lagrangian space, $F^\mathrm{L}_{Xlm}(\bm{q})$, is defined by
\begin{equation}
F^\mathrm{L}_{Xlm}(\bm{q})
= \left[1 + \delta^\mathrm{L}_X(\bm{q})\right]
G^\mathrm{L}_{Xlm}(\bm{q}).
\label{eq:3-8}
\end{equation}
As follows directly from Eqs.~(\ref{eq:3-6}) and (\ref{eq:3-7}), the
number density-weighted tensor fields in Eulerian space and
Lagrangian space are related by
\begin{equation}
F_{Xlm}(\bm{x}) =
\int d^3\!q
F^\mathrm{L}_{Xlm}(\bm{q})
\delta_\mathrm{D}^3\left[\bm{x} - \bm{q} - \bm{\Psi}(\bm{q})\right].
\label{eq:3-9}
\end{equation}
In cosmology, the properties of observable quantities are determined
by the initial condition of density fluctuations in the Universe, and
the growing mode of linear density contrast, $\delta_\mathrm{L}$, is a
representative of initial conditions. Therefore, the tensor field
$F_{Xlm}$ is considered as a functional of the linear density field
$\delta_\mathrm{L}$. The ensemble averages of the $n$th functional
derivatives of the evolved field are called multi-point propagators
\cite{Matsubara:1995wd,Bernardeau:2008fa,Bernardeau:2010md}, which can
be defined in various equivalent manners. Here we explain a
comprehensive definition according to Ref.~\cite{Matsubara:1995wd},
starting in configuration space:
\begin{equation}
\Gamma^{(n)}_{Xlm}\left(\bm{x}-\bm{x}_1,\ldots,\bm{x}-\bm{x}_n\right)
\equiv
\left\langle
\frac{\delta^n {F}_{Xlm}(\bm{x})}{
\delta\delta_\mathrm{L}(\bm{x}_1) \cdots \delta\delta_\mathrm{L}(\bm{x}_n)}
\right\rangle,
\label{eq:3-10}
\end{equation}
where $\delta_\mathrm{L}(\bm{x})$ is the linear density contrast,
$\delta/\delta\delta_\mathrm{L}(\bm{x})$ is the functional derivative
with respect to the linear density contrast, and
$\langle\cdots\rangle$ represents the ensemble average of the field.
Because of the translational invariance of the Universe in a
statistical sense, the rhs should be invariant under the homogeneous
translation, $\bm{x} \rightarrow \bm{x} + \bm{x}_0$ and
$\bm{x}_i \rightarrow \bm{x}_i + \bm{x}_0$ for $i=1,\ldots,n$ with any
fixed displacement $\bm{x}_0$, and thus the arguments on the lhs of
Eq.~(\ref{eq:3-10}) should be given as those in the expression.
We apply the Fourier transform of the linear density contrast and the
tensor field,
\begin{align}
\tilde{\delta}_\mathrm{L}(\bm{k})
&=
\int d^3\!x\, e^{-i\bm{k}\cdot\bm{x}} \delta_\mathrm{L}(\bm{x}),
\label{eq:3-11a}\\
\tilde{F}_{Xlm}(\bm{k})
&=
\int d^3\!x\, e^{-i\bm{k}\cdot\bm{x}} F_{Xlm}(\bm{x}),
\label{eq:3-11b}
\end{align}
and the functional derivative in Fourier space is given by
\begin{equation}
\frac{\delta}{\delta\tilde{\delta}_\mathrm{L}(\bm{k})}
= \frac{1}{(2\pi)^3} \int d^3\!x\,e^{i\bm{k}\cdot\bm{x}}
\frac{\delta}{\delta\delta_\mathrm{L}(\bm{x})}.
\label{eq:3-12}
\end{equation}
Using the above equations, the Fourier counterpart of the rhs of
Eq.~(\ref{eq:3-10}) is calculated, and we have
\begin{multline}
\left\langle
\frac{\delta^n \tilde{F}_{Xlm}(\bm{k})}{
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_1) \cdots
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_n)}
\right\rangle
\\
= (2\pi)^{3-3n}
\delta_\mathrm{D}^3(\bm{k}_1+\cdots +\bm{k}_n - \bm{k})
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n),
\label{eq:3-13}
\end{multline}
where
\begin{multline}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n)
= \int d^3\!x_1 \cdots d^3\!x_n\,
e^{-i(\bm{k}_1\cdot\bm{x}_1+\cdots +\bm{k}_n\cdot\bm{x}_n)}
\\
\times
\Gamma^{(n)}_{Xlm}(\bm{x}_1,\ldots,\bm{x}_n)
\label{eq:3-14}
\end{multline}
is the Fourier transform of the propagator. The appearance of the
delta function on the rhs of Eq.~(\ref{eq:3-13}) is a consequence of
the statistical homogeneity of the Universe. Integrating
Eq.~(\ref{eq:3-13}), we have
\begin{equation}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n) =
(2\pi)^{3n}
\int \frac{d^3k}{(2\pi)^3}
\left\langle
\frac{\delta^n \tilde{F}_{Xlm}(\bm{k})}{
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_1) \cdots
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_n)}
\right\rangle.
\label{eq:3-15}
\end{equation}
The Fourier transform of Eq.~(\ref{eq:3-9}) is given by
\begin{equation}
\tilde{F}_{Xlm}(\bm{k}) =
\int d^3\!q\,e^{-i\bm{k}\cdot\bm{q} -i\bm{k}\cdot\bm{\Psi}(\bm{q})}
F^\mathrm{L}_{Xlm}(\bm{q}).
\label{eq:3-16}
\end{equation}
In the iPT, the displacement field $\bm{\Psi}(\bm{q})$ is expanded as
a perturbative series of LPT,
\begin{multline}
\bm{\Psi}(\bm{q}) =
\sum_{n=1}^\infty \frac{i}{n!}
\int \frac{d^3k_1}{(2\pi)^3} \cdots \frac{d^3k_n}{(2\pi)^3}
e^{i(\bm{k}_1 + \cdots + \bm{k}_n)\cdot\bm{q}}
\\ \times
\bm{L}_n(\bm{k}_1,\ldots,\bm{k}_n)
\tilde{\delta}_\mathrm{L}(\bm{k}_1) \cdots
\tilde{\delta}_\mathrm{L}(\bm{k}_n).
\label{eq:3-17}
\end{multline}
The perturbation kernels $\bm{L}_n$ are recursively obtained both for
the Einstein--de Sitter model \cite{Zheligovsky:2013eca}
and for general models \cite{Matsubara:2015ipa}. The propagators of
Eq.~(\ref{eq:3-15}) are systematically evaluated by applying
diagrammatic rules in the iPT. The detailed derivation of the
diagrammatic rules for the number density field $\delta_X$ is given in
Ref.~\cite{Matsubara:2011ck}. Exactly the same derivation applies to
the tensor field, simply replacing $1+\delta_X \rightarrow F_{Xlm}$
and $1+\delta^\mathrm{L}_X \rightarrow F^\mathrm{L}_{Xlm}$ in that
reference. In this method, the renormalized bias functions play
important roles. In the present case of the tensor field, the renormalized
bias functions $c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n)$ in Fourier
space are defined by
\begin{multline}
\left\langle
\frac{\delta^n \tilde{F}^\mathrm{L}_{Xlm}(\bm{k})}{
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_1) \cdots
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_n)}
\right\rangle
\\
= (2\pi)^{3-3n}
\delta_\mathrm{D}^3(\bm{k}_1+\cdots +\bm{k}_n - \bm{k})
c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n),
\label{eq:3-18-0}
\end{multline}
or equivalently,
\begin{equation}
c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n) =
(2\pi)^{3n}
\int \frac{d^3k}{(2\pi)^3}
\left\langle
\frac{\delta^n \tilde{F}^\mathrm{L}_{Xlm}(\bm{k})}{
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_1) \cdots
\delta\tilde{\delta}_\mathrm{L}(\bm{k}_n)}
\right\rangle.
\label{eq:3-18}
\end{equation}
As can be seen by comparing the above definition with
Eqs.~(\ref{eq:3-13}) and (\ref{eq:3-15}), the renormalized bias
functions are the counterparts of the propagator for the biasing in
Lagrangian space.
The diagrammatic rules of iPT are summarized in the Appendix of
Ref.~\cite{Matsubara:2013ofa}. The same rules apply, simply replacing
$\Gamma_X^{(n)} \rightarrow \Gamma_{Xlm}^{(n)}$,
$c_X^{(n)} \rightarrow c_{Xlm}^{(n)}$, etc. For the terms without
$c^{(n)}_X$ in that reference, an additional factor $c^{(0)}_{Xlm}$
should be multiplied, which is unity in the original formulation for
scalar fields. The propagators are given in the form
\begin{equation}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n) =
\Pi(\bm{k})\,
\hat{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n),
\label{eq:3-19}
\end{equation}
where
\begin{equation}
\Pi(\bm{k}) = \left\langle e^{-i\bm{k}\cdot\bm{\Psi}} \right\rangle
=
\exp\left[
\sum_{n=2}^\infty \frac{(-i)^n}{n!}
\left\langle (\bm{k}\cdot\bm{\Psi})^n \right\rangle_\mathrm{c}
\right]
\label{eq:3-19-1}
\end{equation}
is the vertex resummation factor in terms of the displacement field,
and $\langle\cdots\rangle_\mathrm{c}$ denotes the $n$-point connected
part of the random variables. The propagator
$\hat{\Gamma}^{(n)}_{Xlm}$ without the resummation factor
$\Pi(\bm{k})$ is called the reduced propagator in the following.
Below, we give explicit forms of lower-order propagators, which can be
used to estimate power spectra, bispectra, etc. The vertex resummation
factor in the one-loop approximation of iPT is given by
\begin{equation}
\Pi(\bm{k}) =
\exp\left\{
-\frac{1}{2} \int\frac{d^3p}{(2\pi)^3}
\left[\bm{k}\cdot\bm{L}_1(\bm{p})\right]^2
P_\mathrm{L}(p)
\right\},
\label{eq:3-20}
\end{equation}
where $P_\mathrm{L}(k)$ is the linear power spectrum defined by
\begin{equation}
\left\langle
\tilde{\delta}_\mathrm{L}(\bm{k})
\tilde{\delta}_\mathrm{L}(\bm{k}')
\right\rangle
= (2\pi)^3 \delta_\mathrm{D}^3(\bm{k} + \bm{k}')
P_\mathrm{L}(k),
\label{eq:3-20-1}
\end{equation}
in which the delta function on the rhs appears due to the statistical
homogeneity of space. The reduced propagator of the first order,
up to the one-loop approximation, is given by
\begin{multline}
\hat{\Gamma}_{Xlm}^{(1)}(\bm{k})
= c_{Xlm}^{(1)}(\bm{k}) +
\left[\bm{k}\cdot\bm{L}_1(\bm{k})\right] c_{Xlm}^{(0)}
\\
+ \int\frac{d^3p}{(2\pi)^3} P_\mathrm{L}(p)
\biggl\{
\left[\bm{k}\cdot\bm{L}_1(-\bm{p})\right]
c_{Xlm}^{(2)}(\bm{k},\bm{p})
\\
+ \left[\bm{k}\cdot\bm{L}_1(-\bm{p})\right]
\left[\bm{k}\cdot\bm{L}_1(\bm{k})\right]
c_{Xlm}^{(1)}(\bm{p})
\\
+ \left[\bm{k}\cdot\bm{L}_2(\bm{k},-\bm{p})\right]
c_{Xlm}^{(1)}(\bm{p})
\\
+ \frac{1}{2}
\left[\bm{k}\cdot\bm{L}_3(\bm{k},\bm{p},-\bm{p})\right]
c_{Xlm}^{(0)}
\\
+ \left[\bm{k}\cdot\bm{L}_1(\bm{p})\right]
\left[\bm{k}\cdot\bm{L}_2(\bm{k},-\bm{p})\right]
c_{Xlm}^{(0)}
\biggr\}.
\label{eq:3-21}
\end{multline}
The first line of the above equation corresponds to the lowest, or
tree-level approximation. The reduced propagator of second order in
the tree-level approximation is given by
\begin{multline}
\hat{\Gamma}^{(2)}_{Xlm}(\bm{k}_1,\bm{k}_2) =
c^{(2)}_{Xlm}(\bm{k}_1,\bm{k}_2)
\\
+ \left[\bm{k}_{12}\cdot\bm{L}_1(\bm{k}_1)\right] c^{(1)}_{Xlm}(\bm{k}_2)
+ \left[\bm{k}_{12}\cdot\bm{L}_1(\bm{k}_2)\right] c^{(1)}_{Xlm}(\bm{k}_1)
\\
+
\left\{
\left[\bm{k}_{12}\cdot\bm{L}_1(\bm{k}_1)\right]
\left[\bm{k}_{12}\cdot\bm{L}_1(\bm{k}_2)\right]
+ \bm{k}_{12}\cdot\bm{L}_2(\bm{k}_1,\bm{k}_2)
\right\} c^{(0)}_{Xlm},
\label{eq:3-22}
\end{multline}
where $\bm{k}_{12}=\bm{k}_1+\bm{k}_2$, and the reduced propagator of
the third order, again in the tree-level approximation, is given by
\begin{multline}
\hat{\Gamma}^{(3)}_{Xlm}(\bm{k}_1,\bm{k}_2,\bm{k}_3) =
c^{(3)}_{Xlm}(\bm{k}_1,\bm{k}_2,\bm{k}_3)
\\
+ \left[\bm{k}_{123}\cdot\bm{L}_1(\bm{k}_1)\right]
c^{(2)}_{Xlm}(\bm{k}_2,\bm{k}_3) +\,\mathrm{cyc.}
\\
+
\Bigl\{
\left[\bm{k}_{123}\cdot\bm{L}_1(\bm{k}_1)\right]
\left[\bm{k}_{123}\cdot\bm{L}_1(\bm{k}_2)\right]
+
\left[\bm{k}_{123}\cdot\bm{L}_2(\bm{k}_1,\bm{k}_2)\right]
\Bigr\}
\\
\hspace{10pc} \times
c^{(1)}_{Xlm}(\bm{k}_3) +\,\mathrm{cyc.}
\\
+
\Bigl\{
\left[\bm{k}_{123}\cdot\bm{L}_1(\bm{k}_1)\right]
\left[\bm{k}_{123}\cdot\bm{L}_1(\bm{k}_2)\right]
\left[\bm{k}_{123}\cdot\bm{L}_1(\bm{k}_3)\right]
\\
+
\left[\bm{k}_{123}\cdot\bm{L}_1(\bm{k}_1)\right]
\left[\bm{k}_{123}\cdot\bm{L}_2(\bm{k}_2,\bm{k}_3)\right]
+\,\mathrm{cyc.}
\\
+ \bm{k}_{123}\cdot\bm{L}_3(\bm{k}_1,\bm{k}_2,\bm{k}_3)
\Bigr\} c^{(0)}_{Xlm},
\label{eq:3-23}
\end{multline}
where $\bm{k}_{123}=\bm{k}_1+\bm{k}_2+\bm{k}_3$, and $+\,\mathrm{cyc.}$
denotes the two additional terms obtained by cyclic permutations of
the preceding term.
In real space, the kernels of LPT in the standard theory of gravity
(in the Newtonian limit) are given by
\cite{Catelan:1994ze,Catelan:1996hw}
\begin{align}
& \bm{L}_1(\bm{k}) = \frac{\bm{k}}{k^2},
\label{eq:3-24a}\\
& \bm{L}_2(\bm{k}_1,\bm{k}_2)
=\frac37 \frac{\bm{k}_{12}}{{k_{12}}^2}
\left[1 - \left(\frac{\bm{k}_1 \cdot \bm{k}_2}{k_1 k_2}\right)^2\right],
\label{eq:3-24b}\\
& \bm{L}_3(\bm{k}_1,\bm{k}_2,\bm{k}_3) =
\frac13
\left[\tilde{\bm{L}}_3(\bm{k}_1,\bm{k}_2,\bm{k}_3) + \mathrm{cyc.}\right],
\label{eq:3-24c}\\
& \tilde{\bm{L}}_3(\bm{k}_1,\bm{k}_2,\bm{k}_3)
\nonumber\\
& \quad
= \frac{\bm{k}_{123}}{{k_{123}}^2}
\left\{
\frac57
\left[1 - \left(\frac{\bm{k}_1 \cdot \bm{k}_2}{k_1 k_2}\right)^2\right]
\left[1 - \left(\frac{\bm{k}_{12} \cdot \bm{k}_3}
{{k}_{12} k_3}\right)^2\right]
\right.
\nonumber\\
& \qquad\quad
\left.
- \frac13
\left[
1 - 3\left(\frac{\bm{k}_1 \cdot \bm{k}_2}{k_1 k_2}\right)^2
+\, 2 \frac{(\bm{k}_1 \cdot \bm{k}_2)(\bm{k}_2 \cdot \bm{k}_3)
(\bm{k}_3 \cdot \bm{k}_1)}{{k_1}^2 {k_2}^2 {k_3}^2}
\right]\right\}
\nonumber\\
& \qquad
+ \frac{3}{7}\frac{\bm{k}_{123}}{{k_{123}}^2}
\times
\frac{(\bm{k}_1\times\bm{k}_{23})(\bm{k}_1\cdot\bm{k}_{23})}{{k_1}^2{k_{23}}^2}
\left[1 - \left(\frac{\bm{k}_2 \cdot \bm{k}_3}{k_2 k_3}\right)^2\right].
\label{eq:3-24d}
\end{align}
In the above, weak dependencies on the time $t$ in the kernels are
neglected \cite{Bernardeau:2001qr,Matsubara:2015ipa}. In
Ref.~\cite{Matsubara:2015ipa}, complete expressions of the
displacement kernels of LPT up to the seventh order including the
transverse parts are explicitly given, together with a general way of
recursively deriving the kernels including weak dependencies on the
time in general cosmology and subleading growing modes. This method is
generalized to obtain kernels of LPT in modified theories of gravity
\cite{Aviles:2017aor}.
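Two elementary properties of the kernels of Eqs.~(\ref{eq:3-24a}) and (\ref{eq:3-24b}) are easy to confirm numerically: $\bm{k}\cdot\bm{L}_1(\bm{k})=1$, and $\bm{L}_2$ is symmetric in its arguments and vanishes for parallel wavevectors. The Python sketch below is purely illustrative.

```python
import numpy as np

def L1(k):
    """First-order LPT kernel, Eq. (3-24a)."""
    return k / np.dot(k, k)

def L2(k1, k2):
    """Second-order LPT kernel, Eq. (3-24b)."""
    k12 = k1 + k2
    mu = np.dot(k1, k2) / (np.linalg.norm(k1) * np.linalg.norm(k2))
    return (3 / 7) * k12 / np.dot(k12, k12) * (1 - mu ** 2)

k1 = np.array([1.0, 0.0, 0.0])
k2 = np.array([0.0, 2.0, 0.0])
assert np.isclose(np.dot(k1, L1(k1)), 1.0)     # k.L1(k) = 1
assert np.allclose(L2(k1, k2), L2(k2, k1))     # symmetric in its arguments
assert np.allclose(L2(k1, 3.0 * k1), 0.0)      # vanishes for parallel wavevectors
```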
One of the benefits of the Lagrangian picture is that redshift-space
distortions are relatively easy to incorporate into the theory. The
displacement vector in redshift space $\bm{\Psi}^\mathrm{s}$ is
obtained from a linear mapping of that in real space by
$\bm{\Psi}^\mathrm{s} = \bm{\Psi} +
H^{-1}(\hat{\bm{z}}\cdot\dot{\bm{\Psi}})\hat{\bm{z}}$, where
$\dot{\bm{\Psi}} = \partial\bm{\Psi}/\partial t$ is the time
derivative of the displacement field, $\hat{\bm{z}}$ is a unit vector
directed to the line of sight, $H = \dot{a}/a$ is the time-dependent
Hubble parameter and $a(t)$ is the scale factor. A displacement kernel
in redshift space $\bm{L}^\mathrm{s}_n$ is simply related to the
kernel in real space in the same order by a linear mapping
\cite{Matsubara:2007wj}
\begin{equation}
\bm{L}_n \rightarrow \bm{L}^\mathrm{s}_n = \bm{L}_n +
nf\left(\hat{\bm{z}}\cdot\bm{L}_n\right)\hat{\bm{z}},
\label{eq:3-25}
\end{equation}
where $f=d\ln D/d\ln a = \dot{D}/(HD)$ is the linear growth rate and
$D(t)$ is the linear growth factor. Throughout this paper, the
distant-observer approximation is assumed in redshift space and the
unit vector $\hat{\bm{z}}$ denotes the line-of-sight direction. The
expressions of Eqs.~(\ref{eq:3-20})--(\ref{eq:3-23}) apply as well in
redshift space when the displacement kernels $\bm{L}_n$ are replaced
by those in redshift space $\bm{L}^\mathrm{s}_n$.
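At first order, the mapping of Eq.~(\ref{eq:3-25}) reproduces the familiar Kaiser factor, $\bm{k}\cdot\bm{L}^\mathrm{s}_1(\bm{k}) = 1 + f\mu^2$ with $\mu = \hat{\bm{z}}\cdot\bm{k}/k$. The Python sketch below checks this; the value of $f$ is an illustrative choice.

```python
import numpy as np

f = 0.5                                  # linear growth rate (illustrative value)
zhat = np.array([0.0, 0.0, 1.0])         # line of sight (distant-observer limit)

def L1(k):
    """First-order LPT kernel, Eq. (3-24a)."""
    return k / np.dot(k, k)

def L1_s(k, n=1):
    """Redshift-space kernel of Eq. (3-25) applied to L1 (n = 1)."""
    L = L1(k)
    return L + n * f * np.dot(zhat, L) * zhat

k = np.array([0.3, 0.4, 1.2])
mu = np.dot(k, zhat) / np.linalg.norm(k)
# at first order, k . L1^s(k) = 1 + f mu^2: the Kaiser factor
assert np.isclose(np.dot(k, L1_s(k)), 1 + f * mu ** 2)
```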
\subsection{\label{subsec:PropSym}
Symmetries
}
\subsubsection{\label{subsubsec:SymCC}
Complex conjugate
}
We assume the Cartesian tensor field of Eq.~(\ref{eq:3-1}) is a real
tensor:
\begin{equation}
F^*_{X\,i_1i_2\cdots i_l}(\bm{x}) = F_{X\,i_1i_2\cdots i_l}(\bm{x}),
\label{eq:3-30}
\end{equation}
as it should be for physically observable quantities. For the spherical
decomposition of the traceless part, $F_{Xlm}(\bm{x})$,
Eqs.~(\ref{eq:2-17}), (\ref{eq:2-24b}), (\ref{eq:2-27}) and the above
equation indicate
\begin{equation}
F^*_{Xlm}(\bm{x}) = (-1)^l g_{(l)}^{mm'} F_{Xlm'}(\bm{x})
= (-1)^l F_{Xl}^{\phantom{X}m}(\bm{x}),
\label{eq:3-31}
\end{equation}
and thus the Fourier transform of Eq.~(\ref{eq:3-11b}) satisfies
\begin{equation}
\tilde{F}^*_{Xlm}(\bm{k})
= (-1)^l g_{(l)}^{mm'}\tilde{F}_{Xlm'}(-\bm{k})
= (-1)^l \tilde{F}_{Xl}^{\phantom{X}m}(-\bm{k}).
\label{eq:3-32}
\end{equation}
That is, the complex conjugation of the irreducible tensor field in
Fourier space raises the azimuthal index, inverts the sign of the
wavevector in the argument, and introduces a phase factor $(-1)^l$. The
necessity of the last phase factor comes from our convention of the
$i^l$ factor in Eq.~(\ref{eq:2-24a}). The linear density contrast
$\delta_\mathrm{L}(\bm{x})$ is also a real field, and its Fourier
transform satisfies
${\tilde{\delta}_\mathrm{L}}^{\,*}(\bm{k}) =
\tilde{\delta}_\mathrm{L}(-\bm{k})$. Therefore, the renormalized bias
functions satisfy
\begin{align}
c^{(n)\,*}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n)
&= (-1)^l
g_{(l)}^{mm'}c^{(n)}_{Xlm'}(-\bm{k}_1,\ldots,-\bm{k}_n)
\nonumber\\
&= (-1)^l
c^{(n)m}_{Xl}(-\bm{k}_1,\ldots,-\bm{k}_n).
\label{eq:3-34}
\end{align}
Similarly, the tensor propagator of Eq.~(\ref{eq:3-15}) satisfies
\begin{align}
\tilde{\Gamma}^{(n)\,*}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n)
&= (-1)^l
g_{(l)}^{mm'}\tilde{\Gamma}^{(n)}_{Xlm'}(-\bm{k}_1,\ldots,-\bm{k}_n)
\nonumber\\
&= (-1)^l
\tilde{\Gamma}^{(n)m}_{Xl}(-\bm{k}_1,\ldots,-\bm{k}_n).
\label{eq:3-33}
\end{align}
In redshift space, the propagators also depend on the direction of
the line of sight, $\hat{\bm{z}}$. The above property of the complex
conjugate equally applies to those in redshift space, and we have
\begin{align}
\tilde{\Gamma}^{(n)*}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n;\hat{\bm{z}})
&= (-1)^l
g_{(l)}^{mm'}
\tilde{\Gamma}^{(n)}_{Xlm'}(-\bm{k}_1,\ldots,-\bm{k}_n;\hat{\bm{z}})
\nonumber\\
&= (-1)^l
\tilde{\Gamma}^{(n)m}_{Xl}(-\bm{k}_1,\ldots,-\bm{k}_n;\hat{\bm{z}}).
\label{eq:3-33-1}
\end{align}
\subsubsection{\label{subsubsec:SymRotation}
Rotation
}
Under the passive rotation of Eq.~(\ref{eq:2-14}), vector
components $V_i$ generally transform as
\begin{equation}
V_i \rightarrow V_i' = (R^{-1})_{ij}V_j = V_jR_{ji}.
\label{eq:3-35}
\end{equation}
We denote the rotation of the position and wavevector components as,
e.g., $\bm{x} \rightarrow \bm{x}' = R^{-1}\bm{x}$ and
$\bm{k} \rightarrow \bm{k}' = R^{-1}\bm{k}$, respectively. This
notation does not denote physical rotations of the vectors but
rather the rotation of their components,
$x_i \rightarrow x_i' = x_jR_{ji}$ and
$k_i \rightarrow k_i' = k_jR_{ji}$, respectively, as a result of the
passive rotation of Cartesian basis, Eq.~(\ref{eq:2-14}). The
Cartesian tensor field of Eq.~(\ref{eq:3-1}) transforms as
\begin{equation}
F_{X\,i_1i_2\cdots}(\bm{x}) \rightarrow
F_{X\,i_1i_2\cdots}'(\bm{x}') =
F_{X\,j_1j_2\cdots}(\bm{x}) R_{j_1i_1} R_{j_2i_2}\cdots,
\label{eq:3-36}
\end{equation}
and correspondingly the irreducible tensor of rank $l$ transforms as
\begin{equation}
F_{Xlm}(\bm{x}) \rightarrow
F_{Xlm}'(\bm{x}') = F_{Xlm'}(\bm{x}) D_{(l)m}^{m'}(R).
\label{eq:3-37}
\end{equation}
The corresponding transformation in Fourier space,
Eq.~(\ref{eq:3-11b}), is given by
\begin{equation}
\tilde{F}_{Xlm}(\bm{k}) \rightarrow
\tilde{F}_{Xlm}'(\bm{k}') = \tilde{F}_{Xlm'}(\bm{k}) D_{(l)m}^{m'}(R),
\label{eq:3-38}
\end{equation}
because the volume element $d^3\!x$ and the inner product
$\bm{k}\cdot\bm{x}$ are invariant under the rotation.
The same transformations apply to the tensor fields in
Lagrangian space, $F^\mathrm{L}_{Xlm}(\bm{q})$. Therefore, the
renormalized bias functions, Eq.~(\ref{eq:3-18}), transform as
\begin{multline}
c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n) \rightarrow
c^{(n)\prime}_{Xlm}(\bm{k}_1',\ldots,\bm{k}_n')
\\
= c^{(n)}_{Xlm'}(\bm{k}_1,\ldots,\bm{k}_n) D_{(l)m}^{m'}(R).
\label{eq:3-40}
\end{multline}
Similarly, the propagators of Eq.~(\ref{eq:3-15}) transform as
\begin{multline}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n) \rightarrow
\tilde{\Gamma}^{(n)\prime}_{Xlm}(\bm{k}_1',\ldots,\bm{k}_n')
\\
= \tilde{\Gamma}^{(n)}_{Xlm'}(\bm{k}_1,\ldots,\bm{k}_n) D_{(l)m}^{m'}(R).
\label{eq:3-39}
\end{multline}
In redshift space, the lines of sight also rotate,
$\hat{\bm{z}} \rightarrow \hat{\bm{z}}' = R^{-1}\hat{\bm{z}}$,
although simultaneous rotations of wavevectors and lines of sight
do not change the scalar products in the propagators, as in
Eqs.~(\ref{eq:3-20})--(\ref{eq:3-25}). Thus we have
\begin{multline}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n;\hat{\bm{z}})
\rightarrow
\tilde{\Gamma}^{(n)\prime}_{Xlm}(\bm{k}_1',\ldots,\bm{k}_n';\hat{\bm{z}}')
\\
=
\tilde{\Gamma}^{(n)}_{Xlm'}(\bm{k}_1,\ldots,\bm{k}_n;\hat{\bm{z}})
D_{(l)m}^{m'}(R).
\label{eq:3-39-1}
\end{multline}
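The invariance of the scalar products under a simultaneous rotation of the wavevectors and the line of sight, which underlies the transformation above, can be checked numerically. Below is a minimal sketch using numpy and scipy (not part of the formalism; the Euler angles and vector components are arbitrary test values):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# an arbitrary rotation R, parameterized by z-y-z Euler angles
R = Rotation.from_euler('zyz', [0.3, 1.1, -0.7]).as_matrix()

k = np.array([0.4, -1.2, 0.9])       # an arbitrary wavevector
z_hat = np.array([0.0, 0.0, 1.0])    # line of sight

# passive rotation: k_i' = k_j R_{ji}, and the line of sight rotates
# the same way, z' = R^{-1} z
k_rot = R.T @ k
z_rot = R.T @ z_hat

# scalar products and norms are unchanged
assert np.isclose(k @ z_hat, k_rot @ z_rot)
assert np.isclose(np.linalg.norm(k), np.linalg.norm(k_rot))
```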
\subsubsection{\label{subsubsec:SymParity}
Parity
}
Next, we consider the property of parity symmetry. Keeping the
left-handed coordinate system, we consider an active parity
transformation of the physical system, instead of flipping the axes of
the coordinate system. Under the active parity transformation, the
field values at a position $\bm{x}$ are mapped into those at
$-\bm{x}$, and the functional form of the tensor field transforms as
\begin{equation}
F_{X\,i_1\cdots i_l}(\bm{x}) \rightarrow
F_{X\,i_1\cdots i_l}'(\bm{x}) =
(-1)^{s_X+l} F_{X\,i_1\cdots i_l}(-\bm{x}),
\label{eq:3-41}
\end{equation}
where $s_X=0$ for ordinary tensors and $s_X=1$ for pseudotensors. The
angular momentum is a typical example of a pseudotensor of rank 1.
The corresponding transformation in Fourier space is given by
\begin{equation}
\tilde{F}_{Xlm}(\bm{k}) \rightarrow
\tilde{F}_{Xlm}'(\bm{k}) = (-1)^{s_X+l} \tilde{F}_{Xlm}(-\bm{k}).
\label{eq:3-42}
\end{equation}
The same applies to tensor fields in Lagrangian space.
Accordingly, the renormalized bias functions transform as
\begin{multline}
c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n) \rightarrow
c^{(n)\prime}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n)
\\ =
(-1)^{s_X+l}\, c^{(n)}_{Xlm}(-\bm{k}_1,\ldots,-\bm{k}_n),
\label{eq:3-43}
\end{multline}
because the linear density field transforms as
$\tilde{\delta}_\mathrm{L}(\bm{k}) \rightarrow
\tilde{\delta}_\mathrm{L}(-\bm{k})$.
Similarly, the propagators transform as
\begin{multline}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n) \rightarrow
\tilde{\Gamma}^{(n)\prime}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n)
\\
= (-1)^{s_X+l}\,
\tilde{\Gamma}^{(n)}_{Xlm}(-\bm{k}_1,\ldots,-\bm{k}_n).
\label{eq:3-44}
\end{multline}
In redshift space, we have
\begin{multline}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n;\hat{\bm{z}})
\rightarrow
\tilde{\Gamma}^{(n)\prime}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n;\hat{\bm{z}})
\\
= (-1)^{s_X+l}\,
\tilde{\Gamma}^{(n)}_{Xlm}(-\bm{k}_1,\ldots,-\bm{k}_n;-\hat{\bm{z}}).
\label{eq:3-45}
\end{multline}
\subsubsection{\label{subsubsec:SymInterC}
Interchange of arguments
}
An obvious symmetry of the renormalized bias functions and propagators
is that they are invariant under permutation of the wavevectors in the
argument. For the renormalized bias function, we have
\begin{equation}
c^{(n)}_{Xlm}(\bm{k}_{\sigma(1)},\ldots,\bm{k}_{\sigma(n)}) =
c^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n),
\label{eq:3-46}
\end{equation}
where $\sigma \in \mathcal{S}_n$ is an element of the symmetric
group $\mathcal{S}_n$ of degree $n$. Any permutation can be realized by
a series of interchanges of adjacent arguments:
\begin{equation}
c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_i,\bm{k}_{i+1},\ldots\bm{k}_n)
=
c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_{i+1},\bm{k}_i,\ldots\bm{k}_n),
\label{eq:3-47}
\end{equation}
with arbitrary $i=1,\ldots,n-1$. Similarly, we have
\begin{equation}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_i,\bm{k}_{i+1},\ldots\bm{k}_n)
=
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_{i+1},\bm{k}_i,\ldots\bm{k}_n),
\label{eq:3-48}
\end{equation}
for the propagators in real space, and
\begin{multline}
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_i,\bm{k}_{i+1},\ldots\bm{k}_n;\hat{\bm{z}})
\\
=
\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_{i+1},\bm{k}_i,\ldots\bm{k}_n;\hat{\bm{z}}),
\label{eq:3-49}
\end{multline}
for the propagators in redshift space.
\subsection{\label{subsec:RenBiasFn}
Renormalized bias functions of tensor fields
}
In the remainder of this paper, we mostly work in Fourier space. For
notational simplicity, we omit the tildes on variables in Fourier
space, and write $\delta_\mathrm{L}(\bm{k})$, $F_{Xlm}(\bm{k})$,
$\Gamma^{(n)}_{Xlm}(\bm{k}_1,\ldots)$, etc., instead of
$\tilde{\delta}_\mathrm{L}(\bm{k})$, $\tilde{F}_{Xlm}(\bm{k})$,
$\tilde{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots)$, respectively, as long
as no confusion arises.
One of the essential ingredients of iPT is the set of renormalized
bias functions defined by Eq.~(\ref{eq:3-18}). They characterize the
complicated, fully nonlinear processes in the formation of
astronomical objects, such as galaxies of various types, clusters,
etc. If these complicated processes are described by some kind of
model and the function $F^\mathrm{L}_{Xlm}$ in Lagrangian space is
analytically given in terms of the linear density field
$\delta_\mathrm{L}$, then we can analytically calculate the
renormalized bias function $c^{(n)}_{Xlm}$ according to the
model. In the case of the number density of halos, the Press--Schechter
model and its extensions are applied to concrete calculations
along this line
\cite{Matsubara:2012nc,Matsubara:2013ofa,Matsubara:2016wth}. When
models of this kind are not available, the renormalized bias functions
have infinitely many degrees of freedom, which are difficult to
calculate at a fundamental level. However, the rotational symmetry
considered in the previous subsection places constraints on the
functional form of the renormalized bias functions, as we show below.
We consider the renormalized bias functions given in the form of
Eq.~(\ref{eq:3-18}). In the simplified notation without tildes,
\begin{equation}
c^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n) =
(2\pi)^{3n}
\int \frac{d^3k}{(2\pi)^3}
\left\langle
\frac{\delta^n F^\mathrm{L}_{Xlm}(\bm{k})}{
\delta\delta_\mathrm{L}(\bm{k}_1) \cdots
\delta\delta_\mathrm{L}(\bm{k}_n)}
\right\rangle.
\label{eq:3-50}
\end{equation}
The angular dependencies of the wavevectors in the arguments can be
decomposed in a series expansion with spherical harmonics as
\begin{multline}
c^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n) =
\sum_{l_1,\ldots,l_n}
Y_{l_1m_1}^*(\hat{\bm{k}}_1)\cdots Y_{l_nm_n}^*(\hat{\bm{k}}_n)
\\ \times
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n),
\label{eq:3-51}
\end{multline}
where $k_i = |\bm{k}_i|$ with $i=1,\ldots,n$ are absolute values of
the wavevectors, and $\hat{\bm{k}}_i = (\theta_i,\phi_i)$ represents
the spherical coordinates for the direction of wavevectors $\bm{k}_i$.
The repeated indices $m$, $m_1,\ldots,m_n$ in Eq.~(\ref{eq:3-51}) are
summed over according to the Einstein summation convention as in the
previous section. Due to the orthonormality relation for the spherical
harmonics, Eq.~(\ref{eq:a-5}), the coefficients are inversely given by
\begin{multline}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n) =
\int d^2\hat{k}_1 \cdots d^2\hat{k}_n\,
c^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n)
\\ \times
Y_{l_1m_1}(\hat{\bm{k}}_1)\cdots Y_{l_nm_n}(\hat{\bm{k}}_n),
\label{eq:3-52}
\end{multline}
where $d^2\hat{k}_i = \sin\theta_i\,d\theta_i\,d\phi_i$ represents the
angular integration of the wavevector $\bm{k}_i$. Using the property
of Eq.~(\ref{eq:3-34}), the complex conjugate of the above equation
satisfies
\begin{multline}
c^{(n)\,l;l_1\cdots l_n\,*}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
=
(-1)^{l+l_1+\cdots +l_n} g_{(l)}^{mm'}
g_{(l_1)}^{m_1m_1'} \cdots g_{(l_n)}^{m_nm_n'}
\\ \times
c^{(n)\,l;l_1\cdots l_n}_{Xm';m_1'\cdots m_n'}(k_1,\ldots,k_n).
\label{eq:3-55}
\end{multline}
Under the passive rotation of Eq.~(\ref{eq:2-14}), the function of
Eq.~(\ref{eq:3-52}) covariantly transforms as
\begin{multline}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
\rightarrow
c^{(n)\,l;l_1\cdots l_n}_{Xm';m_1'\cdots m_n'}(k_1,\ldots,k_n)
\\ \times
D_{(l)m}^{m'}(R) D_{(l_1)m_1}^{m_1'}(R) \cdots D_{(l_n)m_n}^{m_n'}(R),
\label{eq:3-56}
\end{multline}
as obviously indicated by the lower positions of the azimuthal indices
$m, m_1,\ldots m_n$. Under the active parity transformation of
Eq.~(\ref{eq:3-43}), the same function transforms as
\begin{multline}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
\\
\rightarrow
(-1)^{s_X+l+l_1+\cdots +l_n}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n),
\label{eq:3-56-1}
\end{multline}
because of the parity property of the spherical harmonics,
Eq.~(\ref{eq:a-1}). The parity symmetry implies that the renormalized
bias function is invariant under the above transformation, and thus a
necessary condition for the function to be non-zero is
\begin{equation}
s_X + l + l_1 + \cdots + l_n = \mathrm{even},
\label{eq:3-56-2}
\end{equation}
in the Universe with parity symmetry.
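The factor $(-1)^{l}$ of the spherical harmonics under a point reflection, which is the origin of this selection rule, can be checked numerically. A minimal sketch using scipy (note that scipy's `sph_harm` takes the azimuthal angle before the polar angle; the chosen direction is arbitrary):

```python
import numpy as np
from scipy.special import sph_harm

azim, polar = 0.7, 1.2   # arbitrary direction on the sphere
for l in range(4):
    for m in range(-l, l + 1):
        y = sph_harm(m, l, azim, polar)
        # point reflection: polar -> pi - polar, azimuth -> azimuth + pi
        y_flip = sph_harm(m, l, azim + np.pi, np.pi - polar)
        assert np.isclose(y_flip, (-1) ** l * y)
```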
For a statistically isotropic Universe, the functions of
Eq.~(\ref{eq:3-52}) are rotationally invariant: they are given by
ensemble averages of the field variables, as seen from
Eqs.~(\ref{eq:3-50}) and (\ref{eq:3-52}), and should not depend on the
choice of the coordinate system $\hat{\mathbf{e}}_i$ used to describe
the wavevectors $\bm{k}_1,\ldots,\bm{k}_n$ of the renormalized bias
functions $c^{(n)}_{Xlm}(\bm{k}_1,\ldots,\bm{k}_n)$. Therefore, they
are invariant under the transformation of Eq.~(\ref{eq:3-56}), and we
have
\begin{multline}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
= c^{(n)\,l;l_1\cdots l_n}_{Xm';m_1'\cdots m_n'}(k_1,\ldots,k_n)
\\ \times
\frac{1}{8\pi^2} \int [dR]
D_{(l)m}^{m'}(R) D_{(l_1)m_1}^{m_1'}(R) \cdots D_{(l_n)m_n}^{m_n'}(R),
\label{eq:3-57}
\end{multline}
where
\begin{equation}
\int [dR] \cdots
= \int_0^{2\pi}d\alpha \int_0^\pi d\beta\,\sin\beta
\int_0^{2\pi} d\gamma \cdots
\label{eq:3-57-1}
\end{equation}
represents the integrals over Euler angles
$(\alpha\beta\gamma)$ of the rotation $R$. A product of two Wigner
rotation matrices of the same rotation reduces to a single matrix
through vector-coupling coefficients as \cite{Edmonds:1955fi}
\begin{multline}
D_{(l_1)m_1}^{m_1'}(R) D_{(l_2)m_2}^{m_2'}(R)
\\
=
\sum_{l,m,m'} (2l+1)
\begin{pmatrix}
l_1 & l_2 & l \\
m_1' & m_2' & m'
\end{pmatrix}
\begin{pmatrix}
l_1 & l_2 & l \\
m_1 & m_2 & m
\end{pmatrix}
D_{(l)m}^{m'\,*}(R),
\label{eq:3-58}
\end{multline}
using the Wigner $3j$-symbol. The complex conjugate of the rotation
matrix is given by
$D_{(l)m_1}^{m_2*}(R) = g_{(l)}^{m_1m_1'}
g^{(l)}_{m_2m_2'}D_{(l)m_1'}^{m_2'}(R)$ using our metric tensor for
spherical basis.
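The reduction of a product of two rotation matrices can be verified numerically in the standard (Clebsch--Gordan) convention, which is equivalent to Eq.~(\ref{eq:3-58}) up to the translation between $3j$-symbols and vector-coupling coefficients. A sketch using sympy; the Euler angles and quantum numbers below are arbitrary test values, not part of the paper's formalism:

```python
from sympy.physics.quantum.spin import Rotation
from sympy.physics.quantum.cg import CG

alpha, beta, gamma = 0.3, 1.1, -0.7   # arbitrary Euler angles

def D(j, m, mp):
    """Numerical Wigner D-matrix element D^j_{m m'}(alpha, beta, gamma)."""
    return complex(Rotation.D(j, m, mp, alpha, beta, gamma).doit())

j1, m1, k1 = 1, 1, 0
j2, m2, k2 = 2, -1, 1

# Clebsch-Gordan series: the product of two D matrices of the same
# rotation is a sum of single D matrices weighted by two CG coefficients
lhs = D(j1, m1, k1) * D(j2, m2, k2)
rhs = sum(
    float(CG(j1, m1, j2, m2, j, m1 + m2).doit())
    * float(CG(j1, k1, j2, k2, j, k1 + k2).doit())
    * D(j, m1 + m2, k1 + k2)
    for j in range(abs(j1 - j2), j1 + j2 + 1)
)
assert abs(lhs - rhs) < 1e-10
```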
In the following, we use a simplified notation for the $3j$-symbols,
\begin{equation}
\left(l_1\,l_2\,l_3\right)_{m_1m_2m_3} \equiv
\begin{pmatrix}
l_1 & l_2 & l_3 \\
m_1 & m_2 & m_3
\end{pmatrix},
\label{eq:3-59}
\end{equation}
which is non-zero only when $m_1+m_2+m_3=0$. The azimuthal indices can
be raised or lowered by the spherical metric, e.g.,
\begin{align}
\left(l_1\,l_2\,l_3\right)_{m_1}^{\phantom{m_1}m_2m_3}
&= g_{(l_2)}^{m_2m_2'} g_{(l_3)}^{m_3m_3'}
\left(l_1\,l_2\,l_3\right)_{m_1m_2'm_3'}
\nonumber \\
&= (-1)^{m_2+m_3}
\begin{pmatrix}
l_1 & l_2 & l_3 \\
m_1 & -m_2 & -m_3
\end{pmatrix},
\label{eq:3-60}
\end{align}
and so forth. The properties of and formulas for the Wigner
$3j$-symbols in our notation, which are used repeatedly in the rest of
this paper, are summarized in Appendix~\ref{app:3njSymbols}.
With our notation of the $3j$-symbol, Eq.~(\ref{eq:3-58}) is
equivalent to an expression,
\begin{multline}
D_{(l_1)m_1}^{m_1'}(R) D_{(l_2)m_2}^{m_2'}(R)
= (-1)^{l_1+l_2}
\sum_{l=0}^\infty (-1)^l(2l+1)
\\ \times
\left(l_1\,l_2\,l\right)_{m_1m_2}^{\phantom{m_1m_2}m}
\left(l_1\,l_2\,l\right)^{m_1'm_2'}_{\phantom{m_1'm_2'}m'}
D_{(l)m}^{m'}(R),
\label{eq:3-64}
\end{multline}
in which the rotational covariance is explicit. We use a property of
the $3j$-symbol, Eq.~(\ref{eq:c-2}), to derive the above equation.
By consecutive applications of Eq.~(\ref{eq:3-64}) to
Eq.~(\ref{eq:3-57}), the dependence on the rotation $R$ in the
integrand on the rhs is represented by a linear combination of
products of at most three rotation matrices, whose coefficients are
given by products of $3j$-symbols. The averages of products of three
or fewer rotation matrices are given by
\begin{align}
&
\frac{1}{8\pi^2} \int [dR]
D_{(l)m}^{m'}(R)
= \delta_{l0} \delta_{m0} \delta_{m'0},
\label{eq:3-65a}\\
&
\frac{1}{8\pi^2} \int [dR]
D_{(l_1)m_1}^{m_1'}(R) D_{(l_2)m_2}^{m_2'}(R)
= \frac{\delta_{l_1l_2}}{2l_1+1}
g^{(l_1)}_{m_1m_2} g_{(l_2)}^{m_1'm_2'},
\label{eq:3-65b}\\
&
\frac{1}{8\pi^2} \int [dR]
D_{(l_1)m_1}^{m_1'}(R) D_{(l_2)m_2}^{m_2'}(R) D_{(l_3)m_3}^{m_3'}(R)
\nonumber\\
& \hspace{5pc}
= (-1)^{l_1+l_2+l_3} \left(l_1\,l_2\,l_3\right)_{m_1m_2m_3}
\left(l_1\,l_2\,l_3\right)^{m_1'm_2'm_3'}.
\label{eq:3-65c}
\end{align}
Eqs.~(\ref{eq:3-65a}) and (\ref{eq:3-65b}) are also derived from
Eq.~(\ref{eq:3-65c}) by noting $D_{(0)0}^0(R)=1$ and Eq.~(\ref{eq:c-8}).
In turn, Eqs.~(\ref{eq:3-65b}) and (\ref{eq:3-65c}) are derived by
further applications of Eq.~(\ref{eq:3-64}) to reduce the number of
factors in the product, finally using Eq.~(\ref{eq:3-65a}).
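After the azimuthal integrals over $\alpha$ and $\gamma$ are carried out, Eq.~(\ref{eq:3-65c}) reduces to a $\beta$-integral over three small Wigner $d$-matrices. That reduced form can be checked symbolically in the standard convention; a sketch using sympy, with arbitrary admissible quantum numbers:

```python
from sympy import integrate, sin, pi, symbols
from sympy.physics.quantum.spin import Rotation
from sympy.physics.wigner import wigner_3j

beta = symbols('beta', real=True)

# (j, m, m') of the three factors; the m's and the m''s each sum to zero,
# so the alpha and gamma integrals give (2*pi)^2 and only the beta
# integral of the small d-matrices remains
factors = [(1, 1, 0), (1, 0, 1), (2, -1, -1)]

integrand = sin(beta)
for j, m, mp in factors:
    integrand *= Rotation.d(j, m, mp, beta).doit()

lhs = integrate(integrand, (beta, 0, pi))
# the average reduces to a product of two 3j-symbols (times 2 here,
# from the 1/(8*pi^2) normalization)
rhs = 2 * wigner_3j(1, 1, 2, 1, 0, -1) * wigner_3j(1, 1, 2, 0, 1, -1)
assert abs(float(lhs - rhs)) < 1e-12
```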
As a result of the above procedure of averaging over the rotation $R$,
the dependence on the indices $m$ in Eq.~(\ref{eq:3-57}) is entirely
represented by spherical metrics and $3j$-symbols. First, we
explicitly derive the results for the lower orders of $n$.
For $n=0$, using Eqs.~(\ref{eq:3-57}) and (\ref{eq:3-65a}), we derive
\begin{equation}
c^{(0)}_{Xlm} =
c^{(0)\,l}_{Xm} = c^{(0)\,0}_{X\,0}\delta_{l0} \delta_{m0}.
\label{eq:3-66}
\end{equation}
This result is trivial because the corresponding function of
Eq.~(\ref{eq:3-50}) for $n=0$ does not depend on the direction of the
wavevector. For notational simplicity, we denote
$c^{(0)}_X \equiv \sqrt{4\pi}\,c^{(0)\,0}_{X\,0}$, and we have
\begin{equation}
c^{(0)}_{Xlm} =
c^{(0)\,l}_{Xm} = \frac{\delta_{l0} \delta_{m0}}{\sqrt{4\pi}} c^{(0)}_{X}.
\label{eq:3-67}
\end{equation}
For a scalar function, $F_X$, the coefficient $c^{(0)}_X$ just
corresponds to the mean value,
$c^{(0)}_X = \sqrt{4\pi} c^{(0)}_{X00} =\sqrt{4\pi} \langle F_{X00}
\rangle = 4\pi \langle F_{X00} \rangle \mathsf{Y}^{(0)}= \langle F_X
\rangle $, as seen from Eqs.~(\ref{eq:2-8}) and (\ref{eq:2-19}). The
parity symmetry of Eq.~(\ref{eq:3-56-2}) suggests that $s_X=0$, which
is trivial because the mean value $\langle F_X \rangle$ of a scalar
field vanishes if the field is a pseudo-scalar in the Universe with
parity symmetry.
For $n=1$, using Eqs.~(\ref{eq:3-57}) and (\ref{eq:3-65b}), we derive
\begin{equation}
c^{(1)\,l;l_1}_{Xm;m_1}(k_1)
= \frac{\delta_{l\,l_1}}{2l+1}
g^{(l)}_{mm_1} g_{(l)}^{m'm_1'} c^{(1)\,l;l}_{Xm';m_1'}(k_1),
\label{eq:3-68}
\end{equation}
and therefore both indices $m,m_1$ appear only in the metric
$g^{(l)}_{mm_1}$. Defining a rotationally invariant function,
\begin{equation}
c^{(1)}_{Xl}(k_1)
\equiv \frac{g_{(l)}^{mm_1}}{2l+1}
c^{(1)\,l;l}_{Xm;m_1}(k_1),
\label{eq:3-69}
\end{equation}
Eq.~(\ref{eq:3-68}) reduces to a simple form,
\begin{equation}
c^{(1)\,l;l_1}_{Xm;m_1}(k_1)
= \delta_{ll_1} g^{(l)}_{mm_1} c^{(1)}_{Xl}(k_1).
\label{eq:3-70}
\end{equation}
From the complex-conjugate symmetry, Eq.~(\ref{eq:3-55}), we have
\begin{equation}
c^{(1)\,*}_{Xl}(k_1) = c^{(1)}_{Xl}(k_1),
\label{eq:3-70-2}
\end{equation}
and thus the reduced function is a real function. The parity symmetry
of Eq.~(\ref{eq:3-56-2}) in this $n=1$ case suggests $s_X = 0$,
because of the constraint $l=l_1$. This means that pseudotensors do
not have a first-order renormalized bias function. This property
arises partly because we consider the case in which only scalar
perturbations $\delta_\mathrm{L}$ are responsible for the properties
of tensor fields; for example, angular momentum is not generated at
first order in perturbation theory \cite{Peebles1969}.
Substituting the expression of Eq.~(\ref{eq:3-70}) into
Eq.~(\ref{eq:3-51}) with $n=1$, we derive
\begin{equation}
c^{(1)}_{Xlm}(\bm{k}_1) = c^{(1)}_{Xl}(k_1) Y_{lm}(\hat{\bm{k}}_1).
\label{eq:3-70-1}
\end{equation}
The invariant coefficient $c^{(1)}_{Xl}(k_1)$ can also be read off
from an expression of the renormalized bias function
$c^{(1)}_{Xlm}(\bm{k}_1)$ in which the angular dependence on
$\bm{k}_1$ is expanded by the spherical harmonics
$Y_{lm}(\hat{\bm{k}}_1)$. One can also explicitly invert the above
equation by using the orthonormality relation of spherical harmonics.
For $n=2$, using Eqs.~(\ref{eq:3-57}) and (\ref{eq:3-65c}), we derive
\begin{equation}
c^{(2)\,l;l_1l_2}_{Xm;m_1m_2}(k_1,k_2)
= (-1)^l \sqrt{2l+1}
\left(l\,l_1\,l_2\right)_{mm_1m_2}
c^{(2)\,l}_{Xl_1l_2}(k_1,k_2),
\label{eq:3-71}
\end{equation}
where we define a rotationally invariant function,
\begin{equation}
c^{(2)\,l}_{Xl_1l_2}(k_1,k_2)
\equiv
\frac{(-1)^{l_1+l_2}}{\sqrt{2l+1}}
\left(l\,l_1\,l_2\right)^{mm_1m_2}
c^{(2)\,l;l_1l_2}_{Xm;m_1m_2}(k_1,k_2).
\label{eq:3-72}
\end{equation}
The above Eq.~(\ref{eq:3-71}) essentially corresponds to the
Wigner--Eckart theorem in representation theory and quantum mechanics
\cite{Sakurai:2011zz}. From the complex-conjugate symmetry,
Eq.~(\ref{eq:3-55}), we have
\begin{equation}
c^{(2)\,l\,*}_{Xl_1l_2}(k_1,k_2)
= c^{(2)\,l}_{Xl_1l_2}(k_1,k_2),
\label{eq:3-72-1}
\end{equation}
and thus the reduced function is a real function. The parity symmetry
of Eq.~(\ref{eq:3-56-2}) in this case suggests
$s_X+l+l_1+l_2=\mathrm{even}$. Substituting Eq.~(\ref{eq:3-71}) into
Eq.~(\ref{eq:3-51}) with $n=2$, we derive
\begin{multline}
c^{(2)}_{Xlm}(\bm{k}_1,\bm{k}_2)
= (-1)^l \sqrt{2l+1}
\sum_{l_1,l_2}
c^{(2)\,l}_{Xl_1l_2}(k_1,k_2)
\left(l\,l_1\,l_2\right)_{m}^{\phantom{m}m_1m_2}
\\ \times
Y_{l_1m_1}(\hat{\bm{k}}_1) Y_{l_2m_2}(\hat{\bm{k}}_2).
\label{eq:3-73}
\end{multline}
The last expression contains the bipolar spherical harmonics
\cite{Khersonskii:1988krb}, defined by
\begin{multline}
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes Y_{l_2}(\hat{\bm{k}}_2)
\right\}_{lm}
\\
\equiv (-1)^l \sqrt{2l+1}
\left(l\,l_1\,l_2\right)_{m}^{\phantom{m}m_1m_2}
Y_{l_1m_1}(\hat{\bm{k}}_1) Y_{l_2m_2}(\hat{\bm{k}}_2).
\label{eq:3-74}
\end{multline}
Thus, Eq.~(\ref{eq:3-73}) is concisely given by
\begin{multline}
c^{(2)}_{Xlm}(\bm{k}_1,\bm{k}_2)
= \sum_{l_1,l_2}
c^{(2)\,l}_{Xl_1l_2}(k_1,k_2)
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes Y_{l_2}(\hat{\bm{k}}_2)
\right\}_{lm}.
\label{eq:3-75}
\end{multline}
The interchange symmetry of Eq.~(\ref{eq:3-47}) in this case is given
by $c^{(2)}_{Xlm}(\bm{k}_1,\bm{k}_2) =
c^{(2)}_{Xlm}(\bm{k}_2,\bm{k}_1)$. Due to the symmetry of $3j$-symbols
of Eq.~(\ref{eq:c-4}), the bipolar spherical harmonics satisfy
\begin{equation}
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes Y_{l_2}(\hat{\bm{k}}_2)
\right\}_{lm} = (-1)^{l+l_1+l_2}
\left\{
Y_{l_2}(\hat{\bm{k}}_2)\otimes Y_{l_1}(\hat{\bm{k}}_1)
\right\}_{lm},
\label{eq:3-75-1}
\end{equation}
and therefore the invariant function satisfies
\begin{equation}
c^{(2)\,l}_{Xl_2l_1}(k_2,k_1)
= (-1)^{l+l_1+l_2}
c^{(2)\,l}_{Xl_1l_2}(k_1,k_2).
\label{eq:3-75-2}
\end{equation}
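The sign in Eq.~(\ref{eq:3-75-1}) rests on the factor acquired by a $3j$-symbol under an odd permutation of its columns. A minimal symbolic check of that property using sympy, over a few arbitrary triads:

```python
from sympy.physics.wigner import wigner_3j

# an odd permutation of columns multiplies the 3j-symbol by
# (-1)^{l+l1+l2}; check over a few arbitrary admissible triads
for (l, l1, l2) in [(2, 1, 1), (3, 2, 2), (2, 2, 1)]:
    for m1 in range(-l1, l1 + 1):
        for m2 in range(-l2, l2 + 1):
            m = -(m1 + m2)            # 3j vanishes unless m+m1+m2 = 0
            if abs(m) > l:
                continue
            lhs = wigner_3j(l, l2, l1, m, m2, m1)
            rhs = (-1) ** (l + l1 + l2) * wigner_3j(l, l1, l2, m, m1, m2)
            assert lhs == rhs
```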
For $n=3$, using Eqs.~(\ref{eq:3-57}), (\ref{eq:3-64}) and
(\ref{eq:3-65c}), we derive
\begin{multline}
c^{(3)\,l;l_1l_2l_3}_{Xm;m_1m_2m_3}(k_1,k_2,k_3)
= (-1)^l \sqrt{2l+1}
\sum_L (-1)^L\sqrt{2L+1}
\\ \times
\left(l\,l_1\,L\right)_{mm_1M}
\left(L\,l_2\,l_3\right)^{M}_{\phantom{M}m_2m_3}
c^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3),
\label{eq:3-76}
\end{multline}
where we define
\begin{multline}
c^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3)
\equiv
\frac{(-1)^{l_1+l_2+l_3}}{\sqrt{2l+1}}
(-1)^{L} \sqrt{2L+1}
\left(l\,l_1\,L\right)^{mm_1}_{\phantom{mm_1}M}
\\ \times
\left(L\,l_2\,l_3\right)^{Mm_2m_3}
c^{(3)\,l;l_1l_2l_3}_{Xm;m_1m_2m_3}(k_1,k_2,k_3).
\label{eq:3-77}
\end{multline}
From the complex-conjugate symmetry, Eq.~(\ref{eq:3-55}), we have
\begin{equation}
c^{(3)\,l;L\,*}_{Xl_1l_2l_3}(k_1,k_2,k_3)
= c^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3),
\label{eq:3-77-1}
\end{equation}
and thus the reduced function is a real function.
The parity symmetry of Eq.~(\ref{eq:3-56-2}) in this case suggests
$s_X+l+l_1+l_2+l_3=\mathrm{even}$.
Substituting Eq.~(\ref{eq:3-76}) into Eq.~(\ref{eq:3-51}) with $n=3$, we
derive
\begin{multline}
c^{(3)}_{Xlm}(\bm{k}_1,\bm{k}_2,\bm{k}_3)
= (-1)^l \sqrt{2l+1}
\\ \times
\sum_{l_1,l_2,l_3,L}
(-1)^L\sqrt{2L+1}
c^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3)
\left(l\,l_1\,L\right)_{m}^{\phantom{m}m_1M}
\\ \times
\left(L\,l_2\,l_3\right)_{M}^{\phantom{M}m_2m_3}
Y_{l_1m_1}(\hat{\bm{k}}_1) Y_{l_2m_2}(\hat{\bm{k}}_2)
Y_{l_3m_3}(\hat{\bm{k}}_3).
\label{eq:3-78}
\end{multline}
The last expression contains the tripolar spherical harmonics
\cite{Khersonskii:1988krb}, defined by
\begin{multline}
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{Y_{l_2}(\hat{\bm{k}}_2)
\otimes Y_{l_3}(\hat{\bm{k}}_3)
\right\}_L
\right\}_{lm}
\\
= (-1)^{l+L} \sqrt{(2l+1)(2L+1)}
\left(l\,l_1\,L\right)_{m}^{\phantom{m}m_1M}
\\ \times
\left(L\,l_2\,l_3\right)_{M}^{\phantom{M}m_2m_3}
Y_{l_1m_1}(\hat{\bm{k}}_1) Y_{l_2m_2}(\hat{\bm{k}}_2)
Y_{l_3m_3}(\hat{\bm{k}}_3).
\label{eq:3-79}
\end{multline}
Thus, Eq.~(\ref{eq:3-78}) is concisely given by
\begin{multline}
c^{(3)}_{Xlm}(\bm{k}_1,\bm{k}_2,\bm{k}_3)
=
\sum_{l_1,l_2,l_3,L}
c^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3)
\\ \times
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{
Y_{l_2}(\hat{\bm{k}}_2) \otimes Y_{l_3}(\hat{\bm{k}}_3)
\right\}_L
\right\}_{lm}.
\label{eq:3-80}
\end{multline}
The interchange symmetries of Eq.~(\ref{eq:3-47}) in terms of
invariant functions in this case are derived from properties of
interchanging arguments in tripolar spherical harmonics. For an
interchange of the last two arguments, $2 \leftrightarrow 3$, we have
\begin{multline}
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{Y_{l_2}(\hat{\bm{k}}_2)
\otimes Y_{l_3}(\hat{\bm{k}}_3)
\right\}_L
\right\}_{lm}
\\
= (-1)^{l_2+l_3+L}
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{Y_{l_3}(\hat{\bm{k}}_3)
\otimes Y_{l_2}(\hat{\bm{k}}_2)
\right\}_L
\right\}_{lm},
\label{eq:3-80-1}
\end{multline}
just as in the case of Eq.~(\ref{eq:3-75-1}). For an
interchange of the first two arguments, $1 \leftrightarrow 2$,
however, a recoupling of the $3j$-symbols appears, and we have
\begin{multline}
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{Y_{l_2}(\hat{\bm{k}}_2)
\otimes Y_{l_3}(\hat{\bm{k}}_3)
\right\}_L
\right\}_{lm}
= (-1)^{l_1+l_2}
\sum_{L'} (2L'+1)
\\ \times
\begin{Bmatrix}
l_1 & l & L \\
l_2 & l_3 & L'
\end{Bmatrix}
\left\{
Y_{l_2}(\hat{\bm{k}}_2)\otimes
\left\{Y_{l_1}(\hat{\bm{k}}_1)
\otimes Y_{l_3}(\hat{\bm{k}}_3)
\right\}_{L'}
\right\}_{lm},
\label{eq:3-80-2}
\end{multline}
where the factor in front of the tripolar spherical harmonics on the
rhs is a $6j$-symbol. This relation is derived by applying a sum rule
of the $3j$-symbols, Eq.~(\ref{eq:c-38}), to the definition of the
tripolar spherical harmonics, Eq.~(\ref{eq:3-79}). Correspondingly,
the interchange symmetries of invariant coefficients of
Eq.~(\ref{eq:3-80}) are given by
\begin{equation}
c^{(3)\,l;L}_{Xl_1l_3l_2}(k_1,k_3,k_2)
= (-1)^{l_2+l_3+L}
c^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3),
\label{eq:3-80-3}
\end{equation}
and
\begin{multline}
c^{(3)\,l;L}_{Xl_2l_1l_3}(k_2,k_1,k_3)
= (-1)^{l_1+l_2}
\sum_{L'} (2L'+1)
\begin{Bmatrix}
l_1 & l & L \\
l_2 & l_3 & L'
\end{Bmatrix}
\\ \times
c^{(3)\,l;L'}_{Xl_1l_2l_3}(k_1,k_2,k_3).
\label{eq:3-80-4}
\end{multline}
Combining Eqs.~(\ref{eq:3-80-3}) and (\ref{eq:3-80-4}), all the other
symmetries concerning permutations of $(1,2,3)$ in the subscripts and
arguments are straightforwardly obtained. A similar consideration of
the interchange symmetry, used to enforce permutation symmetry in the
angular trispectrum of the cosmic microwave background, is found in
Ref.~\cite{Hu:2001fa}.
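Applying the interchange of the first two arguments twice must restore the original invariant function, which is guaranteed by the orthogonality sum rule of the $6j$-symbols. A symbolic check of that sum rule using sympy; the angular momenta below are arbitrary admissible values:

```python
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_6j

# orthogonality of the 6j-symbols:
#   sum_x (2x+1) {a b x; c d p} {a b x; c d q} = delta_{pq} / (2p+1),
# provided (a,d,p) and (b,c,p) satisfy the triangle conditions
a, b, c, d = 1, 2, 2, 1
for p in range(3):
    for q in range(3):
        total = sum(
            (2 * x + 1)
            * wigner_6j(a, b, x, c, d, p)
            * wigner_6j(a, b, x, c, d, q)
            for x in range(abs(a - b), a + b + 1)
        )
        expected = Rational(1, 2 * p + 1) if p == q else 0
        assert simplify(total - expected) == 0
```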
In the same way, one derives the results for general orders $n > 3$.
Since factors such as $2l+1$ and $2L+1$ frequently appear in the
equations, we define a simplified notation,
\begin{equation}
\{l\} \equiv 2l+1, \quad
\{L\} \equiv 2L+1, \quad
\mathrm{etc.}
\label{eq:3-81-0}
\end{equation}
to be used throughout the rest of the paper. The expansion coefficient
of order $n$ is given by
\begin{multline}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
=(-1)^l \sqrt{\{l\}}
\\ \times
\sum_{L_2,\ldots,L_{n-1}}
(-1)^{L_2+\cdots +L_{n-1}}
\sqrt{\{L_2\}\cdots \{L_{n-1}\}}
\left(l\,l_1\,L_2\right)_{mm_1M_2}
\\ \times
\left(L_2\,l_2\,L_3\right)^{M_2}_{\phantom{M_2}m_2M_3}
\cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)^{M_{n-2}}_{\phantom{M_{n-2}}m_{n-2}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)^{M_{n-1}}_{\phantom{M_{n-1}}m_{n-1}m_n}
c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n),
\label{eq:3-81}
\end{multline}
where we define
\begin{multline}
c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)
\equiv
\frac{(-1)^{l_1+\cdots +l_n}}{\sqrt{\{l\}}}
\\ \times
(-1)^{L_2+\cdots +L_{n-1}}\sqrt{\{L_2\}\cdots\{L_{n-1}\}}
\left(l\,l_1\,L_2\right)^{mm_1}_{\phantom{mm_1}M_2}
\\ \times
\left(L_2\,l_2\,L_3\right)^{M_2m_2}_{\phantom{M_2m_2}M_3}
\cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)^{M_{n-2}m_{n-2}}_{\phantom{M_{n-2}m_{n-2}}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)^{M_{n-1}m_{n-1}m_n}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n).
\label{eq:3-82}
\end{multline}
Thus the azimuthal indices of the decomposed renormalized bias
functions, Eq.~(\ref{eq:3-52}), are all carried by combinations of
$3j$-symbols, and the physical content of the renormalized bias
functions is given by the invariant functions,
$c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)$. The
full rotational symmetry is the reason why the azimuthal indices of
the decomposed renormalized bias function,
$c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(\ldots)$, are represented
by combinations of $3j$-symbols. Due to the properties of the
$3j$-symbols, we have $m+m_1+\cdots +m_n=0$, which is a manifestation
of the axial symmetry of rotations around the third axis
$\hat{\mathbf{e}}_3$.
From the complex-conjugate symmetry, Eq.~(\ref{eq:3-55}), we have
\begin{equation}
c^{(n)\,l;L_2\cdots L_{n-1}\,*}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)
= c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n),
\label{eq:3-82-1}
\end{equation}
and thus the reduced function is a real function.
The parity symmetry of Eq.~(\ref{eq:3-56-2}) in this case suggests
$s_X+l+l_1+\cdots +l_n=\mathrm{even}$.
The renormalized bias function of Eq.~(\ref{eq:3-51}) is given by
\begin{multline}
c^{(n)}_{Xlm}(\bm{k}_1,\cdots,\bm{k}_n)
= (-1)^l \sqrt{\{l\}}
\\ \times
\sum_{\substack{l_1,\ldots,l_n\\L_2,\ldots,L_{n-1}}}
(-1)^{L_2 + \cdots + L_{n-1}}
\sqrt{\{L_2\}\cdots\{L_{n-1}\}}
\\ \times
c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)
\left(l\,l_1\,L_2\right)_{m}^{\phantom{m}m_1M_2}
\\ \times
\left(L_2\,l_2\,L_3\right)_{M_2}^{\phantom{M_2}m_2M_3} \cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)_{M_{n-2}}^{\phantom{M_{n-2}}m_{n-2}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)_{M_{n-1}}^{\phantom{M_{n-1}}m_{n-1}m_n}
Y_{l_1m_1}(\hat{\bm{k}}_1) \cdots Y_{l_nm_n}(\hat{\bm{k}}_n).
\label{eq:3-83}
\end{multline}
We generally define polypolar spherical harmonics by
\begin{multline}
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{
Y_{l_2}(\hat{\bm{k}}_2)\otimes
\left\{ \cdots
\otimes Y_{l_n}(\hat{\bm{k}}_n)
\right\}_{L_{n-1}} \cdots \right\}_{L_2}
\right\}_{lm}
\\
=
(-1)^{l+L_2+\cdots +L_{n-1}}
\sqrt{\{l\}\{L_2\}\cdots\{L_{n-1}\}}
\left(l\,l_1\,L_2\right)_{m}^{\phantom{m}m_1M_2}
\\ \times
\left(L_2\,l_2\,L_3\right)_{M_2}^{\phantom{M_2}m_2M_3} \cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)_{M_{n-2}}^{\phantom{M_{n-2}}m_{n-2}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)_{M_{n-1}}^{\phantom{M_{n-1}}m_{n-1}m_n}
Y_{l_1m_1}(\hat{\bm{k}}_1) \cdots
Y_{l_nm_n}(\hat{\bm{k}}_n),
\label{eq:3-84}
\end{multline}
and Eq.~(\ref{eq:3-83}) is concisely represented by
\begin{multline}
c^{(n)}_{Xlm}(\bm{k}_1,\cdots,\bm{k}_n)
=
\sum_{\substack{l_1,\ldots,l_n\\L_2,\ldots,L_{n-1}}}
c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\cdots,k_n)
\\ \times
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{
Y_{l_2}(\hat{\bm{k}}_2)\otimes
\left\{ \cdots
\otimes Y_{l_n}(\hat{\bm{k}}_n)
\right\}_{L_{n-1}} \cdots \right\}_{L_2}
\right\}_{lm}.
\label{eq:3-85}
\end{multline}
The interchange symmetries of Eq.~(\ref{eq:3-47}) in terms of the
invariant functions are derived in a similar way to the case of
$n=3$. For the interchange of the last two arguments, we have
\begin{multline}
c^{(n)\,l;L_2\cdots L_{n-2}L_{n-1}}_{Xl_1\cdots l_{n-2}l_nl_{n-1}}
(k_1,\cdots,k_{n-2},k_n,k_{n-1})
\\
= (-1)^{l_{n-1}+l_n+L_{n-1}}
c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}
(k_1,\cdots,k_n),
\label{eq:3-86}
\end{multline}
and for the interchange of the other adjacent arguments, $i
\leftrightarrow i+1$, we have
\begin{multline}
c^{(n)\,l;L_2\cdots L_{i-1}L_{i+1}L_iL_{i+2}\cdots L_{n-1}}
_{Xl_1\cdots l_{i-1}l_{i+1}l_il_{i+2}\cdots l_n}
(k_1,\cdots,k_{i-1},k_{i+1},k_i,k_{i+2},\cdots,k_n)
\\
= (-1)^{l_i+l_{i+1}}
\sum_{L'} (2L'+1)
\begin{Bmatrix}
l_i & L_i & L_{i+1} \\
l_{i+1} & L_{i+2} & L'
\end{Bmatrix}
\\ \times
c^{(n)\,l;L_2\cdots L_iL'L_{i+2}\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\cdots,k_n),
\label{eq:3-87}
\end{multline}
where $i=1,\ldots,n-2$. Combining Eqs.~(\ref{eq:3-86}) and
(\ref{eq:3-87}), all the other symmetries concerning the permutation
of $(1,\ldots,n)$ in the subscripts of the arguments are
straightforwardly obtained.
\subsection{\label{subsec:Propagators}
Propagators of tensor fields
}
The renormalized bias functions describe the bias mechanisms in
physical space, and thus there is no corresponding concept in
redshift space. However, the apparent clustering in redshift space
introduces anisotropy into the propagators in Eulerian space. In the
following, we separately consider the propagators in real space and in
redshift space.
\subsubsection{\label{subsec:PropReal}
Real space
}
In real space, where full rotational symmetry holds even in Eulerian
space, the reduced propagators $\hat{\Gamma}^{(n)}_{Xlm}$ are
decomposed into rotationally invariant coefficients in the same manner
as the renormalized bias functions of the previous subsection.
Although the description below largely parallels the formulas for the
renormalized bias functions, we write down the corresponding formulas
for the propagators in real space for convenience of later use.
The decomposition of the reduced propagator is given by
\begin{multline}
\hat{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n)
= \sum_{l_1,\ldots,l_n} \hat{\Gamma}^{(n)\,l;l_1\cdots
l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n) \\ \times
Y_{l_1m_1}^*(\hat{\bm{k}}_1)\cdots Y_{l_nm_n}^*(\hat{\bm{k}}_n),
\label{eq:3-100}
\end{multline}
and the inverse relation of the decomposition is given by
\begin{multline}
\hat{\Gamma}^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n) =
\int d^2\hat{k}_1 \cdots d^2\hat{k}_n\,
\hat{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n)
\\ \times
Y_{l_1m_1}(\hat{\bm{k}}_1)\cdots Y_{l_nm_n}(\hat{\bm{k}}_n).
\label{eq:3-101}
\end{multline}
The complex-conjugate symmetry,
Eq.~(\ref{eq:3-33}), indicates
\begin{multline}
\hat{\Gamma}^{(n)\,l;l_1\cdots l_n\,*}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
=
(-1)^{l+l_1+\cdots +l_n} g_{(l)}^{mm'}
g_{(l_1)}^{m_1m_1'} \cdots g_{(l_n)}^{m_nm_n'}
\\ \times
\hat{\Gamma}^{(n)\,l;l_1\cdots l_n}_{Xm';m_1'\cdots m_n'}(k_1,\ldots,k_n).
\label{eq:3-102}
\end{multline}
The parity transformation of Eq.~(\ref{eq:3-44}) suggests that
\begin{equation}
s_X + l + l_1 + \cdots + l_n = \mathrm{even}
\label{eq:3-102-1}
\end{equation}
in the Universe with parity symmetry.
The considerations of the symmetry properties of the renormalized
bias functions $c^{(n)}_{Xlm}$ in the previous subsection all apply to
the propagators $\hat{\Gamma}^{(n)}_{Xlm}$ in real space. Therefore,
all the equations in the previous subsection hold with the replacement
of $c^{(n)}_{Xlm}$ by $\hat{\Gamma}^{(n)}_{Xlm}$, etc. For $n=1$, we
have
\begin{equation}
\hat{\Gamma}^{(1)\,l;l_1}_{Xm;m_1}(k)
= \delta_{ll_1} g^{(l)}_{mm_1} \hat{\Gamma}^{(1)}_{Xl}(k),
\label{eq:3-103}
\end{equation}
with
\begin{equation}
\hat{\Gamma}^{(1)}_{Xl}(k)
\equiv \frac{g_{(l)}^{mm_1}}{\{l\}}
\hat{\Gamma}^{(1)\,l;l}_{Xm;m_1}(k).
\label{eq:3-104}
\end{equation}
The complex-conjugate symmetry is given by
\begin{equation}
\hat{\Gamma}^{(1)\,*}_{Xl}(k) = \hat{\Gamma}^{(1)}_{Xl}(k),
\label{eq:3-104-1}
\end{equation}
and thus the reduced function is a real function. In the Universe with
parity symmetry, Eq.~(\ref{eq:3-102-1}) suggests $s_X=0$ in order that
the first-order propagator $\hat{\Gamma}^{(1)}_{Xl}(k)$ does not
vanish. The propagator is given by
\begin{equation}
\hat{\Gamma}^{(1)}_{Xlm}(\bm{k})
= \hat{\Gamma}^{(1)}_{Xl}(k) Y_{lm}(\hat{\bm{k}}).
\label{eq:3-105}
\end{equation}
For $n=2$, we have
\begin{equation}
\hat{\Gamma}^{(2)\,l;l_1l_2}_{Xm;m_1m_2}(k_1,k_2)
= (-1)^l \sqrt{\{l\}}
\left(l\,l_1\,l_2\right)_{mm_1m_2}
\hat{\Gamma}^{(2)\,l}_{Xl_1l_2}(k_1,k_2),
\label{eq:3-106}
\end{equation}
with
\begin{equation}
\hat{\Gamma}^{(2)\,l}_{Xl_1l_2}(k_1,k_2)
\equiv
\frac{(-1)^{l_1+l_2}}{\sqrt{\{l\}}}
\left(l\,l_1\,l_2\right)^{mm_1m_2}
\hat{\Gamma}^{(2)\,l;l_1l_2}_{Xm;m_1m_2}(k_1,k_2).
\label{eq:3-107}
\end{equation}
The symmetry of complex conjugate is given by
\begin{equation}
\hat{\Gamma}^{(2)\,l\,*}_{Xl_1l_2}(k_1,k_2)
= \hat{\Gamma}^{(2)\,l}_{Xl_1l_2}(k_1,k_2),
\label{eq:3-107-1}
\end{equation}
and thus the reduced function is a real function.
The parity symmetry of Eq.~(\ref{eq:3-102-1}) in this case suggests
$s_X+l+l_1+l_2=\mathrm{even}$. The propagator is given by
\begin{equation}
\hat{\Gamma}^{(2)}_{Xlm}(\bm{k}_1,\bm{k}_2)
=
\sum_{l_1,l_2}
\hat{\Gamma}^{(2)\,l}_{Xl_1l_2}(k_1,k_2)
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes Y_{l_2}(\hat{\bm{k}}_2)
\right\}_{lm}.
\label{eq:3-108}
\end{equation}
Corresponding to Eq.~(\ref{eq:3-75-2}) of the renormalized bias
function, the interchange symmetry for the second-order propagator is
given by
\begin{equation}
\hat{\Gamma}^{(2)\,l}_{Xl_2l_1}(k_2,k_1)
= (-1)^{l+l_1+l_2}
\hat{\Gamma}^{(2)\,l}_{Xl_1l_2}(k_1,k_2).
\label{eq:3-108-1}
\end{equation}
For $n=3$, we have
\begin{multline}
\hat{\Gamma}^{(3)\,l;l_1l_2l_3}_{Xm;m_1m_2m_3}(k_1,k_2,k_3)
= (-1)^l \sqrt{\{l\}}
\sum_L (-1)^L\sqrt{\{L\}}
\\ \times
\left(l\,l_1\,L\right)_{mm_1M}
\left(L\,l_2\,l_3\right)^{M}_{\phantom{M}m_2m_3}
\hat{\Gamma}^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3),
\label{eq:3-109}
\end{multline}
with
\begin{multline}
\hat{\Gamma}^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3)
\equiv
\frac{(-1)^{l_1+l_2+l_3}}{\sqrt{\{l\}}}
(-1)^L \sqrt{\{L\}}
\left(l\,l_1\,L\right)^{mm_1}_{\phantom{mm_1}M}
\\ \times
\left(L\,l_2\,l_3\right)^{Mm_2m_3}
\hat{\Gamma}^{(3)\,l;l_1l_2l_3}_{Xm;m_1m_2m_3}(k_1,k_2,k_3).
\label{eq:3-110}
\end{multline}
The symmetry of complex conjugate is given by
\begin{equation}
\hat{\Gamma}^{(3)\,l;L\,*}_{Xl_1l_2l_3}(k_1,k_2,k_3)
= \hat{\Gamma}^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3),
\label{eq:3-110-1}
\end{equation}
and thus the reduced function is a real function.
The parity symmetry of Eq.~(\ref{eq:3-102-1}) in this case suggests
$s_X+l+l_1+l_2+l_3=\mathrm{even}$. The propagator is given by
\begin{multline}
\hat{\Gamma}^{(3)}_{Xlm}(\bm{k}_1,\bm{k}_2,\bm{k}_3)
=
\sum_{l_1,l_2,l_3,L}
\hat{\Gamma}^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3)
\\ \times
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{
Y_{l_2}(\hat{\bm{k}}_2) \otimes Y_{l_3}(\hat{\bm{k}}_3)
\right\}_L
\right\}_{lm}.
\label{eq:3-111}
\end{multline}
Corresponding to Eqs.~(\ref{eq:3-80-3}) and (\ref{eq:3-80-4}) of the
renormalized bias function, the interchange symmetries for the
third-order propagator are given by
\begin{equation}
\hat{\Gamma}^{(3)\,l;L}_{Xl_1l_3l_2}(k_1,k_3,k_2)
= (-1)^{l_2+l_3+L}
\hat{\Gamma}^{(3)\,l;L}_{Xl_1l_2l_3}(k_1,k_2,k_3),
\label{eq:3-111-1}
\end{equation}
and
\begin{multline}
\hat{\Gamma}^{(3)l;L}_{Xl_2l_1l_3}(k_2,k_1,k_3)
= (-1)^{l_1+l_2}
\sum_{L'} (2L'+1)
\begin{Bmatrix}
l_1 & l & L \\
l_2 & l_3 & L'
\end{Bmatrix}
\\ \times
\hat{\Gamma}^{(3)\,l;L'}_{Xl_1l_2l_3}(k_1,k_2,k_3).
\label{eq:3-111-2}
\end{multline}
For $n > 3$, we have
\begin{multline}
\hat{\Gamma}^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
=(-1)^l \sqrt{\{l\}}
\sum_{L_2,\ldots,L_{n-1}}
\\ \times
(-1)^{L_2+\cdots +L_{n-1}}
\sqrt{\{L_2\}\cdots\{L_{n-1}\}}
\left(l\,l_1\,L_2\right)_{mm_1M_2}
\\ \times
\left(L_2\,l_2\,L_3\right)^{M_2}_{\phantom{M_2}m_2M_3}
\cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)^{M_{n-2}}_{\phantom{M_{n-2}}m_{n-2}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)^{M_{n-1}}_{\phantom{M_{n-1}}m_{n-1}m_n}
\hat{\Gamma}^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n),
\label{eq:3-112}
\end{multline}
where we define
\begin{multline}
\hat{\Gamma}^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)
\equiv
\frac{(-1)^{l_1+\cdots +l_n}} {\sqrt{\{l\}}}
\\ \times
(-1)^{L_2+\cdots +L_{n-1}} \sqrt{\{L_2\}\cdots\{L_{n-1}\}}
\left(l\,l_1\,L_2\right)^{mm_1}_{\phantom{mm_1}M_2}
\\ \times
\left(L_2\,l_2\,L_3\right)^{M_2m_2}_{\phantom{M_2m_2}M_3}
\cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)^{M_{n-2}m_{n-2}}_{\phantom{M_{n-2}m_{n-2}}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)^{M_{n-1}m_{n-1}m_n}
\hat{\Gamma}^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n).
\label{eq:3-113}
\end{multline}
The symmetry of complex conjugate is given by
\begin{equation}
\hat{\Gamma}^{(n)\,l;L_2\cdots L_{n-1}\,*}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)
= \hat{\Gamma}^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n),
\label{eq:3-110-2}
\end{equation}
and thus the reduced function is a real function.
The parity transformation of Eq.~(\ref{eq:3-44}) suggests that
\begin{equation}
s_X + l + l_1 + \cdots + l_n = \mathrm{even}
\label{eq:3-110-3}
\end{equation}
in the Universe with parity symmetry, which is the same condition
as Eq.~(\ref{eq:3-102-1}). The propagator is given by
\begin{multline}
\hat{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\cdots,\bm{k}_n)
=
\sum_{\substack{l_1,\ldots,l_n\\L_2,\ldots,L_{n-1}}}
\hat{\Gamma}^{(n)\,l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\cdots,k_n)
\\ \times
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{
Y_{l_2}(\hat{\bm{k}}_2)\otimes
\left\{ \cdots
\otimes Y_{l_n}(\hat{\bm{k}}_n)
\right\}_{L_{n-1}} \cdots \right\}_{L_2}
\right\}_{lm}.
\label{eq:3-114}
\end{multline}
Corresponding to Eqs.~(\ref{eq:3-86}) and (\ref{eq:3-87}) of the
renormalized bias function, the interchange symmetries for the
higher-order propagators are given by
\begin{multline}
\hat{\Gamma}^{(n)l;L_2\cdots L_{n-2}L_{n-1}}_{Xl_1\cdots l_{n-2}l_nl_{n-1}}
(k_1,\cdots,k_{n-2},k_n,k_{n-1})
\\
= (-1)^{l_{n-1}+l_n+L_{n-1}}
\hat{\Gamma}^{(n)l;L_2\cdots L_{n-1}}_{Xl_1\cdots l_n}
(k_1,\cdots,k_n),
\label{eq:3-114-1}
\end{multline}
and
\begin{multline}
\hat{\Gamma}^{(n)\,l;L_2\cdots L_{i-1}L_{i+1}L_iL_{i+2}\cdots L_{n-1}}
_{Xl_1\cdots l_{i-1}l_{i+1}l_il_{i+2}\cdots l_n}
(k_1,\cdots,k_{i-1},k_{i+1},k_i,k_{i+2},\cdots,k_n)
\\
= (-1)^{l_i+l_{i+1}}
\sum_{L'} (2L'+1)
\begin{Bmatrix}
l_i & L_i & L_{i+1} \\
l_{i+1} & L_{i+2} & L'
\end{Bmatrix}
\\ \times
\hat{\Gamma}^{(n)\,l;L_2\cdots L_iL'L_{i+2}\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\cdots,k_n).
\label{eq:3-114-2}
\end{multline}
\subsubsection{\label{subsec:PropRed}
Redshift space
}
In redshift space, the propagators also depend on the direction of the
line of sight, $\hat{\bm{z}}$. In this paper, we adopt the
approximation that the direction of the line of sight is fixed in the
observed space. This approximation is usually called the
plane-parallel, or distant-observer, approximation and is known to be
a good approximation for most practical purposes. In order to keep the
rotational covariance of the theory, the line of sight $\hat{\bm{z}}$
is considered to be arbitrarily directed in the coordinate system,
i.e., not fixed to the third axis, $\hat{\bm{z}} \ne
\hat{\mathbf{e}}_3$, in general.
In this case, the dependence of the normalized propagators on the line
of sight $\hat{\bm{z}}$ is implicit in the decomposed propagators of
Eqs.~(\ref{eq:3-100}) and (\ref{eq:3-101}). We further decompose this
implicit dependence on the direction of the line of sight with the
spherical harmonics, $Y_{l_zm_z}(\hat{\bm{z}})$:
\begin{multline}
\hat{\Gamma}^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots
m_n}(k_1,\ldots,k_n;\hat{\bm{z}})
\\
=
\sum_{l_z} \sqrt{\frac{4\pi}{\{l_z\}}}\,
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xmm_z;m_1\cdots
m_n}(k_1,\ldots,k_n)
Y_{l_zm_z}^*(\hat{\bm{z}}).
\label{eq:3-120}
\end{multline}
The prefactor $\sqrt{4\pi/\{l_z\}}$ is added in the definition in
order that the expansion reduces to the Legendre expansion when
$m_z=0$, since
$Y_{l0}(\theta,\phi) = \sqrt{\{l\}/(4\pi)}\,\mathit{P}_l(\cos\theta)$.
The function $\sqrt{4\pi/\{l\}}\,Y_{lm}(\bm{n})$ is also known as the
spherical harmonics with Racah's normalization. With the above
normalization of Eq.~(\ref{eq:3-120}), the coefficient has the same
normalization in real space and redshift space when $l_z=m_z=0$.
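The $m_z=0$ reduction quoted above can be verified numerically. The
following sketch uses the sympy computer-algebra system (an
illustrative tool choice, not part of this paper) to confirm that
$Y_{l0}(\theta,\phi) = \sqrt{\{l\}/(4\pi)}\,P_l(\cos\theta)$:

```python
# Numerical check of Y_{l0} = sqrt((2l+1)/(4*pi)) * P_l(cos(theta)),
# which underlies the Racah-type prefactor sqrt(4*pi/{l_z}) in Eq. (3-120).
from sympy import Ynm, legendre, sqrt, pi, cos, symbols, N

theta, phi = symbols('theta phi', real=True)

def racah_check(l, th=0.7, ph=0.3):
    """Return |Y_l0(th, ph) - sqrt((2l+1)/(4 pi)) P_l(cos th)| numerically."""
    lhs = Ynm(l, 0, theta, phi).expand(func=True)
    rhs = sqrt((2*l + 1) / (4*pi)) * legendre(l, cos(theta))
    return abs(N((lhs - rhs).subs({theta: th, phi: ph})))

# the m_z = 0 term of the expansion then reduces to a Legendre expansion
assert all(racah_check(l) < 1e-10 for l in range(4))
```

The check is independent of $\phi$ since the $m=0$ harmonic carries no
azimuthal phase.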
Unless we consider some strange situations, physical quantities are
usually invariant under the flip of the direction of the line of
sight, $\hat{\bm{z}} \rightarrow - \hat{\bm{z}}$. In such normal
cases, the non-negative integer $l_z$ should take even numbers, as
immediately seen from an identity of the spherical harmonics,
$Y_{l_zm_z}(-\hat{\bm{z}}) = (-1)^{l_z}Y_{l_zm_z}(\hat{\bm{z}})$.
Below we do not exclude the possibility that $l_z$ takes odd values,
just for generality.
The above decomposition is equivalent to
\begin{multline}
\hat{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n;\hat{\bm{z}})
\\
=
\sum_{l_z,l_1,\ldots,l_n}
\sqrt{\frac{4\pi}{\{l_z\}}}\,
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xmm_z;m_1\cdots
m_n}(k_1,\ldots,k_n)
\\ \times
Y_{l_zm_z}^*(\hat{\bm{z}})
Y_{l_1m_1}^*(\hat{\bm{k}}_1)\cdots Y_{l_nm_n}^*(\hat{\bm{k}}_n).
\label{eq:3-121}
\end{multline}
The inverse relation is given by
\begin{multline}
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xmm_z;m_1\cdots
m_n}(k_1,\ldots,k_n)
\\
=
\sqrt{\frac{\{l_z\}}{4\pi}}
\int d^2\hat{z}\,d^2\hat{k}_1 \cdots d^2\hat{k}_n\,
\hat{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n;\hat{\bm{z}})
\\ \times
Y_{l_zm_z}(\hat{\bm{z}})
Y_{l_1m_1}(\hat{\bm{k}}_1)\cdots Y_{l_nm_n}(\hat{\bm{k}}_n).
\label{eq:3-122}
\end{multline}
The complex conjugate of the coefficient is given by
\begin{multline}
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n\,*}_{Xmm_z;m_1\cdots m_n}(k_1,\ldots,k_n)
=
(-1)^{l+l_1+\cdots +l_n} g_{(l)}^{mm'} g_{(l_z)}^{m_zm_z'}
\\ \times
g_{(l_1)}^{m_1m_1'} \cdots g_{(l_n)}^{m_nm_n'}
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xm'm_z';m_1'\cdots m_n'}
(k_1,\ldots,k_n).
\label{eq:3-123}
\end{multline}
The parity transformation of Eq.~(\ref{eq:3-45}) suggests that
\begin{equation}
s_X + l + l_z + l_1 + \cdots + l_n = \mathrm{even}
\label{eq:3-123-1}
\end{equation}
in the Universe with parity symmetry. In normal situations, $l_z$ is
an even number and the above condition is the same as that of
Eq.~(\ref{eq:3-110-3}) in real space.
The transformation under the
passive rotation of Eq.~(\ref{eq:2-14}) is given by
\begin{multline}
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xmm_z;m_1\cdots m_n}(k_1,\ldots,k_n)
\rightarrow
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xm'm_z';m_1'\cdots m_n'}
(k_1,\ldots,k_n)
\\ \times
D_{(l)m}^{m'}(R)
D_{(l_z)m_z}^{m_z'}(R)
D_{(l_1)m_1}^{m_1'}(R) \cdots
D_{(l_n)m_n}^{m_n'}(R).
\label{eq:3-124}
\end{multline}
When we rotate the direction of the line of sight together with
the coordinate system, the propagators should be invariant under the
rotation of the coordinate system in the Universe which satisfies the
cosmological principle. Thus we have
\begin{multline}
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xmm_z;m_1\cdots m_n}(k_1,\ldots,k_n)
= \hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xm'm_z';m_1'\cdots m_n'}(k_1,\ldots,k_n)
\\ \times
\frac{1}{8\pi^2} \int [dR]
D_{(l)m}^{m'}(R)
D_{(l_z)m_z}^{m_z'}(R)
D_{(l_1)m_1}^{m_1'}(R) \cdots D_{(l_n)m_n}^{m_n'}(R).
\label{eq:3-125}
\end{multline}
Following the same procedure as below Eq.~(\ref{eq:3-57}) in the case
of the renormalized bias functions, one can represent the dependence
of propagators in redshift space by the spherical metric, $3j$-symbols,
and invariant variables. One of the main differences is that an
angular dependence on the line of sight additionally appears in the
expansion.
For $n=1$, we have
\begin{equation}
\hat{\Gamma}^{(1)\,l\,l_z;l_1}_{Xmm_z;m_1}(k_1)
= (-1)^l\sqrt{\{l\}}\, \left(l\,l_z\,l_1\right)_{mm_zm_1}
\hat{\Gamma}^{(1)\,l\,l_z}_{Xl_1}(k_1),
\label{eq:3-126}
\end{equation}
with
\begin{equation}
\hat{\Gamma}^{(1)l\,l_z}_{Xl_1}(k_1)
\equiv
\frac{(-1)^{l_z+l_1}}{\sqrt{\{l\}}}
\left(l\,l_z\,l_1\right)^{mm_zm_1}
\hat{\Gamma}^{(1)\,l\,l_z;l_1}_{Xmm_z;m_1}(k_1).
\label{eq:3-127}
\end{equation}
The symmetry of complex conjugate is given by
\begin{equation}
\hat{\Gamma}^{(1)l\,l_z\,*}_{Xl_1}(k_1)
= (-1)^{l_z} \hat{\Gamma}^{(1)l\,l_z}_{Xl_1}(k_1),
\label{eq:3-127-1}
\end{equation}
and thus the reduced function is a real function for normal cases of
$l_z=\mathrm{even}$ and a pure imaginary function for strange cases of
$l_z=\mathrm{odd}$. The parity symmetry of Eq.~(\ref{eq:3-123-1}) in
this case suggests $s_X+l+l_z+l_1=\mathrm{even}$. The propagator is
given by
\begin{equation}
\hat{\Gamma}^{(1)}_{Xlm}(\bm{k}_1;\hat{\bm{z}})
= \sum_{l_z,l_1}
\sqrt{\frac{4\pi}{\{l_z\}}}\,
\hat{\Gamma}^{(1)\,l\,l_z}_{Xl_1}(k_1)
\left\{
Y_{l_z}(\hat{\bm{z}})\otimes Y_{l_1}(\hat{\bm{k}}_1)
\right\}_{lm}.
\label{eq:3-128}
\end{equation}
For $n=2$, we have
\begin{multline}
\hat{\Gamma}^{(2)\,l\,l_z;l_1l_2}_{Xmm_z;m_1m_2}(k_1,k_2)
= (-1)^l\sqrt{\{l\}}
\sum_L (-1)^L \sqrt{\{L\}}
\left(l\,l_z\,L\right)_{mm_zM}
\\ \times
\left(L\,l_1\,l_2\right)^M_{\phantom{M}m_1m_2}
\hat{\Gamma}^{(2)\,l\,l_z;L}_{Xl_1l_2}(k_1,k_2),
\label{eq:3-129}
\end{multline}
with
\begin{multline}
\hat{\Gamma}^{(2)\,l\,l_z;L}_{Xl_1l_2}(k_1,k_2)
\equiv
\frac{(-1)^{l_z+l_1+l_2}}{\sqrt{\{l\}}}
(-1)^L \sqrt{\{L\}}
\left(l\,l_z\,L\right)^{mm_z}_{\phantom{mm_z}M}
\\ \times
\left(L\,l_1\,l_2\right)^{Mm_1m_2}
\hat{\Gamma}^{(2)\,l\,l_z;l_1l_2}_{Xmm_z;m_1m_2}(k_1,k_2).
\label{eq:3-130}
\end{multline}
The symmetry of complex conjugate is given by
\begin{equation}
\hat{\Gamma}^{(2)\,l\,l_z;L\,*}_{Xl_1l_2}(k_1,k_2)
= (-1)^{l_z} \hat{\Gamma}^{(2)\,l\,l_z;L}_{Xl_1l_2}(k_1,k_2),
\label{eq:3-130-1}
\end{equation}
and thus the reduced function is a real function for normal cases of
$l_z=\mathrm{even}$ and a pure imaginary function for strange cases of
$l_z=\mathrm{odd}$. The parity symmetry of Eq.~(\ref{eq:3-123-1}) in
this case suggests $s_X+l+l_z+l_1+l_2=\mathrm{even}$. The propagator
is given by
\begin{multline}
\hat{\Gamma}^{(2)}_{Xlm}(\bm{k}_1,\bm{k}_2;\hat{\bm{z}})
=
\sum_{l_z,l_1,l_2,L}
\sqrt{\frac{4\pi}{\{l_z\}}}\,
\hat{\Gamma}^{(2)\,l\,l_z;L}_{Xl_1l_2}(k_1,k_2)
\\ \times
\left\{
Y_{l_z}(\hat{\bm{z}})\otimes
\left\{
Y_{l_1}(\hat{\bm{k}}_1) \otimes Y_{l_2}(\hat{\bm{k}}_2)
\right\}_L
\right\}_{lm}.
\label{eq:3-131}
\end{multline}
Due to the symmetry of the $3j$-symbol,
$(L\,l_1\,l_2)_{Mm_1m_2} = (-1)^{L+l_1+l_2}(L\,l_2\,l_1)_{Mm_2m_1}$,
the invariant propagator of the second order in redshift space
satisfies an interchange symmetry,
\begin{equation}
\hat{\Gamma}^{(2)\,l\,l_z;L}_{Xl_2l_1}(k_2,k_1)
= (-1)^{l_1+l_2+L}
\hat{\Gamma}^{(2)\,l\,l_z;L}_{Xl_1l_2}(k_1,k_2).
\label{eq:3-131-1}
\end{equation}
For $n \geq 3$, we have
\begin{multline}
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xmm_z;m_1\cdots m_n}(k_1,\ldots,k_n)
= (-1)^l\sqrt{\{l\}}
\\ \times
\sum_{L_1,\ldots,L_{n-1}}
(-1)^{L_1+\cdots +L_{n-1}}
\sqrt{\{L_1\}\cdots\{L_{n-1}\}}
\left(l\,l_z\,L_1\right)_{mm_zM_1}
\\ \times
\left(L_1\,l_1\,L_2\right)^{M_1}_{\phantom{M_1}m_1M_2}
\cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)^{M_{n-2}}_{\phantom{M_{n-2}}m_{n-2}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)^{M_{n-1}}_{\phantom{M_{n-1}}m_{n-1}m_n}
\hat{\Gamma}^{(n)\,l\,l_z;L_1\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n),
\label{eq:3-132}
\end{multline}
with
\begin{multline}
\hat{\Gamma}^{(n)\,l\,l_z;L_1\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)
\equiv
\frac{(-1)^{l_z+l_1+\cdots +l_n}}{\sqrt{\{l\}}}
\\ \times
(-1)^{L_1+\cdots +L_{n-1}} \sqrt{\{L_1\}\cdots\{L_{n-1}\}}
\left(l\,l_z\,L_1\right)^{mm_z}_{\phantom{mm_z}M_1}
\\ \times
\left(L_1\,l_1\,L_2\right)^{M_1m_1}_{\phantom{M_1m_1}M_2}
\cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)^{M_{n-2}m_{n-2}}_{\phantom{M_{n-2}m_{n-2}}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)^{M_{n-1}m_{n-1}m_n}
\hat{\Gamma}^{(n)\,l\,l_z;l_1\cdots l_n}_{Xmm_z;m_1\cdots m_n}(k_1,\ldots,k_n).
\label{eq:3-133}
\end{multline}
The symmetry of complex conjugate is given by
\begin{equation}
\hat{\Gamma}^{(n)\,l\,l_z;L_1\cdots L_{n-1}\,*}_{Xl_1\cdots l_n}(k_1,\ldots,k_n)
= (-1)^{l_z}
\hat{\Gamma}^{(n)\,l\,l_z;L_1\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\ldots,k_n),
\label{eq:3-133-1}
\end{equation}
and thus the reduced function is a real function for normal cases of
$l_z=\mathrm{even}$ and a pure imaginary function for strange cases of
$l_z=\mathrm{odd}$. The parity symmetry of Eq.~(\ref{eq:3-123-1}) in
this case suggests $s_X+l+l_z+l_1+\cdots +l_n=\mathrm{even}$. The
propagator is given by
\begin{multline}
\hat{\Gamma}^{(n)}_{Xlm}(\bm{k}_1,\cdots,\bm{k}_n;\hat{\bm{z}})
\\
=
\sum_{\substack{l_z,l_1,\ldots,l_n\\L_1,\ldots,L_{n-1}}}
\sqrt{\frac{4\pi}{\{l_z\}}}\,
\hat{\Gamma}^{(n)\,l\,l_z;L_1\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\cdots,k_n)
\\ \times
\left\{
Y_{l_z}(\hat{\bm{z}})\otimes
\left\{
Y_{l_1}(\hat{\bm{k}}_1)\otimes
\left\{ \cdots
\otimes Y_{l_n}(\hat{\bm{k}}_n)
\right\}_{L_{n-1}} \cdots \right\}_{L_1}
\right\}_{lm}.
\label{eq:3-134}
\end{multline}
Corresponding to Eqs.~(\ref{eq:3-86}) and (\ref{eq:3-87}) of the
renormalized bias function, the interchange symmetries for the
higher-order propagators are given by
\begin{multline}
\hat{\Gamma}^{(n)l\,l_z;L_1\cdots L_{n-2}L_{n-1}}_{Xl_1\cdots l_{n-2}l_nl_{n-1}}
(k_1,\cdots,k_{n-2},k_n,k_{n-1})
\\
= (-1)^{l_{n-1}+l_n+L_{n-1}}
\hat{\Gamma}^{(n)l\,l_z;L_1\cdots L_{n-1}}_{Xl_1\cdots l_n}
(k_1,\cdots,k_n),
\label{eq:3-134-1}
\end{multline}
and
\begin{multline}
\hat{\Gamma}^{(n)\,l\,l_z;L_1\cdots L_{i-1}L_{i+1}L_iL_{i+2}\cdots L_{n-1}}
_{Xl_1\cdots l_{i-1}l_{i+1}l_il_{i+2}\cdots l_n}
(k_1,\cdots,k_{i-1},k_{i+1},k_i,k_{i+2},\cdots,k_n)
\\
= (-1)^{l_i+l_{i+1}}
\sum_{L'} (2L'+1)
\begin{Bmatrix}
l_i & L_i & L_{i+1} \\
l_{i+1} & L_{i+2} & L'
\end{Bmatrix}
\\ \times
\hat{\Gamma}^{(n)\,l\,l_z;L_1\cdots L_iL'L_{i+2}\cdots L_{n-1}}_{Xl_1\cdots l_n}(k_1,\cdots,k_n).
\label{eq:3-134-2}
\end{multline}
\subsection{Sample calculations of decomposed propagators
\label{subsec:LowProp}
}
One can straightforwardly derive explicit forms of reduced propagators
with the formal decompositions as explained above. A simple way to
derive the invariant coefficients is to represent the angular
dependence of the propagators in terms of the spherical harmonics and
polypolar spherical harmonics, and read off the coefficients from the
resulting expressions.
The simplest example is the tree-level approximation of the
first-order propagator, Eq.~(\ref{eq:3-21}):
\begin{equation}
\hat{\Gamma}_{Xlm}^{(1)}(\bm{k})
= c_{Xlm}^{(1)}(\bm{k}) +
\left[\bm{k}\cdot\bm{L}_1(\bm{k})\right] c_{Xlm}^{(0)}.
\label{eq:3-200}
\end{equation}
In real space, $\bm{L}_1(\bm{k})$ is given by Eq.~(\ref{eq:3-24a}),
and we have $\bm{k}\cdot\bm{L}_1(\bm{k}) = 1$. The relevant
renormalized bias functions are represented by invariant quantities
through Eqs.~(\ref{eq:3-67}) and (\ref{eq:3-70-1}). Therefore the
propagator of Eq.~(\ref{eq:3-200}) is represented in the form of an
expansion in spherical harmonics as
\begin{equation}
\hat{\Gamma}_{Xlm}^{(1)}(\bm{k}) =
\left[
c^{(1)}_{Xl}(k) + \delta_{l0} c^{(0)}_X
\right]
Y_{lm}(\hat{\bm{k}}).
\label{eq:3-202}
\end{equation}
Comparing this expression with Eq.~(\ref{eq:3-105}), we have
\begin{equation}
\hat{\Gamma}^{(1)}_{Xl}(k) =
c^{(1)}_{Xl}(k) + \delta_{l0} c^{(0)}_X.
\label{eq:3-202-1}
\end{equation}
The same result can also be derived directly from the definition of
the invariant coefficient, i.e., Eq.~(\ref{eq:3-104}) with
Eq.~(\ref{eq:3-101}) for $n=1$. In the Universe with parity symmetry,
the rhs of Eq.~(\ref{eq:3-202-1}) is non-zero only for normal tensors,
$s_X=0$, as discussed in Sec.~\ref{subsec:RenBiasFn}, and the lhs
consistently vanishes for $s_X \ne 0$, as discussed in
Sec.~\ref{subsec:Propagators}.
In redshift space, the tree-level approximation of the first-order
propagator is given by
\begin{equation}
\hat{\Gamma}_{Xlm}^{(1)}(\bm{k})
= c_{Xlm}^{(1)}(\bm{k}) +
\left[\bm{k}\cdot\bm{L}^\mathrm{s}_1(\bm{k})\right] c_{Xlm}^{(0)},
\label{eq:3-203}
\end{equation}
where the displacement kernel is replaced by the one in redshift space
as in Eq.~(\ref{eq:3-25}), and the relevant factor in the above
equation is given by
\begin{equation}
\bm{k}\cdot\bm{L}^\mathrm{s}_1(\bm{k})
= 1 + f (\hat{\bm{z}}\cdot\hat{\bm{k}})^2
= 1 + \frac{f}{3}
+ \frac{2f}{3} \mathit{P}_2(\hat{\bm{z}}\cdot\hat{\bm{k}}),
\label{eq:3-204}
\end{equation}
where $\mathit{P}_2(x)$ is the Legendre polynomial
$\mathit{P}_l(x)$ with $l=2$. The Legendre polynomial can be
decomposed into a superposition of spherical harmonics,
\begin{equation}
\mathit{P}_l(\hat{\bm{z}}\cdot\hat{\bm{k}})
= \frac{4\pi}{\{l\}}
g_{(l)}^{mm'} Y_{lm}(\hat{\bm{z}}) Y_{lm'}(\hat{\bm{k}}).
\label{eq:3-205}
\end{equation}
The invariant combination of the rhs of the above equation is
given by a special case of bipolar spherical harmonics as
\begin{equation}
\left\{Y_l(\hat{\bm{z}})\otimes Y_l(\hat{\bm{k}})\right\}_{00} =
\frac{(-1)^l}{\sqrt{\{l\}}}
g_{(l)}^{mm'} Y_{lm}(\hat{\bm{z}}) Y_{lm'}(\hat{\bm{k}}),
\label{eq:3-206}
\end{equation}
and thus we have
\begin{equation}
\mathit{P}_l(\hat{\bm{z}}\cdot\hat{\bm{k}})
= \frac{4\pi (-1)^l}{\sqrt{\{l\}}}
\left\{Y_l(\hat{\bm{z}})\otimes Y_l(\hat{\bm{k}})\right\}_{00}.
\label{eq:3-207}
\end{equation}
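Both ingredients of this step can be checked numerically: the
Legendre split of $1+f\mu^2$ in Eq.~(\ref{eq:3-204}), and the
addition theorem underlying Eq.~(\ref{eq:3-205}). A sympy sketch
(illustrative, not part of the paper):

```python
# Check of the multipole split 1 + f*mu^2 = 1 + f/3 + (2f/3)*P_2(mu),
# and of the addition theorem P_l(z.k) = (4 pi/(2l+1)) sum_m Y_lm(z) Y*_lm(k)
# behind the spherical-harmonic decomposition of the Legendre polynomial.
from sympy import Ynm, legendre, pi, cos, sin, simplify, symbols, N

f, mu = symbols('f mu', real=True)
assert simplify(1 + f*mu**2 - (1 + f/3 + (2*f/3)*legendre(2, mu))) == 0

def addition_residual(l, tz=0.4, pz=0.9, tk=1.2, pk=2.0):
    """Residual of the addition theorem at concrete angles; Y*_lm = (-1)^m Y_l,-m."""
    s = sum(Ynm(l, m, tz, pz) * (-1)**m * Ynm(l, -m, tk, pk)
            for m in range(-l, l + 1)).expand(func=True)
    cosgamma = cos(tz)*cos(tk) + sin(tz)*sin(tk)*cos(pz - pk)
    return abs(N(4*pi/(2*l + 1) * s - legendre(l, cosgamma)))

assert addition_residual(2) < 1e-8
```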
Combining Eqs.~(\ref{eq:3-203})--(\ref{eq:3-207}) above, we derive
\begin{multline}
\hat{\Gamma}^{(1)}_{Xlm}(\bm{k})
= c^{(1)}_{Xl}(k) Y_{lm}(\hat{\bm{k}})
\\
+
\frac{\delta_{l0}\delta_{m0}}{\sqrt{4\pi}} c^{(0)}_X
\left[
1 + \frac{f}{3} + \frac{8\pi}{3\sqrt{5}} f
\left\{Y_2(\hat{\bm{z}})\otimes Y_2(\hat{\bm{k}})\right\}_{00}
\right].
\label{eq:3-208}
\end{multline}
Each term of the above equation can be represented by special cases of
bipolar spherical harmonics, because we have
\begin{align}
\delta_{l0} \delta_{m0}
&= 4\pi\delta_{l0}
\left\{Y_0(\hat{\bm{z}})\otimes Y_0(\hat{\bm{k}})\right\}_{lm},
\label{eq:3-209a}\\
Y_{lm}(\hat{\bm{k}})
&= \sqrt{4\pi}
\left\{Y_0(\hat{\bm{z}})\otimes Y_l(\hat{\bm{k}})\right\}_{lm},
\label{eq:3-209b}
\end{align}
as straightforwardly shown from the definition. One can also rewrite
the bipolar spherical harmonics appearing in Eq.~(\ref{eq:3-208}) as
\begin{equation}
\delta_{l0} \delta_{m0}
\left\{Y_2(\hat{\bm{z}})\otimes Y_2(\hat{\bm{k}})\right\}_{00}
=
\delta_{l0}
\left\{Y_2(\hat{\bm{z}})\otimes Y_2(\hat{\bm{k}})\right\}_{lm}.
\label{eq:3-210}
\end{equation}
Substituting the above identities into Eq.~(\ref{eq:3-208}), one can
readily read off the coefficient of the bipolar spherical harmonics
$\{Y_2(\hat{\bm{z}})\otimes Y_2(\hat{\bm{k}})\}_{lm}$ of
Eq.~(\ref{eq:3-128}) as
\begin{multline}
\hat{\Gamma}^{(1)l\,l_z}_{Xl_1}(k) =
\delta_{l_z0}\delta_{l_1l} c^{(1)}_{Xl}(k)
\\
+ \delta_{l0} c^{(0)}_X
\left[
\left(1 + \frac{f}{3}\right)
\delta_{l_z0}\delta_{l_10} +
\frac{2f}{3} \delta_{l_z2}\delta_{l_12}
\right].
\label{eq:3-211}
\end{multline}
Putting $f=0$, $l_z=0$ and $l_1=l$ in the above equation recovers the
real-space result of Eq.~(\ref{eq:3-202-1}), as it should. In the
Universe with parity symmetry, the rhs of Eq.~(\ref{eq:3-211}) is
non-zero only when $s_X=0$, as discussed in
Sec.~\ref{subsec:RenBiasFn}, and apparently the rhs is non-zero only
for $l+l_z+l_1=\mathrm{even}$. Thus the lhs is non-zero only when
$s_X+l+l_z+l_1=\mathrm{even}$, which is consistent with the discussion
in Sec.~\ref{subsec:Propagators}.
The same result of Eq.~(\ref{eq:3-211}) can also be derived more
straightforwardly by using an orthonormal relation of the bipolar
spherical harmonics,
\begin{multline}
\int d^2\hat{z}\,d^2\hat{k}
\left\{Y_{l_z}(\hat{\bm{z}})\otimes Y_{l_1}(\hat{\bm{k}})\right\}_{lm}
\left\{Y_{l_z'}(\hat{\bm{z}})\otimes
Y_{l_1'}(\hat{\bm{k}})\right\}_{l'm'}
\\
= \delta^{\triangle}_{ll_zl_1}
\delta_{l_zl_z'} \delta_{l_1l_1'} \delta_{ll'} g^{(l)}_{mm'},
\label{eq:3-212}
\end{multline}
where $\delta^{\triangle}_{l_1l_2l_3}=1$ only when $(l_1,l_2,l_3)$
satisfies the triangle inequality, and zero otherwise.
Eq.~(\ref{eq:3-212}) can be shown from an orthonormal relation of the
spherical harmonics and an orthonormal relation of the $3j$-symbols.
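The $3j$ orthonormal relation used in this argument can be checked
directly with sympy's exact \texttt{wigner\_3j} (an illustrative
sketch, not part of the paper):

```python
# Exact check of the 3j orthogonality used to prove Eq. (3-212):
# sum_{m1,m2} (2*l3+1) (l1 l2 l3)_{m1 m2 m3} (l1 l2 l3')_{m1 m2 m3'}
#   = delta_{l3,l3'} delta_{m3,m3'}   for triangle-allowed l3, l3'.
from sympy.physics.wigner import wigner_3j

def ortho_sum(l1, l2, l3, m3, l3p, m3p):
    """Left-hand side of the 3j orthonormal relation, as an exact number."""
    return sum((2*l3 + 1) * wigner_3j(l1, l2, l3, m1, m2, m3)
                          * wigner_3j(l1, l2, l3p, m1, m2, m3p)
               for m1 in range(-l1, l1 + 1) for m2 in range(-l2, l2 + 1))

# diagonal terms give 1, off-diagonal terms vanish
assert ortho_sum(2, 1, 2, 0, 2, 0) == 1
assert ortho_sum(2, 1, 2, 0, 3, 0) == 0
assert ortho_sum(2, 1, 1, 1, 1, 1) == 1
```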
According to the orthonormal relation above, Eq.~(\ref{eq:3-128})
is inverted as
\begin{multline}
\hat{\Gamma}^{(1)l\,l_z}_{Xl_1}(k)
= \sqrt{\frac{\{l_z\}}{4\pi}}\,
\int d^2\hat{z}\,d^2\hat{k}\,
\hat{\Gamma}^{(1)}_{Xl0}(\bm{k})
\left\{Y_{l_z}(\hat{\bm{z}})\otimes Y_{l_1}(\hat{\bm{k}})\right\}_{l0}.
\label{eq:3-213}
\end{multline}
Substituting Eq.~(\ref{eq:3-208}) into the above equation, using
Eqs.~(\ref{eq:3-209a})--(\ref{eq:3-210}) with the orthonormal relation
of Eq.~(\ref{eq:3-212}), we derive the same result of
Eq.~(\ref{eq:3-211}) as well.
For another example of calculating the invariant propagators, we
consider the second-order propagator $\hat{\Gamma}^{(2)}_{Xlm}$ in
real space, Eq.~(\ref{eq:3-107}). The calculation in this case is
quite similar to the derivation of Eq.~(\ref{eq:3-211}). With the
tree-level approximation, the second-order propagator is given by
Eq.~(\ref{eq:3-22}). Applying a similar procedure as in the above
derivation using Eqs.~(\ref{eq:3-206}), (\ref{eq:3-207}) and
(\ref{eq:3-209a})--(\ref{eq:3-210}), the rhs of Eq.~(\ref{eq:3-22}) is
represented in a form of the rhs of Eq.~(\ref{eq:3-108}), and we can
read off the invariant coefficient from the resulting expression.
In the course of calculation, one needs to represent a product of
spherical harmonics of the same argument through the formula
\cite{Edmonds:1955fi},
\begin{multline}
Y_{l_1m_1}(\hat{\bm{k}}) Y_{l_2m_2}(\hat{\bm{k}})
= \sum_{l,m} \sqrt{\frac{\{l_1\}\{l_2\}\{l\}}{4\pi}}
\begin{pmatrix}
l_1 & l_2 & l \\
0 & 0 & 0
\end{pmatrix}
\\ \times
\begin{pmatrix}
l_1 & l_2 & l \\
m_1 & m_2 & m
\end{pmatrix}
Y_{lm}^*(\hat{\bm{k}}).
\label{eq:3-214}
\end{multline}
The coefficient of the spherical harmonics on the rhs is known as the
Gaunt integral. Similarly to the simplified notation for the
$3j$-symbols we introduced above, it is useful to define a simplified
symbol for the Gaunt integral,
\begin{multline}
\left[l_1\,l_2\,l_3\right]_{m_1m_2m_3}
\equiv
\int d^2\hat{k}\,
Y_{l_1m_1}(\hat{\bm{k}}) Y_{l_2m_2}(\hat{\bm{k}})
Y_{l_3m_3}(\hat{\bm{k}})
\\
=
\sqrt{\frac{\{l_1\}\{l_2\}\{l_3\}}{4\pi}}
\begin{pmatrix}
l_1 & l_2 & l_3 \\
0 & 0 & 0
\end{pmatrix}
\left(l_1\,l_2\,l_3\right)_{m_1m_2m_3}.
\label{eq:3-215}
\end{multline}
The azimuthal indices in the Gaunt integral can be raised
and lowered by the spherical metric, such as
\begin{equation}
\left[l_1\,l_2\,l_3\right]^{m_1m_2}_{\phantom{m_1m_2}m_3}
= g_{(l_1)}^{m_1m_1'} g_{(l_2)}^{m_2m_2'}
\left[l_1\,l_2\,l_3\right]_{m_1'm_2'm_3},
\label{eq:3-215-1}
\end{equation}
and so forth. The Gaunt integral of Eq.~(\ref{eq:3-215}) is non-zero
only when $l_1+l_2+l_3$ is an even number.
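As a consistency check of Eq.~(\ref{eq:3-215}), sympy's
\texttt{gaunt} function evaluates this integral of three spherical
harmonics exactly, and can be compared against the product of
$3j$-symbols (an illustrative sketch, not part of the paper):

```python
# Exact check of the Gaunt-integral formula, Eq. (3-215), and of its
# parity property: non-zero only when l1 + l2 + l3 is even.
from sympy import sqrt, pi, simplify
from sympy.physics.wigner import wigner_3j, gaunt

def gaunt_residual(l1, l2, l3, m1, m2, m3):
    """Difference between sympy's gaunt and the 3j-product formula; 0 if they agree."""
    rhs = (sqrt((2*l1 + 1)*(2*l2 + 1)*(2*l3 + 1) / (4*pi))
           * wigner_3j(l1, l2, l3, 0, 0, 0)
           * wigner_3j(l1, l2, l3, m1, m2, m3))
    return simplify(gaunt(l1, l2, l3, m1, m2, m3) - rhs)

assert gaunt_residual(2, 2, 2, 1, -1, 0) == 0
assert gaunt_residual(2, 1, 3, 1, 0, -1) == 0
assert gaunt(1, 1, 1, 0, 0, 0) == 0   # odd l1+l2+l3 vanishes
```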
With this notation, Eq.~(\ref{eq:3-214}) is simply represented by
\begin{equation}
Y_{l_1m_1}(\hat{\bm{k}})Y_{l_2m_2}(\hat{\bm{k}})
= \sum_{l} \left[l_1\,l_2\,l\right]_{m_1m_2}^{\phantom{m_1m_2}m}
Y_{lm}(\hat{\bm{k}}).
\label{eq:3-216}
\end{equation}
Following the procedures above to derive the invariant propagator of
first order, and additionally using the formula of
Eq.~(\ref{eq:3-216}), the invariant propagator of second order can be
straightforwardly derived. The result in real space is given by
\begin{multline}
\hat{\Gamma}^{(2)l}_{Xl_1l_2}(k_1,k_2) =
c^{(2)l}_{Xl_1l_2}(k_1,k_2)
\\ +
\sqrt{4\pi} c^{(1)}_{Xl}(k_1)
\left[
\delta_{l_1l}\delta_{l_20} + \frac{(-1)^l}{\sqrt{3}}\
\frac{k_1}{k_2} \delta_{l_21} \sqrt{\{l_1\}}
\begin{pmatrix}
l & l_1 & 1 \\
0 & 0 & 0
\end{pmatrix}
\right]
\\ +
\sqrt{4\pi} c^{(1)}_{Xl}(k_2)
\left[
\delta_{l_10}\delta_{l_2l} + \frac{(-1)^l}{\sqrt{3}}\
\frac{k_2}{k_1} \delta_{l_11} \sqrt{\{l_2\}}
\begin{pmatrix}
l & l_2 & 1 \\
0 & 0 & 0
\end{pmatrix}
\right]
\\
+ \sqrt{4\pi} \delta_{l0} c^{(0)}_X
\left[
\frac{34}{21}\delta_{l_10}\delta_{l_20}
- \frac{1}{\sqrt{3}}
\left( \frac{k_1}{k_2} + \frac{k_2}{k_1} \right)
\delta_{l_11} \delta_{l_21}
\right.
\\
\left.
+ \frac{8}{21\sqrt{5}} \delta_{l_12} \delta_{l_22}
\right].
\label{eq:3-217}
\end{multline}
In the Universe with parity symmetry, the terms on the rhs of
Eq.~(\ref{eq:3-217}), except for the first term, are non-zero only for
normal tensors, $s_X=0$, as discussed in Sec.~\ref{subsec:RenBiasFn},
and are apparently non-zero only for $l+l_1+l_2=\mathrm{even}$. The
first term on the rhs is non-zero only for
$s_X+l+l_1+l_2=\mathrm{even}$. The lhs is non-zero only for
$s_X+l+l_1+l_2=\mathrm{even}$, and therefore these properties are
consistent with each other. The corresponding result in redshift space
is explicitly given in Paper~II \cite{PaperII}.
Other propagators of arbitrary order, both in real space and in
redshift space, can be calculated in a similar way as illustrated
above, although the calculations become increasingly demanding at
higher orders.
\section{The spectra of tensor fields in perturbation theory
\label{sec:TensorSpectrum}
}
In this section, we consider how one can calculate the power spectrum
and higher-order spectra such as the bispectrum and so forth, for
tensor fields in general.
\subsection{The power spectrum
\label{subsec:PowerSpec}
}
\subsubsection{\label{subsubsec:InvPS}
The invariant power spectrum
}
The power spectrum $P^{(l_1l_2)}_{X_1X_2\,m_1m_2}(\bm{k})$ of the
irreducible components of a tensor field $F_{Xlm}(\bm{k})$ may be
defined by
\begin{multline}
\left\langle
F_{X_1l_1m_1}(\bm{k}_1) F_{X_2l_2m_2}(\bm{k}_2)
\right\rangle_\mathrm{c}
\\
=
(2\pi)^3\delta_\mathrm{D}^3(\bm{k}_1+\bm{k}_2)
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k}_1),
\label{eq:4-0-1}
\end{multline}
where $\langle\cdots\rangle_\mathrm{c}$ indicates the two-point
cumulant, or the connected part, and the appearance of the delta
function is due to translational symmetry. We generally consider the
cross power spectrum between two fields, $X_1$ and $X_2$, with
irreducible components $l_1$ and $l_2$, respectively. The auto power
spectrum is straightforwardly obtained by putting $X_1 = X_2 = X$.
The power spectrum $P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k})$ defined above
depends on the coordinate system, as it carries azimuthal indices and
depends on the direction of the wavevector $\bm{k}$ in its arguments.
One can also construct reduced power spectra which are invariant under
rotations, in a manner similar to the introduction of the invariant
propagators in Sec.~\ref{subsec:Propagators}.
First, we consider the power spectrum in real space. The directional
dependence on the wavevector can be expanded by spherical harmonics
as
\begin{equation}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k}) =
\sum_l P^{l_1l_2;l}_{X_1X_2m_1m_2;m}(k)\,Y^*_{lm}(\hat{\bm{k}}).
\label{eq:4-11}
\end{equation}
Due to the rotational transformations of Eqs.~(\ref{eq:3-38}) and
(\ref{eq:2-14-2}), and rotational invariance of the delta function in
the definition of the power spectrum, Eq.~(\ref{eq:4-0-1}), the
rotational transformation of the expansion coefficient is given by
\begin{equation}
P^{l_1l_2;l}_{X_1X_2m_1m_2;m}(k) \rightarrow
P^{l_1l_2;l}_{X_1X_2m_1'm_2';m'}(k)
D^{m_1'}_{(l_1)m_1}(R) D^{m_2'}_{(l_2)m_2}(R) D^{m'}_{(l)m}(R).
\label{eq:4-0-2}
\end{equation}
For the statistically isotropic Universe, the functional form of the
power spectrum should not depend on the choice of the coordinate
system. Following the same procedure as in
Sec.~\ref{subsec:RenBiasFn}, the power spectrum in the statistically
isotropic Universe is shown to have the form
\begin{multline}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k}) =
(-1)^{l_1+l_2} \sqrt{\{l_1\}\{l_2\}}
\\ \times
\sum_l (-1)^l \sqrt{\{l\}}
\left(l_1\,l_2\,l\right)_{m_1m_2}^{\phantom{m_1m_2}m}
Y_{lm}(\hat{\bm{k}}) P^{l_1l_2;l}_{X_1X_2}(k),
\label{eq:4-0-3}
\end{multline}
where
\begin{equation}
P^{l_1l_2;l}_{X_1X_2}(k) =
\frac{1}{\sqrt{\{l_1\}\{l_2\}\{l\}}}
\left(l_1\,l_2\,l\right)^{m_1m_2m}
P^{l_1l_2;l}_{X_1X_2m_1m_2;m}(k)
\label{eq:4-0-4}
\end{equation}
is the invariant power spectrum. The physical degrees of freedom of
the power spectrum are represented by this invariant spectrum,
$P^{l_1l_2;l}_{X_1X_2}(k)$. There is a finite number of independent
functions, which depend only on the modulus $k=|\bm{k}|$ of the
wavevector of Fourier modes, due to the triangle inequality of the
$3j$-symbol, $|l_1-l_2| \leq l \leq l_1+l_2$ for a fixed set of
integers $(l_1,l_2)$. For example, when $l_1=l_2=2$, only
$l=0,1,\ldots, 4$ are allowed and there are five independent
components in the power spectrum\footnote{In Ref.~\cite{Vlah:2019byq},
the power spectrum $P_{22}^{(q)}(k)$ of intrinsic alignment is
defined in a coordinate-dependent way, where $q=0,\pm 1,\pm2$. The
degree of freedom of their power spectrum is five and is consistent
with ours.}.
According to Eqs.~(\ref{eq:3-32}), (\ref{eq:4-0-1}) and the symmetry
of the $3j$-symbol, Eq.~(\ref{eq:c-2}), the behavior under complex
conjugation is given by
\begin{equation}
P^{l_1l_2;l\,*}_{X_1X_2}(k) = P^{l_1l_2;l}_{X_1X_2}(k),
\label{eq:4-0-4-1}
\end{equation}
and thus the invariant power spectrum is a real function.
Additionally, the parity transformation of the power spectrum is given
by
\begin{equation}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k}) \rightarrow
(-1)^{s_{X_1}+s_{X_2}+l_1+l_2} P^{(l_1l_2)}_{X_1X_2m_1m_2}(-\bm{k}),
\label{eq:4-0-5}
\end{equation}
because of Eqs.~(\ref{eq:3-42}) and (\ref{eq:4-0-1}). Accordingly, the
parity transformation of the invariant power spectrum is given by
\begin{equation}
P^{l_1l_2;l}_{X_1X_2}(k) \rightarrow
(-1)^{s_{X_1}+s_{X_2}+l_1+l_2+l} P^{l_1l_2;l}_{X_1X_2}(k).
\label{eq:4-0-6}
\end{equation}
In a Universe with parity symmetry, the power spectrum is
invariant under the parity transformation, and we thus have
\begin{equation}
s_{X_1} + s_{X_2} + l_1 + l_2 + l = \mathrm{even}.
\label{eq:4-0-7}
\end{equation}
Together with the triangle inequality,
$|l_1-l_2| \leq l \leq l_1+l_2$, mentioned above, the number of
independent components in the invariant power spectrum
$P^{l_1l_2;l}_{X_1X_2}(k)$ is further reduced in the presence of
parity symmetry. For example, when $l_1=l_2=2$ and
$s_{X_1}=s_{X_2}=0$, only $l=0,2,4$ are allowed and there are three
independent components in the power spectrum\footnote{In
  Ref.~\cite{Vlah:2019byq}, their power spectrum of intrinsic
  alignment satisfies $P_{22}^{(q)}(k)=P_{22}^{(-q)}(k)$ in the
  presence of parity symmetry, and thus only three components,
  $q=0,1,2$, are independent. Again, their number of degrees of
  freedom is consistent with ours.}. The power spectrum in real space
has an interchange symmetry,
\begin{equation}
P^{(l_2l_1)}_{X_2X_1m_2m_1}(\bm{k})
= P^{(l_1l_2)}_{X_1X_2m_1m_2}(-\bm{k}),
\label{eq:4-0-7-1}
\end{equation}
and the corresponding symmetry for the invariant spectrum is given by
\begin{equation}
P^{l_2l_1;l}_{X_2X_1}(k) =
(-1)^{l_1+l_2} P^{l_1l_2;l}_{X_1X_2}(k).
\label{eq:4-0-7-2}
\end{equation}
Next, we consider the power spectrum in redshift space. The
dependence on the directions of the wavevector and the line of sight
can be expanded in spherical harmonics as
\begin{equation}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k};\hat{\bm{z}}) =
\sum_{l,l_z} \sqrt{\frac{4\pi}{\{l_z\}}}
P^{l_1l_2;l\,l_z}_{X_1X_2m_1m_2;mm_z}(k)\,Y^*_{l_zm_z}(\hat{\bm{z}})
Y^*_{lm}(\hat{\bm{k}}).
\label{eq:4-0-8}
\end{equation}
Following considerations of rotational symmetry similar to those used
elsewhere in this paper, e.g., in deriving Eq.~(\ref{eq:3-128}), the
power spectrum in a statistically isotropic Universe is shown to have
the form
\begin{multline}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k};\hat{\bm{z}}) =
(-1)^{l_1+l_2} \sqrt{\{l_1\}\{l_2\}}
\sum_L
\left(l_1\,l_2\,L\right)_{m_1m_2}^{\phantom{m_1m_2}M}
\\ \times
\sum_{l,l_z} (-1)^{l+l_z} \sqrt{4\pi\{l\}}
\left\{Y_{l_z}(\hat{\bm{z}}) \otimes Y_l(\hat{\bm{k}}) \right\}_{LM}
P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k),
\label{eq:4-0-9}
\end{multline}
where
\begin{multline}
P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k) =
\frac{(-1)^L \sqrt{\{L\}}}{\sqrt{\{l_1\}\{l_2\}{\{l\}\{l_z\}}}}
\left(l_1\,l_2\,L\right)^{m_1m_2}_{\phantom{m_1m_2}M}
\left(L\,l\,l_z\right)^{Mmm_z}
\\ \times
P^{l_1l_2;l\,l_z}_{X_1X_2m_1m_2;mm_z}(k)
\label{eq:4-0-10}
\end{multline}
is the invariant power spectrum.
According to Eqs.~(\ref{eq:3-32}), (\ref{eq:4-0-1}) and the symmetry
of the $3j$-symbol, Eq.~(\ref{eq:c-2}), the behavior under complex
conjugation is given by
\begin{equation}
P^{l_1l_2;l\,l_z;L\,*}_{X_1X_2}(k) = (-1)^{l_z}P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k),
\label{eq:4-0-10-1}
\end{equation}
and thus the invariant power spectrum is a real function for the
normal cases of $l_z=\mathrm{even}$ and a purely imaginary function
for the strange cases of $l_z=\mathrm{odd}$. Additionally, the parity
transformation of the power spectrum is given by
\begin{equation}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k};\hat{\bm{z}}) \rightarrow
(-1)^{s_{X_1}+s_{X_2}+l_1+l_2}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(-\bm{k};-\hat{\bm{z}}),
\label{eq:4-0-11}
\end{equation}
because of Eqs.~(\ref{eq:3-42}) and (\ref{eq:4-0-1}). Accordingly, the
transformation of the invariant power spectrum is given by
\begin{equation}
P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k) \rightarrow
(-1)^{s_{X_1}+s_{X_2}+l_1+l_2+l+l_z} P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k).
\label{eq:4-0-12}
\end{equation}
In a Universe with parity symmetry, the power spectrum in
redshift space is invariant under the parity transformation, and we
thus have
\begin{equation}
s_{X_1} + s_{X_2} + l_1 + l_2 + l + l_z = \mathrm{even}.
\label{eq:4-0-13}
\end{equation}
The power spectrum in redshift space has an interchange symmetry,
\begin{equation}
P^{(l_2l_1)}_{X_2X_1m_2m_1}(\bm{k};\hat{\bm{z}})
= P^{(l_1l_2)}_{X_1X_2m_1m_2}(-\bm{k};\hat{\bm{z}}),
\label{eq:4-0-14}
\end{equation}
and the corresponding symmetry for the invariant spectrum is given by
\begin{equation}
P^{l_2l_1;l\,l_z;L}_{X_2X_1}(k) =
(-1)^{l_1+l_2+l+L} P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k).
\label{eq:4-0-15}
\end{equation}
\subsubsection{The linear power spectra in real space and redshift space
\label{subsubsec:LinearPS}
}
We consider the power spectrum in the lowest-order approximation,
i.e., the linear power spectrum, which is the simplest application of
our formalism. The formal expression for the power spectrum of scalar
fields in the propagator formalism with Gaussian initial conditions
\cite{Matsubara:1995wd,Bernardeau:2008fa,Matsubara:2011ck} is
straightforwardly generalized to the case of tensor fields. At the
lowest order, we simply have
\begin{equation}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k})
= \Pi^2(\bm{k})
\hat{\Gamma}^{(1)}_{X_1l_1m_1}(\bm{k})
\hat{\Gamma}^{(1)}_{X_2l_2m_2}(-\bm{k})
P_\mathrm{L}(k),
\label{eq:4-2}
\end{equation}
where the parity symmetry of the vertex resummation factor,
$\Pi(-\bm{k})=\Pi(\bm{k})$, is taken into account.
The vertex resummation factor $\Pi(\bm{k})$ is given by
Eq.~(\ref{eq:3-20}). At the lowest order of perturbation theory,
this factor can be approximated by $\Pi(\bm{k})=1$. The factor
strongly damps the power spectrum on small scales, where linear
theory does not apply. However, it is sometimes useful to keep this
exponential factor even in linear theory, especially in redshift
space, where peculiar velocities damp the power on small scales. When
the resummation factor is retained, its linear approximation is
derived from Eqs.~(\ref{eq:3-20}), (\ref{eq:3-24a}) and
(\ref{eq:3-25}), and we have
\cite{Matsubara:2007wj,Matsubara:2011ck,Matsubara:2013ofa}
\begin{equation}
\Pi(k) =
\exp\left[
-\frac{k^2}{12\pi^2}
\int dp P_\mathrm{L}(p)
\right],
\label{eq:4-3-0}
\end{equation}
in real space, and
\begin{equation}
\Pi(k,\mu) =
\exp\left\{
-\frac{k^2}{12\pi^2}
\left[ 1 + f(f+2)\mu^2 \right]
\int dp P_\mathrm{L}(p)
\right\},
\label{eq:4-3}
\end{equation}
in redshift space, where
\begin{equation}
\mu \equiv \hat{\bm{z}}\cdot\hat{\bm{k}}
\label{eq:4-3-1}
\end{equation}
is the direction cosine between the wavevector and the line of sight.
Setting $f=0$ in the redshift-space expression recovers the
real-space expression, as expected.
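As a rough numerical illustration of this damping, Eq.~(\ref{eq:4-3})
can be evaluated by direct quadrature. The sketch below uses a toy
power-law-with-cutoff spectrum for $P_\mathrm{L}$ (an assumption for
illustration only, in arbitrary units):

```python
import numpy as np

def pi_factor(k, mu=0.0, f=0.0, p=None, P_L=None):
    """Pi(k, mu) = exp{ -k^2/(12 pi^2) [1 + f(f+2) mu^2] Int dp P_L(p) }.

    Setting f = 0 recovers the real-space factor.
    """
    integral = np.sum(0.5 * (P_L[1:] + P_L[:-1]) * np.diff(p))  # trapezoid rule
    return np.exp(-k**2 / (12 * np.pi**2)
                  * (1.0 + f * (f + 2.0) * mu**2) * integral)

# toy linear spectrum (assumption): P_L(p) = p exp(-p^2)
p = np.linspace(1e-4, 10.0, 4096)
P_L = p * np.exp(-p**2)

print(pi_factor(0.01, p=p, P_L=P_L))                # ~1: no damping on large scales
print(pi_factor(1.0, mu=1.0, f=0.5, p=p, P_L=P_L))  # < 1: damped along the line of sight
```

With a realistic $P_\mathrm{L}$ the integral sets the physical damping
scale; the qualitative behavior (no damping as $k\rightarrow 0$,
stronger damping along the line of sight for $f>0$) is the same.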
In real space, the first-order propagator in terms of the invariant
propagator is given by Eq.~(\ref{eq:3-105}). Thus Eq.~(\ref{eq:4-2})
reduces to
\begin{multline}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k})
= \Pi^2(k)\, (-1)^{l_2}\,
\hat{\Gamma}^{(1)}_{X_1l_1}(k)
\hat{\Gamma}^{(1)}_{X_2l_2}(k)
P_\mathrm{L}(k)
\\ \times
\sum_l \left[l_1\,l_2\,l\right]_{m_1m_2}^{\phantom{m_1m_2}m}
Y_{lm}(\hat{\bm{k}}).
\label{eq:4-4}
\end{multline}
The invariant power spectrum is easily read off from the above
expression and Eq.~(\ref{eq:4-0-3}), and we have
\begin{equation}
P^{l_1l_2;l}_{X_1X_2}(k)
= \frac{(-1)^{l_2}}{\sqrt{4\pi}}
\begin{pmatrix}
l_1 & l_2 & l \\
0 & 0 & 0
\end{pmatrix}
\Pi^2(k)
\hat{\Gamma}^{(1)}_{X_1l_1}(k)
\hat{\Gamma}^{(1)}_{X_2l_2}(k)
P_\mathrm{L}(k).
\label{eq:4-4-1}
\end{equation}
Further substituting the lowest-order approximation of the first-order
propagator, Eq.~(\ref{eq:3-202-1}), the above equation is explicitly
given by
\begin{multline}
P^{l_1l_2;l}_{X_1X_2}(k) =
\frac{\Pi^2(k)}{\sqrt{4\pi}} P_\mathrm{L}(k)
\\ \times
\Biggl\{
(-1)^{l_2}
\begin{pmatrix}
l_1 & l_2 & l \\ 0 & 0 & 0
\end{pmatrix}
c^{(1)}_{X_1l_1}(k) c^{(1)}_{X_2l_2}(k)
+ \delta_{l0} \delta_{l_10} \delta_{l_20}
c^{(0)}_{X_1} c^{(0)}_{X_2}
\\
+ \frac{1}{\sqrt{\{l\}}}
\Biggl[
(-1)^l \delta_{l_1l} \delta_{l_20}
c^{(1)}_{X_1l_1}(k) c^{(0)}_{X_2}
+ \delta_{l_10} \delta_{l_2l}
c^{(0)}_{X_1} c^{(1)}_{X_2l_2}(k)
\Biggr]
\Biggr\}.
\label{eq:4-4-2}
\end{multline}
This is one of the generic predictions of our theory.
As a consistency check, we consider the scalar case, $l_1=l_2=0$, of
the auto power spectrum with $X_1=X_2=X$. In this case,
Eq.~(\ref{eq:4-4-2}) reduces to
\begin{equation}
P^{00;0}_{XX}(k) =
\frac{\Pi^2(k)}{\sqrt{4\pi}} \left[b_X(k)\right]^2
P_\mathrm{L}(k),
\label{eq:4-5}
\end{equation}
where
\begin{equation}
b_X(k) \equiv c^{(0)}_X + c^{(1)}_{X0}(k)
\label{eq:4-5-0}
\end{equation}
corresponds to the linear bias factor. At linear order with
$\Pi(k)=1$, the auto power spectrum of a scalarly biased field in the
well-known linear theory is given by
$P_X(k) = [b_X(k)]^2 P_\mathrm{L}(k)$. Due to the normalization of the
spherical basis, $\mathsf{Y}^{(0)}=(4\pi)^{-1/2}$, the scalar power
spectrum in the spherical basis is given by
$P_{X}(k) = 4\pi P^{(00)}_{XX00}(\bm{k}) = \sqrt{4\pi}
P^{00;0}_{XX}(k)$, and Eq.~(\ref{eq:4-5}) is consistent with the
usual linear power spectrum of a scalarly biased field.
The Kronecker symbols in Eq.~(\ref{eq:4-4-2}) indicate that the
resulting expressions are quite different between the cases where
either field is scalar ($l_1=0$ or $l_2=0$) and where both fields are
non-scalar ($l_1\ge 1$ and $l_2\ge 1$).
When the fields $X_1$ and $X_2$ are both non-scalar fields with
$l_1\ne 0$ and $l_2 \ne 0$, Eq.~(\ref{eq:4-4-2}) simply reduces to
\begin{multline}
P^{l_1l_2;l}_{X_1X_2}(k) =
\frac{\Pi^2(k)}{\sqrt{4\pi}} P_\mathrm{L}(k)
(-1)^{l_2}
\begin{pmatrix}
l_1 & l_2 & l \\ 0 & 0 & 0
\end{pmatrix}
c^{(1)}_{X_1l_1}(k) c^{(1)}_{X_2l_2}(k),
\\
(l_1\ne 0 \mathrm{\ \ and\ \ } l_2 \ne 0).
\label{eq:4-5-1}
\end{multline}
When $X_1$ is a non-scalar field and $X_2$ is a scalar field with
$l_1\ne 0$ and $l_2=0$, Eq.~(\ref{eq:4-4-2}) reduces to
\begin{equation}
P^{l_10;l}_{X_1X_2}(k) =
\delta_{l_1l} \frac{\Pi^2(k)}{\sqrt{4\pi}} P_\mathrm{L}(k)
\frac{(-1)^{l_1}}{\sqrt{\{l_1\}}}
c^{(1)}_{X_1l_1}(k) b_{X_2}(k),
\quad
(l_1\ne 0),
\label{eq:4-5-2}
\end{equation}
where $b_X(k)$ is given by Eq.~(\ref{eq:4-5-0}). The above equation
survives only when $l=l_1$. When $X_1$ and $X_2$ are both scalar fields
with $l_1=l_2=0$, Eq.~(\ref{eq:4-4-2}) reduces to
\begin{equation}
P^{00;l}_{X_1X_2}(k) =
\delta_{l0} \frac{\Pi^2(k)}{\sqrt{4\pi}}
P_\mathrm{L}(k) b_{X_1}(k) b_{X_2}(k),
\label{eq:4-5-3}
\end{equation}
which survives only when $l=0$. The last equation coincides with
Eq.~(\ref{eq:4-5}) when $X_1=X_2=X$ and $l=0$.
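The selection rule $l=l_1$ noted above follows from the special value
$\left(l_1\,0\,l\right)_{000} = (-1)^{l_1}\delta_{l_1l}/\sqrt{\{l_1\}}$
of the $3j$-symbol, which a few lines of SymPy confirm (our own
consistency sketch):

```python
from sympy import sqrt
from sympy.physics.wigner import wigner_3j

# (l1 0 l; 0 0 0) vanishes unless l = l1, where it equals (-1)^l1 / sqrt(2 l1 + 1)
for l1 in range(5):
    for l in range(5):
        val = wigner_3j(l1, 0, l, 0, 0, 0)
        expected = (-1) ** l1 / sqrt(2 * l1 + 1) if l == l1 else 0
        assert abs(float(val - expected)) < 1e-12
```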
In redshift space, the lowest-order propagator is given by
Eq.~(\ref{eq:3-128}), which can be substituted into
Eq.~(\ref{eq:4-2}). As a result, a product of bipolar spherical
harmonics appears. Such a product reduces to a superposition of
single bipolar spherical harmonics by means of a $9j$-symbol
\cite{Khersonskii:1988krb}:
\begin{multline}
\left\{
Y_{l_{z1}}(\hat{\bm{z}})\otimes Y_{l_1'}(\hat{\bm{k}})
\right\}_{l_1m_1}
\left\{
Y_{l_{z2}}(\hat{\bm{z}})\otimes Y_{l_2'}(\hat{\bm{k}})
\right\}_{l_2m_2}
\\
=(-1)^{l_1+l_2}
\frac{\sqrt{\{l_{z1}\}\{l_{z2}\}\{l_1'\}\{l_2'\}\{l_1\}\{l_2\}}}{4\pi}
\sum_L \sqrt{\{L\}}
\left(L\,l_1\,l_2\right)^M_{\phantom{M}m_1m_2}
\\ \times
\sum_{L_z,L'} (-1)^{L_z+L'} \sqrt{\{L_z\}\{L'\}
}
\begin{pmatrix}
l_{z1} & l_{z2} & L_z \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_1' & l_2' & L' \\
0 & 0 & 0
\end{pmatrix}
\\ \times
\begin{Bmatrix}
l_{z1} & l_{z2} & L_z \\
l_1' & l_2' & L' \\
l_1 & l_2 & L
\end{Bmatrix}
\left\{
Y_{L_z}(\hat{\bm{z}})\otimes Y_{L'}(\hat{\bm{k}})
\right\}_{LM}.
\label{eq:4-6}
\end{multline}
Thus we have
\begin{multline}
P^{(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k};\hat{\bm{z}})
= \Pi^2(\bm{k}) P_\mathrm{L}(k)
(-1)^{l_1+l_2}\sqrt{\{l_1\}\{l_2\}}
\\ \times
\sum_{l_{z1},l_{z2},l_1',l_2'} (-1)^{l_2'}
\sqrt{\{l_1'\}\{l_2'\}}
\hat{\Gamma}^{(1)l_1l_{z1}}_{X_1l_1'}(k)
\hat{\Gamma}^{(1)l_2l_{z2}}_{X_2l_2'}(k)
\\ \times
\sum_L \sqrt{\{L\}}
\left(l_1\,l_2\,L\right)_{m_1m_2}^{\phantom{m_1m_2}M}
\sum_{l_z,l} (-1)^{l_z+l} \sqrt{\{l_z\}\{l\}}
\\ \times
\begin{pmatrix}
l_{z1} & l_{z2} & l_z \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_1' & l_2' & l \\
0 & 0 & 0
\end{pmatrix}
\begin{Bmatrix}
l_{z1} & l_{z2} & l_z \\
l_1' & l_2' & l\\
l_1 & l_2 & L
\end{Bmatrix}
\\ \times
\left\{
Y_{l_z}(\hat{\bm{z}})\otimes Y_{l}(\hat{\bm{k}})
\right\}_{LM}.
\label{eq:4-7}
\end{multline}
Substitution of Eq.~(\ref{eq:3-211}) into the above gives an explicit
result. The invariant power spectrum is easily read off from the
expression of Eq.~(\ref{eq:4-0-9}), and we have
\begin{multline}
P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k,\mu) =
\frac{1}{\sqrt{4\pi}}
\Pi^2(k,\mu) P_\mathrm{L}(k)
\sqrt{\{l_z\}\{L\}}
\\ \times
\sum_{l_{z1},l_{z2},l_1',l_2'} (-1)^{l_2'}
\sqrt{\{l_1'\}\{l_2'\}}
\begin{pmatrix}
l_{z1} & l_{z2} & l_z \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_1' & l_2' & l \\
0 & 0 & 0
\end{pmatrix}
\\ \times
\begin{Bmatrix}
l_{z1} & l_{z2} & l_z \\
l_1' & l_2' & l \\
l_1 & l_2 & L
\end{Bmatrix}
\hat{\Gamma}^{(1)l_1l_{z1}}_{X_1l_1'}(k)
\hat{\Gamma}^{(1)l_2l_{z2}}_{X_2l_2'}(k).
\label{eq:4-8}
\end{multline}
This is one of the generic predictions of our theory.
As a consistency check, one can recover the result of the power
spectrum in real space, Eqs.~(\ref{eq:4-4}) and (\ref{eq:4-4-1}), by
substituting $l_{z1}=l_{z2}=0$ into Eqs.~(\ref{eq:4-7}) and
(\ref{eq:4-8}) and using identities
$\hat{\Gamma}^{(1)l0}_{Xl'}(k) =
\delta_{l'l}\hat{\Gamma}^{(1)}_{Xl}(k)$, $(00l)_{000}=\delta_{l0}$,
and a special case of the $9j$-symbol \cite{Khersonskii:1988krb},
\begin{equation}
\begin{Bmatrix}
0 & 0 & 0 \\
l_1' & l_2' & l_3' \\
l_1 & l_2 & l_3
\end{Bmatrix}
= \frac{\delta_{l_1l_1'}\delta_{l_2l_2'}\delta_{l_3l_3'}}
{\sqrt{\{l_1\}\{l_2\}\{l_3\}}}.
\label{eq:4-9}
\end{equation}
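This special case is easy to confirm with SymPy's exact `wigner_9j`
routine (a quick consistency sketch of ours; small multipoles keep the
exact evaluation fast):

```python
from sympy import sqrt
from sympy.physics.wigner import wigner_9j

# {0 0 0; l1' l2' l3'; l1 l2 l3} is diagonal in the primed/unprimed indices,
# with value 1/sqrt((2 l1 + 1)(2 l2 + 1)(2 l3 + 1))
for (a, b, c) in [(1, 1, 2), (2, 2, 4), (1, 2, 3)]:
    val = wigner_9j(0, 0, 0, a, b, c, a, b, c)
    assert abs(float(val - 1 / sqrt((2*a + 1) * (2*b + 1) * (2*c + 1)))) < 1e-12
```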
As another consistency check, one can also see whether
Eq.~(\ref{eq:4-7}) reproduces the well-known result for the scalar
case, i.e., Kaiser's formula \cite{Kaiser:1987qv}. In fact, substituting
Eq.~(\ref{eq:3-211}) into Eq.~(\ref{eq:4-7}) in the case of
$l_1=l_2=m_1=m_2=0$, $X_1=X_2=X$, and using Eqs.~(\ref{eq:c-8}) and
(\ref{eq:4-9}), we derive
\begin{multline}
P^{(00)}_{XX00}(\bm{k};\hat{\bm{z}}) =
\frac{\Pi^2(k,\mu)}{4\pi} \left[b_X(k)\right]^2P_\mathrm{L}(k)
\\ \times
\left[
\left(1 + \frac{2}{3} \beta + \frac{1}{5}\beta^2\right)
\mathit{P}_0(\mu)
+ \left(\frac{4}{3}\beta + \frac{4}{7}\beta^2\right)
\mathit{P}_2(\mu)
\right.
\\
\left.
+ \frac{8}{35}\beta^2 \mathit{P}_4(\mu)
\right],
\label{eq:4-10}
\end{multline}
where $\beta \equiv c^{(0)}_Xf/b_X(k)$ is the redshift-space
distortion parameter and $\mu = \hat{\bm{z}}\cdot\hat{\bm{k}}$ is the
direction cosine defined in Eq.~(\ref{eq:4-3-1}). Due to the
normalization of the spherical basis, we have
$P_X(\bm{k}) = 4\pi P^{(00)}_{XX00}(\bm{k};\hat{\bm{z}})$, and we
recover the well-known linear power spectrum of a scalarly biased
field in redshift space.
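The angular coefficients in Eq.~(\ref{eq:4-10}) are simply the
Legendre multipoles of the Kaiser kernel $(1+\beta\mu^2)^2$, which can
be verified with SymPy (our own consistency sketch):

```python
from sympy import symbols, integrate, legendre, Rational, expand

beta, mu = symbols("beta mu")
kernel = (1 + beta * mu**2) ** 2  # Kaiser angular kernel, b_X^2 P_L factored out

# c_l = (2l+1)/2 * Integral_{-1}^{1} dmu P_l(mu) (1 + beta mu^2)^2
coeffs = {l: expand(Rational(2 * l + 1, 2)
                    * integrate(kernel * legendre(l, mu), (mu, -1, 1)))
          for l in (0, 1, 2, 3, 4)}

assert coeffs[0] == 1 + Rational(2, 3) * beta + Rational(1, 5) * beta**2
assert coeffs[2] == Rational(4, 3) * beta + Rational(4, 7) * beta**2
assert coeffs[4] == Rational(8, 35) * beta**2
assert coeffs[1] == 0 and coeffs[3] == 0  # odd multipoles vanish
```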
In the lowest-order approximation of the first-order propagator in
redshift space, Eq.~(\ref{eq:3-211}), either $l$ or $l_z$ is zero in
each term. Therefore, at least two of the indices $l_1$, $l_2$,
$l_{z1}$, $l_{z2}$ are zero in the $9j$-symbol of Eq.~(\ref{eq:4-8}).
When two of the indices in the $9j$-symbol are zero, there are formulas
\cite{Khersonskii:1988krb},
\begin{align}
\begin{Bmatrix}
l_1 & l_2 & l_3 \\
l_4 & l_5 & l_6 \\
l_7 & 0 & 0
\end{Bmatrix}
&= \delta_{l_70}
\frac{\delta_{l_1l_4}\delta_{l_2l_5}\delta_{l_3l_6}}
{\sqrt{\{l_1\}\{l_2\}\{l_3\}}},
\label{eq:4-11a}\\
\begin{Bmatrix}
l_1 & l_2 & l_3 \\
l_4 & 0 & l_6 \\
l_7 & l_8 & 0
\end{Bmatrix}
&= (-1)^{l_1-l_2-l_3}
\frac{\delta_{l_3l_4}\delta_{l_3l_6}\delta_{l_2l_7}\delta_{l_2l_8}}
{\{l_2\}\{l_3\}}.
\label{eq:4-11b}
\end{align}
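As a spot check of the second reduction, SymPy's exact `wigner_9j` can
be evaluated at a few small multipoles (our own consistency sketch);
with the Kronecker deltas imposed, the exact values follow the pattern
$(-1)^{l_1+l_2+l_3}/[(2l_2+1)(2l_3+1)]$:

```python
from sympy.physics.wigner import wigner_9j

# With l4 = l6 = l3 and l7 = l8 = l2 imposed by the deltas, the two-zero
# 9j-symbol depends only on the phase (-1)^(l1+l2+l3) and on l2, l3
for (l1, l2, l3) in [(0, 1, 1), (2, 1, 1), (1, 1, 2), (2, 2, 2)]:
    val = wigner_9j(l1, l2, l3, l3, 0, l3, l2, l2, 0)
    expected = (-1) ** (l1 + l2 + l3) / ((2 * l2 + 1) * (2 * l3 + 1))
    assert abs(float(val) - expected) < 1e-12
```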
A $9j$-symbol in which two of the indices are zero can always be
brought into the form of the left-hand sides of the above equations by
means of the symmetry properties of the $9j$-symbol,
Eqs.~(\ref{eq:c-31})--(\ref{eq:c-33}). Thus, with the lowest-order
approximation of the first-order propagator given by
Eq.~(\ref{eq:3-211}), the summation of Eq.~(\ref{eq:4-8}) is
explicitly expanded to give
\begin{align}
&
P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k,\mu) =
\frac{\Pi^2(k,\mu)}{\sqrt{4\pi}} P_\mathrm{L}(k)
\nonumber\\
&
\quad \times
\Biggl\{
\delta_{l_z0} \delta_{lL} (-1)^{l_2}
\begin{pmatrix}
l_1 & l_2 & l \\ 0 & 0 & 0
\end{pmatrix}
c^{(1)}_{X_1l_1}(k) c^{(1)}_{X_2l_2}(k) \nonumber\\
& \hspace{3pc}
+ \left(1 + \frac{f}{3}\right)
\frac{\delta_{l_z0} \delta_{lL}}{\sqrt{\{l\}}}
\Biggl[
(-1)^l \delta_{l_1l} \delta_{l_20}
c^{(1)}_{X_1l_1}(k) c^{(0)}_{X_2} \nonumber\\
& \hspace{11pc}
+ \delta_{l_10} \delta_{l_2l}
c^{(0)}_{X_1} c^{(1)}_{X_2l_2}(k)
\Biggr] \nonumber\\
& \hspace{3pc}
+ \frac{2f}{3}
\frac{\delta_{l_z2}}{\sqrt{5}}
\Biggl[
\delta_{l_1L} \delta_{l_20}
\begin{pmatrix}
2 & l_1 & l \\ 0 & 0 & 0
\end{pmatrix}
c^{(1)}_{X_1l_1}(k) c^{(0)}_{X_2} \nonumber\\
& \hspace{7.5pc}
+ (-1)^l\delta_{l_10} \delta_{l_2L}
\begin{pmatrix}
2 & l_2 & l \\ 0 & 0 & 0
\end{pmatrix}
c^{(0)}_{X_1} c^{(1)}_{X_2l_2}(k)
\Biggl] \nonumber\\
& \hspace{3pc}
+ \delta_{L0} \delta_{l_10} \delta_{l_20} \delta_{l\,l_z}
\Biggl[
\delta_{l_z0} \left( 1 + \frac{2f}{3} + \frac{f^2}{5} \right)
\nonumber\\
& \hspace{6pc}
+ \frac{\delta_{l_z2}}{5} \left(\frac{4f}{3} + \frac{4f^2}{7} \right)
+ \frac{\delta_{l_z4}}{9} \frac{8f^2}{35}
\Biggr]
c^{(0)}_{X_1} c^{(0)}_{X_2}
\Biggr\},
\label{eq:4-12}
\end{align}
where an identity
\begin{equation}
\begin{pmatrix}
2 & 2 & l \\ 0 & 0 & 0
\end{pmatrix}^2
= \frac{1}{5} \delta_{l0}
+ \frac{2}{35} \delta_{l2}
+ \frac{2}{35} \delta_{l4}
\label{eq:4-13}
\end{equation}
is used in deriving the above result. Equation~(\ref{eq:4-12}) is one
of the generic predictions of our theory. It is non-zero only when
$l_z=0,2,4$: in linear theory, the redshift-space distortions contain
only monopole ($l_z=0$), quadrupole ($l_z=2$) and hexadecapole
($l_z=4$) components, as is well known in the scalar perturbation
theory, Eq.~(\ref{eq:4-10}), while higher-order corrections contain
higher multipoles of the redshift-space distortions.
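The $3j$ identity of Eq.~(\ref{eq:4-13}) used in the derivation above
can likewise be confirmed with SymPy's exact `wigner_3j` (a quick
consistency sketch of ours):

```python
from sympy import Rational
from sympy.physics.wigner import wigner_3j

# (2 2 l; 0 0 0)^2 = (1/5) delta_{l0} + (2/35) delta_{l2} + (2/35) delta_{l4}
expected = {0: Rational(1, 5), 2: Rational(2, 35), 4: Rational(2, 35)}
for l in range(6):
    sq = wigner_3j(2, 2, l, 0, 0, 0) ** 2
    assert abs(float(sq - expected.get(l, 0))) < 1e-12
```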
In linear theory, apart from the resummation factor $\Pi(k,\mu)$,
higher-rank tensors are not affected by the redshift-space
distortions, because the first-order propagator of
Eq.~(\ref{eq:3-211}) is not affected by them either. Thus, if neither
$l_1$ nor $l_2$ is zero, Eq.~(\ref{eq:4-12}) is simply given by
\begin{multline}
P^{l_1l_2;l\,l_z;L}_{X_1X_2}(k,\mu) =
\frac{\delta_{l_z0}\delta_{lL}}{\sqrt{4\pi}}
\Pi^2(k,\mu) P_\mathrm{L}(k)
(-1)^{l_2}
\begin{pmatrix}
l_1 & l_2 & l \\ 0 & 0 & 0
\end{pmatrix}
\\ \times
c^{(1)}_{X_1l_1}(k) c^{(1)}_{X_2l_2}(k),
\qquad (l_1 \ne 0 \mathrm{\ \ and\ \ } l_2\ne 0),
\label{eq:4-14}
\end{multline}
which survives only when $l_z=0$ and $l=L$. The above equation exactly
coincides with the result in real space, Eq.~(\ref{eq:4-5-1}). Namely,
redshift-space distortions on higher-rank tensors are nonlinear
effects. In fact, nonlinear loop corrections in the power spectrum do
introduce the redshift-space distortion effects, as explicitly shown
in Paper~II \cite{PaperII}.
As in the case of real space, the Kronecker symbols in
Eq.~(\ref{eq:4-12}) indicate that the resulting expression differs
between the cases where either field is scalar ($l_1=0$ or $l_2=0$)
and where both fields are non-scalar ($l_1\ge 1$ and $l_2\ge 1$).
When the fields $X_1$ and $X_2$ are
both non-scalar fields with $l_1\ne 0$ and $l_2 \ne 0$, the expression
reduces to Eq.~(\ref{eq:4-14}) above. When $X_1$ is a non-scalar field
and $X_2$ is a scalar field with $l_1\ne 0$ and $l_2=0$,
Eq.~(\ref{eq:4-12}) reduces to
\begin{multline}
P^{l_10;l\,l_z;L}_{X_1X_2}(k,\mu) =
\frac{\delta_{Ll_1}}{\sqrt{4\pi}}
\Pi^2(k,\mu) P_\mathrm{L}(k)
\\ \times
\Biggl\{
\delta_{l_z0} \delta_{ll_1}
\frac{(-1)^{l_1}}{\sqrt{\{l_1\}}}
c^{(1)}_{X_1l_1}(k)
\left[
c^{(1)}_{X_20}(k)
+ \left(1 + \frac{f}{3}\right) c^{(0)}_{X_2}
\right]
\\
+ \delta_{l_z2}
\frac{2f}{3}
\frac{1}{\sqrt{5}}
\begin{pmatrix}
2 & l_1 & l \\ 0 & 0 & 0
\end{pmatrix}
c^{(1)}_{X_1l_1}(k) c^{(0)}_{X_2}
\Biggr\},
\quad (l_1\ne 0),
\label{eq:4-15}
\end{multline}
which survives only when $L=l_1$. Using the linear bias factor of
Eq.~(\ref{eq:4-5-0}), the above equation is alternatively represented
by
\begin{multline}
P^{l_10;l\,l_z;L}_{X_1X_2}(k,\mu) =
\delta_{Ll_1}
\frac{\Pi^2(k,\mu)}{\sqrt{4\pi}} P_\mathrm{L}(k)
\\ \times
\left[
\delta_{l_z0} \delta_{ll_1}
\frac{(-1)^{l_1}}{\sqrt{\{l_1\}}}
c^{(1)}_{X_1l_1}(k)
b_{X_2}(k)
\left(
1 + \frac{1}{3}\beta_2
\right)
\right.
\\
\left.
+ \delta_{l_z2}
\frac{2f}{3}
\frac{1}{\sqrt{5}}
\begin{pmatrix}
2 & l_1 & l \\ 0 & 0 & 0
\end{pmatrix}
c^{(1)}_{X_1l_1}(k) c^{(0)}_{X_2}
\right],
\label{eq:4-16}
\end{multline}
where $\beta_2 \equiv c^{(0)}_{X_2}f/b_{X_2}(k)$.
When $X_1$ and $X_2$ are both scalar
fields with $l_1=l_2=0$, Eq.~(\ref{eq:4-12}) reduces to
\begin{align}
&
P^{00;l\,l_z;L}_{X_1X_2}(k,\mu) =
\frac{\delta_{L0}\delta_{ll_z}}{\sqrt{4\pi}}
\Pi^2(k,\mu) P_\mathrm{L}(k)
\nonumber\\
&
\quad \times
\Biggl\{
\delta_{l_z0}
c^{(1)}_{X_10}(k) c^{(1)}_{X_20}(k) \nonumber\\
& \hspace{2pc}
+ \left(1 + \frac{f}{3}\right)
\delta_{l_z0}
\Biggl[
c^{(1)}_{X_10}(k) c^{(0)}_{X_2}
+ c^{(0)}_{X_1} c^{(1)}_{X_20}(k)
\Biggr] \nonumber\\
& \hspace{2pc}
+ \frac{2f}{3}
\frac{\delta_{l_z2}}{5}
\Biggl[
c^{(1)}_{X_10}(k) c^{(0)}_{X_2}
+ c^{(0)}_{X_1} c^{(1)}_{X_20}(k)
\Biggl] \nonumber\\
& \hspace{2pc}
+
\Biggl[
\delta_{l_z0} \left( 1 + \frac{2f}{3} + \frac{f^2}{5} \right)
\nonumber\\
& \hspace{5pc}
+ \frac{\delta_{l_z2}}{5} \left(\frac{4f}{3} + \frac{4f^2}{7} \right)
+ \frac{\delta_{l_z4}}{9} \frac{8f^2}{35}
\Biggr]
c^{(0)}_{X_1} c^{(0)}_{X_2}
\Biggr\},
\label{eq:4-17}
\end{align}
which survives only when $L=0$ and $l=l_z$. Using the linear bias
factor of Eq.~(\ref{eq:4-5-0}), the above equation is alternatively
represented by
\begin{multline}
P^{00;l\,l_z;L}_{X_1X_2}(k,\mu) =
\delta_{L0}\delta_{ll_z}
\frac{\Pi^2(k,\mu)}{\sqrt{4\pi}}
P_\mathrm{L}(k)
b_{X_1}(k) b_{X_2}(k)
\\
\times
\Biggl\{
\delta_{l_z0}
\left[
1 + \frac{1}{3}\left(\beta_1 + \beta_2\right)
+ \frac{1}{5}\beta_1\beta_2
\right]
\\
+ \frac{\delta_{l_z2}}{5}
\left\{
\frac{2}{3}\left(\beta_1 + \beta_2\right)
+ \frac{4}{7} \beta_1 \beta_2
\right\}
+ \frac{\delta_{l_z4}}{9} \frac{8}{35} \beta_1 \beta_2
\Biggr\},
\label{eq:4-18}
\end{multline}
where $\beta_1 \equiv c^{(0)}_{X_1}f/b_{X_1}(k)$. The last form is
consistent with Eq.~(\ref{eq:4-10}).
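The coefficients in Eq.~(\ref{eq:4-18}) follow from the
Legendre-multipole expansion of the cross Kaiser kernel
$(1+\beta_1\mu^2)(1+\beta_2\mu^2)$; the extra factors $1/\{l_z\}$
belong to the invariant-spectrum normalization. A short SymPy check
(our own sketch):

```python
from sympy import symbols, integrate, legendre, Rational, expand

b1, b2, mu = symbols("beta_1 beta_2 mu")
kernel = (1 + b1 * mu**2) * (1 + b2 * mu**2)  # cross Kaiser kernel

def multipole(l):
    """(2l+1)/2 * Integral_{-1}^{1} dmu P_l(mu) * kernel."""
    return expand(Rational(2 * l + 1, 2)
                  * integrate(kernel * legendre(l, mu), (mu, -1, 1)))

assert multipole(0) == expand(1 + (b1 + b2) / 3 + b1 * b2 / 5)
assert multipole(2) == expand(Rational(2, 3) * (b1 + b2) + Rational(4, 7) * b1 * b2)
assert multipole(4) == expand(Rational(8, 35) * b1 * b2)
```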
\subsection{\label{subsec:PowerSpecNG} Scale-dependent bias in the
power spectrum with non-Gaussian initial conditions }
In the presence of bias, the local-type non-Gaussianity is known to
produce the scale-dependent bias in the power spectrum on very large
scales \cite{Dalal:2007cu}. The same effect appears even in the shape
correlations \cite{Schmidt:2015xka,Akitsu:2020jvx}, which is
considered to be a second-rank ($l=2$) tensor field in our context.
Higher moments of the shape correlations are also investigated
\cite{Kogai:2020vzz}.
In our formalism, these previous findings are elegantly reproduced,
as shown below. The derivation of the scale-dependent bias from
primordial non-Gaussianity in the iPT formalism for scalar fields is
already given in Ref.~\cite{Matsubara:2012nc}. We simply generalize
that method to the case of higher-rank tensor fields. We consider
clustering in real space below; the generalization to redshift space
is straightforward.
The primordial bispectrum $B_\mathrm{L}(\bm{k}_1,\bm{k}_2,\bm{k}_3)$
is defined by the three-point correlation of the linear density
contrast in Fourier space,
\begin{equation}
\left\langle
\delta_\mathrm{L}(\bm{k}_1)
\delta_\mathrm{L}(\bm{k}_2)
\delta_\mathrm{L}(\bm{k}_3)
\right\rangle_\mathrm{c} =
(2\pi)^3 \delta_\mathrm{D}^3(\bm{k}_1+\bm{k}_2+\bm{k}_3)
B_\mathrm{L}(\bm{k}_1,\bm{k}_2,\bm{k}_3),
\label{eq:4-20}
\end{equation}
where $\langle\cdots\rangle_\mathrm{c}$ denotes the three-point
cumulant, or the connected part. Applying a straightforward
generalization of the corresponding result of iPT
\cite{Matsubara:2012nc} to tensor fields, the lowest-order
contribution of the primordial bispectrum to the power spectrum is
given by
\begin{multline}
P^{\mathrm{NG}(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k})
= \frac{1}{2} \Gamma^{(1)}_{X_1l_1m_1}(-\bm{k})
\\ \times
\int \frac{d^3p}{(2\pi)^3}
\Gamma^{(2)}_{X_2l_2m_2}(\bm{p},\bm{k}-\bm{p})
B_\mathrm{L}(\bm{k},-\bm{p},\bm{p}-\bm{k})
\\ + [(X_1l_1m_1) \leftrightarrow (X_2l_2m_2)],
\label{eq:4-21}
\end{multline}
where we implicitly assume the parity symmetry of the Universe. The
linear power spectrum with the Gaussian initial condition,
Eq.~(\ref{eq:4-2}), is added to the above in order to obtain the total
power spectrum up to the linear order of the initial power spectrum
and bispectrum. However, the Gaussian contribution does not generate
scale-dependent bias in the limit of large scales, $k\rightarrow 0$,
and thus the non-Gaussian contribution of Eq.~(\ref{eq:4-21})
dominates in that limit.
The prefactor of the integral in Eq.~(\ref{eq:4-21}) is given by
Eq.~(\ref{eq:3-202}):
\begin{equation}
\Gamma^{(1)}_{X_1l_1m_1}(\bm{k})
=
\left[
c^{(1)}_{X_1l_1}(k)
+ c^{(0)}_{X_1} \delta_{l_10}
\right]
Y_{l_1m_1}(\hat{\bm{k}}),
\label{eq:4-22}
\end{equation}
where we consider the lowest-order approximation and the resummation
factor $\Pi(k)$ is simply replaced by unity. In order to evaluate the
integral in Eq.~(\ref{eq:4-21}) with the spherical basis, one can
insert unity in the form
$\int d^3p'\, \delta_\mathrm{D}^3(\bm{p}+\bm{p}'-\bm{k}) = 1$ into the
integral, re-express the delta function as a Fourier integral, and
expand the primordial bispectrum in spherical harmonics as well. In
this way, the angular integrations are analytically performed, leaving
only the radial integrals over $p$ and $p'$.
We are interested in the scale-dependent bias from the primordial
non-Gaussianity, and instead of deriving a general expression
according to the above procedure, we directly consider a limiting case
of $k\rightarrow 0$ in Eq.~(\ref{eq:4-21}). In this case, the integral
is approximately given by
\begin{equation}
\int \frac{d^3p}{(2\pi)^3}
\Gamma^{(2)}_{X_2l_2m_2}(\bm{p},-\bm{p})
B_\mathrm{L}(\bm{k},-\bm{p},\bm{p}).
\label{eq:4-23}
\end{equation}
The second-order propagator in the integral is given by
Eq.~(\ref{eq:3-22}) with $\bm{k}_{12}=\bm{0}$, $\bm{k}_1=\bm{p}$ and
$\bm{k}_2=-\bm{p}$, and thus we have
\begin{multline}
\Gamma^{(2)}_{X_2l_2m_2}(\bm{p},-\bm{p}) =
c^{(2)}_{X_2l_2m_2}(\bm{p},-\bm{p})
\\ =
\sum_{l_2',l_2''} (-1)^{l_2''} c^{(2)l_2}_{X_2l_2'l_2''}(p,p)
\left\{
Y_{l_2'}(\hat{\bm{p}})\otimes Y_{l_2''}(\hat{\bm{p}})
\right\}_{l_2m_2},
\label{eq:4-24}
\end{multline}
where Eq.~(\ref{eq:3-75}) is substituted. According to
Eqs.~(\ref{eq:3-74}), (\ref{eq:3-216}) and (\ref{eq:c-5}), we have
\begin{equation}
\left\{
Y_{l_2'}(\hat{\bm{p}})\otimes Y_{l_2''}(\hat{\bm{p}})
\right\}_{l_2m_2} =
(-1)^{l_2}\sqrt{\frac{\{l_2'\}\{l_2''\}}{4\pi}}
\begin{pmatrix}
l_2' & l_2'' & l_2 \\
0 & 0 & 0
\end{pmatrix}
Y_{l_2m_2}(\hat{\bm{p}}).
\label{eq:4-25}
\end{equation}
We decompose the primordial bispectrum with the spherical harmonics as
\begin{multline}
B_\mathrm{L}(\bm{k}_1,\bm{k}_2,\bm{k}_3)
=
\sum_{l_1,l_2,l_3}
Y_{l_1m_1}(\hat{\bm{k}}_1)
Y_{l_2m_2}(\hat{\bm{k}}_2)
Y_{l_3m_3}(\hat{\bm{k}}_3)
\\ \times
\left(l_1\,l_2\,l_3\right)^{m_1m_2m_3}
B_{\mathrm{L}}^{l_1l_2l_3}(k_1,k_2,k_3).
\label{eq:4-26}
\end{multline}
The appearance of the $3j$-symbols is due to the rotational symmetry.
The inverse relation of the above is given by
\begin{multline}
B_{\mathrm{L}}^{l_1l_2l_3}(k_1,k_2,k_3)
=
\left(l_1\,l_2\,l_3\right)^{m_1m_2m_3}
\\ \times
\int d^2\hat{k}_1 d^2\hat{k}_2 d^2\hat{k}_3
Y_{l_1m_1}(\hat{\bm{k}}_1)
Y_{l_2m_2}(\hat{\bm{k}}_2)
Y_{l_3m_3}(\hat{\bm{k}}_3)
\\ \times
B_\mathrm{L}(\bm{k}_1,\bm{k}_2,\bm{k}_3).
\label{eq:4-27}
\end{multline}
For the particular bispectrum in the integrand of Eq.~(\ref{eq:4-23}),
using Eqs.~(\ref{eq:4-26}) and (\ref{eq:4-27}), we have
\begin{multline}
B_\mathrm{L}(\bm{k},-\bm{p},\bm{p})
=
\frac{1}{\sqrt{4\pi}}
\sum_{l}
Y_{lm}(\hat{\bm{k}})
Y_{lm}^*(\hat{\bm{p}})
\sum_{l',l''}
(-1)^{l'}
\\ \times
\sqrt{\frac{\{l'\}\{l''\}}{\{l\}}}
\begin{pmatrix}
l' & l'' & l \\
0 & 0 & 0
\end{pmatrix}
B_{\mathrm{L}}^{l\,l'l''}(k,p,p).
\label{eq:4-28}
\end{multline}
Putting the above equations together for the integral of
Eq.~(\ref{eq:4-23}), the angular integration over $\hat{\bm{p}}$ can
be analytically performed by using the orthonormality relation of the
spherical harmonics, Eq.~(\ref{eq:a-5}). As a result,
Eq.~(\ref{eq:4-21}) reduces to
\begin{multline}
P^{\mathrm{NG}(l_1l_2)}_{X_1X_2m_1m_2}(\bm{k}) =
\frac{1}{2\,(4\pi)^2}
\frac{1}{\sqrt{\{l_2\}}}
\left[c^{(1)}_{X_1l_1}(k) + \delta_{l_10} c^{(0)}_{X_1}\right]
\\ \times
\sum_l \left[l_1\,l_2\,l\right]_{m_1m_2}^{\phantom{m_1m_2}m}
Y_{lm}(\hat{\bm{k}})
\sum_{l_1',l_1'',l_2',l_2''}
(-1)^{l_1''+l_2''}
\\ \times
\sqrt{\{l_1'\}\{l_1''\}\{l_2'\}\{l_2''\}}
\begin{pmatrix}
l_1' & l_1'' & l_2 \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_2' & l_2'' & l_2 \\
0 & 0 & 0
\end{pmatrix}
\\ \times
\int \frac{p^2dp}{2\pi^2}
c^{(2)l_2}_{X_2l_2'l_2''}(p,p)
B_\mathrm{L}^{l_2l_1'l_1''}(k,p,p)
\\ + [(X_1l_1m_1) \leftrightarrow (X_2l_2m_2)].
\label{eq:4-29}
\end{multline}
Comparing the above equation with Eq.~(\ref{eq:4-0-3}), the
corresponding invariant power spectrum is given by
\begin{multline}
P^{\mathrm{NG}\,l_1l_2;l}_{X_1X_2}(k) =
\frac{1}{2\,(4\pi)^{5/2}}
\frac{1}{\sqrt{\{l_2\}}}
\begin{pmatrix}
l_1 & l_2 & l \\
0 & 0 & 0
\end{pmatrix}
\\ \times
\left[c^{(1)}_{X_1l_1}(k) + \delta_{l_10} c^{(0)}_{X_1}\right]
\sum_{l_1',l_1'',l_2',l_2''}
(-1)^{l_1''+l_2''}
\sqrt{\{l_1'\}\{l_1''\}\{l_2'\}\{l_2''\}}
\\ \times
\begin{pmatrix}
l_1' & l_1'' & l_2 \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_2' & l_2'' & l_2 \\
0 & 0 & 0
\end{pmatrix}
\int \frac{p^2dp}{2\pi^2}
c^{(2)l_2}_{X_2l_2'l_2''}(p,p)
B_\mathrm{L}^{l_2l_1'l_1''}(k,p,p)
\\
+ [(X_1l_1) \leftrightarrow (X_2l_2)].
\label{eq:4-29-1}
\end{multline}
This is one of the generic predictions of our theory.
The scale-dependent bias from the primordial non-Gaussianity is
conveniently derived from the cross power spectrum between the linear
density field and biased objects. In particular, we define the
lowest-order scale-dependent bias factor by
\begin{equation}
\Delta b_{Xlm}(\bm{k}) \equiv
\sqrt{4\pi}\,
\frac{P^{\mathrm{NG}(l0)}_{X\delta\,m0}(\bm{k})}{P_\mathrm{L}(k)},
\label{eq:4-30}
\end{equation}
where $X_2=\delta$ indicates that the biased field
$F_{X_2l_2m_2}(\bm{k})$ is replaced by the mass density field
$\delta(\bm{k})$ with $l_2=m_2=0$, and thus $c^{(0)}_{X_2} = 1$ and
$c^{(n)}_{X_2l_2m_2} = 0$ for $n\geq 1$. In the simplest case of
linear theory and linear bias,
$F_{\delta\,00}(\bm{k}) = (4\pi)^{-1/2} \delta_\mathrm{L}(\bm{k})$ and
$F_{Xlm}(\bm{k}) = b_{Xlm}(\bm{k}) \delta_\mathrm{L}(\bm{k})$, we have
$P^{(l0)}_{X\delta\,m0}(\bm{k}) = (4\pi)^{-1/2} b_{Xlm}(\bm{k})
P_\mathrm{L}(k)$. This observation explains the choice of
normalization in Eq.~(\ref{eq:4-30}). Due to the rotational symmetry,
and as explicitly seen from Eq.~(\ref{eq:4-0-3}), the directional
dependence on the wavevector on the rhs of Eq.~(\ref{eq:4-30}) should
be proportional to $Y_{lm}(\hat{\bm{k}})$, and thus we naturally define an
invariant bias factor $\Delta b_{Xl}(k)$ by
\begin{equation}
\Delta b_{Xlm}(\bm{k}) =
\Delta b_{Xl}(k)\, Y_{lm}(\hat{\bm{k}}).
\label{eq:4-30-1}
\end{equation}
Substituting Eq.~(\ref{eq:4-0-3}) into Eq.~(\ref{eq:4-30}), the above
definition is equivalent to
\begin{equation}
\Delta b_{Xl}(k) =
\sqrt{4\pi} (-1)^l\sqrt{\{l\}}
\frac{P^{\mathrm{NG}\,l0;l}_{X\delta}(k)}{P_\mathrm{L}(k)}.
\label{eq:4-30-2}
\end{equation}
Substituting $l_1=l$, $l_2=0$, $c^{(0)}_{X_2}=1$, and
$c^{(1)}_{X_2l_2} = c^{(2)l_2}_{X_2l_2'l_2''} = 0$ into
Eq.~(\ref{eq:4-29-1}), the invariant scale-dependent bias of
Eq.~(\ref{eq:4-30-2}) reduces to
\begin{multline}
\Delta b_{Xl}(k) =
\frac{1}{2\,(4\pi)^2P_\mathrm{L}(k)}
\sum_{l_1',l_1'',l_2',l_2''}
(-1)^{l_1''+l_2''}
\\ \times
\sqrt{\{l_1'\}\{l_2'\}\{l_1''\}\{l_2''\}}
\begin{pmatrix}
l_1' & l_1'' & l \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_2' & l_2'' & l \\
0 & 0 & 0
\end{pmatrix}
\\ \times
\int \frac{p^2dp}{2\pi^2}
c^{(2)l}_{Xl_2'l_2''}(p,p)
B_\mathrm{L}^{l\,l_1'l_1''}(k,p,p).
\label{eq:4-32}
\end{multline}
This is another generic prediction of our theory.
The above formula is applicable to arbitrary models of primordial
non-Gaussianity with a given bispectrum at the lowest order. We
consider a particular model of non-Gaussianity, which is a
generalization of the so-called local-type model
\cite{Shiraishi:2013vja,Schmidt:2015xka}:
\begin{multline}
B_\mathrm{L}(\bm{k}_1,\bm{k}_2,\bm{k}_3) =
\frac{2\mathcal{M}(k_3)}{\mathcal{M}(k_1)\mathcal{M}(k_2)}
\\ \times
\sum_l f_\mathrm{NL}^{(l)}
\mathit{P}_l(\hat{\bm{k}}_1\cdot\hat{\bm{k}}_2)
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2) + \mathrm{cyc.}
\label{eq:4-33}
\end{multline}
The function $\mathcal{M}(k)$ in the prefactor is given by
\begin{equation}
\mathcal{M}(k) =
\frac{2}{3}
\frac{D(z)}{(1+z_*)D(z_*)}
\frac{k^2 T(k)}{{H_0}^2 \Omega_\mathrm{m0}},
\label{eq:4-34}
\end{equation}
where $D(z)$ is the linear growth factor with an arbitrary
normalization, $z_*$ is an arbitrary redshift at the matter-dominated
epoch, $T(k)$ is the transfer function, $H_0$ is the Hubble constant,
and $\Omega_\mathrm{m0}$ is the density parameter of total matter. The
factor $(1+z_*)D(z_*)$ does not depend on the choice of $z_*$ as long
as $z_*$ is deep in the matter-dominated epoch. Some authors
normalize the linear growth factor so that $(1+z_*)D(z_*)=1$, in which
case the expression of Eq.~(\ref{eq:4-34}) takes its simplest form.
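For a rough numerical illustration of $\mathcal{M}(k)$ (not part of the derivation), the sketch below adopts the BBKS fitting formula for the transfer function $T(k)$ and the normalization $(1+z_*)D(z_*)=1$; the fitting formula and all parameter values are illustrative assumptions, and any accurately computed $T(k)$ can be substituted.

```python
import numpy as np

def transfer_bbks(k, gamma=0.21):
    """BBKS fitting formula for the CDM transfer function T(k);
    k in h/Mpc, gamma = Omega_m h (an illustrative stand-in for an
    accurately computed transfer function)."""
    q = np.asarray(k, dtype=float) / gamma
    return (np.log(1.0 + 2.34*q) / (2.34*q)
            * (1.0 + 3.89*q + (16.1*q)**2 + (5.46*q)**3
               + (6.71*q)**4) ** (-0.25))

def M_of_k(k, omega_m0=0.3, growth=1.0):
    """Eq. (4.34) with the normalization (1+z_*)D(z_*) = 1, so that
    `growth` stands for D(z); H0/c = 1/2997.9 in h/Mpc units."""
    H0 = 1.0 / 2997.92458
    return (2.0/3.0) * growth * k**2 * transfer_bbks(k) / (H0**2 * omega_m0)

# on large scales T(k) -> 1, so M(k) grows like k^2
print(M_of_k(2e-3) / M_of_k(1e-3))   # close to 4
```

On large scales, $T(k)\rightarrow 1$ and $\mathcal{M}(k)\propto k^2$, which is the origin of the well-known $1/k^2$ enhancement of the scale-dependent bias discussed below.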
Substituting Eq.~(\ref{eq:4-33}) into Eq.~(\ref{eq:4-27}), we have
\begin{multline}
B_{\mathrm{L}}^{l_1l_2l_3}(k_1,k_2,k_3)
=
\frac{2\mathcal{M}(k_3)}{\mathcal{M}(k_1)\mathcal{M}(k_2)}
P_\mathrm{L}(k_1)P_\mathrm{L}(k_2)
\\ \times
(4\pi)^{3/2}
\frac{(-1)^{l_1}}{\sqrt{\{l_1\}}} f_\mathrm{NL}^{(l_1)}
\delta_{l_1l_2} \delta_{l_30}
+ \mathrm{cyc.}
\label{eq:4-35}
\end{multline}
In Eqs.~(\ref{eq:4-29-1}) and (\ref{eq:4-32}), we need the bispectrum
in a special configuration, and in the above model, we have
\begin{multline}
B_\mathrm{L}^{l_2l_1'l_1''}(k,p,p)
\simeq
\frac{2 P_\mathrm{L}(k)}{\mathcal{M}(k)}
P_\mathrm{L}(p)
(4\pi)^{3/2}
\frac{(-1)^{l_2}}{\sqrt{\{l_2\}}} f_\mathrm{NL}^{(l_2)}
\\ \times
\left(
\delta_{l_1'l_2} \delta_{l_1''0} +
\delta_{l_1'0} \delta_{l_1''l_2}
\right),
\label{eq:4-36}
\end{multline}
where only the two terms in which $\mathcal{M}(k)$ appears in the
denominator are retained, because the remaining term is negligible in
the limit of $k\rightarrow 0$.
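The retained terms of Eq.~(\ref{eq:4-36}) can be checked numerically for the $l=0$ (local-type) part of Eq.~(\ref{eq:4-33}); the sketch below uses toy power-law forms of $P_\mathrm{L}(k)$ and $\mathcal{M}(k)$ (both illustrative assumptions) in a squeezed configuration.

```python
import numpy as np

f_nl = 1.0
P_L = lambda k: k**-1.5      # toy linear power spectrum (assumption)
M = lambda k: k**2           # toy M(k), i.e. T(k) ~ 1 (assumption)

def B_local(k1, k2, k3):
    """l = 0 (local-type) part of the bispectrum of Eq. (4.33)."""
    def term(a, b, c):
        return 2.0*f_nl * M(c)/(M(a)*M(b)) * P_L(a)*P_L(b)
    return term(k1, k2, k3) + term(k2, k3, k1) + term(k3, k1, k2)

# squeezed configuration k3 = k << k1 ~ k2 ~ p with k1 + k2 + k3 = 0
p, k = 1.0, 1e-3
k1, k2 = p, np.sqrt(p**2 + k**2)    # k1 = p zhat, k3 = k xhat
full = B_local(k1, k2, k)
approx = 4.0*f_nl * P_L(k)*P_L(p) / M(k)  # two terms kept in Eq. (4.36)
assert abs(full/approx - 1.0) < 1e-3
```

The dropped term, which carries $\mathcal{M}(k)$ in the numerator, is suppressed by $\mathcal{M}^2(k)$ relative to the retained ones as $k\rightarrow 0$.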
Substituting Eq.~(\ref{eq:4-36}) into Eq.~(\ref{eq:4-29-1}), and using
Eqs.~(\ref{eq:c-8}) and (\ref{eq:c-13}), we have
\begin{multline}
P^{\mathrm{NG}\,l_1l_2;l}_{X_1X_2}(k) =
\frac{(-1)^l\left[1 + (-1)^{l_2}\right]}{4\pi}
f_\mathrm{NL}^{(l_2)}
\frac{P_\mathrm{L}(k)}{\mathcal{M}(k)}
\begin{pmatrix}
l_1 & l_2 & l \\
0 & 0 & 0
\end{pmatrix}
\\ \times
\left[
c^{(1)}_{X_1l_1}(k) + \delta_{l_10} c^{(0)}_{X_1}
\right]
\sum_{l',l''} (-1)^{l''}
\sqrt{\{l'\}\{l''\}}
\\ \times
\begin{pmatrix}
l' & l'' & l_2 \\
0 & 0 & 0
\end{pmatrix}
\int \frac{p^2dp}{2\pi^2}
c^{(2)l_2}_{X_2l'l''}(p,p) P_\mathrm{L}(p)
\\ + [(X_1l_1) \leftrightarrow (X_2l_2)].
\label{eq:4-37}
\end{multline}
Substituting $l_1=l$, $l_2=0$, $c^{(0)}_{X_2}=1$, and
$c^{(1)}_{X_2l_2} = c^{(2)l_2}_{X_2l_2'l_2''} = 0$ into the above
equation, we have
\begin{multline}
P^{\mathrm{NG}\,l0;l}_{X\delta}(k) =
\frac{1 + (-1)^l}{4\pi \sqrt{\{l\}}}
\frac{P_\mathrm{L}(k)}{\mathcal{M}(k)}
f_\mathrm{NL}^{(l)}
\sum_{l',l''} (-1)^{l''}
\sqrt{\{l'\}\{l''\}}
\\ \times
\begin{pmatrix}
l' & l'' & l \\
0 & 0 & 0
\end{pmatrix}
\int \frac{p^2dp}{2\pi^2}
c^{(2)l}_{Xl'l''}(p,p) P_\mathrm{L}(p).
\label{eq:4-38}
\end{multline}
The rhs survives only in the case of even $l$.
Substituting the above equation into Eq.~(\ref{eq:4-30-2}) in this
case, we have
\begin{multline}
\Delta b_{Xl}(k) =
\frac{1 + (-1)^l}{\sqrt{4\pi}}
\frac{f_\mathrm{NL}^{(l)}}{\mathcal{M}(k)}
\sum_{l',l''} (-1)^{l''}
\sqrt{\{l'\}\{l''\}}
\\ \times
\begin{pmatrix}
l' & l'' & l \\
0 & 0 & 0
\end{pmatrix}
\int \frac{p^2dp}{2\pi^2}
c^{(2)l}_{Xl'l''}(p,p) P_\mathrm{L}(p).
\label{eq:4-39}
\end{multline}
This proves that the scale-dependent bias of the rank-$l$ tensor field
is sensitive to the multipole moment $l$ of the Legendre polynomials
$P_l(\hat{\bm{k}}_1\cdot\hat{\bm{k}}_2)$ in the model of
Eq.~(\ref{eq:4-33}) for the primordial non-Gaussianity, as shown in
Ref.~\cite{Kogai:2020vzz} by quite different considerations and
methods.
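The angular weight multiplying the integral of Eq.~(\ref{eq:4-39}) consists only of Wigner $3j$ symbols and can be tabulated symbolically; a sketch with sympy (writing $\{l\}=2l+1$), which also confirms the collapse of the weight to $\sqrt{\{l'\}}$ at $l=0$ used in Eq.~(\ref{eq:4-40}):

```python
from sympy import sqrt, simplify
from sympy.physics.wigner import wigner_3j

def weight(l, lp, lpp):
    """(-1)^{l''} sqrt({l'}{l''}) (l' l'' l; 0 0 0) of Eq. (4.39),
    with {l} = 2l + 1."""
    return ((-1)**lpp * sqrt((2*lp + 1)*(2*lpp + 1))
            * wigner_3j(lp, lpp, l, 0, 0, 0))

# parity of the 3j symbol: odd l' + l'' + l gives a vanishing weight
assert weight(2, 1, 2) == 0
# for l = 0 the 3j symbol forces l' = l'' and the weight collapses to
# sqrt({l'}), reproducing the sum in Eq. (4.40)
for lp in range(5):
    assert simplify(weight(0, lp, lp) - sqrt(2*lp + 1)) == 0
```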
As a consistency check, let us confirm that the known results for
scalar perturbations are recovered from the general result above. In
the scalar case of $l=0$, Eq.~(\ref{eq:4-39}) reduces to
\begin{equation}
\Delta b_{X0}(k) =
\frac{1}{\sqrt{4\pi}}\,
\frac{2f_\mathrm{NL}}{\mathcal{M}(k)}
\sum_{l} \sqrt{\{l\}}
\int \frac{p^2dp}{2\pi^2}
c^{(2)0}_{Xll}(p,p) P_\mathrm{L}(p),
\label{eq:4-40}
\end{equation}
where $f_\mathrm{NL} \equiv f_\mathrm{NL}^{(0)}$ is the original
parameter of local-type non-Gaussianity. One can confirm that the last
equation is consistent with the previously known results for the
scale-dependent bias in the halo models. In fact, the second-order
renormalized bias function of a simple halo model is given by
\cite{Matsubara:2012nc}
\begin{equation}
c^{(2)}_\mathrm{h}(\bm{k}_1,\bm{k}_2) =
\frac{\delta_\mathrm{c}b^\mathrm{L}_1}{\sigma^2}
W(k_1R) W(k_2R),
\label{eq:4-41}
\end{equation}
where
\begin{equation}
\sigma^2 = \int \frac{p^2dp}{2\pi^2}
W^2(pR) P_\mathrm{L}(p),
\label{eq:4-42}
\end{equation}
and $b^\mathrm{L}_1$ is a Lagrangian bias parameter of linear order.
Eq.~(\ref{eq:4-41}) is the result for the simplest case, the
high-peak (or high-mass) limit of halos in the Press--Schechter mass
function (see Ref.~\cite{Matsubara:2012nc} for results of
other, extended halo models). The Lagrangian bias parameter $b^\mathrm{L}_1$ is
related to the Eulerian bias parameter $b_1$ by
$b^\mathrm{L}_1 = b_1 - 1$. Comparing the above function with
Eq.~(\ref{eq:3-73}) or Eq.~(\ref{eq:3-75}), and noting
$c^{(2)}_{\mathrm{h}00}(k_1,k_2) = (4\pi)^{-1/2}
c^{(2)}_{\mathrm{h}}(k_1,k_2)$, the invariant coefficient in this case
is given by
\begin{equation}
c^{(2)0}_{\mathrm{h}\,l_1l_2}(k_1,k_2) =
\sqrt{4\pi}\, \delta_{l_10}\delta_{l_20}
\frac{\delta_\mathrm{c}b^\mathrm{L}_1}{\sigma^2}
W(k_1R) W(k_2R).
\label{eq:4-43}
\end{equation}
Substituting the above expression into Eq.~(\ref{eq:4-40}), we have
\begin{equation}
\Delta b_{\mathrm{h}0}(k) =
\frac{2f_\mathrm{NL}\delta_\mathrm{c} b^\mathrm{L}_1}{\mathcal{M}(k)}.
\label{eq:4-44}
\end{equation}
Due to the normalization of the spherical basis for the scalar
component, Eqs.~(\ref{eq:2-8}) and (\ref{eq:2-19}), the
scale-dependent bias of the halo number density is given by
$\Delta b_\mathrm{h}(k) = 4\pi\, \Delta b_{\mathrm{h}00}(k)
\mathsf{Y}^{(0)} = \sqrt{4\pi}\, \Delta
b_{\mathrm{h}0}(k)\mathsf{Y}^{(0)} = \Delta b_{\mathrm{h}0}$, and thus
we have
\begin{equation}
\Delta b_\mathrm{h}(k)
= \frac{2(b_1-1)f_\mathrm{NL}\delta_\mathrm{c}}{\mathcal{M}(k)}.
\label{eq:4-45}
\end{equation}
This exactly reproduces a well-known result of the scale-dependent
bias in the simplest halo model \cite{Dalal:2007cu}.
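The chain from Eq.~(\ref{eq:4-41}) to Eq.~(\ref{eq:4-45}) is easy to illustrate numerically. The sketch below evaluates $\sigma^2$ of Eq.~(\ref{eq:4-42}) with a Gaussian window and a toy $P_\mathrm{L}(k)$ (assumptions chosen for numerical simplicity; the top-hat window is equally common), and exhibits the $1/\mathcal{M}(k)\propto 1/k^2$ scaling of Eq.~(\ref{eq:4-45}) for $T(k)\simeq 1$.

```python
import numpy as np
from scipy.integrate import quad

# toy linear power spectrum shape (assumption, arbitrary normalization)
P_toy = lambda k: k / (1.0 + (k / 0.02)**3)

def sigma2(R, P_L=P_toy):
    """Eq. (4.42) with a Gaussian window W(x) = exp(-x^2/2)."""
    integrand = lambda p: p*p/(2.0*np.pi**2) * np.exp(-(p*R)**2) * P_L(p)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

s8, s16 = sigma2(8.0), sigma2(16.0)
assert 0 < s16 < s8        # larger smoothing radius -> smaller variance

# Eq. (4.45): Delta b_h(k) = 2 (b1 - 1) f_NL delta_c / M(k); with a toy
# M(k) ~ k^2 (i.e. T ~ 1, an assumption) the 1/k^2 enhancement follows
delta_c, b1, f_nl = 1.686, 2.0, 1.0
M_toy = lambda k: k * k
delta_b = lambda k: 2.0 * (b1 - 1.0) * f_nl * delta_c / M_toy(k)
assert abs(delta_b(1e-3) / delta_b(2e-3) - 4.0) < 1e-12
```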
\subsection{\label{subsec:Bispectrum}
The bispectrum
}
\subsubsection{\label{subsubsec:InvBispec}
The invariant bispectrum
}
As yet another application of our formalism, we consider the
bispectrum of the tensor field. For illustrative purposes, we only
consider the bispectrum in real space below; generalizing the results
to redshift space is straightforward with methods similar to those
employed for the power spectrum in Sec.~\ref{subsec:PowerSpec}.
The bispectrum
$B^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2,\bm{k}_3)$
of irreducible components of tensor field $F_{Xlm}(\bm{k})$ may be
defined by
\begin{multline}
\left\langle
F_{X_1l_1m_1}(\bm{k}_1) F_{X_2l_2m_2}(\bm{k}_2)
F_{X_3l_3m_3}(\bm{k}_3)
\right\rangle_\mathrm{c}
\\
=
(2\pi)^3\delta_\mathrm{D}^3(\bm{k}_1+\bm{k}_2+\bm{k}_3)
B^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2,\bm{k}_3),
\label{eq:4-60}
\end{multline}
as a simple generalization of the definition of the power spectrum of
Eq.~(\ref{eq:4-0-1}). We generally consider the cross bispectrum
among three fields, $X_1$, $X_2$ and $X_3$, with irreducible
components $l_1$, $l_2$ and $l_3$, respectively. The auto bispectrum
is straightforwardly obtained by setting $X_1=X_2=X_3$.
The bispectrum
$B^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2,\bm{k}_3)$
defined above depends on the coordinate system. Generalizing the
method of constructing the invariant power spectrum, one can also
construct the invariant bispectrum as explicitly shown below. Due to
the presence of the delta function on the lhs of Eq.~(\ref{eq:4-60}),
the third argument $\bm{k}_3$ in the bispectrum can be replaced by
$-\bm{k}_1-\bm{k}_2$, and regarded as a function of only $\bm{k}_1$
and $\bm{k}_2$. However, the resulting expression loses the apparent
symmetry under permutations of the subscripts $(1,2,3)$ of $\bm{k}_i$,
$l_i$ and $m_i$. As is usually the case in the analytic predictions of
perturbation theory, we can decompose the symmetric bispectrum into a
sum of asymmetric components as
\begin{equation}
B^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2,\bm{k}_3)
=
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2)
+ \mathrm{cyc.}
\label{eq:4-61}
\end{equation}
Although the decomposition into asymmetric components is not unique,
it is always possible to specify the originally symmetric bispectrum by
giving the asymmetric bispectra, and the predictions of perturbation
theory are usually given in the above form.
The directional dependence of the asymmetric bispectrum on the
wavevectors can be expanded by spherical harmonics as
\begin{multline}
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3m_1m_2m_3}(\bm{k}_1,\bm{k}_2)
=
\sum_{l_1',l_2'}
\tilde{B}^{l_1l_2l_3;l_1'l_2'}_{X_1X_2X_3m_1m_2m_3;m_1'm_2'}(k_1,k_2)\,
\\ \times
Y^*_{l_1'm_1'}(\hat{\bm{k}}_1) Y^*_{l_2'm_2'}(\hat{\bm{k}}_2).
\label{eq:4-63}
\end{multline}
Due to the rotational transformations of Eqs.~(\ref{eq:3-38}) and
(\ref{eq:2-14-2}), and rotational invariance of the delta function in
the definition of the bispectrum, Eq.~(\ref{eq:4-60}), the rotational
transformation of the expansion coefficients is given by
\begin{multline}
\tilde{B}^{l_1l_2l_3;l_1'l_2'}_{X_1X_2X_3m_1m_2m_3;m_1'm_2'}(k_1,k_2)
\\
\rightarrow
\tilde{B}^{l_1l_2l_3;l_1'l_2'}_{X_1X_2X_3m_1''m_2''m_3'';m_1'''m_2'''}(k_1,k_2)
D^{m_1''}_{(l_1)m_1}(R)
\\ \times
D^{m_2''}_{(l_2)m_2}(R)
D^{m_3''}_{(l_3)m_3}(R) D^{m_1'''}_{(l_1')m_1'}(R) D^{m_2'''}_{(l_2')m_2'}(R).
\label{eq:4-64}
\end{multline}
For the statistically isotropic Universe, the functional form of the
bispectrum should not depend on the choice of coordinate system.
Following exactly the same procedure as in
Sec.~\ref{subsec:RenBiasFn}, the bispectrum in the statistically
isotropic Universe is shown to have the form
\begin{multline}
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3m_1m_2m_3}(\bm{k}_1,\bm{k}_2) =
(-1)^{l_1+l_2+l_3} \sqrt{\{l_1\}\{l_2\}\{l_3\}}
\\ \times
\sum_{\substack{L,L'\\l_1',l_2'}}
(-i)^{l_1'+l_2'} (-1)^{L}
\sqrt{\{L\}\{l_1'\}\{l_2'\}}
\left(l_1\,l_2\,L\right)_{m_1m_2}^{\phantom{m_1m_2}M}
\left(L\,l_3\,L'\right)_{Mm_3}^{\phantom{Mm_3}M'}
\\ \times
\left\{
Y_{l_1'}(\hat{\bm{k}}_1) \otimes Y_{l_2'}(\hat{\bm{k}}_2)
\right\}_{L'M'}
\tilde{B}^{l_1l_2l_3;l_1'l_2';LL'}_{X_1X_2X_3}(k_1,k_2),
\label{eq:4-65}
\end{multline}
where
\begin{multline}
\tilde{B}^{l_1l_2l_3;l_1'l_2';LL'}_{X_1X_2X_3}(k_1,k_2) =
\frac{(-i)^{l_1'+l_2'}(-1)^{L+L'} \sqrt{\{L\}\{L'\}}}
{\sqrt{\{l_1\}\{l_2\}\{l_3\}\{l_1'\}\{l_2'\}}}
\\ \times
\left(l_1\,l_2\,L\right)^{m_1m_2}_{\phantom{m_1m_2}M}
\left(L\,l_3\,L'\right)^{Mm_3}_{\phantom{Mm_3}M'}
\left(L'\,l_1'\,l_2'\right)^{M'm_1'm_2'}
\\ \times
\tilde{B}^{l_1l_2l_3;l_1'l_2'}_{X_1X_2X_3m_1m_2m_3;m_1'm_2'}(k_1,k_2)
\label{eq:4-66}
\end{multline}
is the invariant bispectrum.
According to Eqs.~(\ref{eq:3-32}) and (\ref{eq:4-60}), together with
the symmetry of the $3j$-symbol, Eq.~(\ref{eq:c-2}), the complex
conjugate is shown to satisfy
\begin{equation}
\tilde{B}^{l_1l_2l_3;l_1'l_2';LL'\,*}_{X_1X_2X_3}(k_1,k_2) =
\tilde{B}^{l_1l_2l_3;l_1'l_2';LL'}_{X_1X_2X_3}(k_1,k_2),
\label{eq:4-66-1}
\end{equation}
and thus the invariant bispectrum is a real function. Additionally,
the parity transformation of the bispectrum is given by
\begin{multline}
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3m_1m_2m_3}(\bm{k}_1,\bm{k}_2) \rightarrow
(-1)^{s_{X_1}+s_{X_2}+s_{X_3}+l_1+l_2+l_3}
\\ \times
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3m_1m_2m_3}(-\bm{k}_1,-\bm{k}_2),
\label{eq:4-67}
\end{multline}
because of Eqs.~(\ref{eq:3-42}) and (\ref{eq:4-60}). Accordingly, the
parity transformation of the invariant bispectrum is given by
\begin{multline}
\tilde{B}^{l_1l_2l_3;l_1'l_2';LL'}_{X_1X_2X_3}(k_1,k_2) \rightarrow
(-1)^{s_{X_1}+s_{X_2}+s_{X_3}+l_1+l_2+l_3+l_1'+l_2'}
\\ \times
\tilde{B}^{l_1l_2l_3;l_1'l_2';LL'}_{X_1X_2X_3}(k_1,k_2).
\label{eq:4-68}
\end{multline}
In the Universe with parity symmetry, the bispectrum is invariant
under the parity transformation, and thus we have
\begin{equation}
s_{X_1} + s_{X_2} + s_{X_3} + l_1 + l_2 + l_3 + l_1' + l_2'
= \mathrm{even}.
\label{eq:4-69}
\end{equation}
\subsubsection{\label{subsubsec:TreeBispec}
The lowest-order bispectrum
}
We consider the bispectrum of the lowest-order approximation or the
tree-level bispectrum. The formal expression of the lowest-order
bispectrum in terms of propagators is given by
\begin{multline}
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2)
= \Gamma^{(1)}_{X_1l_1m_1}(\bm{k}_1) \Gamma^{(1)}_{X_2l_2m_2}(\bm{k}_2)
\\ \times
\Gamma^{(2)}_{X_3l_3m_3}(-\bm{k}_1,-\bm{k}_2)
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2).
\label{eq:4-71}
\end{multline}
We do not use the higher-order resummation factor and substitute
$\Pi(k) = 1$. Substituting Eqs.~(\ref{eq:3-105}) and (\ref{eq:3-108}),
the above equation is represented in terms of invariant functions as
\begin{multline}
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2)
=
\hat{\Gamma}^{(1)}_{X_1l_1}(k_1)
\hat{\Gamma}^{(1)}_{X_2l_2}(k_2)
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2)
\\ \times
\sum_{l_1',l_2',l_1'',l_2''}
(-1)^{l_1''+l_2''}
\left[l_1\,l_1'\,l_1''\right]_{m_1}^{\phantom{m_1}m_1'm_1''}
\left[l_2\,l_2'\,l_2''\right]_{m_2}^{\phantom{m_2}m_2'm_2''}
\\ \times
Y_{l_1'm_1'}(\hat{\bm{k}}_1) Y_{l_2'm_2'}(\hat{\bm{k}}_2)
\left(l_1''\,l_2''\,l_3\right)_{m_1''m_2''m_3}
\hat{\Gamma}^{(2)\,l_3}_{X_3l_1''l_2''}(k_1,k_2).
\label{eq:4-72}
\end{multline}
Substituting Eqs.~(\ref{eq:3-202}) and (\ref{eq:3-217}), the above
expression is given by invariant forms of renormalized bias functions,
$c^{(0)}_X$, $c^{(1)}_{X\,l}(k)$ and $c^{(2)\,l}_{Xl_1l_2}(k_1,k_2)$.
The sum of products of three $3j$-symbols appearing in the above
expression is represented by a $9j$-symbol due to the formula
of Eq.~(\ref{eq:c-41}) in Appendix~\ref{app:3njSymbols}, or equivalently,
\begin{multline}
(-1)^{l_1''+l_2''+l_3}
\left(l_1\,l_1'\,l_1''\right)_{m_1}^{\phantom{m_1}m_1'm_1''}
\left(l_2\,l_2'\,l_2''\right)_{m_2}^{\phantom{m_2}m_2'm_2''}
\left(l_1''\,l_2''\,l_3\right)_{m_1''m_2''m_3}
\\
= \sum_{L,L'} \{L\}\{L'\}
\left(l_1\,l_2\,L\right)_{m_1m_2}^{\phantom{m_1m_2}M}
\left(L\,l_3\,L'\right)_{Mm_3}^{\phantom{Mm_3}M'}
\\ \times
\left(L'\,l_1'\,l_2'\right)_{M'}^{\phantom{M'}m_1'm_2'}
\begin{Bmatrix}
l_1 & l_1' & l_1'' \\
l_2 & l_2' & l_2'' \\
L & L' & l_3
\end{Bmatrix}.
\label{eq:4-73}
\end{multline}
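The $9j$ symbol entering the above identity is itself a fully $m$-summed product of six $3j$ symbols with all magnetic indices lowered; as a sanity check of the coupling conventions (not of the raised-index bookkeeping of Eq.~(\ref{eq:4-73}) itself), this standard relation can be verified numerically with sympy:

```python
from sympy.physics.wigner import wigner_3j, wigner_9j

def nine_j_from_3j(j1, j2, j3, j4, j5, j6, j7, j8, j9):
    """9j symbol as the fully m-summed product of six 3j symbols
    (standard definition, all magnetic indices lowered)."""
    total = 0
    for m1 in range(-j1, j1 + 1):
        for m2 in range(-j2, j2 + 1):
            m3 = -m1 - m2               # enforced by (j1 j2 j3) row
            if abs(m3) > j3:
                continue
            for m4 in range(-j4, j4 + 1):
                for m5 in range(-j5, j5 + 1):
                    m6 = -m4 - m5       # (j4 j5 j6) row
                    m7, m8 = -m1 - m4, -m2 - m5   # columns
                    m9 = -m3 - m6
                    if (abs(m6) > j6 or abs(m7) > j7
                            or abs(m8) > j8 or abs(m9) > j9):
                        continue
                    total += (wigner_3j(j1, j2, j3, m1, m2, m3)
                              * wigner_3j(j4, j5, j6, m4, m5, m6)
                              * wigner_3j(j7, j8, j9, m7, m8, m9)
                              * wigner_3j(j1, j4, j7, m1, m4, m7)
                              * wigner_3j(j2, j5, j8, m2, m5, m8)
                              * wigner_3j(j3, j6, j9, m3, m6, m9))
    return total

js = (1, 1, 2, 1, 1, 2, 2, 2, 2)
assert abs(float(nine_j_from_3j(*js)) - float(wigner_9j(*js))) < 1e-12
```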
Consequently, Eq.~(\ref{eq:4-72}) reduces to an alternative
expression,
\begin{multline}
\tilde{B}^{(l_1l_2l_3)}_{X_1X_2X_3\,m_1m_2m_3}(\bm{k}_1,\bm{k}_2)
=
\frac{(-1)^{l_3}}{4\pi}
\sqrt{\{l_1\}\{l_2\}}
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2)
\\ \times
\hat{\Gamma}^{(1)}_{X_1l_1}(k_1)
\hat{\Gamma}^{(1)}_{X_2l_2}(k_2)
\sum_{\substack{l_1',l_2'\\l_1'',l_2''}}
\sqrt{\{l_1'\}\{l_2'\}\{l_1''\}\{l_2''\}}
\\ \times
\begin{pmatrix}
l_1 & l_1' & l_1'' \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_2 & l_2' & l_2'' \\
0 & 0 & 0
\end{pmatrix}
\hat{\Gamma}^{(2)\,l_3}_{X_3l_1''l_2''}(k_1,k_2)
\\ \times
\sum_{L,L'} \{L\} (-1)^{L'} \sqrt{\{L'\}}
\left(l_1\,l_2\,L\right)_{m_1m_2}^{\phantom{m_1m_2}M}
\left(L\,l_3\,L'\right)_{Mm_3}^{\phantom{Mm_3}M'}
\\ \times
\begin{Bmatrix}
l_1 & l_1' & l_1'' \\
l_2 & l_2' & l_2'' \\
L & L' & l_3
\end{Bmatrix}
\left\{
Y_{l_1'}(\hat{\bm{k}}_1) \otimes Y_{l_2'}(\hat{\bm{k}}_2)
\right\}_{L'M'}.
\label{eq:4-74}
\end{multline}
Comparing the above equation with Eq.~(\ref{eq:4-65}), the invariant
bispectrum is given by
\begin{multline}
\tilde{B}^{l_1l_2l_3;l_1'l_2';LL'}_{X_1X_2X_3}(k_1,k_2)
=
\frac{(-i)^{l_1'+l_2'}(-1)^{L+L'} \sqrt{\{L\}\{L'\}}}{4\pi\sqrt{\{l_3\}}}
\\ \times
\hat{\Gamma}^{(1)}_{X_1l_1}(k_1)
\hat{\Gamma}^{(1)}_{X_2l_2}(k_2)
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2)
\sum_{l_1'',l_2''} (-1)^{l_1''+l_2''}
\sqrt{\{l_1''\}\{l_2''\}}
\\ \times
\begin{pmatrix}
l_1 & l_1' & l_1'' \\
0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
l_2 & l_2' & l_2'' \\
0 & 0 & 0
\end{pmatrix}
\begin{Bmatrix}
l_1 & l_1' & l_1'' \\
l_2 & l_2' & l_2'' \\
L & L' & l_3
\end{Bmatrix}
\hat{\Gamma}^{(2)\,l_3}_{X_3l_1''l_2''}(k_1,k_2).
\label{eq:4-75}
\end{multline}
This is one of the generic predictions of our theory.
As a consistency check, let us confirm that the bispectrum of a
scalar-biased field in the lowest-order perturbation theory is
correctly reproduced. Considering the case of $l_1=l_2=l_3=m_1=m_2=m_3=0$ and
$X_1=X_2=X_3\equiv X$ in Eq.~(\ref{eq:4-72}), and using
Eqs.~(\ref{eq:c-8}) and (\ref{eq:c-13}), we straightforwardly derive
\begin{multline}
\tilde{B}^{(000)}_{XXX\,000}(\bm{k}_1,\bm{k}_2)
=
\frac{1}{(4\pi)^2}
\sum_l (-1)^l \sqrt{\{l\}}\,
\mathit{P}_l(\hat{\bm{k}}_1\cdot\hat{\bm{k}}_2)
\\ \times
\hat{\Gamma}^{(1)}_{X0}(k_1) \hat{\Gamma}^{(1)}_{X0}(k_2)
\hat{\Gamma}^{(2)\,0}_{Xll}(k_1,k_2)
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2).
\label{eq:4-76}
\end{multline}
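The appearance of the Legendre polynomials in the above equation traces back to the addition theorem $\sum_m Y_{lm}(\hat{\bm{k}}_1)\,Y^*_{lm}(\hat{\bm{k}}_2) = \{l\}/(4\pi)\, P_l(\hat{\bm{k}}_1\cdot\hat{\bm{k}}_2)$, which is easy to verify numerically; a sketch for $l=2$ with hand-coded harmonics:

```python
import numpy as np

def y2(n):
    """Spherical harmonics Y_2m(n) for a unit vector n = (x, y, z),
    ordered as m = -2, -1, 0, +1, +2 (Condon-Shortley phase)."""
    x, y, z = n
    return np.array([
        np.sqrt(15.0/(32.0*np.pi)) * (x - 1j*y)**2,
        np.sqrt(15.0/(8.0*np.pi)) * z * (x - 1j*y),
        np.sqrt(5.0/(16.0*np.pi)) * (3.0*z*z - 1.0),
        -np.sqrt(15.0/(8.0*np.pi)) * z * (x + 1j*y),
        np.sqrt(15.0/(32.0*np.pi)) * (x + 1j*y)**2,
    ])

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

a, b = unit([0.3, -1.2, 0.5]), unit([1.0, 0.4, -0.7])
lhs = np.sum(y2(a) * np.conj(y2(b)))            # sum_m Y_2m(a) Y*_2m(b)
mu = a @ b
rhs = 5.0/(4.0*np.pi) * 0.5*(3.0*mu*mu - 1.0)   # {2}/(4 pi) P_2(mu)
assert abs(lhs - rhs) < 1e-12
```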
The same result is also derived from Eqs.~(\ref{eq:4-74}) and
(\ref{eq:4-75}) with Eq.~(\ref{eq:4-9}). Substituting
Eqs.~(\ref{eq:3-202}) and (\ref{eq:3-217}) into the above equation, a
straightforward calculation derives the scalar bispectrum as
\begin{multline}
B_X(\bm{k}_1,\bm{k}_2,\bm{k}_3) = (4\pi)^{3/2}
B^{(000)}_{XXX\,000}(\bm{k}_1,\bm{k}_2,\bm{k}_3)
\\
=
b_1(k_1) b_1(k_2)
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2)
\Biggl\{
b^\mathrm{L}_2(\bm{k}_1,\bm{k}_2)
+ b_1(k_1) + b_1(k_2)
\\
- \frac{4}{7}
+ \left[
\frac{k_2}{k_1} b_1(k_1) +
\frac{k_1}{k_2} b_1(k_2)
\right]
\frac{\bm{k}_1\cdot\bm{k}_2}{k_1k_2}
\\
+ \frac{4}{7}
\left(\frac{\bm{k}_1\cdot\bm{k}_2}{k_1k_2}\right)^2
\Biggr\}
+ \mathrm{cyc.},
\label{eq:4-77}
\end{multline}
where $b_1(k) \equiv c^{(0)}_X + c^{(1)}_{X0}(k)$
corresponds to the Eulerian linear bias and is
the same as $b_X(k)$ defined in Eq.~(\ref{eq:4-5-0}), and
\begin{multline}
b^\mathrm{L}_2(\bm{k}_1,\bm{k}_2)
\equiv
c^{(2)}_X(\bm{k}_1,\bm{k}_2)
= 4\pi\,c^{(2)}_{X00}(\bm{k}_1,\bm{k}_2) \mathsf{Y}^{(0)}
\\
=
\frac{1}{\sqrt{4\pi}} \sum_l (-1)^l \sqrt{\{l\}}\,
\mathit{P}_l(\hat{\bm{k}}_1\cdot\hat{\bm{k}}_2)
c^{(2)\,0}_{Xll}(k_1,k_2)
\label{eq:4-78}
\end{multline}
corresponds to the Lagrangian (nonlocal) bias function of second order
\cite{Matsubara:2011ck}. A more commonly known form of the bispectrum
in the lowest-order perturbation theory is expressed in terms of the
Eulerian bias parameters $b_1$, $b_2$, $b_{s^2}$
\cite{Matarrese:1997sk,McDonald:2009dh,Sheth:2012fc,Baldauf:2012hs}.
It is natural to take the normalization
$\langle F^\mathrm{L}_X\rangle = 1$ [cf.~Eq.~(\ref{eq:3-1})], and we
have $c^{(0)}_X=1$ in this scalar case. On the one hand, in the
special case of local bias, in which $b_1$ and $b^\mathrm{L}_2$ are
constants and do not depend on wavevectors, Eq.~(\ref{eq:4-77})
reduces to
\begin{multline}
B_X(\bm{k}_1,\bm{k}_2,\bm{k}_3)
=
{b_1}^2
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2)
\\ \times
\left[
b^\mathrm{L}_2 + 2b_1
- \frac{4}{7}
+ b_1
\left(
\frac{k_2}{k_1} + \frac{k_1}{k_2}
\right)
\frac{\bm{k}_1\cdot\bm{k}_2}{k_1k_2}
+ \frac{4}{7}
\left(\frac{\bm{k}_1\cdot\bm{k}_2}{k_1k_2}\right)^2
\right]
\\
+ \mathrm{cyc.}
\label{eq:4-79}
\end{multline}
On the other hand, the prediction of scalar bispectrum in the standard
(Eulerian) perturbation theory is given by \cite{Baldauf:2012hs},
\begin{multline}
B_X(\bm{k}_1,\bm{k}_2,\bm{k}_3)
=
{b_1}^2
P_\mathrm{L}(k_1) P_\mathrm{L}(k_2)
\\ \times
\left\{
b_2 + 2b_{s^2}
\left[
\left(\frac{\bm{k}_1\cdot\bm{k}_2}{k_1k_2}\right)^2
- \frac{1}{3}
\right]
+ b_1 F_2(\bm{k}_1,\bm{k}_2)
\right\}
\\
+ \mathrm{cyc.},
\label{eq:4-80}
\end{multline}
where
\begin{equation}
F_2(\bm{k}_1,\bm{k}_2) =
\frac{10}{7} +
\left(
\frac{k_2}{k_1} + \frac{k_1}{k_2}
\right)
\frac{\bm{k}_1\cdot\bm{k}_2}{k_1k_2}
+ \frac{4}{7}
\left(\frac{\bm{k}_1\cdot\bm{k}_2}{k_1k_2}\right)^2
\label{eq:4-81}
\end{equation}
is the second-order kernel\footnote{Our choice of the normalizations
for kernel functions $F_n$ is different from those in many previous
references, where our $F_n$ corresponds to $n!F_n$ in the latter.
We believe our choice simplifies the expressions of equations in
general \cite{Matsubara:2011ck}.} of the Eulerian perturbation
theory \cite{Bernardeau:2001qr}. Comparing the coefficients of
Eqs.~(\ref{eq:4-79}) and (\ref{eq:4-80}), we find the relations
between Lagrangian local bias parameters and Eulerian local bias
parameters as
\begin{equation}
b_1 = 1 + b^\mathrm{L}_1, \quad
b_2 = b^\mathrm{L}_2 + \frac{8}{21} b^\mathrm{L}_1, \quad
b_{s^2} = - \frac{2}{7} b^\mathrm{L}_1.
\label{eq:4-82}
\end{equation}
The above relations exactly agree with those found in the literature
\cite{Sheth:2012fc,Baldauf:2012hs}. It is also possible to include the
nonlocal Lagrangian bias $b^\mathrm{L}_{s^2}$ in
$b^\mathrm{L}_2(\bm{k}_1,\bm{k}_2)$, and derive consistent results
with those in the literature \cite{Sheth:2012fc,Baldauf:2012hs}.
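The comparison of coefficients between Eqs.~(\ref{eq:4-79}) and (\ref{eq:4-80}) can be automated symbolically; the sketch below (with $\mu=\hat{\bm{k}}_1\cdot\hat{\bm{k}}_2$ and $r=k_2/k_1$) solves for the Eulerian parameters in terms of the Lagrangian ones:

```python
import sympy as sp

b1L, b2L, b1, b2, bs2 = sp.symbols("b1L b2L b1 b2 bs2")
mu, r = sp.symbols("mu r", positive=True)

# curly-bracket factors of Eqs. (4.79) and (4.80); the common prefactor
# b1^2 P_L(k1) P_L(k2) is dropped
lagr = (b2L + 2*b1 - sp.Rational(4, 7) + b1*(r + 1/r)*mu
        + sp.Rational(4, 7)*mu**2)
F2 = sp.Rational(10, 7) + (r + 1/r)*mu + sp.Rational(4, 7)*mu**2
euler = b2 + 2*bs2*(mu**2 - sp.Rational(1, 3)) + b1*F2

expr = sp.expand((lagr - euler).subs(b1, 1 + b1L))
assert expr.coeff(mu, 1) == 0      # the mu^1 terms match identically
sol = sp.solve([expr.coeff(mu, 0), expr.coeff(mu, 2)], [b2, bs2],
               dict=True)[0]
print(sol)  # Eulerian bias parameters in terms of the Lagrangian ones
```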
\section{\label{sec:Semilocal}
Semi-local models of bias for tensor fields
}
The formulation so far is fully general, and we have not assumed any
specific model of bias for the tensor fields. The bias model is
naturally specified in Lagrangian space through the renormalized bias
functions $c^{(n)}_{Xlm}$ defined by Eq.~(\ref{eq:3-18}), which is
evaluated once a model of bias $F^\mathrm{L}_{Xlm}(\bm{q})$ is
analytically given as a functional of the linear density field
$\delta_\mathrm{L}(\bm{q})$. In this section, we consider a category
of bias models which are commonly adopted in models of cosmological
structure formation. We call this category ``the semi-local models''
of bias.
The concept of the semi-local models is somewhat similar to the
EFTofLSS approach in perturbation theory of biased tracers
\cite{Mirbabayi:2014zca,Desjacques:2016bnm}, where the biased field is
given by a finite set of semi-local operators made of spatial and
temporal derivatives of the Newtonian potential in the perturbative
expansions order by order. However, while the EFTofLSS approach is
based on phenomenologically perturbative expansions of the biased
field, our iPT approach does not assume the underlying operators are
perturbative, because we essentially use orthogonal expansions instead
of Taylor expansions to include the fully nonlinear biasing into the
cosmological perturbation theory through the renormalized bias
functions.
This category of semi-local models of biasing is defined so that the
field $F^\mathrm{L}_{Xlm}(\bm{q})$ is given by functions (instead of
functionals) of the spatial derivatives of the gravitational potential at
the same position $\bm{q}$, smoothed with (generally multiple)
window functions $W^{(a)}(k)$:
\begin{equation}
\chi^{(a)}_{i_1i_2\cdots i_{L_a}}(\bm{q}) =
\partial_{i_1}\partial_{i_2}\cdots\partial_{i_{L_a}}
\psi^{(a)}(\bm{q}),
\label{eq:5-1}
\end{equation}
where
\begin{equation}
\psi^{(a)}(\bm{q}) =
\int \frac{d^3k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{q}}
\delta_\mathrm{L}(\bm{k})\, \frac{W^{(a)}(k)}{(ik)^{L_a}}
\label{eq:5-2}
\end{equation}
is the smoothed linear density field with an isotropic window function
$(ik)^{-L_a} W^{(a)}(k)$. The label ``$a$'' distinguishes different
kinds of the linear tensor field with a particular rank $L_a$, i.e., the
label $a$ uniquely specifies the rank $L_a$ and the form of the window
function $W^{(a)}(k)$. The window function usually contains a
parameter of smoothing radius $R_a$, which can take a different value
for each window function $W^{(a)}$.
Typical window functions include the Gaussian window,
$W^{(a)}(k) = \exp(-k^2{R_a}^2/2)$, and the top-hat window,
$W^{(a)}(k) = 3j_1(kR_a)/(kR_a)$. When only a single linear tensor
field of a fixed window function is considered, one can omit the label
$a$ in the above. We include the possibility of using multiple
fields and window functions in the general formulation below.
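For reference, the two window functions mentioned above can be coded directly; the top-hat case needs a small-argument series to avoid numerical cancellation (a purely numerical detail):

```python
import numpy as np

def W_gauss(k, R):
    """Gaussian window W(kR) = exp(-(kR)^2/2)."""
    return np.exp(-0.5*(k*R)**2)

def W_tophat(k, R):
    """Top-hat window 3 j_1(kR)/(kR), with a series expansion for
    small arguments to avoid cancellation."""
    x = k*R
    if abs(x) < 1e-4:
        return 1.0 - x*x/10.0
    return 3.0*(np.sin(x) - x*np.cos(x))/x**3

# both windows approach unity on scales much larger than R (kR << 1)
assert abs(W_gauss(1e-5, 8.0) - 1.0) < 1e-8
assert abs(W_tophat(1e-5, 8.0) - 1.0) < 1e-8
```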
For a concrete example of biasing using multiple fields and
window functions, see Ref.~\cite{Matsubara:2016wth}. For example, the
second derivatives of the (normalized and smoothed) linear
gravitational potential $\varphi$ correspond to
$\varphi = \Laplace^{-1} \delta_R = \psi^{(a)}$ with rank
$L_a=2$, and
$\chi^{(a)}_{ij} = \partial_i\partial_j\Laplace^{-1}\delta_R$ where
$\delta_R$ is a smoothed linear density field with a smoothing window
function $W^{(a)}(k) = W(kR)$.
The linear density contrast $\delta_\mathrm{L}$ is a real function in
configuration space, and thus we have
$\delta_\mathrm{L}^{*}(\bm{k}) = \delta_\mathrm{L}(-\bm{k})$ as shown
from Eq.~(\ref{eq:3-11a}). Thus the function
$\psi^{(a)}(\bm{q})$ of Eq.~(\ref{eq:5-2}) is a real function when
$L_a$ is even and a purely imaginary function when $L_a$ is odd, as is
the function $\chi^{(a)}_{i_1\cdots i_{L_a}}(\bm{q})$ of
Eq.~(\ref{eq:5-1}).
In the semi-local models of bias, the field value
$F^\mathrm{L}_{Xlm}(\bm{q})$ at a particular position is given by a
function of the field values $\chi^{(a)}_{i_1\cdots i_l}(\bm{q})$ at
the same position in Lagrangian space. This function is common to
all positions. Therefore, without loss of generality, one can
consider the particular position at the origin, $\bm{q}=\bm{0}$, to
describe the relation in the semi-local model of bias, and thus the
field $F^\mathrm{L}_{Xlm}$ is a function of
\begin{equation}
\chi^{(a)}_{i_1\cdots i_{L_a}} =
\int \frac{d^3k}{(2\pi)^3}\,
\hat{k}_{i_1}\cdots\hat{k}_{i_{L_a}}\,
\delta_\mathrm{L}(\bm{k}) W^{(a)}(k).
\label{eq:5-3}
\end{equation}
The tensor field $F^\mathrm{L}_{Xlm}$ can depend on multiple local
tensors of various ranks, $\chi^{(a_0)}$, $\chi^{(a_1)}_i$,
$\chi^{(a_2)}_{ij}$, $\chi^{(a_3)}_{ijk}$, etc. A local tensor is
decomposed by an irreducible spherical basis according to the
procedure explained in Sec.~\ref{sec:SphericalBasis}. The tensor of
Eq.~(\ref{eq:5-3}) is decomposed into traceless tensors of rank
$L_a,L_a-2,\ldots$, and each traceless tensor is further
decomposed by a spherical basis, as in
Eqs.~(\ref{eq:2-23})--(\ref{eq:2-24b}). The decomposition is given by
\begin{equation}
\chi^{(a)}_{i_1\cdots i_l}
= \chi^{(a,l)}_{i_1i_2\cdots i_l} +
\frac{l(l-1)}{2(2l-1)}
\delta_{(i_1i_2} \chi^{(a,l-2)}_{i_3\cdots i_l)}
+ \cdots,
\label{eq:5-4}
\end{equation}
where we simply denote $L_a=l$ in the above, and
$\chi^{(a,l-2)}_{i_3\cdots i_l}$ is the traceless part of the first
trace $\chi^{(a)}_{jji_3\cdots i_l}$ and so forth. The decomposed
traceless tensors are further decomposed by spherical basis as
\begin{equation}
\chi^{(a,l)}_{i_1i_2\cdots i_l}
= i^l \alpha_l\, \chi^{(a[l])}_{lm} \mathsf{Y}^{(m)}_{i_1i_2\cdots i_l}, \ \
\chi^{(a,l-2)}_{i_3\cdots i_l}
= i^{l-2} \alpha_{l-2}\, \chi^{(a[l])}_{l-2,m}
\mathsf{Y}^{(m)}_{i_3\cdots i_l},
\label{eq:5-4-1}
\end{equation}
and so forth.
In our simplified notation with $l=L_a$, the rank of the linear tensor
field $a$ is not obvious if we write, e.g., $\chi^{(a)}_{l-2,m}$ for
the first trace of the original tensor, and thus we instead employ the
notations $\chi^{(a[l])}_{lm}$, $\chi^{(a[l])}_{l-2,m}$, etc., to
indicate that the rank of the original field $a$ is $l$ even in the
lower-rank trace parts of the tensor. Namely,
\begin{equation}
\chi^{(a[l])}_{lm}
= \left.\chi^{(a)}_{L_am}\right|_{L_a=l}, \quad
\chi^{(a[l])}_{l-2,m}
= \left.\chi^{(a)}_{L_a-2,m}\right|_{L_a=l},
\label{eq:5-4-2}
\end{equation}
and so forth.
The spherical tensors are given by
\begin{align}
\chi^{(a[l])}_{lm}
&= (-i)^l
\chi^{(a)}_{i_1\cdots i_l}
\mathsf{Y}^{(m)*}_{i_1\cdots i_l}
\nonumber\\
&=
(-i)^l
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{lm}(\hat{\bm{k}}) W^{(a)}(k),
\label{eq:5-5a}\\
\chi^{(a[l])}_{l-2,m}
&= (-i)^{l-2}
\chi^{(a)}_{jji_3\cdots i_l}
\mathsf{Y}^{(m)*}_{i_3\cdots i_l}
\nonumber\\
&= -(-i)^l
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{l-2,m}(\hat{\bm{k}})
W^{(a)}(k),
\label{eq:5-5b}
\end{align}
and, in general,
\begin{align}
\chi^{(a[l])}_{l-2p,m}
&= (-i)^{l-2p}
\chi^{(a)}_{j_1j_1\cdots j_pj_pi_{2p+1}\cdots i_l}
\mathsf{Y}^{(m)*}_{i_{2p+1}\cdots i_l}
\nonumber\\
&= (-1)^p (-i)^l
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{l-2p,m}(\hat{\bm{k}})
W^{(a)}(k),
\label{eq:5-5-1}
\end{align}
where $p=0,1,\cdots,[l/2]$. Changing the labels of ranks by
$l \rightarrow L$ and $l-2p \rightarrow l$, Eq.~(\ref{eq:5-5-1}) is
equivalently represented by
\begin{equation}
\chi^{(a[L])}_{lm}
= (-i)^l
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{lm}(\hat{\bm{k}})
W^{(a)}(k),
\label{eq:5-5-2}
\end{equation}
where $l=L,L-2$, \ldots, (0 or 1) for a linear tensor field $a$ of
rank $L$. The smallest value of $l$ in the last equation is 0 if $L$
is even, and 1 if $L$ is odd. When the rank $L_a$ of the original
tensor $\chi^{(a)}_{i_1\cdots i_{L_a}}$ is obvious, one can employ a
simplified notation $\chi^{(a)}_{lm}$ instead of
$\chi^{(a[L])}_{lm}$, where $l = L, L-2,\ldots$
For a simple example, we consider the second derivatives of the
normalized potential $\varphi$, smoothed with a window function
$W(kR)$. In this case, we have
\begin{equation}
\chi^{(\varphi)}_{ij} = \partial_i\partial_j\varphi
= \int \frac{d^3k}{(2\pi)^3}\, \hat{k}_i \hat{k}_j
\delta_\mathrm{L}(\bm{k}) W(kR).
\label{eq:5-8-0}
\end{equation}
The corresponding window function is $W^{(\varphi)}(k) = W(kR)$. The
second-rank tensor is decomposed into irreducible components as
\begin{align}
\chi^{(\varphi)}_{ij}
&=
\left(
\partial_i\partial_j - \frac{\delta_{ij}}{3} \Laplace
\right)
\varphi +
\frac{\delta_{ij}}{3} \Laplace \varphi
\nonumber \\
&=
-\frac{8\pi}{15} \chi^{(\varphi)}_{2m}\, \mathsf{Y}^{(m)}_{ij} +
4\pi \frac{\delta_{ij}}{3} \chi^{(\varphi)}_{00}\, \mathsf{Y}^{(0)},
\label{eq:5-8-1}
\end{align}
where, corresponding to Eqs.~(\ref{eq:5-5a}) and (\ref{eq:5-5b}), we
have
\begin{align}
\chi^{(\varphi[2])}_{2m}
&=
- \int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{2m}(\hat{\bm{k}})
W(kR),
\label{eq:5-8-2a}\\
\chi^{(\varphi[2])}_{00}
&=
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{00}(\hat{\bm{k}})
W(kR).
\label{eq:5-8-2b}
\end{align}
The last variable is explicitly given by
\begin{equation}
\chi^{(\varphi[2])}_{00} =
\frac{1}{\sqrt{4\pi}}
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k}) W(kR)
\equiv
\frac{\delta_R}{\sqrt{4\pi}},
\label{eq:5-8-3}
\end{equation}
where $\delta_R$ is the smoothed density field with a smoothing
function $W(kR)$.
Similarly, the second derivatives of the
smoothed density field are given by
\begin{equation}
\chi^{(\delta)}_{ij} = \partial_i\partial_j\delta_R
= - \int \frac{d^3k}{(2\pi)^3}\,k^2\, \hat{k}_i \hat{k}_j
\delta_\mathrm{L}(\bm{k}) W(kR).
\label{eq:5-8-3-1}
\end{equation}
The corresponding window function is $W^{(\delta)}(k) = k^2 W(kR)$.
The second-rank tensor is decomposed into irreducible components as
\begin{align}
\chi^{(\delta)}_{ij} = \partial_i\partial_j \delta_R
&=
\left(
\partial_i\partial_j - \frac{\delta_{ij}}{3} \Laplace
\right) \delta_R +
\frac{\delta_{ij}}{3} \Laplace \delta_R
\nonumber \\
&=
-\frac{8\pi}{15} \chi^{(\delta[2])}_{2m} \mathsf{Y}^{(m)}_{ij} +
4\pi \frac{\delta_{ij}}{3} \chi^{(\delta[2])}_{00} \mathsf{Y}^{(0)},
\label{eq:5-8-4}
\end{align}
where
\begin{align}
\chi^{(\delta[2])}_{2m}
&=
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{2m}(\hat{\bm{k}})\,
k^2 W(kR),
\label{eq:5-8-5a}\\
\chi^{(\delta[2])}_{00}
&=
- \int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})
Y_{00}(\hat{\bm{k}})\,
k^2 W(kR),
\label{eq:5-8-5b}
\end{align}
and the last variable is explicitly given by
\begin{equation}
\chi^{(\delta[2])}_{00} =
- \frac{1}{\sqrt{4\pi}}
\int \frac{d^3k}{(2\pi)^3}
\delta_\mathrm{L}(\bm{k})\,k^2 W(kR)
= \frac{\Laplace\delta_R}{\sqrt{4\pi}},
\label{eq:5-8-6}
\end{equation}
where $\Laplace\delta_R$ is the Laplacian of the smoothed density
field.
In the semi-local models of bias, the tensor field
$F^\mathrm{L}_{Xlm}$ is generally considered as a function of various
irreducible tensors
\begin{equation}
F^\mathrm{L}_{Xlm} =
F^\mathrm{L}_{Xlm}\left(\left\{\chi^{(a)}_{l'm'}\right\}\right)
\label{eq:5-9}
\end{equation}
at every position in (Lagrangian) configuration space. In this case,
the functional derivative in the definition of renormalized bias
functions, Eq.~(\ref{eq:3-18}), can be replaced by
\begin{equation}
(2\pi)^3 \frac{\delta}{\delta\delta_\mathrm{L}(\bm{k})}
\rightarrow
\sum_{a} W^{(a)}(k)
\sum_{l=L_a,L_a-2,\ldots}
(-i)^l
Y_{lm}(\hat{\bm{k}})
\frac{\partial}{\partial \chi^{(a)}_{lm}},
\label{eq:5-10}
\end{equation}
where the integer $L_a$ is the original rank of the linear tensor
$\chi^{(a)}_{i_1\cdots i_{L_a}}$. The summation over $m$ in
Eq.~(\ref{eq:5-10}) is implicitly assumed just as in the rest of this
paper. Therefore, the renormalized bias functions of
Eq.~(\ref{eq:3-18}) are given by
\begin{multline}
c^{(n)}_{Xlm}(\bm{k}_1,\ldots\bm{k}_n)
= \sum_{a_1,\ldots,a_n}
W^{(a_1)}(k_1) \cdots W^{(a_n)}(k_n)
\\ \times
\sum_{l_1,\ldots,l_n}
(-i)^{l_1+\cdots +l_n}
Y_{l_1m_1}(\hat{\bm{k}}_1)
\cdots Y_{l_n,m_n}(\hat{\bm{k}}_n)
\\ \times
\left\langle
\frac{\partial^n F^\mathrm{L}_{Xlm}}
{\partial \chi^{(a_1)}_{l_1m_1} \cdots \partial
\chi^{(a_n)}_{l_nm_n}}
\right\rangle,
\label{eq:5-11}
\end{multline}
where integers $l_i$ ($i=1,\ldots,n$) run over
$l_i = L_{a_i}, L_{a_i}-2, \ldots$, (0 or 1). Comparing the above
equation with Eq.~(\ref{eq:3-51}), we have
\begin{multline}
c^{(n)\,l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)
= (-i)^{l_1+\cdots +l_n}
\\ \times
\sum_{a_1,\ldots,a_n}
W^{(a_1)}(k_1) \cdots W^{(a_n)}(k_n)
\\ \times
g^{(l_1)}_{m_1m_1'} \cdots g^{(l_n)}_{m_nm_n'}
\left\langle
\frac{\partial^n F^\mathrm{L}_{Xlm}}
{\partial \chi^{(a_1)}_{l_1m_1'} \cdots \partial
\chi^{(a_n)}_{l_nm_n'}}
\right\rangle.
\label{eq:5-11-1}
\end{multline}
The renormalized bias functions in the form of Eq.~(\ref{eq:5-11}) are
evaluated for a given model of the biased tensor field, once the
underlying statistics of the fields $\chi^{(a)}_{lm}$ are specified.
These fields depend only linearly on the linear density contrast
$\delta_\mathrm{L}$ through Eqs.~(\ref{eq:5-5a})--(\ref{eq:5-5-2}), and
therefore their statistics follow straightforwardly from those of the
linear density field.
Using Eqs.~(\ref{eq:3-20-1}) and (\ref{eq:5-5-2}), and the
orthonormality relation of the spherical harmonics, Eq.~(\ref{eq:a-5}),
the covariance of the fields (at the same position; the same applies
hereafter) is straightforwardly calculated as
\begin{equation}
\left\langle
\chi^{(a)}_{lm} \chi^{(b)}_{l'm'}
\right\rangle
= \delta_{ll'} \gamma^{ab}_{(l)} g^{(l)}_{mm'},
\label{eq:5-12}
\end{equation}
where
\begin{equation}
\gamma^{ab}_{(l)} \equiv
\frac{1}{4\pi}
\int \frac{k^2dk}{2\pi^2}
W^{(a)}(k) W^{(b)}(k)
P_\mathrm{L}(k).
\label{eq:5-13}
\end{equation}
One can regard the set of parameters $\gamma^{ab}_{(l)}$
as the elements of a matrix $\bm{\gamma}_{(l)}$, whose components
are given by $[\bm{\gamma}_{(l)}]_{ab} = \gamma^{ab}_{(l)}$. We denote
the matrix elements of the inverse matrix as
\begin{equation}
\gamma^{(l)}_{ab} \equiv
\left[
\bm{\gamma}_{(l)}^{-1}
\right]_{ab}.
\label{eq:5-16}
\end{equation}
When one considers a set of variables $\chi^{(a)}_{lm}$ as a vector
with the set of indices $(a,l,m)$, the inverse of the covariance
matrix of Eq.~(\ref{eq:5-12}) is given by
$\delta_{ll'} \gamma^{(l)}_{ab} g_{(l)}^{mm'}$ with our notation.
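For concreteness, the parameters $\gamma^{ab}_{(l)}$ of Eq.~(\ref{eq:5-13}) and the structure of the matrix $\bm{\gamma}_{(l)}$ can be checked numerically. The following Python sketch is purely illustrative: it assumes a toy linear power spectrum $P_\mathrm{L}(k) = k\,e^{-k^2}$ and a Gaussian smoothing window, and uses the two windows $W^{(\varphi)}(k)=W(kR)$ and $W^{(\delta)}(k)=k^2 W(kR)$ of the worked example above.

```python
import math

def gamma_ab(P, windows, kmax=10.0, n=2000):
    """Evaluate gamma^{ab} = (1/4pi) int k^2 dk/(2 pi^2) W^(a) W^(b) P_L
    [Eq. (5-13)] by a midpoint rule."""
    ks = [kmax * (i + 0.5) / n for i in range(n)]
    dk = kmax / n
    na = len(windows)
    g = [[0.0] * na for _ in range(na)]
    for a in range(na):
        for b in range(na):
            s = sum(k * k * windows[a](k) * windows[b](k) * P(k) for k in ks)
            g[a][b] = s * dk / (2 * math.pi**2) / (4 * math.pi)
    return g

# Toy ingredients (assumptions for illustration only).
R = 1.0
P = lambda k: k * math.exp(-k * k)          # hypothetical P_L(k)
W = lambda k: math.exp(-(k * R)**2 / 2)     # Gaussian smoothing window W(kR)
windows = [lambda k: W(k),                  # W^(phi)(k) = W(kR)
           lambda k: k * k * W(k)]          # W^(delta)(k) = k^2 W(kR)

g = gamma_ab(P, windows)
# gamma must form a symmetric, positive-definite matrix, so that the
# inverse matrix of Eq. (5-16) exists.
assert abs(g[0][1] - g[1][0]) < 1e-12
assert g[0][0] > 0 and g[0][0] * g[1][1] - g[0][1]**2 > 0
```

The positive definiteness verified here is what guarantees that the inverse matrix of Eq.~(\ref{eq:5-16}) exists.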
When the initial density field $\delta_\mathrm{L}$ is a Gaussian
random field, the two-point covariance of Eq.~(\ref{eq:5-12}) contains
all the information for the statistical distribution of the variables.
In this case, the distribution function is given by a multivariate
Gaussian distribution function,
\begin{multline}
P_\mathrm{G}\left(\left\{\chi^{(a)}_{lm}\right\}\right)
= \frac{1}{\sqrt{(2\pi)^N \det \bm{\gamma}_{(l)}}}
\\ \times
\exp\left[
-\frac{1}{2} \sum_l \gamma^{(l)}_{ab}\, g_{(l)}^{mm'}
\chi^{(a)}_{lm} \chi^{(b)}_{lm'}
\right],
\label{eq:5-17}
\end{multline}
where the Einstein summation convention is applied to the indices $a$
and $b$ as well as to the azimuthal indices $m$ and $m'$, and thus the
summation over these indices is implicitly assumed in the exponent.
While the variables $\chi^{(a)}_{lm}$ are complex numbers, the exponent
of Eq.~(\ref{eq:5-17}) is a real number. This can be readily shown by
noting the property
$\chi^{(a)\,*}_{lm} = g_{(l)}^{mm'} \chi^{(a)}_{lm'}$, which follows
from Eq.~(\ref{eq:5-5-2}); thus we have
\begin{equation}
\gamma^{(l)}_{ab}\, g_{(l)}^{mm'} \chi^{(a)}_{lm} \chi^{(b)}_{lm'} =
\gamma^{(l)}_{ab}\,\chi^{(a)}_{lm} \chi^{(b)\,*}_{lm},
\label{eq:5-17-1}
\end{equation}
which is a real number because the coefficients $\gamma^{(l)}_{ab}$
are real numbers and symmetric with respect to the indices $a$ and
$b$. Moreover, the above factor is positive as long as the matrix
$\bm{\gamma}_{(l)}$ is positive definite.
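The reality and positivity of the exponent can also be verified numerically. The sketch below is an illustration only: it assumes the metric convention $g_{(l)}^{mm'} = (-1)^m \delta_{m,-m'}$ (the precise convention is fixed elsewhere in this paper) and a hypothetical positive-definite matrix $\gamma^{(l)}_{ab}$.

```python
import random

random.seed(0)
l = 2

# Assumed metric convention: g^{mm'} = (-1)^m delta_{m,-m'}.
def gmetric(m, mp):
    return (-1) ** m if m == -mp else 0

# Build chi^{(a)}_{lm} obeying the reality condition
# chi*_{lm} = (-1)^m chi_{l,-m}, as for a real underlying field.
def random_chi():
    chi = {}
    for m in range(0, l + 1):
        re, im = random.gauss(0, 1), random.gauss(0, 1)
        chi[m] = complex(re, im) if m > 0 else complex(re, 0.0)
        chi[-m] = (-1) ** m * chi[m].conjugate()
    return chi

gamma_inv = [[2.0, 0.3], [0.3, 1.0]]   # hypothetical gamma^{(l)}_{ab}
chis = [random_chi(), random_chi()]

# Quadratic form of Eq. (5-17-1), summed over a, b, m, m'.
Q = sum(gamma_inv[a][b] * gmetric(m, mp) * chis[a][m] * chis[b][mp]
        for a in range(2) for b in range(2)
        for m in range(-l, l + 1) for mp in range(-l, l + 1))
assert abs(Q.imag) < 1e-12 and Q.real > 0   # exponent is real and positive
```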
The expectation value of Eq.~(\ref{eq:5-11}) is calculated by
multivariate Gaussian integrals with the distribution function of
Eq.~(\ref{eq:5-17}), whose evaluation is analytically possible in many
cases when the semi-local bias function of Eq.~(\ref{eq:5-9}) is given
by an analytic function of a finite number of variables. When there is
a small non-Gaussianity in the initial condition, and the linear
density field is not exactly a Gaussian random field, one can evaluate
non-Gaussian corrections to the Gaussian distribution function of
Eq.~(\ref{eq:5-17}). The procedure for deriving the non-Gaussian
corrections is found in
Refs.~\cite{Matsubara:1995wd,Matsubara:2020lyv}, and is
straightforward to apply in this case. However, for the illustrative
examples below, we do not need to include the non-Gaussian corrections
in the evaluations of renormalized bias functions.
The expectation values on the rhs of Eqs.~(\ref{eq:5-11}) and
(\ref{eq:5-11-1}) should be represented by rotationally invariant
variables, just as in the case of the renormalized bias function
$c^{(n)l;l_1\cdots l_n}_{Xm;m_1\cdots m_n}(k_1,\ldots,k_n)$, although
the arguments $k_i$ are not present here. Similarly to
Eqs.~(\ref{eq:3-70}), (\ref{eq:3-71}), (\ref{eq:3-76}) and
(\ref{eq:3-81}), we have
\begin{align}
&
\left\langle
\frac{\partial F^\mathrm{L}_{Xlm}}{\partial\chi^{(a_1)}_{l_1m_1}}
\right\rangle
= \delta_{ll_1} \delta_m^{m_1} i^{l_1} b^{(1:a_1)}_{Xl},
\label{eq:5-18a}\\
&
\left\langle
\frac{\partial^2 F^\mathrm{L}_{Xlm}}
{\partial\chi^{(a_1)}_{l_1m_1}\partial\chi^{(a_2)}_{l_2m_2}}
\right\rangle
= (-1)^l \sqrt{\{l\}} \left(l\,l_1\,l_2\right)_m^{\phantom{m}m_1m_2}
i^{l_1+l_2} b^{(2:a_1a_2)}_{Xl;l_1l_2},
\label{eq:5-18b}\\
&
\left\langle
\frac{\partial^3 F^\mathrm{L}_{Xlm}}
{\partial\chi^{(a_1)}_{l_1m_1}
\partial\chi^{(a_2)}_{l_2m_2}
\partial\chi^{(a_3)}_{l_3m_3}}
\right\rangle
= (-1)^l \sqrt{\{l\}} \sum_L (-1)^L\sqrt{\{L\}}
\nonumber\\
& \hspace{4pc} \times
\left(l\,l_1\,L\right)_m^{\phantom{m}m_1M}
\left(L\,l_2\,l_3\right)_M^{\phantom{M}m_2m_3}
i^{l_1+l_2+l_3} b^{(3:a_1a_2a_3)L}_{Xl;l_1l_2l_3},
\label{eq:5-18c}
\end{align}
and
\begin{multline}
\left\langle
\frac{\partial^3 F^\mathrm{L}_{Xlm}}
{\partial\chi^{(a_1)}_{l_1m_1}\cdots
\partial\chi^{(a_n)}_{l_nm_n}}
\right\rangle
= (-1)^l \sqrt{\{l\}}
\sum_{L_2,\ldots,L_{n-1}}
(-1)^{L_2+\cdots+L_{n-1}}
\\ \times
\sqrt{\{L_2\}\cdots\{L_{n-1}\}}
\left(l\,l_1\,L_2\right)_m^{\phantom{m}m_1M_2}
\\ \times
\left(L_2\,l_2\,L_3\right)_{M_2}^{\phantom{M_2}m_2M_3}
\cdots
\left(L_{n-2}\,l_{n-2}\,L_{n-1}\right)_{M_{n-2}}^{\phantom{M_{n-2}}m_{n-2}M_{n-1}}
\\ \times
\left(L_{n-1}\,l_{n-1}\,l_n\right)_{M_{n-1}}^{\phantom{M_{n-1}}m_{n-1}m_n}
i^{l_1+\cdots +l_n}b^{(n:a_1\cdots a_n)L_2\cdots L_{n-1}}_{Xl;l_1\cdots l_n}.
\label{eq:5-19}
\end{multline}
Therefore, comparing Eq.~(\ref{eq:5-11-1}) with Eqs.~(\ref{eq:3-70}),
(\ref{eq:3-71}), (\ref{eq:3-75}) and (\ref{eq:3-81}), the invariant
functions of the renormalized bias functions are given by
\begin{align}
c^{(1)}_{Xl}(k)
&
= \sum_a b^{(1:a)}_{Xl}W^{(a)}(k),
\label{eq:5-20a}\\
c^{(2)\,l}_{Xl_1l_2}(k_1,k_2)
&
= \sum_{a_1,a_2} b^{(2:a_1a_2)}_{Xl;l_1l_2}
W^{(a_1)}(k_1) W^{(a_2)}(k_2),
\label{eq:5-20b}\\
c^{(3)\,l;L}_{Xl;l_1l_2l_3}(k_1,k_2,k_3)
&
=
\sum_{a_1,a_2,a_3}
b^{(3:a_1a_2a_3)L}_{Xl;l_1l_2l_3}
\nonumber \\
& \qquad \times
W^{(a_1)}(k_1) W^{(a_2)}(k_2) W^{(a_3)}(k_3),
\label{eq:5-20c}
\end{align}
and
\begin{multline}
c^{(n)\,l;L_2\cdots L_{n-1}}_{Xl;l_1\cdots l_n}(k_1,\ldots,k_n)
=
\sum_{a_1,\ldots,a_n}
b^{(n:a_1\cdots a_n)L_2\cdots L_{n-1}}_{Xl;l_1\cdots l_n}
\\ \times
W^{(a_1)}(k_1) \cdots W^{(a_n)}(k_n).
\label{eq:5-21}
\end{multline}
Thus, the renormalized bias functions are given by superpositions
of products of window functions with constant coefficients, which are
determined by a given semi-local model of bias. The scale dependencies
of the invariant renormalized functions are all contained in the
window functions, which are fixed by the construction of the
semi-local model with a finite number of scale-independent parameters.
These properties simplify the calculations of loop corrections in our
applications.
Corresponding to the interchange symmetries of the renormalized bias
functions of second and higher orders, Eqs.~(\ref{eq:3-75-2}),
(\ref{eq:3-80-3}), (\ref{eq:3-80-4}), (\ref{eq:3-86}) and
(\ref{eq:3-87}), the same symmetries for the constant coefficients of
bias are given by
\begin{equation}
b^{(2:a_2a_1)}_{Xl;l_2l_1}
= (-1)^{l+l_1+l_2}
b^{(2:a_1a_2)}_{Xl;l_1l_2}
\label{eq:5-21-1}
\end{equation}
for the second-order bias,
\begin{align}
b^{(3:a_1a_3a_2)L}_{Xl;l_1l_3l_2}
&= (-1)^{l_2+l_3+L}
b^{(3:a_1a_2a_3)L}_{Xl;l_1l_2l_3},
\label{eq:5-21-2a}\\
b^{(3:a_2a_1a_3)L}_{Xl;l_2l_1l_3}
&= (-1)^{l_1+l_2}
\sum_{L'} (2L'+1)
\begin{Bmatrix}
l_1 & l & L \\
l_2 & l_3 & L'
\end{Bmatrix}
b^{(3:a_1a_2a_3)L'}_{Xl;l_1l_2l_3}
\label{eq:5-21-2b}
\end{align}
for the third-order bias, and
\begin{align}
&
b^{(n:a_1\cdots a_{n-2}a_na_{n-1})L_2\cdots L_{n-2}L_{n-1}}_{Xl;l_1\cdots l_{n-2}l_nl_{n-1}}
= (-1)^{l_{n-1}+l_n+L_{n-1}}
b^{(n:a_1\cdots a_n)L_2\cdots L_{n-1}}_{Xl;l_1\cdots l_n},
\label{eq:5-21-3a}\\
&
b^{(n:a_1\cdots a_{i-1}a_{i+1}a_ia_{i+2}\cdots a_n);L_2\cdots L_{i-1}L_{i+1}L_iL_{i+2}\cdots L_{n-1}}
_{Xl;l_1\cdots l_{i-1}l_{i+1}l_il_{i+2}\cdots l_n}
\nonumber\\
&\qquad
= (-1)^{l_i+l_{i+1}}
\sum_{L'} (2L'+1)
\begin{Bmatrix}
l_i & L_i & L_{i+1} \\
l_{i+1} & L_{i+2} & L'
\end{Bmatrix}
\nonumber\\
&\hspace{8pc} \times
b^{(n:a_1\cdots a_n);L_2\cdots L_iL'L_{i+2}\cdots L_{n-1}}_{Xl;l_1\cdots l_n}
\label{eq:5-21-3b}
\end{align}
for higher-order bias with $n>3$, where $i=1,\cdots,n-2$.
As a specific example, consider the loop integral in the
scale-dependent bias of Eq.~(\ref{eq:4-39}). Applying the semi-local
model expression of Eq.~(\ref{eq:5-20b}), the integral is given by
\begin{equation}
\int \frac{p^2dp}{2\pi^2}
c^{(2)l}_{Xl_1l_2}(p,p) P_\mathrm{L}(p)
=
\sum_{a_1,a_2} b^{(2:a_1a_2)}_{Xl;l_1l_2} \sigma^{(a_1a_2)},
\label{eq:5-22}
\end{equation}
where
\begin{equation}
\sigma^{(a_1a_2)} \equiv
\int \frac{k^2dk}{2\pi^2} P_\mathrm{L}(k)
W^{(a_1)}(k) W^{(a_2)}(k).
\label{eq:5-23}
\end{equation}
Thus Eq.~(\ref{eq:4-39}) reduces to
\begin{multline}
\Delta b_{Xl}(k) =
\frac{1}{\sqrt{4\pi}}\,
\frac{2f_\mathrm{NL}^{(l)}}{\mathcal{M}(k)}
\frac{1}{\{l\}}
\sum_{l_1,l_2}
(-1)^{l_2}
\sqrt{\{l_1\}\{l_2\}}
\\ \times
\begin{pmatrix}
l_1 & l_2 & l \\
0 & 0 & 0
\end{pmatrix}
\sum_{a_1,a_2} b^{(2:a_1a_2)}_{Xl;l_1l_2} \sigma^{(a_1a_2)}.
\label{eq:5-24}
\end{multline}
For the simple case of scalar bias in the high-mass limit of the halo
model, comparison of Eq.~(\ref{eq:4-43}) with Eq.~(\ref{eq:5-20b})
shows that only the term with
$b^{(2)\,0}_{\mathrm{h}\delta\delta;00} = (4\pi)^{1/2} \sigma^{-2}
\delta_\mathrm{c} b^\mathrm{L}_1$ survives, and one can readily see
that the result of Eq.~(\ref{eq:4-44}) is reproduced, as a consistency
check.
If we consider a simple model in which the tensor bias is a local
function of only the second derivatives of the gravitational potential
$\partial_i\partial_j\varphi$, the window function is given by
$W^{(\varphi)}(k) = W(kR)$ as shown in Eq.~(\ref{eq:5-8-0}). We can
omit the label $(\varphi)$ in this case, because the linear tensor
field consists of only a single tensor.
Equation~(\ref{eq:5-23}) then simply reduces to
\begin{equation}
\sigma^2 = \int \frac{k^2dk}{2\pi^2}
P_\mathrm{L}(k) W^2(kR),
\label{eq:5-26}
\end{equation}
which is the variance of the smoothed linear density field. The ranks
$l_1$ and $l_2$ take only the values 0 and 2 in the summation of
Eq.~(\ref{eq:5-24}) in this case, and due to the $3j$-symbol
in Eq.~(\ref{eq:5-24}), only the cases $l=0,2,4$ are non-zero.
Substituting concrete values of the $3j$-symbols, Eq.~(\ref{eq:5-24})
is explicitly expanded in a finite number of terms, and the result is
given by
\begin{multline}
\Delta b_{Xl}(k) =
\frac{2\sigma^2}{\sqrt{4\pi}\,\mathcal{M}(k)}
\Biggl[
\delta_{l0}
f^{(0)}_\mathrm{NL}
\left(
b^{(2)}_{0;00} + \sqrt{5} b^{(2)}_{0;22}
\right)
\\
+ \delta_{l2}
f^{(2)}_\mathrm{NL}
\left(
\frac{2}{5} b^{(2)}_{2;02} - \sqrt{\frac{2}{35}} b^{(2)}_{2;22}
\right)
\\
+ \delta_{l4} f^{(4)}_\mathrm{NL}\,
\frac{5}{9}\sqrt{\frac{2}{35}}\,
b^{(2)}_{4;22}
\Biggr],
\label{eq:5-27}
\end{multline}
where we use the interchange symmetry $b^{(2)}_{2;20} = b^{(2)}_{2;02}$
derived from Eq.~(\ref{eq:5-21-1}). Therefore, if the tensor bias is
modeled as a local function of only a second-rank linear tensor field,
the scale-dependent bias of the tensor field is non-zero only when the
rank of the biased tensor is 0, 2, or 4. Due to rotational symmetry,
only a small number of bias parameters, $b^{(2)}_{0;00}$,
$b^{(2)}_{0;22}$, $b^{(2)}_{2;02}$, $b^{(2)}_{2;22}$, and
$b^{(2)}_{4;22}$, appear in this particular semi-local model.
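The numerical coefficients in Eq.~(\ref{eq:5-27}) can be cross-checked by evaluating the $3j$-symbols with vanishing magnetic quantum numbers from their standard closed form. The Python sketch below is an independent check rather than part of the formalism; it assumes the notation $\{l\} = 2l+1$ and evaluates the combination $\{l\}^{-1}\sum_{l_1,l_2}(-1)^{l_2}\sqrt{\{l_1\}\{l_2\}}\,(l_1\,l_2\,l;0\,0\,0)$ appearing in Eq.~(\ref{eq:5-24}).

```python
import math

def threej_zero(j1, j2, j3):
    """Wigner 3j symbol (j1 j2 j3; 0 0 0) from the standard closed form;
    it vanishes unless J = j1+j2+j3 is even and the triangle rule holds."""
    J = j1 + j2 + j3
    if J % 2 == 1 or j3 < abs(j1 - j2) or j3 > j1 + j2:
        return 0.0
    g, f = J // 2, math.factorial
    pref = math.sqrt(f(J - 2*j1) * f(J - 2*j2) * f(J - 2*j3) / f(J + 1))
    return (-1)**g * pref * f(g) / (f(g - j1) * f(g - j2) * f(g - j3))

def coeff(l, pairs):
    """Coefficient multiplying b^{(2)}_{l;l1l2} sigma^2 in Eq. (5-24)."""
    return sum((-1)**l2 * math.sqrt((2*l1 + 1) * (2*l2 + 1))
               * threej_zero(l1, l2, l)
               for l1, l2 in pairs) / (2*l + 1)

# l = 0: b_{0;00} + sqrt(5) b_{0;22}
assert math.isclose(coeff(0, [(0, 0)]), 1.0)
assert math.isclose(coeff(0, [(2, 2)]), math.sqrt(5))
# l = 2: (2/5) b_{2;02} from (0,2)+(2,0), and -sqrt(2/35) b_{2;22}
assert math.isclose(coeff(2, [(0, 2), (2, 0)]), 2 / 5)
assert math.isclose(coeff(2, [(2, 2)]), -math.sqrt(2 / 35))
# l = 4: (5/9) sqrt(2/35) b_{4;22}
assert math.isclose(coeff(4, [(2, 2)]), 5 / 9 * math.sqrt(2 / 35))
```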
The above examples may not sufficiently exhibit the practical merits of
semi-local models of bias. Their advantages become more apparent in the
calculation of loop corrections in higher-order perturbation theory.
The present formalism, making full use of the spherical basis, is quite
compatible with the FFT-PT
framework \cite{Schmittfull:2016jsw,Schmittfull:2016yqx} and the
FAST-PT framework \cite{McEwen:2016fjn,Fang:2016wcf}, which
dramatically reduce the dimensionality of the multidimensional
integrals in the calculation of loop corrections in higher-order
perturbation theory. Loop corrections in the present formalism
will be considered in a separate paper, Paper~II \cite{PaperII}.
\section{\label{sec:Conclusions}
Conclusions
}
In this paper, the formalism of iPT is generalized to calculate
statistics of tensor-valued objects in general, using nonlinear
perturbation theory. Higher-rank tensors are conveniently decomposed
into spherical tensors, which are irreducible representations of the
three-dimensional rotation group SO(3), and mathematical techniques
developed in the theory of angular momentum in quantum mechanics are
effectively applied.
The fundamental formalism of iPT can be recycled without essential
modification; the only difference is that the biased field now carries
tensor values. Because the original formalism of iPT was designed to be
applied to scalar fields, the rotational symmetry in the original
version of the theory is relatively trivial. When the theory is
generalized to include tensor-valued fields, the statistical
properties of biasing are largely constrained by symmetry. In
particular, the renormalized bias functions, which are one of the
important ingredients of iPT, can be represented by a set of functions
invariant under rotations of the coordinate system. Explicit
constructions of these invariants are derived and presented in this
paper. These invariants can also be seen as coefficients of angular
expansions in polypolar spherical harmonics.
The iPT offers a systematic way of calculating the propagators of
nonlinear perturbation theory, both in real space and in redshift
space. The methodology for deriving propagators of tensor-valued
fields is explained in detail, and a few examples of lower-order
propagators are explicitly derived in this paper. For illustrative
purposes, these example propagators are applied to simple cases of
predicting the correlation statistics of general tensor fields,
including the linear power spectra in real space and redshift space,
the lowest-order power spectrum with primordial non-Gaussianity, and
the bispectrum in the tree-level approximation. As a consistency check
in each example, we confirm that each derived formula reproduces the
known result for scalar fields simply by substituting $l=m=0$ in the
general formulas.
In the last section, the concept of semi-local models of bias is
introduced. The formalism of iPT does not assume any particular form of
the bias model, and the biasing can be given by any functional of the
underlying field. The semi-local models of bias are defined so that
the bias functional is a function of a finite number of variables
derived from the linear density field in Lagrangian space. Therefore,
the biasing is modeled by a finite number of parameters in this
category of models. Almost all existing models of bias introduced in
the literature fall into this category, including the halo model, the
peak theory of bias, excursion set peaks, and so forth. In this paper,
only the formal definition of the semi-local models of bias is
described, and rotationally invariant parameters of the models are
identified. Their applications to individual models for concrete
targets will be presented in future work.
In realistic observations of galaxy spins, intrinsic alignments, and
so forth, we can observe only the components of tensors projected onto
the two-dimensional sky. While this paper gives predictions only for
the spatial correlations of tensors in three-dimensional space, it
should be straightforward to transform our results for the invariant
spectra into correlation statistics of projected tensors. Explicit
expressions will also be presented in future work.
In this paper, the power spectrum and bispectrum are evaluated only in
the lowest-order, or tree-level, approximation. Some useful techniques
for calculating higher-order, or loop, corrections are described in the
subsequent Paper~II \cite{PaperII}. We hope that the present formalism
offers a way toward applying cosmological perturbation theory to the
analysis of observations in the era of precision cosmology.
\begin{acknowledgments}
I thank Y.~Urakawa, K.~Kogai, K.~Akitsu, and A.~Taruya for useful
discussions. This work was supported by JSPS KAKENHI Grants
No.~JP19K03835 and No.~21H03403.
\end{acknowledgments}
\section{Introduction}
Since the great success of the Standard Model in recent decades, there
has been no doubt that gauge theories are capable of describing the
interactions between elementary particles. With larger colliding
energies available to probe higher-energy scattering events, the only
remaining piece of the Standard Model, the Higgs boson(s), will also be
discovered at current and future colliders. In addition, new physics
may open up at this high-energy frontier. At these energy scales,
precise predictions from perturbative calculations are crucial for
signal and background estimation, because event topologies become much
more complicated as the colliding energy increases. We have pursued the
automatic computation of Feynman diagrams with the GRACE \cite{grace}
system, since multi-particle final-state processes immediately involve
a huge number of diagram calculations, even though we can, in
principle, calculate them by hand from the Lagrangian in perturbation
theory.
GRACE has satisfied this requirement at the one-loop level
\cite{grcloop} in the electroweak interactions as well as at tree
level, and for the minimal supersymmetric extension of the Standard
Model (MSSM) \cite{grcmssm}. These developments have mostly been aimed
at applications to lepton collisions. The generated codes, however, are
not directly applicable to hadron-collision interactions due to the
presence of a parton distribution function (PDF). Moreover, a given
process in hadron collisions consists of many subprocesses,
corresponding to the incoming partons described by the PDF and the
outgoing partons in jets. In the current scheme, this requires much
time for the diagram calculations. We clearly need an extended
framework of the GRACE system for hadron collisions. Early extensions
can be seen in \cite{abe} and \cite{odaka}.
In order to implement those features specific to hadron collisions, we
have developed an extended framework, called GR@PPA (GRace At
PP/Anti-p). The primary function of GR@PPA is to determine the
initial- and final-state partons, $i.e.$ their flavors and momenta: the
incoming partons by referring to a PDF, and the final-state parton
configuration if the process requires jets or decay products of massive
bosons. Based on the GRACE output codes, GR@PPA calculates the cross
section and generates unweighted parton-level events using BASES/SPRING
\cite{bases}, included in the GRACE system. The GR@PPA framework also
includes an interface to the common data format (LHA) \cite{leshouches},
with the common interface routines proposed at the Les Houches Workshop
in 2001 \cite{tevwork}. To make the events realistic, the unweighted
event data are passed to the showering MC of PYTHIA \cite{pythia} or
HERWIG \cite{herwig}, which implements initial- and final-state
radiation, hadronization, decays, and so forth.
Although the GR@PPA framework is not process-specific and can be
applied to any other process in hadron collisions using the output
codes of the GRACE system\footnote{The extension of GRACE itself to the
hadron collisions is under development.}, we also provide some
primitive processes, packed as a set of matrix elements customized for
this extension. At the moment, the selected processes are the boson(s)
plus n jets and $t\bar{t}$ plus m jets processes, where n(m) accounts
for up to 4(1) jets. These processes are the most important background
processes for Higgs boson searches and SUSY particle searches, as well
as for precision measurements toward the understanding of multi-body
particle dynamics. Our previous work, the four-bottom-quark production
processes (GR@PPA\_4b \cite{grappa_4b}), is also included.
The reasons why we also provide these particular processes, apart from
the benefit of the automatic Feynman diagram calculation by the GRACE
system, are the following. First, kinematical singularities in each
process are treated properly. Since the kinematics are optimized for
good convergence behavior, one obtains a high efficiency for generating
unweighted events without any extra care. This directly affects the
running speed of the program, which is critical for large-scale MC
production. Second, it is easy to adopt higher-order calculations. In
higher-order calculations, the procedure is normally process-specific.
To avoid negative weights in the cancellation between the virtual and
real corrections, one requires the differential cross section to be
positive at the sampled phase-space points. This feature is difficult
to generalize in an automatic Feynman diagram calculation. Once the
customized matrix element for an NLO process is prepared
\cite{kuriharanlo}, one can simply use it. Third, some extensions are
possible through modifications of the framework alone. For example,
using the capabilities of the C++ language, the GR@PPA generators will
work in a C++ environment just by rewriting the framework in C++ while
the rest remains Fortran; this requires only minimal changes to wrap
the Fortran code produced by the GRACE system. A parton shower
algorithm, for example the NLL parton shower \cite{nll}, can also be
implemented by modifying the framework, because the parton shower is
not a process-specific model. Note that the extended GRACE system for
hadron collisions will, at that point, serve to provide a set of matrix
elements apart from the GR@PPA framework.
In this paper, we describe the symbolic treatment of the diagram
calculation adopted in GR@PPA for hadron collisions in the next
section. Some benchmark cross sections and program performances are
presented in Section 3. Our numerical results have been compared with
several generators \cite{alpgen,madgraph,comphep,amegic}, and we found
good agreement with them \cite{mc4lhc2003}. Finally, a summary is given
in Section 4.
\section{Extension of GRACE to $pp$/$p\bar{p}$ collisions}
In hadron-hadron collisions, a given process consists of several
incoherent subprocesses according to the colliding partons in the
hadrons. If the given process has a decay or a jet in the final state,
all possible combinations of the outgoing partons are also taken into
account. The total cross section is thus expressed as a simple
summation over those subprocesses, as given in
Eq.(\ref{eq:xsec}):
\begin{equation}
\sigma = \sum_{i, j, F} \int dx_{1} \int dx_{2} \int d\hat{\Phi}_{F}
f^{1}_{i}(x_{1},Q^{2}) f^{2}_{j}(x_{2},Q^{2})
{ d\hat{\sigma}_{i j \rightarrow F}(\hat{s}) \over d\hat{\Phi}_{F} },
\label{eq:xsec}
\end{equation}
where $f^{a}_{i}(x_{a},Q^{2})$ is the PDF of the hadron $a$ ($p$ or
$\bar{p}$), which gives the probability of finding the parton $i$ with
an energy fraction $x_{a}$ at a probing virtuality of $Q^{2}$. The
differential cross section
$d\hat{\sigma}_{i j \rightarrow F}(\hat{s})/d\hat{\Phi}_{F}$ describes
the parton-level hard interaction producing the final state $F$ from a
collision of the partons $i$ and $j$, where $\hat{s}$ is the square of
the total initial 4-momentum. The sum is taken over all relevant
combinations of $i$, $j$, and $F$. We have made two main developments
in GR@PPA: applying the PDF in the phase-space integration, and sharing
several subprocesses as a single base-subprocess. The former is
described in our previous paper \cite{grappa_4b}. Here, we focus on the
latter.
The original GRACE system assumes that both the initial and final
states are well-defined. Hence, it can be applied only to evaluating
$d\hat{\sigma}_{i j \rightarrow F}(\hat{s})/d\hat{\Phi}_{F}$ and its
integration over the final-state phase space $\hat{\Phi}_{F}$. An
adequate extension is necessary to take into account the variation of
the initial and final states, both in parton species and in their
momenta, in order to make the GRACE system applicable to hadron
collisions. As already mentioned, a "process" of interest is usually
composed of several incoherent subprocesses in hadron interactions.
However, in many cases, the subprocesses differ only in the quark
combination in the initial and/or final states. The matrix elements of
these subprocesses are frequently identical, or differ only in a few
coupling parameters and/or masses. In such cases, it is convenient to
add one more integration/differentiation variable to replace the
summation in Eq.(\ref{eq:xsec}) with an integration. As a result, these
subprocesses can share an identical "GRACE output code" and can be
treated as a single subprocess. This technique simplifies the program
code and greatly reduces the computing time.
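The trick of promoting the subprocess label to one more integration variable can be sketched in a toy Monte Carlo calculation. The Python example below is schematic and uses hypothetical PDFs and constant parton-level cross sections; it only demonstrates that sampling the subprocess label as an extra discrete variable reproduces the explicit sum over subprocesses in Eq.(\ref{eq:xsec}).

```python
import random

random.seed(1)

# Hypothetical ingredients for two "flavors" (illustration only):
# PDFs normalized to unity on [0,1] and constant parton-level cross sections.
f = {0: lambda x: 2 * (1 - x), 1: lambda x: 6 * x * (1 - x)}
sigma_hat = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.2}

def explicit_sum(n=200000):
    """sigma = sum_{i,j} int dx1 dx2 f_i(x1) f_j(x2) sigma_hat_{ij},
    one Monte Carlo integral per subprocess."""
    tot = 0.0
    for (i, j), s in sigma_hat.items():
        acc = sum(f[i](random.random()) * f[j](random.random())
                  for _ in range(n))
        tot += s * acc / n
    return tot

def shared_code(n=800000):
    """The same total with the subprocess label promoted to an extra,
    uniformly sampled discrete variable, so that a single integration
    (one shared 'output code') covers all subprocesses."""
    keys = list(sigma_hat)
    acc = 0.0
    for _ in range(n):
        i, j = random.choice(keys)
        acc += (len(keys) * f[i](random.random()) * f[j](random.random())
                * sigma_hat[(i, j)])
    return acc / n

a, b = explicit_sum(), shared_code()
assert abs(a - b) / a < 0.05   # the two estimates agree within MC errors
```

In the actual GR@PPA implementation, the discrete variable is of course sampled with importance weights by BASES/SPRING rather than uniformly; the sketch only illustrates the bookkeeping.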
The number of combinations of N objects taken from M flavors, allowing
repetition, is in general given by
$_{M}H_{N}$($\equiv$ $\frac{(N+M-1)!}{N!(M-1)!}$). In the case that all
parton flavors are considered,
$M=11$ $(u,d,c,s,b,g,\bar{u},\bar{d},\bar{c},\bar{s},\bar{b})$, the
configuration of the N-jet final state has $_{11}H_{N}$ subprocesses if
we neglect the constraint of total charge conservation in the
subprocess. Clearly, a smaller $M$ decreases the number of
subprocesses. Unless the flavor difference is taken into account in the
process, the flavor configurations can be replaced by generic up-type
and down-type partons and the gluon ($_{5}H_{N}$ $\ll$ $_{11}H_{N}$).
The base-subprocesses are thus configured to contain only those partons
and the gluon. The output code of the matrix element from the GRACE
system is extended to take masses and couplings as input, so that the
masses and couplings are interchanged according to the assigned
flavors. Note that each base-subprocess covers every possible
combination of flavors. Diagram selection in the base-subprocesses
therefore allows all subprocesses to be specified with a proper flavor
configuration. In addition, since the Feynman diagrams within a
Standard Model process are symmetric with respect to momentum (parity)
and charge flips of the initial colliding partons in the CM frame of
the process, the number of subprocesses can be reduced further. In
Table \ref{tab:subproc}, we list the number of all possible
base-subprocesses contributing to the N-jet process in $pp(\bar{p})$
collisions, together with the number of subprocesses counted for all
flavor combinations. The subprocesses are classified according to the
difference in the initial-state parton combination. That is, the
initial parton combinations in the base-subprocesses are
$q_{u}q_{u}(\bar{q_{d}}\bar{q_{d}})$, $q_{u}\bar{q_{d}}$,
$q_{u}g(\bar{q_{d}}g)$, $q_{u}q_{d}$, $q_{u}\bar{q_{u}}(q_{d}\bar{q_{d}})$,
and $gg$, where $q_{u}$ ($q_{d}$) and $g$ denote up(down)-type quarks
and the gluon, respectively. We take them all on the positive side.
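The multiset counting quoted above can be checked directly; a short Python sketch:

```python
from math import comb

def H(M, N):
    """Multiset coefficient _M H_N = (N+M-1)! / (N! (M-1)!): the number
    of ways to choose N partons out of M flavors allowing repetition."""
    return comb(N + M - 1, N)

# N-jet flavor assignments: all 11 flavors versus the generic
# up-type/down-type/gluon reduction used for the base-subprocesses.
for N in range(1, 5):
    assert H(11, N) > H(5, N)
assert H(11, 4) == 1001   # full flavor bookkeeping for 4 jets
assert H(5, 4) == 70      # generic-flavor base-subprocesses for 4 jets
```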
The integration of Eq.(\ref{eq:xsec}) has a weight factor for each
subprocess. If the decay products of resonance particles are taken into
account separately from the final-state partons of the N-jet
configuration, Eq.(\ref{eq:xsec}) can be rewritten as
\begin{equation}
\sigma = \int dw_{i,j,F} \int dx_{1} \int dx_{2} \int d\hat{\Phi}_{F}
w_{i,j,F} \cdot \frac{d\hat{\sigma}_{i j \rightarrow F}^{selected}
(\hat{s};m,\alpha)}{d\hat{\Phi}_{F}} ,
\label{eq:xsec2}
\end{equation}
where $d\hat{\sigma}_{i j \rightarrow F}^{selected}$ is the differential cross
section of each subprocess, with the masses and couplings as input arguments.
The matrix element is supplied by that of the base-subprocess. Based on the
initial- and final-state parton configuration, the graph selection is applied
to this base-subprocess, and the masses and couplings are supplied to the
diagram calculation event by event. The factor $w_{i,j,F}$ is a weight for the
initial- and final-state parton configuration, and can be expressed as
\begin{equation}
w_{i,j,F} = \sum_{i,j,F} f^{1}_{i}(x_{1},Q^{2}) f^{2}_{j}(x_{2},Q^{2})
\cdot |V_{CKM}|^{2K} \cdot
\{Br.(X \rightarrow F') \times \Gamma_{tot}^{X}\}^{L} ,
\label{eq:xsec3}
\end{equation}
where the indices $K$ and $L$ are the numbers of W and $X$ bosons,
respectively. The PDF is responsible for the weight of the initial-state
parton configuration, and a squared coupling normalized by that of the
base-subprocess is responsible for the weight of the final-state parton
configuration. Note that the square of the CKM (Cabibbo-Kobayashi-Maskawa)
\cite{ckm} matrix element, raised to the number of W bosons $K$, remains
after the normalization by the coupling of the base-subprocess. If an $X$
boson is present in the Feynman diagrams and decays into $F'$ without
interference with the other partons, where $F'$ is a subset of the final-state
particles, then this decay fraction is used as the weight factor. The
branching ratio and the total width may be taken from experimental
measurements.
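As a rough illustration of Eq.(\ref{eq:xsec3}), the weight for a single parton configuration is simply the product of the listed factors. The following Python sketch is purely illustrative and is not part of GR@PPA; all names and numerical values are hypothetical:

```python
def subprocess_weight(f1, f2, ckm_sq, n_w, br_times_width, n_x):
    """Weight for one initial/final-state parton configuration:
    PDF factors f_i(x1,Q^2) * f_j(x2,Q^2), times |V_CKM|^2 raised to the
    number of W bosons K, times (Br(X -> F') * Gamma_tot^X) raised to the
    number of X bosons L."""
    return f1 * f2 * ckm_sq ** n_w * br_times_width ** n_x

# Purely illustrative numbers (not taken from GR@PPA):
w = subprocess_weight(f1=0.41, f2=0.27, ckm_sq=0.9492,
                      n_w=1, br_times_width=0.108 * 2.085, n_x=1)
print(w)
```

The full weight of Eq.(\ref{eq:xsec3}) is then the sum of such terms over all initial- and final-state configurations $i,j,F$.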
\section{Results}
The total cross sections estimated by GR@PPA are presented in
Table \ref{tab:xsecsgl} for single-boson plus jets production and in
Table \ref{tab:xsecdbl} for double-boson plus jets production, where all
bosons decay into an electron and a positron ($e^{+}e^{-}$) or an
electron (positron) and an (anti-)electron-neutrino ($e\nu_{e}$). Both are
given for the Tevatron Run-II ($p\bar{p}$ collisions at $\sqrt{s}$
$=$ 1.96 TeV) and LHC ($pp$ collisions at $\sqrt{s}$ $=$ 14 TeV) conditions
with the CTEQ5L \cite{cteq} PDF. The renormalization and factorization scales
($Q^{2}$) are chosen to be identical, and are set to the squared boson mass
for the single-boson production processes and to the sum of the squared boson
masses for the double-boson production processes. Cuts are applied only to
the final-state partons (jets), with the values of
$p_{T}$ $>$ 8.0 GeV, $|\eta|$ $<$ 3.0, and $\Delta$R $>$ 0.4 for the Tevatron
conditions, and $p_{T}$ $>$ 20 GeV, $|\eta|$ $<$ 3.0, and $\Delta$R $>$ 0.4
for the LHC conditions; no cut is applied to the leptons from the boson
decays. The integration accuracy achieved in BASES is better than 1\% for all
processes with the default settings of the number of mappings for the
integration.
The performance of GR@PPA for the W + N jets processes under the Tevatron
Run-II conditions is summarized in Table \ref{tab:xsecspeed}. The tests were
performed on an Intel Xeon 3.4 GHz processor with two different Fortran
compilers: the free g77 ver.2.96 and the commercial Intel Fortran Compiler
ver.8.0. The integration time and the generation speed are shown separately.
Clearly, the commercial compiler is $\sim$ 2.5 times faster than the free one,
but in both cases the times are tolerable for large-scale Monte Carlo
production. The generation efficiencies achieved by SPRING are also shown.
They are of the order of a few percent for most of the processes, which is
exceptionally good for such complicated processes.
\section{Summary}
We have developed an extended framework of the GRACE system for hadron
collisions, named GR@PPA. We have introduced a scheme that shares several
subprocesses as a single base-subprocess. This extension allows us to
incorporate the variation of the initial- and final-state parton
configurations into the GRACE system. The results for some processes with
multi-parton configurations are presented, and we find that the computing
time for the diagram calculations is drastically reduced compared with the
original GRACE system, which assumes that both the initial and final states
are well defined and performs the integration for every single subprocess.
Owing to this capability, we expect the event generator to be suitable not
only for large-scale Monte Carlo production at the high-luminosity hadron
colliders Tevatron and LHC, but also for future NLO calculations, which are
likewise composed of many subprocesses.
\section{Acknowledgements}
The author would like to thank all the people of the Minami-Tateya numerical
calculation group and the ATLAS-Japan Collaboration. The author would also
like to thank the organizers of the Conference.
\section{The result}
\subsection{}
Let us recall the theorem of H. Bohr (see \cite{5}, p. 253, Theorem 11.6(C)): the function $\zeta(s)$ attains every value $a\in\mbb{C}$ except
$0$ infinitely many times in the strip $1<\sigma<1+\delta$. At the same time, this theorem does not allow us to prove anything about the set of the
roots of the equation
\bdis
|\zeta(\sigma_0+it)|=|a|,\ \sigma_0\in (1,1+\delta) ,
\edis
i.e. about the roots lying on every fixed line $\sigma=\sigma_0$. \\
In this paper we will study the more complicated nonlinear equation
\be \label{1.1}
\left|\zeta\left(\frac 12+iu\right)\right|^2\left|\zeta\left(\frac 12+iv\right)\right|^2=\frac 12\zeta(2\sigma)\ln u,\ u,v>0,\ \sigma\geq\alpha>1
\ee
where $\alpha$ is an arbitrary fixed value.
\begin{remark}
The theory of H. Bohr is not applicable to the equation (\ref{1.1}). However, by the method of Jacob's ladders we obtain some information about the set
of approximate solutions of this equation.
\end{remark}
\subsection{}
Let us recall that
\be \label{1.2}
\tilde{Z}^2(t)=\frac{{\rm d}\vp_1(t)}{{\rm d}t},\ \vp_1(t)=\frac 12\vp(t)
\ee
where
\be \label{1.3}
\tilde{Z}^2(t)=\frac{Z^2(t)}{2\Phi'_\vp[\vp(t)]}=\frac{Z^2(t)}{\left\{ 1+\mcal{O}\left(\frac{\ln\ln t}{\ln t}\right)\right\}\ln t} ,
\ee
(see \cite{1}, (3.9); \cite{2}, (1.3); \cite{4}, (1.1), (3.1)), and $\vp(t)$ is the solution of the nonlinear integral equation
\bdis
\int_0^{\mu[x(T)]}Z^2(t)e^{-\frac{2}{x(T)}t}{\rm d}t=\int_0^TZ^2(t){\rm d}t .
\edis
\subsection{}
\begin{mydef2}
If there are sequences $\{ u_n(\sigma_0)\}_{n=0}^\infty,\ \{ v_n(\sigma_0)\}_{n=0}^\infty,\ \sigma_0>1$ for which the following conditions
are fulfilled
\be \label{eqA}
\lim_{n\to\infty} u_n(\sigma_0)=\infty,\ \lim_{n\to\infty}v_n(\sigma_0)=\infty , \tag{A}
\ee
\be \label{eqB}
\frac{\left|\zeta\left(\frac 12+iu_n(\sigma_0)\right)\right|^2\left|\zeta\left(\sigma_0+iv_n(\sigma_0)\right)\right|^2}
{\zeta(2\sigma_0)\ln u_n(\sigma_0)}=\frac 12+o(1) , \tag{B}
\ee
then we call each ordered pair $[u_n(\sigma_0),v_n(\sigma_0)]$ an \emph{asymptotically approximate solution} (AA solution) of the nonlinear
equation (\ref{1.1}).
\end{mydef2}
The following theorem holds true.
\begin{mydef1}
Let us define the continuum set of sequences
\bdis
\{ K_n(T)\}_{n=0}^\infty,\ K_n\geq T_0[\vp_1]
\edis
as follows
\be \label{1.4}
\begin{split}
& K_0=T,\ K_1=K_0+K_0^{1/3+2\epsilon},\ K_2=K_1+K_1^{1/3+2\epsilon}, \dots , \\
& K_{n+1}=K_n+K_n^{1/3+2\epsilon} , \dots \ .
\end{split}
\ee
Then for every $\sigma_0:\ \sigma_0\geq \alpha>1$ there is a sequence $\{ u_n(\sigma_0)\}_{n=0}^\infty,\ u_n(\sigma_0)\in (K_n,K_{n+1})$ such that
\be \label{1.5}
\frac{\left|\zeta\left(\frac 12+iu_n(\sigma_0)\right)\right|^2\left|\zeta\left(\sigma_0+i\vp_1[u_n(\sigma_0)]\right)\right|^2}
{\zeta(2\sigma_0)\ln u_n(\sigma_0)}=\frac 12+\mcal{O}\left(\frac{\ln\ln K_n}{\ln K_n}\right)
\ee
holds true, where $\vp_1[u_n(\sigma_0)]\in (\vp_1(K_n),\vp_1(K_{n+1}))$ and
\be \label{1.6}
\rho\{[K_n,K_{n+1}];[\vp_1(K_n),\vp_1(K_{n+1})]\}\sim (1-c)\pi(K_n)\to\infty
\ee
as $n\to\infty$, where $\rho$ denotes the distance of the segments, $c$ is Euler's constant and $\pi(t)$ is the prime-counting function, i.e. the
ordered pair
\bdis
[u_n(\sigma_0),\vp_1[u_n(\sigma_0)]]; \ \vp_1[u_n(\sigma_0)]=v_n(\sigma_0)
\edis
is the AA solution of the nonlinear equation (\ref{1.1}).
\end{mydef1}
\begin{remark}
Let us point out that the formula (\ref{1.5}) binds together the values of $\zeta(s)$ at three distinct points: at the point $\frac 12+iu_n(\sigma_0)$
lying on the critical line $\sigma=\frac 12$, and at the points $\sigma_0+i\vp_1[u_n(\sigma_0)]$ and $2\sigma_0$ lying in the half-plane
$\sigma\geq\alpha>1$.
\end{remark}
\begin{remark}
Since by eq. (\ref{1.5}) we have
\be \label{1.7}
\left|\zeta\left(\frac 12+iu_n(\sigma_0)\right)\right|^2\sim \frac{\zeta(2\sigma_0)\ln u_n(\sigma_0)}{2}\frac{1}
{\left|\zeta\left(\sigma_0+i\vp_1[u_n(\sigma_0)]\right)\right|^2} ,
\ee
\be \label{1.8}
\left|\zeta\left(\sigma_0+i\vp_1[u_n(\sigma_0)]\right)\right|^2\sim \frac{\zeta(2\sigma_0)\ln u_n(\sigma_0)}{2}\frac{1}
{\left|\zeta\left(\frac 12+iu_n(\sigma_0)\right)\right|^2} ,
\ee
then for two parallel conductors placed at the positions of the lines $\sigma=\frac 12$ and $\sigma=\sigma_0\geq \alpha>1$ a kind of
Faraday law holds true: the sequence of the energies $\{|\zeta(\sigma_0+i\vp_1[u_n(\sigma_0)])|^2\}_{n=0}^\infty$ on $\sigma=\sigma_0$ generates
the sequence of the energies $\{|\zeta(\frac 12+iu_n(\sigma_0))|^2\}_{n=0}^\infty$ on $\sigma=\frac 12$, and vice versa (see (\ref{1.7}), (\ref{1.8});
the energy is proportional to the square of the amplitude of the oscillations).
\end{remark}
\section{The local mean-value theorem}
\subsection{}
Let us recall the \emph{global} mean-value theorem (see \cite{5}, p. 116)
\bdis
\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T |\zeta(\sigma+it)|^2{\rm d}t=\zeta(2\sigma),\ \sigma>1 .
\edis
However, for our purpose, we need the \emph{local} mean-value theorem, i.e. a formula for the integral
\bdis
\int_T^{T+U}|\zeta(\sigma+it)|^2{\rm d}t ,\ \sigma>1 .
\edis
In this direction, the following lemma holds true.
\begin{mydef5}
The formula
\be \label{2.1}
\int_T^{T+U}|\zeta(\sigma+it)|^2{\rm d}t=\zeta(2\sigma)U+\mcal{O}(1)
\ee
holds true uniformly for $T,U>0,\ \sigma\geq\alpha$, where $\alpha>1$ is an arbitrary fixed value. The $\mcal{O}$-constant depends of course on the choice
of $\alpha$.
\end{mydef5}
\begin{remark}
The formula (\ref{2.1}) is an asymptotic formula for $U\geq \ln\ln T$, for example.
\end{remark}
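As a quick numerical sanity check of the Lemma (not part of the proof), one can compare the integral with $\zeta(2\sigma)U$ using the Python library mpmath. The tolerance used below is a rough bound on the $\mcal{O}(1)$ term obtained from the pair sums in the proof; the values $\sigma=3/2$, $T=100$, $U=50$ are an arbitrary test case:

```python
from mpmath import mp, mpc, zeta, quad

mp.dps = 15
sigma, T, U = 1.5, 100, 50

# Split [T, T+U] into short pieces so that quad resolves the
# oscillating terms cos(t*ln(n/m)) of |zeta(sigma+it)|^2.
points = [T + 5 * k for k in range(11)]
integral = quad(lambda t: abs(zeta(mpc(sigma, t))) ** 2, points)

approx = zeta(2 * sigma) * U   # zeta(3) * U; the error is O(1)
print(integral, approx)
```

The dominant oscillating pair $(m,n)=(1,2)$ alone can contribute about $4/(2^{3/2}\ln 2)\approx 2$ to the difference, so the two printed numbers agree only up to an additive constant, as the Lemma states.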
\subsection{Proof of the Lemma}
Starting from the formula
\bdis
\zeta(s)=\zeta(\sigma+it)=\sum_{n=1}^\infty \frac{1}{n^{\sigma+it}}, \ \sigma>1
\edis
we obtain
\bdis
|\zeta(\sigma+it)|^2=\zeta(2\sigma)+\sum_n\sum_{m\not=n}\frac{1}{(mn)^\sigma}\cos\left( t\ln\frac nm\right) ,
\edis
and
\be \label{2.2}
\begin{split}
& \int_T^{T+U}|\zeta(\sigma+it)|^2{\rm d}t=\zeta(2\sigma)U+\mcal{O}\left(\sum_{n}\sum_{m<n}\frac{1}{(mn)^\sigma\ln\frac nm}\right)= \\
& = \zeta(2\sigma)U+S(m<n)
\end{split}
\ee
uniformly for $T,U>0$. Let
\be \label{2.3}
S(m<n)=S\left( m<\frac n2\right)+S\left( \frac n2\leq m<n\right)=S_1+S_2 .
\ee
Since we have $2<\frac nm$ in $S_1$, and
\bdis
\sum_{n=1}^\infty\frac{1}{n^\sigma}=1+\sum_{n=2}^\infty \frac{1}{n^\sigma}<1+\int_1^\infty x^{-\sigma}{\rm d}x=1+\frac{1}{\sigma-1}
\edis
then
\be \label{2.4}
S_1=\mcal{O}\left(\sum_n\sum_{m<\frac n2}\frac{1}{(mn)^\sigma}\right)=\mcal{O}\left\{\left(\sum_{n=1}^\infty\frac{1}{n^\sigma}\right)^2\right\}=
\mcal{O}(1) .
\ee
We put $m=n-r>\frac n2;\ 1\leq r<\frac n2$ in $S_2$ and we obtain as usual
\bdis
\ln\frac nm=\ln\frac{n}{n-r}=-\ln\left( 1-\frac rn\right)>\frac rn .
\edis
Next, we have
\bdis
\begin{split}
& \sum_{n=2}^\infty \frac{\ln n}{n^{2\sigma-1}}=\frac{\ln 2}{2^{2\sigma-1}}+\sum_{n=3}^\infty \frac{\ln n}{n^{2\sigma-1}}<
\frac{\ln 2}{2^{2\sigma-1}}+\int_2^\infty\frac{\ln x}{x^{2\sigma-1}}{\rm d}x= \\
& =\frac{\ln 2}{2^{2\sigma-1}}+\frac{2^{-2\sigma+1}}{\sigma-1}\ln 2+\frac{2^{-2\sigma}}{(\sigma-1)^2}=\mcal{O}(1);\ 2^{-\sigma}\in \left( 0,\frac 12\right) .
\end{split}
\edis
Then
\be \label{2.5}
S_2=\mcal{O}\left(\sum_n\sum_{r=1}^{n/2}\frac{n}{(mn)^\sigma r}\right)=\mcal{O}\left( 2^\sigma\sum_{n=2}^\infty \frac{\ln n}{n^{2\sigma-1}}\right)=\mcal{O}(1) .
\ee
Finally, from (\ref{2.2}) by (\ref{2.3})-(\ref{2.5}) the formula (\ref{2.1}) follows.
\section{Proof of the Theorem}
\subsection{}
Let us recall that the following lemma holds true (see \cite{3}, (2.5); \cite{4}, (3.3)): for every function (integrable in the Lebesgue sense)
$f(x),\ x\in [\vp_1(T),\vp_1(T+U)]$ we have
\be \label{3.1}
\int_T^{T+U}f[\vp_1(t)]\tilde{Z}^2(t){\rm d}t=\int_{\vp_1(T)}^{\vp_1(T+U)}f(x){\rm d}x ,\ U\in \left(\left. 0,\frac{T}{\ln T}\right]\right. ,
\ee
where $t-\vp_1(t)\sim (1-c)\pi(t)$.
\subsection{}
In the case $f(t)=|\zeta(\sigma_0+it)|^2,\ U=U_0=T^{1/3+2\epsilon}$ we obtain from (\ref{3.1}) the following formula
\be \label{3.2}
\int_T^{T+U_0}|\zeta(\sigma_0+i\vp_1(t))|^2\tilde{Z}^2(t){\rm d}t=\int_{\vp_1(T)}^{\vp_1(T+U_0)}|\zeta(\sigma_0+it)|^2{\rm d}t .
\ee
Since (see (\ref{2.1}))
\be \label{3.3}
\begin{split}
& \int_{\vp_1(T)}^{\vp_1(T+U_0)}|\zeta(\sigma_0+it)|^2{\rm d}t=\zeta(2\sigma_0)\{\vp_1(T+U_0)-\vp_1(T)\}+\mcal{O}(1)= \\
& =\frac 12\zeta(2\sigma_0)U_0\tan[\alpha(T,U_0)]+\mcal{O}(1)
\end{split}
\ee
where (see (\ref{1.2}))
\bdis
\frac{\vp_1(T+U_0)-\vp_1(T)}{U_0}=\frac 12\frac{\vp(T+U_0)-\vp(T)}{U_0}=\frac 12\tan[\alpha(T,U_0)] ,
\edis
and (see \cite{3}, (2.6))
\bdis
\tan[\alpha(T,U_0)]=1+\mcal{O}\left(\frac{1}{\ln T}\right) ,
\edis
then (see (\ref{3.3}))
\be \label{3.4}
\int_{\vp_1(T)}^{\vp_1(T+U_0)}|\zeta(\sigma_0+it)|^2{\rm d}t=\frac 12\zeta(2\sigma_0)U_0\left\{ 1+\mcal{O}\left(\frac{1}{\ln T}\right)\right\} .
\ee
\subsection{}
Next, by the first application of the mean-value theorem we obtain (see (\ref{1.3}), (\ref{3.2}))
\be \label{3.5}
\begin{split}
& \int_T^{T+U_0}|\zeta(\sigma_0+i\vp_1(t))|^2\tilde{Z}^2(t){\rm d}t= \\
& =\frac{1}{\left\{ 1+\mcal{O}\left(\frac{\ln\ln\xi_1}{\ln \xi_1}\right)\right\}\ln\xi_1}
\int_T^{T+U_0}|\zeta(\sigma_0+i\vp_1(t))|^2\left|\zf\right|^2{\rm d}t , \\
& \xi_1=\xi_1(\sigma_0;T,U_0)\in (T,T+U_0) ,
\end{split}
\ee
and by the second application of this we obtain
\be \label{3.6}
\begin{split}
& \int_T^{T+U_0}|\zeta(\sigma_0+i\vp_1(t))|^2\left|\zf\right|^2{\rm d}t= \\
& =|\zeta(\sigma_0+i\vp_1(\xi_2))|^2
\left|\zeta\left(\frac 12+i\xi_2\right)\right|^2U_0 , \\
& \xi_2=\xi_2(\sigma_0;T,U_0)\in (T,T+U_0),\ \vp_1(\xi_2)\in (\vp_1(T),\vp_1(T+U_0));\\
& \ln\xi_1\sim\ln\xi_2 .
\end{split}
\ee
Hence, from (\ref{3.5}) by (\ref{3.6}) we have
\be \label{3.7}
\int_T^{T+U_0}|\zeta(\sigma_0+i\vp_1(t))|^2\tilde{Z}^2(t){\rm d}t=
\frac{|\zeta(\sigma_0+i\vp_1(\xi_2))|^2\left|\zeta\left(\frac 12+i\xi_2\right)\right|^2}
{\left\{ 1+\mcal{O}\left(\frac{\ln\ln\xi_2}{\ln \xi_2}\right)\right\}\ln\xi_2}U_0 ,
\ee
and from (\ref{3.2}) by (\ref{3.4}), (\ref{3.7}) we obtain
\be \label{3.8}
\frac{|\zeta(\sigma_0+i\vp_1(\xi_2))|^2\left|\zeta\left(\frac 12+i\xi_2\right)\right|^2}{\zeta(2\sigma_0)\ln\xi_2}=
\frac 12+\mcal{O}\left(\frac{\ln\ln\xi_2}{\ln\xi_2}\right) .
\ee
\subsection{}
Now, if we apply (\ref{3.8}) in the case (see (\ref{1.4}))
\bdis
\begin{split}
& [T,T+U_0]\to [K_n,K_{n+1}];\ \xi_2(\sigma_0)\to\xi_{2,n}(\sigma_0)\in (K_n,K_{n+1}); \\
& \xi_{2,n}(\sigma_0)=u_n(\sigma_0) ,
\end{split}
\edis
then we obtain (\ref{1.5}). The statement (\ref{1.6}) follows from $t-\vp_1(t)\sim (1-c)\pi(t)$.
\section{Concluding remarks}
Let us recall that in \cite{1} we have shown that the following formula holds true
\bdis
\int_0^T Z^2(t){\rm d}t=\vp_1(T)\ln\vp_1(T)+(c-\ln 2\pi)\vp_1(T)+c_0+\mcal{O}\left(\frac{\ln T}{T}\right),\ \vp_1(T)=\frac{\vp(T)}{2} ,
\edis
where $\vp_1(T)$ is the Jacob's ladder. It is clear that $\vp_1(T)$ is the asymptotic solution of the nonlinear transcendental equation
\bdis
\int_0^T Z^2(t){\rm d}t=V(T)\ln V(T)+(c-\ln 2\pi)V(T) .
\edis
\section{Introduction}
\label{Intro}
\subsection{The classical case}
The $n$-th Bernoulli polynomial $B_n(x)$
is implicitly defined as the coefficient of
$t^n$ in the generating function
\begin{equation*}
\frac{te^{xt}}{e^t-1}=\sum_{n=0}^{\infty}B_{n}(x) \frac{t^n}{n!},
\end{equation*}
where $e$ is the base of the natural logarithm.
For $n\ge 0$ the Bernoulli numbers $B_n$ are defined by
$B_{n}=B_{n}(0).$
It is well-known that $B_0=1$, and $B_n = 0$ for every odd integer $n > 1$.
In this paper, $p$ always denotes a prime.
An odd prime $p$ is said to be \textit{B-irregular}
if $p$ divides the numerator of at least one of the Bernoulli numbers $B_{2},B_{4},\ldots,B_{p-3}$,
and \textit{B-regular} otherwise.
The first twenty B-irregular primes are
\begin{align*}
& 37, 59, 67, 101, 103, 131, 149, 157, 233, 257, 263, \\
& 271, 283, 293, 307, 311, 347, 353, 379, 389.
\end{align*}
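For small $p$, B-irregularity can be tested directly from the definition. The following Python sketch (exact rational arithmetic via the standard Bernoulli recurrence; not optimized) reproduces the B-irregular primes below $110$:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n_max):
    """B_0..B_n_max (with B_1 = -1/2) via the recurrence
    sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1."""
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

def is_B_irregular(p, B):
    # p divides the numerator of some B_{2k}, 2 <= 2k <= p-3
    return any(B[2 * k].numerator % p == 0 for k in range(1, (p - 1) // 2))

B = bernoulli_numbers(108)
primes = [p for p in range(3, 110) if all(p % q for q in range(2, p))]
print([p for p in primes if is_B_irregular(p, B)])
# → [37, 59, 67, 101, 103]
```

For instance, $37$ divides the numerator of $B_{32}$ and $103$ divides the numerator of $B_{24}$.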
The notion of B-irregularity has an important application in algebraic number theory. Let $\mathbb{Q}(\zeta_{p})$ be the $p$-th cyclotomic field,
and $h_p$ its class number.
Denote by $h_p^{+}$ the class number of $\Q(\zeta_p+\zeta_p^{-1})$ and
put
$h_p^{-}=h_p / h_p^{+}.$
Kummer proved that $h_p^{-}$ is an integer
(now called the \textit{relative class number} of $\Q(\zeta_p)$)
and gave the following characterization (see \cite[Theorem 5.16]{Wa}).
(Here and in the sequel, for any integer $n \ge 1$, $\zeta_n$ denotes an $n$-th primitive root of unity.)
\begin{theorem}[Kummer]
\label{thm:Kummer}
An odd prime $p$ is B-irregular if and only if $p\mid h_{p}^{-}$.
\end{theorem}
Kummer showed that if $p\nmid h_p^{-},$ then
the Diophantine equation $x^p+y^p=z^p$ does not have an integer solution
$x,y,z$ with $p$ coprime to $xyz,$ cf. \cite[Chapter 1]{Wa}.
We remark that for any odd prime $p$, $p\mid h_{p}^{-}$ if and only if $p\mid h_{p}$; see \cite[Theorem 5.34]{Wa}. The class number $h_{p}^{+}$ is rather more
elusive than $h_{p}^{-}$. Indeed, Vandiver
conjectured that $p\nmid h_{p}^{+}$ for any odd prime $p$.
Jensen \cite{Jensen} was the
first to prove that there are infinitely many B-irregular primes.
More precisely, he showed that there are infinitely many B-irregular primes
not of the form $4n + 1$.
This was generalized by Montgomery \cite{Mon},
who showed that 4 can be replaced by any integer greater than 2.
To the best of our knowledge,
the following result due to Mets\"ankyl\"a \cite{Met2} has not been
improved upon.
\begin{theorem}[Mets\"ankyl\"a \cite{Met2}]
\label{thm:Met}
Given an integer $m>2$, let $\Z_m^*$ be the multiplicative group of the residue classes modulo $m$, and let $H$ be a proper subgroup of $\Z_m^*$.
Then, there exist infinitely many B-irregular primes not lying in the residue classes in $H$.
\end{theorem}
Let ${\mathcal P}_B$ be the set of B-irregular primes.
Carlitz \cite{Carlitz} gave a simple proof of
the infinitude of this set,
and recently Luca, Pizarro-Madariaga and Pomerance \cite[Theorem 1]{Luca} made this more quantitative by showing that
\begin{equation}
\label{eq:Luca}
{\mathcal P}_B(x) \ge (1+o(1)) \frac{\log\log x}{\log\log\log x}
\end{equation}
as $x \to \infty$. (Here and in the sequel, if
$S$ is a set of natural numbers, then $S(x)$ denotes
the number of elements in $S$ not exceeding $x.$)
Heuristic arguments suggest a much stronger
result (consistent with numerical data; see, for instance, \cite{BH,HHO}).
\begin{conjecture}[Siegel \cite{Siegel}] \label{conj:Siegel1}
Asymptotically we have
$$
\cP_B(x)\sim \left( 1-\frac{1}{\sqrt{e}}\right) \pi(x),
$$
where $\pi(x)$ denotes the prime counting function.
\end{conjecture}
The reasoning behind this conjecture is as follows.
We assume that the numerator of $B_{2k}$ is not divisible
by $p$ with probability $1-1/p.$ Therefore, assuming the
independence of divisibility by distinct primes, we expect
that $p$ is B-regular with probability
$$
\left( 1-\frac{1}{p}\right)^{\frac{p-3}{2}},
$$
which with increasing $p$ tends to $e^{-1/2}$.
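This limit is easy to check numerically; for instance, with the (arbitrarily chosen) prime $p=10007$:

```python
from math import exp

def regular_probability(p):
    """Heuristic probability that a prime p is B-regular:
    each of B_2, B_4, ..., B_{p-3} escapes divisibility by p
    with probability 1 - 1/p."""
    return (1 - 1 / p) ** ((p - 3) // 2)

print(regular_probability(10007), exp(-0.5))
```

Already for $p$ of this size the two values agree to roughly four decimal places.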
For any positive integers $a,d$ with $\gcd(a,d)=1$, let $\cP_B(d,a)$
be the set of B-irregular primes congruent to $a$ modulo $d$.
We pose the following conjecture,
which suggests that B-irregular primes are uniformly distributed in arithmetic progressions.
It is consistent with numerical data; see Table \ref{tab:B-irre}.
\begin{conjecture} \label{conj:Siegel2}
For any positive integers $a,d$ with $\gcd(a,d)=1$, asymptotically we have
$$
\cP_B(d,a)(x)\sim \frac{1}{\varphi(d)} \left(1-\frac{1}{\sqrt{e}}\right) \pi(x),
$$
where $\varphi$ denotes Euler's totient function.
\end{conjecture}
Although we know that there are infinitely many
B-irregular primes, it is still not known whether
there are infinitely many B-regular primes!
\subsection{Some generalizations}
In \cite{Carlitz} Carlitz studied (ir)regularity with respect to Euler numbers.
The Euler numbers $E_n$ are a sequence of integers defined by the relation
$$
\frac{2}{e^t+e^{-t}} = \sum_{n=0}^{\infty} E_n \frac{t^n}{n!}.
$$
It is easy to see
that $E_0=1$, and $E_n=0$ for any odd $n\ge 1$.
Euler numbers, just as Bernoulli numbers, can be defined via special polynomial values.
The Euler polynomials $E_{n}(x)$ are implicitly defined as the coefficient of
$t^n$ in the generating function
\begin{equation*} \label{Eu-pol}
\frac{2e^{xt}}{e^t+1}=\sum_{n=0}^\infty E_n(x)\frac{t^n}{n!}.
\end{equation*}
Then, $E_n = 2^n E_n(1/2), n=0,1,2,\ldots.$
An odd prime $p$ is said to be \textit{E-irregular}
if it divides at least one of the integers $E_{2},E_{4},\ldots,E_{p-3}$,
and \textit{E-regular} otherwise.
The first twenty E-irregular primes are
\begin{align*}
& 19, 31, 43, 47, 61, 67, 71, 79, 101, 137, 139, \\
& 149, 193, 223, 241, 251, 263, 277, 307, 311.
\end{align*}
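These values can be reproduced with a short computation. The sketch below uses the secant-number recurrence $\sum_{j=0}^{n}\binom{2n}{2j}E_{2j}=0$ ($n\ge 1$), which follows from the defining generating function:

```python
from math import comb

def euler_numbers(n_max):
    """Even-index Euler numbers E_0, E_2, ..., E_n_max via
    sum_{j=0}^{n} C(2n, 2j) E_{2j} = 0 for n >= 1."""
    E = {0: 1}
    for n in range(1, n_max // 2 + 1):
        E[2 * n] = -sum(comb(2 * n, 2 * j) * E[2 * j] for j in range(n))
    return E

E = euler_numbers(46)

def is_E_irregular(p):
    # p divides some E_{2k}, 2 <= 2k <= p-3
    return any(E[2 * k] % p == 0 for k in range(1, (p - 1) // 2))

primes = [p for p in range(3, 50) if all(p % q for q in range(2, p))]
print([p for p in primes if is_E_irregular(p)])
# → [19, 31, 43, 47]
```

For example, $E_{10}=-50521=-19\cdot 2659$, which exhibits the E-irregularity of $19$.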
Vandiver \cite{Van} proved that if $p$ is E-regular, then
the Diophantine equation $x^p+y^p=z^p$ does not have an integer solution
$x,y,z$ with $p$ coprime to $xyz.$
This criterion makes it possible to discard
many of the exponents Kummer could not handle:
$$
37, 59, 103, 131, 157, 233, 257, 271, 283, 293,\ldots.
$$
Carlitz \cite{Carlitz} showed that there are infinitely many E-irregular primes.
Luca, Pizarro-Madariaga and Pomerance in \cite[Theorem 2]{Luca}
showed that the number of E-irregular primes up to
$x$ satisfies the same lower bound as in \eqref{eq:Luca}.
Regarding their distribution, we currently only know that there are infinitely many E-irregular primes
not lying in the residue classes $\pm 1~({\rm mod~}8)$, which was proven by Ernvall \cite{Ern}.
Like the B-(ir)regularity of primes, their E-(ir)regularity also relates to the
divisibility of class numbers of cyclotomic fields; see \cite{Ern2,Ern3}.
Let $\cP_E$ be the set of E-irregular primes, and let $\cP_E(d,a)$
be the set of E-irregular primes congruent to $a$ modulo $d$.
It was conjectured and tested in \cite[Section 2]{EM} that:
\begin{conjecture} \label{conj:E-irre1}
Asymptotically we have
$$
\cP_E(x)\sim \left(1-\frac{1}{\sqrt{e}}\right)\pi(x).
$$
\end{conjecture}
As in the case of B-irregular primes, we pose the following conjecture.
\begin{conjecture} \label{conj:E-irre2}
For any positive integers $a,d$ with $\gcd(a,d)=1$, asymptotically we have
$$
\cP_E(d,a)(x)\sim \frac{1}{\varphi(d)} \left(1-\frac{1}{\sqrt{e}}\right)\pi(x).
$$
\end{conjecture}
\noindent This conjecture is consistent with computer
calculations;
see Table \ref{tab:E-irre}.
Later on, Ernvall \cite{Ern2,Ern3} introduced $\chi$-irregular primes
and proved the infinitude of such irregular primes for any Dirichlet character $\chi$,
including B-irregular primes and E-irregular primes as special cases.
In addition, Hao and Parry \cite{HP} defined $m$-regular primes for any square-free integer $m$,
and Holden \cite{Holden} defined irregular primes by using the values of zeta functions of
totally real number fields.
In this paper, we
introduce a new kind of irregular prime based on Genocchi
numbers and study
their distribution in
detail.
\subsection{Regularity with respect to Genocchi numbers}
\label{sec:Euler}
The Genocchi numbers $G_n$ are defined by the relation
$$
\frac{2t}{e^t+1} = \sum_{n=1}^{\infty} G_n \frac{t^n}{n!}.
$$
It is well-known that $G_1=1$, $G_{2n+1}=0$ for $n \ge 1$, and $(-1)^nG_{2n}$ is an odd positive integer.
The Genocchi numbers $G_n$ are related to Bernoulli numbers $B_n$ by the formula
\begin{equation} \label{eq:BnGn}
G_n = 2(1-2^n)B_n.
\end{equation}
In view of the definitions of $G_n$ and $E_n(x)$, we directly obtain
\begin{equation} \label{eq:EnGn}
G_n = nE_{n-1}(0), \quad n \ge 1.
\end{equation}
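The relation \eqref{eq:BnGn} gives a convenient way to compute the first Genocchi numbers; the following Python sketch (exact rational arithmetic) reproduces $G_0,\ldots,G_{10}$:

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    """B_0..B_n_max via sum_{k=0}^{n} C(n+1, k) B_k = 0 (B_1 = -1/2)."""
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

B = bernoulli(10)
G = [2 * (1 - 2**n) * B[n] for n in range(11)]   # G_n = 2(1-2^n) B_n
print([int(g) for g in G])
# → [0, 1, -1, 0, 1, 0, -3, 0, 17, 0, -155]
```

Note that every $G_n$ is an integer even though the $B_n$ are not, in agreement with the properties stated above.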
In analogy with Kummer and Carlitz, we here define an odd prime $p$ to be \textit{G-irregular}
if it divides at least one of the integers $G_2,G_4,\ldots, G_{p-3}$, and
\textit{G-regular} if it does not.
The first twenty G-irregular primes are
\begin{align*}
& 17, 31, 37, 41, 43, 59, 67, 73, 89, 97, 101, 103, \\
& 109, 113, 127, 131, 137, 149, 151, 157.
\end{align*}
Clearly, if an odd prime $p$ is B-irregular, then it is also G-irregular.
Recall that a \textit{Wieferich prime} is an odd prime $p$ such that
$2^{p-1}\equiv 1~({\rm mod~}p^2)$, which arose in the study of Fermat's Last Theorem.
So, if an odd prime $p$ is a Wieferich prime, then $p$ divides $G_{p-1}$,
and otherwise it does not divide $G_{p-1}$.
Currently only two Wieferich primes are known, namely
$1093$ and $3511$.
If there are further ones, they are larger than $6.7 \times 10^{15}$; see \cite{DK}.
Both $1093$ and $3511$ are G-irregular primes.
However, $1093$ is B-regular, and $3511$ is B-irregular.
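The Wieferich condition is immediate to test by modular exponentiation; a short Python check (with a naive primality test, adequate for this range) recovers both known examples:

```python
def is_wieferich(p):
    """p is a Wieferich prime iff 2^(p-1) ≡ 1 (mod p^2)."""
    return pow(2, p - 1, p * p) == 1

primes = [p for p in range(3, 4000) if all(p % q for q in range(2, p))]
print([p for p in primes if is_wieferich(p)])
# → [1093, 3511]
```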
As in the classical case, the G-irregularity of primes can
be linked to the divisibility of some class numbers of cyclotomic fields.
Let $S$ be the set of infinite places of $\Q(\zeta_p)$ and $T$ the set of places above the prime 2.
Denote by $h_{p,2}$ the \textit{$(S,T)$-refined class number} of $\Q(\zeta_p)$.
Similarly, let $h_{p,2}^{+}$ be the refined class number of $\Q(\zeta_p+\zeta_p^{-1})$ with respect to its infinite places and places above the prime 2
(for the definition of the refined class number of global fields,
we refer to Gross \cite[Section 1]{Gross} or Hu and Kim \cite[Section 2]{HK}).
Define
$$
h_{p,2}^{-} = h_{p,2} / h_{p,2}^{+}.
$$
It turns out that $h_{p,2}^{-}$ is an integer
(see \cite[Proof of Proposition 3.4]{HK}).
\begin{theorem}
\label{thm:class}
Let $p$ be an odd prime.
Then, if $p$ is G-irregular, we have $p\mid h_{p,2}^{-}$.
If furthermore $p$ is not a Wieferich prime, the converse is also true.
\end{theorem}
\subsubsection{Global distribution of G-irregular primes}
Let $g$ be a non-zero integer. For an odd prime $p\nmid g$, let ${\rm ord}_p(g)$ be the multiplicative
order of $g$ modulo $p$, that is, the smallest positive integer
$k$ such that $g^{k}\equiv 1~({\rm mod~}p)$.
\begin{theorem}
\label{thm:GB}
An odd prime $p$ is G-regular if and only
if it is B-regular and satisfies $\ord_p(4)=(p-1)/2$.
\end{theorem}
Note that $\ord_p(4)\mid (p-1)/2.$ Using quadratic reciprocity it is not difficult to
show (see Proposition~\ref{prop:splitsing}) that if $p\equiv 1~({\rm mod~}8)$,
then $\ord_p(4)\ne (p-1)/2$.
Thus Theorem~\ref{thm:GB} has the following corollary.
\begin{corollary}
\label{cor:allepriem}
Primes $p$ satisfying $p\equiv 1~({\rm mod~}8)$
are G-irregular.
\end{corollary}
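Theorem \ref{thm:GB} can be illustrated numerically for small primes: the set of G-irregular primes computed directly from the definition coincides with the set obtained from the criterion. A Python sketch (exact rational arithmetic, naive primality and order computations):

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    # B_0..B_n_max via sum_{k=0}^{n} C(n+1, k) B_k = 0  (B_1 = -1/2)
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

def order(g, p):
    # multiplicative order of g modulo p
    k, x = 1, g % p
    while x != 1:
        x, k = x * g % p, k + 1
    return k

B = bernoulli(58)
G = [2 * (1 - 2**n) * B[n] for n in range(59)]        # G_n = 2(1-2^n) B_n
primes = [p for p in range(3, 60) if all(p % q for q in range(2, p))]

direct = [p for p in primes
          if any(int(G[2 * k]) % p == 0 for k in range(1, (p - 1) // 2))]
criterion = [p for p in primes
             if any(B[2 * k].numerator % p == 0 for k in range(1, (p - 1) // 2))
             or order(4, p) != (p - 1) // 2]
print(direct)     # → [17, 31, 37, 41, 43, 59]
print(criterion)  # same list, illustrating the theorem
```

Here $17$ and $41$ are G-irregular by Corollary \ref{cor:allepriem}, $37$ and $59$ because they are B-irregular, and $31$ and $43$ because ${\rm ord}_p(4)<(p-1)/2$.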
Although the B-irregular primes are very mysterious, the set
of primes
$p$ such that $\ord_p(4)=(p-1)/2$ is far less so.
Its distributional properties are analyzed in detail in
Proposition \ref{prop:main}. The special case $a=d=1$ in
combination with Theorem \ref{thm:GB}, yields
the following estimate.
\begin{theorem}
\label{totalEirregular}
Let $\cP_G$ be the set of G-irregular primes.
Let $\epsilon>0$ be arbitrary and fixed. Then we have, for every $x$ sufficiently large,
$$
\cP_G(x) > \left( 1-\frac{3}{2}A-\epsilon\right) \frac{x}{\log x},$$
with
$A$ the Artin constant
\begin{equation}
\label{Artinconstantdef}
A= \prod_{\textrm{prime $p$}}\left( 1-\frac{1}{p(p-1)} \right) = 0.3739558136192022880547280543464\ldots .
\end{equation}
\end{theorem}
\noindent Note that $1-3A/2=0.4390662795\ldots.$
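The Artin constant converges quickly as an Euler product over the primes; a partial product over $p<10^6$ already gives about seven correct digits. A Python sketch with a simple sieve:

```python
def artin_constant(limit):
    """Partial Euler product prod_{p < limit} (1 - 1/(p(p-1)))."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b'\x00\x00'
    A = 1.0
    for p in range(2, limit):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
            A *= 1 - 1 / (p * (p - 1))
    return A

print(artin_constant(10**6))  # → 0.3739558…
```

The omitted tail changes the logarithm of the product by roughly $\sum_{p>10^6} 1/(p(p-1)) \approx 10^{-7}$, which explains the accuracy of the truncation.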
Using Siegel's heuristic,
one arrives at the following conjecture.
\begin{conjecture}
\label{con:global}
Asymptotically we have
$$
\cP_G(x)\sim \left( 1-\frac{3A}{2\sqrt{e}}\right) \pi(x)~(
\approx 0.6597765\cdot\pi(x)).
$$
\end{conjecture}
Some numerical evidence for Conjecture \ref{con:global} is given in Section \ref{sec:data}.
The heuristic behind this conjecture is straightforward.
Under the Generalized Riemann Hypothesis (GRH) it can be shown that the set
of primes
$p$ such that
$\ord_p(4)=(p-1)/2$ has density $3A/2$ (Proposition~\ref{prop:main} with $a=d=1$). By
Siegel's heuristic one expects a fraction ${3A}/{(2\sqrt{e})}$ of these to be B-regular.
The conjecture follows on invoking Theorem~\ref{thm:GB}.
\subsubsection{G-irregular primes in prescribed arithmetic progressions}
Using Proposition \ref{prop:main} we can give a
non-trivial lower bound for the number of
G-irregular primes in a prescribed arithmetic
progression. Recall that if $S\subseteq T$
are sets of natural numbers, then the relative density of
$S$ in $T$ is defined as
$${\lim}_{x\rightarrow \infty}\frac{S(x)}{T(x)},$$
if this limit exists.
We use the notations gcd and lcm for
greatest common divisor, respectively least common multiple, but often will
write $(a,b)$, rather than $\gcd(a,b)$.
We also use the ``big O" notation $O$, and we write $O_\rho$ to emphasize the dependence of the implied
constant on some parameter (or a list of parameters) $\rho$.
\begin{proposition}
\label{prop:main}
Given two coprime positive integers $a$ and $d,$ we put
\begin{equation}
\label{cqda}
\cQ(d,a)=\{p>2:~p\equiv a~({\rm mod~}d),~\ord_p(4)=(p-1)/2\}.
\end{equation}
Let $\epsilon>0$ be arbitrary and fixed.
Then, for every $x$ sufficiently large we have
\begin{equation}
\label{eq:underGRH2}
\cQ(d,a)(x)<
\frac{(\delta(d,a)+\epsilon)}{\varphi(d)}\frac{x}{\log x}
\end{equation}
with
$$
\delta(d,a)=c(d,a)R(d,a)A,
$$
where
\begin{equation*}
R(d,a)=2\prod_{p\mid (a-1,d)}\left( 1-\frac{1}{p}\right)
\prod_{p\mid d}\left( 1+\frac{1}{p^2-p-1}\right) ,
\end{equation*}
and
$$
c(d,a)=
\begin{cases}
3/4 & \text{if~}4\nmid d;\\
1/2 & \text{if~}4\mid d,8\nmid d,~a\equiv 1~({\rm mod~}4);\\
1 & \text{if~}4\mid d,8\nmid d,~a\equiv 3~({\rm mod~}4);\\
1 & \text{if~}8\mid d,~a\not\equiv 1~({\rm mod~}8);\\
0 & \text{if~}8\mid d,~a\equiv 1~({\rm mod~}8),
\end{cases}
$$
is the relative density of the primes
$p\not\equiv 1~({\rm mod~}8)$ in the
set of primes $p\equiv a~({\rm mod~}d).$
Under GRH, we have
\begin{equation}
\label{eq:underGRH}
\cQ(d,a)(x)=
\frac{\delta(d,a)}{\varphi(d)}\frac{x}{\log x}
+
O_d\left(\frac{x\log\log x}{\log^2x}\right).
\end{equation}
\end{proposition}
We discuss the connection of this result and
Artin's primitive root conjecture in Section \ref{Artinmethods}.
A numerical demonstration of the
estimate \eqref{eq:underGRH} is given in Section~\ref{sec:data} for
some choices of $a$ and $d$.
By Proposition~\ref{prop:splitsing}, in case $8 | d$ and $a\equiv 1~({\rm mod~}8)$,
we in fact have $\cQ(d,a)=\emptyset$ and so $\cQ(d,a)(x)=0$.
Combining Theorem~\ref{thm:GB} with
Proposition~\ref{prop:main} directly yields the following result.
\begin{theorem}
\label{thm:mainArtin}
Given two coprime positive integers $a$ and $d$, we put
$$
\cP_G(d,a)=\{p:\, p\equiv a~({\rm mod~}d) \textrm{ and $p$ is G-irregular}\}.
$$
Let $\epsilon>0$ be arbitrary and fixed.
For every $x$ sufficiently large, we have
\begin{equation}
\label{dainequality}
\cP_G(d,a)(x)>
\frac{(1-\delta(d,a)-\epsilon)}{\varphi(d)}\frac{x}{\log x},
\end{equation}
where $\delta(d,a)$ is defined in Proposition~\ref{prop:main}.
Under GRH we have
\begin{equation}
\cP_G(d,a)(x)\ge
\frac{(1-\delta(d,a))}{\varphi(d)}\frac{x}{\log x}
+
O_d\left( \frac{x \log\log x}{\log^{2} x} \right) .
\end{equation}
\end{theorem}
Note that
the inequality \eqref{dainequality} (with $a=d=1$) yields Theorem \ref{totalEirregular} as a
special case.
An easy analysis
(see \eqref{eq:delta} in Section \ref{size}) shows that $1-\delta(d,a)>0,$ and so
we obtain the following corollary, which can be compared with Theorem~\ref{thm:Met}.
\begin{corollary}
\label{cor:cor}
Each primitive residue class contains a subset of G-irregular primes having positive density.
\end{corollary}
Moreover, by Corollary~\ref{cor:allepriem} in case $a\equiv 1~({\rm mod~}8)$ and $8|d$
we have $1-\delta(d,a)=1$ and in fact
$\cP_G(d,a)(x)=\pi(x;d,a)$,
where
$$
\pi(x;d,a) = \#\{p\le x:p\equiv a~({\rm mod~}d)\}.
$$
In the remaining cases we have $\delta(d,a) > 0$,
and so $1-\delta(d,a) < 1$.
However, $1-\delta(d,a)$ can be arbitrarily close to 1 (see Proposition~\ref{inf-sup}),
and the same holds
for the relative density of $\cP_G(d,a)$ by Theorem~\ref{thm:mainArtin}.
Corollary \ref{cor:cor} taken by itself is not a deep result and can
be easily proved directly, see Moree and Sha \cite[Proposition 1.6]{MS}.
The reasoning that leads us
to Conjecture \ref{con:global}, combined with the assumption that
B-irregular primes are equidistributed over residue classes with a fixed modulus,
suggests that the following conjecture might be true.
\begin{conjecture}
\label{con:local}
Given two coprime positive integers $a$ and $d$, asymptotically we have
$$\cP_G(d,a)(x)\sim \left( 1-\frac{\delta(d,a)}{\sqrt{e}}\right) \pi(x;d,a),$$ where $\delta(d,a)$ is defined
in Proposition \ref{prop:main}.
\end{conjecture}
Numerical evidence for Conjecture \ref{con:local} is presented in Section \ref{sec:data}.
Note that this conjecture implies Conjecture \ref{con:global} (choosing $a=d=1$).
On observing that $\delta(d,a)=0$ if and only if
$8\mid d$ and $a\equiv 1~({\rm mod~}8)$,
it also implies the following conjecture.
\begin{conjecture}
\label{con:local2}
Consider the subset of G-regular primes in the primitive residue
class $a~({\rm mod~}d)$.
It has a positive density, provided
we are not in the case $8\mid d$ and
$a\equiv 1~({\rm mod~}8)$.
\end{conjecture}
\section{Preliminaries}
In this section, we gather some results which are used later on.
\subsection{Elementary results}
For a primitive Dirichlet character $\chi$ with an odd conductor $f,$
the generalized Euler numbers $E_{n,\chi}$ are defined by (see \cite[Section 5.1]{KH1})
\begin{equation*}
2\sum_{a=1}^{f}\frac{(-1)^a\chi(a)e^{at}}{e^{ft}+1}=\sum_{n=0}^{\infty}E_{n,\chi}\frac{t^n}{n!}.
\end{equation*}
For any odd prime $p$, let $\omega_p$ be the Teichm\"uller character of $\Z/p\Z.$ Any multiplicative character of $\Z/p\Z$ is of the form $\omega_p^k$ for some $1\le k \le p-1$.
In particular, the odd characters are $\omega_p^{k}$ with $k=1,3,\ldots,p-2$.
The following lemma, formulated and proved in the
notation of \cite{KH1}, is an analogue of a well-known result for the generalized Bernoulli numbers; see \cite[Corollary 5.15]{Wa}.
\begin{lemma}
\label{p4}
Suppose that $p$ is an odd prime and $k,n$
are non-negative integers. Then $E_{k,\omega_p^{n-k}}\equiv E_{n}(0) ~({\rm mod~}p).$
\end{lemma}
\begin{proof}
By \cite[Proposition 5.4]{KH1}, for arbitrary integers $k,n\geq 0$, we have
\begin{equation}\label{(1)}
E_{k,\omega_p^{n-k}}=\int_{\mathbb{Z}_{p}}\omega_p^{n-k}(a)a^{k}d\mu_{-1}(a).
\end{equation}
By \cite[Proposition 2.1 (1)]{KH1}, $E_{n}(0)$ also can be expressed as a $p$-adic integral,
namely
\begin{equation}\label{(2)}
E_{n}(0)=\int_{\mathbb{Z}_{p}}a^{n}d\mu_{-1}(a).
\end{equation}
Since $\omega_p(a)\equiv a ~({\rm mod~}p)$, we have
\begin{equation}\label{(3)}
\begin{aligned}
\omega_p^{n-k}(a)\equiv a^{n-k} ~({\rm mod~}p) \qquad \textrm{and} \qquad \omega_p^{n-k}(a)a^{k}\equiv a^{n} ~({\rm mod~}p).
\end{aligned}
\end{equation}
From \eqref{(1)}, \eqref{(2)} and \eqref{(3)}, we deduce that
$$
E_{k,\omega_p^{n-k}}-E_{n}(0)=\int_{\mathbb{Z}_{p}}(\omega_p^{n-k}(a)a^{k}-a^{n})d\mu_{-1}(a)\equiv
0~({\rm mod~}p).\qedhere
$$
\end{proof}
Recall that $\cQ(d,a)$ is defined
in \eqref{cqda}. For ease of notation
we put
\begin{equation}
\label{Qdef}
\cQ=\cQ(1,1)=
\{p>2:\ord_p(4)=(p-1)/2\}.
\end{equation}
To understand the distribution of the
primes in $\cQ,$ it turns out to be very useful to
consider their residues modulo $8$.
\begin{proposition}
\label{prop:splitsing}
For $j=1,3,5,7$ we put
$\cQ_j=\cQ(8,j).$
We have $$\cQ=\cQ_1\cup \cQ_3 \cup \cQ_5 \cup \cQ_7,$$
with $\cQ_1=\emptyset,$
and, for $j=3,5,$
$$\cQ_j=\{p: p \equiv j~({\rm mod~}8),~\ord_p(2)=p-1\},$$
and, furthermore,
$$\cQ_7=\{p: p \equiv 7~({\rm mod~}8),~ \ord_p(2)=(p-1)/2\}.$$
\end{proposition}
\begin{proof}
If $p\equiv 1~({\rm mod~}8)$, then
by quadratic reciprocity $2^{(p-1)/2}
\equiv 1~({\rm mod~}p)$, and we
conclude that $\ord_p(4)\mid (p-1)/4$
and hence $\cQ_1=\emptyset$.
Note that
$$
\ord_p(4)=
\begin{cases}
\ord_p(2) & {\rm ~if~}\ord_p(2){\rm ~is~odd};\\
\ord_p(2)/2 & {\rm ~otherwise}.
\end{cases}
$$
In case $p\equiv \pm 3 ~({\rm mod~}8),$
we have $2^{(p-1)/2}
\equiv -1~({\rm mod~}p)$, and so
$\ord_p(2)$ must be even. The
assumption that $p$ is in $\cQ$ now
implies that $\ord_p(2)=2\cdot \ord_p(4)=
p-1.$
In case $p\equiv 7 ~({\rm mod~}8),$
we have $2^{(p-1)/2}
\equiv 1~({\rm mod~}p)$, and so
$\ord_p(2)$ must be odd. The
assumption that $p$ is in $\cQ$ now
implies that $\ord_p(2)=
\ord_p(4)=(p-1)/2$.
\end{proof}
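Proposition~\ref{prop:splitsing} is easy to check by machine for small primes. The following sketch (illustrative only; it uses a naive multiplicative-order routine and is not meant to be efficient) verifies the mod-$8$ description of $\cQ$:

```python
def primes_up_to(n):
    """Primes <= n via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, s in enumerate(sieve) if s]

def mult_order(g, p):
    """Multiplicative order of g modulo the odd prime p (naive)."""
    k, x = 1, g % p
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

def check_splitting(limit):
    """Verify the mod-8 description of Q = {p > 2 : ord_p(4) = (p-1)/2}."""
    for p in primes_up_to(limit):
        if p == 2:
            continue
        in_q = mult_order(4, p) == (p - 1) // 2
        j = p % 8
        if j == 1:
            assert not in_q                                   # Q_1 is empty
        elif j in (3, 5):
            assert in_q == (mult_order(2, p) == p - 1)        # 2 primitive root
        else:  # j == 7
            assert in_q == (mult_order(2, p) == (p - 1) // 2)
    return True
```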
\subsection{The size of $\delta(d,a)$}
\label{size}
In this section we study the
extremal behaviour of the quantity $\delta(d,a)$ defined in Proposition~\ref{prop:main}.
We put
$$
G(d)=\prod_{p \mid d}\left(1+\frac{1}{p^2-p-1}\right),\qquad F(d) = \frac{\varphi(d)}{d}G(d).
$$
An easy calculation gives that
$$
G(d)=\frac{1}{A}\prod_{p \nmid d}\left(1-\frac{1}{p(p-1)}\right)=\frac{1}{A}\left(1+O(\frac{1}{q})\right),
$$
where $q$ is the smallest prime not dividing $d$.
Trivially, $G(d)<1/A$, and $G(d)<1/(2A)$ when $d$ is odd.
It is a classical result (see, for instance, \cite[Theorem 13.14]{Apostol}), that
$$
\liminf_{d \to \infty} \frac{\varphi(d)}{d} \log\log d = e^{-\gamma},
$$
where $\gamma$ is the Euler-Mascheroni constant ($\gamma=0.577215664901532\ldots$).
The proof is in essence an application of
Mertens' theorem (see \cite[Theorem 13.13]{Apostol})
\begin{equation}
\label{mertens}
\prod_{p\le x}\left( 1-\frac{1}{p}\right) \sim \frac{e^{-\gamma}}{\log x}.
\end{equation}
An easy variation of the latter proof yields
$$
\liminf_{d \to \infty} AF(d) \log\log d = e^{-\gamma}.
$$
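Both limits can be illustrated numerically. The sketch below (illustrative only; the infinite products are truncated at $10^5$, and `af_loglog` evaluates $e^{\gamma}AF(d_n)\log\log d_n$ along primorials $d_n=\prod_{p\le n}p$) confirms that the relevant ratios are close to $1$:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, s in enumerate(sieve) if s]

def mertens_ratio(x):
    """prod_{p<=x}(1 - 1/p) * e^gamma * log x, which tends to 1."""
    prod = 1.0
    for p in primes_up_to(x):
        prod *= 1.0 - 1.0 / p
    return prod * math.exp(GAMMA) * math.log(x)

def af_loglog(n, tail=10**5):
    """e^gamma * A * F(d_n) * loglog(d_n) for the primorial d_n = prod_{p<=n} p.

    A*F(d) = (phi(d)/d) * prod_{p not dividing d}(1 - 1/(p(p-1))),
    with the infinite tail product truncated at `tail`."""
    af, logd = 1.0, 0.0
    for p in primes_up_to(tail):
        if p <= n:
            af *= 1.0 - 1.0 / p
            logd += math.log(p)
        else:
            af *= 1.0 - 1.0 / (p * (p - 1))
    return af * math.exp(GAMMA) * math.log(logd)
```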
Recall that
\begin{equation*}
R(d,a)=2G(d)\prod_{p \mid b}
\left(1-\frac{1}{p}\right)=2G(d)\frac{\varphi(b)}{b}, \text{ with } b=(a-1,d).
\end{equation*}
Note that $$2F(d)\le R(d,a)\le 2G(d)/(2,d)<1/A,$$
and hence $\delta(d,a)=0$ or
\begin{equation}
\label{eq:delta}
0<AF(d)\le \delta(d,a)\le 2AG(d)/(2,d)<1.
\end{equation}
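The inequality chain behind \eqref{eq:delta} can be spot-checked numerically. The sketch below (illustrative only; $A$ is approximated by a truncated Euler product, and the chosen pairs $(d,a)$ are arbitrary samples) computes $G(d)$, $F(d)$ and $R(d,a)$ and tests $2F(d)\le R(d,a)\le 2G(d)/(2,d)<1/A$:

```python
from math import gcd

def prime_factors(n):
    ps, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def phi(n):
    out = n
    for p in prime_factors(n):
        out -= out // p
    return out

def G(d):
    out = 1.0
    for p in prime_factors(d):
        out *= 1.0 + 1.0 / (p * p - p - 1)
    return out

def F(d):
    return phi(d) / d * G(d)

def R(d, a):
    b = gcd(a - 1, d)  # note: gcd(0, d) = d covers the case a = 1
    return 2.0 * G(d) * phi(b) / b

# Artin's constant, truncated Euler product over primes below 10^5
A, sieve = 1.0, bytearray([1]) * 10 ** 5
for q in range(2, 10 ** 5):
    if sieve[q]:
        sieve[q * q::q] = bytearray(len(sieve[q * q::q]))
        A *= 1.0 - 1.0 / (q * (q - 1))

for d, a in [(3, 2), (4, 1), (8, 7), (12, 11), (20, 9), (24, 13)]:
    assert 2 * F(d) <= R(d, a) + 1e-12
    assert R(d, a) <= 2 * G(d) / gcd(2, d) + 1e-12
    assert 2 * G(d) / gcd(2, d) < 1 / A
```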
\begin{proposition} \label{inf-sup}
We have
$$
\liminf_{d\to \infty} \min_{\substack{1\le a < d \\ (a,d)=1\\ \delta(d,a)>0}} \delta(d,a)\log\log d = e^{-\gamma}
\qquad \text{and} \qquad
\limsup_{d\to \infty} \max_{\substack{1\le a < d \\ (a,d)=1}} \delta(d,a) = 1.
$$
\end{proposition}
\begin{proof}
From the above remarks it follows that
the limit inferior and the limit superior
are at least $e^{-\gamma}$ and at most $1$, respectively.
We consider two infinite families of pairs $(a,d)$
to show that these bounds are actually sharp.
Let $n\ge 3$ be
arbitrary. Put $d_n=\prod_{3\le p \le n} p$.
We have $c(4d_n,1)=1/2$ and
$$
\delta(4d_n,1)=(1+o(1))\prod_{2\le p\le n}(1-1/p), \quad (n\rightarrow \infty)
$$
by Proposition \ref{prop:main}.
Using Mertens' theorem \eqref{mertens}
and the prime number theorem, we deduce that
$$
\delta(4d_n,1)\sim \frac{e^{-\gamma}}{\log n}\sim
\frac{e^{-\gamma}}{\log \log (4d_n)},\quad (n\rightarrow \infty),
$$
and so the limit inferior actually equals
$e^{-\gamma}.$
Put
$$a_n=
\begin{cases}
2+3d_n& \text{~if~}d_n\equiv 7 ~({\rm mod}~8);\\
2+d_n & \text{~otherwise}.
\end{cases}
$$
We have $a_n\not\equiv 1 ~({\rm mod}~8),$ $1\le a_n<8d_n,$ $(a_n,8d_n)=1,$ and
$(a_n-1,8d_n)$ is a power of two. We infer
that $R(8d_n,a_n)=1/A+O(1/n)$ and $c(8d_n,a_n)=1,$
and so $\delta(8d_n,a_n)=1+O(1/n),$
showing that the limit superior equals $1$.
\end{proof}
The two constructions in the above proof
are put to the test
in Table~\ref{tab:cRA}.
The table also gives an idea of how fast
the lower bound $1-\delta(4d_n,1)$
for the relative density of the set $\cP_G(4d_n,1)$
established in Theorem~\ref{thm:mainArtin}, tends to $1$.
\begin{table}
\centering
\caption{Some values of $\delta(4d_n,1)$ and $\delta(8d_n,a_n)$}
\label{tab:cRA}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $10^3$ & $10^4$ & $10^5$ & $10^6$ & $10^7$ \\ \hline
$\delta(4d_n,1)\approx $ & 0.080954 & 0.060884 & 0.048752 & 0.040638 & 0.034833 \\ \hline
$\delta(4d_n,1)e^{\gamma}\log \log (4d_n)\approx $ &
0.989659 & 0.997633 & 0.999422 & 0.999851 & 0.999960 \\ \hline
$\delta(8d_n,a_n) \approx $ & 0.999872 & 0.999990 & 0.999999 & 0.9999999 & 0.99999999 \\ \hline
\end{tabular}
\end{table}
\section{Some results related to Artin's primitive root conjecture}
\label{Artinmethods}
It is natural to wonder whether the set
$\cQ,$ see \eqref{Qdef}, is infinite or not. This is closely
related to Artin's primitive root conjecture,
which states that if $g$ is neither $-1$ nor a square, then
$\ord_p(g)=p-1$ for infinitely many primes $p$ (this order is maximal by Fermat's little theorem).
In case $g$ is a square, the maximal possible order is
$(p-1)/2$, and one can wonder whether it is attained
infinitely often.
If this is so for $g=4,$ then our
set $\cQ$ is infinite. We now go into a bit
more technical detail.
We say that a set of primes $\cP$ has
density $\delta(\cP)$ and satisfies a Hooley type
estimate, if
\begin{equation}
\label{hooleytype}
\cP(x)=\delta(\cP)\frac{x}{\log x}+
O\left(\frac{x\log\log x}{\log^2x}\right),
\end{equation}
where the implied constant may depend on $\cP$.
Let $g\not \in\{-1,0,1\}$ be
an integer.
Put
$${\mathcal P}_g=\{p: \ord_p(g)=p-1\}.$$
Artin conjectured in 1927 that this set is infinite when $g$ is not a square, and he also conjectured a density for it.
To this day, this conjecture is open; see \cite{Moree} for a survey.
Hooley \cite{Hooley1} proved in 1967
that if the Riemann Hypothesis holds for the number fields
$\Q(\zeta_n,g^{1/n})$ with all square-free $n$ (this is a weaker
form of the GRH), then the
estimate \eqref{hooleytype} holds for the set $\cP_g$ with
\begin{equation*}
\delta(g):=\delta(\cP_g)=\sum_{n=1}^\infty
\frac{\mu(n)}{[\Q(\zeta_n,g^{1/n}):\Q]},
\end{equation*}
where $\mu$ is the M{\"o}bius function;
and he also showed that $\delta(g)/A$ is rational
and explicitly determined its value,
with $A$ the Artin constant (see \eqref{Artinconstantdef}).
For example, in case $g=2$ we have $\delta(2)=A$.
By the Chebotarev density theorem, the density of primes
$p\equiv 1 ~({\rm mod~}n)$ such that $\ord_p(g)\mid (p-1)/n$
is equal to $1/[\Q(\zeta_n,g^{1/n}):\Q]$.
Note that in order to ensure that
$\ord_p(g)=p-1$, it is enough
to show that there is no prime $q$
such that $\ord_p(g)\mid (p-1)/q$.
By inclusion and exclusion we
are then led to expect that
the set $\cP_g$ has natural
density $\delta(g)$. The problem
with establishing this rigorously is
that the Chebotarev density theorem
only allows one to take finitely many
splitting conditions into account. Let
us now consider which result we can obtain on
restricting to the primes $q\le y.$
Put
\begin{equation}
\label{classicalhooley2}
\delta_y(g)=\sum_{P(n)\le y}
\frac{\mu(n)}{[\Q(\zeta_n,g^{1/n}):\Q]},
\end{equation}
where $P(n)$ denotes the largest prime factor of $n.$
Now we may apply
the Chebotarev density theorem and
we obtain
that
\begin{equation}
\label{deltayg}
\cP_g(x)\le
(\delta_y(g)+\epsilon)\frac{x}{\log x},
\end{equation}
where $\epsilon>0$ is arbitrary and
$x, y$ are sufficiently large (where sufficiently large may depend on the
choice of $\epsilon$).
Completing the sum in
\eqref{classicalhooley2} and using that
$[\Q(\zeta_n,g^{1/n}):\Q]\gg_{g}n\varphi(n)$ (see \cite[Proposition 4.1]{Wagstaff}) and the classical estimate
$\varphi(n)^{-1}=O((\log \log n)/n)$,
we obtain that
$$
\delta_y(g)=\delta(g)+O_g\left(\sum_{n\ge y}
\frac{1}{n\varphi(n)}\right)=\delta(g)+O_g\left(\frac{\log \log y}{y} \right).
$$
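For $g=2$ the degrees are simply $[\Q(\zeta_n,2^{1/n}):\Q]=n\varphi(n)$ for square-free $n$ (as $8\nmid n$), so $\delta_y(2)$ equals the finite Euler product $\prod_{p\le y}(1-1/(p(p-1)))$ and $\delta(2)=A$. The sketch below (illustrative only; this degree formula is specific to $g=2$) compares the defining sum with the product and illustrates the convergence $\delta_y(2)\to A$:

```python
import math
from itertools import combinations

def delta_y_sum(primes):
    """sum_{P(n)<=y} mu(n)/(n*phi(n)) over square-free n built from `primes`;
    for such n, phi(n) = prod(p - 1) and mu(n) = (-1)^(number of prime factors)."""
    total = 0.0
    for r in range(len(primes) + 1):
        for subset in combinations(primes, r):
            n = math.prod(subset)
            ph = math.prod(p - 1 for p in subset)
            total += (-1) ** r / (n * ph)
    return total

def delta_y_product(primes):
    """The same quantity as a finite Euler product."""
    out = 1.0
    for p in primes:
        out *= 1.0 - 1.0 / (p * (p - 1))
    return out
```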
On combining this with
\eqref{deltayg} we obtain the
estimate
\begin{equation}
\label{deltag}
\cP_g(x)\le
(\delta(g)+\epsilon)\frac{x}{\log x},
\end{equation}
where $\epsilon>0$ is arbitrary and
$x$ is sufficiently large
(where sufficiently large may depend on the
choices of $\epsilon$ and $g$).
For any integer $g \not\in \{-1,0,1\}$ and any integer $t \ge 1$, put
$$
{\mathcal P}(g,t)=\{p: \,
p\equiv 1 ~({\rm mod~}t),~ \ord_p(g)=(p-1)/t\}.
$$
Now, if the Riemann Hypothesis holds for the number fields
$\Q(\zeta_{nt},g^{1/nt})$ with all square-free $n$, then
Hooley's proof can be easily extended, resulting in the
estimate \eqref{hooleytype}
for the set $\cP(g,t)$
with density
\begin{equation}
\label{classicalwagstaff}
\delta(g,t)=\sum_{n=1}^\infty
\frac{\mu(n)}{[\Q(\zeta_{nt},g^{1/nt}):\Q]},
\end{equation}
and with $\delta(g,t)/A$ a rational number; see \cite{Lenstra}.
The density $\delta(g,t)$ was first computed
explicitly by Wagstaff \cite[Theorem 2.2]{Wagstaff};
nowadays this can be done much more
compactly and elegantly
using the character sum method of Lenstra et al.~\cite{LMS}.
By Wagstaff's work \cite{Wagstaff} we have
$\delta(\cQ)=\delta(4,2)=3A/2.$
Alternatively it is an easy and instructive calculation
to determine $\delta(4,2)$ oneself. Since
$\sqrt{2}\in \mathbb Q(\zeta_n)$ if and only if $8 \mid n$, we see
that if $4\nmid n,$ then
$[\Q(\zeta_{2n},2^{1/2n}):\Q]=\varphi(2n)n$ and
so by \eqref{classicalwagstaff},
$$
\delta(4,2)=\sum_{n=1}^{\infty}\frac{\mu(n)}{\varphi(2n)n}=\sum_{2\nmid n}^{\infty}\frac{\mu(n)}{\varphi(n)n}
+\sum_{2\mid n}^{\infty}\frac{\mu(n)}{2\varphi(n)n}=
\frac{3}{4}\sum_{2\nmid n}^{\infty}\frac{\mu(n)}{\varphi(n)n}=\frac{3}{2}A,
$$
where we use the fact that
$$
\sum_{n=1\atop (m,n)=1}^{\infty}\mu(n)f(n)=\prod_{p\nmid m}(1-f(p))
$$
certainly holds true if the sum is absolutely convergent and $f$ is a multiplicative function defined on the square-free integers (cf.
Moree and Zumalac\'arregui \cite[Appendix A.1]{Zuma},
where a similar problem with $g=9$
instead of $g=4$ is considered).
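The computation above is easy to double-check numerically: truncating the defining sum $\sum_n \mu(n)/(\varphi(2n)n)$ at $n\le 10^4$ should reproduce $\frac32 A$ up to a small tail error. A sketch (illustrative only; the Artin constant is approximated by a truncated Euler product):

```python
def phi_sieve(n):
    """Euler's phi for 0..n via the classical sieve."""
    ph = list(range(n + 1))
    for p in range(2, n + 1):
        if ph[p] == p:  # p is prime
            for m in range(p, n + 1, p):
                ph[m] -= ph[m] // p
    return ph

def mobius_sieve(n):
    """Moebius function for 0..n."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False
            for m in range(p, n + 1, p):
                mu[m] = -mu[m]
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0
    return mu

N = 10 ** 4
ph, mu = phi_sieve(2 * N), mobius_sieve(N)
delta42 = sum(mu[n] / (ph[2 * n] * n) for n in range(1, N + 1))

# Artin's constant A = prod_p (1 - 1/(p(p-1))), truncated at 10^4
A, sieve = 1.0, bytearray([1]) * N
for p in range(2, N):
    if sieve[p]:
        sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
        A *= 1.0 - 1.0 / (p * (p - 1))
```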
The following result generalizes the above to
the case where we require the primes in
${\mathcal P}(g,t)$ to also be in some prescribed
arithmetic progression. It follows from the work of Lenstra \cite{Lenstra}, who introduced Galois theory
into the subject.
\begin{theorem}
\label{lenstrarekenkundigerij}
Let $1\le a\le d$ be coprime integers.
Let $t\ge 1$ be an integer.
Put
$${\mathcal P}(g,t,d,a)=\{p:\,
p\equiv 1~({\rm mod~}t),~p\equiv a~({\rm mod~}d),~\ord_p(g)=(p-1)/t\}.$$
Let
$\sigma_a$ be the automorphism of $\mathbb Q(\zeta_d)$ determined by
$\sigma_a(\zeta_d)=\zeta_d^a$. Let $c_a(m)$ be $1$ if the restriction
of $\sigma_a$ to the field $\mathbb Q(\zeta_d)\cap \mathbb Q(\zeta_m,g^{1/m})$
is the identity and $c_a(m)=0$ otherwise. Put
\begin{equation*}
\delta(g,t,d,a)=\sum_{n=1}^{\infty}\frac{\mu(n)c_a(nt)}{[\mathbb Q(\zeta_d,\zeta_{nt},g^{1/nt}):\mathbb Q]}.
\end{equation*}
Then, assuming RH for all
number fields $\Q(\zeta_d,\zeta_{nt},g^{1/nt})$ with
$n$ square-free,
we have
\begin{equation}
\label{hooleygeneral}
{\mathcal P}(g,t,d,a)(x)=\delta(g,t,d,a)\frac{x}{\log x}+
O_{g,t,d}\left(\frac{x\log\log x}{\log^2x}\right).
\end{equation}
Unconditionally we have the weaker
statement that
\begin{equation}
\label{deltagtfa}
{\mathcal P}(g,t,d,a)(x) \le
(\delta(g,t,d,a)+\epsilon)\frac{x}{\log x},
\end{equation}
where $\epsilon>0$ is arbitrary and
$x$ is sufficiently large (where sufficiently large may depend on the
choice of $\epsilon,g,t,d$ and $a$).
\end{theorem}
It seems that this result has
not been formulated in the literature before.
It is a simple combination of two cases, each of which
has been intensively studied, namely the primes
having a near-primitive root ($d=1$, $t>1$), and
the primes in arithmetic progression having
a prescribed primitive root ($t=1$).
As before $\delta(g,t,d,a)/A$ is a
rational number that can be explicitly computed.
The case $g=t=2,$ $d=8$ and $a=7$ is one of the
simplest. This is a lucky coincidence, as in our proof of
Proposition \ref{prop:main}
we will apply Theorem \ref{lenstrarekenkundigerij} to determine $\delta(\cQ_7)=\delta(2,2,8,7)$.
Note that
$
\cP(g,t,d,a)\subseteq \{p:\, p\equiv 1~({\rm mod~}t),~p\equiv a~({\rm mod~}d),~\ord_p(g) \mid (p-1)/t\}.
$
It is shown in \cite[Theorem 1.3]{MS} that if
the latter set
is not empty, then it contains a positive density subset of
primes that are not in ${\mathcal P}(g,t,d,a).$
\section{Proofs of the main results}
It suffices to prove Theorems~\ref{thm:class} and \ref{thm:GB} and Proposition~\ref{prop:main}.
\subsection{Proof of Theorem~\ref{thm:class}}
By \cite[Proposition 3.4]{HK}, we obtain
\begin{equation*}
h_{p,2}^{-}=(-1)^{\frac{p-1}{2}}2^{2-p}E_{0,\omega_p}E_{0,\omega_p^{3}}\cdots E_{0,\omega_p^{p-2}}.
\end{equation*}
Using Lemma \ref{p4} and \eqref{eq:EnGn}, we then infer that
\begin{align*}
h_{p,2}^{-} & \equiv (-1)^{\frac{p-1}{2}}2^{2-p}E_{1}(0)E_{3}(0)\cdots E_{p-2}(0) \\
& \equiv \frac{(-1)^{\frac{p-1}{2}}2^{2-p}}{(p-1)!!}G_2 G_4 \cdots G_{p-3} G_{p-1} ~({\rm mod~}p).
\end{align*}
So, if $p$ is G-irregular, then $p \mid h_{p,2}^{-}$.
Conversely, if $p \mid h_{p,2}^{-}$ and $p$ is not a Wieferich prime,
then by \eqref{eq:BnGn} we first have $p \nmid G_{p-1}$,
and thus $p$ is G-irregular.
\qed
\subsection{Proof of Theorem~\ref{thm:GB}}
We first recall the fact that
no odd prime $p$ divides the denominators of the Bernoulli numbers $B_2, B_4, \ldots, B_{p-3}$
(this follows from the von Staudt-Clausen theorem).
Now, given an odd prime $p$, if it is G-regular, then
there is no $1\le k \le (p-3)/2$ such that
$p$ divides the integer $G_{2k}$, that is, $2(1-2^{2k})B_{2k}$ by \eqref{eq:BnGn};
and so $p$ is B-regular and $\ord_p(4)=(p-1)/2$.
Conversely, if $p$ is B-regular and $\ord_p(4)=(p-1)/2$,
then $p$ does not divide the denominators of the Bernoulli numbers $B_2, B_4, \ldots, B_{p-3}$
and $p\nmid 2^{2k}-1$ for
$1\le k\le (p-3)/2.$
Consequently $p$ does not divide any integer $G_{2k}=2(1-2^{2k})B_{2k}$
with $1\le k\le (p-3)/2$,
which implies that $p$ is G-regular. \qed
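Theorem~\ref{thm:GB} and the identity $G_{2k}=2(1-2^{2k})B_{2k}$ from \eqref{eq:BnGn} can be verified for small primes with exact rational arithmetic. A sketch (illustrative only; it uses the standard Bernoulli recurrence and is not tied to the Sage routines used in Section~\ref{sec:data}):

```python
from fractions import Fraction
from math import comb

def bernoulli_list(n):
    """B_0,...,B_n as exact fractions, via the standard recurrence."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

def mult_order(g, p):
    """Multiplicative order of g modulo the odd prime p (naive)."""
    k, x = 1, g % p
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

B = bernoulli_list(34)
G = {2 * k: 2 * (1 - 4 ** k) * B[2 * k] for k in range(1, 18)}  # G_2,...,G_34
assert all(g.denominator == 1 for g in G.values())  # Genocchi numbers are integers

def g_regular(p):
    return all(int(G[2 * k]) % p != 0 for k in range(1, (p - 1) // 2))

def b_regular(p):
    return all(B[2 * k].numerator % p != 0 for k in range(1, (p - 1) // 2))

# Theorem thm:GB: p is G-regular iff p is B-regular and ord_p(4) = (p-1)/2
for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]:
    assert g_regular(p) == (b_regular(p) and mult_order(4, p) == (p - 1) // 2)
```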
\subsection{Proof of Proposition~\ref{prop:main}}
The proof relies on Theorem \ref{lenstrarekenkundigerij}.
We only establish the assertion
under GRH, as the proof of
the unconditional result is very similar. Namely,
it uses the unconditional estimate \eqref{deltagtfa} instead of
\eqref{hooleygeneral}.
It is enough to prove the result in case $8\mid d.$
In fact, in case $8\nmid d$ we lift the congruence class $a~({\rm mod~}d)$
to congruence classes
with modulus lcm$(8,d)$. The ones among those that
are $\not\equiv 1~({\rm mod~}8)$ have
relative density $AR({\rm lcm}(8,d),a)=AR(d,a)$ (as $R(d,a)$ only depends
on the odd prime factors of $d$). The one that is
$\equiv 1~({\rm mod~}8)$ (if it exists at all) has relative density zero.
It follows that the relative density of the unlifted congruence
equals $c(d,a)R(d,a)A$ with $c(d,a)$ the relative density of the primes
$p\not\equiv 1~({\rm mod~}8)$ in the congruence class
$a~({\rm mod~}d).$ The easy determination of $c(d,a)$ is left
to the interested reader.
From now on we assume that $8\mid d.$
We can write $a\equiv j~({\rm mod~}8)$ for some
$j\in \{1,3,5,7\}$ and
distinguish three cases.
{\it Case I: $j=1$.} By Proposition \ref{prop:splitsing} the set $\cQ(d,a)$ is empty and the result
holds trivially true.
{\it Case II: $j\in \{3,5\}$.} By Proposition \ref{prop:splitsing},
$$\cQ(d,a)=\{p:~p\equiv a~({\rm mod~}d),~\ord_p(2)=p-1\}.$$
By Theorem \ref{lenstrarekenkundigerij}, under GRH, this
set has density $\delta(2,1,d,a).$ For arbitrary
$g,d,a$ the third author determined the rational number
$\delta(g,1,d,a)/A,$ see \cite[Theorem 1]{Mor1} or
\cite[Theorem 1.2]{Mor2}. On applying his result, the proof of this subcase is
then completed.
{\it Case III: $j=7$.} By Proposition \ref{prop:splitsing},
$$\cQ(d,a)=\{p:~p\equiv a~({\rm mod~}d),~\ord_p(2)=(p-1)/2\}.$$
For simplicity we write $\delta=\delta(\cQ(d,a)).$
By Theorem \ref{lenstrarekenkundigerij} we have
\begin{equation}
\label{startertje}
\delta=\delta(2,2,d,a)=
\sum_{n=1}^{\infty}
\frac{\mu(n)c_a(2n)}{[\mathbb Q(\zeta_d,\zeta_{2n},2^{1/2n})
:\mathbb Q]}.
\end{equation}
If $n$ is even, then trivially
$\mathbb Q(\sqrt{-1})\subseteq \mathbb Q(\zeta_d)\cap
\mathbb Q(\zeta_{2n},2^{1/2n}).$
As $\sigma_a$ acts by conjugation
on $\mathbb Q(\sqrt{-1}),$ cf. \cite[Lemma 2.2]{Mor2}, and
not as the identity, it follows that $c_a(2n)=0.$
Next assume that $n$ is odd and square-free.
Then by \cite[Lemma 2.4]{Mor2} we infer that
$$\mathbb Q(\zeta_d)\cap \mathbb Q(\zeta_{2n},2^{1/2n})
=\mathbb Q(\zeta_{(d,n)},\sqrt{2}).$$
Since
$$
\sigma_a \big |_{\mathbb Q(\sqrt{2})}=\text{id.} \qquad \text{and} \qquad
\sigma_a \big |_{\mathbb Q(\zeta_{(d,2n)})}
\begin{cases}
= \text{id.} & \text{if~}a\equiv 1~({\rm mod~}(d,2n));\\
\ne \text{id.} & \text{otherwise},
\end{cases}
$$
we conclude that
$$
c_a(2n) =
\begin{cases}
1 & \text{if~}a\equiv 1~({\rm mod~}(d,2n));\\
0 & \text{otherwise},
\end{cases}
$$
with `id.'\,a shorthand for identity.
Note that the assumptions on $a,d$ and $n$ imply that
$a\equiv 1~({\rm mod~}(d,2n))$ iff
$a\equiv 1~({\rm mod~}2(d,n))$ iff
$a\equiv 1~({\rm mod~}(d,n)).$
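These equivalences follow from $8\mid d$, $a\equiv 7~({\rm mod~}8)$ and $n$ odd, which give $(d,2n)=2(d,n)$ with $(d,n)$ odd and $a-1$ even. A quick machine check over small parameters (purely illustrative):

```python
from math import gcd

def check_case3(dmax=200, nmax=60):
    """Check the equivalent congruence conditions for 8 | d, a = 7 mod 8, n odd."""
    for d in range(8, dmax, 8):                # 8 | d
        for a in range(7, d, 8):               # a = 7 (mod 8)
            if gcd(a, d) != 1:
                continue
            for n in range(1, nmax, 2):        # n odd
                g1, g2 = gcd(d, n), gcd(d, 2 * n)
                assert g2 == 2 * g1            # since 8 | d and n is odd
                # a = 1 (mod (d,2n))  iff  a = 1 (mod (d,n))
                assert ((a - 1) % g2 == 0) == ((a - 1) % g1 == 0)
    return True
```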
It follows that \eqref{startertje} simplifies to
$$
\delta=
\sum_{\substack{2\nmid n\\ a\equiv 1~({\rm mod~}(d,n))}}
\frac{\mu(n)}{[\mathbb Q(\zeta_d,\zeta_{2n},2^{1/2n})
:\mathbb Q]}.
$$
When $n$ is odd and square-free, using \cite[Lemma 2.3]{Mor2} we obtain
$$
[\mathbb Q(\zeta_d,\zeta_{2n},2^{1/2n})
:\mathbb Q]=[\mathbb Q(\zeta_{\text{lcm}(d,2n)},2^{1/2n})
:\mathbb Q]=
n\varphi(\text{lcm}(d,2n))=n\varphi(\text{lcm}(d,n)).$$
We thus get
$$
\varphi(d)\delta=\sum_{\substack{2\nmid n\\ a\equiv 1~({\rm mod~}(d,n))}}
\frac{\mu(n)\varphi(d)}{n\varphi(\text{lcm}(d,n))}.
$$
Put
$$w(n)=\frac{n\varphi(\text{lcm}(d,n))}{\varphi(d)}.$$
In this notation
we obtain
$$
\delta=\frac{1}{\varphi(d)}
\sum_{\substack{2\nmid n\\ a\equiv 1~({\rm mod~}(d,n))}}
\frac{\mu(n)}{w(n)},
$$
where the summand is multiplicative
in $n.$
Using \cite[Lemma 3.1]{Mor2} and the notation used there and
in \cite[Theorem 1.2]{Mor2},
we find
\begin{equation*}
\begin{split}
\varphi(d)\delta & = S(1)-S_2(1)=2S(1)=2A(a,d,1)\\
&=2A\prod_{p|(a-1,d)}\left(1-\frac{1}{p}\right)\prod_{p|d}
\left(1+\frac{1}{p^2-p-1}\right)=\delta(d,a),
\end{split}
\end{equation*}
as was to be proved. \qed
\section{Outlook}
A small improvement of
the upper bound
\eqref{eq:underGRH2}
(and consequently the lower
bound \eqref{dainequality}) would be possible if
instead of the estimate \eqref{deltagtfa}
a Vinogradov
type estimate for $\cP(g,t,d,a)(x)$ could be established, say
\begin{equation}
\label{vinogradovtypeadvanced}
\cP(g,t,d,a)(x) \le \delta(g,t,d,a)\frac{x}{\log x} + O_{g,t,d}\left(\frac{x(\log\log x)^2}{\log^{5/4} x} \right).
\end{equation}
Vinogradov \cite{Vino} established the above result in
case $a=d=t=1$.
Establishing \eqref{vinogradovtypeadvanced} seems technically quite involved. Recent work
by Pierce et al.~\cite{effectChebo}
offers perhaps some hope that one can even improve
on the error term in \eqref{vinogradovtypeadvanced}.
\section{Some numerical experiments} \label{sec:data}
In this section, using the Bernoulli numbers modulo $p$ function developed by David Harvey in Sage~\cite{Sage}
(see \cite{BH,HHO,Harvey10} for more details and improvements) and the euler{\_}number function in Sage,
we provide numerical evidence for the truth of Conjectures~\ref{conj:Siegel2}, \ref{conj:E-irre2}, \ref{con:global} and \ref{con:local}
and also for \eqref{eq:underGRH} in Proposition~\ref{prop:main}.
The Bernoulli numbers modulo $p$ function returns the values of $B_0, B_2, \ldots, B_{p-3}$ modulo $p$,
and so by checking whether there is a zero value we can determine whether $p$ is B-irregular.
For checking the E-irregularity, we use the euler{\_}number function in Sage to compute and store Euler numbers
and then use the definition of E-irregularity.
It would be a separate project to test large E-irregular primes,
cf.\,\cite{BH,HHO,Harvey10}.
In the tables, we only record the first six digits of the decimal parts.
In Tables~\ref{tab:B-irre} and~\ref{tab:E-irre} the ratio $\cP_B(d,a)(x)/\pi(x;d,a),$
respectively $\cP_E(d,a)(x)/\pi(x;d,a)$ is recorded for
$x=10^5$ in the column `experimental' for
various choices of $d$ and $a$, and in the column `theoretical'
the limit value
predicted by Conjecture~\ref{conj:Siegel2} is given.
\begin{table}
\centering
\caption{The ratio $\cP_B(d,a)(x)/\pi(x;d,a)$ for $x=10^5$}
\label{tab:B-irre}
\begin{tabular}{|c|c|c|c|}
\hline
$d$ & $a$ & experimental & theoretical \\ \hline \hline
3 & 2 & 0.394424 & \\ \cline{1-2}
4 & 1 & 0.388877 & \\ \cline{1-2}
5 & 4 & 0.397071 & \\ \cline{1-2}
7 & 4 & 0.391005 & 0.393469 \\ \cline{1-2}
9 & 8 & 0.387742 & \\ \cline{1-2}
12 & 5 & 0.390203 & \\ \cline{1-2}
15 & 13 & 0.389858 & \\ \cline{1-2}
20 & 13 & 0.385191 & \\ \hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{The ratio $\cP_E(d,a)(x)/\pi(x;d,a)$ for $x=10^5$}
\label{tab:E-irre}
\begin{tabular}{|c|c|c|c|}
\hline
$d$ & $a$ & experimental & theoretical \\ \hline \hline
3 & 2 & 0.395672 & \\ \cline{1-2}
4 & 1 & 0.388040 & \\ \cline{1-2}
5 & 4 & 0.397071 & \\ \cline{1-2}
7 & 4 & 0.393504 & 0.393469 \\ \cline{1-2}
9 & 8 & 0.391494 & \\ \cline{1-2}
12 & 5 & 0.388127 & \\ \cline{1-2}
15 & 13 & 0.399002 & \\ \cline{1-2}
20 & 13 & 0.385191 & \\ \hline
\end{tabular}
\end{table}
Table~\ref{tab:G-irre1} gives the ratio
$\cP_G(x)/\pi(x)$ for various values of $x,$ and the value in the column `theoretical' is the limit value
$1-3A/(2\sqrt{e})$ predicted by Conjecture~\ref{con:global}.
\begin{table}
\centering
\caption{The ratio $\cP_G(x)/\pi(x)$}
\label{tab:G-irre1}
\begin{tabular}{|c|c|c|}
\hline
$x$ & experimental & theoretical \\ \hline \hline
$10^5$ & 0.661592 & \\ \cline{1-2}
$10^6$ & 0.659558 & \\ \cline{1-2}
$2\cdot 10^6$ & 0.660860 & 0.659776 \\ \cline{1-2}
$3\cdot 10^6$ & 0.661413 & \\ \cline{1-2}
$4\cdot 10^6$ & 0.660683 & \\ \cline{1-2}
$5\cdot 10^6$ & 0.660864 & \\ \hline
\end{tabular}
\end{table}
Table~\ref{tab:G-irre2} gives the ratio $\cP_G(d,a)(x)/\pi(x;d,a)$ for
$x=5 \cdot 10^6$ in the column `experimental' for
various choices of $d$ and $a$, and the corresponding
limit values $1-c(d,a)R(d,a)A/\sqrt{e}$
predicted by Conjecture~\ref{con:local}
are in the column `theoretical'.
\begin{table}
\centering
\caption{The ratio $\cP_G(d,a)(x)/\pi(x;d,a)$ for $x=5 \cdot 10^6$}
\label{tab:G-irre2}
\begin{tabular}{|c|c|c|c|}
\hline
$d$ & $a$ & experimental & theoretical \\ \hline \hline
3 & 1 & 0.728296 & 0.727821 \\ \hline
5 & 2 & 0.643010 & 0.641870 \\ \hline
4 & 1 & 0.771512 & 0.773184 \\ \hline
20 & 9 & 0.757311 & 0.761246 \\ \hline
12 & 11 & 0.460584 & 0.455642 \\ \hline
20 & 19 & 0.528567 & 0.522493 \\ \hline
8 & 7 & 0.550086 & 0.546368 \\ \hline
24 & 13 & 0.634191 & 0.637094 \\ \hline
\end{tabular}
\end{table}
Finally, Table~\ref{tab:distrioverAP} gives the ratio
${\mathcal Q}(d,a)(x) / \pi(x;d,a)$
for $x= 5 \cdot 10^6$ in the column `experimental' for
various choices of $d$ and $a$.
The column `theoretical' contains
the corresponding relative
density $\delta(d,a)$
predicted in \eqref{eq:underGRH} and known to hold under GRH.
\begin{table}
\centering
\caption{The ratio $\cQ(d,a)(x)/\pi(x;d,a)$ for $x=5 \cdot 10^6$}
\label{tab:distrioverAP}
\begin{tabular}{|c|c|c|c|}
\hline
$d$ & $a$ & experimental & theoretical \\ \hline \hline
3 & 1 & 0.449049 & 0.448746 \\ \hline
5 & 2 & 0.589614 & 0.590456 \\ \hline
4 & 1 & 0.374664 & 0.373955 \\ \hline
20 & 9 & 0.395498 & 0.393637 \\ \hline
12 & 11 & 0.898284 & 0.897493 \\ \hline
20 & 19 & 0.789316 & 0.787275 \\ \hline
8 & 7 & 0.747300 & 0.747911 \\ \hline
24 & 13 & 0.598815 & 0.598329 \\ \hline
\end{tabular}
\end{table}
In view of the definition of
the constant $c(d,a)$ in Proposition~\ref{prop:main},
there are four cases excluding the case $8 | d$ and $a \equiv 1 ~({\rm mod}~8)$ (which gives $c(d,a)=0$).
For each of these four cases there are two instances in Tables~\ref{tab:G-irre2} and \ref{tab:distrioverAP}.
\section*{Acknowledgement}
The authors would like to thank the referee for careful reading and valuable comments.
This work was supported by the National Natural Science Foundation of China, Grant
No.\,11501212.
The research of Min-Soo Kim and Min
Sha was also supported by the Kyungnam University Foundation Grant, 2017,
and a Macquarie University Research Fellowship, respectively.
The authors thank Bernd Kellner for pointing out a link with the Genocchi numbers
and suggesting the references \cite{Ern2, HP, Holden},
and Peter Stevenhagen for very helpful feedback.
They also thank Alexandru Ciolan for proofreading earlier versions.
\section{Introduction}\label{sec:intro}
A detailed understanding of a potentially unknown environment plays a fundamental role in mobile robotic applications. Different robots and environments come along with varying requirements for the map building process in terms of accuracy, efficiency and usability. Common SLAM methods, which attempt to map an environment while simultaneously localizing the robot in it, usually have to find a balance between these properties. Due to the complexity of the task, it is still very challenging to perform SLAM while maintaining a dense map representation. Compared to commonly used sparse map representations, \textit{occupancy grids} have many advantages, as they integrate all available information into a single representation which is easy to understand for an operator and also allows for efficient queries.
2D projections of such dense representations have been used extensively for mobile robot navigation tasks. As robots become more agile, scenes more complex and sensors more capable, it is also desirable to adopt these structures for the third dimension. For ground vehicles, so-called 2.5D elevation maps as suggested by Herbert et~al.~\cite{herbertmapping} may be sufficient. However, for agile robots with complex kinematics such as legged robots or UAVs, a full 3D representation of the environment is essential.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\columnwidth]{img/vdb_iosb.png}
\caption{Map of the Fraunhofer IOSB site using the mapping approach. Preprocessing using a custom factor graph approach described in \cite{iosb_mapping}.}
\label{fig:mapping_iosb}
\vspace{-0.5cm}
\end{figure}
To store the 3D map memory-efficiently and with certain complexity guarantees for random access operations, different data structures have been suggested. The most prominent example of such a 3D map representation is the OctoMap~\cite{octomap} framework by Hornung et al., which builds on the octree as a hierarchical tree structure. It allows for memory-efficient storage through efficient pruning and propagating leaf states to higher levels of the tree. OctoMap has been considered state-of-the-art for a long time, but as sensors are achieving higher rates with millions of points per second, the \textit{insert} operation of OctoMap is not able to achieve real-time performance. As the octree has a fixed layout, it is also difficult to later increase the volume without performance overhead. One of the most popular frameworks for globally consistent mapping is VoxBlox by Oleynikova et al.~\cite{voxblox}. They incrementally build a \textit{Truncated Signed Distance Field} (TSDF)~\cite{sdf} instead of an occupancy voxel grid and reach almost real-time performance in a single-core implementation, using a hash map representation instead of a tree as the fundamental data structure.
OpenVDB is a modern framework developed in a computer graphics context. Similar to OctoMap it is based on a hierarchical structure, but it comes with certain accelerators that support almost constant-time insertion and read operations using a B+-tree-like structure. The main reason for the improved performance is an advanced indexing and caching system. We therefore leverage OpenVDB as the underlying data structure for our map building approach and present its capabilities in further detail in Section~\ref{sec:pipeline}. OpenVDB as a flexible backbone allows not only fast insertions but also supports efficient raycasting step samplers and a virtually infinite map size.
Only few works have utilized the VDB data structure as an occupancy representation so far. Our work was originally based on~\cite{vdb_fzi} by Besselmann et al. and can be considered a successor with a revised update scheme and an optimized insertion procedure. They introduce the idea of integrating data into a temporary grid first to cope with discretization ambiguities which arise when raycasting new data. In~\cite{vdb-edt}, Zhu et al. present a full framework for occupancy and distance mapping, which also uses a raycast-based insertion scheme in order to create the occupancy map. They put the focus on the \textit{Euclidean Distance Transform} step and disregard the map integration itself. Macenski et~al.~\cite{spatiotemporal} built a spatio-temporal voxel layer on top of OpenVDB. They focus on local dynamic maps and therefore use a sensor-frustum-based visibility check instead of raycasting as the integration scheme.
In our work we further push the limits of the underlying OpenVDB structure by supporting a flexible multithreaded raycasting insertion scheme, supported by additional ray-level hashing to avoid unnecessary operations. Fast merge operations on single \textit{bit-grids} allow for minimal lock times when modifying the global occupancy grid. Different subsampling strategies can be selected, allowing for a dynamically adjustable tradeoff between map accuracy and efficiency.
The main contribution of the work can be summarized as follows:
\begin{itemize}
\item We present a real-time capable and multithreaded dense mapping approach for efficiently creating occupancy maps based on the VDB data structure.
\item We introduce several optimizations in the integration scheme allowing for a user-definable tradeoff between map accuracy and efficiency.
\item We conduct benchmarks on different operations and test the whole pipeline in simulation and real environments.
\item We open-source the codebase and a corresponding wrapper for the Robot Operating System 2 (ROS~2~\cite{ros2}) which enables fast prototyping for mobile robotic applications.
\end{itemize}
The remainder of this work is structured as follows. Section~\ref{sec:pipeline} formalizes the mapping problem and introduces OpenVDB as the underlying data structure. We also give a detailed overview of the insertion scheme and introduce various optimizations leading to improved performance. In Section~\ref{sec:evaluation} we carry out different experiments to verify the effect of the introduced optimizations. We summarize and conclude the work and discuss potential future improvements in Section~\ref{sec:discussion}.
\section{Mapping Pipeline}\label{sec:pipeline}
In this section, we first formalize the problem before discussing the proposed insertion scheme.
\subsection{Problem Statement}
The main goal of occupancy mapping is to create a map \(\mathcal{M}_{\text{occ}}\) which stores the occupancy probability \(p(x_{i}|s_{1:t}, z_{1:t})\) for each cell \(x_{i} \in \mathcal{M}_{\text{occ}}\) given some sensor measurements \(s_{1:t}\) and the corresponding robot poses \(z_{1:t}\), where \(1{:}t\) denotes the sequence from the start up to time \(t\). We consider a cell to be an obstacle if \(p(x_{i}|s_{1:t},z_{1:t})\) exceeds a certain threshold \( \phi_{\text{occ}}\) and free if it falls below the threshold \(\phi_{\text{free}}\). Note that being marked as \textit{free} is different from not being observed yet.
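As a minimal illustration of this classification rule, the following Python sketch converts a cell's log-odds value to a probability and applies the two thresholds. The concrete threshold values are illustrative assumptions, not the ones used in our implementation.

```python
import math

def occupancy_state(log_odds, phi_occ=0.7, phi_free=0.3):
    """Classify a cell as occupied, free, or unknown.

    phi_occ and phi_free are illustrative probability thresholds;
    their concrete values are left to the user.
    """
    p = 1.0 / (1.0 + math.exp(-log_odds))  # log-odds -> probability
    if p > phi_occ:
        return "occupied"
    if p < phi_free:
        return "free"
    return "unknown"
```

Note that a cell that has never been observed carries no log-odds value at all and is therefore distinct from a cell classified as \textit{free}.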
\subsection{OpenVDB}
\begin{figure}[b]
\centering
\includegraphics[width=1\columnwidth]{img/vdb_structure.png}
\caption{Illustration of the underlying VDB datastructure, adapted from the original publication \cite{vdb}. The height of the tree is typically 4, with one root node (gray), two internal layers (green, orange), and leaf nodes, which store the actual tile values (blue). All nodes in lower levels have a branching factor equal to a power of two. Root and internal nodes store pointers to their respective child nodes. An \textit{active} bitmask in each node encodes whether subsequent tiles are active (gray values).}
\label{fig:vdb}
\end{figure}
The OpenVDB framework was introduced by Museth et~al. in 2013 \cite{vdb}. Originally it was designed for computer graphics applications such as rendering animations of complex mesh structures and time-varying sparse volumes such as clouds. Since then, it has been widely adopted for different applications as it allows for flexible modifications of its core structure. At its heart it leverages a B+ tree~\cite{bplustree} variant as main data structure. This structure is supported by hierarchically organized caches to facilitate fast access to inner tree nodes. Such a data structure is ideal to store sparse voxelized environment representations. The discretization of the space can be adjusted by choosing the size of the leaf nodes accordingly. Other adjustable parameters are the tree depth and the branching factors, which can further improve the memory footprint depending on the sparsity of the environment. Typical branching factors of the VDB data structure are very large compared to those of an octree, which are typically two in each spatial dimension and thus lead to much deeper trees.
The schematic of the underlying tree structure is depicted in Figure~\ref{fig:vdb}. The fixed height of the tree makes it possible to implement \textit{insert} and \textit{read} operations in constant time on average~\cite{vdb}. It also allows the framework to efficiently utilize the cache architectures of modern CPUs to further speed up read operations for tiles with spatial proximity. Access to the tree's tile values is implemented via a virtually infinite \textit{index space}, addressed by signed \(32\)-bit integer coordinates, allowing the map to grow in each direction without additional overhead. As noted in Figure~\ref{fig:vdb}, tree nodes additionally store bitmasks indicating whether subsequent nodes are active. This allows for fast traversal of a sparse volume without the need to visit tile values explicitly. The features of the tree structure are described in further detail in the original work~\cite{vdb}. Moreover, VDB allows for efficient raycasting using optimized index space iterators. We therefore utilize a raycast-based sensor insertion scheme, which is described in further detail in the following section.
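To illustrate why power-of-two branching factors enable such fast access, the following Python sketch computes, for a voxel coordinate in index space, the origin of its containing leaf node and its linear offset inside that leaf using pure bit arithmetic. It assumes the default VDB configuration with \(8^3\) leaf nodes and is a simplified illustration of the addressing scheme described in~\cite{vdb}, not actual OpenVDB code.

```python
LEAF_LOG2 = 3  # default VDB leaf nodes span 2^3 = 8 voxels per axis

def leaf_origin(x, y, z, leaf_log2=LEAF_LOG2):
    """Origin of the leaf node containing voxel (x, y, z) in index space.

    Masking off the low bits works for negative coordinates as well,
    thanks to two's-complement semantics.
    """
    mask = ~((1 << leaf_log2) - 1)
    return (x & mask, y & mask, z & mask)

def leaf_offset(x, y, z, leaf_log2=LEAF_LOG2):
    """Linear offset of voxel (x, y, z) inside its 8^3 leaf node."""
    m = (1 << leaf_log2) - 1
    return ((x & m) << (2 * leaf_log2)) | ((y & m) << leaf_log2) | (z & m)
```

The internal tree levels resolve coordinates with the same kind of shift-and-mask operations on their own (larger) branching factors, which is what keeps lookups constant-time regardless of map extent.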
\subsection{Map Updates}
The proposed structure is not tied to a specific sensor. Typical data comes from a LiDAR sensor, whose measurement principle mimics the raycast operation that we also use to build the obstacle map. Cells store their occupancy probability in order to reflect not only occupied but also free space. To represent time-dependent updates we employ the widely used log-odds based update scheme initially formulated by Moravec and Elfes in~\cite{logodds}. Essentially we calculate
\begin{equation}
\mathcal{M}_{\text{occ}}(x_{i}|s_{1:t})= \mathcal{M}_{\text{occ}}(x_{i}|s_{1:t-1}) + \log\left[\frac{p(x_{i}|s_{t})}{1-p(x_{i}|s_{t})}\right]
\label{eq:update}
\end{equation}
in each update step and for each \(x_{i} \in \mathcal{M}_{\text{occ}} \).
Algorithm~\ref{alg:map} gives an abstracted overview of the insertion process. Essentially we perform parallel raycasting in coaligned temporary grids, which are merged together in a later step. We first create a coaligned map \(\mathcal{M}_{\text{agg}}\) (line 2), which we later use to aggregate the temporary maps. We then divide the incoming points into different chunks which can be processed in parallel (line 5). Points of each chunk are inserted by calculating their respective end position in world coordinates (line 9) and marching along the ray using OpenVDB's digital differential analyzer (DDA) implementation (lines 19-22), marking all visited voxels as \textit{active}. The only place where we need to lock the threads is during the merge operation in lines 23-24. This can be done efficiently as we simply OR the boolean grids together.
As OpenVDB stores the active state of nodes in a quickly accessible bitmask, we can efficiently iterate over all \textit{active} values in the aggregated map \(\mathcal{M}_{\text{agg}}\) (line 25). We increase or decrease the occupancy value following Equation~\ref{eq:update} (lines 27 to 29). If a voxel exceeds or falls below the respective threshold, it is marked as occupied or unoccupied.
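The per-ray voxel marching (lines 19-22) can be illustrated with a standard Amanatides-Woo style DDA. OpenVDB ships its own optimized DDA over index space, so the following Python sketch only demonstrates the traversal principle, not the actual implementation:

```python
import math

def raycast_voxels(origin, end, voxel_size=1.0):
    """Enumerate the voxel indices a ray from origin to end passes through
    (Amanatides-Woo style DDA traversal, illustrative only)."""
    pos = [int(math.floor(c / voxel_size)) for c in origin]
    last = [int(math.floor(c / voxel_size)) for c in end]
    direction = [e - o for o, e in zip(origin, end)]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        if direction[i] > 0:
            step.append(1)
            bound = (pos[i] + 1) * voxel_size  # next boundary in +direction
            t_max.append((bound - origin[i]) / direction[i])
            t_delta.append(voxel_size / direction[i])
        elif direction[i] < 0:
            step.append(-1)
            bound = pos[i] * voxel_size        # next boundary in -direction
            t_max.append((bound - origin[i]) / direction[i])
            t_delta.append(-voxel_size / direction[i])
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    visited = [tuple(pos)]
    while pos != last:
        axis = t_max.index(min(t_max))  # axis whose boundary is hit first
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        visited.append(tuple(pos))
    return visited
```

In the pipeline, every voxel yielded by such a traversal is flagged \textit{active} in the thread-local temporary grid before the grids are merged.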
We suggest and implement two runtime optimizations, roughly based on similar ideas used in Voxblox~\cite{voxblox}: a \textit{subsampling} and a \textit{bundling} optimization. \textit{Subsampling} (cf. Figure~\ref{fig:subsample}) uses an additional map \(\mathcal{M}_{\text{sub}}\) which increases the resolution of \(\mathcal{M}_{\text{occ}}\) by a \textit{subsampling factor} \(\delta_{\text{sub}}\). Typically, \(\delta_{\text{sub}}\) is restricted to powers of \(2\); we use \(4\) in most setups. In addition we use a hashmap \(\mathcal{H}_{\text{sub}}\) (line 3) storing for each cell in \(\mathcal{M}_{\text{sub}}\) whether it has already been visited in the current integration step. If this is the case, all subsequent integrations are skipped (lines 12 and 13). This optimization comes at the cost of inaccurate details but can save many integration steps, especially in dense environments with large voxel sizes, where a single voxel is hit multiple times.
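A condensed sketch of the subsampling check in Python (function and variable names are illustrative; in the actual pipeline \(\mathcal{H}_{\text{sub}}\) is queried per ray before casting):

```python
import math

def filter_by_subsampling(points, delta_sub=4, voxel_size=0.1):
    """Drop rays whose end point falls into an already-visited cell of the
    finer subsampling map M_sub (cell size voxel_size / delta_sub)."""
    sub_size = voxel_size / delta_sub
    visited = set()  # H_sub: cells visited in the current integration step
    kept = []
    for p in points:
        cell = tuple(int(math.floor(c / sub_size)) for c in p)
        if cell in visited:
            continue  # redundant ray: skip its integration entirely
        visited.add(cell)
        kept.append(p)
    return kept
```

Because \(\mathcal{M}_{\text{sub}}\) is finer than \(\mathcal{M}_{\text{occ}}\), only rays ending very close to an already-cast ray are dropped, which bounds the accuracy loss.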
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{Map Update Scheme}
\label{alg:map}
\setstretch{1.30}
\SetKwFunction{MapUpdate}{MapUpdate}
\KwIn{Pointcloud $\mathcal{P}$, Sensor origin $\mathbf{o}$, Number of Chunks $c$, Global Occupancy Map $\mathcal{M}_{\text{occ}}$}
$\mathcal{M}_{\text{sub}} \leftarrow \text{coalign}(\mathcal{M})$ \tcp{Aligned Subsampling Map}
$\mathcal{M}_{\text{agg}} \leftarrow \text{coalign}(\mathcal{M})$ \tcp{Aligned Aggregation Map}
Hashmap$\left<\text{coord } x,\text{bool hit}\right> \mathcal{H}_{\text{sub}} \leftarrow [\;]$\;
Hashmap$\left<\text{int count}, \text{coord } x, \text{bool maxray}\right> \mathcal{H}_{\text{bun}} \leftarrow [\;]$ \;
$\mathcal{P}_{i} \leftarrow \text{Equal Chunks of } \mathcal{P} \text{ for } i=1..c$\;
\ParallelForEach{$\mathcal{P}_{i} \in \mathcal{P}$}{
$\mathcal{M}_{\text{temp}} \leftarrow \text{coalign}(\mathcal{M})$ \tcp{Aligned Temporary Map}
\ForEach{$\mathbf{p} \in \mathcal{P}_{i}$} {
$\mathbf{r}_{\text{end}}=\mathbf{o} + (\mathbf{p} - \mathbf{o}) \text{ in } \mathcal{M}_{\text{agg}}$ \;
$\mathbf{r}_{\text{end\_sub}}=\mathbf{o} + (\mathbf{p} - \mathbf{o}) \text{ in } \mathcal{M}_{\text{sub}}$ \;
is\_maxray $\leftarrow $ check for max-length ray \;
\If{$\mathcal{H}_{\text{sub}}[\mathbf{r}_{\text{end\_sub}}]$}{
\Continue\;
}
\eIf{$\mathcal{H}_{\text{bun}}[\mathbf{r}_{\text{end}}].\text{count}>\text{thresh}$}{
$(\text{count}, \overline{\mathbf{r}_{\text{end}}}, \text{maxray}) = \mathcal{H}_{\text{bun}}[\mathbf{r}_{\text{end}}]$ \;
}{
$\mathcal{H}_{\text{bun}}[\mathbf{r}_{\text{end}}]+=(1,\mathbf{r}_{\text{dir}},\text{is\_maxray})$ \;
\Continue\;
}
\Do{$\mathbf{r}_{\text{dda}} \neq \mathbf{r}_{\text{end}}$}{
$\mathcal{M}_{\text{temp}}[\mathbf{r}_{\text{dda}}].\text{active} = \text{true}$\;
$\mathbf{r}_{\text{dda}}++$
}
}
\With{MapLock$(\mathcal{M}_{\text{temp}})$}{
$\mathcal{M}_{\text{agg}} \;|=\; \mathcal{M}_{\text{temp}}$ \;
}
}
\ForEach{active value $x \in \mathcal{M}_{\text{agg}}$}{
\eIf{$x$}{
increase occupancy on $\mathcal{M}_{\text{occ}}[x]$ following Eq.~\ref{eq:update}\;
}{
reduce occupancy on $\mathcal{M}_{\text{occ}}[x]$ following Eq.~\ref{eq:update}\;
}
}
\KwOut{Updated $\mathcal{M}_{\text{occ}}$}
\end{algorithm}
\begin{figure*}[tb]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\captionsetup{margin={2.6cm,0cm}}
\centering
\includegraphics[page=2, width=\textwidth]{img/mapping.pdf}
\caption{Default Raycasting}
\label{fig:raycasting}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[page=5, width=\textwidth]{img/mapping.pdf}
\caption{Parallelization}
\label{fig:parallel}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[page=4, width=\textwidth]{img/mapping.pdf}
\caption{Subsample Optimization}
\label{fig:subsample}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.205\textwidth}
\centering
\includegraphics[page=3, width=\textwidth]{img/mapping.pdf}
\caption{Bundle Optimization}
\label{fig:bundle}
\end{subfigure}
\caption{Scheme of different optimizations on the insertion procedure. In (a), the standard raycasting process is visualized. When enough rays hit a voxel so that \(\phi_{\text{occ}}\) is exceeded, this voxel will be marked as occupied. Figure (b) visualizes the process of dividing rays into different chunks which can then be processed in a multithreaded fashion. In (c) the target space is further subsampled with a customizable subsampling map \(\mathcal{M}_{\text{sub}}\). Rays which hit an already marked subsampled cell (the blue ray) are skipped. In (d) multiple rays are aggregated and averaged if they hit the same target cell. }
\label{fig:mapping_optimizations}
\end{figure*}
The \textit{bundling} optimization, on the other hand, bundles multiple rays together. Again, we use a hashmap \(\mathcal{H}_{\text{bun}}\) in which we insert incoming ray end points without actually integrating the rays (lines 17 and 18). If a certain threshold is exceeded, we integrate the whole bundle targeting the end cell \(\mathbf{r}_{\text{end}}\) at once. In the hashmap, we additionally store the original end points and whether the ray reached its maximum length. During the integration of the bundle this information is used to average the final end point \(\overline{\mathbf{r}_{\text{end}}}\). Again, this optimization favors environments with many redundant integrations. Figure~\ref{fig:bundle} shows that only one orange bundle is inserted into the map, even if multiple rays hit the cell (cf. Figure~\ref{fig:raycasting}). Both optimizations can be enabled or disabled individually or together. While enabling the optimizations deliberately impairs map accuracy, it can be useful as more sensor data can be integrated overall due to the additionally gained performance.
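The bundling bookkeeping can be sketched as follows. This is a simplified single-threaded illustration with hypothetical helper names; the maximum-length flag of Algorithm~\ref{alg:map} is omitted for brevity.

```python
import math

def bundle_rays(end_points, voxel_size=0.1, thresh=3):
    """Accumulate rays targeting the same voxel and emit one averaged ray
    per bundle once `thresh` rays have arrived (sketch of the bundling
    optimization, not the actual implementation)."""
    bundles = {}  # H_bun: target voxel -> (count, summed end point)
    to_cast = []
    for p in end_points:
        cell = tuple(int(math.floor(c / voxel_size)) for c in p)
        count, acc = bundles.get(cell, (0, (0.0, 0.0, 0.0)))
        count += 1
        acc = tuple(a + c for a, c in zip(acc, p))
        bundles[cell] = (count, acc)
        if count == thresh:  # bundle full: cast one averaged ray
            to_cast.append(tuple(a / count for a in acc))
    return to_cast
```

Averaging the stored end points preserves sub-voxel information from the skipped rays, which is why the single bundled ray approximates the individual integrations reasonably well.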
Note that it is not necessary to deal with the discretization ambiguities introduced by sequential raycasting, as discussed in~\cite{vdb_fzi}, since we use the same two-step approach as in~\cite{vdb_fzi}: we first activate all visited voxels in a separate aggregation map \(\mathcal{M}_{\text{agg}}\) before integrating it into the global map \(\mathcal{M}_{\text{occ}}\).
\section{Evaluation}\label{sec:evaluation}
We will now present the results of experiments which we conducted to measure the performance of our proposed methods under different conditions. We first compare different iterations of our method in a set of ablation studies to measure the effect of different optimizations. In a next step we compare the method on typical benchmark sets before we finally conduct real-world experiments by capturing outdoor and indoor scenes of our lab. All experiments are performed on a machine equipped with a 6-core Intel\textregistered{} Core\texttrademark{} i7-10850H and 32~GB of memory. As hardware platforms for our experiments we use a Boston Dynamics Spot equipped with an Ouster OS0-64 LiDAR for outdoor experiments as well as a custom UAV platform with a solid-state LiDAR for indoor experiments (see Figure~\ref{fig:robots} for details).
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.45\columnwidth}
\captionsetup{format=hang}
\centering
\includegraphics[width=\textwidth]{img/spot.png}
\caption{Boston Dynamics Spot with additional sensors}
\label{fig:spot}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\columnwidth}
\captionsetup{format=hang}
\centering
\includegraphics[width=\textwidth]{img/drone.jpg}
\caption{Custom UAV platform for indoor usage}
\label{fig:drone}
\end{subfigure}
\caption{The hardware setup used for indoor and outdoor experiments. Both robots are equipped with a solid-state LiDAR (Realsense L515) and provide a localization coming from a multisensor-fusion approach.}
\label{fig:robots}
\vspace*{-0.3cm}
\end{figure}
\subsection{Ablation Studies}
\begin{table}[tb]
\centering
\caption{Runtime of different variants with 0.1\,m map resolution and $n$ insertions each. \textit{Random} refers to randomly sampled points within a radius of $1.2 \cdot \text{ray length}$ as depicted in (b). The \textit{structured} environment (a) refers to a sampling where $(x,y)$ coordinates are sampled randomly but $z$ coordinates are restricted to a small width of $1$\,m, simulating a wall structure.~(*) The VDB-BUN and VDB-SUB approaches are only included as rough estimates and do not compare to the other methods, as some rays are not cast when using the bundle or subsampling optimizations. The parallel version VDB-PAR runs on 12 threads, VDB-FMAP is the improved parallel variant restricted to a single core with disabled optimizations, and VDB-MAP is the VDB-based method described in \cite{vdb_fzi}.}
\label{tab:runtime}
\begin{tabular}{@{}cl@{}rrrr@{}}
\toprule
\multirow{3}{*}{$n$} & \multirow{3}{*}{method} & \multicolumn{4}{c}{runtime [ms]} \\
& & \multicolumn{2}{c}{ray length 6m} & \multicolumn{2}{c}{ray length 60m} \\
& & structure & random & structure & random \\ \midrule
\multirow{6}{*}{\rotatebox[origin=c]{90}{50\,000}}& OctoMap~\cite{octomap} & 110 & 747 & 11\,676 & 42\,593 \\
& VDB-EDT~\cite{vdb-edt} & 102 & 106 & 3\,005 & 3\,973\\
& VDB-MAP~\cite{vdb_fzi} & 55 & 74 & $\mathbf{863}$ & $\mathbf{1\,984}$ \\
& VDB-FMAP & 54 & 71 & 906 & 2\,075 \\
& VDB-SUB* & 56 & 75 & 923 & 2\,149 \\
& VDB-BUN* & 31 & 40 & 491 & 1\,083 \\
& VDB-PAR & $\mathbf{10}$ & $\mathbf{20}$ & 1\,308 & 5\,581 \\ \midrule
\multirow{6}{*}{\rotatebox[origin=c]{90}{500\,000}} & OctoMap~\cite{octomap} & 796 & 2\,189 & 44\,072 & 223\,672 \\
& VDB-EDT~\cite{vdb-edt} & 957 & 987 & 22\,010 & 28\,684 \\
& VDB-MAP~\cite{vdb_fzi} & 547 & 709 & 6\,042 & 11\,364 \\
& VDB-FMAP & 547 & 671 & 5\,779 & 10\,718 \\
& VDB-SUB* & 288 & 583 & 5\,011 & 10\,998 \\
& VDB-BUN* & 261 & 330 & 2\,202 & 4\,764 \\
& VDB-PAR & $\mathbf{54}$ & $\mathbf{74}$ & $\mathbf{1\,979}$ & $\mathbf{10\,339}$ \\ \bottomrule \\
\end{tabular}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{img/structured_env.png}
\caption{Structured Raycasting}
\label{fig:structure_raycast}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{img/random_structure.png}
\caption{Random Raycasting}
\label{fig:random_raycast}
\end{subfigure}
\vspace*{-0.5cm}
\end{table}
We first compare different versions of our method with each other and against baselines from other works. We use random insertions of pointclouds of different sizes as a baseline setting. All experimental results are averaged over 5 runs. The results are presented in Table~\ref{tab:runtime}. Even the base variant using OpenVDB outperforms OctoMap by a factor of 2; in multiple settings this advantage grows to a factor of up to 30. Interestingly, the parallel integration scheme \textit{VDB-PAR} achieves almost linear speedup for small ray lengths. Figure~\ref{fig:eval_threads} gives further insight into this evaluation. The reason for the performance drop at higher ray lengths is the increasing amount of time required for the \textit{merge} operations: insertion performance stays constant for increasing submap sizes while the merge workload grows cubically. This insight can be derived from Figure~\ref{fig:eval_length}, where the different steps of an integration procedure are measured. Consequently, the best parallelization speedup can be achieved in low-range settings with many points to be integrated. This exactly matches the domain of indoor mapping scenarios using high-resolution solid-state LiDARs, as we will show in the next section. A detailed evaluation of different ray lengths is given in Figure~\ref{fig:eval_full}. OctoMap and the VDB-EDT approach from~\cite{vdb-edt} are outperformed in low-range (below 30\,m) scenarios by almost an order of magnitude.
The \textit{bundling} optimization VDB-BUN is guaranteed to save runtime, as the first ray to a specific voxel is skipped in every case. This approximately halves the runtime over all settings. The \textit{subsampling} optimization VDB-SUB, on the other hand, only pays off when many points reside in a small volume. Consequently it saves the most runtime in settings with many points in a \textit{structured} environment, whereas it is not faster, or even slower, in other settings.
\begin{table}[bt]
\caption{Benchmarks on the \textit{cow-and-lady} dataset. \textit{\#Points} denotes the number of processed points over all frames. This varies due to different processing speeds and different temporal alignments between poses and pointclouds. \textit{Time} measures the average integration time per frame and \textit{\#Occupied Voxels} counts the number of occupied voxels after the integration procedure.}
\label{tab:datasets}
\renewcommand{\arraystretch}{1.1}
\centering
\begin{tabular}{@{}lrrr}
\toprule
Name & \#Points & \makecell{\#Occupied\\ Voxels} & \makecell{Time per \\ frame [ms]} \\ \midrule
OctoMap~\cite{octomap} & $0.463 \cdot 10^{9}$ & $0.332 \cdot 10^{6}$ & 388 \\
VDB-EDT~\cite{vdb-edt} & $0.557 \cdot 10^{9}$ & $0.567 \cdot 10^{6}$ & 357 \\
VDB-MAP~\cite{vdb_fzi} & $0.518 \cdot 10^{9}$ & $0.536 \cdot 10^{6}$ & 263 \\
VDB-FMAP & $0.513 \cdot 10^{9}$ & $0.523 \cdot 10^{6}$ & 282 \\
VDB-BUN & $0.522 \cdot 10^{9}$ & $0.316 \cdot 10^{6}$ & 94 \\
VDB-SUB & $0.521 \cdot 10^{9}$ & $0.510 \cdot 10^{6}$ & 170 \\
VDB-PAR & $0.526 \cdot 10^{9}$ & $0.530 \cdot 10^{6}$ & $\mathbf{47}$ \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{img/sup_rviz.png}
\caption{VDB-FMAP \((100\%)\)}
\label{fig:gaz_raycasting}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{img/bun_rviz_ann.png}
\caption{VDB-BUN \((23.0\%)\)}
\label{fig:gaz_bundle}
\end{subfigure}\\[0.3cm]
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{img/par_rviz_ann.png}
\caption{VDB-SUB \((39.7\%)\)}
\label{fig:gaz_subsample}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{img/sim1.png}
\caption{Indoor lab map \((100\%)\)}
\label{fig:sim_hall}
\end{subfigure}
\caption{Qualitative visualization of different mapping optimization effects on the \textit{cow-and-lady} dataset in (a)-(c) as well as the lab environment captured by an indoor UAV (cf.~\ref{fig:drone}) in (d). The percentage denotes how many of the original points are cast as rays using the respective optimization.}
\label{fig:mapping_sim}
\end{figure}
\input{img/plots.tex}
\subsection{Benchmark and Real Datasets}
We evaluate the presented methods on the indoor \textit{cow-and-lady} dataset released as part of Voxblox~\cite{voxblox}. It consists of 2831 depth frames captured by a Microsoft Kinect I as well as corresponding pose data. The results in Table~\ref{tab:datasets} show that VDB-based methods outperform previous methods in terms of raw integration performance. For the \textit{bundling} optimization VDB-BUN, the resulting number of occupied voxels is only half that of the other variants, which indicates reduced map quality. The \textit{subsampling} strategy VDB-SUB, on the other hand, only comes with a negligible reduction of occupied voxels. This shows that mostly redundant rays are omitted from the integration procedure, while the runtime is almost halved. Again, the parallel version outperforms all other versions without any reduction in accuracy.
Figure~\ref{fig:mapping_sim} shows an exemplary excerpt of the \textit{cow-and-lady} dataset after 500 frames for different integration strategies. Only few details (marked in red) are lost while saving more than \(60\%\) of the integration steps using the subsampling optimization and almost 80\% using the bundled integration. Figures~\ref{fig:mapping_iosb} and~\ref{fig:sim_hall} show that our mapping procedure is capable of producing high-quality maps of outdoor and indoor environments using a legged robot or a drone, respectively. Even without preprocessing the poses using a factor graph~\cite{Dellaert17fnt}, the resulting map quality is satisfactory if the pose drift is not too large (cf.~Figure~\ref{fig:sim_hall}).
\section{Conclusion and Discussion}\label{sec:discussion}
We presented an efficient map building approach for mobile robots, especially suitable for agile robots in dynamic environments. The experimental evaluation demonstrates the effectiveness of the approach particularly in indoor environments with low sensor ranges, where it outperforms current solutions. The parallel fusion of different maps allows not only fast but also flexible integration of new sensor data into the map. This leaves room for additional improvements such as an extension with dynamic resolution adaptation. Maintaining a real-time distance map is also a valuable extension, which could be implemented efficiently using the VDB data structure. In order to reduce global inconsistencies caused by drift in the localization, one could couple the VDB representation with a factor graph backend. This has been tested in a prototypical fashion as demonstrated in Figure~\ref{fig:mapping_iosb}, but could potentially be improved by a tighter coupling.
\bibliographystyle{IEEEtran}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\input{./section/6_Bibliography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/alessandro_betti.jpg}}]{Alessandro Betti}
Dr. Alessandro Betti received the M.S. degree in Physics and the Ph.D. degree in Electronics Engineering from the University of Pisa, Italy, in 2007 and 2011, respectively. His main field of research was the modeling of noise and transport in quasi-one-dimensional devices. His work has been published in 10 papers in peer-reviewed journals in the fields of solid-state electronics and condensed matter physics and in 16 conference papers, including presentations of his research for 3 straight years at the top IEEE conference on electron devices, the International Electron Devices Meeting in the USA. In September 2015 he joined the company i-EM in Livorno, where he currently works as a Senior Data Scientist developing power generation forecasting, predictive maintenance, and Deep Learning models, as well as solutions in the field of electrical mobility, and managing a Data Science Team.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/Emanuele_Crisostomi}}]{Emanuele Crisostomi}(M'12-SM'16) received the B.Sc. degree in computer science engineering, the M.Sc. degree in automatic control, and the Ph.D. degree in automatics, robotics, and bioengineering from the University of Pisa, Pisa, Italy, in 2002, 2005, and 2009, respectively. He is currently an Associate Professor of electrical engineering in the Department of Energy, Systems, Territory and Constructions Engineering, University of Pisa. He has authored tens of publications in top refereed journals and he is a co-author of the recently published book on ``Electric and Plug-in Vehicle Networks: Optimization and Control'' (CRC Press, Taylor and Francis Group, 2017). His research interests include control and optimization of large scale systems, with applications to smart grids and green mobility networks.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/gianluca_paolinelli.jpg}}]{Gianluca Paolinelli}
Gianluca Paolinelli received the bachelor's and master's degrees in electrical engineering from the University of Pisa, Pisa, Italy, in 2014 and 2018, respectively. His research interests included big data analysis and computational intelligence applied to on-line monitoring and diagnostics.
Currently, he is an electrical software engineer focused on the development of electric and hybrid power-train controls for Pure Power Control S.r.l.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/antonio_piazzi.jpg}}]{Antonio Piazzi}
Antonio Piazzi received the M.Sc. degree in electrical engineering from the University of Pisa, Pisa, Italy, in 2013. An electrical engineer at i-EM since 2014, he gained his professional experience in the field of renewable energies. His research interests include machine learning and statistical data analysis, with main applications in modeling and monitoring the behaviour of renewable power plants. Currently, he is working on big data analysis applied to hydro power plant signals.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/FabrizioRuffini_Linkedin}}]{Fabrizio Ruffini}
Fabrizio Ruffini received the Ph.D. degree in Experimental Physics from the University of Siena, Siena, Italy, in 2013. His research activity is centered on data analysis, with particular interest in multidimensional statistical analysis. During his research activities, he was at the Fermi National Accelerator Laboratory (Fermilab), Chicago, USA, and at the European Organization for Nuclear Research (CERN), Geneva, Switzerland. Since 2013, he has been working at i-EM as a data scientist focusing on applications in the renewable energy sector, atmospheric physics, and smart grids. Currently, he is a senior data scientist with focus on international funding opportunities and dissemination activities.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/Mauro_Tucci}}]{Mauro Tucci} received the Ph.D. degree in applied electromagnetism from the University of Pisa, Pisa, Italy, in 2008. Currently, he is an Associate Professor with the Department of Energy, Systems, Territory and Constructions Engineering, University of Pisa. His research interests include computational intelligence
and big data analysis, with applications in electromagnetism, non destructive testing, powerline communications, and smart grids.
\end{IEEEbiography}
\end{document}
\section{Introduction}
\subsection{Motivation}
\IEEEPARstart{A}{s} power generation from renewable sources is increasingly seen as a fundamental component in a joint effort to support decarbonization strategies, hydroelectric power generation is experiencing a new golden age. In fact, hydropower has a number of advantages compared to other types of power generation from renewable sources. Most notably,
hydropower generation can be ramped up and down, which provides a valuable source of flexibility for the power grid, for instance, to support the integration of power generation from other renewable energy sources, like wind and solar. In addition, water in hydropower plants' large reservoirs may be seen as an energy storage resource in low-demand periods and transformed into electricity when needed \cite{Helseth2016,Hjelmeland2019}. Finally, for large turbine-generator units, the mechanical-to-electrical energy conversion process can have a combined efficiency of over 90\% \cite{Bolduc2014}. Accordingly, in 2016, around 13\% of the world's consumed electricity was generated from hydropower\footnote{\url{https://www.iea.org/statistics/balances/}}. In addition, hydropower plants have provided for more than 95\% of energy storage for all active tracked storage installations worldwide\footnote{\url{http://www.energystorageexchange.org/projects/data_visualization}}.
In addition to the aforementioned advantages, hydroelectric power plants are also typically characterized by a long lifespan and relatively low operation and maintenance costs, usually around 2.5\% of the overall cost of the plant. However, according to \cite{IHA_db}, by 2030 over half of the world's hydropower plants will be due for upgrade and modernization, or will have already been renovated. Still according to \cite{IHA_db}, the main reason why major works seem around the corner is that most industries in this field wish to adopt best practices in operations and asset management plans, or in other words, share a desire for optimized performance and increased efficiency. In combination with the quick pace of technological innovation in hydropower operations and maintenance, together with the increased ability to handle and manage big amounts of data, a technological revolution of most hydropower plants is expected to take place soon.
\subsection{State of the art}
In hydropower plants, planned periodic maintenance has long been the main, if not the only, maintenance method. Condition monitoring procedures have often been reserved for protection systems, leading to shutting down the plants when single monitored signals exceeded pre-defined thresholds (e.g., bearings with temperature and vibration protection). In this context, one of the earliest works towards predictive maintenance was \cite{Jiang2008}, where Artificial Neural Networks (ANNs) were used to monitor, identify and diagnose the dynamic performance of a prototype system. Predictive maintenance methods obviously require the measurement and storage of all the relevant data regarding the power plant. An example of early digitalization is provided in \cite{Li2009}, where a Wide Area Network for condition monitoring and diagnosis of hydro power stations and substations of the Gezhouba Hydro Power Plant (in China) was established. Thanks to measured data, available in real-time, more advanced methods that combine past history and domain knowledge can provide more efficient monitoring services, advanced fault prognosis, short- and long-term prediction of incipient faults, prescriptive maintenance tools, and residual lifetime and future risk assessment. Benefits of this include, among other things, preventing (possibly severe) faults from occurring, avoiding unnecessary replacements of components, and more efficient criteria for scheduled maintenance.
The equipment required for predictive maintenance in hydro generators is also described in \cite{Ribeiro2014}, where the focus was to gain the ability to detect and classify faults regarding the electrical and the mechanical defects of the generator-turbine set, through a frequency spectrum analysis. More recent works (e.g., \cite{Selak2014}) describe condition monitoring and fault-diagnostics (CMFD) software systems that use recent Artificial Intelligence (AI) techniques (in this case a Support Vector Machine (SVM) classifier) for fault diagnostics. In \cite{Selak2014} a CMFD has been implemented on a hydropower plant with three Kaplan units. Another expert system has been developed for an 8-MW bulb turbine downstream irrigation hydropower plant in India, as described in \cite{Buaphan2017}.
An online temperature monitoring system was developed in \cite{Milic2013}, and an artificial neural network based predictive maintenance system was proposed in \cite{Chuang2004}. The accuracy of early fault detection systems is an important feature for accurate reliability modeling \cite{Khalilzadeh2014}.
\subsection{Paper contribution}
In this paper we propose a novel Key Performance Indicator (KPI) based on an appropriately trained Self-Organizing Map (SOM) for condition monitoring in a hydropower plant. In addition to detecting faulty operating conditions, the proposed indicator also identifies the component of the plant that most likely gave rise to the faulty behaviour. As noted in the previous section, very few works address the same problem, although there is a general consensus that this could soon become a very active area of research \cite{IHA_db}. In this paper we show that the proposed KPI performs better than a standard multivariate process control tool like the Hotelling control chart, over a test period of more than one year (from April 2018 to July 2019).
This paper is organized as follows: Section \ref{case_study} describes in more detail the case study of interest and the data used to tune the proposed indicator. Section \ref{Methodologies} illustrates the proposed indicator; the basic theory of the Hotelling multivariate control chart, used for comparison purposes, is also recalled. The obtained results are provided and discussed in Section \ref{Results}. Finally, in Section \ref{Conclusion} we conclude the paper and outline our current lines of research on this topic.
\section{Case study}
\label{case_study}
Throughout the paper, we will refer to two hydroelectric power plants, called plant A and plant B, with an installed power of 215 MW and 1000 MW, respectively. The plants are located in Italy, as shown in Figure \ref{fig:HPP_Plant_Soverzene_Presenzano_location}, and both use Francis turbines. Plant A is a reservoir plant, while plant B is a pumped-storage plant. More details are provided in the following subsections.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\textwidth]{./fig/2_Case_study/italy.png}
\caption{Location of the two considered hydropower plants in Italy.
}
\label{fig:HPP_Plant_Soverzene_Presenzano_location}
\end{figure}
\subsection{Hydropower plants details}
Plant A is located in Northern Italy and consists of four generation units driven by vertical-axis Francis turbines, each with a rated power of 60 MVA. The machinery room is located 500 meters inside the mountain. The plant is powered by two connected basins: the main basin, used for daily regulation, with a capacity of 5.9 $\times 10^6$ m$^3$, and a second basin, limited by a dam, with a seasonal regulation capability. The plant is part of a large hydraulic system and has been operative since 1951. At full power, the plant employs a flow of 88 m$^3/$s, with a net head of 284 m; in nominal conditions (1 unit working 24/7, 3 units working 12/7) the main basin can be emptied in about 24 hours. The 2015 net production was 594 GWh, serving both the energy and the service market thanks to the storage capability.
Plant B is representative of pumped-storage power plants; power is generated by releasing water from the upper reservoir down to the power plant, which contains four reversible 250 MW Francis pump-turbine-generators. After power production, the water is sent to the lower reservoir. The upper reservoir, formed by an embankment dam, is located at an elevation of 643 meters in the Province of Isernia. Both the upper and lower reservoirs have an active storage capacity of 60 $\times 10^6$ m$^3$. The difference in elevation leads to a hydraulic head of 495 meters. The plant has been in operation since 1994, and its net production in 2015 was 60 GWh. This plant is strategic for its pumped-storage capability and sells mainly to the services market.
\subsection{Dataset}
The dataset of plant A consists of 630 analog signals with a sampling time of 1 minute. The dataset of plant B consists of 60 analog signals, since a smaller number of sensors is installed in this system. The signals are collected from several plant components, for instance the water intake, penstocks, turbines, generators, and HV transformers.
The acquisition system has been in service since the 1$^{st}$ of May 2017. In this work, we used data from the 1$^{st}$ of May 2017 to the 31$^{st}$ of March 2018, as the initial training set. Then, the model was tested online from the 1$^{st}$ of April 2018 until the end of July 2019. During the online testing phase, we retrained the model every two months to include the most recent data.
As is usual in this kind of application, before starting the training phase, accurate pre-processing was required to improve the quality of the measured data. In particular:
\begin{enumerate}
\item
several measured signals contained a large number of irregular samples, such as missing or ``frozen'' samples (i.e., where the signal measured by the sensor does not change in time), values out of physical or operative limits, spikes, and statistical outliers in general;
\item
the training dataset did not contain information about historical anomalies that occurred in the plants; similarly, Operation and Maintenance (O\&M) logs were not available.
\end{enumerate}
Accordingly, we first implemented a classic data cleaning procedure (see for instance \cite{Data_Cleaning_Book}). This procedure has two advantages: first, it allows any data-based condition monitoring methodology to be tuned on nominal data corresponding to the correct functioning of the plant; in addition, the plants' operators were informed of those signals whose percentage of regular samples was below a given threshold, so that they could evaluate whether the noise with which the data were recorded could be mitigated, or whether the sensor was actually broken.
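As an illustration, the data cleaning step can be sketched as follows. This is a minimal pandas/numpy sketch and not the actual implementation used for the plants: the function name \texttt{clean\_signal}, the frozen-window length, and the robust z-score outlier test are all our own assumptions.

```python
import numpy as np
import pandas as pd

def clean_signal(s: pd.Series, lo: float, hi: float,
                 frozen_len: int = 30, z_max: float = 4.0) -> pd.Series:
    """Flag irregular samples as NaN: out-of-limit values, 'frozen'
    stretches (signal constant over frozen_len consecutive samples),
    and statistical outliers (robust z-score). Illustrative sketch."""
    # values out of physical or operative limits
    s = s.where((s >= lo) & (s <= hi))
    # frozen samples: the signal does not change over a sliding window
    frozen = s.diff().abs().rolling(frozen_len).sum() == 0
    s = s.mask(frozen)
    # robust outlier test based on the median absolute deviation (MAD)
    mad = (s - s.median()).abs().median()
    if mad > 0:
        z = 0.6745 * (s - s.median()).abs() / mad
        s = s.mask(z > z_max)
    return s
```

The cleaned series can then be used to assemble the nominal training dataset, with the flagged samples excluded from the model-fitting stages.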
The second problem was mainly due to the fact that in many operating plants the O\&M logs are often not integrated with the historical databases. Thus, it may be difficult to distinguish between nominal and faulty behaviours that occurred in the past. In this case, we found it useful to iteratively merge analytic results with feedback from on-site operators to reconstruct correct sequences of nominal data. As we shall see in greater detail in the next section, the output of our proposed procedure is a newly proposed KPI, which monitors the functioning of the hydropower plant. In particular, a warning is triggered when the KPI drops below a threshold and an automatic notification is sent to the operator. The operator, guided by the provided warning, checks and possibly confirms the nature of the detected anomaly. Then, the sequence of data recorded during the fault is removed from the log, so that it will not be included in any historical dataset nor used in future retraining stages.
\section{Methodologies}
\label{Methodologies}
The proposed approach consists of training a self-organizing map (SOM) neural network to build a model of the nominal behaviour of the system, using a historical dataset comprising nominal state observations. New state observations are then classified as ``in control'' or ``out of control'' by comparing their distortion measure to the average distortion measure of the nominal states used during training, as illustrated in more detail below.
\subsection{Self-Organizing Map neural network based Key Performance Indicator}
Self-organizing maps (SOMs) are popular artificial neural network algorithms, belonging to the unsupervised learning category, which have been frequently used in a wide range of applications \cite{Kohonen1990}--\cite{Tuc_2010}. Given a high-dimensional input dataset, the SOM algorithm produces a topographic mapping of the data into a lower-dimensional output set.
The SOM output space consists of a fixed and ordered bi-dimensional grid of cells, identified by an index in the range $1,\dots,D$, where a distance metric $d(c,i)$ between any two cells of index $c$ and $i$ is defined \cite{Kohonen1990}. Each cell of index $i$ is associated with a model vector $\mathbf{m}_i\in \mathbb{R}^{1 \times n}$ that lies in the same high-dimensional space of the input patterns $\mathbf{r}\in \Delta$, where the matrix $\Delta\in \mathbb{R}^{N \times n}$ represents the training dataset to be analyzed, containing $N$ observations of row vectors $\mathbf{r}\in \mathbb{R}^{1 \times n}$. After the training, the distribution of the model vectors resembles the distribution of the input data, with the additional feature of preserving the grid topology: model vectors that correspond to neighbouring cells shall be neighbours in the high-dimensional input space as well. When a new input sample $\mathbf{r}$ is presented to the network, the SOM finds the best matching unit (BMU) $c$, whose model vector $\mathbf{m}_c$ has the minimum Euclidean distance from $\mathbf{r}$:
\[c = \operatorname*{arg\,min}_{i} \{\| \mathbf{r} - \mathbf{m}_i\|\}.\]
It is known that the goal of the SOM training algorithm is to minimize the following distortion measure:
\begin{equation}\label{DM_average}
DM_{\Delta}=\frac{1}{N} \sum_{\mathbf{r} \in \Delta} \sum_{i=1}^{D} w_{ci} \|\mathbf{r}-\mathbf{m}_i\|,
\end{equation}
where the function
\begin{equation}
w_{ci} = \exp\left( -\frac{d(c,i)^2}{2 \sigma^2} \right),
\end{equation}
is the neighbourhood function, $c$ is the BMU corresponding to input sample $\mathbf{r}$, and $\sigma$ is the neighbourhood width.
The distortion measure indicates the capacity of the trained SOM to fit the data while maintaining the bi-dimensional topology of the output grid.
The distortion measure relative to a single input pattern $\mathbf{r}$ is computed as:
\begin{equation}\label{DM_single}
DM(\mathbf{r})= \sum_{i=1}^{D} w_{ci} \|\mathbf{r}-\mathbf{m}_i\|,
\end{equation}
from which it follows that $DM_{\Delta}$, as defined in (\ref{DM_average}), is the average of the distortion measures of all the patterns in the training data $\mathbf{r}\in \Delta$.
In order to assess the condition of newly observed state patterns $\mathbf{r}$ to be monitored, we introduce the following KPI:
\begin{equation}
KPI(\mathbf{r})=\frac{1}{1+\left|1- \frac{DM(\mathbf{r})}{DM_{\Delta}} \right|}.
\end{equation}
Roughly speaking, the rationale behind the previous KPI definition is as follows: if the acquired state $\mathbf{r}$ corresponds to a normal behaviour, its distortion measure $DM(\mathbf{r})$ should be similar to the average distortion measure of the nominal training set $DM_{\Delta}$ (which consists of non-faulty states), so the ratio $\frac{DM(\mathbf{r})}{DM_{\Delta}}$ should be close to one, which in turn gives a $KPI(\mathbf{r})$ value close to one as well. On the other hand, if the acquired state $\mathbf{r}$ corresponds to an anomalous behaviour, $DM(\mathbf{r})$ should substantially differ from $DM_{\Delta}$, leading to values of $KPI(\mathbf{r})$ considerably smaller than one. In this way, KPI values near one indicate a nominal functioning, while smaller values indicate that the plant is going out of control.
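For concreteness, the distortion measure of Equation (\ref{DM_single}) and the KPI above can be sketched in a few lines of numpy. This is an illustrative sketch under our own assumptions: a SOM already trained elsewhere, represented by its model vectors \texttt{M}, the output-grid coordinates \texttt{grid}, and the neighbourhood width \texttt{sigma}; the function names are hypothetical, not the authors' implementation.

```python
import numpy as np

def distortion_measure(r, M, grid, sigma=1.0):
    """DM(r) for one pattern r, given the D model vectors M (D x n)
    and the grid coordinates of each cell (D x 2)."""
    dists = np.linalg.norm(M - r, axis=1)            # ||r - m_i||
    c = np.argmin(dists)                             # best matching unit
    d_grid = np.linalg.norm(grid - grid[c], axis=1)  # d(c, i) on the map
    w = np.exp(-d_grid**2 / (2 * sigma**2))          # neighbourhood function
    return np.sum(w * dists)

def kpi(r, M, grid, dm_train, sigma=1.0):
    """SOM-based KPI: close to 1 for nominal patterns, smaller otherwise.
    dm_train is the average distortion measure over the training set."""
    ratio = distortion_measure(r, M, grid, sigma) / dm_train
    return 1.0 / (1.0 + abs(1.0 - ratio))
```

In this sketch, $DM_{\Delta}$ would be obtained by averaging \texttt{distortion\_measure} over all rows of the nominal training matrix $\Delta$.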
A critical aspect is the choice of the threshold that discriminates between correct and faulty functioning: for this purpose, we compute the average value $\mu_{KPI}$ and the variance $\sigma^2_{KPI}$ of the filtered KPI values of all the points in the training set $\Delta$, and we define the threshold as a lower control limit (LCL) as follows:
\begin{equation}
LCL_{KPI}= \mu_{KPI}-3\sigma_{KPI}.
\end{equation}
If the measured data are noisy, the proposed KPI may be noisy as well. For this reason, in our work we filtered the KPI using an exponentially weighted moving average filter over the last 12 hours of consecutive KPI values.
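The filtering and thresholding steps can be sketched as follows. The smoothing constant \texttt{alpha} is our assumption (the text only specifies a 12-hour window, which at a 1-minute sampling time corresponds to roughly 720 samples), and the function names are hypothetical.

```python
import numpy as np

def ewma(x, alpha=0.01):
    """Exponentially weighted moving average of a KPI sequence."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1 - alpha) * y[k - 1]
    return y

def lower_control_limit(kpi_train, alpha=0.01):
    """LCL = mu - 3*sigma of the filtered KPI values on the training set."""
    f = ewma(kpi_train, alpha)
    return f.mean() - 3.0 * f.std()
```

During monitoring, a warning would be raised when the filtered KPI drops below the value returned by \texttt{lower\_control\_limit}.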
\subsection{The contribution of individual variables to the SOM-based KPI}
When the SOM-based KPI deviates from its nominal pattern, it is desirable to identify the individual variables that contribute most to the KPI variation. This allows the operator to identify not only a possible malfunctioning in the hydropower plant, but also the specific cause, or location, of such a malfunctioning. For this purpose, we first calculate the average contribution of individual variables to the DM using the data in the nominal dataset $\Delta$. We then compare the contribution of individual variables of newly acquired patterns to the average contribution of nominal training patterns.
For each pattern $\mathbf{r} \in \Delta$, we calculate the following average distance vector $\mathbf{d(r)}\in \mathbb{R}^{1 \times n}$ :
\begin{equation}
\mathbf{d(r)} =\frac{1}{D} \sum_{i=1}^{D} w_{ci} ( \mathbf{r}-\mathbf{m}_i ), \forall \mathbf{r} \in \Delta.
\end{equation}
Then we calculate the vector of squared components of $\mathbf{d(r)}$ normalized to have norm 1, named $\mathbf{d_n(r)} \in \mathbb{R}^{1 \times n}$, as
\begin{equation}\label{vet_dist}
\mathbf{d_n(r)} = \frac{\mathbf{d(r)}\circ \mathbf{d(r)} }{\| \mathbf{d(r)}\|^2}, \forall \mathbf{r} \in \Delta,
\end{equation}
where the symbol $\circ$ denotes the Hadamard (element-wise) product. Finally, we compute the average vector of the normalized squared distance components for all patterns in $\Delta$:
\begin{equation}
\mathbf{d_{n_{\Delta}}} =\frac{1}{N} \sum_{r\in\Delta} \mathbf{d_n(r)} .
\end{equation}
When a new pattern $\mathbf{r}$ is acquired during the monitoring phase, we calculate the following Hadamard ratio:
\begin{equation}
\mathbf{d_n(r)}\div \mathbf{d_{n_{\Delta}}} = [ cr_1, cr_2, \dots, cr_n],
\end{equation}
where the contribution ratios $cr_i$, $i=1,\dots,n$, represent how the individual variables of the new pattern influence the DM compared to their influence in non-faulty conditions. If the new pattern corresponds to a non-faulty condition, the $cr_i$ take values close to one. If the new pattern deviates from the nominal behaviour, some of the $cr_i$ exceed the unitary value. An empirical threshold of 1.3 was selected as a trade-off between false positives and true positives.
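The contribution analysis above, from the average distance vector to the normalized components of Equation (\ref{vet_dist}) and the contribution ratios, can be sketched as follows. As before, this is illustrative numpy code under our own assumptions (a trained SOM with model vectors \texttt{M} and grid coordinates \texttt{grid}; function names are ours).

```python
import numpy as np

def contribution(r, M, grid, sigma=1.0):
    """Normalized squared components d_n(r) of the average distance
    vector d(r), for a single pattern r."""
    c = np.argmin(np.linalg.norm(M - r, axis=1))      # BMU of r
    d_grid = np.linalg.norm(grid - grid[c], axis=1)
    w = np.exp(-d_grid**2 / (2 * sigma**2))           # neighbourhood weights
    d = (w[:, None] * (r - M)).mean(axis=0)           # average distance vector
    return d**2 / np.dot(d, d)                        # squared, norm-1 components

def contribution_ratios(r, M, grid, dn_train, sigma=1.0):
    """cr_i: element-wise ratio to the average nominal contribution;
    components above the empirical threshold (1.3 in the paper) point
    at the variables responsible for the deviation."""
    return contribution(r, M, grid, sigma) / dn_train
```

Here \texttt{dn\_train} plays the role of $\mathbf{d_{n_{\Delta}}}$, i.e., the average of \texttt{contribution} over all nominal training patterns.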
\subsection{Hotelling multivariate control chart}
As a term of comparison for our SOM-based KPI indicator, we consider the Hotelling multivariate control chart \cite{Hotelling1947}. While very few works may be found for our hydropower plant application of interest, Hotelling charts are quite popular for multivariate process control problems in general, and we take them as a benchmark procedure for comparison.
The Hotelling control chart projects the multivariate data onto a scalar statistic denoted as $t^2$, which is defined as the square of the Mahalanobis distance \cite{Maesschalck2000} between the observed pattern and the vector containing the mean values of the variables in nominal conditions. The Hotelling $t^2$ statistic is able to capture changes in multivariate data, revealing deviations from the nominal behaviour, and for these reasons the Hotelling control chart is widely used for the early detection of incipient faults, see for instance \cite{Aparisi2009}.
The construction of the control chart includes two phases: in the
first phase, historical data are analyzed and the control limits
are computed; phase two corresponds to the monitoring of the newly acquired state patterns.
\subsubsection{Phase one}
Let the nominal historic dataset be represented by the matrix ${{\Delta}} \in {\mathbb{R} ^{N \times n}}$, containing N observations of row vectors $ {{\bf{r}}}\in {\mathbb{R} ^{1 \times n}}$. The sample mean vector ${{\boldsymbol{\mu }}}\in {\mathbb{R} ^{1 \times n}}$ of the data is defined as:
\begin{equation}
\label{mean_vector_phase_one}
{{{\boldsymbol{\mu }}}} = \frac{1}{N}\sum\limits_{\mathbf{r} \in \Delta} {{{\bf{r}}}} .
\end{equation}
The covariance matrix is defined by means of the zero-mean data matrix ${{\Delta}_{0}} \in {\mathbb{R} ^{N \times n}}$:
\begin{equation}
{{\Delta}_{0}} = \Delta- \bf{1} \cdot \boldsymbol{\mu},
\end{equation}
where ${\bf{1}} \in {\mathbb{R} ^{N \times 1}}$ represents a column vector with all entries equal to one.
Then, the covariance matrix ${{\bf{C}}} \in {\mathbb{R} ^{n \times n}}$ of the data is defined as:
\begin{equation}
{{\bf{C}}} = \frac{1}{{N - 1}}{\Delta}_{0}^T{{\Delta}_{0}},
\end{equation}
where $(\cdot)^T$ denotes the transpose operation. The multivariate statistics ${{\boldsymbol{\mu }}}$ and $\bf{C}$ represent the multivariate distribution of nominal observations, and we assume that $\bf{C}$ is full rank. The scalar ${t^2}$ statistic is defined as a function of a single state pattern ${\bf{r}}\in \mathbb{R} ^{1 \times n}$:
\begin{equation}
\label{t_2_def}
t^2(\mathbf{r}) = \left( \mathbf{r} - \boldsymbol{\mu} \right){\mathbf{C}}^{-1}{\left( \mathbf{r} - \boldsymbol{\mu} \right)^T}.
\end{equation}
The ${t^2}$ statistic is small when the pattern vector ${\bf{r}}$ represents nominal states, while it increases when the pattern vector ${\bf{r}}$ deviates from the nominal behaviour. In order to define the control limits $UCL$ and $LCL$ of the control chart, in this first phase we calculate the mean value $\mu_{t^2}$ and standard deviation $\sigma_{t^2}$ of the $t^2$ values obtained from all the observations of the historical dataset ${\bf{r}}\in{\Delta}$. Then we define the safety thresholds as:
\begin{equation}
\label{Safety_Thresholds}
\left\{
\begin{array}{lll}
UCL_{t^2} & = & \mu_{t^2} + 3 \sigma_{t^2}\\
& & \\
LCL_{t^2} & = & \max(\mu_{t^2} - 3 \sigma_{t^2}, 0)\\
\end{array}\right..
\end{equation}
\subsubsection{Phase two}
In the second phase, new observation vectors are measured, and the corresponding $t^2$ values are calculated as in Equation (\ref{t_2_def}). The Hotelling control chart may be seen as a monitoring tool that plots the $t^2$ values as consecutive points in time and compares them against the control limits. The process is considered to be ``out of control'' when the $t^2$ values continuously exceed the control limits.
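Both phases of the Hotelling chart can be sketched compactly. This is a generic, textbook-style implementation under our own naming conventions, not the authors' code; the three-sigma limits follow Equation (\ref{Safety_Thresholds}).

```python
import numpy as np

def fit_hotelling(Delta):
    """Phase one: mean, inverse covariance and control limits
    from the nominal dataset Delta (N x n)."""
    mu = Delta.mean(axis=0)
    C = np.cov(Delta, rowvar=False)      # (N-1)-normalized covariance
    Cinv = np.linalg.inv(C)              # assumes C is full rank
    t2 = np.array([(r - mu) @ Cinv @ (r - mu) for r in Delta])
    ucl = t2.mean() + 3.0 * t2.std()
    lcl = max(t2.mean() - 3.0 * t2.std(), 0.0)
    return mu, Cinv, (lcl, ucl)

def t2_statistic(r, mu, Cinv):
    """Phase two: t^2 value of a newly acquired pattern r."""
    return (r - mu) @ Cinv @ (r - mu)
```

In the monitoring phase, consecutive \texttt{t2\_statistic} values would be plotted over time and compared against the control limits returned by \texttt{fit\_hotelling}.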
\section{Results}
\label{Results}
After a prototyping phase, the condition monitoring system has been operating since April 2018 on several components of the plants described in Section \ref{case_study}, with a total of more than 600 input signals. As an example, some of the most critical components are shown in Table \ref{tab:20190208_analyzed_components}. As can be seen from the last column of the table, there is usually a large redundancy of sensors measuring the same, or closely related, signals. The components listed in Table \ref{tab:20190208_analyzed_components} are among the most relevant components for the plant operators, as it is known that their malfunctioning may, in some cases, lead to major failures or plant emergency stops.
\begin{table}[ht!]
\centering
\begin{tabular}{l|l|c}
\hline
\textbf{Component name} & \textbf{Measured Signals} & \textbf{Number of sensors} \\
\hline
Generation Units & Vibrations & 34 \\
\hline
HV Transformer & Temperatures & 27 \\
& Gasses levels & \\
\hline
Turbine & Pressures & 27 \\
& Flows & \\
& Temperatures & \\
\hline
Oleo-dynamic system & Pressures & 20 \\
& Temperatures & \\
\hline
Supports & Temperatures & 54 \\
\hline
Alternator & Temperatures & 43 \\
\hline
\end{tabular}
\vspace{0.05cm}
\caption{List of main components analyzed for the hydro plants.}
\label{tab:20190208_analyzed_components}
\end{table}
Since April 2018, our system has detected more than $20$ anomalous situations; the full list of occurrences is given in Table \ref{occurrences}. The anomalies have different degrees of severity, defined together with the plant operators, ranging from ``no action needed'' to ``severe malfunction'' leading to a plant emergency stop.
It is worth noting that these events were not reported by the standard condition-monitoring systems operative in the plants, and in some cases would not have been well identified by the multivariate Hotelling control chart either, thus emphasizing the importance of the more sophisticated KPI introduced in this paper. In addition, we also see how the ability to identify non-nominal operations improves over time, as new information is acquired.
\begin{table*}[htbp]
\centering
\begin{tabular}{|c|c|c|l|l|l|}
\hline
\rowcolor[rgb]{ .949, .949, .949} \textbf{Plant Code} & \textbf{Unit ID} & \textbf{Warning Date} & \textbf{Title} & \textbf{Severity Level} & \textbf{Feedback} \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 2 & 01/25/2018 & Efficiency Parameters & Low & Weather anomalies effects \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 1 & 07/02/2019 & Supports Temperature & Low & Under investigation \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 3 & 03/03/2019 & Francis Turbine & Low & Anomaly related to ongoing maintenance activities\\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 1 & 06/11/2018 & Efficiency Parameters & Medium-Low & Not relevant anomaly \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 1 & 06/29/2018 & Francis Turbine & Medium-Low & Weather anomalies effects on turbine's components temperature \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 4 & 06/30/2018 & Efficiency Parameters & Medium-Low & Not relevant anomaly \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 2 & 08/07/2018 & Francis Turbine & Medium-Low & Weather anomalies effects on turbine's components temperature \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 2 & 09/14/2018 & Generator Vibrations & Medium-Low & Not relevant anomaly \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 3 & 01/10/2019 & Generator Temperature & Medium-Low & Under investigation \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 3 & 01/16/2019 & Transformer Temperature & Medium-Low & Weather anomalies effects on transformer temperature \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 2 & 03/03/2019 & Generator Temperature & Medium-Low & Anomaly related to ongoing maintenance activities\\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 3 & 06/17/2019 & Generator Temperature & Medium-Low & Under investigation \\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 4 & 06/21/2019 & Generator Temperature & Medium-Low & Weather anomalies effects on generator temperature \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 1 & 04/22/2018 & HV transformer gasses & Medium-High & Monitoring system anomaly \\
\hline
\rowcolor[rgb]{ .886, .937, .855} B & 2 & 06/27/2018 & Generator Vibrations & Medium-High & Data Quality issue \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 1 & 10/29/2018 & Generator Vibrations & Medium-High & Data Quality issue \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 3 & 11/20/2018 & Generator Vibrations & Medium-High & Under investigation \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 1 & 03/24/2019 & Francis Turbine & Medium-High & Under investigation \\
\hline
\rowcolor[rgb]{ .886, .937, .855} A & 2 & 04/07/2019 & Oleo-Dynamic System & Medium-High & Under investigation \\
\hline
\rowcolor[rgb]{ .988, .894, .839} B & 2 & 10/01/2018 & Generator Temperature & High & Sensor anomaly on block channel signal\\
\hline
\rowcolor[rgb]{ .988, .894, .839} B & 2 & 11/01/2018 & Generator Temperature & High & Sensor anomaly on block channel signal\\
\hline
\rowcolor[rgb]{ .988, .894, .839} A & 2 & 03/01/2019 & HV transformer gasses & High & Under investigation \\
\hline
\end{tabular}%
\caption{List of anomalous behaviours that have been noticed during 16 months of test on the two hydropower plants. Lines 14 and 20 correspond to the two faults that have been illustrated in this paper.}
\label{occurrences}%
\end{table*}%
We now describe in more detail two anomalies, one from each plant, as summarized in Table \ref{tab:20190131_SOM_results}.
\begin{table}[ht!]
\centering
\begin{tabular}{c|c|c|l}
\hline
\textbf{Case Study} &\textbf {Plant} & \textbf{Warning name} &\textbf{ Warning date} \\
\hline
1 & B & Generator Temperature & 10/01/2018 \\
\hline
2 & A & HV transformer gasses & 04/22/2018 \\
\hline
\end{tabular}%
\caption{Two failures reported by the predictive system and discussed in detail.}
\label{tab:20190131_SOM_results}%
\end{table}%
\subsection{Case Study 1: plant B - generator temperature signals}
In October 2018, our system reported an anomaly regarding a sensor measuring the iron temperature of the alternator. It was then observed that the temperature values of this sensor were higher than usual, as shown in Figure \ref{fig:Temperatura_1_Ferro_Alternatore_2_warning_id1_zoom.}, and also higher than the measurements taken by other similar sensors. However, the temperature values did not yet exceed the warning threshold of the condition-monitoring systems already operative in the plant.
An inspection of the sensor measurements showed that the proposed SOM-based KPI promptly raised a warning on the exact day the anomaly started, as shown in Figure \ref{fig:KPI_temp_IDRT_subplot_SOM_T2}; for this case, the time-extension of the training dataset was nine months, from 1 January 2018 to 1 September 2018. After receiving the warning alert, the plant operators checked the sensor and confirmed the event as a relevant anomaly. In particular, they acknowledged that this was a serious problem, since a further degradation of the measurement could have eventually led to the stop of the generation unit. For this reason, timely actions were taken: operators restored the nominal and correct behavior of the sensor starting on the 12$^{th}$ of October 2018. The saved costs related to the prevented stop of the generation unit were estimated to be in the range between 25 k\euro\ and 100 k\euro.
While the Hotelling control chart also detected an anomalous behaviour of the sensor in the same time frame, it would have given rise to several false positives in the past, as shown in Figure \ref{fig:KPI_temp_IDRT_subplot_SOM_T2}.
\begin{figure}[!ht]
\centering
\includegraphics[trim=3cm 0cm 2cm 0cm,width=1\linewidth]
{./fig/4_Results/Presenzano_Generator_v2.eps}
\caption{Measurement of the temperature sensor in the alternator of plant B. Anomalous values were detected by our system (start warning), timely actions were taken, and the correct functioning was restored (end warning).}
\label{fig:Temperatura_1_Ferro_Alternatore_2_warning_id1_zoom.}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[trim=3cm 0cm 2cm 0cm,width=\linewidth]{./fig/4_Results/Presenzano_Generator_con_T2_Mauro.eps}
\caption{Plant B: KPI as a function of time. SOM-based results (top) are compared with $t^2$-based results (bottom).}
\label{fig:KPI_temp_IDRT_subplot_SOM_T2}
\end{figure*}
\subsection{Case study 2: Plant A - HV Transformer anomaly}
The SOM-based KPI detected an anomaly on the HV transformer of one of the generation units at the beginning of June 2018, as shown in Figure \ref{fig:KPI_hydran_IDCA_subplot_SOM_T2}. Upon inspection, the plant operators informed us that similar anomalous situations had also occurred in the past, but had not been tagged as faulty behaviours. As soon as a plant operator notified us of the time occurrences of the similar past anomalies, we removed the corresponding signals from the training set. We then recomputed our KPI based on the revised historical dataset, and the KPI retrospectively revealed that the ongoing faulty pattern had actually started one month earlier. This updated information was then validated by analyzing the output of an already installed gas monitoring system, which continuously monitors a composite value of various fault gases in ppm (parts per million) and tracks the oil moisture. The gas monitoring system had been measuring increasing values, with respect to the historical ones, since the 22$^{nd}$ of April 2018, but no warning had been generated by that system. This was not, however, a critical fault, as the level of gasses in the transformer oil did not exceed the maximum feasible limit. Nevertheless, the warning triggered by our system was used to schedule maintenance actions that restored the nominal operating conditions. In addition, the feedback from the plant operators was very useful to tune the SOM-based monitoring system and to increase its early detection capabilities.
In this case, the Hotelling control chart detected the anomalous behaviour only 20 days after our SOM-based KPI.
\begin{figure*}[!ht]
\centering
\includegraphics[trim=4cm 0cm 2cm 0cm,width=\linewidth]
{./fig/4_Results/Soverzene_Transormer_con_T2.eps}
\caption{Plant A: KPI as a function of time. SOM-based results (top) are compared with $t^2$-based results (bottom).}
\label{fig:KPI_hydran_IDCA_subplot_SOM_T2}
\end{figure*}
\section{Conclusion}
\label{Conclusion}
Driven by rapidly evolving enabling technologies, most notably Internet-of-Things sensors and communication tools, together with more powerful artificial intelligence algorithms, condition monitoring, early diagnostics and predictive maintenance methodologies and tools are becoming some of the most interesting areas of research in the power community. While some preliminary examples can be found in many fields, solar and wind plants being among them, fewer applications can be found in the field of hydropower plants. In this context, this paper is one of the first examples, at least to the best of the authors' knowledge, in which the results of a newly proposed KPI are validated over more than a year of field testing in two hydropower plants.

This paper provided very promising preliminary results, which encourage further research on this topic. In particular, the proposed procedure cannot yet be implemented in a fully unsupervised fashion, as some interaction with plant operators still takes place when alarm signals are raised. Also, the proposed condition monitoring strategy is a first step towards fully automatic predictive maintenance schemes, where faults are not just observed but are actually predicted ahead of time, possibly when they are only at an incipient stage. In the opinion of the authors, this is a very promising area of research, and there is a general interest towards the development of such predictive strategies.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\input{./section/6_Bibliography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/alessandro_betti.jpg}}]{Alessandro Betti}
Dr. Alessandro Betti received the M.S. degree in Physics and the Ph.D. degree in Electronics Engineering from the University of Pisa, Italy, in 2007 and 2011, respectively. His main field of research was the modeling of noise and transport in quasi-one-dimensional devices. His work has been published in 10 papers in peer-reviewed journals in the fields of solid-state electronics and condensed matter physics and in 16 conference papers, and he presented his research for three straight years at the top IEEE international conference on electron devices, the International Electron Devices Meeting, in the USA. In September 2015 he joined the company i-EM in Livorno, where he currently works as a Senior Data Scientist, developing power generation forecasting, predictive maintenance and Deep Learning models, as well as solutions in the electrical mobility field, and managing a Data Science Team.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/Emanuele_Crisostomi}}]{Emanuele Crisostomi}(M'12-SM'16) received the B.Sc. degree in computer science engineering, the M.Sc. degree in automatic control, and the Ph.D. degree in automatics, robotics, and bioengineering from the University of Pisa, Pisa, Italy, in 2002, 2005, and 2009, respectively. He is currently an Associate Professor of electrical engineering in the Department of Energy, Systems, Territory and Constructions Engineering, University of Pisa. He has authored tens of publications in top refereed journals and he is a co-author of the recently published book on ``Electric and Plug-in Vehicle Networks: Optimization and Control'' (CRC Press, Taylor and Francis Group, 2017). His research interests include control and optimization of large scale systems, with applications to smart grids and green mobility networks.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/gianluca_paolinelli.jpg}}]{Gianluca Paolinelli}
Gianluca Paolinelli received the bachelor's and master's degrees in electrical engineering from the University of Pisa, Pisa, Italy, in 2014 and 2018, respectively. His research interests include big data analysis and computational intelligence applied to on-line monitoring and diagnostics.
Currently, he is an electrical software engineer focused on the development of electric and hybrid power-train controls for Pure Power Control S.r.l.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/antonio_piazzi.jpg}}]{Antonio Piazzi}
Antonio Piazzi received the M.Sc. degree in electrical engineering from the University of Pisa, Pisa, Italy, in 2013. An electrical engineer at i-EM since 2014, he gained his professional experience in the field of renewable energies. His research interests include machine learning and statistical data analysis, with main applications to modeling and monitoring the behaviour of renewable power plants. Currently, he is working on big data analysis applied to hydropower plant signals.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/FabrizioRuffini_Linkedin}}]{Fabrizio Ruffini}
Fabrizio Ruffini received the Ph.D. degree in Experimental Physics from the University of Siena, Siena, Italy, in 2013. His research activity is centered on data analysis, with particular interest in multidimensional statistical analysis. During his research activities, he was at the Fermi National Accelerator Laboratory (Fermilab), Chicago, USA, and at the European Organization for Nuclear Research (CERN), Geneva, Switzerland. Since 2013, he has been working at i-EM as a data scientist, focusing on applications in the renewable energy sector, atmospheric physics, and smart grids. Currently, he is a senior data scientist with focus on international funding opportunities and dissemination activities.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/6_Bibliography/Mauro_Tucci}}]{Mauro Tucci} received the Ph.D. degree in applied electromagnetism from the University of Pisa, Pisa, Italy, in 2008. Currently, he is an Associate Professor with the Department of Energy, Systems, Territory and Constructions Engineering, University of Pisa. His research interests include computational intelligence
and big data analysis, with applications in electromagnetism, non-destructive testing, powerline communications, and smart grids.
\end{IEEEbiography}
\end{document}
\section{Introduction}\label{sec:intro}
Exact computation of characteristic quantities of Web-scale networks is often
impractical or even infeasible due to the humongous size of these graphs. It is
natural in these cases to resort to \emph{efficient-to-compute approximations}
of these quantities that, when of sufficiently high quality, can be used as
proxies for the exact values.
In addition to being huge, many interesting networks are \emph{fully-dynamic}
and can be represented as a \emph{stream} whose elements are edges/nodes
insertions and deletions which occur in an \emph{arbitrary} (even adversarial)
order. Characteristic quantities in these graphs are \emph{intrinsically
volatile}, hence there is limited added value in maintaining them exactly.
The goal is rather to keep track, \emph{at all times}, of a high-quality
approximation of these quantities. For efficiency, the algorithms should
\emph{aim at exploiting the available memory space as much as possible} and they
should \emph{require only one pass over the stream}.
We introduce \textsc{\algoname}\xspace, a suite of \emph{sampling-based, one-pass algorithms for
adversarial fully-dynamic streams to approximate the global number of triangles
and the local number of triangles incident to each vertex}. Mining local and
global triangles is a fundamental primitive with many applications (e.g.,
community detection~\citep{BerryHLVP11}, topic
mining~\citep{EckmannM02}, spam/anomaly
detection~\citep{BecchettiBCG10,LimK15},
ego-networks mining~\citep{epasto2015ego},
and protein interaction networks analysis~\citep{MiloSOIKA02}).
Many previous works on triangle estimation in streams also employ sampling (see
Sect.~\ref{sec:relwork}), but they usually require the user to
specify \emph{in advance} an \emph{edge sampling probability $p$} that is fixed
for the entire stream. This approach presents several significant drawbacks.
First, choosing a $p$ that allows one to obtain the desired approximation
quality requires knowing or guessing a number of properties of the input (e.g., the
size of the stream). Second, a fixed $p$ implies that the sample size grows with
the size of the stream, which is problematic when the stream size is not known
in advance: if the user specifies a large $p$, the algorithm may run out of
memory, while for a smaller $p$ it will provide a suboptimal estimation.
Third, even assuming to be able to compute a $p$ that ensures (in expectation)
full use of the available space, the memory would be fully utilized only at the
end of the stream, and the estimations computed throughout the execution would
be suboptimal.
\paragraph{Contributions}
We address all the above issues by taking a significant departure from
the fixed-probability, independent edge sampling approach taken even by
state-of-the-art methods~\citep{LimK15}. Specifically:
\begin{longitem}
\item We introduce \textsc{\algoname}\xspace (\emph{TRI}angle \emph{E}stimation from
\emph{ST}reams), a suite of \emph{one-pass streaming algorithms} to
approximate, at each time instant, the global and local number of
triangles in a \emph{fully-dynamic} graph stream (i.e., a sequence of
edges additions and deletions in arbitrary order) using a \emph{fixed
amount of memory}. This is the first contribution that enjoys all these
properties. \textsc{\algoname}\xspace only requires the user to specify \emph{the amount of
available memory}, an interpretable parameter whose value is certainly known
to the user.
\item Our algorithms maintain a sample of edges: they use the \emph{reservoir
sampling}~\citep{Vitter85} and \emph{random pairing}~\citep{GemullaLH08}
sampling schemes to exploit the available memory as much as possible. To the
best of our knowledge, ours is the first application of these techniques to
subgraph counting in fully-dynamic, arbitrarily long, adversarially ordered
streams. We present an analysis of the unbiasedness and of the variance of
our estimators, and establish strong concentration results for them. The use
of reservoir sampling and random pairing requires additional sophistication
in the analysis, as the presence of an edge in the sample is \emph{not
independent} from the concurrent presence of another edge. Hence, in our
proofs we must consider the complex dependencies in events involving sets of
edges. The gain is worth the effort: we prove that the variance of our
algorithms is smaller than that of state-of-the-art methods~\citep{LimK15},
and this is confirmed by our experiments.
\item We conduct an extensive experimental evaluation of \textsc{\algoname}\xspace on very large
graphs, some with billions of edges, comparing the performances of our
algorithms to those of existing state-of-the-art
contributions~\citep{LimK15,JhaSP15,PavanTTW13}. \emph{Our algorithms
significantly and consistently reduce the average estimation error by up to
$90\%$} w.r.t.~the state of the art, both in the global and local estimation
problems, while using the same amount of memory. Our algorithms are also
extremely scalable, showing update times in the order of hundreds of
microseconds for graphs with billions of edges.
\end{longitem}
In this article, we extend the conference version~\citep{DeStefaniERU16} in
multiple ways. First of all, we include all proofs of our theoretical results
and give many additional details that were omitted from the conference version
due to lack of space. Secondly, we strengthen the analysis of \textsc{\algoname}\xspace, presenting
tighter bounds to the variance of its variants. Thirdly, we show how to extend
\textsc{\algoname}\xspace to approximate the count of triangles in multigraphs. Additionally, we
include a whole subsection of discussion of our results, highlighting their
advantages, disadvantages, and limitations. Finally, we expand our experimental
evaluation, reporting the results of additional experiments and giving
additional details on the comparison with existing state-of-the-art methods.
\paragraph{Paper organization}
We formally introduce the settings and the problem in
Sect.~\ref{sec:prelims}. In Sect.~\ref{sec:relwork} we discuss related works.
We present and analyze \textsc{\algoname}\xspace and discuss our design choices in
Sect.~\ref{sec:algorithms}. The results of our experimental evaluation are
presented in Sect.~\ref{sec:experiments}. We draw our conclusions in
Sect.~\ref{sec:concl}. Some of the proofs of our theoretical results are
deferred to Appendix~\ref{sec:appendix}.
\section{Preliminaries}\label{sec:prelims}
We study the problem of counting global and local triangles in a fully-dynamic
undirected graph as an arbitrary (adversarial) stream of edge insertions and
deletions.
Formally, for any (discrete) time instant $t\ge 0$, let
$G^{(t)}=(V^{(t)},E^{(t)})$ be the graph observed up to and including time $t$.
At time $t=0$ we have $V^{(t)}=E^{(t)}=\emptyset$. For any $t>0$, at time $t+1$
we receive an element $e_{t+1}=(\bullet,(u,v))$ from a stream, where
$\bullet\in\{+,-\}$ and $u,v$ are two distinct vertices. The graph
$G^{(t+1)}=(V^{(t+1)},E^{(t+1)})$ is obtained by \emph{inserting a new edge or
deleting an existing edge} as follows:
\[
E^{(t+1)}=\left\{
\begin{array}{ll}
E^{(t)}\cup\{(u,v)\}\mbox{ if } \bullet=\mbox{``}+\mbox{''}\\
E^{(t)}\setminus\{(u,v)\}\mbox{ if } \bullet=\mbox{``}-\mbox{''}
\end{array}
\right.\enspace.
\]
If $u$ or $v$ do not belong to $V^{(t)}$, they are added to $V^{(t+1)}$. Nodes
are deleted from $V^{(t)}$ when they have degree zero.
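For concreteness, the evolution of $G^{(t)}$ under a fully-dynamic stream can be sketched as follows (an illustrative Python sketch, not part of the proposed algorithms; the dictionary-of-sets representation and the function name are our own choices):

```python
from collections import defaultdict

def apply_event(adj, event):
    """Apply one stream element (sign, (u, v)) to an adjacency map.

    `adj` maps each vertex to the set of its neighbours; vertices of
    degree zero are removed, mirroring the model above.  We assume the
    stream is well formed: "+" never re-inserts an existing edge and
    "-" only deletes an existing one.
    """
    sign, (u, v) = event
    if sign == "+":
        adj[u].add(v)
        adj[v].add(u)
    else:
        adj[u].discard(v)
        adj[v].discard(u)
        for w in (u, v):
            if not adj[w]:
                del adj[w]

# A toy fully-dynamic stream: three insertions, then two deletions.
adj = defaultdict(set)
stream = [("+", (1, 2)), ("+", (2, 3)), ("+", (3, 1)),
          ("-", (2, 3)), ("-", (1, 2))]
for ev in stream:
    apply_event(adj, ev)
# Only the edge (1, 3) survives; vertex 2 has degree zero and is removed.
```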
Edges can be added and deleted in the graph in an arbitrary adversarial order,
i.e., as to cause the worst outcome for the algorithm, but we assume that the
adversary has \emph{no access to the random bits} used by the algorithm.
We assume that \emph{all operations have effect}: if $e\in E^{(t)}$
(resp.~$e\not\in E^{(t)}$), then $(+,e)$ (resp.~$(-,e)$) cannot appear on the
stream at time $t+1$.
Given a graph $G^{(t)}=(V^{(t)},E^{(t)})$, a \emph{triangle} in $G^{(t)}$ is a
\emph{set} of three edges $\{(u,v),(v,w),(w,u)\}\subseteq E^{(t)}$, with $u$,
$v$, and $w$ being three distinct vertices. We refer to $\{u,v,w\}\subseteq
V^{(t)}$ as the \emph{corners} of the triangle. We denote with $\Delta^{(t)}$
the set of \emph{all} triangles in $G^{(t)}$, and, for any vertex $u\in
V^{(t)}$, with $\Delta^{(t)}_u$ the subset of $\Delta^{(t)}$ containing all and
only the triangles that have $u$ as a corner.
\paragraph{Problem definition}
We study the \emph{Global} (resp.~\emph{Local}) Triangle Counting Problem in
Fully-dynamic Streams, which requires computing, at each time $t\ge 0$, an
estimation of $|\Delta^{(t)}|$ (resp.~an estimation of $|\Delta^{(t)}_u|$ for
each $u\in V^{(t)}$).
\paragraph{Multigraphs} Our approach can be further extended to count the number
of global and local triangles on a \emph{multigraph} represented as a stream of
edges. Using a formalization analogous to that discussed for graphs, for any
(discrete) time instant $t\ge 0$, let $G^{(t)}=(V^{(t)},\mathcal{E}^{(t)})$ be
the multigraph observed up to and including time $t$, where $\mathcal{E}^{(t)}$
is now a \emph{bag} of edges between vertices of $V^{(t)}$. The multigraph
evolves through a series of edges additions and deletions according to the same
process described for graphs. The definition of triangle in a multigraph is also
the same. As before we denote with $\Delta^{(t)}$ the set of \emph{all}
triangles in $G^{(t)}$, but now this set may contain multiple triangles with the
same set of vertices, although each of these triangles will be a different set
of three edges among those vertices. For any vertex $u\in V^{(t)}$, we still
denote with $\Delta_u^{(t)}$ the subset of $\Delta^{(t)}$ containing all and
only the triangles that have $u$ as a corner, with a similar caveat as
$\Delta^{(t)}$. The problems of global and local triangle counting in multigraph
edge streams are defined exactly in the same way as for graph edge streams.
\section{Related work}\label{sec:relwork}
The literature on exact and approximate triangle counting is extremely rich,
including exact algorithms, graph
sparsifiers~\citep{TsourakakisKMF09,TsourakakisKM11}, complex-valued
sketches~\citep{ManjunathMPS11,KaneMSS12}, and MapReduce
algorithms~\citep{SuriV11,PaghT12,ParkC13,ParkSKP14,ParkMK16}. Here we restrict the discussion to
the works most related to ours, i.e., to those presenting algorithms for
counting or approximating the number of triangles from data streams. We refer
to the survey by~\citet{Latapy08} for an in-depth discussion of other works.
Table~\ref{table:comparison} presents a summary of the comparison, in terms of
desirable properties, between this work and relevant previous contributions.
Many authors presented algorithms for more restricted (i.e., less generic)
settings than ours, or for which the constraints on the computation are more
lax~\citep{BarYossefKS02,BuriolFSMSS06,JowharyG05,KutzkovP13}. For example,
\citet{BecchettiBCG10} and~\citet{KolountzakisMPT12} present algorithms for
approximate triangle counting from \emph{static} graphs by performing multiple
passes over the data. \citet{PavanTTW13} and \citet{JhaSP15} propose algorithms
for approximating only the global number of triangles from
\emph{edge-insertion-only} streams. \citet{BulteauFKP15} present a one-pass
algorithm for fully-dynamic graphs, but the triangle count estimation is
(expensively) computed only at the end of the stream and the algorithm requires,
in the worst case, more memory than what is needed to store the entire graph.
\citet{AhmedDKN14} apply the sampling-and-hold approach to insertion-only graph
stream mining to obtain, only at the end of the stream and using non-constant
space, an estimation of many network measures including triangles.
None of these works has \emph{all} the features offered by \textsc{\algoname}\xspace: performs a
single pass over the data, handles fully-dynamic streams, uses a fixed amount of
memory space, requires a single interpretable parameter, and returns an
estimation at each time instant. Furthermore, our experimental results show that
we outperform the algorithms from~\citep{PavanTTW13,JhaSP15} on insertion-only
streams.
\citet{LimK15} present an algorithm for insertion-only streams that is based on
independent edge sampling with a fixed probability: for each edge on the
stream, a coin with a user-specified fixed tails probability $p$ is flipped,
and, if the outcome is tails, the edge is added to the stored sample and the
estimation of local triangles is updated. Since the memory is not fully utilized
during most of the stream, the variance of the estimate is large. Our approach
handles fully-dynamic streams and makes better use of the available memory space
at each time instant, resulting in a better estimation, as shown by our
analytical and experimental results.
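As a contrast with our approach, the fixed-probability scheme just described can be sketched as follows (a simplified illustration of independent edge sampling, not the actual \textsc{mascot} implementation; it only shows how the sample size grows with the stream length instead of staying within a memory budget):

```python
import random

def fixed_p_sample(stream, p, seed=0):
    """Independent edge sampling with a fixed probability p, in the
    spirit of the scheme described above (illustrative sketch only)."""
    rng = random.Random(seed)
    return [edge for edge in stream if rng.random() < p]

stream = [(i, i + 1) for i in range(10000)]
sample = fixed_p_sample(stream, p=0.1)
# E[|sample|] = p * |stream| = 1000 here, and the expected sample size
# keeps growing linearly as more edges arrive on the stream.
```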
\citet{Vitter85} presents a detailed analysis of the reservoir sampling scheme
and discusses methods to speed up the algorithm by reducing the number of calls to
the random number generator. Random Pairing~\citep{GemullaLH08} is an extension
of reservoir sampling to handle fully-dynamic streams with insertions and
deletions. \citet{CohenCD12} generalize and extend the Random Pairing approach
to the case where the elements on the stream are key-value pairs, where the
value may be negative (and less than $-1$). In our settings, where the value is
not less than $-1$ (for an edge deletion), these generalizations do not apply
and the algorithm presented by~\citet{CohenCD12} reduces essentially to Random
Pairing.
\begin{table}[ht]
\tbl{Comparison with previous contributions
}{
\begin{tabular}{cccccc}
\toprule
Work & \shortstack[c]{Single\\pass} & \shortstack[c]{Fixed\\space} &
\shortstack[c]{Local\\counts} & \shortstack[c]{Global\\counts} &
\shortstack[c]{Fully-dynamic\\streams}\\
\midrule
\citep{BecchettiBCG10} & \xmark & \cmark/\xmark$^\dagger$ & \cmark & \xmark & \xmark\\
\citep{KolountzakisMPT12} & \xmark & \xmark & \xmark & \cmark & \xmark \\
\citep{PavanTTW13} & \cmark & \cmark & \xmark & \cmark & \xmark \\
\citep{JhaSP15} & \cmark & \cmark & \xmark & \cmark & \xmark \\
\citep{AhmedDKN14} & \cmark & \xmark & \xmark & \cmark & \xmark \\
\citep{LimK15} & \cmark & \xmark & \xmark & \xmark & \xmark \\
This work & \cmark & \cmark & \cmark & \cmark & \cmark \\
\bottomrule
\end{tabular}
}
\begin{tabnote}
\tabnoteentry{$^\dagger$}{The required space is $O(|V^{(t)}|)$, which,
although not dependent on the number of triangles or on the number of
edges, is not fixed in the sense that it cannot be fixed a-priori.}
\end{tabnote}
\label{table:comparison}
\end{table}
\section{Algorithms}\label{sec:algorithms}
We present \textsc{\algoname}\xspace, a suite of three novel algorithms for approximate global and
local triangle counting from edge streams. The first two work on insertion-only
streams, while the third can handle fully-dynamic streams where edge deletions
are allowed. We defer the discussion of the multigraph case to
Sect.~\ref{sec:multigraphs}.
\paragraph{Parameters}
Our algorithms keep an edge sample $\mathcal{S}$ containing up to $M$ edges from the
stream, where $M$ is a positive integer parameter. For ease of presentation, we
realistically assume $M\ge 6$. In Sect.~\ref{sec:intro} we motivated the design
choice of only requiring $M$ as a parameter and remarked on its advantages over
using a fixed sampling probability $p$. Our algorithms are designed to use the
available space as much as possible.
\paragraph{Counters}
\textsc{\algoname}\xspace algorithms keep \emph{counters} to compute the estimations of the global
and local number of triangles. They \emph{always} keep one global counter $\tau$
for the estimation of the global number of triangles. Only the global counter is
needed to estimate the total triangle count. To estimate the local triangle
counts, the algorithms keep a set of local counters $\tau_u$ for a subset of the
nodes $u\in V^{(t)}$. The local counters are created on the fly as needed, and
\emph{always} destroyed as soon as they have a value of $0$. Hence our
algorithms use $O(M)$ space (with one exception, see Sect.~\ref{sec:improved}).
\paragraph{Notation}
For any $t\ge 0$, let $G^\mathcal{S}=(V^\mathcal{S},E^\mathcal{S})$ be the subgraph of $G^{(t)}$
containing all and only the edges in the current sample $\mathcal{S}$. We denote with
$\mathcal{N}^{\mathcal{S}}_u$ the \emph{neighborhood} of $u$ in $G^\mathcal{S}$:
$\mathcal{N}^{\mathcal{S}}_u=\{v\in V^{(t)} ~:~ (u,v)\in \mathcal{S}\}$ and with
$\mathcal{N}^{\mathcal{S}}_{u,v}= \mathcal{N}^{\mathcal{S}}_u \cap \mathcal{N}^{\mathcal{S}}_v$ the \emph{shared
neighborhood} of $u$ and $v$ in $G^\mathcal{S}$.
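For example, the shared neighbourhood $\mathcal{N}^{\mathcal{S}}_{u,v}$ can be computed directly from the edge sample (an illustrative Python sketch; the representation of $\mathcal{S}$ as a set of two-element frozensets is our own choice):

```python
def shared_neighborhood(sample, u, v):
    """N^S_{u,v}: common neighbours of u and v in the graph G^S induced
    by the edge sample (here a set of two-element frozensets)."""
    neigh_u = {w for e in sample if u in e for w in e if w != u}
    neigh_v = {w for e in sample if v in e for w in e if w != v}
    return neigh_u & neigh_v

S = {frozenset(e) for e in [(1, 2), (2, 3), (3, 1), (1, 4), (4, 2)]}
common = shared_neighborhood(S, 1, 2)   # {3, 4}: each closes a triangle with (1, 2)
```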
\paragraph{Presentation}
We only present the analysis of our algorithms for the problem of \emph{global}
triangle counting. For each presented result involving the estimation of the
global triangle count (e.g., unbiasedness, bound on variance, concentration
bound) and potentially using other global quantities (e.g., the number of
pairs of triangles in $\Delta^{(t)}$ sharing an edge), it is straightforward to
derive the corresponding variant for the estimation of the local triangle count,
using similarly defined local quantities (e.g., the number of
pairs of triangles in $\Delta_u^{(t)}$ sharing an edge.)
\subsection{A first algorithm -- \textsc{\algoname-base}\xspace}\label{sec:algobase}
We first present \textsc{\algoname-base}\xspace, which works on insertion-only streams and uses
standard reservoir sampling~\citep{Vitter85} to maintain the edge sample $\mathcal{S}$:
\begin{itemize}
\item If $t\le M$, then the edge $e_t=(u,v)$ on the stream at time $t$ is
deterministically inserted in $\mathcal{S}$.
\item If $t>M$, \textsc{\algoname-base}\xspace flips a biased coin with heads probability $M/t$. If
the outcome is heads, it chooses an edge $(w,z)\in\mathcal{S}$ uniformly at
random, removes $(w,z)$ from $\mathcal{S}$, and inserts $(u,v)$ in $\mathcal{S}$.
Otherwise, $\mathcal{S}$ is not modified.
\end{itemize}
After each insertion (resp.~removal) of an edge $(u,v)$ from $\mathcal{S}$, \textsc{\algoname-base}\xspace
calls the procedure \textsc{UpdateCounters} that increments (resp.~decrements)
$\tau$, $\tau_u$ and $\tau_v$ by $|\mathcal{N}^\mathcal{S}_{u,v}|$, and $\tau_c$ by one, for
each $c\in\mathcal{N}^{\mathcal{S}}_{u,v}$.
The pseudocode for \textsc{\algoname-base}\xspace is presented in Alg.~\ref{alg:triest-base}.
\begin{algorithm}[ht]
\small
\caption{\textsc{\algoname-base}\xspace}
\label{alg:triest-base}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} Insertion-only edge stream $\Sigma$, integer $M\ge6$}
\State $\mathcal{S}\leftarrow\emptyset$, $t \leftarrow 0$, $\tau \leftarrow 0$
\For{ {\bf each} element $(+,(u,v))$ from $\Sigma$}
\State $t\leftarrow t +1$
\If{\textsc{SampleEdge}$((u,v), t )$}
\State $\mathcal{S} \leftarrow \mathcal{S}\cup \{(u,v)\}$
\State \textsc{UpdateCounters}$(+,(u,v))$
\EndIf
\EndFor
\Statex
\Function{\textsc{SampleEdge}}{$(u,v),t$}
\If {$t\leq M$}
\State \textbf{return} True
\ElsIf{\textsc{FlipBiasedCoin}$(\frac{M}{t}) = $ heads}
\State $(u',v') \leftarrow$ random edge from $\mathcal{S}$
\State $\mathcal{S}\leftarrow \mathcal{S}\setminus \{(u',v')\}$
\State \textsc{UpdateCounters}$(-,(u',v'))$
\State \textbf{return} True
\EndIf
\State \textbf{return} False
\EndFunction
\Statex
\Function{UpdateCounters}{$(\bullet, (u,v))$}
\State $\mathcal{N}^\mathcal{S}_{u,v} \leftarrow \mathcal{N}^\mathcal{S}_u \cap \mathcal{N}^\mathcal{S}_v$
\ForAll {$c \in \mathcal{N}^\mathcal{S}_{u,v}$}
\State $\tau \leftarrow \tau \bullet 1$
\State $\tau_c \leftarrow \tau_c \bullet 1$
\State $\tau_u \leftarrow \tau_u \bullet 1$
\State $\tau_v \leftarrow \tau_v \bullet 1$
\EndFor
\EndFunction
\end{algorithmic}
\end{algorithm}
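For readers who prefer executable code, the pseudocode of Alg.~\ref{alg:triest-base} can be transcribed in Python roughly as follows (an unoptimized sketch for illustration only; the class and method names are ours, and the neighbour computation is deliberately naive):

```python
import random
from collections import defaultdict

class TriestBase:
    """Unoptimized Python transcription of TRIEST-base; for t <= M the
    counters are exact, since the whole stream fits in the reservoir."""

    def __init__(self, M, seed=0):
        assert M >= 6
        self.M, self.t, self.tau = M, 0, 0
        self.tau_local = defaultdict(int)   # local counters tau_u
        self.S = set()                      # edge sample, as frozensets
        self.rng = random.Random(seed)

    def _neighbors(self, u):
        return {w for e in self.S if u in e for w in e if w != u}

    def _update_counters(self, u, v, delta):
        # UpdateCounters(+/-, (u, v)): one unit per shared neighbour.
        for c in self._neighbors(u) & self._neighbors(v):
            self.tau += delta
            for x in (u, v, c):
                self.tau_local[x] += delta

    def insert(self, u, v):
        self.t += 1
        if self.t <= self.M:                       # always keep the edge
            self.S.add(frozenset((u, v)))
            self._update_counters(u, v, +1)
        elif self.rng.random() < self.M / self.t:  # reservoir replacement
            old = self.rng.choice(sorted(self.S, key=sorted))
            self.S.remove(old)
            self._update_counters(*old, -1)
            self.S.add(frozenset((u, v)))
            self._update_counters(u, v, +1)

# Feeding the 10 edges of the complete graph K5 (10 triangles) with
# M = 100 > 10 keeps the counters exact.
tb = TriestBase(M=100)
for u in range(5):
    for v in range(u + 1, 5):
        tb.insert(u, v)
```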
\subsubsection{Estimation}
For any pair of positive integers $a$ and $b$
such that $a\le\min\{M,b\}$ let
\[
\xi_{a,b} =
\left\{\begin{array}{cl}
1 & \text{if } b\le M\\[3pt]
\displaystyle\binom{b}{M}\Big/\binom{b-a}{M-a}=\prod_{i=0}^{a-1}\frac{b-i}{M-i} & \text{otherwise}
\end{array}\right.
\enspace.
\]
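Numerically, $\xi_{a,b}$ can be computed from either closed form (an illustrative Python sketch of the definition above; the function name is ours):

```python
from math import comb

def xi(a, b, M):
    """xi_{a, b} as defined above: 1 if b <= M, otherwise
    C(b, M) / C(b - a, M - a) = prod_{i=0}^{a-1} (b - i) / (M - i)."""
    if b <= M:
        return 1.0
    return comb(b, M) / comb(b - a, M - a)

# The two closed forms agree, e.g. for a = 3, b = 100, M = 20:
product_form = 1.0
for i in range(3):
    product_form *= (100 - i) / (20 - i)
```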
As shown in the following lemma, $\xi_{k,t}^{-1}$ is the probability that $k$
edges of $G^{(t)}$ are all in $\mathcal{S}$ at time $t$, i.e., the $k$-th order
inclusion probability of the reservoir sampling scheme. The proof can be found
in App.~\ref{app:algobase}.
\begin{lemma}\label{lem:reservoirhighorder}
For any time step $t$ and any positive integer $k\le t$, let $B$ be any
subset of $E^{(t)}$ of size $|B|=k\le t$. Then, at the end of time step $t$,
\[
\Pr(B\subseteq\mathcal{S})=\left\{\begin{array}{cl}0 & \text{if } k>M\\
\xi_{k,t}^{-1} & \text{otherwise}\end{array} \right.\enspace.
\]
\end{lemma}
We make use of this lemma in the analysis of \textsc{\algoname-base}\xspace.
Let, for any $t\ge 0$, $\xi^{(t)}= \xi_{3,t}$ and let $\tau^{(t)}$
(resp.~$\tau_u^{(t)}$) be the value of the counter $\tau$ at the end of time
step $t$ (i.e., after the edge on the stream at time $t$ has been processed by
\textsc{\algoname-base}\xspace) (resp.~the value of the counter $\tau_u$ at the end of time step $t$
if there is such a counter, 0 otherwise). When queried at the end of time $t$,
\textsc{\algoname-base}\xspace returns $\xi^{(t)}\tau^{(t)}$ (resp.~$\xi^{(t)}\tau_u^{(t)}$) as the
estimation for the global (resp.~local for $u\in V^{(t)}$) triangle count.
\subsubsection{Analysis}
We now present the analysis of the estimations computed by \textsc{\algoname-base}\xspace.
Specifically, we prove their unbiasedness (and their exactness for $t\le M$) and
then show an exact derivation of their variance and a concentration result. We
show the results for the global counts, but results analogous to those in
Thms.~\ref{thm:baseunbiased}, \ref{thm:basevariance},
and~\ref{thm:baseconcentration} hold for the local triangle count for any $u\in
V^{(t)}$, replacing the global quantities with the corresponding local ones. We
also compare, theoretically, the variance of \textsc{\algoname-base}\xspace with that of a
fixed-probability edge sampling approach~\citep{LimK15}, showing that \textsc{\algoname-base}\xspace
has smaller variance for the vast majority of the stream.
\subsubsection{Expectation}
We have the following result about the estimations computed by \textsc{\algoname-base}\xspace.
\begin{theorem}\label{thm:baseunbiased}
We have
\begin{align*}
\xi^{(t)}\tau^{(t)}=\tau^{(t)}=|\Delta^{(t)}| &\mbox{ if } t\le M\\
\mathbb{E}\left[\xi^{(t)}\tau^{(t)}\right]=|\Delta^{(t)}| &\mbox{ if } t>
M\enspace.
\end{align*}
\end{theorem}
The \textsc{\algoname-base}\xspace estimations are not only \emph{unbiased in all cases}, but
actually \emph{exact for $t\le M$}, i.e., for $t\le M$, they are the true
global/local number of triangles in $G^{(t)}$.
To prove Thm.~\ref{thm:baseunbiased}, we need to introduce a technical
lemma. Its proof can be found in Appendix~\ref{app:algobase}. We denote with
$\Delta^{\mathcal{S}}$ the set of triangles in $G^{\mathcal{S}}$.
\begin{lemma}\label{lem:baseunbiasedaux}
After each call to \textsc{UpdateCounters}, we have $\tau=|\Delta^\mathcal{S}|$
and $\tau_v=|\Delta_v^\mathcal{S}|$ for any $v\in V_\mathcal{S}$ s.t. $|\Delta_v^\mathcal{S}|\ge
1$.
\end{lemma}
From here, the proof of Thm.~\ref{thm:baseunbiased} is a straightforward
application of Lemma~\ref{lem:baseunbiasedaux} for the case $t\le M$ and of that
lemma, the definition of expectation, and Lemma~\ref{lem:reservoirhighorder}
otherwise. The complete proof can be found in App.~\ref{app:algobase}.
\subsubsection{Variance}
We now analyze the variance of the estimation returned by \textsc{\algoname-base}\xspace for $t>M$
(the variance is $0$ for $t\le M$).
Let $r^{(t)}$ be the \emph{total} number of unordered pairs of distinct
triangles from $\Delta^{(t)}$ sharing an edge,\footnote{Two distinct triangles
can share at most one edge.} and $w^{(t)}=\binom{|\Delta^{(t)}|}{2}-r^{(t)}$ be
the number of unordered pairs of distinct triangles that do not share any edge.
\begin{theorem}\label{thm:basevariance}
For any $t>M$, let $f(t) = \xi^{(t)}-1$,
\[
g(t) = \xi^{(t)}\frac{(M-3)(M-4)}{(t-3)(t-4)} -1
\]
and
\[
h(t) = \xi^{(t)}\frac{(M-3)(M-4)(M-5)}{(t-3)(t-4)(t-5)}
-1\enspace(\le 0).
\]
We have:
\begin{equation}\label{eq:globvariancebase}
\mathrm{Var}\left[\xi^{(t)}\tau^{(t)}\right] = |\Delta^{(t)}|
f(t)+r^{(t)}g(t)+w^{(t)}h(t).
\end{equation}
\end{theorem}
In our proofs, we carefully account for the fact that, as we use reservoir
sampling~\citep{Vitter85}, the presence of an edge $a$ in $\mathcal{S}$ is \emph{not
independent} from the concurrent presence of another edge $b$ in $\mathcal{S}$. This is
not the case for samples built using fixed-probability independent edge
sampling, such as \textsc{mascot}~\citep{LimK15}. When computing the variance,
we must consider not only pairs of triangles that share an edge, as in the case
for independent edge sampling approaches, but also pairs of triangles sharing no
edge, since their respective presences in the sample are not independent events.
The gain is worth the additional sophistication needed in the analysis, as the
contribution to the variance by triangles sharing no edges is
\emph{non-positive} ($h(t)\le 0$), i.e., it reduces the variance. A comparison
of the variance of our estimator with that obtained with a fixed-probability
independent edge sampling approach is discussed in Sect.~\ref{sec:comparison}.
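To make the roles of the three terms concrete, the right-hand side of the variance formula can be evaluated numerically (an illustrative Python sketch of the expression in Thm.~\ref{thm:basevariance}; the inputs are assumed to be consistent triangle and pair counts for some graph, and the function name is ours):

```python
def triest_base_variance(t, M, n_triangles, r, w):
    """Evaluate |Delta^(t)| f(t) + r^(t) g(t) + w^(t) h(t) for t > M,
    with xi^(t) = t (t-1) (t-2) / (M (M-1) (M-2))."""
    xi = t * (t - 1) * (t - 2) / (M * (M - 1) * (M - 2))
    f = xi - 1
    g = xi * (M - 3) * (M - 4) / ((t - 3) * (t - 4)) - 1
    h = xi * (M - 3) * (M - 4) * (M - 5) / ((t - 3) * (t - 4) * (t - 5)) - 1
    return n_triangles * f + r * g + w * h

# Each triangle contributes f(t) > 0 for t > M, pairs sharing an edge
# contribute g(t) > 0, and pairs sharing no edge contribute h(t) <= 0,
# i.e., they reduce the variance.
```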
\begin{proof}[of Thm.~\ref{thm:basevariance}]
Assume $|\Delta^{(t)}|>0$, otherwise the estimation is deterministically
correct and has variance 0 and the thesis holds. Let $\lambda\in\Delta^{(t)}$
and $\delta_\lambda^{(t)}$ be as in the proof of Thm.~\ref{thm:baseunbiased}.
We have $\mathrm{Var}[\delta_\lambda^{(t)}]=\xi^{(t)}-1$ and from this and the
definition of variance and covariance we obtain
\begin{align}
\mathrm{Var}\left[\xi^{(t)}\tau^{(t)}\right] &= \mathrm{Var}\left[\sum_{\lambda\in\Delta^{(t)}}\delta_\lambda^{(t)}\right]
= \sum_{\lambda\in \Delta^{(t)}} \sum_{\gamma \in
\Delta^{(t)}}\textrm{Cov}\left[\delta_\lambda^{(t)} ,\delta_\gamma^{(t)}\right]
\nonumber\\
&= \sum_{\lambda\in \Delta^{(t)}}
\mathrm{Var}\left[\delta_\lambda^{(t)}\right] + \sum_{\substack{\lambda,
\gamma\in \Delta^{(t)}\\ \lambda \neq \gamma}}
\textrm{Cov}\left[\delta_\lambda^{(t)}
,\delta_\gamma^{(t)}\right]\nonumber\\
&= |\Delta^{(t)}|(\xi^{(t)}-1) + \sum_{\substack{\lambda,
\gamma\in \Delta^{(t)}\\ \lambda \neq \gamma}}
\textrm{Cov}\left[\delta_\lambda^{(t)}
,\delta_\gamma^{(t)}\right]\nonumber\\
&= |\Delta^{(t)}|(\xi^{(t)}-1) + \sum_{\substack{\lambda,
\gamma\in \Delta^{(t)}\\ \lambda \neq \gamma}}
\left(\mathbb{E}\left[\delta_\lambda^{(t)}\delta_\gamma^{(t)}\right] -
\mathbb{E}\left[\delta_\lambda^{(t)}\right]\mathbb{E}\left[\delta_\gamma^{(t)}\right]\right)\nonumber\\
&= |\Delta^{(t)}|(\xi^{(t)}-1) + \sum_{\substack{\lambda,
\gamma\in \Delta^{(t)}\\ \lambda \neq \gamma}}\left(
\mathbb{E}\left[\delta_\lambda^{(t)}\delta_\gamma^{(t)}\right] - 1\right)
\enspace.\label{eq:proofvariancebase}
\end{align}
Assume now $|\Delta^{(t)}|\ge 2$, otherwise we have $r^{(t)}=w^{(t)}=0$ and
the thesis holds as the second term on the
r.h.s.~of~\eqref{eq:proofvariancebase} is 0. Let $\lambda$ and $\gamma$ be
two distinct triangles in $\Delta^{(t)}$. If $\lambda$ and $\gamma$ do not
share an edge, we have
$\delta_\lambda^{(t)}\delta_\gamma^{(t)}=\xi^{(t)}\xi^{(t)}=\xi_{3,t}^2$
if all \emph{six} edges composing $\lambda$ and $\gamma$ are in $\mathcal{S}$ at
the end of time step $t$, and $\delta_\lambda^{(t)}\delta_\gamma^{(t)}=0$
otherwise. From Lemma~\ref{lem:reservoirhighorder} we then have that
\begin{align}
\mathbb{E}\left[\delta_\lambda^{(t)}\delta_\gamma^{(t)}\right]&=\xi_{3,t}^{2}\Pr\left(\delta_\lambda^{(t)}\delta_\gamma^{(t)}=\xi_{3,t}^{2}\right)=\xi_{3,t}^{2}\frac{1}{\xi_{6,t}}=\xi_{3,t}\prod_{j=3}^{5}\frac{M-j}{t-j}\nonumber\\
&=\xi^{(t)}\frac{(M-3)(M-4)(M-5)}{(t-3)(t-4)(t-5)}\label{eq:expectprodnoshare}\enspace.
\end{align}
If instead $\lambda$ and $\gamma$ share exactly one edge, we have
$\delta_\lambda^{(t)}\delta_\gamma^{(t)}=\xi_{3,t}^{2}$ if all \emph{five}
edges composing $\lambda$ and $\gamma$ are in $\mathcal{S}$ at the end of time step
$t$, and $\delta_\lambda^{(t)}\delta_\gamma^{(t)}=0$ otherwise. From
Lemma~\ref{lem:reservoirhighorder} we then have that
\begin{align}
\mathbb{E}\left[\delta_\lambda^{(t)}\delta_\gamma^{(t)}\right]&=\xi_{3,t}^{2}\Pr\left(\delta_\lambda^{(t)}\delta_\gamma^{(t)}=\xi_{3,t}^{2}\right)=\xi_{3,t}^{2}\frac{1}{\xi_{5,t}}=\xi_{3,t}\prod_{j=3}^4\frac{M-j}{t-j}\nonumber\\
&=\xi^{(t)}\frac{(M-3)(M-4)}{(t-3)(t-4)}\enspace.\label{eq:expectprodshare}
\end{align}
The thesis follows by combining~\eqref{eq:proofvariancebase},
\eqref{eq:expectprodnoshare}, \eqref{eq:expectprodshare}, recalling the
definitions of $r^{(t)}$ and $w^{(t)}$, and slightly reorganizing the terms.
\end{proof}
\subsubsection{Concentration}
We have the following concentration result on the estimation returned by
\textsc{\algoname-base}\xspace. Let $h^{(t)}$ denote the maximum number of triangles sharing a single
edge in $G^{(t)}$.
\begin{theorem}\label{thm:baseconcentration}
Let $t\ge 0$ and assume $|\Delta^{(t)}| >0$.\footnote{If $|\Delta^{(t)}|=0$,
our algorithms correctly estimate $0$ triangles.}
For any $\varepsilon,\delta\in(0,1)$, let
\[
\Phi = \sqrt[3]{8\varepsilon^{-2}\frac{3h^{(t)}+1}{|\Delta^{(t)}|}\ln
\left(\frac{(3h^{(t)}+1)e}{\delta}\right)}\enspace.
\]
If
\[
M \ge \max \left\{t \Phi \left(1+\frac{1}{2}\ln^{2/3} \left (t \Phi
\right)\right), 12\varepsilon^{-1}+e^2, 25\right\},
\]
then $|\xi^{(t)}\tau^{(t)}-|\Delta^{(t)}||<\varepsilon|\Delta^{(t)}|$ with probability
$>1-\delta$.
\end{theorem}
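As a purely illustrative aid (the helper name and its interface are ours, not part of the algorithms), the following Python sketch evaluates the lower bound on $M$ required by Thm.~\ref{thm:baseconcentration} for given values of $t$, $|\Delta^{(t)}|$, $h^{(t)}$, $\varepsilon$, and $\delta$:

```python
import math

def triest_base_sample_size(t, num_triangles, h, eps, delta):
    """Evaluate the sample-size lower bound of the concentration theorem.

    t: current time step; num_triangles: |Delta^(t)| > 0;
    h: max number of triangles sharing a single edge in G^(t).
    """
    phi = (8.0 / eps**2 * (3 * h + 1) / num_triangles
           * math.log((3 * h + 1) * math.e / delta)) ** (1.0 / 3.0)
    t_phi = t * phi
    return max(t_phi * (1 + 0.5 * math.log(t_phi) ** (2.0 / 3.0)),
               12.0 / eps + math.e**2,
               25.0)
```

As expected from the $\varepsilon^{-2}$ dependence inside $\Phi$, a larger $\varepsilon$ yields a smaller required reservoir size.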
The roadmap to proving Thm.~\ref{thm:baseconcentration} is the following:
\begin{longenum}
\item we first define two simpler algorithms, named \textsc{indep} and
\textsc{mix}. The algorithms use, respectively, fixed-probability
independent sampling of edges and reservoir sampling (but with a different
estimator than the one used by \textsc{\algoname-base}\xspace);
\item we then prove concentration results on the estimators of
\textsc{indep} and \textsc{mix}. Specifically, the concentration result for
\textsc{indep} uses a result by~\citet{hajnal1970proof} on graph coloring,
while the one for \textsc{mix} will depend on the concentration result for
\textsc{indep} and on a Poisson-approximation-like technical result stating
that probabilities of events when using reservoir sampling are close to the
probabilities of those events when using fixed-probability independent
sampling;
\item we then show that the estimates returned by \textsc{\algoname-base}\xspace are close to
the estimates returned by \textsc{mix};
\item finally, we combine the above results and show that, if $M$ is large
enough, then the estimation provided by \textsc{mix} is likely to be
close to $|\Delta^{(t)}|$ and since the estimation computed by \textsc{\algoname-base}\xspace
is close to that of \textsc{mix}, then it must also be close to
$|\Delta^{(t)}|$.
\end{longenum}
\emph{Note:} for ease of presentation, in the following we use $\phi^{(t)}$ to
denote the estimation returned by \textsc{\algoname-base}\xspace, i.e.,
$\phi^{(t)}=\xi^{(t)}\tau^{(t)}$.
\paragraph{The \textsc{indep} algorithm}
The \textsc{indep} algorithm works as follows: it creates a sample
$\mathcal{S}_\textsc{in}$ by sampling edges in $E^{(t)}$ independently with a fixed
probability $p$. It estimates the global number of triangles in $G^{(t)}$ as
\[
\phi_\textsc{in}^{(t)}= \frac{\tau_\textsc{in}^{(t)}}{p^3},
\]
where $\tau_\textsc{in}^{(t)}$ is the number of triangles in
$\mathcal{S}_\textsc{in}$. This is, for example, the approach taken
by \textsc{mascot-c}~\citep{LimK15}.
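The following Python sketch (illustrative only; helper names are ours, and edges are assumed to arrive as node pairs of an undirected simple graph) shows \textsc{indep} and its estimator; with $p=1$ the whole graph is sampled and the exact count is returned:

```python
import random

def count_triangles(edges):
    """Count triangles among an iterable of undirected edges (u, v)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Each triangle is counted once per edge, i.e., three times in total.
    return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

def indep_estimate(edges, p, rng):
    """INDEP: keep each edge independently with probability p, then
    return the number of sampled triangles rescaled by p^-3."""
    sample = [e for e in edges if rng.random() < p]
    return count_triangles(sample) / p**3
```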
\paragraph{The \textsc{mix} algorithm}
The \textsc{mix} algorithm works as follows: it uses reservoir sampling (like
\textsc{\algoname-base}\xspace) to create a sample $\mathcal{S}_\textsc{mix}$ of $M$ edges from $E^{(t)}$,
but uses a different estimator than the one used by \textsc{\algoname-base}\xspace. Specifically,
\textsc{mix} uses
\[
\phi_\textsc{mix}^{(t)}=\left(\frac{t}{M}\right)^3\tau^{(t)}
\]
as an estimator for $|\Delta^{(t)}|$, where $\tau^{(t)}$ is, as in \textsc{\algoname-base}\xspace,
the number of triangles in $G^\mathcal{S}$ (\textsc{\algoname-base}\xspace uses
$\phi^{(t)}=\frac{t(t-1)(t-2)}{M(M-1)(M-2)}\tau^{(t)}$ as an estimator.)
We call this algorithm \textsc{mix} because it uses reservoir sampling to create
the sample, but computes the estimate as if it used fixed-probability
independent sampling, hence in some sense it ``mixes'' the two approaches.
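A corresponding sketch of \textsc{mix} (illustrative only; names are ours) uses standard reservoir sampling but rescales by $(t/M)^3$; when the stream has exactly $M$ edges, the whole stream is retained and the estimate is exact:

```python
import random

def mix_estimate(stream, M, rng):
    """MIX sketch: reservoir sampling of M edges, but the estimate is
    computed as if edges were sampled independently with probability
    M/t, i.e., (t/M)^3 times the number of sampled triangles."""
    sample = []
    t = 0
    for edge in stream:
        t += 1
        if t <= M:
            sample.append(edge)
        elif rng.random() < M / t:
            # Replace a uniformly random reservoir slot.
            sample[rng.randrange(M)] = edge
    adj = {}
    for u, v in sample:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    tau = sum(len(adj[u] & adj[v]) for u, v in sample) // 3
    return (t / M) ** 3 * tau
```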
\paragraph{Concentration results for \textsc{indep} and \textsc{mix}}
We now show a concentration result for \textsc{indep}. Then we show a technical
lemma (Lemma~\ref{lem:equivMp}) relating the probabilities of events when using
reservoir sampling to the probabilities of those events when using
fixed-probability independent sampling. Finally, we use these results to show
that the estimator used by \textsc{mix} is also concentrated
(Lemma~\ref{lem:concentrationmix}).
\begin{lemma}\label{lem:concentrationindependent}
Let $t\ge 0$ and assume $|\Delta^{(t)}|>0$.\footnote{For $|\Delta^{(t)}|=0$,
\textsc{indep} correctly and deterministically returns $0$ as the
estimation.} For any $\varepsilon,\delta\in(0,1)$, if
\begin{equation}\label{eq:requirementp}
p \ge \sqrt[3]{2\varepsilon^{-2}\ln \left(\frac{3h^{(t)}+1}{\delta}\right)\frac{3h^{(t)}+1}{|\Delta^{(t)}|}}
\end{equation}
then
\[
\Pr\left(\left|\phi_\textsc{in}^{(t)}-|\Delta^{(t)}|\right|<\varepsilon|\Delta^{(t)}|\right)> 1-\delta\enspace.
\]
\end{lemma}
\begin{proof}
Let $H$ be a graph built as follows: $H$ has one node for each triangle in
$G^{(t)}$ and there is an edge between two nodes in $H$ if the corresponding
triangles in $G^{(t)}$ share an edge. By this construction, the maximum degree
in $H$ is $3h^{(t)}$. Hence, by the Hajnal--Szemer\'edi
theorem~\citep{hajnal1970proof}, there is a proper coloring of $H$ with at most
$3h^{(t)}+1$ colors such that for each color there are at least $L =
\frac{|\Delta^{(t)}|}{3h^{(t)}+1}$ nodes with that color.
Assign an arbitrary numbering to the triangles of $G^{(t)}$
(and, therefore, to the nodes of $H$) and let $X_i$ be a Bernoulli random
variable, indicating whether the triangle $i$ in $G^{(t)}$ is in the sample at time $t$.
From the properties of independent sampling of edges we have
$\Pr(X_i=1)=p^3$ for any triangle $i$. For any color $c$ of the coloring of
$H$, let $\mathcal{X}_c$ be the set of r.v.'s $X_i$ such that the node $i$
in $H$ has color $c$. Since the coloring of $H$ which we are considering is
proper, the r.v.'s in $\mathcal{X}_c$ are independent, as they correspond to
triangles which do not share any edge and edges are sampled independent of
each other. Let $Y_c$ be the sum of the r.v.'s in $\mathcal{X}_c$. The
r.v.~$Y_c$ has a binomial distribution with parameters $|\mathcal{X}_c|$ and
$p^3$. By the Chernoff bound for binomial r.v.'s, we have that
\begin{align*}
\Pr\left(|p^{-3}Y_c - |\mathcal{X}_c||>\varepsilon
|\mathcal{X}_c|\right)&<
2\exp\left(-\varepsilon^2p^3|\mathcal{X}_c|/2\right)\\
&<2\exp\left(-\varepsilon^2p^3L/2\right)\\
&\le\frac{\delta}{3h^{(t)}+1},
\end{align*}
where the last step follows from the requirement
in~\eqref{eq:requirementp}. Then, by applying the union bound over all the (at
most) $3h^{(t)}+1$ colors, we get
\[
\Pr(\exists \mbox{ color } c \mbox{ s.t. } |p^{-3}Y_c -
|\mathcal{X}_c||>\varepsilon |\mathcal{X}_c| ) < \delta\enspace.
\]
Since $\phi_\textsc{in}^{(t)}=p^{-3}\displaystyle\sum_{\mbox{\tiny color } c} Y_c$,
from the above equation we have that, with probability at least $1-\delta$,
\begin{align*}
|\phi_\textsc{in}^{(t)}-|\Delta^{(t)}||&\le \left|\sum_{\mbox{\tiny color } c}
p^{-3}Y_c-\sum_{\mbox{\tiny color } c} |\mathcal{X}_c|\right| \\
&\le \sum_{\mbox{\tiny color } c} |p^{-3}Y_c - |\mathcal{X}_c||
\le \sum_{\mbox{\tiny color } c}\varepsilon|\mathcal{X}_c|\le\varepsilon
|\Delta^{(t)}|\enspace.
\end{align*}
\end{proof}
The above result is of independent interest and can be used, for example, to
give concentration bounds to the estimation computed by
\textsc{mascot-c}~\citep{LimK15}.
We remark that we cannot use the same approach as in
Lemma~\ref{lem:concentrationindependent} to show a concentration result for
\textsc{\algoname-base}\xspace, because \textsc{\algoname-base}\xspace uses reservoir sampling, hence the event
that a triangle $a$ is in $\mathcal{S}$ and the event that another triangle $b$ is in
$\mathcal{S}$ are not independent.
We can however show the following general result, similar in spirit to the
well-known Poisson approximation of balls-and-bins
processes~\citep{mitzenmacher2005probability}. Its proof can be found in
App.~\ref{app:algobase}.
Fix the parameter $M$ and a time $t>M$. Let $\mathcal{S}_\textsc{mix}$ be a sample of
$M$ edges from $E^{(t)}$ obtained through reservoir sampling (as \textsc{mix}
would do), and let $\mathcal{S}_\textsc{in}$ be a sample of the edges in $E^{(t)}$
obtained by sampling edges independently with probability $M/t$ (as
\textsc{indep} would do). We remark that the size of $\mathcal{S}_\textsc{in}$ is in
$[0,t]$ but not necessarily $M$.
\begin{lemma}\label{lem:equivMp}
Let $f~:~2^{E^{(t)}}\to \{0,1\}$ be an arbitrary binary function from the
powerset of $E^{(t)}$ to $\{0,1\}$. We have
\[
\Pr\left(f(\mathcal{S}_\textsc{mix}) = 1\right) \le e\sqrt{M}
\Pr\left(f(\mathcal{S}_\textsc{in}) = 1\right)\enspace.
\]
\end{lemma}
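Since reservoir sampling yields a uniformly random size-$M$ subset of the $t$ edges seen so far, Lemma~\ref{lem:equivMp} can be verified exactly by enumeration on tiny instances. The sketch below (illustrative only; names are ours) does this for an arbitrary, randomly fixed binary function $f$:

```python
import itertools
import math
import random

def check_equiv_bound(t, M, f):
    """Exactly compare Pr(f=1) under reservoir sampling (uniform over
    M-subsets of the t edges) and under independent sampling with
    p = M/t, for a binary function f on edge subsets. Returns the two
    sides of the lemma's inequality."""
    edges = list(range(t))
    p = M / t
    subsets_M = list(itertools.combinations(edges, M))
    pr_mix = sum(f(frozenset(s)) for s in subsets_M) / len(subsets_M)
    pr_in = 0.0
    for k in range(t + 1):
        for s in itertools.combinations(edges, k):
            pr_in += f(frozenset(s)) * p**k * (1 - p) ** (t - k)
    return pr_mix, math.e * math.sqrt(M) * pr_in

rng = random.Random(42)
table = {}
def f(s):
    # An arbitrary binary function, fixed once via memoization.
    return table.setdefault(s, rng.random() < 0.5)
```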
We now use the above two lemmas to show that the estimator
$\phi_\textsc{mix}^{(t)}$ computed by \textsc{mix} is concentrated. We will
first need the following technical fact.
\begin{fact}\label{fact:loglog}
For any $x\ge 5$, we have
\[
\ln\left(x(1+\ln^{2/3}x)\right) \le \ln^2 x\enspace.
\]
\end{fact}
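Fact~\ref{fact:loglog} can be sanity-checked numerically; the snippet below (illustrative only) evaluates both sides of the inequality:

```python
import math

def fact_loglog_holds(x):
    """Check ln(x * (1 + ln(x)**(2/3))) <= ln(x)**2 for a given x >= 5."""
    lhs = math.log(x * (1 + math.log(x) ** (2.0 / 3.0)))
    return lhs <= math.log(x) ** 2
```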
\begin{lemma}\label{lem:concentrationmix}
Let $t\ge 0$ and assume $|\Delta^{(t)}|>0$. For any
$\varepsilon,\delta\in(0,1)$, let
\[
\Psi = 2\varepsilon^{-2}\frac{3h^{(t)}+1}{|\Delta^{(t)}|} \ln
\left(e\frac{3h^{(t)}+1}{\delta}\right) \enspace.
\]
If
\[
M \ge\max\left\{ t \sqrt[3]{\Psi} \left(1 + \frac{1}{2}\ln^{2/3} \left (t \sqrt[3]{\Psi}
\right)\right), 25\right\}
\]
then
\[
\Pr\left(|\phi_\textsc{mix}^{(t)}-|\Delta^{(t)}||<\varepsilon|\Delta^{(t)}|\right)\ge
1-\delta\enspace.
\]
\end{lemma}
\begin{proof}
For any $S\subseteq E^{(t)}$ let $\tau(S)$ be the number of triangles in $S$,
i.e., the number of triplets of edges in $S$ that compose a triangle in
$G^{(t)}$. Define the function $g ~:~ 2^{E^{(t)}}\to \mathbb{R}$ as
\[
g(S) =\left(\frac{t}{M}\right)^3 \tau(S)\enspace.
\]
Assume that we run \textsc{indep} with $p=M/t$, and let
$\mathcal{S}_\textsc{in}\subseteq E^{(t)}$ be the sample built by \textsc{indep}
(through independent sampling with fixed probability $p$). Assume also that
we run \textsc{mix} with parameter $M$, and let $\mathcal{S}_\textsc{mix}$ be the
sample built by \textsc{mix} (through reservoir sampling with a reservoir of
size $M$). We have that $\phi_\textsc{in}^{(t)}=g(\mathcal{S}_\textsc{in})$ and
$\phi_\textsc{mix}^{(t)}=g(\mathcal{S}_\textsc{mix})$. Define now the binary
function $f ~:~ 2^{E^{(t)}}\to \{0,1\}$ as
\[
f(S)=\left\{\begin{array}{cl}1 & \text{if }
|g(S)-|\Delta^{(t)}||>\varepsilon|\Delta^{(t)}|\\ 0 & \text{otherwise} \end{array}\right.\enspace.
\]
We now show that, for $M$ as in the hypothesis, we have
\begin{equation}\label{eq:whattoshow}
p \ge \sqrt[3]{2\varepsilon^{-2}\frac{3h^{(t)}+1}{|\Delta^{(t)}|} \ln \left(e\sqrt{M}\frac{3h^{(t)}+1}{\delta}\right)}\enspace.
\end{equation}
Assume for now that the above is true. Then, applying
Lemma~\ref{lem:concentrationindependent} with $\delta/(e\sqrt{M})$ in place of
$\delta$ and recalling that $\phi_\textsc{in}^{(t)}=g(\mathcal{S}_\textsc{in})$, we get
that
\[
\Pr\left(|\phi_\textsc{in}^{(t)}-|\Delta^{(t)}||>\varepsilon|\Delta^{(t)}|\right)
= \Pr\left(f(\mathcal{S}_\textsc{in})=1\right) < \frac{\delta}{e\sqrt{M}}\enspace.
\]
From this and Lemma~\ref{lem:equivMp}, we get that
\[
\Pr\left(f(\mathcal{S}_\textsc{mix})=1\right) \le \delta
\]
which, from the definition of $f$ and the properties of $g$, is equivalent to
\[
\Pr\left(|\phi_\textsc{mix}^{(t)}-|\Delta^{(t)}||>\varepsilon|\Delta^{(t)}|\right) \le \delta
\]
and the proof is complete. All that is left is to show
that~\eqref{eq:whattoshow} holds for $M$ as in the hypothesis.
Since $p=M/t$, we have that~\eqref{eq:whattoshow} holds for
\begin{align}
M^3 &\ge t^3 2\varepsilon^{-2}\frac{3h^{(t)}+1}{|\Delta^{(t)}|} \ln
\left(\sqrt{M}e\frac{3h^{(t)}+1}{\delta}\right)\nonumber\\
& = t^3 2\varepsilon^{-2}\frac{3h^{(t)}+1}{|\Delta^{(t)}|} \left (\ln
\left(e\frac{3h^{(t)}+1}{\delta}\right) + \frac{1}{2}\ln M
\right)\enspace.\label{eq:M}
\end{align}
We now show that~\eqref{eq:M} holds.
Let $A=t\sqrt[3]{\Psi}$ and let $B=t\sqrt[3]{\Psi}\ln^{2/3} \left (t
\sqrt[3]{\Psi} \right)$. We now show that $A^3+B^3$ is greater than or equal
to the r.h.s.~of~\eqref{eq:M}; hence $M^3=(A+B)^3> A^3 + B^3$ must also be
greater than or equal to the r.h.s.~of~\eqref{eq:M}, i.e., \eqref{eq:M}
holds. This reduces to showing that
\begin{equation}\label{eq:bcube}
B^3\ge t^3 2\varepsilon^{-2}\frac{3h^{(t)}+1}{|\Delta^{(t)}|}\frac{1}{2}\ln M
\end{equation}
as the r.h.s.~of~\eqref{eq:M} can be written as
\[
A^3+ t^3
2\varepsilon^{-2}\frac{3h^{(t)}+1}{|\Delta^{(t)}|}\frac{1}{2}\ln
M\enspace.
\]
We actually show that
\begin{equation}\label{eq:bcubesecond}
B^3 \ge t^3\Psi\frac{1}{2}\ln M
\end{equation}
which implies~\eqref{eq:bcube} which, as discussed, in turn
implies~\eqref{eq:M}. Consider the ratio
\begin{equation}\label{eq:ratio}
\frac{B^3}{t^3\Psi\frac{1}{2}\ln M} =
\frac{\frac{1}{2}t^3\Psi\ln^2(t\sqrt[3]{\Psi})}{t^3\Psi\frac{1}{2}\ln M} =
\frac{\ln^2(t\sqrt[3]{\Psi})}{\ln M} \ge
\frac{\ln^2(t\sqrt[3]{\Psi})}{\ln\left(t \sqrt[3]{\Psi}
\left(1 + \ln^{2/3} \left (t \sqrt[3]{\Psi} \right)\right)\right)
}\enspace.
\end{equation}
We now show that $t\sqrt[3]{\Psi} \ge 5$. From the assumption $t> M \ge 25$ and
from
\[
t \sqrt[3]{\Psi}\ge \frac{t}{\sqrt[3]{|\Delta^{(t)}|}} \ge \sqrt{t},
\]
which holds because $|\Delta^{(t)}| \le t^{3/2}$ (a graph with $t$ edges cannot
contain more than $t^{3/2}$ triangles), we obtain $t\sqrt[3]{\Psi} \ge \sqrt{t}
> 5$. Hence Fact~\ref{fact:loglog} applies and we can write,
from~\eqref{eq:ratio}:
\[
\frac{\ln^2(t\sqrt[3]{\Psi})}{\ln\left(t \sqrt[3]{\Psi}
\left(1 + \ln^{2/3} \left (t \sqrt[3]{\Psi} \right)\right)\right)}\ge
\frac{\ln^2(t\sqrt[3]{\Psi})}{\ln^2\left(t
\sqrt[3]{\Psi}\right)}\ge 1,
\]
which proves~\eqref{eq:bcubesecond}, and in cascade~\eqref{eq:bcube},
\eqref{eq:M}, \eqref{eq:whattoshow}, and the thesis.
\end{proof}
\paragraph{Relationship between \textsc{\algoname-base}\xspace and \textsc{mix}} When both \textsc{\algoname-base}\xspace
and \textsc{mix} use a sample of size $M$, their respective estimators
$\phi^{(t)}$ and $\phi_\textsc{mix}^{(t)}$ are related as discussed in the
following result, whose straightforward proof is deferred to App.~\ref{app:algobase}.
\begin{lemma}\label{lem:estimatorsrelationship}
For any $t>M$ we have
\[
\left|\phi^{(t)}-\phi_\textsc{mix}^{(t)}\right|\le
\phi_\textsc{mix}^{(t)}\frac{4}{M-2}\enspace.
\]
\end{lemma}
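Since $\phi^{(t)}$ and $\phi_\textsc{mix}^{(t)}$ are both multiples of the same $\tau^{(t)}$, the lemma reduces to a deterministic inequality between their scaling factors, which the following snippet (illustrative only) verifies on a grid of values:

```python
def scaling_gap_within_bound(M, t):
    """The two estimators differ only in their scaling factors, so the
    lemma amounts to |base - mix| <= mix * 4 / (M - 2) for t > M."""
    base = t * (t - 1) * (t - 2) / (M * (M - 1) * (M - 2))
    mix = (t / M) ** 3
    return abs(base - mix) <= mix * 4 / (M - 2)
```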
\paragraph{Tying everything together} Finally we can use the previous lemmas
to prove our concentration result for \textsc{\algoname-base}\xspace.
\begin{proof}[of Thm.~\ref{thm:baseconcentration}]
For $M$ as in the hypothesis we have, from Lemma~\ref{lem:concentrationmix},
that
\[
\Pr\left(\left|\phi_\textsc{mix}^{(t)}-|\Delta^{(t)}|\right|<
\frac{\varepsilon}{2}|\Delta^{(t)}|\right)\ge
1-\delta\enspace.
\]
Suppose the event $|\phi_\textsc{mix}^{(t)} -
|\Delta^{(t)}||< \varepsilon|\Delta^{(t)}|/2$, which implies
$\phi_\textsc{mix}^{(t)}\le (1+\varepsilon/2)|\Delta^{(t)}|$, is indeed verified. From this and
Lemma~\ref{lem:estimatorsrelationship} we have
\[
|\phi^{(t)}-\phi_\textsc{mix}^{(t)}|\le
\left(1+\frac{\varepsilon}{2}\right)|\Delta^{(t)}|\frac{4}{M-2}\le
|\Delta^{(t)}|\frac{6}{M-2},
\]
where the last inequality follows from the fact that $\varepsilon<1$. Hence,
given that $M\ge 12\varepsilon^{-1}+e^2\ge 12\varepsilon^{-1}+2$, we have
\[
|\phi^{(t)}-\phi_\textsc{mix}^{(t)}|\le
|\Delta^{(t)}|\frac{\varepsilon}{2}\enspace.
\]
Using the above, we can then write:
\begin{align*}
|\phi^{(t)} - |\Delta^{(t)}| | &=
|\phi^{(t)} - \phi_\textsc{mix}^{(t)} + \phi_\textsc{mix}^{(t)} - |\Delta^{(t)}| | \\
&\le |\phi^{(t)} - \phi_\textsc{mix}^{(t)}| + |\phi_\textsc{mix}^{(t)} - |\Delta^{(t)}| | \\
&\le \frac{\varepsilon}{2} |\Delta^{(t)}| +
\frac{\varepsilon}{2}|\Delta^{(t)}| = \varepsilon |\Delta^{(t)}|
\end{align*}
which completes the proof.
\end{proof}
\subsubsection{Comparison with fixed-probability
approaches}\label{sec:comparison}
We now compare the variance of \textsc{\algoname-base}\xspace to the variance of the
fixed-probability sampling approach \textsc{mascot-c}~\citep{LimK15}, which samples
edges \emph{independently} with a fixed probability $p$ and uses
$p^{-3}|\Delta_\mathcal{S}|$ as the estimate for the global number of triangles at time
$t$. As shown by~\citet[Lemma 2]{LimK15}, the variance of this estimator is
\[
\mathrm{Var}[p^{-3}|\Delta_\mathcal{S}|] = |\Delta^{(t)}|\bar{f}(p) + r^{(t)}
\bar{g}(p),
\]
where $\bar{f}(p) = p^{-3} - 1$ and $\bar{g}(p) = p^{-1} - 1$.
Assume that we give \textsc{mascot-c} the additional information that the stream
has finite length $T$, and assume we run \textsc{mascot-c} with $p=M/T$
so that the expected sample size at the end of the stream is
$M$.\footnote{We are giving \textsc{mascot-c} a significant advantage: if
only space $M$ were available, we should run \textsc{mascot-c} with a sufficiently
smaller $p'<p$, otherwise there would be a constant probability that
\textsc{mascot-c} would run out of memory.} Let $\mathbb{V}^{(t)}_\text{fix}$ be
the resulting variance of the \textsc{mascot-c} estimator at time $t$, and let
$\mathbb{V}^{(t)}$ be the variance of our estimator at time $t$
(see~\eqref{eq:globvariancebase}). For $t\le M$, $\mathbb{V}^{(t)}=0$, hence
$\mathbb{V}^{(t)}\le \mathbb{V}^{(t)}_\text{fix}$.
For $M< t<T$, we can show the following result. Its proof is more tedious than
interesting so we defer it to App.~\ref{app:algobase}.
\begin{lemma}\label{lem:variancecomparison}
Let $0<\alpha<1$ be a constant. For any constant
$M>\max(\frac{8\alpha}{1-\alpha}, 42)$ and any $t \le \alpha T$ we have
$\mathbb{V}^{(t)} < \mathbb{V}^{(t)}_\mathrm{fix}$.
\end{lemma}
For example, if we set $\alpha = 0.99$ and run \textsc{\algoname-base}\xspace with $M\ge 400$ and
\textsc{mascot-c} with $p=M / T$, we have that \textsc{\algoname-base}\xspace has strictly
smaller variance than \textsc{mascot-c} for $99\%$ of the stream.
What about $t=T$? The difference between the definitions of
$\mathbb{V}^{(t)}_\text{fix}$ and $\mathbb{V}^{(t)}$ is in the presence of
$\bar{f}(M/T)$ instead of $f(t)$ (resp.~$\bar{g}(M/T)$ instead of $g(t)$) as
well as the additional term $w^{(t)}h(M,t)\le 0$ in our $\mathbb{V}^{(t)}$. Let
$M(T)$ be an arbitrary slowly increasing function of $T$. For $T \to \infty$ we
can show that $\lim_{T \to \infty}\frac{\bar{f}(M(T)/T)}{f(T)} = \lim_{T \to
\infty}\frac{\bar{g}(M(T)/T)}{g(T)} = 1$, hence, \emph{informally},
$\mathbb{V}^{(T)}\to\mathbb{V}^{(T)}_\text{fix}$, for $T\to\infty$.
A similar discussion also holds for the method we present in
Sect.~\ref{sec:improved}, and explains the results of our experimental
evaluations, which show that our algorithms have strictly lower (empirical)
variance than fixed-probability approaches for most of the stream.
\subsubsection{Update time}
The time to process an element of the stream is dominated by
the computation of the shared neighborhood $\mathcal{N}_{u,v}$ in
\textsc{UpdateCounters}. A \textsc{Mergesort}-based algorithm for the
intersection requires $O\left(\deg(u) + \deg(v)\right)$ time, where the degrees
are w.r.t.~the graph $G_\mathcal{S}$. By storing the neighborhood of each vertex in a
hash table (resp.~an AVL tree), the update time can be reduced to
$O(\min\{\deg(v),\deg(u)\})$ (resp.~amortized time
$O(\min\{\deg(v),\deg(u)\}+\log\max\{\deg(v),\deg(u)\})$).
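A minimal sketch of the hash-set-based intersection (illustrative only; it assumes neighborhoods are stored as Python sets keyed by vertex) iterates over the smaller neighborhood, giving $O(\min\{\deg(u),\deg(v)\})$ expected time:

```python
def shared_neighbors(adj, u, v):
    """Intersect the smaller neighborhood against the larger one, so the
    expected cost is O(min(deg(u), deg(v))) with hash sets."""
    nu, nv = adj.get(u, set()), adj.get(v, set())
    if len(nu) > len(nv):
        nu, nv = nv, nu
    return {w for w in nu if w in nv}
```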
\subsection{Improved insertion algorithm -- \textsc{\algoname-impr}\xspace}\label{sec:improved}
\textsc{\algoname-impr}\xspace is a variant of \textsc{\algoname-base}\xspace with small modifications that result in
higher-quality (i.e., lower variance) estimations. The changes are:
\begin{longenum}
\item \textsc{UpdateCounters} is called \emph{unconditionally for each element
on the stream}, before the algorithm decides whether or not to insert the
edge into $\mathcal{S}$. W.r.t.~the pseudocode in Alg.~\ref{alg:triest-base}, this
change corresponds to moving the call to \textsc{UpdateCounters} on line 6
to \emph{before} the {\bf if} block. \textsc{mascot}~\citep{LimK15} uses a
similar idea, but \textsc{\algoname-impr}\xspace is significantly different as \textsc{\algoname-impr}\xspace
allows edges to be removed from the sample, since it uses reservoir sampling.
\item \textsc{\algoname-impr}\xspace \emph{never} decrements the counters when an edge is
removed from $\mathcal{S}$. W.r.t.~the pseudocode in Alg.~\ref{alg:triest-base}, we
remove the call to \textsc{UpdateCounters} on line 13.
\item \textsc{UpdateCounters} performs a \emph{weighted}
increase of the counters using $\eta^{(t)}=\max\{1,(t-1)(t-2)/(M(M-1))\}$ as
weight. W.r.t.~the pseudocode in Alg.~\ref{alg:triest-base}, we replace
``$1$'' with $\eta^{(t)}$ on lines 19--22 (given change 2 above, all the
calls to \textsc{UpdateCounters} have $\bullet=+$).
\end{longenum}
The resulting pseudocode for \textsc{\algoname-impr}\xspace is presented in
Alg.~\ref{alg:triest-impr}.
\begin{algorithm}[ht]
\small
\caption{\textsc{\algoname-impr}\xspace}
\label{alg:triest-impr}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} Insertion-only edge stream $\Sigma$, integer $M\ge6$}
\State $\mathcal{S}\leftarrow\emptyset$, $t \leftarrow 0$, $\tau \leftarrow 0$
\For{ {\bf each} element $(+,(u,v))$ from $\Sigma$}
\State $t\leftarrow t +1$
\State \textsc{UpdateCounters}$(u,v)$
\If{\textsc{SampleEdge}$((u,v), t )$}
\State $\mathcal{S} \leftarrow \mathcal{S}\cup \{(u,v)\}$
\EndIf
\EndFor
\Statex
\Function{\textsc{SampleEdge}}{$(u,v),t$}
\If {$t\leq M$}
\State \textbf{return} True
\ElsIf{\textsc{FlipBiasedCoin}$(\frac{M}{t}) = $ heads}
\State $(u',v') \leftarrow$ random edge from $\mathcal{S}$
\State $\mathcal{S}\leftarrow \mathcal{S}\setminus \{(u',v')\}$
\State \textbf{return} True
\EndIf
\State \textbf{return} False
\EndFunction
\Statex
\Function{UpdateCounters}{$u,v$}
\State $\mathcal{N}^\mathcal{S}_{u,v} \leftarrow \mathcal{N}^\mathcal{S}_u \cap \mathcal{N}^\mathcal{S}_v$
\State $\eta\leftarrow\max\{1,(t-1)(t-2)/(M(M-1))\}$
\ForAll {$c \in \mathcal{N}^\mathcal{S}_{u,v}$}
\State $\tau \leftarrow \tau + \eta$
\State $\tau_c \leftarrow \tau_c + \eta$
\State $\tau_u \leftarrow \tau_u + \eta$
\State $\tau_v \leftarrow \tau_v + \eta$
\EndFor
\EndFunction
\end{algorithmic}
\end{algorithm}
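The pseudocode above can be sketched in Python as follows (illustrative only; it assumes each undirected edge appears at most once on the stream, and the eviction step is implemented naively rather than in $O(1)$ time):

```python
import random

def triest_impr(stream, M, rng):
    """Sketch of the improved algorithm for the global triangle count:
    counters are updated unconditionally before the sampling decision,
    never decremented, and increments are weighted by eta."""
    adj = {}          # adjacency of the sampled graph G_S
    sample = set()
    tau = 0.0
    t = 0
    for u, v in stream:
        t += 1
        eta = max(1.0, (t - 1) * (t - 2) / (M * (M - 1)))
        # UpdateCounters, called before the sampling decision.
        tau += eta * len(adj.get(u, set()) & adj.get(v, set()))
        if t <= M:
            keep = True
        elif rng.random() < M / t:
            # Evict a uniformly random edge from the reservoir.
            a, b = rng.choice(sorted(sample))
            sample.discard((a, b))
            adj[a].discard(b)
            adj[b].discard(a)
            keep = True
        else:
            keep = False
        if keep:
            sample.add((u, v))
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return tau
```

When $M$ is at least the stream length, every edge is retained, $\eta^{(t)}=1$ throughout, and the returned value is the exact triangle count.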
\paragraph{Counters} If we are interested only in estimating the global number of
triangles in $G^{(t)}$, \textsc{\algoname-impr}\xspace needs to maintain only the counter $\tau$
and the edge sample $\mathcal{S}$ of size $M$, so it still requires space $O(M)$. If
instead we are interested in estimating the local triangle counts, at any time $t$
\textsc{\algoname-impr}\xspace maintains (non-zero) local counters \emph{only} for the nodes $u$
such that at least one triangle with a corner $u$ has been detected by the
algorithm up until time $t$. The number of such nodes may be greater than
$O(M)$, but this is the price to pay to obtain estimations with lower variance
(Thm.~\ref{thm:improvedvariance}).
\subsubsection{Estimation}
When queried for an estimation, \textsc{\algoname-impr}\xspace returns the value of the
corresponding counter, unmodified.
\subsubsection{Analysis}
We now present the analysis of the estimations computed by \textsc{\algoname-impr}\xspace,
showing results involving their unbiasedness, their variance, and their
concentration around their expectation. Results analogous to those in
Thms.~\ref{thm:improvedunbiased}, \ref{thm:improvedvariance},
and~\ref{thm:improvedconcentration} hold for the local triangle count for any
$u\in V^{(t)}$, replacing the global quantities with the corresponding local
ones.
\subsubsection{Expectation}
As in \textsc{\algoname-base}\xspace, the estimations by \textsc{\algoname-impr}\xspace are \emph{exact} at time
$t\le M$ and unbiased for $t>M$. The proof of the following theorem follows the
same steps as the one for Thm~\ref{thm:baseunbiased}, and can be found in
App.~\ref{app:algoimproved}.
\begin{theorem}\label{thm:improvedunbiased}
We have $\tau^{(t)}=|\Delta^{(t)}|$ if $t\le M$ and
$\mathbb{E}\left[\tau^{(t)}\right]$ $=|\Delta^{(t)}|$ if $t> M$.
\end{theorem}
\subsubsection{Variance}
We now show an \emph{upper bound} to the variance of the \textsc{\algoname-impr}\xspace
estimations for $t>M$. The proof relies on a very careful analysis of the
covariance of two triangles which depends on the order of arrival of the edges
in the stream (which we assume to be adversarial). For any
$\lambda\in\Delta^{(t)}$ we denote as $t_\lambda$ the time at which the last
edge of $\lambda$ is observed on the stream. Let $z^{(t)}$ be the number of
unordered pairs $(\lambda,\gamma)$ of distinct triangles in $G^{(t)}$ that share
an edge $g$ and are such that:
\begin{enumerate}
\item $g$ is neither the last edge of $\lambda$ nor $\gamma$ on the stream;
and
\item $\min\{t_\lambda,t_\gamma\}>M+1$.
\end{enumerate}
\begin{theorem}\label{thm:improvedvariance}
Then, for any time $t>M$, we have
\begin{equation}\label{eq:improvedvariance}
\mathrm{Var}\left[\tau^{(t)}\right]\le
|\Delta^{(t)}|(\eta^{(t)}-1)+z^{(t)}\frac{t-1-M}{M}\enspace.
\end{equation}
\end{theorem}
The bound on the variance presented in~\eqref{eq:improvedvariance} is extremely
pessimistic and loose. Specifically, it does not contain the \emph{negative}
contribution to the variance given by the $\binom{|\Delta^{(t)}|}{2}-z^{(t)}$
pairs of triangles that do not satisfy the requirements in the definition of
$z^{(t)}$. Among these pairs there are, for example, all pairs of triangles not
sharing any edge, but also many pairs of triangles that share an edge, as the
definition of $z^{(t)}$ considers only a subset of these. All these pairs would
give a negative contribution to the variance, i.e., decrease the
r.h.s.~of~\eqref{eq:improvedvariance}, whose more accurate form would be
\[
|\Delta^{(t)}|(\eta^{(t)}-1)+z^{(t)}\frac{t-1-M}{M}+\left(\binom{|\Delta^{(t)}|}{2}-z^{(t)}\right)\omega_{M,t}
\]
where $\omega_{M,t}$ is (an upper bound to) the minimum \emph{negative}
contribution of a pair of triangles that do not satisfy the requirements in the
definition of $z^{(t)}$. Sadly, computing informative upper bounds to
$\omega_{M,t}$ is not straightforward, even in the restricted setting where only
pairs of triangles not sharing any edge are considered.
To prove Thm.~\ref{thm:improvedvariance} we first need
Lemma~\ref{lem:negativedependence}, whose proof is deferred to
App.~\ref{app:algoimproved}.
For any time step $t$ and any edge $e\in E^{(t)}$, we denote with $t_e$ the time
step at which $e$ is on the stream. For any $\lambda\in\Delta^{(t)}$, let $\lambda=(\ell_1,\ell_2,\ell_3)$, where the edges are numbered in
order of appearance on the stream. We define the event $D_\lambda$ as the event
that $\ell_1$ and $\ell_2$ are both in the edge sample $\mathcal{S}$ at the end of time
step $t_\lambda-1$.
\begin{lemma}\label{lem:negativedependence}
Let $\lambda=(\ell_1,\ell_2,\ell_3)$ and $\gamma=(g_1,g_2,g_3)$ be two
\emph{disjoint} triangles, where the edges are numbered in order of
appearance on the stream, and assume, w.l.o.g., that the last edge of
$\lambda$ is on the stream \emph{before} the last edge of $\gamma$. Then
\[
\Pr(D_\gamma~|~D_\lambda) \le \Pr(D_\gamma)\enspace.
\]
\end{lemma}
We can now prove Thm.~\ref{thm:improvedvariance}.
\begin{proof}[of Thm.~\ref{thm:improvedvariance}]
Assume $|\Delta^{(t)}|>0$, otherwise the \textsc{\algoname-impr}\xspace estimation is
deterministically correct, has variance $0$, and the thesis holds. Let
$\lambda\in\Delta^{(t)}$ and let $\delta_\lambda$ be a random variable that
takes value $\xi_{2,t_\lambda-1}$ if both $\ell_1$ and $\ell_2$ are in $\mathcal{S}$
at the end of time step $t_\lambda-1$, and $0$ otherwise. Since
\[
\mathrm{Var}\left[\delta_{\lambda}\right] = \xi_{2,t_\lambda-1}-1\le
\xi_{2,t-1},
\]
we have:
\begin{align}
\mathrm{Var}\left[\tau^{(t)}\right] &=
\mathrm{Var}\left[\sum_{\lambda\in\Delta^{(t)}} \delta_{\lambda}\right]=
\sum_{\lambda\in \Delta^{(t)}} \sum_{\gamma \in
\Delta^{(t)}}\text{Cov}\left[\delta_{\lambda},\delta_{\gamma}\right]
\nonumber\\
&= \sum_{\lambda\in \Delta^{(t)}}
\mathrm{Var}\left[\delta_{\lambda}\right] + \sum_{\substack{\lambda,
\gamma\in \Delta^{(t)}\\\lambda \neq \gamma}}
\text{Cov}\left[\delta_{\lambda},\delta_{\gamma}\right]\nonumber\\
&\le |\Delta^{(t)}|(\xi_{2,t-1}-1) + \sum_{\substack{\lambda,
\gamma\in \Delta^{(t)}\\\lambda \neq \gamma}}\left(
\mathbb{E}[\delta_\lambda\delta_\gamma]-\mathbb{E}[\delta_\lambda]\mathbb{E}[\delta_\gamma]\right)\nonumber\\
&\le |\Delta^{(t)}|(\xi_{2,t-1}-1) + \sum_{\substack{\lambda,
\gamma\in \Delta^{(t)}\\\lambda \neq \gamma}}
\left(\mathbb{E}[\delta_\lambda\delta_\gamma] - 1\right) \enspace.\label{eq:variance2improved}
\end{align}
For any $\lambda\in\Delta^{(t)}$ define $q_\lambda=\xi_{2,t_\lambda-1}$.
Assume now $|\Delta^{(t)}|\ge 2$, otherwise we have $z^{(t)}=0$ and the thesis
holds, as the second term on the
r.h.s.~of~\eqref{eq:variance2improved} is 0. Let now $\lambda$ and $\gamma$
be two distinct triangles in $\Delta^{(t)}$
(hence $t\ge 5$). We have
\begin{equation*}
\mathbb{E}\left[\delta_{\lambda}\delta_{\gamma}\right]=
q_\lambda q_\gamma\Pr\left(\delta_\lambda\delta_\gamma=q_\lambda q_\gamma\right)\enspace.
\end{equation*}
The event ``$\delta_\lambda\delta_\gamma=q_\lambda q_\gamma$'' is the
intersection $D_\lambda\cap D_\gamma$, where $D_\lambda$ is the
event that the first two edges of $\lambda$ are in $\mathcal{S}$ at the end of time
step $t_\lambda-1$, and similarly for $D_\gamma$. We now look at
$\Pr(D_\lambda\cap D_\gamma)$ in the various possible cases.
Assume that $\lambda$ and $\gamma$ do not share any edge, and, w.l.o.g.,
that the third (and last) edge of $\lambda$ appears on the stream before the
third (and last) edge of $\gamma$, i.e., $t_\lambda< t_\gamma$. From
Lemma~\ref{lem:negativedependence} and Lemma~\ref{lem:reservoirhighorder} we
then have
\[
\Pr(D_\lambda\cap D_\gamma)=\Pr(D_\gamma|D_\lambda)\Pr(D_\lambda)\le
\Pr(D_\gamma)\Pr(D_\lambda)\le \frac{1}{q_\lambda q_\gamma}\enspace.
\]
Consider now the case where $\lambda$ and $\gamma$ share an edge $g$.
W.l.o.g., let us assume that $t_\lambda\le t_\gamma$ (since the shared edge
may be the last on the stream both for $\lambda$ and for $\gamma$, we may
have $t_\lambda=t_\gamma$). There are the following possible sub-cases:
\begin{description}
\item[$g$ is the last on the stream among all the edges of $\lambda$ and
$\gamma$] In this case we have $t_\lambda=t_\gamma$. The event
``$D_\lambda\cap D_\gamma$'' happens if and only if the \emph{four}
edges that, together with $g$, compose $\lambda$ and $\gamma$ are
all in $\mathcal{S}$ at the end of time step $t_\lambda-1$. Then, from
Lemma~\ref{lem:reservoirhighorder} we have
\[
\Pr(D_\lambda\cap D_\gamma)= \frac{1}{\xi_{4,t_\lambda-1}}
\le
\frac{1}{q_\lambda}\frac{(M-2)(M-3)}{(t_\lambda-3)(t_\lambda-4)}\le\frac{1}{q_\lambda}\frac{M(M-1)}{(t_\lambda-1)(t_\lambda-2)}\le
\frac{1}{q_\lambda q_\gamma}\enspace.
\]
\item[$g$ is the last on the stream among all the edges of $\lambda$ and
the first among all the edges of $\gamma$] In this case, we
have that $D_\lambda$ and $D_\gamma$ are independent. Indeed
the fact that the first two edges of $\lambda$ (neither of which
is $g$) are in $\mathcal{S}$ when $g$ arrives on the stream has no
influence on the probability that $g$ and the second edge of
$\gamma$ are inserted in $\mathcal{S}$ and are not evicted until
the third edge of $\gamma$ is on the stream. Hence we have
\[
\Pr(D_\lambda\cap D_\gamma)=
\Pr(D_\gamma)\Pr(D_\lambda)=\frac{1}{q_\lambda q_\gamma}\enspace.
\]
\item[$g$ is the last on the stream among all the edges of $\lambda$ and
the second among all the edges of $\gamma$]
In this case we can follow an approach similar to the one in the
proof for Lemma~\ref{lem:negativedependence} and have that
\[
\Pr(D_\lambda\cap D_\gamma) \le
\Pr(D_\gamma)\Pr(D_\lambda)\le \frac{1}{q_\lambda q_\gamma}\enspace.
\]
The intuition behind this is that if the first two edges of
$\lambda$ are in $\mathcal{S}$ when $g$ is on the stream, their presence
lowers the probability that the first edge of $\gamma$ is in $\mathcal{S}$
at the same time, and hence that the first edge of $\gamma$ and $g$
are in $\mathcal{S}$ when the last edge of $\gamma$ is on the stream.
\item[$g$ is not the last on the stream among all the edges of
$\lambda$]
There are two cases to consider: one specific situation, which we
analyze first, and all remaining possibilities. The situation we consider is that
\begin{enumerate}
\item $g$ is the first edge of $\gamma$ on the stream; and
\item the second edge of $\gamma$ to be on the stream is on the
stream at time $t_2>t_\lambda$.
\end{enumerate}
Suppose this is the case. Recall that if $D_\lambda$ is verified,
then we know that $g$ is in $\mathcal{S}$ at the
beginning of time step $t_\lambda$. Define the following events:
\begin{itemize}
\item $E_1$: ``the set of edges evicted from $\mathcal{S}$ between the
beginning of time step $t_\lambda$ and the beginning of time step
$t_2$ does not contain $g$.''
\item $E_2$: ``the second edge of $\gamma$, which is on the stream at
time $t_2$, is inserted in $\mathcal{S}$ and the edge that is evicted is not
$g$.''
\item $E_3$: ``the set of edges evicted from $\mathcal{S}$ between the
beginning of time step $t_2+1$ and the beginning of time
step $t_\gamma$ does not contain either $g$ or the second edge of
$\gamma$.''
\end{itemize}
We can then write
\[
\Pr(D_\gamma~|~D_\lambda)= \Pr(E_1~|~D_\lambda)
\Pr(E_2~|~E_1\cap D_\lambda)
\Pr(E_3~|~E_2\cap E_1\cap D_\lambda)\enspace.
\]
We now compute the probabilities on the r.h.s., where we denote by
$\mathds{1}_{t_2>M}(1)$ the indicator function that has value $1$ if $t_2>M$,
and value $0$ otherwise:
\begin{align*}
\Pr(E_1~|~D_\lambda) &=
\prod_{j=\max\{t_\lambda,M+1\}}^{t_2-1}\left(\left(1-\frac{M}{j}\right)+\frac{M}{j}\left(\frac{M-1}{M}\right)\right)\\
&=\prod_{j=\max\{t_\lambda,M+1\}}^{t_2-1}\frac{j-1}{j}=
\frac{\max\{t_\lambda-1, M\}}{\max\{M,t_2-1\}}\enspace;\\
\Pr(E_2~|~E_1\cap D_\lambda) &=
\frac{M}{\max\{t_2,M\}}\frac{M-\mathds{1}_{t_2>M}(1)}{M}=
\frac{M-\mathds{1}_{t_2>M}(1)}{\max\{t_2,M\}}\enspace;\\
\Pr(E_3~|~E_2\cap E_1\cap D_\lambda)&=
\prod_{j=\max\{t_2+1,M+1\}}^{t_\gamma-1}\left(\left(1-\frac{M}{j}\right)+\frac{M}{j}\left(\frac{M-2}{M}\right)\right)\\
&= \prod_{j=\max\{t_2+1,M+1\}}^{t_\gamma-1}\frac{j-2}{j}
=\frac{\max\{t_2,M\}\max\{t_2-1,M-1\}}{\max\{t_\gamma-2,M-1\}\max\{t_\gamma-1,M\}}\enspace.
\end{align*}
Hence, we have
\[
\Pr(D_\gamma~|~D_\lambda) =
\frac{\max\{t_\lambda-1,M\} (M-\mathds{1}_{t_2>M}(1))
\max\{t_2-1,M-1\}}{ \max\{M,t_2-1\}
\max\{(t_\gamma-2)(t_\gamma-1),M(M-1)\}}\enspace.
\]
With a (somewhat tedious) case analysis we can verify that
\[
\Pr(D_\gamma~|~D_\lambda)\le
\frac{1}{q_\gamma}\frac{\max\{M,t_\lambda-1\}}{M}\enspace.
\]
Consider now the complement of the situation we just analyzed. In
this case, two edges of $\gamma$, that is, $g$ and another edge $h$, are on
the stream before time $t_\lambda$, in an order that is irrelevant here (i.e., $g$
could be the first or the second edge of $\gamma$ on the stream). Define
now the following events:
\begin{itemize}
\item $E_1$: ``$h$ and $g$ are both in $\mathcal{S}$ at the beginning
of time step $t_\lambda$.''
\item $E_2$: ``the set of edges evicted from $\mathcal{S}$ between the
beginning of time step $t_\lambda$ and the beginning of time
step $t_\gamma$ does not contain either $g$ or $h$.''
\end{itemize}
We can then write
\[
\Pr(D_\gamma~|~D_\lambda) =
\Pr(E_1~|~D_\lambda)\Pr(E_2~|~E_1\cap D_\lambda)\enspace.
\]
If $t_\lambda \le M+1$, we have that $\Pr(E_1~|~D_\lambda)=1$.
Consider instead the case $t_\lambda > M+1$. If $D_\lambda$ is
verified, then both $g$ and the other edge of $\lambda$ are in
$\mathcal{S}$ at the beginning of time step $t_\lambda$. At this time,
all subsets of $E^{(t_\lambda-1)}$ of size $M$ and containing both
$g$ and the other edge of $\lambda$ have an equal probability of
being $\mathcal{S}$, from Lemma~\ref{lem:reservoir}. There are
$\binom{t_\lambda-3}{M-2}$ such sets. Among these, there are
$\binom{t_\lambda-4}{M-3}$ sets that also contain $h$. Therefore, if
$t_\lambda > M+1$, we have
\[
\Pr(E_1~|~D_\lambda)=\frac{\binom{t_\lambda-4}{M-3}}{\binom{t_\lambda-3}{M-2}}=\frac{M-2}{t_\lambda-3}\enspace.
\]
Considering what we said before for the case $t_\lambda\le M+1$, we
then have
\[
\Pr(E_1~|~D_\lambda)=\min\left\{1,\frac{M-2}{t_\lambda-3}\right\}\enspace.
\]
We also have
\[
\Pr(E_2~|~E_1\cap D_\lambda)=
\prod_{j=\max\{t_\lambda,M+1\}}^{t_\gamma-1}\frac{j-2}{j}=
\frac{\max\{(t_\lambda-2)(t_\lambda-1),
M(M-1)\}}{\max\{(t_\gamma-2)(t_\gamma-1), M(M-1)\}}\enspace.
\]
Therefore,
\[
\Pr(D_\gamma~|~D_\lambda)=
\min\left\{1,\frac{M-2}{t_\lambda-3}\right\}\frac{\max\{(t_\lambda-2)(t_\lambda-1), M(M-1)\}}{\max\{(t_\gamma-2)(t_\gamma-1), M(M-1)\}}\enspace.
\]
With a case analysis, one can show that
\[
\Pr(D_\gamma~|~D_\lambda)\le\frac{1}{q_\gamma}\frac{\max\{M,t_\lambda-1\}}{M}\enspace.
\]
\end{description}
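Though the case analyses above are tedious, the resulting closed forms lend themselves to a quick numeric sanity check. The following Python sketch (an illustration of ours, not part of the original analysis; variable names and parameter ranges are arbitrary) brute-forces the closed form for $\Pr(D_\gamma~|~D_\lambda)$ from the last sub-case and verifies that it never exceeds the claimed bound $\frac{1}{q_\gamma}\frac{\max\{M,t_\lambda-1\}}{M}$, with $q_\gamma=\xi_{2,t_\gamma-1}$.

```python
# Numeric spot check (not a proof) of the bound on Pr(D_gamma | D_lambda)
# from the last sub-case above. Parameter ranges are arbitrary choices.

def xi2(t, M):
    # xi_{2,t}: reciprocal of the probability that 2 fixed edges of E^{(t)}
    # are both in the sample S at the end of time step t.
    return max(1.0, (t * (t - 1)) / (M * (M - 1)))

def pr_cond(t_lam, t_gam, M):
    # Closed form for Pr(D_gamma | D_lambda) derived in the last sub-case.
    a = min(1.0, (M - 2) / (t_lam - 3))
    num = max((t_lam - 1) * (t_lam - 2), M * (M - 1))
    den = max((t_gam - 1) * (t_gam - 2), M * (M - 1))
    return a * num / den

violations = 0
for M in range(6, 31):
    for t_lam in range(5, 101):
        for t_gam in range(t_lam, 101):
            bound = (1.0 / xi2(t_gam - 1, M)) * max(M, t_lam - 1) / M
            if pr_cond(t_lam, t_gam, M) > bound + 1e-12:
                violations += 1
print(violations)
```

In our runs no violation was found, consistent with the claim.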
To recap we have the following two scenarios when considering two distinct
triangles $\gamma$ and $\lambda$:
\begin{longenum}
\item if $\lambda$ and $\gamma$ share an edge and, assuming
w.l.o.g.~that the third edge of $\lambda$ is on the stream no later
than the third edge of $\gamma$, and the shared edge is neither the
last among all edges of $\lambda$ to appear on the stream nor the
last among all edges of $\gamma$ to appear on the stream, then we
have
\begin{align*}
\mathrm{Cov}[\delta_\lambda,\delta_\gamma] &\le
\mathbb{E}[\delta_\lambda\delta_\gamma]-1
=q_\lambda q_\gamma\Pr(\delta_\lambda\delta_\gamma
=q_\lambda q_\gamma)-1\\
&\le q_\lambda q_\gamma \frac{1}{q_\lambda q_\gamma}\frac{\max\{M,t_\lambda-1\}}{M} -1\le
\frac{\max\{M,t_\lambda-1\}}{M} -1\le \frac{t-1-M}{M};
\end{align*}where the last inequality follows from the fact that $t_\lambda\le
t$ and $t-1\ge M$.
For the pairs $(\lambda,\gamma)$ such that $t_\lambda\le M+1$, we have
$\max\{M,t_\lambda-1\}/M=1$ and therefore
$\mathrm{Cov}[\delta_\lambda,\delta_\gamma]\leq 0$. We should therefore
only consider the pairs for which $t_\lambda> M+1$. Their number is given
by $z^{(t)}$.
\item in all other cases, including when $\gamma$ and $\lambda$ do not
share an edge, we have
$\mathbb{E}[\delta_\lambda\delta_\gamma]\le 1$, and since
$\mathbb{E}[\delta_\lambda]\mathbb{E}[\delta_\gamma]=1$, we have
\[
\mathrm{Cov}[\delta_\lambda,\delta_\gamma] \le 0\enspace.
\]
\end{longenum}
Hence, we can bound
\[
\sum_{\substack{\lambda,\gamma\in\Delta^{(t)}\\\lambda\neq\gamma}}\mathrm{Cov}[\delta_\lambda,\delta_\gamma]\le
z^{(t)}\frac{t-1-M}{M}
\]
and the thesis follows by combining this into~\eqref{eq:variance2improved}.
\end{proof}
\subsubsection{Concentration}
We now show a concentration result for the estimation computed by \textsc{\algoname-impr}\xspace. It
relies on Chebyshev's
inequality~\citep[Thm.~3.6]{mitzenmacher2005probability} and on
Thm.~\ref{thm:improvedvariance}.
\begin{theorem}\label{thm:improvedconcentration}
Let $t\ge 0$ and assume $|\Delta^{(t)}| >0$. For any
$\varepsilon,\delta\in(0,1)$, if
\[
M> \max\left\{\sqrt{\frac{2(t-1)(t-2)}{\delta
\varepsilon^2|\Delta^{(t)}|+2}+\frac{1}{4}}+\frac{1}{2},\frac{2z^{(t)}(t-1)}{\delta
\varepsilon^2 |\Delta^{(t)}|^2+2z^{(t)}}\right\}
\]
then $|\tau^{(t)}-|\Delta^{(t)}||<\varepsilon|\Delta^{(t)}|$ with
probability $>1-\delta$.
\end{theorem}
\begin{proof}
By Chebyshev's inequality it is sufficient to prove that
$$\frac{\mathrm{Var}[\tau^{(t)}]}{\varepsilon^2|\Delta^{(t)}|^2} <\delta\enspace.$$
We can write
$$\frac{\mathrm{Var}[\tau^{(t)}]}{\varepsilon^2|\Delta^{(t)}|^2}\le
\frac{1}{\varepsilon^2|\Delta^{(t)}|}\left((\eta(t)-1)+z^{(t)}\frac{t-1-M}{M|\Delta^{(t)}|}\right)\enspace.
$$
Hence it is sufficient to impose the following two conditions:
\begin{description}
\item[Condition 1]
\begin{align}
\frac{\delta}{2} &> \frac{\eta(t)-1}{\varepsilon^2|\Delta^{(t)}|}\label{eq:cond1}\\
&>\frac{1}{\varepsilon^2|\Delta^{(t)}|}\frac{(t-1)(t-2)-M(M-1)}{M(M-1)}\nonumber,
\end{align}
which is verified for:
\begin{equation*}
M(M-1) > \frac{2(t-1)(t-2)}{\delta \varepsilon^2|\Delta^{(t)}|+2}.
\end{equation*}
As $t> M$, we have $\frac{2(t-1)(t-2)}{\delta \varepsilon^2|\Delta^{(t)}|+2}>0$.
The condition in~\eqref{eq:cond1} is thus verified for:
\begin{equation*}
M>\frac{1}{2}\left(\sqrt{4\frac{2(t-1)(t-2)}{\delta \varepsilon^2|\Delta^{(t)}|+2}+1}+1\right)
\end{equation*}
\item[Condition 2]
\begin{align*}
\frac{\delta}{2} &> z^{(t)}\frac{t-1-M}{M\varepsilon^2|\Delta^{(t)}|^2},
\end{align*}
which is verified for:
\begin{equation*}
M > \frac{2z^{(t)}(t-1)}{\delta \varepsilon^2 |\Delta^{(t)}|^2+2z^{(t)}}.
\end{equation*}
\end{description}
The theorem follows.
\end{proof}
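To get a feel for the magnitude of the sample size required by the theorem, the following Python sketch (an illustration of ours; the example input values are arbitrary) evaluates the maximum of the two terms in the bound.

```python
# Illustrative evaluation of the sample-size lower bound from the theorem
# above; the specific input values below are arbitrary examples.
import math

def min_sample_size(t, n_tri, z, eps, delta):
    # First term of the max: condition on the eta(t) part of the variance.
    cond1 = math.sqrt(2 * (t - 1) * (t - 2) / (delta * eps**2 * n_tri + 2)
                      + 0.25) + 0.5
    # Second term: condition on the z^{(t)} (shared-edge pairs) part.
    cond2 = 2 * z * (t - 1) / (delta * eps**2 * n_tri**2 + 2 * z)
    return math.ceil(max(cond1, cond2))

# e.g., a stream of 10^6 edges with 10^5 triangles and z^{(t)} = 10^4
print(min_sample_size(10**6, 10**5, 10**4, 0.1, 0.1))
```

As expected, a smaller $\varepsilon$ (or $\delta$) requires a larger sample.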
In Thms.~\ref{thm:improvedvariance} and~\ref{thm:improvedconcentration}, it is
possible to replace the value $z^{(t)}$ with the more interpretable $r^{(t)}$,
which is agnostic to the order of the edges on the stream but gives a looser
upper bound to the variance.
\subsection{Fully-dynamic algorithm -- \textsc{\algoname-fd}\xspace}\label{sec:fulldyn}
\textsc{\algoname-fd}\xspace computes unbiased estimates of the global and local triangle counts in a
\emph{fully-dynamic stream where edges are inserted/deleted in any arbitrary,
adversarial order}. It is based on \emph{random pairing}
(RP)~\citep{GemullaLH08}, a sampling scheme that extends reservoir sampling and
can handle deletions. The idea behind the RP scheme is that edge deletions seen
on the stream will be ``compensated'' by future edge insertions. Following RP,
\textsc{\algoname-fd}\xspace keeps a counter $d_\mathrm{i}$ (resp.~$d_\mathrm{o}$) to keep track of
the number of uncompensated edge deletions involving an edge $e$ that was
(resp.~was \emph{not}) in $\mathcal{S}$ at the time the deletion for $e$ was on the
stream.
When an edge deletion for an edge $e\in E^{(t-1)}$ is on the stream at the
beginning of time step $t$, then, if $e\in\mathcal{S}$ at this time, \textsc{\algoname-fd}\xspace
removes $e$ from $\mathcal{S}$ (effectively decreasing the number of edges stored in
the sample by one) and increases $d_\mathrm{i}$ by one. Otherwise, it just
increases $d_\mathrm{o}$ by one. When an edge insertion for an edge $e\not\in
E^{(t-1)}$ is on the stream at the beginning of time step $t$, if
$d_\mathrm{i}+d_\mathrm{o}=0$, then \textsc{\algoname-fd}\xspace follows the standard reservoir
sampling scheme: if $|\mathcal{S}|<M$, then $e$ is deterministically inserted in
$\mathcal{S}$ without removing any edge already in $\mathcal{S}$; otherwise it
is inserted in $\mathcal{S}$ with probability $M/t$, replacing a uniformly-chosen edge
already in $\mathcal{S}$. If instead $d_\mathrm{i}+d_\mathrm{o}>0$, then $e$ is
inserted in $\mathcal{S}$ with probability $d_\mathrm{i}/(d_\mathrm{i}+d_\mathrm{o})$;
since this insertion can only happen if $d_\mathrm{i}>0$, in which case $|\mathcal{S}|<M$, no edge
already in $\mathcal{S}$ needs to be removed. In this latter case, after having handled the
possible insertion of $e$ into $\mathcal{S}$, the algorithm decreases $d_\mathrm{i}$ by
$1$ if $e$ was inserted in $\mathcal{S}$, otherwise it decreases $d_\mathrm{o}$ by $1$.
\textsc{\algoname-fd}\xspace also keeps track of $s^{(t)}=|E^{(t)}|$ by appropriately incrementing or
decrementing a counter by $1$ depending on whether the element on the stream is
an edge insertion or deletion. The pseudocode for \textsc{\algoname-fd}\xspace is presented in
Alg.~\ref{algo: TRIEST-FD} where the {\sc UpdateCounters} procedure is the one
from Alg.~\ref{alg:triest-base}.
\begin{algorithm}[t]
\small
\caption{\textsc{\algoname-fd}\xspace}
\label{algo: TRIEST-FD}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} Fully-dynamic edge stream $\Sigma$, integer $M\ge 6$}
\State $\mathcal{S} \leftarrow \emptyset$, $d_\mathrm{i} \leftarrow 0$, $d_\mathrm{o} \leftarrow 0$, $t\leftarrow
0$, $s\leftarrow 0$
\For{\textbf{each} element $\left(\bullet, \left(u,v\right)\right)$ from $\Sigma$}
\State{$t\leftarrow t +1$}
\State{$s\leftarrow s \bullet 1$}
\If{$\bullet = +$}
\If{$\textsc{SampleEdge}\left(u,v\right)$}
\State \textsc{UpdateCounters}$\left(+,\left(u,v\right)\right)$
\Comment \textsc{UpdateCounters} is defined as in
Alg.~\ref{alg:triest-base}.
\EndIf
\ElsIf{$\left(u,v\right)\in \mathcal{S}$}
\State \textsc{UpdateCounters}$\left(-,\left(u,v\right)\right)$
\State $\mathcal{S}\leftarrow \mathcal{S}\setminus\{(u,v)\}$
\State $d_\mathrm{i} \leftarrow d_\mathrm{i} +1$
\Else
\State $d_\mathrm{o} \leftarrow d_\mathrm{o} +1$
\EndIf
\EndFor
\Statex
\Function{\textsc{SampleEdge}}{$u,v$}
\If{$d_\mathrm{o}+d_\mathrm{i}=0$}
\If{$|\mathcal{S}|<M$}
\State{$\mathcal{S} \leftarrow \mathcal{S} \cup \{\left(u,v\right)\}$}
\State{\textbf{return}} True
\ElsIf{\textsc{FlipBiasedCoin}$(\frac{M}{t}) = $ heads}
\State Select $(z,w)$ uniformly at random from $\mathcal{S}$
\State \textsc{UpdateCounters}$\left(-,(z,w)\right)$
\State{$\mathcal{S}\leftarrow \left(\mathcal{S} \setminus \{(z,w)\}\right) \cup \{ \left(u,v\right)\}$}
\State{\textbf{return}} True
\EndIf
\ElsIf{\textsc{FlipBiasedCoin}$\left(\frac{d_\mathrm{i}}{d_\mathrm{i}+d_\mathrm{o}}\right) = $ heads}
\State{$\mathcal{S} \leftarrow \mathcal{S} \cup \{\left(u,v\right)\}$}
\State $d_\mathrm{i} \leftarrow d_\mathrm{i} -1$
\State{\textbf{return}} True
\Else
\State $d_\mathrm{o} \leftarrow d_\mathrm{o} -1$
\State{\textbf{return}} False
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
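The decision rule of \textsc{SampleEdge} can also be rendered compactly in executable form. The sketch below (ours, with hypothetical variable names; it omits the calls to \textsc{UpdateCounters} on eviction) mirrors the pseudocode in Alg.~\ref{algo: TRIEST-FD}.

```python
# Sketch of the random-pairing insertion rule of TRIEST-FD: uncompensated
# deletions are "paired" with later insertions before standard reservoir
# sampling resumes. Counter updates on eviction are left to the caller.
import random

def sample_edge(S, e, t, M, d):
    # d is a dict with the uncompensated-deletion counters 'i' and 'o'.
    if d['i'] + d['o'] == 0:                 # no deletions to pair: reservoir
        if len(S) < M:
            S.add(e)
            return True
        if random.random() < M / t:          # biased coin with probability M/t
            evicted = random.choice(tuple(S))
            S.remove(evicted)                # caller must also update counters
            S.add(e)
            return True
        return False
    if random.random() < d['i'] / (d['i'] + d['o']):   # random pairing
        S.add(e)                             # d_i > 0 implies |S| < M here
        d['i'] -= 1
        return True
    d['o'] -= 1
    return False
```

Note that the pairing branch never evicts: as argued above, it can only insert when $d_\mathrm{i}>0$, which implies $|\mathcal{S}|<M$.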
\subsubsection{Estimation}
We denote as $M^{(t)}$ the size of $\mathcal{S}$ at the end of time $t$ (we always have
$M^{(t)}\le M$). For any time $t$, let $d_\mathrm{i}^{(t)}$ and
$d_\mathrm{o}^{(t)}$ be the value of the counters $d_\mathrm{i}$ and
$d_\mathrm{o}$ at the end of time $t$ respectively, and let
$\omega^{(t)}=\min\{M, s^{(t)}+d_\mathrm{i}^{(t)}+d_\mathrm{o}^{(t)}\}$. Define
\begin{equation}\label{eq:kappa}
\kappa^{(t)}=1-\sum_{j=0}^2\binom{s^{(t)}}{j}\binom{d_\mathrm{i}^{(t)}+d_\mathrm{o}^{(t)}}{\omega^{(t)}-j}\bigg/\binom{s^{(t)}+d_\mathrm{i}^{(t)}+d_\mathrm{o}^{(t)}}{\omega^{(t)}}\enspace.
\end{equation}
For any three positive integers $a,b,c$ s.t.~$a\le b\le c$, define\footnote{We
follow the convention that $\binom{0}{0}=1$.}
\[
\psi_{a,b,c}=\binom{c}{b}\Big/\binom{c-a}{b-a}= \prod_{i=0}^{a-1}\frac{c-i}{b-i}\enspace.
\]
When queried at the end of time $t$, for an estimation of the global
number of triangles, \textsc{\algoname-fd}\xspace returns
\[
\rho^{(t)}=\left\{\begin{array}{l} 0 \text{ if } M^{(t)}<3 \\
\frac{\tau^{(t)}}{\kappa^{(t)}}\psi_{3,M^{(t)},s^{(t)}}=\frac{\tau^{(t)}}{\kappa^{(t)}}\frac{s^{(t)}(s^{(t)}-1)(s^{(t)}-2)}{M^{(t)}(M^{(t)}-1)(M^{(t)}-2)}
\text{ othw.}
\end{array}\right.
\]
\textsc{\algoname-fd}\xspace can keep track of $\kappa^{(t)}$ during the execution, each update of
$\kappa^{(t)}$ taking time $O(1)$. Hence the time to return the estimations is
still $O(1)$.
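The estimator can be computed directly from the definitions above. The following Python sketch (an illustration of ours, not the authors' implementation) evaluates $\kappa^{(t)}$ and $\rho^{(t)}$ from the running counters.

```python
# Sketch of the TRIEST-FD estimator rho^{(t)}, computed directly from the
# definitions of kappa^{(t)} and psi_{a,b,c} above.
from math import comb

def kappa(s, d_i, d_o, M):
    d = d_i + d_o
    omega = min(M, s + d)
    hyp = sum(comb(s, j) * comb(d, omega - j)
              for j in range(3) if omega - j >= 0)
    return 1 - hyp / comb(s + d, omega)

def rho(tau, s, d_i, d_o, M_t, M):
    if M_t < 3:
        return 0.0
    # psi_{3, M^{(t)}, s^{(t)}} = s(s-1)(s-2) / (M_t (M_t - 1)(M_t - 2))
    psi = s * (s - 1) * (s - 2) / (M_t * (M_t - 1) * (M_t - 2))
    return tau / kappa(s, d_i, d_o, M) * psi
```

With no uncompensated deletions ($d_\mathrm{i}=d_\mathrm{o}=0$) and $s^{(t)}\ge 3$, we get $\kappa^{(t)}=1$ and $\rho^{(t)}$ reduces to the plain scaling $\tau^{(t)}\psi_{3,M^{(t)},s^{(t)}}$.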
\subsubsection{Analysis}
We now present the analysis of the estimations computed by \textsc{\algoname-fd}\xspace,
showing results involving their unbiasedness, their variance, and their
concentration around their expectation. Results analogous to those in
Thms.~\ref{thm:fdunbiased}, \ref{thm:fdvariance}, and~\ref{thm:fdconcentration}
hold for the local triangle count for any $u\in V^{(t)}$, replacing the global
quantities with the corresponding local ones.
\subsubsection{Expectation}
Let $t^*$ be the first $t\ge M+1$ such that $|E^{(t)}|=M+1$, if such a time step
exists (otherwise $t^*=+\infty$).
\begin{theorem}\label{thm:fdunbiased}
We have $\rho^{(t)}=|\Delta^{(t)}|$ for all $t<t^*$, and
$\mathbb{E}\left[\rho^{(t)}\right] = |\Delta^{(t)}|$ for $t\ge t^*$.
\end{theorem}
The proof, deferred to App.~\ref{app:algobase}, relies on properties of RP and
on the definitions of $\kappa^{(t)}$ and $\rho^{(t)}$. Specifically, it uses
Lemma~\ref{lem:gemullahighorder}, which is the analogue of
Lemma~\ref{lem:reservoirhighorder} but for RP, and some additional technical
lemmas (including an equivalent of Lemma~\ref{lem:baseunbiasedaux} but for RP),
and combines them using the law of total expectation by conditioning on the value
of $M^{(t)}$.
\subsubsection{Variance}
\begin{theorem}\label{thm:fdvariance}
Let $t>t^*$ s.t.~$|\Delta^{(t)}| >0$ and $s^{(t)}\geq M$. Suppose we have
$d^{(t)}=d_o^{(t)}+d_i^{(t)}\leq \alpha s^{(t)}$ total unpaired deletions at
time $t$, with $0\leq\alpha< 1$. If $M\geq
\frac{1}{2\sqrt{\alpha'-\alpha}}7\ln s^{(t)}$ for some $\alpha< \alpha'<1$, we
have:
\begin{align*}
\mathrm{Var}\left[\rho^{(t)}\right] &\leq
(\kappa^{(t)})^{-2}|\Delta^{(t)}|\left(\psi_{3,M(1-\alpha'),s^{(t)}}-1 \right) +(\kappa^{(t)})^{-2}2\\
&+(\kappa^{(t)})^{-2}r^{(t)}\left(\psi_{3,M(1-\alpha'),s^{(t)}}^2\psi_{5,M(1-\alpha'),s^{(t)}}^{-1}-1\right)
\end{align*}
\end{theorem}
The proof of Thm.~\ref{thm:fdvariance} is deferred to App.~\ref{app:algofd}. It
uses two results on the variance of $\rho^{(t)}$ conditioned on a specific value of
$M^{(t)}$ (Lemmas~\ref{lem:confdvariance} and~\ref{lem:boundvarfd}), and an
analysis of the probability distribution of $M^{(t)}$ (Lemma~\ref{lem:mtprops}
and Corollary~\ref{corol:samplesizebound}). These results are then combined
using the law of total variance.
\subsubsection{Concentration}
The following result relies on Chebyshev's inequality and on
Thm.~\ref{thm:fdvariance}, and its proof (reported in App.~\ref{app:algofd})
follows steps similar to those in the proof of Thm.~\ref{thm:improvedconcentration}.
\begin{theorem}\label{thm:fdconcentration}
Let $t\ge t^*$ s.t.~$|\Delta^{(t)}| >0$ and $s^{(t)}\geq M$. Let
$d^{(t)}=d_o^{(t)}+d_i^{(t)}\leq \alpha s^{(t)}$ for some $0\leq\alpha< 1$.
For any $\varepsilon,\delta\in(0,1)$, if for some $\alpha< \alpha'<1$
\begin{align*}
M >&\max\Bigg\{\frac{1}{\sqrt{\alpha'-\alpha}}7\ln s^{(t)},\\
&(1-\alpha')^{-1}\left(\sqrt[3]{\frac{2s^{(t)}(s^{(t)}-1)(s^{(t)}-2)}{\delta \varepsilon^2|\Delta^{(t)}|(\kappa^{(t)})^{2}+2\frac{|\Delta^{(t)}|-2}{|\Delta^{(t)}|}}}+2 \right),\\
&\frac{(1-\alpha')^{-1}}{3} \left( \frac{r^{(t)}s^{(t)}}{\delta \varepsilon^2 |\Delta^{(t)}|^2(\kappa^{(t)})^{-2}+2r^{(t)}}\right)\Bigg\}
\end{align*}
then $|\rho^{(t)}-|\Delta^{(t)}||<\varepsilon|\Delta^{(t)}|$ with
probability $>1-\delta$.
\end{theorem}
\subsection{Counting global and local triangles in
multigraphs}\label{sec:multigraphs}
We now discuss how to extend \textsc{\algoname}\xspace to approximate the local and global triangle
counts in multigraphs.
\subsubsection{\textsc{\algoname-base}\xspace on multigraphs}
\textsc{\algoname-base}\xspace can be adapted to work on multigraphs as follows. First of all, the
sample $\mathcal{S}$ should be considered a \emph{bag}, i.e., it may contain multiple
copies of the same edge. Secondly, the function \textsc{UpdateCounters} must be
changed as presented in Alg.~\ref{alg:triest-base-multi}, to take into account
the fact that inserting or removing an edge $(u,v)$ from $\mathcal{S}$ respectively
increases or decreases the global number of triangles in $G^\mathcal{S}$ by a quantity
that depends on the product of the number of edges $(c,u)\in\mathcal{S}$ and
$(c,v)\in\mathcal{S}$, for $c$ in the shared neighborhood (in $G^\mathcal{S}$) of $u$ and $v$
(and similarly for the local number of triangles incident to $c$).
\begin{algorithm}[ht]
\small
\caption{\textsc{UpdateCounters} function for \textsc{\algoname-base}\xspace on multigraphs}
\label{alg:triest-base-multi}
\begin{algorithmic}[1]
\Statex
\Function{UpdateCounters}{$(\bullet, (u,v))$}
\State $\mathcal{N}^\mathcal{S}_{u,v} \leftarrow \mathcal{N}^\mathcal{S}_u \cap \mathcal{N}^\mathcal{S}_v$
\ForAll {$c \in \mathcal{N}^\mathcal{S}_{u,v}$}
\State $y_{c,u} \leftarrow $ number of edges between $c$ and $u$ in
$\mathcal{S}$
\State $y_{c,v} \leftarrow $ number of edges between $c$ and $v$ in
$\mathcal{S}$
\State $y_{c} \leftarrow y_{c,u}\cdot y_{c,v}$
\State $\tau \leftarrow \tau \bullet y_c$
\State $\tau_c \leftarrow \tau_c \bullet y_c$
\State $\tau_u \leftarrow \tau_u \bullet y_c$
\State $\tau_v \leftarrow \tau_v \bullet y_c$
\EndFor
\EndFunction
\end{algorithmic}
\end{algorithm}
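As a concrete (hypothetical) rendition of Alg.~\ref{alg:triest-base-multi}, the sketch below maintains the bag $\mathcal{S}$ as symmetric multiplicity maps, so that each shared neighbor $w$ of $u$ and $v$ contributes $y_{w,u}\cdot y_{w,v}$ to the counters.

```python
# Sketch (ours) of UpdateCounters for the multigraph case: the sample is a
# bag, so edge multiplicities multiply.
from collections import defaultdict, Counter

adj = defaultdict(Counter)   # adj[u][v] = multiplicity of edge (u, v) in S
tau = 0                      # global triangle counter
tau_local = Counter()        # local triangle counters

def update_counters(sign, u, v):   # sign: +1 on insertion, -1 on removal
    global tau
    for w in set(adj[u]) & set(adj[v]):   # shared neighborhood of u and v
        y = adj[w][u] * adj[w][v]         # y_{w,u} * y_{w,v}
        tau += sign * y
        for x in (w, u, v):
            tau_local[x] += sign * y

def insert_edge(u, v):       # counters first, then the edge, as in TRIEST
    update_counters(+1, u, v)
    adj[u][v] += 1
    adj[v][u] += 1

# Two parallel copies of edge (c, a) yield two triangles on {a, b, c}.
for e in [("a", "b"), ("b", "c"), ("c", "a"), ("c", "a")]:
    insert_edge(*e)
print(tau)
```

The example illustrates the multigraph-specific effect: a pair of triangles can share \emph{two} edges, and parallel edges multiply the contribution of each shared neighbor.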
For this modified version of \textsc{\algoname-base}\xspace, that we call \textsc{\textsc{\algoname-base}\xspace-m}, an
equivalent version of Lemma~\ref{lem:baseunbiasedaux} holds. Therefore, we can
prove a result on the unbiasedness of \textsc{\textsc{\algoname-base}\xspace-m} equivalent (i.e.,
with the same statement) as Thm.~\ref{thm:baseunbiased}. The proof of such
result is also the same as the one for Thm.~\ref{thm:baseunbiased}.
To analyze the variance of \textsc{\textsc{\algoname-base}\xspace-m}, we need to take into
consideration the fact that, in a multigraph, a pair of triangles may share
\emph{two} edges, and the variance depends (also) on the number of such pairs.
Let $r_1^{(t)}$ be the number of unordered pairs of distinct triangles from
$\Delta^{(t)}$ sharing an edge and let $r_2^{(t)}$ be the number of unordered
pairs of distinct triangles from $\Delta^{(t)}$ sharing \emph{two} edges (such
pairs may exist in a multigraph, but not in a simple graph). Let
$q^{(t)}=\binom{|\Delta^{(t)}|}{2}-r_1^{(t)}-r_2^{(t)}$ be the number of
unordered pairs of distinct triangles that do not share any edge.
\begin{theorem}\label{thm:basevariance-multi}
For any $t>M$, let $f(t) = \xi^{(t)}-1$,
\[
g(t) = \xi^{(t)}\frac{(M-3)(M-4)}{(t-3)(t-4)} -1
\]
and
\[
h(t) = \xi^{(t)}\frac{(M-3)(M-4)(M-5)}{(t-3)(t-4)(t-5)}
-1\enspace(\le 0),
\]
and
\[
j(t) = \xi^{(t)}\frac{M-3}{t-3}
-1\enspace.
\]
We have:
\[
\mathrm{Var}\left[\xi(t)\tau^{(t)}\right] = |\Delta^{(t)}|
f(t)+r_1^{(t)}g(t)+r_2^{(t)}j(t)+q^{(t)}h(t).
\]
\end{theorem}
The proof follows the same lines as the one for Thm.~\ref{thm:basevariance},
with the additional steps needed to take into account the contribution of the
$r_2^{(t)}$ pairs of triangles in $G^{(t)}$ sharing two edges.
\subsubsection{\textsc{\algoname-impr}\xspace on multigraphs}
A variant \textsc{\textsc{\algoname-impr}\xspace-m} of \textsc{\algoname-impr}\xspace for multigraphs can be
obtained by using the function \textsc{UpdateCounters} defined in
Alg.~\ref{alg:triest-base-multi}, modified to increment\footnote{As in
\textsc{\algoname-impr}\xspace, all calls to \textsc{UpdateCounters} in \textsc{\textsc{\algoname-impr}\xspace-m}
have $\bullet=+$. See also Alg.~\ref{alg:triest-impr}.} the counters by
$\eta^{(t)}y_c^{(t)}$, rather than $y_c^{(t)}$, where
$\eta^{(t)}=\max\{1,(t-1)(t-2)/(M(M-1))\}$. The result stated in
Thm.~\ref{thm:improvedunbiased} holds also for the estimations computed by
\textsc{\textsc{\algoname-impr}\xspace-m}. An upper bound to the variance of the estimations,
similar to the one presented in Thm.~\ref{thm:improvedvariance} for
\textsc{\algoname-impr}\xspace, could potentially be obtained, but its derivation would involve a
high number of special cases, as we have to take into consideration the order of
the edges in the stream.
\subsubsection{\textsc{\algoname-fd}\xspace on multigraphs}
\textsc{\algoname-fd}\xspace can be modified in order to provide an approximation of the number of
global and local triangles on multigraphs observed as a stream of edge insertions
and deletions. It is however necessary to clearly state the data model. We
assume that for all pairs of vertices $u,v \in V^{(t)}$, each edge connecting
$u$ and $v$ is assigned a label that is unique among the edges connecting $u$
and $v$. An edge is therefore uniquely identified by its endpoints and its label
as $((u,v), label)$. Elements of the stream are now in the form $(\bullet, (u,v),
label)$ (where $\bullet$ is either $+$ or $-$). This assumption, somewhat
strong, is necessary in order to apply the \emph{random pairing} sampling
scheme~\citep{GemullaLH08} to fully-dynamic multigraph edge streams.
Within this model, we can obtain an algorithm \textsc{\textsc{\algoname-fd}\xspace-m} for multigraphs
by adapting \textsc{\algoname-fd}\xspace as follows. The sample $\mathcal{S}$ is a \emph{set} of elements
$((u,v), label)$. When a deletion $(-, (u,v), label)$ is on the stream, the
sample $\mathcal{S}$ is modified if and only if $((u,v), label)$ belongs to $\mathcal{S}$.
This change can be implemented in the pseudocode from Alg.~\ref{algo: TRIEST-FD}
by modifying line 8 to be
\[
\mbox{``\bf else if } ((u,v), label)\in \mathcal{S} \mbox{ {\bf then}''}\enspace.
\]
Additionally, the function \textsc{UpdateCounters} to be used is the one
presented in Alg.~\ref{alg:triest-base-multi}.
We can prove a result on the unbiasedness of \textsc{\textsc{\algoname-fd}\xspace-m} equivalent
(i.e., with the same statement) as Thm.~\ref{thm:fdunbiased}. The proof of such
result is also the same as the one for Thm.~\ref{thm:fdunbiased}. An upper bound
to the variance of the estimations, similar to the one presented in
Thm.~\ref{thm:fdvariance} for \textsc{\algoname-fd}\xspace, could be obtained by considering the fact
that in a multigraph two triangles can share two edges, in a fashion similar to
what we discussed in Thm.~\ref{thm:basevariance-multi}.
\subsection{Discussion}\label{sec:discussion}
We now briefly discuss the algorithms we just presented, the techniques
they use, and the theoretical results we obtained for \textsc{\algoname}\xspace, in order to
highlight advantages, disadvantages, and limitations of our approach.
\paragraph{On reservoir sampling}
Our approach of using reservoir sampling to keep a random sample of edges can be
extended to many other graph mining problems, including approximate counting of
other subgraphs more or less complex than triangles (e.g., squares, trees with a
specific structure, wedges, cliques, and so on). The estimations of such counts
would still be unbiased, but as the number of edges composing the subgraph(s) of
interest increases, the variance of the estimators also increases, because the
probability that all edges composing a subgraph are in the sample (or all but
the last one when the last one arrives, as in the case of \textsc{\algoname-impr}\xspace),
decreases as their number increases. Other works in the triangle counting
literature~\citep{PavanTTW13,JhaSP15} use samples of wedges, rather than edges.
They perform worse than \textsc{\algoname}\xspace in both accuracy and runtime (see
Sect.~\ref{sec:experiments}), but the idea of sampling and storing more complex
structures rather than simple edges could be a potential direction for
approximate counting of larger subgraphs.
\paragraph{On the analysis of the variance}
We showed an exact analysis of the variance of \textsc{\algoname-base}\xspace, but for the other
algorithms we presented \emph{upper bounds} to the variance of the estimates.
These bounds can still be improved as they are not currently tight. For example,
we already commented on the fact that the bound in~\eqref{eq:improvedvariance}
does not include a number of negative terms that would tighten it
(i.e., decrease the bound), and that could potentially be no smaller than the
term depending on $z^{(t)}$. The absence of such terms is due to the fact that
it seems very challenging to obtain non-trivial \emph{upper bounds} to them
that are valid for every $t>M$. Our proof for this bound uses a careful case-by-case
analysis, considering the different situations for pairs of triangles (e.g.,
sharing or not sharing an edge, and considering the order of edges on the
stream). It may be possible to obtain tighter bounds to the variance by
following a more holistic approach that takes into account the fact that the
sizes of the different classes of triangle pairs are highly dependent on each
other.
Another issue with the bound to the variance from~\eqref{eq:improvedvariance} is
that the quantity $z^{(t)}$ depends on the order of edges on the stream. As
already discussed, the bound can be made independent of the order by loosening
it even more. Very recent developments in the sampling theory
literature~\citep{DevilleT04} presented sampling schemes and estimators whose
second-order sampling probabilities do not depend on the order of the stream, so
it should be possible to obtain such bounds also for the triangle counting
problem, but a sampling scheme different than reservoir sampling would have to
be used, and a careful analysis is needed to establish its net advantages in
terms of performance and scalability to billion-edge graphs.
\paragraph{On the trade-off between speed and accuracy}
We concluded both previous paragraphs in this subsection by mentioning
techniques different than reservoir sampling of edges as potential directions to
improve and extend our results. In both cases these techniques are more complex
not only in their analysis but also \emph{computationally}. Given that the main
goal of algorithms like \textsc{\algoname}\xspace is to make it possible to analyze graphs with
billions (and possibly more) nodes, the gain in accuracy
needs to be weighed against expected slowdowns in execution. As we show in our
experimental evaluation in the next section, \textsc{\algoname}\xspace, especially in the
\textsc{\algoname-impr}\xspace variant, seems to strike the right balance between
speed and accuracy when compared with existing contributions.
\section{Experimental evaluation}\label{sec:experiments}
We evaluated \textsc{\algoname}\xspace on several real-world graphs with up to a billion edges. The
algorithms were implemented in \verb|C++|, and ran on the Brown University CS
department
cluster.\footnote{\url{https://cs.brown.edu/about/system/services/hpc/grid/}}
Each run employed a single core and used at most $4$ GB of RAM. The code is
available from \url{http://bigdata.cs.brown.edu/triangles.html}.
\paragraph{Datasets} We created the streams from the following publicly
available graphs (properties in Table~\ref{table:graphs}).
\begin{description}
\item[Patent (Co-Aut.) and Patent (Cit.)] The {\it Patent (Co-Aut.)} and {\it
Patent (Cit.)} graphs are obtained from a dataset of $\approx 2$ million
U.S.~patents granted between '75 and '99~\citep{hjt01}. In {\it Patent
(Co-Aut.)}, the nodes represent inventors and there is an edge with
timestamp $t$ between two co-inventors of a patent if the patent was granted
in year $t$. In {\it Patent (Cit.)}, nodes are patents and there is an edge
$(a,b)$ with timestamp $t$ if patent $a$ cites $b$ and $a$ was granted in
year $t$.
\item[LastFm] The LastFm graph is based on a dataset~\citep{celma2009music,
koblenz} of $\approx20$ million \url{last.fm} song listenings, $\approx1$
million songs and $\approx1000$ users. There is a node for each song and an
edge between two songs if $\ge3$ users listened to both on day $t$.
\item[Yahoo!-Answers] The Yahoo!~Answers graph is obtained from a sample of
$\approx 160$ million answers to $\approx25$ million questions posted on
Yahoo!~Answers~\citep{yahoo-ans}. An edge connects two users at time
$\max(t_1, t_2)$ if they both answered the same question at times $t_1$ and
$t_2$, respectively. We removed $6$ outlier questions with more than $5000$
answers.
\item[Twitter] This is a snapshot~\citep{kwak2010twitter,brsllp} of the
Twitter followers/following network with $\approx 41$ million nodes and
$\approx 1.5$ billion edges. We do not have time information for the edges,
hence we assign a random timestamp to each edge (and we ignore the edge
directions).
\end{description}
\paragraph{Ground truth}
To evaluate the accuracy of our algorithms, we computed
the \emph{ground truth} for our smaller graphs (i.e., the exact number of
global and local triangles for each time step), using an exact algorithm. The
entire current graph is stored in memory, and when an edge $(u,v)$ is inserted (or
deleted) we update the current count of local and global triangles by checking
how many triangles are completed (or broken). As exact algorithms are not
scalable, computing the exact triangle count is feasible only for small graphs
such as Patent (Co-Aut.), Patent (Cit.) and LastFm. Table~\ref{table:graphs}
reports the exact total number of triangles at the end of the stream for those
graphs (and an estimate for the larger ones using \textsc{\algoname-impr}\xspace with
$M=1000000$).
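The ground-truth computation described above can be sketched as follows; this is an illustrative Python sketch of the na\"ive exact approach (maintain the graph, and on each insertion or deletion adjust counts by the common neighbors of the endpoints), not the authors' C++ implementation:

```python
from collections import defaultdict

def exact_triangle_stream(stream):
    """Maintain exact global and local triangle counts over a stream of
    edge insertions ('+') and deletions ('-')."""
    adj = defaultdict(set)
    local = defaultdict(int)
    global_count = 0
    counts = []
    for op, u, v in stream:
        if op == '+':
            # Every common neighbor w of u and v completes a triangle {u, v, w}.
            common = adj[u] & adj[v]
            global_count += len(common)
            for w in common:
                local[u] += 1; local[v] += 1; local[w] += 1
            adj[u].add(v); adj[v].add(u)
        else:
            adj[u].discard(v); adj[v].discard(u)
            # Every remaining common neighbor w loses the triangle {u, v, w}.
            common = adj[u] & adj[v]
            global_count -= len(common)
            for w in common:
                local[u] -= 1; local[v] -= 1; local[w] -= 1
        counts.append(global_count)
    return counts, dict(local)
```

Storing the full adjacency structure is what makes this approach non-scalable, as noted above: memory grows with the size of the current graph rather than with a fixed budget $M$.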
\begin{table}[ht]
\tbl{Properties of the dynamic graph streams analyzed. $|V|$, $|E|$, $|E_u|$, and
  $|\Delta|$ refer, respectively, to the number of nodes in the graph, the
  number of edge addition events, the number of distinct edge additions, and
  the maximum number of triangles in the graph (for Yahoo!~Answers and Twitter
  estimated with \textsc{\algoname-impr}\xspace with $M=1{,}000{,}000$, otherwise computed exactly with
  the na\"ive algorithm).
}{
\begin{tabular}{ccccc}
\toprule
Graph & $|V|$ & $|E|$ &$|E_u|$ & $|\Delta|$ \\
\midrule
Patent (Co-Aut.) & $1{,}162{,}227$ & $3{,}660{,}945$ & 2{,}724{,}036 & \num{3.53e+06} \\
\midrule
Patent (Cit.) & $2{,}745{,}762$ & $13{,}965{,}410$ & 13{,}965{,}132 & \num{6.91e+06}\\
\midrule
LastFm & $681{,}387$ & $43{,}518{,}693$ & 30{,}311{,}117 & \num{1.13e+09}\\
\midrule
Yahoo!~Answers & $2{,}432{,}573$ & $\num{1.21e9}$&\num{1.08e9} & \num{7.86e10} \\
\midrule
Twitter & $41{,}652{,}230$& $\num{1.47e9}$ & \num{1.20e9}& \num{3.46e10}\\
\bottomrule
\end{tabular}
}
\label{table:graphs}
\end{table}
\subsection{Insertion-only case}
We now evaluate \textsc{\algoname}\xspace on insertion-only streams and compare its performance
with those of state-of-the-art approaches~\citep{LimK15,JhaSP15,PavanTTW13},
showing that \textsc{\algoname}\xspace has an average estimation error significantly smaller than
these methods both for the global and local estimation problems, while using the
same amount of memory.
\paragraph{Estimation of the global number of triangles}
Starting from an empty graph we add one edge at a time, in timestamp order.
Figure~\ref{fig:add-only} illustrates the evolution, over time, of the
estimation computed by \textsc{\algoname-impr}\xspace with $M=1{,}000{,}000$. For smaller graphs
for which the ground truth can be computed exactly, the curve of the exact count
is practically indistinguishable from \textsc{\algoname-impr}\xspace estimation, showing the
precision of the method. The estimations have very small variance even on the
very large Yahoo!\ Answers and Twitter graphs (point-wise max/min estimation
over ten runs is almost coincident with the average estimation). These results
show that \textsc{\algoname-impr}\xspace is very accurate even when storing less than a $0.001$
fraction of the total edges of the graph.
\begin{figure}[ht]
\subfigure[Patent
(Cit.)]{\includegraphics[width=0.49\textwidth,keepaspectratio]{triangle-patent-cit-add-RH-1000000.pdf}}
\subfigure[LastFm]{\includegraphics[width=0.49\textwidth,keepaspectratio]{triangle-lastfm-add-RH-1000000.pdf}}
\subfigure[Yahoo!\
Answers]{\includegraphics[width=0.49\textwidth,keepaspectratio]{triangle-yahoo-add-RH-1000000.pdf}}
\subfigure[Twitter]{\includegraphics[width=0.49\textwidth,keepaspectratio]{triangle-twitter-add-RH-1000000.pdf}}
\caption{Estimation by \textsc{\algoname-impr}\xspace of the global number of triangles over
time (measured as the number of elements seen on the stream). The max, min, and
avg are taken over 10 runs. The curves are \emph{practically indistinguishable},
highlighting the fact that \textsc{\algoname-impr}\xspace estimations have very small
error and variance. For example, the ground truth (for graphs for which it is
available) is indistinguishable even from the max/min point-wise estimations
over ten runs. For graphs for which the ground truth is not available, the small
deviations from the avg suggest that the estimations are also close to the true
value, given that our algorithm gives unbiased estimations.}
\label{fig:add-only}
\end{figure}
\paragraph{Comparison with the state of the art}
We compare quantitatively with three state-of-the-art methods:
\textsc{mascot}~\citep{LimK15}, \textsc{Jha et al.}~\citep{JhaSP15} and
\textsc{Pavan et al.}~\citep{PavanTTW13}. \textsc{mascot} is a suite of local
triangle counting methods (but it also provides a global estimate). The other two
are global triangle counting approaches. None of these can handle fully-dynamic
streams, in contrast with \textsc{\algoname-fd}\xspace. We first compare the three methods to \textsc{\algoname}\xspace
for the global triangle counting estimation. \textsc{mascot} comes in two
memory-efficient variants: the basic \textsc{mascot-c} variant and an improved
\textsc{mascot-i} variant.\footnote{In the original work~\citep{LimK15}, this
variant had no suffix and was simply called \textsc{mascot}. We add the
\textsc{-i} suffix to avoid confusion. Another variant \textsc{mascot-A}
can be forced to store the entire graph with probability $1$ by appropriately
selecting the edge order (which we assume to be adversarial) so we do not
consider it here.} Both variants sample edges with fixed probability $p$, so
there is no guarantee on the amount of memory used during the execution. To
ensure fairness of comparison, we devised the following experiment. First, we
run both \textsc{mascot-c} and \textsc{mascot-i} $\ell=10$ times with a
fixed $p$, using the same random bits for the two algorithms run-by-run (i.e.,
the same coin tosses used to select the edges), measuring each time the number of
edges $M'_i$ stored in the sample at the end of the stream (by construction this
is the same for the two variants run-by-run). Then, we run our algorithms using
$M=M'_i$ (for $i\in[\ell]$). We do the same to fix the size of the edge memory
for \textsc{Jha et al.}~\citep{JhaSP15} and \textsc{Pavan et
al.}~\citep{PavanTTW13}.\footnote{More precisely, we use $M'_i/2$ estimators in
\textsc{Pavan et al.} as each estimator stores two edges. For \textsc{Jha et
al.} we set the two reservoirs in the algorithm to have each size $M'_i/2$. This
way, all algorithms use $M'_i$ cells for storing (w)edges.} This way, all
algorithms use the same amount of memory for storing edges (run-by-run).
We use the \emph{MAPE} (Mean Absolute Percentage Error) to assess the accuracy of
the global triangle estimators over time. The MAPE measures the average
percentage error of the prediction with respect to the ground truth, and is
widely used in the prediction literature~\citep{hk06}. For $t=1,\dotsc,T$, let
$\overline{\Delta}^{(t)}$ be the estimate of the number of triangles at time
$t$; the MAPE is defined as
$ \frac{1}{T}{\sum_{t=1}^T \left|\frac{|\Delta^{(t)}| -
\overline{\Delta}^{(t)}}{|\Delta^{(t)}|}\right|}$.\footnote{The MAPE is not
defined for $t$ s.t.~$|\Delta^{(t)}|=0$, so we compute it only for $t$
s.t.~$|\Delta^{(t)}| > 0$. All algorithms we consider are guaranteed to output the
correct answer for $t$ s.t.~$|\Delta^{(t)}|=0$.}
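For concreteness, the MAPE as defined above can be computed with a small helper like the following (the function name and interface are illustrative, not part of the evaluation code):

```python
def mape(truth, estimates):
    """Mean Absolute Percentage Error over the time steps where the
    ground truth is positive, as in the definition above."""
    terms = [abs((t - e) / t) for t, e in zip(truth, estimates) if t > 0]
    return sum(terms) / len(terms)
```

Time steps with zero ground-truth triangles are skipped, matching the convention stated in the footnote.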
In Fig.~\ref{fig:mape-all-add}, we compare the average MAPE of
\textsc{\algoname-base}\xspace and \textsc{\algoname-impr}\xspace as well as the two \textsc{mascot} variants and the other two streaming
algorithms for the Patent (Co-Aut.) graph, fixing $p=0.01$. \textsc{\algoname-impr}\xspace has
the smallest error of all the algorithms compared.
We now turn our attention to the efficiency of the methods. Whenever we refer to
one operation, we mean handling one element on the stream, either one edge
addition or one edge deletion. The average update time per operation is obtained
by dividing the total time required to process the entire stream by the number
of operations (i.e., elements on the stream).
Figure~\ref{fig:time-all-add} shows the average update time per operation in
Patent (Co-Aut.) graph, fixing $p=0.01$. Both \textsc{Jha et
al.}~\citep{JhaSP15} and \textsc{Pavan et al.}~\citep{PavanTTW13}
are up to $\approx 3$ orders of magnitude slower than the \textsc{mascot} variants and \textsc{\algoname}\xspace.
This is expected as both algorithms have an update complexity of $\Omega(M)$
(they have to go through the entire reservoir graph at each step),
while both \textsc{mascot} algorithms and \textsc{\algoname}\xspace need only to access the
neighborhood of the nodes involved in the edge addition.\footnote{We observe
that \textsc{Pavan et al.}~\citep{PavanTTW13} would be more efficient with batch
updates. However, we want to estimate the triangles continuously at each update.
In their experiments, they use batch sizes of millions of updates for efficiency.}
This allows both algorithms to efficiently exploit larger memory sizes. In our
experiments we can efficiently use $M$ up to $1$ million edges, which
requires only a few megabytes of RAM.\footnote{The experiments by~\citet{JhaSP15}
use $M$ in the order of $10^3$, and in those by~\citet{PavanTTW13}, large $M$
values require large batches for efficiency.}
\textsc{mascot} is one order of magnitude faster than \textsc{\algoname}\xspace
(which runs in $\approx 28\,\mu$s per operation), because it does not have to handle edge
removals from the sample, as it offers no guarantees on the memory used. As we
will show, \textsc{\algoname}\xspace has much higher precision and scales well to billion-edge
graphs.
\begin{table}[ht]
\tbl{Global triangle estimation MAPE for \textsc{\algoname}\xspace and \textsc{mascot}. The
rightmost column shows the reduction in terms of the avg.~MAPE obtained by
using \textsc{\algoname}\xspace. Rows with $Y$ in column ``Impr.'' refer to improved algorithms
(\textsc{\algoname-impr}\xspace and \textsc{mascot-i}) while those with $N$ to basic
algorithms (\textsc{\algoname-base}\xspace and \textsc{mascot-c}).
}{
\begin{tabular}{lcp{5pt}ccccc}
\toprule
\multicolumn{3}{c}{}&\multicolumn{2}{c}{Max.~MAPE}&\multicolumn{3}{c}{Avg.~MAPE}\\
\cmidrule(l{2pt}r{2pt}){4-5}\cmidrule(l{2pt}r{2pt}){6-8}
Graph & Impr. &$p$ & \textsc{mascot} & \textsc{\algoname}\xspace & \textsc{mascot} &
\textsc{\algoname}\xspace & Change\\
\midrule
\multirow{4}{.7cm}{Patent (Cit.)} & N &0.01& 0.9231 & 0.2583 & 0.6517 & 0.1811&-72.2\%\\
& Y &0.01 & 0.1907&0.0363& 0.1149& 0.0213& -81.4\%\\
&N& 0.1& 0.0839&0.0124 &0.0605&0.0070&-88.5\%\\
& Y &0.1& 0.0317&0.0037& 0.0245&0.0022& -91.1\%\\
\midrule
\multirow{4}{.7cm}{Patent (Co-A.)} &N&0.01& 2.3017& 0.3029 &0.8055 &0.1820&-77.4\%\\
& Y &0.01&0.1741 & 0.0261& 0.1063& 0.0177& -83.4\%\\
&N&0.1& 0.0648& 0.0175& 0.0390& 0.0079& -79.8\%\\
& Y &0.1&0.0225 & 0.0034&0.0174 & 0.0022 & -87.2\%\\
\midrule
\multirow{4}{.7cm}{LastFm} &N& 0.01& 0.1525& 0.0185& 0.0627& 0.0118&-81.2\%\\
& Y &0.01& 0.0273& 0.0046 &0.0141 &0.0034& -76.2\%\\
&N& 0.1 & 0.0075& 0.0028&0.0047& 0.0015& -68.1\%\\
& Y &0.1&0.0048 &0.0013 & 0.0031 &0.0009& -72.1\%\\
\bottomrule
\end{tabular}
}
\label{table:fix-p-vs-res-add}
\end{table}
Given the slow execution of the other algorithms on the larger datasets, we
compare \textsc{\algoname}\xspace in detail only with \textsc{mascot}.\footnote{We attempted to run
the other two algorithms but they did not complete after $12$ hours for the
larger datasets in Table~\ref{table:fix-p-vs-res-add} with the prescribed $p$
parameter setting.} Table~\ref{table:fix-p-vs-res-add} shows the average MAPE of
the two approaches. The results confirm the pattern observed in
Figure~\ref{fig:mape-all-add}: \textsc{\algoname-base}\xspace and \textsc{\algoname-impr}\xspace both have an average
error significantly smaller than that of the basic \textsc{mascot-c} and
improved \textsc{mascot} variant, respectively. We achieve up to a $91\%$ (i.e.,
more than $10$-fold) reduction in the MAPE while using the same amount of memory. This
experiment confirms the theory: reservoir sampling has overall lower or equal
variance in all steps for the same expected total number of sampled edges.
\begin{figure}[ht]
\subfigure[MAPE]{\label{fig:mape-all-add}\includegraphics[width=0.49\textwidth]{mape-all-patent-coaut.pdf}}
\subfigure[Update
Time]{\label{fig:time-all-add}\includegraphics[width=0.49\textwidth]{times-all-patent-coaut.pdf}}
\caption{Average MAPE and average update time of the various methods on
the Patent (Co-Aut.) graph with $p=0.01$ (for \textsc{mascot}, see the main text
for how we computed the space used by the other algorithms) -- insertion only.
\textsc{\algoname-impr}\xspace has the lowest error. Both \textsc{Pavan et al.} and \textsc{Jha
et al.} have very high update times compared to our method and the two
\textsc{mascot} variants.}
\label{fig:mape-time-all-add}
\end{figure}
To further validate this observation, we run \textsc{\algoname-impr}\xspace and the improved
\textsc{mascot-i} variant using the same (expected) memory $M=10000$.
Figure~\ref{fig:variance-improv-vs-mascot-sh} shows the max-min estimation over
$10$ runs and the standard deviation of the estimation over those runs.
\textsc{\algoname-impr}\xspace shows significantly lower standard deviation (hence variance) over
the evolution of the stream, and the max and min lines are also closer to the
ground truth. This confirms our theoretical observations in the previous
sections. Even with very low $M$ (about $2/10000$ of the size of the graph)
\textsc{\algoname}\xspace gives high-quality estimations.
\begin{figure}[ht]
\subfigure[Ground truth, max, and min]{\includegraphics[width=0.49\textwidth]{triangle-lastfm-add-RH-vs-FH-10000.pdf}}
\subfigure[Standard deviation]{\includegraphics[width=0.49\textwidth]{triangle-lastfm-add-RH-vs-FH-10000-stddev.pdf}}
\caption{Accuracy and stability of the estimation of \textsc{\algoname-impr}\xspace with
$M=10000$ and of \textsc{mascot-i} with same expected memory, on LastFM,
over 10 runs. \textsc{\algoname-impr}\xspace has a smaller standard deviation and moreover
the max/min estimation lines are closer to the ground truth. Average
estimations not shown as they are qualitatively similar.}
\label{fig:variance-improv-vs-mascot-sh}
\end{figure}
\paragraph{Local triangle counting}
We compare the precision in local triangle count estimation of \textsc{\algoname}\xspace
with that of \textsc{mascot}~\citep{LimK15}, using the same approach as in the
previous experiment. We cannot compare with the \textsc{Jha et al.} and
\textsc{Pavan et al.} algorithms as they provide only global estimates. As
in~\citep{LimK15}, we measure the Pearson coefficient and the average
$\varepsilon$ error (see~\citep{LimK15} for definitions). In
Table~\ref{table:fix-p-vs-res-local-add} we report the Pearson coefficient and
average $\varepsilon$ error over all timestamps for the smaller
graphs.\footnote{For efficiency, in this test we evaluate the local number of
triangles of all nodes every $1000$ edge updates.} \textsc{\algoname}\xspace (significantly)
improves (i.e., has higher correlation and lower error) over the
state-of-the-art \textsc{mascot}, using the same amount of memory.
\begin{table}[ht]
\tbl{Comparison of the quality of the local triangle estimations between our
algorithms and the state-of-the-art approach in~\citep{LimK15}. Rows with
$Y$ in column ``Impr.'' refer to improved algorithms (\textsc{\algoname-impr}\xspace and
\textsc{mascot-i}) while those with $N$ to basic algorithms (\textsc{\algoname-base}\xspace and
\textsc{mascot-c}). In virtually all cases we significantly outperform
\textsc{mascot} using the same amount of memory.
}{
\begin{tabular}{ccccccccc}
\toprule
\multicolumn{3}{c}{}& \multicolumn{3}{c}{Avg.~Pearson} &
\multicolumn{3}{c}{Avg.~$\varepsilon$ Err.}\\
\cmidrule(l{2pt}r{2pt}){4-6} \cmidrule(l{2pt}r{2pt}){7-9}
Graph & Impr. &$p$ & \textsc{mascot} & \textsc{\algoname}\xspace & Change &
\textsc{mascot} & \textsc{\algoname}\xspace &Change\\
\midrule
\multirow{6}{*}{LastFm} &\multirow{3}{*}{Y} &0.1 &0.99& 1.00& +1.18\%& 0.79& 0.30& -62.02\%\\
& &0.05 &0.97 &1.00& +2.48\% &0.99& 0.47& -52.79\%\\
& &0.01& 0.85& 0.98& +14.28\% & 1.35& 0.89& -34.24\%\\
\cmidrule{2-9}
& \multirow{3}{*}{N} &0.1& 0.97& 0.99& +2.04\%& 1.08& 0.70& -35.65\%\\
& & 0.05& 0.92 &0.98& +6.61\% &1.32 &0.97& -26.53\%\\
& &0.01& 0.32& 0.70& +117.74\% &1.48& 1.34& -9.16\%\\
\midrule
\multirow{6}{*}{Patent (Cit.)} &\multirow{3}{*}{Y} &0.1& 0.41& 0.82& +99.09\% &0.62& 0.37&-39.15\%\\
&&0.05 &0.24 &0.61& +156.30\%&0.65& 0.51& -20.78\%\\
&& 0.01& 0.05& 0.18& +233.05\% &0.65& 0.64& -1.68\%\\
\cmidrule{2-9}
&\multirow{3}{*}{N} & 0.1 &0.16 &0.48& +191.85\% &0.66& 0.60& -8.22\%\\
&& 0.05& 0.06& 0.24& +300.46\%& 0.67& 0.65& -3.21\%\\
&& 0.01& 0.00& 0.003& +922.02\%& 0.86& 0.68& -21.02\%\\
\midrule
\multirow{6}{*}{Patent (Co-aut.)} &\multirow{3}{*}{Y} & 0.1 &0.55 &0.87& +58.40\% &0.86& 0.45& -47.91\%\\
&& 0.05& 0.34& 0.71& +108.80\%& 0.91& 0.63& -31.12\%\\
&&0.01 &0.08 &0.26& +222.84\% &0.96 &0.88& -8.31\%\\
\cmidrule{2-9}
&\multirow{3}{*}{N} & 0.1& 0.25& 0.52& +112.40\%& 0.92& 0.83& -10.18\%\\
&& 0.05& 0.09& 0.28& +204.98\% &0.92& 0.92& 0.10\%\\
&& 0.01& 0.01& 0.03& +191.46\%& 0.70& 0.84& 20.06\%\\
\bottomrule
\end{tabular}
}
\label{table:fix-p-vs-res-local-add}
\end{table}
\paragraph{Memory vs accuracy trade-offs}
We study the trade-off between the sample size $M$ vs the running time and
accuracy of the estimators. Figure~\ref{fig:mape-vs-m-add} shows the tradeoffs
between the accuracy of the estimation (as MAPE) and the size $M$ for the
smaller graphs for which the ground truth number of triangles can be computed
exactly using the na\"ive algorithm. Even with small $M$, \textsc{\algoname-impr}\xspace achieves
very low MAPE value. As expected, larger $M$ corresponds to higher accuracy and
for the same $M$ \textsc{\algoname-impr}\xspace outperforms \textsc{\algoname-base}\xspace.
Figure~\ref{fig:time-vs-m-add} shows the average time per update in microseconds
($\mu$s) for \textsc{\algoname-impr}\xspace as a function of $M$. Some considerations on the
running time are in order. First, a larger edge sample (larger $M$) generally
requires longer average update times per operation. This is expected, as a larger
sample corresponds to a larger sample graph on which to count triangles. Second,
on average, a few hundred microseconds are sufficient for handling any update
even in very large graphs with billions of edges. Our algorithms can handle
hundreds of thousands of edge updates (stream elements) per second, with very
small error (Fig.~\ref{fig:mape-vs-m-add}), and therefore \textsc{\algoname}\xspace can be used
efficiently and effectively in high-velocity contexts. The larger average time
per update for Patent (Co-Aut.) can be explained by the fact that the graph is
relatively dense and has a small size (compared to the larger Yahoo!\ and
Twitter graphs). More precisely, the average time per update (for a fixed $M$)
depends on two main factors: the average degree and the length of the stream.
The denser the graph is, the higher the update time as more operations are
needed to update the triangle count every time the sample is modified. On the
other hand, the longer the stream, for a fixed $M$, the lower is the frequency of
updates to the reservoir (it can be show that the expected number of updates
to the reservoir is $O(M(1+\log(\frac{t}{M})))$ which grows sub-linearly in the
size of the stream $t$). This explains why the average update time for the large
and dense Yahoo!\ and Twitter graphs is so small, allowing the algorithm to
scale to billions of updates.
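The sub-linear growth of the number of reservoir updates can be checked empirically with a small simulation of the classic reservoir-sampling rule (a sketch under that standard rule, not the exact sampling scheme of our algorithm):

```python
import random

def reservoir_updates(stream_len, M, seed=0):
    """Count how many times a size-M reservoir is modified while
    processing a stream of stream_len elements."""
    rng = random.Random(seed)
    updates = 0
    for t in range(1, stream_len + 1):
        if t <= M:
            updates += 1               # reservoir not yet full: always insert
        elif rng.random() < M / t:     # standard reservoir sampling rule
            updates += 1               # replace a random stored element
    return updates
```

In expectation the count is $M + \sum_{t=M+1}^{T} M/t \approx M(1+\ln(T/M))$; e.g., for $M=100$ and $T=10{,}000$ one observes roughly $560$ updates over a stream one hundred times longer than the reservoir.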
\begin{figure}[ht]
\subfigure[$M$ vs
MAPE]{\label{fig:mape-vs-m-add}\includegraphics[width=0.49\textwidth]{mape-vs-m-add.pdf}}
\subfigure[$M$ vs Update
Time (\textsc{\algoname-impr}\xspace)]{\label{fig:time-vs-m-add}\includegraphics[width=0.49\textwidth]{time-vs-m-add.pdf}}
\caption{Trade-offs between $M$ and MAPE and avg.~update time ($\mu$s) -- edge
insertion only. Larger $M$ implies lower errors but generally higher update times.}
\label{fig:mape-time-vs-m-add}
\end{figure}
\paragraph{Alternative edge orders}
In all previous experiments the edges are added in their natural order (i.e., in
order of their appearance).\footnote{Excluding Twitter for which we used the
random order, given the lack of timestamps.} While the natural order is the most
important use case, we assessed the impact of other orderings on the
accuracy of the algorithms. We experiment with both the uniform-at-random
(u.a.r.)~order of the edges and the random BFS order: until the whole graph is
explored, a BFS is started from a u.a.r.~unvisited node, and edges are added in
order of their visit (neighbors are explored in u.a.r.~order). The results for
the random BFS order and u.a.r.~order (Fig.~\ref{fig:mape-all-orders-add}) confirm
that \textsc{\algoname}\xspace has the lowest error and is very scalable in every tested ordering.
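The random BFS ordering just described can be sketched as follows (an illustrative helper, not part of the evaluated code; each edge is emitted the first time it is encountered during the traversal):

```python
import random
from collections import deque, defaultdict

def random_bfs_edge_order(edges, seed=0):
    """Return the edges in random-BFS order: repeatedly start a BFS from
    a uniformly random unvisited node, exploring neighbors in random
    order, and emit edges in order of their visit."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    starts = list(adj)
    rng.shuffle(starts)                      # random start node per component
    visited, emitted, order = set(), set(), []
    for start in starts:
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            u = queue.popleft()
            nbrs = adj[u][:]
            rng.shuffle(nbrs)                # neighbors explored in u.a.r. order
            for v in nbrs:
                e = (min(u, v), max(u, v))
                if e not in emitted:
                    emitted.add(e)
                    order.append(e)
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
    return order
```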
\begin{figure}[ht]
\centering
\subfigure[BFS order]{\includegraphics[width=0.49\textwidth]{mape-all-patent-coaut-bfs.pdf}}
\subfigure[U.a.r. order]{\includegraphics[width=0.49\textwidth]{mape-all-patent-coaut-uar.pdf}}
\caption{Average MAPE on Patent (Co-Aut.), with $p=0.01$ (for \textsc{mascot},
see the main text for how we computed the space used by the other
algorithms) -- insertion only in Random BFS order and in u.a.r. order. \textsc{\algoname-impr}\xspace has the
lowest error.}
\label{fig:mape-all-orders-add}
\end{figure}
\subsection{Fully-dynamic case}
We evaluate \textsc{\algoname-fd}\xspace on fully-dynamic streams. We
cannot compare \textsc{\algoname-fd}\xspace with the algorithms previously
used~\citep{JhaSP15, PavanTTW13,LimK15} as they only handle insertion-only streams.
In the first set of experiments we model deletions using the widely used \textit{sliding window
model}, where a sliding window of the most recent edges defines the current
graph. The sliding window model is of practical interest as it makes it possible
to observe recent trends in the stream. For Patent (Co-Aut.) and Patent (Cit.) we
keep in the sliding window the edges generated in the last $5$ years, while for
LastFm we keep the edges generated in the last $30$ days. For Yahoo!\ Answers we
keep the last $100$ million edges in the window.\footnote{The sliding window model
is not interesting for the Twitter dataset as edges have random timestamps. We
omit the results for Twitter, but \textsc{\algoname-fd}\xspace is fast and has low
variance.}
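A fully-dynamic stream under the time-based sliding window model can be derived from a timestamped insertion stream with a sketch like the following (names are illustrative; we assume edges arrive in non-decreasing timestamp order):

```python
from collections import deque

def sliding_window_stream(timestamped_edges, window):
    """Convert a stream of (timestamp, edge) insertions into a
    fully-dynamic stream: each arriving edge is inserted ('+'), and any
    edge that has fallen out of the window is deleted ('-') first."""
    live = deque()   # edges currently in the window, oldest first
    out = []
    for t, e in timestamped_edges:
        while live and live[0][0] <= t - window:
            out.append(('-', live.popleft()[1]))
        live.append((t, e))
        out.append(('+', e))
    return out
```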
Figure~\ref{fig:add-rem} shows the evolution of the global number of triangles
in the sliding window model, computed by \textsc{\algoname-fd}\xspace using $M=200{,}000$ ($M=1{,}000{,}000$
for Yahoo!\ Answers). The sliding window scenario is significantly more
challenging than the addition-only case (very often the entire sample of edges
is flushed away) but \textsc{\algoname-fd}\xspace maintains good variance and scalability even when,
as for LastFm and Yahoo!\ Answers, the global number of triangles
varies quickly.
\begin{figure}[ht]
\subfigure[Patent (Co-Aut.)]{\includegraphics[width=0.49\textwidth]{triangle-patent-coaut-w5-R-200000.pdf}}
\subfigure[Patent (Cit.)]{\includegraphics[width=0.49\textwidth]{triangle-patent-cit-w5-R-200000.pdf}}
\subfigure[LastFm]{\includegraphics[width=0.49\textwidth]{triangle-lastfm-w5-R-200000.pdf}}
\subfigure[Yahoo!\ Answers]{\includegraphics[width=0.49\textwidth]{triangle-yahoo-w100M-R-1000000.pdf}}
\caption{Evolution of the global number of triangles in the fully-dynamic case
(sliding window model for edge deletion). The curves are
\emph{practically indistinguishable}, underscoring the fact that \textsc{\algoname-fd}\xspace
estimations are extremely accurate and consistent. We comment on the observed
patterns in the text.}
\label{fig:add-rem}
\end{figure}
Continuous monitoring of triangle counts with \textsc{\algoname-fd}\xspace makes it possible to
detect patterns that would otherwise be difficult to notice. For LastFm
(Fig.~\ref{fig:add-rem}(c)) we observe a sudden
spike of several orders of magnitude. The dataset is anonymized, so we cannot
establish which songs are responsible for this spike. In Yahoo!\ Answers
(Fig.~\ref{fig:add-rem}(d)) a
popular topic can create a sudden (and short-lived) increase in the number of
triangles, while the evolution of the Patent co-authorship and co-citation
networks is slower, as the creation of an edge requires filing a patent
(Fig.~\ref{fig:add-rem}(a) and (b)). The almost constant increase over
time\footnote{The decline at the end is due to the removal of the last edges
from the sliding window after there are no more edge additions.} of the number
of triangles in Patent graphs is consistent with previous observations of {\it
densification} in collaboration networks as in the case of nodes'
degrees~\citep{leskovec2007graph} and the observations on the density of the
densest subgraph~\citep{epasto2015efficient}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{time-vs-m-add-rem.pdf}
\caption{Trade-offs between the avg.~update time ($\mu$s) and $M$ for \textsc{\algoname-fd}\xspace.}
\label{fig:time-vs-m-add-rem}
\end{figure}
Table~\ref{table:res-global-local-add-rem} shows the results for both the local
and global triangle counting estimation provided by \textsc{\algoname-fd}\xspace. In this case we
cannot compare with previous works, as they only handle insertions. It is
evident that precision improves with larger $M$, and even relatively small $M$
values result in a low MAPE (global
estimation), high Pearson correlation, and low $\varepsilon$ error (local
estimation). Figure~\ref{fig:time-vs-m-add-rem} shows the tradeoffs between
memory (i.e., accuracy) and time. In all cases our algorithm is very fast, with
update times in the order of hundreds of microseconds for datasets with
billions of updates (Yahoo!\ Answers).
\begin{table}[ht]
\tbl{Estimation errors for \textsc{\algoname-fd}\xspace.
}{
\begin{tabular}{crccc}
\toprule
\multicolumn{2}{c}{}&\multicolumn{1}{c}{Avg. Global}&\multicolumn{2}{c}{Avg. Local}\\
\cmidrule(l{2pt}r{2pt}){3-3} \cmidrule(l{2pt}r{2pt}){4-5}
Graph & \multicolumn{1}{c}{$M$} & MAPE & Pearson & $\varepsilon$ Err. \\
\midrule
\multirow{2}{*}{LastFM}&200000&0.005&0.980&0.020\\
&1000000&0.002&0.999&0.001\\
\midrule
\multirow{2}{*}{Pat. (Co-Aut.)}&200000&0.010&0.660&0.300\\
&1000000&0.001&0.990&0.006\\
\midrule
\multirow{2}{*}{Pat. (Cit.)}&200000&0.170&0.090&0.160\\
&1000000&0.040&0.600&0.130\\
\bottomrule
\end{tabular}
}
\label{table:res-global-local-add-rem}
\end{table}
\paragraph{Alternative models for deletion}
We evaluate \textsc{\algoname-fd}\xspace using other models for deletions than the sliding
window model. To assess the resilience of the algorithm to massive deletions, we
run the following experiment: we add edges in their natural order, but each edge
addition is followed, with probability $q$, by a mass deletion event in which each
edge currently in the graph is deleted with probability $d$, independently.
We run experiments with $q = 3{,}000{,}000^{-1}$ (i.e., a mass deletion expected
every $3$ million edges) and $d=0.80$ (in expectation, $80\%$ of the edges are
deleted).
The results are shown in Table~\ref{table:res-global-local-add-rem-mass-deletion}.
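The mass-deletion stream used in this experiment can be generated with a sketch like the following (illustrative code; the parameters $q$ and $d$ are those defined above):

```python
import random

def mass_deletion_stream(edges, q, d, seed=0):
    """Interleave edge insertions with mass-deletion events: after each
    insertion, with probability q every edge currently in the graph is
    deleted independently with probability d."""
    rng = random.Random(seed)
    current, out = set(), []
    for e in edges:
        current.add(e)
        out.append(('+', e))
        if rng.random() < q:                 # mass deletion event
            doomed = [x for x in current if rng.random() < d]
            for x in doomed:
                current.discard(x)
                out.append(('-', x))
    return out
```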
\begin{table}[ht]
\tbl{Estimation errors for \textsc{\algoname-fd}\xspace -- mass deletion experiment, $q =
3{,}000{,}000^{-1}$ and $d=0.80$.
}{
\begin{tabular}{crccc}
\toprule
\multicolumn{2}{c}{}&\multicolumn{1}{c}{Avg. Global}&\multicolumn{2}{c}{Avg. Local}\\
\cmidrule(l{2pt}r{2pt}){3-3} \cmidrule(l{2pt}r{2pt}){4-5}
Graph & \multicolumn{1}{c}{$M$} & MAPE & Pearson & $\varepsilon$ Err. \\
\midrule
\multirow{2}{*}{LastFM}&200000&0.040&0.620&0.53\\
&1000000&0.006&0.950&0.33\\
\midrule
\multirow{2}{*}{Pat. (Co-Aut.)}&200000&0.060&0.278&0.50\\
&1000000&0.006&0.790&0.21\\
\midrule
\multirow{2}{*}{Pat. (Cit.)}&200000&0.280&0.068&0.06\\
&1000000&0.026&0.510&0.04\\
\bottomrule
\end{tabular}
}
\label{table:res-global-local-add-rem-mass-deletion}
\end{table}
We observe that \textsc{\algoname-fd}\xspace maintains good accuracy and scalability even in the
face of massive (and unlikely) deletions of the vast majority of the edges:
e.g., for LastFM with $M=200{,}000$ (resp.\ $M=1{,}000{,}000$) we observe an
average MAPE of $0.04$ (resp.\ $0.006$).
\section{Conclusions}\label{sec:concl}
We presented \textsc{\algoname}\xspace, the first suite of algorithms that use reservoir sampling
and its variants to continuously maintain unbiased, low-variance estimates of
the local and global number of triangles in fully-dynamic graph streams of
arbitrary edge/vertex insertions and deletions, using a fixed, user-specified
amount of space. Our experimental evaluation shows that \textsc{\algoname}\xspace outperforms
state-of-the-art approaches and achieves high accuracy on real-world datasets
with more than one billion edges, with update times of hundreds of
microseconds.
\section{Introduction}
Small satellites are well-suited for formation flying missions, where
multiple satellites operate together in a cluster or predefined geometry
to accomplish the task of a single conventionally large satellite.
The use of swarms of hundreds to thousands of femtosatellites
($100$-gram-class satellites) for synthetic aperture applications
is discussed in \cite{Ref:Hadaegh13}. In this paper, we introduce
an inhomogeneous Markov chain approach to develop probabilistic swarm
guidance algorithms for constructing and reconfiguring a multi-agent
network comprised of a large number of autonomous agents or spacecraft.
Analogous to fluid mechanics, the traditional view of guidance of
multi-agent systems is \textit{Lagrangian}, as it deals with an indexed
collection of agents \nocite{Ref:Chung12,Ref:Mesbahi01,Ref:Bullo08,Ref:Arslan07,Ref:Rus12,Ref:How02}\cite{Ref:Chung12}--\cite{Ref:How02}.
Note that such deterministic, Lagrangian approaches tend to perform
poorly with a large number ($100$s--$1000$s) of agents. In this
paper, we adopt an \textit{Eulerian} view, as we control the swarm
density distribution of a large number of index-free agents over disjoint
bins \cite{Ref:Egerstedt10,Ref:Menon04}. A centralized approach for
controlling the swarm, using optimal control of partial differential
equations, is discussed in \cite{Ref:Milutinovic06,Ref:Ferrari14}.
Distributed control of the density distribution, using region-based
shape controllers or attraction-repulsion forcing functions, is discussed
in \cite{Ref:Slotine09,Ref:MKumar11}.
In this paper, \textit{guidance} refers to both motion planning and
open-loop control that generate a desired trajectory for each agent
\cite{Ref:Scarf03}. Instead of allocating specific positions to individual
agents a priori, a probabilistic guidance algorithm is concerned with
achieving the desired swarm distribution across bins \nocite{Ref:Hespanha08,Ref:Kumar09,Ref:Chattopadhyay09,Ref:Acikmese12}\cite{Ref:Hespanha08}--\cite{Ref:Acikmese12}.
Each autonomous agent or robot independently determines its transition
from one bin to another following a synthesized Markov chain so that
the overall swarm converges to the desired formation. Because this
Markovian approach automatically fills deficient bins, the resulting
algorithm is robust to external damages to the formation. The main
limitation of probabilistic guidance algorithm using homogeneous Markov
chains, where the Markov matrix $M$ is fixed over time, is that the
agents are not allowed to settle down even after the swarm reaches
the desired steady-state distribution resulting in significant wastage
of control effort (e.g., fuel).
The main contribution of this paper is to develop a probabilistic
swarm guidance algorithm using inhomogeneous Markov chains (PSG--IMC)
to address this limitation and minimize the number of transitions
for achieving and maintaining the formation. Our key concept, which
was first presented in \cite{Ref:Bandyopadhyay13SFFMT}, is to develop
time-varying Markov matrices $M_{k}^{j}$ with $\lim_{k\rightarrow\infty}M_{k}^{j}=\mathbf{I}$
to ensure that the agents settle down after the desired formation
is achieved, thereby minimizing the number of unnecessary transitions
across the bins. The inhomogeneous Markov transition matrix for an
agent at each time instant depends on the current swarm distribution,
the agent's current bin location, the time-varying motion constraints,
and the agent's choice of distance-based parameters. We derive proofs
of convergence based on analysis of inhomogeneous Markov chains (e.g.,
\nocite{Ref:Hajnal56,Ref:Wolfowitz63,Ref:Chatterjee77,Ref:Seneta06,Ref:Touri11}\cite{Ref:Hajnal56}--\cite{Ref:Touri11}).
Each agent must communicate with its neighboring
agents to estimate the current swarm distribution. Consensus algorithms
have been studied for formation control, sensor networks, and formation
flying applications \nocite{Ref:Chung12,Ref:Saber04,Ref:Tsitsiklis86,Ref:Jadbabaie03,Ref:Boyd04,Ref:Shamma07,Ref:Ren05TAC}\cite{Ref:Chung12,Ref:Saber04}--\cite{Ref:Ren05TAC}.
In this paper, by using a decentralized consensus algorithm on probability
distributions \cite{Ref:Bandyopadhyay13TAC}, the agents reach an
agreement on the current estimate of the swarm distribution.
Inter-agent collisions are not considered in this paper for concise
presentation since a collision-avoidance algorithm within each bin
can be combined with the proposed PSG-IMC algorithm (see \cite{Ref:Morgan14,Ref:Giri14}
for details). As an illustrative example, the guidance of swarms of
spacecraft orbiting Earth is presented in this paper.
\begin{figure}
\begin{centering}
\includegraphics[bb=0bp 110bp 960bp 455bp,clip,width=3in]{PGA_random_I_v4.pdf}
\par\end{centering}
\begin{centering}
\vspace{-10pt}\protect\caption{The PSG--IMC independently determines each agent's trajectory so that
the overall swarm converges to the desired formation (here letter
``I''), starting from any initial distribution. Here, the state
space is partitioned into $25$ bins and the desired formation $\boldsymbol{\pi}$
is given by $\frac{1}{7}[0,0,0,0,0,\thinspace0,1,1,1,0,\thinspace0,0,1,0,0,\thinspace0,1,1,1,0,\thinspace0,0,0,0,0]$.
\label{fig:PSG-IMC-example}}
\par\end{centering}
\vspace{-15pt}
\end{figure}
\textit{Notation:} The \textit{time index} is denoted by a right subscript
and the \textit{agent index} is denoted by a lower-case right superscript.
The symbol $\mathbb{P}(\cdot)$ refers to the probability of an event.
The graph $\mathcal{G}_{k}$ represents the directed time-varying
communication network topology at the $k^{\textrm{th}}$ time instant.
Let $\mathcal{N}_{k}^{j}$ denote the neighbors of the $j^{\textrm{th}}$
agent at the $k^{\textrm{th}}$ time instant from which it receives
information. The set of inclusive neighbors of the $j^{\textrm{th}}$
agent is denoted by $\mathcal{J}_{k}^{j}:=\mathcal{N}_{k}^{j}\cup\{j\}$.
Let $\mathbb{N}$, $\mathbb{Z}^{*}$, and $\mathbb{R}$ be the sets
of natural numbers (positive integers), nonnegative integers, and
real numbers, respectively. Let $\sigma$ represent a singular value
of a matrix. Let $\mathrm{diag}(\boldsymbol{\alpha})$ denote the
diagonal matrix of appropriate size with $\boldsymbol{\alpha}$ as
its diagonal elements. Let $\min{}^{+}$ refer to the minimum of the
positive elements. Let $\mathbf{1}=[1,1,\ldots,1]^{T}$, $\mathbf{I}$,
$\mathbf{0}$, and $\emptyset$ be the ones (column) vector, the identity
matrix, the zero matrix of appropriate sizes, and the empty set respectively.
Let $\|\cdot\|_{p}$ denote the $\ell_{p}$ vector norm. The symbols
$\left|\cdot\right|$ and $\left\lceil \cdot\right\rceil $ denote
the absolute value or the cardinality of a set and the ceiling function
respectively.
\section{Problem Statement and Overview of PSG--IMC \label{sec:Problem-Statement}}
Let $\mathcal{R}\subset\mathbb{R}^{n_{x}}$ denote the $n_{x}$-dimensional
compact physical space over which the swarm is distributed. The region
$\mathcal{R}$ is partitioned into $n_{\textrm{cell}}$ disjoint bins
represented by $R[i],\thinspace i=1,\ldots,n_{\textrm{cell}}$ so
that $\bigcup_{i=1}^{n_{\textrm{cell}}}R[i]=\mathcal{R}$ and $R[i]\cap R[q]=\emptyset$,
if $i\not=q$.
Let the swarm comprise $m\in\mathbb{N}$ agents. Note that we assume
$m\gg n_{\textrm{cell}}$, since we deal with the swarm density distribution
over these bins. Let the row vector $\boldsymbol{r}_{k}^{j}$ represent
the bin in which the $j^{\textrm{th}}$ agent is actually present
at the $k^{\textrm{th}}$ time instant. If $\boldsymbol{r}_{k}^{j}[i]=1$,
then the $j^{\textrm{th}}$ agent is inside the $R[i]$ bin at the
$k^{\textrm{th}}$ time instant; otherwise $\boldsymbol{r}_{k}^{j}[i]=0$.
The current swarm distribution ($\mathcal{F}_{k}^{\star}$) is given
by the ensemble mean of actual agent positions, i.e., $\mathcal{F}_{k}^{\star}:=\frac{1}{m}\sum_{j=1}^{m}\boldsymbol{r}_{k}^{j}$.
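The ensemble-mean computation above can be sketched in a few lines; the bin assignments below are hypothetical, not taken from the paper's examples:

```python
# Sketch: swarm distribution F*_k as the ensemble mean of the per-agent
# bin-indicator vectors r_k^j (hypothetical 4-bin space, m = 5 agents).
import numpy as np

n_cell, m = 4, 5
bins = [0, 2, 2, 3, 2]                  # assumed bin index of each agent
r = np.zeros((m, n_cell))               # r[j] is the indicator row r_k^j
r[np.arange(m), bins] = 1.0             # exactly one 1 per agent

# Current swarm distribution: F*_k = (1/m) * sum_j r_k^j
F_star = r.mean(axis=0)
```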
Let us now define the desired formation.
\begin{definition} \label{def:Desired-formation} \textit{(Desired
Formation $\boldsymbol{\pi}$)} Let the desired formation shape be
represented by a probability (row) vector $\boldsymbol{\pi}\in\mathbb{R}^{n_{\textrm{cell}}}$
over the bins in $\mathcal{R}$, i.e., $\boldsymbol{\pi}\mathbf{1}=1$.
Note that $\boldsymbol{\pi}$ can be any arbitrary probability vector,
but it is the same for all agents within the swarm. In the presence
of motion constraints, $\boldsymbol{\pi}$ needs to satisfy Assumption
\ref{assump_Pi} discussed in Section \ref{sec:Handling-Motion-Constraints}.
For example, $\boldsymbol{\pi}$ is the shape of the letter ``I''
in Fig. \ref{fig:PSG-IMC-example}. Note that any guidance algorithm,
using a swarm of $m$ agents, can only achieve the best quantized
representation of the desired formation $\boldsymbol{\pi}$, where
$\frac{1}{m}$ is the quantization factor. \hfill $\Box$ \end{definition}
The objectives of probabilistic swarm guidance using inhomogeneous
Markov chains (PSG--IMC) running on board each agent are as follows:
\begin{enumerate}
\item Determine the agent's trajectory using a Markov chain, which obeys
motion constraints, so that the overall swarm converges to a desired
formation ($\boldsymbol{\pi}$).
\item Reduce the number of transitions for achieving and maintaining the
formation in order to reduce control effort (e.g., fuel).
\item Maintain the swarm distribution and automatically detect and repair
damages to the formation.
\end{enumerate}
The key idea of the proposed PSG--IMC is to synthesize inhomogeneous
Markov chains for each agent so that each agent can independently
determine its trajectory while the swarm distribution converges to
the desired formation. The pseudo code for the algorithm is given
in \textbf{Algorithm} \textbf{\ref{alg:PGA-inhomo-motion-const}}.
\begin{algorithm}[th]
\protect\caption{Probabilistic swarm guidance algorithm using inhomogeneous Markov
chains (PSG--IMC) \label{alg:PGA-inhomo-motion-const}}
\begin{singlespace}
{\small{}}%
\begin{tabular}{cll}
{\small{}1:} & \multicolumn{2}{l}{{\small{}(one cycle of $j^{\textrm{th}}$ agent during $k^{\textrm{th}}$
time instant)}}\tabularnewline
{\small{}2:} & \multicolumn{2}{l}{{\small{}Agent determines its present bin, e.g., $\boldsymbol{r}_{k}^{j}[i]=1$}}\tabularnewline
{\small{}3:} & {\small{}Set $n_{\textrm{loop}}$, the weighting factors $a_{k}^{j\ell}$ } & {\small{}$\}$ Theorem \ref{thm:Bayesian_Consensus_Filtering_Static_balanced_graphs}}\tabularnewline
{\small{}4:} & \textbf{\small{}for }{\small{}$\nu=1$ to $n_{\textrm{loop}}$} & \multirow{6}{*}{{\small{}$\left\} \begin{array}{c}
\\
\\
\textrm{Consensus}\\
\textrm{Stage}\\
\\
\\
\end{array}\right.$}}\tabularnewline
{\small{}5:} & \textbf{\small{}$\:\:\:$if $\nu=1$ then }{\small{}Set $\mathcal{F}_{k,0}^{j}$
from $\boldsymbol{r}_{k}^{j}$ }\textbf{\small{}end if} & \tabularnewline
{\small{}6:} & \textbf{\small{}$\:\:\:$}{\small{}Transmit the pmf $\mathcal{F}_{k,\nu-1}^{j}$
to other agents} & \tabularnewline
{\small{}7:} & \textbf{\small{}$\:\:\:$}{\small{}Obtain the pmfs $\mathcal{F}_{k,\nu-1}^{\ell},\forall\ell\in\mathcal{J}_{k}^{j}$ } & \tabularnewline
{\small{}8:} & \textbf{\small{}$\:\:\:$}{\small{}Compute the new pmf $\mathcal{F}_{k,\nu}^{j}$
using LinOP} & \tabularnewline
{\small{}9:} & \textbf{\small{}end for} & \tabularnewline
{\small{}10:} & {\small{}Compute the tuning parameter $\xi_{k}^{j}$ } & {\small{}$\}$ Eq. (\ref{eq:tuning_eqn})}\tabularnewline
{\small{}11:} & \textbf{\small{}if}{\small{} }\textbf{\small{}$R[i]\not\in\Pi$ then
}{\small{}Go to bin $R[\ell]$} & {\small{}$\}$ Eq. (\ref{eq:trapping_region})}\tabularnewline
{\small{}12:} & \textbf{\small{}else }{\small{}Compute the $\boldsymbol{\alpha}_{k}^{j}$
vector} & {\small{}$\}$ Eq. (\ref{eq:alpha_vector})}\tabularnewline
{\small{}13:} & \textbf{\small{}$\:\:\:$}{\small{}Compute the Markov matrix $M_{k}^{j}$ } & {\small{}$\}$ Prop. \ref{thm:Markov_matrix_new}}\tabularnewline
14: & {\small{}$\:\:\:$Modify the Markov matrix $\tilde{M}_{k}^{j}$} & {\small{}$\}$ Prop. \ref{thm:Markov_matrix_modified}}\tabularnewline
{\small{}15:} & \textbf{\small{}$\:\:\:$}{\small{}Generate a random number $z\in\textrm{unif}[0;1]$} & \multirow{3}{*}{{\small{}$\left\} \begin{array}{c}
\\
\textrm{Random}\\
\textrm{sampling}
\end{array}\right.$}}\tabularnewline
{\small{}16:} & \textbf{\small{}$\:\:\:$}{\small{}Go to bin $R[q]$ such that } & \tabularnewline
& \textbf{\small{}$\:\:\:$$\:\:\:$}{\small{}$\sum_{\ell=1}^{q-1}\tilde{M}_{k}^{j}[i,\ell]\leq z<\sum_{\ell=1}^{q}\tilde{M}_{k}^{j}[i,\ell]$ } & \tabularnewline
{\small{}17:} & \textbf{\small{}end if} & \tabularnewline
\end{tabular}\end{singlespace}
\end{algorithm}
The first step (line 2) involves determining the bin in which the
agent is located. For example, $\boldsymbol{r}_{k}^{j}[i]=1$. During
the second step (lines 3--9), each agent estimates the current swarm
distribution ($\mathcal{F}_{k}^{\star}$) by reaching an agreement
across the network using the consensus algorithm \cite{Ref:Bandyopadhyay13TAC}.
This is elucidated in Section \ref{sec:Bayesian-Consensus-Filtering}.
In this paper we use the difference between the estimated swarm distribution
($\mathcal{\hat{F}}_{k,n_{\textrm{loop}}}^{j}$) and the desired formation
($\boldsymbol{\pi}$) to dictate the motion of agents in the swarm.
The Hellinger distance (HD) is a symmetric measure of the difference
between two probability distributions and it is upper bounded by $1$
\cite{Ref:Torgerson91}. As discussed in Section \ref{sec:Bayesian-Consensus-Filtering},
the tuning parameter ($\xi_{k}^{j}$) computed in line 10 is the HD
between the current swarm distribution and the desired formation.
Motion constraints may permit the agents in a particular bin to transition
only to certain other bins. Each agent
checks in line 11 whether it is currently in the trapping region or
a transient bin, and then transitions to the bin $R[\ell]$ that is
best-suited to reach the formation. The concept of trapping region
and the method for handling motion constraints are presented in Section
\ref{sec:Handling-Motion-Constraints}.
The next step (lines 12--14) involves designing a family of row stochastic
Markov transition matrices $\tilde{M}_{k}^{j}$ with $\boldsymbol{\pi}$
as their stationary distributions, which is presented in Sections
\ref{sec:Designing-Markov-Matrix} and \ref{sec:Handling-Motion-Constraints}.
When the HD between the estimated swarm distribution and the desired
formation is large, each agent propagates its position in a statistically-independent
manner so that the swarm asymptotically tends to the desired formation.
As this HD decreases, the Markov matrices also tend to an identity
matrix and each agent holds its own position. In lines 15--16, random
sampling of the Markov matrix generates the next location of the agent.
The stability and convergence guarantees for PSG--IMC are presented
in Section \ref{sec:Convergence-of-Inhomogeneous}.
The guidance or motion planning of spacecraft in a swarm is discussed
in Section \ref{sec:Guidance-Spacecraft}. Strategies for implementing
PSG--IMC algorithms are demonstrated with numerical examples in Sections
\ref{sec:Handling-Motion-Constraints} and \ref{sec:Guidance-Spacecraft}.
\section{Consensus Estimation of Swarm Distribution \label{sec:Bayesian-Consensus-Filtering}}
In this section, we use the decentralized consensus algorithm \cite{Ref:Bandyopadhyay13TAC}
to estimate the current swarm distribution, as illustrated in \textbf{Algorithm
\ref{alg:PGA-inhomo-motion-const}}. The objective of the consensus
stage is to estimate the current swarm distribution ($\mathcal{F}_{k}^{\star}$)
and maintain consensus across the network during each time step.
Let the row vector $\hat{\mathcal{F}}_{k,\nu}^{j}$ represent the
$j^{\textrm{th}}$ agent's estimate of the current swarm distribution
during the $\nu^{\textrm{th}}$ consensus loop at the $k^{\textrm{th}}$
time instant. At the beginning of the consensus stage of the $k^{\textrm{th}}$
time instant, the $j^{\textrm{th}}$ agent generates a row vector
of local estimate of the swarm distribution $\hat{\mathcal{F}}_{k,0}^{j}$
by only determining its present bin location: \vspace{-5pt}
\begin{equation}
\hat{\mathcal{F}}_{k,0}^{j}[i]=1\thinspace\textrm{ if }\boldsymbol{r}_{k}^{j}[i]=1,\thinspace\textrm{otherwise}\thinspace\thinspace0.
\end{equation}
In essence, the local estimate at the start of the consensus stage
is a discrete representation of the position of the $j^{\textrm{th}}$
agent in the space $\mathcal{R}$, i.e., $\hat{\mathcal{F}}_{k,0}^{j}=\boldsymbol{r}_{k}^{j}$.
Hence the current swarm distribution is also given by $\mathcal{F}_{k}^{\star}=\sum_{i=1}^{m}\frac{1}{m}\hat{\mathcal{F}}_{k,0}^{i}$,
which is equal to the ensemble mean of actual agent positions $\{\boldsymbol{r}_{k}^{j}\}_{j=1}^{m}$
over the bins in $\mathcal{R}$.
During the consensus stage, the agents recursively combine and update
their local distributions to reach an agreement across the network.
The Linear Opinion Pool (LinOP) of probability measures, which is
used for combining individual distributions \cite{Ref:DeGroot74,Ref:Genest86},
is given by: \vspace{-10pt}
\begin{align}
\hat{\mathcal{F}}_{k,\nu}^{j} & =\sum_{\ell\in\mathcal{J}_{k}^{j}}a_{k,\nu-1}^{\ell j}\hat{\mathcal{F}}_{k,\nu-1}^{\ell},\thinspace\forall j,\ell\in\{1,\ldots,m\},\forall\nu\in\mathbb{N},\label{eq:agreement_equation}
\end{align}
where $\sum_{\ell\in\mathcal{J}_{k}^{j}}a_{k,\nu-1}^{\ell j}=1$.
The updated distribution $\hat{\mathcal{F}}_{k,\nu}^{j}$ after the
$\nu^{\textrm{th}}$ consensus loop is a weighted average of the distributions
of the inclusive neighbors $\hat{\mathcal{F}}_{k,\nu-1}^{\ell},\forall\ell\in\mathcal{J}_{k}^{j}$
at $k^{\textrm{th}}$ time instant. Let $\mathbf{\mathcal{W}}_{k,\nu}=\left[\hat{\mathcal{F}}_{k,\nu}^{1},\ldots,\hat{\mathcal{F}}_{k,\nu}^{m}\right]$
be a row vector of pmf functions of the agents after the $\nu^{\textrm{th}}$
consensus loop. The LinOP (\ref{eq:agreement_equation}) can be expressed
concisely as $\mathbf{\mathcal{W}}_{k,\nu}=\mathbf{\mathcal{W}}_{k,\nu-1}P_{k,\nu-1},\thinspace\forall\nu\in\mathbb{N}$,
where $P_{k,\nu-1}$ is a matrix with entries $P_{k,\nu-1}[\ell,j]=a_{k,\nu-1}^{\ell j}$.
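A minimal numerical sketch of the LinOP update $\mathbf{\mathcal{W}}_{k,\nu}=\mathbf{\mathcal{W}}_{k,\nu-1}P_{k,\nu-1}$, assuming a hypothetical four-agent ring with balanced (doubly stochastic) weights:

```python
# Sketch of the LinOP consensus stage on a 4-agent ring (assumed weights).
import numpy as np

m, n_cell = 4, 3
# Initial local estimates: each agent only knows its own bin (one-hot rows).
W = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])
F_star = W.mean(axis=0)                  # consensus target: ensemble mean

# Doubly stochastic weight matrix conforming to a ring graph: each agent
# averages itself and its two neighbours (any balanced SC choice works).
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

for _ in range(200):                     # consensus loops
    W = P.T @ W                          # row j becomes sum_l P[l,j] * W[l]
```

With these weights the disagreement contracts by a factor of $0.5$ per loop, so all rows of `W` converge to the ensemble mean.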
\begin{assumption} The communication network topology of the multi-agent
system $\mathcal{G}(k)$ is strongly connected (SC). The weighting
factors $a_{k,\nu-1}^{\ell j},\forall j,\ell\in\{1,\ldots,m\}$ and
the matrix $P_{k,\nu-1}$ have the following properties: (i) the weighting
factors are the same for all consensus loops within each time instant;
(ii) the matrix $P_{k}$ conforms with the graph $\mathcal{G}(k)$;
(iii) the matrix $P_{k}$ is column stochastic; and (iv) the weighting
factors $a_{k}^{\ell j}$ are balanced. \hfill $\Box$ \label{assump_weights2}
\end{assumption}
Since $m\gg n_{\textrm{cell}}$ and multiple agents are within the
same bin, it is guaranteed almost surely using Erd\H{o}s--R\'{e}nyi
random graphs or random nearest--neighbor graphs that the resulting
network is SC \cite{Ref:Jackson08,Ref:Jingjin14,Ref:Kumar04}. Moreover,
distributed algorithms exist for the agents to generate SC balanced
graphs \cite{Ref:Boyd04,Ref:Cortes12,Ref:Hadjicostis14}.
Let $\boldsymbol{\theta}_{k,\nu}=\left[\theta_{k,\nu}^{1},\ldots,\theta_{k,\nu}^{m}\right]$
be the disagreement vector, where $\theta_{k,\nu}^{j}$ is the $\mathcal{L}_{1}$
distance between $\hat{\mathbf{\mathcal{F}}}_{k,\nu}^{j}$ and $\mathbf{\mathcal{F}}_{k}^{\star}$,
i.e., $\theta_{k,\nu}^{j}=\sum_{R[i]\in\mathcal{R}}|\hat{\mathcal{F}}_{k,\nu}^{j}[i]-\mathcal{F}_{k}^{\star}[i]|$.
Since the $\mathcal{L}_{1}$ distance between two pmfs is bounded by
$2$, the $\ell_{2}$ vector norm $\|\boldsymbol{\theta}_{k,\nu}\|_{2}$
is upper bounded by $2\sqrt{m}$.
\begin{theorem} \label{thm:Bayesian_Consensus_Filtering_Static_balanced_graphs}
\cite{Ref:Saber04}--\cite{Ref:Bandyopadhyay13TAC} \emph{(Consensus
using the LinOP on SC Balanced Digraphs)} Under Assumption \ref{assump_weights2},
using the LinOP (\ref{eq:agreement_equation}), each $\hat{\mathbf{\mathcal{F}}}_{k,\nu}^{j}$
globally exponentially converges to $\mathbf{\mathcal{F}}_{k}^{\star}=\sum_{i=1}^{m}\frac{1}{m}\hat{\mathbf{\mathcal{F}}}_{k,0}^{i}$
pointwise at a rate faster than or equal to the second largest singular
value of $P_{k}$, i.e., $\sigma_{m-1}(P_{k})$. If, for some consensus
error $\varepsilon_{\mathrm{cons}}>0$, the number of consensus loops
within each consensus stage satisfies $n_{\mathrm{loop}}\geq\left\lceil \frac{\ln\left(\varepsilon_{\mathrm{cons}}/(2\sqrt{m})\right)}{\ln\sigma_{m-1}(P_{k})}\right\rceil $,
then the $\ell_{2}$ norm of the disagreement vector at the end of
the consensus stage is less than $\varepsilon_{\mathrm{cons}}$, i.e.,
$\|\boldsymbol{\theta}_{k,n_{\mathrm{loop}}}\|_{2}\leq\varepsilon_{\mathrm{cons}}$.
\end{theorem}
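The bound on $n_{\mathrm{loop}}$ can be evaluated directly; the values of $m$, $\sigma_{m-1}(P_{k})$, and $\varepsilon_{\mathrm{cons}}$ below are illustrative assumptions:

```python
# Sketch: loops needed for disagreement below eps_cons, per the bound
# n_loop >= ceil( ln(eps_cons / (2*sqrt(m))) / ln(sigma) ).
import math

def n_loop(m, sigma, eps_cons):
    """sigma: second-largest singular value of P_k (assumed known)."""
    return math.ceil(math.log(eps_cons / (2.0 * math.sqrt(m)))
                     / math.log(sigma))

# Illustrative values: m = 100 agents, sigma = 0.5, eps_cons = 1/m.
loops = n_loop(100, 0.5, 1.0 / 100)
```

A slower contraction rate (larger $\sigma_{m-1}$) raises the required number of loops sharply.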
In this paper, we assume $\varepsilon_{\mathrm{cons}}\leq\frac{1}{m}$
and $n_{\textrm{loop}}$ is computed using Theorem \ref{thm:Bayesian_Consensus_Filtering_Static_balanced_graphs}.
Note that, in order to transmit the pmf $\hat{\mathcal{F}}_{k,\nu}^{j}$
to another agent, the agent needs to transmit $n_{\textrm{cell}}$
real numbers bounded by $[0,1]$. For a practical scenario involving
spacecraft swarms, as discussed in Section \ref{sec:Guidance-Spacecraft},
it is feasible to execute multiple consensus loops within each time
step.%
\footnote{In order to obtain an estimate of the communication load, let us assume
that during each of the $20$ consensus loops, each agent needs to
transmit its estimated pmf $\hat{\mathcal{F}}_{k,\nu}^{j}$ to $100$
other agents and receives pmf estimates from $100$ neighboring agents.
The total transmission time using a $250$ Kbps XBee radio \cite{Ref:Brumbaugh12Master}
is $\frac{900\times8\times2\times100\times20}{250\times10^{3}}\approx2$
minutes. This is significantly less than the time step of $8$--$10$
minutes used for such missions. %
}
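The footnote's communication-load arithmetic, restated as a sketch (all figures are the footnote's assumptions, including the $900$-byte pmf message and the XBee data rate):

```python
# Sketch of the footnote's communication-load estimate.
bits_per_pmf   = 900 * 8          # assumed 900-byte encoding of the pmf
links_per_loop = 2 * 100          # transmit to and receive from 100 agents
n_loops        = 20               # consensus loops per time step

total_bits = bits_per_pmf * links_per_loop * n_loops
seconds    = total_bits / 250e3   # 250 kbps radio (footnote's assumption)
minutes    = seconds / 60.0       # well under an 8-10 minute time step
```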
Next, we compute the tuning parameter ($\xi_{k}^{j}$) described in
line 10 of \textbf{Algorithm \ref{alg:PGA-inhomo-motion-const}}.
\begin{definition} \label{def:Hellinger-Distance} \textit{(Hellinger
Distance based Tuning Parameter $\xi_{k}^{j}$)} The HD is a symmetric
measure of the difference between two probability distributions and
it is upper bounded by $1$ \cite{Ref:Torgerson91,Ref:Cha07}. Each
agent chooses the tuning parameter ($\xi_{k}^{j}$), based on the
HD, using the following equation: \vspace{-10pt}
\begin{align}
\xi_{k}^{j} & \!=\! D_{H}(\boldsymbol{\pi},\hat{\mathcal{F}}_{k,n_{\textrm{loop}}}^{j})\!:=\!\frac{1}{\sqrt{2}}\sqrt{\sum_{i=1}^{n_{\textrm{cell}}}\left(\sqrt{\boldsymbol{\pi}[i]}-\sqrt{\hat{\mathcal{F}}_{k,n_{\textrm{loop}}}^{j}[i]}\right)^{2}},\label{eq:tuning_eqn}
\end{align}
where $\hat{\mathcal{F}}_{k,n_{\textrm{loop}}}^{j}$ is the estimated
current swarm distribution and $\boldsymbol{\pi}$ is the desired
formation. \hfill $\Box$ \end{definition}
\begin{remark} In contrast with the standard $\mathcal{L}_{1}$ or
$\mathcal{L}_{2}$ distance metrics, the weight assigned by the HD
to the error in a bin is inversely proportional to the square root
of the desired distribution in that bin. For example, the $\mathcal{L}_{1}$
distances of $\mathcal{F}_{1}=[0.1,0,0.4,0,0.5]$ and $\mathcal{F}_{2}=[0,0,0.5,0,0.5]$
from the desired distribution $\boldsymbol{\pi}=[0,0,0.4,0,0.6]$
are both equal to $0.2$. However, the agents in the first bin with $\mathcal{F}_{1}[1]=0.1$
should move to other bins with nonzero probability, which is better
encapsulated by the Hellinger distance (e.g., $D_{H}(\boldsymbol{\pi},\mathcal{F}_{1})=0.2286$
while $D_{H}(\boldsymbol{\pi},\mathcal{F}_{2})=0.0712$). \hfill
$\Box$ \end{remark}
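The remark's comparison can be checked numerically; this sketch implements (\ref{eq:tuning_eqn}) and the $\mathcal{L}_{1}$ distance for the stated $\mathcal{F}_{1}$, $\mathcal{F}_{2}$, and $\boldsymbol{\pi}$:

```python
# Sketch: Hellinger distance (the tuning parameter xi) vs the L1 distance
# for the distributions in the remark.
import math

def hellinger(p, f):
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(fi)) ** 2
                               for pi, fi in zip(p, f)))

def l1(p, f):
    return sum(abs(pi - fi) for pi, fi in zip(p, f))

pi = [0.0, 0.0, 0.4, 0.0, 0.6]
F1 = [0.1, 0.0, 0.4, 0.0, 0.5]
F2 = [0.0, 0.0, 0.5, 0.0, 0.5]
# Both L1 distances equal 0.2, but the HD separates the two cases.
```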
\section{Family of Markov Transition Matrices \label{sec:Designing-Markov-Matrix}}
The key concept of PSG--IMC is that each agent can independently
determine its trajectory from the evolution of an inhomogeneous Markov
chain with a desired stationary distribution. In this section, we
design the family of Markov transition matrices for a desired stationary
distribution, using the tuning parameter from Definition \ref{def:Hellinger-Distance}.
Let $\boldsymbol{x}_{k}^{j}\in\mathbb{R}^{n_{\textrm{cell}}}$ denote
the row vector of probability mass function (pmf) of the predicted
position of the $j^{\textrm{th}}$ agent at the $k^{\textrm{th}}$
time instant, i.e., $\boldsymbol{x}_{k}^{j}\mathbf{1}=1$. The $i^{\textrm{th}}$
element ($\boldsymbol{x}_{k}^{j}[i]$) is the probability of the event
that the $j^{\textrm{th}}$ agent is in $R[i]$ bin at the $k^{\textrm{th}}$
time instant: \vspace{-5pt}
\begin{equation}
\boldsymbol{x}_{k}^{j}[i]=\mathbb{P}(\boldsymbol{r}_{k}^{j}[i]=1),\quad\textrm{for all }i\in\{1,\ldots,n_{\textrm{cell}}\}\thinspace.\label{eq:def_prob_vector}
\end{equation}
The elements of the row stochastic Markov transition matrix $M_{k}^{j}\in\mathbb{R}^{n_{\textrm{cell}}\times n_{\textrm{cell}}}$
are the transition probabilities of the $j^{\textrm{th}}$ agent at
the $k^{\textrm{th}}$ time instant: \vspace{-5pt}
\begin{equation}
M_{k}^{j}[i,\ell]:=\mathbb{P}\left(\boldsymbol{r}_{k+1}^{j}[\ell]=1|\boldsymbol{r}_{k}^{j}[i]=1\right)\thinspace.
\end{equation}
In other words, the probability that the $j^{\textrm{th}}$ agent
in $R[i]$ bin at the $k^{\textrm{th}}$ time instant will transition
to $R[\ell]$ bin at the $(k+1)^{\textrm{th}}$ time instant is given
by $M_{k}^{j}[i,\ell]$. The Markov transition matrix $M_{k}^{j}$
determines the time evolution of the pmf row vector $\boldsymbol{x}_{k}^{j}$
by: \vspace{-15pt}
\begin{equation}
\boldsymbol{x}_{k+1}^{j}=\boldsymbol{x}_{k}^{j}M_{k}^{j},\quad\textrm{for all }k\in\mathbb{Z}^{*}\thinspace.\label{eq:time_evolution_single}
\end{equation}
Lines 12--14 of \textbf{Algorithm \ref{alg:PGA-inhomo-motion-const}}
involve designing a family of Markov transition matrices $M_{k}^{j}$
for each agent, with $\boldsymbol{\pi}$ as their stationary distributions.
The following proposition is used by each agent to construct these Markov
matrices at each time instant.
\begin{proposition} \label{thm:Markov_matrix_new} \textit{(Family
of Markov transition matrices for a desired stationary distribution)}
Let $\boldsymbol{\alpha}_{k}^{j}\in\mathbb{R}^{n_{\mathrm{cell}}}$
be a nonnegative bounded column vector with $\|\boldsymbol{\alpha}_{k}^{j}\|_{\infty}\leq1$.
For given $\xi_{k}^{j}$ from (\ref{eq:tuning_eqn}), the following
parametrized family of row stochastic Markov matrices $M_{k}^{j}$
has $\boldsymbol{\pi}$ as its stationary distribution (i.e., $\boldsymbol{\pi}M_{k}^{j}=\boldsymbol{\pi}$):
\vspace{-10pt}
\begin{align}
M_{k}^{j} & =\boldsymbol{\alpha}_{k}^{j}\frac{\xi_{k}^{j}}{\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}}\boldsymbol{\pi}\mathrm{diag}(\boldsymbol{\alpha}_{k}^{j})+\mathbf{I}-\xi_{k}^{j}\mathrm{diag}(\boldsymbol{\alpha}_{k}^{j}),\label{eq:Markov_matrix_family_new}
\end{align}
where $\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}\not=0$ and $\sup_{k}\xi_{k}^{j}\|\boldsymbol{\alpha}_{k}^{j}\|_{\infty}\leq1$.
\end{proposition}
\begin{IEEEproof}
For a valid first term in (\ref{eq:Markov_matrix_family_new}), we
need $\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}\not=0$. We first
show that $M_{k}^{j}$ is a row stochastic matrix. Right multiplying
both sides of (\ref{eq:Markov_matrix_family_new}) with $\boldsymbol{1}$
gives: \vspace{-5pt}
\[
M_{k}^{j}\boldsymbol{1}=\xi_{k}^{j}\boldsymbol{\alpha}_{k}^{j}\tfrac{\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}}{\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}}+\boldsymbol{1}-\xi_{k}^{j}\mathrm{diag}(\boldsymbol{\alpha}_{k}^{j})\boldsymbol{1}=\boldsymbol{1}\thinspace.
\]
Next, we show that $M_{k}^{j}$ is a Markov matrix with $\boldsymbol{\pi}$
as its stationary distribution, as $\boldsymbol{\pi}$ is the left
eigenvector corresponding to its largest eigenvalue $1$, i.e., $\boldsymbol{\pi}M_{k}^{j}=\boldsymbol{\pi}$.
Left multiplying both sides of (\ref{eq:Markov_matrix_family_new})
with $\boldsymbol{\pi}$ gives: \vspace{-12pt}
\[
\boldsymbol{\pi}M_{k}^{j}=\tfrac{\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}}{\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}}\boldsymbol{\pi}\xi_{k}^{j}\mathrm{diag}(\boldsymbol{\alpha}_{k}^{j})+\boldsymbol{\pi}-\boldsymbol{\pi}\xi_{k}^{j}\mathrm{diag}(\boldsymbol{\alpha}_{k}^{j})=\boldsymbol{\pi}\thinspace.
\]
In order to ensure that all the elements in the matrix $M_{k}^{j}$
are nonnegative, we enforce that $\mathbf{I}-\xi_{k}^{j}\mathrm{diag}(\boldsymbol{\alpha}_{k}^{j})\geq0$
which results in the condition $\sup_{k}\xi_{k}^{j}\|\boldsymbol{\alpha}_{k}^{j}\|_{\infty}\leq1$.
\end{IEEEproof}
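A small numerical check of Proposition \ref{thm:Markov_matrix_new}, with hypothetical $\boldsymbol{\pi}$, $\boldsymbol{\alpha}_{k}^{j}$, and $\xi_{k}^{j}$ (the zero entry in $\boldsymbol{\pi}$ also illustrates that no transitions enter a transient bin):

```python
# Sketch: build M = alpha * (xi / (pi @ alpha)) * pi @ diag(alpha)
#                   + I - xi * diag(alpha)
# and verify it is row stochastic with pi as a stationary distribution.
import numpy as np

pi    = np.array([0.0, 0.2, 0.5, 0.3])   # desired formation (row vector)
alpha = np.array([1.0, 0.8, 0.6, 0.9])   # assumed alpha, ||alpha||_inf <= 1
xi    = 0.4                               # tuning parameter in [0, 1]

M = (np.outer(alpha, pi * alpha) * (xi / (pi @ alpha))
     + np.eye(4) - xi * np.diag(alpha))
```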
The additional degrees of freedom afforded by the $\boldsymbol{\alpha}_{k}^{j}$
vector allow us to capture the physical distance between bins while
designing the Markov matrix.
\begin{definition} \label{def-alpha-vec} \textit{(Physical distance
based $\boldsymbol{\alpha}_{k}^{j}$ vector)} The $j^{\textrm{th}}$
agent at the $k^{\textrm{th}}$ time instant selects bin $R[c]$ independent
of its current location. Then each element of the $\boldsymbol{\alpha}_{k}^{j}$
vector is determined using the physical distance between bins in the
following manner for all $\ell\in\{1,\ldots,n_{\textrm{cell}}\}$:
\vspace{-10pt}
\begin{align}
\boldsymbol{\alpha}_{k}^{j}[\ell] & :=1-\frac{\textrm{dis}(R[\ell],R[c])}{\max_{q\in\{1,\ldots,n_{\textrm{cell}}\}}\textrm{dis}(R[q],R[c])+1}\thinspace,\label{eq:alpha_vector}
\end{align}
where $\textrm{dis}(R[\ell],R[c])$ is the $\ell_{1}$ distance between
the bins $R[\ell]$ and $R[c]$. If $\boldsymbol{\kappa}[\ell]\in\mathbb{R}^{n_{x}}$
denotes the location of the centroid of the bin $R[\ell]$, then $\textrm{dis}(R[\ell],R[c])=\|\boldsymbol{\kappa}[\ell]-\boldsymbol{\kappa}[c]\|_{1}$.
Irrespective of the distribution of $\boldsymbol{\pi}$, the condition
$\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}\not=0$ is satisfied
because $\boldsymbol{\alpha}_{k}^{j}[\ell]>0,\thinspace\forall\ell\in\{1,\ldots,n_{\textrm{cell}}\}$.
Moreover, the condition $\sup_{k}\xi_{k}^{j}\|\boldsymbol{\alpha}_{k}^{j}\|_{\infty}\leq1$
is satisfied as $\xi_{k}^{j}=D_{H}(\boldsymbol{\pi},\hat{\mathcal{F}}_{k,n_{\textrm{loop}}}^{j})\leq1$
in (\ref{eq:tuning_eqn}) and $\|\boldsymbol{\alpha}_{k}^{j}\|_{\infty}=1$
in (\ref{eq:alpha_vector}). \hfill $\Box$ \end{definition}
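A sketch of (\ref{eq:alpha_vector}) on a hypothetical one-dimensional grid of bin centroids:

```python
# Sketch: distance-based alpha vector from bin centroids (assumed values).
import numpy as np

centroids = np.array([[0.], [1.], [2.], [3.]])   # hypothetical centroids
c = 1                                            # independently selected bin

dis   = np.abs(centroids - centroids[c]).sum(axis=1)  # l1 centroid distance
alpha = 1.0 - dis / (dis.max() + 1.0)                 # eq. for alpha_k^j[l]
```

The "+1" in the denominator keeps every element strictly positive, so $\boldsymbol{\pi}\boldsymbol{\alpha}_{k}^{j}\not=0$ for any $\boldsymbol{\pi}$.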
The evolution of the agent's location during each time step is based
on random sampling of the Markov transition matrix, as shown in lines
15--16 of \textbf{Algorithm \ref{alg:PGA-inhomo-motion-const}}. An
agent is said to have undergone a transition if it jumps from bin
$R[i]$ to bin $R[\ell],\thinspace\ell\not=i$ during a given time
step. If $\xi_{k}^{j}=0$ in (\ref{eq:Markov_matrix_family_new}),
then $M_{k}^{j}=\mathbf{I}$ and no transitions occur, since the transition
probability from a bin to any other bin is zero.
If $\xi_{k}^{j}$ is large (close to $1$), the agents vigorously
transition from one bin to another. In this paper, we seek to minimize
unnecessary bin-to-bin transitions during each time step while maintaining
or reconfiguring the formation.
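The random-sampling step in lines 15--16 is inverse-CDF sampling of the current row of $\tilde{M}_{k}^{j}$; a sketch with an assumed transition row:

```python
# Sketch: pick the next bin q such that
# sum(M_row[:q]) <= z < sum(M_row[:q+1])  (0-indexed bins).
import numpy as np

def next_bin(M_row, z):
    """Inverse-CDF sampling of one row of the transition matrix."""
    return int(np.searchsorted(np.cumsum(M_row), z, side='right'))

row = np.array([0.7, 0.2, 0.1])   # assumed transition row M_k^j[i, :]
```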
\begin{remark} \textit{(Dynamics based $\boldsymbol{\alpha}_{k}^{j}$
vector)} The agent dynamics can also be captured using the $\boldsymbol{\alpha}_{k}^{j}$
vector. At the $k^{\textrm{th}}$ time instant, the $\boldsymbol{\alpha}_{k}^{j}[\ell]$
element is the likelihood of the $j^{\textrm{th}}$ agent transitioning
to the bin $R[\ell]$ due to its dynamics and is independent of its
current location. In order to obtain valid solutions, it is necessary
that $\boldsymbol{\alpha}_{k}^{j}[\ell]>0,\thinspace\forall\ell\in\{1,\ldots,n_{\textrm{cell}}\}$
and $\|\boldsymbol{\alpha}_{k}^{j}\|_{\infty}=1$ for all time instants.
\hfill $\Box$ \end{remark}
In the probabilistic guidance algorithm (PGA) \cite{Ref:Chattopadhyay09,Ref:Acikmese12},
the same homogeneous Markov transition matrix is used by all agents
for all time, i.e., $M_{k}^{j}=M$ for all $j\in\{1,\ldots,m\}$ and
$k\in\mathbb{Z}^{*}$. As the agents cannot determine whether the desired
formation has already been achieved, they continue to transition at
every time step. In contrast, in this paper, each agent executes a
different time-varying Markov matrix $M_{k}^{j}$ at each time step.
We show that the inhomogeneous Markov chain not only guides the agents
so that the swarm distribution converges to the desired steady-state
distribution but also reduces the number of transitions across the
bins.
\section{Convergence Analysis of Inhomogeneous Markov Chains \label{sec:Convergence-of-Inhomogeneous}}
In this section, we provide asymptotic guarantees for the proposed
PSG--IMC algorithm without motion constraints, and study its stability
and convergence characteristics. Theorem \ref{thm:Convergence-of-imhomo-MC}
states that each agent's pmf vector $\boldsymbol{x}_{k}^{j}$ asymptotically
converges pointwise to the desired formation $\boldsymbol{\pi}$ while
Theorem \ref{thm:Convergence-swarm-dist} states that the swarm distribution
$\mathcal{F}_{k}^{\star}$ also asymptotically converges pointwise
to the desired formation $\boldsymbol{\pi}$, when the number of agents
tends to infinity. For practical implementation of this algorithm,
a lower bound on the number of agents is provided in Theorem \ref{thm:Convergence-swarm-dist-practical}.
Finally, Remark \ref{rem:Markov_identity} states that if the consensus
error tends to zero, then the Markov transition matrix $M_{k}^{j}$
asymptotically converges to an identity matrix. This means that the
agents stop moving after the desired formation is achieved, resulting
in significant savings of control effort compared to the homogeneous
Markov chain case. Therefore, the first two objectives of PSG--IMC,
stated in Section \ref{sec:Problem-Statement}, are achieved.
The time evolution of the pmf vector $\boldsymbol{x}_{k}^{j}$, defined
in (\ref{eq:def_prob_vector}), from an initial condition ($\boldsymbol{x}_{0}^{j}$)
to the $k^{\textrm{th}}$ time instant is given by the inhomogeneous
Markov chain: \vspace{-5pt}
\begin{equation}
\boldsymbol{x}_{k}^{j}=\boldsymbol{x}_{0}^{j}U_{0,k}^{j},\quad\textrm{for all }k\in\mathbb{N},\label{eq:time_evolution_overall}
\end{equation}
where $U_{0,k}^{j}=M_{0}^{j}M_{1}^{j}\ldots M_{k-2}^{j}M_{k-1}^{j}$
and each $M_{k}^{j}$ is an $n_{\textrm{cell}}\times n_{\textrm{cell}}$
row stochastic matrix obtained using Proposition \ref{thm:Markov_matrix_new}.
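A sketch of the inhomogeneous propagation (\ref{eq:time_evolution_overall}), using the matrices of Proposition \ref{thm:Markov_matrix_new} with a hypothetical geometrically decaying $\xi_{k}$ standing in for the shrinking Hellinger distance:

```python
# Sketch: x_k = x_0 M_0 M_1 ... M_{k-1} with xi_k -> 0; x_k should
# approach pi while M_k approaches the identity matrix.
import numpy as np

pi    = np.array([0.0, 0.25, 0.5, 0.25])
alpha = np.ones(4)

def markov(xi):
    return (np.outer(alpha, pi * alpha) * (xi / (pi @ alpha))
            + np.eye(4) - xi * np.diag(alpha))

x = np.array([1.0, 0.0, 0.0, 0.0])       # all mass in a transient bin
for k in range(60):
    x = x @ markov(0.5 * 0.95 ** k)      # assumed decaying xi_k sequence
```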
We first focus on the convergence of each agent's pmf vector. In the
following proof, we first show that the inhomogeneous Markov chain
is strongly ergodic and then show that the limit is the desired distribution
$\boldsymbol{\pi}$. Moreover, we assume that the swarm has not converged
if two or more agents are not in their correct locations. If
at most one agent is not in its correct location,%
\footnote{The probability of the event that at most one agent is not in its
correct location is upper bounded by $(\frac{1}{n_{\textrm{rec}}})^{m-1}$.
For the simulation example discussed in Section \ref{sub:Numerical-Example-1},
it is approximately $10^{-7000}$.%
} then we consider the swarm to have already converged and no further
convergence is necessary.
\begin{theorem} \label{thm:Convergence-of-imhomo-MC} \textit{(Convergence
of inhomogeneous Markov chains)} Each agent's time evolution of the
pmf vector $\boldsymbol{x}_{k}^{j}$, from any initial condition $\boldsymbol{x}_{0}^{j}\in\mathbb{R}^{n_{\textrm{cell}}}$,
is given by the inhomogeneous Markov chain (\ref{eq:time_evolution_overall}).
If each agent executes the PSG--IMC algorithm, then $\boldsymbol{x}_{k}^{j}$
asymptotically converges pointwise to the desired stationary distribution
$\boldsymbol{\pi}$, i.e., $\lim_{k\rightarrow\infty}\boldsymbol{x}_{k}^{j}=\boldsymbol{\pi}$
pointwise for all $j\in\{1,\ldots,m\}$. \end{theorem}
\begin{IEEEproof}
According to Proposition \ref{thm:Markov_matrix_new}, each matrix
$M_{k}^{j}$ is a function of the tuning parameter $\xi_{k}^{j}$,
which is determined by (\ref{eq:tuning_eqn}) and takes values in $[0,1]$.
Let us define the bins with nonzero probabilities in $\boldsymbol{\pi}$
as recurrent bins. According to the design of the Markov matrix, if
$\xi_{k}^{j}>0$, then an agent can enter these recurrent bins from
all other bins. The bins which are not recurrent are called transient
bins and they have zero probabilities in $\boldsymbol{\pi}$. The
agents can only leave the transient bins when $\xi_{k}^{j}>0$, but
they can never enter a transient bin from any other bin due to the
structure of the Markov matrix in (\ref{eq:Markov_matrix_family_new}).
In this proof, we first consider the special case where all the bins
are recurrent and show that each agent's pmf vector converges to $\boldsymbol{\pi}$.
We next consider the general case where transient bins are also present
and show that this situation converges to the special case geometrically
fast.
\textbf{Case 1: all bins are recurrent, }i.e., $\boldsymbol{\pi}[i]>0,\thinspace\textrm{for all }i\in\{1,\ldots,n_{\textrm{cell}}\}$.
For $\xi_{k}^{j}>0$, Proposition \ref{thm:Markov_matrix_new} guarantees
that the matrix $M_{k}^{j}$ is positive, strongly connected and primitive.
We next prove that $\lim_{k\rightarrow\infty}U_{0,k}^{j}$ is also
a primitive matrix. Forward multiplication of two row stochastic Markov
matrices ($M_{s}^{j},\thinspace M_{s+1}^{j}$ with $\xi_{s}^{j},\thinspace\xi_{s+1}^{j}>0$
and $s\in\mathbb{Z}^{*}$) obtained using Proposition \ref{thm:Markov_matrix_new}
yields the positive matrix $M_{s}^{j}M_{s+1}^{j}$ which is a primitive
matrix.
If $\xi_{k}^{j}=0$, then $M_{k}^{j}=\mathbf{I}$ from (\ref{eq:Markov_matrix_family_new}).
The matrix product $U_{0,k}^{j}$ can be decomposed into two parts:
(a) the matrices whose tuning parameter ($\xi_{s}^{j}$) is zero, and
(b) the remaining Markov matrices with $\xi_{s}^{j}>0$. \vspace{-15pt}
\begin{align*}
U_{0,k}^{j} & =M_{0}^{j}(\xi_{0}^{j}>0)\thinspace M_{1}^{j}(\xi_{1}^{j}>0)\thinspace M_{2}^{j}(\xi_{2}^{j}=0)\thinspace M_{3}^{j}(\xi_{3}^{j}>0)\ldots\\
& \quad M_{k-3}^{j}(\xi_{k-3}^{j}>0)\thinspace M_{k-2}^{j}(\xi_{k-2}^{j}=0)\thinspace M_{k-1}^{j}(\xi_{k-1}^{j}>0)\thinspace,\\
& =\underbrace{\left(\mathbf{I}\ldots\mathbf{I}\right)}_{\text{1st part}}\cdot\underbrace{\left(M_{0}^{j}M_{1}^{j}M_{3}^{j}\ldots M_{k-3}^{j}M_{k-1}^{j}\right)}_{\text{2nd part}}\thinspace.
\end{align*}
Since the first part results in an identity matrix, the matrix product
$U_{0,k}^{j}$ can be completely defined by the second part containing
Markov matrices with $\xi_{k}^{j}>0$. In the matrix product $\lim_{k\rightarrow\infty}U_{0,k}^{j}$,
the tuning parameters ($\xi_{k}^{j}$) for the first few Markov matrices
are nonzero because the swarm starts from initial conditions that
are different from the desired formation. Hence there exists at least
one matrix in the second part of $\lim_{k\rightarrow\infty}U_{0,k}^{j}$.
Thus we can prove (by induction) that the matrix product $\lim_{k\rightarrow\infty}U_{0,k}^{j}$
is a primitive matrix.
Next, we prove that the matrix product $\lim_{k\rightarrow\infty}U_{0,k}^{j}$
is asymptotically homogeneous and strongly ergodic (see Definition
\ref{def:Strong-ergodic-asymp-homo} in Appendix). Let us find a positive
$\gamma$ independent of $k$ such that $\gamma\leq\min^{+}M_{k}^{j}[i,\ell]$
for all $i,\ell\in\{1,\ldots,n_{\mathrm{cell}}\}$. Let $\varepsilon_{\mathrm{cons}}\leq\frac{1}{m}$
and let $n_{\textrm{loop}}$ be computed using Theorem \ref{thm:Bayesian_Consensus_Filtering_Static_balanced_graphs}.
If at least two agents are not in the correct location, i.e., $\sum_{R[i]\in\mathcal{R}}|\mathcal{F}_{k}^{\star}[i]-\boldsymbol{\pi}[i]|\geq\frac{2}{m}$;
then due to the quantization of the pmf by the number of agents $m$,
the smallest positive tuning parameter $\xi_{\min}$ is given by:
\vspace{-10pt}
\begin{equation}
\xi_{\min}=\frac{1}{2^{\frac{3}{2}}m}\leq\min{}_{j\in\{1,\ldots,m\},\thinspace k\in\mathbb{Z}^{*}}^{+}\xi_{k}^{j}\thinspace.
\end{equation}
The smallest positive element in the $\boldsymbol{\alpha}_{k}^{j}$
vector in (\ref{eq:alpha_vector}) is given by: \vspace{-10pt}
\begin{align}
\alpha_{\min} & =\left(1-\frac{\max_{i,q\in\{1,\ldots,n_{\textrm{cell}}\}}\textrm{dis}(R[q],R[i])}{\max_{i,q\in\{1,\ldots,n_{\textrm{cell}}\}}\textrm{dis}(R[q],R[i])+1}\right)\thinspace.
\end{align}
Finally, the smallest positive element in the stationary distribution
$\boldsymbol{\pi}$ is given by $\pi_{\min}=$ $\left(\min{}_{i\in\{1,\ldots,n_{\mathrm{cell}}\}}^{+}\boldsymbol{\pi}[i]\right)$.
The diagonal and off-diagonal elements of the Markov matrix $M_{k}^{j}$
designed using (\ref{eq:Markov_matrix_family_new}) are given by:
\vspace{-15pt}
\begin{align}
M_{k}^{j}[i,i] & =1-\xi_{k}^{j}\boldsymbol{\alpha}_{k}^{j}[i]+\tfrac{\xi_{k}^{j}}{\boldsymbol{\pi\alpha}_{k}^{j}}(\boldsymbol{\alpha}_{k}^{j}[i])^{2}\boldsymbol{\pi}[i]\thinspace,\\
M_{k}^{j}[i,\ell] & =\tfrac{\xi_{k}^{j}}{\boldsymbol{\pi\alpha}_{k}^{j}}\boldsymbol{\alpha}_{k}^{j}[i]\boldsymbol{\pi}[\ell]\boldsymbol{\alpha}_{k}^{j}[\ell]\thinspace,\textrm{ where }i\not=\ell.\label{eq:off_diagonal_term}
\end{align}
Irrespective of the choice of $\xi_{k}^{j}$, the diagonal elements
of $M_{k}^{j}$ will not tend to zero. But the off-diagonal elements
of $M_{k}^{j}$ will tend to zero if $\xi_{k}^{j}\rightarrow0$. Moreover,
the largest possible value of $\boldsymbol{\pi\alpha}_{k}^{j}$ is $1$,
attained when $\boldsymbol{\alpha}_{k}^{j}=\mathbf{1}$. Hence the smallest
nonzero element in $M_{k}^{j}$ is lower bounded by the smallest possible
values of the terms in (\ref{eq:off_diagonal_term}).
Thus we get a positive $\gamma$ that is independent of $k$, i.e.,
$\gamma=\xi_{\min}\alpha_{\min}^{2}\pi_{\min}$. If $\xi_{k}^{j}=0$,
then $\min{}_{i,\ell\in\{1,\ldots,n_{\mathrm{cell}}\}}^{+}M_{k}^{j}[i,\ell]=1>\gamma$.
Since $M_{k}^{j}$ is row stochastic, $M_{k}^{j}[i,\ell]\leq1$ for
all $i,\ell\in\{1,\ldots,n_{\mathrm{cell}}\}$ and $k\in\mathbb{Z}^{*}$.
Thus each Markov matrix satisfies the condition (\ref{eq:condition_C})
in Lemma \ref{thm:Asymptotic-Homogeneity-Strong-Ergodicity} given
in Appendix. Since $\boldsymbol{\psi}=\boldsymbol{\pi}$ in (\ref{eq:condition_asymtotic_homogeneity}),
it follows from Lemma \ref{thm:Asymptotic-Homogeneity-Strong-Ergodicity}
that the matrix product $\lim_{k\rightarrow\infty}U_{0,k}^{j}$ is
asymptotically homogeneous and strongly ergodic.
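The properties used above can be checked numerically. The following sketch builds $M_{k}^{j}$ directly from the displayed diagonal and off-diagonal formulas, with illustrative values of $\boldsymbol{\pi}$, $\boldsymbol{\alpha}_{k}^{j}$ and $\xi_{k}^{j}$ (these values are assumptions, not taken from the paper), and verifies row stochasticity, stationarity of $\boldsymbol{\pi}$, and the lower bound $\gamma=\xi_{\min}\alpha_{\min}^{2}\pi_{\min}$.

```python
import numpy as np

# Sketch of the family in (eq:Markov_matrix_family_new):
#   M[i,i] = 1 - xi*a[i] + (xi/(pi.a)) a[i]^2 pi[i],
#   M[i,l] = (xi/(pi.a)) a[i] pi[l] a[l],  l != i.
def markov_matrix(pi, a, xi):
    pa = pi @ a                              # the scalar pi * alpha
    M = (xi / pa) * np.outer(a, pi * a)      # off-diagonal pattern
    np.fill_diagonal(M, 1.0 - xi * a + np.diag(M))  # add 1 - xi*a[i]
    return M

pi = np.array([0.4, 0.3, 0.2, 0.1])          # illustrative formation
a  = np.array([0.9, 0.8, 0.7, 0.6])          # illustrative alpha vector
xi = 0.5
M  = markov_matrix(pi, a, xi)

assert np.allclose(M.sum(axis=1), 1.0)       # row stochastic
assert np.allclose(pi @ M, pi)               # pi is stationary
assert M.min() >= xi * a.min()**2 * pi.min() # gamma lower-bounds min+ M
```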
Note that $M_{k}^{j}$ for all $k\in\mathbb{Z}^{*}$, by Proposition
\ref{thm:Markov_matrix_new}, have $\boldsymbol{\pi}$ as their left
eigenvector for the eigenvalue $1$. Hence $\boldsymbol{v}_{0}^{j}=\boldsymbol{\pi}$
is a set of absolute probability vectors for $\lim_{k\rightarrow\infty}U_{0,k}^{j}$.
According to Lemma \ref{thm:Uniqueness-Absolute-Probability-Vectors}
given in Appendix, $\boldsymbol{v}_{0}^{j}=\boldsymbol{\pi}$ is the
unique set of absolute probability vectors for $\lim_{k\rightarrow\infty}U_{0,k}^{j}$
\cite{Ref:Blackwell45} and $\boldsymbol{w}_{0}^{j}=\boldsymbol{\pi}$
in (\ref{eq:strong_ergodicity}). Hence the individual pmfs asymptotically
converge to: \vspace{-10pt}
\begin{equation}
\lim_{k\rightarrow\infty}\boldsymbol{x}_{k}^{j}=\lim_{k\rightarrow\infty}\boldsymbol{x}_{0}^{j}U_{0,k}^{j}=\boldsymbol{x}_{0}^{j}\mathbf{1}\boldsymbol{\pi}=\boldsymbol{\pi}\thinspace.
\end{equation}
\textbf{Case 2: both transient and recurrent bins are present, }i.e.,
$\boldsymbol{\pi}[i]>0$ for all $i\in\{1,\ldots,n_{\textrm{rec}}\}$
and $\boldsymbol{\pi}[i]=0$ for all $i\in\{n_{\textrm{rec}}+1,\ldots,n_{\textrm{cell}}\}$.\textbf{ }
Without loss of generality, let us reorder the bins such that the
first $n_{\textrm{rec}}$ bins are recurrent, and the remaining bins
are transient. Hence the pmf vector $\boldsymbol{x}_{k}^{j}$ is also
reordered so that $\boldsymbol{x}_{k}^{j}[i],\thinspace i\in\{1,\ldots,n_{\textrm{rec}}\}$
is the probability of the event that the $j^{\textrm{th}}$ agent
is in the $R[i]$ recurrent bin at the $k^{\textrm{th}}$ time instant.
It is known that the agents leave the transient bins geometrically
fast \cite[Theorem 4.3, pp. 120]{Ref:Seneta06}. Once all the agents
are in the recurrent bins, then the situation is similar to Case 1
with $n_{\textrm{rec}}$ bins. The submatrix $M_{k,\textrm{sub}}^{j}\in\mathbb{R}^{n_{\textrm{rec}}\times n_{\textrm{rec}}}$
of the original Markov matrix, as shown in Fig. \ref{fig:Submatrix},
is primitive when $\xi_{k}^{j}>0$. Then, it follows from Case 1 that
$\lim_{k\rightarrow\infty}\boldsymbol{x}_{k}^{j}=\boldsymbol{\pi}.$
\end{IEEEproof}
\begin{figure}
\begin{centering}
\includegraphics[bb=80bp 190bp 910bp 415bp,clip,width=3in]{Markov_submatrix.pdf}
\par\end{centering}
\protect\caption{Submatrix $M_{k,\textrm{sub}}^{j}\in\mathbb{R}^{n_{\textrm{rec}}\times n_{\textrm{rec}}}$
of only the recurrent bins in the original Markov matrix $M_{k}^{j}$.
\label{fig:Submatrix}}
\end{figure}
Since each agent's pmf vector converges to $\boldsymbol{\pi}$, we
now focus on the convergence of the current swarm distribution.
\begin{theorem} \label{thm:Convergence-swarm-dist} \textit{(Convergence
of swarm distribution to desired formation)} If the number of agents,
executing the PSG--IMC algorithm, tends to infinity; then the current
swarm distribution ($\mathcal{F}_{k}^{\star}=\frac{1}{m}\sum_{j=1}^{m}\boldsymbol{r}_{k}^{j}$)
asymptotically converges pointwise to the desired stationary distribution
$\boldsymbol{\pi}$, i.e., $\lim_{m\rightarrow\infty}\lim_{k\rightarrow\infty}\mathcal{F}_{k}^{\star}=\boldsymbol{\pi}$
pointwise. \end{theorem}
\begin{IEEEproof}
Let $X_{k}^{j}[i]$ denote the independent Bernoulli random variable
representing the event that the $j^{\textrm{th}}$ agent is actually
located in bin $R[i]$ at the $k^{\textrm{th}}$ time instant, i.e.,
$X_{k}^{j}[i]=1$ if $\boldsymbol{r}_{k}^{j}[i]=1$ and $X_{k}^{j}[i]=0$
otherwise. Let $X_{\infty}^{j}[i]$ denote the random variable $\lim_{k\rightarrow\infty}X_{k}^{j}[i]$.
Theorem \ref{thm:Convergence-of-imhomo-MC} implies that the success
probability of $X_{\infty}^{j}[i]$ is given by $\mathbb{P}\left(X_{\infty}^{j}[i]=1\right)=\lim_{k\rightarrow\infty}x_{k}^{j}[i]=\boldsymbol{\pi}[i]$.
Hence $\mathbb{E}\left[X_{\infty}^{j}[i]\right]=\boldsymbol{\pi}[i]\cdot1+(1-\boldsymbol{\pi}[i])\cdot0=\boldsymbol{\pi}[i]$,
where $\mathbb{E}[\cdot]$ is the expected value of the random variable.
Let $S_{\infty}^{m}[i]=X_{\infty}^{1}[i]+\ldots+X_{\infty}^{m}[i]$.
As the random variables $X_{\infty}^{j}[i]$ are independent and identically
distributed, the strong law of large numbers (cf. \cite[pp. 85]{Ref:Billingsley95})
states that: \vspace{-5pt}
\begin{equation}
\mathbb{P}\left(\lim_{m\rightarrow\infty}\frac{S_{\infty}^{m}[i]}{m}=\boldsymbol{\pi}[i]\right)=1\thinspace.\label{eq:SLLN_1}
\end{equation}
The final swarm distribution is also given by $\lim_{k\rightarrow\infty}\mathcal{F}_{k}^{\star}[i]=\frac{1}{m}\sum_{j=1}^{m}\boldsymbol{r}_{k}^{j}[i]=\frac{S_{\infty}^{m}[i]}{m}$.
Hence (\ref{eq:SLLN_1}) implies that $\lim_{m\rightarrow\infty}\lim_{k\rightarrow\infty}\mathcal{F}_{k}^{\star}=\boldsymbol{\pi}$
pointwise. It follows from Scheff\'{e}'s theorem \cite[pp. 84]{Ref:Durrett05}
that the measure induced by $\mathcal{F}_{k}^{\star}$ on the $\sigma$-algebra
of $\mathcal{R}$ converges in total variation to the measure induced
by $\boldsymbol{\pi}$.
\end{IEEEproof}
In practical scenarios, the number of agents is finite, hence we need
to specify a convergence error threshold. The following theorem gives
the minimum number of agents needed to establish $\varepsilon$-convergence
of the swarm.
\begin{theorem} For some acceptable convergence error $\varepsilon_{\mathrm{conv}}>0$
and $\varepsilon_{\mathrm{bin}}>0$, if the number of agents is at
least $m\geq\frac{1}{4\varepsilon_{\mathrm{bin}}^{2}\varepsilon_{\mathrm{conv}}}$,
then the pointwise error probability for each bin is bounded by $\varepsilon_{\mathrm{conv}}$,
i.e., $\mathbb{P}\left(\left|\frac{S_{\infty}^{m}[i]}{m}-\boldsymbol{\pi}[i]\right|>\varepsilon_{\mathrm{bin}}\right)\leq\varepsilon_{\mathrm{conv}},\thinspace\forall i\in\{1,\ldots,n_{\mathrm{cell}}\}$.
\label{thm:Convergence-swarm-dist-practical} \end{theorem}
\begin{IEEEproof}
The variance of the independent random variable from Theorem \ref{thm:Convergence-swarm-dist}
is $\textrm{Var}(X_{\infty}^{j}[i])=\boldsymbol{\pi}[i](1-\boldsymbol{\pi}[i])$,
hence $\textrm{Var}(\frac{S_{\infty}^{m}[i]}{m})=\frac{\boldsymbol{\pi}[i](1-\boldsymbol{\pi}[i])}{m}$.
Chebyshev's inequality (cf. \cite[Theorem 1.6.4, pp. 25]{Ref:Durrett05})
implies that for any $\varepsilon_{\mathrm{bin}}>0$, the pointwise
error probability for each bin is bounded by: \vspace{-10pt}
\[
\mathbb{P}\left(\left|\frac{S_{\infty}^{m}[i]}{m}-\boldsymbol{\pi}[i]\right|\!>\!\varepsilon_{\mathrm{bin}}\right)\!\leq\!\frac{\boldsymbol{\pi}[i](1-\boldsymbol{\pi}[i])}{m\varepsilon_{\mathrm{bin}}^{2}}\!\leq\!\frac{1}{4m\varepsilon_{\mathrm{bin}}^{2}}.
\]
Hence, requiring $\frac{1}{4m\varepsilon_{\mathrm{bin}}^{2}}\leq\varepsilon_{\mathrm{conv}}$ yields the stated lower bound $m\geq\frac{1}{4\varepsilon_{\mathrm{bin}}^{2}\varepsilon_{\mathrm{conv}}}$ on the number of agents.
\end{IEEEproof}
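For concreteness, the bound of Theorem \ref{thm:Convergence-swarm-dist-practical} can be evaluated as follows; the tolerance values are illustrative assumptions.

```python
import math

# Smallest m with 1/(4 m eps_bin^2) <= eps_conv, i.e.
# m >= 1 / (4 eps_bin^2 eps_conv).
def min_agents(eps_bin, eps_conv):
    return math.ceil(1.0 / (4.0 * eps_bin**2 * eps_conv))

m = min_agents(eps_bin=0.01, eps_conv=0.05)
assert m == 50000
# With this m the Chebyshev bound holds for every bin, since
# pi[i](1 - pi[i]) <= 1/4 for any probability pi[i]:
assert 1.0 / (4 * m * 0.01**2) <= 0.05
```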
We now study the convergence of the tuning parameter and the Markov
matrix.
\begin{remark} \label{rem:Markov_identity} \textit{(Convergence
of Markov matrix to identity matrix)} If the consensus error tends
to zero, i.e., $\varepsilon_{\mathrm{cons}}\rightarrow0$, then $\hat{\mathcal{F}}_{k,n_{\mathrm{loop}}}^{j}=\mathcal{F}_{k}^{\star}$.
If the number of agents, executing the PSG--IMC algorithm, tends to
infinity then Theorem \ref{thm:Convergence-swarm-dist} implies that
the limiting tuning parameter (\ref{eq:tuning_eqn}) is given by:
\vspace{-15pt}
\begin{align*}
& \lim_{m\rightarrow\infty}\lim_{k\rightarrow\infty}\lim_{n_{\mathrm{loop}}\rightarrow\infty}\xi_{k}^{j}=\lim_{m\rightarrow\infty}\lim_{k\rightarrow\infty}\lim_{n_{\mathrm{loop}}\rightarrow\infty}D_{H}(\boldsymbol{\pi},\hat{\mathcal{F}}_{k,n_{\textrm{loop}}}^{j})\\
& \quad=\lim_{m\rightarrow\infty}\lim_{k\rightarrow\infty}D_{H}(\boldsymbol{\pi},\mathcal{F}_{k}^{\star})=D_{H}(\boldsymbol{\pi},\boldsymbol{\pi})=0\thinspace.
\end{align*}
Therefore Proposition \ref{thm:Markov_matrix_new} implies that the
Markov transition matrix $\lim_{m\rightarrow\infty}\lim_{k\rightarrow\infty}\lim_{n_{\mathrm{loop}}\rightarrow\infty}M_{k}^{j}=\mathbf{I}$.
\hfill $\Box$ \end{remark}
In practical scenarios, $n_{\mathrm{loop}}$ is finite, hence $\xi_{k}^{j}$
may not converge to zero. A practical work-around to avoid transitions
due to nonzero $\xi_{k}^{j}$ is to set $M_{k}^{j}=\mathbf{I}$ when
$\xi_{k}^{j}$ is sufficiently small and has leveled out. In the next
section, we use the above theorems to prove convergence of the PSG--IMC
algorithm with motion constraints.
\section{Motion Constraints \label{sec:Handling-Motion-Constraints}}
In this section, we introduce additional constraints on the motion
of agents and study their effects on the convergence of the swarm.
We first introduce the motion constraints and the corresponding trapping
problem. Next, we discuss the strategy for leaving the trapping region
and an additional condition on the desired formation. Finally, Theorem
\ref{thm:Convergence-of-imhomo-MC-motion-const} shows that each agent's
pmf vector $\boldsymbol{x}_{k}^{j}$ asymptotically converges pointwise
to the desired formation $\boldsymbol{\pi}$, if it executes the PSG--IMC
algorithm with motion constraints.
Because of agent dynamics or physical constraints, the agents in a
particular bin can transition to some bins but not to others.
These (possibly time-varying) motion constraints are
specified in a matrix $A_{k}^{j}$ as follows: \vspace{-15pt}
\begin{align}
A_{k}^{j}[i,\ell] & =\begin{cases}
1 & \textrm{if the \ensuremath{j^{\textrm{th}}} agent can transition to }R[\ell]\\
0 & \textrm{if the \ensuremath{j^{\textrm{th}}} agent cannot transition to }R[\ell]
\end{cases},\nonumber \\
& \thinspace\textrm{where }\boldsymbol{r}_{k}^{j}[i]=1,\thinspace\textrm{for all }\ell\in\{1,\ldots,n_{\textrm{cell}}\}\thinspace.\label{eq:motion_constraints}
\end{align}
\begin{assumption} The matrix $A_{k}^{j}$ is symmetric and the graph
conforming to the $A_{k}^{j}$ matrix is strongly connected. Moreover,
an agent can always choose to remain in its present bin, i.e., $A_{k}^{j}[i,i]=1$
for all $i\in\{1,\ldots,n_{\textrm{cell}}\}$ and $k\in\mathbb{Z}^{*}$.
\hfill $\Box$ \label{assump_SC} \end{assumption}
In this paper, we introduce a simple, intuitive method for handling
motion constraints, such that the convergence results in Section \ref{sec:Convergence-of-Inhomogeneous}
are not affected. The key idea is to modify the Markov matrix $M_{k}^{j}$
designed in Proposition \ref{thm:Markov_matrix_new} to capture the
motion constraints in (\ref{eq:motion_constraints}).
\begin{proposition} \label{thm:Markov_matrix_modified}\textit{ }Let
$\tilde{M}_{k}^{j}$ represent the modified Markov matrix that satisfies
motion constraints, which is obtained from the original Markov matrix
$M_{k}^{j}$ given by Proposition \ref{thm:Markov_matrix_new}. For
each transition that is not allowed by the motion constraint, i.e.,
$A_{k}^{j}[i,\ell]=0$, the corresponding transition probability in
$M_{k}^{j}$ is added to the diagonal element in $\tilde{M}_{k}^{j}$.
First, let $\tilde{M}_{k}^{j}=M_{k}^{j}$. Then set: \vspace{-10pt}
\[
\tilde{M}_{k}^{j}[i,i]=M_{k}^{j}[i,i]+\sum_{\ell\in\{1,\ldots,n_{\mathrm{cell}}:A_{k}^{j}[i,\ell]=0\}}M_{k}^{j}[i,\ell]\thinspace.
\]
Finally, for all $i,\ell\in\{1,\ldots,n_{\textrm{cell}}\}$: \vspace{-10pt}
\begin{align*}
\textrm{if}\quad & A_{k}^{j}[i,\ell]=0,\textrm{ then set }\tilde{M}_{k}^{j}[i,\ell]=0\thinspace.
\end{align*}
The resulting row stochastic Markov matrix $\tilde{M}_{k}^{j}$ has
$\boldsymbol{\pi}$ as its stationary distribution (i.e., $\boldsymbol{\pi}\tilde{M}_{k}^{j}=\boldsymbol{\pi}$).
\end{proposition}
\begin{IEEEproof}
The modified Markov matrix $\tilde{M}_{k}^{j}$ is row stochastic
since $\tilde{M}_{k}^{j}\mathbf{1}=M_{k}^{j}\mathbf{1}=\mathbf{1}$.
Moreover, $\tilde{M}_{k}^{j}$ also has $\boldsymbol{\pi}$ as its
stationary distribution because of the reversible property of $M_{k}^{j}$,
i.e. $\boldsymbol{\pi}[\ell]M_{k}^{j}[\ell,i]=\boldsymbol{\pi}[i]M_{k}^{j}[i,\ell]$.
Hence $\tilde{M}_{k}^{j}$ is indeed a valid Markov matrix.
\end{IEEEproof}
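The construction of Proposition \ref{thm:Markov_matrix_modified} can be sketched numerically. The values of $\boldsymbol{\pi}$, $\boldsymbol{\alpha}$, $\xi$ and the symmetric constraint matrix $A$ below are illustrative assumptions; the checks confirm that $\tilde{M}_{k}^{j}$ remains row stochastic and retains $\boldsymbol{\pi}$ as its stationary distribution.

```python
import numpy as np

# Fold the probability of every disallowed transition (A[i,l] = 0) into
# the diagonal. M is the reversible family matrix of the earlier
# proposition, rebuilt inline so this sketch is self-contained.
pi = np.array([0.4, 0.3, 0.2, 0.1])
a  = np.array([0.9, 0.8, 0.7, 0.6])
xi = 0.5
pa = pi @ a
M = (xi / pa) * np.outer(a, pi * a)
np.fill_diagonal(M, 1.0 - xi * a + np.diag(M))

A = np.array([[1, 1, 0, 0],    # symmetric, strongly connected, ones on
              [1, 1, 1, 0],    # the diagonal (Assumption assump_SC)
              [0, 1, 1, 1],
              [0, 0, 1, 1]])

M_tilde = np.where(A == 1, M, 0.0)                     # zero disallowed moves
np.fill_diagonal(M_tilde,
                 np.diag(M_tilde) + (M * (A == 0)).sum(axis=1))

assert np.allclose(M_tilde.sum(axis=1), 1.0)   # still row stochastic
assert np.allclose(pi @ M_tilde, pi)           # pi still stationary
```

Stationarity survives the modification precisely because $M$ is reversible with respect to $\boldsymbol{\pi}$ and $A$ is symmetric, as the proof above notes.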
For a bin $R[i]$, let us define $\mathcal{A}_{k}^{j}(R[i])$ as the
set of all bins that the $j^{\textrm{th}}$ agent can transition to
at the $k^{\textrm{th}}$ time instant: \vspace{-10pt}
\begin{equation}
\mathcal{A}_{k}^{j}(R[i]):=\left\{ R[\ell]:\ell\in\{1,\dots,n_{\textrm{cell}}\};\thinspace A_{k}^{j}[i,\ell]=1\right\} .
\end{equation}
Similarly, let us define $\Pi$ as the set of all bins that have nonzero
probabilities in the desired formation ($\boldsymbol{\pi}$): \vspace{-10pt}
\begin{equation}
\Pi:=\left\{ R[\ell]\thinspace:\thinspace\ell\in\{1,\dots,n_{\textrm{cell}}\};\textrm{ and }\boldsymbol{\pi}[\ell]>0\right\} \thinspace.
\end{equation}
As defined before in the proof of Theorem \ref{thm:Convergence-of-imhomo-MC},
the bins in the set $\Pi$ are called recurrent bins while those not
in $\Pi$ are called transient bins.
\begin{remark} \textit{(Trapping Problem)} If the $j^{\textrm{th}}$
agent is actually located in bin $R[i]$ and we observe that $\mathcal{A}_{k}^{j}(R[i])\cap\Pi=\emptyset$
for all $k\in\mathbb{Z}^{*}$, then the $j^{\textrm{th}}$ agent is
trapped in the bin $R[i]$ forever. Let us define $\mathcal{T}_{k}^{j}$
as the set of all bins that satisfy this trapping condition: \vspace{-10pt}
\begin{equation}
\mathcal{T}_{k}^{j}:=\bigcup{}_{i\in\{1,\ldots,n_{\textrm{cell}}\}}\left\{ R[i]\thinspace:\thinspace\mathcal{A}_{k}^{j}(R[i])\cap\Pi=\emptyset\right\} \thinspace.\label{eq:trapping_region}
\end{equation}
To avoid this trapping problem, we enforce a secondary condition on
the $j^{\textrm{th}}$ agent if it is actually located in a bin belonging
to the set $\mathcal{T}_{k}^{j}$. For each bin $R[i]\in\mathcal{T}_{k}^{j}$,
we choose another bin $\Psi(R[i])$, where $\Psi(R[i])\in\mathcal{A}_{k}^{j}(R[i])$
and either $\Psi(R[i])\not\in\mathcal{T}_{k}^{j}$ or $\Psi(R[i])$
is close to other bins which are not in $\mathcal{T}_{k}^{j}$. The
secondary condition on the $j^{\textrm{th}}$ agent is that it will
transition to bin $\Psi(R[i])$ during the $k^{\textrm{th}}$ time
step. Moreover, agents in transient bins that are not in
the trapping region eventually transition to the recurrent bins,
and we speed up this process as well. The information about exiting the trapping
region and the transient bins is captured by the matrix $C_{k}^{j}\in\mathbb{R}^{n_{\textrm{cell}}\times n_{\textrm{cell}}}$
given by: \vspace{-10pt}
\begin{align}
C_{k}^{j}[i,\ell] & =\begin{cases}
1 & \textrm{if }R[i]\in\mathcal{T}_{k}^{j}\textrm{ and }R[\ell]=\Psi(R[i])\\
\frac{1}{\left|\mathcal{A}_{k}^{j}(R[i])\cap\Pi\right|} & \textrm{else if }R[i]\not\in\mathcal{T}_{k}^{j}\textrm{ and }R[i]\not\in\Pi\\
& \thinspace\thinspace\thinspace\textrm{and }R[\ell]\in\mathcal{A}_{k}^{j}(R[i])\cap\Pi\\
0 & \textrm{otherwise}
\end{cases},\nonumber \\
& \quad\textrm{for all }i,\ell\in\{1,\ldots,n_{\textrm{cell}}\}\thinspace,\label{eq:C_matrix}
\end{align}
which is used instead of the Markov matrix in (\ref{eq:time_evolution_single}).
Note that this secondary condition would not cause an infinite loop
as the graph conforming to the $A_{k}^{j}$ matrix is strongly connected.
\hfill $\Box$ \end{remark}
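The escape matrix (\ref{eq:C_matrix}) can be sketched on a toy example; the one-dimensional bin layout, the constraint matrix, and the particular choice of $\Psi$ below are illustrative assumptions.

```python
import numpy as np

# Toy setup: bins 0..4 on a line, moves of at most one step,
# recurrent set Pi = {3, 4}. Bins 0 and 1 cannot reach Pi in one
# step, so they form the trapping region T.
n = 5
A = np.eye(n, dtype=int)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
recurrent = {3, 4}                                   # the set Pi

neighbors = [set(np.flatnonzero(A[i])) for i in range(n)]
trap = {i for i in range(n) if not (neighbors[i] & recurrent)}
psi = {i: max(neighbors[i]) for i in trap}           # Psi: step toward Pi

C = np.zeros((n, n))
for i in range(n):
    if i in trap:
        C[i, psi[i]] = 1.0                           # forced escape move
    elif i not in recurrent:
        exits = neighbors[i] & recurrent             # transient bin next to Pi
        for l in exits:
            C[i, l] = 1.0 / len(exits)               # uniform over exits

assert trap == {0, 1}
# Rows for bins outside Pi are valid transition rules (rows in Pi are
# unused, since those agents follow the modified Markov matrix instead):
assert np.allclose(C[[0, 1, 2]].sum(axis=1), 1.0)
```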
\begin{figure}
\begin{centering}
\begin{tabular}{cc}
\includegraphics[bb=360bp 160bp 615bp 415bp,clip,width=1in]{Pi_example.pdf} & \includegraphics[bb=0bp 0bp 285bp 284bp,clip,width=1in]{Desired_distribution_Illini_logo_with_motion_constraint_v6.pdf}\tabularnewline
(a) & (b)\tabularnewline
\end{tabular}
\par\end{centering}
\begin{centering}
\protect\caption{(a) The red region denotes the set $\Pi$. The motion constraints
are such that the agents can only transition to the immediate neighboring
bins. Note that the four corners form four subsets of $\Pi$,
such that no agent can transition from one subset to another
without exiting the set $\Pi$. (b) The UIUC logo, marked in red,
is the desired formation $\boldsymbol{\pi}$. Although $\mathcal{R}$
is partitioned into $900$ bins, the set $\Pi$ contains only $262$
bins. The trapping region $\mathcal{T}_{k}^{j}$ for all $k\in\mathbb{N}$
is marked in green. \label{fig:Pi-example} }
\par\end{centering}
\vspace{-20pt}
\end{figure}
It is possible that the set of all bins that have nonzero probabilities
in the desired formation ($\Pi$) can be decomposed into subsets,
such that no agent can transition from one subset to another
without exiting the set $\Pi$ (for example, see Fig. \ref{fig:Pi-example}(a)).
Since our proposed algorithm ensures that the agents always
transition within $\Pi$ after entering it, the agents in one such
subset will never transition to the other subsets. Hence, the proposed
algorithm will not be able to achieve the desired formation, as the
agents in each subset of $\Pi$ are trapped within that subset. In
order to avoid such situations, we need the following assumption on
$\Pi$ and $A_{k}^{j}$.
\begin{assumption} The set of all bins that have nonzero probabilities
in the desired formation ($\Pi$) and the matrix specifying the motion
constraints ($A_{k}^{j}$) are such that each agent can transition
from any bin in $\Pi$ to any other bin in $\Pi$, without exiting
the set $\Pi$ while satisfying the motion constraints. \hfill $\Box$
\label{assump_Pi} \end{assumption}
For the $j^{\textrm{th}}$ agent in bin $R[i]$, random sampling of
the Markov chain selects the bin $R[q]$ in lines 15--16 of \textbf{Algorithm
\ref{alg:PGA-inhomo-motion-const}}. Then the time evolution of the
pmf vector $\boldsymbol{x}_{k}^{j}$ can be written as: \vspace{-15pt}
\begin{equation}
\boldsymbol{x}_{k+1}^{j}=\boldsymbol{x}_{k}^{j}B_{k}^{j},\thinspace\textrm{where }B_{k}^{j}=\begin{cases}
C_{k}^{j} & \textrm{ if }R[i]\not\in\Pi\\
\tilde{M}_{k}^{j} & \textrm{ otherwise }
\end{cases}\thinspace.\label{eq:time_evolution_motion_constraint}
\end{equation}
Note that the Markov matrix $\tilde{M}_{k}^{j}$ in (\ref{eq:time_evolution_motion_constraint})
is computed using Proposition \ref{thm:Markov_matrix_new} and then
modified using Proposition \ref{thm:Markov_matrix_modified}. Let
$V_{0,k}^{j}$ denote the row stochastic matrix defined by the forward
matrix multiplication: \vspace{-15pt}
\begin{equation}
V_{0,k}^{j}=B_{0}^{j}B_{1}^{j}\ldots B_{k-2}^{j}B_{k-1}^{j},\quad\textrm{for all }k\in\mathbb{N}\thinspace,\label{eq:matrix_product-1}
\end{equation}
where each $B_{k}^{j}$ is given by (\ref{eq:time_evolution_motion_constraint}).
Similar to (\ref{eq:time_evolution_overall}), the evolution of the
probability vector $\boldsymbol{x}_{k}^{j}$ from an initial condition
to any $k^{\textrm{th}}$ time instant is given by $\boldsymbol{x}_{k}^{j}=\boldsymbol{x}_{0}^{j}V_{0,k}^{j}$
for all $k\in\mathbb{\mathbb{N}}$.
\begin{theorem} \label{thm:Convergence-of-imhomo-MC-motion-const}
\textit{(Convergence of inhomogeneous Markov chains with motion constraints)}
Under Assumptions \ref{assump_SC} and \ref{assump_Pi}, each agent's
time evolution of the pmf vector $\boldsymbol{x}_{k}^{j}$, from any
initial condition $\boldsymbol{x}_{0}^{j}\in\mathbb{R}^{n_{\textrm{cell}}}$,
is given by the inhomogeneous Markov chain $\boldsymbol{x}_{k}^{j}=\boldsymbol{x}_{0}^{j}V_{0,k}^{j}$.
If each agent executes the PSG--IMC algorithm, then $\boldsymbol{x}_{k}^{j}$
asymptotically converges pointwise to the desired stationary distribution
$\boldsymbol{\pi}$, i.e., $\lim_{k\rightarrow\infty}\boldsymbol{x}_{k}^{j}=\boldsymbol{\pi}$
pointwise for all $j\in\{1,\ldots,m\}$. \end{theorem}
\begin{IEEEproof}
We first show that there are finitely many occurrences of the $C_{k}^{j}$
matrix in the matrix product $\lim_{k\rightarrow\infty}V_{0,k}^{j}$.
Due to the design of the Markov matrix in Proposition \ref{thm:Markov_matrix_new},
the bins in the set $\Pi$ are absorbing; i.e., if an agent enters
any of the bins in the set $\Pi$, then it cannot leave the set $\Pi$.
Next we notice that if $R[\ell]$ is a transient bin but not in the
trapping region, then the only possible transitions are to the bins
in $\Pi$. Moreover, once the agent exits the set $\mathcal{T}_{k}^{j}$,
it cannot enter it again. Finally, the number of steps inside the
transient bins is limited by the total number of bins $n_{\textrm{cell}}$.
Hence the $C_{k}^{j}$ matrices can occur only a finite number of times in the
matrix product $\lim_{k\rightarrow\infty}V_{0,k}^{j}$. The new initial
condition of the agent is obtained by forward multiplying the previous
initial condition with the $C_{k}^{j}$ matrices: \vspace{-10pt}
\begin{equation}
\tilde{\boldsymbol{x}}_{0}^{j}=\boldsymbol{x}_{0}^{j}C_{0}^{j}C_{1}^{j}\ldots C_{s-1}^{j}C_{s}^{j}\thinspace,
\end{equation}
where $s$ is the maximum number of steps that the $j^{\textrm{th}}$
agent makes in the transient bins. Hence the overall time evolution
of the pmf $\boldsymbol{x}_{k}^{j}$ can be written as $\lim_{k\rightarrow\infty}\boldsymbol{x}_{k}^{j}=\lim_{k\rightarrow\infty}\boldsymbol{x}_{0}^{j}V_{0,k}^{j}=\lim_{k\rightarrow\infty}\tilde{\boldsymbol{x}}_{0}^{j}U_{0,k}^{j}$.
Here the matrix product $\lim_{k\rightarrow\infty}U_{0,k}^{j}$ is
the product of modified Markov matrices obtained using Proposition
\ref{thm:Markov_matrix_modified}.
Once the agent exits the transient bins, the situation is exactly
that discussed in Case 1 of the proof of Theorem \ref{thm:Convergence-of-imhomo-MC}.
Hence, we now focus on proving that the submatrix $\tilde{M}_{k,\textrm{sub}}^{j}\in\mathbb{R}^{n_{\textrm{rec}}\times n_{\textrm{rec}}}$
of the original Markov matrix $\tilde{M}_{k}^{j}$, similar to that
shown in Fig. \ref{fig:Submatrix}, is primitive when $\xi_{k}^{j}>0$.
The modified Markov submatrix $\tilde{M}_{k,\textrm{sub}}^{j}$ is
a nonnegative matrix due to the motion constraints $A_{k}^{j}$. Nevertheless,
$\tilde{M}_{k,\textrm{sub}}^{j}$ is strongly connected and irreducible
when $\xi_{k}^{j}>0$ due to Assumptions \ref{assump_SC} and \ref{assump_Pi}.
Since $(\tilde{M}_{k,\textrm{sub}}^{j})^{n_{\textrm{rec}}}>0$, the
primitive matrix theorem \cite[Theorem 8.5.2, pp. 516]{Ref:Horn85}
implies that $\tilde{M}_{k,\textrm{sub}}^{j}$ is a primitive matrix.
Then it follows from Case 1 that the product of submatrices $\lim_{k\rightarrow\infty}U_{0,k,\textrm{sub}}^{j}$
is also a primitive matrix. Finally, it follows from the proof of
Theorem \ref{thm:Convergence-of-imhomo-MC} that $\lim_{k\rightarrow\infty}\boldsymbol{x}_{k}^{j}=\boldsymbol{\pi}$
pointwise for all $j\in\{1,\ldots,m\}$.
\end{IEEEproof}
Note that Theorem \ref{thm:Convergence-swarm-dist} and Remark \ref{rem:Markov_identity}
can be directly applied to satisfy the first two objectives of PSG--IMC,
even under motion constraints.
\begin{figure}
\begin{centering}
\begin{tabular}{cc}
\includegraphics[bb=0bp 0bp 529bp 398bp,clip,width=1.3in]{HD_num_trans_Illini_with_motion_const_v75.png} & \includegraphics[bb=0bp 0bp 527bp 398bp,clip,width=1.3in]{HD_num_trans_Illini_with_motion_const_v76.png}\tabularnewline
(a) & (b)\tabularnewline
\end{tabular}
\par\end{centering}
\begin{centering}
\protect\caption{The PSG--IMC algorithm is compared with the homogeneous PGA algorithm
using Monte Carlo simulations. The figures show (a) convergence of
the swarm to the desired formation in terms of HD and (b) the number
of agents transitioning per time step, along with their $3\sigma$
error bars. \label{fig:Desired-dist-Convergence-with-motion-const}}
\par\end{centering}
\vspace{-10pt}
\end{figure}
\vspace{-10pt}
\subsection{Numerical Example \label{sub:Numerical-Example-1}}
In this example, the PSG--IMC algorithm with motion constraints is
used to guide a swarm containing $m=3000$ agents to the desired formation
$\boldsymbol{\pi}$ associated with the UIUC logo shown in Fig. \ref{fig:Pi-example}(b).
Monte Carlo simulations were performed to compare the PSG--IMC algorithm
with the homogeneous PGA algorithm and the cumulative results from
$50$ runs are shown in Fig. \ref{fig:Desired-dist-Convergence-with-motion-const}.
As shown in Fig. \ref{fig:Color-plot-of-agents-with-motion-const},
at $k=1$, each simulation starts with the agents uniformly distributed
across $\mathcal{R}\subset\mathbb{R}^{2}$, which is partitioned into
$30\times30$ bins. Each agent independently executes the PSG--IMC
with motion constraints, illustrated in \textbf{Algorithm \ref{alg:PGA-inhomo-motion-const}}.
During the consensus stage, each agent is allowed to communicate with
those agents which are at most $10$ steps away. The communication
matrix $P_{k}$ is generated using Metropolis weights \cite{Ref:Boyd04},
hence it satisfies Assumption \ref{assump_weights2}. Moreover, each
agent is allowed to transition to only those bins which are at most
$5$ steps away.%
\footnote{This simulation video is available at \url{http://youtu.be/KFFCYHgLfvw}.%
}
As shown in the HD graph in Fig. \ref{fig:Desired-dist-Convergence-with-motion-const}(a),
the desired formation is almost achieved within the first $200$ time
steps for all simulation runs. After the $500^{\textrm{th}}$ time
step, the swarm is externally damaged by eliminating approximately
$460\pm60$ agents from the middle section of the formation. This
can be seen by comparing the images for the $500^{\textrm{th}}$
and $501^{\textrm{st}}$ time step in Fig. \ref{fig:Color-plot-of-agents-with-motion-const}
and the discontinuity during the $501^{\textrm{st}}$ time step in
Fig. \ref{fig:Desired-dist-Convergence-with-motion-const}. Note that
the swarm always recovers from this damage and attains the desired
formation within another $100$ time steps. Thus the third objective
of PSG--IMC, stated in Section \ref{sec:Problem-Statement}, is also
achieved. From the correlation between the plots of HD and number
of transitions in Fig. \ref{fig:Desired-dist-Convergence-with-motion-const}(b),
we can infer that the agents transition only when necessary.
During the first $500$ time steps, each agent in the PSG--IMC algorithm
undergoes just $10$ transitions, compared to approximately $60$
transitions in the homogeneous PGA algorithm. Moreover, the swarm
of agents executing the PSG--IMC algorithm converges faster to the
desired formation during this stage as seen in Fig. \ref{fig:Desired-dist-Convergence-with-motion-const}(a).
Hence, it is evident from the cumulative results in Fig. \ref{fig:Desired-dist-Convergence-with-motion-const}
that the PSG--IMC algorithm is more efficient than the homogeneous
PGA algorithm.
\begin{figure}[h]
\begin{centering}
\includegraphics[bb=0bp 0bp 405bp 285bp,clip,width=3in]{six_diff_views_of_Illini_with_motion_const_v6.pdf}
\par\end{centering}
\vspace{-5pt}
\begin{centering}
\protect\caption{Histogram plots of the swarm distribution at different time instants
for $3000$ agents, in a sample run of the Monte Carlo simulation.
The colorbar represents the pmf of the swarm distribution. Each agent
is allowed to transition to only those bins which are at most $5$
steps away. \label{fig:Color-plot-of-agents-with-motion-const}}
\par\end{centering}
\vspace{-15pt}
\end{figure}
\section{Guidance of Spacecraft Swarms \label{sec:Guidance-Spacecraft}}
Since it is technically feasible to develop and deploy swarms ($100$s--$1000$s)
of femtosatellites \cite{Ref:Hadaegh13}, in this section we solve
the guidance problem for such swarms of spacecraft in Earth orbit.
Assume that the spacecraft are located in the Local Vertical Local
Horizontal (LVLH) frame rotating around Earth. As shown in Fig. \ref{fig:Changing-bin-locations-HCW}(a),
for some initial conditions and no further control input, each spacecraft
is in a closed elliptical orbit in the LVLH frame, called passive
relative orbit (PRO) \cite{Ref:Morgan12}. If time-varying bins ($R_{k}[i],\thinspace\forall i\in\{1,\ldots,n_{\textrm{cell}}\}$)
are designed so that each spacecraft continues to coast along its PRO
in the LVLH frame, as shown in Fig. \ref{fig:Changing-bin-locations-HCW}(a),
then no control input is required to transition from $R_{k}[i]$ to
$R_{k+1}[i]$.
Motion constraints could arise from the spacecraft dynamics, as it
might be infeasible for the spacecraft in a certain bin to transition
to another bin within a single time step due to the large distance
between these bins. Some constraints could also arise from a limit
on the amount of fuel that can be consumed in a single time step or
limited control authority. A time-varying motion constraints matrix
$A_{k}^{j}$ is designed to handle such motion constraints due to
spacecraft dynamics. If a spacecraft is currently in bin $R_{k}[i]$,
then it can only transition to the light blue cells in Fig. \ref{fig:Changing-bin-locations-HCW}(b)
during the $(k+1)^{\textrm{th}}$ time instant due to the motion constraints.
We have already shown that if each spacecraft executes the PSG--IMC
algorithm, then each spacecraft satisfies these motion constraints,
the swarm converges to the desired formation, and the Markov matrices
converge to the identity matrix.
\begin{center}
\begin{figure}
\begin{centering}
\begin{tabular}{cc}
\includegraphics[bb=310bp 70bp 700bp 430bp,clip,width=1.3in]{HCW_motion_constraint_v5.pdf} & \includegraphics[bb=315bp 0bp 831bp 510bp,clip,width=1.3in]{Guidance_20steps_only_bins_v8.pdf}\tabularnewline
(a) & (b)\tabularnewline
\end{tabular}
\par\end{centering}
\begin{centering}
\protect\caption{(a) The PRO through bin $R_{k}[i]$ and the time-varying bin locations
in the LVLH frame. (b) Location of the bins at different time steps
in the LVLH frame along with the respective swarm distributions. \label{fig:Changing-bin-locations-HCW}}
\par\end{centering}
\vspace{-15pt}
\end{figure}
\par\end{center}
\vspace{-25pt}
We now extend the example discussed in Section \ref{sub:Numerical-Example-1}
to a swarm of spacecraft in Earth orbit. If $\left(x[i],\thinspace y[i]\right)$
denotes the location of the bin $R[i]$ in the $30\times30$ grid,
then the time-varying location of the centroid of the bin on the PRO
in the LVLH frame is given by: \vspace{-10pt}
\begin{equation}
\boldsymbol{\kappa}_{k}[i]=\left(\begin{array}{c}
\frac{1}{2}(1+\frac{1}{15}x[i])\sin(\frac{\pi}{10}k+\frac{\pi}{300}y[i])\\
(1+\frac{1}{15}x[i])\cos(\frac{\pi}{10}k+\frac{\pi}{300}y[i])
\end{array}\right)\thinspace.
\end{equation}
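As an aside for implementers, the centroid expression above is straightforward to evaluate numerically. The following is a minimal Python sketch (the function and variable names are our own, not part of the paper):

```python
import math

def bin_centroid(k, x, y):
    """Time-varying centroid of the bin at grid location (x, y) on its PRO
    in the LVLH frame, mirroring the closed-form expression for kappa_k[i]:
    the first component is scaled by 1/2, producing the 2:1 relative ellipse."""
    radius = 1.0 + x / 15.0
    phase = math.pi / 10.0 * k + math.pi / 300.0 * y
    return (0.5 * radius * math.sin(phase), radius * math.cos(phase))
```

Sweeping $k$ traces each bin centroid around its elliptical relative orbit, which is why no control input is needed for an agent that simply stays with its bin.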
Fig. \ref{fig:Changing-bin-locations-HCW}(b) shows the locations
of the time-varying bins at different time steps along with the respective
swarm distributions. Similar to the previous example, the swarm converges
to the desired formation and the spacecraft settle down after the
final formation is achieved.
\section{Conclusions}
This paper presents a new approach to shaping and reconfiguring a
large number of autonomous agents in a probabilistic framework. In
the proposed PSG--IMC algorithm, each agent independently determines
its own trajectory so that the overall swarm asymptotically converges
to the desired formation, which is robust to external disturbances
or damages. Compared to prior work using homogeneous Markov chains
for all agents, the proposed algorithm using inhomogeneous Markov
chains helps the agents avoid unnecessary transitions when the swarm
converges to the desired formation. We present a novel technique for
constructing inhomogeneous Markov matrices in a distributed fashion,
thereby achieving and maintaining the desired swarm distribution while
satisfying motion constraints. Using a consensus algorithm along with
communication with neighboring agents, the agents estimate the current
swarm distribution and transition so that the Hellinger distance between
the estimated swarm distribution and the desired formation is minimized.
The application of the PSG--IMC algorithm to guide a swarm of spacecraft
in Earth orbit is discussed. Results from multiple simulation runs
demonstrate the properties of self-repair capability, reliability
and improved convergence of the proposed PSG--IMC algorithm.
\section{Introduction}
Hawkes processes \citep{hawkes_spectra_1971} are a class of point processes that are used to model event data when the events can occur in clusters or bursts. Typical examples include earthquakes \citep{omi_estimating_2014}, stock transaction \citep{rambaldi_role_2017}, crime \citep{Shelton}, and social media \citep{lai_topic_2016, mei_neural_2017}.
The Hawkes process is defined by a conditional intensity function $\lambda(\cdot)$ which controls the probability of events occurring at each interval in time, based on the previous history of the process. The conditional nature of the intensity function allows the intensity to increase for a short period of time whenever an event occurs, which results in a higher probability of further events occurring, hence creating event clusters and bursts.
Typically $\lambda(\cdot)$ is assigned a parametric form which allows for a relatively straightforward estimation using maximum likelihood or Bayesian methods \citep{Veen_2008_estimation,rasmussen_bayesian_2013}. Alternatively, a nonparametric estimation of the intensity function has been proposed for a variety of settings: stochastic intensity functions \citep{donnet_nonparametric_2018} or using kernel methods \citep{choi_nonparametric_1999}, LTSM neural networks \citep{mei_neural_2017}, and GANs \citep{xiao_wasserstein_2017}.
Most applications of the Hawkes process assume that the entire history of events has been accurately observed. However, many real world applications suffer from missing data where some events are undetected \citep{mei_imputing_2019}, or from noisy data where the recorded event times are inaccurate \citep{trouleau_learning_2019}. We refer to this as data distortion, and it can occur for several reasons. For example, in earthquake catalogues it is well known that the occurrence of large earthquakes has a masking effect which reduces the probability of subsequent earthquakes being detected for a period of time \citep{helmstetter_comparison_2006, omi_estimating_2014, arcangelis_overlap_2018}. Alternatively, event data may simply not be available for a certain interval of time, such as in a terrorism data set considered by \citet{tucker_handling_2019}.
Distorted (e.g. missing or noisy) data causes serious problems for Hawkes processes. If the model parameters are learned using only the observed data then the estimation of $\lambda(\cdot)$ may be severely biased. As such, a principled learning algorithm needs to consider the impact of the distortion. So far, there has only been limited work on learning Hawkes processes in the presence of distorted data. The main exception to this is when the distortion takes the form of gaps in the observations where no events are detected at all for a certain period \citep{Le, Shelton}. In this context, \citet{tucker_handling_2019} develop a Bayesian estimation algorithm which uses MCMC to impute missing events, and a similar approach is proposed by \cite{mei_imputing_2019} using particle smoothing. \citet{linderman_bayesian_2017} view the true generating process as a latent variable, which can be learned through sequential Monte Carlo techniques. Other examples look at specific instances of censored data \citep{xu_learning_2017} or asynchronous data \citep{upadhyay_deep_2018, trouleau_learning_2019}
In this paper, we present a more general approach for estimating Hawkes processes in the presence of distortion, which can handle a much wider class of distortion scenarios, including the case of gaps in the observed data, the case where there is a reduced probability of detecting events during some time period, and the case of noise in the recorded observation times. Our approach assumes the existence of a general distortion function $h(\cdot)$ which specifies the type of distortion that is present. The resulting Hawkes process likelihood is computationally intractable, since the self-excitation component involves triggering from the (unobserved) true event times, which must be integrated out to give the likelihood of the observed data. To solve this problem, we propose a novel estimation scheme using Approximate Bayesian Computation \citep[ABC,][]{marin_approximate_2012} to learn the Hawkes intensity in the presence of distortion. The resulting algorithm, ABC-Hawkes, is based on applying ABC using particular summary statistics of the Hawkes process, with separate convergence thresholds for each statistic.
The paper is organized as follows. In Section~\ref{sec_problemoverview} we introduce the Hawkes process and the distorted data setting. Section~\ref{sec_ABC} summarizes ABC and introduces the ABC-Hawkes algorithm for parameter learning in the presence of distorted data. Section~\ref{sec_realdata} provides experimental results using Twitter data and simulations. We finish with a discussion of our work and contributions, as well as comments on future research. The \textsf{R} code for our analyses can be found in the supplementary material.
\section{Problem Overview} \label{sec_problemoverview}
In this section we define the Hawkes process and propose modeling data distortion through a distortion function. We highlight the problem arising from missing or noisy events, which causes the Hawkes likelihood function to become computationally intractable.
\subsection{Hawkes Processes} \label{sec_Hawkes}
The (unmarked) Hawkes process introduced by \citet{hawkes_spectra_1971} is a self-exciting point process which models a collection of event times $Y = (t_i)_{i = 1}^N$. The occurrence of each event causes a short-term increase in the underlying point process conditional intensity function $\lambda(\cdot)$, known as self-excitation. This naturally produces temporal clusters of events, and it is hence an appropriate model for events which occur in bursts. More formally, a Hawkes process is a point process defined on the interval $[0, T]$ with a conditional intensity function:
\begin{equation}
\lambda(t| H_t,\theta) = \lambda_0(t|\theta) + \sum_{i: t > t_i}\nu(t-t_i|\theta)
\label{eqn:hawkes}
\end{equation}
where $\theta$ is a vector of model parameters, and $H_t = \{t_i \mid t_i < t\}$ denotes the set of events which occurred prior to time $t$. Here, $\lambda_0(t) > 0$ is the background intensity which defines the equilibrium rate at which events occur. The excitation kernel $\nu(z)>0$ controls how much the intensity increases in response to an event that occurred $z$ time units earlier. Typically, the excitation kernel is monotonically decreasing, so that more recent events are more influential. The choice of kernel varies from application to application. A typical choice for an unmarked point process is the exponential kernel \citep{NIPS2012_4834, rasmussen_bayesian_2013, Shelton}:
\begin{equation}
\nu(z|\theta) = K \beta e^{- \beta z}, \quad \theta = \{K, \beta\} \label{eq_exponential}.
\end{equation}
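As a concrete illustration, the conditional intensity of Equation~\eqref{eqn:hawkes} with this kernel can be evaluated directly. The following minimal Python sketch (function name is ours) assumes a constant background rate $\mu$:

```python
import math

def hawkes_intensity(t, history, mu, K, beta):
    """Conditional intensity lambda(t | H_t) with constant background mu
    and exponential excitation kernel nu(z) = K * beta * exp(-beta * z).
    Only events strictly before t contribute to the excitation sum."""
    return mu + sum(K * beta * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)
```

Each past event adds a contribution that decays at rate $\beta$, so recent events dominate the sum, which is exactly the clustering mechanism described above.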
The Hawkes process can also be interpreted as a branching process \citep{hawkes_cluster_1974}, where ``immigrant'' events are generated from the background process with intensity $\lambda_0(t)$, with each event spawning an additional offspring process with intensity $\nu(z)$ which produces further events, and so on.
For a set of observations $Y$ the likelihood of a Hawkes process is given by \citep{daley_introduction_2003}:
\begin{equation}
p(Y | \theta)=\prod_{i=1}^N \lambda\left(t_{i} | H_{t_i}, \theta \right) e^{-\int_{0}^{T} \lambda\left(z | H_z, \theta \right) \,d z} \label{eq_etas_lik}
\end{equation}
where $\theta$ is the vector of unknown model parameters which must be estimated in order to fit the Hawkes process to the data. For ease of exposition, we will assume without loss of generality that the background intensity $\lambda_0(t) = \mu$ is constant, and that the excitation kernel is exponential. In this case, $\theta =(\mu, K, \beta)$. However nothing in our algorithm requires these assumptions, and our method is equally applicable to any other choice of background rate or kernel function.
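For this constant-background, exponential-kernel case the compensator integral in Equation~\eqref{eq_etas_lik} has a closed form, $\mu T + \sum_i K(1 - e^{-\beta(T - t_i)})$, so the log-likelihood can be sketched as follows (an $O(N^2)$ Python illustration with names of our own choosing; faster recursive evaluations are standard):

```python
import math

def hawkes_loglik(events, T, mu, K, beta):
    """Log-likelihood of a Hawkes process on [0, T] with constant
    background mu and exponential kernel K * beta * exp(-beta * z).
    `events` must be sorted in increasing order."""
    loglik = 0.0
    for i, t in enumerate(events):
        # intensity at t_i, triggered by all earlier events
        lam = mu + sum(K * beta * math.exp(-beta * (t - tj))
                       for tj in events[:i])
        loglik += math.log(lam)
    # closed-form compensator: integral of the intensity over [0, T]
    compensator = mu * T + sum(K * (1.0 - math.exp(-beta * (T - t)))
                               for t in events)
    return loglik - compensator
```

This is the quantity that becomes intractable once distortion is introduced, since the inner sum should range over the true (partly unobserved) event times.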
\subsection{Distorted Data} \label{sec_missing_data}
In many applications, the observed data will be distorted, i.e. noisy or containing missing events. This is often due to data collection issues which result in some events being undetected or observed with error. Our proposal is to model this distorted data using a distortion function $h(\cdot)$; we now present some examples of specific choices of this function.
A common type of data distortion is \textbf{missing data} where a subset of the data is not observed. This is a well-known phenomenon which affects the use of Hawkes processes in earthquake forecasting since seismic detectors lose their ability to detect smaller earthquakes in the immediate aftermath of large earthquakes \citep{omi_estimating_2014}. In other examples, data from a certain period might simply have been lost, as is the case in the terrorism data set discussed by \cite{tucker_handling_2019}.
In the case of missing data, let $h(t)$ be a detection function specifying the probability that an event occurring at time $t$ will be successfully detected, and hence be present in the observed data $Y$. The observed data can be viewed as having arisen from the following generative process: First, a set of events $(t_1, \ldots, t_K)$ are generated from a Hawkes process with some intensity function $\lambda(\cdot)$. Then, for each event $t_k$ for $k = 1, \dots, K$, let $D_k = 1$ with probability $h(t_k)$ and $D_k = 0$ with probability $1-h(t_k)$. The observed data is then the collection of events for which $D_k = 1$, so that $Y = \{t_k | k: D_k = 1\}$. Note that this formulation of missing data is quite general and includes the classic ``gaps in the data'' scenario as a special case, since this is equivalent to setting $h(t) = 0$ during the period where gaps occur, and $h(t)=1$ elsewhere. However, this specification is highly flexible and also covers scenarios where events are not guaranteed to be missing for a certain period of time, but are instead missing with a (possibly time-dependent) non-zero probability, as in the earthquake scenario.
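This generative process is straightforward to simulate: draw the true events, for example with Ogata's thinning algorithm (valid here because the exponential-kernel intensity is non-increasing between events), and then flag each one as detected with probability $h(t_k)$. A hedged Python sketch, with names of our own choosing:

```python
import math
import random

def simulate_distorted_hawkes(mu, K, beta, T, h, rng=None):
    """Simulate a Hawkes process on (0, T] by Ogata's thinning algorithm,
    then keep each true event with detection probability h(t).
    Returns (true_events, observed_events)."""
    rng = rng or random.Random()
    true_events, observed = [], []
    t = 0.0
    while True:
        # intensity just after time t: a local upper bound, since the
        # exponential-kernel intensity only decays until the next event
        lam_bar = mu + sum(K * beta * math.exp(-beta * (t - ti))
                           for ti in true_events)
        t += rng.expovariate(lam_bar)
        if t > T:
            break
        lam_t = mu + sum(K * beta * math.exp(-beta * (t - ti))
                         for ti in true_events)
        if rng.random() * lam_bar <= lam_t:   # thinning step
            true_events.append(t)
            if rng.random() <= h(t):          # detection step D_k
                observed.append(t)
    return true_events, observed
```

Note that undetected events still enter `true_events` and therefore keep exciting the process, which is precisely why the observed-data likelihood below is intractable.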
When data is potentially missing, it is very challenging to learn the $\theta$ parameter of the Hawkes process. If the Hawkes process did not have a self-exciting component, then the likelihood function in the presence of missing data would be obtained by assuming that the data came from a modified point process with intensity $r(t) = \lambda(t)h(t)$, i.e. the product of the intensity function and the detection function. The parameters of $r(\cdot)$ could then be learned using a standard method such as maximum likelihood or Bayesian inference. However, when working with Hawkes processes, the situation is substantially more complex. The main issue is that undetected events will still contribute to exciting the process intensity, i.e. the summation in Equation~\eqref{eqn:hawkes} needs to be over both the observed and unobserved events. The resulting likelihood function hence depends on both the set of observed events $Y$ and the set of unobserved events which we denote by $Y_{u}$. This requires integrating out the unobserved events, to give the likelihood function:
\begin{align}
p(Y | \theta) \propto
\int p(Y, Y_{u} | \theta) \,
\prod_{t_i \in Y} h(t_i) \prod_{t_l \in Y_{u}} (1- h(t_l)) \, d Y_{u} \label{eqn:missing}
\end{align}
where $Y_u = \{t_k | k: D_k = 0\}$ denotes the set of missing events. Due to the integral over the unknown number of missing events, this likelihood function is intractable and cannot be evaluated.
For \textbf{noisy data} (rather than missing data), we use a similar approach except that the distortion function $h(t)$ will now specify the time at which an event is observed to have occurred, given that it truly occurs at time $t$. For example, $h(t) = t + \varepsilon_t$ where $\varepsilon_t \sim N(0,\sigma^2_t)$ would be appropriate in a setting where the observation times are corrupted by Gaussian noise, while $h(t) = t + c$ for a constant scalar $c$ would be used when there is a fixed delay present in the recording of all observation times. Such a synchronization example is studied by \citet{trouleau_learning_2019}. As in the case of missing data, this leads to an intractable likelihood function since the excitation in the summation from Equation~\eqref{eqn:hawkes} depends on the true observation time $t_i$ which is unknown.
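To make the two distortion regimes concrete, possible choices of $h(\cdot)$ could be sketched as follows (the specific gap interval, noise scale, and delay are illustrative placeholders, not values from the paper):

```python
import random

def gap_detection(t, start=10.0, end=20.0):
    """Missing-data h(t): events in [start, end) are never detected."""
    return 0.0 if start <= t < end else 1.0

def noisy_time(t, sigma=0.1, rng=random):
    """Noisy-data h(t): the recorded time is the true time plus
    Gaussian noise (sigma assumed known here for illustration)."""
    return t + rng.gauss(0.0, sigma)

def delayed_time(t, c=0.5):
    """Fixed recording delay c, as in the synchronization example."""
    return t + c
```

In the missing-data case $h$ returns a detection probability; in the noisy-data case it returns a distorted time, so the two regimes plug into the generative process differently but share the same interface.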
The distortion function may be parameterized by a vector of parameters $\xi$ which can denote, for example, the detection probability in the case of missing data, or the noise standard deviation $\sigma_t$ in the case of noise. In some situations these parameters will be known due to knowledge of the underlying machinery used to detect the events. In other situations, they may be learned from the data.
Since all the above data distortion scenarios make direct learning of the Hawkes process impossible due to the intractable likelihood, we now propose a novel learning scheme for Hawkes processes with distorted data based on Approximate Bayesian Computation which we refer to as ABC-Hawkes.
\section{Approximate Bayesian Computation} \label{sec_ABC}
ABC is a widely studied approach to Bayesian inference in models with intractable likelihood functions \citep{beaumont_approximate_2002, marjoram_markov_2003,marin_approximate_2012}. We will now review the general ABC framework and then present a version of ABC for sampling from the posterior distribution of the Hawkes process parameters in the presence of distorted data.
Bayesian inference for a parameter vector $\theta \in \Theta$ assumes the existence of a prior $\pi(\theta)$ and a likelihood function $p(Y | \theta)$ for data $Y \in \mathcal{Y}$, with parameter inference based on the resulting posterior distribution $\pi(\theta | Y)$. Traditional tools for posterior inference such as Markov Chain Monte Carlo (MCMC) require the evaluation of the likelihood function and therefore cannot be applied when the likelihood is intractable. In this case, however, it may still be possible to generate samples $Y^{(j)}$ from the model. ABC is an approach to parameter and posterior density estimation which only uses such generated samples, without any need to evaluate the likelihood \citep{beaumont_approximate_2002}.
The core idea of ABC is as follows: We assume that the observed data $Y$ has been generated by some (unknown) value of $\theta$. For a proposed value of $\theta^{(j)}$, we generate pseudo-data $Y^{(j)}$ from the model $p(Y|\theta^{(j)})$ in a way which does not involve evaluating the likelihood function. If $\theta^{(j)}$ is close to the real $\theta$, then we would also expect $Y^{(j)}$ to be `close' to the real (observed) data $Y$, as measured by a similarity function. We can hence accept/reject parameter proposals $\theta^{(j)}$ based only on the similarity between $Y$ and $Y^{(j)}$.
This algorithm crucially depends on how we measure similarity between data sets $Y$ and $Y^{(j)}$. Typically, low-dimensional summary statistics $S(\cdot)$ of the data are chosen and then compared based on some distance metric $\mathcal{D}(\cdot ,\cdot )$ \citep{fearnhead_constructing_2012}. A proposed parameter $\theta^{(j)}$ is then accepted if this distance is less than a chosen threshold $\epsilon$. Generally, this procedure will not target the true posterior $\pi(\theta | Y)$, but instead targets the ABC posterior $\pi_{ABC}(\theta \mid \mathcal{D}(S(Y), S(Y^{(j)})) < \epsilon)$. However, if the statistics $S(\cdot)$ are chosen to be the sufficient statistics for the model parameters and $\epsilon \to 0$, then $\pi_{ABC}(\theta \mid \mathcal{D}(S(Y), S(Y^{(j)})) < \epsilon) \to \pi(\theta | Y)$ \citep{marin_approximate_2012}. A direct implementation of this ABC procedure is based upon rejection sampling, where $\theta^{(j)}$ is proposed from the prior distribution \citep{pritchard_population_1999}. However this can be inefficient, so instead ABC-MCMC methods can be used which make proposals based on a Metropolis-Hastings kernel without the need to evaluate the likelihood function, which can lead to a higher acceptance rate for ABC \citep{beaumont_approximate_2002}.
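The accept/reject idea can be illustrated on a toy likelihood-free problem, here inferring a Poisson rate from a single observed count (this toy example is ours, purely to show the mechanics of rejection ABC):

```python
import math
import random

def abc_rejection(obs_stat, prior_sample, simulate, stat, eps, J, rng):
    """Plain ABC rejection sampling: draw theta from the prior, simulate
    pseudo-data, and keep theta whenever the simulated summary statistic
    falls within eps of the observed one."""
    accepted = []
    while len(accepted) < J:
        theta = prior_sample(rng)
        if abs(stat(simulate(theta, rng)) - obs_stat) < eps:
            accepted.append(theta)
    return accepted

def poisson_count(lam, rng):
    """One Poisson(lam) draw via Knuth's multiplication method,
    standing in for a simulator whose likelihood we refuse to evaluate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1
```

For instance, with a single observed count of $50$ and a $\mathcal{U}(0,100)$ prior on the rate, the accepted draws concentrate near $50$, mimicking the posterior without a single likelihood evaluation.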
In most realistic applications, it is not possible to choose a low-dimensional set of summary statistics which is sufficient for the parameter vector, since only the limited class of distributions that lie in the exponential family admit a finite dimensional set of sufficient statistics \citep{brown_fundamentals_1986}. Therefore, finding suitable summary statistics and an appropriate distance metrics for a particular model is non-trivial, and is a vital part of designing efficient ABC algorithms \citep{marin_approximate_2012}. Some authors construct summary statistics which are carefully tailored to their application \citep{aryal_fitting_2019}, while others present a semi-automatic way to constructing summary statistics \citep{fearnhead_constructing_2012}. \citet{bernton_approximate_2019} develop a general approach, that uses the Wasserstein Distance between the observed and synthetic data set, hence eliminating the need for summary statistics altogether.
\subsection{ABC-Hawkes Algorithm} \label{sec_ABC_pointprocess}
Our core idea is to use ABC to perform inference for the Hawkes process with distorted data, which circumvents the intractability of the likelihood function. This is made possible by the fact that simulating data from the distorted generative model is straightforward, and can be done by first simulating data from the Hawkes process with intensity function $\lambda(\cdot)$ which represents the true (unobserved) data and then distorting these events based on the distortion function $h(\cdot)$ as discussed in Section~\ref{sec_missing_data}, which gives the observed data $Y$. The simulation from $\lambda(\cdot)$ can be carried out using a standard simulation algorithm for Hawkes processes such as the thinning procedure of \citet{ogata_lewis_1981}. The distortion of the data is then performed by applying $h(\cdot)$ to each simulated data point, which introduces missingness and/or noise. For example in the case of gaps, this would consist of deleting the simulated observations which lie within the gap region, while for noise it would consist of adding noise to the simulated data, as specified by $h(\cdot)$. The resulting observations are hence a realization of the Hawkes process with intensity function $\lambda(\cdot)$ that has been distorted through $h(\cdot)$.
For ABC-Hawkes we use a variant of the ABC-MCMC algorithm, as shown in Algorithm~\ref{alg_abc_MCMC}. This is an extension of the usual Metropolis-Hastings algorithm which essentially replaces the intractable likelihood function with an estimate based on the simulated data and can be shown to converge correctly to the ABC posterior $\pi_{ABC}(\theta \mid \mathcal{D}(S(Y), S(Y^{(j)})) < \epsilon)$ \citep{marjoram_markov_2003}.
\begin{algorithm}[tbp]
\caption{ABC-Hawkes} \label{alg_abc_MCMC}
\begin{algorithmic}
\STATE {\bfseries Input:} observed data $Y$ where data is distorted according to a distortion function $h(\cdot)$, the parameter prior $\pi(\cdot)$, the desired number of posterior samples $J$, a function to compute the $P$ summary statistics $S_1(\cdot),\ldots,S_P(\cdot)$, the $P$ threshold levels $\epsilon_p > 0$, and a Metropolis-Hastings transition kernel $q(\cdot | \cdot)$ \\
\hrulefill
\STATE{Initialise $\theta^{(0)}$}
\FOR {$j=1$ {\bfseries to} $J$}
\STATE{$\theta^* \sim q(\cdot | \theta^{(j -1)})$}
\STATE{$ Z^{*} \sim p(\cdot | \theta^{*})$, where $p(\cdot)$ simulates from a Hawkes process}
\STATE{$Y^{*} = h(Z^{*})$}
\IF{$|S_p(Y^{*}) - S_p(Y)| < \epsilon_p$ for all $p = 1, \dots, P$}
\STATE{With probability $\min \left\{1, \frac{q(\theta^{(j-1)}|\theta^{*}) \pi(\theta^*)}{q(\theta^*|\theta^{(j-1)}) \pi(\theta^{(j-1)})}\right\}$ set $\theta^{(j)} = \theta^*$}
\ELSE
\STATE{Set $\theta^{(j)} = \theta^{(j-1)}$}
\ENDIF
\ENDFOR
\STATE {\bfseries Output:} $(\theta^{(1)}, \dots, \theta^{(J)})$
\end{algorithmic}
\end{algorithm}
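A minimal sketch of this loop in Python follows (the function names and interfaces are ours, and we assume a symmetric proposal kernel so the $q$ terms cancel in the acceptance ratio):

```python
import math
import random

def abc_mcmc(y_obs, stats, thresholds, simulate, distort,
             log_prior, propose, theta0, J, rng):
    """Sketch of the ABC-MCMC loop in Algorithm 1. A proposal is first
    screened against the per-statistic thresholds; only then is it
    accepted with the Metropolis-Hastings probability (symmetric q)."""
    s_obs = stats(y_obs)
    theta, chain = theta0, []
    for _ in range(J):
        theta_star = propose(theta, rng)
        y_star = distort(simulate(theta_star, rng), rng)
        close = all(abs(a - b) < e
                    for a, b, e in zip(stats(y_star), s_obs, thresholds))
        if close:
            log_ratio = log_prior(theta_star) - log_prior(theta)
            if log_ratio >= 0 or math.log(rng.random()) < log_ratio:
                theta = theta_star
        chain.append(theta)  # the current state is recorded either way
    return chain
```

Here `simulate` plays the role of the Hawkes sampler $p(\cdot|\theta^*)$ and `distort` applies $h(\cdot)$; with uniform priors, the acceptance step reduces to a support check.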
The choices of summary statistics $S(Y)$ and corresponding threshold $\epsilon$ are crucial for the application of ABC \citep{marin_approximate_2012}. While the ABC literature offers a wealth of theory on summary statistics, their actual construction is less prominent and tends to be highly application-dependent. In the context of Hawkes processes, this is complicated by the non-i.i.d. structure of the data, which renders many existing approaches inapplicable, as they require multiple (bootstrapped) samples of the original data set: examples include using random forests to find summary statistics \citep{pudlo_reliable_2016}, or utilizing a classifier or reinforcement learning to judge the similarity between data sets \citep{gutmann_likelihood-free_2018, li_learning_2018}. These approaches are not applicable here as only one data set is available and bootstrap sampling cannot be used due to the complex dependency structure of self-exciting point processes.
While there has been some previous literature on ABC for Hawkes processes outside of the distorted data setting \citep{ertekin_reactive_2015, shirota_approximate_2017}, we found that the summary statistics presented there did not extend well to distorted data. Instead, through extensive simulations we identified a set of summary statistics which we empirically found to accurately capture the posterior distribution of the Hawkes process parameters, and in the Experiments section we provide evidence to support this. For ABC-Hawkes, the resulting summary statistics $S(Y) = (S_1(Y), \ldots, S_7(Y))$ we use are:
\begin{itemize}
\item $S_1(Y)$: The logarithm of the number of observed events in the process. This is highly informative for the $\mu$ and $K$ parameters, since $\mu$ controls the number of background events, while $K$ defines how much total triggering is associated with each event.
\item $S_2(Y)$: The median of the event time differences $\Delta_i = t_i - t_{i-1}$ divided by the mean event time difference $\mathbb{E}[\Delta]$. This is highly informative for the parameters of the self-excitation kernel (e.g. $\beta$, for an exponential kernel).
\item $S_3(Y) \dots S_5(Y)$: Ripley's $K$ statistic \citep{ripley_1977_modelling} for window sizes of $1$, $2$, and $4$. This counts the events that happen within the respective window-length of each other; we do not apply an edge correction at the borders. This captures the degree of clustering in the event sequence and is hence informative for $K$ and the parameters of the excitation kernel.
\item $S_6(Y), S_7(Y)$: The average of the $\Delta_i$ differences that lie above their $90\%$-quantile $\mathbb{E}[\Delta_i | \Delta_i > q_{90}]$ and below their median $\mathbb{E}[\Delta_i | \Delta_i < q_{50}]$. In extensive simulation studies we found that these complement the other statistics well and lead to an accurate approximation of the posterior distribution.
\end{itemize}
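A direct pure-Python implementation of these seven statistics might look as follows (our own sketch; the empirical quantile convention and the raw, uncorrected Ripley pair count are simplified choices):

```python
import math
import statistics

def summary_stats(times, windows=(1.0, 2.0, 4.0)):
    """Compute S1..S7 for a sorted-or-unsorted list of event times."""
    times = sorted(times)
    n = len(times)
    deltas = [b - a for a, b in zip(times, times[1:])]
    mean_delta = sum(deltas) / len(deltas)
    s1 = math.log(n)                                    # S1: log event count
    s2 = statistics.median(deltas) / mean_delta         # S2: median/mean gap
    ripley = []                                         # S3-S5: pair counts
    for w in windows:
        count = 0
        for i in range(n):
            j = i + 1
            while j < n and times[j] - times[i] <= w:
                count += 1
                j += 1
        ripley.append(count)
    q90 = sorted(deltas)[int(0.9 * (len(deltas) - 1))]  # empirical 90% quantile
    med = statistics.median(deltas)
    upper = [d for d in deltas if d > q90]
    lower = [d for d in deltas if d < med]
    s6 = sum(upper) / len(upper) if upper else 0.0      # S6: mean of top tail
    s7 = sum(lower) / len(lower) if lower else 0.0      # S7: mean below median
    return [s1, s2, *ripley, s6, s7]
```

The statistics deliberately mix global scale information ($S_1$, $S_2$) with clustering information ($S_3$--$S_7$), so that no single parameter of the Hawkes process is left unconstrained.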
When using multiple summary statistics, careful consideration must be given to how they are combined. We choose to work with a separate threshold $\epsilon_p$ for each summary statistic, set to a fraction of that statistic's empirical standard deviation based on a pilot run of the simulation, such that $0.01$--$0.1$\% of the proposals are accepted \citep{vihola_use_2020}. We hence accept a proposed value $\theta^{(j)}$ only if $\mathcal{D}(S_p(Y), S_p(Y^{(j)})) < \epsilon_p$ for all $p$, using the absolute difference ($L_1$ norm) as the distance metric $\mathcal{D}(\cdot, \cdot)$.
In the case where the detection function $h(\cdot)$ is parameterized based on an unknown set of hyperparameters $\xi$, these can also be sampled from their posterior distribution at each stage in the above MCMC algorithm using standard Metropolis-Hastings methods.
\section{Experimental Results} \label{sec_realdata}
We now present experimental results to show the performance of the ABC-Hawkes algorithm. First, we will present evidence that the set of summary statistics we presented above capture most of the information in the parameter posterior distribution for the Hawkes process when distortion is not present. Next, we will investigate how accurately they allow the parameters to be estimated in the presence of distortion. To do this, we will manually insert data distortion into simulated event sequences where we have access to the true event times. This allows us to compare the posterior distribution estimated by ABC-Hawkes on the distorted data to the ``true'' posterior which would have been obtained if we had access to the undistorted data.
\subsection{No-distortion Setting}
We first confirm whether the above ABC summary statistics allow the posterior distribution to be accurately estimated in a standard Hawkes process without distortion. We will then compare our approach to that of \citet{ertekin_reactive_2015}, which is the only existing example we could find of applying ABC to Hawkes processes, although they do not consider the data distortion setting.
To investigate the capabilities of our algorithm to recover the true posterior distribution without distortion we generate three data sets and compare the posterior estimates from ABC-Hawkes to the ground truth from Stan. We use the following priors: $\mu \sim \mathcal{U}(0.05, 0.85)$, $ K \sim \mathcal{U}(0, 0.9)$, $ \beta \sim \mathcal{U}(0.1, 3)$. As shown in Figure~\ref{fig_post_proofofconcept} ABC-Hawkes can approximate the posterior distributions, both in location and shape, in this simulation study. We note that the overestimation of the posterior variance for $\beta$ in the first data set is an often observed issue in ABC stemming from a necessary non-zero choice of the $\epsilon_p$ threshold \citep{li_convergence_2017}.
\begin{figure}[t]
\flushleft
\begin{subfigure}{.09\textwidth}
\flushright
Data Set 1
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data1_mu.pdf}
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data1_K.pdf}
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data1_beta.pdf}
\end{subfigure} \\
\begin{subfigure}{.09\textwidth}
\flushright
Data Set 2
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data2_mu.pdf}
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data2_K.pdf}
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data2_beta.pdf}
\end{subfigure}\\
\begin{subfigure}{.09\textwidth}
\flushright
Data Set 3
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data3_mu.pdf}
\caption{$\mu$}
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data3_K.pdf}
\caption{$K$}
\end{subfigure}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ProofofConcept/sim_data3_beta.pdf}
\caption{$\beta$}
\end{subfigure}
\caption{Undistorted data posterior distributions. Each row uses a different simulated data set. Parameters $(\mu, K, \beta)$ are chosen as $(0.2, 0.5, 0.5)$, $(0.4, 0.3, 0.8)$, and $(0.3, 0.3, 1)$ for the three data sets. The black curve shows the true posterior estimated by samples generated from Stan, red represents ABC-Hawkes. All estimates are based on the complete, undistorted data sets.}
\label{fig_post_proofofconcept}
\end{figure}
In \citet{ertekin_reactive_2015}, a Hawkes process is defined which has an intensity function containing both an inhibiting and exciting component, as well as a constant background intensity $\lambda_0$, and $C_1$, a term to deal with zero-inflation. In their simulation study (provided in their supplementary materials) they fix both $\lambda_0$ and $C_1$ to their true values, which only leaves two free parameters, i.e. they do not consider the full estimation problem where the background intensity needs to be learned in addition to the triggering kernel. To estimate the posterior distributions using ABC \citet{ertekin_reactive_2015} use two summary statistics: the log-number of events and the KL divergence between the histograms of the interevent times $\Delta_i$ of the true and simulated data set.
For our comparison we generate three data sets from a Hawkes process. We use a constant background intensity $\mu$, which is assumed fixed and known for all methods to facilitate a direct comparison with \citet{ertekin_reactive_2015}. Hence, we are left to estimate the posteriors of the two free parameters of the exponential excitation kernel from Equation~\eqref{eq_exponential}, i.e. $\theta' = (K, \beta)$. The priors for the Hawkes parameters are chosen to be relatively uninformative: $ K \sim \mathcal{U}(0, 0.9)$, $ \beta \sim \mathcal{U}(0.1, 3)$. Without any data distortion, Figure~\ref{fig_post_ert} compares the true posterior distribution to the estimates using the two summary statistics from \citet{ertekin_reactive_2015} and ABC-Hawkes. It is evident that ABC-Hawkes does a better job of capturing the posterior distributions across all parameters and data sets, showing that it is able to estimate the Hawkes parameters despite not evaluating the likelihood function.
\begin{figure}[t]
\flushleft
\begin{subfigure}{.2\textwidth}
\flushright
Data Set 1
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ertekin/sim_data1_K.pdf}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ertekin/sim_data1_beta.pdf}
\end{subfigure} \\
\begin{subfigure}{.2\textwidth}
\flushright
Data Set 2
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ertekin/sim_data2_K.pdf}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ertekin/sim_data2_beta.pdf}
\end{subfigure}\\
\begin{subfigure}{.2\textwidth}
\flushright
Data Set 3
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ertekin/sim_data3_K.pdf}
\caption{$K$}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/ertekin/sim_data3_beta.pdf}
\caption{$\beta$}
\end{subfigure}
\caption{Undistorted data posterior distributions. Each row uses a different simulated data set. Parameters $(K, \beta)$ are chosen as $(0.5, 0.5)$, $(0.3, 0.8)$, and $(0.3, 1)$ for the three data sets. The black curve shows the true posterior estimated by samples generated from Stan, red represents ABC-Hawkes, and green the approach by \citet{ertekin_reactive_2015}. All estimates are based on the complete, undistorted data sets.}
\label{fig_post_ert}
\end{figure}
\subsection{Distortion -- Twitter Data Example} \label{sub_twitter}
We next apply the ABC-Hawkes algorithm to parameter estimation in distorted data. For this purpose, we will manually insert distortion into a real data set and then compare the posterior distribution estimated from the distorted data to that estimated from the original undistorted data.
For the real data, we choose to study the occurrence time of tweets on Twitter, which previous research has shown can be accurately modeled by a Hawkes process \citep{mei_neural_2017}. We use a Twitter data set that was previously analyzed by \citet{rizoiu_tutorial_2017} and describes the retweet cascade of an article published in the New York Times. While this data set is complete (and we can hence obtain the true posterior distribution), we will manually create a gap in the data to assess whether the true posterior can be recovered using only this distorted data.
To evaluate ABC-Hawkes, we use the first $150$ event times in the tweet data. To artificially create a gap, we delete all observations from observation $t_{60}$ to observation $t_{90}$ to produce the incomplete data. Relatively uninformative priors are chosen as above: $\mu \sim \mathcal{U}(0.05, 0.85)$, $ K \sim \mathcal{U}(0, 0.9)$, $ \beta \sim \mathcal{U}(0.1, 3)$. The restriction that $K<1$ is standard, and ensures that data sampled from the Hawkes process contain a finite number of events with probability 1.
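In code, the gap construction amounts to a simple deletion (a sketch; we assume here that the removed block $t_{60},\dots,t_{90}$ is inclusive, and the function name is ours):

```python
def delete_gap(event_times, first=60, last=90):
    """Remove events t_first .. t_last (1-based, inclusive) to create
    an artificial observation gap in an ordered event sequence."""
    return event_times[:first - 1] + event_times[last:]
```

Applied to the first $150$ tweet times this removes $31$ events, leaving $119$ observed times.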
We generate samples from the true parameter posteriors using Markov Chain Monte Carlo as implemented in the Stan \citep{Stan_RStan_2019} probabilistic programming language, applied to the complete data. This represents the idealized ``ground truth'' posterior that would be obtained if we had access to the true undistorted data. We then apply ABC-Hawkes to the observed data only (i.e. the distorted, incomplete data), with the goal of recovering this true posterior. In the implementation of the ABC-MCMC algorithm we use independent Gaussian random walk proposal distributions for the $q(\cdot|\cdot)$ transition kernels, with standard deviations $(0.05, 0.05, 0.2)$ for $(\mu, K, \beta)$ respectively. To assess the performance of ABC-Hawkes, we compare the obtained posterior to three alternative methods: (1) MCMC (using Stan) applied to the incomplete data, which represents the naive attempt to learn the Hawkes parameters directly from the observed data while ignoring the missing data. (2) MCMC (using Stan) applied only to the observations in $[0, T_a]$ before the start of the gap. This is ``unbiased'' since it uses a sequence of data where all tweets are available, but is inefficient due to the smaller resulting data set. (3) The missing data algorithm suggested by \cite{tucker_handling_2019}, which is a specialised algorithm applicable only to data where the distortion consists of gaps. Their approach alternates between two steps. First, they impute the missing data based on the complete pre-gap data and a given parameter vector. Second, they update the parameter vector based on the imputed data set of full length, where likelihood evaluations are possible. This approach has the advantage of targeting the correct posterior; however, it can suffer from slow mixing since the probability of an MCMC proposal being accepted is low.
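One iteration of the resulting ABC-MCMC sampler can be sketched as follows (the simulator, distortion mechanism, and summary map are passed in as callables; with independent uniform priors and a symmetric Gaussian random walk, the Metropolis-Hastings ratio reduces to a prior support check plus the ABC acceptance test; the function names are ours):

```python
import random

def abc_mcmc_step(theta, simulate, distort, summarize, obs_stats, eps,
                  prior_lo, prior_hi, step_sd, rng):
    """One ABC-MCMC step with a Gaussian random-walk proposal."""
    prop = tuple(t + rng.gauss(0.0, s) for t, s in zip(theta, step_sd))
    # Reject immediately if the proposal falls outside the uniform prior.
    if any(not lo <= p <= hi for p, lo, hi in zip(prop, prior_lo, prior_hi)):
        return theta
    # Simulate, distort, and summarize a synthetic data set at the proposal.
    sim_stats = summarize(distort(simulate(prop)))
    close = all(abs(a - b) < e for a, b, e in zip(obs_stats, sim_stats, eps))
    return prop if close else theta
```

In the Twitter example the step standard deviations would be $(0.05, 0.05, 0.2)$ for $(\mu, K, \beta)$.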
We note that approaches (2) and (3) are only applicable to this specific choice of distortion function where the distortion consists of no detected events at all during the gap period, and (unlike ABC-Hawkes), are not applicable to more general types of distortion, as will be discussed in the next section.
Figure~\ref{fig_post_unmarked} shows the resulting posterior density estimates for all these methods, and Table~\ref{tab_twitter} contains the posterior means and standard deviations. It can be seen that the ABC-Hawkes algorithm does an excellent job of recovering the posterior distribution despite the missing data, and is very close to the true posterior mean for each of the model parameters. In contrast, the naive approach (which ignores the gap) produces a highly biased posterior which is not close to the true posterior means of $\mu$ and $K$. Both the approach from \cite{tucker_handling_2019} and the method which uses only the observations prior to the gap do substantially better than the naive approach, but are inferior to ABC Hawkes.
\begin{table}[t]
\caption{Twitter data posterior mean (and standard deviation)}
\label{tab_twitter}
\centering
\begin{tabular}{lccc}
\toprule
Model & $\mu$ & $K$ & $\beta$ \\
\midrule
\textbf{True Posterior} & \textbf{0.55 (0.13)} &\textbf{0.65 (0.10)} & \textbf{0.91 (0.28)} \\
\midrule
ABC-Hawkes & 0.59 (0.15) & 0.68 (0.11) & 1.00 (0.51) \\
Naive & 0.22 (0.08) & 0.80 (0.07) & 0.87 (0.23) \\
Pre-gap only & 0.68 (0.13) & 0.61 (0.13) & 1.49 (0.57) \\
\citet{tucker_handling_2019} & 0.61 (0.14) & 0.66 (0.10) & 1.08 (0.34) \\
\addlinespace[0.5ex]
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/twitter/Twitter_mu.pdf}
\caption{$\mu$}
\label{subfig_unmarked_mu}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/twitter/Twitter_K.pdf}
\caption{$K$}
\label{subfig_unmarked_K}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/twitter/Twitter_beta.pdf}
\caption{$\beta$}
\label{subfig_unmarked_beta}
\end{subfigure}
\caption{Twitter data posterior distributions. Black represents the true posterior using both observed and missing data. Red represents ABC-Hawkes using only the incomplete data. The naive approach using only the incomplete data is green, while the yellow and blue lines respectively show the approach that only uses the observations before the gap, and the \citet{tucker_handling_2019} imputation method.}
\label{fig_post_unmarked}
\end{figure}
\subsection{Other Distortion Settings}
The above section showed that ABC-Hawkes can accurately recover the true posterior when there are gaps in the data. However, a key advantage of our approach is that \citep[unlike the approach from][]{tucker_handling_2019} it can also be used when the data is distorted in other ways. To investigate this, we use a simulation study where we generate data sets from a Hawkes process and manually distort them in various ways. The first type of distortion involves a time-varying detection rate, while the second involves noisy data. The prior distribution for $\theta$ is taken to be the same as above and we again use Stan \citep{Stan_RStan_2019} to sample from the idealized parameter posterior using the undistorted data, which acts as a baseline that would be obtained if no distortion were present.
For the first type of distortion, we use a linearly decaying detection function where an event that occurs at time $t$ is observed with probability $ h(t) = 1- (a + b \, \frac{t}{T})$ where $a = 0.35$ and $b = -0.25$, and missing otherwise. Hence, earlier events have a lower probability of being observed. Similar time-varying detection functions have been applied in the earthquake literature \citep{ogata_immediate_2006}, hence this is a plausible specification. For the second type of distortion, we create a ``noisy'' version of the data set by adding Gaussian errors to each observation, i.e. replacing each $t_i$ with $t'_i = t_i + \varepsilon_i$ where $\varepsilon_i \sim \mathcal{N}(0,0.5^2)$.
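Both distortion mechanisms are straightforward to apply to a simulated event sequence; a sketch with the parameter values used here (the function names are ours):

```python
import random

def thin_events(times, T, a=0.35, b=-0.25, rng=random.Random(1)):
    """Keep an event at time t with probability h(t) = 1 - (a + b * t / T);
    with a = 0.35, b = -0.25 detection rises linearly from 0.65 at t = 0
    to 0.90 at t = T, so earlier events are more likely to be missed."""
    return [t for t in times if rng.random() < 1.0 - (a + b * t / T)]

def jitter_events(times, sd=0.5, rng=random.Random(2)):
    """Add independent N(0, sd^2) measurement noise to each event time."""
    return sorted(t + rng.gauss(0.0, sd) for t in times)
```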
For both distorted data sets, we compute the posterior using ABC-Hawkes on the observed (distorted) data using the same Metropolis-Hastings transition kernel as above, and compare it to the idealized posterior computed on the true data. Unlike in the above missing data case, we are not aware of any other published algorithms which can handle these two types of distortion, so the only other comparison we make is to the naive method which learns the posterior using the observed data without taking the distortion into account. In Figure~\ref{fig_post_sim}, the posterior density estimates are plotted, and Table~\ref{tab_simulated} shows the posterior means and standard deviations for both scenarios. Again, ABC-Hawkes manages to learn the model parameters accurately and produces a posterior distribution which is remarkably close to the true posterior. In contrast, the naive approach is severely biased and does not get close to the true posterior. Again we note that the slight overestimation of the posterior variance is an inherent issue with ABC that comes from a necessary non-zero choice of $\epsilon_p$ \citep{li_convergence_2017}.
\begin{table}[!th]
\caption{Simulated data posterior mean (and standard deviation)}
\label{tab_simulated}
\centering
\begin{tabular}{llccc}
\toprule
Distortion & Model & $\mu$ & $K$ & $\beta$ \\
\midrule
& \textbf{True Posterior} & \textbf{0.51 (0.07)} & \textbf{0.15 (0.09)} & \textbf{1.45 (0.80)} \\
Lin. Deletion & ABC-Hawkes & 0.50 (0.08) & 0.21 (0.11) & 1.54 (0.81) \\
& Naive & 0.41 (0.07) & 0.19 (0.12) & 1.22 (0.79) \\
\midrule
& \textbf{True Posterior} & \textbf{0.24 (0.04)} & \textbf{0.38 (0.10)} & \textbf{0.69 (0.31)} \\
Noise & ABC-Hawkes & 0.25 (0.04) & 0.32 (0.09) & 0.98 (0.48) \\
& Naive & 0.29 (0.03) & 0.23 (0.04) & 2.89 (0.11) \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[!th]
\centering
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/lin/sim_lin_mu.pdf}
\label{subfig_sim_exp_mu}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/lin/sim_lin_K.pdf}
\label{subfig_sim_exp_K}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/lin/sim_lin_beta.pdf}
\label{subfig_sim_exp_beta}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/noise/sim_noise_mu.pdf}
\caption{$\mu$}
\label{subfig_sim_noise_mu}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/noise/sim_noise_K.pdf}
\caption{$K$}
\label{subfig_sim_noise_K}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/noise/sim_noise_beta.pdf}
\caption{$\beta$}
\label{subfig_sim_noise_beta}
\end{subfigure}
\caption{Distorted data posterior distributions. \textbf{Top row:} data distortion through a linearly decaying detection function. \textbf{Bottom row:} data distortion by adding noise. \textbf{Colours:} Black represents the true posterior using the undistorted data, red represents ABC-Hawkes using the distorted data, and green represents the naive model using the distorted data. }
\label{fig_post_sim}
\end{figure}
\section{Discussion} \label{sec_summary}
In this paper we have demonstrated that it is possible to successfully learn the parameters of a Hawkes process even when the data is distorted through mechanisms such as missingness or noise. We have based our algorithm on ABC-MCMC, which has been adapted to the unique structure of a self-exciting point process. Unlike a naive MCMC approach which ignores the potential distortion, the resulting ABC-Hawkes algorithm can learn the true posterior distribution that would have been obtained given access to the undistorted data. The strong performance of ABC-Hawkes was demonstrated using a variety of realistic data distorting mechanisms. Future research could expand the theory to other data-distorting mechanisms, for example additive noise in a multivariate setting \citep{trouleau_learning_2019} or censoring \citep{xu_learning_2017}.
While our simulation study focused on a simple Hawkes process with a parametrically specified self-excitation kernel, our approach is much more general than this and can be applied to other specifications of the Hawkes process as long as the resulting model is generative, so that data sets can be simulated from $p(Y|\theta)$ conditional on a parameter vector $\theta$. This includes recent specifications of the Hawkes process using nonparametric estimation \citep{chen_nonparametric_2016} or LSTM networks \citep{mei_neural_2017}, which is a potential avenue for future research.
\bibliographystyle{apalike}
\section{Introduction}
Non-leptonic weak decays are valuable tools for testing the Standard
Model (SM), the Kobayashi-Maskawa (KM) mechanism, and the unitarity of
the Cabibbo-Kobayashi-Maskawa (CKM) matrix, and for exploring physics
beyond the SM. Among non-leptonic decays, the decay $\eta'\to K^{\pm}\pi^{\mp}$ of the light pseudoscalar meson is interesting because it is fundamental to understanding the long-standing problem of the $\Delta I=1/2$ rule in weak non-leptonic interactions.
The experimental $\Delta I=1/2$ rule was first established in the decay $K\to\pi\pi$. A neutral kaon may decay into a two-pion state of isospin $0$ or $2$, with amplitudes $A_{0}$ and $A_{2}$, respectively. The real part Re$A_{0}$ is dominated by $\Delta I=1/2$ transitions, while Re$A_{2}$ receives contributions from $\Delta I=3/2$ transitions. The observed strong dominance of the $\Delta I=1/2$ transitions is expressed by the so-called $\Delta I=1/2$ rule~\cite{deltaIrule1,deltaIrule2}
\begin{equation}
\begin{aligned}
\frac{\text{Re}A_{0}}{\text{Re}A_{2}}=22.35.
\end{aligned}
\end{equation}
Despite nearly 50 years of efforts, the microscopic dynamical
mechanism responsible for such a striking phenomenon is still elusive.
The decay $\eta'\to K^{\pm}\pi^{\mp}$ receives contributions from both the $\Delta I=1/2$ and $\Delta I=3/2$ parts of the weak Hamiltonian~\cite{bergstrom1988weak}. It is therefore possible to test whether the $\Delta I=1/2$ rule also holds in this type of decay, which could shed light on the origin of the rule. The branching fraction
of $\eta^{\prime}\to K^{\pm}\pi^{\mp}$ decay is predicted to be of the order
of $10^{-10}$ or higher~\cite{bergstrom1988weak}, with a large
long-range hadronic contribution expected, which should become
observable in high luminosity electron-positron collisions.
At present, there is no experimental information on the decay
$\eta^{\prime}\to K^\pm\pi^\mp$. The world's largest sample of
$1.3\times 10^9$ $J/\psi$ events produced at rest and collected with
the BES\uppercase\expandafter{\romannumeral3}\xspace detector therefore offers a good opportunity to search for this rare
decay. In this paper, the measurement of the ratio
$\frac{{\cal B}(\eta'\to K^{\pm}\pi^{\mp})}{{\cal B}(\eta'\to
\gamma\pi^{+}\pi^{-})}$ is presented, where the $\eta^\prime$ is
produced in the decay $J/\psi\to\phi\eta^\prime$. The advantage of
comparing these two $\eta^\prime$ decay channels is that parts of the
systematic uncertainties due to the tracking, the particle
identification (PID), the branching fractions ${\cal B}(J/\psi\to\phi\eta^{\prime})$ and ${\cal B}(\phi\to K^{+}K^{-})$,
and the number of $J/\psi$ events cancel in
the ratio. A measurement of the branching fraction of $J/\psi\to\phi\eta^{\prime}$
is also presented in which $\phi$ is reconstructed in its $K^{+}K^{-}$
decay mode and $\eta^{\prime}$ is detected in the $\gamma\pi^+\pi^-$ decay
mode. This can be compared with the results reported by the
BESII~\cite{ablikim2005measurements},
MarkIII~\cite{MARKIIImeasurements}, and DM2~\cite{DM2measurements} collaborations.
\section{\texorpdfstring{Detector and Monte Carlo Simulation}{Detector and MC Simulation}}
BEPCII is a double-ring $e^{+}e^{-}$ collider designed to provide a
peak luminosity of $10^{33}$~cm$^{-2}s^{-1}$ at the center-of-mass
(c.m.) energy of 3.770 GeV. The BES\uppercase\expandafter{\romannumeral3}\xspace~\cite{Ablikim2010345}
detector, with a geometrical acceptance of 93\% of the 4$\pi$ solid
angle, is operating in a magnetic field of 1.0 T provided by a
superconducting solenoid magnet. It is composed of a helium-based
drift chamber (MDC), a plastic scintillator Time-Of-Flight (TOF)
system, a CsI(Tl) electromagnetic calorimeter (EMC) and a multi-layer
resistive plate chamber (RPC) muon counter system (MUC).
Monte Carlo (MC) simulations are used to determine the mass resolutions
and detection efficiencies. The GEANT4-based simulation
software BOOST~\cite{ref:boost} includes the geometric and material
description of the BES\uppercase\expandafter{\romannumeral3}\xspace detector, the detector response, and the
digitization models, as well as the detector running conditions and
performance. The production of the $J/\psi$ resonance is simulated with
the MC event generator KKMC~\cite{ref:kkmc,ref:kkmc2}, while the
decays are generated by EVTGEN~\cite{ref:evtgen} for known decay modes
with branching fractions set to the Particle Data Group
(PDG)~\cite{PDG2014} world average values, and by
LUNDCHARM~\cite{ref:lundcharm} for the remaining unknown decays. The
analysis is performed in the framework of the BES\uppercase\expandafter{\romannumeral3}\xspace offline software
system (BOSS)~\cite{ref:boss}.
\section{Data analysis}
\subsection{\texorpdfstring{ $J/\psi\to\phi\eta'$, $\eta^\prime\to\gamma\pi^+\pi^-$}{eta' to gampipi}}
For the decay $J/\psi\to\phi\eta'$, $\phi\to K^{+}K^{-}$,
$\eta^\prime\to\gamma\pi^+\pi^-$, candidate events are selected
by requiring four well reconstructed charged tracks and at least one
isolated photon in the EMC. The four charged tracks are required to
have zero net charge. Each charged track, reconstructed using hits in
the MDC, is required to be in the polar angle range
$|\cos\theta| < 0.93$ and pass within $\pm10$~cm of the interaction
point along the beam direction, and within $\pm1$~cm in the plane
perpendicular to the beam, with respect to the interaction point. For
each charged track, information from the TOF and the specific
ionization measured in the MDC ($dE/dx$) are combined to
form PID confidence levels (C.L.) for the $K$, $\pi$ and $p$
hypotheses, and the particle type with the highest C.L.\ is assigned to
each track. Two of the tracks are required to be identified as kaons and the
remaining two tracks as pions.
Photon candidates are reconstructed by clusters of energy deposited in
the EMC. The energy deposited in the TOF counter in front of the EMC
is included to improve the reconstruction efficiency and the energy
resolution. Photon candidates are required to have a deposited
energy larger than 25 MeV in the barrel region ($|\cos\theta|<0.80$)
and 50 MeV in the end-cap region ($0.86<|\cos\theta|<0.92$). EMC
cluster timing requirements are used to suppress electronic noise and
energy deposits that are unrelated to the event. To eliminate showers
associated with charged particles, the angle between the cluster and
the nearest track must be larger than 15$^{\circ}$.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{mgampippim_vs_mkpkm_chi2Cut.eps}
\caption{\label{PhiGamPiPi_scatter}Scatter plot of $M(\gamma\pi^{+}\pi^{-})$ versus $M(K^{+}K^{-})$.}
\end{figure}
\begin{figure*}[htbp]
\includegraphics[width=0.45\textwidth]{log_mkpkm_fit_2d_RBW_Cheby.eps}\put(-70,110){\bf (a)}
\includegraphics[width=0.45\textwidth]{log_mgampippim_fit_2d_RBW_Cheby.eps}\put(-70,110){\bf (b)}
\caption{\label{nominal_fit_result}Distributions of (a) $M(K^{+}K^{-})$ and
(b) $M(\gamma\pi^{+}\pi^{-})$ with projections of the fit result superimposed
for $J/\psi\to\phi\eta', \phi\to K^{+}K^{-}, \eta'\to\gamma\pi^{+}\pi^{-}$. The
dots with errors are for data, the solid curve shows the result of
the fit to signal plus background distributions, the long-dashed
curve is for $\phi\eta'$ signal, the dot-dashed curve shows the
non-$\eta'$-peaking background, the dotted curve shows the
non-$\phi$-peaking background, and the short-dashed curve is for
non-$\phi\eta'$ background.}
\end{figure*}
A four-constraint (4C) kinematic fit is performed to the
$\gamma K^{+}K^{-}\pi^{+}\pi^{-}$ hypothesis. For events with more than one photon
candidate, the candidate combination with the smallest $\chi^{2}_{4C}$
is selected, and it is required that $\chi^{2}_{4C}<50$.
The scatter plot of $M(\gamma\pi^+\pi^-)$ versus $M(K^{+}K^{-})$ is shown in
Fig.~\ref{PhiGamPiPi_scatter}, where the $J/\psi\to\phi\eta^\prime$
decay is clearly visible. To extract the number of $\phi\eta^\prime$ events,
an unbinned extended maximum likelihood fit is performed to the
$M(\gamma\pi^{+}\pi^{-})$ versus $M(K^{+}K^{-})$ distribution with the
requirements of 0.988~GeV/$c^2$ $< M(K^{+}K^{-}) < 1.090$~GeV/$c^2$ and
0.880~GeV/$c^2$ $< M(\gamma\pi^{+}\pi^{-}) < 1.040$~GeV/$c^2$. Assuming zero
correlation between the two discriminating variables $M(K^{+}K^{-})$ and
$M(\gamma\pi^{+}\pi^{-})$, the composite probability density function (PDF)
in the 2-dimensional fit is constructed as follows
\begin{equation}
\begin{aligned}
F&=N_\text{sig}\times (F_\text{sig}^{\phi}\cdot F_\text{sig}^{\eta'})\\
&+N_\text{bkg}^{\text{non-}\eta'}\times (F_\text{sig}^{\phi}\cdot F_\text{bkg}^{\text{non-}\eta'})\\
&+N_\text{bkg}^{\text{non-}\phi}\times (F_\text{bkg}^{\text{non-}\phi}\cdot F_\text{sig}^{\eta'})\\
&+N_\text{bkg}^{\text{non-}\phi\eta'}\times (F_\text{bkg}^{\text{non-}\phi}\cdot F_\text{bkg}^{\text{non-}\eta'}).
\end{aligned}
\end{equation}
Here, the signal shape for $\phi$ (\emph{i.e.} $F_\text{sig}^{\phi}$) is modeled
with a relativistic Breit-Wigner function convoluted with a Gaussian
function taking into account the detector resolution; the signal
shape for $\eta'$ (\emph{i.e.}~$F_\text{sig}^{\eta'}$) is described by a non-relativistic
Breit-Wigner function convoluted with a Gaussian function. The widths
and masses of $\phi$ and $\eta'$ are free parameters in the fit. The
background shape of $\phi$ ($F_\text{bkg}^{\text{non-}\phi}$) is described by
a second order Chebychev polynomial function, and the background shape
of $\eta'$ ($F_\text{bkg}^{\text{non-}\eta'}$) is described by a first order
Chebychev polynomial function. All parameters related to the
background shapes are free in the fit. $N_\text{sig}$ is the number of
$J/\psi\to\phi\eta', \phi\to K^{+}K^{-}, \eta'\to\gamma\pi^{+}\pi^{-}$ signal
events. The backgrounds are divided into three categories:
non-$\phi\eta'$ background (\emph{i.e.}~$J/\psi\to\gamma K^{+}K^{-}\pi^{+}\pi^{-}$);
non-$\phi$-peaking background (\emph{i.e.}~$J/\psi\to K^{+}K^{-}\eta'$); and
non-$\eta'$-peaking background
(\emph{i.e.}~$J/\psi\to\phi\gamma\pi^{+}\pi^{-}$). The parameters $N_\text{bkg}^{\text{non-}\phi\eta'}$,
$N_\text{bkg}^{\text{non-}\phi}$ and $N_\text{bkg}^{\text{non-}\eta'}$ are the
corresponding three background yields.
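For illustration, the unnormalised composite density above can be written out directly (a simplified sketch: the Gaussian resolution convolutions and the per-component normalisations over the fit window are omitted, and the shape parameter values shown are indicative PDG-like placeholders rather than the fitted values):

```python
def bw(m, m0, gamma):
    """Non-relativistic Breit-Wigner shape (unnormalised)."""
    return 1.0 / ((m - m0) ** 2 + 0.25 * gamma ** 2)

def rel_bw(m, m0, gamma):
    """Relativistic Breit-Wigner shape (unnormalised)."""
    return 1.0 / ((m * m - m0 * m0) ** 2 + (m0 * gamma) ** 2)

def cheby(x_raw, coeffs, lo, hi):
    """Chebyshev polynomial background on [lo, hi] (up to 2nd order)."""
    x = 2.0 * (x_raw - lo) / (hi - lo) - 1.0
    basis = (1.0, x, 2.0 * x * x - 1.0)
    return sum(c * t for c, t in zip(coeffs, basis))

def composite_density(mkk, mgpp, yields, c_phi, c_eta):
    """Sum of the four signal/background product terms of the 2D fit."""
    n_sig, n_ne, n_np, n_nb = yields
    f_phi = rel_bw(mkk, 1.0195, 0.0043)           # phi signal shape
    f_eta = bw(mgpp, 0.9578, 0.0002)              # eta' signal shape
    b_phi = cheby(mkk, c_phi, 0.988, 1.090)       # 2nd-order background
    b_eta = cheby(mgpp, c_eta, 0.880, 1.040)      # 1st-order background
    return (n_sig * f_phi * f_eta + n_ne * f_phi * b_eta
            + n_np * b_phi * f_eta + n_nb * b_phi * b_eta)
```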
The resulting fitted number of signal events is
$N_\text{sig}=31321\pm201$; the projections of the fit on the $M(K^{+}K^{-})$
and $M(\gamma\pi^{+}\pi^{-})$ distributions are shown in
Figs.~\ref{nominal_fit_result} (a) and (b), respectively. The
detection efficiency, $(32.96\pm0.04)$\%, is obtained from the MC
simulation in which the angular distribution and the shape of
$M(\pi^{+}\pi^{-})$ are taken into account according to a
previous BES\uppercase\expandafter{\romannumeral3}\xspace measurement for
$\eta'\to\pi^{+}\pi^{-}e^{+}e^{-}$~\cite{Eta'ToPiPiEE}, where the
non-resonant contribution (known as the ``box anomaly'') is included
in the simulation of $\eta^{\prime}\to\gamma\pi^{+}\pi^{-}$.
\subsection{\texorpdfstring{$J/\psi\to\phi\eta', \eta'\to K^{\pm}\pi^{\mp}$}{eta' to Kpi}}
\begin{figure*}[htbp]
\includegraphics[width=0.45\textwidth]{mKPi_vs_mphi_enlarge_rectangle.eps}\put(-50,110){\bf (a)}
\includegraphics[width=0.45\textwidth]{mKPi_Cut_3.eps}\put(-50,110){\bf (b)}
\caption{\label{PhiKPi_plot}(a) Scatter plot of $M(K^{+}K^{-})$ versus
$M(K^{\pm}\pi^{\mp})$, where the box indicates the signal region
with $|M(K^{+}K^{-})-M(\phi)| < 15$~MeV/$c^2$ and
$|M(K^\pm\pi^\mp)-M(\eta')| < 7$~MeV/$c^2$. (b) The $K^ \pm
\pi^\mp$ invariant mass distribution, where the arrows show the
signal region. The dots with error bars are for data, the dashed
histogram is for the signal MC with arbitrary normalization, and
the solid histogram is the background contamination from a MC
simulation of $J/\psi \to \phi\pi^+\pi^-$.}
\end{figure*}
To search for $\eta'\to K^{\pm}\pi^{\mp}$, the two-body decay
$J/\psi\to\phi\eta'$ is chosen because of its simple event topology,
$K^+K^-K^{\pm}\pi^{\mp}$, and because the narrow $\phi$ meson is easy to
detect through $\phi\to K^+K^-$ decay. The selection criteria for the
charged tracks are the same as that for the
$J/\psi\to\phi\eta', \eta'\to \gamma\pi^{+}\pi^{-}$ decay. Three tracks
are required to be identified as kaons with the combination of TOF and $dE/dx$
information and the remaining one is required to be identified as a pion.
A 4C kinematic fit imposing energy-momentum conservation is performed
under the $K^{+}K^{-} K^\pm\pi^\mp$ hypothesis, and a requirement of
$\chi^2_{4C}<50$ is imposed. To suppress the dominant background
contamination from $J/\psi\to\phi\pi^{+}\pi^{-}$, the $\chi^2_{4C}$ of the
$K^{+}K^{-} K^{\pm}\pi^{\mp}$ hypothesis is required to be less than that for the
$K^{+}K^{-}\pi^{+}\pi^{-}$ hypothesis. Candidates for $\phi\to K^+K^-$ are reconstructed from the $K^{+}K^{-}$ combination with invariant mass closest to the nominal $\phi$ mass. The remaining kaon together with the pion forms the $\eta'$
candidate.
Fig.~\ref{PhiKPi_plot} (a) shows the scatter plot of the invariant mass $M(K^{+}K^{-})$ versus $M(K^\pm\pi^{\mp})$. The process $\phi\eta^{\prime}$ with $\eta'\to K^{\pm}\pi^{\mp}$ would appear as an enhancement of events around the nominal masses of the $\phi$ and $\eta'$ mesons, but no evident cluster is seen. Within three standard deviations of the $\phi$ mass,
$|M(K^{+}K^{-})-M(\phi)| < 15$ MeV/$c^2$, the $K^{\pm}\pi^{\mp}$ invariant
mass distribution is displayed in Fig.~\ref{PhiKPi_plot} (b); a
few events are retained around the $\eta^\prime$ mass region, shown as
the dots with error bars. To estimate the number of signal events
passing the selection criteria, a region of $\pm3\sigma$ around the
$\eta^{\prime}$ nominal mass is selected, that is
$|M(K^\pm\pi^\mp)-M(\eta')| < 7$~MeV/$c^2$, where $\sigma =2.2$~MeV/$c^2$
is the mass resolution determined from MC simulation. Only one event
survives in the signal region for further analysis.
To investigate the potential background contributions, a study with an inclusive MC
sample of $1.2\times 10^9$ generic $J/\psi$ decays is performed. It is found
that the remaining background events mainly come from
$J/\psi\to\phi\pi^+\pi^-$. Therefore an exclusive MC sample of
$1.3\times 10^6$ $J/\psi\to\phi\pi^+\pi^-$ events is generated in
accordance with the partial wave analysis results of
Ref.~\cite{PhiPiPifromBESII}.
This sample corresponds to twice the
expected $J/\psi\to\phi\pi^{+}\pi^{-}$ events in data. After normalizing to the
world average value for ${\cal B}(J/\psi\to\phi\pi^{+}\pi^{-})$, 2.0 events are expected
in the $K\pi$ mass range of [0.88, 1.04] GeV/$c^2$, with a total of 0.5
events in the $\eta'$ signal region, as shown by the solid histogram in Fig.~\ref{PhiKPi_plot} (b).
To conservatively estimate the upper limit, it is assumed that the
only event in the signal region is a signal event. According to the Feldman-Cousins
method~\cite{UpperLimit}, the corresponding upper limit of the number
of events is $N^\text{UL} = 4.36$ at the 90\% C.L.
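As a rough cross-check of the quoted limit, the Feldman-Cousins construction can be implemented in a few lines. The sketch below is ours, not the collaboration's analysis code; it assumes a simple Poisson counting model with zero expected background, matching the conservative assumption above.

```python
# Stand-alone sketch (ours, not the collaboration's code) of the
# Feldman-Cousins construction for a Poisson process, reproducing the
# quoted N^UL for n_obs = 1 with zero assumed background.
import math

def fc_upper_limit(n_obs, b=0.0, cl=0.90, mu_max=10.0, step=0.005, n_max=50):
    """Feldman-Cousins upper limit on the Poisson signal mean at C.L. cl."""
    def pois(n, lam):
        return math.exp(-lam) * lam**n / math.factorial(n)

    ul = 0.0
    mu = 0.0
    while mu <= mu_max:
        # Likelihood-ratio ordering: R(n) = P(n | mu + b) / P(n | mu_best + b),
        # where mu_best = max(0, n - b) maximizes the likelihood for fixed n.
        ranked = sorted(
            ((pois(n, mu + b) / pois(n, max(0.0, n - b) + b), n)
             for n in range(n_max)),
            reverse=True)
        # Accept values of n in decreasing R until the coverage reaches cl.
        coverage, accepted = 0.0, set()
        for _, n in ranked:
            accepted.add(n)
            coverage += pois(n, mu + b)
            if coverage >= cl:
                break
        if n_obs in accepted:
            ul = mu  # largest mu whose acceptance interval still contains n_obs
        mu += step
    return ul

print(round(fc_upper_limit(1), 2))  # close to the quoted 4.36
```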
\section{Systematic Uncertainties}
The systematic uncertainties in the branching fraction measurement
originate mainly from the differences between data and MC simulation in the tracking efficiency, photon reconstruction, PID
efficiency, and the 4C kinematic fit; from the fit
range and background shape;
and from the uncertainties on ${\cal B}(\phi\to K^{+}K^{-})$ and ${\cal B}(\eta'\to\gamma\pi^{+}\pi^{-})$, the total number
of $J/\psi$ events, and the MC statistics. Other uncertainties related to the
common selection criteria of the channels
$J/\psi\to\phi\eta^{\prime}, \eta'\to K^{\pm}\pi^{\mp}$ and
$J/\psi\to\phi\eta^{\prime}, \eta^{\prime}\to\gamma\pi^{+}\pi^{-}$ cancel to first order in the
ratio between the branching fractions.
The systematic uncertainties associated with the tracking efficiency
and PID efficiency have been studied in the analysis of
$J/\psi\to p\bar{p}\pi^{+}\pi^{-}$ and
$J/\psi\to K_{S}^{0}K^{\pm}\pi^{\mp}$~\cite{trackingeff, PIDeff}. The
results indicate that the kaon/pion tracking and PID efficiencies for
data agree with those of MC simulation within 1\%.
The uncertainty associated with photon detection is estimated from a study of
$J/\psi\to\rho\pi$~\cite{trackingeff}. The difference in the detection
efficiency between data and MC simulation is less than 1\% per photon, which is
taken as the systematic uncertainty for the single photon in the
$J/\psi\to\phi\eta^{\prime}, \eta^{\prime}\to\gamma\pi^{+}\pi^{-}$ channel.
The uncertainty associated with the 4C kinematic fit comes from the
difference between data and MC simulation. The method used in this
analysis is to correct the tracking parameters of the helix fit to
reduce the difference between MC and data, as described in
Ref.~\cite{refsmear}. This procedure yields a systematic uncertainty
of 0.3\% and 1.0\% for the measurement of ${\cal B}(J/\psi\to\phi\eta')$ and
the search for $\eta'\to K^{\pm}\pi^{\mp}$, respectively.
To estimate the systematic contribution due to the fit ranges,
several alternative fits in different ranges are performed. The
maximum difference on the number of signal events from alternative
fits in different mass ranges is 0.1\%, and this value is taken as the
systematic uncertainty. To estimate the systematic contribution due
to the background shape, a fit is performed replacing the second-order
Chebyshev polynomial function with an Argus function~\cite{ref:Argus};
the change of signal yields is found to be 0.04\%, which is negligible.
The decay $J/\psi\to\phi\eta', \phi\to K^{+}K^{-}, \eta'\to\gamma\pi^{+}\pi^{-}$
is used as a control sample to estimate the uncertainty from the $\phi$ mass
window criterion in the search for $\eta'\to K^{\pm}\pi^{\mp}$. The
$\phi$ mass window criterion is applied to the control sample, and a
fit is performed to $M(\gamma\pi^{+}\pi^{-})$. After considering the efficiency difference,
the difference of 1.2\% in the number of
signal events between this fit and the nominal 2D fit is taken as the
uncertainty from the $\phi$ mass window.
The uncertainties on the intermediate-decay branching fractions of
$\phi\to K^{+}K^{-}$ and $\eta'\to\gamma\pi^{+}\pi^{-}$ are taken from the world
average values~\cite{PDG2014}.
The above systematic uncertainties together with the uncertainties due
to the number of $J/\psi$ events~\cite{JpsiNumberof2009, JpsiNumberofLiHuijuan} and MC
statistics are all summarized in Table~\ref{summary_of_syserr}, where
the uncertainties associated with MDC tracking, PID, and the branching
fraction of $\phi\to K^{+}K^{-}$ cancel in the ratio
$\frac{\mathcal{B}(\eta^\prime \to K^\pm \pi^\mp) }
{\mathcal{B}(\eta^\prime \to\gamma\pi^+\pi^-) }$. The total systematic
uncertainty is taken to be the sum in quadrature of the individual
contributions.
\begin {table*}[htp]
{\caption {Summary of systematic uncertainty sources and their contributions (in \%).}
\label{summary_of_syserr}}
\begin{tabular}{c|c|c} \hline \hline
Source & ${\cal B}(J/\psi\to\phi\eta')$ & ${\cal B}(\eta'\to K^{\pm}\pi^{\mp})$/${\cal B}(\eta'\to\gamma\pi^{+}\pi^{-})$\\ \hline
Tracking efficiency & 4.0 & - \\ \hline
PID efficiency & 4.0 & - \\ \hline
Photon reconstruction & 1.0 & 1.0 \\ \hline
4C kinematic fit & 0.3 & 1.0 \\ \hline
Fit range & 0.1 & 0.1 \\ \hline
Background shape & - & - \\ \hline
$\phi$ mass window & - & 1.2 \\ \hline
${\cal B}(\phi\to K^{+}K^{-})$ & 1.0 & - \\ \hline
${\cal B}(\eta'\to\gamma\pi^{+}\pi^{-})$ & 2.0 & - \\ \hline
$N_{J/\psi}$ & 0.8 & - \\ \hline
MC statistics of $\eta'\to\gamma\pi^{+}\pi^{-}$ & 0.1 & 0.1 \\ \hline
MC statistics of $\eta'\to K^{\pm}\pi^{\mp}$ & - & 0.1 \\ \hline
Total & 6.2 & 1.9 \\ \hline \hline
\end{tabular}
\end{table*}
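The quadrature sums quoted in the "Total" row of Table~\ref{summary_of_syserr} can be verified directly. The snippet below is an independent check of that arithmetic, not part of the analysis code.

```python
# Independent check (not from the paper) that the "Total" row of the
# systematic-uncertainty table is the quadrature sum of the entries above it.
import math

# Column: B(J/psi -> phi eta')
col_bf = [4.0, 4.0, 1.0, 0.3, 0.1, 1.0, 2.0, 0.8, 0.1]
# Column: B(eta' -> K pi) / B(eta' -> gamma pi pi)
col_ratio = [1.0, 1.0, 0.1, 1.2, 0.1, 0.1]

total_bf = math.sqrt(sum(x * x for x in col_bf))
total_ratio = math.sqrt(sum(x * x for x in col_ratio))
print(round(total_bf, 1), round(total_ratio, 1))  # 6.2 1.9
```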
\section{Results}
At the 90\% C.L., the upper limit on the ratio of
${\cal B}(\eta'\to K^{\pm}\pi^{\mp})$ to
${\cal B}(\eta'\to\gamma\pi^{+}\pi^{-})$ is given by
\begin{equation}
\begin{aligned}
\frac{\mathcal{B}(\eta'\to
K^{\pm}\pi^{\mp})}{\mathcal{B}(\eta'\to\gamma\pi^{+}\pi^{-})}<\frac{
N^\text{UL}\cdot\varepsilon_{\gamma\pi^+\pi^-} } {
N_\text{sig}\cdot\varepsilon_{ K^\pm\pi^\mp}}\frac{1}
{(1-\sigma_\text{syst})},
\end{aligned}
\end{equation}
where $N^\text{UL}$ is the upper limit on the number of observed
events at the 90\% C.L. for $\eta^\prime\to K^\pm\pi^\mp$;
$N_\text{sig}$ is the fitted $\eta'\to\gamma\pi^{+}\pi^{-}$ signal yield;
$\varepsilon_{ K^\pm\pi^\mp}$ and $\varepsilon_{\gamma\pi^+\pi^-}$ are
the detection efficiencies of $J/\psi\to\phi\eta^{\prime}$ for the two decays,
which are obtained from the MC simulations; and $\sigma_\text{syst}$ is the
total systematic uncertainty in the search for
$\eta'\to K^{\pm}\pi^{\mp}$. The 90\% C.L. upper limit on the ratio
$\frac{{\cal B}(\eta'\to
K^{\pm}\pi^{\mp})}{{\cal B}(\eta'\to\gamma\pi^{+}\pi^{-})}$ is determined
to be $1.3\times10^{-4}$ by using the values of different parameters
listed in Table~\ref{numbers_of_calculation}.
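The arithmetic behind this limit can be reproduced directly from the quoted inputs. The following snippet is an independent cross-check (ours), not analysis code.

```python
# Independent arithmetic cross-check (ours) of the quoted upper limit on
# B(eta' -> K pi) / B(eta' -> gamma pi pi) from the inputs in the text.
N_UL = 4.36          # 90% C.L. upper limit on eta' -> K pi events
N_sig = 31321        # fitted eta' -> gamma pi+ pi- signal yield
eff_kpi = 0.3675     # efficiency for eta' -> K pi
eff_gpipi = 0.3296   # efficiency for eta' -> gamma pi+ pi-
sigma_syst = 0.019   # total relative systematic uncertainty

ratio_ul = N_UL * eff_gpipi / (N_sig * eff_kpi) / (1.0 - sigma_syst)
print(f"{ratio_ul:.1e}")  # 1.3e-04
```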
\begin {table}[htp]
{\caption {Values used in the calculations of the branching
ratios, including the fitted signal yields, $N$ (or 90\%
C.L. upper limit) and the detection efficiency, $\varepsilon$.}
\label{numbers_of_calculation}}
\begin {tabular}
{cc c} \hline \hline
Decay mode & $\varepsilon$ (\%) & $N$ \\
\hline
$\eta'\to K^{\pm}\pi^{\mp}$ & 36.75$\pm$0.04 & $<$4.36 (90\% C.L.)\\
$\eta'\to\gamma\pi^{+}\pi^{-}$ & 32.96$\pm$0.04 & 31321$\pm$201 \\
\hline \hline
\end{tabular}
\end{table}
The branching fraction of $J/\psi\to\phi\eta'$ decay is calculated with the equation
\begin{equation}
\begin{aligned}
&{\cal B}(J/\psi\to\phi\eta')\\
&=\frac{N_\text{sig} /\varepsilon_{\gamma \pi^+\pi^-}}{ N_{J/\psi}{\cal B}(\eta'\to\gamma\pi^{+}\pi^{-}){\cal B}(\phi\to K^{+}K^{-})},
\end{aligned}
\end{equation}
where $N_{J/\psi} = 1310.6\times10^{6}$ is the number of $J/\psi$ events
as determined by $J/\psi$ inclusive hadronic
decays~\cite{JpsiNumberof2009, JpsiNumberofLiHuijuan}. The obtained value for the
branching fraction of $J/\psi\to\phi\eta'$ is
$(5.10\pm0.03(\text{stat.})\pm0.32(\text{syst.}))\times 10^{-4}$.
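This value can be approximately reproduced from the quoted inputs. In the sketch below (ours, not the analysis code), the intermediate branching fractions are approximate world-average values and may differ slightly from the exact PDG numbers used in the paper.

```python
# Approximate cross-check (ours) of B(J/psi -> phi eta'); the intermediate
# branching fractions below are approximate world averages and may differ
# slightly from the exact PDG values used in the paper.
N_sig = 31321         # fitted eta' -> gamma pi+ pi- yield
eff = 0.3296          # detection efficiency
N_jpsi = 1310.6e6     # number of J/psi events
B_etap_gpipi = 0.291  # B(eta' -> gamma pi+ pi-), approximate
B_phi_kk = 0.489      # B(phi -> K+ K-), approximate

B = N_sig / eff / (N_jpsi * B_etap_gpipi * B_phi_kk)
print(f"{B:.2e}")  # about 5.1e-04
```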
\section{Summary}
Based on the $1.3 \times 10^{9}$ $J/\psi$ events accumulated with the
BESIII detector, a search for the non-leptonic weak decay
$\eta'\to K^{\pm}\pi^{\mp}$ is performed for the first time through
the $J/\psi\to\phi\eta'$ decay. No evidence for
$\eta^{\prime}\to K^{\pm}\pi^{\mp}$ is seen, and the 90\% C.L. upper limit on
the ratio of
$\frac{{\cal B}(\eta'\to
K^{\pm}\pi^{\mp})}{{\cal B}(\eta'\to\gamma\pi^{+}\pi^{-})}$ is measured
to be $1.3\times10^{-4}$. Using the world average value of
$\mathcal{B}(\eta^{\prime}\to\gamma\pi^+\pi^-)$~\cite{PDG2014}, the
corresponding upper limit on
$\mathcal{B}(\eta^\prime\to K^\pm\pi^\mp)$ is calculated to be
$3.8\times10^{-5}$.
For the determination of the ratio of
$\frac{{\cal B}(\eta'\to
K^{\pm}\pi^{\mp})}{{\cal B}(\eta'\to\gamma\pi^{+}\pi^{-})}$, the
$J/\psi\to\phi\eta'$ decay with
$\phi\to K^{+}K^{-}, \eta'\to\gamma\pi^{+}\pi^{-}$ is analyzed and the
corresponding branching fraction is
${\cal B}(J/\psi\to\phi\eta')=(5.10\pm0.03(\text{stat.})\pm0.32(\text{syst.}))\times10^{-4}$. This
is the most precise measurement to date and is in agreement with the world average value.
\begin{acknowledgments}
The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11125525, 11235011, 11322544, 11335008, 11425524, 11105101, 11205117, 11575133, 11175189; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); the Collaborative Innovation Center for Particles and Interactions (CICPI); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. 11179007, U1232201, U1332201, U1232109; CAS under Contracts Nos. KJCX2-YW-N29, KJCX2-YW-N45; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contract No. Collaborative Research Center CRC-1044; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; Russian Foundation for Basic Research under Contract No. 14-07-91152; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-04ER41291, DE-FG02-05ER41374, DE-FG02-94ER40823, DESC0010118; U.S. National Science Foundation; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
\end{acknowledgments}
\section{#1}}
\newtheorem{defn}{Definition}[section]
\newtheorem{thm}[defn]{Theorem}
\newtheorem{lemma}[defn]{Lemma}
\newtheorem{prop}[defn]{Proposition}
\newtheorem{corr}[defn]{Corollary}
\newtheorem{xmpl}[defn]{Example}
\newtheorem{rmk}[defn]{Remark}
\newcommand{\bdfn}{\begin{defn}}
\newcommand{\bthm}{\begin{thm}}
\newcommand{\blmma}{\begin{lemma}}
\newcommand{\bppsn}{\begin{prop}}
\newcommand{\bcrlre}{\begin{corr}}
\newcommand{\bxmpl}{\begin{xmpl}}
\newcommand{\brmrk}{\begin{rmk}}
\newcommand{\edfn}{\end{defn}}
\newcommand{\ethm}{\end{thm}}
\newcommand{\elmma}{\end{lemma}}
\newcommand{\eppsn}{\end{prop}}
\newcommand{\ecrlre}{\end{corr}}
\newcommand{\exmpl}{\end{xmpl}}
\newcommand{\ermrk}{\end{rmk}}
\newcommand{\IA}{\mathbb{A}}
\newcommand{\IB}{\mathbb{B}}
\newcommand{\IC}{\mathbb{C}}
\newcommand{\ID}{\mathbb{D}}
\newcommand{\IE}{\mathbb{E}}
\newcommand{\IF}{\mathbb{F}}
\newcommand{\IG}{\mathbb{G}}
\newcommand{\IH}{\mathbb{H}}
\newcommand{\II}{{I\! \! I}}
\newcommand{\IK}{{I\! \! K}}
\newcommand{\IL}{{I\! \! L}}
\newcommand{\IM}{{I\! \! M}}
\newcommand{\IN}{{I\! \! N}}
\newcommand{\IO}{{I\! \! O}}
\newcommand{\IP}{{I\! \! P}}
\newcommand{\IQ}{\mathbb{Q}}
\newcommand{\IR}{\mathbb{R}}
\newcommand{\IS}{{I\! \! S}}
\newcommand{\IT}{\mathbb{T}}
\newcommand{\IU}{{I\! \! U}}
\newcommand{\IV}{{I\! \! V}}
\newcommand{\IW}{{I\! \! W}}
\newcommand{\IX}{{I\! \! X}}
\newcommand{\IY}{{I\! \! Y}}
\newcommand{\IZ}{\mathbb{Z}}
\newcommand{\ahoma}{{}^\A\mathrm{Hom}_\A}
\newcommand{\homaa}{\mathrm{Hom}^\A_\A}
\newcommand{\homc}{\mathrm{Hom}_\IC}
\newcommand{\id}{\mathrm{id}}
\newcommand{\tube}{B^{N-n}_\epsilon(0)}
\newcommand{\smoothfn}{C^\infty(M)}
\newcommand{\oneformclassical}{\Omega^1 ( M )}
\newcommand{\oneform}{{{\Omega}^1 ( \mathcal{A} ) }}
\newcommand{\oneformdeformed}{\widetilde{{\Omega}^1_D ( \mathcal{A}_\theta )} }
\newcommand{\twoformdeformed}{\widetilde{{\Omega}^2_D ( \mathcal{A}_\theta )} }
\newcommand{\twoform}{{{\Omega}^2}( \mathcal{A} )}
\newcommand{\tensora}{\otimes_{\mathcal{A}}}
\newcommand{\tensorsym}{\otimes^{{\rm sym}}_{\mathcal{A}}}
\newcommand{\tensorc}{\otimes_{\mathbb{C}}}
\newcommand{\A}{\mathcal{A}}
\newcommand{\Atheta}{\mathcal{A}_{\theta}}
\newcommand{\B}{\mathcal{B}}
\newcommand{\Btheta}{\mathcal{B}_{\theta}}
\newcommand{\C}{\mathcal{C}}
\newcommand{\Ctheta}{\mathcal{C}_{\theta}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\Dtheta}{\mathcal{D}_{\theta}}
\newcommand{\E}{\mathcal{E}}
\newcommand{\Etheta}{\mathcal{E}_{\theta}}
\newcommand{\F}{\mathcal{F}}
\newcommand{\Ftheta}{\mathcal{F}_{\theta}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\Gtheta}{\mathcal{G}_{\theta}}
\newcommand{\Acenter}{\mathcal{Z}( \mathcal{A} )}
\newcommand{\Bcenter}{\mathcal{Z}( \mathcal{B} )}
\newcommand{\Ccenter}{\mathcal{Z}( \mathcal{C} )}
\newcommand{\Dcenter}{\mathcal{Z}( \mathcal{D} )}
\newcommand{\Ecenter}{\mathcal{Z}( \mathcal{E} )}
\newcommand{\Fcenter}{\mathcal{Z}( \mathcal{F} )}
\newcommand{\Gcenter}{\mathcal{Z}( \mathcal{G} )}
\newcommand{\Aprime}{\mathcal{A}^{\prime}}
\newcommand{\Eprime}{\mathcal{E}^{\prime}}
\newcommand{\zeroE}{{}_0\E}
\newcommand{\Ezero}{\E_0}
\newcommand{\piggo}{\pi_{g,g_0}}
\newcommand{\somegaeta}{S_{\omega,\eta}}
\newcommand{\Vgtwo}{V_{g^{(2)}}}
\newcommand{\Psym}{P_{\rm sym}}
\newcommand{\Hom}{{\rm Hom}}
\newcommand{\aomega}{\A_\Omega}
\newcommand{\momega}{M_\Omega}
\newcommand{\staromega}{\ast_\Omega}
\newcommand{\Edelta}{{}_{\mathcal{E}} \Delta}
\newcommand{\deltaE}{\Delta_{\mathcal{E}}}
\newcommand{\deltaomega}{\Delta_\Omega}
\newcommand{\deltam}{\Delta_M}
\newcommand{\deltamomega}{\Delta_{M_\Omega}}
\newcommand{\momegadelta}{{}_{M_\Omega} \Delta}
\newcommand{\mdelta}{{}_M \Delta}
\newcommand{\momegaDelta}{{}_{M_\Omega} \Delta}
\newcommand{\sigmacan}{\sigma^{{\rm can}}}
\newcommand{\leftaction}{\triangleright}
\newcommand{\rightaction}{\triangleleft}
\newcommand{\aone}{a_{(1)}}
\newcommand{\atwo}{a_{(2)}}
\newcommand{\aoneone}{a_{(1)(1)}}
\newcommand{\aonetwo}{a_{(1)(2)}}
\newcommand{\atwoone}{a_{(2)(1)}}
\newcommand{\atwotwo}{a_{(2)(2)}}
\newcommand{\bone}{b_{(1)}}
\newcommand{\btwo}{b_{(2)}}
\newcommand{\boneone}{b_{(1)(1)}}
\newcommand{\bonetwo}{b_{(1)(2)}}
\newcommand{\btwoone}{b_{(2)(1)}}
\newcommand{\btwotwo}{b_{(2)(2)}}
\newcommand{\omegaone}{\omega_{(1)}}
\newcommand{\omegatwo}{\omega_{(2)}}
\newcommand{\oneeta}{{}_{(1)} \eta}
\newcommand{\twoeta}{{}_{(2)} \eta}
\newcommand{\onem}{{}_{(1)} m}
\newcommand{\twom}{{}_{(2)} m}
\newcommand{\mone}{m_{(1)}}
\newcommand{\mtwo}{m_{(2)}}
\newcommand{\zeroM}{{}_0 M}
\newcommand{\Mzero}{M_0}
\newcommand{\gzero}{g_0}
\newcommand{\gtwozero}{g^{(2)}_0}
\newcommand{\zerosigma}{{}_0\sigma}
\newcommand{\bimodbicov}{{}^\mathcal{A}_\mathcal{A} \mathcal{M}^\mathcal{A}_\mathcal{A}}
\newcommand{\bimodrightcov}{{}_\mathcal{A} \mathcal{M}^\mathcal{A}_\mathcal{A}}
\newcommand{\bimodleftcov}{{}^\mathcal{A}_\mathcal{A} \mathcal{M}_\mathcal{A}}
\newcommand{\rightmodbicov}{{}^\mathcal{A} \mathcal{M}^\mathcal{A}_\mathcal{A}}
\newcommand{\leftcov}{{}^\mathcal{A} \mathcal{M}}
\newcommand{\bimodbicovgamma}{{}^{\mathcal{A}_\gamma}_{\mathcal{A}_\gamma} \mathcal{M}^{\mathcal{A}_\gamma}_{\mathcal{A}_\gamma}}
\newcommand{\rightmodbicovgamma}{{}^{\mathcal{A}_\gamma} \mathcal{M}^{\mathcal{A}_\gamma}_{\mathcal{A}_\gamma}}
\newcommand{\leftcovgamma}{{}^{\mathcal{A}_\gamma} \mathcal{M}}
\newcommand{\ev}{\rm ev}
\newcommand{\coev}{\rm coev}
\newcommand{\dsp}{\displaystyle}
\newcommand{\RNum}[1]{\uppercase\expandafter{\romannumeral #1\relax}}
\newcommand{\vsp}{\vskip 1em}
\begin{document}
\title{Pseudo-Riemannian metrics on bicovariant bimodules}
\maketitle
\begin{center}
{\large {Jyotishman Bhowmick and Sugato Mukhopadhyay}}\\
Indian Statistical Institute\\
203, B. T. Road, Kolkata 700108\\
Emails: jyotishmanb$@$gmail.com, m.xugato@gmail.com \\
\end{center}
\begin{abstract}
We study pseudo-Riemannian invariant metrics on bicovariant bimodules over Hopf algebras. We clarify some properties of such metrics and prove that pseudo-Riemannian invariant metrics on a bicovariant bimodule and its cocycle deformations are in one-to-one correspondence.
\end{abstract}
\section{Introduction}
The notion of metrics on covariant bimodules over Hopf algebras has been studied by a number of authors including Heckenberger and Schm{\"u}dgen (\cite{heckenberger}, \cite{heckenbergerlaplace}, \cite{heckenbergerspin}) as well as Beggs, Majid and their collaborators (\cite{beggsmajidbook} and references therein). The goal of this article is to characterize bicovariant pseudo-Riemannian metrics on a cocycle-twisted bicovariant bimodule. As in \cite{heckenberger}, the symmetry of the metric comes from Woronowicz's braiding map $ \sigma $ on the bicovariant bimodule. However, since our notion of non-degeneracy of the metric is slightly weaker than that in \cite{heckenberger}, we consider a slightly larger class of metrics than those in \cite{heckenberger}. The positive-definiteness of the metric does not play any role in what we do.
We refer to the later sections for the definitions of pseudo-Riemannian metrics and cocycle deformations. Our strategy is to exploit the covariance of the various maps between bicovariant bimodules to view them as maps between the finite-dimensional vector spaces of left-invariant elements of the respective bimodules. This was already observed and used crucially by Heckenberger and Schm{\"u}dgen in \cite{heckenberger}. We prove that bi-invariant pseudo-Riemannian metrics are automatically bicovariant maps and compare our definition of a pseudo-Riemannian metric with some of the other definitions available in the literature. Finally, we prove that the pseudo-Riemannian bi-invariant metrics on a bicovariant bimodule and its cocycle deformation are in one-to-one correspondence. These results will be used in the companion article \cite{article6}.
In Section \ref{section2}, we discuss some generalities on bicovariant bimodules. In Section \ref{section3}, we define and study pseudo-Riemannian left metrics on a bicovariant differential calculus. Finally, in Section \ref{21staugust20197}, we prove our main result on bi-invariant metrics on cocycle-deformations.
Let us set up some notations and conventions that we are going to follow. All vector spaces will be assumed to be over the complex field. For vector spaces $ V_1 $ and $ V_2, $ $ \sigma^{{\rm can}} : V_1 \tensorc V_2 \rightarrow V_2 \tensorc V_1 $ will denote the canonical flip map, i.e, $ \sigma^{{\rm can}} (v_1 \tensorc v_2) = v_2 \tensorc v_1. $ For the rest of the article, $(\A,\Delta)$ will denote a Hopf algebra. We will use the Sweedler notation for the coproduct $\Delta$. Thus, we will write
\begin{equation} \label{28thaugust20191}
\Delta(a) = a_{(1)} \tensorc a_{(2)}.
\end{equation}
For a right $\A$-module $V,$ the notation $V^*$ will stand for the set $ \Hom_{\A} ( V, \A ). $
Following \cite{woronowicz}, the comodule coaction on a left $\A$-comodule $V$ will be denoted by the symbol $ \Delta_V. $ Thus, $ \Delta_V $ is a $\IC$-linear map $\Delta_V:V \to \A \tensorc V$ such that
$$ (\Delta \tensorc \id) \Delta_V = (\id \tensorc \Delta_V) \Delta_V, ~ (\epsilon \tensorc \id)\Delta_V(v)=v $$
for all $ v $ in $ V $ (here $\epsilon$ is the counit of $\A$). We will use the notation
\begin{equation} \label{28thaugust20192}
\Delta_V(v) = v_{(-1)} \tensorc v_{(0)}.
\end{equation}
Similarly, the comodule coaction on a right $\A$-comodule will be denoted by $ {}_V \Delta $ and we will write
\begin{equation} \label{28thaugust20193}
{}_V \Delta(v) = v_{(0)} \tensorc v_{(1)}.
\end{equation}
Finally, for a Hopf algebra $\A,$ $ \bimodleftcov, \bimodrightcov, \bimodbicov $ will denote the categories of various types of mixed Hopf-bimodules as in Subsection 1.9 of \cite{montgomery}.
\section{Covariant bimodules on quantum groups} \label{section2}
In this section we recall and prove some basic facts on covariant bimodules. These objects were studied by many Hopf-algebraists (as Hopf-bimodules) including Abe (\cite{abe}) and Sweedler (\cite{sweedler}). During the 1980's, they were re-introduced by Woronowicz (\cite{woronowicz}) for studying differential calculi over Hopf algebras. Schauenburg (\cite{schauenberg}) proved a categorical equivalence between bicovariant bimodules and Yetter-Drinfeld modules over a Hopf algebra, the latter being introduced by Yetter in \cite{yetter}.
We start by recalling the notions on covariant bimodules from Section 2 of \cite{woronowicz}. Suppose $M$ is a bimodule over $\A$ such that $ (M, \Delta_M) $ is a left $\A$-comodule. Then $ (M, \Delta_M) $ is called a left-covariant bimodule if this pair is an object of the category $ \bimodleftcov, $ i.e., for all $a$ in $\A$ and $m$ in $M$, the following equations hold:
$$\Delta_M(a m)=\Delta(a)\Delta_M(m),~ \Delta_M(m a)=\Delta_M(m)\Delta(a).$$
Similarly, if $ {}_M \Delta $ is a right comodule coaction on $M,$ then $ (M, {}_M \Delta) $ is called a right covariant bimodule if it is an object of the category $ \bimodrightcov, $ i.e, for any $a$ in $\A$ and $m$ in $M$,
$${}_ M\Delta(a m)=\Delta(a){}_ M\Delta(m),~ {}_ M\Delta(m a)={}_ M\Delta(m)\Delta(a).$$
Finally, let $ M$ be a bimodule over $\A$ and $\Delta_ M: M \to \A \tensorc M$ and ${}_ M \Delta: M \to M \tensorc \A$ be $\IC$-linear maps. Then we say that $(M, \Delta_{ M}, {}_ M \Delta)$ is a bicovariant bimodule if this triplet is an object of $\bimodbicov. $ Thus,
\begin{itemize}
\item[(i)] $(M, \Delta_{ M})$ is a left-covariant bimodule,
\item[(ii)] $(M, {}_ M \Delta)$ is a right-covariant bimodule,
\item[(iii)] $ (\id \tensorc {}_M \Delta) \Delta_M = (\Delta_M \tensorc \id) {}_M\Delta. $
\end{itemize}
The vector space of left (respectively, right) invariant elements of a left (respectively, right) covariant bimodule will play a crucial role in the sequel and we introduce notations for them here.
\begin{defn} \label{21staugust20193}
Let $(M, \Delta_M)$ be a left-covariant bimodule over $\A$. The subspace of left-invariant elements of $M$ is defined to be the vector space
$${}_0M:=\{m \in M : \Delta_M(m)=1\tensorc m\}.$$
Similarly, if $(M, {}_M\Delta)$ is a right-covariant bimodule over $\A$, the subspace of right-invariant elements of $M$ is the vector space
$$M_0:=\{m \in M : {}_M\Delta(m) = m \tensorc 1\}.$$
\end{defn}
\brmrk \label{20thjune}
We will say that a bicovariant bimodule $( M, \Delta_M, {}_M \Delta ) $ is finite if $ {}_0 M $ is a finite dimensional vector space. Throughout this article, we will only work with bicovariant bimodules which are finite.
\ermrk
Let us note the immediate consequences of the above definitions.
\blmma \label{1staugust2019jb1} (Theorem 2.4 of \cite{woronowicz})
Suppose $ M $ is a bicovariant $\A$-$\A$-bimodule. Then
$$ \mdelta (\zeroM) \subseteq \zeroM \tensorc \A,~ \deltam (\Mzero) \subseteq \A \tensorc \Mzero. $$
More precisely, if $ \{ m_i \}_i $ is a basis of $ \zeroM, $ then there exist elements $ \{ a_{ji} \}_{i,j} $ in $ \A $ such that
\begin{equation} \label{26thaugust20191} \mdelta (m_i) = \sum_j m_j \tensorc a_{ji}. \end{equation}
\elmma
\begin{proof} This is a simple consequence of the fact that $ \mdelta $ commutes with $ \deltam. $
\end{proof}
The category $\bimodleftcov$ has a natural monoidal structure. Indeed, if
$ (M, \Delta_M) $ and $ (N, \Delta_N) $ are left-covariant bimodules over $ \A, $ then we have a left coaction $ \Delta_{M \tensora N} $ of $ \A $ on $ M \tensora N $ defined by the following formula:
$$ \Delta_{M \tensora N} (m \tensora n) = m_{(-1)} n_{(-1)} \tensorc m_{(0)} \tensora n_{(0)}. $$
Here, we have made use of the Sweedler notation introduced in \eqref{28thaugust20192}. This makes $ M \tensora N $ into a left covariant $\A$-$\A$-bimodule. Similarly, there is a right coaction $ {}_{M \tensora N} \Delta $ on $ M \tensora N $ if $ (M, {}_M \Delta) $ and $ (N, {}_N \Delta)$ are right covariant bimodules.
The fundamental theorem of Hopf modules (Theorem 1.9.4 of \cite{montgomery}) states that if $ V $ is a left-covariant bimodule over $\A,$ then $ V $ is free as a left (as well as a right) $ \A$-module. This was reproved by Woronowicz in \cite{woronowicz}. In fact, one has the following result:
\begin{prop}{(Theorem 2.1 and Theorem 2.3 of \cite{woronowicz})} \label{moduleiso}
Let $(M, \Delta_M)$ be a bicovariant bimodule over $\A$. Then the multiplication maps ${}_0M \tensorc \A \to M,$ $ \A \tensorc {}_0M \to M,$ $ M_0 \tensorc \A \to M$ and $ \A \tensorc M_0 \to M$ are isomorphisms.
\end{prop}
\begin{corr} \label{3rdaugust20191}
Let $(M,\Delta_M)$ and $(N, \Delta_N)$ be left-covariant bimodules over $\A$ and $\{m_i\}_i$ and $\{n_j\}_j$ be vector space bases of ${}_0 M$ and ${}_0 N$ respectively. Then each element of $M \tensora N$ can be written as $\sum_{ij} a_{ij} m_i \tensora n_j$ and $\sum_{ij} m_i \tensora n_j b_{ij}$, where $a_{ij}$ and $b_{ij}$ are uniquely determined.\\
A similar result holds for right-covariant bimodules $(M, {}_M \Delta)$ and $(N, {}_N \Delta)$ over $\A$. Finally, if $(M,\Delta_M)$ is a left-covariant bimodule over $\A$ with basis $\{m_i\}_i$ of ${}_0 M$, and $(N, {}_N \Delta)$ is a right-covariant bimodule over $\A$ with basis $\{n_j\}_j$ of $N_0$, then any element of $M \tensora N$ can be written uniquely as $\sum_{ij} a_{ij} m_i \tensora n_j$ as well as $\sum_{ij} m_i \tensora n_j b_{ij}$.
\end{corr}
\begin{proof}
The proof of this result is an adaptation of Lemma 3.2 of \cite{woronowicz} and we omit it.
\end{proof}
The next proposition will require the definition of right Yetter-Drinfeld modules for which we refer to \cite{yetter} and Definition 4.1 of \cite{schauenberg}.
\begin{prop} \label{3rdaugust20192} (Theorem 5.7 of \cite{schauenberg})
The functor $ M \mapsto {}_0 M $ induces a monoidal equivalence between the category $ \bimodbicov $ and the category of right Yetter-Drinfeld modules. Therefore, if $(M,\Delta_M)$ and $(N, \Delta_N)$ are left-covariant bimodules over $\A,$ then
\begin{equation} \label{21staugust20194}
{}_0 (M \tensora N) = {\rm span}_\IC \{m \tensora n : m \in {}_0 M, n \in {}_0 N \}.
\end{equation}
Similarly, if $(M, {}_M \Delta)$ and $(N, {}_N \Delta)$ are right-covariant bimodules over $\A$, then we have that $$ (M \tensora N)_0 = {\rm span}_\IC \{m \tensora n : m \in M_0, n \in N_0 \}.$$
Thus, ${}_0 (M \tensora N) = {}_0 M \tensorc {}_0 N$ and $(M \tensora N)_0 = M_0 \tensorc N_0$.
\end{prop}
\begin{rmk} \label{29thjune20191}
In the light of Proposition \ref{3rdaugust20192}, we are allowed to use the notations $ {}_{0} M \tensorc {}_{0} N $ and $ {}_{0} (M \tensora N) $ interchangeably.
\end{rmk}
We recall now the definition of covariant maps between bimodules.
\begin{defn}
Let $(M, \Delta_{ M}, {}_M \Delta)$ and $(N, \Delta_N, {}_N \Delta)$ be bicovariant $\A$-bimodules and $ T $ be a $ \mathbb{C} $-linear map from $ M $ to $ N. $
$ T $ is called left-covariant if $T$ is a morphism in the category $\leftcov,$ i.e, for all $ m \in M, $
$$ (\id \tensorc T)(\Delta_{M}(m))=\Delta_N(T(m)). $$
$T$ is called right-covariant if $T$ is a morphism in the category $\mathcal{M}^\A.$ Thus, for all $ m \in M,$
$$ ( T \tensorc \id) {}_M \Delta (m) = {}_N \Delta (T (m)). $$
Finally, a map which is both left and right covariant will be called a bicovariant map. In other words, a bicovariant map is a morphism in the category ${}^\A \mathcal{M}^\A.$
\end{defn}
We end this section by recalling the following fundamental result of Woronowicz.
\begin{prop}{(Proposition 3.1 of \cite{woronowicz})} \label{4thmay20193}
Given a bicovariant bimodule $\E$ there exists a unique bimodule homomorphism
$$\sigma: \E \tensora \E \to \E \tensora \E ~ {\rm such} ~ {\rm that} $$
\begin{equation} \label{30thapril20191} \sigma(\omega \tensora \eta)= \eta \tensora \omega \end{equation}
for any left-invariant element $\omega$ and right-invariant element $\eta$ in $\E$. $\sigma$ is invertible and is a bicovariant $\A$-bimodule map from $\E \tensora \E$ to itself. Moreover, $\sigma$ satisfies the following braid equation on $\E \tensora \E \tensora \E:$
$$ (\id \tensora \sigma)(\sigma \tensora \id)(\id \tensora \sigma)= (\sigma \tensora \id)(\id \tensora \sigma)(\sigma \tensora \id). $$
\end{prop}
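As an illustrative numerical aside (not part of the original text): in the classical situation, where the basis elements are simultaneously left- and right-invariant, $\sigma$ reduces to the flip map, and the braid equation above can be checked directly as a matrix identity. The dimension $d$ below is an arbitrary choice.

```python
# Illustrative check (ours): in the classical case the braiding sigma is the
# flip map sigma_can, for which the braid equation of the proposition holds.
import numpy as np

d = 3
I = np.eye(d)

# The flip sigma_can on C^d (x) C^d as a d^2 x d^2 permutation matrix:
# e_i (x) e_j  |->  e_j (x) e_i.
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[j * d + i, i * d + j] = 1.0

# (id (x) sigma)(sigma (x) id)(id (x) sigma)
lhs = np.kron(I, S) @ np.kron(S, I) @ np.kron(I, S)
# (sigma (x) id)(id (x) sigma)(sigma (x) id)
rhs = np.kron(S, I) @ np.kron(I, S) @ np.kron(S, I)
print(np.allclose(lhs, rhs))  # True
```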
\section{Pseudo-Riemannian metrics on bicovariant bimodules} \label{section3}
In this section, we study pseudo-Riemannian metrics on bicovariant differential calculi on Hopf algebras. After defining pseudo-Riemannian metrics, we recall the definitions of left and right invariance of a pseudo-Riemannian metric from \cite{heckenberger}. We prove that a pseudo-Riemannian metric is left (respectively, right) invariant if and only if it is left (respectively, right) covariant. The coefficients of a left-invariant pseudo-Riemannian metric with respect to a left-invariant basis of $\E$ are scalars. We use this fact to clarify some properties of pseudo-Riemannian invariant metrics. We end the section by comparing our definition with those by Heckenberger and Schm{\"u}dgen (\cite{heckenberger}) as well as by Beggs and Majid.
\begin{defn} \label{24thmay20191} (\cite{heckenberger})
Suppose $ \E $ is a bicovariant $\A$-bimodule and $ \sigma: \E \tensora \E \rightarrow \E \tensora \E $ is the map as in Proposition \ref{4thmay20193}. A pseudo-Riemannian metric for the pair $ (\E, \sigma) $ is a right $\A$-linear map $g:\E \tensora \E \to \A$ such that the following conditions hold:
\begin{enumerate}
\item[(i)] $ g \circ \sigma = g. $
\item[(ii)] If $g(\rho \tensora \nu)=0$ for all $\nu$ in $\E,$ then $\rho = 0.$
\end{enumerate}
\end{defn}
For other notions of metrics on covariant differential calculus, we refer to \cite{beggsmajidbook} and references therein.
\begin{defn} (\cite{heckenberger})
A pseudo-Riemannian metric $g$ on a bicovariant $\A$-bimodule $\E$ is said to be left-invariant if for all $\rho, \nu$ in $\E$,
$$ (\id \tensorc \epsilon g)(\Delta_{(\E \tensora \E)}(\rho \tensora \nu)) = g(\rho \tensora \nu). $$
Similarly, a pseudo-Riemannian metric $g$ on a bicovariant $\A$-bimodule $\E$ is said to be right-invariant if for all $\rho, \nu$ in $\E$,
$$ (\epsilon g \tensorc \id)({}_{(\E \tensora \E)}\Delta(\rho \tensora \nu)) = g(\rho \tensora \nu). $$
Finally, a pseudo-Riemannian metric $g$ on a bicovariant $\A$-$\A$ bimodule $\E$ is said to be bi-invariant if it is both left-invariant as well as right-invariant.
\end{defn}
We observe that a pseudo-Riemannian metric is invariant if and only if it is covariant.
\begin{prop} \label{29thaugust20192}
Let $g$ be a pseudo-Riemannian metric on the bicovariant bimodule $\E$. Then $g$ is left-invariant if and only if $g: \E \tensora \E \to \A$ is a left-covariant map. Similarly, $g$ is right-invariant if and only if $g: \E \tensora \E \to \A$ is a right-covariant map.
\end{prop}
\begin{proof}
Let $g$ be a left-invariant metric on $\E$, and $\rho$, $\nu$ be elements of $\E$.
Then the following computation shows that $g$ is a left-covariant map.
\begin{equation*}
\begin{aligned}
& \Delta g(\rho \tensora \nu) = \Delta((\id \tensorc \epsilon g)(\Delta_{(\E \tensora \E)}(\rho \tensora \nu)))\\
=& \Delta(\id \tensorc \epsilon g)(\rho_{(-1)} \nu_{(-1)} \tensorc \rho_{(0)} \tensora \nu_{(0)})\\
=& \Delta(\rho_{(-1)} \nu_{(-1)}) \epsilon g(\rho_{(0)} \tensora \nu_{(0)})\\
=& (\rho_{(-1)})_{(1)} (\nu_{(-1)})_{(1)} \tensorc (\rho_{(-1)})_{(2)} (\nu_{(-1)})_{(2)} \epsilon g(\rho_{(0)} \tensora \nu_{(0)})\\
=& (\rho_{(-1)})_{(1)} (\nu_{(-1)})_{(1)} \tensorc ((\id \tensorc \epsilon g)((\rho_{(-1)})_{(2)} (\nu_{(-1)})_{(2)} \tensorc \rho_{(0)} \tensora \nu_{(0)}))\\
=& \rho_{(-1)} \nu_{(-1)} \tensorc ((\id \tensorc \epsilon g)(\Delta_{(\E \tensora \E)}(\rho_{(0)} \tensora \nu_{(0)})))\\
& {\rm \ (where \ we \ have \ used \ coassociativity \ of \ comodule \ coactions) } \\
=& \rho_{(-1)} \nu_{(-1)} \tensorc g(\rho_{(0)} \tensora \nu_{(0)})\\
=& (\id \tensorc g)(\Delta_{(\E \tensora \E)}(\rho \tensora \nu)).
\end{aligned}
\end{equation*}
On the other hand, suppose $g: \E\tensora \E \to \A$ is a left-covariant map. Then we have
\begin{equation*}
\begin{aligned}
&(\id \tensorc \epsilon g) \Delta_{(\E \tensora \E)}(\rho \tensora \nu) = (\id \tensorc \epsilon)(\id \tensorc g)\Delta_{(\E \tensora \E)} (\rho \tensora \nu)\\
= &(\id \tensorc \epsilon)\Delta g(\rho \tensora \nu) = g(\rho \tensora \nu).
\end{aligned}
\end{equation*}
The proof of the right-covariant case is similar.
\end{proof}
The following key result will be used throughout the article.
\begin{lemma} (\cite{heckenberger}) \label{14thfeb20191}
If $g$ is a left-invariant pseudo-Riemannian metric on a left-covariant $\A$-bimodule $\E$, then $g(\omega_1 \tensora \omega_2) \in \IC.1$ for all $\omega_1, \omega_2$ in $\zeroE.$ Similarly, if $ g $ is a right-invariant pseudo-Riemannian metric on a right-covariant $\A$-bimodule, then $ g (\eta_1 \tensora \eta_2) \in \IC.1 $ for all $ \eta_1, \eta_2 $ in $ \Ezero. $
\end{lemma}
Let us clarify some of the properties of left-invariant and right-invariant pseudo-Riemannian metrics.
To that end, we make the next definition, which makes sense since we always work with finite bicovariant bimodules (see Remark \ref{20thjune}). The notations introduced in the next definition will be used throughout the article.
\begin{defn} \label{24thaugust20195}
Let $ \E $ and $ g $ be as above. For a fixed basis $ \{ \omega_1, \cdots , \omega_n \} $ of $ \zeroE, $ we define $ g_{ij} = g (\omega_i \tensora \omega_j). $ Moreover, we define $ V_g: \E \rightarrow \E^* = \Hom_{\A} ( \E, \A ) $ to be the map defined by the formula
$$ V_g (e) (f) = g (e \tensora f). $$
\end{defn}
\begin{prop} \label{23rdmay20192}
Let $g$ be a left-invariant pseudo-Riemannian metric for $\E$ as in Definition \ref{24thmay20191}. Then the following statements hold:
\begin{itemize}
\item[(i)] The map $ V_g $ is a one-one right $ \A $-linear map from $ \E $ to $ \E^*. $
\item[(ii)] If $ e \in \E $ is such that $ g (e \tensora f) = 0 $ for all $ f \in {}_0 \E, $ then $ e = 0. $ In particular, the map $ V_{g}|_{{}_0 \E} $ is one-one and hence an isomorphism from $ {}_0 \E $ to $ ({}_0 \E)^*.$
\item[(iii)] The matrix $((g_{ij}))_{ij}$ is invertible.
\item[(iv)] Let $g^{ij}$ denote the $(i,j)$-th entry of the inverse of the matrix $((g_{ij}))_{ij}$. Then $g^{ij}$ is an element of $\IC.1$ for all $i,j$.
\item[(v)] If $g(e \tensora f)=0$ for all $e$ in $\zeroE$, then $f = 0$.
\end{itemize}
\end{prop}
\begin{proof}
The right $ \A $-linearity of $ V_g $ follows from the fact that $g$ is a well-defined map from $\E \tensora \E$ to $\A.$ Condition (ii) of Definition \ref{24thmay20191} forces $V_g$ to be one-one. This proves (i).
For proving (ii), note that $V_g|_{{}_0 \E}$ is the restriction of a one-one map to a subspace and hence is a one-one $\IC$-linear map. Since $g$ is left-invariant, Lemma \ref{14thfeb20191} implies that for any $e$ in ${}_0 \E$, $V_g(e)(\zeroE)$ is contained in $\IC$. Therefore, $V_g$ maps $\zeroE$ into $(\zeroE)^*$. Since $\zeroE$ and $(\zeroE)^*$ have the same finite dimension as vector spaces, $ V_{g}|_{{}_0 \E} : {}_0 \E \to ({}_0 \E)^* $ is an isomorphism. This proves (ii).
Now we prove (iii). Let $ \{ \omega_i \}_i $ be a basis of $ {}_0 \E $ and $ \{ \omega^*_i \}_i $ be the dual basis, i.e., $ \omega^*_i (\omega_j) = \delta_{ij}. $ Since $ V_{g}|_{{}_0 \E} $ is a vector space isomorphism from $ {}_0 \E $ to $ ({}_0 \E)^* $ by part (ii), there exist complex numbers $ a_{ij} $ such that
$$ (V_{g}|_{{}_0 \E})^{-1}(\omega_i^*)= \sum_j a_{ij} \omega_j. $$
This yields
\begin{equation*}
\delta_{ik} = \omega_i^*(\omega_k)
= g(\sum_j a_{ij} \omega_j \tensora \omega_k)
= \sum_j a_{ij} g_{jk}.
\end{equation*}
Therefore, $((a_{ij}))_{ij}$ is the left-inverse and hence the inverse of the matrix $((g_{ij}))_{ij}$. This proves (iii).
For proving (iv), we use the fact that $g_{ij}$ is an element of $\IC.1$ for all $i,j$. Since
$$ \sum_k g(\omega_i \tensora \omega_k)g^{kj} = \delta_{ij}.1 = \sum_k g^{ik}g(\omega_k \tensora \omega_j), $$
applying $\epsilon$ we obtain
$$\sum_k g(\omega_i \tensora \omega_k)\epsilon(g^{kj})= \delta_{ij} = \sum_k \epsilon(g^{ik})g(\omega_k \tensora \omega_j).$$
So, the matrix $((\epsilon(g^{ij})))_{ij}$ is also an inverse to the matrix $((g(\omega_i \tensora \omega_j)))_{ij}$ and hence $ g^{ij} = \epsilon(g^{ij}).1, $ i.e., $g^{ij}$ is in $\IC.1.$
Finally, we prove (v) using (iv). Suppose $f$ is an element of $\E$ such that $g(e \tensora f)=0$ for all $e$ in ${}_0\E$. Let $f=\sum_k \omega_k a_k$ for some elements $a_k$ in $\A$. Then for any fixed index $i_0$, we obtain
\begin{equation*}
0 = g(\sum_j g^{i_0 j}\omega_j \tensora \sum_k \omega_k a_k)
= \sum_k \sum_j g^{i_0j} g_{jk} a_k
= \sum_k \delta_{i_0k} a_k
= a_{i_0}.
\end{equation*}
Hence, we have that $f=0$. This finishes the proof.
\end{proof}
We apply the results in Proposition \ref{23rdmay20192} to exhibit a basis of the free right $\A$-module $ V_g (\E). $ This will be used in making Definition \ref{26thaugust20191sm} which is needed to prove our main Theorem \ref{29thaugust20191sm}.
\blmma \label{28thaugust2019night1}
Suppose $ \{ \omega_i \}_i $ is a basis of $ \zeroE $ and $ \{ \omega^*_i \}_i $ be the dual basis as in the proof of Proposition \ref{23rdmay20192}. If $ g $ is a pseudo-Riemannian left-invariant metric on $\E,$ then $ V_g (\E) $ is a free right $ \A $-module with basis $ \{ \omega^*_i \}_i. $
\elmma
\begin{proof}
We will use the notations $ ((g_{ij}))_{ij} $ and $ g^{ij} $ from the proof of Proposition \ref{23rdmay20192}. Since $ V_g $ is a right $ \A $-linear map, $ V_g (\E) $ is a right $\A$-module. Since
\begin{equation} \label{28thaugust2019night2} V_g (\omega_i) = \sum_{j} g_{ij} \omega^*_j \end{equation}
and the inverse matrix $ ((g^{ij}))_{ij} $ has scalar entries (Proposition \ref{23rdmay20192}), we get
$$ \omega^*_k = \sum_i g^{ki} V_g (\omega_i) $$
and so $ \omega^*_k $ belongs to $ V_g (\E) $ for all $k.$ By the right $\A$-linearity of the map $ V_g, $ we conclude that the set $ \{ \omega^*_i \}_i $ is right $\A$-total in $ V_g (\E). $
Finally, if $ a_k $ are elements in $ \A $ such that $\sum_k \omega^*_k a_k = 0, $ then by \eqref{28thaugust2019night2}, we have
$$ 0 = \sum_{i,k} g^{ki} V_g (\omega_i) a_k = V_g (\sum_i \omega_i ( \sum_k g^{ki} a_k ) ). $$
As $ V_g $ is one-one and $ \{ \omega_i \}_i $ is a basis of the free module $ \E, $ we get
$$ \sum_k g^{ki} a_k = 0 ~ \forall ~ i. $$
Multiplying by $ g_{ij} $ and summing over $i$ yields $a_j = 0. $ This proves that $ \{ \omega^*_i \}_i $ is a basis of $ V_g (\E) $ and finishes the proof.
\end{proof}
\brmrk
Let us note that for all $e \in \E,$ the following equation holds:
\begin{equation} \label{25thjune20} e = \sum_i \omega_i \omega^*_i ( e ). \end{equation}
Indeed, writing $ e = \sum_i \omega_i a_i $ with $ a_i $ in $ \A, $ the right $\A$-linearity of $ \omega^*_i $ yields $ \omega^*_i (e) = a_i. $
\ermrk
The following proposition was kindly pointed out to us by the referee for which we will need the notion of a left dual of an object in a monoidal category. We refer to Definition 2.10.1 of \cite{etingof} or Definition XIV.2.1 of \cite{kassel} for the definition.
\begin{prop} \label{25thjune202}
Suppose $g$ is an $\A$-bilinear pseudo-Riemannian metric on a finite bicovariant $\A$-bimodule $\E$. Let $\widetilde{\E}$ denote the left dual of the object $\E$ in the category $\bimodbicov.$ Then $\widetilde{\E}$ is isomorphic to $\E$ as objects in the category $\bimodbicov$ via the morphism $V_g.$
\end{prop}
\begin{proof} It is well-known that $\widetilde{\E}$ and $ \E^* $ are isomorphic objects in the category $\bimodbicov.$ This follows by using the bicovariant $\A$-bilinear maps
\[ \ev: \widetilde{\E} \tensora \E \to \A; \quad \phi \tensora e \mapsto \phi(e), \qquad \coev: \A \to \E \tensora \widetilde{\E}; \quad 1 \mapsto \sum_i \omega_i \tensora \omega_i^*. \]
We define $ \ev_g: \E \tensora \E \rightarrow \A $ and $ \coev_g: \A \rightarrow \E \tensora \E $ by the following formulas:
$$ \ev_g ( e \tensora f ) = g ( e \tensora f ), \quad \coev_g ( 1 ) = \sum_i \omega_i \tensora V^{-1}_g ( \omega^*_i ). $$
Then since $g$ is both left and right $\A$-linear, $ \ev_g $ and $ \coev_g $ are $\A$-$\A$-bilinear maps. The bicovariance of $g$ implies the bicovariance of $\ev_g$ while the bicovariance of $\coev_g = ( \id \tensora V^{-1}_g ) \circ \coev $ follows from the bicovariance of $V_g$ and $\coev.$
To show that $ \E $ (equipped with $ \ev_g $ and $ \coev_g $) is also a left dual of $\E,$ we need to check the following identities for all $e$ in $\E$:
$$ ( \ev_g \tensora \id ) ( \id \tensora \coev_g ) ( e ) = e, ~ ( \id \tensora \ev_g ) ( \coev_g \tensora \id ) ( e ) = e. $$
But these follow by a simple computation using the fact that ${}_0 \E $ is right $\A$-total in $\E$ and the identity \eqref{25thjune20}.
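For instance, the second of these identities can be verified as follows: by the right $\A$-linearity of the maps involved and \eqref{25thjune20}, we have for all $e$ in $\E,$
\begin{equation*}
( \id \tensora \ev_g ) ( \coev_g \tensora \id ) ( e ) = \sum_i \omega_i \, g ( V^{-1}_g ( \omega^*_i ) \tensora e ) = \sum_i \omega_i \, V_g ( V^{-1}_g ( \omega^*_i ) ) ( e ) = \sum_i \omega_i \omega^*_i ( e ) = e.
\end{equation*}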
From the above discussion, we have that $\E$ and $\E^*$ are two left duals of the object $\E$ in the category $ \bimodbicov. $ Then by the proof of Proposition 2.10.5 of \cite{etingof}, we know that $ ( \ev_g \tensora \id_{\E^*} ) ( \id_{\E} \tensora \coev ) $ is an isomorphism from $\E$ to $\E^*.$ But it can be easily checked that $ ( \ev_g \tensora \id_{\E^*} ) ( \id_{\E} \tensora \coev ) = V_g. $ This completes the proof.
\end{proof}
Now we state a result on bi-invariant (i.e., both left- and right-invariant) pseudo-Riemannian metrics.
\begin{prop} \label{27thjune20197}
Let $g$ be a pseudo-Riemannian metric on $\E$ and let the symbols $\{ \omega_i \}_i$ and $\{ g_{ij} \}_{ij}$ be as above. If
\begin{equation} \label{29thaugust20194}
{}_\E \Delta(\omega_i) = \sum_j \omega_j \tensorc R_{ji}
\end{equation}
(see \eqref{26thaugust20191}), then $g$ is bi-invariant if and only if the elements $ g_{ij} $ are scalars and
\begin{equation} \label{6thnov20191}
g_{ij} = \sum_{kl} g_{kl} R_{ki} R_{lj}.
\end{equation}
\end{prop}
\begin{proof}
Since $g$ is left-invariant, the elements $g_{ij}$ are in $\IC.1$. Moreover, the right-invariance of $g$ implies that $g$ is right-covariant (Proposition \ref{29thaugust20192}), i.e. \begin{equation*} \begin{aligned} & 1 \tensorc g_{ij} = \Delta(g_{ij}) = (g \tensorc \id){}_{(\E \tensora \E)}\Delta(\omega_i \tensora \omega_j) \\ =& (g \tensorc \id)(\sum_{kl} \omega_k \tensora \omega_l \tensorc R_{ki} R_{lj}) = 1 \tensorc \sum_{kl} g_{kl} R_{ki} R_{lj}, \end{aligned} \end{equation*} so that
\begin{equation} \label{29thaugust20193}
g_{ij} = \sum_{kl} g_{kl} R_{ki} R_{lj}.
\end{equation}
Conversely, if $ g_{ij} = g (\omega_i \tensora \omega_j) $ are scalars and \eqref{6thnov20191} is satisfied, then $ g $ is left-invariant and right-covariant. By Proposition \ref{29thaugust20192}, $g$ is right-invariant.
\end{proof}
We end this section by comparing our definition of pseudo-Riemannian metrics with some of the other definitions available in the literature.
Proposition \ref{23rdmay20192} shows that our notion of pseudo-Riemannian metric coincides with the right $\A$-linear version of a ``symmetric metric'' introduced in Definition 2.1 of \cite{heckenberger} if we impose the condition of left-invariance.
Next, we compare our definition with the one used by Beggs and Majid in Proposition 4.2 of \cite{majidpodles} (also see \cite{beggsmajidbook} and references therein). To that end, we need to recall the construction of the two forms by Woronowicz (\cite{woronowicz}).
If $ \E $ is a bicovariant $\A$-bimodule and $ \sigma $ is the map as in Proposition \ref{4thmay20193}, Woronowicz defined the space of two forms as:
$$\twoform := (\E \tensora \E) \big/ {\rm Ker} (\sigma - 1).$$
The symbol $\wedge$ will denote the quotient map
$$ \wedge: \E \tensora \E \to \twoform. $$
Thus,
\begin{equation} \label{22ndaugust20191}
{\rm Ker}(\wedge) = {\rm Ker}(\sigma - 1).
\end{equation}
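As a simple illustration of this construction, suppose $\sigma$ is the flip map (as is the case for the classical calculus on a compact Lie group, cf. Corollary \ref{18thsep20194}), i.e.,
$$ \sigma(e_1 \tensora e_2) = e_2 \tensora e_1 \ \mbox{ for all } e_1, e_2 \in \E. $$
Then $ {\rm Ker}(\sigma - 1) $ consists of the symmetric tensors, so that $\twoform$ reduces to the usual space of antisymmetric two-forms.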
In Proposition 4.2 of \cite{majidpodles}, the authors define a metric on a bimodule $ \E $ over a (possibly) noncommutative algebra $\A$ as an element $ h $ of $ \E \tensora \E $ such that $ \wedge(h) = 0. $ We claim that metrics in the sense of Beggs and Majid are in one-to-one correspondence with elements $ g \in \Hom_\A (\E \tensora \E, \A) $ (not necessarily left-invariant) such that $ g \circ \sigma = g. $ Thus, modulo the nondegeneracy condition (ii) of Definition \ref{24thmay20191}, our notion of pseudo-Riemannian metric matches the definition of metric by Beggs and Majid.
Indeed, if $ g \in \Hom_\A (\E \tensora \E, \A) $ is as above and $ \{ \omega_i \}_i $ is a basis of $\zeroE,$ then the equation $ g \circ \sigma = g $ implies that
$$ g \circ \sigma (\omega_i \tensora \omega_j) = g (\omega_i \tensora \omega_j). $$
However, by equation (3.15) of \cite{woronowicz}, we know that
$$ \sigma (\omega_i \tensora \omega_j) = \sum_{k,l} \sigma^{kl}_{ij} \omega_k \tensora \omega_l $$
for some scalars $ \sigma^{kl}_{ij}. $ Therefore, we have
\begin{equation} \label{12thoct20191} \sum_{k,l} \sigma^{kl}_{ij} g (\omega_k \tensora \omega_l) = g (\omega_i \tensora \omega_j). \end{equation}
We claim that the element $ h = \sum_{i,j} g (\omega_i \tensora \omega_j) \omega_i \tensora \omega_j $ satisfies $ \wedge (h) = 0. $ Indeed, by virtue of \eqref{22ndaugust20191}, it is enough to prove that $ (\sigma - 1) (h) = 0. $ But this directly follows from \eqref{12thoct20191} using the left $\A$-linearity of $\sigma.$
This argument is reversible and hence starting from $ h \in \E \tensora \E $ satisfying $ \wedge(h) = 0, $ we get an element $ g \in \Hom_\A (\E \tensora \E, \A) $ such that for all $i,j,$
$$ g \circ \sigma (\omega_i \tensora \omega_j) = g (\omega_i \tensora \omega_j). $$
Since $ \{ \omega_i \tensora \omega_j : i,j \} $ is right $\A$-total in $ \E \tensora \E $ (Corollary \ref{3rdaugust20191}) and the maps $g, \sigma $ are right $\A$-linear, we get that $ g \circ \sigma = g.$ This proves our claim. Let us note that since we did not assume $g$ to be left-invariant, the quantities $ g (\omega_i \tensora \omega_j)$ need not be scalars. However, the proof goes through since the elements $ \sigma^{kl}_{ij} $ are scalars.
\section{Pseudo-Riemannian metrics for cocycle deformations} \label{21staugust20197}
This section concerns the braiding map and pseudo-Riemannian metrics of bicovariant bimodules under cocycle deformations of Hopf algebras, and it contains two main results. We start by recalling that a bicovariant bimodule $ \E $ over a Hopf algebra $\A$ can be deformed in the presence of a $2$-cocycle $ \gamma $ on $\A$ to a bicovariant $\A_\gamma$-bimodule $\E_\gamma.$ We prove that the canonical braiding map of the bicovariant bimodule $ \E_\gamma $ (Proposition \ref{4thmay20193}) is a cocycle deformation of the canonical braiding map of $\E.$ Finally, we prove that pseudo-Riemannian bi-invariant metrics on $ \E $ and $ \E_\gamma $ are in one-to-one correspondence.
Throughout this section, we will make heavy use of the Sweedler notations as spelled out in \eqref{28thaugust20191}, \eqref{28thaugust20192} and \eqref{28thaugust20193}. The coassociativity of $\Delta$ will be expressed by the following equation:
$$(\Delta \tensorc \id)\Delta(a) = (\id \tensorc \Delta) \Delta(a) = a_{(1)} \tensorc a_{(2)} \tensorc a_{(3)}.$$
Also, when $m$ is an element of a bicovariant bimodule, we will use the notation
\begin{equation} \label{28thaugust20194}
(\id \tensorc {}_M \Delta)\Delta_M (m) = (\Delta_M \tensorc \id) {}_M \Delta(m) = m_{(-1)} \tensorc m_{(0)} \tensorc m_{(1)}.
\end{equation}
\bdfn
A cocycle $\gamma$ on a Hopf algebra $ ( \A, \Delta ) $ is a $\mathbb{C}$-linear map $\gamma: \A \tensorc \A \rightarrow \mathbb{C} $ which is convolution-invertible and unital, i.e.,
$$ \gamma (a \tensorc 1) = \epsilon (a) = \gamma (1 \tensorc a) $$
and for all $a,b,c$ in $\A,$
\begin{equation} \label{(iii)} \gamma (a_{(1)} \tensorc b_{(1)}) \gamma (a_{(2)} b_{(2)} \tensorc c) = \gamma (b_{(1)} \tensorc c_{(1)}) \gamma (a \tensorc b_{(2)} c_{(2)}).\end{equation}
\edfn
Given a Hopf algebra $ (\A, \Delta) $ and a cocycle $ \gamma $ as above, we have a new Hopf algebra $ (\A_\gamma, \Delta_\gamma) $ which is equal to $ \A $ as a vector space; the coproduct $ \Delta_\gamma $ is equal to $ \Delta, $ while the algebra structure $ \ast_\gamma $ on $ \A_\gamma $ is defined by the following equation:
\begin{equation} \label{2ndseptember20191} a \ast_\gamma b = \gamma (a_{(1)} \tensorc b_{(1)}) a_{(2)} b_{(2)} \overline{\gamma} (a_{(3)} \tensorc b_{(3)}). \end{equation}
Here, $ \overline{\gamma} $ is the convolution inverse to $ \gamma $ which is unital and satisfies the following equation:
\begin{equation} \label{(iiiprime)} \overline{\gamma} (a_{(1)} b_{(1)} \tensorc c) \overline{\gamma} (a_{(2)} \tensorc b_{(2)}) = \overline{\gamma}(a \tensorc b_{(1)} c_{(1)}) \overline{\gamma} (b_{(2)} \tensorc c_{(2)}). \end{equation}
We refer to Theorem 1.6 of \cite{doi} for more details.
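As a simple illustration, the trivial cocycle $ \gamma_0 (a \tensorc b) = \epsilon(a) \epsilon(b) $ is unital, satisfies \eqref{(iii)} and is its own convolution inverse. In this case, \eqref{2ndseptember20191} reduces to
$$ a \ast_{\gamma_0} b = \epsilon(a_{(1)}) \epsilon(b_{(1)}) a_{(2)} b_{(2)} \epsilon(a_{(3)}) \epsilon(b_{(3)}) = ab, $$
so that $ \A_{\gamma_0} = \A $ as Hopf algebras.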
Suppose $ M $ is a bicovariant $\A$-bimodule. Then $ M $ can also be deformed in the presence of a cocycle. This is the content of the next proposition.
\begin{prop} \label{8thmay20191} (Theorem 2.5 of \cite{majidcocycle})
Suppose $ M $ is a bicovariant $\A$-bimodule and $ \gamma $ is a $2$-cocycle on $\A.$ Then we have a bicovariant $\A_\gamma $-bimodule $ M_\gamma $ which is equal to $ M $ as a vector space but the left and right $ \A_\gamma $-module structures are defined by the following formulas:
\begin{eqnarray} & \label{9thmay20197} a *_\gamma m = \gamma(a_{(1)} \tensorc m_{(-1)}) a_{(2)} m_{(0)} \overline{\gamma}(a_{(3)} \tensorc m_{(1)})\\ & \label{9thmay20198} m *_\gamma a = \gamma(m_{(-1)} \tensorc a_{(1)}) m_{(0)} a_{(2)} \overline{\gamma}(m_{(1)} \tensorc a_{(3)}), \end{eqnarray}
for all elements $m$ of $M$ and for all elements $a$ of $\A$. Here, $*_\gamma$ denotes the right and left $\A_\gamma$-module actions, and $.$ denotes the right and left $\A$-module actions.
The $\A_\gamma$-bicovariant structures are given by \begin{equation} \label{10thseptember20191sm} {} \Delta_{M_\gamma}:= \Delta_M : M_\gamma \rightarrow \A_\gamma \tensorc M_\gamma ~ {\rm and} ~ {}_{M_\gamma} \Delta:= {}_M \Delta: M_\gamma \rightarrow M_\gamma \tensorc \A_\gamma. \end{equation}
\end{prop}
\begin{rmk} \label{20thjune2}
From Proposition \ref{8thmay20191}, it is clear that if $M$ is a finite bicovariant bimodule (see Remark \ref{20thjune}), then $ M_\gamma $ is also a finite bicovariant bimodule.
\end{rmk}
We end this subsection by recalling the following result on the deformation of bicovariant maps.
\begin{prop} \label{11thjuly20192} (Theorem 2.5 of \cite{majidcocycle})
Let $(M, \Delta_M, {}_M \Delta)$ and $(N, \Delta_N, {}_N \Delta)$ be bicovariant $\A$-bimodules, $T: M \to N$ be a $\IC$-linear bicovariant map and $\gamma$ be a cocycle as above. Then there exists a map $T_\gamma: M_\gamma \to N_\gamma$ defined by $T_\gamma (m) = T(m)$ for all $m$ in $M$. Thus, $T_\gamma = T$ as $\IC$-linear maps. Moreover, we have the following:
\begin{itemize}
\item[(i)] the deformed map $T_\gamma: M_\gamma \to N_\gamma$ is an $\A_\gamma$ bicovariant map,
\item[(ii)] if $T$ is a bicovariant right (respectively left) $\A$-linear map, then $T_\gamma$ is a bicovariant right (respectively left) $\A_\gamma$-linear map,
\item[(iii)] if $(P, \Delta_P, {}_{P} \Delta)$ is another bicovariant $\A$-bimodule, and $S: N \to {P} $ is a bicovariant map, then $(S \circ T)_\gamma : M_\gamma \to P_\gamma$ is a bicovariant map and $S_\gamma \circ T_\gamma = (S \circ T)_\gamma$.
\end{itemize}
\end{prop}
\subsection{Deformation of the braiding map} \label{18thseptember20192}
Suppose $ \E $ is a bicovariant $\A$-bimodule, let $ \sigma $ be the bicovariant braiding map of Proposition \ref{4thmay20193} and let $ g $ be a bi-invariant metric. Then Proposition \ref{11thjuly20192} implies that we have deformed maps $ \sigma_\gamma$ and $ g_\gamma. $ In this subsection, we study the map $ \sigma_\gamma; $ the map $ g_\gamma $ will be discussed in the next subsection. We will need the following result:
\begin{prop} \label{11thjuly20191} (Theorem 2.5 of \cite{majidcocycle})
Let $(M, \Delta_{M}, {}_M\Delta)$ and $(N, \Delta_{N}, {}_N\Delta)$ be bicovariant bimodules over a Hopf algebra $\A$ and $\gamma$ be a cocycle as above. Then there exists a bicovariant $\A_\gamma$-bimodule isomorphism
$$ \xi: M_\gamma \otimes_{\A_\gamma} N_\gamma \rightarrow (M \tensora N)_\gamma. $$
The isomorphism $\xi$ and its inverse are respectively given by
\begin{equation*}
\begin{aligned}
\xi (m \otimes_{\A_\gamma} n) & = \gamma(m_{(-1)} \tensorc n_{(-1)}) m_{(0)} \tensora n_{(0)} \overline{\gamma}(m_{(1)} \tensorc n_{(1)}),\\
\xi^{-1} (m \tensora n) & = \overline{\gamma}(m_{(-1)} \tensorc n_{(-1)}) m_{(0)} \otimes_{\A_\gamma} n_{(0)} {\gamma}(m_{(1)} \tensorc n_{(1)}).
\end{aligned}
\end{equation*}
\end{prop}
As an illustration, we make the following computation which will be needed later in this subsection:
\begin{lemma} \label{14thseptember20191}
Suppose $ \omega \in \zeroE, \eta \in \Ezero. $ Then the following equation holds:
\begin{eqnarray*}
\xi^{-1} (\gamma (\eta_{(-1)} \tensorc 1) \eta_{(0)} \tensora \omega_{(0)} \overline{\gamma} (1 \tensorc \omega_{(1)})) &=& \eta \otimes_{\A_\gamma} \omega.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Let us first clarify that we view $ \gamma (\eta_{(-1)} \tensorc 1) \eta_{(0)} \tensora \omega_{(0)} \overline{\gamma} (1 \tensorc \omega_{(1)}) $
as an element in $ (\E \tensora \E)_\gamma. $
Then the equation holds because of the following computation:
\begin{equation*}
\begin{aligned}
&\xi^{-1} (\gamma (\eta_{(-1)} \tensorc 1) \eta_{(0)} \tensora \omega_{(0)} \overline{\gamma} (1 \tensorc \omega_{(1)}))\\ =& \gamma (\eta_{(-1)} \tensorc 1) \xi^{-1} ( \eta_{(0)} \tensora \omega_{(0)}) \overline{\gamma} (1 \tensorc \omega_{(1)}) \\
=& \gamma (\eta_{(-2)} \tensorc 1) \overline{\gamma}(\eta_{(-1)} \tensorc 1) \eta_{(0)} \otimes_{\A_\gamma} \omega_{(0)} \gamma(1 \tensorc \omega_{(1)}) \overline{\gamma} (1 \tensorc \omega_{(2)})\\ ~ &{\rm (} {\rm since} ~ \omega \in \zeroE, ~ \eta \in \Ezero {\rm)}\\
=& \epsilon(\eta_{(-2)}) \epsilon(\eta_{(-1)}) \eta_{(0)} \otimes_{\A_\gamma} \omega_{(0)} \epsilon (\omega_{(1)}) \epsilon (\omega_{(2)}) \ {\rm (since \ \overline{\gamma} \ and \ \gamma \ are \ normalised)}\\
=& \eta \otimes_{\A_\gamma} \omega.
\end{aligned}
\end{equation*}
\end{proof}
Now, we are in a position to study the map $ \sigma_\gamma. $ By Proposition \ref{8thmay20191}, $ \E_\gamma $ is a bicovariant $ \A_\gamma $-bimodule. Then Proposition \ref{4thmay20193} guarantees the existence of a canonical braiding map from $\E_\gamma \otimes_{\A_\gamma} \E_\gamma$ to itself. We show that this map is nothing but the deformation $\sigma_\gamma$ of the map $\sigma$ associated with the bicovariant $\A$-bimodule $\E$. By the definition of $ \sigma_\gamma, $ it is a map from $ (\E \tensora \E)_\gamma $ to $ (\E \tensora \E )_\gamma. $ However, by virtue of Proposition \ref{11thjuly20191}, the map $ \xi $ defines an isomorphism from $ \E_\gamma \otimes_{\A_\gamma} \E_\gamma $ to $ (\E \tensora \E )_\gamma. $ By an abuse of notation, we will denote the map
$$ \xi^{-1} \sigma_\gamma \xi : \E_\gamma \otimes_{\A_\gamma} \E_\gamma \rightarrow \E_\gamma \otimes_{\A_\gamma} \E_\gamma $$
by the symbol $ \sigma_\gamma $ again.
\begin{thm} \label{28thaugust20196} (Theorem 2.5 of \cite{majidcocycle})
Let $\E$ be a bicovariant $\A$-bimodule and $\gamma$ be a cocycle as above. Then the deformation $\sigma_\gamma$ of $\sigma$ is the unique bicovariant $\A_\gamma$-bimodule braiding map on $\E_\gamma$ given by Proposition \ref{4thmay20193}.
\end{thm}
\begin{proof}
Since $\sigma$ is a bicovariant $\A$-bimodule map from $\E \tensora \E$ to itself, part (ii) of Proposition \ref{11thjuly20192} implies that $\sigma_\gamma$ is a bicovariant $\A_\gamma$-bimodule map from $(\E \tensora \E)_\gamma \cong \E_\gamma \otimes_{\A_\gamma} \E_\gamma$ to itself. By Proposition \ref{4thmay20193}, there exists a unique $\A_\gamma$-bimodule map $\sigma^\prime$ from $\E_\gamma \otimes_{\A_\gamma} \E_\gamma$ to itself such that $\sigma^\prime(\omega \otimes_{\A_\gamma} \eta) = \eta \otimes_{\A_\gamma} \omega$ for all $\omega$ in ${}_0(\E_\gamma)$, $\eta$ in $(\E_\gamma)_0$.
Since ${}_0(\E_\gamma) = \zeroE$ and $(\E_\gamma)_0 = \Ezero$, it is enough to prove that $\sigma_\gamma(\omega \otimes_{\A_\gamma} \eta) = \eta \otimes_{\A_\gamma} \omega$ for all $\omega$ in $\zeroE$, $\eta$ in $\Ezero$.
We will need the concrete isomorphism between $\E_\gamma \otimes_{\A_\gamma} \E_\gamma$ and $(\E \tensora \E)_\gamma$ defined in Proposition \ref{11thjuly20191}. Since $\omega$ is in $\zeroE$ and $\eta$ is in $\Ezero$, this isomorphism maps the element $\omega \otimes_{\A_\gamma} \eta$ to $\gamma(1 \tensorc \eta_{(-1)}) \omega_{(0)} \tensora \eta_{(0)} \overline{\gamma}(\omega_{(1)} \tensorc 1)$. Then, by the definition of $\sigma_\gamma$, we compute the following:
\begin{equation*}
\begin{aligned}
& \sigma_\gamma(\omega \otimes_{\A_\gamma} \eta)
= \sigma(\gamma(1 \tensorc \eta_{(-1)}) \omega_{(0)}\tensora \eta_{(0)} \overline{\gamma} (\omega_{(1)} \tensorc 1)) \\
=& \sigma(\epsilon (\eta_{(-1)}) \omega_{(0)}\tensora \eta_{(0)} \epsilon(\omega_{(1)}))
= \epsilon (\eta_{(-1)}) \eta_{(0)}\tensora \omega_{(0)} \epsilon(\omega_{(1)})\\
=& \gamma (\eta_{(-1)} \tensorc 1) \eta_{(0)}\tensora \omega_{(0)} \overline{\gamma}(1 \tensorc \omega_{(1)})
= \eta \otimes_{\A_\gamma} \omega,
\end{aligned}
\end{equation*}
where, in the last step we have used Lemma \ref{14thseptember20191}.
\end{proof}
\brmrk \label{26thjune20202}
Proposition \ref{8thmay20191}, Proposition \ref{11thjuly20192}, Proposition \ref{11thjuly20191} and Theorem \ref{28thaugust20196} together imply that the categories $ \bimodbicov $ and $\bimodbicovgamma$ are isomorphic as braided monoidal categories. This was the content of Theorem 2.5 of \cite{majidcocycle}. The referee has pointed out that this is a special case of a much more general result of Bichon (Theorem 6.1 of \cite{bichon}) which says that if two Hopf algebras are monoidally equivalent, then the corresponding categories of right-right Yetter-Drinfeld modules are also monoidally equivalent.
However, in Theorem \ref{28thaugust20196}, we have proved in addition that the braiding on $\bimodbicovgamma$ is precisely the Woronowicz braiding of Proposition \ref{4thmay20193}.
\ermrk
\begin{corr} \label{18thsep20194}
If the unique bicovariant $\A$-bimodule braiding map $\sigma$ for a bicovariant $\A$-bimodule $\E$ satisfies the equation $\sigma^2 = 1$, then the bicovariant $\A_\gamma$-bimodule braiding map $ \sigma_\gamma$ for the bicovariant $\A_\gamma$-bimodule $\E_\gamma$ also satisfies $\sigma_\gamma^2 = 1$.
In particular, if $\A$ is the commutative Hopf algebra of regular functions on a compact semisimple Lie group $G$ and $\E$ is its canonical space of one-forms, then the braiding map $\sigma_\gamma$ for $\E_\gamma$ satisfies $\sigma_\gamma^2 = 1$.
\end{corr}
\begin{proof}
By Theorem \ref{28thaugust20196}, $\sigma_\gamma$ is the unique braiding map for the bicovariant $\A_\gamma$-bimodule $\E_\gamma$. Since, by our hypothesis, $\sigma^2 = 1$, the deformed map $\sigma_\gamma$ also satisfies $\sigma_\gamma^2 = 1$ by part (iii) of Proposition \ref{11thjuly20192}.\\
Next, if $\A$ is a commutative Hopf algebra as in the statement of the corollary and $\E$ is its canonical space of one-forms, then we know that the braiding map $\sigma$ is just the flip map, i.e. for all $e_1, e_2$ in $\E$,
\[ \sigma(e_1 \tensora e_2) = e_2 \tensora e_1, \]
and hence it satisfies $\sigma^2 = 1$. Therefore, for every cocycle deformation $\E_\gamma$ of $\E$, the corresponding braiding map satisfies $\sigma_\gamma^2 = 1$.
\end{proof}
\subsection{Pseudo-Riemannian bi-invariant metrics on $\E_\gamma$}
Suppose $ \E $ is a bicovariant $\A$-bimodule and $\E_\gamma $ is its cocycle deformation as above. The goal of this subsection is to prove that a pseudo-Riemannian bi-invariant metric on $ \E $ naturally deforms to a pseudo-Riemannian bi-invariant metric on $\E_\gamma. $ Since $ g $ is a bicovariant (i.e., both left- and right-covariant) map from the bicovariant bimodule $ \E \tensora \E $ to $ \A, $ Proposition \ref{11thjuly20192} yields a right $ \A_\gamma $-linear bicovariant map $ g_{\gamma} $ from $ \E_\gamma \otimes_{\A_\gamma} \E_\gamma $ to $ \A_\gamma. $ We need to check the conditions (i) and (ii) of Definition \ref{24thmay20191} for the map $ g_\gamma. $
The proof of the equality $ g_\gamma = g_\gamma \circ \sigma_\gamma $ is straightforward. However, checking condition (ii), i.e., verifying that the map $ V_{g_\gamma} $ is an isomorphism onto its image, needs some work. The root of the problem is that we do not yet know whether $ \E^* = V_g ( \E ).$ Our strategy to verify condition (ii) is the following: we show that the right $ \A $-module $ V_g ( \E ) $ is a bicovariant right $\A$-module (see Definition \ref{28thaugust20195}) in a natural way. Let us remark that since the map $ g $ (hence $ V_g $) is not left $\A$-linear, $ V_g ( \E ) $ need not be a left $\A$-module. Since bicovariant right $ \A $-modules and bicovariant maps between them can be deformed (Proposition \ref{28thaugust20191sm}), the map $ V_g $ deforms to a right $ \A_\gamma $-linear isomorphism from $ \E_\gamma $ to $ (V_g (\E) )_\gamma. $ Then in Theorem \ref{29thaugust20191sm}, we show that $ (V_g)_\gamma $ coincides with the map $ V_{g_\gamma} $ and that the latter is an isomorphism onto its image. This is the only subsection where we use the theory of bicovariant right modules (as opposed to bicovariant bimodules).
\begin{defn} \label{28thaugust20195}
Let $M$ be a right $\A$-module, and $\Delta_M : M \to \A \tensorc M$ and ${}_M \Delta: M \to M \tensorc \A$ be $\mathbb{C}$-linear maps. We say that $(M,\Delta_M, {}_M \Delta)$ is a bicovariant right $\A$-module if the triplet is an object of the category $ \rightmodbicov, $ i.e.,
\begin{itemize}
\item[(i)] $(M, \Delta_M)$ is a left $\A$-comodule,
\item[(ii)] $(M, {}_M \Delta)$ is a right $\A$-comodule,
\item[(iii)] $ (\id \tensorc {}_M\Delta) \Delta_M = (\Delta_M \tensorc \id) {}_M\Delta $,
\item[(iv)] for any $a$ in $\A$ and $m$ in $M$, $$ \Delta_M(m a)=\Delta_M(m)\Delta(a), \quad {}_M \Delta(ma) = {}_M \Delta(m) \Delta(a).$$
\end{itemize}
\end{defn}
For the rest of the subsection, $ \E $ will denote a bicovariant $ \A $-bimodule. Moreover, $ \{ \omega_i \}_i $ will denote a basis of $ \zeroE $ and $ \{ \omega^*_i \}_i $ the dual basis, i.e., $ \omega^*_i (\omega_j) = \delta_{ij}. $
Let us recall that \eqref{26thaugust20191} implies the existence of elements $R_{ij}$ in $\A$ such that
\begin{equation} \label{26thaugust20192}
{}_\E \Delta(\omega_i) = \sum_{j} \omega_j \tensorc R_{ji}.
\end{equation}
We want to show that $ V_g (\E) $ is a bicovariant right $\A$-module. To this end, we recall (Lemma \ref{28thaugust2019night1}) that $ V_g (\E) $ is a free right $ \A $-module with basis $ \{ \omega^*_i \}_i. $ This allows us to make the following definition.
\begin{defn} \label{26thaugust20191sm}
Let $\{ \omega_i \}_i$ and $ \{ \omega^*_i \}_i $ be as above and $g$ a bi-invariant pseudo-Riemannian metric on $\E $. Then we can endow $V_g(\E)$ with a left-coaction $\Delta_{V_g(\E)} : V_g(\E) \to \A \tensorc V_g(\E)$ and a right-coaction ${}_{V_g(\E)} \Delta : V_g(\E) \to V_g(\E) \tensorc \A$, defined by the formulas
\begin{equation} \label{29thaugust20192jb}
\Delta_{V_g(\E)}(\sum_i \omega^\ast_i a_i) = \sum_i (1 \tensorc \omega^\ast_i) \Delta(a_i), ~
{}_{V_g(\E)} \Delta(\sum_i \omega^\ast_i a_i) = \sum_{ij} (\omega^\ast_j \tensorc S(R_{ij}))\Delta(a_i),
\end{equation}
where the elements $R_{ij}$ are as in \eqref{26thaugust20192}.
\end{defn}
Then we have the following result.
\begin{prop} \label{21staugust20191}
The triplet $ (V_g (\E), \Delta_{V_g(\E)}, {}_{V_g(\E)} \Delta ) $ is a bicovariant right $ \A $-module. Moreover, the map $ V_g: \E \rightarrow V_g (\E) $ is bicovariant, i.e., we have
\begin{equation} \label{29thaugust20191}
\Delta_{V_g(\E)} (V_g(e)) = (\id \tensorc V_g) \Delta_\E(e), ~
{}_{V_g(\E)} \Delta (V_g(e)) = (V_g \tensorc \id) {}_\E \Delta(e).
\end{equation}
\end{prop}
\begin{proof} The fact that $ (V_g (\E), \Delta_{V_g(\E)}, {}_{V_g(\E)} \Delta ) $ is a bicovariant right $ \A $-module follows immediately from the definition of the maps $ \Delta_{V_g(\E)} $ and $ {}_{V_g(\E)} \Delta.$ So we are left with proving \eqref{29thaugust20191}.
Let $ e \in \E. $ Then there exist elements $ a_i $ in $ \A $ such that $ e = \sum_i \omega_i a_i. $ Hence, by \eqref{28thaugust2019night2}, we obtain
\begin{eqnarray*}
\Delta_{V_g (\E)} (V_g (e) ) &=& \Delta_{V_g(\E)} (V_g(\sum_i \omega_i a_i)) = \Delta_{V_g(\E)} (\sum_{ij} g_{ij} \omega^\ast_j a_i) \\ &=& \sum_{ij} (1 \tensorc g_{ij} \omega^\ast_j) \Delta(a_{i})
=\sum_{i} ((\id \tensorc V_g)(1 \tensorc \omega_i)) \Delta(a_i)\\ &=& \sum_{i} (\id \tensorc V_g)(\Delta_{\E}(\omega_i)) \Delta(a_i)\\
&=& \sum_{i} (\id \tensorc V_g)\Delta_{\E}(\omega_i a_i) = (\id \tensorc V_g ) \Delta_\E (e).
\end{eqnarray*}
This proves the first equation of \eqref{29thaugust20191}.
For the second equation, we begin by making an observation. Since $ {}_\E \Delta (\omega_i) = \sum_j \omega_j \tensorc R_{ji}, $ we have
\begin{equation*}
\delta_{ij} = \epsilon (R_{ij}) = m(S \tensorc \id) \Delta(R_{ij}) = \sum_k S(R_{ik}) R_{kj}.
\end{equation*}
Therefore, multiplying \eqref{29thaugust20193} by $S(R_{jm})$ and summing over $j$, we obtain
\begin{equation} \label{6thnov20192} \sum_j g_{ij} S(R_{jm}) = \sum_j g_{jm} R_{ji}.\end{equation}
Now by using \eqref{28thaugust2019night2}, we compute
\begin{eqnarray*}
{}_{V_g(\E)} \Delta (V_g (e)) &=& {}_{V_g(\E)} \Delta (V_g(\sum_i \omega_i a_i)) = {}_{V_g(\E)} \Delta (\sum_{ij} g_{ij} \omega^\ast_j a_i)\\ &=& \sum_{ij} {}_{V_g(\E)} \Delta (g_{ij} \omega^\ast_j) \Delta({a_i})
= \sum_{ijk} g_{ij} \omega^\ast_k \tensorc S(R_{jk}) \Delta({a_i})\\ &=& \sum_{ik} \omega^\ast_k \tensorc \sum_j g_{ij} S(R_{jk}) \Delta({a_i })\\
&=& \sum_{ik} \omega^\ast_k \tensorc \sum_j g_{jk} R_{ji} \Delta (a_i) \quad {\rm (by~\eqref{6thnov20192})}\\
&=& \sum_{ijk} g_{jk} \omega^\ast_k \tensorc R_{ji} \Delta (a_i)= \sum_{ij} V_g(\omega_j) \tensorc R_{ji} \Delta (a_i)\\
&=& \sum_i (V_g \tensorc \id){}_{\E} \Delta (\omega_i) \Delta (a_i) \quad {\rm (by~\eqref{26thaugust20192})}\\
&=& \sum_i (V_g \tensorc \id){}_{\E} \Delta (\omega_i a_i) = (V_g \tensorc \id){}_{\E} \Delta (e).
\end{eqnarray*}
This finishes the proof.
\end{proof}
Now we recall that bicovariant right $\A$-modules (i.e., objects of the category $\rightmodbicov$) can be deformed too.
\begin{prop} \label{28thaugust20191sm} (Theorem 5.7 of \cite{schauenberg})
Let $(M, \Delta_M, {}_M \Delta)$ be a bicovariant right $\A$-module and $\gamma$ be a 2-cocycle on $\A$. Then
\begin{itemize}
\item[(i)] $M$ deforms to a bicovariant right $\A_\gamma$-module, denoted by $M_\gamma$,\\
\item[(ii)] if $(N, \Delta_N, {}_N \Delta)$ is another bicovariant right $\A$-module and $T: M \to N$ is a bicovariant right $\A$-linear map, then the deformation $T_\gamma : M_\gamma \to N_\gamma$ is a bicovariant right $\A_\gamma$-linear map,\\
\item[(iii)] $T_\gamma$, as in (ii), is an isomorphism if and only if $T$ is an isomorphism.
\end{itemize}
\end{prop}
\begin{proof}
Parts (i) and (ii) follow from the equivalence of categories $ \leftcov $ and $\leftcovgamma$ combined with the $\rightmodbicov$ analogue of (the non-monoidal part of) the second assertion of Proposition 5.7 of \cite{schauenberg}. Part (iii) follows by noting that since the map $T$ is a bicovariant right $\A$-linear map, its inverse $T^{-1}$ is also a bicovariant right $\A$-linear map. Thus, the deformation $(T^{-1})_\gamma$ of $T^{-1}$ exists and is the inverse of the map $T_\gamma$.
\end{proof}
As an immediate corollary, we make the following observation.
\begin{corr} \label{28thaugust20192sm}
Let $g$ be a bi-invariant pseudo-Riemannian metric on a bicovariant $\A$-bimodule $\E$. Then the following map is a well-defined isomorphism.
$$(V_g)_\gamma: \E_\gamma \rightarrow (V_g(\E))_\gamma = (V_g)_{\gamma} (\E_\gamma) $$
\end{corr}
\begin{proof}
Since both $\E$ and $V_g(\E)$ are bicovariant right $\A$-modules, and $V_g$ is a right $\A$-linear bicovariant map (Proposition \ref{21staugust20191}), Proposition \ref{28thaugust20191sm} guarantees the existence of $(V_g)_\gamma$. Since $g$ is a pseudo-Riemannian metric, by (ii) of Definition \ref{24thmay20191}, $V_g: \E \to V_g(\E)$ is an isomorphism. Then, by (iii) of Proposition \ref{28thaugust20191sm}, $(V_g)_\gamma$ is also an isomorphism.
\end{proof}
Now we are in a position to state and prove the main result of this section, which shows that there is an abundant supply of bi-invariant pseudo-Riemannian metrics on $ \E_\gamma.$ Since $ g $ is a map from $ \E \tensora \E $ to $ \A, $ $ g_\gamma $ is a map from $ (\E \tensora \E)_\gamma $ to $ \A_\gamma. $ But we have the isomorphism $ \xi $ from $ \E_\gamma \otimes_{\A_\gamma} \E_\gamma $ to $ (\E \tensora \E)_\gamma $ (Proposition \ref{11thjuly20191}). As in Subsection \ref{18thseptember20192}, we will abuse notation and denote the map $ g_\gamma \xi^{-1} $ by the symbol $ g_\gamma. $
\begin{thm} \label{29thaugust20191sm}
If $g$ is a bi-invariant pseudo-Riemannian metric on a finite bicovariant $\A$-bimodule $\E$ (as in Remark \ref{20thjune}) and $\gamma$ is a 2-cocycle on $\A$, then $g$ deforms to a right $\A_\gamma$-linear map $g_\gamma$ from $\E_\gamma \otimes_{\A_\gamma} \E_\gamma$ to itself. Moreover, $g_\gamma$ is a bi-invariant pseudo-Riemannian metric on $\E_\gamma$. Finally, any bi-invariant pseudo-Riemannian metric on $\E_\gamma$ is a deformation (in the above sense) of some bi-invariant pseudo-Riemannian metric on $\E$.
\end{thm}
\begin{proof}
Since $g$ is a right $\A$-linear bicovariant map (Proposition \ref{29thaugust20192}), $g$ indeed deforms to a right $\A_\gamma$-linear map $g_\gamma$ from $(\E \tensora \E)_\gamma \cong \E_\gamma \otimes_{\A_\gamma} \E_\gamma$ (see Proposition \ref{11thjuly20191}) to $\A_\gamma$. The second assertion of Proposition \ref{11thjuly20192} implies that $g_\gamma$ is bicovariant. Then Proposition \ref{29thaugust20192} implies that $g_\gamma$ is bi-invariant. Since $g \circ \sigma = g$, part (iii) of Proposition \ref{11thjuly20192} implies that $$ g_\gamma = (g \circ \sigma)_\gamma = g_\gamma \circ \sigma_\gamma. $$ This verifies condition (i) of Definition \ref{24thmay20191}.\\
Next, we prove that $g_\gamma$ satisfies (ii) of Definition \ref{24thmay20191}. Let $\omega$ be an element of $\zeroE = {}_0(\E_\gamma)$ and $\eta$ be an element of $\Ezero = (\E_\gamma)_0$. Then we have
\begin{equation*}
\begin{aligned}
&(V_g)_\gamma(\omega)(\eta) = (V_g(\omega))_\gamma(\eta) = V_g(\omega)(\eta)\\
=& g(\omega \tensora \eta) = g_\gamma(\overline{\gamma}(1 \tensorc \eta_{(-1)}) \omega_{(0)} \otimes_{\A_\gamma} \eta_{(0)} \gamma(\omega_{(1)} \tensorc 1))\\
& \quad \mbox{(by the definition of $\xi^{-1}$ in Proposition \ref{11thjuly20191})}\\
=& g_\gamma(\epsilon(\eta_{(-1)}) \omega_{(0)} \otimes_{\A_\gamma} \eta_{(0)} \epsilon(\omega_{(1)})) = g_\gamma(\omega \otimes_{\A_\gamma} \eta) = V_{g_\gamma}(\omega)(\eta).
\end{aligned}
\end{equation*}
Then, by the right-$\A_\gamma$ linearity of $(V_g)_\gamma(\omega)$ and $V_{g_\gamma}(\omega)$, we get, for all $a$ in $\A$,
\begin{equation*}
V_{g_\gamma}(\omega)(\eta *_\gamma a) = V_{g_\gamma}(\omega)(\eta)*_\gamma a = (V_g)_\gamma(\omega)(\eta)*_\gamma a = (V_g)_\gamma(\omega)(\eta *_\gamma a).
\end{equation*}
Therefore, by the right $\A$-totality of $(\E_\gamma)_0 = \E_0$ in $\E_\gamma$, we conclude that the maps $(V_g)_\gamma$ and $V_{g_\gamma}$ agree on ${}_0(\E_\gamma)$. But since ${}_0(\E_\gamma) = \zeroE$ is right $\A_\gamma$-total in $\E_\gamma$ and both $V_{g_\gamma}$ and $ (V_{g})_\gamma$ are right-$\A_\gamma$ linear, $(V_g)_\gamma = V_{g_\gamma}$ on the whole of $\E_\gamma$.\\
Next, since $V_g$ is a right $\A$-linear isomorphism from $\E$ to $V_g(\E)$, Corollary \ref{28thaugust20192sm} implies that $(V_g)_\gamma$ is an isomorphism onto $ (V_g (\E))_\gamma = (V_g)_\gamma (\E_\gamma)=V_{g_\gamma}(\E_\gamma).$ Therefore $V_{g_\gamma}$ is an isomorphism from $\E_\gamma$ to $V_{g_\gamma}(\E_\gamma)$. Hence $g_\gamma$ satisfies (ii) of Definition \ref{24thmay20191}.\\
To show that every pseudo-Riemannian metric on $\E_\gamma$ is obtained as a deformation of a pseudo-Riemannian metric on $\E$, we view $\E$ as a cocycle deformation of $\E_\gamma$ under the cocycle $\gamma^{-1}$. Then given a pseudo-Riemannian metric $g^\prime$ on $\E_\gamma$, by the first part of this proof, $(g^\prime)_{\gamma^{-1}}$ is a bi-invariant pseudo-Riemannian metric on $\E$. Hence, $g^\prime = ((g^\prime)_{\gamma^{-1}})_{\gamma}$ is indeed a deformation of the bi-invariant pseudo-Riemannian metric $(g^\prime)_{\gamma^{-1}}$ on $\E$.
\end{proof}
\begin{rmk} \label{26thjune20201}
We have actually used the fact that $\E$ is finite in order to prove Theorem \ref{29thaugust20191sm}. Indeed, since $\E$ is finite, we can use the results of Section \ref{section3} to derive Proposition \ref{21staugust20191}
which is then used to prove Corollary \ref{28thaugust20192sm}. Finally, Corollary \ref{28thaugust20192sm} is used to prove Theorem \ref{29thaugust20191sm}.
Note also that the proof of Theorem \ref{29thaugust20191sm} implies that the maps $(V_g)_\gamma$ and $V_{g_\gamma}$ are equal.
\end{rmk}
When $g$ is a pseudo-Riemannian bicovariant bilinear metric on $\E,$ there is a much shorter proof of the fact that $g_\gamma$ is a pseudo-Riemannian metric on $\E_\gamma$ which avoids bicovariant right $\A$-modules. We learnt this proof from communications with the referee; it is as follows:
We will work in the categories $\bimodbicov$ and $\bimodbicovgamma.$ Firstly, as $g$ is bilinear, $V_g$ is a morphism of the category $\bimodbicov$ and can be deformed to a bicovariant $\A_\gamma$-bilinear map $ ( V_g )_\gamma $ from $ \E_\gamma $ to $ ( \E^* )_\gamma. $ Similarly, $g$ deforms to an $\A_\gamma$-bilinear map from $ \E_\gamma \otimes_{\A_\gamma} \E_\gamma $ to $\A_\gamma.$ Then, as in the proof of Theorem \ref{29thaugust20191sm}, we can easily check that $ ( V_g )_\gamma = V_{g_\gamma}. $
On the other hand, it is well-known that the left dual $\widetilde{\E}$ of $\E$ is isomorphic to $\E^*.$ Since $g$ is bilinear, Proposition \ref{25thjune202} implies that the morphism $V_g$ (in the category $ \bimodbicov $) is an isomorphism from $\E$ to $\E^*.$
Therefore, $ ( V_g )_\gamma $ is an isomorphism from $\E_\gamma$ to $ ( \E^* )_\gamma \cong ( \E_\gamma )^* $ by Exercise 2.10.6 of \cite{etingof}. As $ ( V_g )_\gamma = V_{g_\gamma}, $ we deduce that $ V_{g_\gamma} $ is an isomorphism from $\E_\gamma$ to $ ( \E_\gamma )^*. $ Since $ g_\gamma \circ \sigma_\gamma = g_\gamma, $ this completes the proof.
\vspace{4mm}
{\bf Acknowledgement:} We are immensely grateful to the referee for clarifying the connections with monoidal categories and their deformations. His/her enlightening remarks have greatly improved our understanding. We would like to thank the referee for pointing out the relevant results in \cite{majidcocycle}, \cite{schauenberg} and \cite{bichon}. Proposition 3.5, Remark \ref{26thjune20202},
Remark \ref{26thjune20201} and the discussion at the very end of the article are due to the referee.
\section{Introduction}
\paragraph*{Problem statement and motivation}
Let $\QQ,\RR,\CC$ be respectively the
fields of rational, real and complex numbers, and let $m,n$
be positive integers. Given $m \times m$
matrices ${H}_0, {H}_1, \ldots, {H}_n$ with entries in $\QQ$ and Hankel
structure, i.e. constant skew diagonals,
we consider the {\it linear Hankel matrix} ${H}(\vecx) = {H}_0+\X_1{H}_1+\ldots+\X_n{H}_n$,
denoted ${H}$ for short, and the algebraic set
\[
{\mathcal{H}}_r =
\{\vecx \in \CC^n : {\rm rank} \, {H}(\vecx) \leq r\}.
\]
The goal of this paper is to provide an efficient algorithm for
computing at least one sample point per connected component of the
real algebraic set ${\mathcal{H}}_r \cap \RR^n$.
Such an algorithm can be used to solve the matrix rank minimization
problem for ${H}$. Matrix rank minimization mostly consists of
minimizing the rank of a given matrix whose entries are subject to
constraints defining a convex set. These problems arise in many
engineering or statistical modeling applications and have recently
received a lot of attention. Considering Hankel structures is
relevant since it arises in many applications (e.g. for model
reduction in linear dynamical systems described by Markov parameters, see
\cite[Section 1.3]{markovsky12}).
Moreover, an algorithm for computing sample points in each connected
component of ${\mathcal{H}}_r\cap\RR^n$ can also be used to decide the
emptiness of the feasibility set $S=\{\vecx \in \RR^n : {H}(\vecx)\succeq 0\}$.
Indeed, considering the minimum rank $r$ attained in the boundary of
$S$, it is easy to prove that one of the connected components of
${\mathcal{H}}_{r} \cap \RR^n$ is actually contained in $S$. Note also that
such feasibility sets, also called Hankel spectrahedra, have recently
attracted some attention (see e.g. \cite{BS14}).
The intrinsic algebraic nature of our problem makes relevant the
design of
exact algorithms to achieve reliability.
On the one hand, we aim at
exploiting algorithmically the special Hankel structure to
gain efficiency.
On the other hand, the design of a special algorithm
for the case of linear Hankel matrices can bring the foundations of a
general approach to e.g. the symmetric case which is important for
semi-definite programming, i.e. solving linear matrix inequalities.
\paragraph*{Related works and state-of-the-art} Our problem consists
of computing sample points in real algebraic sets. The first algorithm
for this problem is due to Tarski but its complexity was not
elementary recursive \cite{Tarski}. Next, Collins designed the
Cylindrical Algebraic Decomposition algorithm \cite{c-qe-1975}. Its
complexity is doubly exponential in the number of variables which is
far from being optimal since the number of connected components of a
real algebraic set defined by $n$-variate polynomial equations of
degree $\leq d$ is upper bounded by $O(d)^n$. Next, Grigoriev and
Vorobjov \cite{GV88} introduced the first algorithm based on critical
point computations, computing sample points in real algebraic sets
within $d^{O(n)}$ arithmetic operations. This work was subsequently
improved and generalized (see \cite{BaPoRo06} and references therein)
from the complexity viewpoint. We may apply these algorithms to our
problem by computing all $(r+1)$-minors of
the Hankel matrix and computing sample points in the real algebraic set
defined by the vanishing of these minors. This is done in time
$(\binom{m}{r+1}\binom{n+r}{r})^{O(1)}+r^{O(n)}$; however, since the
constant in the exponent is rather high, these algorithms did not lead
to efficient implementations in practice. Hence, another series of
works, still using the critical point method but aiming at designing
algorithms that combine asymptotically optimal complexity and
practical efficiency has been developed (see e.g. \cite{BGHSS, SaSc03,
GS14} and references therein).
Under regularity assumptions, these yield probabilistic algorithms
running in time which is essentially $O(d^{3n})$ in the smooth case
and $O(d^{4n})$ in the singular case (see \cite{S05}). Practically,
these algorithms are implemented in the library {\sc RAGlib} which
uses Gr\"obner bases computations (see \cite{faugere2012critical,
Sp14} about the complexity of computing critical points with
Gr\"obner bases).
Observe that determinantal varieties such as ${\mathcal{H}}_r$ are generically
singular (see \cite{bruns1988determinantal}). Also the
aforementioned algorithms do not exploit the structure of the
problem. In \cite{HNS2014}, we introduced an algorithm for computing
real points at which a
{\em generic} linear square matrix of size $m$ has rank $\leq m-1$, by
exploiting the structure of the problem. However, because of the
requested genericity of the input linear matrix, we cannot use it for
linear Hankel matrices. Also, it does not allow one to obtain sample points
for a given, smaller rank deficiency.
\paragraph*{Methodology and main results} Our main result is an
algorithm that computes sample points in each connected component of
${\mathcal{H}}_r \cap \RR^n$ under some genericity assumptions on the entries
of the linear Hankel matrix ${H}$ (these genericity assumptions are
made explicit below). Our algorithm exploits the Hankel structure of
the problem. Essentially, its complexity is quadratic in a multilinear
B\'ezout bound on the number of complex solutions. Moreover, we find
that, heuristically, this bound is less than
${{m}\choose{r+1}}{{n+r}\choose{r}}{{n+m}\choose{r}}$.
Hence, for subfamilies of the real root finding problem on linear
Hankel matrices where the maximum allowed rank $r$ is fixed, the complexity
is essentially in $(nm)^{O(r)}$.
The very basic idea is to study the algebraic set ${\mathcal{H}}_r\subset
\CC^n$ as the Zariski closure of the projection of an incidence
variety, lying in $\CC^{n+r+1}$. This variety encodes the fact that
the kernel of ${H}$ has dimension $\geq m-r$. This lifted variety
turns out to be generically smooth and equidimensional and defined by
quadratic polynomials with multilinear structure. When these
regularity properties are satisfied, we prove that computing one point
per connected component of the incidence variety is sufficient to
solve the same problem for the variety ${\mathcal{H}}_r \cap \RR^n$. We also
prove that these properties are generically satisfied. We remark that
this method is similar to the one used in \cite{HNS2014}, but in this
case it takes strong advantage of the Hankel structure of the linear
matrix, as detailed in Section \ref{sec:prelim}. This also reflects on
the complexity of the algorithm and on practical performances.
Let ${{C}}$ be a connected component of ${\mathcal{H}}_r\cap\RR^n$, and let
$\Pi_1,\pi_1$ be the canonical projections $\Pi_1: (\x_1, \ldots,
\x_n, \y_1, \ldots, \y_{r+1})\to \x_1$ and $\pi_1: (\x_1, \ldots,
\x_n)\to \x_1$. We prove that in generic coordinates, either {\em (i)}
$\pi_1(C) = \RR$ or {\em (ii)} there exists a critical point of the
restriction of $\Pi_1$ to the considered incidence variety. Hence,
after a generic linear change of variables, the algorithm consists of
two main steps: {\em (i)} computing the critical points of the
restriction of $\Pi_1$ to the incidence variety and {\em (ii)}
instantiating the first variable $\X_1$ at a generic value and performing
a recursive call following a geometric pattern introduced in
\cite{SaSc03}.
Step ({\em i}) is actually performed by building the
Lagrange system associated to the optimization problem whose solutions
are the critical points of the restriction of $\Pi_1$ to the incidence
variety. Hence, we use the algorithm in \cite{jeronimo2009deformation}
to solve it. One also observes heuristically that these Lagrange
systems are typically zero-dimensional.
However, we were not able to prove this finiteness property,
but we
prove that it holds when we restrict the optimization step
to the set
of points $\vecx \in {\mathcal{H}}_r$ such that ${\rm rank} \, {H}(\vecx)
= p$, for
any $0 \leq p \leq r$. Nevertheless, this is sufficient
to
conclude that there are finitely many critical points of the
restriction
of $\pi_1$ to ${\mathcal{H}}_r \cap \RR^n$, and that the
algorithm
returns the output correctly.
When the
Lagrange system has dimension $0$, the complexity of
solving
its equations is essentially
quadratic
in the number of its complex solutions.
As previously announced, by the
structure of these systems one can deduce multilinear
B\'ezout bounds on
the number of solutions that are polynomial in $nm$ when
$r$ is fixed, and polynomial in $n$ when $m$ is fixed.
This complexity result outperforms
the
state-of-the-art algorithms.
We finally
remark that the complexity gain is reflected also
in the
first implementation of the algorithm, which allows
to solve instances of our problem that are out of reach of the
general algorithms implemented in {\sc RAGlib}.
\paragraph*{Structure
of the paper}
The paper
is structured as follows. Section \ref{sec:prelim} contains
preliminaries
about Hankel matrices and the basic notation of the
paper; we
also prove that our regularity assumptions are generic. In
Section
\ref{sec:algo} we describe the algorithm and prove its
correctness.
This is done by using preliminary results proved in
Sections
\ref{sec:dimension} and \ref{sec:closedness}. Section
\ref{ssec:algo:complexity} contains the complexity analysis
and bounds
for the number of complex solutions of the output of the
algorithm.
Finally, Section \ref{sec:exper} presents the results of our
experiments on generic linear Hankel matrices, and comparisons
with the state-of-the-art algorithms for the real root finding
problem.
\section{Notation and preliminaries} \label{sec:prelim}
\paragraph*{Basic notations} \label{ssec:prelim:basic}
We denote by $\GL(n, \QQ)$ (resp. $\GL(n, \CC)$)
the set of $n \times n$ non-singular matrices with rational (resp.
complex) entries. For a matrix $M \in \CC^{m \times m}$ and an integer
$p \leq m$, one denotes with $\minors(p,M)$ the list of determinants
of $p \times p$ sub-matrices of $M$. We denote by $M'$ the
transpose matrix of $M$.
Let $\QQ[\vecx]$ be the ring of polynomials on $n$ variables $\vecx = (\X_1, \ldots, \X_n)$
and let $\mathbf{f} = (f_1, \ldots, f_p) \in \QQ[\vecx]^p$ be a polynomial system.
The common zero locus of the entries of $\mathbf{f}$ is denoted by
$\zeroset{\mathbf{f}} \subset \CC^n$, and its dimension with $\dim \, \zeroset{\mathbf{f}}$. The ideal generated by $\mathbf{f}$ is denoted by
$\left\langle \mathbf{f} \right\rangle$, while if $\mathcal{V} \subset \CC^n$ is any set, the
ideal of polynomials vanishing on $\mathcal{V}$ is denoted by $\ideal{\mathcal{V}}$, while the
set of regular (resp. singular) points of $\mathcal{V}$ is denoted by ${\rm reg}\, \, \mathcal{V}$
(resp. ${\rm sing}\, \, \mathcal{V}$). If $\mathbf{f} = (f_1, \ldots, f_p) \subset \QQ[\vecx]$, we denote
by $\jac \mathbf{f} = \left( \partial f_i / \partial \X_j \right)$ the Jacobian matrix
of $\mathbf{f}$. We denote by ${\rm reg}\,(\mathbf{f}) \subset \zeroset{\mathbf{f}}$ the subset where
$\jac \mathbf{f}$ has maximal rank.
A set $\mathcal{E} \subset \CC^n$ is locally closed if $\mathcal{E} = {\mathcal{Z}}
\cap \mathscr{O}$ where ${\mathcal{Z}}$ is a Zariski closed set and $\mathscr{O}$ is a
Zariski open set.
Let $\mathcal{V} = \zeroset{\mathbf{f}} \subset \CC^n$ be a smooth equidimensional algebraic set, of dimension $d$,
and let $\mathbf{g} \colon \CC^n \to \CC^p$ be an algebraic map. The set of critical points of
the restriction of $\mathbf{g}$ to $\mathcal{V}$ is the solution set of $\mathbf{f}$ and of the $(n-d+p)-$minors
of the matrix $\jac (\mathbf{f}, \mathbf{g})$, and it is denoted by ${\rm crit}\,(\mathbf{g}, \mathcal{V})$. Finally, if $\mathcal{E}
\subset \mathcal{V}$ is a locally closed subset of $\mathcal{V}$, we denote by ${\rm crit}\,(\mathbf{g}, \mathcal{E})
= \mathcal{E} \cap {\rm crit}\,(\mathbf{g}, \mathcal{V})$.
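For instance (an illustration of the definition, using only the notation above), if $\mathcal{V} = \zeroset{f}$ is a smooth hypersurface, so that $d = n-1$, and $\mathbf{g} = \X_1$, then $n-d+p = 2$ and the $2$-minors of $\jac(f, \X_1)$ reduce, up to sign, to the partial derivatives of $f$ with respect to $\X_2, \ldots, \X_n$:

```latex
\[
{\rm crit}\,(\X_1, \mathcal{V})
 = \zeroset{f,\ \partial f/\partial \X_2,\ \ldots,\ \partial f/\partial \X_n}.
\]
```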
Finally, for $M \in \GL(n, \CC)$ and $f \in \QQ[\vecx]$, we denote by $f^M(\vecx) = f(M\,\vecx)$,
and if $\mathbf{f}=(f_1, \ldots, f_p) \subset \QQ[\vecx]$ and $\mathcal{V} = \zeroset{\mathbf{f}}$, by
$\mathcal{V}^M = \zeroset{\mathbf{f}^M}$ where $\mathbf{f}^M = (f^M_1, \ldots, f^M_p)$.
\paragraph*{Hankel structure} \label{ssec:prelim:hanktoep}
Let $\{h_1, \ldots, h_{2m-1}\} \subset \QQ$. The matrix ${H} = (h_{i+j-1})_{1 \leq i,j \leq m}
\in \QQ^{m \times m}$ is called a Hankel matrix,
and we use the notation ${H} = {\sf Hankel}(h_1, \ldots, h_{2m-1})$.
The structure of a Hankel matrix induces structure on its kernel. By
\cite[Theorem 5.1]{heinig1984algebraic}, one has that if ${H}$ is a Hankel
matrix of rank at most $r$, then there exists a non-zero vector $\vecy = (\y_1,
\ldots, \y_{r+1}) \in \QQ^{r+1}$ such that the columns of the $m \times
(m-r)$ matrix
\[
{Y}(\vecy)=
\begin{bmatrix}
\vecy & 0 & \ldots & 0 \\
0 & \vecy & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \ldots & 0 & \vecy \\
\end{bmatrix}
\]
generate an $(m-r)$-dimensional subspace of the kernel of ${H}$.
We observe that ${H} \, {Y}(\vecy)$ is also a Hankel matrix.
The product ${H} \, {Y}(\vecy)$ can be re-written as a matrix-vector product
$\tilde{{H}} \, y$, with $\tilde{{H}}$ a given rectangular Hankel matrix.
Indeed, let ${H}={\sf Hankel}(h_1, \ldots, h_{2m-1})$. Then, as previously
observed, ${H} \, {Y}(\vecy)$ is a rectangular Hankel matrix, of size
$m \times (m-r)$, whose entries coincide with the entries of
\[
\tilde{{H}} \, \vecy =
\begin{bmatrix}
h_1 & \ldots & h_{r+1} \\
\vdots & & \vdots \\
h_{2m-r-1} & \ldots & h_{2m-1}
\end{bmatrix}
\begin{bmatrix}
\y_1 \\
\vdots \\
\y_{r+1}
\end{bmatrix}.
\]
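As an illustration (our own sanity check, not part of the paper), the identity between $H\,Y(\vecy)$ and the rectangular Hankel matrix built from the entries of $\tilde{H}\,\vecy$ can be verified numerically for $m=3$, $r=1$; all names below are ad hoc.

```python
# Sanity check for m = 3, r = 1: H @ Y(y) is the m x (m-r) Hankel matrix
# whose skew-diagonal entries are those of Htilde @ y.  (Pure Python.)
h = [1, 2, 3, 4, 5]                                     # h_1, ..., h_{2m-1}
H = [[h[i + j] for j in range(3)] for i in range(3)]    # Hankel(h_1, ..., h_5)
y = [2, -1]                                             # y in Q^{r+1}
Y = [[y[0], 0], [y[1], y[0]], [0, y[1]]]                # m x (m-r) matrix Y(y)
Htilde = [[h[i + j] for j in range(2)] for i in range(4)]
v = [sum(Htilde[i][k] * y[k] for k in range(2)) for i in range(4)]
HY = [[sum(H[i][k] * Y[k][j] for k in range(3)) for j in range(2)]
      for i in range(3)]
assert HY == [[v[0], v[1]], [v[1], v[2]], [v[2], v[3]]]  # H Y(y) is Hankel
```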
Let $H(\vecx)$ be a linear Hankel matrix.
From \cite[Corollary 2.2]{conca1998straightening} we deduce that, for $p \leq r$,
the ideals $\left\langle \minors(p+1,{H}(\vecx)) \right\rangle$ and
$\left\langle\minors(p+1,\tilde{{H}}(\vecx))\right\rangle$ coincide. One deduces that
$\vecx = (\X_1, \ldots, \X_n) \in \CC^n$ satisfies ${\rm rank} \, {H}(\vecx) = p$
if and only if it satisfies ${\rm rank} \, \tilde{{H}}(\vecx) = p$.
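To illustrate this rank equivalence (a toy check of ours, not the paper's proof), take the rank-one Hankel matrix with entries $h_{i+j-1} = 2^{i+j-2}$, $m = 3$ and $p = r = 1$: all $2$-minors of both $H$ and $\tilde{H}$ vanish.

```python
from itertools import combinations

def minors2(M):
    """All 2x2 minors of a matrix given as a list of rows."""
    return [M[i][k] * M[j][l] - M[i][l] * M[j][k]
            for i, j in combinations(range(len(M)), 2)
            for k, l in combinations(range(len(M[0])), 2)]

h = [1, 2, 4, 8, 16]                                    # h_{i+j-1} = 2^{i+j-2}
H = [[h[i + j] for j in range(3)] for i in range(3)]    # 3 x 3 Hankel, rank 1
Ht = [[h[i + j] for j in range(2)] for i in range(4)]   # 4 x 2 Hankel, rank 1
assert all(m == 0 for m in minors2(H))                  # rank H       <= 1
assert all(m == 0 for m in minors2(Ht))                 # rank Htilde  <= 1
```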
\paragraph*{Basic sets} \label{ssec:prelim:polsys}
We first recall that the linear matrix ${H}(\vecx) = {H}_0 + \X_1{H}_1 + \ldots + \X_n{H}_n$,
where each $H_i$ is a Hankel matrix, is also a Hankel matrix. It is identified
by the $(2m-1)(n+1)$ entries of the matrices ${H}_i$. Hence we often consider ${H}$ as an element of
$\CC^{(2m-1)(n+1)}$. For $M \in \GL(n, \QQ)$, we denote by ${H}^M(\vecx)$ the linear
matrix ${H}(M \vecx)$.
We define in the following the main algebraic sets appearing during
the execution of our algorithm, given ${H} \in \CC^{(2m-1)(n+1)}$,
$0 \leq p \leq r$, $M \in \GL(n, \CC)$ and $\u = (u_1, \ldots, u_{p+1}) \in \QQ^{p+1}$.
{\it Incidence varieties.} We consider the polynomial system
\[
\begin{array}{lccl}
\mathbf{f}({H}^M,\u,p): & \CC^{n} \times \CC^{p+1} & \longrightarrow & \CC^{2m-p-1} \times \CC \\
& (\vecx, \vecy) & \longmapsto &
\left ((\tilde{H}(M \, \vecx)\, \vecy)', \u'\vecy-1 \right )
\end{array}
\]
where $\tilde{H}$ has been defined in the previous section.
We denote by ${\incidence}({H}^M, \u, p) = \zeroset{\mathbf{f}({H}^M,\u,p)} \subset \CC^{n+p+1}$ and simply
${\incidence}={\incidence}({H}^M,\u,p)$ and $\mathbf{f}=\mathbf{f}({H}^M, \u, p)$ when $p, {H}, M$ and $\u$ are clear.
We also denote by $\incidencereg({H}^M, \u, p) = \incidence({H}^M,\u, p) \cap
\{(\vecx, \vecy)\in \CC^{n+p+1} : {\rm rank} \,{H}(\vecx)=p\}.$
{\it Fibers.} Let ${\alpha} \in \QQ$. We denote by
$\mathbf{f}_{{\alpha}}({H}^M, \u, p)$ (or simply $\mathbf{f}_{{\alpha}}$) the polynomial system obtained
by adding $\X_1-{\alpha}$ to $\mathbf{f}({H}^M, \u, p)$. The resulting algebraic set
$\zeroset{\mathbf{f}_{{\alpha}}}$, denoted by ${\incidence}_{{\alpha}}$, equals ${\incidence} \cap \zeroset{\X_1-{\alpha}}$.
{\it Lagrange systems.}
Let $\v \in \QQ^{2m-p}$. Let $\jac_1\mathbf{f}$ denote the matrix of size $(2m-p) \times (n+p)$
obtained by removing the first column of $\jac \mathbf{f}$ (the derivative
w.r.t. $\X_1$), and define $\lagrange=\lagrange({H}^M, \u, \v, p)$ as the map
\[
\begin{array}{lrcl}
\lagrange : & \CC^{n+2m+1} & \to & \CC^{n+2m+1} \\
& (\vecx,\vecy,\vecz) & \mapsto & (\tilde{{H}}(M \, \vecx) \, \vecy, \u'\vecy-1, \vecz'\jac_1\mathbf{f}, \v'\vecz-1)
\end{array}
\]
where $\vecz=(\z_1, \ldots, \z_{2m-p})$ stand for Lagrange multipliers. We
finally define ${\mathcal{Z}}({H}^M, \u, \v, p) = \zeroset{\lagrange({H}^M, \u, \v, p)} \subset
\CC^{n+2m+1}.$
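As a quick sanity check (our remark, not from the text), the count of equations and unknowns confirms that $\lagrange$ is a square system, which is consistent with the zero-dimensionality observed heuristically in the Introduction:

```latex
\[
\underbrace{(2m-p-1) + 1 + (n+p) + 1}_{\text{equations}}
\;=\; n + 2m + 1 \;=\;
\underbrace{n + (p+1) + (2m-p)}_{\text{unknowns (}\vecx,\vecy,\vecz\text{)}}.
\]
```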
\paragraph*{Regularity property $\sfG$} \label{ssec:prelim:regul}
We say that a polynomial system $\mathbf{f} \in \QQ[x]^c$ satisfies Property $\sfG$ if
the Jacobian matrix $\jac \, \mathbf{f}$ has maximal rank at any point of $\zeroset{\mathbf{f}}$.
We remark that this implies that:
\begin{enumerate}
\item the ideal $\ideal{\mathbf{f}}$ is radical;
\item the set $\zeroset{\mathbf{f}}$ is either empty or smooth and equidimensional of co-dimension $c$.
\end{enumerate}
We say that ${\lagrange}({H}^M, \u, \v, p)$ satisfies $\sfG$ over
$\incidencereg(H^M, \u, p)$ if the following holds: for $(\vecx,\vecy,\vecz) \in
{\mathcal{Z}}({H}^M, \u, \v, p)$ such that $(\vecx,\vecy) \in \incidencereg(H^M, \u, p)$,
the matrix $\jac({\lagrange}({H}^M, \u, \v, p))$ has maximal rank at $(\vecx,\vecy,\vecz)$.
Let $\u \in \QQ^{p+1}$. We say that ${H} \in \CC^{(2m-1)(n+1)}$ satisfies Property
$\sfG$ if $\mathbf{f}({H}, \u, p)$ satisfies Property $\sfG$ for all $0 \leq p \leq r$.
The first result essentially shows that $\sfG$ holds
for $\mathbf{f}({H}^M, \u, p)$ (resp. $\mathbf{f}_{\alpha}({H}^M, \u, p)$) when the
input parameter ${H}$ (resp. ${\alpha}$) is generic enough.
\begin{proposition} \label{prop:regularity}
Let $M \in \GL(n,\CC)$.
\begin{itemize}
\item[(a)] There exists a non-empty Zariski-open set ${\mathscr{H}} \subset
\CC^{(2m-1)(n+1)}$ such that, if ${H} \in {\mathscr{H}} \cap
\QQ^{(2m-1)(n+1)}$, for all $0 \leq p \leq r$ and $\u \in \QQ^{p+1}-\{\mathbf{0}\}$,
$\mathbf{f}({H}^M, \u, p)$ satisfies Property
$\sfG$;
\item[(b)] For ${H} \in {\mathscr{H}}$ and $0 \leq p \leq r$, if ${\incidence}({H}^M, \u, p) \neq \emptyset$
then $\dim \, {\mathcal{H}}_p \leq n-2m+2p+1$;
\item[(c)] For $0 \leq p \leq r$ and $\u \in \QQ^{p+1}$, if $\mathbf{f}({H}^M, \u, p)$ satisfies
$\sfG$, there exists a non-empty Zariski open set ${\mathscr{A}} \subset \CC$ such
that, if ${\alpha} \in {\mathscr{A}}$, the polynomial system $\mathbf{f}_{{\alpha}}$
satisfies $\sfG$.
\end{itemize}
\end{proposition}
\proof
Without loss of generality, we can assume that $M = {\rm I}_n$. We let $0
\leq p \leq r$, $\u \in \QQ^{p+1}-\{\mathbf{0}\}$ and recall that we identify the
space of linear Hankel matrices with $\CC^{(2m-1)(n+1)}$. This space is
endowed by the variables $\mathfrak{h}_{k,\ell}$ with $1\leq k \leq 2m-1$
and $0\leq \ell \leq n$; the generic linear Hankel matrix is then
given by
$\mathfrak{H}=\mathfrak{H}_0+\X_1\mathfrak{H}_1+\cdots+\X_n\mathfrak{H}_n$
with $\mathfrak{H}_i={\sf Hankel}(\mathfrak{h}_{1,i}, \ldots, \mathfrak{h}_{2m-1, i})$.
We consider the map
\[
\begin{array}{lccc}
q : & \CC^{n+(p+1)+(2m-1)(n+1)} & \longrightarrow & \CC^{2m-p} \\
& (\vecx, \vecy, {H}) & \longmapsto& \mathbf{f}({H}, \u, p)
\end{array}
\]
and, for a given ${H} \in \CC^{(2m-1)(n+1)}$, its section-map $q_{H}
\colon \CC^{n+(p+1)} \to \CC^{2m-p}$ sending $(\vecx,\vecy)$ to
$q(\vecx,\vecy,{H})$. We also consider the map $\tilde{q}$ which associates
to $(\vecx, \vecy, {H})$ the entries of $\tilde{H}\vecy$ and its section map
$\tilde{q}_H$; we will consider these latter maps over the open set
$O=\{(\vecx, \vecy)\in \CC^{n+p+1}\mid \vecy\neq \mathbf{0}\}$. We prove below
that $\mathbf{0}$ is a regular value for both $q_H$ and $\tilde{q}_H$.
Suppose first that $q^{-1}(\mathbf{0}) = \emptyset$
(resp. $\tilde{q}^{-1}(\mathbf{0}) = \emptyset$). We deduce that for all ${H} \in
\CC^{(2m-1)(n+1)}$, $q_H^{-1}(\mathbf{0}) = \emptyset$ (resp
$\tilde{q}_H^{-1}(\mathbf{0}) = \emptyset$) and $\mathbf{0}$ is a
regular value for both maps $q_H$ and $\tilde{q}_H$. Note also that
taking ${\mathscr{H}} = \CC^{(2m-1)(n+1)}$, we deduce that $\mathbf{f}({H}, \u, p)$
satisfies $\sfG$.
Now, suppose that $q^{-1}(\mathbf{0})$ is not empty and let
$(\vecx,\vecy,{H}) \in q^{-1}(\mathbf{0})$. Consider the Jacobian matrix $\jac q$ of
the map $q$ with respect to the variables $\vecx,\vecy$ and the entries of ${H}$,
evaluated at $(\vecx,\vecy,{H})$. We consider the submatrix of $\jac q$
obtained by selecting the columns corresponding to:
\begin{itemize}
\item the partial derivatives with respect to $\mathfrak{h}_{1, 0}, \ldots,
\mathfrak{h}_{2m-1, 0}$;
\item the partial derivatives with respect to $\Y_1, \ldots,
\Y_{p+1}$.
\end{itemize}
We obtain a $(2m-p) \times (2m+p)$ submatrix of $\jac q$; we prove below
that it has full rank $2m-p$.
Indeed, remark that the first $2m-p-1$ rows correspond to the entries
of $\tilde{{H}}\vecy$ and the last row corresponds to the derivatives of
$\u'\vecy-1$. Hence, the structure of this submatrix is as follows
\[
\begin{bmatrix}
\y_1 & \ldots & \y_{p+1} & 0 & \ldots & 0 &0& \cdots & 0 \\
0 & \y_1 & \ldots & \y_{p+1} & \ldots & 0 & & \\
\vdots & & \ddots & & \ddots & & \vdots & & \vdots\\
\vdots & & &\y_1 & \ldots & \y_{p+1}& 0 & & 0\\
0 & & \cdots & & \cdots & 0 & u_1 & \cdots & u_{p+1}\\
\end{bmatrix}
\]
Since this matrix is evaluated at points of the solution set of $\u'\vecy-1=0$,
at least one entry of $\u$ and one entry of $\vecy$ are non-zero; hence
the above matrix has full rank, and $\mathbf{0}$ is a regular value of
the map $q$.
The same argument applies to $\jac \tilde{q}$, except that we do not
consider the partial derivatives with respect to $\Y_1, \ldots,
\Y_{p+1}$. The $(2m-p-1) \times (2m-1)$ submatrix we obtain corresponds
to the upper left block containing the entries of $\vecy$. Since
$\tilde{q}$ is defined over the open set $O$ in which $\vecy\neq
\mathbf{0}$, we also deduce that this submatrix has full rank
$2m-p-1$.
By Thom's Weak Transversality Theorem one deduces that there exists a
non-empty Zariski open set ${\mathscr{H}}_p \subset \CC^{(2m-1)(n+1)}$ such
that if ${H} \in {\mathscr{H}}_p$, then $\mathbf{0}$ is a regular value of
$q_{H}$ (resp. $\tilde{q}_{H}$). We deduce that for ${H} \in {\mathscr{H}}_p$, the
polynomial system $\mathbf{f}({H}, \u, p)$ satisfies $\sfG$ and using the
Jacobian criterion \cite[Theorem 16.19]{Eisenbud95},
$\incidence({H}, \u, p)$ is either empty or smooth equidimensional of
dimension $n-2m+2p+1$. This proves assertion (a), with ${\mathscr{H}} =
\bigcap_{0 \leq p \leq r}{\mathscr{H}}_p$.
Similarly, we deduce that $\tilde{q}_{H}^{-1}(\mathbf{0})$ is either
empty or smooth and equidimensional of dimension $n-2m+2p+2$. Let
$\Pi_\vecx$ be the canonical projection $(\vecx, \vecy)\mapsto \vecx$;
note that for
any $\vecx \in {\mathcal{H}}_p$, the dimension of $\Pi_\vecx^{-1}(\vecx)\cap
\tilde{q}_H^{-1}(\mathbf{0})$ is $\geq 1$ (by homogeneity
of the $\vecy$-variables). By the Theorem on the Dimension of Fibers
\cite[Sect.~6.3, Theorem 7]{Shafarevich77}, we deduce that
$n-2m+2p+2-\dim({\mathcal{H}}_p) \geq 1$; hence, for
${H} \in {\mathscr{H}}$, $\dim({\mathcal{H}}_p) \leq n-2m+2p+1$, which proves assertion
(b).
It remains to prove assertion (c). We assume that
$\mathbf{f}({H}, \u, p)$ satisfies
$\sfG$. Consider the restriction of the map $\Pi_1 \colon \CC^{n+p+1}
\to \CC$, $\Pi_1(\vecx,\vecy)=\x_1$, to ${\incidence}({H}, \u, p)$, which is smooth
and equidimensional by assertion (a).
By Sard's Lemma \cite[Section 4.2]{SaSc13}, the set of critical values
of the restriction of $\Pi_1$ to ${\incidence}({H}, \u, p)$ is finite. Hence,
its complement ${\mathscr{A}} \subset \CC$ is a non-empty Zariski open
set. We deduce that for ${\alpha} \in {\mathscr{A}}$, the polynomial system
$\mathbf{f}_{{\alpha}}$ satisfies $\sfG$.
\hfill$\square$
\section{Algorithm and correctness} \label{sec:algo}
In this section we present the algorithm, which is called {\sf LowRank\-Hankel},
and prove its correctness.
\subsection{Description} \label{ssec:algo:desc}
\paragraph*{Data representation}
The algorithm takes as {\it input} a couple $({H}, r)$, where ${H} = ({H}_0, {H}_1, \ldots,
{H}_n)$ encodes $m \times m$ Hankel matrices with entries in $\QQ$, defining the
linear matrix ${H}(\vecx)$, and $0 \leq r \leq m-1$.
The {\it output} is represented by a rational parametrization, that is a polynomial system
\[
\mathbf{q} = (q_0({t}), q_1({t}), \ldots, q_n({t}), q({t})) \subset \QQ[{t}]
\]
of univariate polynomials, with $\gcd(q,q_0)=1$. The set of solutions of
\[
\X_i-q_i({t})/q_0({t}) = 0, \,\,i=1, \ldots, n, \qquad q({t})=0
\]
is clearly finite and expected to contain at least one point per connected component
of the algebraic set ${\mathcal{H}}_r \cap \RR^n$.
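To illustrate how points are recovered from such a rational parametrization, here is a Python sketch; the parametrization below is a toy example chosen by hand (it is not produced by the algorithm) and encodes the two points $(1,1)$ and $(2,4)$.

```python
from math import sqrt

# Toy rational parametrization (hand-made, for illustration only):
#   q(t)  = t^2 - 3t + 2   (roots t = 1 and t = 2)
#   q0(t) = 1
#   q1(t) = t,  q2(t) = t^2
# The encoded points are (q1(t)/q0(t), q2(t)/q0(t)) at the roots of q.

def horner(coeffs, t):
    """Evaluate a polynomial given by coefficients [a_d, ..., a_0]."""
    acc = 0.0
    for a in coeffs:
        acc = acc * t + a
    return acc

q  = [1.0, -3.0, 2.0]               # q(t)  = t^2 - 3t + 2
q0 = [1.0]                          # q0(t) = 1
qs = [[1.0, 0.0], [1.0, 0.0, 0.0]]  # q1(t) = t, q2(t) = t^2

# Roots of the quadratic q, via the quadratic formula.
a, b, c = q
disc = b * b - 4 * a * c
roots = [(-b + sqrt(disc)) / (2 * a), (-b - sqrt(disc)) / (2 * a)]

points = [tuple(horner(qi, t) / horner(q0, t) for qi in qs) for t in roots]
print(sorted(points))   # [(1.0, 1.0), (2.0, 4.0)]
```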
\paragraph*{Main subroutines and formal description}
We start by describing the main subroutines we use.
\noindent {\sf ZeroDimSolve}. It takes as input a polynomial system
defining an algebraic set ${\mathcal{Z}}\subset \CC^{n+k}$ and a subset of
variables $\vecx=(\X_1, \ldots, \X_n)$.
If ${\mathcal{Z}}$ is
finite, it returns a rational parametrization of the projection of
${\mathcal{Z}}$ on the $\vecx$-space; otherwise it returns an empty list.
\noindent {\sf ZeroDimSolveMaxRank}. It takes as input a polynomial
system $\mathbf{f}=(f_1, \ldots, f_c)$ such that ${\mathcal{Z}}=\{\vecx \in
\CC^{n+k} \mid {\sf rank}(\jac\mathbf{f}(\vecx))=c\}$ is finite and a
subset of variables $\vecx=(\X_1, \ldots, \X_n)$ that endows $\CC^n$. It
returns {\sf fail: the assumptions are not satisfied} if the assumptions
are not satisfied; otherwise it returns a rational parametrization of the
projection of ${\mathcal{Z}}$ on the $\vecx$-space.
\noindent {\sf Lift}. It takes as input a rational parametrization of
a finite set ${\mathcal{Z}} \subset \CC^N$ and a number ${\alpha} \in \CC$, and
it returns a rational parametrization of $\{({\alpha}, \bfx) \, : \,
\bfx \in {\mathcal{Z}}\}$.
\noindent {\sf Union}. It takes as input two rational parametrizations
encoding finite sets ${\mathcal{Z}}_1, {\mathcal{Z}}_2$ and it returns a rational
parametrization of ${\mathcal{Z}}_1 \cup {\mathcal{Z}}_2$.
\noindent {\sf ChangeVariables}. It takes as input a rational
parametrization of a finite set ${\mathcal{Z}} \subset \CC^N$ and a
non-singular matrix $M \in \GL(N, \CC)$. It returns a rational
parametrization of ${\mathcal{Z}}^M$.
The algorithm {\sf LowRankHankel} is recursive, and it
assumes that its input ${H}$ satisfies Property $\sfG$.
${\sf LowRankHankel}({H},r)$:
\begin{enumerate}
\item \label{step:rec:1} If $n < 2m-2r-1$ then return $[\,]$.
\item\label{step:rec:choice1} Choose randomly $M\in \GL(n, \QQ)$, ${\alpha} \in \QQ$ and
$\u_p \in \QQ^{p+1}$, $\v_p \in \QQ^{2m-p}$ for $0\leq p \leq r$.
\item \label{step:rec:2} If $n = 2m-2r-1$ then return ${\sf ZeroDimSolve}(\mathbf{f}({H},\u_r, r), \vecx)$.
\item Let ${\mathsf{P}}={\sf ZeroDimSolve}({\lagrange}(\mathbf{f}({H},\u_r,r), \v))$
\item \label{step:rec:3} If ${\mathsf{P}}=[]$ then for $p$ from 0 to $r$ do
\begin{enumerate}
\item ${\mathsf{P}}'={\sf ZeroDimSolveMaxRank}({\lagrange}({H}^M, \u_p, \v_p), \vecx)$;
\item ${\mathsf{P}} = {\sf Union}({\mathsf{P}}, {\mathsf{P}}')$
\end{enumerate}
\item \label{step:rec:5} ${\sf Q}={\sf Lift}({\sf LowRankHankel}({\sf Subs}(\X_1={\alpha}, {H}^M),r), {\alpha})$;
\item \label{step:rec:6} return({\sf ChangeVariables}({\sf Union}(${\mathsf{Q}}, {\mathsf{P}}$), $M^{-1}$)).
\end{enumerate}
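The control flow of the recursion above can be sketched as follows (a minimal Python skeleton; every algebraic subroutine is a stub, and the random choices and changes of variables are omitted, so this only mirrors the recursive structure on $n$ and is not an implementation):

```python
def zero_dim_solve_stub(H, p):
    """Placeholder for ZeroDimSolve / ZeroDimSolveMaxRank (hypothetical)."""
    return []

def low_rank_hankel(H, r, n, m):
    """Control-flow skeleton of LowRankHankel on an m x m, n-variate input."""
    # Step 1: below this threshold the rank-r locus is empty.
    if n < 2 * m - 2 * r - 1:
        return []
    # Steps 2-3: random choices would happen here; at the threshold,
    # a single zero-dimensional solve suffices.
    if n == 2 * m - 2 * r - 1:
        return zero_dim_solve_stub(H, r)
    # Steps 4-5: solve the Lagrange systems for p = 0, ..., r.
    P = []
    for p in range(r + 1):
        P = P + zero_dim_solve_stub(H, p)
    # Step 6: recurse after specializing X_1 (one variable fewer).
    Q = low_rank_hankel(H, r, n - 1, m)
    # Step 7: the change of variables by M^{-1} is omitted in this sketch.
    return Q + P

print(low_rank_hankel(None, 1, 5, 3))  # []
```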
\subsection{Correctness} \label{ssec:algo:corr} \label{sssec:algo:prelimresult}
The correctness proof is based on the two following results that are
proved in Sections \ref{sec:dimension} and \ref{sec:closedness}.
The first result states that, when the input matrix $H$ satisfies
$\sfG$, for a generic choice of $M$ and $\v$ and for all
$0 \leq p \leq r$, the set of solutions $(\vecx, \vecy, \vecz)$ to $\lagrange({H}^M, \u,
\v, p)$ at which ${\rm rank}\,\tilde{{H}}(\vecx)=p$ is finite and contains
${\rm crit}\,(\Pi_1, \incidencereg({H}^M,\u, p))$.
\begin{proposition} \label{prop:dimension}\label{PROP:DIMENSION} Let
${\mathscr{H}}$ be the set defined in Proposition \ref{prop:regularity} and
let ${H} \in {\mathscr{H}}$ and $\u \in \QQ^{p+1}-\{\mathbf{0}\}$ for $0 \leq p
\leq r$. There exist non-empty Zariski open sets ${\mathscr{M}}_1 \subset
\GL(n,\CC)$ and ${\mathscr{V}} \subset \CC^{2m-p}$ such that if $M \in
{\mathscr{M}}_1 \cap \QQ^{n \times n}$ and $\v \in {\mathscr{V}} \cap \QQ^{2m-p}$,
the following holds:
\begin{itemize}
\item[(a)] ${\lagrange}({H}^M, \u, \v, p)$ satisfies $\sfG$ over
$\incidencereg({H}^M, \u, p)$;
\item[(b)] the projection of ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$ on the
$(\vecx,\vecy)$-spa\-ce contains ${\rm crit}\,(\Pi_1, \incidencereg({H}^M, \u,
p))$
\end{itemize}
\end{proposition}
\begin{proposition}\label{prop:closedness}\label{PROP:CLOSEDNESS}
Let ${H} \in {\mathscr{H}}$, $0\leq p\leq r$, $d_p = n-2m+2p+1$, and let
${\mathcal{C}}$ be a connected component of ${\mathcal{H}}_p \cap \RR^n$. Then
there exist non-empty Zariski open sets ${\mathscr{M}}_2 \subset \GL(n,\CC)$
and ${\mathscr{U}} \subset \CC^{p+1}$ such that for any $M \in {\mathscr{M}}_2 \cap
\QQ^{n \times n}$, $\u \in {\mathscr{U}} \cap \QQ^{p+1}$, the following
holds:
\begin{itemize}
\item[(a)] for $i = 1, \ldots, d_p$, $\pi_i({\mathcal{C}}^M)$ is closed;
\item[(b)] for any ${\alpha} \in \RR$ in the boundary of
$\pi_1({\mathcal{C}}^M)$, $\pi_1^{-1}({\alpha}) \cap {\mathcal{C}}^M$
is finite;
\item[(c)] for any $\vecx\in \pi_1^{-1}({\alpha}) \cap {\mathcal{C}}^M$ and $p$
such that ${\rm rank} \, \tilde{{H}}_p(\vecx)=p$, there exists $\vecy \in
\RR^{p+1}$ such that $(\vecx,\vecy) \in {\incidence} ({H}^M, \u,
p)$.
\end{itemize}
\end{proposition}
Our algorithm is probabilistic and its correctness depends on the
validity of the choices made at Step
\ref{step:rec:choice1}. We now formalize this assumption.
We need to distinguish the choices of $M, \u$ and $\v$ made
in the different calls of {\sf LowRankHankel}; each of these parameters
must lie in a non-empty Zariski open set defined in Propositions
\ref{prop:regularity}, \ref{prop:dimension} and \ref{prop:closedness}.
We assume that the input matrix ${H}$ satisfies $\sfG$; we denote it by
${H}^{(0)}$, where the superscript indicates that no recursive call has been
made on this input; similarly ${\alpha}^{(0)}$ denotes the choice of
${\alpha}$ made at Step \ref{step:rec:choice1} on input
${H}^{(0)}$. Next, we denote by ${H}^{(i)}$ the input of {\sf
LowRankHankel} at the $i$-th recursive call and by
${\mathscr{A}}^{(i)}\subset \CC$ the non-empty Zariski open set defined in
Proposition \ref{prop:regularity} applied to $H^{(i)}$. Note that if
${\alpha}^{(i)}\in {\mathscr{A}}^{(i)}$, we can deduce that ${H}^{(i+1)}$
satisfies $\sfG$.
Now, we denote by ${\mathscr{M}}_1^{(i)},{\mathscr{M}}_2^{(i)}$ and
${\mathscr{U}}^{(p,i)},{\mathscr{V}}^{(p,i)}$ the open sets defined in Propositions
\ref{prop:regularity}, \ref{prop:dimension} and \ref{prop:closedness}
applied to $H^{(i)}$, for $0 \leq p \leq r$ and where $i$ is the depth
of the recursion.
Finally, we denote by $M^{(i)} \in \GL(n, \QQ)$, $\u^{(i)}_p\in
\QQ^{p+1}$ and $\v^{(i)}_p$, for $0 \leq p \leq r$, respectively the
matrix and the vectors chosen at Step \ref{step:rec:choice1} of the
$i$-th call of ${\sf LowRankHankel}$.
\noindent {\bf Assumption $\sfH$}. We say that $\sfH$
is satisfied if $M^{(i)}$, ${\alpha}^{(i)}$, $\u^{(i)}_p$ and $\v^{(i)}_p$
satisfy:
\begin{itemize}
\item $M^{(i)} \in ({\mathscr{M}}_1^{(i)} \cap
{\mathscr{M}}_2^{(i)}) \cap \QQ^{i \times i}$;
\item ${\alpha}^{(i)} \in {\mathscr{A}}^{(i)}$;
\item $\u^{(i)}_p \in {\mathscr{U}}^{(p,i)} \cap
\QQ^{p+1}-\{\mathbf{0}\}$, for $0 \leq p \leq r$;
\item $\v_p^{(i)} \in {\mathscr{V}}^{(p,i)} \cap \QQ^{2m-p}-\{\mathbf{0}\}$ for $0 \leq p \leq r$;
\end{itemize}
\begin{theorem}
Let ${H}$ satisfy $\sfG$. Then, if
$\sfH$ is satisfied, ${\sf LowRankHankel}$ with input $({H}, r)$ returns a
rational para\-met\-rization that encodes a finite algebraic set in ${\mathcal{H}}_r$ meeting
each connected component of ${\mathcal{H}}_r \cap \RR^n$.
\end{theorem}
\begin{proof}
The proof is by decreasing induction on the depth of the recursion.
When $n<2m-2r-1$, ${\mathcal{H}}_r$ is empty because the input ${H}$ satisfies
$\sfG$ ($\sfH$ being satisfied). In this case, the output defines the empty set.
When $n=2m-2r-1$, since $\sfH$ is satisfied, by Proposition \ref{PROP:REGULARITY},
either ${\mathcal{H}}_r = \emptyset$ or $\dim\,{\mathcal{H}}_r = 0$. If ${\mathcal{H}}_r = \emptyset$,
then ${\incidence}_r = \emptyset$, since the projection of ${\incidence}_r$ on
the $\vecx$-space is included in ${\mathcal{H}}_r$. Suppose now that $\dim {\mathcal{H}}_r
= 0$: Proposition \ref{prop:closedness} guarantees that the output
of the algorithm defines a finite set containing ${\mathcal{H}}_r$.
Now, we assume that $n>2m-2r-1$; our induction assumption is that for
any $i\geq 1$, ${\sf LowRankHankel}({H}^{(i)}, r)$ returns a rational
parametrization that encodes a finite set of points in the algebraic
set defined by ${\sf rank}({H}^{(i)})\leq r$ and that meets every
connected component of its real trace.
Let ${{C}}$ be a connected component of ${\mathcal{H}}_r \cap \RR^n$. To keep
notations simple, we denote by $M \in \GL(n, \QQ)$, $\u_p$ and $\v_p$
the matrix and vectors chosen at Step \ref{step:rec:choice1} for
$0\leq p \leq r$. Since $\sfH$ holds one can apply Proposition
\ref{prop:closedness}. We deduce that the image $\pi_1({{C}}^M)$ is
closed. Then, either $\pi_1({{C}}^M) = \RR$ or it is a closed interval.
Suppose first that $\pi_1({{C}}^M) = \RR$. Then for ${\alpha} \in \QQ$
chosen at Step \ref{step:rec:choice1}, $\pi_1^{-1}({\alpha}) \cap
{{C}}^M \neq \emptyset$. Remark that $\pi_1^{-1}({\alpha}) \cap {{C}}^M$ is the
union of some connected components of ${\mathcal{H}}^{(1)}_r \cap \RR^{n-1} =
\{\vecx=(\x_2, \ldots, \x_n) \in \RR^{n-1} : {\rm rank} \, {H}^{(1)} (\vecx) \leq
r\}$. Since $\sfH$ holds, assertion (c) of Proposition
\ref{prop:regularity} implies that ${H}^{(1)}$ satisfies $\sfG$. We
deduce by the induction assumption that the parametrization returned
by Step \ref{step:rec:5} where ${\sf LowRankHankel}$ is called
recursively defines a finite set of points that is contained in
${\mathcal{H}}_r$ and that meets ${{C}}$.
Suppose now that $\pi_1({{C}}^M) \neq \RR$. By Proposition
\ref{prop:closedness}, $\pi_1({{C}}^M)$ is closed. Since ${{C}}^M$ is
connected, $\pi_1({{C}}^M)$ is a connected interval, and since
$\pi_1({{C}}^M) \neq \RR$ there exists $\beta$ in the boundary of
$\pi_1({{C}}^M)$ such that $\pi_1({{C}}^M) \subset [\beta, +\infty)$ or
$\pi_1({{C}}^M) \subset (-\infty, \beta]$. Suppose without loss of
generality that $\pi_1({{C}}^M) \subset [\beta, +\infty)$, so that
$\beta$ is the minimum value attained by $\pi_1$ on ${{C}}^M$.
Let $\vecx=(\beta, \x_2, \ldots, \x_n) \in {{C}}^M$, and suppose that
${\rm rank} (\tilde{{H}}(\vecx)) = p$. By Proposition \ref{prop:closedness} (assertion (c)),
there exists $\vecy \in \CC^{p+1}$ such that $(\vecx,\vecy) \in
\incidence(H, \u, p)$. Note that since ${\rm rank} (\tilde{{H}}(\vecx)) = p$, we also deduce that
$(\vecx,\vecy) \in
\incidencereg(H, \u, p)$.
We claim that there exists $\vecz \in \CC^{2m-p}$ such that $(\vecx,\vecy,\vecz)$
lies on ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$.
Since $\sfH$ holds, Proposition \ref{prop:dimension} implies that
${\lagrange}({H}^M, \u, \v, p)$ satisfies $\sfG$ over $\incidencereg(H^M,
\u, p)$. Also, note that the Jacobian criterion implies that
${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$ has dimension at most $0$.
We conclude that the point $\vecx \in {{C}}^M$ lies on the finite set
encoded by the rational parametrization {\sf P} obtained at Step
\ref{step:rec:3} of ${\sf LowRankHankel}$ and we are done.
It remains to prove our claim, i.e. there exists $\vecz \in \CC^{2m-p}$
such that $(\vecx,\vecy,\vecz)$ lies on ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$.
Let ${{C}}'$ be the connected component of ${\incidence}({H}, \u, p)^M \cap
\RR^{n+p+1}$ containing $(\vecx,\vecy)$. We first prove that $\beta =
\pi_1(\vecx,\vecy)$ lies on the boundary of $\pi_1({{C}}')$. Indeed, suppose
that there exists $(\widetilde{\vecx},\widetilde{\vecy}) \in {{C}}'$ such that
$\pi_1(\widetilde{\vecx},\widetilde{\vecy}) < \beta$. Since ${{C}}'$ is
connected, there exists a continuous semi-algebraic map $\tau \colon
[0,1] \to {{C}}'$ with $\tau(0) = (\vecx,\vecy)$ and $\tau(1) =
(\widetilde{\vecx},\widetilde{\vecy})$. Let $\varphi: (\vecx, \vecy)\to \vecx$ be the
canonical projection on the $\vecx$-space.
Note that $\varphi \circ \tau$ is also continuous and semi-algebraic
(it is the composition of continuous semi-algebraic maps), with
$(\varphi \circ \tau)(0)=\vecx$, $(\varphi \circ \tau)(1)
=\widetilde{\vecx}$. Since $(\varphi \circ \tau)(\theta) \in {\mathcal{H}}_p$ for
all $\theta \in [0,1]$, we deduce that $\widetilde{\vecx} \in {{C}}$. Since
$\pi_1(\widetilde{\vecx}) = \pi_1(\widetilde{\vecx}, \widetilde{\vecy}) <
\beta$, we contradict the minimality of $\beta$ on ${{C}}^M$. So $\pi_1(\vecx,\vecy)$ lies on the
boundary of $\pi_1({{C}}')$.
By the Implicit Function Theorem, and the fact that $\mathbf{f}({H}, \u, p)$
satisfies Property ${\sfG}$, one deduces that $(\vecx,\vecy)$ is a critical
point of the restriction of $\Pi_1: (\x_1, \ldots, \x_n, \y_1, \ldots,
\y_{p+1})\mapsto \x_1$ to ${\incidence}({H}, \u, p)$.
Since ${\sf rank}(\tilde{{H}}^M(\vecx))=p$ by construction, we deduce that
$(\vecx,\vecy)$ is a critical point of the restriction of $\Pi_1$ to
$\incidencereg({H}^M, \u, p)$ and that, by Proposition
\ref{prop:dimension}, there exists $\vecz \in \CC^{2m-p}$ such
that $(\vecx,\vecy,\vecz)$ belongs to the set ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$, as claimed.
\end{proof}
\section{Degree bounds and complexity} \label{ssec:algo:complexity}
We first remark that the complexity of the subroutines ${\sf Union}$,
${\sf Lift}$ and ${\sf ChangeVariables}$ (see \cite[Chap. 10]{SaSc13})
is negligible with respect to the complexity of ${\sf
ZeroDimSolveMaxRank}$.
Hence, the complexity of ${\sf LowRankHankel}$ is at most $n$ times
the complexity of ${\sf ZeroDimSolveMaxRank}$, which is computed
below.
Let $({H},r)$ be the input, and let $0 \leq p \leq r$.
We estimate the complexity of ${\sf ZeroDimSolveMaxRank}$ with input
$({H}^M, \u_p, \v_p)$. It depends on the algorithm used to solve
zero-dimensional polynomial systems. We choose the one of
\cite{jeronimo2009deformation} that can be seen as a symbolic homotopy
taking into account the sparsity structure of the system to solve.
More precisely, let $\mathbf{p}\subset \QQ[x_1, \ldots, x_n]$ and
$s\in \QQ[x_1, \ldots, x_n]$ be such that the set of common complex solutions of
the polynomials in $\mathbf{p}$ at which $s$ does not vanish is finite.
The algorithm in \cite{jeronimo2009deformation} builds a system
$\mathbf{q}$ that has the same monomial structure as $\mathbf{p}$
and defines a finite algebraic set. Next, the homotopy system
$\mathbf{t} = t\mathbf{p}+(1-t)\mathbf{q}$ where $t$ is a new
variable is built. The system $\mathbf{t}$ defines a $1$-dimensional
constructible set over the open set defined by $s\neq 0$ and for
generic values of $t$. Abusing notation, we denote by $Z(\mathbf{t})$
the curve defined as the Zariski closure of this constructible set.
Starting from the solutions
of $\mathbf{q}$, which are encoded by a rational parametrization, the
algorithm builds a rational parametrization for the solutions of
$\mathbf{p}$ at which $s$ does not vanish. Following
\cite{jeronimo2009deformation}, the algorithm runs in time
$\ensuremath{{O}{\,\tilde{ }\,}}(Ln^{O(1)} \delta \delta')$ where $L$ is the complexity of
evaluating the input, $\delta$ is a bound on the number of isolated
solutions of $\mathbf{p}$ and $\delta'$ is a bound on the degree of
the curve $Z(\mathbf{t})$.
Below, we estimate these degrees when the input is a Lagrange system
as the ones we consider.
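To fix ideas, the homotopy principle can be illustrated on a univariate toy example; the sketch below deforms a start system with a known root into a target polynomial and tracks the root by Newton corrections. This is only a schematic, floating-point analogue: the algorithm of \cite{jeronimo2009deformation} works symbolically with the structured systems described below.

```python
# Toy homotopy: deform q(x) = x^3 - 1 (known root x = 1) into the
# target p(x) = x^3 - 2x - 5 along t*p + (1-t)*q, tracking the real
# root by Newton's method at each step of t.

def p(x):  return x**3 - 2*x - 5
def dp(x): return 3*x**2 - 2
def q(x):  return x**3 - 1
def dq(x): return 3*x**2

def h(x, t):  return t * p(x) + (1 - t) * q(x)
def dh(x, t): return t * dp(x) + (1 - t) * dq(x)

x = 1.0                      # root of the start system q
steps = 100
for k in range(1, steps + 1):
    t = k / steps
    for _ in range(10):      # Newton correction at the current t
        x = x - h(x, t) / dh(x, t)

print(x)                     # close to 2.0945..., the real root of p
```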
\noindent
{\bf Degree bounds.} Let $((\tilde{{H}}
\,\vecy)',{\u_p}'\vecy-1)$, with $\vecy = (\Y_1, \ldots, \Y_{p+1})'$, be the system
defining ${\incidence}_p({H},{\u_p})$. Since $\vecy \neq 0$, one can eliminate
w.l.o.g. $\Y_{p+1}$ and the linear form ${\u_p}'\vecy-1$, obtaining a
system $\tilde{\mathbf{f}} \in \QQ[\vecx,\vecy]^{2m-p-1}$. We recall that if
$\vecx^{(1)}, \ldots, \vecx^{(c)}$ are $c$ groups of variables, and $f
\in \QQ[\vecx^{(1)}, \ldots, \vecx^{(c)}]$, we say that the
multidegree of $f$ is $(d_1, \ldots, d_c)$ if its degree with respect
to the group $\vecx^{(j)}$ is $d_j$, for $j=1, \ldots, c$.
Let $\lagrange=(\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}})$ be the corresponding
Lagrange system, where
$$
(\tilde{\mathbf{g}}, \tilde{\mathbf{h}}) = (\tilde{g}_1, \ldots, \tilde{g}_{n-1}, \tilde{h}_1,
\ldots, \tilde{h}_{p}) = \vecz' \jac_1 \tilde{\mathbf{f}}
$$
with $\vecz = [1, \Z_2, \ldots, \Z_{2m-p-1}]$ a non-zero vector of Lagrange
multipliers (we let $\Z_1 = 1$ w.l.o.g.). One obtains that $\lagrange$ consists of
\begin{itemize}
\item
$2m-p-1$ polynomials of multidegree bounded by $(1,1,0)$ with respect to $(\vecx,\vecy,\vecz)$,
\item
$n-1$ polynomials of multidegree bounded by $(0,1,1)$ with respect to $(\vecx,\vecy,\vecz)$,
\item
$p$ polynomials of multidegree bounded by $(1,0,1)$ with respect to $(\vecx,\vecy,\vecz)$,
\end{itemize}
that is, $n+2m-2$ polynomials in $n+2m-2$ variables.
\begin{lemma} \label{lemma1}
With the above notations, the number of isolated solutions of $\zeroset{\lagrange}$ is at most
\[
\delta(m,n,p) = \sum_{\ell}\binom{2m-p-1}{n-\ell} \binom{n-1}{2m-2p-2+\ell} \binom{p}{\ell}
\]
where $\ell\in \{\max\{0,n-2m+p+1\}, \ldots,
\min\{p,n-2m+2p+1\}\}$.
\end{lemma}
\begin{proof}
By \cite[Proposition 11.1]{SaSc13}, this degree is bounded by the multilinear
B\'ezout bound $\delta(m,n,p)$ which is the sum of the coefficients of
\[
(s_\X+s_\Y)^{2m-p-1} (s_\Y+s_\Z)^{n-1} (s_\X+s_\Z)^{p} \in \QQ[s_\X,s_\Y,s_\Z]
\]
modulo $I = \left \langle s_\X^{n+1}, s_\Y^{p+1}, s_\Z^{2m-p-1} \right
\rangle$. The conclusion follows from a direct computation.
\end{proof}
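As a sanity check on Lemma~\ref{lemma1}, the closed form of $\delta(m,n,p)$ can be compared with a direct expansion of the multilinear B\'ezout product modulo the ideal $I$; the following Python sketch (illustrative only, not part of the algorithm) verifies that the two computations agree on small parameters.

```python
from math import comb
from itertools import product

def delta(m, n, p):
    """Closed-form bound delta(m, n, p) of Lemma 1."""
    lo = max(0, n - 2*m + p + 1)
    hi = min(p, n - 2*m + 2*p + 1)
    return sum(comb(2*m - p - 1, n - l)
               * comb(n - 1, 2*m - 2*p - 2 + l)
               * comb(p, l) for l in range(lo, hi + 1))

def delta_expand(m, n, p):
    """Sum of the coefficients of (sX+sY)^(2m-p-1) (sY+sZ)^(n-1) (sX+sZ)^p
    modulo <sX^(n+1), sY^(p+1), sZ^(2m-p-1)>, by brute-force expansion."""
    total = 0
    for i, j, l in product(range(2*m - p), range(n), range(p + 1)):
        a = i + l                      # exponent of sX
        b = (2*m - p - 1 - i) + j      # exponent of sY
        c = (n - 1 - j) + (p - l)      # exponent of sZ
        if a <= n and b <= p and c <= 2*m - p - 2:
            total += comb(2*m - p - 1, i) * comb(n - 1, j) * comb(p, l)
    return total

print(delta(2, 3, 1), delta_expand(2, 3, 1))  # both equal 2
```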
With input $\lagrange$, the homotopy system $\mathbf{t}$ consists of
$2m-p-1$, $n-1$ and $p$ polynomials of multidegree respectively bounded
by $(1,1,0,1)$, $(0,1,1,1)$ and $(1,0,1,1)$ with respect to
$(\vecx,\vecy,\vecz,t)$.
We prove the following.
\begin{lemma} \label{lemma2}
${\rm deg} \, \zeroset{\mathbf{t}} \in {O}(pn(2m-p) \delta(m,n,p))$.
\end{lemma}
\begin{proof}[of Lemma \ref{lemma2}]
We use Multilinear B\'ezout bounds as in the proof of Lemma \ref{lemma1}.
The degree of $\zeroset{{\bf t}}$ is bounded by the sum of the coefficients
of
\[
(s_\X+s_\Y+s_{t})^{2m-p-1} (s_\Y+s_\Z+s_{t})^{n-1} (s_\X+s_\Z+s_{t})^{p}
\]
modulo $I = \left\langle s_\X^{n+1}, s_\Y^{p+1}, s_\Z^{2m-p-1}, s_{t}^2 \right\rangle
\subset \QQ[s_\X, s_\Y, s_\Z, s_{t}]$. Since the variable $s_t$ can appear
up to power $1$, the previous polynomial is congruent to $P_1+P_2+P_3+P_4$
modulo $I$, with
\begin{itemize}
\item[] $P_1 = (s_\X+s_\Y)^{2m-p-1} (s_\Y+s_\Z)^{n-1} (s_\X+s_\Z)^{p}$,
\item[] $P_2 = (2m-p-1) s_t (s_\X+s_\Y)^{2m-p-2} (s_\Y+s_\Z)^{n-1} \,(s_\X+s_\Z)^{p}$,
\item[] $P_3 = (n-1) s_t (s_\Y+s_\Z)^{n-2} (s_\X+s_\Y)^{2m-p-1} (s_\X+s_\Z)^{p}$
\item[] $P_4 = p \, s_t (s_\X+s_\Z)^{p-1} (s_\X+s_\Y)^{2m-p-1} (s_\Y+s_\Z)^{n-1}.$
\end{itemize}
We denote by $\Delta(P_i)$ the contribution of $P_i$ to the previous
sum.
Firstly, observe that $\Delta(P_1) = \delta(m,n,p)$ (compare with the
proof of Lemma \ref{lemma1}). Defining
$\chi_1 = \max\{0,n-2m+p+1\}$ and $\chi_2 = \min\{p,n-2m+2p+1\}$,
one has $\Delta(P_1) = \delta(m,n,p) = \sum_{\ell = \chi_1}^{\chi_2}{\gamma}(\ell)$
with
${\gamma}(\ell) = \binom{2m-p-1}{n-\ell} \binom{n-1}{2m-2p-2+\ell} \binom{p}{\ell}.$
Write now $P_2 = (2m-p-1)s_t\tilde{P}_2$, with $\tilde{P}_2 \in \QQ[\X,\Y,\Z].$
Let $\Delta(\tilde{P}_2)$ be the contribution of $\tilde{P}_2$, that is the sum
of the coefficients of $\tilde{P}_2$ modulo $I' = \left\langle s_\X^{n+1}, s_\Y^{p+1},
s_\Z^{2m-p-1} \right\rangle$, so that $\Delta(P_2) = (2m-p-1)\Delta(\tilde{P}_2)$. Then
\[
\Delta(\tilde{P}_2) = \sum_{i,j,\ell}\binom{2m-p-2}{i}\binom{n-1}{j}\binom{p}{\ell}
\]
where the sum runs in the set defined by the inequalities
\[
i + \ell \leq n, \,\,\, 2m-p-2-i+j \leq p, \,\,\, n-1-j+p-\ell \leq 2m-p-2.
\]
Now, since $\tilde{P}_2$ is homogeneous of degree $n+2m-3$, only three possible
cases hold:
{\it Case (A)}. $i + \ell=n$, $2m-p-2-i+j =p$ and $n-1-j+p-\ell =2m-p-3$. Here the contribution is
$\delta_a = \sum_{\ell=\alpha_1}^{\alpha_2} {\varphi}_a(\ell)$ with
\[
{\varphi}_a(\ell) = \binom{2m-p-2}{n-\ell}\binom{n-1}{2m-2p-3+\ell}\binom{p}{\ell},
\]
and $\alpha_1 = \max\{0,n-2m+p+2\}, \alpha_2 = \min\{p,n-2m+2p+2\}.$
Suppose first that $\ell$ is an admissible index for $\Delta(P_1)$ and $\delta_a$,
that is $\max\{\chi_1,\alpha_1\}=\alpha_1 \leq \ell \leq \chi_2=\min\{\chi_2,\alpha_2\}$.
Then:
\begin{align*}
{\varphi}_a(\ell) & \leq \binom{2m-p-1}{n-\ell}\binom{n-1}{2m-2p-3+\ell}\binom{p}{\ell} = \\
& = \Psi(\ell) {\gamma}(\ell) \qquad \text{with} \, \Psi(\ell) = \frac{2m-2p-2+\ell}{n-(2m-2p-2+\ell)}.
\end{align*}
The rational function $\ell \longmapsto \Psi(\ell)$
is piece-wise monotone (its first derivative is positive), and its unique possible
pole is $\ell = n-2m+2p+2$. Suppose that this value is a pole of $\Psi(\ell)$.
This would imply $\alpha_2 = n-2m+2p+2$ and hence $\chi_2 = n-2m+2p+1$; since $\ell$
is admissible for $\Delta(P_1)$, one would obtain a contradiction. Hence
the rational function $\Psi(\ell)$ has no poles, its maximum is attained at
$\chi_2$, and its value is $\Psi(\chi_2) = n-1$. Hence
${\varphi}_a(\ell) \leq (n-1){\gamma}(\ell)$.
We now analyze each possible case:
\begin{enumerate}
\item[(A1)] $\chi_1 = 0, \alpha_1=0$. This implies $\chi_2=n-2m+2p+1, \alpha_2=n-2m+2p+2$.
We deduce that
\begin{align*}
\delta_a &= \sum_{\ell = 0}^{\chi_2}{\varphi}_a(\ell) + {\varphi}_a(\alpha_2) \leq (n-1) \sum_{\ell = 0}^{\chi_2}{\gamma}(\ell) + \\
&+ {\varphi}_a(\alpha_2) \leq (n-1)\Delta(P_1)+ {\varphi}_a(\alpha_2).
\end{align*}
In this case we deduce the bound $\delta_a \leq n \Delta(P_1)$.
\item[(A2)] $\chi_1 = 0, \alpha_1=n-2m+p+2$. This implies $\chi_2=n-2m+2p+1, \alpha_2=p$.
In this case all indices are admissible, and hence we deduce the bound $\delta_a \leq (n-1) \Delta(P_1).$
\item[(A3)] $\chi_1 = n-2m+p+1$. This implies $\alpha_1=n-2m+p+2$, $\chi_2=p, \alpha_2=p$.
Also in this case all indices are admissible, and $\delta_a \leq (n-1) \Delta(P_1).$
\end{enumerate}
{\it Case (B)}. $i + \ell=n$, $2m-p-2-i+j =p-1$ and $n-1-j+p-\ell = 2m-p-2$. Here the contribution is
$\delta_b=\sum_\ell {\varphi}_b(\ell)$ where
\[
{\varphi}_b(\ell) = \binom{2m-p-2}{n-\ell}\binom{n-1}{2m-2p-2+\ell}\binom{p}{\ell}.
\]
One gets $\delta_b \leq \Delta(P_1)$ since the sum above is defined over $\max \{0,n-2m+p+2\} \leq
\ell \leq \min \{p,n-2m+2p+1\}$, and the inequality ${\varphi}_b(\ell) \leq {\gamma}(\ell)$ holds term-wise.
{\it Case (C)} $i + \ell=n-1$, $2m-p-2-i+j = p$ and $n-1-j+p-\ell = 2m-p-2$. Here the contribution is
$\delta_c = \sum_{\ell}{\varphi}_c(\ell)$ where
\[
{\varphi}_c(\ell) = \binom{2m-p-2}{n-1-\ell}\binom{n-1}{2m-2p-2+\ell}\binom{p}{\ell}.
\]
One gets $\delta_c \leq \Delta(P_1)$ since the sum above is defined over $\max\{0,n-2m+p+1\} \leq
\ell \leq \min\{p,n-2m+2p+1\}$, and the inequality ${\varphi}_c(\ell) \leq {\gamma}(\ell)$ holds term-wise.
We conclude that $\delta_a \leq n \Delta(P_1)$, $\delta_b \leq \Delta(P_1)$ and $\delta_c \leq \Delta(P_1)$.
Hence $\Delta(P_2) = (2m-p-1) (\delta_a+\delta_b+\delta_c) \in {O}(n(2m-p) \Delta(P_1)).$
Analogously to $\Delta(P_2)$, one can conclude that $\Delta(P_3) \in {O}(n(n+2m-p) \Delta(P_1))$
and $\Delta(P_4) \in {O}(pn(n+2m-p) \Delta(P_1)).$
\end{proof}
\noindent{\bf Estimates.}\\
We now give the overall complexity of ${\sf ZeroDimSolveMaxRank}$.
\begin{theorem}
Let $\delta = \delta(m,n,p)$ be given by Lemma \ref{lemma1}. Then
{\sf ZeroDimSolveMaxRank} with input $\lagrange({H}^M, \u_p, \v_p)$ computes
a rational parametrization
within
\[
\ensuremath{{O}{\,\tilde{ }\,}}(p(n+2m)^{O(1)}(2m-p) \delta^2),
\]
arithmetic operations over $\QQ$.
\end{theorem}
\begin{proof}
The polynomial entries of the system $\mathbf{t}$ (as defined in the
previous section) are cubic polynomials in $n+2m-1$ variables, so
the cost of their evaluation is in $O((n+2m)^3)$. Applying
\cite[Theorem 5.2]{jeronimo2009deformation} and the bounds given in
Lemmas \ref{lemma1} and \ref{lemma2} yields the claimed complexity
estimate.
\end{proof}
From Lemma \ref{lemma1}, one deduces that for all $0 \leq p \leq r$,
the maximum number of complex solutions computed by ${\sf
ZeroDimSolveMaxRank}$ is bounded above by $\delta(m,n,p)$. We deduce
the following result.
\begin{proposition}
Let $H$ be an $m \times m$, $n$-variate linear Hankel matrix, and let
$r \leq m-1$. The maximum number of complex solutions computed by
${\sf LowRankHankel}$ with input $(H,r)$ is
$$
\binom{2m-r-1}{r} + \sum_{k=2m-2r}^{n}\sum_{p=0}^{r} \delta(m,k,p),
$$
where
$\delta(m,k,p)$ is the bound defined in Lemma \ref{lemma1}.
\end{proposition}
\begin{proof}
The maximum number of complex solutions computed by {\sf
ZeroDimSolve} is the degree of ${\incidence}(H, \u, r)$. Using the
multilinear B\'ezout bounds, this is bounded by the coefficient of
the monomial $s_\X^{n}s_\Y^{r}$ in the expression
$(s_\X+s_\Y)^{2m-r-1}$, that is exactly $\binom{2m-r-1}{r}$. The
proof is now straightforward, since {\sf ZeroDimSolveMaxRank} runs
$r+1$ times at each recursive step of {\sf LowRankHankel}, and since
the number of variables decreases from $n$ to $2m-2r$.
\end{proof}
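The bound of the proposition is straightforward to evaluate; the Python sketch below (which repeats the closed form of Lemma~\ref{lemma1} so as to be self-contained) computes it for sample values of $(m, n, r)$.

```python
from math import comb

def delta(m, n, p):
    """Bound of Lemma 1, repeated here for self-containedness."""
    lo = max(0, n - 2*m + p + 1)
    hi = min(p, n - 2*m + 2*p + 1)
    return sum(comb(2*m - p - 1, n - l)
               * comb(n - 1, 2*m - 2*p - 2 + l)
               * comb(p, l) for l in range(lo, hi + 1))

def total_bound(m, n, r):
    """Bound on the number of complex solutions computed by LowRankHankel."""
    return comb(2*m - r - 1, r) + sum(delta(m, k, p)
                                      for k in range(2*m - 2*r, n + 1)
                                      for p in range(r + 1))

print(total_bound(3, 6, 1))  # 16
```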
\section{Proof of Proposition \ref{prop:dimension}} \label{sec:dimension}
\noindent
We start with a local description of the algebraic sets defined by our
Lagrange systems. This is obtained from a local description of the
system defining $\incidence({H}, \u, p)$. Without loss of generality,
we can assume that $\u=(0, \ldots, 0, 1)$ in the whole section: such a
situation can be retrieved from a linear change of the $\vecy$-variables
that leaves invariant the $\vecx$-variables.
\subsection{Local equations} \label{ssec:dimlag:local}\label{sssec:dimlag:local:inc}
\label{sssec:dimlag:local:lag}
Let $(\vecx, \vecy)\in \incidencereg({H}, \u, p)$. Then, by definition, there
exists a $p \times p$ minor of $\tilde{{H}}(\vecx)$ that is
non-zero. Without loss of generality, we assume that this minor is the
determinant of the upper left $p\times p$ submatrix of
$\tilde{H}$. Hence, consider the following block partition
\begin{equation} \label{partition}
\tilde{{H}}(\vecx) =
\left[
\begin{array}{cc}
N & Q \\
P & R \\
\end{array}
\right]
\end{equation}
with $N \in \QQ[\vecx]^{p \times p}$, and $Q \in \QQ[\vecx]^{p}$, $P \in
\QQ[\vecx]^{(2m-2p-1) \times p}$, and $R \,\in \,\QQ[\vecx]^{2m-2p-1}$. We
are going to exhibit suitable local descriptions of $\incidencereg({H},
\u, p)$ over the Zariski open set $O_N\subset \CC^{n+p+1}$ defined by
$\det N \neq 0$; we denote by $\QQ[\vecx,\vecy]_{\det N}$ the local ring of
$\QQ[\vecx, \vecy]$ localized by $\det N$.
\begin{lemma} \label{lemma:local:incidence}
Let $N,Q,P,R$ be as above, and $\u \in \QQ^{p+1}-\{\mathbf{0}\}$. Then there exist
$\{q_{i}\}_{1 \leq i \leq p} \subset \QQ[\vecx]_{\det N}$ and
$\{\tilde{q}_{i}\}_{1 \leq i \leq 2m-2p-1} \subset \QQ[\vecx]_{\det N}$ such that the
constructible set $\incidencereg({H}, \u, p) \cap O_N$ is
defined by the equations
\begin{align*}
\Y_{i} - q_{i}(\vecx) &= 0 \qquad 1 \leq i \leq p \\
\tilde{q}_{i}(\vecx) &= 0 \qquad 1 \leq i \leq 2m-2p-1 \\
\Y_{p+1} - 1 &= 0.
\end{align*}
\end{lemma}
\begin{proof}
Let $c=2m-2p-1$.
The proof follows by the equivalence
\[
\left[ \begin{array}{cc} N & Q \\ P & R \end{array} \right] \vecy = 0
\;
\text{iff}
\;
\left[ \begin{array}{cc} {\rm I}_p & 0 \\ -P & {\rm I}_{c} \end{array} \right]
\left[ \begin{array}{cc} N^{-1} & 0 \\ 0 & {\rm I}_{c} \end{array} \right]
\left[ \begin{array}{cc} N & Q \\ P & R \end{array} \right] \vecy = 0
\]
in the local ring $\QQ[\vecx,\vecy]_{\det N}$, that is if and only if
\[
\left[ \begin{array}{cc} {\rm I}_p & N^{-1}Q \\ 0 & R-PN^{-1}Q \end{array} \right] \vecy = 0.
\]
Recall that we have assumed that $\u=(0, \ldots, 0, 1)$; then the
equation $\u\vecy=1$ is $\Y_{p+1}=1$. Denoting by $q_{i}$ and
$\tilde{q}_{i}$ respectively the entries of vectors $-N^{-1}Q$ and
$-(R-PN^{-1}Q)$ ends the proof.
\end{proof}
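The block elimination above can be sanity-checked on a toy instance. The following sketch uses hypothetical scalar blocks (so $N$, $Q$, $P$, $R$ are $1\times 1$) and verifies that, when $\det N \neq 0$, solving the full system amounts to setting $\vecy_1 = -N^{-1}Q\,\vecy_2$ and solving the Schur complement equation $(R-PN^{-1}Q)\,\vecy_2 = 0$; the matrix-block computation is identical.

```python
from fractions import Fraction as F

# Toy check of the block elimination: with det N != 0, the system
# [[N, Q], [P, R]] y = 0 is equivalent to y1 = -N^{-1} Q y2 together
# with (R - P N^{-1} Q) y2 = 0.  Scalar (1x1) blocks for brevity.
N, Q, P, R = F(2), F(3), F(4), F(6)

schur = R - P * (1 / N) * Q       # Schur complement R - P N^{-1} Q
assert schur == 0                 # here it vanishes, so y2 is free

y2 = F(1)
y1 = -(1 / N) * Q * y2            # y1 = -N^{-1} Q y2
assert N * y1 + Q * y2 == 0       # first block row of the original system
assert P * y1 + R * y2 == 0       # second block row
```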
The above local system is denoted by $\tilde{\mathbf{f}} \in \QQ[\vecx,\vecy]_{\det N}^{2m-p}$.
The Jacobian matrix of this polynomial system is
\[
\jac\tilde{\mathbf{f}} =
\left[
\begin{array}{cc}
\begin{array}{c} \jac_x\tilde{\mathbf{q}} \\ \star \end{array}
&
\begin{array}{c} 0 \\ {\rm I}_{p+1} \end{array}
\end{array}
\right]
\]
with $\tilde{\mathbf{q}} = (\tilde{q}_{1}(\vecx), \ldots,
\tilde{q}_{2m-2p-1}(\vecx))$. Its kernel defines the tangent space to
$\incidencereg({H}, \u, p)\cap O_N$. Let $\w=(\w_1, \ldots, \w_n) \in
\CC^n$ be a row vector; we denote by $\pi_\w$ the projection
$\pi_\w(\vecx,\vecy) = \w_1\X_1 + \cdots + \w_n\X_n$.
Given a row vector $\v \in \CC^{2m-p+1}$, we denote by ${\sf wlagrange}(\tilde{\mathbf{f}},
\v)$ the following polynomial system
\begin{equation} \label{local-lag}
\tilde{\mathbf{f}}, \,\,\,\, (\tilde{\mathbf{g}}, \tilde{\mathbf{h}}) = [\Z_1, \ldots, \Z_{2m-p}, \Z_{2m-p+1}]
\left[
\begin{array}{c}
\jac \tilde{\mathbf{f}} \\
\begin{array}{cc}
\w & 0
\end{array}
\end{array}
\right], \,\,\,\,
\v'\vecz-1.
\end{equation}
For all $0 \leq p \leq r$, this polynomial system contains $n+2m+2$
polynomials and $n+2m+2$ variables. We denote by ${\sf L}_p(\tilde{\mathbf{f}}, \v,
\w)$ the set of its solutions whose projection on the $(\vecx, \vecy)$-space
lies in $O_N$.
Finally, we denote by ${\sf wlagrange}({\mathbf{f}}, \v)$ the polynomial
system obtained when replacing $\tilde{\mathbf{f}}$ above with $\mathbf{f}=\mathbf{f}({H}, \u,
p)$. Similarly, its solution set is denoted by ${\sf L}_p({\mathbf{f}}, \v,
\w)$.
\subsection{Intermediate result} \label{ssec:dimlag:intlemma}
\begin{lemma} \label{lemma:intermediate}
Let ${\mathscr{H}} \subset \CC^{(2m-r)(n+1)}$ be the non-empty Zariski open set
defined by Proposition \ref{prop:regularity}, ${H} \in {\mathscr{H}}$
and $0 \leq p \leq r$.
There exist non-empty Zariski open sets ${\mathscr{V}} \subset \CC^{2m-p}$ and
${\mathscr{W}} \subset \CC^n$ such that if $\v \in {\mathscr{V}}$ and $\w \in
{\mathscr{W}}$, the following holds:
\begin{itemize}
\item[(a)] the set $\mathcal{L}_p(\mathbf{f}, \v, \w)=\mathcal{L}(\mathbf{f}, \v, \w) \cap
\{(\vecx,\vecy,\vecz) \mid {{\rm rank}}\,\tilde{H}(\vecx)=p\}$ is finite and the
Jacobian matrix of ${\sf wlagrange}({\mathbf{f}}, \v)$ has maximal rank at
any point of $\mathcal{L}_p(\mathbf{f}, \v, \w)$;
\item[(b)] the projection of $\mathcal{L}_p(\mathbf{f}, \v, \w)$ in the
$(\vecx,\vecy)$-space contains the critical points of the restriction of
$\pi_\w$ to $\incidencereg({H}, \u, p)$.
\end{itemize}
\end{lemma}
\proof
We start with Assertion (a).
The statement to prove holds over $\incidencereg({H}, \u, p)$; hence it
is enough to prove it on any open set at which one $p\times p$ minor
of $\tilde {H}$ is non-zero. Hence, we assume that the determinant of
the upper left $p\times p$ submatrix $N$ of $\tilde {H}$ is non-zero;
$O_N\subset \CC^{n+p+1}$ is the open set defined by $\det\, N \neq 0$,
and we reuse the notation introduced in this section. We prove
that there exist non-empty Zariski open sets ${\mathscr{V}}'_N\subset \CC^{2m-p}$
and ${\mathscr{W}}_N \subset \CC^{n}$ such that for $\v \in {\mathscr{V}}'_N$ and $\w \in
{\mathscr{W}}_N$, $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$ is finite and that the Jacobian matrix
associated to ${\sf wlagrange}(\tilde{\mathbf{f}}, \v)$ has maximal rank at any
point of $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$. The Lemma follows straightforwardly
by defining ${\mathscr{V}}'$ (resp. ${\mathscr{W}}$) as the intersection of ${\mathscr{V}}'_N$
(resp. ${\mathscr{W}}_N$) where $N$ varies in the set of $p \times p$ minors
of $\tilde{H}(\vecx)$.
Equations $\tilde{\mathbf{h}}$ yield $\Z_{j}=0$ for $j=2m-2p, \ldots, 2m-p$,
and these equations can be eliminated, together with the corresponding
$\vecz$-variables, from the Lagrange system ${\sf wlagrange}(\tilde{\mathbf{f}},
\v)$. The remaining $\vecz$-variables are $\Z_1, \ldots, \Z_{2m-2p-1}, \Z_{2m-p+1}$;
we denote by $\Omega \subset \CC^{2m-2p}$ the Zariski open set where they do not
vanish simultaneously.
Now, consider the map
\[
\begin{array}{lrcc}
q : & O_N \times \Omega \times \CC^{n} & \longrightarrow & \CC^{n+2m-p} \\
& (\vecx,\vecy,\vecz,\w) & \longmapsto & (\tilde{\mathbf{f}}, \tilde{\mathbf{g}})
\end{array}
\]
and, for $\w \in \CC^n$, its section map $q_{\w}(\vecx,\vecy,\vecz) = q(\vecx,\vecy,\vecz,\w)$.
We consider $\tilde{\v} \in \CC^{2m-2p}$ and we denote by $\tilde{\vecz}$
the remaining $\vecz$-variables, as above. Hence we define
\[
\begin{array}{lrcc}
Q : & O_N \times \Omega \times \CC^{n} \times \CC^{2m-2p} & \longrightarrow & \CC^{n+2m-p+1} \\
& (\vecx, \vecy, \vecz, \w, \tilde{\v}) & \longmapsto & (\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\v}'\vecz-1)
\end{array}
\]
and its section map $Q_{\w,\tilde{\v}}(\vecx,\vecy,\vecz) = Q(\vecx,\vecy,\vecz,\w,\tilde{\v})$.
We claim that $\mathbf{0} \in \CC^{n+2m-p}$ (resp. $\mathbf{0} \in \CC^{n+2m-p+1}$) is a
regular value for $q$ (resp. $Q$). Hence we deduce, by Thom's Weak Transversality
Theorem, that there exist non-empty Zariski open sets ${\mathscr{W}}_N \subset \CC^n$ and
$\tilde{{\mathscr{V}}}_N \subset \CC^{2m-2p}$ such that if $\w \in {\mathscr{W}}_N$ and $\tilde{\v} \in
\tilde{{\mathscr{V}}}_N$, then $\mathbf{0}$ is a regular value for $q_{\w}$ and $Q_{\w,\tilde{\v}}$.
We prove now this claim. Recall that since $H \in {\mathscr{H}}$, the Jacobian matrix
$\jac_{\vecx,\vecy} \tilde{\mathbf{f}}$ has maximal rank at any point $(\vecx,\vecy) \in \zeroset{\tilde{\mathbf{f}}}$.
Let $(\vecx,\vecy,\vecz,\w) \in q^{-1}(\bf0)$ (resp. $(\vecx, \vecy, \vecz, \w, \tilde{\v}) \in Q^{-1}(\bf0)$).
Hence $(\vecx,\vecy) \in \zeroset{\tilde{\mathbf{f}}}$. We isolate the square submatrix of
$\jac q (\vecx,\vecy,\vecz,\w)$ obtained by selecting all its rows and
\begin{itemize}
\item the columns corresponding to derivatives of $\vecx, \vecy$ yielding a
non-singular submatrix of $\jac_{\vecx,\vecy} \tilde{\mathbf{f}}(\vecx,\vecy)$;
\item the columns corresponding to the derivatives w.r.t. $\w_1,
\ldots, \w_n$; these yield a block of zeros in the rows corresponding
to $\tilde{\mathbf{f}}$ and the block ${\rm I}_n$ in the rows corresponding to
$\tilde{\mathbf{g}}$.
\end{itemize}
For the map $Q$, we consider the same blocks as above. Moreover, since
$(\vecx,\vecy,\vecz,\w,\tilde{\v}) \in Q^{-1}(\bf0)$ verifies $\tilde{\v}'\vecz-1=0$,
there exists $\ell$ such that $\z_\ell \neq 0$. Hence, we add the derivative
of the polynomial $\tilde{\v}'\vecz-1$ w.r.t. $\tilde{\v}_\ell$, which
is $\z_\ell \neq 0$. The claim is proved.
Note that $q^{-1}_\w(\mathbf{0})$ is defined by $n+2m-p$ polynomials
involving $n+2m-p+1$ variables. We deduce that for $\w \in {\mathscr{W}}_N$,
$q_\w^{-1}(\mathbf{0})$
is either empty or it is equidimensional and has dimension $1$. Using
the homogeneity in the $\vecz$-variables and the Theorem on the Dimension of
Fibers \cite[Sect. 6.3, Theorem 7]{Shafarevich77}, we deduce that the projection on the $(\vecx, \vecy)$-space of
$q_\w^{-1}(\mathbf{0})$ has dimension $\leq 0$.
We also deduce that for $\w \in {\mathscr{W}}_N$ and $\tilde{\v} \in \tilde{{\mathscr{V}}}_N$,
$Q_{\w,\tilde{\v}}^{-1}(\bf0)$ is either empty or finite.
Hence, the points of $Q^{-1}_{\w, \tilde{\v}}(\mathbf{0})$ are in bijection
with those of $\mathcal{L}(\tilde{\mathbf{f}}, \v, \w)$, obtained by forgetting the zero
coordinates corresponding to $\Z_j=0$.
We define ${\mathscr{V}}'_N = \tilde{{\mathscr{V}}}_N \times \CC^{p} \subset \CC^{2m-p}$.
We deduce straightforwardly that for $\v \in {\mathscr{V}}'_N$ and $\w \in {\mathscr{W}}_N$,
the Jacobian matrix of ${\sf wlagrange}(\tilde{\mathbf{f}}, \v)$ has
maximal rank at any point of $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$. By the Jacobian
criterion, this also implies that the set $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$ is
finite as requested.
We prove now Assertion (b).
Let ${\mathscr{W}} \subset \CC^n$ and ${\mathscr{V}}' \subset \CC^{2m-p}$ be the non-empty
Zariski open sets defined in the proof of Assertion (a). For $\w \in {\mathscr{W}}$
and $\v \in {\mathscr{V}}'$, the projection of $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$ on the
$(\vecx,\vecy)-$space is finite.
Since $H \in {\mathscr{H}}$, $\incidencereg({H}, \u, p)$ is smooth and
equidimensional.
Since we work on $\incidencereg({H}, \u, p)$, one of the $p \times p$
minors of $\tilde{H}(\vecx)$ is non-zero. Hence, we may work in
$O_N \cap \incidencereg({H}, \u, p)$, where $O_N \subset \CC^{n+p+1}$
has been defined in the proof of Assertion (a). Remark that
\[
{\rm crit}\,(\pi_\w, \incidencereg({H}, \u, p)) \, = \, \bigcup_N \, {\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))
\]
where $N$ runs over the set of $p \times p$ minors of $\tilde{H}(\vecx)$.
We prove below that there exists a non-empty Zariski open set
${\mathscr{V}} \subset \CC^{2m-p}$ such that if $\v \in {\mathscr{V}}$, for all $N$
and for $\w \in {\mathscr{W}}$, the set ${\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$
is finite and contained in the projection of $\mathcal{L}_p(\mathbf{f}, \v, \w)$. This
straightforwardly implies that the same holds for ${\rm crit}\,(\pi_\w, \incidencereg({H}, \u, p))$.
Suppose w.l.o.g. that $N$ is the upper left $p \times p$ minor of $\tilde{H}(\vecx)$.
We use the notation $\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}}$ as above. Hence,
the set ${\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$ is the image by the
projection $\pi_{\vecx,\vecy}$ over the $(\vecx,\vecy)-$space, of the
constructible set defined by $\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}}$ and $\vecz
\neq 0$. We previously proved that, if $\w \in {\mathscr{W}}_N$, $q^{-1}(\bf0)$ is either empty
or equidimensional of dimension $1$. Hence, the constructible set defined by
$\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}}$ and $\vecz \neq 0$, which is isomorphic
to $q^{-1}(\bf0)$, is either empty or equidimensional of dimension $1$.
Moreover, for any $(\vecx,\vecy) \in {\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$,
$\pi_{\vecx,\vecy}^{-1}(\vecx,\vecy)$ has dimension 1, by the homogeneity
of polynomials w.r.t. variables $\vecz$. By the Theorem on the Dimension
of Fibers \cite[Sect. 6.3, Theorem 7]{Shafarevich77}, we deduce that
${\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$ is finite.
For $(\vecx,\vecy) \in {\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$, let
${\mathscr{V}}_{(\vecx,\vecy),N} \subset \CC^{2m-p}$ be the non-empty Zariski open
set such that if $\v \in {\mathscr{V}}_{(\vecx,\vecy),N}$ the hyperplane
$\v'\vecz-1=0$ intersects transversely $\pi_{\vecx,\vecy}^{-1}(\vecx,\vecy)$.
Recall that ${\mathscr{V}}'_N \subset \CC^{2m-p}$ has been defined in the proof of
Assertion (a). Define
$$
{\mathscr{V}}_N = {\mathscr{V}}'_N \cap \bigcap_{(\vecx,\vecy)} {\mathscr{V}}_{(\vecx,\vecy),N}
$$
and ${\mathscr{V}} = \bigcap_N {\mathscr{V}}_N$. This concludes the proof, since ${\mathscr{V}}$
is a finite intersection of non-empty Zariski open sets.
\hfill$\square$
\subsection{Conclusion} \label{ssec:dimlag:proof}
We denote by ${\mathscr{M}}_1 \subset \GL(n,\CC)$ the set of non-singular matrices
$M$ such that the first row $\w$ of $M^{-1}$ lies in the set ${\mathscr{W}}$
given in Lemma \ref{lemma:intermediate}: this set is non-empty and Zariski
open since the entries of $M^{-1}$ are rational functions of the entries of $M$.
Let ${\mathscr{V}} \subset \CC^{2m-p}$ be the non-empty Zariski open set given by Lemma
\ref{lemma:intermediate} and let $\v \in {\mathscr{V}}$.
Let $\e_1$ be the row vector $(1, 0, \ldots, 0) \in \QQ^n$ and for all $M \in \GL(n,\CC)$, let
\[
\tilde{M} =
\left[
\begin{array}{cc}
M & {0} \\
{0} & {\rm I}_m \\
\end{array}
\right].
\]
Remark that for any $M \in {\mathscr{M}}_1$ the following identity holds:
$$
\left[\begin{array}{c}
\jac \mathbf{f}({H}^M, \u, p) \\
\e_1 \quad 0\; \cdots \; 0\\
\end{array}\right] = \left[\begin{array}{c}
\jac \mathbf{f}({H}, \u, p) \\
\w \quad 0 \; \cdots \; 0\\
\end{array}\right]\tilde{M}.
$$
We conclude that the set of solutions of the system
\begin{equation}
\label{eq:dim:1}
\left(\mathbf{f}({H}, \u, p), \quad
\vecz'
\left[\begin{array}{c}
\jac\mathbf{f}({H}, \u, p) \\
\w \quad 0\; \cdots \; 0\\
\end{array}\right],
\quad \v'\vecz-1 \right)
\end{equation}
is the image by the map $(\vecx,\vecy) \mapsto \tilde{M}^{-1}(\vecx,\vecy)$
of the set ${S}$ of solutions of the system
\begin{equation}
\label{eq:dim:2}
\left(\mathbf{f}({H}, \u, p), \quad
\vecz'\left[\begin{array}{c}
\jac\mathbf{f}({H}, \u, p) \\
\e_1 \quad 0\; \cdots \; 0\\
\end{array}\right], \quad \v'\vecz-1 \right).
\end{equation}
Now, let $\varphi$ be the projection that eliminates the last
coordinate $\z_{2m-p+1}$. Remark that $\varphi(S)
= {\sf L}_p(\mathbf{f}^M, \v, \e_1)$.
Now, applying Lemma \ref{lemma:intermediate} ends the proof.
\hfill$\square$
\section{Proof of Proposition \ref{prop:closedness}}\label{sec:closedness}
The proof of Proposition \ref{prop:closedness} relies on results of
\cite[Section 5]{HNS2014}
and of \cite{SaSc03}. We use the
same notation as in \cite[Section 5]{HNS2014}, and we recall them below.
\paragraph*{Notations} For ${\mathcal{Z}} \subset \CC^n$ of dimension $d$, we denote by
$\Omega_i({\mathcal{Z}})$ its $i-$equidimensional component, $i=0, \ldots, d$. We denote by
${\mathscr{S}}({\mathcal{Z}})$ the union of:
\begin{itemize}
\item $\Omega_0({\mathcal{Z}}) \cup \cdots \cup \Omega_{d-1}({\mathcal{Z}})$
\item the set ${\rm sing}\,(\Omega_d({\mathcal{Z}}))$ of singular points of $\Omega_d({\mathcal{Z}})$.
\end{itemize}
Let $\pi_i$ be the map $(\x_1, \ldots, \x_n) \to (\x_1, \ldots, \x_i)$.
We denote by ${\mathscr{C}}(\pi_i, {\mathcal{Z}})$ the Zariski closure of the union of the
following sets:
\begin{itemize}
\item $\Omega_0({\mathcal{Z}}) \cup \cdots \cup \Omega_{i-1}({\mathcal{Z}})$;
\item the union for $r \geq i$ of the sets ${\rm crit}\,(\pi_i, {\rm reg}\,(\Omega_r({\mathcal{Z}})))$.
\end{itemize}
For $M \in \GL(n,\CC)$ and ${\mathcal{Z}}$ as above, we define the collection
of algebraic sets $\{\mathcal{O}_i({\mathcal{Z}}^M)\}_{0 \leq i \leq d}$ as follows:
\begin{itemize}
\item ${\mathcal O}_d({\mathcal{Z}}^M)={\mathcal{Z}}^M$;
\item ${\mathcal O}_i({\mathcal{Z}}^M)={\mathscr{S}}({\mathcal O}_{i+1}({\mathcal{Z}}^M))
\cup {\mathscr{C}}(\pi_{i+1}, {\mathcal O}_{i+1}({\mathcal{Z}}^M))
\cup {\mathscr{C}}(\pi_{i+1},{\mathcal{Z}}^M)$ for $i=0, \ldots, d-1$.
\end{itemize}
We finally recall the two following properties:
{\it Property ${\mathsf{P}}({\mathcal{Z}})$.} Let ${{\mathcal{Z}}} \subset \CC^n$ be
an algebraic set of dimension $d$. We say that $M \in \GL(n,\CC)$ satisfies
${\mathsf{P}}({\mathcal{Z}})$ when for all $i = 0, 1, \ldots, d$:
\begin{enumerate}
\item
${\mathcal O}_i({{\mathcal{Z}}}^M)$ has dimension $\leq i$ and
\item
${\mathcal O}_i({{\mathcal{Z}}}^M)$ is in Noether position with respect to $\x_1, \ldots, \x_i$.
\end{enumerate}
{\it Property ${\sf Q}$.} We say that an algebraic set ${\mathcal{Z}}$ of dimension $d$
satisfies ${\sf Q}_i({\mathcal{Z}})$ (for a given $1 \leq i \leq d$) if for any connected
component ${{C}}$ of ${{\mathcal{Z}}}\cap \RR^n$ the boundary of $\pi_i({{C}})$ is contained
in $\pi_i({\mathcal O}_{i-1}({\mathcal{Z}}) \cap {{{C}}})$. We say that ${\mathcal{Z}}$ satisfies
${\mathsf{Q}}$ if it satisfies ${\mathsf{Q}}_1, \ldots, {\mathsf{Q}}_d$.
Let ${\mathcal{Z}}\subset \CC^n$ be an algebraic set of dimension $d$. By
\cite[Proposition 15]{HNS2014}, there exists a non-empty Zariski open
set $\mathscr{M}\subset \GL(n,\CC)$ such that for $M\in
\mathscr{M}\cap \GL(n,\QQ)$ Property ${\sf P}({\mathcal{Z}})$ holds. Moreover,
if $M \in \GL(n, \QQ)$ satisfies ${\sf P}({\mathcal{Z}})$, then ${\sf
Q}_i({\mathcal{Z}}^M)$ holds for $i=1, \ldots, d$ \cite[Proposition
16]{HNS2014}.
We use these results in the following proof of Proposition \ref{prop:closedness}.
\proof We start with assertion (a). Let ${\mathscr{M}}_2 \subset
\GL(n,\CC)$ be the non-em\-pty Zariski open set of \cite[Proposition
17]{HNS2014} for ${\mathcal{Z}} = {\mathcal{H}}_p$: for $M \in {\mathscr{M}}_2$, $M$
satisfies ${{\mathsf{P}}}({\mathcal{H}}_p)$. Remark that the connected components of
${\mathcal{H}}_p \cap \RR^n$ are in bijection with those of ${\mathcal{H}}^M_p \cap \RR^n$ (given by
${{C}} \leftrightarrow {{C}}^M$). Let ${{C}}^M$ be a connected component of
${\mathcal{H}}^M_p \cap \RR^n$. Let
$\pi_1$ be the projection on the first variable $\pi_1 \colon
\RR^{n} \to \RR$, and consider its restriction to ${\mathcal{H}}^M_p \cap
\RR^n$. Since $M \in {\mathscr{M}}_2$, by \cite[Proposition 16]{HNS2014}
the boundary of $\pi_1({{C}}^M)$ is included in $\pi_1({\mathcal
O}_0({\mathcal{H}}^M_p) \cap {{C}}^M)$ and in particular in
$\pi_1({{C}}^M)$. Hence $\pi_1({{C}}^M)$ is closed.
We prove now assertion (b).
Let $M \in {\mathscr{M}}_2$, ${{C}}$ a connected component of ${\mathcal{H}}_p \cap \RR^n$ and
${\alpha} \in \RR$ be in the boundary of $\pi_1({{C}}^M)$. By \cite[Lemma 19]{HNS2014}
$\pi_1^{-1}({\alpha}) \cap {{C}}^M$ is finite.
We claim that, up to genericity
assumptions on $\u \in \QQ^{p+1}$, for $\vecx \in \pi_1^{-1}({\alpha}) \cap {{C}}^M$,
the linear system $\mathbf{f}({H}^M, \u, p) = 0$, seen as a linear system in $\vecy$, has at least one solution.
We deduce that
there exists a non-empty Zariski open set ${\mathscr{U}}_{{{C}},\vecx} \subset
\CC^{p+1}$ such that if $\u \in {\mathscr{U}}_{{{C}},\vecx} \cap \QQ^{p+1}$, there exists $\vecy \in
\QQ^{p+1}$ such that $(\vecx,\vecy) \in {\incidence}({H}^M, \u, p)$. One concludes by taking
\[
{\mathscr{U}} = \bigcap_{{{C}} \subset {\mathcal{H}}_p \cap \RR^n} \bigcap_{\vecx \in \pi_1^{-1}({\alpha}) \cap {{C}}^M} {\mathscr{U}}_{{{C}},\vecx},
\]
which is non-empty and Zariski open since:
\begin{itemize}
\item the collection $\{{{C}} \subset {\mathcal{H}}_p \cap \RR^n \, \text{connected component}\}$
is finite;
\item the set $\pi_1^{-1}({\alpha}) \cap {{C}}^M$ is finite.
\end{itemize}
It remains to prove the claim we made. For $\vecx \in \pi_1^{-1}({\alpha}) \cap {{C}}^M$, the matrix
$\tilde{{H}}(\vecx)$ is rank defective, and let $p' \leq p$ be its rank.
The linear system
\[
\left[ \begin{array}{c} \tilde{{H}}(\vecx) \\ \u \end{array} \right] \cdot \vecy =
\left[ \begin{array}{c} {\bf 0} \\ 1 \end{array} \right]
\]
has a solution if and only if
\[
{\rm rank} \left[ \begin{array}{c} \tilde{{H}}(\vecx) \\ \u \end{array} \right] =
{\rm rank} \left[ \begin{array}{cc} \tilde{{H}}(\vecx) & {\bf 0} \\ \u & 1 \end{array} \right],
\]
and the rank of the second matrix is $p'+1$. Defining ${\mathscr{U}}_{{{C}},\vecx}
\subset \CC^{p+1}$ as the complement in $\CC^{p+1}$ of the $p'$-dimensional
linear space spanned by the rows of $\tilde{{H}}(\vecx)$ proves the claim and
concludes the proof.\hfill$\square$
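The rank criterion used in this claim can be illustrated on a small instance. The sketch below works over the rationals, with a hypothetical rank-deficient matrix playing the role of $\tilde{H}(\vecx)$: the system $[\tilde{H}(\vecx);\u]\,\vecy=[{\bf 0};1]$ is solvable exactly when $\u$ lies outside the row span of $\tilde{H}(\vecx)$.

```python
from fractions import Fraction as F

def rank(M):
    """Rank over the rationals by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def solvable(H, u):
    """[H; u] y = [0; 1] has a solution iff the two ranks below agree."""
    A = H + [u]                                      # stacked matrix [H; u]
    B = [row + [F(0)] for row in H] + [u + [F(1)]]   # augmented with (0, 1)'
    return rank(A) == rank(B)

# Hypothetical rank-1 matrix in the role of the rank-defective H~(x).
H = [[F(1), F(2)], [F(2), F(4)]]
assert not solvable(H, [F(3), F(6)])   # u in the row span: no solution
assert solvable(H, [F(1), F(0)])       # u outside the row span: solvable
```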
\section{Experiments} \label{sec:exper}
The algorithm {\sf LowRankHankel} has been implemented under
\textsc{Ma\-ple}. We use the \textsc{FGb} \cite{faugere2010fgb}
library implemented by J.-C. Faugère for solving
zero-dimensional polynomial systems using Gr\"obner bases. In
particular, we used the new implementation of \cite{FM11} for
computing rational parametrizations. Our implementation checks the
genericity assumptions on the input.
We test the algorithm with input $m \times m$ linear Hankel matrices
${H}(\vecx)={H}_0+\X_1{H}_1+\ldots+\X_n{H}_n$, where the entries of ${H}_0,
\ldots, {H}_n$ are random rational numbers, and an integer $0 \leq r
\leq m-1$. None of the implementations of Cylindrical Algebraic
Decomposition solved our examples involving more than $3$ variables.
Also, on all our examples, we found that the Lagrange systems define
finite algebraic sets.
We compare the practical behavior of ${\sf LowRankHankel}$ with the
performance of the library {\sc RAGlib}, implemented by the third
author (see \cite{raglib}). Its function ${\sf PointsPerComponents}$,
with input the list of $(r+1)-$minors of ${H}(\vecx)$, returns one
point per connected component of the real counterpart of the algebraic
set ${\mathcal{H}}_r$, that is, it solves the problem presented in this
paper. It also uses critical point methods. The symbol $\infty$ means
that no result has been obtained after $24$ hours. The symbol {\sf matbig}
means that the standard limitation in \textsc{FGb} on the size of
matrices for Gr\"obner bases computations has been reached.
We report on timings (given in seconds) of the two implementations in
the next table. The column ${\sf New}$ corresponds to timings of ${\sf
LowRankHankel}$. Both computations have been done on an Intel(R)
Xeon(R) CPU $E7540$ $@2.00 {\rm GHz}$ 256 Gb of RAM.
We remark that \textsc{RAGlib} is competitive for problems of small
size (e.g. $m=3$) but when the size increases ${\sf LowRankHankel}$
performs much better, especially when the determinantal variety does
not have co-dimension $1$. It can tackle problems that are out of reach of
\textsc{RAGlib}. Note that for fixed $r$, the algorithm seems to have a
behaviour that is polynomial in $nm$ (this is particularly visible
when $m$ is fixed, e.g. to $5$).
{\tiny
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
$(m,r,n)$ & {\sf RAGlib} & {\sf New} & {\sf TotalDeg} & {\sf MaxDeg}\\
\hline
\hline
$(3,2,2)$ & 0.3 & 5 & 9 & 6\\
$(3,2,3)$ & 0.6 & 10 & 21 & 12\\
$(3,2,4)$ & 2 & 13 & 33 & 12\\
$(3,2,5)$ & 7 & 20 & 39 & 12\\
$(3,2,6)$ & 13 & 21 & 39 & 12\\
$(3,2,7)$ & 20 & 21 & 39 & 12\\
$(3,2,8)$ & 53 & 21 & 39 & 12\\
\hline
\hline
$(4,2,3)$ & 2 & 2.5 & 10 & 10\\
$(4,2,4)$ & 43 & 6.5 & 40 & 30\\
$(4,2,5)$ & 56575 & 18 & 88 & 48\\
$(4,2,6)$ & $\infty$ & 35 & 128 & 48\\
$(4,2,7)$ & $\infty$ & 46 & 143 & 48\\
$(4,2,8)$ & $\infty$ & 74 & 143 & 48\\
\hline
\hline
$(4,3,2)$ & 0.3 & 8 & 16 & 12\\
$(4,3,3)$ & 3 & 11 & 36 & 52\\
$(4,3,4)$ & 54 & 31 & 120 & 68\\
$(4,3,5)$ & 341 & 112 & 204 & 84\\
$(4,3,6)$ & 480 & 215 & 264 & 84\\
$(4,3,7)$ & 528 & 324 & 264 & 84\\
$(4,3,8)$ & 2638 & 375 & 264 & 84\\
\hline
\hline
$(5,2,5)$ & 25 & 4 & 21 & 21 \\
$(5,2,6)$ & 31176 & 21 & 91 & 70\\
$(5,2,7)$ & $\infty$ & 135 & 199 & 108\\
$(5,2,8)$ & $\infty$ & 642 & 283 & 108\\
$(5,2,9)$ & $\infty$ & 950 & 311 & 108\\
$(5,2,10)$ & $\infty$ & 1106 & 311 & 108\\
\hline
\hline
$(5,3,3)$ & 2 & 2 & 20 & 20\\
$(5,3,4)$ & 202 & 18 & 110 & 90\\
$(5,3,5)$ & $\infty$ & 583 & 338 &228\\
$(5,3,6)$ & $\infty$ & 6544 & 698 & 360\\
$(5,3,7)$ & $\infty$ & 28081 & 1058 & 360\\
$(5,3,8)$ & $\infty$ & $\infty$ & - & - \\
\hline
\end{tabular}
\caption{Timings and degrees}
\label{tab:time}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
$(m,r,n)$ & {\sf RAGlib} & {\sf New} & {\sf TotalDeg} & {\sf MaxDeg}\\
\hline
\hline
$(5,4,2)$ & 1 & 5 & 25 & 20\\
$(5,4,3)$ & 48 & 30 & 105 & 80\\
$(5,4,4)$ & 8713 & 885 & 325 & 220\\
$(5,4,5)$ & $\infty$ & 15537 & 755 & 430\\
$(5,4,6)$ & $\infty$ & 77962 & 1335 & 580\\
\hline
\hline
$(6,2,7)$ & $\infty$ & 6 & 36 & 36 \\
$(6,2,8)$ & $\infty$ & matbig & - & - \\
\hline
\hline
$(6,3,5)$ & $\infty$ & 10 & 56 & 56 \\
$(6,3,6)$ & $\infty$ & 809 & 336 & 280 \\
$(6,3,7)$ & $\infty$ & 49684 & 1032 & 696 \\
$(6,3,8)$ & $\infty$ & matbig & - & - \\
\hline
\hline
$(6,4,3)$ & 3 & 5 & 35 & 35 \\
$(6,4,4)$ & $\infty$ & 269 & 245 & 210 \\
$(6,4,5)$ & $\infty$ & 30660 & 973 & 728 \\
$(6,4,6)$ & $\infty$ & $\infty$ & - & - \\
\hline
\hline
$(6,5,2)$ & 1 & 9 &36 & 30 \\
$(6,5,3)$ & 915 & 356 & 186 & 150 \\
$(6,5,4)$ & $\infty$ & 20310 & 726 & 540 \\
$(6,5,5)$ & $\infty$ & $\infty$ & - & - \\
\hline
\end{tabular}
\caption{Timings and degrees (continued)}
\label{tab:time2}
\end{table}
}
Finally, we report in column ${\sf TotalDeg}$ the degree of the
rational parametrization obtained as output of the algorithm, that is
the number of its complex solutions. We observe that this value is
definitely constant when $m,r$ are fixed and $n$ grows, as for the
maximum degree (column ${\sf MaxDeg}$) appearing during the recursive
calls.
The same holds for the multilinear bound given in Section
\ref{ssec:algo:complexity} for the total number of complex solutions.
\newpage
\section{Introduction}
\label{intro}
Any natural language (Portuguese, English, German, etc.) has ambiguities. Due to ambiguity, the same word surface form can have two or more different meanings. For example, the Portuguese word \textit{banco} can be used to express the financial institution but also the place where we can rest our legs (a seat). Lexical ambiguities, which occur when a word has more than one possible meaning, directly impact tasks at the semantic level and solving them automatically is still a challenge in natural language processing (NLP) applications. One way to do this is through word embeddings.
Word embeddings are numerical vectors which can represent words or concepts in a low-dimensional continuous space, reducing the inherent sparsity of traditional vector-space representations \cite{salton1975vector}. These vectors are able to capture useful syntactic and semantic information, such as regularities in natural language. They are based on the distributional hypothesis, which establishes that the meaning of a word is given by its context of occurrence \cite{bruni2014multimodal}. The ability of embeddings to capture knowledge has been exploited in several tasks, such as Machine Translation \cite{mikolov2013exploiting}, Sentiment Analysis \cite{socher2013recursive}, Word Sense Disambiguation \cite{chen2014unified} and Language Understanding \cite{mesnil2013investigation}.
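As a toy illustration of this distributional idea, closeness between embedding vectors is typically measured by cosine similarity. The vectors below are hand-made (hypothetical values, not trained embeddings) and merely show how the comparison works.

```python
import math

# Hand-made 3-dimensional "embeddings" (hypothetical values): in a trained
# model, related words such as "rei" (king) and "rainha" (queen) are
# expected to be closer to each other than to an unrelated word.
emb = {
    "rei":    [0.9, 0.1, 0.3],
    "rainha": [0.8, 0.2, 0.35],
    "banco":  [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

assert cosine(emb["rei"], emb["rainha"]) > cosine(emb["rei"], emb["banco"])
```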
Although very useful in many applications, the word embeddings (word vectors), like those generated by Word2Vec \cite{mikolov2013Aefficient}, GloVe \cite{pennington2014glove} and FastText \cite{bojanowski2016enriching} have an important limitation: the Meaning Conflation Deficiency, which is the inability to discriminate among different meanings of a word. In any natural language, there are words with only one meaning (monosemous) and words with multiple meanings (ambiguous) \cite{CamachoCollados2018FromWT}. In word embeddings, each word is associated with only one vector representation ignoring the fact that ambiguous words can assume different meanings for which different vectors should be generated. Thus, there is a loss of information by representing a lexical ambiguity in a single vector, since it will only contain the most commonly used meaning for the word (or that which occurs in the corpus from which the word vectors were generated).
Several works \cite{pina2014simple,neelakantan2015efficient,wu2015sense,liu2015multi,huang2012improving,reisinger2010multi,iacobacci2015sensembed} have investigated the representation of word senses instead of word occurrences in what has been called \textit{sense embeddings} (sense vectors).
In this paper, we present the first experiments carried out to evaluate sense vectors for Portuguese. In section \ref{sec:relatedwork} we describe some of the approaches for generating sense vectors proposed in the literature. The approaches investigated in this paper are described in section \ref{sec:sswe}. The experiments carried out for evaluating sense vectors for Portuguese are described in section~\ref{sec:experiment}. Section~\ref{sec:conclusion} finishes this paper with some conclusions and proposals for future work.
\section{Related Work}
\label{sec:relatedwork}
\cite{shutze1998discrimination} was one of the first works to identify the meaning conflation deficiency of word vectors and to propose the induction of meanings through the clustering of contexts in which an ambiguous word occurs. Then, many other works followed these ideas.
One of the first works using neural networks to investigate the generation of sense vectors was \cite{reisinger2010multi}. The approach proposed there is divided into two phases: pre-processing and training. In the pre-processing, firstly, the context of each target word is defined as the words to its left and to its right. Then, each possible context is represented by the weighted average of the vectors of the words that compose it. These context vectors are clustered and each centroid is selected to represent the sense of its cluster. Finally, each word of the corpus is labeled with the cluster whose meaning is closest to its context. After this pre-processing phase, a neural network is trained on the labeled corpus, generating the sense vectors. The model was trained on two corpora, an English Wikipedia dump and the third English edition of the Gigaword corpus. The authors obtained a Spearman correlation of around 62.5\% on WordSim-353 \cite{finkelstein2001placing}\footnote{WordSim-353 is a dataset with 353 pairs of English words for which similarity scores were set by humans on a scale of 1 to 10.} for both the Wikipedia and Gigaword corpora.
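A minimal sketch of this multi-prototype idea follows. All sentences, vectors and centroids are hypothetical (the centroids stand in for the output of a clustering step such as k-means): each occurrence of the ambiguous word \textit{banco} is represented by its averaged context vector and labeled with the nearest centroid.

```python
# Each key is a context of the ambiguous word "banco"; each value stands
# for the (already averaged) context vector of that occurrence.
contexts = {
    "conta no banco aberta": [0.9, 0.1],   # financial sense
    "dinheiro banco juros":  [0.8, 0.2],   # financial sense
    "sentar banco praca":    [0.1, 0.9],   # seat sense
}
# Centroids as a clustering step (e.g. k-means with k = 2) might return them.
centroids = [[0.85, 0.15], [0.1, 0.9]]

def nearest(v, cs):
    """Index of the centroid closest to v (squared Euclidean distance)."""
    return min(range(len(cs)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, cs[i])))

labels = {sent: nearest(vec, centroids) for sent, vec in contexts.items()}
assert labels["conta no banco aberta"] == labels["dinheiro banco juros"]
assert labels["conta no banco aberta"] != labels["sentar banco praca"]
```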
Another approach for generating sense vectors was \cite{huang2012improving}, which extends the \cite{reisinger2010multi}'s approach by incorporating a global context into the generation of word vectors. According to them, aggregating information from a larger context improves the quality of vector representations of ambiguous words that have more than one possible local context. To provide the vector representation of the global context, the proposed model uses all words in the document in which the target word occurs, incorporating this representation into the local context. The authors trained the model in a Wikipedia dump (from April 2010) in English with 2 million articles and 990 million tokens. The authors obtained a Spearman correlation of 65.7\% in the Stanford's Contextual Word Similarities (SCWS)\footnote{The SCWS is a dataset with 2,003 word pairs in sentential contexts.}, surpassing the baselines.
Based on \cite{huang2012improving}, \cite{neelakantan2015efficient} proposed the generation of sense vectors through an adaptation of the Skip-Gram model of \cite{mikolov2013Aefficient}. In this approach, the identification of the senses occurs together with the training that generates the vectors, making the process efficient and scalable. This approach is one of those chosen to be used in this paper and it is explained in detail in the next section. The authors used the same corpus as \cite{huang2012improving} for training the sense vectors and obtained a Spearman correlation of 67.3\%, also on the SCWS, surpassing the baselines.
\cite{trask2015sense2vec} propose a different approach that uses a tagged corpus rather than a raw corpus for sense vector generation. The authors annotated the corpus with part-of-speech (PoS) tags, which allowed the identification of ambiguous words belonging to different classes. For example, this approach allows distinguishing between the noun \textit{livro} (book) and the verb \textit{livro} (free). After that, they trained a word2vec (CBOW or Skip-Gram) model \cite{mikolov2013Aefficient} on the tagged corpus. The authors did not report results comparing their approach with baselines. In addition to the PoS tags, the authors also tested the ability of the method to disambiguate named entities and sentiments, labeling the corpus with these tags as well before generating the word embeddings. This approach is one of those chosen to be investigated in this paper and it is explained in detail in the next section.
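The preprocessing step of this approach can be sketched as follows (the sentence and tags are illustrative): each token is rewritten as \textit{surface$|$TAG}, so that a standard word2vec trainer sees the two readings of \textit{livro} as distinct vocabulary items.

```python
# Hypothetical PoS-tagged tokens for "eu livro o livro" (I free the book).
tagged = [("eu", "PRON"), ("livro", "VERB"), ("o", "DET"), ("livro", "NOUN")]

# Sense2Vec-style preprocessing: merge surface form and tag into one token.
corpus = ["%s|%s" % (word, tag) for word, tag in tagged]
assert corpus == ["eu|PRON", "livro|VERB", "o|DET", "livro|NOUN"]
# `corpus` would then be fed, sentence by sentence, to a standard
# CBOW or Skip-Gram trainer, which now learns one vector per (word, tag).
```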
More recently, new proposals for language model generation like ELMo \cite{peters2018elmo}, OpenAI GPT \cite{radford2018openai} and BERT \cite{devlin2018bert} have begun to use more complex architectures to model context and capture the meanings of a word. The idea behind these language models is that each layer of the neural network captures a different sense of the input word and generates dynamic vector representations according to each input context. This idea of dynamic embeddings facilitates the use of these representations in downstream tasks. These architectures are complex and require very powerful hardware resources for training. The difference between sense vectors and such language models lies in the architecture and in the way the trained model is used. Sense vectors are features that will be used for specific NLP tasks. The complex architecture of language models, on the other hand, comprises both the neural network that creates the language model and the NLP tasks, which can even share the same hyper-parameters (fine-tuning approach).
\section{Sense embeddings}
\label{sec:sswe}
In this paper, two approaches were used for sense vectors generation: the MSSG \cite{neelakantan2015efficient} and the Sense2Vec \cite{trask2015sense2vec}. Each one is explained in the next sections.
\subsection{Multiple-Sense Skip-Gram (MSSG)}
In \cite{neelakantan2015efficient}, two methods were proposed for generating sense vectors based on the original Skip-Gram model \cite{mikolov2013Aefficient}: MSSG (Multiple-Sense Skip-Gram) and NP-MSSG (Non-Parametric Multiple-Sense Skip-Gram). The main difference between them is that MSSG fixes the number of possible meanings for each word, while NP-MSSG learns this number as part of its training process.
In both methods, the context vector is given by the average of the vectors of the words that compose the context. The context vectors are clustered, and the clusters are associated with the words of the corpus by proximity to their contexts. After predicting the sense, the gradient update is performed on the centroid of the cluster and the training continues. Training stops when vector representations have been generated for all the words.
Different from the original skip-gram, its extensions, MSSG and NP-MSSG, learn multiple vectors for a given word. They were based on works such as \cite{huang2012improving} and \cite{reisinger2010multi}. In the MSSG model, each word $w \in W$ is associated to a global vector $v_g(w)$ and each sense of the word has a sense vector $v_s(w,k)(k=1,2,\cdots,K)$ and a context cluster with centroid $u(w,k)(k=1,2,\cdots,K)$. The $K$ sense vectors and the global vectors are of dimension $d$ and $K$ is a hyperparameter.
Considering the word $w_t$, its context $c_t=\{w_{t-R_t},\cdots,w_{t-1},w_{t+1},\cdots,w_{t+R_t}\}$ and the window size $R_t$, the vector representation of the context is defined as the mean of the global vectors of the words in the context: $v_{context}(c_t)=\frac{1}{2R_t}\sum_{c \in c_t} v_g(c)$. The global vectors of the context words are used instead of their sense vectors to avoid the computational complexity associated with predicting the senses of the words in the context. It is then possible to predict $s_t$, the sense of the word $w_t$, when it appears in the context $c_t$.
The algorithm used for building the clusters is similar to k-means. The centroid of a cluster is the mean of the vector representations of all contexts that belong to this cluster and the cosine similarity is used to measure the similarity.
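The sense-prediction step described above can be sketched as follows. The words, vectors and centroids below are hypothetical toy values for illustration; the real model learns the global vectors and updates the centroids incrementally during training.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def context_vector(context_words, global_vecs):
    # v_context(c_t): mean of the global vectors of the context words.
    dim = len(next(iter(global_vecs.values())))
    acc = [0.0] * dim
    for w in context_words:
        acc = [a + b for a, b in zip(acc, global_vecs[w])]
    return [a / len(context_words) for a in acc]

def predict_sense(context_words, global_vecs, centroids):
    # Pick the sense whose context-cluster centroid is most similar
    # (by cosine) to the current context vector.
    vc = context_vector(context_words, global_vecs)
    return max(range(len(centroids)), key=lambda k: cosine(vc, centroids[k]))

# Toy setup: two context clusters for an ambiguous target word.
global_vecs = {"dinheiro": [1.0, 0.0], "agencia": [0.9, 0.1],
               "praca": [0.0, 1.0], "sentar": [0.1, 0.9]}
centroids = [[1.0, 0.05], [0.05, 1.0]]  # one centroid per sense cluster
print(predict_sense(["dinheiro", "agencia"], global_vecs, centroids))  # 0
print(predict_sense(["praca", "sentar"], global_vecs, centroids))      # 1
```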
In MSSG, the probability ($P$) that the word $c$ is observed in the context of the word $w_t$ ($D=1$), given its sense, and the probability that it is not observed ($D=0$) are obtained by adding $s_t$ (the sense of $w_t$) to the formulas of the original Skip-gram (Equations \ref{eq:likelehood_context} and \ref{eq:likelehood_notcontext}). The objective function ($J$) also considers $(w_t, s_t)$ instead of just $w_t$ (Equation \ref{eq:objective_function}).
\begin{equation}
P(D = 1|v_s(w_t, s_t), v_g(c)) = \frac{1}{1 + e^{-v_s(w_t, s_t)^T v_g(c)}}
\label{eq:likelehood_context}
\end{equation}
\begin{equation}
\label{eq:likelehood_notcontext}
P(D = 0|v_s(w_t, s_t),v_g(c)) = 1 - P(D = 1|v_s(w_t, s_t),v_g(c))
\end{equation}
\begin{equation}
\begin{split}
\label{eq:objective_function}
J = \sum_{(w_t,c_t) \in D_+} \sum_{c \in c_t} \log P(D=1|v_s(w_t, s_t),v_g(c)) + \\
\sum_{(w_t,c'_t) \in D_-} \sum_{c' \in c'_t} \log P(D=0|v_s(w_t, s_t),v_g(c'))
\end{split}
\end{equation}
After predicting the sense of the word $w_t$, MSSG updates the sense vector of $w_t$ ($v_s(w_t,s_t)$), the global vectors of the context words and the global vectors of randomly selected noisy context words. The centroid of the context cluster $s_t$ for the word $w_t$ ($u(w_t,s_t)$) is updated when the context $c_t$ is added to the cluster $s_t$.
In this paper, we chose to work with MSSG, fixing the number of senses for each target word. We did this to allow a fair comparison with the second approach investigated here, which also has a limited number of meanings.
\subsection{Sense2Vec}
\cite{trask2015sense2vec} propose the generation of sense vectors from a corpus annotated with part-of-speech (PoS) tags, making it possible to identify ambiguous words from the number of PoS tags they receive (for example, the noun \textit{livro} (book) in contrast with the verb \textit{livro} (free)).
The authors suggest that annotating the corpus with PoS tags is a low-cost approach to identifying the different contexts of ambiguous words that carry a different PoS tag in each context. This makes it possible to create a meaningful representation for each use. The final step is to train a word2vec model (CBOW or Skip-Gram) \cite{mikolov2013Aefficient} on the tagged corpus, so that instead of predicting a word given its neighboring words, the model predicts a sense given the surrounding senses.
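The preprocessing underlying sense2vec can be sketched as follows. The sentences and tags are illustrative; in practice the corpus is annotated by a real PoS tagger and the resulting tokens are fed to a standard word2vec implementation.

```python
def to_sense_tokens(tagged_sentence):
    # Turn (word, PoS) pairs into single "word|POS" tokens: the input
    # units on which word2vec is trained in the sense2vec setup, so
    # that one vector is learned per (word, tag) pair rather than per
    # word form.
    return [f"{word}|{tag}" for word, tag in tagged_sentence]

# The ambiguous word "livro" becomes two distinct vocabulary entries:
s1 = to_sense_tokens([("eu", "PRON"), ("livro", "V"), ("o", "ART"), ("gato", "N")])
s2 = to_sense_tokens([("o", "ART"), ("livro", "N"), ("azul", "ADJ")])
print(s1)  # ['eu|PRON', 'livro|V', 'o|ART', 'gato|N']
print(s2)  # ['o|ART', 'livro|N', 'azul|ADJ']
```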
\cite{trask2015sense2vec} present experiments demonstrating the effectiveness of the method for sentiment analysis and named entity recognition (NER). For sentiment analysis, sense2vec was trained on a corpus annotated with PoS tags, with adjectives additionally receiving sentiment tags. The word ``bad'' was disambiguated between positive and negative sentiment. For the negative meaning, words like ``terrible'', ``horrible'' and ``awful'' appeared, while for the positive meaning there were words like ``good'', ``wrong'' and ``funny'', indicating a more sarcastic sense of ``bad''.
In the NER task, sense2vec was trained with a corpus annotated with PoS and NER tags. For example, the NE ``Washington'' was disambiguated between the entity categories PERSON-NAME (person's name) and GPE (geolocation). In the PERSON-NAME category it was associated with words like ``George-Washington'', ``Henry-Knox'' and ``Philip-Schuyler'' while in the GPE category the word was associated with ``Washington-DC'', ``Seattle'' and ``Maryland''.
\section{Experiments and Results}
\label{sec:experiment}
In this section we present the first experiments carried out to evaluate the sense vectors generated for Portuguese. In what follows, we first describe the corpora used to generate the sense vectors, then present the network parameters used for training the models and, finally, report the experiments carried out to evaluate the two approaches under investigation: MSSG and sense2vec.
\subsection{Training Corpora}
The corpora used for training the sense vectors were the same as in \cite{hartmann2017portuguese}, composed of texts written in Brazilian Portuguese (PT-BR) and European Portuguese (PT-EU). Table~\ref{tab:corpusembeddings} summarizes the information about these corpora: name, number of tokens and types, and a brief description of the genre.
\begin{table}[!ht]
\centering
\footnotesize
\caption{Statistics of our training corpora}
\begin{tabular}[t]{lrrl}
\midrule\midrule
Corpus&
Tokens&
Types&
Genre\\
\midrule
LX-Corpus \cite{rodrigues2016lx} & 714,286,638 & 2,605,393 & Mixed genres\\
Wikipedia & 219,293,003 & 1,758,191 & Encyclopedic\\
GoogleNews & 160,396,456 & 664,320 & Informative\\
SubIMDB-PT & 129,975,149 & 500,302 & Spoken language\\
G1 & 105,341,070 & 392,635 & Informative\\
PLN-Br & 31,196,395 & 259,762 & Informative\\
Literary works of public domain & 23,750,521 & 381,697 & Prose\\
Lacio-web \cite{aluisio2003lacioweb} & 8,962,718 & 196,077 & Mixed genres\\
Portuguese e-books & 1,299,008 & 66,706 & Prose\\
Mundo Estranho & 1,047,108 & 55,000 & Informative\\
CHC & 941,032 & 36,522 & Informative\\
FAPESP & 499,008 & 31,746 & Science\\
Textbooks & 96,209 & 11,597 & Didactic\\
Folhinha & 73,575 & 9,207 & Informative\\
NILC subcorpus & 32,868 & 4,064 & Informative\\
Para Seu Filho Ler & 21,224 & 3,942 & Informative\\
SARESP & 13,308 & 3,293 & Didactic\\
\textbf{Total} & \textbf{1,395,926,282} & \textbf{3,827,725}\\
\midrule\midrule
\end{tabular}
\vspace{-1\baselineskip}
\label{tab:corpusembeddings}
\end{table}
The corpora were pre-processed in order to reduce the vocabulary size. For the sense2vec model, the corpora were also PoS-tagged using the nlpnet tool \cite{fonseca2013twostep}, which is considered the state of the art in PoS-tagging for PT-BR.
It is important to note that both approaches for generating sense vectors were trained on these corpora. The only difference is that the input for MSSG is the sentence without any PoS tags, while the input for sense2vec is the sentence annotated with PoS tags.
\subsection{Network Parameters}
For all training, including baselines, we generated vectors of 300 dimensions, using the Skip-Gram model, with context window of five words. The learning rate was set to 0.025 and the minimum frequency for each word was set to 10. For the MSSG approach, the maximum number of senses per word was set to 3.
\subsection{Evaluation}
Following \cite{hartmann2017portuguese}, this experiment evaluates the use of sense vectors in a task of syntactic and semantic analogies. Word vectors were chosen as baselines.
\paragraph{\textbf{Dataset.}} The dataset of Syntactic and Semantic Analogies of \cite{rodrigues2016lx} has analogies in Brazilian (PT-BR) and European (PT-EU) Portuguese. In syntactic analogies, we have the following categories: adjective-to-adverb, opposite, comparative, superlative, present-participle, nationality-adjective, past-tense, plural, and plural-verbs. In semantic analogies, we have the following categories: capital-common-countries, capital-world, currency, city-in-state and family. In each category, we have examples of analogies with four words:
\paragraph{\textbf{adjective-to-adverb:}}
\begin{itemize}
\item \textit{fantástico fantasticamente aparente aparentemente}
\textbf{(syntactic)}\\
fantastic fantastically apparent apparently
\end{itemize}
\vspace*{-0.5cm}
\paragraph{\textbf{capital-common-countries:}}
\begin{itemize}
\item \textit{Berlim Alemanha Lisboa Portugal }\textbf{(semantic)} \\
Berlin Germany Lisbon Portugal
\end{itemize}
\paragraph{\textbf{Algorithm.}} The algorithm receives the first three words of the analogy and aims to predict the fourth. For instance, considering the previous example, the algorithm would receive Berlin (a), Germany (b) and Lisbon (c), and should predict Portugal (d). Internally, the following algebraic operation is performed on the vectors:
\begin{equation}
v (b) + v (c) - v (a) = v (d)
\end{equation}
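A minimal sketch of this prediction step is given below. The 2-dimensional vectors are hypothetical toy values; in practice the search runs over the full vocabulary of the trained embeddings, typically ranking candidates by cosine similarity to the resulting vector.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(y * y for y in v)))

def solve_analogy(a, b, c, vectors):
    # Predict d such that a : b :: c : d, by returning the vocabulary
    # word whose vector is most similar to v(b) + v(c) - v(a),
    # excluding the three query words themselves.
    target = [vb + vc - va for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

# Hypothetical 2-d vectors for illustration only.
vectors = {
    "berlim":   [1.0, 0.0],
    "alemanha": [1.0, 1.0],
    "lisboa":   [0.0, 0.1],
    "portugal": [0.0, 1.1],
    "madrid":   [0.5, 0.0],
}
print(solve_analogy("berlim", "alemanha", "lisboa", vectors))  # portugal
```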
\paragraph{\textbf{Evaluation metrics.}} The metric used in this case is accuracy, which calculates the percentage of correctly predicted words relative to the total number of analogies in the dataset.
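As a sketch, the metric reduces to the following (the prediction and gold lists here are illustrative):

```python
def accuracy(predictions, gold):
    # Percentage of predictions that exactly match the expected word.
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

print(accuracy(["portugal", "ohio", "kansas", "wisconsin"],
               ["portugal", "ohio", "ohio", "wisconsin"]))  # 75.0
```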
\paragraph{\textbf{Discussion of results.}} Table \ref{tab:evaluation} shows the accuracy values obtained for the syntactic and semantic analogies. Word2vec, GloVe and FastText were adopted as word-vector baselines since they performed well in the experiments of \cite{hartmann2017portuguese}. Note that the sense vectors generated by our sense2vec model outperform the baselines at both the syntactic and semantic levels.
\begin{table}[!htb]
\centering
\footnotesize
\caption{Accuracy values for the syntactic and semantic analogies}
\begin{tabular}[t]{lccc|ccc}
\midrule\midrule
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Embedding}}} &
\multicolumn{3}{c}{\textbf{PT-BR}} & \multicolumn{3}{c}{\textbf{PT-EU}}\\
\cmidrule{2-7}
\multicolumn{1}{c}{} & \textbf{Syntactic} & \textbf{Semantic} & \textbf{All} & \textbf{Syntactic} & \textbf{Semantic} & \textbf{All}\\
\midrule
Word2Vec (word) & 49.4 & 42.5 & 45.9 & 49.5 & 38.9 & 44.3 \\
\midrule
GloVe (word) & 34.7 & 36.7 & 35.7 & 34.9 & 34.0 & 34.4 \\
\midrule
FastText (word) & 39.9 & 8.0 & 24.0 & 39.9 & 7.6 & 23.9 \\
\midrule
MSSG (sense) & 23.0 & 6.6 & 14.9 & 23.0 & 6.3 & 14.7 \\
\midrule
Sense2Vec (sense) & \textbf{52.4} & \textbf{42.6} & \textbf{47.6} & \textbf{52.6} & \textbf{39.5} & \textbf{46.2} \\
\midrule\midrule
\end{tabular}
\vspace{-1\baselineskip}
\label{tab:evaluation}
\end{table}
In syntactic analogies, the sense vectors generated by sense2vec outperform the word vectors generated by word2vec in the categories opposite, nationality-adjective, past-tense, plural and plural-verbs. An example is shown in Table \ref{tab:syntactic}. We can explain this type of success through the algebraic operation on vectors: when calculating v(\textit{aparentemente} (apparently)) + v(\textit{completo} (complete)) - v(\textit{aparente} (apparent)), the vector returned by word2vec is v(\textit{incompleto} (incomplete)) when it should be v(\textit{completamente} (completely)). The correct option appears only as the second nearest neighbor.
We can thus conclude that sense2vec's PoS tag functions as an extra feature in the training of sense vectors, generating more accurate numerical vectors and allowing the correct result to be obtained.
\begin{table}[!htb]
\centering
\caption{Example of syntactic analogy predicted by word2vec and sense2vec}
\begin{minipage}{\textwidth}
\scalebox{.90}{
\begin{tabular}[t]{l|l}
\midrule\midrule
word2vec & aparente aparentemente completo : completamente \textbf{(expected)}\\
& aparente aparentemente completo : incompleto \textbf{(predicted)}\\
\midrule
sense2vec & aparente$|$ADJ aparentemente$|$ADV completo$|$ADJ : completamente$|$ADV \textbf{(expected)}\\
& aparente$|$ADJ aparentemente$|$ADV completo$|$ADJ : completamente$|$ADV \textbf{(predicted)}\\
\midrule\midrule
\end{tabular}}
\end{minipage}
\label{tab:syntactic}
\end{table}
In semantic analogies, the sense vectors generated by sense2vec outperform the word vectors generated by word2vec in the categories capital-world, currency and city-in-state. Examples of city-in-state are shown in Table \ref{tab:semantic}.
\begin{table}[!htb]
\begin{center}
\footnotesize
\caption{Example of semantic analogies predicted by word2vec and sense2vec}
\begin{tabular}[t]{l|l}
\midrule\midrule
word2vec & arlington texas akron : kansas \textbf{(predicted)} ohio \textbf{(expected)}\\
sense2vec & arlington$|$N texas$|$N akron$|$N : ohio$|$N \textbf{(predicted)(expected)}\\
\midrule
word2vec & bakersfield califórnia madison : pensilvânia \textbf{(predicted)} wisconsin \textbf{(expected)}\\
sense2vec & bakersfield$|$N califórnia$|$N madison$|$N : wisconsin$|$N \textbf{(predicted)(expected)}\\
\midrule
word2vec & worcester massachusetts miami : seattle \textbf{(predicted)} flórida \textbf{(expected)}\\
sense2vec & worcester$|$N massachusetts$|$N miami$|$N : flórida$|$N \textbf{(predicted)(expected)}\\
\midrule\midrule
\end{tabular}
\vspace{-1\baselineskip}
\label{tab:semantic}
\end{center}
\end{table}
In this case, the PoS tag is always the same for all words: N (noun). This indicates that the success of sense2vec is related to the quality of the sense vectors as a whole. As all words are tagged, this feature ends up improving the inference of the whole vector space during training.
Based on \cite{mikolov2013Aefficient}, which performs the algebraic operation vector(``King'') - vector(``Man'') + vector(``Woman'') = vector(``Queen''), this experiment explores the ability of sense vectors to infer new semantic information through algebraic operations.
\paragraph{\textbf{Dataset.}} The CSTNews dataset \cite{cardoso2011cstnews} contains 50 collections of journalistic documents (PT-BR) with 466 nouns annotated with their meanings, drawn from wordnet synsets. Of these nouns, 77\% are ambiguous (with two or more meanings). Some ambiguous words were chosen for the algebraic operations between vectors.
\paragraph{\textbf{Algorithm.}} The algorithm receives the first three words of the analogy and aims to predict the fourth. For instance, considering the previous example, the algorithm would receive Man (a), Woman (b) and King (c), and should predict Queen (d). Internally, the following algebraic operation is performed on the vectors:
\begin{equation}
v (b) + v (c) - v (a) = v (d)
\end{equation}
\paragraph{\textbf{Evaluation metrics.}} This evaluation shows qualitative results, so no metric is used.
\paragraph{\textbf{Discussion of results.}} To illustrate how sense vectors capture the meaning differences better than word vectors do, examples of algebraic operations using word vectors (generated by word2vec) and sense vectors (generated by MSSG) are shown below.
This first example is for the ambiguous word \textit{banco} (bank), which has three predominant meanings: (1) substitutes' bench (soccer, basketball), (2) physical storage (trunk, luggage rack) and (3) financial institution (Santander, Pactual).
\begin{figure}[htb!]
\includegraphics[width=0.9\textwidth]{nearest_neighbor_bank}
\caption{Algebraic Operation by MSSG and word2vec with the word ``banco''}
\label{fig:nearest_neighbor_bank}
\end{figure}
In this example, we show the results of \textit{banco} $+$ \textit{dados} $-$ \textit{dinheiro} (bank $+$ data $-$ money) and we expect as a result words related to the second meaning of the word \textit{banco}.\footnote{In Portuguese, the usual translation of ``database'' is \textit{banco de dados}. So, in Portuguese, MySQL, SQL, etc. are common words related to \textit{banco de dados}.} When the sense vectors are used (top part of Fig \ref{fig:nearest_neighbor_bank}) we obtain exactly what we were expecting. However, when the word vectors are used (bottom part of Fig \ref{fig:nearest_neighbor_bank}) we do not obtain any result related to data.
Another example is shown in Fig \ref{fig:nearest_neighbor_gol}. This example is for the ambiguous word \textit{gol} (goal), which has one predominant meaning: soccer goal. In this example, we show the results of \textit{gol} $+$ \textit{companhia} $-$ \textit{futebol} (goal $+$ company $-$ soccer), and we discover a new meaning for the word \textit{gol}: an airline name, appearing next to airlines such as KLM, LATAM and American Airlines (top part of Fig \ref{fig:nearest_neighbor_gol}). When the word vectors are used (bottom part of Fig \ref{fig:nearest_neighbor_gol}) we do not get this new meaning. With this algebraic operation, we conclude that it is possible to discover new meanings for a word, even if the word does not have a sense vector corresponding to that meaning.
\begin{figure}[htb!]
\includegraphics[width=0.9\textwidth]{nearest_neighbor_gol}
\caption{Algebraic Operation by MSSG and word2vec with the word ``gol''}
\label{fig:nearest_neighbor_gol}
\end{figure}
The last example uses two ambiguous words in the same operation: \textit{centro} (center) and \textit{pesquisas} (researches). We found interesting results for these operations. The ambiguous word \textit{centro} (center) has two predominant meanings: (1) institute (center of predictions, NASA center) and (2) midtown (central area). The ambiguous word \textit{pesquisas} (researches) also has two predominant meanings: (1) scientific research (experiments, discoveries) and (2) opinion or market research.
In the first operation, we show the results of \textit{centro$_{sense1}$} $+$ \textit{pesquisas$_{sense2}$} $-$ \textit{científica} (center$_{sense1}$ $+$ researches$_{sense2}$ $-$ scientific). In the top part of Figure \ref{fig:nearest_neighbor_centro_instituto}, we obtain a new sense: institutes conducting statistical surveys, like Datafolha and YouGov, next to words related to elections.
\begin{figure}[htb!]
\includegraphics[width=0.9\textwidth]{nearest_neighbor_centro_instituto}
\caption{Algebraic Operation by MSSG and word2vec with the word ``centro$_{sense1}$''}
\label{fig:nearest_neighbor_centro_instituto}
\end{figure}
In the second operation, we show the results of \textit{centro$_{sense2}$} $+$ \textit{pesquisas$_{sense2}$} $-$ \textit{científica} (center$_{sense2}$ $+$ researches$_{sense2}$ $-$ scientific). In the top part of Figure \ref{fig:nearest_neighbor_centro_meio}, we obtain another new sense: political ideology/orientation (left, right or \textbf{center}), with words like trump (Donald Trump), clinton (Bill Clinton), romney (Mitt Romney), hillary (Hillary Clinton), and also words like \textit{centro-direita} (center-right), \textit{ultraconservador} (ultraconservative), \textit{eleitores} (voters) and \textit{candidato} (candidate). These words are related to political positions, including the names of right-, left- and \textbf{center}-wing politicians.
\begin{figure}[htb!]
\includegraphics[width=0.9\textwidth]{nearest_neighbor_centro_meio}
\caption{Algebraic Operation by MSSG and word2vec with the word ``centro$_{sense2}$''}
\label{fig:nearest_neighbor_centro_meio}
\end{figure}
These results are interesting because they show new nuances of meaning and demonstrate that it is possible to infer new semantic information that is not represented by the sense vectors. These findings cannot be made using word vectors (bottom parts of Figs \ref{fig:nearest_neighbor_centro_instituto} and \ref{fig:nearest_neighbor_centro_meio}).
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper we used two techniques to generate sense embeddings (sense vectors) for Portuguese (Brazilian and European). The generated models were evaluated on the task of syntactic and semantic analogies, and the accuracy values show that the sense vectors (sense2vec) outperform the traditional word-vector baselines (word2vec, GloVe, FastText) at a similar computational cost.
Our sense vectors and the code used in all the experiments presented in this paper are available at \url{https://github.com/LALIC-UFSCar/sense-vectors-analogies-pt}. The application of sense vectors in NLP tasks (WSD and others) is under development.
As future work we intend to experiment with a combination of the two approaches (MSSG and sense2vec) and also to explore how the new approaches proposed for generating language models perform for Portuguese.
\section*{Acknowledgements}
\label{sec:acknowledgements}
This research is part of the MMeaning project, supported by São Paulo Research Foundation (FAPESP), grant \#2016/13002-0, and was also partly funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Funding Code 001.
\bibliographystyle{sbc}
\section{Introduction}
String topology is the study of algebraic structures on the free loop space $LM$ of a smooth manifold $M$, as initiated by Chas and Sullivan \cite{ChasSullivan}.
In this paper we will consider the following subset of operations on the real or rational (co)homology.
First, the BV operator $\Delta:H_\bullet(LM)\to H_{\bullet+1}(LM)$ is just the action of the fundamental chain of $S^1$, using the $S^1$-action on $LM$ by reparameterization of loops, $S^1\times LM\to LM$.
Second, one has the string product $H_\bullet(LM)\otimes H_\bullet(LM)\to H_{\bullet-n}(LM)$, where $n$ is the dimension of the smooth manifold $M$.
The product is obtained by intersecting the loops at their basepoints, see below for more details.
The product and the operation $\Delta$ generate a Batalin-Vilkovisky algebra structure on the homology $H_{\bullet+n}(LM)$.
Third, there is the string coproduct $H_{\bullet+n-1}(LM,M)\to H_\bullet(LM,M)\otimes H_\bullet(LM,M)$, by taking a self-intersection of the loop \cite{GoreskyHingston}. It is defined only on the relative homology with respect to constant loops.
We refer to section \ref{sec:stringtopology} below for a more detailed description of these operations.
Finally, we consider the homotopy $S^1$-quotient of $LM$, which we denote by $LM_{S^1}$.
Chas and Sullivan \cite{ChasSullivan2} describe a Lie bialgebra structure on the equivariant homology $H_\bullet^{S^1}(LM,M)$ of $LM$ relative to the constant loops, extending earlier work by Turaev \cite{Turaev}. More precisely, the Lie bialgebra structure is degree-shifted, such that both the bracket and the cobracket have degree $2-n$.
Similarly, given a Poincaré duality model for $M$, a Lie bialgebra structure on the reduced equivariant homology $\bar H_\bullet^{S^1}(LM)$ has been described in \cite{ChenEshmatovGan}.
It should be remarked that the latter construction is completely algebraic and ``formal'', while the Chas-Sullivan definition is ``topological'', proceeding by intersecting loops.
By duality we obtain the corresponding dual operations on the cohomology of the loop spaces considered above.
The (co)homology can furthermore be efficiently computed.
To this end let $A$ be a differential graded commutative algebra (dgca) model for $M$, i.e., $A$ is quasi-isomorphic to the dgca of differential forms on $M$.
Then an iterated integral construction yields a map
\begin{equation}\label{equ:ii1}
HH_{\bullet}(A,A) \to H^{-\bullet}(LM)
\end{equation}
from the Hochschild homology of $A$ with coefficients in $A$ to the cohomology of $LM$. Similarly, one obtains maps
\begin{equation}\label{equ:ii2}
\overline{HH}_{\bullet}(A,A) \to H^{-\bullet}(LM,M)
\end{equation}
from a reduced version of the Hochschild homology and
\begin{equation}\label{equ:ii3}
H\overline{Cyc}_{\bullet}(\bar A) \to \bar H^{-\bullet}_{S^1}(LM)
\end{equation}
from the homology of the (reduced) cyclic words in $A$.
\begin{Thm}[\cite{Jones,CohenJones02,ChenEshmatovGan}]
If $M$ is a simply connected closed manifold then the maps \eqref{equ:ii1}, \eqref{equ:ii2}, \eqref{equ:ii3} are isomorphisms.
\end{Thm}
\subsection{Statement of results}
The purpose of this paper is to understand the string topology operations described above on the objects on the left-hand side of \eqref{equ:ii1}, \eqref{equ:ii2}, \eqref{equ:ii3}, in the case that $M$ is a smooth compact manifold without boundary.
More concretely, our results are as follows.
First, we provide a (slightly) new version of the construction of the string product and coproduct, using the compactified configuration spaces $\mathsf{FM}_M(2)$ of two points on $M$, together with the inclusion of the boundary
$\partial \mathsf{FM}_M(2)=UTM\to \mathsf{FM}_M(2)$. This inclusion can be seen as the simplest instance of the action of the little disks operad on the configuration space of (framed) points on $M$. Our approach allows us to use existing models for the configuration spaces of points \cite{Idrissi,CamposWillwacher} to conduct computations in string topology.
We shall work in the cohomological setting (on $LM$), i.e., we consider the Hochschild homology, not cohomology, of our dgca model $A$ for $M$. Suppose first that $M$ is simply connected.
Then we may take for $A$ a Poincar\'e duality model for $M$ \cite{LambrechtsStanley}. In particular, $A$ comes equipped with a diagonal ${\mathbf D}=\sum {\mathbf D}'\otimes {\mathbf D}''\in A\otimes A$, such that $\sum a{\mathbf D}'\otimes {\mathbf D}''=\sum \pm {\mathbf D}'\otimes a{\mathbf D}''$ for all $a\in A$.
We may use this to construct a (degree shifted) co-BV structure on the reduced Hochschild complex $\bar C(A)=\bar C(A,A)$.
Concretely, the BV operator is the Connes-Rinehart differential on the Hochschild complex, given by the formula (cf. \cite[(2.1.7.3)]{Loday})
\begin{equation}\label{equ:connesB}
B(a_0,\dots, a_n) = \sum_{j=0}^n \pm (1, a_j, a_{j+1},\dots, a_n, a_0, a_1, \dots , a_{j-1}).
\end{equation}
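For instance, in the lowest arities this reads (the Koszul signs depend on the degrees of the $a_i$ and are suppressed here; this only illustrates the cyclic structure):
\[
B(a_0) = (1, a_0), \qquad B(a_0, a_1) = \pm (1, a_0, a_1) \pm (1, a_1, a_0).
\]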
The coproduct dual to the string product, corresponding to the cup product on Hochschild cohomology, is
\begin{equation}\label{equ:intro product}
(a_0,\dots, a_n) \mapsto
\sum_{j=0}^n
\sum
\pm
(a_0{\mathbf D}',\dots, a_j) \otimes ({\mathbf D}'',a_{j+1},\dots, a_n).
\end{equation}
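For instance, in the lowest arity $n=0$ the formula \eqref{equ:intro product} reduces to (signs suppressed)
\[
(a_0) \mapsto \sum \pm (a_0{\mathbf D}') \otimes ({\mathbf D}'').
\]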
As a first application we then obtain another proof of the following result of Cohen and Voronov.
\begin{Thm}[\cite{CohenVoronov}]\label{thm:main_1}
For $M$ a closed simply connected manifold the map \eqref{equ:ii1} is an isomorphism of co-BV algebras.
\end{Thm}
As a second application we consider the string coproduct.
On the Hochschild chains the corresponding product operation is given by the formula
\begin{equation}\label{equ:intro coproduct}
(a_0,a_1,\dots,a_m) \otimes (b_0,b_1,\dots,b_n)
\to
\sum\pm (b_0{\mathbf D}' a_0,a_1,\dots,a_m,{\mathbf D}'',b_1,\dots , b_n).
\end{equation}
We then show the following result, conjectured (in some form) in \cite{Abbaspour, Klamt2}; cf. also \cite{ChenEshmatovGan}.
\begin{Thm}\label{thm:main_2}
For $M$ a closed simply connected manifold the map \eqref{equ:ii2} respects the coproducts.
\end{Thm}
The Lie bialgebra structure on the $S^1$-equivariant (co)homology of $LM$ can be constructed from the string product and coproduct. Hence it also follows that the map \eqref{equ:ii3} respects the Lie bialgebra structures in the simply connected situation.
Note that so far all string topology operations considered depend on $M$ only through the real (or rational) homotopy type of $M$, as encoded in the dgca model $A$.
This is in accordance with the result of \cite{CamposWillwacher,Idrissi} that the real homotopy type of the configuration spaces of points on $M$ only depends on the real homotopy type of $M$, for simply connected $M$. (And our construction depends only on a model for the configuration spaces, with the boundary inclusion.)
However, we can also use our approach to ``compute'' the string topology operations for non-simply connected manifolds. In this case the maps \eqref{equ:ii1}-\eqref{equ:ii3} are no longer quasi-isomorphisms, and ``compute'' has to be understood as providing algebraic operations on the left-hand sides that are preserved by those maps.
In this setting, one notably sees some indications of dependence of the string topology operations on $M$ beyond its real or rational homotopy type.
In this situation, we use the dgca model for the configuration space of points on $M$ constructed in \cite{CamposWillwacher}.
In that construction a central role is played by a dg Lie algebra of graphs $\mathsf{GC}_M$, whose elements are series of connected graphs with vertices decorated by elements of $\bar H_{\bullet}(M)$.
\begin{equation}\label{equ:GCMex}
\begin{tikzpicture}[scale=.7,baseline=-.65ex]
\node[int,label=90:{$\alpha$}] (v1) at (90:1) {};
\node[int,label=180:{$\beta$}] (v2) at (180:1) {};
\node[int] (v3) at (270:1) {};
\node[int,label=0:{$\gamma$}] (v4) at (0:1) {};
\draw (v1) edge (v4) edge (v2) (v3) edge (v2) edge (v4) edge (v1);
\end{tikzpicture}
\end{equation}
The dgca model for the configuration space is then completely encoded by a Maurer-Cartan element $Z\in \mathsf{GC}_M$.
The tree part (i.e., the loop order $0$ part) of $\mathsf{GC}_M$ can be identified with (almost) the Lie algebra encoding the real homotopy automorphisms of $M$. Similarly, the tree part $Z_0$ of $Z$ encodes just the real homotopy type of $M$. The higher loop orders hence encode potential dependence on $M$ beyond its real homotopy type.
We can use these models for configuration spaces to describe the string topology operations on the images of the morphisms \eqref{equ:ii1}-\eqref{equ:ii3}, and get explicit formulas.
However, since the formula for the coproduct is a bit ugly (see section \ref{sec:graphical version}), we will restrict to the equivariant situation and describe the string bracket and cobracket there.
So we consider again the loop space $LM$, with the goal of studying the Lie bialgebra structure on its $S^1$-equivariant cohomology.
First note that in the non-simply connected case the maps \eqref{equ:ii1}-\eqref{equ:ii3} still exist, but they are generally not quasi-isomorphisms.
Nevertheless, we can ask for a Lie bialgebra structure to put on the left-hand side of \eqref{equ:ii3} that makes \eqref{equ:ii3} into a morphism of Lie bialgebras.
We may also replace the reduced cyclic words $\overline{Cyc} \bar A$ in our dgca model $A$ of $M$ by the reduced cyclic words $\overline{Cyc} \bar H$ in the cohomology $H:=H^\bullet(M)$ of $M$.
In this case we have a non-trivial $\mathsf{Com}_\infty$-structure on $H$, and accordingly a differential on $\overline{Cyc} \bar H$ encoding the $\mathsf{Com}_\infty$-structure.
Our result is then the following, partially conjectured in \cite[Conjecture 1.11]{CFL}.
\begin{Thm}\label{thm:liebialg}\label{thm:main_3}
Let $M$ be a closed connected oriented manifold.
Then there is a degree $2-d$ homotopy involutive Lie bialgebra structure on the chain complex $\overline{Cyc} \bar H$, explicitly constructed below using only the Maurer-Cartan element $Z\in \mathsf{GC}_M$ above, such that the induced map in (co)homology
\[
H_\bullet\overline{Cyc}(\bar H) \to \bar H^{-\bullet}_{S^1}(LM)
\]
respects the Lie bracket and Lie cobracket.
\end{Thm}
We remark that for this theorem the Lie bracket and cobracket on $\bar H^{-\bullet}_{S^1}(LM)$ are defined using the string product and coproduct, roughly following \cite{GoreskyHingston, HingstonWahl}, see section \ref{sec:stringbracketdef} below. For the explicit formulas for the bracket and cobracket see section \ref{sec:thm3proof}, in particular Theorems \ref{thm:bracket} and \ref{thm:cobracket reduced}.
We also note that in dimensions $\neq 3$ the Maurer-Cartan element $Z$ can be taken without terms of loop orders $>1$, i.e., $Z=Z_0+Z_1$, with $Z_1$ of loop order 1.
In this case the homotopy involutive Lie bialgebra structure in the above theorem is in fact a strict Lie bialgebra structure.
Furthermore, the induced involutive Lie bialgebra structure on $H\overline{Cyc}(\bar H)$ depends only on the loop order 0 and 1 parts of $Z$ in all dimensions.
Finally, $Z_1$ is not easy to compute, and the authors do not have an example of a concrete manifold for which $Z_1$ is computable and known to be nontrivial.
However, we expect the loop order 1 part of $\mathsf{GC}_M$ to correspond to nontrivial terms in (a Lie algebra model of) $\mathrm{Diff}(M)$ arising from topological Hochschild homology.
We show that in families the string cobracket witnesses a dependence on $M$ beyond its real homotopy type.
In particular, we show that in the simply-connected case the following diagram commutes
$$
\begin{tikzcd}[column sep=tiny]
\pi_*(\mathrm{Diff}_1(M)) \ar[rr] \ar[d] && \mathrm{Der}_{[\cdot,\cdot], \delta}(\bar{H}^{S^1}_\bullet(LM)) \ar[d] \\
\pi_*(aut_1(M)) \ar[rr] \ar[dr] && \mathrm{Der}_{[\cdot,\cdot]}(\bar{H}^{S^1}_\bullet(LM)) \\
& \bar{H}^{S^1}_\bullet(LM) \ar[ur, "\operatorname{ad}"'],
\end{tikzcd}
$$
and that in examples, the right vertical arrow is far from being surjective. Hence, in contrast to the string bracket, the cobracket gives a non-trivial condition (in general) on elements in $\pi_*(aut_1(M))$ to be in the image of $\pi_*(\mathrm{Diff}_1(M))$. In that sense we obtain that the string coproduct is not homotopy invariant.
For a more detailed discussion we refer to the concluding remarks in section \ref{sec:discussion}.
Let us also consider the case of $M$ being $1$-framed, that is, equipped with a nowhere vanishing vector field.
A necessary condition for this to exist is, of course, the vanishing of the Euler characteristic $\chi(M)=0$.
In this setting one may construct the string coproduct already on the cohomology of the loop space $H(LM)$, as opposed to on $H(LM,M)$ for general $M$, see section \ref{sec:framedcoprod} below.
By similar methods as above we then obtain:
\begin{Thm}\label{thm:main_4}
Let $M$ be a closed orientable $1$-framed manifold.
If $M$ is simply connected (and we hence have a Poincaré duality model) the map \eqref{equ:ii1} is compatible with the string coproduct (cohomology product), where the cohomology product on the left-hand side is defined by the same formula (which now makes sense on absolute chains).
For $M$ (potentially) non-simply connected the map \eqref{equ:ii3} intertwines the string bracket and cobracket on the right-hand side with the corresponding operations on the left-hand side, given by the same formulas as in Theorem \ref{thm:main_3}, cf. section \ref{sec:thm3proof} below.
\end{Thm}
We finally note that for the string topology operations considered here, only the configuration spaces of up to two points play a role, and in the graph complex only diagrams of loop order $\leq 1$. In light of Theorem \ref{thm:liebialg} it is hence reasonable to expect that a similar discussion of higher order string topology operations involves configuration spaces of more points, and diagrams of higher loop order.
\subsection*{Acknowledgements}
We are grateful for discussions, suggestions and support by Anton Alekseev, Ricardo Campos, Matteo Felder, Alexander Kupers, Pavel Safronov, Nathalie Wahl, and others.
While working on this project we were made aware of similar results in preparation by Kaj B\"orjeson \cite{Borjeson}. We are grateful to him for sharing his drafts.
The first author is partially supported by the Postdoc Mobility grant P400P2\_183900 of the Swiss National Science Foundation.
The second author is partially supported by the European Research Council under the ERC starting grant StG 678156 GRAPHCPX.
\section{Notation and recollections}
\subsection{Conventions on (co)chain complexes}
We generally work with cohomological degree conventions, that is, differentials in differential graded (dg) vector spaces have degree +1.
If we want to emphasize the cohomological nature of a dg vector space, we sometimes write it as $V^\bullet$, while $V_\bullet$ shall refer to homological conventions. Note that we often omit the $(-)^\bullet$, for example $H(M)=H^\bullet(M)$ is the cohomology of the manifold $M$.
Furthermore, all dg vector spaces will be over a field $\K$ of characteristic zero. For a large part of the results we need to restrict to either $\K=\mathbb{R}$ or $\K=\mathbb{Q}$.
For $V$ a dg vector space we denote by $V[k]$ the $k$-fold degree shifted dg vector space, defined such that for an element $v\in V$ of degree $j$, the corresponding element of $V[k]$ has degree $j-k$. Such degree shifts can also be indicated on the degree placeholder $\bullet$ like $V^{\bullet +1}:=V^\bullet[1]$.
We define the tensor coalgebra of a cohomologically graded complex $V^\bullet$ with a (non-standard) degree shift
\[
TV^\bullet := \bigoplus_{k\geq 0} (V^{\bullet}[1])^{\otimes k}.
\]
This is to remove clutter when working with the Hochschild complex $TA\otimes A$ or similarly defined complexes later.
\subsection{Operads, $\mathsf{Com}_\infty$- and $\widehat{\Com}_\infty$-algebras}
\label{sec:intro operads}
We denote by $\mathsf{Com}$ the commutative operad.
It has the standard Koszul resolution
\[
\mathsf{Com}_\infty = \Omega(\mathsf{Com}^\vee)\to \mathsf{Com}
\]
as the cobar construction of the Koszul dual cooperad $\mathsf{Com}^\vee=\mathsf{Lie}^*\{1\}$, with $\mathsf{Lie}$ the Lie operad.
A $\mathsf{Com}_\infty$-algebra structure on a (differential) graded vector space $A$ can be encoded as a codifferential $D_A$ on the cofree $\mathsf{Com}^\vee$-coalgebra
\[
{\mathbb F}^c_{\mathsf{Com}^\vee}(A[1]).
\]
A $\mathsf{Com}_\infty$-map $A\to B$ is by (our) definition a map of $\mathsf{Com}^\vee$-coalgebras
\[
\left( {\mathbb F}^c_{\mathsf{Com}^\vee}(A[1]),D_A\right) \to \left( {\mathbb F}^c_{\mathsf{Com}^\vee}(B[1]),D_B\right).
\]
One can strictify a $\mathsf{Com}_\infty$-algebra $A$ to a free $\mathsf{Com}$-algebra quasi-isomorphic to $A$
\[
\hat A := ({\mathbb F}_{\mathsf{Com}}({\mathbb F}^c_{\mathsf{Com}^\vee}(A[1])[-1]), D).
\]
Concretely, the $\mathsf{Com}_\infty$-quasi-isomorphism $A\to \hat A$ is given by the inclusion
\[
{\mathbb F}^c_{\mathsf{Com}^\vee}(A[1])
\subset
{\mathbb F}_{\mathsf{Com}}({\mathbb F}^c_{\mathsf{Com}^\vee}(A[1])[-1])[1]
\subset
{\mathbb F}^c_{\mathsf{Com}^\vee}({\mathbb F}_{\mathsf{Com}}({\mathbb F}^c_{\mathsf{Com}^\vee}(A[1])[-1])[1]).
\]
We shall also use below the bar-cobar resolution
\[
\widehat{\Com}_\infty :=\Omega(B(\mathsf{Com})) \xrightarrow{\simeq}\mathsf{Com}
\]
where $B(-)$ stands for the operadic bar construction.
In the same manner as above one defines $\widehat{\Com}_\infty$-algebras and $\widehat{\Com}_\infty$-morphisms.
We just replace $\mathsf{Com}^\vee$ by $B\mathsf{Com}$, or in other words $\mathsf{Lie}$ by $\mathsf{Lie}_\infty$.
There is a canonical quasi-isomorphism of cooperads $\mathsf{Com}^\vee\to B\mathsf{Com}$ and accordingly a canonical quasi-isomorphism
\[
\mathsf{Com}_\infty \to \widehat{\Com}_\infty.
\]
Hence any $\widehat{\Com}_\infty$-algebra is in particular a $\mathsf{Com}_\infty$-algebra, and a $\widehat{\Com}_\infty$-map between two $\widehat{\Com}_\infty$-algebras induces a $\mathsf{Com}_\infty$-map between the corresponding $\mathsf{Com}_\infty$-algebras.
\subsection{Hochschild and cyclic complex}\label{sec:intro hochschild}
For an associative or, more generally, $A_\infty$-algebra $A$ we consider the reduced Hochschild complex $\bar C(A)$ of $A$. All of our algebras will be augmented, and in this case we can write
\[
\bar C(A) = \left( \bigoplus_{k\geq 0} (\bar A[1])^{\otimes k} \otimes A, d_H\right)
\]
where $d_H$ is the Hochschild differential and $\bar A$ is the augmentation ideal.
The negative cyclic complex of $A$ is
\[
(\bar C(A)[[u]], d_H+uB),
\]
where $u$ is a formal variable of degree $+2$ and $B$ is the Connes-Rinehart differential, see \eqref{equ:connesB}.
We similarly define the reduced negative cyclic complex to be the cyclic complex relative to $\mathbb{R} \to A$,
\[
\overline{CC}(A) = \left(\bigoplus_{k\geq 0} (\bar A[1])^{\otimes k} \otimes A[[u]], d_H+uB\right)/ \mathbb{R}[[u]].
\]
Since our unit is split, this differs from the original complex only by the summand $\mathbb{R}[[u]]$.
Let
\[
\overline{Cyc}(\bar A) = \bigoplus_{k\geq 1} \left( (\bar A[1])^{\otimes k} \right)_{S_k}
\]
be the reduced cyclic words.
Then there is a natural map of complexes, and a quasi-isomorphism as we will see in Proposition \ref{prop:cycqiso} below,
\[
\overline{Cyc}(\bar A) \to \overline{CC}(A),
\]
essentially by applying the operator $B$. More concretely
\[
(a_1,\dots ,a_k)
\mapsto
\sum_{j=1}^k \pm (a_j,\dots,a_{j-1},1).
\]
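For instance, for a word of length two this map reads, with the Koszul signs left unspecified,
\[
(a_1,a_2)
\mapsto
\pm (a_1,a_2,1) \pm (a_2,a_1,1).
\]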
Now consider an $A_\infty$-map $f: A\to B$ between (unital) $A_\infty$-algebras.
The constructions above are functorial in $A$, and one obtains natural maps between the Hochschild and cyclic complexes of $A$ and $B$, induced by $f$.
Furthermore, any $\mathsf{Com}_\infty$-algebra is an $A_\infty$-algebra, and a $\mathsf{Com}_\infty$-map is an $A_\infty$-map, so the same applies to $\mathsf{Com}_\infty$-maps $f:A\to B$.
By the previous subsection, we may also replace (a fortiori) $\mathsf{Com}_\infty$ by $\widehat{\Com}_\infty$.
When dealing with words $\alpha=(a_1,\dots,a_p),\beta=(b_1,\dots,b_q)\in TX$ in the tensor algebra of a vector space $X$, we denote by
\[
\alpha\beta=(a_1,\dots,a_p,b_1,\dots,b_q)\in TX
\]
their concatenation.
Similarly, we denote by $\Sha$ the shuffle product
\[
\alpha\Sha\beta
=\sum_{\sigma \in \mathit{Sh}(p,q)} \sigma\cdot (\alpha\beta),
\]
where the sum runs over all $(p,q)$-shuffle permutations.
For example, the reduced Hochschild complex of a commutative algebra has a commutative product given by the formula
\[
(\alpha,\alpha_0)\Sha (\beta,\beta_0)
=
\pm (\alpha\Sha \beta, \alpha_0\beta_0).
\]
Note that here $\alpha_0\beta_0$ is the product of the two elements in $A$, not the juxtaposition.
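As a minimal illustration of the shuffle product (again suppressing Koszul signs), for $p=1$, $q=2$ one has
\[
(a_1)\Sha (b_1,b_2)
= (a_1,b_1,b_2) + (b_1,a_1,b_2) + (b_1,b_2,a_1).
\]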
\subsection{Pullback-pushout lemma}
We will later make use of the following result, see for example \cite[Theorem 2.4]{HessNotes} or \cite[Proposition 15.8]{FHT2}.
\begin{Thm}
Consider the pullback diagram
\[
\begin{tikzcd}
E\times_B X \ar{r} \ar{d}& E\ar{d}{p} \\
X\ar{r} & B
\end{tikzcd}
\]
where $p$ is a Serre fibration, $E$ is path connected and $X$ and $B$ are simply connected. Let $A_X\leftarrow A_B\rightarrow A_E$ be a rational dgca model for the lower right zigzag in the diagram.
Then the homotopy pushout
\[
A_X \otimes^h_{A_B} A_E
\]
is a dgca model for the pullback $E\times_B X$.
\end{Thm}
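As an illustration relevant for what follows: applying the theorem to the path space fibration $PM\to M\times M$, pulled back along the diagonal $M\to M\times M$, recovers the classical statement that for a dgca model $A$ of a simply connected closed $M$ the homotopy pushout
\[
A \otimes^h_{A\otimes A} A,
\]
i.e., the Hochschild complex of $A$, is a dgca model for the free loop space $LM=PM\times_{M\times M} M$.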
\subsection{Fulton-MacPherson-Axelrod-Singer compactification of configuration spaces}\label{sec:FM}
Consider an oriented manifold $M$. Axelrod and Singer \cite{AxelrodSinger} defined compactifications of the configuration spaces of points on $M$ by iterated real blow-ups. We denote the resulting compactified configuration space of $r$ points by $\mathsf{FM}_M(r)$.
We shall not recall the details of the compactification procedure here. We just note that a point in $\mathsf{FM}_M(r)$ can be seen as a decorated tree with $r$ leaves, with the root node decorated by a configuration of points in $M$, and the other nodes by (essentially) configurations of points in tangent spaces of $M$.
\[
\begin{tikzpicture}[baseline=1cm, scale=.8]
\draw (0,0)
.. controls (-.7,0) and (-1.3,-.7) .. (-2,-.7)
.. controls (-4,-.7) and (-4,1.7) .. (-2,1.7)
.. controls (-1.3,1.7) and (-.7,1) .. (0,1)
.. controls (.7,1) and (1.3,1.7) .. (2,1.7)
.. controls (4,1.7) and (4,-.7) .. (2,-.7)
.. controls (1.3,-.7) and (.7,0) .. (0,0);
\begin{scope}[xshift=-2cm, yshift=.6cm, scale=1.2]
\draw (-.5,0) .. controls (-.2,-.2) and (.2,-.2) .. (.5,0);
\begin{scope}[yshift=-.07cm]
\draw (-.35,0) .. controls (-.1,.1) and (.1,.1) .. (.35,0);
\end{scope}
\end{scope}
\begin{scope}[xscale=-1, xshift=-2cm, yshift=.6cm, scale=1.2]
\draw (-.5,0) .. controls (-.2,-.2) and (.2,-.2) .. (.5,0);
\begin{scope}[yshift=-.07cm]
\draw (-.35,0) .. controls (-.1,.1) and (.1,.1) .. (.35,0);
\end{scope}
\end{scope}
\node [int, label={$\scriptstyle 1$}] (v1) at (-3,.5) {};
\node [int, label={$\scriptstyle 2$}] (v2) at (2,-.1) {};
\node [int] (va) at (0,.5) {};
\begin{scope}[scale = .7, xshift=-1cm,yshift=2.75cm]
\draw (va) -- (0,0) (va) -- (3,0) (va)-- (4,2) (va) -- (1,2);
\draw[fill=white] (0,0)--(3,0)--(4,2)--(1,2)--cycle;
\node [int, label={$\scriptstyle 3$}] (v3) at (1,1) {};
\node [int] (vb) at (2,.5) {};
\begin{scope}[scale = .7, xshift=2cm,yshift=2cm]
\draw (vb) -- (0,0) (vb) -- (3,0) (vb)-- (4,2) (vb) -- (1,2);
\draw[fill=white] (0,0)--(3,0)--(4,2)--(1,2)--cycle;
\node [int, label={$\scriptstyle 4$}] (v4) at (1,1) {};
\node [int, label={$\scriptstyle 5$}] (v5) at (2,.5) {};
\end{scope}
\end{scope}
\end{tikzpicture}
\]
Similarly, one defines a version of the little disks operad $\mathsf{FM}_n$ assembled from the compactified configuration spaces of $r$ points in $\mathbb{R}^n$.
From this one may finally build a fiberwise version of the little disks operad
\[
\mathsf{FM}_n^M = \mathit{Fr}_M \times_{SO(n)} \mathsf{FM}_n,
\]
where $\mathit{Fr}_M$ is the oriented orthonormal frame bundle for some (irrelevant) choice of metric on $M$.
The collection $\mathsf{FM}_n^M$ can be seen either as an operad in spaces over $M$, or as a colored operad with colors $M$.
The collection of spaces $\mathsf{FM}_M(r)$ then assembles into an operadic right module over $\mathsf{FM}_n^M$.
For our purposes it shall suffice to understand the situation in arities $r\leq 2$.
We have
\begin{align*}
\mathsf{FM}_M(1)&=M = \mathsf{FM}_n^M(1) &\text{and} & & \mathsf{FM}_n^M(2) &= UTM,
\end{align*}
where $UTM$ is the unit tangent bundle of $M$.
The simplest instance of the operadic right action (and in fact the only instance we need) is the composition
\[
\mathsf{FM}_M(1) \times_M \mathsf{FM}_n^M(2) = UTM \to \mathsf{FM}_M(2),
\]
which is just the inclusion $UTM\cong\partial\mathsf{FM}_M(2) \to \mathsf{FM}_M(2)$.
\subsection{The Lambrechts-Stanley model of configuration space}
We shall need cochain and dg commutative algebra models for configuration spaces of (up to 2) points. The simplest of these are the models proposed by Lambrechts and Stanley.
Concretely, for a simply connected closed manifold one can find a Poincar\'e duality model $A$, see \cite{LambrechtsStanley}. This is a dg commutative algebra quasi-isomorphic to the differential forms $\Omega(M)$, exhibiting Poincar\'e duality on the cochain level. In particular, we have a coproduct $\Delta : A\to A\otimes A$ of degree $n$.
The Lambrechts-Stanley model \cite{LambrechtsStanley2} is a (tentative) dg commutative algebra model for $\mathsf{FM}_M(r)$, which can be built out of $A$.
In particular for $r=2$ this is just
\[
\mathit{cone}(A\xrightarrow{\Delta} A\otimes A).
\]
It has been shown in \cite{LambrechtsStanley3} that for $r\leq 2$ and 2-connected closed $M$ the proposed Lambrechts-Stanley model is indeed a model, i.e., quasi-isomorphic to $\Omega(\mathsf{FM}_M(2))$.
This has been extended by \cite{Idrissi, CamposWillwacher} to arbitrary $r$ and simply connected closed manifolds, provided $n\geq 4$.
\subsection{Graph complex models for configuration spaces}\label{sec:GraphsM}
For non-simply connected manifolds we cannot guarantee the existence of Poincar\'e duality models.
However, by work of Campos-Willwacher \cite{CamposWillwacher} one can still write down explicit, albeit more complicated models $\mathsf{Graphs}_M(r)$ of configuration spaces, for $M$ connected closed and oriented.
We shall only sketch the construction.
The dg vector space $\mathsf{Graphs}_M(r)$ consists (essentially) of linear combinations of (isomorphism classes of) diagrams with $r$ ``external'' vertices labelled $1,\dots,r$, and an arbitrary (finite) number of internal vertices. In addition, all vertices may be decorated by zero or more elements of the reduced cohomology $\bar H(M)=H^{\geq 1}(M)$. Finally, each connected component of a graph must contain at least one external vertex.
\[
\begin{tikzpicture}[scale=1.2]
\node[ext] (v1) at (0,0) {$\scriptstyle 1$};
\node[ext] (v2) at (.5,0) {$\scriptstyle 2$};
\node[ext] (v3) at (1,0) {$\scriptstyle 3$};
\node[ext] (v4) at (1.5,0) {$\scriptstyle 4$};
\node[int] (w1) at (.25,.5) {};
\node[int] (w2) at (1.5,.5) {};
\node[int] (w3) at (1,.5) {};
\node (i1) at (1.7,1) {$\scriptstyle \omega_1$};
\node (i2) at (1.3,1) {$\scriptstyle \omega_1$};
\node (i3) at (-.4,.5) {$\scriptstyle \omega_2$};
\node (i4) at (1.9,.4) {$\scriptstyle \omega_3$};
\node (i5) at (0.25,.9) {$\scriptstyle \omega_4$};
\draw (v1) edge (v2) edge (w1) (w1) edge (v2) (v3) edge (w3) (v4) edge (w3) edge (w2) (w2) edge (w3);
\draw[dotted] (v1) edge (i3) (w2) edge (i2) edge (i1) (v4) edge (i4) (w1) edge (i5);
\node at (3,.2) {$\in \mathsf{Graphs}_M(4)$};
\end{tikzpicture}
\]
For concreteness, we pick a homogeneous basis $(e_q)$ of $H(M)$, and we denote by $e^q = (e_q^*)$ the Poincar\'e-dual basis, such that the diagonal in $H(M\times M)$ is represented by the element $e_q \otimes e^q = \sum_q e_q\otimes e_q^*$.
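For example, for $M=S^2$ with volume class $\omega$ we may take the basis $(e_q)=(1,\omega)$, with Poincar\'e-dual basis $(e_q^*)=(\omega,1)$, so that the diagonal is represented by
\[
\sum_q e_q\otimes e_q^* = 1\otimes \omega + \omega\otimes 1 .
\]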
Then the differential in our graph complex acts by edge contraction, and replacing each edge by a diagonal in $H(M)\otimes H(M)$ (cutting the edge).
\begin{equation}\label{equ:edgecsplit}
d\,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[int] (w) at (0.7,0) {};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5) edge (w)
(w) edge +(.5,0) edge +(.5,.5) edge +(.5,-.5);
\end{tikzpicture}
=
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[int] (w) at (0,0) {};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5) edge (w)
(w) edge +(.5,0) edge +(.5,.5) edge +(.5,-.5);
\end{tikzpicture}
+
\sum_q
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[int] (w) at (1,0) {};
\node (i1) at (0.3,.5) {$\scriptstyle e_q$};
\node (i2) at (0.7,-0.5) {$\scriptstyle e_{q}^*$};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5)
(w) edge +(.5,0) edge +(.5,.5) edge +(.5,-.5);
\draw[dotted] (v) edge (i1) (w) edge (i2);
\end{tikzpicture}
\end{equation}
Here, if $e_q$ or $e_q^*$ is the unit $1\in H^0(M)$ (or a multiple thereof), we just drop the corresponding decoration. Alternatively, we could define our graph complex with decorations in the cohomology $H(M)$ instead of the reduced cohomology $\bar H(M)$, and then impose the relation that units may be dropped from any (internal or external) vertex
\begin{equation}\label{equ:onerelation}
\begin{tikzpicture}[baseline=-.65ex]
\node[int,label=90:1] (v) at (0,0) {};
\draw (v) edge +(-.5,-.5) edge +(0,-.5) edge +(.5,-.5);
\end{tikzpicture}
=
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\draw (v) edge +(-.5,-.5) edge +(0,-.5) edge +(.5,-.5);
\end{tikzpicture}\, .
\end{equation}
In fact, this relation should be regarded as purely ``cosmetic'', eventually yielding a smaller but quasi-isomorphic complex.
More severely, we note that the cutting operation in \eqref{equ:edgecsplit} might produce a graph with a connected component without external vertices, violating the connectivity condition above.
In that case, the cut-off subgraph is formally replaced by a number, via a map from such graphs to numbers. That latter map is called the partition function $Z$, and it combinatorially encodes the real homotopy type of the configuration spaces of points.
In fact, we may understand $Z$ as a Maurer-Cartan element in a dual graph complex $\mathsf{GC}_{\bar H(M)}$, whose elements are formal series in graphs without external vertices such as \eqref{equ:GCMex}, see the following subsection. Furthermore, one can separate the graphs of various loop orders present in $Z$:
\begin{equation}\label{equ:Zsplit}
Z=Z_{tree}+Z_1+Z_2+\cdots,
\end{equation}
where $Z_{tree}$ is the tree piece, $Z_1$ contains only the $1$-loop graphs, etc.
The piece $Z_{tree}$ encodes precisely the real homotopy type of $M$, i.e., the $\mathsf{Com}_\infty$ structure on $H(M)$.
The higher corrections $Z_{\geq 2}$ vanish if the dimension $n$ of $M$ satisfies $n\neq 3$.
The piece $Z_1$ also vanishes for degree reasons if $H^1(M)=0$ and can be made to vanish if $n=2$.
Below we shall see that the piece $Z_1$ of the partition function $Z$ appears in our formula for the string cobracket.
This in itself is not a contradiction to the conjectured homotopy invariance of the string topology operations. However, as we will see below this has the consequence that the $\mathrm{Diff}(M)$-action on string topology does not factor through the homotopy automorphisms.
\subsection{Graph complex (Lie algebra) $\mathsf{GC}_H$ and $\mathsf{GC}_M$, following \cite{CamposWillwacher}}\label{sec:GCM}
We shall need below a more explicit definition of the graph complex in which $Z$ above is a Maurer-Cartan element.
Generally, consider a finite dimensional graded vector space $H$ with a non-degenerate pairing $\epsilon:H\otimes H\to \mathbb{R}$ of degree $-n$. Our main example will be $H=H^\bullet(M)$, the cohomology of a closed oriented connected manifold, with the pairing provided by Poincar\'e duality. With this example in mind, we assume that the subspaces of degree 0 and $n$ are one-dimensional, and there is a distinguished element $1\in H^0$, which in our case will be the unit of the cohomology algebra. We denote the dual element of degree $n$ by $\omega$.
We denote by $\bar H=H^{\neq 0}$ the corresponding reduced version of $H$.
We also use the notation $1^*,\omega^*$ to denote dual elements in the dual space $H^*$.
For concreteness, we also pick a basis $f_q$ of $H^*$ in degrees $\neq 0,n$, and denote by $f_q^*$ the Poincar\'e-dual basis.
Then we may consider a dg Lie algebra $\mathsf{GC}_{H}'$ whose elements are series of (isomorphism classes of) connected graphs, with vertices carrying decorations by $H^*$.
There is a differential given by splitting vertices or by connecting two decorations:
\begin{align}\label{equ:GCMdelta}
\delta \,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[int] (w) at (0,0) {};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5) edge (w)
(w) edge +(.5,0) edge +(.5,.5) edge +(.5,-.5);
\end{tikzpicture}
&=
\sum
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[int] (w) at (0.7,0) {};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5) edge (w)
(w) edge +(.5,0) edge +(.5,.5) edge +(.5,-.5);
\end{tikzpicture}
&
\delta\,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[int] (w) at (1,0) {};
\node (i1) at (0.3,.5) {$\scriptstyle \alpha$};
\node (i2) at (0.7,-0.5) {$\scriptstyle \beta$};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5)
(w) edge +(.5,0) edge +(.5,.5) edge +(.5,-.5);
\draw[dotted] (v) edge (i1) (w) edge (i2);
\end{tikzpicture}
&=
\epsilon(\alpha,\beta)\,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[int] (w) at (0.7,0) {};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5) edge (w)
(w) edge +(.5,0) edge +(.5,.5) edge +(.5,-.5);
\end{tikzpicture}\, .
\end{align}
This differential should be seen as dual to the edge contraction and edge splitting \eqref{equ:edgecsplit} above.
Also, there is a Lie bracket, which is again given by pairing two decorations, replacing them by an edge, schematically:
\begin{equation}\label{equ:GCMbracket}
\left[
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[draw,circle, minimum size=7mm] (w) at (-1,0) {$\Gamma$};
\node (i1) at (0.3,.5) {$\scriptstyle \alpha$};
\draw (v) edge (w.north east) edge (w) edge (w.south east);
\draw[dotted] (v) edge (i1);
\end{tikzpicture}
,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (1,0) {};
\node[draw,circle, minimum size=7mm] (w) at (2,0) {$\Gamma'$};
\node (i1) at (0.7,-0.5) {$\scriptstyle \beta$};
\draw (v) edge (w.north west) edge (w) edge (w.south west);
\draw[dotted] (v) edge (i1);
\end{tikzpicture}
\right]
=
\epsilon(\alpha,\beta)\, \,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node[draw,circle, minimum size=7mm] (w) at (-1,0) {$\Gamma$};
\node[draw,circle, minimum size=7mm] (w2) at (1.5,0) {$\Gamma'$};
\node[int] (v2) at (.5,0) {};
\draw (v) edge (w.north east) edge (w) edge (w.south east) edge (v2);
\draw (v2) edge (w2.north west) edge (w2) edge (w2.south west);
\end{tikzpicture}
\, .
\end{equation}
There are also sign and degree conventions, which we shall largely ignore here, but refer the reader to the original reference \cite{CamposWillwacher} instead.
There is a Maurer-Cartan element
\begin{equation}\label{equ:GCHz}
z=
\sum_{j\geq 0} \frac 1 {j!} \left(
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node (i1) at (0.3,.5) {$\scriptstyle (1^*)^j$};
\node (i2) at (0.3,-0.5) {$\scriptstyle \omega^*$};
\draw[dotted] (v) edge (i1) edge (i2);
\end{tikzpicture}
+
\frac 1 2 \,
\sum_q\,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node (i3) at (0.3,.5) {$\scriptstyle (1^*)^j$};
\node (i1) at (0.3,-.5) {$\scriptstyle f_q$};
\node (i2) at (-0.3,-0.5) {$\scriptstyle f_q^*$};
\draw[dotted] (v) edge (i1) edge (i2) edge (i3);
\end{tikzpicture}
\right)
\end{equation}
where the notation $(1^*)^j$ shall indicate that $j$ copies of $1^*$ are present decorating the vertex. We shall define $\mathsf{GC}_H:= (\mathsf{GC}_H')^z$ as the twist by this MC element.
It is convenient to also define a cosmetic variant, getting rid of decorations by $1^*$ in graphs. (This is also done in \cite{CamposWillwacher}.)
More precisely, we construct a dg Lie algebra $\mathsf{GC}_{\bar H}$ by repeating the construction of $\mathsf{GC}_{H}$, except for the following differences:
\begin{itemize}
\item We only allow decorations by $\bar H^*$ in graphs.
\item In the differential \eqref{equ:GCMdelta} and bracket \eqref{equ:GCMbracket} we tacitly assume that every vertex is decorated by copies of elements $1^*$, i.e., a decoration $\omega^*$ is replaced by an edge to any vertex.
\item In the MC element $z$ of \eqref{equ:GCHz} we merely drop all terms involving decorations by $1^*$, leaving only the $j=0$-term in the outer sum.
\end{itemize}
Thus we obtain a dg Lie algebra $\mathsf{GC}_{\bar H}$.
There is a natural map of dg Lie algebras
\begin{equation}\label{equ:GCbarHGCH}
\mathsf{GC}_{\bar H} \to \mathsf{GC}_H
\end{equation}
by sending a graph to all possible graphs obtainable by adding $1$-decorations to all vertices.
Formally, to each vertex, we do the following operation
\[
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5);
\end{tikzpicture}
\, \,
\mapsto
\, \,
\sum_{j\geq 0} \frac 1 {j!}\,
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v) at (0,0) {};
\node (i1) at (0.3,.5) {$\scriptstyle (1^*)^j$};
\draw[dotted] (v) edge (i1);
\draw (v) edge +(-.5,0) edge +(-.5,.5) edge +(-.5,-.5);
\end{tikzpicture}\, .
\]
(The map is in fact a quasi-isomorphism, though we shall not use this.)
The Maurer-Cartan element $Z$ of \cite{CamposWillwacher} takes values in $\mathsf{GC}_{\bar H(M)}$.
However, given the map \eqref{equ:GCbarHGCH} we may map it to another MC element $Z\in \mathsf{GC}_H$, which we shall denote by the same letter, abusing notation.
There is one important observation, for which we refer to \cite{CamposWillwacher}.
The MC element $Z\in \mathsf{GC}_{\bar H(M)}$ may be taken to be composed of graphs which are at least trivalent, i.e., the valency of any vertex in any graph occurring is $\geq 3$.
The valency of a vertex is defined to be the number of elements in its star, which is in turn the set of half-edges and decorations incident at that vertex.
Later on we shall need this observation in the following form.
If we consider the MC element $z+Z\in \mathsf{GC}'_{H(M)}$ and restrict to its trivalent part, then the only graphs that contain decorations by $1^*$ are those coming from the part $z$, and they can be read off explicitly from \eqref{equ:GCHz} above.
\section{String topology operations}
\label{sec:stringtopology}
The goal of this section is to introduce the construction of the string product and coproduct, in the form we will be using them.
We will generally work in the cohomological setting, i.e., we will define the operations on the cohomology of the loop space, not on homology as usual.
\subsection{Preliminaries}
Let $M$ be a closed oriented manifold. Let $\mathsf{FM}_M(2)$ denote the compactified configuration space of two (labelled) points on $M$. It can be constructed as the real oriented blowup of $M \subset M \times M$, and thus its boundary can be identified with the unit tangent bundle $UTM$. It naturally fits into the following commuting square
\begin{equation}\label{diag:preucolim}
\begin{tikzcd}
UTM \ar[r] \ar[d]& \mathsf{FM}_M(2) \ar[d] \\
M \ar[r] & M \times M,
\end{tikzcd}
\end{equation}
which is actually a homotopy pushout as the map $UTM \to \mathsf{FM}_M(2)$ is a cofibration. The vertical homotopy cofibers are two versions of the Thom space of $M$. The diagram hence gives a homotopy equivalence between the Thom space $DM / UTM$ (with $DM\to M$ the unit disk bundle) and $M \times M / (M \times M \setminus M)$ that does not depend on a tubular neighborhood embedding. We will exploit this fact in our construction of string topology operations.
As an example, consider the map $H_\bullet(M\times M) \to H_{\bullet-n}(M)$ obtained by intersecting with the diagonal. Given the above observation we may realize the dual map on cohomology by the zigzag
\begin{equation}\label{equ:example_diag}
\begin{tikzcd}
H^\bullet(M\times M) & \ar[l] H^\bullet( M \times M , \mathsf{FM}_M(2)) \ar[d, "\simeq"] & & \\
& H^\bullet(M, UTM) & \ar[l, "\wedge Th"] H^{\bullet-n}(M),
\end{tikzcd}
\end{equation}
where the last arrow is the Thom isomorphism (multiplication by the Thom form) and the fact that the vertical arrow is an isomorphism uses that \eqref{diag:preucolim} is a homotopy pushout.
While this zigzag might not be the simplest expression for the intersection with the diagonal, it has the advantage that it is relatively straightforward to realize on the cochain complex level, without many ``artificial choices''.
The construction of the string topology operations prominently involves the intersection with the diagonal, and hence we will be using this example below. We shall need a slight extension, allowing for pullbacks.
To this end, let us note the following.
\begin{Lem}
\label{lem:ucolim}
Let $E \to M \times M$ be a fibration. Then the following diagram (of pullbacks) is a homotopy pushout.
\begin{equation}
\label{diag:ucolim}
\begin{tikzcd}
E|_{UTM} \ar[r] \ar[d]& E|_{\mathsf{FM}_M(2)} \ar[d] \\
E|_{M} \ar[r] & E,
\end{tikzcd}
\end{equation}
in particular the maps between cofibers are equivalences.
\end{Lem}
\begin{proof}
The diagram is clearly a pushout, and the map $E|_{UTM} \to E|_{\mathsf{FM}_M(2)}$ is a cofibration. Alternatively, this is Mather's cube theorem, visualizing \eqref{diag:ucolim} as the top face of a cube with bottom face \eqref{diag:preucolim}.
\end{proof}
Let $PM \to M \times M$ be the path space fibration of $M$. We will denote by $LM$ the free loop space $M^{S^1} = PM \times_{M \times M} M$.
\subsection{String product (Cohomology coproduct)}
We define the string product
$$
H^\bullet(LM) \otimes H^\bullet(LM) \longleftarrow H^{\bullet -n}(LM),
$$
to be the composite of the maps
\begin{equation}\label{equ:product_zigzag}
\begin{tikzcd}
H^\bullet(LM) \otimes H^\bullet(LM) & \ar[l] H^\bullet( LM \times LM , LM \times^\prime LM) \ar[d, "\simeq"] & & \\
& H^\bullet(\op{Map}(8), \op{Map}^\prime(8)) & \ar[l, "\wedge Th"] H^{\bullet-n}(\op{Map}(8)) & \ar[l] H^{\bullet-n}(LM),
\end{tikzcd}
\end{equation}
which we will describe now. We apply Lemma \ref{lem:ucolim} to the fibration $LM \times LM \to M \times M$ to obtain the following homotopy pushout diagram
$$
\begin{tikzcd}
\op{Map}^\prime(8) \ar[r] \ar[d]& LM \times^\prime LM \ar[d] \\
\op{Map}(8) \ar[r] & LM \times LM,
\end{tikzcd}
$$
where each entry is defined to be the fiber product of $LM \times LM$ with the corresponding term in \eqref{diag:preucolim} over $M \times M$, for example
\[
\op{Map}'(8):= (LM\times LM) \times_{M\times M} UTM,
\]
which can be thought of as the space of figure eights in $M$ together with a tangent vector at the node of the eight.
This defines all the spaces in the definition of the string product and explains the vertical isomorphism in \eqref{equ:product_zigzag}. The third map in \eqref{equ:product_zigzag} is given by multiplying with the image of the Thom class $Th \in H^n(M, UTM)$ in $H^n(\op{Map}(8), \op{Map}^\prime(8))$. The last map is induced by the natural map $\op{Map}(8) \to LM$, traversing both "ears" of the figure 8, in a fixed order.
\begin{Rem}
Since later it will be more natural to work in the cohomological setting, but the string topology operations have a geometric meaning, we still call the above map a product, even though it has the signature of a coproduct. To make up for this we will often write the maps from right to left.
\end{Rem}
The diagram \eqref{equ:product_zigzag} is obtained from the chain-level version of diagram \eqref{equ:example_diag} by taking fiber product with a certain fibration. More precisely, let us consider the following diagram of pairs of spaces
$$
\begin{tikzcd}
M \times M \ar[r] & (M \times M, \mathsf{FM}_M(2)) & \\
& \ar[u, "\simeq"] (M, UTM) \ar[r, dashed, "Th"] & M,
\end{tikzcd}
$$
where the dashed map is the Thom isomorphism and exists on chain level.
In particular, this induces the corresponding maps upon taking the fiber product with $LM \times LM \to M \times M$.
The map
$$
\begin{tikzcd}
H^\bullet(\op{Map}(8), \op{Map}^\prime(8)) & \ar[l, "\wedge Th"'] H^{\bullet-n}(\op{Map}(8)),
\end{tikzcd}
$$
on chain level is given by taking the cup product with the pullback of the Thom class $Th \in C^n(M, UTM)$ along the map of pairs
$$
\begin{tikzcd}
(\op{Map}(8), \op{Map}^\prime(8)) \ar[r] & (M , UTM).
\end{tikzcd}
$$
\subsection{String coproduct (cohomology product)}
The coproduct operation we are interested in is defined on loops relative to constant loops, i.e.
$$
H^\bullet(LM, M) \longleftarrow H^\bullet(LM, M) \otimes H^\bullet(LM, M) [n-1].
$$
We will later see that it admits a natural lift to $H^\bullet(LM)$ in case there is a nowhere-vanishing vector field on $M$ (equivalently, $M$ has Euler characteristic $0$). The coproduct will be constructed in a similar fashion to the product, but this time we apply Lemma \ref{lem:ucolim} to the fibration $E = PM \times_{M\times M} PM \to M \times M$. We will identify $PM \times_{M\times M} PM$ with $\op{Map}(\bigcirc_2)$, the space of two-pointed loops in $M$. Taking the fiber product with the diagram \eqref{diag:preucolim} we obtain the homotopy pushout
$$
\begin{tikzcd}
\op{Map}^\prime(8) \arrow{r} \arrow{d} & \op{Map}^\prime(\bigcirc_2) \arrow{d} \\
\op{Map}(8) \arrow{r} & \op{Map}(\bigcirc_2).
\end{tikzcd}
$$
Let us define the splitting/reparametrization map
\begin{equation}\label{equ:splittingmap_def}
\begin{aligned}
s : I \times LM &\longrightarrow \op{Map}(\bigcirc_2) \\
(t, \gamma) &\longmapsto \left(s \mapsto \begin{cases} \gamma(2st) &\mbox{if } 0 \leq s \leq \tfrac{1}{2} \\ \gamma(2t(1-s) + 2(s-\tfrac{1}{2})) &\mbox{if } \tfrac{1}{2} \leq s \leq 1 \end{cases}\right).
\end{aligned}
\end{equation}
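Note that, as a consistency check, at the endpoints of the interval one of the two arcs degenerates to a constant:
\[
s(0, \gamma) = \left(\mathrm{const}_{\gamma(0)}, \gamma\right),
\qquad
s(1, \gamma) = \left(\gamma, \mathrm{const}_{\gamma(0)}\right),
\]
writing a two-pointed loop as its pair of arcs.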
This map sends the boundary of the interval into $F := LM \coprod_M LM \subset \op{Map}(\bigcirc_2)$, that is, the space of loops with two marked points where one of the two arcs is mapped to a constant. We thus obtain a map
$$
H^\bullet(LM, M) \overset{s^*}{\longleftarrow} H^{\bullet+1}(\op{Map}(\bigcirc_2), LM \coprod_M LM).
$$
Note that $F = LM \coprod_M LM \to M$ is naturally a fibration. We are in the situation of having a fibration $E \to M \times M$ together with a map $F \to E|_{M}$ and we seek to construct a map
$$
H^\bullet(E,F) \to H^{\bullet -n} (E|_{M}, F),
$$
by "intersecting with the diagonal".
Given a Thom class $Th \in H^n(M\times M, \mathsf{FM}_M(2))$, the authors of \cite{GoreskyHingston,HingstonWahl}
construct such a map by multiplying a relative cochain in $C^\bullet(E,F)$ with a representative of the Thom class which has support in a tubular neighborhood of $M \subset U_\epsilon \subset M \times M$, thus obtaining a cochain in $C^{\bullet + n}( E|_{U_\epsilon}, F)$ and then composing with a retraction of the bundle $E|_{U_\epsilon}$ to $E|_M$.
We shall rather use an extension of \eqref{equ:example_diag}. Recall that the vertical homotopy cofibers in the diagram
$$
\begin{tikzcd}
M \times M & \ar{l} M \\
\mathsf{FM}_M(2) \ar{u} & \ar{l} \ar{u} UTM
\end{tikzcd}
$$
are both copies of the Thom space $DM / UTM$, and the induced map is an equivalence. We will again pull back the fibrations $E$ and $F$ along these maps, to obtain the following maps of pairs (we denote cofibers by ordinary quotients here)
$$
\begin{tikzcd}
(E,F) \ar{r} & (E / E|_{\mathsf{FM}_M(2)}, F / F|_{UTM} ) \\
& \ar{u}{\simeq} ( E|_M / E|_{UTM}, F / F|_{UTM} ).
\end{tikzcd}
$$
Now we are left with a pair of fibrations over the Thom pair $(M , UTM)$ and we can apply the Thom isomorphism, that is, we multiply with a Thom form $Th \in \Omega^n(M, UTM)$.
Note that we can identify $E|_M$ with $\op{Map}(8)$ and under this identification, $F$ corresponds to the space of figure 8's with at least one ear constant. We define the loop coproduct to be the composite
\begin{equation}\label{equ:string coproduct zigzag}
\begin{tikzcd}
H^\bullet(LM,M) & H^{\bullet+1}(\op{Map}(\bigcirc_2), LM \coprod_M LM) \ar[l]& H^{\bullet+1-n}(\op{Map}(8), F) \ar[l] & H^\bullet(LM,M)^{\otimes 2}[n-1] \ar[l]
\end{tikzcd}
\end{equation}
where the last map is induced by the map $\op{Map}(8) \to LM \times LM$. On the "space"-level the diagram is given by
\begin{equation}
\label{eqn:defredcop}
\begin{tikzcd}
\frac{LM}{M} \ar[dashed]{r}{\text{suspend}} &\frac{I \times LM}{\partial I \times LM\cup I\times M} \ar[r, "s"] & \frac{\op{Map}(\bigcirc_2)}{F} \ar[r] & \frac{\op{Map}(\bigcirc_2) / \op{Map}^\prime(8)}{ F/ F|_{UTM}} & \\
&&& \frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}} \ar[u, "\simeq"] \ar[r, dashed, "Th"] & \frac{\op{Map}(8)}{F},
\end{tikzcd}
\end{equation}
where we again wrote cofibers and cofibers of cofibers as fractions and pullbacks as restrictions. Note that the main part of this diagram is induced by maps on the base after taking fiber product with $E$ and $F$ over $M \times M$ and $M$, respectively. More concretely, we have the following diagram of pairs of pairs
$$
\begin{tikzcd}
\frac{M \times M}{M} \ar[r] & \frac{(M \times M, \mathsf{FM}_M(2) )}{( M, UTM)} & \\
& \frac{(M, UTM)}{(M, UTM)} \ar[u, "\simeq"] \ar[r, dashed, "Th"] & \frac{M}{M},
\end{tikzcd}
$$
where in the numerator we have pairs of spaces over $M \times M$ and in the denominator pairs of spaces over $M$. To get the previous diagram \eqref{eqn:defredcop}, one takes fiber product of the numerator with $E$ over $M\times M$ and of the denominator with $F$ over $M$ and realizes the corresponding cofibers.
\begin{Rem}
If one wishes to invert the above homotopy equivalence to write down an "actual" map, one needs to produce an inverse map of pairs of pairs that induces a homotopy inverse after taking the fiber products and taking the cofibers. Since we realize cofibers later, a "map" of pairs does not need to be defined everywhere. For instance, a "map" $M \times M \to (M, UTM)$ can be given by describing the map on a tubular neighborhood of the diagonal and providing a map to $UTM$ on the punctured tubular neighborhood. The problem is that the pair $(M, UTM)$ in the numerator is not fibrant as a pair of spaces over $M\times M$, and thus we need to find a fibrant replacement, for instance $(M,UTM) \to (PM \times_M PM, PM\times_M UTM \times_M PM)$. Then a "map" $M \times M \to (PM \times_M PM, PM\times_M UTM \times_M PM)$ is obtained by connecting points that are close and providing the corresponding vector if they are not equal.
\end{Rem}
\subsection{String coproduct ($\chi(M) = 0$ case)}
\label{sec:framedcoprod}
Let $M$ be a closed manifold with trivialized Euler class. In particular, we can assume that the Thom class is represented by a fiberwise volume form on $UTM$; that is, $Th \in C^{n-1}(UTM) \subset C^{n}(M, UTM)$ is closed. In this case, one can lift the coproduct to a map
$$
H^\bullet(LM) \longmapsfrom H^\bullet(LM)^{\otimes 2}[n-1]
$$
defined by the following zig-zag
$$
\begin{tikzcd}
H^\bullet(LM) & \arrow{l}{s^*} H^{\bullet-1}(\op{Map}(\bigcirc_2),\op{Map}(8)) \arrow{d}{\simeq} \\
& H^{\bullet-1}(\op{Map}^\prime(\bigcirc_2),\op{Map}^\prime(8)) & \arrow{l}{\delta} H^{\bullet-2}(\op{Map}^\prime(8)) & \arrow[l, "\wedge Th"] H^{\bullet-n-1}(\op{Map}(8)).
\end{tikzcd}
$$
It is induced by the following maps of spaces
$$
\begin{tikzcd}
(I, \partial I) \times LM \arrow[r, "s"] & (\op{Map}(\bigcirc_2),\op{Map}(8)) \\
& (\op{Map}^\prime(\bigcirc_2),\op{Map}^\prime(8)) \arrow{u}{\simeq} \arrow{r} & \Sigma\op{Map}^\prime(8) \arrow{r} & Th\op{Map}(8),
\end{tikzcd}
$$
where the last map is induced by the Thom collapse along the embedding $M \to UTM$ (implicitly given by the trivialization of the Euler class). On chain level it is given by multiplying with the pullback of the fiberwise volume form $Th \in C^{n-1}(UTM)$ along the map $\op{Map}'(8) \to UTM$.
The vertical excision isomorphism follows again from Lemma \ref{lem:ucolim} applied to the fibration $\op{Map}(\bigcirc_2) = PM \times_{M\times M} PM \to M \times M$, where this time we consider the horizontal cofibers.
The entire zig-zag except for the splitting map is obtained from the following zig-zag by taking the fiber product with the fibration $\op{Map}(\bigcirc_2) = PM \times_{M\times M} PM \to M \times M$.
\begin{equation}
\label{equ:1frdefcop}
\begin{tikzcd}
(M \times M, M) \\
\ar[u] (\mathsf{FM}_M(2), UTM) \ar[r] & \Sigma UTM = (pt, UTM) \ar[r, dashed, "Th"] & M
\end{tikzcd}
\end{equation}
\subsection{Definition of string bracket and cobracket}
\label{sec:stringbracketdef}
The original version of the string bracket and cobracket \cite{ChasSullivan,ChasSullivan2} was defined on the equivariant (co)homology of $LM$ relative to the constant loops, $H^\bullet_{S^1}(LM,M)$.
For our purposes, we consider a version of the definition using the string coproduct, provided essentially by Goresky and Hingston \cite[section 17]{GoreskyHingston}, see also \cite{HingstonWahl}.
To this end consider the $S^1$-bundle $\pi:LM\simeq LM\times ES^1\to LM_{S^1}$.
This gives rise to the Gysin long exact sequence for equivariant cohomology
\[
\cdots \to H^{\bullet-1}(LM) \xrightarrow{\pi_!} H_{S^1}^\bullet(LM)
\xrightarrow{\cdot u} H_{S^1}^{\bullet+2}(LM) \xrightarrow{\pi^*}
H^{\bullet+2}(LM) \to \cdots
\]
One has a similar sequence for reduced (equivariant) cohomology.
Now, we define the string bracket (cohomology cobracket) operation (up to sign) as the composition
\begin{equation}\label{equ:defbracket}
\bar H^\bullet_{S^1}(LM)
\xrightarrow{\pi^*}
\bar H^\bullet(LM)
\xrightarrow{\cdot}
(\bar H^\bullet(LM)\otimes \bar H^\bullet(LM))[n]
\xrightarrow{\pi_!\otimes \pi_!}
(\bar H^\bullet_{S^1}(LM)\otimes \bar H^\bullet_{S^1}(LM))[n-2].
\end{equation}
Here $\cdot$ is the string product.
For the string cobracket (cohomology bracket) we similarly use the composition (up to sign)
\begin{equation}\label{equ:defcobracket}
\bar H^\bullet_{S^1}(LM)\otimes \bar H^\bullet_{S^1}(LM)
\xrightarrow{\pi^*\otimes \pi^*}
\bar H^\bullet(LM)\otimes \bar H^\bullet(LM)
\to
H^\bullet(LM,M)\otimes H^\bullet(LM,M)
\xrightarrow{*}
H^\bullet(LM,M)[n-1]
\to \bar H^\bullet(LM)[n-1]
\xrightarrow{\pi_!}
\bar H^\bullet_{S^1}(LM)[n-2],
\end{equation}
where the map $H^\bullet(LM,M) \to \bar{H}^\bullet(LM)$ uses that $M \subset LM$ is a retract.
\begin{Rem}
Note that the map $H^\bullet(LM,M) \to \bar H^\bullet(LM)$ does not depend on choosing a basepoint. More precisely, in our convention $\bar H^\bullet(LM) = H^\bullet( \text{pt}, LM)[1]$, i.e. it is the cohomology of the cofiber of the map $LM \to \text{pt}$ shifted by one (and not relative to a basepoint). The map above then comes from the fact that the long exact sequence associated to the cofiber diagram
$$
\begin{tikzcd}
\frac{LM}{M} \ar[r] &\frac{\text{pt}}{M} \ar[r] & \frac{\text{pt}}{LM}
\end{tikzcd}
$$
splits since $\frac{\text{pt}}{M} \to \frac{\text{pt}}{LM}$ is a retract.
\end{Rem}
In the $\chi(M) = 0$ case, there is no need to work relative to the constant loops or with reduced cohomology. The bracket and cobracket are then defined on $H^\bullet_{S^1}(LM)$.
\section{Cochain complex models}\label{sec:cochain models}
Having defined our version of the string topology operations on cohomology, the goal of this section and the next is to introduce concrete cochain complexes that compute these cohomologies. Furthermore, we will find the concrete maps on the cochain level that realize, for example, the zigzag \eqref{equ:product_zigzag} for the string product and its variant for the coproduct.
These chain complex models will eventually allow us to prove the main theorems \ref{thm:main_1}-\ref{thm:main_3} in later sections, by explicitly tracing cochains through the zigzags.
We also note that in the non-simply-connected situation our "models" are not actually models, i.e., their cohomology generally differs from that of the loop spaces considered.
However, note that in this situation (Theorem \ref{thm:main_3}) our only goal is to check that the map in one direction, from the cyclic chains to the cohomology $\bar{H}_{S^1}(LM)$, respects the Lie bialgebra structure, not that the map is an isomorphism, which would be false in general.
\subsection{Iterated integrals and model for path spaces}\label{sec:it int}
For now, let $A := \Omega^\bullet(M)$ denote the algebra of differential forms on $M$. We denote by $B = B(A,A,A)$ the two-sided bar construction, namely
$$
B(A,A,A) = \bigoplus_{n\geq 0} A \otimes \bar{A}[1]^{\otimes n} \otimes A.
$$
We recall that since $A$ is commutative, $B$ is a commutative dg algebra with the shuffle product. It is moreover a coalgebra in $A$-bimodules under the natural deconcatenation coproduct
\begin{align*}
B &\longrightarrow B \otimes_A B \\
\alpha = (\alpha_0 | \alpha_1 \dots \alpha_k | \alpha_{-1}) &\longmapsto \alpha' \otimes \alpha'' = \sum_i (\alpha_0 | \alpha_1 \dots \alpha_i | 1 | \alpha_{i+1} \dots \alpha_k | \alpha_{-1}),
\end{align*}
with counit $\epsilon: B \to A$.
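For instance, on a bar word with a single bar letter the above shorthand unravels to
\[
(\alpha_0 | \alpha_1 | \alpha_{-1}) \longmapsto (\alpha_0 | 1) \otimes_A (1 | \alpha_1 | \alpha_{-1}) + (\alpha_0 | \alpha_1 | 1) \otimes_A (1 | \alpha_{-1}).
\]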
Following Chen \cite{Chen}, we define
\begin{align*}
ev_n \colon \Delta^n \times PM &\longrightarrow M \times M^{n} \times M \\
((t_1, \ldots, t_n), \gamma) &\longmapsto (\gamma(0), \gamma(t_1), \ldots, \gamma(t_n), \gamma(1)),
\end{align*}
where $\Delta^n = \{ (t_1, \ldots, t_n ) \ | \ 0 \leq t_1 \leq \ldots \leq t_n \leq 1\}$ is the standard simplex, and the fiber integral
$$
\int_{\Delta^n} \colon \Omega^\bullet( \Delta^n \times PM) \to \Omega^\bullet(PM).
$$
We hence define
\begin{align*}
\int : B &\to \Omega^\bullet(PM), \\
\int &= \bigoplus_{n \geq 0} \int_{\Delta^n} \circ \ ev_n^*.
\end{align*}
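For example, on a one-letter word $(1 | a | 1)$ with $a \in \bar A$ the map $\int$ recovers Chen's classical iterated integral
\[
\int (1 | a | 1) = \int_{\Delta^1} ev_1^*(1 \otimes a \otimes 1) \in \Omega^{|a|-1}(PM),
\]
the fiber integral over $\Delta^1 = I$ of the pullback of $a$ under the evaluation $(t_1, \gamma) \mapsto \gamma(t_1)$.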
\begin{Lem}
\label{lem:barpath}
The map $\int: B \to \Omega^\bullet(PM)$ is a quasi-isomorphism of algebras. Furthermore, the following diagrams commute
$$
\begin{tikzcd}
B \ar{r}{\int} & \Omega^\bullet(PM) \\
A \otimes A \ar{u} \ar{r} & \Omega^\bullet(M \times M) \ar{u}
\end{tikzcd}
\quad
\begin{tikzcd}
B \ar{d}{\epsilon} \ar{r}{\int} & \Omega^\bullet(PM) \ar{d}{const^*} \\
A \ar{r} & \Omega(M)
\end{tikzcd}
\quad
\begin{tikzcd}
B \ar{d} \ar{r}{\int} & \Omega^\bullet(PM) \ar{d} \\
B \otimes_A B \ar{r} & \Omega(PM \times_M PM)
\end{tikzcd}
$$
\end{Lem}
We may hence take the bar construction $B$ as our model for $PM$, in the sense that $B$ is a dgca with a quasi-isomorphism to $\Omega^\bullet(PM)$.
Furthermore, we shall later be flexible and replace $A=\Omega(M)$ by different dgca models for $M$, denoted also by $A$, slightly abusing notation. We can then still use the two-sided bar construction $B:=B(A,A,A)$ as our model of $PM$.
In practice, we shall not use the dgca structure, but only the $A$-bimodule structure on $B$, and use that $B$ is cofibrant as an $A$-bimodule.
\subsection{Splitting map}
\label{sec:splittingmap}
Let $\mathcal{I} = \mathbb{R} (1-t) \oplus \mathbb{R} t \oplus \mathbb{R} dt \subset \Omega^\bullet(I)$ denote the space of Whitney forms on the interval with projection
\begin{align*}
\Omega^\bullet(I) &\longrightarrow \mathcal{I} \\
f &\longmapsto f(0)(1-t) + f(1) t + \left(\int_I f\right) dt.
\end{align*}
The splitting map $s$ (see \eqref{equ:splittingmap_def}) can be derived from a map
$$
I \times PM \to PM \times_M PM,
$$
given by the same formula which induces
$$
\Omega^\bullet(PM \times_M PM) \to \mathcal{I} \otimes \Omega^\bullet(PM).
$$
\begin{Prop}
The following diagram commutes
$$
\begin{tikzcd}
\mathcal{I} \otimes B \ar{d} & \ar{l} \ar{d} B \otimes_A B \\
\mathcal{I} \otimes \Omega^\bullet(PM) & \ar{l} \Omega^\bullet(PM \times_M PM),
\end{tikzcd}
$$
where
\begin{align*}
B \otimes_A B &\longrightarrow \mathcal{I} \otimes B \\
(x | \alpha | y | \beta | z) \longmapsto &(1-t) \otimes (\epsilon(x\alpha y)| \beta | z) \\
& + t \otimes (x | \alpha | \epsilon(y \beta z)) \\
& + dt \otimes (-1)^{|x| + |\alpha| + |y|} (x | \alpha y \beta | z).
\end{align*}
\end{Prop}
\begin{proof}
The formula for the first two components follows from Lemma \ref{lem:barpath}. For the third component we note that the following diagram commutes
$$
\begin{tikzcd}
\Delta^n \times PM \times \Delta^m \times PM \ar{r} & M \times M^n \times M \times M \times M^m \times M \\
\Delta^n \times \Delta^m \times I \times PM \ar{u} \ar{r} \ar{d}& M \times M^n \times M \times M^m \times M \ar{u} \ar{d}\\
\Delta^n \star \Delta^m \times PM \ar{r} & M \times M^{m+n+1} \times M.
\end{tikzcd}
$$
\end{proof}
Note that the original splitting map \eqref{equ:splittingmap_def}
$$
I \times LM \longrightarrow \op{Map}(\bigcirc_2)
$$
is obtained from $I \times PM \to PM \times_M PM$ by pulling back along the diagonal $M \to M \times M$. Hence we obtain the following
\begin{Prop}
\label{prop:splitmap}
The following diagram commutes.
$$
\begin{tikzcd}
\Omega^\bullet(LM) & \arrow{l}{s^*} \Omega^\bullet(\op{Map}(\bigcirc_2)) \oplus \Omega^\bullet(F)[1]& \arrow{l} \Omega^\bullet(\op{Map}(\bigcirc_2)) \oplus \Omega^\bullet(\op{Map}(8))[1] \\
B \otimes_{A^e} A \arrow{u}& \arrow{u} \arrow{l} (B \otimes_A B) \otimes_{A^e} A \ \oplus \ ((B \overset{A}{\oplus} B) \otimes_{A^{\otimes 4}} A)[1]& \arrow{u} \arrow{l} (B \otimes_A B) \otimes_{A^e} A \ \oplus \ ((B \otimes B) \otimes_{A^{\otimes 4}} A)[1]
\end{tikzcd}
$$
\end{Prop}
where the map in the lower left is given by
\begin{equation}\label{equ:splitmap cochains}
\begin{aligned}
B \otimes_{A^e} A &\longleftarrow (B \otimes_A B) \otimes_{A^e} A \quad \oplus \quad ((B \overset{A}{\oplus} B) \otimes_{A^{\otimes 4}} A)[1] \\
\pm (\alpha x \beta | y) &\longmapsfrom ((\alpha | x | \beta | y), 0) \\
(\alpha |x ) - (\beta | x) &\longmapsfrom (0, (\alpha \otimes 1 + 1 \otimes \beta | x))
\end{aligned}
\end{equation}
and the map on the lower right is induced by the natural projection $B \otimes B \to (B \otimes A) \oplus (A \otimes B) \to B \overset{A}{\oplus} B$.
\begin{Rem}
The vertical maps are quasi-isomorphisms in the simply-connected case.
\end{Rem}
\subsection{Connes differential}\label{sec:Connesdiff}
The action of $S^1$ on $LM = PM \times_{M \times M} M$ is induced by the splitting map $s$ as follows.
$$
\begin{tikzcd}
I \times PM \times_{M \times M} M \ar[r, "s"] & ( PM \times_M PM) \times_{M \times M} M \ar[r, "m^\text{op}"] & PM \times_{M \times M} M,
\end{tikzcd}
$$
where $m^\text{op}$ is concatenation in the opposite order. That is, rotating a loop is the same as splitting it at every point and concatenating the two pieces in the opposite order. Under the map $B \otimes_{A^e} A \to \Omega^\bullet(LM)$ the action of the fundamental class of the circle is then computed as
$$
\begin{tikzcd}
B \otimes_{A^e} A \ar[r, "\Delta^\text{op}"] & (B \otimes_A B) \otimes_{A^e} A \ar[r] & B \otimes_{A^e} A \\
(\alpha | x) \ar[r, mapsto] & \pm (\alpha''|x|\alpha'|1) \ar[r, mapsto] & \pm (\alpha'' x \alpha' | 1),
\end{tikzcd}
$$
which is the standard formula for Connes' B-operator.
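Explicitly, writing an element of $B \otimes_{A^e} A$ as a word $(a_1 \dots a_k | x)$ with bar letters $a_i \in \bar A$ and module element $x \in A$, the above composite reads, up to signs,
\[
(a_1 \dots a_k | x) \longmapsto \sum_{i=0}^{k} \pm \, (a_{i+1} \dots a_k \, x \, a_1 \dots a_i | 1),
\]
with $x$ inserted as a new bar letter.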
\subsection{Space of figure eights}
\label{sec:figeight}
In the definition of the loop product we also need the map
$$
\op{Map}(8) \to LM
$$
that concatenates the two loops. Since this is induced by the map $PM \times_M PM \to PM$ which is modelled by the deconcatenation coproduct on $B$, we get that the map $\op{Map}(8) \to LM$ is modelled by
\begin{align}
\label{equ:concatloop}
\begin{split}
(B \otimes_{A^e} A) &\longrightarrow (B \otimes B) \otimes_{A^{\otimes 4}} A \\
(\alpha|x) &\longmapsto (\alpha' \otimes \alpha'' | x).
\end{split}
\end{align}
To get the usual coproduct we have to compose with the map
$$
\op{Map}(8) \to LM \times LM,
$$
that reads off each ear separately. This is just the diagonal $M \to M \times M$ fiber product with $LM \times LM$, and hence modelled by
\begin{align}
\label{equ:splitloop}
\begin{split}
(B \otimes_{A^e} A) \otimes (B \otimes_{A^e} A) &\longrightarrow (B \otimes B) \otimes_{A^{\otimes 4}} A \\
(\alpha|x) \otimes (\beta|y) &\longmapsto \pm (\alpha \otimes \beta| xy).
\end{split}
\end{align}
\subsection{$S^1$-equivariant loops}
It has been shown by Jones \cite{Jones} that the $S^1$-equivariant cohomology of the loop space of a simply connected manifold can be computed via the negative cyclic homology of a dgca model $A$ of the manifold,
\[
HC^-_{-\bullet}(A)\xrightarrow{\cong} H^\bullet_{S^1}(LM).
\]
More precisely, let $(B \otimes_{A^e} A [[u]], d + uB)$ be the negative cyclic complex, where $B$ is the Connes operator of section \ref{sec:Connesdiff} (abusing notation, as $B$ also denotes the bar construction). Then there is a map $(B \otimes_{A^e} A [[u]], d + uB) \to \Omega^\bullet(LM \times_{S^1} ES^1)$ (for the standard bar resolution $LM \times_{S^1} ES^1$) that induces an isomorphism on cohomology in the case where $M$ is simply connected. Moreover, the Gysin sequence induced by the fiber sequence $S^1 \to LM \to LM_{S^1}$ is modelled by
$$
\begin{tikzcd}
B \otimes_{A^e} A \ar[r, "B"] \ar[d] & B \otimes_{A^e} A[[u]] \ar[r, "\cdot u"] \ar[d] & B \otimes_{A^e} A[[u]] \ar[r, "u = 0"] \ar[d] & B \otimes_{A^e} A \ar[d] \\
\Omega^{\bullet-1}(LM) \ar[r, "\pi_!"] & \Omega^\bullet(LM_{S^1}) \ar[r] & \Omega^{\bullet+2}(LM_{S^1}) \ar[r, "\pi^*"] & \Omega^{\bullet+2}(LM),
\end{tikzcd}
$$
(see for instance \cite[Theorem 4.3]{Loday2}).
Let us moreover recall the following result
\begin{Prop}\label{prop:cycqiso}
Let $A$ be a connected graded algebra. Then
$$
B \otimes_{A^e} A \oplus \mathbb{R}[[u]] \overset{B}{\longrightarrow} (B \otimes_{A^e} A [[u]], d + uB)$$
factors through cyclic coinvariants $B \otimes_{A^e} A \to \op{Cyc}(\bar{A})$ and induces a quasi-isomorphism
$$
\overline{Cyc}(\bar{A}) \oplus \mathbb{R}[[u]] = \op{Cyc}(\bar{A}) \oplus u\mathbb{R}[[u]] \overset{B}{\longrightarrow} (B \otimes_{A^e} A [[u]], d + uB).
$$
\end{Prop}
\begin{proof}
This can for instance be extracted from \cite{Goodwillie}, who shows that periodic cyclic homology is that of a point in this case; hence negative cyclic homology is essentially cyclic homology, which can be computed in terms of cyclic words (see for instance \cite{Loday}).
\end{proof}
Under this quasi-isomorphism we get the following descriptions for $\pi_!$ and $\pi^*$ of section \ref{sec:stringbracketdef}.
\begin{Lem}
\label{lem:equivtohoch}
The following diagrams commute
$$
\begin{tikzcd}
B \otimes_{A^e} A \ar[r, "pr"] \ar[d] & \op{Cyc}(\bar{A}) \oplus u\mathbb{R}[[u]] \ar[d] \\
\Omega^{\bullet}(LM) \ar[r, "\pi_!"] & \Omega^{\bullet+1}(LM_{S^1})
\end{tikzcd}
\quad
\begin{tikzcd}
\op{Cyc}(\bar{A}) \oplus u\mathbb{R}[[u]] \ar[r, "\iota"] \ar[d] & B \otimes_{A^e} A \ar[d] \\
\Omega^\bullet(LM_{S^1}) \ar[r, "\pi^*"] & \Omega^{\bullet}(LM),
\end{tikzcd}
$$
where $pr : B \otimes_{A^e}A \to \op{Cyc}(\bar{A})$ is the natural projection (where the element $1$ is sent to the empty cyclic word), and $\iota: \op{Cyc}(\bar{A}) \to B \otimes_{A^e} A$ sends a cyclic word $x_1\dots x_n \mapsto \sum_i \pm (x_{1+i}x_{2+i}\ldots x_{n+i} | 1)$.
\end{Lem}
We also consider the following variant of this. One has the reduced equivariant cohomology $\bar H_{S^1}(LM)$, which fits into a split short exact sequence
\[
0 \to \bar H_{S^1}(LM) \to H^\bullet_{S^1}(LM) \to H^\bullet_{S^1}(*) \to 0.
\]
By naturality of the above construction we get a natural map
\[
\overline{Cyc}(\bar A) \to \bar H_{S^1}(LM),
\]
from the reduced cyclic complex, which is a quasi-isomorphism for $M$ simply-connected (see also \cite{ChenEshmatovGan}).
\section{Cochain zigzags for the string product and bracket}\label{sec:cochain zigzags}
In this section we shall use the cochain models of section \ref{sec:cochain models} to obtain explicit descriptions of the string product and coproduct, as defined in section \ref{sec:stringtopology}.
\subsection{Product}
The zigzag \eqref{equ:product_zigzag} defining the string product on cohomology is realized on cochains by the zigzag
\[
\begin{tikzcd}
\Omega^\bullet(LM\times LM) & \ar{l} \op{Tot}\left( \Omega^\bullet(LM\times LM) \to \Omega^\bullet(LM\times' LM) \right) \ar{d}{\simeq} & \\
&
\op{Tot}\left( \Omega^\bullet(\op{Map}(8)) \to \Omega^\bullet(\op{Map}'(8)) \right)
& \ar{l} \Omega^{\bullet -n}(\op{Map}(8))
& \ar{l} \Omega^{\bullet -n}(LM)
\end{tikzcd}
\]
We first rewrite this to the zigzag
\[
\begin{tikzcd}
\Omega^\bullet(LM)\otimes \Omega^\bullet(LM) & \ar{l} \op{Tot}\left( \Omega^\bullet(LM)\otimes \Omega^\bullet(LM) \to (\Omega^\bullet(LM)\otimes \Omega^\bullet(LM))\otimes_{A^{\otimes 2}} \Omega^\bullet(\mathsf{FM}_M(2)) \right) \ar{d}{\simeq} & \\
&
\op{Tot}\left( \Omega^\bullet(LM)\otimes_A \Omega^\bullet(LM) \to \Omega^\bullet(LM) \otimes_A \Omega^\bullet(LM) \otimes_A \Omega^\bullet(UTM) \right)
& \ar{l} \Omega^\bullet(LM) \otimes_A \Omega^\bullet(LM)[-n]
& \\
& & \ar{u} \Omega^\bullet(LM)[-n],
\end{tikzcd}
\]
which clearly comes with a map to the original zigzag.
Now suppose we have a model for the boundary inclusion $UTM\to \mathsf{FM}_M(2)$ compatible with the maps to $M \times M$, i.e., we have a commutative diagram of $A$-bimodules
\[
\begin{tikzcd}
C \ar{r} \ar{d}{\simeq} & U \ar{d}{\simeq} \\
\Omega^\bullet(\mathsf{FM}_M(2)) \ar{r} & \Omega^\bullet(UTM).
\end{tikzcd}
\]
Then, using the models of the previous section, we can rewrite the diagram again to
\[
\begin{tikzcd}
B\otimes_{A^2} A \otimes B\otimes_{A^2} A
& \ar{l} \op{Tot}\left(B\otimes_{A^2} A \otimes B\otimes_{A^2} A
\to (B\otimes B)\otimes_{A^{\otimes 4}} C \right) \ar{d}{\simeq} & \\
&
\op{Tot}\left( B\otimes_{A^4} B \to B \otimes_{A^2} B \otimes_{A^2} U \right)
& \ar{l}{\wedge Th} B\otimes_{A^4} B[-n]
& \ar{l} B\otimes_{A^2} A [-n] \, ,
\end{tikzcd}
\]
and again we retain a map to the original diagram.
Now the left- and right-hand ends of the diagram are Hochschild complexes.
We can hence compute the string product on the Hochschild homology $HH(A,A)$ by starting with a cocycle on the right and tracing its image through the zigzag.
The main difficulty, however, is crossing the vertical map.
We shall hence simplify the zigzag slightly further.
To this end note that our zigzag is obtained from the zigzag of $A$-bimodules
\begin{equation}\label{equ:prod_mod_zigzag}
\begin{tikzcd}
A \otimes A
& \ar{l} A\otimes A/C
\ar{d}{\simeq} & \\
&
A/U
& \ar{l}{\wedge Th} A[-n]
\end{tikzcd}
\end{equation}
by tensoring with $B^{\otimes 2}$ over $A^{\otimes 4}$.
We will then use the following result, which is obvious, but is nevertheless stated for comparison with the later Lemma \ref{lem:hocop}.
\begin{Lem}\label{lem:hoprod}
Suppose that $QA\to A$ is a cofibrant resolution of $A$ as a bimodule and
\[
\begin{tikzcd}
QA[-n] \ar{d}\ar{r}{g} & A\otimes A/C \ar{d} \\
A[-n] \ar{r}{\wedge Th} \ar[Rightarrow, ur, "h"] & A/U
\end{tikzcd}
\]
is a homotopy commutative square. Then the zigzag
\[
A\otimes A/C \rightarrow A/U \xleftarrow{\wedge Th} A[-n]
\]
is homotopic to the zigzag
\[
A\otimes A/C \xleftarrow{g} QA[-n]\to A[-n] \, .
\]
\end{Lem}
Note that eventually we are interested in the composition \[
A[-n]\leftarrow QA[-n] \rightarrow A\otimes A/C \to A\otimes A,
\]
which one may interpret as a (derived) coproduct.
\subsection{Coproduct (relative to constant loops case)}\label{sec:coprodrel}
Taking differential forms of the right-hand part of the diagram \eqref{eqn:defredcop} we obtain
$$
\begin{tikzcd}
\op{Tot}( \Omega^\bullet(E) \to \Omega^\bullet(F)) & \ar{l} \op{Tot}\left( \begin{tikzpicture}
\node(1) at (0,.5) {$\Omega^\bullet(E)$};
\node(2) at (2.2,.5) {$\Omega^\bullet(F)$};
\node(3) at (0,-.5) {$\Omega^\bullet(E|_{\mathsf{FM}_M(2)})$};
\node(4) at (2.2,-.5) {$\Omega^\bullet(F|_{UTM})$};
\draw[->](1)-- (2);
\draw[->](3)-- (4);
\draw[->](1)-- (3);
\draw[->](2)-- (4);
\end{tikzpicture} \right) \ar{d}{\simeq}
& \\
& \op{Tot}\left( \begin{tikzpicture}
\node(1) at (0,.5) {$\Omega^\bullet(E|_M)$};
\node(2) at (2.2,.5) {$\Omega^\bullet(F)$};
\node(3) at (0,-.5) {$\Omega^\bullet(E|_{UTM})$};
\node(4) at (2.2,-.5) {$\Omega^\bullet(F|_{UTM})$};
\draw[->](1)-- (2);
\draw[->](3)-- (4);
\draw[->](1)-- (3);
\draw[->](2)-- (4);
\end{tikzpicture} \right) & \ar{l}{\wedge Th} \op{Tot}( \Omega^\bullet(E|_M) \to \Omega^\bullet(F))[n]
\end{tikzcd}
$$
Here $\op{Tot}(\cdots)$ refers to the total complex of the diagram, for example $\op{Tot}( \Omega^\bullet(E) \to \Omega^\bullet(F))$ is the mapping cone of the map of complexes $\Omega^\bullet(E) \to \Omega^\bullet(F)$.
By replacing fiber products with tensor products we obtain a map from the following diagram into the above diagram (which is a quasi-isomorphism at each step in the simply connected situation).
\begin{equation}
\label{equ:zigzag_coprod2}
\begin{tikzcd}
\op{Tot}( \Omega^\bullet(E) \to \Omega^\bullet(F)) & \ar{l}
\op{Tot}\left( \begin{tikzpicture}
\node(1) at (0,.5) {$\Omega^\bullet(E)$};
\node(2) at (3.5,.5) {$\Omega^\bullet(F)$};
\node(3) at (0,-.5) {$\Omega^\bullet(E) \otimes_{A^{\otimes 2}} \Omega^\bullet(\mathsf{FM}_M(2))$};
\node(4) at (3.5,-.5) {$\Omega^\bullet(F) \otimes_A \Omega^\bullet(UTM)$};
\draw[->](1)-- (2);
\draw[->](3)-- (4);
\draw[->](1)-- (3);
\draw[->](2)-- (4);
\end{tikzpicture} \right) \ar{d}{\simeq}
& \\
&\op{Tot}\left( \left(\Omega^\bullet(E)\otimes_{A^{\otimes 2}} A \to \Omega^\bullet(F) \right) \otimes_A \left(\begin{tikzpicture}
\node(1) at (0,.5) {$A$};
\node(3) at (0,-.5) {$\Omega^\bullet(UTM)$};
\draw[->](1)-- (3);
\end{tikzpicture} \right) \right) & \ar{l}{\wedge Th} \op{Tot}\left(\Omega^\bullet(E)\otimes_{A^{\otimes 2}} A \to \Omega^\bullet(F)[n]\right)
\end{tikzcd}
\end{equation}
This diagram is obtained from
\begin{equation}
\label{diag:mor}
\begin{tikzcd}
A[-n] \ar[d, equal] \ar{r}{\wedge Th} & \ar[d, equal] A / \Omega^\bullet(UTM) & \ar{d} \ar{l}{\simeq} (A \otimes A) / \Omega^\bullet(\mathsf{FM}_M(2)) \ar{r} & A \otimes A \ar[d]\\
A[-n] \ar{r}{\wedge Th} & A / \Omega^\bullet(UTM) & \ar[l,equal] A / \Omega^\bullet(UTM) \ar{r} & A
\end{tikzcd}
\end{equation}
by tensoring the first line with $\Omega^\bullet(E)$ over $A^{\otimes 4}$ and the second line with $\Omega^\bullet(F)$ over $A^{\otimes 2}$. Here we again wrote (co)cones as quotients. In particular, if we wish to invert the quasi-isomorphism $(A \otimes A) / \Omega^\bullet(\mathsf{FM}_M(2)) \overset{\simeq}{\longrightarrow} A / \Omega^\bullet(UTM)$ we have to do so as a map of $A^{\otimes 4}$-modules, compatibly with the projection to $A / \Omega^\bullet(UTM)$. Since all our $A$-module structures come in pairs that factor through a single $A$-module structure, it is enough to talk about $A$-bimodules and $A$-modules.
Let us summarize the above situation as follows. Define the category $\mathcal M_A$ whose objects are pairs $(M \to N)$ of an $A$-bimodule $M$ and an $A$-module $N$, together with an $A$-bimodule map between them. Morphisms are homotopy commuting squares, i.e., squares
\begin{equation}\label{equ:MAAAmorph}
\begin{tikzcd}
M \ar{r}{f} \ar{d}{\pi} & M' \ar{d}{\pi'}\\
N \ar[Rightarrow, ur, "h"] \ar{r}{g}& N'
\end{tikzcd},
\end{equation}
where $f$ is a map of $A$-bimodules, $g$ is a map of $A$-modules and $h:M\to N'[1]$ is a map of bimodules, such that $d_{N'}h+hd_M=\pi'f-g\pi$.
We define a functor from $\mathcal M_A$ into dg vector spaces
\[
T_{F,E} : \mathcal M_A \to dg\mathcal Vect
\]
that sends an object $(M \overset{f}{\to} N)$ to the complex
$$
\Omega^\bullet(E) \otimes_{A^{\otimes 2}} M \oplus \Omega^\bullet(F) \otimes_A N [1],
$$
with differential of the form
$$
d = \begin{pmatrix}
d & 0 \\
f & d
\end{pmatrix}
=
\begin{pmatrix}
d_{\Omega^\bullet(E)} \otimes 1 + 1 \otimes d_M & 0 \\
\iota_{\Omega^\bullet(E) \to \Omega^\bullet(F)} \otimes f & d_{\Omega^\bullet(F)} \otimes 1 + 1 \otimes d_N
\end{pmatrix}.
$$
This defines a functor. Concretely, to a morphism \eqref{equ:MAAAmorph} in $\mathcal M_A$ we associate the morphism
$$
\Omega^\bullet(E) \otimes_{A^{\otimes 2}} M \oplus \Omega^\bullet(F) \otimes_A N [1]
\to \Omega^\bullet(E) \otimes_{A^{\otimes 2}} M' \oplus \Omega^\bullet(F) \otimes_A N' [1]
$$
given by the matrix
$$
\begin{pmatrix}
\mathrm{id}_{\Omega^\bullet(E)} \otimes f & 0\\
\iota_{\Omega^\bullet(E) \to \Omega^\bullet(F)}\otimes h & \mathrm{id}_{\Omega^\bullet(F)} \otimes g
\end{pmatrix}\, .
$$
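For illustration, and suppressing Koszul signs, let us check why this matrix is a chain map. Writing $D$, $D'$ for the differentials of source and target and $T$ for the above matrix, the only non-trivial entry of $D'T - TD$ is the lower-left one, which reads
\[
\iota \otimes (\pi' f) - \iota \otimes (g \pi) - \iota \otimes (d_{N'} h + h d_M)
= \iota \otimes \bigl( \pi' f - g \pi - (d_{N'} h + h d_M) \bigr) = 0,
\]
vanishing precisely by the homotopy condition in \eqref{equ:MAAAmorph}.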
We note in particular that the homotopy in the morphism \eqref{equ:MAAAmorph} in $\mathcal M_A$ is part of the data and appears non-trivially in the image under $T_{F,E}$.
The functor $T_{F,E}$ sends componentwise quasi-isomorphisms to quasi-isomorphisms (we note that $\Omega^\bullet(E)$ and $\Omega^\bullet(F)$ are cofibrant as $A^{\otimes 2}$- and $A$-modules, respectively).
Our zigzag \eqref{equ:zigzag_coprod2} of quasi-isomorphisms in $dg\mathcal Vect$ is obtained from the zigzag \eqref{diag:mor} in $\mathcal M_A$ (with all homotopies $=0$) by applying the functor $T_{F,E}$.
The category $\mathcal M_A$ can in fact be extended to a dg category and $T_{F,E}$ to a dg functor.
Hence one can talk about homotopic morphisms in $\mathcal M_A$, and homotopic morphisms are sent to homotopic morphisms between complexes by $T_{F,E}$. Also, one can consider zigzags of quasi-isomorphisms such as \eqref{diag:mor} as morphisms in the derived category.
Our goal is then to replace the zigzag \eqref{diag:mor} by a homotopic zigzag in $\mathcal M_A$ that is computationally simpler.
To this end we will later use the following result.
\begin{Lem}
\label{lem:hocop}
Let $QA \to A$ denote a cofibrant replacement of $A$ in $A^{\otimes 2}$-modules.
Let $g$ and $h$ be any $A$-bimodule maps forming a homotopy commuting square
\begin{equation}\label{equ:hocopsquare}
\begin{tikzcd}
QA[-n] \ar[d] \ar[r, "g"] & (A \otimes A) / \Omega^\bullet(\mathsf{FM}_M(2)) \ar[d] \\
A[-n] \ar[r, "\wedge Th"] \ar[Rightarrow, ur, "h"] & A / \Omega^\bullet(UTM),
\end{tikzcd}
\end{equation}
then
$$
\begin{tikzcd}
A[-n] \ar[d, equal] & \ar[l] QA[-n] \ar[d] \ar[r, "g"] & (A \otimes A) / \Omega^\bullet(\mathsf{FM}_M(2)) \ar[d] \\
A[-n] & \ar[l, equal] A[-n] \ar[Rightarrow, ur, "h"] \ar[r, "\wedge Th"] & A / \Omega^\bullet(UTM),
\end{tikzcd}
$$
and
$$
\begin{tikzcd}
A[-n] \ar[d, equal] \ar{r}{\wedge Th} & \ar[d, equal] A / \Omega^\bullet(UTM) & \ar{d} \ar{l}{\simeq} (A \otimes A) / \Omega^\bullet(\mathsf{FM}_M(2)) \\
A[-n] \ar{r}{\wedge Th} & A / \Omega^\bullet(UTM) & \ar[l,equal] A / \Omega^\bullet(UTM)
\end{tikzcd}
$$
define homotopic morphisms.
\end{Lem}
\begin{proof}
It is enough to show that the composites
$$
\begin{tikzcd}
QA[-n] \ar[d] \ar[r, "g"] & (A \otimes A) / \Omega^\bullet(\mathsf{FM}_M(2)) \ar[d] \ar{r}{\simeq}& \ar{d} A / \Omega^\bullet(UTM) \\
A[-n] \ar[Rightarrow, ur, "h"] \ar[r, "\wedge Th"] & A / \Omega^\bullet(UTM) & \ar[l,equal] A / \Omega^\bullet(UTM),
\end{tikzcd}
$$
and
$$
\begin{tikzcd}
QA[-n] \ar[r] \ar[d]&A[-n] \ar[d, equal] \ar{r}{\wedge Th} & \ar[d, equal] A / \Omega^\bullet(UTM) \\
A[-n] \ar[r, equal] &A[-n] \ar{r}{\wedge Th} & A / \Omega^\bullet(UTM)
\end{tikzcd}
$$
are homotopic. A homotopy is given by $H := \begin{pmatrix}h & 0 \\ 0 & 0\end{pmatrix}$ as shown by the computation
\begin{align*}
[d,H] &= \begin{pmatrix} d & 0 \\ 1 & d \end{pmatrix} \begin{pmatrix}h & 0 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix}h & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} d & 0 \\ \pi & d \end{pmatrix} \\
&= \begin{pmatrix} [d,h] & 0 \\ h & 0 \end{pmatrix}= \begin{pmatrix} mg-(\wedge Th) \pi & 0 \\ h & 0 \end{pmatrix} \\
&= \begin{pmatrix} mg & 0 \\ h & \wedge Th \end{pmatrix} - \begin{pmatrix} (\wedge Th) \pi & 0 \\ 0 & \wedge Th \end{pmatrix}
\end{align*}
where $m$ denotes the map $(A \otimes A) / \Omega^\bullet(\mathsf{FM}_M(2)) \to A / \Omega^\bullet(UTM)$, $\pi: QA \to A$ is the canonical projection, and we used the assumption
$$
[d,h] = m g - (\wedge Th)\pi.
$$
\end{proof}
We remark that the final datum that enters \eqref{diag:mor} is the homotopy commuting square
$$
\begin{tikzcd}
QA[-n] \ar[d] \ar[r] & A \otimes A \ar[d] \\
A[-n] \ar[Rightarrow, ur] \ar[r] & A,
\end{tikzcd}
$$
obtained by postcomposing \eqref{equ:hocopsquare} with the maps from the relative to the non-relative complexes. We note that the lower horizontal map is multiplication by the Euler element. The needed data is thus a bimodule map
$$
g: QA[-n] \to A \otimes A,
$$
such that
$$
QA[-n] \to A \otimes A \to A
$$
is homotopic (with a specified homotopy $h$) to a map that descends along $QA \to A$ to an $A$-module map $A[-n] \to A$.
\begin{Rem}
Note that for any cyclic $A_\infty$ (or right Calabi-Yau) algebra $A$ there is a canonical ``coproduct'' map $A \to A \otimes A$ in the derived category of $A$-bimodules, from which one constructs an ``Euler class'' $A \to A$ by postcomposing with the multiplication. This is an element in Hochschild cohomology $HH^d(A,A)$. In our case, this element is in the image of the map $Z(A) \to HH^\bullet(A,A)$. In general, this gives an obstruction for $A$ to come from a manifold, namely an element in $HH^{d,\geq 1}(A,A)$. If this element vanishes, choices of lifts are given by elements in $HH^{d-1}(A,A) = HH_1(A,A)^*$, so that if $M$ is 2-connected, there is no room for additional data. We will see below that we can trivialize the element in $HH^{d,\geq 1}(A,A)$ using $Z_1$, thus reducing the space of additional data to $HC^-_1(A,A)^*$, which is zero for $M$ simply-connected.
\todo[inline]{ThomasX: Where exactly do we see that?}
\end{Rem}
\subsection{Coproduct ($\chi(M) = 0$ case)}\label{sec:chi 0 case}
In the 1-framed case, we recall that the coproduct is obtained by fiber product with the fibration $\op{Map}(\bigcirc_2) = PM \otimes PM \to M \times M$ of the zigzag
$$
\begin{tikzcd}
(M \times M, M) \\
\ar[u] (\mathsf{FM}_M(2), UTM) \ar[r] & \Sigma UTM = (pt, UTM) \ar[r, dashed, "Th"] & M
\end{tikzcd}
$$
and precomposing with the splitting map $(I, \partial I) \times LM \to (\op{Map}(\bigcirc_2), \op{Map}(8))$.
In this case the Thom form can be lifted to a closed element $Th \in \Omega^{n-1}(UTM)$. A cochain description can thus be obtained from the $A^{\otimes 2}$-bimodule map
$$
(A \otimes A)/A \longrightarrow \Omega^\bullet(\mathsf{FM}_M(2))/ \Omega^\bullet(UTM) \overset{\wedge Th}{\longleftarrow} A,
$$
by tensoring with $B \otimes B$ over $A^{\otimes 4}$. Thus to get a description, we merely need to find a homotopy inverse to the map $(A \otimes A)/A \leftarrow \Omega^\bullet(\mathsf{FM}_M(2))/ \Omega^\bullet(UTM)$ on the image of $\wedge Th$. More concretely, we have the following
\begin{Lem}
\label{lem:frcop}
Assume that a map $\Psi : QA \to (A \otimes A)/A$ of $A$-bimodules and a homotopy making the following diagram homotopy commute are given:
$$
\begin{tikzcd}
QA \ar[r, "\Psi"] \ar[dr, "\wedge Th"', ""{name=Th}] & (A \otimes A) / A \ar[d] \arrow[Rightarrow, to=Th] \\
& \Omega^\bullet(\mathsf{FM}_M(2))/ \Omega^\bullet(UTM).
\end{tikzcd}
$$
Then the diagram
$$
\begin{tikzcd}
(B \otimes B) \otimes_{A^{\otimes 4}} (A \otimes A / A) \ar[d] & \ar[l, "\Psi"] (B \otimes B) \otimes_{A^{\otimes 4}} QA \ar[d] \ar[r, "\simeq"] & (B \otimes B) \otimes_{A^{\otimes 4}} A \ar[dl] \\
\Omega^\bullet(\op{Map}(\bigcirc_2), \op{Map}(8)) & \ar[l] \Omega^\bullet(\op{Map}(8))
\end{tikzcd}
$$
commutes.
\end{Lem}
\todo[inline]{I changed this a bit. Check for repetition later. Decide whether to mention shifts.}
\section{Simply-connected case, and proofs of Theorems \ref{thm:main_1} and \ref{thm:main_2}, and the first part of Theorem \ref{thm:main_4}}
\label{sec:thm12proofs}
Let us assume that $A$ is a Poincaré duality model for $M$. For $M$ simply-connected this exists by \cite{LambrechtsStanley}. In that case we obtain the following models for $UTM$ and $\mathsf{FM}_M(2)$, cf. \cite{LambrechtsStanley3, Idrissi, CamposWillwacher},
\begin{align*}
\op{cone}( A \overset{ m \circ \Delta}{\to} A) & \simeq \Omega^\bullet(UTM) \\
\op{cone}( A \overset{\Delta}{\to} A \otimes A) & \simeq \Omega^\bullet(\mathsf{FM}_M(2)).
\end{align*}
The map $\Omega^\bullet(\mathsf{FM}_M(2))\to\Omega^\bullet(UTM)$ (restriction to the boundary) is modelled by the map
\begin{align*}
\op{cone}( A \overset{\Delta}{\to} A \otimes A)
&\to
\op{cone}( A \overset{ m \circ \Delta}{\to} A)
\\
(a,b\otimes c) &\mapsto (a,bc),
\end{align*}
i.e., just by the multiplication $m:A\otimes A\to A$.
A representative of the Thom class in $A / \Omega^\bullet(UTM)$ is given by the pair $(m\circ\Delta(1), (1,0))$. In this case we can easily write down the maps $g$ and $h$ appearing in Lemma \ref{lem:hocop}. Namely, $h=0$ and the map $g$ is given by $x \mapsto (\Delta(x),x, 0)$; that is, the following diagram commutes strictly:
$$
\begin{tikzcd}
A[-n] \ar[r, "g"] \ar[dr, "\wedge Th"'] & A\otimes A/ \Omega^\bullet(\mathsf{FM}_M(2)) = \op{Tot}\left( \begin{tikzpicture}
\node(2) at (1.5,.5) {$A \otimes A$};
\node(3) at (0,-.5) {$A$};
\node(4) at (1.5,-.5) {$A \otimes A$};
\draw[double equal sign distance](2) to (4);
\draw[->](3) to node[above]{$\Delta$} (4);
\end{tikzpicture} \right)
\ar[d] \\
& A/ \Omega^\bullet(UTM)= \op{Tot}\left( \begin{tikzpicture}
\node(2) at (1.5,.5) {$A$};
\node(3) at (0,-.5) {$A$};
\node(4) at (1.5,-.5) {$A$};
\draw[double equal sign distance](2) to (4);
\draw[->](3) to node[above]{$m\circ\Delta$} (4);
\end{tikzpicture} \right),
\end{tikzcd}\qquad
\begin{tikzcd}
x \ar[mapsto, r, "g"] \ar[mapsto, dr, "\wedge Th"'] & \left( \begin{tikzpicture}
\node(2) at (1.5,.5) {$\Delta(x)$};
\node(3) at (0,-.5) {$x$};
\node(4) at (1.5,-.5) {$0$};
\end{tikzpicture} \right)
\ar[mapsto, d] \\
& \left( \begin{tikzpicture}
\node(2) at (1.5,.5) {$m\Delta(x)$};
\node(3) at (0,-.5) {$x$};
\node(4) at (1.5,-.5) {$0$};
\end{tikzpicture} \right).
\end{tikzcd}
$$
Thus the resulting diagram \eqref{diag:mor} is equivalent to
\begin{equation}\label{equ:sec6diag}
\begin{tikzcd}
A[-n] \ar[d, equal]\ar[r, "\Delta"] & A \otimes A \ar[d] \\
A[-n] \ar[r, "m\Delta"] & A.
\end{tikzcd}
\end{equation}
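For illustration (an example not spelled out in the text), take $M=S^2$ and $A = H^\bullet(S^2;\R) = \R\, 1 \oplus \R\,\omega$, with $\omega$ the volume class. Then the diagonal class satisfies
\[
\Delta(1) = 1 \otimes \omega + \omega \otimes 1, \qquad \Delta(\omega) = \omega\cdot\Delta(1) = \omega \otimes \omega,
\]
so that $m\Delta(1) = 2\omega = \chi(S^2)\,\omega$, and the lower horizontal map in \eqref{equ:sec6diag} is indeed multiplication by the Euler element.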
Note that here we did not even need to pass to a cofibrant replacement $QA$ of the $A$-bimodule $A$, as in Lemma \ref{lem:hocop}. More pedantically, we can take, for example, $QA=B(A,A,A)=:B$, the bar resolution of the $A$-bimodule $A$ as in section \ref{sec:it int}, but then all maps from $QA$ in Lemma \ref{lem:hocop} factor through the canonical projection $QA\to A$.
From this we can compute the string coproduct (cohomology product).
To this end, we trace back and follow the zigzag \eqref{equ:string coproduct zigzag}.
Let us first make explicit all cochain complex models we use. Our model for $E=\op{Map}(\bigcirc_2)\to M\times M$ (loops with two marked points) is
\[
M_E = B\otimes_{A^{\otimes 2}} B \cong \bigoplus_{p,q\geq 0} (\bar A[1])^{\otimes p} \otimes A \otimes (\bar A[1])^{\otimes q} \otimes A
\]
considered as an $A^{\otimes 2}$-module, and equipped with the natural Hochschild-type differential.
Similarly, the cochain complex modelling $\op{Map}(8)\to M$ is \[
M_8 = B\otimes_{A^{\otimes 4}} B
\cong
\bigoplus_{p,q\geq 0} (\bar A[1])^{\otimes p} \otimes A \otimes (\bar A[1])^{\otimes q}.
\]
The subspace $F\subset \op{Map}(8)$ (figure 8 loops with one ear trivial) is modelled by the quotient of $M_8$ by the summands with $p>0$ and $q>0$
\[
M_F =M_8 / \bigoplus_{p,q> 0} (\bar A[1])^{\otimes p} \otimes A\otimes (\bar A[1])^{\otimes q}
\cong \bigoplus_{p,q\geq 0, pq=0} (\bar A[1])^{\otimes p} \otimes A\otimes (\bar A[1])^{\otimes q}.
\]
The cochain complex computing $H(LM)$ is the Hochschild complex
\[
C(A) = B\otimes_{A^{\otimes 2}} A \cong \bigoplus_{k\geq 0} (\bar A[1])^{\otimes k} \otimes A.
\]
The cochain complex computing $H(LM,M)$ in this case is the reduced Hochschild complex
\[
\op{Tot}(C(A) \to A) \xleftarrow{\simeq} \bar C(A) :=\bigoplus_{k\geq 1} (\bar A[1])^{\otimes k} \otimes A.
\]
We start with two cocycles
\begin{align*}
a &=(\alpha_1,\cdots,\alpha_k, x)=:(\alpha,x) \\
b &=(\beta_1,\cdots,\beta_l, y)=:(\beta,y)
\end{align*}
in this complex $\bigoplus_{k\geq 1} (\bar A[1])^{\otimes k} \otimes A$, representing cohomology classes in $H(LM,M)$. Our goal is to produce a cocycle representing the image of this pair under the string coproduct.
First, consider the rightmost map in \eqref{equ:string coproduct zigzag}. It is realized on cochains by (cf.\ \eqref{equ:concatloop})
\[
(a,b) \mapsto c_1 := \pm (\alpha, xy, \beta)\in
\bigoplus_{p,q\geq 1} (\bar A[1])^{\otimes p} \otimes A \otimes (\bar A[1])^{\otimes q}
\xrightarrow{\simeq}
\op{Tot}(M_8 \to M_F).
\]
Next consider the map $H^{\bullet+1-n}(\op{Map}(8),F)\to H^{\bullet+1}(E,F)$, i.e., the middle map in \eqref{equ:string coproduct zigzag}. This map is, according to section \ref{sec:coprodrel} and the discussion in the beginning of this section, represented on cochains by the map
\[
\op{Tot}(M_8 \to M_F) \to \op{Tot}(M_E\to M_F)
\]
obtained from diagram \eqref{equ:sec6diag} by tensoring the first row with $M_E$ over $A^{\otimes 2}$ and the second row with $M_F$ over $A$.
Our cocycle $c_1$ above is hence mapped to the cocycle
\[
c_2 := \sum \pm (\alpha, {\mathbf D}', \beta, x{\mathbf D}''y)
\in
M_E \subset \op{Tot}(M_E\to M_F),
\]
where $\sum \pm {\mathbf D}' \otimes x{\mathbf D}''y = \Delta(\alpha_0 \beta_0)$.
Finally, we apply the pullback via the splitting map $s$ to obtain a map
\[
H^\bullet(E,F) \to H^{\bullet -1}(LM,M).
\]
According to Proposition \ref{prop:splitmap} this map on cochains is given by formula \eqref{equ:splitmap cochains}. Applied to our cochain $c_2$ the final result reads
\[
\sum \pm (\alpha,{\mathbf D}',\beta, x{\mathbf D}''y)
\in
\bar C(A),
\]
in agreement with \eqref{equ:intro coproduct}.
Summarizing, we hence obtain Theorem \ref{thm:main_2}, i.e., the following result.
\begin{Thm}
For $M$ simply-connected, and $A$ a Poincaré duality model for $M$, the natural map
$$
\overline{HH}_\bullet(A,A) \to H^\bullet(LM, M),
$$
intertwines the string coproducts.
\end{Thm}
We can compute the string product on $H(LM)$ by a similar, albeit simpler computation.
To this end we begin with a cochain in the Hochschild complex
\[
(\alpha, x) \in C(A),
\]
representing a cohomology class in $H(LM)$.
We then trace this class through the zigzag \eqref{equ:product_zigzag}.
The image in our model $M_8$ for $H(\op{Map}(8))$ is given by formula \eqref{equ:concatloop} as
\[
p_1 := (\alpha',x,\alpha'')
\in
M_8.
\]
The remaining maps of \eqref{equ:product_zigzag}, namely $H^{\bullet-n}(\op{Map}(8))\to H^\bullet(LM\times LM)$ have been discussed in section \ref{sec:figeight}. According to Lemma \ref{lem:hoprod} and the discussion preceding it they are given on chains by tensoring the upper row ($g=\Delta$) of \eqref{equ:sec6diag} with $B\otimes B$ over $A^{\otimes 4}$ to obtain a map of cochain complexes
\[
M_8[-n] \to C(A)\otimes C(A).
\]
Concretely, on our cochain $p_1$ this produces the cochain
\[
p_2 = \sum (\alpha', x')\otimes (\alpha'', x'')
=
\sum (\alpha', x{\mathbf D}')\otimes (\alpha'', {\mathbf D}'').
\]
This agrees with \eqref{equ:intro product}.
It is furthermore obvious from the construction of the map via iterated integrals that the BV operator is also preserved (see Section \ref{sec:Connesdiff}).
Hence we have shown Theorem \ref{thm:main_1}.
\begin{proof}[Proof of first part of Theorem \ref{thm:main_4}]
Similarly, we can prove the first part of Theorem \ref{thm:main_4}. For the string product, the same computations as above work. For the coproduct we use the
description of section \ref{sec:chi 0 case}.
Proceeding otherwise as in the relative case above we obtain the string coproduct (cohomology product) of the cocycles $(\alpha,x)$ and $(\beta,y)$ in $\bar C(A)$ as the composition:
\[
\begin{tikzcd}
B\otimes_{A^e}A \otimes B\otimes_{A^e}A
\ar{r}
&
(B\otimes B)\otimes_{A^{\otimes 4}} A
\ar{r}{\Delta}
&
(B\otimes B)\otimes_{A^{\otimes 4}} (A\otimes A/A)
\ar{r}{s}
&
B\otimes_{A^e}A \\
((x,\alpha), (y,\beta))
\ar[symbol=\in]{u}
\ar[mapsto]{r}
&
\pm (\bar\alpha,\bar\beta,xy)
\ar[symbol=\in]{u}
\ar[mapsto]{r}
&
\sum \pm (\alpha, \beta, {\mathbf D}'xy, {\mathbf D}'', 0)
\ar[symbol=\in]{u}
\ar[mapsto]{r}
&
\sum \pm (\bar\alpha,{\mathbf D}'',\bar\beta,{\mathbf D}'\alpha_0\beta_0)
\ar[symbol=\in]{u}
\end{tikzcd}.
\]
\end{proof}
\section{Involutive homotopy Lie bialgebra structure on cyclic words}
\subsection{Lie bialgebra of cyclic words}
For a graded vector space $H$ with a non-degenerate pairing of degree $-n$ it is well known (see \cite{Gonzalez} for an elementary review) that the cyclic words
\begin{equation}\label{equ:cycdef}
\op{Cyc}(H^*)= \bigoplus_{k\geq 0} (H^*[-1])^{\otimes k}_{C_k}
\end{equation}
carry an involutive dg Lie bialgebra structure with bracket $[,]$ and cobracket $\Delta$ of degree $n-2$.
Concretely, one has the following formulas (cf. \cite[section 3.1]{ChenEshmatovGan})
\begin{align*}
\Delta(a_1\cdots a_k) &= \sum_{i<j}\pm \epsilon(a_i,a_j) (a_1\cdots a_{i-1}a_{j+1}\cdots a_k) \wedge (a_{i+1}\cdots a_{j-1}) \\
[(a_1\cdots a_k),(b_1\cdots b_m)]
&=
\sum_{i,j} \pm \epsilon(a_i,b_j)((a_{i+1}\cdots a_k a_1\cdots a_{i-1}b_{j+1}\cdots b_mb_1\cdots b_{j-1}))\, .
\end{align*}
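For illustration, and suppressing signs, the formulas read as follows in the lowest arities, for cyclic words of lengths two and one:
\begin{align*}
\Delta((a_1a_2)) &= \pm \epsilon(a_1,a_2)\, ()\wedge (),\\
[(a_1a_2),(b_1)] &= \pm \epsilon(a_1,b_1)(a_2) \pm \epsilon(a_2,b_1)(a_1),
\end{align*}
where $()$ denotes the empty cyclic word spanning the $k=0$-summand of \eqref{equ:cycdef}.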
There are also slight variants.
First, we have the quotient Lie bialgebra
\[
\overline{Cyc}(H^*) = \op{Cyc}(H^*)/\K,
\]
dropping the $k=0$-term from the direct sum \eqref{equ:cycdef}.
Second, using the setting and notation of section \ref{sec:GCM}, we have the Lie sub-bialgebra
\[
\op{Cyc}(\bar H^*)\subset \op{Cyc}(H^*),
\]
dropping the span of $1$ from $H$.
Finally, we have the quotient Lie bialgebra
\[
\overline{Cyc}(\bar H^*) = \op{Cyc}(\bar H^*) /\K.
\]
\subsection{Twisting by Maurer-Cartan elements}\label{sec:IBLtwist}
Next, it is furthermore well known that an involutive dg Lie bialgebra structure of degree $n-2$ on ${\mathfrak g}$ can be encoded as a BV operator (up to degree shifts) on the symmetric algebra
\[
S({\mathfrak g}[n-3])[[\hbar]]
\]
with $\hbar$ a formal parameter of degree $6-2n$.
Concretely, on this space we then have a degree $+1$ operator
\[
\Delta_0 = \delta_{{\mathfrak g}} + \delta_{c} + \hbar \delta_b,
\]
where $\delta_{{\mathfrak g}}$ is the internal differential on ${\mathfrak g}$, $\delta_c$ applies the cobracket to one factor in the symmetric product and $\delta_b$ is a differential operator of order two applying the bracket to two factors.
All relations of the involutive Lie bialgebra can be compactly encoded into the single equation
\[
\Delta_0^2=0.
\]
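For orientation, this is the standard bookkeeping: expanding $\Delta_0^2=0$ in powers of $\hbar$ (and by the degree in the symmetric product, which each summand shifts differently) recovers the individual axioms separately,
\begin{align*}
\hbar^0:&& \delta_{\mathfrak g}^2 = 0, \quad [\delta_{\mathfrak g},\delta_c]=0, \quad \delta_c^2 = 0, \\
\hbar^1:&& [\delta_{\mathfrak g},\delta_b] = 0, \quad [\delta_c,\delta_b]=0, \\
\hbar^2:&& \delta_b^2 = 0,
\end{align*}
i.e., the co-Jacobi and Jacobi identities, the compatibility of differential and (co)bracket, and the combined compatibility/involutivity relation $[\delta_c,\delta_b]=0$.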
More generally, such a BV operator encodes a homotopy involutive Lie bialgebra structure ($IBL_\infty$-structure), see e.g. \cite[section 5.4]{CamposMerkulovWillwacher} for more details.
Now suppose we have an element
\[
z\in S({\mathfrak g}[n-3])[[\hbar]]
\]
of degree $|z|=6-2n$.
Then we may twist our involutive Lie bialgebra structure to a different $IBL_\infty$-structure encoded by the operator
\[
\Delta_z = e^{-\frac z \hbar}\Delta_0 e^{\frac z \hbar}
\]
if the master equation (Maurer-Cartan equation)
\[
\Delta_0 e^{\frac z \hbar} = 0
\quad \Leftrightarrow \quad \delta_c z +\frac 1 2 [z,z]=0
\]
is satisfied. Concretely, in our setting
\[
\Delta_z = \Delta_0 +[z,-].
\]
The operator $[z,-]$ is always a derivation, and hence only contributes to the operations of arity $(k,1)$ (1 input and $k\geq 1$ outputs) of our $IBL_\infty$-structure. In other words, the twisting changes the differential and the (possibly higher genus and higher arity) cobrackets.
\subsection{Example: Lie bialgebra structure of \cite{ChenEshmatovGan}}
Let now $H$ be a (degree $n$-)Poincar\'e-duality algebra, with the Poincar\'e duality encoded by the map
\[
\epsilon : H\to \K
\]
of degree $-n$.
We want to apply the twisting construction of the previous section to the case ${\mathfrak g}=\overline{Cyc}(H^*)$.
Indeed, consider the degree $6-2n$ element encoding the product in $H$
\[
z=\sum_{p,q,r} \pm \epsilon(e_p e_q e_r) (e_p^*e_q^*e_r^*)
\in {\mathfrak g}[n-3] \subset S({\mathfrak g}[n-3]) \subset S({\mathfrak g}[n-3])[[\hbar]],
\]
where the $e_j$ range over a basis of $H$, and $e_j^*$ is the corresponding dual basis of $H^*$.
One can check that this is indeed a Maurer-Cartan element in the sense of the previous subsection.
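For illustration, take $H = H^\bullet(S^2;\R)$ with basis $e_0 = 1$, $e_1 = \omega$ and $\epsilon(\omega)=1$. The only triples with $\epsilon(e_pe_qe_r)\neq 0$ are then the three rotations of $(1,1,\omega)$, so that, up to signs and normalization conventions for the cyclic coinvariants \eqref{equ:cycdef},
\[
z = \pm 3\, (1^*1^*\omega^*).
\]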
Continuing to use the notation of the previous subsection the BV operator encoding the $IBL_\infty$-structure is just
\[
\Delta_0 + [z,-].
\]
Since $z$ is linear (i.e., lies in the first symmetric power ${\mathfrak g}[n-3] \subset S({\mathfrak g}[n-3])$), and since there is no $\hbar$-dependence, the twist in fact only changes the differential of our dg Lie bialgebra structure.
In fact, the altered differential is just the Hochschild (or rather cyclic) differential, rendering ${\mathfrak g}$ into the cyclic complex of the coalgebra $H^*$, computing its cyclic homology.
As usual, the normalized cyclic complex $\overline{Cyc}(\bar H^*)\subset \overline{Cyc}(H^*)$ is preserved under this differential and we hence have equipped it with an involutive dg Lie bialgebra structure.
This is the structure found in \cite{ChenEshmatovGan}.
\subsection{Ribbon graph complex}\label{sec:ribbonGC}
Recall the definition of the graph complex (dg Lie algebra) $\mathsf{GC}_H$ as in section \ref{sec:GCM}.
Note that the construction leading to $\mathsf{GC}_H$ can be re-iterated for ribbon (fat) graphs instead of ordinary graphs.
Here a ribbon graph is a graph with a cyclic order prescribed on each star, i.e., on each set of incoming half-edges and decorations at a vertex.
We denote the corresponding graphical dg Lie algebra of ribbon graphs by $\mathsf{GC}_H^{As}$. There is a natural map of dg Lie algebras
\[
\mathsf{GC}_H' \to \mathsf{GC}_H^{\operatorname{As}}
\]
by sending a graph to the sum of all ribbon graphs built from the original graph by imposing cyclic orders on stars.
In fact, this map can be seen as a version of naturality of the Feynman transform of cyclic operads, together with the natural map of cyclic operads $\operatorname{As}\to \operatorname{Com}$.
The main observation is now that there is a map of dg Lie algebras
\[
f: \mathsf{GC}_H^{\operatorname{As}}\to S(\overline{Cyc}(H^*)[n-3])[[\hbar]]
\]
defined on a ribbon graph $\Gamma$ as follows.
\begin{itemize}
\item Note that the ribbon graph has an underlying surface whose genus we denote by $g$, and number of boundary components $b$.
\item If $\Gamma$ has vertices of valency $\neq 3$, we set $f(\Gamma)=0$.
\item Otherwise we set
\[
f(\Gamma) = \pm \hbar^g (c_1)(c_2)\cdots (c_b),
\]
where $c_j$ is the cyclic word obtained by traversing the $j$-th boundary component counterclockwise, and recording the $H$-decorations.
\[
\begin{tikzpicture}[baseline=-.65ex]
\node[int] (v1) at (0:1) {};
\node[int,label=240:{$\alpha$}] (v2) at (60:1) {};
\node[int] (v3) at (120:1) {};
\node[int,label=0:{$\beta$}] (v4) at (180:1) {};
\node[int] (v5) at (240:1) {};
\node[int,label=120:{$\gamma$}] (v6) at (300:1) {};
\draw (v1) edge +(0:.5) edge (v2)
(v2) edge (v3)
(v3) edge +(120:.5) edge (v4)
(v4) edge (v5)
(v5) edge +(240:.5) edge (v6)
(v6) edge (v1);
\end{tikzpicture}
\quad
\mapsto
\quad
(\alpha\beta\gamma)
\]
\end{itemize}
In particular, given an MC element $z+Z\in \mathsf{GC}_H'$, as in section \ref{sec:GCM}, we then obtain an MC element $$Z_{cyc}\in S(\overline{Cyc}(H^*)[1])[[\hbar]]$$
via the composition of maps of dg Lie algebras
\[
\mathsf{GC}_H\to \mathsf{GC}_H^{\operatorname{As}}\to S(\overline{Cyc}(H^*)[1])[[\hbar]].
\]
\subsection{$IBL_\infty$-structure on cyclic words}
Now we use our MC element $Z_{cyc}\in S(\overline{Cyc}(H^*)[1])[[\hbar]]$ constructed in the previous section to obtain an $IBL_\infty$-structure on $\overline{Cyc}(H^*)$, by the twisting procedure of section \ref{sec:IBLtwist}.
Specifically the structure is defined as before by the BV operator
\[
\Delta_{Z_{cyc}} = e^{-Z_{cyc}} \Delta_0 e^{Z_{cyc}}= \Delta_0 + [Z_{cyc}, -],
\]
where $\Delta_0$ is the (untwisted) BV operator as before.
Abusively, we shall denote our $IBL_\infty$-algebra thus obtained by $(\overline{Cyc}(H^*),\Delta_{Z_{cyc}})$.
We note that this procedure alters the differential on $\overline{Cyc}(H^*)$ and alters and extends the Lie cobracket by a series of higher cobrackets, i.e., operations of arity $(r,1)$. The twist however does not affect the Lie bracket, and there are no non-trivial operations of arities $(r,s)$ with $r,s\geq 2$ or with $s>2$.
Overall we obtain the following result
\begin{Prop}
$(\overline{Cyc}(H^*), \Delta_{Z_{cyc}})$ is an $IBL_\infty$-algebra whose differential is the natural ``Hochschild'' differential on the cyclic complex of $H^*$ as $\operatorname{Com}_\infty$-algebra.
\end{Prop}
Finally, we claim that our $IBL_\infty$-structure on $\overline{Cyc}(H^*)$ restricts to the normalized subspace $\overline{Cyc}(\bar H^*)$.
That means concretely that the operation $[Z_{cyc},-]$ cannot introduce terms that contain a cyclic word containing the letter $1^*\in H^*$.
Indeed, by the remark at the end of section \ref{sec:GCM} we know that the only pieces of $Z_{cyc}$ that contain the letter $1^*$ at all are those arising from the leading order part $z$ (as in \eqref{equ:GCHz}) of the Maurer-Cartan element.
However, this piece just controls the commutative product on $H$, and hence it leaves the reduced part $\overline{Cyc}(\bar H^*)$ invariant. (This is for the same reasons that the reduced Hochschild or cyclic complex is a subcomplex of the full Hochschild or cyclic complex.)
Overall, we obtain the following result, which yields the first statement of Theorem \ref{thm:main_3}.
\begin{Prop}
$(\overline{Cyc}(\bar H^*), \Delta_{Z_{cyc}})$ is an $IBL_\infty$-algebra whose cohomology is the reduced cyclic cohomology of the $\operatorname{Com}_\infty$-algebra $H^*$.
\end{Prop}
\subsection{Graded Lie bialgebra structure}
Any $IBL_\infty$-structure on a graded vector space $V$ induces an ordinary involutive Lie bialgebra structure on the cohomology $H(V)$.
In particular, from the $IBL_\infty$-structure on $\op{Cyc}(H^*)$ from the previous subsection we obtain a graded involutive Lie bialgebra structure on the cyclic cohomology $H(\op{Cyc}(H^*))$ of the $\mathsf{Com}_\infty$-algebra $H$.
The remaining statement of Theorem \ref{thm:main_3} (to be shown) is encoded in the following result.
\begin{Thm}\label{thm:main_3b}
The natural map $H(\operatorname{Cyc}(H))^* \to H^\bullet(LM_{S^1})$ is compatible with the Lie bialgebra structures.
\end{Thm}
The proof will occupy the next two sections, and proceeds by an explicit computation.
At this point, let us just make the Lie bialgebra structure on the left-hand side more explicit. First, the differential on the cyclic complex $\op{Cyc}(H^*)$ is given by the genus zero ($\hbar^0$-) and $1$-boundary-component-part $z_0$ of the MC element $Z_{cyc}$.
This is just given by the tree piece, which is in turn given by the tree piece $Z_{tree}$ of $Z\in \mathsf{GC}_H$ encoding (only) the $\mathsf{Com}_\infty$-structure on $H$.
The Lie bracket is not altered, as we remarked above.
The Lie cobracket is altered, but only receives contributions from the genus-0 and 2-boundary-components-part $z_1$ of $Z_{cyc}$. This is just the 1-loop part, determined by the 1-loop part $Z_1$ of $Z\in \mathsf{GC}_H$, cf. \eqref{equ:Zsplit}.
There are no further corrections to the cohomology Lie bialgebra structure.
\section{Graphical version}\label{sec:graphical version}
Having proven Theorems \ref{thm:main_1}, \ref{thm:main_2} and the first part of Theorem \ref{thm:main_3}, it remains for us to show the remaining statement of Theorem \ref{thm:main_3}, or more precisely Theorem \ref{thm:main_3b}.
To do this we will use the explicit zigzags of section \ref{sec:cochain zigzags} defining the string (co)product (respectively the (co)bracket). However, in the non-simply connected situation we unfortunately cannot use the Lambrechts-Stanley model for configuration spaces.
Hence we use the more complicated graphical models of configuration spaces of \cite{CamposWillwacher}.
In this section we shall introduce the specific models and some auxiliary results.
Finally, the proof of Theorems \ref{thm:main_3}, \ref{thm:main_3b} will be given in section \ref{sec:thm3proof}.
Technically, the goal of this section is to write down formulas for the homotopy commuting squares in Lemmas \ref{lem:frcop} and \ref{lem:hocop}.
\subsection{Graphical models for configuration spaces}
Recall that the main input to our construction of the string bracket and cobracket is the compactified configuration space of 2 points on $M$, $\mathsf{FM}_M(2)$, together with the boundary inclusion and the forgetful maps
\begin{equation}\label{equ:bdry_inclusion}
\begin{tikzcd}
UTM = \partial \mathsf{FM}_M(2) \ar{d}\ar[hookrightarrow]{r}& \mathsf{FM}_M(2) \ar{d}\\
M \ar{r}{\Delta} & M\times M
\end{tikzcd}\, .
\end{equation}
The goal of this subsection is to construct a real model for the objects and morphisms in this square. More precisely, we require a dgca $A$ quasi-isomorphic to $\Omega^\bullet(M)$ and two $A\otimes A$-modules modelling the upper arrow in the diagram.
We note that the morphisms in the above square can be interpreted as the simplest non-trivial instances of the action of the little disks operad on the (framed) configuration space of points. Furthermore, combinatorial models (graph complexes) $\mathsf{Graphs}_M$ for configuration spaces of points, with the little disks action have been constructed in \cite{CamposWillwacher, CDIW}, from which a dgca model of the morphism \eqref{equ:bdry_inclusion} can be extracted, cf. section \ref{sec:GraphsM} above.
However, in our situation their models can be much simplified, essentially by discarding all graphs of loop orders $\geq 2$.
Concretely we make the following definitions.
\begin{itemize}
\item Our dgca model for $M$ will be the tree part of ${}^*\operatorname{Graphs}_M(1)$, that is the space spanned by rooted (at least trivalent) trees, where each vertex is decorated by an element of $\operatorname{Sym}\bar{H^\bullet}$. We denote this dgca by $A$. It can be identified with
$$
A\cong \operatorname{Com} \circ \operatorname{coLie}_\infty \{1\} \circ \bar{H}^\bullet,
$$
that is, the bar-cobar resolution of the $\operatorname{Com}_\infty$ algebra $H^\bullet$.
\item Our model for $\mathsf{FM}_M(2)$ will be the tree part of ${}^*\operatorname{Graphs}_M(2)$, which we denote by $\mathcal{C}$. It has a natural decomposition as a graded vector space
$$
\mathcal{C} = B(A, A, A) \oplus A \otimes A,
$$
with an additional differential $d_s: B(A,A,A) \to A \otimes A$.
The graphs in the two summands are schematically depicted as follows.
\begin{align*}
&
\begin{tikzpicture}[baseline=-.65ex]
\node[ext,label=90:{$A$}] (v1) at (0,0) {1};
\node[ext,label=90:{$A$}] (v2) at (3,0) {2};
\node[int,label=90:{$\bar A$}] (i1) at (1,0) {};
\node[int,label=90:{$\bar A$}] (i2) at (2,0) {};
\draw (v1) edge (i1) (i2) edge (i1) edge (v2);
\end{tikzpicture}
&
\begin{tikzpicture}[baseline=-.65ex]
\node[ext,label=90:{$A$}] (v1) at (0,0) {1};
\node[ext,label=90:{$A$}] (v2) at (1.5,0) {2};
\end{tikzpicture}
\end{align*}
where $A$ stands for forests of trees with decorations in $\bar H$ as above.
The differential $d_s$ (from left to right in the picture) comes from the piece of \eqref{equ:edgecsplit} that cuts an edge.
\item Our model for $UTM$ will be the subspace $\U\subset \mathsf{Graphs}_M(1)$ spanned by graphs of loop order $\leq 1$, where tadpoles are only allowed at the root. Concretely, such graphs can have the following shapes.
\begin{align}\label{equ:Upics}
& \begin{tikzpicture}[baseline=-.65ex]
\node[ext,label=90:{$A$}] (v1) at (0,0) {1};
\end{tikzpicture}
&
&
\begin{tikzpicture}[baseline=-.65ex, every loop/.style={draw,-}]
\node[ext,label=20:{$A$}] (v1) at (0,0) {1};
\draw (v1) edge[loop above, looseness=20, out=110,in=70] (v1);
\end{tikzpicture}
&
&
\begin{tikzpicture}[baseline=-.65ex]
\node[ext,label=20:{$A$}] (v1) at (0,0) {1};
\node[int,label=0:{$\bar A$}] (i1) at (0,1) {};
\node[int,label=-30:{$\bar A$}] (i2) at (0,2) {};
\node[int,label=90:{$\bar A$}] (i4) at (0,4) {};
\node[int,label=180:{$\bar A$}] (i3) at (-1,3) {};
\node[int,label=0:{$\bar A$}] (i5) at (1,3) {};
\draw (v1) edge (i1) (i2) edge (i1) edge (i3) edge (i5) (i4) edge (i3) edge (i5);
\end{tikzpicture}
\end{align}
Let $T \in \U$ be the cochain given by the tadpole graph and let $Y \in \U$ be the tripod graph decorated by Poincaré dual classes.
\begin{equation}\label{equ:TYdef}
T = \,
\begin{tikzpicture}[baseline=-.65ex, every loop/.style={draw,-}]
\node[ext] (v1) at (0,0) {1};
\draw (v1) edge[loop above, looseness=20, out=110,in=70] (v1);
\end{tikzpicture},
\qquad
Y = \,
\begin{tikzpicture}[baseline=-.65ex]
\node[ext] (v1) at (0,0) {1};
\node[int,label=90:{$\sum_q e_q e_q^*$}] (i1) at (0,.5) {};
\draw (v1) edge (i1);
\end{tikzpicture}
\end{equation}
In particular, there is a canonical cochain $\nu = T + Y$, the tadpole at the root plus the tripod decorated by Poincaré dual classes, satisfying $d\nu=\chi(M)\omega$, with $\omega\in H^n(M)$ a volume form, normalized so that $M$ has volume 1.
\begin{equation}\label{equ:nudef}
\nu = \,
\begin{tikzpicture}[baseline=-.65ex, every loop/.style={draw,-}]
\node[ext] (v1) at (0,0) {1};
\draw (v1) edge[loop above, looseness=20, out=110,in=70] (v1);
\end{tikzpicture}
\, + \,
\begin{tikzpicture}[baseline=-.65ex]
\node[ext] (v1) at (0,0) {1};
\node[int,label=90:{$\sum_q e_q e_q^*$}] (i1) at (0,.5) {};
\draw (v1) edge (i1);
\end{tikzpicture}
\end{equation}
We can decompose our model as follows into graded vector subspaces
$$
\U= A \oplus A \tp \oplus \U_\bigcirc \oplus \U_\text{\floweroneright},
$$
where the terms are as follows.
\begin{itemize}
\item The term $A$ corresponds to the first type of graphs in \eqref{equ:Upics}.
\item The term $A\tp$ corresponds to the second type of graphs in \eqref{equ:Upics}.
\item $\U_\bigcirc = ( B(A,A,A) \otimes_{A \otimes A} A)_{{\mathbb Z}_2}$ corresponds to graphs of the third kind with a ``stem'' of length 0.
\item $\U_\text{\floweroneright} = ( B(A,A,A) {}_A \otimes_{A \otimes A} B(A,A,A))_{{\mathbb Z}_2}$ corresponds to terms of the third kind in \eqref{equ:Upics}, with the ``stem'' containing at least one edge.
\end{itemize}
The differential contains several pieces between our subspaces above.
\begin{itemize}
\item There are internal pieces of the differential acting on $A$, by edge contraction and edge splitting. These terms depend on the tree part $Z_{\mathrm{tree}}$ of the partition function of section \ref{sec:GCM}.
\item There is a piece of the differential $d_c : \U_\text{\floweroneright} \to \U_\bigcirc$ contracting the stem if it has length exactly 1.
\item There is a piece $d_{s_p} : \U_\text{\floweroneright} \to A$ by cutting an edge in the loop.
\item Similarly, there is a piece $d_s: \U_\bigcirc \to A$.
\item Finally, there is a piece $d_{s_s}: \U_\text{\floweroneright} \to A$ by cutting an edge in the stem.
This disconnects the loop, and ``sends it to a number'' by using the loop order one part $Z_1$ of the partition function \eqref{equ:Zsplit}.
\end{itemize}
Moreover, one has that $d\nu\in A$ is a single vertex decorated by the top form $\chi(M)\omega$ representing the Euler class of the manifold.
\end{itemize}
We let $\evd : \mathcal{C} \to \U$ denote the natural map obtained by restricting to the boundary.
After choosing a propagator, the authors of \cite{CamposWillwacher} construct the following maps of complexes
\begin{align*}
A &\longrightarrow \Omega_{PA}^\bullet(M) \\
\mathcal{C} &\longrightarrow \Omega^\bullet_{PA}(\mathsf{FM}_M(2))\\
\mathcal{U} &\longrightarrow \Omega^\bullet_{PA}(UTM = \partial \mathsf{FM}_M(2)).
\end{align*}
They are compatible with the $A$-module structure and restriction to the boundary. Moreover, the element $\nu \in \U$ is a representative of the fiberwise volume form, such that the Thom class $Th \in A \oplus \U[1]$ is represented by $\chi(M)\omega \oplus \nu$ and a lift in the $\chi(M) = 0$ case is given by $\mathit{Th} = \nu \in \U$.
\begin{Rem}
We want to think of these graph complexes as constructed similarly to the Hochschild complexes from skeleton graphs, where the Hochschild edge $B$ is replaced by the ``graph edge'', which can be defined as
$$
A\langle | \rangle / (|^2) \oplus A \otimes A = B \oplus A \oplus A \otimes A
$$
and differential $d(|) = 1 + |e_i \otimes e^i|$. We can write the total differential as $d = d_\text{Hoch} + d_c + d_s$, where $d_c : B \to A$ is the counit, and $d_s(|) = |e_i \otimes e^i|$. The two extra differentials correspond to maps between parts of a graph complex with differing skeleta: the first corresponds to contracting an edge, while the second corresponds to splitting one.
\end{Rem}
The edge splitting map $d_s: B \to A \otimes A$ can be expressed in two steps, first splitting the Hochschild edge and adding decorations $| e_i \otimes e^i |$ and then identifying the two resulting bar complexes as part of two copies of the bar-cobar resolution $A$
$$
B \to B \otimes B \to A \otimes A.
$$
We will use the notations
\begin{align}
\label{equ:leftmult}
B &\overset{m_l}{\longrightarrow} A \\
x | \alpha | y &\longmapsto x (\alpha y)
\end{align}
and
\begin{align*}
B &\overset{m_r}{\longrightarrow} A \\
x | \alpha | y &\longmapsto (x\alpha) y
\end{align*}
for these maps. They satisfy useful commutation relations with the contracting and the splitting differentials; for instance, we obtain
$$
[d, m_l](x| \alpha | y) = \epsilon(\alpha)xy - d_s(m_l(x | \alpha | y)),
$$
hence $m_l$ provides a homotopy between $\epsilon : B \to A$ and the map $d_s(m_l(x | \alpha | y)) = x m_l(\alpha' e_i) z_0(e^i \alpha'' y) = x \alpha' e_i z_0(e^i \alpha'' y)$. We will use this to obtain a homotopy, in cases where $y$ itself is not trivalent, between $1y$ and an element that lives in the trivalent part.
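Throughout, the primes denote Sweedler-type notation for the deconcatenation coproduct on the bar construction, $\alpha \mapsto \sum \alpha' \otimes \alpha''$. As a purely illustrative aside (not part of the model itself), this convention can be sketched in a few lines of code:

```python
# Toy illustration of the Sweedler-style primes used in the bar complex:
# for a word w = (a_1, ..., a_k), the deconcatenation coproduct is
#   Delta(w) = sum over i of (a_1..a_i) (x) (a_{i+1}..a_k),
# written in the text as  alpha  ->  alpha' (x) alpha''.
def deconcatenations(word):
    """Return all splittings alpha' (x) alpha'' of a tuple."""
    return [(word[:i], word[i:]) for i in range(len(word) + 1)]

# A word of length 3 has 4 deconcatenations, including the two
# splittings with an empty (unit) factor.
for left, right in deconcatenations(("a1", "a2", "a3")):
    print(left, "(x)", right)
```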
\begin{Lem}
\label{lem:contrsquare}
The square
$$
\begin{tikzcd}
A \otimes A \ar[r] \ar[d] & A \ar[d] \\
\mathcal{C} \ar[r] & \U
\end{tikzcd}
$$
is a homotopy pushout. Equivalently, the inclusion $\operatorname{cone}(A \otimes A \to A) \to \operatorname{cone}(\mathcal{C} \to \U)$ is a quasi-isomorphism of $A$-bimodules.
\end{Lem}
\begin{proof}
Since the map is an inclusion, it suffices to show that the cokernel
$$
\operatorname{cone}( B \to A\tp \oplus \U_\bigcirc \oplus \U_\text{\floweroneright})
$$
is contractible. We consider the short exact sequence
$$
\begin{tikzcd}
\operatorname{cone}( B \to A\tp) \ar{r} & \operatorname{cone}( B \to A\tp \oplus \U_\bigcirc \oplus \U_\text{\floweroneright}) \ar{r} & (\U_\bigcirc \oplus \U_\text{\floweroneright})[1]
\end{tikzcd}
$$
and note that the outer terms are contractible.
\end{proof}
The following follows directly from \cite{CamposWillwacher}.
\begin{Prop}\label{prop:cube}
The following cube commutes and all the vertical maps are quasi-isomorphisms.
$$
\begin{tikzcd}[back line/.style={densely dotted}, row sep=3em, column sep=3em]
& A \otimes A \ar{dl} \ar{rr} \ar{dd}[near end]{\simeq}
& & A \ar{dd}{\simeq} \ar{dl} \\
\mathcal{C} \ar[crossing over]{rr} \ar{dd}{\simeq} & & \U \\
& \Omega_{PA}^\bullet(M \times M) \ar{rr} \ar{dl} & & \Omega_{PA}^\bullet(M) \ar{dl} \\
\Omega_{PA}^\bullet(\mathsf{FM}_M(2)) \ar{rr} & & \Omega_{PA}^\bullet(UTM) \ar[crossing over, leftarrow]{uu}[near start]{\simeq}
\end{tikzcd}
$$
In other words, the upper face of the cube is indeed a model for the square \eqref{equ:bdry_inclusion} as desired.
\end{Prop}
\begin{proof}
The diagram commutes by construction. Since the vertical maps of the back face are quasi-isomorphisms, and in view of the previous lemma, it is enough to show that the map $\operatorname{cone}(A \otimes A \to \mathcal{C}) \to \operatorname{cone}(\Omega_{PA}^\bullet(M\times M) \to \Omega_{PA}^\bullet(\mathsf{FM}_M(2)))$ is a quasi-isomorphism. This is clear: by the Thom isomorphism both cones have cohomology a free $H$-module with generator given by the Thom class, and the vertical map respects these Thom classes by construction.
\end{proof}
We recall that for the construction of the coproduct in the case $\chi(M) = 0$ we used the map (in the derived category) of $A$-bimodules
$$
A \overset{\wedge \nu}{\longrightarrow} \Omega_{PA}^\bullet(UTM) \longrightarrow \operatorname{cone}(\Omega_{PA}^\bullet(\mathsf{FM}_M(2)) \to \Omega_{PA}^\bullet(UTM)) \longleftarrow \operatorname{cone}( A \otimes A \to A),
$$
where $\nu$ is a (in this case closed) fiberwise volume form. The above proposition allows us to replace this map with
$$
A \overset{\wedge \nu}{\longrightarrow} \U \longrightarrow \operatorname{cone}(\mathcal{C} \to \U) \longleftarrow \operatorname{cone}( A \otimes A \to A).
$$
It follows directly from the proposition that the second arrow is a quasi-isomorphism and hence the map is well-defined. More concretely, after tensoring everything from both sides with $B$, we obtain a map
$$
B \otimes_A B \longrightarrow \operatorname{cone}(A \otimes A \to A),
$$
defined by the requirement that the diagram
$$
\begin{tikzcd}
B \otimes_A B \ar[r, "g"] \ar[dr, "\wedge Th"', ""{name=g}] & A\otimes A / A \ar[d] \arrow[Rightarrow, from=g, ""'] \\
& \mathcal{C} / \U
\end{tikzcd}
$$
commutes up to homotopy. We seek to compute the map $g$ explicitly. For this we essentially spell out the formulas implicit in the proof of Lemma \ref{lem:contrsquare}.
In the case of the reduced coproduct we note that the above proposition also directly implies the existence of a homotopy commuting diagram
$$
\begin{tikzcd}
QA[-n] \ar[r, "g"] \ar[dr, "\wedge Th"', ""{name=g}] & A\otimes A \ar[d] \arrow[Rightarrow, from=g, "h"'] \\
& A,
\end{tikzcd}
$$
with $QA:=B\otimes_A B$.
Recall that in this case the homotopy is extra data (and not a property of $g$) that appears in the description of the coproduct. Again, we will obtain formulas by spelling out the contracting homotopy of the upper face of the cube in Proposition \ref{prop:cube} at least for the image of the map $\wedge Th$.
\subsection{A contracting homotopy}
In this subsection we will produce an explicit contracting homotopy of (the total complex of) the diagram in Lemma \ref{lem:contrsquare}. Using the explicit description of $\mathcal{C}$ and $\U$ we can write the square as
\begin{equation}\label{equ:bunt1}
\begin{tikzcd}
{\color{blue} A \otimes A} \ar[r] \ar[d] & {\color{teal} A} \ar[d] \\
\mathcal{C}={\color{purple} B} \oplus {\color{blue} A \otimes A} \ar[r] & \U = {\color{teal} A} \oplus {\color{purple} A\tp} \oplus {\color{orange} \U_\bigcirc \oplus \U_\text{\floweroneright} }
\end{tikzcd}
\end{equation}
where same-colored elements correspond to contractible subquotient complexes. Our strategy is to write down a contracting homotopy $h_0$ for each of these complexes and then get an overall contracting homotopy $H$ by the perturbation lemma.
Since we want to construct a contracting homotopy in $A$-bimodules, we will first tensor the diagram with $B$ over $A$ from the left and from the right to obtain
\begin{equation}\label{equ:bunt2}
\begin{tikzcd}
{\color{blue} B \otimes B} \ar[r] \ar[d] & {\color{teal} B\otimes_A B} \ar[d] \\
{\color{purple} B\otimes_A B\otimes_A B} \oplus {\color{blue} B\otimes B} \ar[r] & {\color{teal} B\otimes_A B} \oplus {\color{purple} B\otimes_A (A\tp) \otimes_A B} \oplus {\color{orange} B\otimes_A (\U_\bigcirc \oplus \U_\text{\floweroneright})\otimes_A B }
\end{tikzcd}
\end{equation}
We now describe the components of the contracting homotopy $h_0$, that is, a contracting homotopy of each of the colored subquotient complexes (tensored with $B \otimes B$ if our homotopy makes use of it).
\begin{itemize}
\item ${\color{purple} B \otimes_A B \otimes_A B \to B \otimes_A B}$. The maps
\begin{align*}
B \otimes_A B &\longrightarrow B \otimes_A B \otimes_A B \\
\alpha |x| \beta &\longmapsto \alpha | x | \beta' | 1 | \beta''
\end{align*}
and
\begin{align*}
B \otimes_A B \otimes_A B &\longrightarrow B \otimes_A B \otimes_A B \\
\alpha |x| \beta |y| \gamma &\longmapsto \alpha | x | \beta y \gamma' | 1 | \gamma''
\end{align*}
define a strict homotopy inverse (i.e., the homotopies commute with the maps) to $B \otimes_A B \otimes_A B \to B \otimes_A B$, where the second component is the action of the fundamental cycle of the interval via the reparametrization map (see section \ref{sec:splittingmap}).
Thus the contracting homotopy $h_0$ on $B \otimes_A B \otimes_A B \oplus B \otimes_A B[1]$ has two components, given by the formulas above.
\item ${\color{blue} B \otimes B \overset{\operatorname{id}}{\to} B \otimes B}$. We could simply choose the identity to be our homotopy. However, since we want to obtain a formula that lives in trivalent graphs (i.e., all $\operatorname{Com}_\infty$-multiplications have been carried out), we choose a slightly more complicated contracting homotopy, namely we define
\begin{align*}
m : B \otimes B &\longrightarrow (B \otimes B)[1]\\
\alpha \otimes \beta &\longmapsto \alpha' m_l(\alpha'') \otimes \beta [1],
\end{align*}
which is a degree $-2$ map on $\operatorname{cone}( B \otimes B \to B \otimes B)$, that is we map $(u,v) \mapsto (m(v), 0)$,
where $u \in B \otimes B$ and $v \in B \otimes B[1]$. We then take $h_0 = \operatorname{id}[1] + [d, m]$ as our contracting homotopy, where $\operatorname{id}[1]$ is the canonical contraction. Thus $h_0(u,v) = (mu + (1 -[d_{B\otimes B},m])(v), mv)$. Similarly to the discussion in \eqref{equ:leftmult} we obtain that $m$ gives a homotopy between the identity and the map
$$
(\alpha \otimes \beta) \longmapsto (\alpha'|\alpha''e_i) z_0(e^i\alpha''') \otimes \beta,
$$
which we will abbreviate to $(\alpha \otimes \beta) \mapsto (\alpha'|d_s(\alpha'')) \otimes \beta$.
Thus the homotopy $h_0$ is given by the formula
$$
h_0( \alpha_1 \otimes \beta_1 , \alpha_2 \otimes \beta_2) = (m(\alpha_1 \otimes \beta_1) + (\alpha_2'| d_s( \alpha_2'')) \otimes \beta_2 , m(\alpha_2\otimes \beta_2) ).
$$
\item ${\color{orange} \U_\bigcirc \to \U_\text{\floweroneright}}$. We identify $\U_\bigcirc$ with $B^{\geq 1}_{{\mathbb Z}_2} \otimes_{A^{\otimes 2}} A$ and $\U_\text{\floweroneright}$ with $B {}_A \otimes_{A^{\otimes 2}} B^{\geq 1}_{{\mathbb Z}_2}$.
Similarly to above we get a contracting homotopy using deconcatenation and the interval action, namely it is given by components
\begin{align*}
h_0: B^{\geq 1} \otimes_{A^{\otimes 2}} A &\longrightarrow B {}_A \otimes_{A^{\otimes 2}} B^{\geq 1} \\
\beta|x &\longmapsto \pm x|\beta' \Sha (\beta''')^* |1| \beta''
\end{align*}
and
\begin{align*}
h_0: B {}_A \otimes_{A^{\otimes 2}} B^{\geq 1} &\longrightarrow B {}_A \otimes_{A^{\otimes 2}} B^{\geq 1} \\
\alpha|x|\beta &\longmapsto \pm\alpha x \beta' \Sha (\beta''')^* |1| \beta'',
\end{align*}
which both descend to ${\mathbb Z}_2$-coinvariants since the shuffle product is commutative.
\item ${\color{teal} B \otimes_A B \overset{\operatorname{id}}{\to} B \otimes_A B}$. Similarly to the blue homotopy, we want to ``carry out one $\operatorname{Com}_\infty$-multiplication''. We again twist the canonical contracting homotopy by the map
\begin{align*}
m_m : B \otimes_A B &\longrightarrow B \otimes_A B \\
\alpha|x|\beta &\longmapsto \alpha' m_l(\alpha'' \Sha (\beta')^*|x) \beta'',
\end{align*}
to obtain the homotopy
$$
h_0(u,v) = (m_mu + (1 -[d_{B\otimes B},m_m])(v), m_mv),
$$
where
$$
(1 -[d_{B\otimes B},m_m])(\alpha|x|\beta) = \pm z_0((\alpha'' \Sha \beta') e^i)\alpha'| (\alpha'' \Sha (\beta'')^* e_i) | \beta'''.
$$
\end{itemize}
Let us note that the only parts of the differential disregarded in the homotopy $h_0$ are those coming from the horizontal maps in our diagram and all the ``splitting differentials'' given by cutting edges in graphs. Thus, decomposing the differential of the total complex as $d = d_0 + d_1$, where now $[d_0,h_0] = \operatorname{id}$, we obtain the contracting homotopy $H$ by
$$
H = h_0 + h_0 d_1 h_0 + h_0 d_1 h_0 d_1 h_0 + \dots,
$$
where one checks that the sum is finite, namely
$$
H = h_0 + h_0 d_1 h_0 + h_0 d_1 h_0 d_1 h_0 + h_0 d_1 h_0 d_1 h_0 d_1 h_0.
$$
Let us universally denote by $\pi:B\otimes_A (-) \otimes_A B \to (-)$ the projection undoing the tensor products with $B$ that we introduced above.
Then we decompose $\pi \circ H = H_{A\otimes A} + H_{A} + H_{\mathcal{C}} + H_{\U}$ according to the target space. For instance $H_\mathcal{C} : A \oplus A\otimes A \oplus \mathcal{C} \oplus \U \to \mathcal{C}$ (with degree shifts ignored).
\subsection{Explicit formulas for the homotopies of Lemmas \ref{lem:hocop}, \ref{lem:frcop}}
Recall that for the $\chi(M)=0$-case of the coproduct we need to produce a map $g: QA[-n] \to A \otimes A / A$ (with $QA:=B\otimes_A B$ our chosen cofibrant resolution of the $A$-bimodule $A$) that makes the diagram
$$
\begin{tikzcd}
B \otimes_A B \ar[r, "g"] \ar[dr, "\wedge Th"', ""{name=g}] & A\otimes A / A \ar[d] \arrow[Rightarrow, from=g, ""'] \\
& \mathcal{C} / \U
\end{tikzcd}
$$
homotopy commute, see Lemma \ref{lem:frcop}. Using the homotopy $H$ from the previous subsection we can choose
\begin{align*}
g &= (H_{A\otimes A} + H_{A}) \circ (\wedge Th).
\end{align*}
For the string coproduct in the reduced case (i.e., on $H(LM,M)$) we need to find $g$ and $h$ such that
\begin{equation}
\begin{tikzcd}
QA[-n] \ar[d] \ar[r, "g"] & (A \otimes A) / \mathcal{C} \ar[d] \\
A[-n] \ar[r, "\wedge Th"] \ar[Rightarrow, ur, "h"] & A / \U,
\end{tikzcd}
\end{equation}
is a homotopy commuting square. We choose
\begin{align*}
g &= (H_{A \otimes A} + H_\mathcal{C}) \circ (\wedge Th) \\
h &= (H_A + H_\U ) \circ (\wedge Th)
\end{align*}
Composing with the projections $(A\otimes A)/\mathcal{C}\to A\otimes A$ and $A/\U\to A$ we obtain the diagram
$$
\begin{tikzcd}[column sep=huge,row sep=large]
B \otimes_A B[-n] \ar[r, "H_{A\otimes A} \circ (\wedge Th)"] \ar[d] & A \otimes A \ar[d]\\
A[-n] \ar[Rightarrow, ur, "H_A \circ (\wedge Th)"] \ar[r, "\wedge \chi(M)\omega"] & A.
\end{tikzcd}
$$
Thus in both cases ($\chi(M)=0$ or working modulo constant loops) it remains to compute
\begin{align*}
&H_{A\otimes A} \circ (\wedge Th)& &\text{and}
& & H_{A} \circ (\wedge Th).
\end{align*}
We note that $\mathit{Th} = T + Y + \chi(M)\omega$ has three components, with $T\in \U$ the tadpole graph (first term in \eqref{equ:nudef}), $Y\in \U$ the second term in \eqref{equ:nudef} and $\omega\in H^n(M)\subset A$ the top dimensional cohomology class, normalized so that $M$ has volume 1.
The major contribution to $g,h$ above comes from the piece $\tp\wedge$; its image lies in the purple summand of the lower right corner of \eqref{equ:bunt2}. On this summand, the homotopy $H$ is nontrivial, and we shall evaluate it now.
\subsubsection{Image of the tadpole graph}
We evaluate the formulas for $H_{A\otimes A} \circ (\wedge T)$ and $H_{A} \circ (\wedge T)$ step by step on a typical element
$$
\alpha | x | \beta = 1|\alpha_1 \alpha_2 \ldots \alpha_k| x | \beta_1 \beta_2 \ldots \beta_l | 1 \in A \otimes \bar{A}[1]^{\otimes k} \otimes A \otimes \bar{A}[1]^{\otimes l}\otimes A \subset B \otimes_A B.
$$
Note that this is enough since $H$ is a map of $A$-bimodules.
$$
\alpha | x | \beta =
\begin{tikzpicture}[scale = 2]
\filldraw (-1,0) circle (0.05);
\filldraw (0,0) circle (0.05);
\filldraw (1,0) circle (0.05);
\draw [thick] (-1,0) --(1,0);
\draw (-0.8,0) --(-0.8,0.1) node[above]{$\alpha_1$};
\draw (-0.6,0) --(-0.6,0.1) node[above]{$\alpha_2$};
\draw (-0.6,0) to node[above]{$\ldots$}(-0.2,0) ;
\draw (-0.2,0) --(-0.2,0.1) node[above]{$\alpha_k$};
\draw (0,0) --(0,0.2) node[above]{$x$};
\draw (0.2,0) --(0.2,0.1) node[above]{$\beta_1$};
\draw (0.4,0) --(0.4,0.1) node[above]{$\beta_2$};
\draw (0.4,0) to node[above]{$\ldots$} (0.8,0);
\draw (0.8,0) --(0.8,0.1) node[above]{$\beta_l$};
\end{tikzpicture}
\quad=\quad
\begin{tikzpicture}[scale = 2]
\filldraw (-1,0) circle (0.05);
\filldraw (0,0) circle (0.05);
\filldraw (1,0) circle (0.05);
\draw [thick] (-1,0) --(1,0);
\draw (0,0) --(0,0.2) node[above]{$x$};
\draw [thick] (-1,0) to node[above]{$\alpha$} (0,0);
\draw [thick] (0,0) to node[above]{$\beta$} (1,0);
\end{tikzpicture}
$$
The terms of $H(\alpha | x| \beta \wedge T)$ are obtained by iteratively applying $h_0$ and $d_1$,
\begin{align*}
h_0( \alpha | x | \beta \wedge T) &= \Psi_1^\mathcal{C} \\
d_1 h_0( \alpha | x | \beta \wedge T) &= \Phi_1^\mathcal{C} + \Phi_1^\U \\
h_0 d_1 h_0( \alpha | x | \beta \wedge T) &= \Psi_2^{A \otimes A} + \Psi_2^\mathcal{C} + \Psi_2^\U \\
d_1 h_0 d_1 h_0( \alpha | x | \beta \wedge T) &= \Phi_2^{A} + \Phi_2^\U \\
h_0 d_1 h_0 d_1 h_0( \alpha | x | \beta \wedge T) &= \Psi_3^{A} + \Psi_3^\U \\
d_1 h_0 d_1 h_0 d_1 h_0( \alpha | x | \beta \wedge T) &= 0,
\end{align*}
where we decomposed the images according to which corner of the square they lie in. In the following we compute each term and show that the missing components are zero. Using this notation we have
\begin{align*}
H_{A\otimes A}(\alpha | x| \beta \wedge T) &= \pi \Psi_2^{A\otimes A} \\
H_{A}(\alpha | x| \beta \wedge T) &= \pi \Psi_3^A
\end{align*}
The term $\Psi_1^\mathcal{C}$ is obtained by applying the {\color{purple}purple homotopy}, that is we obtain
$$
\Psi_1^\mathcal{C} = \alpha | x | \beta' | 1 | \beta'' \in {\color{purple}B \otimes_A B \otimes_A B} \subset B \otimes_A \mathcal{C} \otimes_A B
$$
$$
\Psi_1^\mathcal{C} =
\begin{tikzpicture}[scale = 2]
\filldraw (-1,0) circle (0.05);
\filldraw (0,0) circle (0.05);
\filldraw (1,0) circle (0.05);
\filldraw (2,0) circle (0.05);
\draw [thick] (-1,0) to node[above]{$\alpha$}(0,0);
\draw (0,0) to node[above]{$\beta'$}(1,0);
\draw [thick] (1,0) to node[above]{$\beta''$}(2,0);
\draw (1,0) --(1,0.2) node[above]{$1$};
\draw (0,0) --(0,0.2) node[above]{$x$};
\end{tikzpicture}
\quad=\quad
\begin{tikzpicture}[scale = 2]
\filldraw (-1,0) circle (0.05);
\filldraw (0,0) circle (0.05);
\filldraw (1,0) circle (0.05);
\filldraw (2,0) circle (0.05);
\draw [thick] (-1,0) to node[above]{$\alpha$}(0,0);
\draw (0,0) to node[above]{$\beta'$}(1,0);
\draw [thick] (1,0) to node[above]{$\beta''$}(2,0);
\draw (0,0) --(0,0.2) node[above]{$x$};
\end{tikzpicture}
$$
where the middle edge is drawn in a different manner to remember that we think of it as a ``graph edge'' and not a ``Hochschild edge'', i.e.\ there is a splitting differential.
Applying $d_1$ gives two components, one coming from splitting the middle edge and one from the horizontal map $\evd:\mathcal{C} \to \U$. We write
$$ d_1 \Psi_1^\mathcal{C} = {\color{blue} \Phi_1^\mathcal{C}} + {\color{orange} \Phi_1^\U},$$
where
$$
\Phi_1^\mathcal{C} = {\color{blue} d_s \Psi_1^\mathcal{C}} = (\alpha|x\beta' e_q) \otimes (e_q^* \beta''|1|\beta''') \in B \otimes_A \mathcal{C} \otimes_A B
$$
$$
{\color{blue} d_s \Psi_1^\mathcal{C}} =
\begin{tikzpicture}[scale = 2]
\filldraw (-1,0) circle (0.05);
\filldraw (0,0) circle (0.05);
\filldraw (1.8,0) circle (0.05);
\filldraw (2.8,0) circle (0.05);
\draw [thick] (-1,0) to node[above]{$\alpha$}(0,0);
\draw [thick] (1.8,0) to node[above]{$\beta'''$}(2.8,0);
\draw (0,0) to node[above]{$\beta'$}(0.6,0.2) node[right]{$\omega_i$};
\draw (1/4*0.6, 1/4*0.2) to (1/4*0.6 - 0.2/10, 1/4*0.2 + 0.6/10);
\draw (2/4*0.6, 2/4*0.2) to (2/4*0.6 - 0.2/10, 2/4*0.2 + 0.6/10);
\draw (3/4*0.6, 3/4*0.2) to (3/4*0.6 - 0.2/10, 3/4*0.2 + 0.6/10);
\draw (1.8,0) to node[above]{$\beta''$} (1.2,0.2) node[left]{$\omega^i$};
\draw (1.8-1/4*0.6, 1/4*0.2) to (1.8-1/4*0.6 + 0.2/10, 1/4*0.2 + 0.6/10);
\draw (1.8-2/4*0.6, 2/4*0.2) to (1.8-2/4*0.6 + 0.2/10, 2/4*0.2 + 0.6/10);
\draw (1.8-3/4*0.6, 3/4*0.2) to (1.8-3/4*0.6 + 0.2/10, 3/4*0.2 + 0.6/10);
\draw (0,0) --(0,0.2) node[above]{$x$};
\end{tikzpicture}
\quad=\quad
\begin{tikzpicture}[scale = 2]
\filldraw (-1,0) circle (0.05);
\filldraw (0,0) circle (0.05);
\filldraw (1.8,0) circle (0.05);
\filldraw (2.8,0) circle (0.05);
\draw [thick] (-1,0) to node[above]{$\alpha$}(0,0);
\draw [thick] (1.8,0) to node[above]{$\beta'''$}(2.8,0);
\draw (0,0) to node[above]{$\beta'$}(0.6,0.2);
\draw (1.8,0) to node[above]{$\beta''$} (1.2,0.2);
\draw (0,0) --(0,0.2) node[above]{$x$};
\draw [dotted](0.6,0.2) to [bend left = 18] (1.2,0.2);
\end{tikzpicture}
$$
and
$$
\Phi_1^\U = \evd (\Psi_1^{\mathcal{C}}).
$$
To obtain the $\Psi_2$'s we have to apply the blue homotopy to $\Phi_1^\mathcal{C}$ and the orange homotopy to $\Phi_1^\U$.
Then $\Psi_2^{A \otimes A} + \Psi_2^\mathcal{C} = h_0 {\color{blue} \Phi_1^\mathcal{C}}$ are the components after applying the blue homotopy. That is,
$$
\Psi_2^{A \otimes A} = z_0(e^i \alpha''' x \beta' e_j) (\alpha'|\alpha'' e_i) \otimes (e^j \beta''| \beta''') \in {\color{blue} B \otimes B}
$$
$$
\Psi_2^{A \otimes A} \quad=\quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-50pt]current bounding box.north)}]
\draw [dotted] (-.8,0) to (-.5,0);
\draw (-.5,0) to node[above]{$\alpha'''$} (0,0);
\draw (0,0) to node[above]{$\beta'$} (.5,0);
\draw (0,0) to (0, 0.2) node[above]{$x$};
\draw [dotted] (.5,0) to (.8,0);
\draw [dashed] (0,0.2) circle (0.7);
\draw (0,0.85) node[above]{$z_0$};
\draw (-1.1,0) to node[above]{$\alpha''$}(-.8,0);
\draw [thick](-1.6,0) to node[above]{$\alpha'$}(-1.1,0);
\filldraw (-1.6,0) circle (0.05);
\filldraw (-1.1,0) circle (0.05);
\draw (.8,0) to node[above]{$\beta''$}(1.1,0);
\draw [thick](1.1,0) to node[above]{$\beta'''$}(1.6,0);
\filldraw (1.6,0) circle (0.05);
\filldraw (1.1,0) circle (0.05);
\end{tikzpicture}
$$
and
$$
\Psi_2^\mathcal{C} = \alpha' | \alpha ''x d_s'(\beta') \otimes d_s''(\beta') | 1 | \beta'' \in B \otimes B \subset B \otimes_A \mathcal{C} \otimes_A B
$$
$$
\Psi_2^\mathcal{C} \quad=\quad
\begin{tikzpicture}[scale = 2]
\filldraw (-1,0) circle (0.05);
\filldraw (-.5,0) circle (0.05);
\filldraw (1.8,0) circle (0.05);
\filldraw (2.3,0) circle (0.05);
\draw [thick] (-1,0) to node[above]{$\alpha'$}(-.5,0);
\draw [thick] (1.8,0) to node[above]{$\beta'''$}(2.3,0);
\draw (-.5,0) to node[above]{$\alpha''$}(0.05,0.1);
\draw (0.05,0.1) to node[above]{$\beta'$}(0.6,0.2);
\draw (-.5 + 4/8*1.1, 4/8*0.2) to node[above]{$x$}(-.5 + 4/8*1.1 - 0.2/10, 4/8*0.2 + 1.1/10);
\draw (1.8,0) to node[above]{$\beta''$} (1.2,0.2);
\draw [dotted](0.6,0.2) to [bend left = 10] (1.2,0.2);
\end{tikzpicture}
$$
The term $\Psi_2^\U$ is obtained by applying the orange homotopy to $\Phi_1^\U$, and hence
$$
\Psi_2^\U = \alpha | x | \beta' \Sha (\beta''')^* |1| \beta'' |1| \beta'''' \in B \otimes_A \U \otimes_A B.
$$
$$
\Psi_2^\U =
\begin{tikzpicture}[scale = 2]
\filldraw (-.5,0) circle (0.05);
\filldraw (.5,0) circle (0.05);
\draw [thick] (-.5,0) to node[above]{$\alpha$} (0,0);
\draw [thick] (0,0) to node[above]{$\beta''''$} (.5,0);
\draw (0,0) to node[left]{$\beta'$} node[right]{$\beta'''$} (0,1);
\draw (0,1.5) circle (0.5) ;
\draw (0.2,2) node[right]{$\beta'' > 0$};
\draw (0,0) to (-0.2, 0.3) node[left]{$x$};
\end{tikzpicture}
$$
Here $\beta'' > 0$ denotes the condition that $\beta''$ contains at least one element of $A$, that is, we apply the reduced coproduct on this factor.
The term $\Phi_2^A$ is the image of $\Psi_2^{A\otimes A}$ under the horizontal multiplication map, that is
$$
\Phi_2^A = z_0(e^i \alpha''' x \beta' e_j) (\alpha'|(\alpha'' e_i)(e^j \beta'')| \beta''') \in {\color{blue} B \otimes_A B}
$$
$$
\Phi_2^A \quad = \quad
\begin{tikzpicture}[scale = 2, baseline=-.65ex]
\filldraw (-.5,-.5) circle (0.05);
\draw [thick] (-.5,-.5) to node[below]{$\alpha$} (0,-.5);
\filldraw (.5,-.5) circle (0.05);
\draw [thick] (0,-.5) to node[below]{$\beta$}(.5,-.5);
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} (-0.3,0.1);
\draw (0,-.5) to node[right]{$\beta$} (0.3,0.1);
\draw[dotted] (-0.3,0.1) to (-0.4, 0.3);
\draw[dotted] (0.3,0.1) to (0.4, 0.3);
\draw (-.4,.3) to [out=120, in=180] node[below left]{$\alpha$} (0,1.0);
\draw (.4,.3) to [out=60, in=0] node[below right]{$\beta$} (0,1.0);
\draw (0,1) to (0,1.2) node[above]{$x$};
\draw [dashed] (0,0.8) ellipse [x radius = .7, y radius = .7];
\draw (0,1.5) node[above]{$z_0$};
\end{tikzpicture}
$$
The two components of the term $\Phi_2^\U = \evd(\Psi_2^\mathcal{C}) + d_s \Psi_2^\U$ are given by
$$
\evd(\Psi_2^\mathcal{C}) = \alpha'| (\alpha''x\beta'e_i)(e^i \beta'')| \beta'''
$$
$$
\evd(\Psi_2^\mathcal{C}) \quad = \quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-50pt]current bounding box.north)}]
\filldraw (-.5,0) circle (0.05);
\draw [thick] (-.5,0) to node[below]{$\alpha$} (0,0);
\filldraw (.5,0) circle (0.05);
\draw [thick] (0,0) to node[below]{$\beta$}(.5,0);
\filldraw (0,0) circle (0.05);
\draw (0,0) to [out=135, in=-90] node[below left]{$\alpha$} (-1,1);
\draw (-1,1) to [out=90, in=180] node[above left]{$\beta$} (-.2,2);
\draw (-1,1) --(-1.2,1) node[left] {$x$};
\draw [dotted] (-.2,2) to (.2,2);
\draw (0,0) to [out=45, in=-90] (1,1) node[right]{$\beta$};
\draw (1,1) to [out=90, in=0] (.2,2);
\end{tikzpicture}
$$
and
$$
d_s \Psi_2^\U \quad = \quad
\begin{tikzpicture}[baseline=-.65ex,scale = 2]
\filldraw (-.5,0) circle (0.05);
\draw [thick] (-.5,0) to node[below]{$\alpha$} (0,0);
\filldraw (.5,0) circle (0.05);
\draw [thick] (0,0) to node[below]{$\beta$}(.5,0);
\filldraw (0,0) circle (0.05);
\draw (0,0) to (-.4,0.4) node[above left]{$x$};
\draw (0,0) to node[left]{$\beta$} node[right]{$\beta$} (0,.4);
\draw [dotted](0,0.4) to (0,.6);
\draw (0,0.6) to node[left]{$\beta$} node[right]{$\beta$} (0,1);
\draw (0,1.5) circle (0.5);
\draw (.4, 1.8) node[right]{$\beta > 0$};
\draw [dashed] (0,1.3) ellipse [x radius = .7, y radius = .8];
\draw (0,2.05) node[above]{$z_1$};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline=-.65ex,scale = 2]
\filldraw (-.5,0) circle (0.05);
\draw [thick] (-.5,0) to node[below]{$\alpha$} (0,0);
\filldraw (.5,0) circle (0.05);
\draw [thick] (0,0) to node[below]{$\beta$}(.5,0);
\filldraw (0,0) circle (0.05);
\draw (0,0) to (-.4,0.4) node[above left]{$x$};
\draw (0,0) to node[left]{$\beta$} node[right]{$\beta$} (0,1);
\draw ([shift=(-90:.5)]0,1.5) arc (-90:45:.5) node[midway]{\phantom{xl}$\beta$};
\draw [dotted]([shift=(45:.5)]0,1.5) arc (45:75:.5);
\draw ([shift=(75:.5)]0,1.5) arc (75:270:.5) node[midway]{$\beta$\phantom{xl}};
\draw (.4, 1.8) node[right]{$ > 0$};
\end{tikzpicture}
$$
Since we have no need for the term $\Psi_3^\U$, we only compute $\Psi_3^A$. It is obtained by applying the teal homotopy to $\Phi_2^A$ and $\Phi_2^\U$, that is,
$$
\Psi_3^A = h_0^A(\Phi_2^A + \Phi_2^\U).
$$
We obtain
$$
h_0^A(\Phi_2^A) \quad = \quad
\begin{tikzpicture}[baseline = {(0,-1)}, scale = 2]
\filldraw (-.5,-.5) circle (0.05);
\draw [thick] (-.5,-.5) to node[below]{$\alpha$} (0,-.5);
\filldraw (.5,-.5) circle (0.05);
\draw [thick] (0,-.5) to node[below]{$\beta$}(.5,-.5);
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.2);
\draw [dotted] (0,-.2) to (0, -.05);
\draw (0,-.05) to node[left]{$\alpha$} node[right]{$\beta$} (0,.25);
\draw (0,.25) to node[left]{$\alpha$} (-.1,.45);
\draw (0,.25) to node[right]{$\beta$} (.1,.45);
\draw [dotted] (-.1,.45) to (-.3, .85);
\draw [dotted] (.1, .45) to (.3, .85);
\draw [dashed] (0,0.25) ellipse [x radius = .35, y radius = .35];
\draw (0.4,0.25) node[right]{$z_0$};
\draw (-.3,.85) to [out=120, in=180] node[left]{$\alpha$} (0,1.3);
\draw (.3,.85) to [out=60, in=0] node[right]{$\beta$} (0,1.3);
\draw (0,1.3) to (0,1.5) node[above]{$x$};
\draw [dashed] (0,1.25) ellipse [x radius = .6, y radius = .6];
\draw (0,1.8) node[above]{$z_0$};
\end{tikzpicture}
$$
and\footnote{We apologize for the somewhat cumbersome notation, but hope that the meaning is still clear from the picture below.}
\begin{align*}
\pi \circ h_0^A(\Phi_2^\U) =& \alpha' \Sha (\beta''''''')^* e_i z_0(e^i \alpha'' (\beta'''''')^* x \beta' (\beta''''')^* ((\beta'''')^* e_k) \beta''' e^k) \\
&+ \alpha' \Sha (\beta''''''')^* e_i z_0(e^i \alpha'' \Sha (\beta'''''')^* x \beta' \Sha (\beta''''')^* e_j) z_1( (\beta'' \Sha (\beta'''')^* e^j) \beta''') \\
& + \alpha' \Sha (\beta'''')^* e_i z_0(e^i \alpha'' \Sha (\beta''')^* ((\beta'')^* e_k) \alpha''' x \beta' e^k)
\end{align*}
$$
h_0^A(\Phi_2^\U) \quad = \quad
\begin{tikzpicture}[scale = 2,baseline={(0,-1)}]
\filldraw (-.5,-.5) circle (0.05);
\draw [thick] (-.5,-.5) to node[below]{$\alpha$} (0,-.5);
\filldraw (.5,-.5) circle (0.05);
\draw [thick] (0,-.5) to node[below]{$\beta$}(.5,-.5);
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.1);
\draw [dotted] (0,-.1) to (0, .2);
\draw (0,0.2) to node[left]{$\alpha$} node[right]{$\beta$} (0,1);
\draw ([shift=(-90:.5)]0,1.5) arc (-90:45:.5) node[midway]{\phantom{xl}$\beta$};
\draw [dotted]([shift=(45:.5)]0,1.5) arc (45:75:.5);
\draw ([shift=(75:.5)]0,1.5) arc (75:180:.5) node[midway]{$\beta$\phantom{xll}};
\draw ([shift=(180:.5)]0,1.5) arc (180:270:.5) node[midway]{$\alpha$\phantom{xll}};
\draw (-.5,1.5) --(-.7,1.5) node[left] {$x$};
\draw [dashed] (0,1.1) ellipse [x radius = .9, y radius = 1.1];
\draw (0,2.15) node[above]{$z_0$};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline={(0,-1)},scale = 2]
\filldraw (-.5,-.5) circle (0.05);
\draw [thick] (-.5,-.5) to node[below]{$\alpha$} (0,-.5);
\filldraw (.5,-.5) circle (0.05);
\draw [thick] (0,-.5) to node[below]{$\beta$}(.5,-.5);
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.2);
\draw [dotted] (0,-.2) to (0, -.05);
\draw (0,-.05) to node[left]{$\alpha$} node[right]{$\beta$} (0,.25);
\draw (0,0.25) to (-.2,0.25) node[left]{$x$};
\draw (0,0.25) to node[left]{$\beta$} node[right]{$\beta$} (0,.55);
\draw [dotted] (0,.55) to (0, .7);
\draw (0,0.7) to node[left]{$\beta$} node[right]{$\beta$} (0,1);
\draw (0,1.5) circle (0.5);
\draw (.4, 1.8) node[right]{$\beta > 0$};
\draw [dashed] (0,1.35) ellipse [x radius = .7, y radius = .7];
\draw (0,2.0) node[above]{$z_1$};
\draw [dashed] (0,0.25) ellipse [x radius = .35, y radius = .35];
\draw (0.4,0.25) node[right]{$z_0$};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline={(0,-1)},scale = 2]
\filldraw (-.5,-.5) circle (0.05);
\draw [thick] (-.5,-.5) to node[below]{$\alpha$} (0,-.5);
\filldraw (.5,-.5) circle (0.05);
\draw [thick] (0,-.5) to node[below]{$\beta$}(.5,-.5);
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.1);
\draw [dotted] (0,-.1) to (0, .2);
\draw (0,0.2) to node[left]{$\alpha$} node[right]{$\beta$} (0,.6);
\draw (0,0.6) to node[left]{$\beta$} node[right]{$\beta$} (0,1);
\draw (0,0.6) to (-.2,0.6) node[left]{$x$};
\draw ([shift=(-90:.5)]0,1.5) arc (-90:45:.5) node[midway]{\phantom{xl}$\beta$};
\draw [dotted]([shift=(45:.5)]0,1.5) arc (45:75:.5);
\draw ([shift=(75:.5)]0,1.5) arc (75:270:.5) node[midway]{$\beta$\phantom{xl}};
\draw [dashed] (0,1.1) ellipse [x radius = .9, y radius = 1.1];
\draw (0,2.15) node[above]{$z_0$};
\draw (.4, 1.8) node[right]{$ > 0$};
\end{tikzpicture}
$$
\subsubsection{The ``other'' terms}
It remains to compute $H$ on the terms $\alpha|x|\beta \wedge Y$ and $\alpha|x|\beta \wedge \chi(M)\omega$.
In the first case, we obtain $H(\alpha|x|\beta \wedge Y)$ by applying the green homotopy,
$$
H^A(\alpha|x|\beta \wedge Y) = \pi (h_0^A(\alpha|x|\beta \wedge Y)),
$$
$$
h_0^A(\alpha|x|\beta \wedge Y) \quad = \quad
\begin{tikzpicture}[baseline={(0,-1)},scale = 2]
\filldraw (-.5,-.5) circle (0.05);
\draw [thick] (-.5,-.5) to node[below]{$\alpha$} (0,-.5);
\filldraw (.5,-.5) circle (0.05);
\draw [thick] (0,-.5) to node[below]{$\beta$}(.5,-.5);
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.1);
\draw [dotted] (0,-.1) to (0, .2);
\draw (0,0.2) to node[left]{$\alpha$} node[right]{$\beta$} (0,.6);
\draw (0,0.6) to (0,.9) node[above]{$e_i e^i$};
\draw (0,0.6) to (-.2,0.6) node[left]{$x$};
\draw [dashed] (0,.6) ellipse [x radius = .6, y radius = .6];
\draw (0,1.2) node[above]{$z_0$};
\end{tikzpicture}
\quad = \quad
\begin{tikzpicture}[baseline={(0,-1)},scale = 2]
\filldraw (-.5,-.5) circle (0.05);
\draw [thick] (-.5,-.5) to node[below]{$\alpha$} (0,-.5);
\filldraw (.5,-.5) circle (0.05);
\draw [thick] (0,-.5) to node[below]{$\beta$}(.5,-.5);
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.1);
\draw [dotted] (0,-.1) to (0, .2);
\draw (0,0.2) to node[left]{$\alpha$} node[right]{$\beta$} (0,.6);
\draw (0,0.6) to (0,.9);
\draw [dotted] (0,1.1) circle [radius = 0.2];
\draw (0,0.6) to (-.2,0.6) node[left]{$x$};
\draw [dashed] (0,.7) ellipse [x radius = .7, y radius = .7];
\draw (0,1.4) node[above]{$z_0$};
\end{tikzpicture}
$$
Note that this is the ``missing'' term in the last summand of $h_0^A(\Phi_2^\U)$ if we take the ordinary coproduct instead of the reduced one in that term.
The term $H^A(\alpha|x|\beta \wedge \chi(M)\omega)$ is obtained by applying the green homotopy, i.e.
$$
H^A(\alpha|x|\beta \wedge \chi(M)\omega) = \pi (h_0^A(\alpha|x|\beta \wedge \chi(M)\omega)),
$$
$$
h_0^A(\alpha|x|\beta \wedge \chi(M)\omega) \quad = \quad
\begin{tikzpicture}[baseline = {(0,1)},scale = 2]
\filldraw (-.5,0) circle (0.05);
\draw [thick] (-.5,0) to node[below]{$\alpha$} (0,0);
\filldraw (.5,0) circle (0.05);
\draw [thick] (0,0) to node[below]{$\beta$}(.5,0);
\filldraw (0,0) circle (0.05);
\draw (0,0) to node[left]{$\alpha$} node[right]{$\beta$} (0,1);
\draw (0,1) node[right]{$\chi(M)\omega$} to (-0.2,1.2) node[above left]{$x$};
\end{tikzpicture}
$$
\subsubsection{The collected terms}
Let us summarize all of the above in the following two pictures.
$$
H_{A\otimes A}\circ (\wedge \mathit{Th}) \quad = \quad
\begin{tikzpicture}[baseline = {(0,0)}, scale = 2]
\draw [dotted] (-.8,0) to (-.5,0);
\draw (-.5,0) to node[above]{$\alpha$} (0,0);
\draw (0,0) to node[above]{$\beta$} (.5,0);
\draw (0,0) to (0, 0.2) node[above]{$x$};
\draw [dotted] (.5,0) to (.8,0);
\draw [dashed] (0,0.2) circle (0.7);
\draw (0,0.85) node[above]{$z_0$};
\draw (-1.1,0) to node[above]{$\alpha$}(-.8,0);
\filldraw (-1.1,0) circle (0.05);
\draw (.8,0) to node[above]{$\beta$}(1.1,0);
\filldraw (1.1,0) circle (0.05);
\end{tikzpicture}
$$
$$
H_{A}\circ (\wedge \mathit{Th}) \quad = \quad
\quad
\begin{tikzpicture}[baseline = {(0,-1)}, scale = 2]
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.2);
\draw [dotted] (0,-.2) to (0, -.05);
\draw (0,-.05) to node[left]{$\alpha$} node[right]{$\beta$} (0,.25);
\draw (0,.25) to node[left]{$\alpha$} (-.1,.45);
\draw (0,.25) to node[right]{$\beta$} (.1,.45);
\draw [dotted] (-.1,.45) to (-.3, .85);
\draw [dotted] (.1, .45) to (.3, .85);
\draw [dashed] (0,0.25) ellipse [x radius = .35, y radius = .35];
\draw (0.4,0.25) node[right]{$z_0$};
\draw (-.3,.85) to [out=120, in=180] node[left]{$\alpha$} (0,1.3);
\draw (.3,.85) to [out=60, in=0] node[right]{$\beta$} (0,1.3);
\draw (0,1.3) to (0,1.5) node[above]{$x$};
\draw [dashed] (0,1.25) ellipse [x radius = .6, y radius = .6];
\draw (0,1.8) node[above]{$z_0$};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline = {(0,-1)}, scale = 2]
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.1);
\draw [dotted] (0,-.1) to (0, .2);
\draw (0,0.2) to node[left]{$\alpha$} node[right]{$\beta$} (0,1);
\draw ([shift=(-90:.5)]0,1.5) arc (-90:45:.5) node[midway]{\phantom{xl}$\beta$};
\draw [dotted]([shift=(45:.5)]0,1.5) arc (45:75:.5);
\draw ([shift=(75:.5)]0,1.5) arc (75:180:.5) node[midway]{$\beta$\phantom{xll}};
\draw ([shift=(180:.5)]0,1.5) arc (180:270:.5) node[midway]{$\alpha$\phantom{xll}};
\draw (-.5,1.5) --(-.7,1.5) node[left] {$x$};
\draw [dashed] (0,1.1) ellipse [x radius = .9, y radius = 1.1];
\draw (0,2.15) node[above]{$z_0$};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline = {(0,-1)}, scale = 2]
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.2);
\draw [dotted] (0,-.2) to (0, -.05);
\draw (0,-.05) to node[left]{$\alpha$} node[right]{$\beta$} (0,.25);
\draw (0,0.25) to (-.2,0.25) node[left]{$x$};
\draw (0,0.25) to node[left]{$\beta$} node[right]{$\beta$} (0,.55);
\draw [dotted] (0,.55) to (0, .7);
\draw (0,0.7) to node[left]{$\beta$} node[right]{$\beta$} (0,1);
\draw (0,1.5) circle (0.5);
\draw (.4, 1.8) node[right]{$\beta > 0$};
\draw [dashed] (0,1.35) ellipse [x radius = .7, y radius = .7];
\draw (0,2.0) node[above]{$z_1$};
\draw [dashed] (0,0.25) ellipse [x radius = .35, y radius = .35];
\draw (0.4,0.25) node[right]{$z_0$};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline = {(0,-1)}, scale = 2]
\filldraw (0,-.5) circle (0.05);
\draw (0,-.5) to node[left]{$\alpha$} node[right]{$\beta$} (0,-.1);
\draw [dotted] (0,-.1) to (0, .2);
\draw (0,0.2) to node[left]{$\alpha$} node[right]{$\beta$} (0,.6);
\draw (0,0.6) to node[left]{$\beta$} node[right]{$\beta$} (0,1);
\draw (0,0.6) to (-.2,0.6) node[left]{$x$};
\draw ([shift=(-90:.5)]0,1.5) arc (-90:45:.5) node[midway]{\phantom{xl}$\beta$};
\draw [dotted]([shift=(45:.5)]0,1.5) arc (45:75:.5);
\draw ([shift=(75:.5)]0,1.5) arc (75:270:.5) node[midway]{$\beta$\phantom{xl}};
\draw [dashed] (0,1.1) ellipse [x radius = .9, y radius = 1.1];
\draw (0,2.15) node[above]{$z_0$};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline = {(0,0)},scale = 2]
\filldraw (0,0) circle (0.05);
\draw (0,0) to node[left]{$\alpha$} node[right]{$\beta$} (0,1);
\draw (0,1) node[right]{$\chi(M)\omega$} to (-0.2,1.2) node[above left]{$x$};
\end{tikzpicture}
$$
\begin{Rem}
In the 1-framed case the propagator was chosen compatible with the given 1-framing. Alternatively, since any two 1-framings differ by a class $f \in H^{n-1}(M)$, we could simply add this term to $\mathit{Th}$. The extra term in the homotopies above can then be absorbed into the middle term, if we now also take the ordinary (not the reduced) coproduct there and define $z_1$ on tadpole graphs to be dual to $f$.
\end{Rem}
\subsubsection{Simpler formulas on a smaller model}\label{sec:simpler formulas}
Recall that the differentials on $B \otimes_A B$ and $A$ consist of an edge contraction differential $d_0$ and a differential depending on the $\mathsf{Com}_\infty$ structure. The cohomology with respect to $d_0$ is concentrated in the purely trivalent part and given by $T\bar{H} \otimes H \otimes T\bar{H}$ and $H$, respectively.
Since $H^{A\otimes A}$ and $H^A$ send trivalent graphs to trivalent ones, we can readily project onto these spaces to obtain maps
\begin{align*}
H^{A \otimes A}\circ(\wedge \mathit{Th}) : T\bar{H} \otimes H \otimes T\bar{H} &\longrightarrow H \otimes H \\
H^A\circ(\wedge \mathit{Th}) : T\bar{H} \otimes H \otimes T\bar{H} &\longrightarrow H
\end{align*}
given by (essentially) the same formulas. Note that the term coming from $\wedge \chi(M) \omega$ vanishes.
The formulas simplify considerably if $x = 1$, namely
\begin{align}
\label{eqn:simplpsi}
H^{A \otimes A}\circ(\alpha | 1 | \beta \wedge \mathit{Th}) &= \epsilon(\alpha) \epsilon(\beta) e_i \otimes e^i \\
\label{eqn:simplpsi2}
H^A\circ( \alpha | 1 | \beta \wedge \mathit{Th}) &= \epsilon(\alpha)e_i z_1(e^i \beta).
\end{align}
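As a simple consistency check (using only that $\epsilon$ is the counit, so $\epsilon(1)=1$), specializing \eqref{eqn:simplpsi} further to $\alpha = \beta = 1$ yields the diagonal class,
$$
H^{A \otimes A}\circ( 1 | 1 | 1 \wedge \mathit{Th}) = e_i \otimes e^i.
$$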
\section{Putting it all together, and proof of Theorem \ref{thm:main_3b}}\label{sec:thm3proof}
We are now ready to describe the string bracket and cobracket on the cyclic words $\overline{Cyc}(\bar H)$ computing $\bar H_{S^1}(LM)$ and in particular prove Theorem \ref{thm:main_3b}.
We will proceed essentially as in section \ref{sec:thm12proofs} for the computation of the product and coproduct.
Here we will use the dgca model $A$ of $M$ given by graphs as in section \ref{sec:graphical version}.
The cochain complex computing $H(LM)$ is then the reduced Hochschild complex
\[
\bar C(A)=B\otimes_{A^e} A = \left( \bigoplus_{k\geq 0}\bar A^{\otimes k} \otimes A, d_H \right).
\]
The cohomology $H=H^\bullet(M)$ forms a $\widehat{\Com}_\infty$-algebra, and $A$ is its canonical resolution as a $\mathsf{Com}$-algebra.
Hence, there is a natural $\widehat{\Com}_\infty$-map $H\to A$.
It follows that we also have a canonical map between the Hochschild complexes, cf. section \ref{sec:intro hochschild}
\[
\bar C(H) = \left( \bigoplus_{k\geq 0}\bar H^{\otimes k} \otimes H, d_H \right) \to \bar C(A).
\]
Unfortunately, we do not know a natural way to construct an (explicit) $\mathsf{Com}_\infty$ or $\widehat{\Com}_\infty$-map $A\to H$.
Nevertheless, the natural projection of graded vector spaces $A\to H$ induces a well-defined map
$$
B \otimes_{A^e} A \supset (B \otimes_{A^e} A)^{\text{triv, cl}} \overset{\text{pr}}{\longrightarrow} T\bar{H} \otimes H.
$$
on the subspace of trivalent elements that are closed with respect to edge contraction, where the first inclusion is a quasi-isomorphism and the projection is a bijection.
\todo[inline]{The statement about bijectivity seems more clear for the dual picture in terms of graphs / IHX.}
\todo[inline]{...maybe add some detail here}
\begin{Rem}
The last formula is stating the fact that given a $\mathsf{Com}_\infty$ algebra $H$ one can construct Hochschild chains either by bar-cobar resolving $H$ as a $\mathsf{Com}_\infty$-algebra or by doing associative cobar and taking Hochschild chains on the resulting (cofree) coalgebra (and recalling that Hochschild chains of a (co)free (co)algebra $TV$ are given by $TV \otimes (V \oplus \mathbb{R})$).
\end{Rem}
Since the Connes operator is compatible with that projection, the discussion about negative cyclic homology of section~\ref{sec:intro hochschild} carries over to the model $(T\bar{H} \otimes H)$ and we obtain similarly a quasi-isomorphism
$$
\op{Cyc}(\bar{A}) \supset \op{Cyc}(\bar{A})^{\text{triv,cl}} \overset{\text{pr}}{\longrightarrow} \op{Cyc}(\bar{H}),
$$
with the projection being a bijection.
\subsection{String product and bracket}
We now obtain a description of the string product (cohomology coproduct). Let us first consider the operation $H^\bullet(LM) \otimes H^\bullet(LM) \longleftarrow H^\bullet(\op{Map}(8))$. By Lemma \ref{lem:hoprod} it is obtained from
$$
A \longleftarrow QA \overset{H_{A \otimes A}}{\longrightarrow} A \otimes A,
$$
by taking the tensor product with $B \otimes B$ over $A^{\otimes 4}$. Note that $(B \otimes B) \otimes_{A^{\otimes 4}} QA = B \otimes_{A^e} B \otimes_A B \otimes_{A^e} B$ is the Hochschild complex of the dumbbell graph (whose handle consists of two edges). We now define an inverse to the quasi-isomorphism $(B \otimes B) \otimes_{A^{\otimes 4}} QA \to (B \otimes B) \otimes_{A^{\otimes 4}} A$ induced by mapping the figure eight to the dumbbell,
$$
\begin{tikzcd}[row sep=1ex]
B \otimes_{A^e} A \otimes_{A^e} B \ar[r] & B \otimes_{A^e} B \otimes_A B \otimes_{A^e} B\\
\alpha|x|\beta \arrow[u,symbol=\in] \ar[r,mapsto]& \alpha'' |1| (\alpha''')^*\Sha \alpha' | x| \beta' \Sha (\beta''')^* |1| \beta'' \arrow[u,symbol=\in].
\end{tikzcd}
$$
Composing with the map $\op{Map}(8) \to LM$ described in \eqref{equ:concatloop}, we now obtain a description of the string product (cohomology coproduct) as the composition
$$
\begin{tikzcd}[row sep=1ex]
B \otimes_{A^e} A \ar[r] & B \otimes_{A^e} A \otimes_{A^e} B \ar[r] & B \otimes_{A^e} B \otimes_A B \otimes_{A^e} B \ar[r, "H^{A\otimes A}\circ(\wedge Th)"] & (B \otimes_{A^e} A) \otimes (A \otimes_{A^e} B) \\
\alpha x \arrow[u,symbol=\in] \ar[r,mapsto]& \alpha' x \alpha'' \arrow[u,symbol=\in] \ar[r,mapsto]& \alpha^{(2)} \otimes (\alpha^{(3)})^* \Sha \alpha^{(1)} x \alpha^{(4)} \Sha (\alpha^{(6)})^* \otimes \alpha^{(5)} \arrow[u,symbol=\in] \ar[r,mapsto]&
\scriptstyle
\alpha^{(2)} \otimes H^{A\otimes A}((\alpha^{(3)})^* \Sha \alpha^{(1)} x \alpha^{(4)} \Sha (\alpha^{(6)})^* \wedge \mathit{Th}) \otimes \alpha^{(5)}
\arrow[u,symbol=\in]
\end{tikzcd}\,.
$$
Since the homotopy $H_{A \otimes A}$ was constructed to respect trivalent graphs, we can readily restrict the above map to closed (with respect to edge contraction) trivalent graphs. Identifying the corresponding spaces with $T\bar{H} \otimes H$ we obtain the following
\begin{Thm}
Under the natural map $T\bar{H} \otimes H \to \Omega^\bullet(LM)$ the string product is given by
\begin{equation}
\begin{tikzcd}[row sep=1ex]
T\bar{H} \otimes H \ar[r] & (T\bar{H} \otimes H) \otimes (T\bar{H} \otimes H) \\
\alpha|x \ar[u, symbol=\in] \ar[mapsto]{r}
&
z_0(e^i (\alpha^{(3)})^* \Sha \alpha^{(1)} x \alpha^{(4)} \Sha (\alpha^{(6)})^* e^j) (\alpha^{(2)}|e_i)\otimes (\alpha^{(5)}|e_j) \ar[u, symbol=\in] .
\end{tikzcd}
\end{equation}
\end{Thm}
The string bracket (cohomology cobracket) is given by the composition \eqref{equ:defbracket}, where the maps are modelled by Lemma \ref{lem:equivtohoch}. In particular, in the above formula we have $x=1$, and we obtain the simplified formula in
\begin{Thm}\label{thm:bracket}
Under the map $\op{Cyc}(\bar H) \to \Omega^\bullet_{S^1}(LM)$ the string bracket is modelled by
\begin{equation}
\begin{tikzcd}[row sep=1ex]
\op{Cyc}(\bar H) \ar[r] & \op{Cyc}(\bar H) \otimes \op{Cyc}(\bar H)
\\
\alpha_1 \dots \alpha_k \ar[u, symbol=\in] \ar[mapsto,r]
& \sum_j \alpha_1 \dots \alpha_j e_i \otimes e^i \alpha_{j+1} \dots \alpha_k
\ar[u, symbol=\in]
\end{tikzcd}
\end{equation}
\end{Thm}
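To unwind Theorem \ref{thm:bracket} in the smallest case, take $k=2$; assuming the sum runs over the proper splittings $1 \leq j \leq k-1$, the formula reduces to a single term,
$$
\alpha_1 \alpha_2 \longmapsto \alpha_1 e_i \otimes e^i \alpha_2.
$$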
\subsubsection{A remark on homotopy automorphisms}
Let us record some basic facts about the above formula for later. For this, recall that $H^\bullet$ is a $\mathsf{Com}_\infty$-algebra, and as such has a cobar-dual dg Lie algebra $\mathbb{L} = \mathsf{Lie}(\bar{H}_{\bullet}[-1])$, with a differential that sends generators $\bar{H}_\bullet[-1] \to \mathsf{Lie}(\bar{H}_\bullet[-1])$. Considering the $\mathsf{Com}_\infty$-algebra $H^\bullet$ as an $A_\infty$-algebra and taking its (dual) cobar construction, one obtains the dg algebra $T(\bar{H}_\bullet) = U\mathbb{L}$ (the tensor algebra on $\bar{H}_\bullet[-1]$). We identify
$$
\op{Cyc}(\bar{H})^* = (T\bar{H}_\bullet)_\L = (T\bar{H}_\bullet)^\L
$$
with $\L$-(co)invariants of $T\bar{H}_\bullet$. Then the following double complex is contractible
$$
\begin{tikzcd}
(T\bar{H}_\bullet)^\L \ar[r] & T\bar{H}_\bullet \otimes \bar{H}_\bullet \ar[d, "{[\cdot, \cdot]}"] \\
& T\bar{H}_\bullet \otimes \mathbb{R} \ar[r] & (T\bar{H}_\bullet)_\L.
\end{tikzcd}
$$
We write this as
$$
\begin{tikzcd}
\op{Cyc} \bar{H}_\bullet[-1] \ar[r, "\pi_!"] & T\bar{H}_\bullet \otimes H_\bullet \ar[r] & \op{Cyc} \bar{H}_\bullet,
\end{tikzcd}
$$
where the maps are dual to the ones in Lemma \ref{lem:equivtohoch}. By identifying $H_\bullet = H^\bullet[-n]$ using the inner product we obtain the contractible complex
\begin{equation}\label{equ:contrgysin}
\begin{tikzcd}
\op{Cyc} \bar{H}_\bullet [-1] \ar[r, "\pi_!"] & T\bar{H}_\bullet \otimes H^\bullet[-n] \ar[r] & \op{Cyc} \bar{H}_\bullet,
\end{tikzcd}
\end{equation}
where the middle term is the Hochschild cohomology of the $\mathsf{Com}_\infty$-algebra $H^\bullet$. One checks that the map
$$
\op{Cyc} \bar{H}_\bullet \overset{\pi_!}{\longrightarrow} T\bar{H}_\bullet \otimes H^\bullet[1-n]
$$
is a map of Lie algebras that is furthermore compatible with the natural action of the Hochschild--Gerstenhaber Lie algebra $T\bar{H}_\bullet \otimes H^\bullet$. More concretely, we write $T\bar{H}_\bullet \otimes H^\bullet = T\bar{H}_\bullet \otimes \mathbb{R} \ \oplus \ T\bar{H}_\bullet \otimes \bar{H}^\bullet$ and identify the two summands with the space of inner derivations $\operatorname{Inn}$ and the space of all derivations $\mathrm{Der}(T\bar{H}_\bullet)$, respectively. Now the natural action of $(\operatorname{Inn} \to \mathrm{Der}(T\bar{H}_\bullet))$ on $\op{Cyc}(\bar{H}_\bullet)$ composed with the map $\pi_!$ coincides with the adjoint action (with respect to the string bracket) of $\op{Cyc}(\bar{H}_\bullet)$ on itself.
Recall that there is a Hodge decomposition coming from writing $U\L = S\L = \oplus_k S^k\L$ (i.e.\ PBW), and that the maps in complex \eqref{equ:contrgysin} respect that decomposition (as can for instance be seen by identifying $T\bar{H}_\bullet \otimes \bar{H}_\bullet = \Omega^1_{nc}$ with non-commutative 1-forms). Let us use the notation
$$
\op{Cyc} \bar{H}_\bullet = \oplus_k \op{Cyc}_{(k)} \bar{H}_\bullet = \oplus_k (S^k \L)_{\L}
$$
and similarly
$$
H_\bullet^{S^1}(LM) = \oplus_k H_\bullet^{S^1}(LM)_{(k)}
$$
for the corresponding decomposition on the equivariant loop space.
Complex \eqref{equ:contrgysin} is then the direct sum of contractible complexes
$$
\begin{tikzcd}
\op{Cyc}_{(k)} \bar{H}_\bullet [-1] \ar[r, "\pi_!"] & S^{k-1} \L \otimes H^\bullet[-n] \ar[r] & \op{Cyc}_{(k-1)} \bar{H}_\bullet,
\end{tikzcd}
$$
the $k=2$ term of which is
$$
\begin{tikzcd}
\op{Cyc}_{(2)} \bar{H}_\bullet [-1] \ar[r, "\pi_!"] & (\operatorname{Inn} \to \mathrm{Der}(\L))[-n] \ar[r, "u \mapsto u(\omega)"] & \op{Cyc}_{(1)} \bar{H}_\bullet = \bar{H}_\bullet,
\end{tikzcd}
$$
where $(\operatorname{Inn} \to \mathrm{Der}(\L))$ is the Harrison complex of $H^\bullet$. After shifting by $n-1$, the positive truncation gives us that
$$
(\op{Cyc}_{(2)} \bar{H}_\bullet)[n-1]^+ \longrightarrow (\operatorname{Inn} \to \mathrm{Der}(\L))^+
$$
is a quasi-isomorphism (see also Proposition 65 in \cite{CamposWillwacher} for a direct proof).
We have thus obtained the following
\begin{Lem}
The identification $\pi_!$ of $(\operatorname{Inn} \to \mathrm{Der}(\mathsf{Lie}\bar{H}_\bullet))^+$ with a direct summand of $\op{Cyc}(\bar{H}_\bullet)$ intertwines the natural action of the Harrison complex on the cyclic complex with the adjoint action.
\end{Lem}
\subsection{String coproduct and cobracket}
As before, we will distinguish two cases: the $1$-framed case, in which a nonvanishing vector field is chosen (and necessarily $\chi(M)=0$), and the reduced case for general $M$.
\subsubsection{The 1-framed case}
By combining Proposition \ref{prop:splitmap} and Lemma \ref{lem:frcop} we obtain the description of the coproduct $\Omega^\bullet(LM) \longleftarrow \Omega^{\bullet +1 -n}(\op{Map}(8))$ as
$$
\begin{tikzcd}
B \otimes_{A^{\otimes 2}} A & \ar[l, "s"] (B\otimes B) \otimes_{A^{\otimes 4}} (A \otimes A / A) & \ar[l, "{(H_{A\otimes A}, H_{A})}"] (B\otimes B) \otimes_{A^{\otimes 4}} (B \otimes_A B) \ar[d, "\simeq"] \\
&& (B \otimes B) \otimes_{A^{\otimes 4}} A.
\end{tikzcd}
$$
An inverse of the last quasi-isomorphism is given by
\begin{equation}
\label{eqn:8totheta}
\alpha \otimes \beta \otimes x \longmapsto \pm \alpha'' \otimes \beta'' \otimes \alpha''' \Sha \beta''' x \alpha' \Sha \beta',
\end{equation}
i.e.\ the homotopy equivalence of the mapping spaces of the figure eight and the theta graph. Finally, we compose it with the map in diagram \eqref{equ:splitloop} to obtain the coproduct. This yields the formula
$$
\begin{tikzcd}[row sep=1ex]
B \otimes_{A^{\otimes 2}} A & \ar[l] B \otimes_{A^{\otimes 2}} A \otimes B \otimes_{A^{\otimes 2}} A \\
\begin{aligned} (\beta'' \Psi' \alpha'' | \Psi'') \\ + (\beta'' |(H_{A} )(\alpha'' \Sha \beta''' xy \alpha' \Sha \beta' \wedge \mathit{Th})) \\ + (\alpha'' |(H_{A})(\alpha''' \Sha \beta'' xy \alpha' \Sha \beta' \wedge \mathit{Th}))\end{aligned} \arrow[u,symbol=\in] & \ar[l,mapsto] \arrow[u,symbol=\in] (\alpha | x) \otimes (\beta | y), \\
\end{tikzcd}
$$
where $\Psi' \otimes \Psi'' = H_{A \otimes A}(\alpha''' \Sha \beta''' x \alpha' \Sha \beta' \wedge \mathit{Th})$. We note that the formula again preserves $(B \otimes_{A^e} A)^{\text{triv, cl}} \subset B \otimes_{A^{\otimes 2}} A$; however, it does not at the moment commute with the projection onto $T\bar{H} \otimes H$, since we evaluate $H_{A\otimes A}$ and $H_A$ on a graph containing a 4-valent vertex. To obtain explicit formulas one could precompose with another homotopy (and another evaluation of $z_0$) similar to the ones constructed in the previous section. Since the final formula is not very enlightening, we choose not to do so. Instead we note that this difficulty does not arise on the image of $\op{Cyc}(A) \to B \otimes_{A^{\otimes 2}} A$, and hence from \eqref{eqn:simplpsi} we obtain the description of the cobracket as in Theorem \ref{thm:liebialg}.
\begin{Thm}\label{thm:cobracket framed}
The cobracket in the 1-framed case is described by the map
$$
\begin{tikzcd}[row sep=1ex]
\op{Cyc}(\bar{H}) \otimes \op{Cyc}(\bar{H}) \ar[r] & T\bar{H} \otimes H \ar[r, "pr"] & \op{Cyc}(\bar{H}) \\
\alpha \otimes \beta \arrow[u,symbol=\in] \ar[r,mapsto]& \arrow[u,symbol=\in] (\alpha \omega_i \beta | \omega^i) + (\alpha' |\omega_i) z_1(\omega^i \alpha'' \beta) + (\beta' | \omega_i) z_1(\omega^i \beta'' \alpha).
\end{tikzcd}
$$
\end{Thm}
\subsubsection{The reduced case}
Similarly, by Proposition \ref{prop:splitmap} and Lemma \ref{lem:frcop} we obtain a description of the coproduct $\Omega^\bullet(LM) \longleftarrow \Omega^{\bullet +1 -n}(\op{Map}(8), F)$ as
$$
\begin{tikzcd}[column sep= huge]
B \otimes_{A^{\otimes 2}} A & \ar[l, "s"] \op{Tot}\left( \begin{tikzpicture}
\node(1) at (0,.5) {$(B \otimes B) \otimes_{A^{\otimes 4}} (A \otimes A)$};
\node(3) at (0,-.5) {$(B \overset{A}{\oplus} B) \otimes_{A^{\otimes 2}} A$};
\draw[->](1)-- (3);
\end{tikzpicture} \right) &[25pt]
\ar[l, "1 \otimes H_{A \otimes A}"', start anchor={[yshift=3ex]},end anchor={[yshift=3ex]}]
\ar[l, "{(\epsilon \otimes 1 - 1 \otimes \epsilon) \otimes H_A}" description, start anchor={[yshift=2ex]},end anchor={[yshift=-2ex]}]
\ar[l, "1", start anchor={[yshift=-3ex]},end anchor={[yshift=-3ex]}]
\op{Tot}\left( \begin{tikzpicture}
\node(1) at (0,.5) {$(B \otimes B) \otimes_{A^{\otimes 4}} (B \otimes_A B)$};
\node(3) at (0,-.5) {$(B \overset{A}{\oplus} B) \otimes_{A^{\otimes 2}} A$};
\draw[->](1)-- (3);
\end{tikzpicture} \right) \ar{d}{\simeq}
\\
& & \op{Tot}\left( \begin{tikzpicture}
\node(1) at (0,.5) {$(B \otimes B) \otimes_{A^{\otimes 4}} A$};
\node(3) at (0,-.5) {$(B \overset{A}{\oplus} B) \otimes_{A^{\otimes 2}} A$};
\draw[->](1)-- (3);
\end{tikzpicture} \right)
\end{tikzcd}
$$
Since the homotopy inverse \eqref{eqn:8totheta} is a strict right inverse and the boundary map in the upper complex factors through this projection, we can use it again to invert the last quasi-isomorphism in the diagram. Moreover, since $(B \otimes B) \otimes_{A^{\otimes 4}} A \to (B \overset{A}{\oplus} B) \otimes_{A^{\otimes 2}} A$ is onto, we can simplify the last term by taking the kernel of this map; that is, our model for $\Omega^\bullet(\op{Map}(8), F)$ is $(B^{\geq 1} \otimes B^{\geq 1}) \otimes_{A^{\otimes 4}} A$. We then see that the composite
$$
(B^{\geq 1} \otimes B^{\geq 1}) \otimes_{A^{\otimes 4}} A \to B \otimes_{A^{\otimes 2}} A,
$$
is the same as in the 1-framed case, and hence so are the formulas.
\begin{Thm}\label{thm:cobracket reduced}
The reduced cobracket is described by the map
$$
\begin{tikzcd}[row sep=1ex]
\overline{Cyc}(\bar{H}) \otimes \overline{Cyc}(\bar{H}) \ar[r] & T\bar{H} \otimes H \ar[r, "pr"] & \overline{Cyc}(\bar{H}) \\
\alpha \otimes \beta \arrow[u,symbol=\in] \ar[r,mapsto]& \arrow[u,symbol=\in] (\alpha e_i \beta | e^i) + (\alpha' | e_i) z_1( e^i \alpha'' \beta) + (\beta' | e_i) z_1( e^i \beta'' \alpha).
\end{tikzcd}
$$
\end{Thm}
\newcommand{\FM^{fr}}{\mathsf{FM}^{fr}}
\newcommand{\mathrm{Aut}}{\mathrm{Aut}}
\section{Discussion -- string topology, invariants of manifolds, and the diffeomorphism group}\label{sec:discussion}
There is hope that string topology can be used to study manifolds, and that it is sensitive to the structure of $M$ beyond its (rational) homotopy type.
One can ask two related questions:
\begin{enumerate}
\item How strong is the invariant of $M$ given by (a version of) $H(LM)$ or $H_{S^1}(LM)$, together with some chosen set of algebraic (string topology-)operations?
\item A version of the diffeomorphism group acts on (a version of) $H(LM)$, preserving the chosen set of algebraic structure. Hence we get a map from the diffeomorphism group of $M$ to the algebraically defined (homotopy) automorphism group of $H(LM)$, with the algebraic structure considered. How non-trivial is this map?
\end{enumerate}
Of course, similar questions can in principle be asked on the chain level, but we only consider (co)homology here.
In this paper we connect string topology to configuration spaces. Hence it makes sense to compare the ``strength'' of string topology to similar invariants built from configuration spaces of points, via the Goodwillie--Weiss manifold calculus.
More concretely, to every manifold $M$ one can associate the framed configuration spaces $\FM^{fr}_M$, as right modules over the (Fulton--MacPherson version of the) framed little disks operad $\FM^{fr}_n$. The homotopy type of these forms an invariant, and it is acted upon by the diffeomorphisms of $M$. More generally, we can truncate at arity (number of points) $\leq k$ and get a tower of approximations to the diffeomorphism group, and a hierarchy of invariants:
\begin{equation*}\label{eq:towers}
\begin{tikzcd}
& & & & \mathrm{Diff}(M)\ar{d} \\
T_1\mathrm{Diff}(M):=\mathrm{Aut}^h_{\FM^{fr}_n,\leq 1}(\FM^{fr}_M)
\ar{d}
&
\cdots \ar{d}
\ar{l} &
T_k\mathrm{Diff}(M):=\mathrm{Aut}^h_{\FM^{fr}_n,\leq k}(\FM^{fr}_M)
\ar{d}\ar{l} &
\cdots \ar{d}
\ar{l} &
T_\infty\mathrm{Diff}(M):=\mathrm{Aut}^h_{\FM^{fr}_n}(\FM^{fr}_M)
\ar{l}\ar{d}
\\
\mathrm{Aut}^h_{(\FM^{fr}_n)^\mathbb{Q},\leq 1}((\FM^{fr}_M)^\mathbb{Q})
&
\cdots
\ar{l} &
\mathrm{Aut}^h_{(\FM^{fr}_n)^\mathbb{Q},\leq k}((\FM^{fr}_M)^\mathbb{Q})
\ar{l} &
\cdots
\ar{l}
&
\mathrm{Aut}^h_{(\FM^{fr}_n)^\mathbb{Q}}((\FM^{fr}_M)^\mathbb{Q})
\ar{l}
\end{tikzcd}.
\end{equation*}
Here $\mathrm{Aut}^h_{\FM^{fr}_n}(-)$ refers to homotopy automorphisms of the right $\FM^{fr}_n$-module and $\mathrm{Aut}^h_{\FM^{fr}_n,\leq k}(-)$ to its arity-$k$-truncated version, seeing only configuration spaces of $\leq k$ points.
We also added the corresponding rationalized versions in the lower row.
We could also consider instead the non-framed configuration spaces $\mathsf{FM}_M$, as right modules over the fiberwise $E_n$-operads $\mathsf{FM}_n^M$, or replace the diagram with a version for configuration categories \cite{BoavidaWeiss}.
In any case the arrows in the diagram above are not well understood, and the usual embedding calculus convergence estimates do not apply.
\subsection{Diffeomorphism invariance}
Let $G = \mathrm{Diff}_1(M)$ be the identity component of the group of diffeomorphisms. We consider $G$-invariance of the (reduced) coproduct. In the definition \eqref{eqn:defredcop} every map except the last one is a map of spaces, and moreover equivariant with respect to $G$. That is, there is a commuting diagram of spaces
$$
\begin{tikzcd}
G \times (I, \partial I) \times (LM,M) \ar[r]\ar[d] & G \times \frac{\op{Map}(\bigcirc_2) / \op{Map}^\prime(8)}{ F/ F|_{UTM}} \ar[d] & \ar[l, "\simeq"] \ar[d] G \times \frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}} \\
(I, \partial I) \times (LM,M) \ar[r]& \frac{\op{Map}(\bigcirc_2) / \op{Map}^\prime(8)}{ F/ F|_{UTM}} & \ar[l, "\simeq"] \frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}}
\end{tikzcd}
$$
where the vertical arrows are given by the action. Thus we get an induced diagram in homology. The last step in the definition of the coproduct was taking the cap product with a Thom class in $H^n(M, UTM)$. For this, recall that the cap product is natural and the map $\frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}} \to M/UTM$ is $G$-equivariant; it thus follows that
$$
\begin{tikzcd}
H_\bullet(G \times \frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}}) \ar[d]\ar[r, "\cap"] & \operatorname{Hom} (H^n(G \times (M/UTM)), H_\bullet( G \times \frac{\op{Map}(8)}{F}) ) \ar[d]\\
H_\bullet(\frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}}) \ar[r, "\cap"] & \operatorname{Hom} (H^n(M/UTM), H_\bullet( \frac{\op{Map}(8)}{F} ) )
\end{tikzcd}
$$
commutes. For degree reasons the image of the Thom class $\mathit{Th}$ under $H^n(M/UTM) \to H^n(G \times (M/UTM))$ is simply $1 \otimes \mathit{Th}$. Thus, by compatibility of capping with products, we get that
$$
\begin{tikzcd}
H_\bullet(G) \otimes H_\bullet(\frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}}) \ar[d]\ar[r, "\mathrm{id} \otimes \cap \mathit{Th}"] & H_\bullet(G) \otimes H_\bullet( \frac{\op{Map}(8)}{F}) \ar[d]\\
H_\bullet(\frac{\op{Map}(8) / \op{Map}^\prime(8)}{F / F|_{UTM}}) \ar[r, "\cap \mathit{Th}"] & H_\bullet(\frac{\op{Map}(8)}{F})
\end{tikzcd}
$$
commutes. Since the map $\op{Map}(8) \to LM \times LM$ is again $G$-equivariant, we obtain that
$$
\begin{tikzcd}
H_\bullet(G) \otimes H_\bullet(LM,M) \ar[d] \ar[r] & H_\bullet(LM,M) \ar[d] \\
H_\bullet(G) \otimes H_\bullet(LM, M) \otimes H_\bullet(LM, M) \ar[r] & H_\bullet(LM, M) \otimes H_\bullet(LM, M).
\end{tikzcd}
$$
commutes. Finally, we obtain similar commuting diagrams for the maps in the Gysin sequence associated to $LM \to LM_{S^1}$ and $G \times LM \to G \times LM_{S^1}$, respectively, which follows from naturality of the Gysin sequence.
Thus we obtain
$$
\begin{tikzcd}
H_\bullet(G) \otimes \bar{H}^{S^1}_\bullet(LM) \ar[d] \ar[r] & \bar{H}^{S^1}_\bullet(LM) \ar[d] \\
H_\bullet(G) \otimes \bar{H}^{S^1}_\bullet(LM) \otimes \bar{H}^{S^1}_\bullet(LM) \ar[r] & \bar{H}^{S^1}_\bullet(LM) \otimes \bar{H}^{S^1}_\bullet(LM).
\end{tikzcd}
$$
Or, in a formula,
$$
\delta(g.x) = g' \delta'(x) \otimes g'' \delta''(x)
$$
for any $g \in H_\bullet(G)$ and $x \in \bar{H}^{S^1}(LM)$, where $\Delta(g) = g' \otimes g''$ is the image of $g$ under the diagonal $H_\bullet(G) \to H_\bullet(G) \otimes H_\bullet(G)$. In particular, primitive elements in $H_\bullet(G)$ act by derivations of the string cobracket. Recall that by Milnor--Moore (Theorem 21.5 in \cite{FHT2}) $H_\bullet(G)$ is the universal enveloping algebra of the Lie algebra $\pi_*(G) \otimes \mathbb{R}$.
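Indeed, if $g \in H_\bullet(G)$ is primitive, i.e., $\Delta(g) = g \otimes 1 + 1 \otimes g$, then (since the degree-zero unit acts by the identity) the formula above specializes, up to Koszul signs, to
$$
\delta(g.x) = g.\delta'(x) \otimes \delta''(x) \pm \delta'(x) \otimes g.\delta''(x),
$$
so that $g$ acts on $\bar{H}^{S^1}_\bullet(LM)$ by a derivation of the cobracket $\delta$.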
From this it follows that we obtain a map
$$
( \pi_*(G), [,]) \to Der_{[,],\delta}(\bar{H}^{S^1}(LM)).
$$
That is, $\pi_*(\mathrm{Diff}_1(M))$ acts on $\bar{H}^{S^1}(LM)$ by derivations of the Lie bialgebra structure. The fact that the cobracket is preserved will give us a non-trivial term measuring the difference between $\mathrm{Diff}_1(M)$ and $aut_1(M)$, where $aut_1(M)$ is the identity component of the monoid of self-maps. Namely, we have
$$
\begin{tikzcd}
\pi_*(\mathrm{Diff}_1(M)) \ar[r] \ar[d] & Der_{[,],\delta}(\bar{H}^{S^1}(LM)) \ar[d] \\
\pi_*(aut_1(M)) \ar[r] & End(\bar{H}^{S^1}(LM)).
\end{tikzcd}
$$
Let now $M$ be simply connected, so that we have rational models.
Then the lower arrow factors through
$$
\pi_*(aut_1(M)) \to H(\mathrm{Der}(\mathsf{Lie}\bar{H}_\bullet)/\operatorname{Inn}(\mathsf{Lie}\bar{H}_\bullet)) \to End(H^{S^1}(LM)),
$$
where $\mathsf{Lie}\bar{H}_\bullet$ (the cobar construction of the $\mathsf{Com}_\infty$ coalgebra $H_\bullet$) is the underlying Quillen model for our free model $A$ for cochains on $M$.
As noted in Lemma \ref{lem:compaction} we can identify $H((\mathrm{Der}(\mathsf{Lie}\bar{H}_\bullet)/\operatorname{Inn}(\mathsf{Lie}\bar{H}_\bullet))^+)$ with the ($>n$-degree part of the) lowest Hodge-degree summand of $H^{S^1}(LM)$ such that the action becomes the adjoint action.
It follows that the map $\pi_*(aut_1(M)) \to End(H^{S^1}(LM))$ factors as the adjoint action of the string bracket
$$
\pi_*(aut_1(M)) \to H^{S^1}(LM)_{(2)}[n-1] \to End(H^{S^1}(LM)).
$$
In particular, we see that while the string bracket is preserved by every element in $\pi_*(aut_1(M))$, the string cobracket is (generally) not. More concretely, let $x \in \pi_*(aut_1(M)) = H^{S^1}_{(2)}(LM)$. Using the 5-term relation for the Lie bialgebra we then see that $[x, \cdot]$ is a derivation for the Lie bialgebra structure if and only if
$$
[y, \delta(x)] = 0 \text{ for all $y \in \bar{H}^{S^1}(LM)$}.
$$
\todo[inline]{Mention that Graph-complexes and/or Burghelea say there is a map $\pi_*(aut(X)) \to H_{S^1}(LM)_{{\mathbb Z}_2}$ and that $\mathrm{Diff}$ is in its kernel. We do not get that it is in the kernel, but that it acts trivially on $H_{S^1}(LM) \otimes H_{S^1}(LM)$. This is still a non-trivial condition, and in good cases no weaker.}
\subsection{Example}
To get a concrete example we proceed as follows. Let us restrict ourselves further to the case where $\chi(M) = 0$ and we have chosen a fixed non-vanishing vector field $\xi$. Moreover, we only consider the subgroups $\mathrm{Diff}^\xi_1(M)$ and $aut^\xi(M)$ of diffeomorphisms and self-maps that preserve $\xi$. (More precisely, the action of $\mathrm{Diff}$ on the lift of the Thom class in $H^{n-1}(UTM) \to H^n(M,UTM)$ determines a cocycle $H_\bullet(G) \to H^\bullet(M)$, and we take the kernel of that cocycle.) Thus we can work in the non-reduced setting, which is $\mathrm{Diff}^\xi_1(M)$-equivariant by a similar argument as above. In particular, we can apply the counit $H^{S^1}(LM) \to \mathbb{R}$ to one factor of the above equation.
We refer to \cite{AKKN} for the fact that by applying the counit we obtain
$$
(1 \otimes \epsilon) \delta(x) \in Z( H^{S^1}(LM,M) , [,]).
$$
To illustrate the nontriviality of this condition, let us take $M = (S^n \times S^n)^{\#g}$ for $n$ odd. It is formal and coformal (i.e., the cohomology algebra is Koszul). In that case the Lie bialgebra is the same as the one obtained from a surface (or the one constructed by Schedler); see also \cite{AKKN}. Moreover, it has trivial center, and we thus obtain that $\pi_*(\mathrm{Diff}(M))$ lies in the kernel of $(1 \otimes \epsilon) \delta$.
We moreover identify
$$
\pi_*(aut_1(M)) \otimes \mathbb{R} = \operatorname{OutDer}^+(\mathbb{L}),
$$
where $\mathbb{L} = \mathsf{Lie}( x_1,\dots,x_g, y_1,\dots,y_g)/(\omega)$ and $\omega = \sum_i [x_i, y_i]$ (see for instance Theorem 5.7 in \cite{BerglundMadsen}). The action on the framing gives a map $\pi_*(aut_1(M)) \to H$ and hence $\pi_*(aut^\xi_1(M))$ differs from $\pi_*(aut_1(M))$ by at most a factor of $H$.
\todo[inline]{The map $Der(L) \to H$ should just be the "divergence" on tripods. It's probably too cumbersome to actually prove that. We only need that $Der(L)^+$ is much larger than $H$ and still contains elements with non-zero divergence.}
The elements that preserve the cobracket are now
$$
\operatorname{ker} (1\otimes \epsilon)\delta = \{ u \in \operatorname{OutDer}^+(\mathbb{L}) \ | \ (1\otimes \epsilon) \delta(u) = 0 \},
$$
and $(1\otimes \epsilon) \delta(u)$ can be identified with some non-commutative divergence as in \cite{AKKN}. Note that this last Lie algebra is very closely related to $\mathfrak{krv}^{g,1}$ (see loc. cit. for a definition). By definition
$$
\operatorname{ker} (1\otimes \epsilon)\delta \to \operatorname{OutDer}^+(\mathbb{L}) \overset{(1\otimes \epsilon)\delta}{\longrightarrow} U\mathbb{L}/[U\mathbb{L}, U\mathbb{L}]
$$
is exact in the middle and one checks that the second map is non-trivial. Thus we indeed get obstructions for $\pi_*(\mathrm{Diff})$ in this case.
\subsection{Outlook, and a conjecture}
Note that in our definition the string topology operations on the rational (or real) cohomology of the free loop space depend only on the rational (or real) homotopy type of the configuration spaces of up to two points on $M$.
Hence string topology, to the extent considered here, contains at most as much ``information'' about $M$ and $\mathrm{Diff}(M)$ as can be obtained from the second stage of the rationalized Goodwillie--Weiss tower appearing at the beginning of this section.
In general, one might conceivably consider a larger set of string topology operations, as has been done for example in \cite{ChasSullivan2}. One might also lift these operations to the (say rational) (co)chain level.
We raise the admittedly very vague conjecture that string topology is nevertheless at most as strong an invariant of the manifold as can be obtained via the configuration spaces.
More concretely, we conjecture that the higher string topology operations on rational cohomology or cochains of $LM$ can all be described using the rational homotopy type of $\mathsf{FM}_M$ as a module over the fiberwise little disks operad $\mathsf{FM}_n^M$. (Alternatively, this can be replaced by the configuration category of $M$ of \cite{BoavidaWeiss}.)
Furthermore, in this manner one will likely obtain a morphism of topological groups
\[
\mathrm{Diff}(M)\to \text{``}\mathrm{Aut}^h_{\text{string topology}}(C(LM))\text{''}
\]
from the diffeomorphism group to the group of homotopy automorphisms of the cochains on $LM$ preserving all string topology operations. The precise formulation of the right-hand side remains to be found; there are several nice subsets of operations one can consider.
We conjecture that at least for simply connected $M$ the above morphism of topological groups factors through the homotopy automorphisms of the rationalized version of $\mathsf{FM}_M$ as a module over the fiberwise little disks operad $\mathsf{FM}_n^M$. Equivalently, we may also take $\mathrm{Aut}^h_{(\FM^{fr}_n)^\mathbb{Q}}((\FM^{fr}_M)^\mathbb{Q})$.
The homotopy type of the latter object will be determined in \cite{WillwacherTBD}, with the result that a Lie model is given by a slight extension of the graph complex $\mathsf{GC}_M$ of section \ref{sec:GCM}, which also naturally acts on our model $\mathsf{Graphs}_M$ of the configuration spaces.
Hence the appearance of the one-loop term $Z_1$ in the formulas of Theorems \ref{thm:cobracket framed} and \ref{thm:cobracket reduced} above can be seen as a reflection of the appearance of configuration spaces in string topology.
In the other direction, there have been several results, and announced results, stating that string topology only depends on the (rational) homotopy type of $M$, for certain subsets of the string topology operations.
Our work indicates, to the contrary, that string topology, with the correct set of operations, can be used to access information about $M$ beyond its rational homotopy type.
\section{Introduction}
Let $X$ be an \textbf{arithmetic scheme}, by which we mean in this paper that it
is a separated scheme of finite type $X \to \Spec \mathbb{Z}$. Then the corresponding
\textbf{zeta function} is defined by
\begin{equation}
\label{eqn:Euler-product-for-zeta}
\zeta (X,s) = \prod_{\substack{x \in X \\ \text{closed pt.}}}
\frac{1}{1 - N (x)^{-s}}.
\end{equation}
Here, for a closed point $x \in X$, the norm
$$N (x) = |\kappa (x)| = |\mathcal{O}_{X,x}/\mathfrak{m}_{X,x}|$$
is the size of the corresponding residue field. The product converges for
$\Re s > \dim X$, and conjecturally admits a meromorphic continuation to the
whole complex plane. Basic facts and conjectures about zeta functions of schemes
can be found in \cite{Serre-1965}.
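For instance, for $X = \Spec \mathbb{Z}$ the closed points are the prime numbers, with $N (x) = p$ for $x = (p)$, and \eqref{eqn:Euler-product-for-zeta} recovers the Riemann zeta function
$$\zeta (\Spec \mathbb{Z}, s) = \prod_p \frac{1}{1 - p^{-s}} = \zeta (s),$$
the product converging for $\Re s > 1 = \dim \Spec \mathbb{Z}$. Similarly, for $X = \Spec \mathbb{F}_q$ one has $\zeta (\Spec \mathbb{F}_q, s) = (1 - q^{-s})^{-1}$.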
Of particular interest are the so-called special values of $\zeta (X,s)$ at
integers $s = n \in \mathbb{Z}$, also known as the \textbf{zeta-values} of $X$.
To define these, we assume that $\zeta (X,s)$ admits a meromorphic continuation
around $s = n$. We denote by
$$d_n = \ord_{s=n} \zeta (X,s)$$
the \textbf{vanishing order} of $\zeta (X,s)$ at $s = n$. That is, $d_n > 0$
(resp. $d_n < 0$) if $\zeta (X,s)$ has a zero (resp. pole) of order $|d_n|$ at
$s = n$.
The \textbf{special value} of $\zeta (X,s)$ at $s = n$ is defined as the leading
nonzero coefficient of the Taylor expansion:
$$\zeta^* (X,n) = \lim_{s \to n} (s - n)^{-d_n}\,\zeta (X,s).$$
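For example, for $X = \Spec \mathbb{F}_q$ the function $\zeta (X,s) = (1 - q^{-s})^{-1}$ is holomorphic and nonzero at every real $s = n < 0$, so that $d_n = 0$ and
$$\zeta^* (\Spec \mathbb{F}_q, n) = \frac{1}{1 - q^{-n}}.$$
For $X = \Spec \mathbb{Z}$ and $n < 0$ even, the Riemann zeta function has a simple (``trivial'') zero, so that $d_n = 1$ and $\zeta^* (\Spec \mathbb{Z}, n) = \zeta' (n)$.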
Early on, Lichtenbaum conjectured that both numbers $\ord_{s = n} \zeta (X,s)$
and $\zeta^* (X,n)$ should have a cohomological interpretation related to the
\'{e}tale motivic cohomology of $X$ (see e.g. \cite{Lichtenbaum-1984} for
varieties over finite fields).
This is made precise in Lichtenbaum's Weil-\'{e}tale program. It suggests the
existence of \textbf{Weil-\'{e}tale cohomology}, which is a suitable
modification of motivic cohomology that encodes the information about the
vanishing order and the special value of $\zeta (X,s)$ at $s = n$.
For Lichtenbaum's recent work on this topic, we refer the reader to
\cite{Lichtenbaum-2005,Lichtenbaum-2009-Euler-char,Lichtenbaum-2009-number-rings,Lichtenbaum-2021}.
The case of varieties over finite fields $X/\mathbb{F}_q$ is now well understood thanks
to the work of Geisser
\cite{Geisser-2004,Geisser-2006,Geisser-2010-arithmetic-homology}.
Flach and Morin considered the case of proper, regular arithmetic schemes
$X$. In \cite{Flach-Morin-2012} they have studied the corresponding
Weil-\'{e}tale topos. Later, in \cite{Morin-2014} Morin gave an explicit
construction of Weil-\'{e}tale cohomology groups $H^i_\text{\it W,c} (X, \mathbb{Z})$ for a proper
and regular arithmetic scheme $X$. This construction was further generalized by
Flach and Morin in \cite{Flach-Morin-2018} to groups $H^i_\text{\it W,c} (X, \mathbb{Z}(n))$ with
weights $n \in \mathbb{Z}$, again for a proper and regular $X$.
Motivated by the work of Flach and Morin, the author constructed in
\cite{Beshenov-Weil-etale-1} Weil-\'{e}tale cohomology groups
$H^i_\text{\it W,c} (X, \mathbb{Z} (n))$ for any arithmetic scheme $X$ (removing the assumption
that $X$ is proper or regular) and strictly negative weights $n < 0$.
The construction is based on the following assumption.
\begin{conjecture*}
$\mathbf{L}^c (X_\text{\it \'{e}t},n)$: given an arithmetic scheme $X$ and $n < 0$, the
cohomology groups $H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n))$ are finitely generated for all
$i \in \mathbb{Z}$.
\end{conjecture*}
For the known cases, see \cite[\S 8]{Beshenov-Weil-etale-1}. Under this
conjecture, we constructed in \cite[\S 7]{Beshenov-Weil-etale-1} perfect
complexes of abelian groups $R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$ and the corresponding
cohomology groups
$$H^i_\text{\it W,c} (X, \mathbb{Z}(n)) \mathrel{\mathop:}= H^i (R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))).$$
This text is a continuation of \cite{Beshenov-Weil-etale-1} and investigates the
conjectural relation of our Weil-\'{e}tale cohomology to the special value of
$\zeta (X,s)$ at $s = n < 0$. Specifically, we make the following conjectures.
\begin{enumerate}
\item[1)] \textbf{Conjecture}~$\mathbf{VO} (X,n)$
(see \S\ref{sec:vanishing-order-conjecture}):
\emph{the vanishing order is given by the weighted alternating sum of ranks}
\[ \ord_{s=n} \zeta (X,s) =
\sum_{i\in \mathbb{Z}} (-1)^i \cdot i \cdot \rk_\mathbb{Z} H_\text{\it W,c}^i (X, \mathbb{Z}(n)). \]
\item[2)] A consequence of \textbf{Conjecture}~$\mathbf{B} (X,n)$
(see \S\ref{sec:regulator} and Lemma~\ref{lemma:smile-theta}):
\emph{after tensoring the cohomology groups $H_\text{\it W,c}^i (X, \mathbb{Z} (n))$ with $\mathbb{R}$,
we obtain a long exact sequence of finite dimensional real vector spaces}
\[ \cdots \to H_\text{\it W,c}^{i-1} (X, \mathbb{R} (n)) \xrightarrow{\smile\theta}
H_\text{\it W,c}^i (X, \mathbb{R} (n)) \xrightarrow{\smile\theta}
H_\text{\it W,c}^{i+1} (X, \mathbb{R} (n)) \to \cdots \]
It follows that there is a canonical isomorphism
\[ \lambda\colon \mathbb{R} \xrightarrow{\cong}
(\det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))) \otimes \mathbb{R}. \]
Here $\det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))$ is the determinant of the
perfect complex of abelian groups $R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))$, in the sense of
Knudsen and Mumford \cite{Knudsen-Mumford-1976}. In particular,
$\det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))$ is a free $\mathbb{Z}$-module of rank
$1$. For the convenience of the reader, we give a brief overview of
determinants in Appendix~\ref{app:determinants}.
\item[3)] \textbf{Conjecture}~$\mathbf{C} (X,n)$
(see \S\ref{sec:special-value-conjecture}):
\emph{the special value is determined up to sign by}
\[ \lambda (\zeta^* (X, n)^{-1}) \cdot \mathbb{Z} =
\det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n)). \]
\end{enumerate}
If $X$ is proper and regular, then our construction of
$R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))$ and the above conjectures agree with those of Flach
and Morin from \cite{Flach-Morin-2018}. Apart from removing the assumption that
$X$ is proper and regular, a novelty of this work is that we prove the
compatibility of the conjectures with operations on schemes, in particular with
closed-open decompositions $Z \hookrightarrow X \hookleftarrow U$, where
$Z \subset X$ is a closed subscheme and $U = X\setminus Z$ is the open
complement, and with affine bundles $\AA_X^r = \AA_\mathbb{Z}^r \times X$ (see
Proposition~\ref{prop:compatibility-of-VO(X,n)} and
Theorem~\ref{thm:compatibility-of-C(X,n)}). This gives a machinery for starting
from the cases of schemes for which the conjectures are known and constructing
new schemes for which the conjectures also hold. As an application, we prove in
\S\ref{sec:unconditional-results} the following result.
\begin{maintheorem*}
Let $B$ be a one-dimensional arithmetic scheme, such that each of the generic
points $\eta \in B$ satisfies one of the following properties:
\begin{enumerate}
\item[a)] $\fchar \kappa (\eta) = p > 0$;
\item[b)] $\fchar \kappa (\eta) = 0$, and $\kappa (\eta)/\mathbb{Q}$ is an abelian
number field.
\end{enumerate}
If $X$ is a $B$-cellular arithmetic scheme with smooth quasi-projective fiber
$X_{\text{\it red},\mathbb{C}}$, then Conjectures~$\mathbf{VO} (X,n)$ and
$\mathbf{C} (X,n)$ hold unconditionally for any $n < 0$.
\end{maintheorem*}
In fact, this result is established for a larger class of arithmetic schemes
$\mathcal{C} (\mathbb{Z})$; we refer to \S\ref{sec:unconditional-results} for more
details.
\subsection*{Outline of the paper}
In \S\ref{sec:regulator} we define the regulator morphism, based on the
construction of Kerr, Lewis, and M\"{u}ller-Stach
\cite{Kerr-Lewis-Muller-Stach-2006}, and state the associated
Conjecture~$\mathbf{B} (X,n)$.
Then \S\ref{sec:vanishing-order-conjecture} is devoted to
Conjecture~$\mathbf{VO} (X,n)$ about the vanishing order. We also explain why it
is consistent with a conjecture of Soul\'{e}, and with the vanishing order
arising from the expected functional equation.
In \S\ref{sec:special-value-conjecture} we state Conjecture~$\mathbf{C} (X,n)$
about the special value.
We explain in \S\ref{sec:finite-fields} that if $X$ is a variety over a finite
field, then Conjecture~$\mathbf{C} (X,n)$ is consistent with the conjectures
considered by Geisser in
\cite{Geisser-2004,Geisser-2006,Geisser-2010-arithmetic-homology}, and it
follows from Conjecture~$\mathbf{L}^c (X_\text{\it \'{e}t},n)$.
Then we prove in \S\ref{sec:compatibility-with-operations} that Conjectures
$\mathbf{VO} (X,n)$ and $\mathbf{C} (X,n)$ are compatible with basic operations
on schemes: disjoint unions, closed-open decompositions, and affine
bundles. Using these results, we conclude in \S\ref{sec:unconditional-results}
with a class of schemes for which the conjectures hold unconditionally.
For the convenience of the reader, Appendix~\ref{app:determinants} gives a brief
overview of basic definitions and facts related to the determinants of
complexes.
\subsection*{Notation}
In this paper, $X$ always denotes an \textbf{arithmetic scheme} (separated, of
finite type over $\Spec \mathbb{Z}$), and $n$ is always a strictly negative integer.
We denote by
\[
R\Gamma_\text{\it fg} (X, \mathbb{Z} (n))
\quad\text{and}\quad
R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))
\]
the complexes of abelian groups constructed in \cite{Beshenov-Weil-etale-1}
under Conjecture~$\mathbf{L}^c (X_\text{\it \'{e}t},n)$. We set
\begin{align*}
H^i_\text{\it fg} (X, \mathbb{Z} (n)) & \mathrel{\mathop:}= H^i (R\Gamma_\text{\it fg} (X, \mathbb{Z} (n))), \\
H^i_\text{\it W,c} (X, \mathbb{Z} (n)) & \mathrel{\mathop:}= H^i (R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))).
\end{align*}
By \cite[Proposition 5.5 and 7.12]{Beshenov-Weil-etale-1}, these
cohomology groups are finitely generated, assuming
Conjecture~$\mathbf{L}^c (X_\text{\it \'{e}t},n)$; moreover, the groups $H^i_\text{\it W,c} (X, \mathbb{Z}(n))$
are bounded, and $H^i_\text{\it fg} (X, \mathbb{Z} (n))$ are bounded from below and finite
$2$-torsion for $i \gg 0$.
Briefly, the construction fits in the following diagram of distinguished
triangles in the derived category $\mathbf{D} (\mathbb{Z})$:
\[ \begin{tikzcd}[column sep=1.5em]
&[-3em] R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{Q} [-2]) \ar{d}{\alpha_{X,n}} \ar{r} &[-2.5em] 0 \ar{d} \\
& R\Gamma_c (X_\text{\it \'{e}t}, \mathbb{Z}(n)) \ar{d}\ar{r}{u_\infty^*} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z}(n))\ar{d}{id} \\
R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n)) \ar{r} & R\Gamma_\text{\it fg} (X, \mathbb{Z}(n)) \ar[dashed]{r}{i_\infty^*}\ar{d} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z}(n)) \ar{r} \ar{d} & {[1]} \\
& R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{Q} [-1]) \ar{r} & 0
\end{tikzcd} \]
For more details, see \cite{Beshenov-Weil-etale-1}.
For real coefficients, we set
\begin{align*}
R\Gamma_\text{\it fg} (X, \mathbb{R} (n)) & \mathrel{\mathop:}= R\Gamma_\text{\it fg} (X, \mathbb{Z} (n)) \otimes \mathbb{R}, \\
R\Gamma_\text{\it W,c} (X, \mathbb{R} (n)) & \mathrel{\mathop:}= R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n)) \otimes \mathbb{R}.
\end{align*}
Accordingly,
\begin{align*}
H^i_\text{\it fg} (X, \mathbb{R} (n)) & \mathrel{\mathop:}= H^i (R\Gamma_\text{\it fg} (X, \mathbb{R} (n))) = H^i_\text{\it fg} (X, \mathbb{Z} (n)) \otimes \mathbb{R}, \\
H^i_\text{\it W,c} (X, \mathbb{R} (n)) & \mathrel{\mathop:}= H^i (R\Gamma_\text{\it W,c} (X, \mathbb{R} (n))) = H^i_\text{\it W,c} (X, \mathbb{Z} (n)) \otimes \mathbb{R}.
\end{align*}
By $X (\mathbb{C})$ we denote the space of complex points of $X$ with the usual
analytic topology. It carries a natural action of $G_\mathbb{R} = \Gal (\mathbb{C}/\mathbb{R})$ via
the complex conjugation. For a subring $A \subseteq \mathbb{R}$ we denote by $A (n)$
the $G_\mathbb{R}$-module $(2\pi i)^n\,A$, and also the corresponding constant
$G_\mathbb{R}$-equivariant sheaf on $X (\mathbb{C})$.
We denote by $R\Gamma_c (X (\mathbb{C}), A (n))$ the cohomology with compact support
with $A (n)$-coefficients, and its $G_\mathbb{R}$-equivariant version is defined by
$$R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), A (n)) \mathrel{\mathop:}= R\Gamma (G_\mathbb{R}, R\Gamma_c (X (\mathbb{C}), A (n))).$$
For real coefficients, we have
$$H_c^i (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) = H^i_c (X (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}},$$
where the $G_\mathbb{R}$-action on $H^i_c (X (\mathbb{C}), \mathbb{R} (n))$ naturally comes from the
corresponding action on $X (\mathbb{C})$ and $\mathbb{R} (n)$.
\textbf{Borel--Moore homology} is defined as the dual to cohomology with compact
support. We are interested in the real coefficients:
\begin{align*}
R\Gamma_\text{\it BM} (X (\mathbb{C}), \mathbb{R} (n)) & \mathrel{\mathop:}=
R\!\Hom (R\Gamma_c (X (\mathbb{C}), \mathbb{R} (n)), \mathbb{R}), \\
R\Gamma_\text{\it BM} (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) & \mathrel{\mathop:}=
R\!\Hom (R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)), \mathbb{R}).
\end{align*}
\subsection*{Acknowledgments}
Parts of this work are based on my doctoral thesis, which I wrote under the
supervision of Baptiste Morin (Universit\'{e} de Bordeaux) and Bas Edixhoven
(Universiteit Leiden). I am very grateful to them for their support in working
on this project. I am also indebted to Matthias Flach, as the ideas for this
work came from \cite{Flach-Morin-2018}. I thank Stephen Lichtenbaum and Niranjan
Ramachandran who kindly agreed to act as reviewers for my thesis and provided me
with many useful comments and suggestions. Finally, I thank Jos\'{e} Jaime
Hern\'{a}ndez Castillo, Diosel L\'{o}pez Cruz, and Maxim Mornev for several
fruitful discussions.
This paper was edited while I visited the Center for Research in Mathematics
(CIMAT), Guanajuato. I personally thank Pedro Luis del \'{A}ngel and
Xavier G\'{o}mez Mont for their hospitality.
\section{Regulator morphism and Conjecture~$\mathbf{B} (X,n)$}
\label{sec:regulator}
In order to formulate the special value conjecture, we need a regulator morphism
from motivic cohomology to Deligne(--Beilinson) (co)homology. Such regulators
were originally introduced by Bloch in \cite{Bloch-1986-Lefschetz}, and here we
use the construction of Kerr, Lewis, and M\"{u}ller-Stach
\cite{Kerr-Lewis-Muller-Stach-2006}, which works at the level of complexes.
We will simply call it ``the KLM regulator.'' It works under the assumption that
$X_{\text{\it red},\mathbb{C}}$ is a smooth quasi-projective variety.
For simplicity, in this section we assume that $X$ is reduced (motivic
cohomology does not distinguish between $X$ and $X_\text{\it red}$), and that $X_\mathbb{C}$ is
connected of dimension $d_\mathbb{C}$ (otherwise, the arguments below can be applied to
each connected component). We fix a compactification by a normal crossing
divisor
\[ \begin{tikzcd}
X_\mathbb{C} \ar[right hook->]{r}{j} & \overline{X}_\mathbb{C} & D \ar[left hook->]{l}
\end{tikzcd} \]
The KLM regulator has the form of a morphism in the derived category
\begin{equation}
\label{eqn:KLM-morphism-1}
z^p (X_\mathbb{C}, -\bullet) \otimes \mathbb{Q} \to
{}' C_\mathcal{D}^{2p - 2d_\mathbb{C} + \bullet} (\overline{X}_\mathbb{C}, D, \mathbb{Q} (p-d_\mathbb{C})).
\end{equation}
Here $z^p (X_\mathbb{C}, -\bullet)$ denotes Bloch's cycle complex
\cite{Bloch-1986}. To define it, consider the algebraic simplex
$\Delta_\mathbb{C}^i = \Spec \mathbb{C} [t_0,\ldots,t_i]/(1 - \sum_j t_j)$.
Then, $z^p (X_\mathbb{C}, i)$ is freely generated by algebraic cycles
$Z \subset X_\mathbb{C} \times_{\Spec \mathbb{C}} \Delta_\mathbb{C}^i$ of codimension $p$ which
intersect the faces properly. It is more convenient for us to work with
$$z_{d_\mathbb{C} - p} (X_\mathbb{C}, i) = z^p (X_\mathbb{C}, i),$$
generated by cycles $Z \subset X_\mathbb{C} \times_{\Spec \mathbb{C}} \Delta_\mathbb{C}^i$ of
dimension $p+i$.
The complex ${}' C_\mathcal{D}^{\bullet} (\overline{X}_\mathbb{C}, D, \mathbb{Q} (k))$ on the
right-hand side of \eqref{eqn:KLM-morphism-1} computes Deligne(--Beilinson)
homology, as defined by Jannsen \cite{Jannsen-1988}. If we take
$p = d_\mathbb{C} + 1 - n$, tensor it with $\mathbb{R}$ and shift it by $2n$, we obtain
\begin{equation}
\label{eqn:KLM-morphism-2}
z_{n-1} (X_\mathbb{C}, -\bullet) \otimes \mathbb{R} [2n] \to
{}' C_\mathcal{D}^{2 + \bullet} (\overline{X}_\mathbb{C}, D, \mathbb{R} (1-n)).
\end{equation}
\begin{remark}
Some comments are in order.
\begin{enumerate}
\item Originally, the KLM regulator is defined using a cubical version of
cycle complexes, but these are quasi-isomorphic to the usual simplicial
cycle complexes by \cite{Levine-1994}, so we make no distinction here.
For an explicit simplicial version of the KLM regulator, see
\cite{Kerr-Lewis-Lopatto-2018}.
\item The KLM regulator is defined as a true morphism of complexes (not just a
morphism in the derived category) on a subcomplex
$z^r_\mathbb{R} (X_\mathbb{C}, \bullet) \subset z^r (X_\mathbb{C}, \bullet)$. This inclusion
becomes a quasi-isomorphism if we pass to rational coefficients. In the
original paper \cite{Kerr-Lewis-Muller-Stach-2006} this is stated without
tensoring with $\mathbb{Q}$, but the omission is acknowledged later in
\cite{Kerr-Lewis-2007}. For our purposes, it suffices to have a regulator
with coefficients in $\mathbb{R}$.
\item The case of a smooth quasi-projective $X_\mathbb{C}$, where one must consider a
compactification by a normal crossing divisor as above, is treated in
\cite[\S 5.9]{Kerr-Lewis-Muller-Stach-2006}.
\end{enumerate}
\end{remark}
Now we make a small digression to identify the right-hand side of
\eqref{eqn:KLM-morphism-2}. Under our assumption that $n < 0$, Deligne
homology is equivalent to Borel--Moore homology.
\begin{lemma}
For any $n < 0$ there is a quasi-isomorphism
\begin{multline*}
{}' C^\bullet_\mathcal{D} (\overline{X_\mathbb{C}}, D, \mathbb{R} (1-n)) \cong
R\Gamma_\text{\it BM} (X (\mathbb{C}), \mathbb{R} (n)) [-1] \\
\mathrel{\mathop:}= R\!\Hom (R\Gamma_c (X (\mathbb{C}), \mathbb{R} (n)), \mathbb{R}) [-1].
\end{multline*}
Moreover, it respects the natural actions of $G_\mathbb{R}$ on both complexes.
\begin{proof}
From the proof of \cite[Theorem~1.15]{Jannsen-1988}, for any $k \in \mathbb{Z}$ we
have a quasi-isomorphism
\begin{equation}
\label{eqn:Jannsen-Theorem-1.15}
{}' C^\bullet_\mathcal{D} (\overline{X_\mathbb{C}}, D, \mathbb{R} (k)) \cong
R\Gamma (\overline{X} (\mathbb{C}), \mathbb{R} (k + d_\mathbb{C})_{{\mathcal{D}\text{-}\mathcal{B}}, (\overline{X}_\mathbb{C},X_\mathbb{C})}) [2d_\mathbb{C}],
\end{equation}
where
\[ \mathbb{R} (k + d_\mathbb{C})_{{\mathcal{D}\text{-}\mathcal{B}}, (\overline{X}_\mathbb{C},X_\mathbb{C})} =
\Cone \left(\begin{array}{c}
R j_* \mathbb{R} (k + d_\mathbb{C}) \\
\oplus \\
\Omega^{\geqslant k + d_\mathbb{C}}_{\overline{X} (\mathbb{C})} (\log D)
\end{array}
\xrightarrow{\epsilon - \iota}
R j_* \Omega_{X (\mathbb{C})}^\bullet \right) [-1] \]
is the sheaf whose hypercohomology on $\overline{X} (\mathbb{C})$ gives
Deligne--Beilinson cohomology (see \cite{Esnault-Viehweg-1988} for
more details).
Here $\Omega^\bullet_{\overline{X} (\mathbb{C})}$ denotes the usual de Rham complex
of holomorphic differential forms, and
$\Omega^\bullet_{\overline{X} (\mathbb{C})} (\log D)$ is the complex of forms with
at most logarithmic poles along $D (\mathbb{C})$.
The latter complex is filtered by subcomplexes
$\Omega^{\geqslant \bullet}_{\overline{X} (\mathbb{C})} (\log D)$.
The morphism
$\epsilon\colon R j_* \mathbb{R} (k + d_\mathbb{C}) \to R j_* \Omega^\bullet_{X (\mathbb{C})}$ is induced
by the canonical morphism of sheaves $\mathbb{R} (k + d_\mathbb{C}) \to \mathcal{O}_{X (\mathbb{C})}$,
and $\iota$ is induced by the natural inclusion
$\Omega^\bullet_{\overline{X} (\mathbb{C})} (\log D) \xrightarrow{\cong} j_*
\Omega_{X (\mathbb{C})}^\bullet = R j_* \Omega_{X (\mathbb{C})}^\bullet$, which is a
quasi-isomorphism of filtered complexes.
We are interested in the case of $k > 0$ when the part
$\Omega^{\geqslant k + d_\mathbb{C}}_{\overline{X} (\mathbb{C})} (\log D)$ vanishes, and
we obtain
\begin{align}
\notag \mathbb{R} (k + d_\mathbb{C})_{{\mathcal{D}\text{-}\mathcal{B}}, (\overline{X}_\mathbb{C},X_\mathbb{C})} & \cong
R j_* \Cone \Bigl(\mathbb{R} (k + d_\mathbb{C})
\xrightarrow{\epsilon}
\Omega_{X (\mathbb{C})}^\bullet \Bigr) [-1] \\
\notag & \cong R j_* \Bigl(\mathbb{R} (k + d_\mathbb{C}) \xrightarrow{\epsilon}
\Omega_{X (\mathbb{C})}^\bullet [-1] \Bigr) \\
\label{eqn:deligne-homology-1} & \cong R j_* \Bigl(\mathbb{R} (k + d_\mathbb{C}) \to \mathbb{C} [-1] \Bigr) \\
\label{eqn:deligne-homology-2} & \cong R j_* \mathbb{R} (k + d_\mathbb{C} - 1) [-1].
\end{align}
Here \eqref{eqn:deligne-homology-1} comes from the Poincar\'{e} lemma
$\mathbb{C} \cong \Omega_{X (\mathbb{C})}^\bullet$ and \eqref{eqn:deligne-homology-2}
from the short exact sequence of $G_\mathbb{R}$-modules
$\mathbb{R} (k + d_\mathbb{C}) \rightarrowtail \mathbb{C} \twoheadrightarrow \mathbb{R} (k + d_\mathbb{C} - 1)$.
Returning to \eqref{eqn:Jannsen-Theorem-1.15} for $k = 1-n$, we find that
\begin{align*}
{}' C^\bullet_\mathcal{D} (\overline{X_\mathbb{C}}, D, \mathbb{R} (1-n)) & \cong
R\Gamma (X (\mathbb{C}), \mathbb{R} (d_\mathbb{C} - n)) [2d_\mathbb{C}-1] \\
& \cong R\!\Hom (R\Gamma_c (X (\mathbb{C}), \mathbb{R} (n)), \mathbb{R}) [-1].
\end{align*}
Here the final isomorphism is Poincar\'{e} duality.
All the above is $G_\mathbb{R}$-equivariant.
\end{proof}
\end{lemma}
Returning now to \eqref{eqn:KLM-morphism-2}, the previous lemma allows us to
reinterpret the KLM regulator as
\begin{equation}
\label{eqn:KLM-morphism-3}
z_{n-1} (X_\mathbb{C}, -\bullet) \otimes \mathbb{R} [2n] \to
R\Gamma_\text{\it BM} (X (\mathbb{C}), \mathbb{R} (n)) [1].
\end{equation}
We have
\begin{multline}
\label{eqn:KLM-morphism-4}
z_{n-1} (X_\mathbb{C}, -\bullet) \otimes \mathbb{R} [2n] =
z_{n-1} (X_\mathbb{C}, -\bullet) \otimes \mathbb{R} [2n-2] [2] \\
= \Gamma (X_{\mathbb{C},\text{\it \'{e}t}}, \mathbb{R}^c (n-1)) [2],
\end{multline}
where the complex of sheaves $\mathbb{R}^c (p)$ is defined by
$U \rightsquigarrow z_p (U, -\bullet) \otimes \mathbb{R} [2p]$.
By \'{e}tale cohomological descent \cite[Theorem~3.1]{Geisser-2010},
\begin{equation}
\label{eqn:KLM-morphism-5}
\Gamma (X_{\mathbb{C},\text{\it \'{e}t}}, \mathbb{R}^c (n-1)) \cong R\Gamma (X_{\mathbb{C},\text{\it \'{e}t}}, \mathbb{R}^c (n-1)).
\end{equation}
(We note that \cite[Theorem~3.1]{Geisser-2010} holds unconditionally, since the
Beilinson--Lichtenbaum conjecture follows from the Bloch--Kato conjecture, which
is now a theorem; see also \cite{Geisser-2004-Dedekind} where the consequences
of Bloch--Kato for motivic cohomology are deduced.)
Finally, the base change from $X$ to $X_\mathbb{C}$ naturally maps cycles
$Z \subset X \times \Delta_\mathbb{Z}^i$ of dimension $n$ to cycles in
$X_\mathbb{C} \times_{\Spec \mathbb{C}} \Delta_\mathbb{C}^i$ of dimension $n-1$, so that there is a
morphism
\begin{equation}
\label{eqn:KLM-morphism-6}
R\Gamma (X_\text{\it \'{e}t}, \mathbb{R}^c (n)) \to R\Gamma (X_{\mathbb{C},\text{\it \'{e}t}}, \mathbb{R}^c (n-1)) [2].
\end{equation}
\begin{remark}
Assuming that $X$ is flat and has pure Krull dimension $d$, we have
$\mathbb{R}^c (n)^X = \mathbb{R} (d-n)^X [2d]$, where $\mathbb{R} (\bullet)$ is the usual cycle
complex defined by $z^n (\text{\textvisiblespace}, -\bullet) [-2n]$.
Similarly, $\mathbb{R}^c (n)^{X_\mathbb{C}} = \mathbb{R} (d_\mathbb{C}-n)^{X_\mathbb{C}} [2d_\mathbb{C}]$, with
$d_\mathbb{C} = d - 1$. With this renumbering, the morphism
\eqref{eqn:KLM-morphism-6} becomes
$$R\Gamma (X_\text{\it \'{e}t}, \mathbb{R} (d-n)) [2d] \to R\Gamma (X_{\mathbb{C},\text{\it \'{e}t}}, \mathbb{R} (d-n)) [2d].$$
This probably looks more natural, but we make no additional assumptions about
$X$ and work exclusively with complexes $A^c (\bullet)$ defined in terms of
dimension of algebraic cycles, rather than $A (\bullet)$ defined in terms of
codimension.
\end{remark}
\begin{definition}
Given an arithmetic scheme $X$ with smooth quasi-projective $X_\mathbb{C}$ and
$n < 0$, consider the composition of morphisms
\begin{multline*}
R\Gamma (X_\text{\it \'{e}t}, \mathbb{R}^c (n)) \xrightarrow{\text{\eqref{eqn:KLM-morphism-6}}}
R\Gamma (X_{\mathbb{C},\text{\it \'{e}t}}, \mathbb{R}^c (n-1)) [2] \stackrel{\text{\eqref{eqn:KLM-morphism-5}}}{\cong}
\Gamma (X_{\mathbb{C},\text{\it \'{e}t}}, \mathbb{R}^c (n-1)) [2] \\
\stackrel{\text{\eqref{eqn:KLM-morphism-4}}}{=}
z_{n-1} (X_\mathbb{C}, -\bullet)_\mathbb{R} [2n] \xrightarrow{\text{\eqref{eqn:KLM-morphism-3}}}
R\Gamma_\text{\it BM} (X (\mathbb{C}), \mathbb{R} (n)) [1].
\end{multline*}
Then we take the $G_\mathbb{R}$-invariants, which gives us the
\textbf{(\'{e}tale) regulator}
\[ Reg_{X,n}\colon R\Gamma (X_\text{\it \'{e}t}, \mathbb{R}^c (n)) \to
R\Gamma_\text{\it BM} (G_\mathbb{R}, X(\mathbb{C}), \mathbb{R} (n)) [1]. \]
\end{definition}
Now we state our conjecture about the regulator, which will play an important
role in everything that follows.
\begin{conjecture}
$\mathbf{B} (X,n)$: given an arithmetic scheme $X$ with smooth
quasi-projective $X_\mathbb{C}$ and $n < 0$, the regulator morphism $Reg_{X,n}$
induces a quasi-isomorphism of complexes of real vector spaces
\[ Reg_{X,n}^\vee\colon R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \to
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}). \]
\end{conjecture}
\begin{remark}
If $X/\mathbb{F}_q$ is a variety over a finite field, then $X (\mathbb{C}) = \emptyset$,
so the regulator map is not interesting. Indeed, in our setting, its purpose
is to take care of the Archimedean places of $X$. In this case
$\mathbf{B} (X,n)$ implies that $H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n))$ are torsion groups.
However, by \cite[Proposition~4.2]{Beshenov-Weil-etale-1},
Conjecture~$\mathbf{L}^c (X_\text{\it \'{e}t}, n)$ already implies that
$H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n))$ are finite groups.
\end{remark}
\begin{remark}
\label{rmk:regulator-is-defined-for-XC-smooth-quasi-proj}
We reiterate that our construction of $Reg_{X,n}$ works for $X_{\text{\it red},\mathbb{C}}$
smooth quasi-projective. In everything that follows, whenever the regulator
morphism or Conjecture~$\mathbf{B} (X,n)$ is invoked, we tacitly assume this
restriction. This is rather unfortunate, since Weil-\'{e}tale cohomology was
constructed in \cite{Beshenov-Weil-etale-1} for any arithmetic scheme,
assuming only Conjecture~$\mathbf{L}^c (X_\text{\it \'{e}t},n)$. Defining the regulator for
singular $X_{\text{\it red},\mathbb{C}}$ is an interesting project for future work.
\end{remark}
\section{Vanishing order Conjecture $\mathbf{VO} (X,n)$}
\label{sec:vanishing-order-conjecture}
Assuming that $\zeta (X,s)$ admits a meromorphic continuation around
$s = n < 0$, we make the following conjecture for the vanishing order at
$s = n$.
\begin{conjecture}
$\mathbf{VO} (X,n)$: one has
\[ \ord_{s=n} \zeta (X,s) =
\sum_{i \in \mathbb{Z}} (-1)^i \cdot i \cdot \rk_\mathbb{Z} H^i_\text{\it W,c} (X, \mathbb{Z} (n)). \]
\end{conjecture}
We note that the right-hand side makes sense under
Conjecture~$\mathbf{L}^c (X_\text{\it \'{e}t},n)$, which implies that $H^i_\text{\it W,c} (X, \mathbb{Z} (n))$
are finitely generated groups, trivial for $|i| \gg 0$;
see \cite[Proposition~7.12]{Beshenov-Weil-etale-1}.
\begin{remark}
Conjecture~$\mathbf{VO} (X,n)$ is similar to
\cite[Conjecture~5.11]{Flach-Morin-2018}. If $X$ is proper and regular, then
$\mathbf{VO} (X,n)$ is the same as Flach and Morin's vanishing order
conjecture. Indeed, the latter is
\begin{equation}
\label{eqn:FM-vanishing-order}
\ord_{s = n} \zeta (X,s) =
\sum_{i\in \mathbb{Z}} (-1)^i \cdot i \cdot \dim_\mathbb{R} H^i_{\text{\it ar},c} (X, \widetilde{\mathbb{R}}(n)),
\end{equation}
where
\[ R\Gamma_{\text{\it ar},c} (X, \widetilde{\mathbb{R}}(n)) \mathrel{\mathop:}=
R\Gamma_c (X, \mathbb{R}(n)) \oplus R\Gamma_c (X, \mathbb{R}(n)) [-1]. \]
Moreover, \cite[Proposition~4.14]{Flach-Morin-2018} gives a distinguished
triangle
\begin{multline*}
R\Gamma_\text{\it dR} (X_\mathbb{R}/\mathbb{R}) / \Fil^n [-2] \to
R\Gamma_{\text{\it ar},c} (X, \widetilde{\mathbb{R}}(n)) \to
R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \otimes \mathbb{R} \\
\to R\Gamma_\text{\it dR} (X_\mathbb{R}/\mathbb{R}) / \Fil^n [-1]
\end{multline*}
So, for $n < 0$ we have
$R\Gamma_{\text{\it ar},c} (X, \widetilde{\mathbb{R}}(n)) \cong
R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \otimes \mathbb{R}$ and
\eqref{eqn:FM-vanishing-order} is exactly Conjecture~$\mathbf{VO} (X,n)$.
\end{remark}
\begin{remark}
The alternating sum in Conjecture~$\mathbf{VO} (X,n)$ is the so-called
\textbf{secondary Euler characteristic}
\[ \chi' (R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))) \mathrel{\mathop:}=
\sum_{i \in \mathbb{Z}} (-1)^i \cdot i \cdot \rk_\mathbb{Z} H^i_\text{\it W,c} (X, \mathbb{Z} (n)). \]
The calculations below show that the usual Euler characteristic of
$R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))$ vanishes, assuming
Conjectures~$\mathbf{L}^c (X_\text{\it \'{e}t},n)$ and $\mathbf{B} (X,n)$. See
\cite{Ramachandran-2016} for more details on the secondary Euler
characteristic and its occurrences in nature.
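To illustrate the difference between the two Euler characteristics: if $C$ is a perfect complex whose cohomology has rank $r$ in degrees $-1$ and $0$ and is finite elsewhere, then
\[ \chi (C) = -r + r = 0,
\qquad
\chi' (C) = (-1)^{-1} \cdot (-1) \cdot r + (-1)^0 \cdot 0 \cdot r = r, \]
so $\chi'$ detects rank information that cancels out in $\chi$.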
\end{remark}
Under the regulator conjecture, our vanishing order formula takes the form of
the usual Euler characteristic of equivariant cohomology
$R\Gamma_c (G_\mathbb{R}, X(\mathbb{C}), \mathbb{R} (n))$ or motivic cohomology
$R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)) [1]$.
\begin{proposition}
\label{prop:VO(X,n)-assuming-B(X,n)}
Assuming $\mathbf{L}^c (X_\text{\it \'{e}t}, n)$ and $\mathbf{B} (X,n)$,
Conjecture~$\mathbf{VO} (X,n)$ is equivalent to
\begin{align*}
\ord_{s=n} \zeta (X,s) & = \chi (R\Gamma_c (G_\mathbb{R}, X(\mathbb{C}), \mathbb{R} (n)))
= \sum_{i \in \mathbb{Z}} (-1)^i \dim_\mathbb{R} H^i_c (X(\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}} \\
& = -\chi (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)))
= \sum_{i \in \mathbb{Z}} (-1)^{i+1} \rk_\mathbb{Z} H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)).
\end{align*}
Moreover, we have
$$\chi (R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))) = 0.$$
\begin{proof}
Thanks to \cite[Proposition~7.13]{Beshenov-Weil-etale-1}, the Weil-\'{e}tale
complex tensored with $\mathbb{R}$ splits as
\[ R\Gamma_\text{\it W,c} (X,\mathbb{R} (n)) \cong
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \oplus
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1]. \]
Assuming Conjecture~$\mathbf{B} (X,n)$, we also have a quasi-isomorphism
\[ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \cong
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}), \]
so that
\[ \dim_\mathbb{R} H^i_\text{\it W,c} (X,\mathbb{R}(n)) =
\dim_\mathbb{R} H^{i-1}_c (X (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}} +
\dim_\mathbb{R} H^{i-2}_c (X (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}}. \]
Thus, we can rewrite the sum
\begin{align*}
\sum_{i \in \mathbb{Z}} (-1)^i \cdot i \cdot \rk_\mathbb{Z} H^i_\text{\it W,c} (X, \mathbb{Z} (n)) & = \sum_{i \in \mathbb{Z}} (-1)^i \cdot i \cdot \dim_\mathbb{R} H^i_\text{\it W,c} (X, \mathbb{R} (n)) \\
& = \sum_{i \in \mathbb{Z}} (-1)^i \cdot i \cdot
\dim_\mathbb{R} H^{i-1}_c (X (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}} \\
& \quad\quad + \sum_{i \in \mathbb{Z}} (-1)^i \cdot i \cdot \dim_\mathbb{R} H^{i-2}_c (X (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}} \\
& = -\sum_{i \in \mathbb{Z}} (-1)^i \, \dim_\mathbb{R} H^{i-1}_c (X (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}} \\
& = \chi (R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n))).
\end{align*}
Similarly,
\begin{align*}
\sum_{i \in \mathbb{Z}} (-1)^i \cdot i \cdot \rk_\mathbb{Z} H^i_\text{\it W,c} (X, \mathbb{Z} (n)) & = \chi (R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [1]) \\
& = -\chi (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n))).
\end{align*}
These considerations also show that the usual Euler characteristic of
$R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$ vanishes.
\end{proof}
\end{proposition}
\begin{remark}
Conjecture~$\mathbf{VO} (X,n)$ is related to a conjecture of Soul\'{e}
\cite[Conjecture~2.2]{Soule-1984-ICM}, which originally reads in terms of
$K'$-theory
\[ \ord_{s=n} \zeta (X,s) =
\sum_{i \in \mathbb{Z}} (-1)^{i+1} \, \dim_\mathbb{Q} K'_i (X)_{(n)}. \]
As explained in \cite[Remark~43]{Kahn-2005}, this can be rewritten in
terms of Borel--Moore motivic homology as
$$\sum_{i \in \mathbb{Z}} (-1)^{i+1} \, \dim_\mathbb{Q} H_i^{BM} (X, \mathbb{Q} (n)).$$
In our setting, $H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n))$ plays the role of Borel--Moore
homology, which explains the formula
\[ \ord_{s=n} \zeta (X,s) =
\sum_{i \in \mathbb{Z}} (-1)^{i+1} \rk_\mathbb{Z} H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)). \]
\end{remark}
\begin{remark}[{\cite[Proposition~5.13]{Flach-Morin-2018}}]
\label{rmk:archimedian-euler-factor}
As for the formula
\[ \ord_{s=n} \zeta (X,s) =
\sum_{i \in \mathbb{Z}} (-1)^i \dim_\mathbb{R} H^i_c (X(\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}}, \]
it essentially means that the vanishing order at $s = n < 0$ comes from the
Archimedean $\Gamma$-factor appearing in the (hypothetical) functional
equation, as explained in \cite[\S\S 3,4]{Serre-1970}
(see also \cite[\S 4]{Flach-Morin-2020}).
Indeed, under the assumption that $X_\mathbb{C}$ is a smooth projective variety, we
consider the Hodge decomposition
\[ H^i (X (\mathbb{C}), \mathbb{C}) = \bigoplus_{p+q = i} H^{p,q}, \]
which carries an action of $G_\mathbb{R} = \{ id, \sigma \}$ such that
$\sigma (H^{p,q}) = H^{q,p}$. We set $h^{p,q} = \dim_\mathbb{C} H^{p,q}$.
For $p = i/2$ we consider the eigenspace decomposition
$H^{p,p} = H^{p,+} \oplus H^{p,-}$, where
\begin{align*}
H^{p,+} & = \{ x \in H^{p,p} \mid \sigma (x) = (-1)^p\,x \},\\
H^{p,-} & = \{ x \in H^{p,p} \mid \sigma (x) = (-1)^{p+1}\,x \},
\end{align*}
and set $h^{p,\pm} = \dim_\mathbb{C} H^{p,\pm}$ accordingly.
The completed zeta function
$$\zeta (\overline{X}, s) = \zeta (X, s)\,\zeta (X_\infty, s)$$
is expected to satisfy a functional equation of the form
\[ A^{\frac{d-s}{2}}\,\zeta (\overline{X},d-s) =
A^{\frac{s}{2}}\,\zeta (\overline{X},s). \]
Here
\begin{gather*}
\zeta (X_\infty, s) = \prod_{i\in \mathbb{Z}} L_\infty (H^i (X),s)^{(-1)^i}, \\
L_\infty (H^i (X), s) =
\prod_{p = i/2} \Gamma_\mathbb{R} (s - p)^{h^{p,+}}\,\Gamma_\mathbb{R} (s-p+1)^{h^{p,-}} \,
\prod_{\substack{p + q = i \\ p < q}} \Gamma_\mathbb{C} (s - p)^{h^{p,q}}, \\
\Gamma_\mathbb{R} (s) = \pi^{-s/2} \, \Gamma (s/2), \quad
\Gamma_\mathbb{C} (s) = (2\pi)^{-s} \, \Gamma (s).
\end{gather*}
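Recall that $\Gamma (s)$ has simple poles exactly at the integers $s \le 0$ and no zeros, so for an integer $n$,
\[ \ord_{s=n} \Gamma_\mathbb{R} (s) = \begin{cases}
-1, & n \le 0 \text{ even}, \\
0, & \text{otherwise},
\end{cases}
\qquad
\ord_{s=n} \Gamma_\mathbb{C} (s) = \begin{cases}
-1, & n \le 0, \\
0, & n > 0.
\end{cases} \]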
Therefore, since $\zeta (\overline{X}, s)$ is expected to have neither zeros nor poles at $s = n < 0$, the expected vanishing order is
\begin{align*}
\ord_{s=n} \zeta (X,s) & = -\ord_{s=n} \zeta (X_\infty,s) \\
& = -\sum_{i\in \mathbb{Z}} (-1)^i \ord_{s=n} L_\infty (H^i (X), s) \\
& = \sum_{i\in \mathbb{Z}} (-1)^i \Bigl(\sum_{p = i/2} h^{p,(-1)^{n-p}} +
\sum_{\substack{p + q = i \\ p < q}} h^{p,q}\Bigr).
\end{align*}
The last equality holds since $\Gamma (s)$ has simple poles at all integers
$s \le 0$: at $s = n < 0$ the factor $\Gamma_\mathbb{R} (s - p)$ has a simple pole
precisely when $n \equiv p \pmod{2}$, the factor $\Gamma_\mathbb{R} (s - p + 1)$
precisely when $n \not\equiv p \pmod{2}$, and $\Gamma_\mathbb{C} (s - p)$ always has a
simple pole, because $n - p < 0$. We have
\begin{align*}
\dim_\mathbb{R} H^i (X (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}} & = \dim_\mathbb{R} H^i (X (\mathbb{C}), \mathbb{R})^{\sigma = (-1)^n} \\
& = \dim_\mathbb{C} H^i (X (\mathbb{C}), \mathbb{C})^{\sigma = (-1)^n} \\
& = \sum_{p = i/2} h^{p,(-1)^{n-p}} + \sum_{\substack{p + q = i \\ p < q}} h^{p,q}.
\end{align*}
Here the terms $h^{p,q}$ with $p < q$ come from $\sigma (H^{p,q}) = H^{q,p}$,
while $h^{p,(-1)^{n-p}}$ come from the action on $H^{p,p}$.
We see that our conjectural formula recovers the expected vanishing order.
\end{remark}
Let us look at some particular examples when the meromorphic continuation for
$\zeta (X,s)$ is known.
\begin{example}
\label{example:VO(X,n)-for-number-rings}
Suppose that $X = \Spec \mathcal{O}_F$ is the spectrum of the ring of integers
of a number field $F/\mathbb{Q}$. Let $r_1$ be the number of real embeddings
$F \hookrightarrow \mathbb{R}$ and $r_2$ be the number of conjugate pairs of complex
embeddings $F \hookrightarrow \mathbb{C}$. The space $X (\mathbb{C})$ with the action of
complex conjugation can be visualized as follows:
\[ \begin{tikzpicture}
\matrix(m)[matrix of math nodes, row sep=1em, column sep=1em,
text height=1ex, text depth=0.2ex]{
~ & ~ & ~ & ~ & ~ & \bullet & \bullet & \cdots & \bullet \\
\bullet & \bullet & \cdots & \bullet \\
~ & ~ & ~ & ~ & ~ & \bullet & \bullet & \cdots & \bullet \\};
\draw[->] (m-2-1) edge[loop above,min distance=10mm] (m-2-1);
\draw[->] (m-2-2) edge[loop above,min distance=10mm] (m-2-2);
\draw[->] (m-2-4) edge[loop above,min distance=10mm] (m-2-4);
\draw[->] (m-1-6) edge[bend left] (m-3-6);
\draw[->] (m-1-7) edge[bend left] (m-3-7);
\draw[->] (m-1-9) edge[bend left] (m-3-9);
\draw[->] (m-3-6) edge[bend left] (m-1-6);
\draw[->] (m-3-7) edge[bend left] (m-1-7);
\draw[->] (m-3-9) edge[bend left] (m-1-9);
\draw [decorate,decoration={brace,amplitude=5pt,mirror}] ($(m-3-1)+(-0.5em,-0.5em)$) -- ($(m-3-4)+(0.5em,-0.5em)$);
\draw [decorate,decoration={brace,amplitude=5pt,mirror}] ($(m-3-6)+(-0.5em,-0.5em)$) -- ($(m-3-9)+(0.5em,-0.5em)$);
\draw ($(m-3-1)!.5!(m-3-4)$) node[yshift=-2em,anchor=base] {$r_1$ points};
\draw ($(m-3-6)!.5!(m-3-9)$) node[yshift=-2em,anchor=base] {$2 r_2$ points};
\end{tikzpicture} \]
The complex $R\Gamma_c (X (\mathbb{C}), \mathbb{R} (n))$ consists of a single $G_\mathbb{R}$-module
in degree $0$ given by
$$\mathbb{R} (n)^{\oplus r_1} \oplus (\mathbb{R} (n) \oplus \mathbb{R} (n))^{\oplus r_2},$$
with the action of $G_\mathbb{R}$ on the first summand $\mathbb{R} (n)^{\oplus r_1}$ via the
complex conjugation and the action on the second summand
$(\mathbb{R} (n) \oplus \mathbb{R} (n))^{\oplus r_2}$ via
$(x,y) \mapsto (\overline{y}, \overline{x})$. The corresponding real space of
fixed points has dimension
\[ \dim_\mathbb{R} H^0_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) = \begin{cases}
r_2, & n \text{ odd},\\
r_1 + r_2, & n \text{ even},\\
\end{cases} \]
which indeed coincides with the vanishing order of the Dedekind zeta function
$\zeta (X,s) = \zeta_F (s)$ at $s = n < 0$.
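For instance, for $F = \mathbb{Q}$ we have $r_1 = 1$ and $r_2 = 0$, and the formula recovers the trivial zeros of the Riemann zeta function:
\[ \ord_{s=n} \zeta (s) = \begin{cases}
0, & n \text{ odd}, \\
1, & n \text{ even},
\end{cases} \]
that is, simple zeros at $s = -2, -4, -6, \ldots$ and no zeros or poles at the negative odd integers.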
On the motivic cohomology side, for $n < 0$ the groups
$H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n))$ are finite, except for $i = -1$, where by
\cite[Proposition~4.14]{Geisser-2017}
\[ \rk_\mathbb{Z} H^{-1} (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)) = \begin{cases}
r_2, & n \text{ odd},\\
r_1 + r_2, & n \text{ even}.
\end{cases} \]
\end{example}
\begin{example}
Suppose that $X$ is a variety over a finite field $\mathbb{F}_q$. Then the vanishing
order conjecture is not very interesting, because the formula yields
\begin{align*}
\ord_{s=n} \zeta (X,s) & = \sum_{i \in \mathbb{Z}} (-1)^i \dim_\mathbb{R} H^i_c (X(\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}} \\
& = \sum_{i \in \mathbb{Z}} (-1)^{i+1} \rk_\mathbb{Z} H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)) = 0,
\end{align*}
since $X (\mathbb{C}) = \emptyset$, and also because $\mathbf{L}^c (X_\text{\it \'{e}t}, n)$
implies $\rk_\mathbb{Z} H^i (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)) = 0$ for all $i$ in the case of
varieties over finite fields, as observed in
\cite[Proposition~4.2]{Beshenov-Weil-etale-1}. Therefore, the conjecture
simply asserts that $\zeta (X,s)$ has no zeros or poles at $s = n < 0$. This
is indeed the case. We have $\zeta (X,s) = Z (X,q^{-s})$, where
$$Z (X,t) = \exp \Bigl(\sum_{k\ge 1} \frac{\# X (\mathbb{F}_{q^k})}{k}\,t^k\Bigr)$$
is the Hasse--Weil zeta function. According to Deligne's work on the Weil
conjectures \cite{Deligne-Weil-II}, the zeros and poles $t$ of $Z (X,t)$ satisfy
$|t| = q^{-w/2}$, where $0 \le w \le 2 \dim X$
(see e.g. \cite[pp.\,26--27]{Katz-1994}). In particular, since $q^{-n} > 1$ for
$n < 0$, the point $t = q^{-n}$ is neither a zero nor a pole of $Z (X,t)$.
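For instance, for $X = \mathbb{P}^1_{\mathbb{F}_q}$ we have
\[ Z (\mathbb{P}^1_{\mathbb{F}_q}, t) = \frac{1}{(1 - t)\,(1 - qt)}, \]
with poles at $t = 1$ and $t = q^{-1}$, corresponding to the weights $w = 0$ and $w = 2$; in particular, there are no zeros or poles at $t = q^{-n}$ with $n < 0$.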
We also note that our definition of $H^i_\text{\it W,c} (X, \mathbb{Z}(n))$, and pretty much
everything said above, only makes sense for $n < 0$. Already for $n = 0$, for
example, the zeta function of a smooth projective curve $X/\mathbb{F}_q$ has a simple
pole at $s = 0$.
\end{example}
\begin{example}
Let $X = E$ be an integral model of an elliptic curve over $\mathbb{Q}$. Then, as a
consequence of the modularity theorem
(Wiles--Breuil--Conrad--Diamond--Taylor), it is known that $\zeta (E,s)$
admits a meromorphic continuation satisfying the functional equation with the
$\Gamma$-factors discussed in Remark~\ref{rmk:archimedian-euler-factor}.
In this particular case $\ord_{s=n} \zeta (E,s) = 0$ for all $n < 0$. This is
consistent with the fact that
$\chi (R\Gamma_c (G_\mathbb{R}, E (\mathbb{C}), \mathbb{R} (n))) = 0$.
Indeed, the equivariant cohomology groups $H^i_c (E (\mathbb{C}), \mathbb{R} (n))^{G_\mathbb{R}}$
are the following:
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{rccc}
\hline
& $i = 0$ & $i = 1$ & $i = 2$ \\
\hline
$n$ even: & $\mathbb{R}$ & $\mathbb{R}$ & $0$ \\
$n$ odd: & $0$ & $\mathbb{R}$ & $\mathbb{R}$ \\
\hline
\end{tabular}
\end{center}
---see, for example, the calculation in \cite[Lemma~A.6]{Siebel-2019}.
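Reading off the table, the Euler characteristic indeed vanishes in both cases:
\[ \chi = 1 - 1 + 0 = 0 \quad (n \text{ even}),
\qquad
\chi = 0 - 1 + 1 = 0 \quad (n \text{ odd}). \]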
\end{example}
\section{Special value Conjecture~$\mathbf{C} (X,n)$}
\label{sec:special-value-conjecture}
\begin{definition}
We define a morphism of complexes
\[ \smile\theta\colon R\Gamma_\text{\it W,c} (X,\mathbb{Z}(n)) \otimes \mathbb{R} \to
R\Gamma_\text{\it W,c} (X,\mathbb{Z}(n)) [1] \otimes \mathbb{R} \]
using the splitting \cite[Proposition~7.13]{Beshenov-Weil-etale-1}
\[ R\Gamma_\text{\it W,c} (X, \mathbb{R} (n)) \cong
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \oplus
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \]
as follows:
\[ \begin{tikzcd}
R\Gamma_\text{\it W,c} (X, \mathbb{R}(n)) \ar{d}{\cong}\ar[dashed]{r}{\smile\theta} & R\Gamma_\text{\it W,c} (X, \mathbb{R}(n)) [1]\ar{d}{\cong} \\
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] & R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) \\[-2em]
\oplus & \oplus \\[-2em]
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1]\ar{uur}[description]{Reg_{X,n}^\vee} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n))
\end{tikzcd} \]
\end{definition}
\begin{lemma}
\label{lemma:smile-theta}
Assuming Conjectures $\mathbf{L}^c (X_\text{\it \'{e}t},n)$ and $\mathbf{B} (X,n)$, the
morphism $\smile\theta$ induces a long exact sequence of finite dimensional
real vector spaces
\[ \cdots \to H^{i-1}_\text{\it W,c} (X, \mathbb{R} (n))
\xrightarrow{\smile\theta}
H^i_\text{\it W,c} (X, \mathbb{R} (n))
\xrightarrow{\smile\theta}
H^{i+1}_\text{\it W,c} (X, \mathbb{R} (n)) \to \cdots \]
\begin{proof}
We obtain a sequence
\[ \begin{tikzcd}[column sep=1em]
\cdots\ar{r} & H^i_\text{\it W,c} (X, \mathbb{R}(n))\ar{d}{\cong}\ar[dashed]{r}{\smile\theta} & H^{i+1}_\text{\it W,c} (X, \mathbb{R}(n))\ar{d}{\cong} \ar{r} & \cdots \\
& \Hom (H^{1-i} (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) & \Hom (H^{-i} (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) \\[-2em]
\cdots & \oplus & \oplus & \cdots \\[-2em]
& H_c^{i-1} (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n))\ar{uur}[description]{\cong} & H_c^i (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n))
\end{tikzcd} \]
The diagonal arrows are isomorphisms according to $\mathbf{B} (X,n)$,
so the sequence is exact.
\end{proof}
\end{lemma}
The Weil-\'{e}tale complex $R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$ is defined in
\cite[\S 7]{Beshenov-Weil-etale-1} up to a \emph{non-unique} isomorphism in the
derived category $\mathbf{D} (\mathbb{Z})$ via a distinguished triangle
\begin{equation}
\label{eqn:triangle-defining-RGamma-Wc}
R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \to R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \xrightarrow{i_\infty}
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) \to [1]
\end{equation}
This is rather awkward, and there should be a better, more canonical
construction of $R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$. For our purposes, however, this is
not much of a problem, since the special value conjecture is not formulated in
terms of $R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))$, but in terms of its determinant
$\det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$ (see Appendix~\ref{app:determinants}), which
is well-defined.
\begin{lemma}
\label{lemma:determinant-of-RGamma-Wc-well-defined}
The determinant $\det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$ is defined up to a
canonical isomorphism.
\begin{proof}
Two different choices for the mapping fiber in
\eqref{eqn:triangle-defining-RGamma-Wc} yield an isomorphism of
distinguished triangles
\[ \begin{tikzcd}
R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \ar{r}\ar{d}{f}[swap]{\cong} & R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{r}{i_\infty}\ar{d}{id} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) \ar{r}\ar{d}{id} & {[1]}\ar{d}{f}[swap]{\cong} \\
R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))' \ar{r} & R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{r}{i_\infty} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) \ar{r} & {[1]}
\end{tikzcd} \]
The idea is to use functoriality of determinants with respect to
isomorphisms of distinguished triangles
(see Appendix~\ref{app:determinants}). The only technical problem is that
whenever $X (\mathbb{R}) \ne \emptyset$, the complexes $R\Gamma_\text{\it fg} (X,\mathbb{Z}(n))$ and
$R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n))$ are not perfect, but may have finite
$2$-torsion in $H^i (-)$ for arbitrarily big $i$ (in
\cite{Beshenov-Weil-etale-1} we called such complexes \textbf{almost
perfect}). On the other hand, the determinants are defined only for
perfect complexes. Fortunately, $H^i (i_\infty^*)$ is an isomorphism for
$i \gg 0$ by the boundedness of $H^i_\text{\it W,c} (X, \mathbb{Z}(n))$
\cite[Proposition~7.12]{Beshenov-Weil-etale-1}, so that for $m$ big enough
we can take the corresponding canonical truncations $\tau_{\le m}$:
\[ \begin{tikzcd}[row sep=1.5em, column sep=0.75em, font=\small]
\tau_{\le m} R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \ar{r}\ar{d}{\cong} & \tau_{\le m} R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{r}\ar{d} & \tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) \ar{r}\ar{d} & {[1]}\ar{d}{\cong} \\
R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \ar{r}\ar{d} & R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{r}{i_\infty}\ar{d} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) \ar{r}\ar{d} & {[1]}\ar{d} \\
0 \ar{r}\ar{d} & \tau_{\ge m+1} R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{r}{\cong}\ar{d} & \tau_{\ge m+1} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) \ar{r}\ar{d} & 0\ar{d} \\
{[1]} \ar{r} & {[1]} \ar{r} & {[1]} \ar{r} & {[2]}
\end{tikzcd} \]
The truncations give us (rotating the triangles)
\[ \begin{tikzcd}[column sep=1em,font=\small]
\tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n))[-1] \ar{r}\ar{d}{id} & R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))\ar{d}{f}[swap]{\cong}\ar{r} & \tau_{\le m} R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{d}{id}\ar{r} & {[0]} \ar{d}{id} \\
\tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n))[-1] \ar{r} & R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))' \ar{r} & \tau_{\le m} R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{r} & {[0]}
\end{tikzcd} \]
By the functoriality of determinants with respect to isomorphisms of
distinguished triangles (see Appendix~\ref{app:determinants}), we have a
commutative diagram
\[ \begin{tikzcd}
\begin{array}{c} \det_\mathbb{Z} \tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n))[-1] \\ \otimes \\ \det_\mathbb{Z} \tau_{\le m} R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \end{array} \ar{r}{i}[swap]{\cong} \ar{d}{id} & \det_\mathbb{Z} R\Gamma_\text{\it W,c} (X,\mathbb{Z}(n)) \ar{d}{\det_\mathbb{Z} (f)}[swap]{\cong} \\
\begin{array}{c} \det_\mathbb{Z} \tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n))[-1] \\ \otimes \\ \det_\mathbb{Z} \tau_{\le m} R\Gamma_\text{\it fg} (X, \mathbb{Z} (n)) \end{array} \ar{r}{i'}[swap]{\cong} & \det_\mathbb{Z} R\Gamma_\text{\it W,c} (X,\mathbb{Z}(n))
\end{tikzcd} \]
so that $\det_\mathbb{Z} (f) = i'\circ i^{-1}$.
\end{proof}
\end{lemma}
\begin{lemma}
The non-canonical splitting
\[ R\Gamma_\text{\it W,c} (X, \mathbb{R}(n)) \cong
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \oplus
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \]
from \cite[Proposition~7.13]{Beshenov-Weil-etale-1} yields a canonical
isomorphism of determinants
\[ \det_\mathbb{R} R\Gamma_\text{\it W,c} (X, \mathbb{R} (n)) \cong
\begin{array}{c}
\det_\mathbb{R} R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\
\otimes_\mathbb{R} \\
\det_\mathbb{R} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1]
\end{array} \]
\begin{proof}
This is similar to the previous lemma; in fact, after tensoring with $\mathbb{R}$,
we obtain perfect complexes of real vector spaces, so the truncations are no
longer needed. By \cite[Proposition~7.4]{Beshenov-Weil-etale-1} we have
$i_\infty^* \otimes \mathbb{R} = 0$, so there is an isomorphism of triangles
\begin{equation}
\label{eqn:splitting-of-RGamma-Wc-triangles}
\begin{tikzcd}[row sep=1em]
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \ar{r}{id}\ar{d} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1]\ar{d} \\
R\Gamma_\text{\it W,c} (X, \mathbb{R} (n)) \ar[dashed]{r}{f}[swap]{\cong}\ar{d} & \begin{array}{c} R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array}\ar{d} \\
R\Gamma_\text{\it fg} (X, \mathbb{R} (n)) \ar{r}{g\otimes \mathbb{R}}[swap]{\cong}\ar{d} & R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \ar{d} \\
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) \ar{r}{id} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n))
\end{tikzcd}
\end{equation}
Here the third horizontal arrow comes from the triangle defining
$R\Gamma_\text{\it fg} (X, \mathbb{Z}(n))$:
\begin{multline*}
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-2] \xrightarrow{\alpha_{X,n}}
R\Gamma_c (X_\text{\it \'{e}t}, \mathbb{Z} (n)) \to
R\Gamma_\text{\it fg} (X, \mathbb{Z}(n)) \\
\xrightarrow{g} R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1]
\end{multline*}
tensored with $\mathbb{R}$ (see \cite[Proposition~5.7]{Beshenov-Weil-etale-1}).
The distinguished column on the right-hand side of
\eqref{eqn:splitting-of-RGamma-Wc-triangles} is the direct sum
\[ \begin{tikzcd}[row sep=1.5em]
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1]\ar{d}{id} &[-2em] &[-2em] 0\ar{d} \\
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1]\ar{d} & \oplus & R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1]\ar{d}{id} \\
0 \ar{d} & & R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \ar{d} \\
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) & & 0
\end{tikzcd} \]
The splitting isomorphism $f$ in
\eqref{eqn:splitting-of-RGamma-Wc-triangles} is not canonical at
all. However, after taking the determinants, we obtain a commutative diagram
(see Appendix~\ref{app:determinants})
\[ \begin{tikzcd}[column sep=1em, font=\small]
\begin{array}{c} \det_\mathbb{R} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \\ \otimes_\mathbb{R} \\ \det_\mathbb{R} R\Gamma_\text{\it fg} (X, \mathbb{R}(n)) \end{array} \ar{r}{i}[swap]{\cong} \ar{d}{id \otimes \det_\mathbb{R} (g\otimes \mathbb{R})}[swap]{\cong} & \det_\mathbb{R} R\Gamma_\text{\it W,c} (X, \mathbb{R} (n)) \ar{d}{\det_\mathbb{R} (f)}[swap]{\cong}\ar[dashed]{dl}{\cong} \\
\begin{array}{c} \det_\mathbb{R} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \\ \otimes_\mathbb{R} \\ \det_\mathbb{R} R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \end{array} \ar{r}{i'}[swap]{\cong} & \det_\mathbb{R} \left(\!\!\!\begin{array}{c} R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array}\!\!\!\right)
\end{tikzcd} \]
The dashed diagonal arrow is the desired canonical isomorphism.
\end{proof}
\end{lemma}
\begin{definition}
Given an arithmetic scheme $X$ and $n < 0$, assume
Conjectures~$\mathbf{L}^c (X_\text{\'et}, n)$ and $\mathbf{B} (X,n)$. Consider the
quasi-isomorphism
\begin{multline}\small
\label{eqn:definition-of-lambda}
\left(\!\!\!\begin{array}{c} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array}\!\!\!\right)
\xrightarrow[\cong]{Reg_{X,n}^\vee [-1] \oplus id}
\left(\!\!\!\begin{array}{c} R\!\Hom (R\Gamma (X_\text{\'et}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array}\!\!\!\right) \\
\xrightarrow[\cong]{\text{split}} R\Gamma_\text{W,c} (X, \mathbb{R} (n)).
\end{multline}
Note that the first complex has determinant
\[ \det_\mathbb{R} \left(\!\!\begin{array}{c} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array}\!\!\right) \cong
\begin{array}{c} \det_\mathbb{R} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) \\ \otimes_\mathbb{R} \\ (\det_\mathbb{R} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)))^{-1} \end{array} \cong \mathbb{R}, \]
and for the last complex in \eqref{eqn:definition-of-lambda}, by the
compatibility with base change, we have a canonical isomorphism
\[ \det_\mathbb{R} R\Gamma_\text{W,c} (X, \mathbb{R} (n)) \cong
(\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z} (n))) \otimes \mathbb{R}. \]
Therefore, after taking the determinants, the quasi-isomorphism
\eqref{eqn:definition-of-lambda} induces a canonical isomorphism
\begin{equation}
\label{eqn:morphism-lambda}
\lambda = \lambda_{X,n}\colon \mathbb{R} \xrightarrow{\cong}
(\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z} (n))) \otimes \mathbb{R}.
\end{equation}
\end{definition}
\begin{remark}
An equivalent way to define $\lambda$ is
\begin{multline*}
\lambda\colon \mathbb{R} \xrightarrow{\cong}
\bigotimes_{i\in \mathbb{Z}} (\det_\mathbb{R} H^i_\text{W,c} (X, \mathbb{R} (n)))^{(-1)^i} \\
\xrightarrow{\cong} \Bigl(\bigotimes_{i\in \mathbb{Z}} (\det_\mathbb{Z} H^i_\text{W,c} (X, \mathbb{Z} (n)))^{(-1)^i}\Bigr) \otimes \mathbb{R} \\
\xrightarrow{\cong} (\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z} (n))) \otimes \mathbb{R},
\end{multline*}
where the first isomorphism comes from Lemma~\ref{lemma:smile-theta}.
\end{remark}
We are ready to state the main conjecture of this paper. The determinant
$\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z} (n))$ is a free $\mathbb{Z}$-module of rank $1$, and the
isomorphism \eqref{eqn:morphism-lambda} canonically embeds it in $\mathbb{R}$. We
conjecture that this embedding gives the special value of $\zeta (X,s)$ at
$s = n$ in the following sense.
\begin{conjecture}
$\mathbf{C} (X,n)$: let $X$ be an arithmetic scheme and $n < 0$ a strictly
negative integer. Assuming Conjectures~$\mathbf{L}^c (X_\text{\'et}, n)$,
$\mathbf{B} (X,n)$ and the meromorphic continuation of $\zeta (X,s)$ around
$s = n < 0$, the corresponding special value is determined up to sign by
\[ \lambda (\zeta^* (X,n)^{-1}) \cdot \mathbb{Z} =
\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z} (n)), \]
where $\lambda$ is the canonical isomorphism \eqref{eqn:morphism-lambda}.
\end{conjecture}
\begin{remark}
This conjecture is similar to \cite[Conjecture~5.12]{Flach-Morin-2018}.
When $X$ is proper and regular, the above conjecture is the same as the
special value conjecture of Flach and Morin, which for $n \in \mathbb{Z}$ reads
\begin{equation}
\label{eqn:FM-special-value}
\lambda_\infty \Bigl(\zeta^* (X,n)^{-1} \cdot C (X,n) \cdot \mathbb{Z}\Bigr) =
\Delta (X/\mathbb{Z}, n).
\end{equation}
Here the fundamental line $\Delta (X/\mathbb{Z},n)$ is defined via
\[ \Delta (X/\mathbb{Z},n) \mathrel{\mathop:}=
\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z}(n)) \otimes
\det_\mathbb{Z} R\Gamma_\text{dR} (X/\mathbb{Z})/\Fil^n. \]
If $n < 0$, then
$\Delta (X/\mathbb{Z},n) = \det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z}(n))$. Moreover, $C (X,n)$
in \eqref{eqn:FM-special-value} is a rational number, defined via
$\prod_p |c_p (X,n)|_p$. Here $c_p (X,n) \in \mathbb{Q}_p^\times/\mathbb{Z}_p^\times$ are
the local factors described in \cite[\S 5.4]{Flach-Morin-2018}, but
\cite[Proposition~5.8]{Flach-Morin-2018} states that if $n \le 0$, then
$c_p (X,n) \equiv 1 \pmod{\mathbb{Z}_p^\times}$ for all $p$. Therefore, $C (X,n) = 1$
in our situation. Finally, the trivialization isomorphism $\lambda_\infty$ is
defined exactly as our $\lambda$. Therefore, \eqref{eqn:FM-special-value} for
$n < 0$ agrees with Conjecture~$\mathbf{C} (X,n)$.
Flach and Morin prove that their conjecture is consistent with the Tamagawa
number conjecture of Bloch--Kato--Fontaine--Perrin-Riou
\cite{Fontaine-Perrin-Riou-1994}; see \cite[\S 5.6]{Flach-Morin-2018} for the
details.
\end{remark}
\begin{remark}
Some canonical isomorphisms of determinants involve multiplication by $\pm 1$,
so it is no surprise that the resulting conjecture is stated up to sign
$\pm 1$. This is not a major problem, however, since the sign can be recovered
from the (conjectural) functional equation.
\end{remark}
\section{Case of varieties over finite fields}
\label{sec:finite-fields}
For varieties over finite fields, our special value conjecture corresponds to
the conjectures studied by Geisser in
\cite{Geisser-2004,Geisser-2006,Geisser-2010-arithmetic-homology}.
\begin{proposition}
\label{prop:C(X,n)-over-finite-fields}
If $X/\mathbb{F}_q$ is a variety over a finite field, then under the assumption
$\mathbf{L}^c (X_\text{\'et},n)$, the special value conjecture $\mathbf{C} (X,n)$ is
equivalent to
\begin{align}
\label{eqn:special-value-for-X/Fq}
\notag \zeta^* (X,n) & = \pm \prod_{i \in \mathbb{Z}} |H_\text{W,c}^i (X, \mathbb{Z}(n))|^{(-1)^i} \\
& = \pm \prod_{i \in \mathbb{Z}} |H^i (X_\text{\'et}, \mathbb{Z}^c (n))|^{(-1)^i} \\
\notag & = \pm \prod_{i \in \mathbb{Z}} |H_i^c (X_\text{ar}, \mathbb{Z} (n))|^{(-1)^{i+1}},
\end{align}
where $H_i^c (X_\text{ar}, \mathbb{Z} (n))$ are Geisser's arithmetic homology groups
defined in \cite{Geisser-2010-arithmetic-homology}.
\begin{proof}
Assuming $\mathbf{L}^c (X_\text{\'et},n)$, we have, thanks to
\cite[Proposition~7.7]{Beshenov-Weil-etale-1},
\[ H^i_\text{W,c} (X, \mathbb{Z} (n)) \cong
\Hom (H^{2-i} (X_\text{\'et}, \mathbb{Z}^c (n)), \mathbb{Q}/\mathbb{Z}) \cong
\Hom (H_{i-1}^c (X_\text{ar}, \mathbb{Z} (n)), \mathbb{Q}/\mathbb{Z}). \]
The cohomology groups involved are finite and vanish for $|i| \gg 0$ by
\cite[Proposition~4.2]{Beshenov-Weil-etale-1}, and by
Lemma~\ref{lemma:determinant-for-torsion-cohomology} the determinant is given by
\[ \begin{tikzcd}[row sep=0.75em, column sep=0pt]
\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z} (n)) \ar[equals]{d} & \subset & \det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z} (n)) \otimes \mathbb{Q} \ar[equals]{d} \\
\frac{1}{m}\mathbb{Z} & \subset & \mathbb{Q}
\end{tikzcd} \]
where
\[ m = \prod_{i \in \mathbb{Z}} |H_\text{W,c}^i (X, \mathbb{Z}(n))|^{(-1)^i}. \qedhere \]
\end{proof}
\end{proposition}
\begin{remark}
Formulas like \eqref{eqn:special-value-for-X/Fq} were proposed by Lichtenbaum
early on in \cite{Lichtenbaum-1984}.
\end{remark}
\begin{theorem}
\label{thm:C(X,n)-over-finite-fields}
Let $X/\mathbb{F}_q$ be a variety over a finite field satisfying
Conjecture~$\mathbf{L}^c (X_\text{\'et}, n)$ for $n < 0$. Then
Conjecture~$\mathbf{C} (X,n)$ holds.
\end{theorem}
We note that Proposition~\ref{prop:C(X,n)-over-finite-fields} is equivalent to the special
value formula that appears in
\cite[Theorem~4.5]{Geisser-2010-arithmetic-homology}.
Conjecture~$\mathbf{P}_0 (X)$ in the statement of
\cite[Theorem~4.5]{Geisser-2010-arithmetic-homology} is implied by our
Conjecture~$\mathbf{L}^c (X_\text{\'et},n)$ thanks to
\cite[Proposition~4.1]{Geisser-2010-arithmetic-homology}. Geisser's proof
eventually reduces to Milne's work \cite{Milne-1986}, but for our case of
$s = n < 0$, the situation is simpler, and we can give a direct explanation,
using earlier results of Bayer and Neukirch \cite{Bayer-Neukirch-1978}
concerning Grothendieck's trace formula.
\begin{proof}
By the previous proposition, the conjecture reduces to
$$\zeta (X,n) = \prod_{i \in \mathbb{Z}} |H^i (X_\text{\'et}, \mathbb{Z}^c (n))|^{(-1)^i}.$$
By duality \cite[Theorem~I]{Beshenov-Weil-etale-1},
$$|H^{2-i} (X_\text{\'et}, \mathbb{Z}^c (n))| = |H^i_c (X_\text{\'et}, \mathbb{Z} (n))|,$$
where
\[ \mathbb{Z} (n) \mathrel{\mathop:}=
\bigoplus_{\ell \ne p} \mathbb{Q}_\ell/\mathbb{Z}_\ell (n) [-1] \mathrel{\mathop:}=
\bigoplus_{\ell \ne p} \mu_{\ell^\infty}^{\otimes n} [-1] \mathrel{\mathop:}=
\bigoplus_{\ell \ne p} \varinjlim_r \mu_{\ell^r}^{\otimes n} [-1], \]
and $p$ is the characteristic of the base field.
Now $H^i_c (X_\text{\'et}, \mathbb{Q}_\ell (n)) = 0$ for $n < 0$, and therefore
$H^i_c (X_\text{\'et}, \mathbb{Z}_\ell (n)) \cong H^{i-1}_c (X_\text{\'et}, \mathbb{Q}_\ell/\mathbb{Z}_\ell (n))$.
This means that our formula can be written as
\begin{equation}
\label{eqn:zeta-X/Fq-product-formula}
\zeta (X,n) =
\prod_{\ell \ne p} \prod_{i \in \mathbb{Z}} |H^i_c (X_\text{\'et}, \mathbb{Z}_\ell (n))|^{(-1)^i}.
\end{equation}
Grothendieck's trace formula (see \cite{Grothendieck-FL} or
\cite[Rapport]{SGA4-1-2}) reads
\[ Z (X,t) =
\prod_{i \in \mathbb{Z}} \det \bigl(1 - tF \bigm| H^i_c (\overline{X}, \mathbb{Q}_\ell)\bigr)^{(-1)^{i+1}}, \]
where $\overline{X} \mathrel{\mathop:}= X \times_{\Spec \mathbb{F}_q} \overline{\mathbb{F}}_q$ and $F$ is
the Frobenius acting on $H^i_c (\overline{X}, \mathbb{Q}_\ell)$. Substituting
$t = q^{-n}$,
\[ \zeta (X,n) =
\prod_{i \in \mathbb{Z}} \det \bigl(1 - q^{-n} F \bigm| H^i_c (\overline{X}, \mathbb{Q}_\ell)\bigr)^{(-1)^{i+1}}. \]
Then, by the proof of \cite[Theorem~(3.1)]{Bayer-Neukirch-1978}, for each
$\ell \ne p$, we obtain
\begin{equation}
\label{eqn:bayer-neukirch}
|\zeta (X,n)|_\ell =
\prod_{i \in \mathbb{Z}} |H^i_c (X_\text{\'et}, \mathbb{Z}_\ell (n))|^{(-1)^{i+1}}.
\end{equation}
On the other hand, for $n < 0$ we have
\begin{equation}
\label{eqn:zeta-X/Fq-p-part}
|\zeta (X,n)|_p = 1.
\end{equation}
This fact can be justified, without assuming that $X$ is smooth or projective,
e.g., using Kedlaya's trace formula for rigid cohomology
\cite[p.\,1446]{Kedlaya-2006}, which gives
\[ Z (X,t) = \prod_i P_i (t)^{(-1)^{i+1}},
\quad\text{where }
P_i (t) \in \mathbb{Z}[t] \text{ and } P_i (0) = 1. \]
In particular, $P_i (q^{-n}) \equiv 1 \pmod{p}$.
Now the product formula for $\mathbb{Q}$, together with \eqref{eqn:bayer-neukirch}
and \eqref{eqn:zeta-X/Fq-p-part}, recovers our special value formula
\eqref{eqn:zeta-X/Fq-product-formula}.
\end{proof}
\begin{remark}
The fact that $|\zeta (X,n)|_p = 1$, as observed in the argument above,
explains why our Weil-\'{e}tale cohomology ignores the $p$-primary part in
some sense.
\end{remark}
Let us consider a few examples to see how the special value conjecture works
over finite fields.
\begin{example}
\label{example:C(X,n)-for-Spec-Fq}
If $X = \Spec \mathbb{F}_q$, then $\zeta (X,s) = \frac{1}{1 - q^{-s}}$. In this case
for $n < 0$ we obtain
\begin{equation}
\label{eqn:motivic-cohomology-of-Fq}
H^i (\Spec \mathbb{F}_{q,\text{\'et}}, \mathbb{Z}^c (n)) \cong
\begin{cases}
\mathbb{Z}/(q^{-n} - 1), & i = 1, \\
0, & i \ne 1
\end{cases}
\end{equation}
(see, for example, \cite[Example~4.2]{Geisser-2017}). Therefore, formula
\eqref{eqn:special-value-for-X/Fq} indeed recovers $\zeta (X,n)$ up to sign.
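Explicitly, the only nontrivial group sits in degree $i = 1$, so
\eqref{eqn:special-value-for-X/Fq} reads
\[ \zeta^* (\Spec \mathbb{F}_q, n) =
\pm |H^1 (\Spec \mathbb{F}_{q,\text{\'et}}, \mathbb{Z}^c (n))|^{-1} =
\pm \frac{1}{q^{-n} - 1}, \]
which agrees up to sign with $\zeta (\Spec \mathbb{F}_q, n) = \frac{1}{1 - q^{-n}}$.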
Similarly, if we replace $\Spec \mathbb{F}_q$ with $\Spec \mathbb{F}_{q^m}$, considered as a
variety over $\mathbb{F}_q$, then
$\zeta (\Spec \mathbb{F}_{q^m},s) = \zeta (\Spec \mathbb{F}_q, ms)$, and
\eqref{eqn:motivic-cohomology-of-Fq} also changes accordingly.
\end{example}
\begin{example}
Consider $X = \mathbb{P}^1_{\mathbb{F}_q}/(0\sim 1)$, or equivalently, a nodal cubic.
The zeta function is $\zeta (X,s) = \frac{1}{1 - q^{1-s}}$. We can calculate
the groups $H^i (X_\text{\'et}, \mathbb{Z}^c (n))$ using the blowup square
\[ \begin{tikzcd}
\Spec \mathbb{F}_q \sqcup \Spec \mathbb{F}_q \ar{r}\ar{d}\ar[phantom,pos=0.2]{dr}{\text{\large$\lrcorner$}} & \mathbb{P}^1_{\mathbb{F}_q} \ar{d} \\
\Spec \mathbb{F}_q \ar{r} & X
\end{tikzcd} \]
This is similar to \cite[\S 8, Example~2]{Geisser-2006}. Geisser uses the
eh-topology and long exact sequences associated to abstract blowup squares
\cite[Proposition~3.2]{Geisser-2006}. In our case the same reasoning works,
because by \cite[Theorem~I]{Beshenov-Weil-etale-1}, one has
$H^i (X_\text{\'et}, \mathbb{Z}^c (n)) \cong \Hom (H^{2-i}_c (X_\text{\'et}, \mathbb{Z} (n)),\mathbb{Q}/\mathbb{Z})$,
where $\mathbb{Z} (n) = \varinjlim_{p\nmid m} \mu_m^{\otimes n} [-1]$, and \'{e}tale
cohomology and eh-cohomology coincide for such sheaves by
\cite[Theorem~3.6]{Geisser-2006}.
Using the projective bundle formula, we calculate from
\eqref{eqn:motivic-cohomology-of-Fq}
\[ H^i (\mathbb{P}^1_{\mathbb{F}_q,\text{\'et}}, \mathbb{Z}^c (n)) \cong \begin{cases}
\mathbb{Z}/(q^{1-n} - 1), & i = -1, \\
\mathbb{Z}/(q^{-n} - 1), & i = +1, \\
0, & i \ne \pm 1.
\end{cases} \]
By the argument from \cite[\S 8, Example~2]{Geisser-2006}, the short exact
sequences
\[ 0 \to H^i (\mathbb{P}^1_{\mathbb{F}_q,\text{\'et}}, \mathbb{Z}^c (n)) \to
H^i (X_\text{\'et}, \mathbb{Z}^c (n)) \to
H^{i+1} ((\Spec \mathbb{F}_q)_\text{\'et}, \mathbb{Z}^c (n)) \to 0 \]
give
\[ H^i (X_\text{\'et}, \mathbb{Z}^c (n)) \cong \begin{cases}
\mathbb{Z}/(q^{1-n} - 1), & i = -1, \\
\mathbb{Z}/(q^{-n} - 1), & i = 0,1, \\
0, & \text{otherwise}.
\end{cases} \]
The formula \eqref{eqn:special-value-for-X/Fq} gives the correct value
$\zeta (X,n)$.
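Indeed, taking the orders of the groups above with alternating exponents,
\[ \pm \prod_{i \in \mathbb{Z}} |H^i (X_\text{\'et}, \mathbb{Z}^c (n))|^{(-1)^i} =
\pm \frac{q^{-n} - 1}{(q^{1-n} - 1) \cdot (q^{-n} - 1)} =
\pm \frac{1}{q^{1-n} - 1} = \pm \zeta (X,n). \]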
\end{example}
\begin{example}
In general, if $X/\mathbb{F}_q$ is a curve, then Conjecture~$\mathbf{L}^c (X_\text{\'et},n)$
holds; see for example \cite[Proposition~4.3]{Geisser-2017}. The cohomology
$H^i (X_\text{\'et}, \mathbb{Z}^c(n))$ is concentrated in degrees $-1, 0, +1$ by duality
\cite[Theorem~I]{Beshenov-Weil-etale-1} and for reasons of cohomological
dimension, and the special value formula is
\[ \zeta^* (X,n) =
\pm \frac{|H^0 (X_\text{\'et}, \mathbb{Z}^c (n))|}{|H^{-1} (X_\text{\'et}, \mathbb{Z}^c (n))|\cdot |H^1 (X_\text{\'et}, \mathbb{Z}^c (n))|}. \]
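For instance, for $X = \mathbb{P}^1_{\mathbb{F}_q}$ the groups computed in the previous
example give
\[ \zeta^* (\mathbb{P}^1_{\mathbb{F}_q}, n) =
\pm \frac{1}{(q^{1-n} - 1) \cdot (q^{-n} - 1)}, \]
which agrees up to sign with
$\zeta (\mathbb{P}^1_{\mathbb{F}_q}, s) = \frac{1}{(1 - q^{-s}) (1 - q^{1-s})}$ at $s = n$.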
\end{example}
\section{Compatibility with operations on schemes}
\label{sec:compatibility-with-operations}
The following basic properties follow from the definition of $\zeta (X,s)$
(formula \eqref{eqn:Euler-product-for-zeta}).
\begin{enumerate}
\item[1)] \textbf{Disjoint unions}: if $X = \coprod_{1 \le i \le r} X_i$ is a
finite disjoint union of arithmetic schemes, then
\begin{equation}
\label{eqn:zeta-function-for-disjoint-unions}
\zeta (X,s) = \prod_{1 \le i \le r} \zeta (X_i,s).
\end{equation}
In particular,
\begin{align*}
\ord_{s=n} \zeta (X,s) & = \sum_{1 \le i \le r} \ord_{s=n} \zeta (X_i,s), \\
\zeta^* (X,n) & = \prod_{1 \le i \le r} \zeta^* (X_i,n).
\end{align*}
\item[2)] \textbf{Closed-open decompositions}: if $Z \subset X$ is a closed
subscheme and $U = X\setminus Z$ is its open complement, then we say that we
have a \textbf{closed-open decomposition} and write
$Z \hookrightarrow X \hookleftarrow U$. In this case
\begin{equation}
\label{eqn:zeta-function-for-closed-open-decompositions}
\zeta (X,s) = \zeta (Z,s) \cdot \zeta (U,s).
\end{equation}
In particular,
\begin{align*}
\ord_{s=n} \zeta (X,s) & = \ord_{s=n} \zeta (Z,s) + \ord_{s=n} \zeta (U,s), \\
\zeta^* (X,n) & = \zeta^* (Z,n) \cdot \zeta^* (U,n).
\end{align*}
\item[3)] \textbf{Affine bundles}: for any $r \ge 0$ the zeta function of the
relative affine space $\AA^r_X = \AA^r_\mathbb{Z} \times X$ satisfies
\begin{equation}
\label{eqn:zeta-function-for-affine-space}
\zeta (\AA^r_X, s) = \zeta (X, s-r).
\end{equation}
In particular,
\begin{align*}
\ord_{s=n} \zeta (\AA^r_X, s) & = \ord_{s=n-r} \zeta (X, s), \\
\zeta^* (\AA^r_X, n) & = \zeta^* (X, n-r).
\end{align*}
\end{enumerate}
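For example, combining 2) and 3) for the decomposition of $\mathbb{P}^1_\mathbb{Z}$ into
$\AA^1_\mathbb{Z}$ and the section at infinity $\Spec \mathbb{Z}$, we recover the classical
identity
\[ \zeta (\mathbb{P}^1_\mathbb{Z}, s) = \zeta (\Spec \mathbb{Z}, s) \cdot \zeta (\AA^1_\mathbb{Z}, s) =
\zeta (s) \, \zeta (s - 1), \]
where $\zeta (s)$ denotes the Riemann zeta function.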
This suggests that Conjectures~$\mathbf{VO} (X,n)$ and $\mathbf{C} (X,n)$ should
also satisfy the corresponding compatibilities. We verify in this section that
this is indeed the case.
\begin{lemma}
\label{lemma:compatibility-of-Lc(X,n)}
Let $n < 0$.
\begin{enumerate}
\item[1)] If $X = \coprod_{1 \le i \le r} X_i$ is a finite disjoint union of
arithmetic schemes, then
$$\mathbf{L}^c (X_\text{\'et},n) \iff \mathbf{L}^c (X_{i,\text{\'et}},n)\text{ for all }i.$$
\item[2)] For a closed-open decomposition
$Z \hookrightarrow X \hookleftarrow U$, if two of the three conjectures
\[ \mathbf{L}^c (X_\text{\'et},n), \quad
\mathbf{L}^c (Z_\text{\'et},n), \quad
\mathbf{L}^c (U_\text{\'et}, n) \]
are true, then the third is also true.
\item[3)] For an arithmetic scheme $X$ and any $r \ge 0$, one has
$$\mathbf{L}^c (\AA^r_{X,\text{\'et}}, n) \iff \mathbf{L}^c (X_\text{\'et}, n-r).$$
\end{enumerate}
\begin{proof}
See the proof of \cite[Proposition~5.10]{Morin-2014}.
\end{proof}
\end{lemma}
\begin{lemma}
\label{lemma:compatibility-of-B(X,n)}
Let $n < 0$.
\begin{enumerate}
\item[1)] If $X = \coprod_{1 \le i \le r} X_i$ is a finite disjoint union of
arithmetic schemes, then
\begin{multline*}
Reg_{X,n} = \bigoplus_{1 \le i \le r} Reg_{X_i,n}\colon\\
\bigoplus_{1 \le i \le r} R\Gamma (X_{i,\text{\'et}}, \mathbb{R}^c (n)) \to
\bigoplus_{1 \le i \le r} R\Gamma_\text{BM} (G_\mathbb{R}, X_i (\mathbb{C}), \mathbb{R} (n)) [1].
\end{multline*}
In particular,
$$\mathbf{B} (X,n) \iff \mathbf{B} (X_i,n)\text{ for all }i.$$
\item[2)] For a closed-open decomposition of arithmetic schemes
$Z \not\hookrightarrow X \hookleftarrow U$, the corresponding regulators
give a morphism of distinguished triangles
\[ \begin{tikzcd}[column sep=4em]
R\Gamma (Z_\text{\'et}, \mathbb{R}^c (n)) \ar{d}\ar{r}{Reg_{Z,n}} & R\Gamma_\text{BM} (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) [1] \ar{d} \\
R\Gamma (X_\text{\'et}, \mathbb{R}^c (n)) \ar{d}\ar{r}{Reg_{X,n}} & R\Gamma_\text{BM} (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [1] \ar{d} \\
R\Gamma (U_\text{\'et}, \mathbb{R}^c (n)) \ar{d}\ar{r}{Reg_{U,n}} & R\Gamma_\text{BM} (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) [1] \ar{d} \\
R\Gamma (Z_\text{\'et}, \mathbb{R}^c (n)) [1]\ar{r}{Reg_{Z,n} [1]} & R\Gamma_\text{BM} (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) [2]
\end{tikzcd} \]
In particular, if two of the three conjectures
\[ \mathbf{B} (X,n), \quad
\mathbf{B} (Z,n), \quad
\mathbf{B} (U,n) \]
are true, then the third is also true.
\item[3)] For any $r \ge 0$, the diagram
\[ \begin{tikzcd}
R\Gamma (X_\text{\'et}, \mathbb{R}^c (n-r)) [2r] \ar{d}{Reg_{X,n-r}}\ar{r}{\cong} & R\Gamma (\AA^r_{X,\text{\'et}}, \mathbb{R}^c (n))\ar{d}{Reg_{\AA^r_X,n}} \\
R\Gamma_\text{BM} (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n-r)) [2r] \ar{r}{\cong} & R\Gamma_\text{BM} (G_\mathbb{R}, \AA^r_X (\mathbb{C}), \mathbb{R} (n))
\end{tikzcd} \]
commutes. In particular, one has
$$\mathbf{B} (\AA^r_X, n) \iff \mathbf{B} (X, n-r).$$
\end{enumerate}
\begin{proof}
Part 1) is clear because all cohomologies that enter the definition of
$Reg_{X,n}$ decompose into direct sums over $i = 1, \ldots, r$. Parts 2) and
3) boil down to the corresponding functoriality properties of the KLM
morphism \eqref{eqn:KLM-morphism-1}, namely that it commutes with proper
pushforwards and flat pullbacks by
\cite[Lemmas~3 and~4]{Weisschuh-2017}. For closed-open decompositions, the
distinguished triangle
\[ R\Gamma (Z_\text{\'et}, \mathbb{R}^c (n)) \to R\Gamma (X_\text{\'et}, \mathbb{R}^c (n)) \to
R\Gamma (U_\text{\'et}, \mathbb{R}^c (n)) \to R\Gamma (Z_\text{\'et}, \mathbb{R}^c (n)) [1] \]
comes exactly from the proper pushforward along $Z \hookrightarrow X$ and
flat pullback along $U \hookrightarrow X$ (see
\cite[Corollary~7.2]{Geisser-2010} and \cite[\S 3]{Bloch-1986}). Similarly,
the quasi-isomorphism
$R\Gamma (X_\text{\'et}, \mathbb{R}^c (n-r)) [2r] \cong R\Gamma (\AA^r_{X,\text{\'et}}, \mathbb{R}^c (n))$
results from the flat pullback along $p\colon \AA^r_X \to X$.
\end{proof}
\end{lemma}
\begin{proposition}
\label{prop:compatibility-of-VO(X,n)}
For each arithmetic scheme $X$ below and $n < 0$, assume
$\mathbf{L}^c (X_\text{\'et},n)$, $\mathbf{B} (X,n)$, and the meromorphic continuation
of $\zeta (X,s)$ around $s = n$.
\begin{enumerate}
\item[1)] If $X = \coprod_{1 \le i \le r} X_i$ is a finite disjoint union of
arithmetic schemes, then
$$\mathbf{VO} (X,n) \iff \mathbf{VO} (X_i,n)\text{ for all }i.$$
\item[2)] For a closed-open decomposition
$Z \hookrightarrow X \hookleftarrow U$,
if two of the three conjectures
\[ \mathbf{VO} (X,n), \quad
\mathbf{VO} (Z,n), \quad
\mathbf{VO} (U,n) \]
are true, then the third is also true.
\item[3)] For any $r \ge 0$, one has
$$\mathbf{VO} (\AA^r_X, n) \iff \mathbf{VO} (X, n-r).$$
\end{enumerate}
\begin{proof}
We have already observed in Proposition~\ref{prop:VO(X,n)-assuming-B(X,n)}
that under Conjecture~$\mathbf{B} (X,n)$ we can rewrite $\mathbf{VO} (X,n)$
as
$$\ord_{s=n} \zeta (X,s) = \chi (R\Gamma_c (G_\mathbb{R}, X(\mathbb{C}), \mathbb{R} (n))).$$
In part 1), we have
$$\ord_{s=n} \zeta (X,s) = \sum_{1 \le i \le r} \ord_{s=n} \zeta (X_i,s),$$
and for the corresponding $G_\mathbb{R}$-equivariant cohomology,
\[ R\Gamma_c (G_\mathbb{R}, X(\mathbb{C}), \mathbb{R} (n)) =
\bigoplus_{1 \le i \le r} R\Gamma_c (G_\mathbb{R}, X_i (\mathbb{C}), \mathbb{R} (n)). \]
The statement follows from the additivity of the Euler characteristic:
\[ \begin{tikzcd}[column sep=5em]
\ord_{s=n} \zeta (X,s) \ar[equals]{r}{\mathbf{VO} (X,n)}\ar[equals]{d} & \chi (R\Gamma_c (G_\mathbb{R}, X(\mathbb{C}), \mathbb{R} (n))) \ar[equals]{d} \\
\sum\limits_{1 \le i \le r} \ord_{s=n} \zeta (X_i,s) \ar[equals]{r}{\forall i \, \mathbf{VO} (X_i,n)} & \sum\limits_{1 \le i \le r} \chi (R\Gamma_c (G_\mathbb{R}, X_i (\mathbb{C}), \mathbb{R} (n)))
\end{tikzcd} \]
Similarly in part 2), we can consider the distinguished triangle
\begin{multline*}
R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) \to
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) \to
R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) \\
\to R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) [1]
\end{multline*}
and the additivity of the Euler characteristic gives
\[ \begin{tikzcd}[column sep=4em]
\ord_{s=n} \zeta (X,s) \ar[equals]{r}{\mathbf{VO} (X,n)}\ar[equals]{d} & \chi (R\Gamma_c (G_\mathbb{R}, X(\mathbb{C}), \mathbb{R} (n))) \ar[equals]{d} \\
\ord_{s=n} \zeta (Z,s) \ar[equals]{r}{\mathbf{VO} (Z,n)} & \chi (R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n))) \\[-2em]
+ & + \\[-2em]
\ord_{s=n} \zeta (U,s) \ar[equals]{r}{\mathbf{VO} (U,n)} & \chi (R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)))
\end{tikzcd} \]
Finally, in part 3), we assume for simplicity that $X_\mathbb{C}$ is connected of
dimension $d_\mathbb{C}$. Then Poincar\'{e} duality and the homotopy invariance of
the usual cohomology (without compact support) give us
\begin{multline*}
R\Gamma_c (G_\mathbb{R}, \AA^r (\mathbb{C}) \times X (\mathbb{C}), \mathbb{R} (n)) \\
\stackrel{\text{P.D.}}{\cong}
R\!\Hom (R\Gamma (G_\mathbb{R}, \AA^r (\mathbb{C}) \times X (\mathbb{C}), \mathbb{R} (d_\mathbb{C} + r - n)), \mathbb{R}) [-2d_\mathbb{C} - 2r] \\
\stackrel{\text{H.I.}}{\cong}
R\!\Hom (R\Gamma (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (d_\mathbb{C} + r - n)), \mathbb{R}) [-2d_\mathbb{C} - 2r] \\
\stackrel{\text{P.D.}}{\cong}
R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n - r)) [-2r].
\end{multline*}
The twist $[-2r]$ is even and therefore has no effect on the Euler
characteristic, so that we obtain
\[ \begin{tikzcd}[column sep=4em]
\ord_{s=n} \zeta (\AA^r_X,s) \ar[equals]{r}{\mathbf{VO} (\AA^r_X,n)}\ar[equals]{d} & \chi (R\Gamma_c (G_\mathbb{R}, \AA^r (\mathbb{C}) \times X(\mathbb{C}), \mathbb{R} (n))) \ar[equals]{d} \\
\ord_{s=n-r} \zeta (X,s) \ar[equals]{r}{\mathbf{VO} (X,n-r)} & \chi (R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n-r)))
\end{tikzcd} \]
\end{proof}
\end{proposition}
Our next goal is to prove similar compatibilities for
Conjecture~$\mathbf{C} (X,n)$, as was just done for $\mathbf{VO} (X,n)$.
We split the proof into three technical
Lemmas~\ref{lemma:lambda-and-disjoint-unions},
\ref{lemma:lambda-and-closed-open-decompositions},
and~\ref{lemma:lambda-and-affine-bundles}, one for each compatibility.
\begin{lemma}
\label{lemma:lambda-and-disjoint-unions}
Let $n < 0$ and let $X = \coprod_{1 \le i \le r} X_i$ be a finite disjoint
union of arithmetic schemes. Assume $\mathbf{L}^c (X_\text{\'et},n)$ and
$\mathbf{B} (X,n)$. Then there is a quasi-isomorphism of complexes
\begin{equation}
\label{eqn:RGamma-Wc-and-disjoint-unions}
\bigoplus_{1 \le i \le r} R\Gamma_\text{W,c} (X_i, \mathbb{Z}(n)) \cong
R\Gamma_\text{W,c} (X, \mathbb{Z}(n)),
\end{equation}
which after taking the determinants gives a commutative diagram
\begin{equation}
\label{eqn:lambda-and-disjoint-unions}
\begin{tikzcd}
\mathbb{R} \otimes_\mathbb{R} \cdots \otimes_\mathbb{R} \mathbb{R}\ar{d}{\lambda_{X_1,n}\otimes\cdots\otimes\lambda_{X_r,n}}[swap]{\cong} \ar{r}{x_1\otimes\cdots\otimes x_r \mapsto x_1\cdots x_r}[swap]{\cong} & \mathbb{R} \ar{d}{\lambda_{X,n}}[swap]{\cong} \\
\bigotimes\limits_{1 \le i \le r} (\det_\mathbb{Z} R\Gamma_\text{W,c} (X_i, \mathbb{Z}(n))) \otimes \mathbb{R} \ar{r}{\cong} & (\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z}(n))) \otimes \mathbb{R}
\end{tikzcd}
\end{equation}
\begin{proof}
For $X = \coprod_{1 \le i \le r} X_i$, all cohomologies in our construction
of $R\Gamma_\text{W,c} (X, \mathbb{Z}(n))$ in \cite{Beshenov-Weil-etale-1} decompose into
the corresponding direct sum over $i = 1,\ldots,r$, and
\eqref{eqn:RGamma-Wc-and-disjoint-unions} follows.
After tensoring with $\mathbb{R}$, we obtain a commutative diagram
\[ \begin{tikzcd}[column sep=1.25em]
\bigoplus_i \left(\!\!\!\begin{array}{c} R\Gamma_c (G_\mathbb{R}, X_i (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X_i (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array}\!\!\!\right) \ar{d}{\bigoplus_i Reg_{X_i,n}^\vee [-1] \oplus id}[swap]{\cong} \ar{r}{\cong} & \begin{array}{c} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{Reg_{X,n}^\vee [-1] \oplus id}[swap]{\cong} \\
\bigoplus_i \left(\!\!\!\begin{array}{c} R\!\Hom (R\Gamma (X_{i,\text{\'et}}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X_i (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array}\!\!\!\right) \ar{d}{\text{split}}[swap]{\cong} \ar{r}{\cong} & \begin{array}{c} R\!\Hom (R\Gamma (X_\text{\'et}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{\text{split}}[swap]{\cong} \\
\bigoplus_i R\Gamma_\text{W,c} (X_i, \mathbb{R} (n)) \ar{r}{\cong} & R\Gamma_\text{W,c} (X, \mathbb{R} (n))
\end{tikzcd} \]
Taking the determinants, we obtain \eqref{eqn:lambda-and-disjoint-unions}.
\end{proof}
\end{lemma}
\begin{lemma}
\label{lemma:lambda-and-closed-open-decompositions}
Let $n < 0$ and let $Z \hookrightarrow X \hookleftarrow U$ be a
closed-open decomposition of arithmetic schemes, such that the conjectures
\begin{gather*}
\mathbf{L}^c (U_\text{\'et},n), ~ \mathbf{L}^c (X_\text{\'et},n), ~ \mathbf{L}^c (Z_\text{\'et},n),\\
\mathbf{B} (U,n), ~ \mathbf{B} (X,n), ~ \mathbf{B} (Z,n)
\end{gather*}
hold (it suffices to assume two of the three conjectures thanks to Lemmas
\ref{lemma:compatibility-of-Lc(X,n)} and \ref{lemma:compatibility-of-B(X,n)}).
Then there is an isomorphism of determinants
\begin{equation}
\label{eqn:isomorphism-of-det-RGamma-Wc-for-closed-open-decompositions}
\det_\mathbb{Z} R\Gamma_\text{W,c} (U, \mathbb{Z}(n)) \otimes
\det_\mathbb{Z} R\Gamma_\text{W,c} (Z, \mathbb{Z}(n)) \cong
\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z}(n))
\end{equation}
making the following diagram commute up to signs:
\begin{equation}
\label{eqn:lambda-and-closed-open-decompositions}
\begin{tikzcd}
\mathbb{R} \otimes_\mathbb{R} \mathbb{R} \ar{r}{x\otimes y \mapsto xy}\ar{d}{\lambda_{U,n} \otimes \lambda_{Z,n}}[swap]{\cong} & \mathbb{R}\ar{d}{\lambda_{X,n}}[swap]{\cong} \\
\begin{array}{c} (\det_\mathbb{Z} R\Gamma_\text{W,c} (U, \mathbb{Z}(n)))\otimes \mathbb{R} \\ \otimes_\mathbb{R} \\ (\det_\mathbb{Z} R\Gamma_\text{W,c} (Z, \mathbb{Z}(n)))\otimes \mathbb{R} \end{array} \ar{r}{\cong} & (\det_\mathbb{Z} R\Gamma_\text{W,c} (X, \mathbb{Z}(n))) \otimes \mathbb{R}
\end{tikzcd}
\end{equation}
\begin{proof}
A closed-open decomposition $Z \hookrightarrow X \hookleftarrow U$
induces the distinguished triangles
\[ \begin{tikzcd}[row sep=0pt,column sep=1em,font=\small]
R\Gamma (Z_\text{\'et}, \mathbb{Z}^c (n)) \ar{r} & R\Gamma (X_\text{\'et}, \mathbb{Z}^c (n)) \ar{r} & R\Gamma (U_\text{\'et}, \mathbb{Z}^c (n)) \ar{r} & {[1]} \\
R\Gamma_c (U_\text{\'et}, \mathbb{Z} (n)) \ar{r} & R\Gamma_c (X_\text{\'et}, \mathbb{Z} (n)) \ar{r} & R\Gamma_c (Z_\text{\'et}, \mathbb{Z} (n)) \ar{r} & {[1]} \\
R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) \ar{r} & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) \ar{r} & R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) \ar{r} & {[1]}
\end{tikzcd} \]
The first triangle is \cite[Corollary~7.2]{Geisser-2010} and it means that
$R\Gamma (-, \mathbb{Z}^c (n))$ behaves like Borel--Moore homology. The following
two are the usual triangles for cohomology with compact support. These fit
together in a commutative diagram shown in
Figure~\ref{fig:RGamma-Wc-and-closed-open-decompositions} below
(p.~\pageref{fig:RGamma-Wc-and-closed-open-decompositions}). Figure~\ref{fig:RGamma-Wc-and-closed-open-decompositions-otimes-Q}
on p.~\pageref{fig:RGamma-Wc-and-closed-open-decompositions-otimes-Q} shows
the same diagram tensored with $\mathbb{R}$.
In this diagram we start from the morphism of triangles
$(\alpha_{U,n}, \alpha_{X,n}, \alpha_{Z,n})$ and then take the corresponding
cones $R\Gamma_\text{fg} (-, \mathbb{Z}(n))$. By
\cite[Proposition~5.6]{Beshenov-Weil-etale-1}, these cones are defined up to
a \emph{unique} isomorphism in the derived category $\mathbf{D} (\mathbb{Z})$, and
the same argument shows that the induced morphisms of complexes
\begin{equation}
\label{eqn:triangle-RGamma-fg}
R\Gamma_\text{fg} (U, \mathbb{Z}(n)) \to
R\Gamma_\text{fg} (X, \mathbb{Z}(n)) \to
R\Gamma_\text{fg} (Z, \mathbb{Z}(n)) \to
R\Gamma_\text{fg} (U, \mathbb{Z}(n)) [1]
\end{equation}
are also well-defined
(see \cite[Corollary~A.3]{Beshenov-Weil-etale-1}). A priori,
\eqref{eqn:triangle-RGamma-fg} need not be a distinguished triangle, but we
claim that it induces a long exact sequence in cohomology.
To this end, note that tensoring the diagram with $\mathbb{Z}/m\mathbb{Z}$ gives us an
isomorphism
\[ \begin{tikzcd}[column sep=1em,font=\small]
R\Gamma_c (U_\text{\'et}, \mathbb{Z}/m\mathbb{Z} (n)) \ar{r}\ar{d}{\cong} & R\Gamma_c (X_\text{\'et}, \mathbb{Z}/m\mathbb{Z} (n)) \ar{r}\ar{d}{\cong} & R\Gamma_c (Z_\text{\'et}, \mathbb{Z}/m\mathbb{Z} (n)) \ar{r}\ar{d}{\cong} & {[1]}\ar{d}{\cong} \\
\begin{array}{c} R\Gamma_\text{fg} (U, \mathbb{Z} (n)) \\ \otimes^\mathbf{L} \\ \mathbb{Z}/m\mathbb{Z} \end{array} \ar{r} & \begin{array}{c} R\Gamma_\text{fg} (X, \mathbb{Z} (n)) \\ \otimes^\mathbf{L} \\ \mathbb{Z}/m\mathbb{Z} \end{array} \ar{r} & \begin{array}{c} R\Gamma_\text{fg} (Z, \mathbb{Z} (n)) \\ \otimes^\mathbf{L} \\ \mathbb{Z}/m\mathbb{Z} \end{array} \ar{r} & {[1]}
\end{tikzcd} \]
More generally, for each prime $p$ we can take the corresponding derived
$p$-adic completions (see \cite{Bhatt-Scholze-2015} and
\cite[Tag~091N]{Stacks-project})
\[ R\Gamma_\text{\it fg} (-, \mathbb{Z}(n))^\wedge_p \mathrel{\mathop:}=
R\varprojlim_k (R\Gamma_\text{\it fg} (-, \mathbb{Z}(n)) \otimes^\mathbf{L} \mathbb{Z}/p^k\mathbb{Z}), \]
which give us a distinguished triangle for each prime $p$
\[ R\Gamma_\text{\it fg} (U, \mathbb{Z}(n))^\wedge_p \to
R\Gamma_\text{\it fg} (X, \mathbb{Z}(n))^\wedge_p \to
R\Gamma_\text{\it fg} (Z, \mathbb{Z}(n))^\wedge_p \to
R\Gamma_\text{\it fg} (U, \mathbb{Z}(n))^\wedge_p [1]. \]
At the level of cohomology, there are natural isomorphisms
\cite[Tag~0A06]{Stacks-project}
\[ H^i (R\Gamma_\text{\it fg} (-, \mathbb{Z}(n))^\wedge_p) \cong
H^i_\text{\it fg} (-, \mathbb{Z}(n)) \otimes \mathbb{Z}_p. \]
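Concretely, for a complex $C$ with finitely generated cohomology groups, this identification can be sketched as follows (a standard argument; the cited tag gives the general statement):

```latex
% Short exact sequences for the cohomology of C \otimes^L Z/p^k,
% compatible in k:
\[
0 \to H^i (C)/p^k \to H^i (C \otimes^\mathbf{L} \mathbb{Z}/p^k\mathbb{Z}) \to H^{i+1} (C) [p^k] \to 0.
\]
% The lim^1 terms in the Milnor sequence for R lim vanish by
% Mittag-Leffler, and passing to the limit over k gives
\[
\varprojlim_k H^i (C)/p^k \cong H^i (C) \otimes \mathbb{Z}_p,
\qquad
\varprojlim_k H^{i+1} (C) [p^k] = 0,
\]
% since the torsion subgroup of a finitely generated abelian group is
% finite, so its Tate module is trivial.
```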
In particular, for each $p$ there is a long exact sequence of cohomology
groups
\begin{multline*}
\cdots \to H^i_\text{\it fg} (U, \mathbb{Z}(n)) \otimes \mathbb{Z}_p \to
H^i_\text{\it fg} (X, \mathbb{Z}(n)) \otimes \mathbb{Z}_p \to
H^i_\text{\it fg} (Z, \mathbb{Z}(n)) \otimes \mathbb{Z}_p \\
\to H^{i+1}_\text{\it fg} (U, \mathbb{Z}(n)) \otimes \mathbb{Z}_p \to \cdots
\end{multline*}
induced by \eqref{eqn:triangle-RGamma-fg}. Since the groups
$H^i_\text{\it fg} (-, \mathbb{Z}(n))$ are finitely generated and $\mathbb{Z}_p$ is flat, exactness
after tensoring with $\mathbb{Z}_p$ for every prime $p$ implies that the sequence
\begin{equation}
\label{eqn:RGamma-fg-long-exact-sequence}
\cdots \to H^i_\text{\it fg} (U, \mathbb{Z}(n)) \to
H^i_\text{\it fg} (X, \mathbb{Z}(n)) \to
H^i_\text{\it fg} (Z, \mathbb{Z}(n)) \to
H^{i+1}_\text{\it fg} (U, \mathbb{Z}(n)) \to \cdots
\end{equation}
is exact.
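The last step deserves a word of justification; the following elementary argument (with $A_{\mathrm{tors}}$ denoting the torsion subgroup, a notation not used above) is all that is needed:

```latex
% Claim: a finitely generated abelian group A with A \otimes Z_p = 0
% for every prime p is trivial. Writing
\[
A \cong \mathbb{Z}^{\oplus r} \oplus A_{\mathrm{tors}},
\qquad
A \otimes \mathbb{Z}_p \cong \mathbb{Z}_p^{\oplus r} \oplus A_{\mathrm{tors}} [p^\infty],
\]
% the vanishing for every p forces r = 0 and kills each p-primary
% component of the (finite) torsion subgroup, so A = 0. Since Z_p is
% flat, tensoring commutes with taking cohomology, and the claim
% applies to the cohomology of the sequence above.
```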
Now we consider the diagram
\[ \begin{tikzcd}[column sep=0.75em,font=\small]
\tau_{\le m} R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{Z} (n))[-1] \ar{r}\ar{d} & R\Gamma_\text{\it W,c} (U, \mathbb{Z} (n))\ar{d}\ar{r} & \tau_{\le m} R\Gamma_\text{\it fg} (U,\mathbb{Z}(n)) \ar{d}\ar{r} & {[0]} \ar{d} \\
\tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n))[-1] \ar{r}\ar{d} & R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))\ar{d}\ar{r} & \tau_{\le m} R\Gamma_\text{\it fg} (X,\mathbb{Z}(n)) \ar{d}\ar{r} & {[0]} \ar{d} \\
\tau_{\le m} R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{Z} (n))[-1] \ar{r}\ar{d} & R\Gamma_\text{\it W,c} (Z, \mathbb{Z} (n))\ar{d}\ar{r} & \tau_{\le m} R\Gamma_\text{\it fg} (Z,\mathbb{Z}(n)) \ar{d}\ar{r} & {[0]} \ar{d} \\
\tau_{\le m} R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{Z} (n)) \ar{r} & R\Gamma_\text{\it W,c} (U, \mathbb{Z} (n)) [1] \ar{r} & \tau_{\le m} R\Gamma_\text{\it fg} (U,\mathbb{Z}(n)) [1] \ar{r} & {[1]}
\end{tikzcd} \]
Here the truncations are taken with $m$ large enough, as in the proof of
Lemma~\ref{lemma:determinant-of-RGamma-Wc-well-defined}. There are canonical
isomorphisms
\begin{align*}
\notag \det_\mathbb{Z} R\Gamma_\text{\it W,c} (U, \mathbb{Z}(n)) & \cong \begin{array}{c} \det_\mathbb{Z} (\tau_{\le m} R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{Z} (n)) [-1]) \\ \otimes \\ \det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (U, \mathbb{Z}(n))), \end{array} \\
\\
\notag \det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) & \cong \begin{array}{c} \det_\mathbb{Z} (\tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) [-1]) \\ \otimes \\ \det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (X, \mathbb{Z}(n))), \end{array} \\
\\
\notag \det_\mathbb{Z} R\Gamma_\text{\it W,c} (Z, \mathbb{Z}(n)) & \cong \begin{array}{c} \det_\mathbb{Z} (\tau_{\le m} R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{Z} (n)) [-1]) \\ \otimes \\ \det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (Z, \mathbb{Z}(n))), \end{array} \\
\\
\det_\mathbb{Z} (\tau_{\le m} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z}(n))) & \cong \begin{array}{c} \det_\mathbb{Z} (\tau_{\le m} R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{Z} (n))) \\ \otimes \\ \det_\mathbb{Z} (\tau_{\le m} R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{Z}(n))), \end{array} \\
\\
\det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (X, \mathbb{Z}(n))) & \cong \begin{array}{c} \det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (U, \mathbb{Z} (n))) \\ \otimes \\ \det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (Z, \mathbb{Z}(n))).\end{array}
\end{align*}
The first four isomorphisms arise from the corresponding distinguished
triangles, while the last isomorphism comes from the long exact sequence
\eqref{eqn:RGamma-fg-long-exact-sequence}, which gives an isomorphism
\begin{equation}\small
\bigotimes_{i \le m}
\Bigl(\det_\mathbb{Z} H^i_\text{\it fg} (U, \mathbb{Z}(n))^{(-1)^i} \otimes
\det_\mathbb{Z} H^i_\text{\it fg} (X, \mathbb{Z}(n))^{(-1)^{i+1}} \otimes
\det_\mathbb{Z} H^i_\text{\it fg} (Z, \mathbb{Z}(n))^{(-1)^i}\Bigr) \cong \mathbb{Z}.
\end{equation}
We can rearrange the terms
(at the expense of introducing a sign $\pm 1$)
to obtain
\begin{multline*}
\det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (X, \mathbb{Z}(n))) \cong
\bigotimes_{i \le m} \det_\mathbb{Z} H^i_\text{\it fg} (X, \mathbb{Z}(n)) \cong \\
\bigotimes_{i \le m} \det_\mathbb{Z} H^i_\text{\it fg} (U, \mathbb{Z}(n)) \otimes
\bigotimes_{i \le m} \det_\mathbb{Z} H^i_\text{\it fg} (Z, \mathbb{Z}(n)) \cong \\
\det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (U, \mathbb{Z}(n))) \otimes
\det_\mathbb{Z} (\tau_{\le m} R\Gamma_\text{\it fg} (Z, \mathbb{Z}(n))).
\end{multline*}
All this gives us the desired isomorphism of integral determinants
\eqref{eqn:isomorphism-of-det-RGamma-Wc-for-closed-open-decompositions}.
Let us now consider the diagram with distinguished rows in
Figure~\ref{fig:Regulators-and-closed-open-decompositions}
(p.~\pageref{fig:Regulators-and-closed-open-decompositions}).
Here the three squares involving the regulators commute by
Lemma~\ref{lemma:compatibility-of-B(X,n)}. Taking determinants and using
their compatibility with distinguished triangles, we obtain
\eqref{eqn:lambda-and-closed-open-decompositions}.
\end{proof}
\end{lemma}
\begin{remark}
Morally, we expect that a closed-open decomposition induces a distinguished
triangle of the form
\begin{equation}
\label{eqn:closed-open-decompositions-hypothetical-RGamma-triangle}
R\Gamma_\text{\it W,c} (U, \mathbb{Z}(n)) \to
R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \to
R\Gamma_\text{\it W,c} (Z, \mathbb{Z}(n)) \to [1].
\end{equation}
However, $R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$ is defined in \cite{Beshenov-Weil-etale-1}
as a mapping fiber of a morphism in $\mathbf{D} (\mathbb{Z})$, so it is not quite
functorial.
We recall that in the usual derived (1-)category $\mathbf{D} (\mathcal{A})$,
naively taking a ``cone of a morphism of distinguished triangles''
$$\begin{tikzpicture}[ampersand replacement=\&]
\matrix(m)[matrix of math nodes, row sep=1.5em, column sep=1.5em,
text height=1.5ex, text depth=0.25ex]{
A^\bullet \& B^\bullet \& C^\bullet \& A^\bullet[1] \\
A^{\bullet\prime} \& B^{\bullet\prime} \& C^{\bullet\prime} \& A^{\bullet\prime} [1] \\
A^{\bullet\prime\prime} \& B^{\bullet\prime\prime} \& C^{\bullet\prime\prime} \& A^{\bullet\prime\prime} [1] \\
A^\bullet [1] \& B^\bullet [1] \& C^\bullet [1] \& A^\bullet [2]\\};
\path[->] (m-1-1) edge (m-1-2);
\path[->] (m-1-2) edge (m-1-3);
\path[->] (m-1-3) edge (m-1-4);
\path[->] (m-2-1) edge (m-2-2);
\path[->] (m-2-2) edge (m-2-3);
\path[->] (m-2-3) edge (m-2-4);
\path[dashed,->] (m-3-1) edge (m-3-2);
\path[dashed,->] (m-3-2) edge (m-3-3);
\path[dashed,->] (m-3-3) edge (m-3-4);
\path[->] (m-4-1) edge (m-4-2);
\path[->] (m-4-2) edge (m-4-3);
\path[->] (m-4-3) edge (m-4-4);
\path[->] (m-1-1) edge (m-2-1);
\path[->] (m-1-2) edge (m-2-2);
\path[->] (m-1-3) edge (m-2-3);
\path[->] (m-1-4) edge (m-2-4);
\path[dashed,->] (m-2-1) edge (m-3-1);
\path[dashed,->] (m-2-2) edge (m-3-2);
\path[dashed,->] (m-2-3) edge (m-3-3);
\path[dashed,->] (m-2-4) edge (m-3-4);
\path[dashed,->] (m-3-1) edge (m-4-1);
\path[dashed,->] (m-3-2) edge (m-4-2);
\path[dashed,->] (m-3-3) edge (m-4-3);
\path[dashed,->] (m-3-4) edge (m-4-4);
\node[font=\small] at ($(m-3-3)!.5!(m-4-4)$) {($-$)};
\end{tikzpicture}$$
usually \emph{does not} yield a distinguished triangle
$A^{\bullet\prime\prime} \to B^{\bullet\prime\prime} \to C^{\bullet\prime\prime} \to A^{\bullet\prime\prime} [1]$.
For a thorough discussion of this problem, see \cite{Neeman-1991}.
For lack of a better definition for
$R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n))$, we constructed the isomorphism
\eqref{eqn:isomorphism-of-det-RGamma-Wc-for-closed-open-decompositions}
ad hoc, without the hypothetical triangle
\eqref{eqn:closed-open-decompositions-hypothetical-RGamma-triangle}.
\end{remark}
\begin{landscape}
\begin{figure}
\[ \begin{tikzcd}[font=\small]
&[2em] &[-2.5em] &[-2.5em] &[-2.5em] R\Gamma_\text{\it W,c} (U, \mathbb{Z} (n))\ar{dl} &[-2.5em] \\
R\!\Hom (R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{Q}[-2]) \ar{r}{\alpha_{U,n}}\ar{dd} & R\Gamma_c (U_\text{\it \'{e}t}, \mathbb{Z} (n)) \ar{dr}{u_\infty} \ar{dd} \ar{rr} & & R\Gamma_\text{\it fg} (U, \mathbb{Z} (n))\ar{dl}[swap]{i_\infty} \ar[dashed]{dd}{\exists!} \ar[crossing over]{rr} & & \cdots [-1] \ar{dd} \\
& & R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{Z} (n)) & & R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n))\ar{dl} \\
R\!\Hom(R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{Q}[-2]) \ar{r}{\alpha_{X,n}}\ar{dd} & R\Gamma_c (X_\text{\it \'{e}t}, \mathbb{Z} (n)) \ar{dr}{u_\infty} \ar{dd} \ar{rr} & & R\Gamma_\text{\it fg} (X, \mathbb{Z} (n)) \ar{dl}[swap]{i_\infty} \ar[dashed]{dd}{\exists!} \ar[crossing over]{rr} & & \cdots [-1] \ar{dd} \\
& & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n)) \ar[<-,near end,crossing over]{uu} & & R\Gamma_\text{\it W,c} (Z, \mathbb{Z} (n))\ar{dl} \\
R\!\Hom(R\Gamma (Z_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{Q}[-2]) \ar{r}{\alpha_{Z,n}}\ar{dd} & R\Gamma_c (Z_\text{\it \'{e}t}, \mathbb{Z} (n)) \ar{dr}{u_\infty} \ar{dd} \ar{rr} & & R\Gamma_\text{\it fg} (Z, \mathbb{Z} (n)) \ar{dl}[swap]{i_\infty} \ar[dashed]{dd}{\exists!} \ar[crossing over]{rr} & & \cdots [-1] \ar{dd} \\
& & R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{Z} (n)) \ar[<-,near end,crossing over]{uu} & & R\Gamma_\text{\it W,c} (U, \mathbb{Z} (n)) [1]\ar{dl} \\
R\!\Hom(R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{Q}[-1]) \ar{r}{\alpha_{U,n} [1]} & R\Gamma_c (U_\text{\it \'{e}t}, \mathbb{Z} (n)) [1] \ar{dr}{u_\infty} \ar{rr} & & R\Gamma_\text{\it fg} (U, \mathbb{Z} (n)) [1] \ar{dl}[swap]{i_\infty [1]} \ar{rr} & & \cdots [0] \\
& & R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{Z} (n)) [1] \ar[<-,near end,crossing over]{uu} \\
\end{tikzcd} \]
\caption{Diagram induced by a closed-open decomposition
$Z \hookrightarrow X \hookleftarrow U$.}
\label{fig:RGamma-Wc-and-closed-open-decompositions}
\end{figure}
\end{landscape}
\begin{landscape}
\begin{figure}
\[ \begin{tikzcd}[font=\small]
&[2em] &[-2.5em] &[-2.5em] &[-2.5em] R\Gamma_\text{\it W,c} (U, \mathbb{R} (n))\ar{dl}\ar{dd} &[-2.5em] \\
R\!\Hom (R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}[-2]) \ar{r}\ar{dd} & 0 \ar{dr} \ar{dd} \ar{rr} & & R\Gamma_\text{\it fg} (U, \mathbb{R} (n))\ar{dl}[swap]{0} \ar[dashed]{dd}{\exists!} \ar[crossing over,near start]{rr}{\cong} & & R\!\Hom (R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}[-1]) \ar{dd} \\
& & R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) & & R\Gamma_\text{\it W,c} (X, \mathbb{R} (n))\ar{dl}\ar{dd} \\
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}[-2]) \ar{r}\ar{dd} & 0 \ar{dr} \ar{dd} \ar{rr} & & R\Gamma_\text{\it fg} (X, \mathbb{R} (n)) \ar{dl}[swap]{0} \ar[dashed]{dd}{\exists!} \ar[crossing over,near start]{rr}{\cong} & & R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}[-1]) \ar{dd} \\
& & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) \ar[<-,near end,crossing over]{uu} & & R\Gamma_\text{\it W,c} (Z, \mathbb{R} (n))\ar{dl}\ar{dd} \\
R\!\Hom (R\Gamma (Z_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}[-2]) \ar{r}\ar{dd} & 0 \ar{dr} \ar{dd} \ar{rr} & & R\Gamma_\text{\it fg} (Z, \mathbb{R} (n)) \ar{dl}[swap]{0} \ar[dashed]{dd}{\exists!} \ar[crossing over,near start]{rr}{\cong} & & R\!\Hom (R\Gamma (Z_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}[-1]) \ar{dd} \\
& & R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) \ar[<-,near end,crossing over]{uu} & & R\Gamma_\text{\it W,c} (U, \mathbb{R} (n)) [1]\ar{dl} \\
R\!\Hom (R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}[-1]) \ar{r} & 0 \ar{dr} \ar{rr} & & R\Gamma_\text{\it fg} (U, \mathbb{R} (n)) [1] \ar{dl}[swap]{0} \ar{rr}{\cong} & & R\!\Hom (R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) \\
& & R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) [1] \ar[<-,near end,crossing over]{uu} \\
\end{tikzcd} \]
\caption{Diagram induced by a closed-open decomposition
$Z \hookrightarrow X \hookleftarrow U$, tensored with $\mathbb{R}$.}
\label{fig:RGamma-Wc-and-closed-open-decompositions-otimes-Q}
\end{figure}
\end{landscape}
\begin{landscape}
\begin{figure}
\[ \begin{tikzcd}[column sep=1em,font=\small]
\begin{array}{c} R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{Reg_{U,n}^\vee [-1] \oplus id}[swap]{\cong} \ar{r} & \begin{array}{c} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{Reg_{X,n}^\vee [-1] \oplus id}[swap]{\cong} \ar{r} & \begin{array}{c} R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{Reg_{Z,n}^\vee [-1] \oplus id}[swap]{\cong} \ar{r} & \begin{array}{c} R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) \end{array} \ar{d}{Reg_{U,n}^\vee \oplus id}[swap]{\cong} \\
\begin{array}{c} R\!\Hom (R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{\text{split}}[swap]{\cong} \ar{r} & \begin{array}{c} R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{\text{split}}[swap]{\cong} \ar{r} & \begin{array}{c} R\!\Hom (R\Gamma (Z_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, Z (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{\text{split}}[swap]{\cong} \ar{r} & \begin{array}{c} R\!\Hom (R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)), \mathbb{R}) \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, U (\mathbb{C}), \mathbb{R} (n)) \end{array} \ar{d}{\text{split}}[swap]{\cong} \\
R\Gamma_\text{\it W,c} (U, \mathbb{R} (n)) \ar{r} & R\Gamma_\text{\it W,c} (X, \mathbb{R} (n)) \ar{r} & R\Gamma_\text{\it W,c} (Z, \mathbb{R} (n)) \ar{r} & R\Gamma_\text{\it W,c} (U, \mathbb{R} (n)) [1]
\end{tikzcd} \]
\caption{Diagram induced by a closed-open decomposition
$Z \hookrightarrow X \hookleftarrow U$}
\label{fig:Regulators-and-closed-open-decompositions}
\end{figure}
\end{landscape}
\begin{lemma}
\label{lemma:lambda-and-affine-bundles}
For $n < 0$ and $r \ge 0$, let $X$ be an arithmetic scheme satisfying
$\mathbf{L}^c (X_\text{\it \'{e}t},n-r)$ and $\mathbf{B} (X,n-r)$. Then there is a natural
quasi-isomorphism of complexes
\begin{equation}
\label{eqn:RGamma-Wc-and-affine-bundles}
R\Gamma_\text{\it W,c} (\AA^r_X, \mathbb{Z} (n)) \cong R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n-r)) [-2r],
\end{equation}
which after passing to the determinants makes the following diagram commute:
\begin{equation}
\label{eqn:lambda-and-affine-bundles}
\begin{tikzcd}[column sep=0.5em]
& \mathbb{R}\ar{dl}{\cong}[swap]{\lambda_{\AA^r_X,n}}\ar{dr}{\lambda_{X,n-r}}[swap]{\cong} \\
(\det_\mathbb{Z} R\Gamma_\text{\it W,c} (\AA^r_X, \mathbb{Z} (n)))\otimes \mathbb{R} \ar{rr}{\cong} & & (\det_\mathbb{Z} R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n-r)))\otimes \mathbb{R}
\end{tikzcd}
\end{equation}
\begin{proof}
We refer to Figure~\ref{fig:RGamma-Wc-and-affine-bundles}
(p.~\pageref{fig:RGamma-Wc-and-affine-bundles}), which shows how the flat
morphism $p\colon \AA^r_X \to X$ induces the desired quasi-isomorphism
\eqref{eqn:RGamma-Wc-and-affine-bundles}. It all boils down to the homotopy
property of motivic cohomology, namely the fact that $p$ induces a
quasi-isomorphism
\[ p^*\colon R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n-r)) [2r] \xrightarrow{\cong}
R\Gamma (\AA^r_{X,\text{\it \'{e}t}}, \mathbb{Z}^c (n)); \]
see, e.g. \cite[Lemma~5.11]{Morin-2014}.
After passing to real coefficients, we obtain the following diagram:
\[ \begin{tikzcd}[column sep=1em, font=\small]
\begin{array}{c} R\Gamma_c (G_\mathbb{R}, \AA^r_X (\mathbb{C}), \mathbb{R} (n)) [-2] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, \AA^r_X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{Reg_{\AA^r_X,n}^\vee [-1] \oplus id}[swap]{\cong} \ar{r}{\cong} & \begin{array}{c} R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n-r)) [-2] [-2r] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n-r)) [-1] [-2r] \end{array} \ar{d}{Reg_{X,n-r}^\vee [-1] [-2r] \oplus id}[swap]{\cong} \\
\begin{array}{c} R\!\Hom (R\Gamma (\AA^r_{X,\text{\it \'{e}t}}, \mathbb{Z}^c (n)), \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, \AA^r_X (\mathbb{C}), \mathbb{R} (n)) [-1] \end{array} \ar{d}{\text{split}}[swap]{\cong} \ar{r}{\cong} & \begin{array}{c} R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n-r)) [2r], \mathbb{R}) [-1] \\ \oplus \\ R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n-r)) [-1] [-2r] \end{array} \ar{d}{\text{split}}[swap]{\cong} \\
R\Gamma_\text{\it W,c} (\AA^r_X, \mathbb{R} (n)) \ar{r}{\cong} & R\Gamma_\text{\it W,c} (X, \mathbb{R} (n-r)) [-2r]
\end{tikzcd} \]
Here the first square is commutative due to the compatibility of the
regulator with affine bundles (Lemma~\ref{lemma:compatibility-of-B(X,n)}),
and the second square commutes because the quasi-isomorphism
\eqref{eqn:RGamma-Wc-and-affine-bundles} gives compatible splittings
(see again Figure~\ref{fig:RGamma-Wc-and-affine-bundles} on
p.~\pageref{fig:RGamma-Wc-and-affine-bundles}). Taking the determinants, we
obtain the desired commutative diagram
\eqref{eqn:lambda-and-affine-bundles}.
\end{proof}
\end{lemma}
\begin{landscape}
\begin{figure}
\[ \begin{tikzcd}[font=\small,column sep=1em]
&[0.5em] &[-2.75em] &[-2.75em] &[-2.75em] R\Gamma_\text{\it W,c} (\AA^r_X, \mathbb{Z} (n))\ar{dl}\ar[dashed,near start]{dd}{\cong} &[-2.75em] \\
R\!\Hom (R\Gamma (\AA^r_{X,\text{\it \'{e}t}}, \mathbb{Z}^c (n)), \mathbb{Q}[-2]) \ar{r}{\alpha_{\AA^r_X,n}}\ar{dd}{(p^*)^\vee}[swap]{\cong} & R\Gamma_c (\AA^r_{X,\text{\it \'{e}t}}, \mathbb{Z} (n)) \ar{dr}{u_\infty} \ar{dd}{p_*}[swap]{\cong} \ar{rr} & & R\Gamma_\text{\it fg} (\AA^r_X, \mathbb{Z} (n))\ar{dl}[swap]{i_\infty} \ar[dashed]{dd}{\cong} \ar[crossing over]{rr} & & \cdots [-1] \ar{dd}{\cong} \\
& & R\Gamma_c (G_\mathbb{R}, \AA^r_X (\mathbb{C}), \mathbb{Z} (n)) & & R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n-r)) [-2r]\ar{dl} \\
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n-r)) [2r], \mathbb{Q}[-2]) \ar[outer sep=0.75em]{r}{\alpha_{X,n-r} [-2r]} & R\Gamma_c (X_\text{\it \'{e}t}, \mathbb{Z} (n-r)) [-2r] \ar{dr}{u_\infty} \ar{rr} & & R\Gamma_\text{\it fg} (X, \mathbb{Z} (n-r)) [-2r] \ar{dl}[swap]{i_\infty} \ar{rr} & & \cdots [-1] \\
& & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{Z} (n-r)) [-2r] \ar[<-,near end,crossing over]{uu}[swap]{p_*}{\cong}\\
& & & & R\Gamma_\text{\it W,c} (\AA^r_X, \mathbb{R} (n))\ar{dl}\ar[dashed,near start]{dd}{\cong} & \\
R\!\Hom (R\Gamma (\AA^r_{X,\text{\it \'{e}t}}, \mathbb{Z}^c (n)), \mathbb{R}[-2]) \ar{r}\ar{dd}{(p^*)^\vee}[swap]{\cong} & 0 \ar{dr} \ar{dd} \ar{rr} & & R\Gamma_\text{\it fg} (\AA^r_X, \mathbb{R} (n))\ar{dl}[swap]{0} \ar[dashed]{dd}{\cong} \ar[crossing over]{rr} & & \cdots [-1] \ar{dd}{\cong} \\
& & R\Gamma_c (G_\mathbb{R}, \AA^r_X (\mathbb{C}), \mathbb{R} (n)) & & R\Gamma_\text{\it W,c} (X, \mathbb{R} (n-r)) [-2r]\ar{dl} \\
R\!\Hom (R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n-r)) [2r], \mathbb{R}[-2]) \ar{r} & 0 \ar{dr} \ar{rr} & & R\Gamma_\text{\it fg} (X, \mathbb{R} (n-r)) [-2r] \ar{dl}[swap]{0} \ar{rr} & & \cdots [-1] \\
& & R\Gamma_c (G_\mathbb{R}, X (\mathbb{C}), \mathbb{R} (n-r)) [-2r] \ar[<-,near end,crossing over]{uu}[swap]{p_*}{\cong}
\end{tikzcd} \]
\caption{Isomorphism
$R\Gamma_\text{\it W,c} (\AA^r_X, \mathbb{Z} (n)) \cong R\Gamma_\text{\it W,c} (X, \mathbb{Z} (n-r)) [-2r]$
and its splitting after tensoring with $\mathbb{R}$.}
\label{fig:RGamma-Wc-and-affine-bundles}
\end{figure}
\end{landscape}
\begin{theorem}
\label{thm:compatibility-of-C(X,n)}
For an arithmetic scheme $X$ and $n < 0$, assume $\mathbf{L}^c (X_\text{\it \'{e}t},n)$,
$\mathbf{B} (X,n)$, and the meromorphic continuation of $\zeta (X,s)$ around
$s = n$.
\begin{enumerate}
\item[1)] If $X = \coprod_{1 \le i \le r} X_i$ is a finite disjoint union of
arithmetic schemes, then
$$\mathbf{C} (X,n) \iff \mathbf{C} (X_i,n)\text{ for all }i.$$
\item[2)] For a closed-open decomposition
$Z \hookrightarrow X \hookleftarrow U$, if two of the three conjectures
\[ \mathbf{C} (X,n), \quad
\mathbf{C} (Z,n), \quad
\mathbf{C} (U,n) \]
are true, then the third is also true.
\item[3)] For any $r \ge 0$, one has
$$\mathbf{C} (\AA^r_X, n) \iff \mathbf{C} (X, n-r).$$
\end{enumerate}
\begin{proof}
Follows from Lemmas
\ref{lemma:lambda-and-disjoint-unions},
\ref{lemma:lambda-and-closed-open-decompositions},
\ref{lemma:lambda-and-affine-bundles},
together with the corresponding identities for the zeta functions
\eqref{eqn:zeta-function-for-disjoint-unions},
\eqref{eqn:zeta-function-for-closed-open-decompositions},
\eqref{eqn:zeta-function-for-affine-space}.
\end{proof}
\end{theorem}
The following is a special case of compatibility with closed-open
decompositions.
\begin{lemma}
\label{lemma:compatibility-for-Xred}
For an arithmetic scheme $X$ and $n < 0$, the conjectures
\[
\mathbf{L}^c (X_\text{\it \'{e}t},n), ~
\mathbf{B} (X,n), ~
\mathbf{VO} (X,n), ~
\mathbf{C} (X,n)
\]
are equivalent to
\[
\mathbf{L}^c (X_\text{\it red,\'{e}t},n), ~
\mathbf{B} (X_\text{\it red},n), ~
\mathbf{VO} (X_\text{\it red},n), ~
\mathbf{C} (X_\text{\it red},n)
\]
respectively.
\begin{proof}
Apply Lemma~\ref{lemma:compatibility-of-Lc(X,n)},
Lemma~\ref{lemma:compatibility-of-B(X,n)},
Proposition~\ref{prop:compatibility-of-VO(X,n)}, and
Theorem~\ref{thm:compatibility-of-C(X,n)} to the canonical closed embedding
$X_\text{\it red} \hookrightarrow X$.
\end{proof}
\end{lemma}
The above lemma can be proved directly, by going through the construction of
Weil-\'{e}tale cohomology in \cite{Beshenov-Weil-etale-1} and the statements of
the conjectures. In particular,
$$R\Gamma_\text{\it W,c} (X, \mathbb{Z}(n)) \cong R\Gamma_\text{\it W,c} (X_\text{\it red}, \mathbb{Z}(n)).$$
It is important to note that the cycle complexes do not distinguish $X$ from
$X_\text{\it red}$, and neither does the zeta function: $\zeta (X,s) = \zeta (X_\text{\it red},s)$.
\begin{remark}
If $X/\mathbb{F}_q$ is a variety over a finite field, then the proof of
Theorem~\ref{thm:compatibility-of-C(X,n)} simplifies drastically: we can work
with the formula \eqref{eqn:special-value-for-X/Fq} and the following
properties of motivic cohomology:
\begin{enumerate}
\item[1)] $R\Gamma (\coprod_i X_{i,\text{\it \'{e}t}}, \mathbb{Z}^c (n)) \cong
\bigoplus_i R\Gamma (X_{i,\text{\it \'{e}t}}, \mathbb{Z}^c (n))$;
\item[2)] triangles associated to closed-open decompositions
\[ R\Gamma (Z_\text{\it \'{e}t}, \mathbb{Z}^c (n)) \to
R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n)) \to
R\Gamma (U_\text{\it \'{e}t}, \mathbb{Z}^c (n)) \to
R\Gamma (Z_\text{\it \'{e}t}, \mathbb{Z}^c (n))[1] \]
\item[3)] homotopy invariance
$R\Gamma (X_\text{\it \'{e}t}, \mathbb{Z}^c (n-r)) [2r] \cong
R\Gamma (\AA^r_{X,\text{\it \'{e}t}}, \mathbb{Z}^c (n))$.
\end{enumerate}
In this case, no regulators are involved, so we do not need the technical
lemmas \ref{lemma:lambda-and-disjoint-unions},
\ref{lemma:lambda-and-closed-open-decompositions},
\ref{lemma:lambda-and-affine-bundles}.
\end{remark}
If we consider the projective space $\mathbb{P}_X^r = \mathbb{P}_\mathbb{Z}^r \times X$, we have a
formula for the zeta function
\begin{equation}
\label{eqn:zeta-function-for-projective-space}
\zeta (\mathbb{P}_X^r, s) = \prod_{0 \le i \le r} \zeta (X, s-i).
\end{equation}
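As a sanity check, \eqref{eqn:zeta-function-for-projective-space} follows from \eqref{eqn:zeta-function-for-closed-open-decompositions} and \eqref{eqn:zeta-function-for-affine-space} by induction on $r$, using the decomposition $\mathbb{P}_X^{r-1} \hookrightarrow \mathbb{P}_X^r \hookleftarrow \AA_X^r$:

```latex
\[
\zeta (\mathbb{P}_X^r, s)
  = \zeta (\mathbb{P}_X^{r-1}, s) \cdot \zeta (\AA_X^r, s)
  = \zeta (\mathbb{P}_X^{r-1}, s) \cdot \zeta (X, s - r)
  = \prod_{0 \le i \le r} \zeta (X, s - i).
\]
```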
Our special value conjecture satisfies the corresponding compatibility.
\begin{corollary}[Projective bundles]
Let $X$ be an arithmetic scheme, $n < 0$, and $r \ge 0$.
For all $0 \le i \le r$ assume Conjectures $\mathbf{L}^c (X_\text{\it \'{e}t},n-i)$,
$\mathbf{B} (X,n-i)$, and the meromorphic continuation of $\zeta (X,s)$ around
$s = n-i$. Then
\[ \mathbf{C} (X,n-i)\text{ for }0 \le i \le r \Longrightarrow
\mathbf{C} (\mathbb{P}_X^r, n). \]
\begin{proof}
Applied to the closed-open decomposition
$\mathbb{P}_X^{r-1} \hookrightarrow \mathbb{P}_X^r \hookleftarrow \AA_X^r$,
Theorem~\ref{thm:compatibility-of-C(X,n)} gives
\[ \mathbf{C} (X, n-r) \text{ and } \mathbf{C} (\mathbb{P}_X^{r-1}, n)
\Longrightarrow
\mathbf{C} (\AA_X^r, n) \text{ and } \mathbf{C} (\mathbb{P}_X^{r-1}, n)
\Longrightarrow
\mathbf{C} (\mathbb{P}_X^r,n). \]
The assertion follows by induction on $r$.
(The same inductive argument proves the identity
\eqref{eqn:zeta-function-for-projective-space} from
\eqref{eqn:zeta-function-for-affine-space}.)
\end{proof}
\end{corollary}
\section{Unconditional results}
\label{sec:unconditional-results}
Now we apply Theorem~\ref{thm:compatibility-of-C(X,n)} to prove the main theorem
stated in the introduction: the validity of $\mathbf{VO} (X,n)$ and
$\mathbf{C} (X,n)$ for all $n < 0$ for cellular schemes over certain
one-dimensional bases. In fact, we will construct an even larger class of
schemes $\mathcal{C} (\mathbb{Z})$ whose elements satisfy the conjectures. This
approach is motivated by \cite[\S 5]{Morin-2014}.
\begin{definition}
Let $\mathcal{C} (\mathbb{Z})$ be the full subcategory of the category of arithmetic
schemes generated by the following objects:
\begin{itemize}
\item the empty scheme $\emptyset$,
\item $\Spec \mathbb{F}_q$ for each finite field $\mathbb{F}_q$,
\item $\Spec \mathcal{O}_F$ for an abelian number field $F/\mathbb{Q}$,
\item curves over finite fields $C/\mathbb{F}_q$,
\end{itemize}
and the following operations.
\begin{enumerate}
\item[$\mathcal{C}0)$] $X$ is in $\mathcal{C} (\mathbb{Z})$ if and only if $X_\text{\it red}$
is in $\mathcal{C} (\mathbb{Z})$.
\item[$\mathcal{C}1)$] A finite disjoint union $\coprod_{1 \le i \le r} X_i$
is in $\mathcal{C} (\mathbb{Z})$ if and only if each $X_i$ is in
$\mathcal{C} (\mathbb{Z})$.
\item[$\mathcal{C}2)$] Let $Z \hookrightarrow X \hookleftarrow U$ be a
closed-open decomposition such that $Z_{\text{\it red},\mathbb{C}}$, $X_{\text{\it red},\mathbb{C}}$,
$U_{\text{\it red},\mathbb{C}}$ are smooth and quasi-projective. If two of the three schemes
$Z,X,U$ lie in $\mathcal{C} (\mathbb{Z})$, then the third also lies in
$\mathcal{C} (\mathbb{Z})$.
\item[$\mathcal{C}3)$] If $X$ lies in $\mathcal{C} (\mathbb{Z})$, then the affine
space $\AA^r_X$ for each $r \ge 0$ also lies in $\mathcal{C} (\mathbb{Z})$.
\end{enumerate}
\end{definition}
Recall that the condition that $X_{\text{\it red},\mathbb{C}}$ is smooth and quasi-projective is
necessary to ensure that the regulator morphism exists (see
Remark~\ref{rmk:regulator-is-defined-for-XC-smooth-quasi-proj}).
\begin{proposition}
\label{prop:C(X,n)-holds-for-C(Z)}
Conjectures $\mathbf{VO} (X,n)$ and $\mathbf{C} (X,n)$ hold for any
$X \in \mathcal{C} (\mathbb{Z})$ and $n < 0$.
\begin{proof}
For $X = \Spec \mathbb{F}_q$ a finite field, Conjecture~$\mathbf{C} (X,n)$ holds by
Example~\ref{example:C(X,n)-for-Spec-Fq}.
If $X = \Spec \mathcal{O}_F$ for a number field $F/\mathbb{Q}$, then
Conjecture~$\mathbf{C} (X,n)$ is equivalent to the conjecture of Flach and
Morin \cite[Conjecture~5.12]{Flach-Morin-2018}, which holds unconditionally
for abelian $F/\mathbb{Q}$, via reduction to the Tamagawa number conjecture; see
\cite[\S 5.8.3]{Flach-Morin-2018}, in particular
[ibid., Proposition~5.35]. The condition $\mathbf{VO} (X,n)$ is also true in
this case (see Example~\ref{example:VO(X,n)-for-number-rings}).
If $X = C/\mathbb{F}_q$ is a curve over a finite field, then $\mathbf{C} (X,n)$
holds thanks to
Theorem~\ref{thm:C(X,n)-over-finite-fields}.
Conjecture~$\mathbf{L}^c (X_\text{\it \'{e}t},n)$ is known for curves and essentially goes
back to Soul\'{e}; see, for example, \cite[Proposition~4.3]{Geisser-2017}.
Finally, the fact that Conjectures $\mathbf{L}^c (X_\text{\it \'{e}t},n)$,
$\mathbf{B} (X,n)$, $\mathbf{VO} (X,n)$, $\mathbf{C} (X,n)$ are preserved by
the operations $\mathcal{C}0)$--$\mathcal{C}3)$ follows from
Lemma~\ref{lemma:compatibility-for-Xred},
Lemma~\ref{lemma:compatibility-of-Lc(X,n)},
Lemma~\ref{lemma:compatibility-of-B(X,n)},
Proposition~\ref{prop:compatibility-of-VO(X,n)}, and
Theorem~\ref{thm:compatibility-of-C(X,n)}, respectively.
\end{proof}
\end{proposition}
\begin{lemma}
Any zero-dimensional arithmetic scheme $X$ is in $\mathcal{C} (\mathbb{Z})$.
\begin{proof}
Since $X$ is a Noetherian scheme of dimension $0$, it is a finite disjoint
union of $\Spec A_i$ for some Artinian local rings $A_i$. Thanks to
$\mathcal{C}1)$, we can assume that $X = \Spec A$, and thanks to
$\mathcal{C}0)$, we can assume that $X$ is reduced. But then $A = k$ is a
field. Since $X$ is a scheme of finite type over $\Spec \mathbb{Z}$, we conclude
that $X = \Spec \mathbb{F}_q \in \mathcal{C} (\mathbb{Z})$.
\end{proof}
\end{lemma}
\begin{proposition}
\label{prop:particular-cases-1-dim-base}
Let $B$ be a one-dimensional arithmetic scheme. Suppose that each of the
generic points $\eta \in B$ satisfies one of the following properties:
\begin{enumerate}
\item[a)] $\fchar \kappa (\eta) = p > 0$;
\item[b)] $\fchar \kappa (\eta) = 0$, and $\kappa (\eta)/\mathbb{Q}$ is an abelian
number field.
\end{enumerate}
Then $B \in \mathcal{C} (\mathbb{Z})$.
\begin{proof}
We verify that such a scheme can be obtained from $\Spec \mathcal{O}_F$ for
an abelian number field $F/\mathbb{Q}$, or a curve over a finite field $C/\mathbb{F}_q$,
using the operations $\mathcal{C}0)$, $\mathcal{C}1)$, $\mathcal{C}2)$ which
appear in the definition of $\mathcal{C} (\mathbb{Z})$.
Thanks to $\mathcal{C}0)$, we can assume that $B$ is reduced. Consider the
normalization $\nu\colon B' \to B$. This is a birational morphism, so there
exist open dense subsets $U' \subseteq B'$ and $U \subseteq B$ such that
$\left.\nu\right|_{U'}\colon U' \xrightarrow{\cong} U$. Now $B\setminus U$
is zero-dimensional, and therefore $B\setminus U \in \mathcal{C} (\mathbb{Z})$ by
the previous lemma. Thanks to $\mathcal{C}2)$, in order to conclude that
$B \in \mathcal{C} (\mathbb{Z})$, it suffices to check that $U' \in \mathcal{C} (\mathbb{Z})$.
Now $U'$ is a finite disjoint union of normal integral schemes, so according
to $\mathcal{C}1)$ we can assume that $U'$ is integral. Consider the generic
point $\eta \in U'$ and the residue field $F = \kappa (\eta)$. There are two
cases to consider.
\begin{enumerate}
\item[a)] If $\fchar F = p > 0$, then $U'$ is a curve over a finite field,
so it lies in $\mathcal{C} (\mathbb{Z})$.
\item[b)] If $\fchar F = 0$, then by our assumptions, $F/\mathbb{Q}$ is an abelian
number field.
We note that if $V' \subseteq U'$ is an affine open neighborhood of
$\eta$, then $U'\setminus V' \in \mathcal{C} (\mathbb{Z})$ by the previous
lemma. Therefore, we can assume without loss of generality that $U'$ is
affine.
We have $U' = \Spec \mathcal{O}$, where $\mathcal{O}$ is a finitely
generated integrally closed domain. This means that
$\mathcal{O}_F \subseteq \mathcal{O} = \mathcal{O}_{F,S}$ for a finite set
of places $S$. Now $U' = \Spec \mathcal{O}_F \setminus S$, and
$S \in \mathcal{C} (\mathbb{Z})$, so everything reduces to the case of
$U' = \Spec \mathcal{O}_F$, which is in $\mathcal{C} (\mathbb{Z})$. \qedhere
\end{enumerate}
\end{proof}
\end{proposition}
\begin{remark}
Schemes like the above were considered by Jordan and Poonen in
\cite{Jordan-Poonen-2020}, where the authors write down a special value
formula for $s = 1$ that generalizes the classical class number
formula. Namely, they consider the case where $B$ is reduced and affine,
but without requiring $\kappa (\eta)/\mathbb{Q}$ to be abelian.
\end{remark}
\begin{example}
If $B = \Spec \mathcal{O}$ for a nonmaximal order
$\mathcal{O} \subset \mathcal{O}_F$, where $F/\mathbb{Q}$ is an abelian number field,
then our formalism gives a cohomological interpretation of the special values
of $\zeta_\mathcal{O} (s)$ at $s = n < 0$. This already seems to be a new
result.
\end{example}
\begin{definition}
\label{dfn:B-cellular-scheme}
Let $X \to B$ be a $B$-scheme. We say that $X$ is \textbf{$B$-cellular} if it
admits a filtration by closed subschemes
\begin{equation}
\label{eqn:cellular-decomposition}
X = Z_N \supseteq Z_{N-1} \supseteq \cdots \supseteq Z_0 \supseteq Z_{-1} = \emptyset
\end{equation}
such that $Z_i\setminus Z_{i-1} \cong \coprod_j \AA^{r_{i,j}}_B$ is a finite
union of affine $B$-spaces.
\end{definition}
For example, projective spaces $\mathbb{P}^r_B$ and, in general, Grassmannians
$\Gr (k,\ell)_B$ are cellular. Many interesting examples of cellular schemes
arise from actions of algebraic groups on varieties and the Bia\l{}ynicki-Birula
theorem; see \cite{Wendt-2010} and \cite{Brosnan-2005}.
\begin{proposition}
\label{prop:cellular-schemes-in-C(Z)}
Let $X$ be a $B$-cellular arithmetic scheme, where $B \in \mathcal{C} (\mathbb{Z})$,
and $X_{\text{\it red},\mathbb{C}}$ is smooth and quasi-projective. Then
$X \in \mathcal{C} (\mathbb{Z})$.
\begin{proof}
Considering the corresponding cellular decomposition
\eqref{eqn:cellular-decomposition}, we pass to open complements
$U_i = X\setminus Z_i$ to obtain a filtration
$$X = U_{-1} \supseteq U_0 \supseteq \cdots \supseteq U_{N-1} \supseteq U_N = \emptyset,$$
where $U_{i,\mathbb{C}}$ are smooth and quasi-projective, being \emph{open}
subvarieties in $X_\mathbb{C}$. Now we have closed-open decompositions
$\coprod_j \AA^{r_{i,j}}_B \hookrightarrow U_i \hookleftarrow U_{i+1}$,
and the claim follows by induction on the length of the cellular
decomposition, using operations $\mathcal{C}1)$--$\mathcal{C}3)$.
\end{proof}
\end{proposition}
As a corollary of the above, we obtain the following result, stated in the
introduction.
\begin{theorem}
Let $B$ be a one-dimensional arithmetic scheme satisfying the assumptions of
Proposition~\ref{prop:particular-cases-1-dim-base}. If $X$ is a $B$-cellular
arithmetic scheme with smooth and quasi-projective fiber $X_{\text{\it red},\mathbb{C}}$, then
Conjectures $\mathbf{VO} (X,n)$ and $\mathbf{C} (X,n)$ hold unconditionally
for any $n < 0$.
\begin{proof}
This follows from Propositions~\ref{prop:C(X,n)-holds-for-C(Z)},
\ref{prop:particular-cases-1-dim-base}, and
\ref{prop:cellular-schemes-in-C(Z)}.
\end{proof}
\end{theorem}
\begin{appendices}
\section{Determinants of complexes}
\label{app:determinants}
Here we give a brief overview of the determinants of complexes. The original
construction goes back to Knudsen and Mumford \cite{Knudsen-Mumford-1976}, and
useful expositions can be found in
\cite[Appendix~A]{Gelfand-Kapranov-Zelevinsky-1994} and \cite[\S
2.1]{Kato-1993}.
For our purposes, let $R$ be an integral domain.
\begin{definition}
Denote by $\mathcal{P}_\text{\it is} (R)$ the category of
\textbf{graded invertible $R$-modules}. It has as objects $(L,r)$, where $L$
is an invertible $R$-module (i.e. projective of rank $1$) and $r \in \mathbb{Z}$. The
morphisms in this category are given by
\[ \Hom_{\mathcal{P}_\text{\it is} (R)} ((L,r), (M,s)) = \begin{cases}
\Isom_R (L,M), & r = s, \\
\emptyset, & r \ne s.
\end{cases} \]
This category is equipped with tensor products
$$(L,r) \otimes_R (M,s) = (L\otimes_R M, r + s)$$
with (graded) commutativity isomorphisms
\[
(L,r) \otimes_R (M,s)
\xrightarrow{\cong}
(M,s) \otimes_R (L,r),
\quad
\ell \otimes m \mapsto (-1)^{rs}\,m\otimes \ell.
\]
\end{definition}
The unit object with respect to this product is $(R,0)$, and for each
$(L,r) \in \mathcal{P}_\text{\it is} (R)$ the inverse is given by $(L^{-1}, -r)$ where
$L^{-1} = \underline{\Hom}_R (L,R)$. The canonical evaluation morphism
$L \otimes_R \underline{\Hom}_R (L,R) \to R$ induces an isomorphism
$$(L,r) \otimes_R (L^{-1}, -r) \cong (R,0).$$
\begin{definition}
Denote by $\mathcal{C}_\text{\it is} (R)$ the category whose objects are finitely
generated projective $R$-modules and whose morphisms are isomorphisms.
For $A \in \mathcal{C}_\text{\it is} (R)$ we define the corresponding determinant by
\begin{equation}
\label{eqn:det-proj-module}
\det_R (A) = \Bigl(\bigwedge^{\rk_R A}_R A, \rk_R A\Bigr) \in \mathcal{P}_\text{\it is} (R).
\end{equation}
Here $\rk_R A$ is the rank of $A$, so that the top exterior power
$\bigwedge^{\rk_R A}_R A$ is an invertible $R$-module.
\end{definition}
This yields a functor
$\det_R\colon \mathcal{C}_\text{\it is} (R) \to \mathcal{P}_\text{\it is} (R)$.
For $(L,r) \in \mathcal{P}_\text{\it is} (R)$ we usually forget about $r$ and treat the
determinant as an invertible $R$-module.
The main result of \cite[Chapter~I]{Knudsen-Mumford-1976} is that this
construction can be generalized to complexes and morphisms in the derived
category.
\begin{definition}
Let $\mathbf{D} (R)$ be the derived category of the category of
$R$-modules. Recall that a complex $A^\bullet$ is \textbf{perfect} if it is
quasi-isomorphic to a bounded complex of finitely generated projective
$R$-modules. We denote by $\mathcal{P}\!\text{\it arf}_\text{\it is} (R)$ the subcategory of $\mathbf{D} (R)$
whose objects consist of perfect complexes, and whose morphisms are
quasi-isomorphisms of complexes.
\end{definition}
\begin{theorem}[Knudsen--Mumford]
The determinant \eqref{eqn:det-proj-module} extends to perfect complexes of
$R$-modules as a functor
$$\det_R\colon \mathcal{P}\!\text{\it arf}_\text{\it is} (R) \to \mathcal{P}_\text{\it is} (R),$$
satisfying the following properties.
\begin{itemize}
\item $\det_R (0) = (R,0)$.
\item For a distinguished triangle of complexes in $\mathcal{P}\!\text{\it arf}_\text{\it is} (R)$
\[ A^\bullet \xrightarrow{u}
B^\bullet \xrightarrow{v}
C^\bullet \xrightarrow{w} A^\bullet [1] \]
there is a canonical isomorphism
\[ i_R (u,v,w)\colon \det_R A^\bullet \otimes_R \det_R C^\bullet
\xrightarrow{\cong} \det_R B^\bullet. \]
\item In particular, there exist canonical isomorphisms
\[ \det_R (A^\bullet \oplus B^\bullet) \cong
\det_R (A^\bullet) \otimes_R \det_R (B^\bullet). \]
\item For the triangles
\[ \begin{tikzcd}[column sep=1em]
A^\bullet \ar{r}{id} & A^\bullet \ar{r} & 0^\bullet \ar{r} & A^\bullet [1] \\[-2em]
0^\bullet \ar{r} & A^\bullet \ar{r}{id} & A^\bullet \ar{r} & 0^\bullet [1]
\end{tikzcd} \]
the isomorphism $i_R$ comes from the canonical isomorphism
$\det_R A^\bullet \otimes_R (R,0) \cong \det_R A^\bullet$.
\item For an isomorphism of distinguished triangles
\[ \begin{tikzcd}
A^\bullet \ar{r}{u}\ar{d}{f}[swap]{\cong} & B^\bullet \ar{r}{v}\ar{d}{g}[swap]{\cong} & C^\bullet \ar{r}{w}\ar{d}{h}[swap]{\cong} & A^\bullet [1]\ar{d}{f[1]}[swap]{\cong} \\
A'^\bullet \ar{r}{u'} & B'^\bullet \ar{r}{v'} & C'^\bullet \ar{r}{w'} & A'^\bullet [1]
\end{tikzcd} \]
the diagram
\[ \begin{tikzcd}[column sep=5em]
\det_R A^\bullet \otimes_R \det_R C^\bullet \ar{r}{i_R (u,v,w)}[swap]{\cong}\ar{d}{\det_R (f) \otimes \det_R (h)}[swap]{\cong} & \det_R B^\bullet\ar{d}{\det_R (g)}[swap]{\cong} \\
\det_R A'^\bullet \otimes_R \det_R C'^\bullet \ar{r}{i_R (u',v',w')}[swap]{\cong} & \det_R B'^\bullet
\end{tikzcd} \]
is commutative.
\item The determinant is compatible with base change: given a ring
homomorphism $R\to S$, there is a natural isomorphism
\[ \det_S (A^\bullet \otimes_R^\mathbf{L} S)
\xrightarrow{\cong}
(\det_R A^\bullet) \otimes_R S. \]
Moreover, this isomorphism is compatible with $i_R$ and $i_S$.
\item If $A^\bullet$ is a bounded complex where each object $A^i$ is perfect
(i.e. admits a finite length resolution by finitely generated projective
$R$-modules), then
$$\det_R A^\bullet \cong \bigotimes_{i\in \mathbb{Z}} (\det_R A^i)^{(-1)^i}.$$
If each $A^i$ is already a finitely generated projective $R$-module, then
$\det_R A^i$ in the above formula is given by \eqref{eqn:det-proj-module}.
\item If the cohomology modules $H^i (A^\bullet)$ are perfect, then
\begin{equation}
\label{eqn:det-in-terms-of-cohomology}
\det_R A^\bullet \cong
\bigotimes_{i\in \mathbb{Z}} (\det_R H^i (A^\bullet))^{(-1)^i}.
\end{equation}
\end{itemize}
\end{theorem}
We refer the reader to \cite{Knudsen-Mumford-1976} for the actual construction
and proofs.
A particularly simple case of interest is when $R = \mathbb{Z}$ and all cohomology
groups $H^i (A^\bullet)$ are finite.
\begin{lemma}
\label{lemma:determinant-for-torsion-cohomology}
~
\begin{enumerate}
\item[1)] Let $A$ be a finite abelian group. Then
\[ (\det_\mathbb{Z} A) \subset (\det_\mathbb{Z} A) \otimes \mathbb{Q}
\cong \det_\mathbb{Q} (A \otimes \mathbb{Q}) = \det_\mathbb{Q} (0) \cong \mathbb{Q} \]
corresponds to the fractional ideal $\frac{1}{\# A} \mathbb{Z} \subset \mathbb{Q}$.
\item[2)] In general, let $A^\bullet$ be a perfect complex of abelian groups
such that the cohomology groups $H^i (A^\bullet)$ are all finite. Then
$\det_\mathbb{Z} A^\bullet$ corresponds to the fractional ideal
$\frac{1}{m}\,\mathbb{Z} \subset \mathbb{Q}$, where
$$m = \prod_{i\in \mathbb{Z}} |H^i (A^\bullet)|^{(-1)^i}.$$
\end{enumerate}
\begin{proof}
Since $\det_\mathbb{Z} (A\oplus B) \cong \det_\mathbb{Z} A \otimes \det_\mathbb{Z} B$, in
part 1) it suffices to consider the case of a cyclic group
$A = \mathbb{Z}/m\mathbb{Z}$. Using the resolution
\[ \mathbb{Z}/m\mathbb{Z} [0] \cong \Bigl[
\mathop{m\mathbb{Z}}_{\text{deg.\,}-1} \hookrightarrow
\mathop{\mathbb{Z}}_{\text{deg.\,}0}
\Bigr], \]
we calculate
$$\det_\mathbb{Z} (\mathbb{Z}/m\mathbb{Z}) \cong \mathbb{Z} \otimes (m\mathbb{Z})^{-1} \cong (m\mathbb{Z})^{-1},$$
which corresponds to $\frac{1}{m}\,\mathbb{Z}$ in $\mathbb{Q}$.
Part 2) follows directly from 1) and \eqref{eqn:det-in-terms-of-cohomology}.
\end{proof}
\end{lemma}
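For example, if $A^\bullet$ is a perfect complex of abelian groups with
$H^0 (A^\bullet) \cong \mathbb{Z}/2\mathbb{Z}$, $H^1 (A^\bullet) \cong \mathbb{Z}/6\mathbb{Z}$, and
$H^i (A^\bullet) = 0$ otherwise, then $m = 2\cdot 6^{-1} = 1/3$, and
$\det_\mathbb{Z} A^\bullet$ corresponds to the fractional ideal
$\frac{1}{m}\,\mathbb{Z} = 3\mathbb{Z} \subset \mathbb{Q}$.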
\end{appendices}
\bibliographystyle{abbrv}
\section*{Program Summary}
\section{Introduction}
With the recent discovery of a bosonic resonance
\cite{Atlas:2012gk,CMS:2012gu} showing all the characteristics of the
SM Higgs boson a long search might soon come to a successful end. In
contrast there are no hints for a signal of supersymmetric (SUSY)
particles or particles predicted by any other extension of the
standard model (SM)
\cite{:2012rz,Chatrchyan:2012jx,CMS-PAS-SUS-11-022,CMS-PAS-SUS-12-005,:2012mfa}.
Therefore, large areas of the parameter space of the simplest SUSY
models are excluded. The allowed mass spectra as well as the best fit
mass values to the data are pushed to higher and higher values
\cite{pMSSM}. This has led to an increasing interest in the study of
SUSY models which provide new features. For instance, models with
broken $R$-parity \cite{rpv,rpvsearches} or compressed spectra
\cite{compressed} might be able to hide much better at the LHC, while
for other models heavy mass spectra are a much more natural feature
than is the case in the minimal supersymmetric standard model
(MSSM) \cite{finetuning}.
However, bounds on the masses and couplings of beyond the SM
(BSM) models follow not only from direct searches at colliders. New
particles also have an impact on SM processes
via virtual quantum corrections, leading in many
instances to sizable deviations from the SM expectations. This holds
in particular for the anomalous magnetic moment of the muon
\cite{Stockinger:2006zn} and
processes which are highly suppressed in the SM.
The latter are mainly lepton flavor violating (LFV) or decays involving
quark flavor changing neutral currents (qFCNC). While the prediction
of LFV decays in the SM is many orders of magnitude below the
experimental sensitivity \cite{Cheng:1985bj}, qFCNC is
experimentally well established. For instance, the observed rate of $b
\to s\gamma$ is in good agreement with the SM expectation and this
observable has put for several years strong
constraints on qFCNCs beyond the SM \cite{bsgamma}.
The experiments at the LHC have now reached a sensitivity to test also
the SM prediction for BR($B^0_s\to\mu\bar\mu$) as well as BR($B^0_d\to
\mu \bar{\mu}$) \cite{Buras:2012ru}
\begin{eqnarray}
\BRx{B^0_s \to \mu \bar{\mu}}_{\text{SM}} &=& \scn{(3.23\pm 0.27)}{-9} \label{eq:Bsintro1}, \\
\BRx{B^0_d \to \mu \bar{\mu}}_{\text{SM}} &=& \scn{(1.07\pm 0.10)}{-10}. \label{eq:Bsintro2}
\end{eqnarray}
Using the measured width difference of the $B^0_s$ system, the time-integrated
branching ratio, which should be compared with experiment, is \cite{Buras:2013uqa}
\begin{eqnarray}
\BRx{B^0_s \to \mu \bar{\mu}}_{\text{theo}} &=& \scn{(3.56\pm 0.18)}{-9} \, .
\end{eqnarray}
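The conversion between the two SM numbers above uses the relative width
difference $y_s = \Delta\Gamma_s/(2\Gamma_s)$ of the $B^0_s$ system via
$\text{BR}_\text{theo} = \text{BR}_\text{SM}/(1-y_s)$, valid when, as in the SM,
only the heavy mass eigenstate decays to the dilepton final state. A minimal
numerical sketch; the value of $y_s$ below is an assumed illustrative input,
not quoted in the text:

```python
# Time-integrated BR(Bs -> mu mu) from the "instantaneous" SM prediction,
# using BR_theo = BR_SM / (1 - y_s). The value of y_s is an assumed
# illustrative input.

def time_integrated_br(br_instant, y_s):
    return br_instant / (1.0 - y_s)

br_sm = 3.23e-9   # SM prediction, see above
y_s = 0.088       # assumed DeltaGamma_s / (2 Gamma_s)

br_theo = time_integrated_br(br_sm, y_s)   # about 3.5e-9
```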
Recently, LHCb reported the first evidence for $B^0_s \to \mu
\bar{\mu}$. The observed rate \cite{LHCb:2012ct}
\begin{equation}
\BRx{B^0_s \to \mu \bar{\mu}} = (3.2^{+1.5}_{-1.2})\times 10^{-9}
\end{equation}
agrees nicely with the SM prediction. For $\BRx{B^0_d \to \mu \bar{\mu}}$
the current upper bound of $9.4 \cdot 10^{-10}$
is already of the same order as the SM expectation.
This leads to new constraints for BSM models and each model has to be
confronted with these measurements. So far, there exist several public
tools which can calculate
$\BRx{B^0_{s,d}\to \ell\bar{\ell}}$ as well as other observables in the context of the MSSM
or partially also for the next-to-minimal supersymmetric standard
model (NMSSM) \cite{NMSSM}: {\tt superiso} \cite{superiso}, {\tt
SUSY\_Flavor} \cite{susyflavor}, {\tt NMSSM-Tools} \cite{NMSSMTools},
{\tt MicrOmegas} \cite{MicrOmegas} or {\tt SPheno} \cite{spheno}.
However, for more complicated SUSY models none of the available tools
provides the possibility to calculate
these decays easily. This gap is now closed by the interplay of the {\tt
Mathematica} package {\tt SARAH}\xspace \cite{sarah} and the spectrum generator
{\tt SPheno}\xspace. {\tt SARAH}\xspace
already has many SUSY models incorporated but allows also an easy and
efficient implementation of new models. For all of these models {\tt SARAH}\xspace
can generate new modules for {\tt SPheno}\xspace for a comprehensive numerical
evaluation. This functionality is extended, as described in this paper, by a
full 1-loop calculation of $B^0_{s,d}\to\ell\bar{\ell}$.
The rest of the paper is organized as follows: in
sec.~\ref{sec:analytical} we recall briefly the analytical calculation
for BR($B^0_{s,d}\to\ell\bar{\ell}$). In
sec.~\ref{sec:implementation} we discuss the implementation of this
calculation in {\tt SARAH}\xspace and {\tt SPheno}\xspace before we conclude in
sec.~\ref{sec:conclusion}. The appendix contains more information
about the calculation and generic results for the amplitudes.
\section{Calculation of BR($B^0_{s,d}\to\ell\bar{\ell}$)}
\label{sec:analytical}
In the SM this decay was first calculated in Ref.~\cite{Inami:1980fz},
in the analogous context of kaons. The higher order corrections were
first presented in \cite{Buchalla:1993bv}; see also
\cite{Misiak:1999yg}. In the context of supersymmetry this was
considered in \cite{bsmumu-susy}. See also the interesting correlation
between BR$(B_s^0\to\mu\bar\mu)$ and $(g-2)_\mu$ \cite{Dedes:2001fv}.
We present briefly the main steps of the calculation of BR($B^0_{q}\to
\ell_k\bar{\ell}_l$) with $q=s,d$. We follow
closely the notation of ref.~\cite{Dedes:2008iw}. The effective
Hamiltonian can be parametrized by
\begin{eqnarray}
\label{eq:effectiveH} \mathcal{H}&=&\frac{1}{16\pi^2}
\sum_{X,Y=L,R}\kl{C_{SXY}\mathcal{O}_{SXY}+C_{VXY}\mathcal{O}_{VXY}+C_{TX}\mathcal{O}_{TX}}
\, ,
\end{eqnarray}
with the Wilson coefficients $C_{SXY},C_{VXY},C_{TX}$ corresponding to
the scalar, vector and tensor operators
\begin{equation}
\mathcal{O}_{SXY} = (\bar q_j P_X q_i)(\bar \ell_l P_Y \ell_k) \,, \hspace{0.5cm}
\mathcal{O}_{VXY} = (\bar q_j \gamma^\mu P_X q_i)(\bar \ell_l \gamma_\mu P_Y
\ell_k) \,, \hspace{0.5cm}
\mathcal{O}_{TX} = (\bar q_j \sigma^{\mu\nu} P_X q_i)(\bar \ell_l \sigma_{\mu\nu} \ell_k) \, .
\end{equation}
$P_L$ and $P_R$ are the projection operators onto left- and right-handed
states, respectively. The expectation value of the axial vector matrix
element is defined as
\begin{eqnarray}
\label{eq:fBs}
\langle 0\left| \bar b \gamma^\mu \gamma^5 q \right| B^0_q(p)\rangle
&\equiv& ip^\mu f_{B^0_q} \, .
\end{eqnarray}
Here, we introduced the meson decay constants $f_{B^0_q}$ which can be
obtained from lattice QCD simulations \cite{Laiho:2009eu}. The current
values for $B^0_s$ and $B^0_d$ are given by \cite{Davies:2012qf}
\begin{equation}
\label{eq:fBsValue}
f_{B^0_s} = (227\pm 8)~{\text{Me}\hspace{-0.05cm}\text{V}} \,,\hspace{1cm} f_{B^0_d} = (190\pm 8)~{\text{Me}\hspace{-0.05cm}\text{V}} \, .
\end{equation}
Since the momentum $p$ of the meson is the only four-vector available,
the matrix element in \qref{eq:fBs} can only depend on
$p^\mu$. The incoming momenta of the $b$ antiquark and the $s$ (or $d$) quark are $p_1$ and $p_2$, respectively, where $p=p_1+p_2$. Contracting \qref{eq:fBs} with $p_\mu$ and using the
equations of motion $\bar b \slashed p_1=-\bar b m_b$ and $\slashed
p_2 q=m_q q$ leads to an
expression for the pseudoscalar current
\begin{eqnarray}
\label{eq:fBspseudo}
\langle 0 \left| \bar b \gamma^5 q\right| B^0_q(p) \rangle &=& - i \frac{M_{B_q^0}^2 f_{B^0_q}}{m_b+m_q} \, .
\end{eqnarray}
The vector and scalar currents vanish
\begin{equation}
\label{eq:vanish}
\langle 0\left| \bar b \gamma^\mu q \right| B^0_q(p)\rangle=
\langle 0 \left| \bar b q\right| B^0_q(p)
\rangle =0 \, .
\end{equation}
From eqs.~(\ref{eq:fBs}) and (\ref{eq:fBspseudo}) we obtain
\begin{equation}
\langle 0 \left| \bar b \gamma^\mu P_{L/R} q \right| B^0_q(p)\rangle =
\mp \frac i2 p^\mu f_{B^0_q} \label{eq:fBsPLR1} \, , \hspace{1cm}
\langle 0 \left| \bar b P_{L/R} q
\right| B^0_q(p)\rangle = \pm \frac i2 \frac {M_{B^0_q}^2
f_{B^0_q}}{m_b+m_q} \, .
\end{equation}
In general, the matrix element $\mathcal M$ is a function of the form
factors $F_S,F_P,F_V,F_A$ of the scalar, pseudoscalar, vector and
axial-vector current and can be expressed by
\begin{equation}
\label{eq:matrixelementBs}
(4\pi)^2\mathcal M = F_S \bar \ell \ell + F_P \bar \ell \gamma^5 \ell
+ F_V p_\mu \bar \ell
\gamma^\mu \ell + F_A p_\mu \bar \ell \gamma^\mu \gamma^5 \ell \, .
\end{equation}
Note that there is no way of building an antisymmetric 2-tensor out of
just one vector $p^\mu$. The matrix element of the tensor operator
$\mathcal{O}_{TX}$ must therefore vanish. The form factors can be
expressed by linear combinations of the Wilson coefficients of
eq.~(\ref{eq:effectiveH})
\begin{eqnarray}
F_S &=& \frac i4 \frac{M_{B^0_q}^2 f_{B^0_q}}{m_b+m_q} \kl{ C_{SLL} +
C_{SLR} - C_{SRR}-C_{SRL}},\label{eq:formfactorsBs1}\\
F_P &=& \frac i4 \frac{M_{B^0_q}^2 f_{B^0_q}}{m_b+m_q} \kl{ -C_{SLL} +
C_{SLR} - C_{SRR}+C_{SRL}}, \label{eq:formfactorsBs2}\\
F_V &=& -\frac i4 f_{B^0_q} \kl{ C_{VLL} + C_{VLR} - C_{VRR}-C_{VRL}}, \label{eq:formfactorsBs3}\\
F_A &=& -\frac i4 f_{B^0_q} \kl{ -C_{VLL} + C_{VLR} - C_{VRR}+C_{VRL}}.\label{eq:formfactorsBs4}
\end{eqnarray}
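For numerical work these linear combinations translate directly into code.
A minimal sketch (a hypothetical Python helper, not part of any cited code;
the Wilson coefficients would come from the loop calculation):

```python
# Form factors F_S, F_P, F_V, F_A from the Wilson coefficients, a direct
# transcription of the four equations above. C is a dict with keys
# 'SLL', 'SLR', 'SRL', 'SRR', 'VLL', 'VLR', 'VRL', 'VRR'; masses and the
# decay constant are in GeV.

def form_factors(C, M_B, f_B, m_b, m_q):
    pref = 0.25j * M_B**2 * f_B / (m_b + m_q)
    F_S = pref * ( C['SLL'] + C['SLR'] - C['SRR'] - C['SRL'])
    F_P = pref * (-C['SLL'] + C['SLR'] - C['SRR'] + C['SRL'])
    F_V = -0.25j * f_B * ( C['VLL'] + C['VLR'] - C['VRR'] - C['VRL'])
    F_A = -0.25j * f_B * (-C['VLL'] + C['VLR'] - C['VRR'] + C['VRL'])
    return F_S, F_P, F_V, F_A
```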
The main task is to calculate the different Wilson coefficients for a
given model. These Wilson coefficients receive at the 1-loop level
contributions from
various wave, penguin
and box diagrams, see Figures~\ref{fig:wave}-\ref{fig:box2} in
\ref{app:amplitudes}. Furthermore, in some models these decays could
also happen already at tree-level \cite{Dreiner:2006gu}. The
amplitudes for all possible, generic diagrams which can contribute to
the Wilson coefficients have been calculated with {\tt FeynArts}\xspace/{\tt FormCalc}\xspace
\cite{feynarts} and the results are listed in
\ref{app:amplitudes}. This calculation has been performed in the
$\overline{\text{DR}}$ scheme and 't Hooft gauge. How these results
are used together with {\tt SARAH}\xspace and {\tt SPheno}\xspace to get numerical results
will be discussed in the next section.
After the calculation of the form factors, the squared amplitude is
\begin{align}
\label{eq:squaredMBsllp}
(4\pi)^4 \abs{\ampM}^2&=2\abs{F_S}^2\kl{M_{B^0_q}^2-(m_\ell+m_k)^2}
+2\abs{F_P}^2\kl{M_{B^0_q}^2-(m_\ell-m_k)^2}
\\
\nonumber &+ 2\abs{F_V}^2\kl{M_{B^0_q}^2(m_k-m_\ell)^2-(m_k^2-m_\ell^2)^2} \\
\nonumber &+ 2\abs{F_A}^2\kl{M_{B^0_q}^2(m_k+m_\ell)^2-(m_k^2-m_\ell^2)^2} \\
\nonumber &+ 4\Re (F_S F_V^*) (m_\ell-m_k)\kl{M_{B^0_q}^2+(m_k+m_\ell)^2} \\
\nonumber &+ 4\Re (F_P F_A^*) (m_\ell+m_k)\kl{M_{B^0_q}^2-(m_k-m_\ell)^2} \, .
\end{align}
Here, $m_\ell$ and $m_k$ are the lepton masses. In the case $k=\ell$,
this expression simplifies to
\begin{eqnarray}
\label{eq:ampSquaredsimp}
\abs{\ampM}^2&=&\frac{2}{(16\pi^2)^2} \kl{(M_{B^0_q}^2-4 m_\ell^2)\abs{F_S}^2+
M_{B^0_q}^2 \abs{F_P+2m_\ell F_A}^2}.
\end{eqnarray}
Note that the result is independent of the form factor $F_V$ in this
limit. In the SM the leading 1-loop contributions proceed via
the exchange of virtual gauge bosons. They are thus helicity
suppressed. Furthermore, since these are flavor changing neutral
currents, they are GIM suppressed. The diagrams involving virtual
Higgs bosons are suppressed due to small Yukawa couplings. In BSM
scenarios these suppressions can be absent.
The branching ratio is then given by
\begin{equation}
\label{eq:Bsllpbranching}
\ensuremath{\text{BR}\,}(B_q^0\to \ell_k\bar{\ell}_l)=\frac{\tau_{B^0_q}}{16\pi}
\frac{\abs{\mathcal{M}}^2}{M_{B_q^0}}\sqrt{1-\kl{\frac{m_k+m_l}{M_{B_q^0}}}^2}\sqrt{1-\kl{\frac{m_k-m_l}{M_{B_q^0}}}^2}
\end{equation}
where $\tau_{B^0_q}$ is the lifetime of the meson.
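Combining eq.~(\ref{eq:ampSquaredsimp}) with eq.~(\ref{eq:Bsllpbranching})
for equal lepton flavors gives a short numerical recipe. A sketch in Python
with placeholder form-factor inputs ($\hbar$ converts the lifetime from
seconds to $\text{GeV}^{-1}$):

```python
import math

# BR(B0_q -> l+ l-) for equal lepton flavors, combining the simplified
# squared amplitude with the branching-ratio formula. All inputs in GeV
# except the lifetime tau_B (seconds); the form factors F_S, F_P, F_A
# are placeholders for the output of the loop calculation.

HBAR = 6.582119e-25  # GeV * s

def br_same_flavor(F_S, F_P, F_A, M_B, m_l, tau_B):
    m2 = 2.0 / (16.0 * math.pi**2)**2 * (
        (M_B**2 - 4.0 * m_l**2) * abs(F_S)**2
        + M_B**2 * abs(F_P + 2.0 * m_l * F_A)**2
    )
    phase_space = math.sqrt(1.0 - (2.0 * m_l / M_B)**2)
    return (tau_B / HBAR) / (16.0 * math.pi) * m2 / M_B * phase_space
```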
\section{Automatized calculation of $\ensuremath{B^0_{s,d}\to \ell \bar{\ell} \xspace}$}
\label{sec:implementation}
\subsection{Implementation in {\tt SARAH}\xspace and {\tt SPheno}\xspace}
{\tt SARAH}\xspace is the first `spectrum-generator-generator' on the market, which
means that it can generate Fortran source code for {\tt SPheno}\xspace to obtain a
full-fledged spectrum generator for models beyond the MSSM. The main
features of a {\tt SPheno}\xspace module written by {\tt SARAH}\xspace are a precise mass
spectrum calculation based on 2-loop renormalization group equations
(RGEs) and a full 1-loop calculation of the masses. Two-loop
results known for the MSSM can be included. Furthermore, the
decays of SUSY and Higgs particles are calculated, as well as
observables like $\ell_i \to \ell_j \gamma$, $\ell_i \to 3 \ell_j$, $b\to
s\gamma$, $\delta\rho$, $(g-2)$, or electric dipole moments. For more
information about the interplay between {\tt SARAH}\xspace and {\tt SPheno}\xspace we refer
the interested reader to Ref.~\cite{Staub:2011dp}.
Here we extend the list of observables by BR($B^0_s\to\ell\bar
{\ell}$) and BR($B^0_d\to \ell\bar{\ell}$). For this purpose, the
generic tree-level and 1-loop amplitudes calculated with
{\tt FeynArts}\xspace/{\tt FormCalc}\xspace given in \ref{app:amplitudes} have been implemented
in {\tt SARAH}\xspace. When {\tt SARAH}\xspace generates the output for {\tt SPheno}\xspace it checks for
all possible field combinations which can populate the generic
diagrams in the given model. This information is then used to generate
Fortran code for a numerical evaluation of all of these diagrams. The
amplitudes are then combined to the Wilson coefficients which again
are used to calculate the form factors
eqs.~(\ref{eq:formfactorsBs1})-(\ref{eq:formfactorsBs4}). The
branching ratio is finally calculated by using
eq.~(\ref{eq:Bsllpbranching}). Note that the known 2-loop QCD corrections
of Refs.~\cite{Buchalla:1993bv,Misiak:1999yg,Bobeth:2001jm} are
not included in this calculation.
The Wilson coefficients for $\ensuremath{B^0_{s,d}\to \ell \bar{\ell} \xspace}$ are
calculated at the scale $Q=160$~GeV by all modules generated by {\tt SARAH}\xspace,
as is done by default by {\tt SPheno}\xspace in the MSSM. Hence, the input
parameters for the calculation are the running SUSY masses and couplings at
this scale, obtained by a 2-loop RGE evolution down from the SUSY scale.
In the standard model gauge sector we use the running value of $\alpha_{em}$, the on-shell
Weinberg angle $\sin^2 \Theta_W = 1 -\frac{m_W^2}{m_Z^2}$ with $m_W$ calculated
from $\alpha_{em}(M_Z)$, $G_F$ and the $Z$ mass. In addition, the CKM matrix calculated
from the Wolfenstein parameters ($\lambda$, $A$, $\rho$, $\eta$) as well as
the running quark masses enter the calculation. To obtain the running SM parameters at $Q=160$~GeV
we use 2-loop standard model RGEs of Ref.~\cite{Arason:1991ic}.
The default SM values as well as the derived parameters are given in Tab.~\ref{tab:sm}.
Note that even if CP violation is not switched on in the calculation of the
SUSY spectrum, the phase of the CKM matrix is always taken into account in
these calculations. This is especially important for $B_d^0$
decays.
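For orientation, the size of the running effects can already be estimated at
1-loop. The sketch below evolves $\alpha_s$ from $M_Z$ to $Q=160$~GeV with
$n_f = 5$ active flavors; this is a rough illustration only, since the
generated code uses the full 2-loop RGEs of Ref.~\cite{Arason:1991ic}:

```python
import math

# One-loop running of alpha_s from M_Z to Q with n_f active flavors.
# Only a rough illustration of the size of the effect; the generated
# SPheno code uses the full 2-loop standard model RGEs.

def alpha_s_one_loop(q, alpha_s_mz=0.1190, mz=91.1876, nf=5):
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_s_mz / (1.0 + alpha_s_mz * b0 / (2.0 * math.pi)
                         * math.log(q / mz))

a_160 = alpha_s_one_loop(160.0)   # roughly 0.11
```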
\begin{table}[bt]
\centering
\small
\begin{tabular}{|l|l|l|l|l|}
\hline \hline
\multicolumn{5}{|c|}{default SM input parameters} \\
\hline
$\alpha^{-1}_{em}(M_Z) = 127.93 $ & $\alpha_s(M_Z) = 0.1190 $ & $G_F = 1.16639\cdot 10^{-5}~\text{GeV}^{-2}$
& $\rho = 0.135$ & $\eta = 0.349$ \\
$m_t^{pole} = 172.90~\text{GeV}$ & $M_Z^{pole} = 91.1876~\text{GeV} $ & $m_b(m_b) = 4.2~\text{GeV} $ & $\lambda = 0.2257 $ & $A = 0.814$ \\
\hline \hline
\multicolumn{5}{|c|}{derived parameters} \\
\hline
$m_t^{\overline{DR}} = 166.4~\text{GeV}$ &
$| V_{tb}^* V_{ts}| = 4.06\cdot 10^{-2} $ & $| V_{tb}^* V_{td}| = 8.12\cdot 10^{-3} $ & $m_W = 80.3893~\text{GeV} $ & $\sin^2 \Theta_W = 0.2228 $ \\
\hline
\end{tabular}
\caption{SM input values and derived parameters used by default for the numerical evaluation of $B^0_{s,d} \to \ell\bar{\ell}$ in {\tt SPheno}\xspace.}
\label{tab:sm}
\end{table}
All standard model parameters can be adjusted by using the
corresponding standard blocks
of the SUSY Les Houches Accord 2 (SLHA2) \cite{SLHA}.
Furthermore, the default input values for the
hadronic parameters given in Table~\ref{tab:input} are used. These
can be changed in the Les Houches input according to the Flavor Les Houches
Accord (FLHA) \cite{Mahmoudi:2010iz} using the following blocks:
\begin{verbatim}
Block FLIFE #
511 1.525E-12 # tau_Bd
531 1.472E-12 # tau_Bs
Block FMASS #
511 5.27950 0 0 # M_Bd
531 5.3663 0 0 # M_Bs
Block FCONST #
511 1 0.190 0 0 # f_Bd
531 1 0.227 0 0 # f_Bs
\end{verbatim}
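Such blocks are plain text and easy to read back in other tools. The sketch
below is a minimal, hypothetical reader for the three blocks shown above,
not part of {\tt SPheno}\xspace or {\tt SARAH}\xspace; a full SLHA/FLHA parser handles many more
cases:

```python
# Minimal reader for the FLIFE/FMASS/FCONST blocks shown above.
# Each data line is stored as {pdg_id: [numbers after the id]};
# comments after '#' are dropped. A hypothetical helper.

def read_flha_blocks(lines):
    blocks, current = {}, None
    for raw in lines:
        line = raw.split('#', 1)[0].strip()
        if not line:
            continue
        tokens = line.split()
        if tokens[0].lower() == 'block':
            current = tokens[1].upper()
            blocks[current] = {}
        elif current is not None:
            blocks[current][int(tokens[0])] = [float(t) for t in tokens[1:]]
    return blocks
```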
While {\tt SPheno}\xspace includes the
chiral resummation for the MSSM, this is not taken into account in the
routines generated by {\tt SARAH}\xspace because of its large model dependence.
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{Default hadronic parameters} \\
\hline
$m_{B^0_s} = 5.36677$ GeV & $f_{B^0_s} = 227(8)$ MeV & $\tau_{B^0_s} = 1.466(31)$ ps \\
$m_{B^0_d} = 5.27958$ GeV & $f_{B^0_d} = 190(8)$ MeV & $\tau_{B^0_d} = 1.519(7)$ ps \\
\hline
\end{tabular}
\caption{Hadronic input values used by default for the numerical evaluation of $B^0_{s,d} \to \ell\bar{\ell}$ in {\tt SPheno}\xspace.}
\label{tab:input}
\end{table}
\subsection{Generating and running the source code}
We describe briefly the main steps necessary to generate and run the
{\tt SPheno}\xspace code for a given model: after starting {\tt Mathematica} and
loading {\tt SARAH}\xspace, one only needs to load the desired model and
call the function that generates the {\tt SPheno}\xspace source code. For instance,
to get a {\tt SPheno}\xspace module for the B-L-SSM \cite{Khalil:2007dr,FileviezPerez:2010ek,O'Leary:2011yq}, use
\begin{verbatim}
<<[$SARAH-Directory]/SARAH.m;
Start["BLSSM"];
MakeSPheno[];
\end{verbatim}
{\tt MakeSPheno[]} first calculates all necessary information
(\textit{i.e.}\ vertices, mass matrices, tadpole equations, RGEs,
self-energies) and then exports this
information to Fortran code and writes all necessary auxiliary
functions needed to compile the code together with {\tt SPheno}\xspace. The entire
output is saved in the directory
\begin{verbatim}
[$SARAH-Directory]/Output/BLSSM/EWSB/SPheno/
\end{verbatim}
The content of this directory has to be copied into a new
subdirectory of {\tt SPheno}\xspace called {\tt BLSSM} and afterwards the code can
be compiled:
\begin{verbatim}
cp [$SARAH-Directory]/Output/BLSSM/EWSB/SPheno/* [$SPheno-Directory]/BLSSM/
cd [$SPheno-Directory]
make Model=BLSSM
\end{verbatim}
This creates a new binary {\tt SPhenoBLSSM} in the directory {\tt bin}
of {\tt SPheno}\xspace. To run the spectrum calculation a file called {\tt
LesHouches\!.in.BLSSM} containing all input parameters in the Les
Houches format has to be provided. {\tt SARAH}\xspace also writes a template for
such a file, which has been copied with the other files to {\tt
BLSSM/}. This example can be evaluated via
\begin{verbatim}
./bin/SPhenoBLSSM BLSSM/LesHouches.in.BLSSM
\end{verbatim}
and the output is written to {\tt SPheno.spc.BLSSM}. This file
contains all information such as the masses, mass matrices, decay widths
and branching ratios, and observables. For the $B^0_{s,d}\to \ell \bar{\ell}$
decays
the results are given twice for easier comparison: once for the full
calculation and once including only the SM contributions. All results
are written to the block {\tt SPhenoLowEnergy} in the spectrum file
using the following numbers:
\begin{center}
\begin{tabular}{|cl|cl|}
\hline
{\tt 4110} & $\text{BR}^{SM}(B^0_d\to e^+e^-)$ & {\tt 4111} & $\text{BR}^{full}(B^0_d\to e^+e^-)$ \\
{\tt 4220} & $\text{BR}^{SM}(B^0_d\to \mu^+\mu^-)$ & {\tt 4221} & $\text{BR}^{full}(B^0_d\to \mu^+\mu^-)$ \\
{\tt 4330} & $\text{BR}^{SM}(B^0_d\to \tau^+\tau^-)$ & {\tt 4331} & $\text{BR}^{full}(B^0_d\to \tau^+\tau^-)$ \\
{\tt 5110} & $\text{BR}^{SM}(B^0_s\to e^+e^-)$ & {\tt 5111} & $\text{BR}^{full}(B^0_s\to e^+e^-)$ \\
{\tt 5210} & $\text{BR}^{SM}(B^0_s\to \mu^+e^-)$ & {\tt 5211} & $\text{BR}^{full}(B^0_s\to \mu^+e^-)$ \\
{\tt 5220} & $\text{BR}^{SM}(B^0_s\to \mu^+\mu^-)$ & {\tt 5221} & $\text{BR}^{full}(B^0_s\to \mu^+\mu^-)$ \\
{\tt 5330} & $\text{BR}^{SM}(B^0_s\to \tau^+\tau^-)$ & {\tt 5331} & $\text{BR}^{full}(B^0_s\to \tau^+\tau^-)$ \\
\hline
\end{tabular}
\end{center}
Note that, for completeness and as a cross-check, we kept $\text{BR}^{SM}(B^0_s\to \mu^+e^-)$, which has to vanish.
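As an illustration of how such a spectrum file can be processed, the following {\tt Python} sketch extracts the {\tt SPhenoLowEnergy} block from SLHA-style output. The function name and the sample text are purely illustrative, not actual {\tt SPheno}\xspace output:

```python
# Minimal sketch (hypothetical helper, not part of SPheno): read the
# SPhenoLowEnergy block from an SLHA-style spectrum file.  The sample
# text below is illustrative only.
sample = """\
Block SPhenoLowEnergy # low-energy observables
 4220  1.06E-10  # BR^SM(B0_d -> mu+ mu-)
 4221  1.10E-10  # BR^full(B0_d -> mu+ mu-)
 5220  3.28E-09  # BR^SM(B0_s -> mu+ mu-)
 5221  3.45E-09  # BR^full(B0_s -> mu+ mu-)
"""

def read_low_energy(text):
    """Return {entry number: value} for the SPhenoLowEnergy block."""
    values, inside = {}, False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.lower().startswith("block"):
            # enter/leave the block of interest
            inside = stripped.lower().startswith("block sphenolowenergy")
            continue
        if inside and stripped and not stripped.startswith("#"):
            fields = stripped.split()
            values[int(fields[0])] = float(fields[1])
    return values

obs = read_low_energy(sample)
ratio = obs[5221] / obs[5220]   # full over SM prediction for B0_s -> mu+ mu-
```

The entry numbers correspond to the table above, so the full-over-SM ratio of any channel can be formed directly from the parsed dictionary.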
The same steps can be repeated for any other model implemented in
{\tt SARAH}\xspace, or the {\tt SUSY-Toolbox} scripts \cite{Staub:2011dp} can be
used for an automatic implementation of new models in {\tt SPheno}\xspace as well
as in other tools based on the {\tt SARAH}\xspace output.
\subsection{Checks}
We have performed several cross checks of the code generated by
{\tt SARAH}\xspace: the first, trivial check has been that we reproduce the known
SM results and that those agree with the full calculation in the limit of
heavy SUSY spectra. For the input parameters
of Tab.~\ref{tab:sm} we obtain $\text{BR}(B^0_s\to \mu^+ \mu^-)^{SM} = 3.28\cdot 10^{-9}$
and $\text{BR}(B^0_d\to \mu^+ \mu^-)^{SM} = 1.08\cdot 10^{-10}$ which are in good agreement
with eqs.~(\ref{eq:Bsintro1})-(\ref{eq:Bsintro2}).
Secondly, as mentioned in
the introduction, there are several codes which calculate these decays
for the MSSM or NMSSM. A detailed comparison of all of these codes is
beyond the scope of the presentation here and will
be given elsewhere \cite{comparison}.
However, a few comments are in order:
the code generated by {\tt SARAH}\xspace as well as most other codes usually show
the same behavior. Differences in the numerical
values calculated by the programs arise from different values
of the SM inputs. For instance, there is an especially strong
dependence on the value of the
electroweak mixing angle and, of course, on the hadronic parameters used in the
calculation \cite{Buras:2012ru}. In addition, these processes are
implemented with different accuracy in different tools: the treatment
of NLO QCD corrections \cite{Bobeth:2001jm}, chiral resummation
\cite{Crivellin:2011jt}, or SUSY box diagrams is not the
same. Therefore, we depict in Fig.~\ref{fig:comparison} a comparison
between {\tt SPheno 3.2.1}, {\tt Superiso 3.3} and {\tt SPheno by
SARAH} using the results normalized to the SM limit of each program.
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.5\linewidth]{FeynAmps/m0_R.pdf} \\
\includegraphics[width=0.5\linewidth]{FeynAmps/m0M12_R.pdf} \\
\includegraphics[width=0.5\linewidth]{FeynAmps/tb_R.pdf}
\caption{The top figure: $R=\text{BR}(B_s^0\to \mu^+
\mu^-)/\text{BR}(B_s^0\to \mu^+\mu^-)_{SM}$ for the constrained
MSSM and as function of $m_0$. The other parameters were set to
$M_{1/2}=140$~GeV, $\tan\beta=10$, $\mu>0$. In the middle $m_0$
and $M_{1/2}$ were varied simultaneously, while $\tan\beta=30$
was fixed. In the bottom figure we show $\log(R)$ as a function
of $\tan\beta$, while $m_0=M_{1/2} = 150$~GeV were kept fixed. In
all figures $A_0=0$ and $\mu>0$ were used. The color code is as
follows: {\tt Superiso 3.3} (dotted black), {\tt SPheno 3.2.1}
(dashed red) and {\tt SPheno}\xspace by {\tt SARAH}\xspace (solid blue).}
\label{fig:comparison}
\end{figure}
It is also possible to perform a self-consistency check: the
leading-order contribution has to be finite, which leads to non-trivial
relations among the amplitudes of all
wave and penguin diagrams given in \ref{sec:waveBapp} and \ref{sec:penguinB}.
Therefore, we can check these relations numerically by
varying the renormalization scale used in all loop integrals. The
dependence on this scale should cancel and the branching ratios should
stay constant. This is shown in Figure~\ref{fig:scaledependence}:
while single contributions can change by several orders of magnitude,
the sum of all of them is numerically very stable.
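The mechanism behind this stability can be sketched numerically: in the conventions used here, the renormalization scale enters $B_0(0,x,y)$ only through $\ln Q^2$, so differences of $B_0$ functions, as they occur in the finite combinations of amplitudes, are scale independent. A minimal {\tt Python} sketch (with the UV pole $\Delta$ subtracted, an assumption of this illustration):

```python
import math

def B0(x, y, Q2, Delta=0.0):
    # B0(0, x, y) in the conventions of the appendix, with the UV pole
    # Delta kept explicit (set to zero here, i.e. subtracted)
    return Delta + 1.0 + math.log(Q2 / y) + x / (x - y) * math.log(y / x)

# A single B0 shifts with the renormalization scale Q ...
b_low  = B0(1.0, 4.0, Q2=100.0**2)
b_high = B0(1.0, 4.0, Q2=1000.0**2)

# ... but differences of B0 functions, as they occur in the finite
# combinations of wave and penguin amplitudes, are Q independent:
d_low  = B0(1.0, 4.0, Q2=100.0**2)  - B0(2.0, 9.0, Q2=100.0**2)
d_high = B0(1.0, 4.0, Q2=1000.0**2) - B0(2.0, 9.0, Q2=1000.0**2)
```

The individual functions shift by $2\ln(Q_2/Q_1)$ between the two scales, while their difference is unchanged at machine precision.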
\begin{figure}[hbt]
\centering
\includegraphics[width=0.6\linewidth]{FeynAmps/renorm-plot-FA}
\caption{The figure shows $|\sum F_A|_{\text{penguin}}$ and $|\sum F_A|_{\text{wave}}$ as well as the sum of both, $|\sum F_A|$. The penguin and wave contributions have opposite signs, which interchange between $Q=10^2$~GeV and $Q=10^3$~GeV.}
\label{fig:scaledependence}
\end{figure}
\subsection{Non-supersymmetric models}
We have focused our discussion so far on SUSY models. However, even
though {\tt SARAH}\xspace is optimized for the study of SUSY models, it can also
handle non-SUSY models to some extent. The main drawback at the moment
for non-SUSY models is that the RGEs cannot be calculated,
because the results of Refs.~\cite{Martin:1993zk,Fonseca:2011vn} which
are used by {\tt SARAH}\xspace are not valid in this case. However, all other
calculations, like those of the vertices, mass matrices and
self-energies, do not use SUSY properties and therefore apply to
any model. Hence, it is also possible to generate {\tt SPheno}\xspace code for
these models which calculates $B^0_{s,d}\to\ell\bar{\ell}$. The main
difference in the calculation stems from the missing RGEs: the user
has to provide numerical values for all
parameters at the considered scale, which then enter the
calculation. We note that, in order to fully support non-supersymmetric
models with {\tt SARAH}\xspace, the calculation of the corresponding RGEs at the 2-loop
level will be included in {\tt SARAH}\xspace in the future \cite{nonsusyrges}.
\section{Conclusion}
\label{sec:conclusion}
We have presented a model independent implementation of the flavor
violating decays $B^0_{s,d} \to \ell\bar{\ell}$ in {\tt SARAH}\xspace and {\tt SPheno}\xspace. Our
approach provides the possibility to generate source code which
performs a full 1-loop calculation of these observables for any
model which can be implemented in {\tt SARAH}\xspace. It therefore addresses
the need to confront many BSM models in the future with the
increasingly stringent constraints coming from the measurements of $B^0_{s,d} \to
\ell\bar{\ell}$ at the LHC.
\section*{Acknowledgements}
W.P.\ thanks L.~Hofer for discussions. This work has been supported
by the Helmholtz alliance `Physics at the Terascale' and W.P.\ in part
by the DFG, project No. PO-1337/2-1. HKD acknowledges support from
BMBF grant 00160200.
\input{appendix.tex}
\input{lit.tex}
\end{document}
\section{Conventions}
\label{conventions}
\subsection{Passarino-Veltman integrals}
We use in the following the conventions of \cite{Pierce:1996zz}
for the Passarino-Veltman integrals. All Wilson coefficients appearing
in the following can be expressed by the integrals
\begin{eqnarray}
\label{eq:B0withPsq0}
B_0(0,x,y)&=& \Delta + 1 + \ln \kl{\frac{Q^2}{y}} +
\frac{x}{x-y}\ln \kl{\frac{y}{x}} \\
\Delta &=& \frac 2{4-D} - \gamma_E + \log 4\pi \\
B_1(x,y,z) &=& \frac 12 (z-y)\frac{B_0(x,y,z)-B_0(0,y,z)}{x}-\frac 12 B_0(x,y,z) \\
C_0(x,y,z) &=& \frac{1}{y-z} \left[ \frac{y}{x-y} \log \frac yx - \frac{z}{x-z} \log \frac zx \right] \label{eq:C0} \\
\nonumber C_{00}(x,y,z)
&=& \frac 14 \kl{1-\frac 1{y-z}\kl{ \frac {x^2\log x - y^2\log y}{x-y}-\frac{x^2\log x-z^2\log z}{x-z}}} \\
D_0(x,y,z,t) &=& \frac{C_0(x,y,z)-C_0(x,y,t)}{z-t} \label{eq:D0reduce} \\
\nonumber &=& -\left[ \frac{y\log \frac yx}{(y-x)(y-z)(y-t)} +
\frac{z\log \frac zx}{(z-x)(z-y)(z-t)}+\frac{t\log \frac
tx}{(t-x)(t-y)(t-z)} \right] \\
D_{00}(x,y,z,t) & = & -\frac 14\left[ \frac{y^2\log \frac yx}{(y-x)(y-z)(y-t)} +
\frac{z^2\log \frac zx}{(z-x)(z-y)(z-t)}+\frac{t^2\log \frac
tx}{(t-x)(t-y)(t-z)} \right].
\end{eqnarray}
Note that the conventions of ref.~\cite{Pierce:1996zz} (Pierce, Bagger
[PB]) differ from those presented in ref.~\cite{Dedes:2008iw}
(Dedes, Rosiek, Tanedo [DRT]). The box integrals are related by
\begin{eqnsub}
\label{eq:Dloopsall}
D_{0} &=& D_0^{(\text{PB})}=- D_0^{(DRT)}, \\
\label{eq:Dloopsall2}
D_{00}&=& D_{27}^{(\text{PB})}=- \frac 14 D_{2}^{(DRT)} \,.
\end{eqnsub}
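The reduction of $D_0$ to a difference of $C_0$ functions, eq.~(\ref{eq:D0reduce}), can be cross-checked numerically against the explicit symmetric expression. The following {\tt Python} sketch implements both forms for distinct non-zero mass arguments:

```python
import math

# Passarino-Veltman integrals in the conventions above, for distinct
# non-zero mass arguments.
def C0(x, y, z):
    # eq. (C0): C0(x, y, z)
    return (y/(x - y)*math.log(y/x) - z/(x - z)*math.log(z/x)) / (y - z)

def D0_reduced(x, y, z, t):
    # D0 via the C0 difference (first line of the D0 equation)
    return (C0(x, y, z) - C0(x, y, t)) / (z - t)

def D0_explicit(x, y, z, t):
    # D0 via the explicit symmetric expression (second line)
    return -(y*math.log(y/x)/((y - x)*(y - z)*(y - t))
             + z*math.log(z/x)/((z - x)*(z - y)*(z - t))
             + t*math.log(t/x)/((t - x)*(t - y)*(t - z)))

args = (1.3, 2.7, 4.1, 8.5)
```

Both evaluations agree to machine precision, as required by the partial-fraction identity underlying the reduction.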
\subsection{Massless limit of loop integrals}
In some amplitudes (i.e. penguin diagrams $(a-b)$, box diagram $(v)$) the
following combinations of loop integrals appear:
\begin{align}
I_1 &= B_0(s,M_{F1}^2,M_{F2}^2)+M_S^2 C_0(s,0,0,M_{F2}^2,M_{F1}^2,M_S^2), \\
I_2 &= C_0(0,0,0,M_{F2}^2,M_{F1}^2,M_{V2}^2) + M_{V1}^2 D_0(M_{F2}^2,M_{F1}^2,M_{V1}^2,M_{V2}^2).
\end{align}
The loop functions $B_0,C_0,D_0$ diverge for massless fermions (e.g. neutrinos in the MSSM) but
the expressions $I_1,I_2$ are finite. However, this limit must be taken analytically in order to avoid
numerical instabilities.
In a generalized form and in the limit of zero external momenta, $I_i$ can be expressed by
\begin{align}
I_1(a,b,c) &= B_0(0,a,b)+c C_0(0,0,0,a,b,c) \equiv B_0(0,a,b)+c C_0(a,b,c), \\
I_2(a,b,c,d) &= C_0(0,0,0,a,b,d) + c D_0(a,b,c,d) \equiv C_0(a,b,d)+c D_0(a,b,c,d).
\end{align}
Using eqs.~(\ref{eq:B0withPsq0}), (\ref{eq:C0}) and (\ref{eq:D0reduce}) we obtain in the limit $a\to 0$
\begin{align}
I_1(0,b,c) &= B_0(0,0,b) + c C_0(0,b,c) \\
&= \Delta + 1 - \log \frac b{Q^2} + c \frac 1{b-c}\log \frac cb \\
&= \Delta + 1 + \log Q^2 + \frac {c}{b-c} \log c + \kl{-1-\frac c{b-c}}\log
b
\end{align}
The term proportional to $\log b$ vanishes in the limit $b\to 0$
\begin{equation}
I_1(0,0,c) = \Delta + 1 - \log \frac c{Q^2}
\end{equation}
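This limit can be verified numerically by evaluating $I_1$ with small regulator masses and comparing to the analytic result. A minimal {\tt Python} sketch, assuming $\Delta=0$ and $Q^2=1$:

```python
import math

Q2 = 1.0   # renormalization scale squared; Delta (UV pole) set to zero

def B0(x, y):
    # B0(0, x, y) from eq. (B0withPsq0) with Delta = 0
    return 1.0 + math.log(Q2 / y) + x / (x - y) * math.log(y / x)

def C0(x, y, z):
    # eq. (C0)
    return (y/(x - y)*math.log(y/x) - z/(x - z)*math.log(z/x)) / (y - z)

def I1(a, b, c):
    # B0(0, a, b) + c C0(a, b, c)
    return B0(a, b) + c * C0(a, b, c)

c = 2.0
numerical = I1(1e-8, 1e-9, c)          # small regulator masses a, b
analytic  = 1.0 - math.log(c / Q2)     # Delta + 1 - log(c/Q^2), Delta = 0
```

The individually divergent $\log a$ and $\log b$ pieces cancel between the $B_0$ and $C_0$ terms, and the numerical value reproduces the analytic limit up to terms suppressed by the regulator masses.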
The same strategy works for $I_2$:
\begin{align}
I_2(0,b,c,d) &= C_0(0,b,d)+c D_0(0,b,c,d) \\
&= \frac 1{b-d} \log \frac{d}{b} + c \frac{C_0(0,b,c)-C_0(0,b,d)}{c-d} \\
&= \frac 1{b-d} \log \frac{d}{b} + \frac c{c-d} \frac 1{b-c} \log \frac{c}{b} -
\frac{c}{c-d}\frac 1{b-d}\log \frac {d}{b} \\
&= \frac{(c-d)(b-c)\log \frac db + c (b-d)\log \frac cb - c(b-c) \log \frac
db}{(b-d)(c-d)(b-c)} \label{eq:I2last}
\end{align}
The denominator of \qref{eq:I2last} is finite for $b\to 0$ and in the
numerator, the $\log b$ terms cancel each other:
\begin{equation}
( (c-d)c + cd -c^2 ) \log b =0.
\end{equation}
Hence, we end up with
\begin{align}
I_2(0,0,c,d) &= \frac{-c(c-d)\log d - cd \log c + c^2 \log
d}{cd(c-d)} = \frac{\log \frac dc}{c-d}.
\end{align}
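The massless limit of $I_2$ can be checked the same way: evaluating the regulated expression with small masses reproduces $\log(d/c)/(c-d)$. A minimal {\tt Python} sketch:

```python
import math

def C0(x, y, z):
    # eq. (C0)
    return (y/(x - y)*math.log(y/x) - z/(x - z)*math.log(z/x)) / (y - z)

def D0(x, y, z, t):
    # eq. (D0reduce)
    return (C0(x, y, z) - C0(x, y, t)) / (z - t)

def I2(a, b, c, d):
    # C0(a, b, d) + c D0(a, b, c, d)
    return C0(a, b, d) + c * D0(a, b, c, d)

c, d = 2.0, 3.0
numerical = I2(1e-8, 1e-9, c, d)       # small regulator masses a, b
analytic  = math.log(d / c) / (c - d)  # the massless limit derived above
```

As in the $I_1$ case, the divergent logarithms of the regulator masses cancel in the combination, leaving a numerically stable result.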
\subsection{Parametrization of vertices}
We are going to express the amplitude in the following in terms of
generic vertices. For this purpose, we parametrize a vertex between
two fermions and one vector or scalar respectively as
\begin{eqnarray}
\label{eq:chiralvertices1}
& G_A \gamma_\mu P_L + G_B \gamma_\mu P_R\, , & \\
\label{eq:chiralvertices2}
& G_A P_L + G_B P_R \, . &
\end{eqnarray}
$P_{L,R}=\frac 12 (1\mp \gamma^5)$ are the chirality projection operators. In
addition, for the vertex between three vector bosons and the one between one
vector boson and two scalars, the conventions are as follows
\begin{eqnarray}
& G_{VVV} \cdot \kl{g_{\mu\nu}(k_2-k_1)_\rho+g_{\nu\rho}(k_3-k_2)_\mu
+g_{\rho\mu}(k_1-k_3)_\nu} \, , \\
& G_{SSV}\cdot (k_1-k_2)_\mu\, . &
\end{eqnarray}
Here, $k_i$ are the (ingoing) momenta of the external particles.
\section{Generic amplitudes}
\label{app:amplitudes}
We present in the following the expressions for the generic amplitudes obtained with {\tt FeynArts}\xspace and {\tt FormCalc}\xspace. All coefficients that are not explicitly listed are zero. Furthermore, the Wilson coefficients are left--right symmetric, i.e.
\begin{equation}
C_{XRR} = C_{XLL} (L \leftrightarrow R) \,, \hspace{1cm} C_{XRL}
= C_{XLR} (L \leftrightarrow R),
\end{equation}
with $X=S,V$ and where $(L \leftrightarrow R)$ means that
the coefficients of the left and right polarization part of each vertex have to be interchanged.
\allowdisplaybreaks
\subsection{Tree Level Contributions}
\label{sec:treelevel}
\begin{figure}[htbp]
\centering
\begin{tabular}{cc}
\includegraphics[width=4cm]{FeynAmps/STree1x} & \includegraphics[width=4cm]{FeynAmps/STree2x} \\
(a) & (b) \\
\includegraphics[width=4cm]{FeynAmps/TTree1x} & \includegraphics[width=4cm]{FeynAmps/TTree2x} \\
(c) & (d) \\
\includegraphics[width=4cm]{FeynAmps/UTree1x} & \includegraphics[width=4cm]{FeynAmps/UTree2x} \\
(e) & (f)
\end{tabular}
\caption{Tree level diagrams with vertex numbering}
\label{fig:treeleveldiagrams}
\end{figure}
In models beyond the MSSM, $\ensuremath{B^0_s\to \ell\bar{\ell}}\xspace$ might already be possible at tree--level. This is for instance the case for trilinear $R$-parity violation \cite{Dreiner:2006gu}. The generic diagrams are given in Figure~\ref{fig:treeleveldiagrams}. The chiral vertices are parametrized as in eqs.~(\ref{eq:chiralvertices1})-(\ref{eq:chiralvertices2}) with $A=1,B=2$ for vertex 1 and $A=3,B=4$ for vertex 2. Using these conventions, the corresponding contributions to the Wilson coefficients read
\begin{eqnarray}
\label{eq:treeSchannel}
C^{(a)}_{SLL}&=&16\pi^2 \frac{G_1G_3}{M_S^2-s} \,, \hspace{1cm} C^{(a)}_{SLR} = 16\pi^2 \frac{G_1G_4}{M_S^2-s}\\
C^{(b)}_{VLL}&=&16\pi^2 \frac{-G_1G_3}{M_V^2-s}\,, \hspace{1cm} C^{(b)}_{VLR} = 16\pi^2 \frac{-G_1G_4}{M_V^2-s}\\
C^{(c)}_{SLL}&=& 16\pi^2\frac{-G_1G_3}{2(M_S^2-t)}\,, \hspace{1cm} C^{(c)}_{VLR} = 16\pi^2\frac{-G_2G_3}{2(M_S^2-t)} \\
C^{(d)}_{SLR}&=&16\pi^2\frac{2G_2G_3}{M_V^2-t} \,, \hspace{1cm} C^{(d)}_{VLL} = 16\pi^2\frac{-G_1G_3}{M_V^2-t} \\
C^{(e)}_{SLL} &=& 16\pi^2\frac{-G_1G_3}{2(M_S^2-u)} \,, \hspace{1cm} C^{(e)}_{VLL} = 16\pi^2\frac{G_2G_3}{2(M_S^2-u)} \\
C^{(f)}_{SLR} &=& 16\pi^2\frac{-2G_2G_4}{M_V^2-u} \,, \hspace{1cm} C^{(f)}_{VLR} = 16\pi^2\frac{-G_1G_4}{M_V^2-u}
\end{eqnarray}
Here, $s$, $t$ and $u$ are the usual Mandelstam variables.
\subsection{Wave Contributions}
\label{sec:waveBapp}
\begin{figure}[htpb]
\centering
\begin{tabular}{cc}
\includegraphics[width=3cm]{FeynAmps/BsLLwaveS1SS}
& \includegraphics[width=3cm]{FeynAmps/BsLLwaveS1VS} \\
(a) & (b) \\
\includegraphics[width=3cm]{FeynAmps/BsLLwaveS1SV} & \includegraphics[width=3cm]{FeynAmps/BsLLwaveS1VV}\\
(c) & (d) \\
\includegraphics[width=3cm]{FeynAmps/BsLLwaveT1SS}
& \includegraphics[width=3cm]{FeynAmps/BsLLwaveT1VS}
\\
(e) & (f) \\
\includegraphics[width=3cm]{FeynAmps/BsLLwaveT1SV}
& \includegraphics[width=3cm]{FeynAmps/BsLLwaveT1VV}
\\
(g) & (h) \\
\includegraphics[width=3cm]{FeynAmps/BsLLwaveU1SS}
& \includegraphics[width=3cm]{FeynAmps/BsLLwaveU1VS}
\\
(i) & (j) \\
\includegraphics[width=3cm]{FeynAmps/BsLLwaveU1SV}
& \includegraphics[width=3cm]{FeynAmps/BsLLwaveU1VV}
\\
(k) & (l) \\
\end{tabular}
\caption[Generic wave diagrams]{Generic wave diagrams. For every diagram there is a crossed version, where the loop attaches to the other external quark.}
\label{fig:wave}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=4cm]{FeynAmps/BsLLwaveS1numbers}
\includegraphics[width=4cm]{FeynAmps/BsLLwaveS2numbers} \\
\includegraphics[width=4cm]{FeynAmps/BsLLwaveT1numbers}
\includegraphics[width=4cm]{FeynAmps/BsLLwaveT2numbers} \\
\includegraphics[width=4cm]{FeynAmps/BsLLwaveU1numbers}
\includegraphics[width=4cm]{FeynAmps/BsLLwaveU2numbers}
\caption{Generic wave diagram vertex numbering}
\label{fig:wavenumbering}
\end{figure}
The generic wave diagrams are given in Figure~\ref{fig:wave}. The internal
quark which attaches to the vector or scalar propagator has generation index
$n$. Couplings that depend on $n$ carry it as an additional index. The chiral
vertices are parametrized as in
eqs.~(\ref{eq:chiralvertices1})-(\ref{eq:chiralvertices2}) with $A=1,B=2$ for
vertex 1, $A=3,B=4$ for vertex 2, $A=5,B=6$ for vertex 3 and $A=7,B=8$ for
vertex 4, see also Figure~\ref{fig:wavenumbering} for the numbering of the
vertices. If a vertex is labelled $3^\prime$ for instance, the corresponding couplings
are $G_5^\prime, G_6^\prime$. Furthermore, we define the following abbreviations:
\begin{align}
\label{eq:wavebubbleSapp}
f_{S1} &= \frac{1}{m_n^2-m_{i}^2} \kl{-M_F(G_1G_{3n}m_n+G_2G_{4n}m_{i})B_0^{(i)}+(G_2G_{3n}m_nm_{i}+G_1G_{4n}m_{i}^2)B_1^{(i)}},
\\
f_{S2} &= \frac{1}{m_{j}^2-m_n^2}\kl{M_F(G_{2n}G_4m_{j}+G_{1n}G_3m_n)B_0^{(j)}-(G_{2n}G_3m_{j}^2+G_{1n}G_4m_{j}
m_n)B_1^{(j)}}, \\
\tilde f_{S2} &= \frac{1}{m_{j}^2-m_n^2}\kl{M_F(G_{1n}G_3m_{j}+G_{2n}G_4 m_n)B_0^{(j)}-(G_{1n}G_4m_{j}^2+G_{2n}G_3m_{j}
m_n)B_1^{(j)}}, \\
\label{eq:wavebubbleVapp}
f_{V1} &= \frac{1}{m_n^2-m_{i}^2}\kl{2M_F(G_1G_{4n} m_n+G_2G_{3n}m_{i})B_0^{(i)}+(G_2G_{4n}m_nm_{i}+G_1G_{3n}m_{i}^2)B_1^{(i)}}, \\
f_{V2} &= \frac{1}{m_{j}^2-m_n^2}\kl{2M_F(G_{2n}G_3m_j+G_{1n}G_4 m_n)B_0^{(j)}+(G_{2n}G_4m_{j}^2+G_{1n}G_3m_{j} m_n)B_1^{(j)}}, \\
\tilde f_{V2} &= \frac{1}{m_{j}^2-m_n^2}\kl{2M_F(G_{1n}G_4m_j+G_{2n}G_3 m_n)B_0^{(j)}+(G_{1n}G_3m_{j}^2+G_{2n}G_4m_{j} m_n)B_1^{(j)}}.
\end{align}
The $m_i,m_j$ are the quark masses and $B_{0,1}^{(i)} = B_{0,1}(m_i^2,M_F^2,M_S^2)$ (or $M_V^2$ instead of $M_S^2$). $m_n$ is the mass of the internal quark with generation index $n$. Couplings that involve the internal quark are also labelled with $n$ (e.g. $G_{3n}$). Using these conventions, the contributions to the Wilson coefficients are
\begin{eqnarray}
C^{(a)}_{SLL}&=& \frac{G_7}{M_{S0}^2-s} \kl{ G_{5n} f_{S1}+ G_{5n}^\prime f_{S2} } \\
C^{(c)}_{VLL}&=& \frac{G_7}{M_{V0}^2-s} \kl{ -G_{5n} f_{S1} - G_{5n}^\prime \tilde f_{S2} } \to \frac{G_7}{M_{V0}^2-s} G_5 G_1G_4B_1(0,M_F,M_S) \\
C^{(b)}_{SLL}&=&\frac{2G_7}{M_{S0}^2-s} \kl{ G_{5n} f_{V1} - G_{5n}^\prime f_{V2} } \\
C^{(d)}_{VLL}&=&\frac{2G_7}{M_{V0}^2-s} \kl{-G_{5n} f_{V1} + G_{5n}^\prime \tilde f_{V2}} \to \frac{2G_7}{M_{V0}^2-s} G_5 G_1G_3B_1(0,M_F,M_V) \\
C^{(e)}_{SLL} &=& \frac{-1}{2(M_{S0}^2-t)} \kl{ G_{5n} G_7 f_{S1} + G_{5n}^\prime G_7^\prime f_{S2} } \\
C^{(e)}_{VLR} &=& \frac{ -1}{2(M_{S0}^2-t)} \kl{ G_{5n} G_8 f_{S1} + G_{6n}^\prime G_7^\prime \tilde f_{S2} } \\
C^{(g)}_{SLR}&=& \frac{+2}{M_{V0}^2-t} \kl{ G_{5n} G_8 f_{S1} + G_{6n}^\prime G_7^\prime f_{S2} } \\
C^{(g)}_{VLL}&=&\frac{-1}{M_{V0}^2-t} \kl{ G_{5n}G_7 f_{S1} + G_{5n}^\prime G_7^\prime \tilde f_{S2} } \\
C^{(f)}_{SLL}&=& \frac{-1}{M_{S0}^2-t} \kl{ G_{5n}G_7 f_{V1} - G_{5n}^\prime G_7^\prime f_{V2} } \\
C^{(f)}_{VLR} &=& \frac{-1}{M_{S0}^2-t} \kl{ G_{5n}G_8 f_{V1} - G_{6n}^\prime G_7^\prime \tilde f_{V2} } \\
C^{(h)}_{SLR} &=& \frac{+4}{M_{V0}^2-t} \kl{ G_{5n}G_8 f_{V1} - G_{6n}^\prime G_7^\prime f_{V2} } \\
C^{(h)}_{VLL} &=& \frac{+2}{M_{V0}^2-t} \kl{ - G_{5n}G_7 f_{V1} + G_{5n}^\prime G_7^\prime \tilde f_{V2} }\\
C^{(i)}_{SLL} &=& \frac{-1}{2(M_{S0}^2-u)} \kl{ G_{5n}G_7 f_{S1} + G_{5n}^\prime G_7^\prime f_{S2} } \\
C^{(i)}_{VLL} &=& \frac{+1}{2(M_{S0}^2-u)} \kl{ G_{5n}G_8 f_{S1} + G_{6n}^\prime G_{7}^\prime \tilde f_{S2} }\\
C^{(k)}_{SLR}&=& \frac{-2}{M_{V0}^2-u}\kl{G_{6n}G_8 f_{S1}+G_{6n}^\prime G_8^\prime f_{S2}} \\
C^{(k)}_{VLR} &=& \frac{-1}{M_{V0}^2-u} \kl{G_{6n}G_7 f_{S1}+G_{5n}^\prime G_8^\prime \tilde f_{S2}} \\
C^{(j)}_{SLL}&=& \frac{-1}{M_{S0}^2-u} \kl{ G_{5n} G_7 \tilde f_{V1} - G_{5n}^\prime G_7^\prime f_{V2} } \\
C^{(j)}_{VLL} &=& \frac{-1}{M_{S0}^2-u} \kl{ - G_{5n}G_8 \tilde f_{V1} + G_{6n}^\prime G_7^\prime \tilde f_{V2} } \\
C^{(l)}_{SLR} &=& \frac{-4}{M_{V0}^2-u} \kl{ G_{6n}G_8 \tilde f_{V1} - G_{6n}^\prime G_8^\prime f_{V2} }\\
C^{(l)}_{VLR} &=& \frac{-2}{M_{V0}^2-u} \kl{ G_{6n}G_7 \tilde f_{V1} - G_{5n}^\prime G_8^\prime \tilde f_{V2} }
\end{eqnarray}
\subsection{Penguin Contributions}
\label{sec:penguinB}
\begin{figure}[htpb]
\centering
\includegraphics[width=4cm]{FeynAmps/PenV}
\caption{Vertex number conventions for a representative penguin diagram}
\label{fig:penguinvertex}
\end{figure}
\begin{figure}[hbt]
\centering
\begin{tabular}{cccc}
\includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinScalarFFS} & \includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinVectorFFS} & \includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinScalarSSF} & \includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinVectorSSF} \\
(a) & (b) & (c) & (d) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinScalarFFV} & \includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinVectorFFV} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinScalarFVV} & \includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinVectorFVV} \\
(e) & (f) & (g) & (h) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinScalarFSV1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinVectorFSV1} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinScalarFSV2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpPenguinVectorFSV2} \\
(i) & (j) & (k) & (l)
\end{tabular}
\caption{Generic penguin diagrams}
\label{fig:penguin}
\end{figure}
Diagrams with scalar propagators have $C_{VXY}=0$ and those with vector propagators have \mbox{$C_{SXY}=0$}. The vertex number conventions are given in fig.~\ref{fig:penguinvertex} and all possible diagrams are depicted in Figure~\ref{fig:penguin}. The chiral vertices are parametrized as in eqs.~(\ref{eq:chiralvertices1})-(\ref{eq:chiralvertices2}) with $A=1,B=2$ for vertex 1, $A=3,B=4$ for vertex 2 and $A=7,B=8$ for vertex 4. Vertex 3 can be a chiral vertex; in this case $A=5,B=6$ is used. Otherwise, we will denote it with an index $5$ and give the kind of vertex as an additional subscript. The contributions to the Wilson coefficients from these diagrams read
\begin{align}
C^{(a)}_{SLL}&= \frac 1{M_{S0}^2-s}G_1G_3G_7\kl{G_6 B^{(a,b)}_0+(G_5M_{F1}M_{F2}+G_6M_S^2)C^{(a,b)}_0 } \\
C^{(a)}_{SLR}&= \frac 1{M_{S0}^2-s}G_1G_3G_8\kl{G_6 B^{(a,b)}_0+(G_5M_{F1}M_{F2}+G_6M_S^2)C^{(a,b)}_0 } \\
C^{(b)}_{VLL}&= \frac 1{M_{V0}^2-s} G_1G_4G_7 \kl{G_6B^{(a,b)}_0+(-G_5M_{F1}M_{F2}+G_6M_S^2)C^{(a,b)}_0-2G_6C^{(a,b)}_{00} } \\
C^{(b)}_{VLR}&= \frac 1{M_{V0}^2-s} G_1G_4G_8 \kl{G_6B^{(a,b)}_0+(-G_5M_{F1}M_{F2}+G_6M_S^2)C^{(a,b)}_0-2G_6C^{(a,b)}_{00} } \\
C^{(c)}_{SLL}&= \frac 1{M_{S0}^2-s}G_1G_3G_{5,SSS}G_7 M_F C^{(c,d)}_0 \\
C^{(c)}_{SLR}&= \frac 1{M_{S0}^2-s}G_1G_3G_{5,SSS}G_8M_F C^{(c,d)}_0 \\
C^{(d)}_{VLL}&= - \frac 2{M_{V0}^2-s} G_1G_4G_{5,SSV} G_7C^{(c,d)}_{00}\\
C^{(d)}_{VLR}&= - \frac 2{M_{V0}^2-s} G_1G_4G_{5,SSV} G_8 C^{(c,d)}_{00} \\
C^{(e)}_{SLL}&= -\frac 4{M_{S0}^2-s} G_1G_4G_7 \kl{G_5B^{(e,f)}_0+(G_6M_{F1}M_{F2}+G_5M_V^2)C^{(e,f)}_0} \\
C^{(e)}_{SLR}&= -\frac 4{M_{S0}^2-s} G_1G_4G_8 \kl{G_5B^{(e,f)}_0+(G_6M_{F1}M_{F2}+G_5M_V^2)C^{(e,f)}_0} \\
C^{(f)}_{VLL}&= \frac{2}{M_{V0}^2-s} G_1 G_3 G_7 \kl{G_5B^{(e,f)}_0+(-G_6M_{F1}M_{F2}+G_5M_V^2)C^{(e,f)}_0-2 G_5 C^{(e,f)}_{00}} \\
C^{(f)}_{VLR}&= \frac{2}{M_{V0}^2-s} G_1 G_3 G_8 \kl{G_5B^{(e,f)}_0+(-G_6M_{F1}M_{F2}+G_5M_V^2)C^{(e,f)}_0-2 G_5 C^{(e,f)}_{00}} \\
C^{(g)}_{SLL}&= \frac 4{M_{S0}^2-s} G_1G_4G_{5,SVV} G_7 M_{F} C^{(g,h)}_0\\
C^{(g)}_{SLR}&= \frac 4{M_{S0}^2-s} G_1G_4G_{5,SVV} G_8 M_{F} C^{(g,h)}_0\\
C^{(h)}_{VLL}&= -\frac 2{M_{V0}^2-s} G_1G_3G_{5,VVV}G_7 \kl{B^{(g,h)}_0+M_{F}^2C^{(g,h)}_0+2C^{(g,h)}_{00}} \\
C^{(h)}_{VLR}&= -\frac 2{M_{V0}^2-s} G_1G_3G_{5,VVV}G_8 \kl{B^{(g,h)}_0+M_{F}^2C^{(g,h)}_0+2C^{(g,h)}_{00}} \\
C^{(i)}_{SLL}&= \frac 1{M_{S0}^2-s} G_1G_3G_{5,SSV} G_7 \kl{B^{(i-l)}_0+M_{F}^2C^{(i-l)}_0} \\
C^{(i)}_{SLR}&= \frac 1{M_{S0}^2-s} G_1G_3G_{5,SSV} G_8 \kl{B^{(i-l)}_0+M_{F}^2C^{(i-l)}_0} \\
C^{(k)}_{SLL}&= -\frac 1{M_{S0}^2-s} G_1G_4G_{5,SSV} G_7 \kl{B^{(i-l)}_0+M_{F}^2C^{(i-l)}_0} \\
C^{(k)}_{SLR}&= -\frac 1{M_{S0}^2-s} G_1G_4G_{5,SSV} G_8 \kl{B^{(i-l)}_0+M_{F}^2C^{(i-l)}_0} \\
C^{(j)}_{VLL}&= \frac{1}{M_{V0}^2-s} G_1G_4G_{5,SVV}G_7 M_F C^{(i-l)}_0 \\
C^{(j)}_{VLR}&= \frac{1}{M_{V0}^2-s} G_1G_4G_{5,SVV}G_8 M_F C^{(i-l)}_0 \\
C^{(l)}_{VLL}&= \frac{1}{M_{V0}^2-s} G_1G_3G_{5,SVV}G_7 M_F C^{(i-l)}_0 \\
C^{(l)}_{VLR}&= \frac{1}{M_{V0}^2-s} G_1G_3G_{5,SVV}G_8 M_F C^{(i-l)}_0
\end{align}
Here, the arguments of the Passarino-Veltman integrals are as follows, with $s=M_{B^0_q}^2$:
\begin{align}
C_X^{(a,b)} &=C_X (s,0,0,M_{F2}^2,M_{F1}^2,M_{S}^2) \,\hspace{1cm}
B_X^{(a,b)} = B_X(s,M_{F1}^2,M_{F2}^2) \\
C_X^{(c,d)} & = C_X(0,s,0,M_{F}^2,M_{S1}^2,M_{S2}^2) \\
C_X^{(e,f)} &=C_X(s,0,0,M_{F2}^2,M_{F1}^2,M_{V}^2) \,\hspace{1cm}
B_X^{(e,f)} = B_X(s,M_{F1}^2,M_{F2}^2)\\
C_X^{(g,h)}&=C_X(0,s,0,M_{F}^2,M_{V1}^2,M_{V2}^2) \,\hspace{1cm}
B_X^{(g,h)}=B_X(s,M_{V1}^2,M_{V2}^2) \\
C_X^{(i-l)} &= C_X(0,s,0,M_{F}^2,M_{S}^2,M_{V}^2) \,\hspace{1cm}
B_X^{(i-l)} = B_X(s,M_{S}^2,M_{V}^2)
\end{align}
\subsection{Box Contributions}
\begin{figure}[htpb]
\centering
\begin{tabular}{ccc}
\includegraphics[width=4cm]{FeynAmps/BoxIns1} & \includegraphics[width=4cm]{FeynAmps/BoxIns2} & \includegraphics[width=4cm]{FeynAmps/BoxIns4} \\
\end{tabular}
\caption{Vertex number conventions for a set of representative box diagrams}
\label{fig:boxvertex}
\end{figure}
\begin{figure}[hbt]
\centering
\begin{tabular}{ccc}
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS1ins2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS2ins4} \\
(a) & (b) & (c) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS1ins4} \\
(d) & (e) & (f) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV1ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV2ins4} \\
(g) & (h) & (i) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS1ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS2ins4} \\
(j) & (k) & (l)
\end{tabular}
\caption{Generic box diagrams I}
\label{fig:box1}
\end{figure}
\begin{figure}[hbt]
\centering
\begin{tabular}{ccc}
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV1ins4} \\
(m) & (n) & (o) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS1ins4} \\
(p) & (q) & (r) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV1ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV1ins4} \\
(s) & (t) & (u) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV2ins4} \\
(v) & (w) & (x)
\end{tabular}
\caption{Generic box diagrams II}
\label{fig:box2}
\end{figure}
The vertex number conventions for boxes are shown in fig.~\ref{fig:boxvertex}, while all possible generic diagrams are given in Figures~\ref{fig:box1} and \ref{fig:box2}. All vertices are chiral and parametrized as in eqs.~(\ref{eq:chiralvertices1})-(\ref{eq:chiralvertices2}) with $A=1,B=2$ for vertex 1, $A=3,B=4$ for vertex 2, $A=5,B=6$ for vertex 3 and $A=7,B=8$ for vertex 4. If there are two particles of equal type in a loop (say, two fermions), the one between vertices 1 and 2 (2 or 3) will be labelled $F1$ and the other one will be $F2$. The contributions to the different Wilson coefficients read
\begin{align}
C^{(a)}_{SLL} &= -G_1G_3G_5G_7M_{F1}M_{F2} \cdot D^{(a-c)}_0 \\
C^{(a)}_{SLR} &= -G_1G_3G_ 6G_8M_{F1}M_{F2}\cdot D^{(a-c)}_0 \\
C^{(a)}_{VLL}&=-G_2G_3G_6G_7 \cdot D^{(a-c)}_{00} \\
C^{(a)}_{VLR}&=-G_2G_3G_5G_8 \cdot D^{(a-c)}_{00} \\
C^{(b)}_{SLL} &= -G_1G_3G_5 G_7 M_{F1}M_{F2} \cdot D^{(a-c)}_0 \\
C^{(b)}_{SLR} &= -G_1G_3G_ 6 G_8M_{F1}M_{F2}\cdot D^{(a-c)}_0 \\
C^{(b)}_{VLL}&=G_2G_3G_5 G_8 \cdot D^{(a-c)}_{00} \\
C^{(b)}_{VLR}&=G_2G_3G_6 G_7 \cdot D^{(a-c)}_{00} \\
C^{(c)}_{SLL} &= \frac 12 G_1G_3G_5G_7 M_{F1}M_{F2}D^{(a-c)}_0 \\
C^{(c)}_{SLR} &= -2 G_1G_4G_5G_8 D^{(a-c)}_{00} \\
C^{(c)}_{VLL} &= -\frac 12 G_2G_4G_5G_7 M_{F1}M_{F2} D^{(a-c)}_0 \\
C^{(c)}_{VLR} &= -G_2G_3G_5G_8 D^{(a-c)}_{00} \\
C^{(d)}_{SLL} &= \frac 12 G_1G_3G_5G_7M_1M_2 \cdot D^{(d-f)}_0 \\
C^{(d)}_{SLR} &= 2G_1G_3G_6G_8 \cdot D^{(d-f)}_{00} \\
C^{(d)}_{VLL}&= -G_2G_3G_6G_7 \cdot D^{(d-f)}_{00} \\
C^{(d)}_{VLR}&=\frac 12 G_2G_3G_5G_8M_1M_2 \cdot D^{(d-f)}_{0} \\
C^{(e)}_{SLL} &= \frac 12 G_1G_3G_5 G_7 M_{F1}M_{F2} \cdot D^{(d-f)}_0 \\
C^{(e)}_{SLR} &= 2G_1G_3G_6 G_8 \cdot D^{(d-f)}_{00} \\
C^{(e)}_{VLL}&= -\frac 12G_2G_3G_5 G_8 M_{F1}M_{F2}\cdot D^{(d-f)}_{0} \\
C^{(e)}_{VLR}&= G_2G_3G_6 G_7 \cdot D^{(d-f)}_{00} \\
C^{(f)}_{SLL} &= \frac 12 G_1G_3G_5G_7 M_{F1}M_{F2}D^{(d-f)}_0 \\
C^{(f)}_{SLR} &= -2G_1G_4G_5G_8 D^{(d-f)}_{00} \\
C^{(f)}_{VLL} &= G_2G_4G_5G_7 D^{(d-f)}_{00} \\
C^{(f)}_{VLR} &= \frac 12 G_2G_3G_5G_8 M_{F1}M_{F2}D^{(d-f)}_0 \\
C^{(g)}_{SLL}&= 2 G_1G_3G_6G_7 \kl{C^{(g-i)}_0+M_{F1}^2D^{(g-i)}_0-2D^{(g-i)}_{00}}\\
C^{(g)}_{SLR}&= 2 G_1G_3G_5G_8 \kl{C^{(g-i)}_0+M_{F1}^2D^{(g-i)}_0-2D^{(g-i)}_{00}}\\
C^{(g)}_{VLL}&= G_2G_3G_5G_7 M_{F1}M_{F2} D^{(g-i)}_0 \\
C^{(g)}_{VLR}&= G_2G_3G_6G_8 M_{F1}M_{F2} D^{(g-i)}_0 \\
C^{(h)}_{SLL}&= -4 G_1G_3G_5 G_7 D^{(g-i)}_{00}\\
C^{(h)}_{SLR}&= -4 G_1G_3G_6 G_8 D^{(g-i)}_{00}\\
C^{(h)}_{VLL}&= G_2G_3G_5 G_8 M_{F1}M_{F2} D^{(g-i)}_0 \\
C^{(g)}_{VLR}&= G_2G_3G_6 G_7 M_{F1}M_{F2} D^{(g-i)}_0 \\
C^{(i)}_{SLL}&= -G_1G_3G_5G_7\kl{C^{(g-i)}_0+M_S^2D^{(g-i)}_0-8D^{(g-i)}_{00}} \\
C^{(i)}_{SLR}&= 2G_1G_3G_5G_8 M_{F1}M_{F2}D^{(g-i)}_0 \\
C^{(i)}_{VLL}&= G_2G_3G_5G_7 \kl{C^{(g-i)}_0+M_S^2D^{(g-i)}_0-2D^{(g-i)}_{00}} \\
C^{(i)}_{VLR}&= G_2G_4G_5G_8 M_{F1}M_{F2} D^{(g-i)}_0 \\
C^{(j)}_{SLL}&= 2 G_2G_3G_5G_7 \kl{C^{(j-l)}_0+M_{F1}^2D^{(j-l)}_0-2D^{(j-l)}_{00}}\\
C^{(j)}_{SLR}&= 2 G_2G_3G_6G_8 \kl{C^{(j-l)}_0+M_{F1}^2D^{(j-l)}_0-2D^{(j-l)}_{00}}\\
C^{(j)}_{VLL}&= G_1G_3G_6G_7 M_{F1}M_{F2} D^{(j-l)}_0 \\
C^{(j)}_{VLR}&= G_1G_3G_5G_8 M_{F1}M_{F2} D^{(j-l)}_0 \\
C^{(k)}_{SLL}&= -4 G_2G_3G_5 G_8 D^{(j-l)}_{00}\\
C^{(k)}_{SLR}&= -4 G_2G_3G_6 G_7 D^{(j-l)}_{00}\\
C^{(k)}_{VLL}&= G_1G_3G_5 G_7 M_{F1}M_{F2} D^{(j-l)}_0 \\
C^{(k)}_{VLR}&= G_1G_3G_6 G_8 M_{F1}M_{F2} D^{(j-l)}_0 \\
C^{(l)}_{SLL}&= -G_1G_3G_5G_8(C^{(j-l)}_0+M_V^2D^{(j-l)}_0-8D^{(j-l)}_{00}) \\
C^{(l)}_{SLR}&= 2G_1G_4G_5G_7 M_{F1}M_{F2}D^{(j-l)}_0 \\
C^{(l)}_{VLL}&= G_2G_4G_5G_8 (C^{(j-l)}_0+M_V^2D^{(j-l)}_0-2D^{(j-l)}_{00}) \\
C^{(l)}_{VLR}&= G_2G_3G_5G_7 M_{F1}M_{F2} D^{(j-l)}_0 \\
C^{(m)}_{SLL}&= - G_1G_3G_6G_7 \left( C^{(m-o)}_0+M_S^2D^{(m-o)}_0 - \frac 14(13G_1G_3G_6G_7+3G_2G_4G_5G_8)D^{(m-o)}_{00}\right) \\
C^{(m)}_{SLR}&=-2 G_1G_3G_5G_8 M_{F1}M_{F2} D^{(m-o)}_0\\
C^{(m)}_{VLL}&= G_2G_3G_5G_7 M_{F1}M_{F2} D^{(m-o)}_0\\
C^{(m)}_{VLR}&= - G_2G_3G_6G_8 \kl{C^{(m-o)}_0+M_S^2D^{(m-o)}_0-2D^{(m-o)}_{00}}\\
C^{(n)}_{SLL}&= 8 G_1G_3G_6 G_8 D^{(m-o)}_{00}\\
C^{(n)}_{SLR}&= 2 G_1G_3G_5 G_7 M_{F1}M_{F2} D^{(m-o)}_0\\
C^{(n)}_{VLL}&= -2 G_2G_3G_6 G_7 D^{(m-o)}_{00}\\
C^{(n)}_{VLR}&= G_2G_3G_5 G_8 M_{F1}M_{F2}D^{(m-o)}_0\\
C^{(o)}_{SLL}&= -\frac 14 (13G_1G_3G_5G_7+3G_2G_4G_6G_8)D^{(m-o)}_{00} \\
C^{(o)}_{SLR}&= -2G_1G_4G_5G_8 M_{F1}M_{F2}D^{(m-o)}_0 \\
C^{(o)}_{VLL}&= G_2G_4G_5G_7 M_{F1}M_{F2} D^{(m-o)}_0 \\
C^{(o)}_{VLR}&= 2G_2G_3G_5G_8 D^{(m-o)}_{00} \\
C^{(p)}_{SLL} &=- G_2G_3G_5G_7 \left( C^{(p-r)}_0+M_V^2D^{(p-r)}_0 - \frac 14(13G_2G_3G_5G_7+3G_1G_4G_6G_8)D^{(p-r)}_{00} \right)\\
C^{(p)}_{SLR}&= -2 G_2G_3G_6G_8 M_{F1}M_{F2} D^{(p-r)}_0\\
C^{(p)}_{VLL}&= G_1G_3G_6G_7 M_{F1}M_{F2} D^{(p-r)}_0\\
C^{(p)}_{VLR}&= - G_1G_3G_5G_8 \kl{C^{(p-r)}_0+M_V^2D^{(p-r)}_0-2D^{(p-r)}_{00}}\\
C^{(r)}_{SLL}&= 8 G_1G_3G_5 G_7 D^{(p-r)}_{00}\\
C^{(q)}_{SLR}&= 2 G_1G_3G_6 G_8 M_{F1}M_{F2} D^{(p-r)}_0\\
C^{(q)}_{VLL}&= -2 G_2G_3G_5 G_8 D^{(p-r)}_{00}\\
C^{(q)}_{VLR}&= G_2G_3G_6 G_7 M_{F1}M_{F2} D^{(p-r)}_0\\
C^{(r)}_{SLL}&= -\frac 14 (13G_2G_4G_5G_7+3G_1G_3G_6G_8)D^{(p-r)}_{00} \\
C^{(r)}_{SLR}&= -2 G_2G_3G_5G_8 M_{F1}M_{F2}D^{(p-r)}_0+\frac 34 (G_2G_4G_5G_7-G_1G_3G_6G_8)D^{(p-r)}_{00} \\
C^{(r)}_{VLL}&= G_1G_3G_5G_7M_{F1}M_{F2}D^{(p-r)}_0 \\
C^{(r)}_{VLR}&= 2G_1G_4G_5G_8 D^{(p-r)}_{00} \\
C^{(s)}_{SLL}&= -4 G_2G_3G_6G_7 M_{F1}M_{F2} D^{(s-u)}_{0}\\
C^{(s)}_{SLR}&= -4 G_2G_3G_5G_8 M_{F1}M_{F2} D^{(s-u)}_0 \\
C^{(s)}_{VLL}&= -4 G_1G_3G_5G_7 \kl{C^{(s-u)}_0+M_{F1}^2D^{(s-u)}_0-3D^{(s-u)}_{00}}\\
C^{(s)}_{VLR}&= -4 G_1G_3G_6G_8 \kl{C^{(s-u)}_0+M_{F1}^2D^{(s-u)}_0}\\
C^{(t)}_{SLL}&= -4 G_2G_3G_5 G_8 M_{F1}M_{F2} D^{(s-u)}_{0}\\
C^{(t)}_{SLR}&= -4 G_2G_3G_6 G_7 M_{F1}M_{F2} D^{(s-u)}_0 \\
C^{(t)}_{VLL}&= 16 G_1G_3G_5 G_7 D^{(s-u)}_{00}\\
C^{(t)}_{VLR}&= 4 G_1G_3G_6 G_8 D^{(s-u)}_{00}\\
C^{(u)}_{SLL}&= -4 G_2G_4G_5 G_7 M_{F1}M_{F2} D^{(s-u)}_{0}\\
C^{(u)}_{SLR}&= -8 G_2G_3G_5 G_8 D^{(s-u)}_{00} \\
C^{(u)}_{VLL}&= 16 G_1G_3G_5 G_7 D^{(s-u)}_{00}\\
C^{(u)}_{VLR}&= 2 G_1G_4G_5 G_8 M_{F1}M_{F2} D^{(s-u)}_{0}\\
C^{(v)}_{SLL}&= 8 G_2G_3G_6G_7 M_{F1}M_{F2} D^{(v-x)}_{0}\\
C^{(v)}_{SLR}&= 8 G_2G_3G_5G_8 \kl{C^{(v-x)}_0+M_{V1}^2D^{(v-x)}_0} \\
C^{(v)}_{VLL}&= -4 G_1G_3G_5G_7 \kl{C^{(v-x)}_0+M_{V1}^2D^{(v-x)}_0-3D^{(v-x)}_{00}}\\
C^{(v)}_{VLR}&= 2 G_1G_3G_6G_8 M_{F1}M_{F2} D^{(v-x)}_{0}\\
C^{(w)}_{SLL}&= 8 G_1G_3G_6 G_8 M_{F1}M_{F2} D^{(v-x)}_{0}\\
C^{(w)}_{SLR}&= 32 G_1G_3G_5 G_7 D^{(v-x)}_{00} \\
C^{(w)}_{VLL}&= -2 G_2G_3G_6 G_7 M_{F1}M_{F2} D^{(v-x)}_0\\
C^{(w)}_{VLR}&= 4 G_2G_3G_5 G_8 D^{(v-x)}_{00}\\
C^{(x)}_{SLL}&= -4 G_1G_4G_5 G_8 M_{F1}M_{F2} D^{(v-x)}_{0}\\
C^{(x)}_{SLR}&= -8 G_1G_3G_5 G_7 (C^{(v-x)}_0+M_{V1}^2D^{(v-x)}_0-3 D^{(v-x)}_{00}) \\
C^{(x)}_{VLL}&= -2 G_2G_3G_5 G_8 M_{F1}M_{F2} D^{(v-x)}_0\\
C^{(x)}_{VLR}&= -4 G_2G_4G_5 G_7(C^{(v-x)}_0+M_{V1}^2D^{(v-x)}_0)
\end{align}
The arguments of the loop functions for the different amplitudes are
\begin{align}
D_X^{(a-c)}&= D_X(M_{F1}^2,M_{F2}^2,M_{S1}^2,M_{S2}^2) \\
D_X^{(d-f)}&= D_X(M_{F1}^2,M_{F2}^2,M_{S1}^2,M_{S2}^2) \\
C_X^{(g-i)}&= C_X(\vec 0_3, M_{F2}^2,M_V^2,M_{S}^2) \,\hspace{1cm}
D_X^{(g-i)}= D_X(M_{F1}^2,M_{F2}^2,M_{V}^2,M_{S}^2)\\
C_X^{(j-l)}&= C_X(\vec 0_3,M_{F2}^2,M_S^2,M_{V}^2) \,\hspace{1cm}
D_X^{(j-l)}= D_X(M_{F1}^2,M_{F2}^2,M_{S}^2,M_{V}^2)\\
C_X^{(m-o)}&= C_X(\vec 0_3, M_{F2}^2,M_{F1}^2,M_V^2) \,\hspace{1cm}
D_X^{(m-o)}= D_X(M_{F2}^2,M_{F1}^2,M_S^2,M_V^2)\\
C_X^{(p-r)}&= C_X(\vec 0_3, M_{F2}^2,M_{F1}^2,M_S^2) \,\hspace{1cm}
D_X^{(p-r)}= D_X(M_{F2}^2,M_{F1}^2,M_V^2,M_S^2) \\
C_X^{(s-u)}&= C_X(\vec 0_3, M_{F2}^2,M_{V1}^2,M_{V2}^2)\,\hspace{1cm}
D_X^{(s-u)}= D_X(M_{F1}^2,M_{F2}^2,M_{V1}^2,M_{V2}^2) \\
C_X^{(v-x)}&= C_X(\vec 0_3, M_{F2}^2,M_{F1}^2,M_{V2}^2) \,\hspace{1cm}
D_X^{(v-x)}= D_X(M_{F2}^2,M_{F1}^2,M_{V1}^2,M_{V2}^2)
\end{align}
\end{appendix}
\section{Introduction}
Transformer-based models continue to achieve state-of-the-art performance on a number of NLU and NLG tasks.
A rich body of literature, termed BERTology \cite{Rogers2020API}, has evolved to analyze and optimize these models.
One set of studies comments on the functional role and importance of attention heads in these models \cite{clark2019does,michel2019sixteen, voita2019analyzing, voita2019bottom,liu2019attentive,belinkov2017neural}.
Another set has identified ways to make these models more efficient through methods such as pruning \cite{mccarley2019pruning, gordon2020compressing, sajjad2020poor, budhraja2020weak}.
A third set shows that multilingual extensions of these models, such as Multilingual BERT \cite{devlin-etal-2019-bert}, have surprisingly high crosslingual transfer \cite{pires2019multilingual,wu-dredze-2019-beto}.%
Our work lies at the intersection of these three sets of methods:
we analyze the importance of attention heads in multilingual models based on the effect of pruning on performance for both in-language and cross-language tasks.
We base our analysis on BERT and mBERT \cite{devlin-etal-2019-bert} and evaluate (i) in-language performance in English on four tasks from the GLUE benchmark (MNLI, QQP, QNLI, SST-2), and (ii) cross-language performance on the XNLI task on 10 languages: Spanish, German, Vietnamese, Chinese, Hindi, Greek, Urdu, Arabic, Turkish, and Swahili. With these, we derive two broad sets of findings.
Notice that we prune only the attention heads and not other parts of the network, such as the fully connected layers or the embedding layers.
Thus all our results are restricted to the role of attention capacity in multilingual models.
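The random-pruning setup used throughout can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the \texttt{\{layer: [heads]\}} output format (matching, e.g., HuggingFace's \texttt{prune\_heads}) is our assumption, since the paper does not name its toolkit.

```python
import random

# Illustrative sketch of random head pruning (not the authors' code).
# BERT/mBERT-base: 12 layers x 12 heads = 144 attention heads in total.
def sample_pruned_heads(n_layers=12, n_heads=12, frac=0.5, seed=0):
    """Randomly select `frac` of all heads; return {layer: [head indices]}."""
    rng = random.Random(seed)
    all_heads = [(l, h) for l in range(n_layers) for h in range(n_heads)]
    pruned = rng.sample(all_heads, int(frac * len(all_heads)))
    by_layer = {}
    for l, h in pruned:
        by_layer.setdefault(l, []).append(h)
    return by_layer
```

At 50\% pruning this selects 72 of the 144 heads; the resulting dictionary can be handed to a pruning routine before fine-tuning.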
First, we compare and contrast the effect of pruning on in-language performance of BERT and mBERT.
Intuitively, the reduced dedicated attention capacity in mBERT for English may suggest a more adverse effect of pruning on the GLUE tasks.
However, we find that mBERT is just as robust to pruning as is BERT.
At 50\% random pruning, average accuracy drop with mBERT on the GLUE tasks is 2\% relative to the base performance, similar to results for BERT reported in \citeauthor{budhraja2020weak}\shortcite{budhraja2020weak}.
Further, mBERT has the same layer preferences as BERT: (i) heads in the middle layers are more important than those in the top and bottom layers, and (ii) consecutive layers cannot be simultaneously pruned.
Second, we study the effect of pruning on cross-language performance across languages that are categorised by language family (SVO/SOV/VSO/Agglutinative) and corpus size used for pre-training (high/medium/low-resource).
We find that mBERT is significantly less robust to pruning on the XNLI task: At 50\% random pruning, performance averaged across languages drops by 5\%.
However, this drop is not uniform across languages: The drop for SVO and high-resource languages is lower, confirming that more pre-training data and a similar language family make crosslingual transfer more robust.
Next, layer-wise pruning results reveal several insights into the functional roles of different layers in multilingual models.
Cross-language performance is sensitive to pruning the bottom layers, where the drop across languages follows a trend based on language family: Agglutinative $>$ VSO $>$ SOV $>$ SVO.
On the other hand, pruning top layers has a lower impact on accuracy, but the order across language families is the exact reverse of the order for the bottom layers: SVO $>$ SOV $>$ VSO $>$ Agglutinative.
These results suggest that the bottom layers of the network store crosslingual information, which is particularly crucial for performance on low-resource languages and on languages quite different from the fine-tuned language (En-SVO).
Further, the top layers of the network specialise for the fine-tuned task and are relatively more important for languages related to the fine-tuned language.
We confirm this intuition by showing that the impact of fine-tuning on attention heads (captured by the difference in entropy of attention distributions across tokens) is much higher on the top layers than the middle or bottom layers.
We also observe that fine-tuning for a single epoch recovers about 93\% of cross-language performance consistently across all languages.
\begin{figure*}
\includegraphics[width=6in]{plots/resource_layers2.png}
\caption{Layer-wise pruning: comparison across four GLUE tasks [(a)-(d)] and the XNLI task, by language family [(e)-(h)] and pre-training corpus size [(i)-(k)]}
\label{label_combined_layers}
\end{figure*}
\section{In-language performance}
\citeauthor{DBLP:conf/acl/ConneauKGCWGGOZ20}\shortcite{DBLP:conf/acl/ConneauKGCWGGOZ20} argue that in multilingual models such as mBERT there is always a trade-off between \textit{transfer} and \textit{capacity dilution}. Specific to our focus on attention heads, the finite capacity of a fixed number of attention heads is shared by many languages. We refer to this as the \textit{attention capacity} of the network. As a result of this shared capacity, the in-language performance of multilingual models is typically poor when compared to monolingual models. Thus, the first question to ask is \textit{``Does pruning attention capacity affect the in-language performance of mBERT more adversely than BERT?''} Intuitively, if the attention capacity of mBERT is limited to begin with, then pruning it further should lead to large drops in performance. To check this intuition, we take the publicly available mBERT$_{BASE}$ model
which is pretrained on 104 languages.
We then fine-tune and evaluate it on four GLUE tasks (MNLI-M, QQP, QNLI, SST-2 \cite{wang2018glue}), and compare it with BERT$_{BASE}$. Following the evaluation setup of \citeauthor{budhraja2020weak}\shortcite{budhraja2020weak}, we either (i) randomly prune k\% of attention heads in mBERT, or (ii) prune all heads in specific layers (top, middle, or bottom). In each case, after pruning we fine-tune the model for 10 epochs. Since the pruned attention heads are randomly sampled, we report the average across three different experiment runs. The standard deviation across the three runs, averaged across all languages, is 0.51\% of the reported mean values. Our main observations are:%
\noindent \textbf{mBERT is as robust as BERT.} From Figure \ref{fig:random_pruning} we observe that at 0\% pruning, the performance of mBERT is comparable to BERT on 2 out of the 4 tasks (except MNLI-M and SST-2). Further, on pruning, the performance of mBERT does not drop drastically when compared to BERT. At all levels of pruning (0 to 90\% heads pruned) the gap between mBERT and BERT is more or less the same as that at the starting point (0\% pruning). This trend is more clear from Figure \ref{fig:random_pruning} (e) which shows the average drop in the performance at different levels of pruning relative to the base performance (no pruning). Even at 50\% pruning the average performance of mBERT drops by only 2\% relative to the base performance. Thus, contrary to expectation, mBERT is not adversely affected by pruning despite its seemingly limited attention capacity for each language.\\
\noindent \textbf{mBERT has the same layer preferences as BERT.} From their pruning experiments, \citeauthor{budhraja2020weak}\shortcite{budhraja2020weak} show that the middle layers in BERT are more important than the top or bottom layers: The performance of BERT does not drop much when pruning top or bottom layers as compared to middle layers. We perform similar experiments with mBERT and find that the results are consistent. As shown in Figure \ref{label_combined_layers} (a) to (d), across the 4 tasks mBERT has no preference between top and bottom layers, but middle layers are more important. This is especially true when we prune more layers ($>$4 out of the 12 layers in mBERT). \\
\noindent \textbf{mBERT does not prefer pruning consecutive layers (same as BERT).} Some works on BERTology \cite{lan2019albert} have identified that consecutive layers of BERT have similar functionality. We find supporting evidence for this in our experiments. In particular, pruning consecutive layers of mBERT, as opposed to odd or even layers, leads to a higher drop in performance (see Table \ref{tab:mbert_glue}). %
Similar results are reported by \citeauthor{budhraja2020weak}\shortcite{budhraja2020weak} for BERT.
Thus, despite being multilingual, mBERT's layers have the same ordering of importance as BERT.
\begin{table}[tp]
\small
\begin{center}
\begin{tabular}{ c c c c c }
\hline
\rule{0pt}{1.5ex} Layers Pruned & MNLI-M & QQP & QNLI & SST-2 \\
\hline
\rule{0pt}{1.5ex} Top six & 78.45 & 90.53 & 87.46 & 89.33 \\
\rule{0pt}{1.5ex} Bottom six & 78.71 & 89.77 & 87.22 & 87.84 \\
\rule{0pt}{1.5ex} Middle six & 76.72 & 89.07 & 85.28 & 87.5 \\
\rule{0pt}{1.5ex} Odd six & 78.94 & 90.24 & 88.72 & 88.64 \\
\rule{0pt}{1.5ex} Even six & 79.82 & 90.17 & 89.09 & 90.02 \\
\hline
\end{tabular}
\caption{Effect of pruning self-attention heads of consecutive vs.\ non-consecutive layers of mBERT}
\label{tab:mbert_glue}
\end{center}
\end{table}
\section{Cross-language performance}
We now study the impact of pruning on the crosslingual performance of mBERT. Again, intuitively, one would expect that if we prune some heads then the crosslingual signals learned implicitly during training may get affected, resulting in poor performance on downstream tasks. To check this intuition, we perform experiments with 11 languages on the XNLI dataset \cite{conneau2018xnli}.
We categorize these languages according to their structural similarity as SVO (English, Spanish, German, Vietnamese, Chinese),
SOV (Hindi, Greek, Urdu),
VSO (Arabic),
and Agglutinative (Turkish, Swahili). Further, we follow \citeauthor{wu2020all}\shortcite{wu2020all} and classify these languages as High Resource (English, Spanish, German), Medium Resource (Arabic, Vietnamese, Chinese, Turkish), and Low Resource (Hindi, Greek, Urdu, Swahili) based on the size of the pretraining corpus. %
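
The language groupings above can be collected into two lookup tables; the ISO 639-1 codes are our shorthand, not notation from the paper or the XNLI dataset.

```python
# Word-order family and pre-training resource level for the 11 XNLI
# languages used in our experiments (ISO 639-1 codes as shorthand).
FAMILY = {
    "en": "SVO", "es": "SVO", "de": "SVO", "vi": "SVO", "zh": "SVO",
    "hi": "SOV", "el": "SOV", "ur": "SOV",
    "ar": "VSO",
    "tr": "Agglutinative", "sw": "Agglutinative",
}
RESOURCE = {
    "en": "high", "es": "high", "de": "high",
    "ar": "medium", "vi": "medium", "zh": "medium", "tr": "medium",
    "hi": "low", "el": "low", "ur": "low", "sw": "low",
}
```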
Similar to our in-language experiments reported above, we either (i) randomly prune k\% of attention heads in mBERT, or (ii) prune all heads in specific layers (top, middle, or bottom). In each case, after pruning we fine-tune the model for 10 epochs on the task-specific training data. Our main observations are: \\
\begin{figure}[tp]
\includegraphics[width=2.7in]{plots/xnli_pp.png}
\caption{XNLI task - Performance of various language families at different percentages of pruning.}
\label{label_xnli_pp}
\end{figure}
\noindent\textbf{mBERT is less robust in a cross-language setup.} In Figure \ref{label_xnli_pp}, we plot the relative drop in performance from the baseline (0\% pruning) at different levels of pruning (10-90\%).
We observe that at 50\% pruning the relative drop in performance is around 5\%, and at 90\% pruning the drop is around 10-15\%. We contrast this with the results in Figure \ref{fig:random_pruning} (e), where the relative drop in performance for in-language tasks was more modest (around 2\% at 50\% pruning and 7.5\% at 90\% pruning). Thus, in a cross-language setup mBERT is affected more adversely at higher levels of pruning (beyond 25\%). However, pruning up to 25\% of the heads does not affect the performance much ($\approx$ 2.5\% relative drop).
\noindent\textbf{mBERT is more robust for SVO and high-resource languages.} Referring to Figure \ref{label_xnli_pp}, we observe that high-resource languages (more pretraining data) and SVO languages (similar to English) are relatively less adversely affected by pruning. Among the non-SVO languages, agglutinative languages like Turkish are most affected by pruning\footnote{Note that there is a high overlap between our SVO languages and high-resource languages, since in the XNLI dataset most SVO languages are European languages, which are high/mid-resource languages.}.
\noindent\textbf{mBERT is sensitive to pruning bottom layers for crosslingual transfer.}
Many recent works have shown that middle layers are important in BERT \cite{Pande2021TheHH, Rogers2020API, jawahar-etal-2019-bert, liu2019linguistic}. In this section, we analyse the relative importance of layers for different languages in a crosslingual task. Referring to Figure \ref{label_combined_layers} (e) to (k), we observe that for all languages, the bottom layers are more important than the top layers for crosslingual transfer. Specifically, we compare the difference $d(\cdot)$ between the relative performance drop when pruning the same number of top and bottom layers. We observe that $d(\mbox{SVO})$ $<$ $d(\mbox{SOV})$ $<$ $d(\mbox{VSO})$ $<$ $d(\mbox{Agglutinative})$ [(e) to (h) of Figure \ref{label_combined_layers}] and $d(\mbox{High~resource})$ $<$ $d(\mbox{Medium~resource})$ $<$ $d(\mbox{Low~resource})$ [(i) to (k) of Figure \ref{label_combined_layers}]. %
This suggests that most of the crosslingual information is stored in the bottom layers. While pruning top layers has a lower impact on accuracy, the order across language families is the exact reverse of the order for the bottom layers: $d(\mbox{SVO})$ $>$ $d(\mbox{SOV})$ $>$ $d(\mbox{VSO})$ $>$ $d(\mbox{Agglutinative})$. Lastly, we observe that unlike BERT, mBERT is less sensitive to pruning middle layers for most languages.
\noindent\textbf{mBERT is more sensitive to pruning consecutive layers.}
On average, pruning \textit{even} layers works better than pruning \textit{odd} or consecutive layers. This suggests that retaining the first layer (as in the even case) is beneficial.
\noindent\textbf{mBERT does not benefit from extensive fine-tuning after pruning.} We observe that on average (over all pruning experiments and languages on the XNLI task), mBERT achieves 93\% of the final accuracy after just one epoch of fine-tuning. Further fine-tuning causes only marginal gains in the accuracy.
To identify any preference for specific layers during fine-tuning, we analyse the changes across layers.
We compute the entropy of each attention head in each layer for a given input token and average this entropy across all tokens in 500 sentences from the dev set. We plot the difference between the entropy of the unpruned heads before and after fine-tuning (see Figure \ref{label_entropy_change}). We notice that, just like in BERT \cite{kovaleva2019revealing}, the change in entropy is maximum for attention heads in the top layers (0.176) as compared to the bottom (0.047) or middle layers (0.042). This suggests that mBERT adjusts the top layers more during fine-tuning.
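The entropy measure can be sketched as follows; this is a minimal illustration on toy attention rows, not the evaluation pipeline itself (in practice the rows come from the model's attention weights).

```python
import math

def attention_entropy(attn_row):
    """Shannon entropy (in nats) of one token's attention distribution."""
    return -sum(p * math.log(p) for p in attn_row if p > 0)

def mean_head_entropy(attn):
    """Average entropy over all query tokens for one head.

    attn: list of rows, each a probability distribution over key tokens.
    """
    return sum(attention_entropy(row) for row in attn) / len(attn)
```

A uniform attention row over $n$ tokens gives the maximum entropy $\ln n$, while a one-hot row gives zero, so a drop in this average after fine-tuning indicates heads becoming more focused.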
\begin{figure}[tp]
\centering
\begin{tabular}{@{\hskip 0in}c}
\includegraphics[height=1.2in]{plots/90pp_ftuning_layer_cropped.png}
\end{tabular}
\caption{Change in entropy of surviving heads before and after one epoch of fine-tuning for the 90\% randomly pruned model.}
\label{label_entropy_change}
\end{figure}
\section{Conclusion}
We studied the effect of pruning on mBERT and found that for in-language tasks, it is as robust as BERT and has the same layer preferences.
For cross-language tasks, mBERT is less robust, with significant drops especially for low-resource and non-SVO languages. Bottom layers are more important, and pruning consecutive layers is not preferred.
The importance of top and bottom layers has the reverse order across language families.
\section{Electronic Submission}
\label{submission}
Submission to ICML 2023 will be entirely electronic, via a web site
(not email). Information about the submission process and \LaTeX\ templates
are available on the conference web site at:
\begin{center}
\textbf{\texttt{http://icml.cc/}}
\end{center}
The guidelines below will be enforced for initial submissions and
camera-ready copies. Here is a brief summary:
\begin{itemize}
\item Submissions must be in PDF\@.
\item \textbf{New to this year}: If your paper has appendices, submit the appendix together with the main body and the references \textbf{as a single file}. Reviewers will not look for appendices as a separate PDF file. So if you submit such an extra file, reviewers will very likely miss it.
\item Page limit: The main body of the paper has to be fitted to 8 pages, excluding references and appendices; the space for the latter two is not limited. For the final version of the paper, authors can add one extra page to the main body.
\item \textbf{Do not include author information or acknowledgements} in your
initial submission.
\item Your paper should be in \textbf{10 point Times font}.
\item Make sure your PDF file only uses Type-1 fonts.
\item Place figure captions \emph{under} the figure (and omit titles from inside
the graphic file itself). Place table captions \emph{over} the table.
\item References must include page numbers whenever possible and be as complete
as possible. Place multiple citations in chronological order.
\item Do not alter the style template; in particular, do not compress the paper
format by reducing the vertical spaces.
\item Keep your abstract brief and self-contained, one paragraph and roughly
4--6 sentences. Gross violations will require correction at the
camera-ready phase. The title should have content words capitalized.
\end{itemize}
\subsection{Submitting Papers}
\textbf{Paper Deadline:} The deadline for paper submission that is
advertised on the conference website is strict. If your full,
anonymized, submission does not reach us on time, it will not be
considered for publication.
\textbf{Anonymous Submission:} ICML uses double-blind review: no identifying
author information may appear on the title page or in the paper
itself. \cref{author info} gives further details.
\textbf{Simultaneous Submission:} ICML will not accept any paper which,
at the time of submission, is under review for another conference or
has already been published. This policy also applies to papers that
overlap substantially in technical content with conference papers
under review or previously published. ICML submissions must not be
submitted to other conferences and journals during ICML's review
period.
Informal publications, such as technical
reports or papers in workshop proceedings which do not appear in
print, do not fall under these restrictions.
\medskip
Authors must provide their manuscripts in \textbf{PDF} format.
Furthermore, please make sure that files contain only embedded Type-1 fonts
(e.g.,~using the program \texttt{pdffonts} in linux or using
File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3)
might come from graphics files imported into the document.
Authors using \textbf{Word} must convert their document to PDF\@. Most
of the latest versions of Word have the facility to do this
automatically. Submissions will not be accepted in Word format or any
format other than PDF\@. Really. We're not joking. Don't send Word.
Those who use \textbf{\LaTeX} should avoid including Type-3 fonts.
Those using \texttt{latex} and \texttt{dvips} may need the following
two commands:
{\footnotesize
\begin{verbatim}
dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi
ps2pdf paper.ps
\end{verbatim}}
It is a zero following the ``-G'', which tells dvips to use
the config.pdf file. Newer \TeX\ distributions don't always need this
option.
Using \texttt{pdflatex} rather than \texttt{latex}, often gives better
results. This program avoids the Type-3 font problem, and supports more
advanced features in the \texttt{microtype} package.
\textbf{Graphics files} should be a reasonable size, and included from
an appropriate format. Use vector formats (.eps/.pdf) for plots,
lossless bitmap formats (.png) for raster graphics with sharp lines, and
jpeg for photo-like images.
The style file uses the \texttt{hyperref} package to make clickable
links in documents. If this causes problems for you, add
\texttt{nohyperref} as one of the options to the \texttt{icml2023}
usepackage statement.
\subsection{Submitting Final Camera-Ready Copy}
The final versions of papers accepted for publication should follow the
same format and naming convention as initial submissions, except that
author information (names and affiliations) should be given. See
\cref{final author} for formatting instructions.
The footnote, ``Preliminary work. Under review by the International
Conference on Machine Learning (ICML). Do not distribute.'' must be
modified to ``\textit{Proceedings of the
$\mathit{39}^{th}$ International Conference on Machine Learning},
Honolulu, Hawaii, USA, PMLR 202, 2023.
Copyright 2023 by the author(s).''
For those using the \textbf{\LaTeX} style file, this change (and others) is
handled automatically by simply changing
$\mathtt{\backslash usepackage\{icml2023\}}$ to
$$\mathtt{\backslash usepackage[accepted]\{icml2023\}}$$
Authors using \textbf{Word} must edit the
footnote on the first page of the document themselves.
Camera-ready copies should have the title of the paper as running head
on each page except the first one. The running title consists of a
single line centered above a horizontal rule which is $1$~point thick.
The running head should be centered, bold and in $9$~point type. The
rule should be $10$~points above the main text. For those using the
\textbf{\LaTeX} style file, the original title is automatically set as running
head using the \texttt{fancyhdr} package which is included in the ICML
2023 style file package. In case that the original title exceeds the
size restrictions, a shorter form can be supplied by using
\verb|\icmltitlerunning{...}|
just before $\mathtt{\backslash begin\{document\}}$.
Authors using \textbf{Word} must edit the header of the document themselves.
\section{Format of the Paper}
All submissions must follow the specified format.
\subsection{Dimensions}
The text of the paper should be formatted in two columns, with an
overall width of 6.75~inches, height of 9.0~inches, and 0.25~inches
between the columns. The left margin should be 0.75~inches and the top
margin 1.0~inch (2.54~cm). The right and bottom margins will depend on
whether you print on US letter or A4 paper, but all final versions
must be produced for US letter size.
Do not write anything on the margins.
The paper body should be set in 10~point type with a vertical spacing
of 11~points. Please use Times typeface throughout the text.
\subsection{Title}
The paper title should be set in 14~point bold type and centered
between two horizontal rules that are 1~point thick, with 1.0~inch
between the top rule and the top edge of the page. Capitalize the
first letter of content words and put the rest of the title in lower
case.
\subsection{Author Information for Submission}
\label{author info}
ICML uses double-blind review, so author information must not appear. If
you are using \LaTeX\/ and the \texttt{icml2023.sty} file, use
\verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information
will not be printed unless \texttt{accepted} is passed as an argument to the
style file.
Submissions that include the author information will not
be reviewed.
\subsubsection{Self-Citations}
If you are citing published papers for which you are an author, refer
to yourself in the third person. In particular, do not use phrases
that reveal your identity (e.g., ``in previous work \cite{langley00}, we
have shown \ldots'').
Do not anonymize citations in the reference section. The only exception are manuscripts that are
not yet published (e.g., under submission). If you choose to refer to
such unpublished manuscripts \cite{anonymous}, anonymized copies have
to be submitted
as Supplementary Material via CMT\@. However, keep in mind that an ICML
paper should be self contained and should contain sufficient detail
for the reviewers to evaluate the work. In particular, reviewers are
not required to look at the Supplementary Material when writing their
review (they are not required to look at more than the first $8$ pages of the submitted document).
\subsubsection{Camera-Ready Author Information}
\label{final author}
If a paper is accepted, a final camera-ready copy must be prepared.
For camera-ready papers, author information should start 0.3~inches below the
bottom rule surrounding the title. The authors' names should appear in 10~point
bold type, in a row, separated by white space, and centered. Author names should
not be broken across lines. Unbolded superscripted numbers, starting at 1, should
be used to refer to affiliations.
Affiliations should be numbered in the order of appearance. A single footnote
block of text should be used to list all the affiliations. (Academic
affiliations should list Department, University, City, State/Region, Country.
Similarly for industrial affiliations.)
Each distinct affiliation should be listed once. If an author has multiple
affiliations, multiple superscripts should be placed after the name, separated
by thin spaces. If the authors would like to highlight equal contribution by
multiple first authors, those authors should have an asterisk placed after their
name in superscript, and the term ``\textsuperscript{*}Equal contribution"
should be placed in the footnote block ahead of the list of affiliations. A
list of corresponding authors and their emails (in the format Full Name
\textless{}email@domain.com\textgreater{}) can follow the list of affiliations.
Ideally only one or two names should be listed.
A sample file with author names is included in the ICML2023 style file
package. Turn on the \texttt{[accepted]} option to the stylefile to
see the names rendered. All of the guidelines above are implemented
by the \LaTeX\ style file.
\subsection{Abstract}
The paper abstract should begin in the left column, 0.4~inches below the final
address. The heading `Abstract' should be centered, bold, and in 11~point type.
The abstract body should use 10~point type, with a vertical spacing of
11~points, and should be indented 0.25~inches more than normal on left-hand and
right-hand margins. Insert 0.4~inches of blank space after the body. Keep your
abstract brief and self-contained, limiting it to one paragraph and roughly 4--6
sentences. Gross violations will require correction at the camera-ready phase.
\subsection{Partitioning the Text}
You should organize your paper into sections and paragraphs to help
readers place a structure on the material and understand its
contributions.
\subsubsection{Sections and Subsections}
Section headings should be numbered, flush left, and set in 11~pt bold
type with the content words capitalized. Leave 0.25~inches of space
before the heading and 0.15~inches after the heading.
Similarly, subsection headings should be numbered, flush left, and set
in 10~pt bold type with the content words capitalized. Leave
0.2~inches of space before the heading and 0.13~inches afterward.
Finally, subsubsection headings should be numbered, flush left, and
set in 10~pt small caps with the content words capitalized. Leave
0.18~inches of space before the heading and 0.1~inches after the
heading.
Please use no more than three levels of headings.
\subsubsection{Paragraphs and Footnotes}
Within each section or subsection, you should further partition the
paper into paragraphs. Do not indent the first line of a given
paragraph, but insert a blank line between succeeding ones.
You can use footnotes\footnote{Footnotes
should be complete sentences.} to provide readers with additional
information about a topic without interrupting the flow of the paper.
Indicate footnotes with a number in the text where the point is most
relevant. Place the footnote in 9~point type at the bottom of the
column in which it appears. Precede the first footnote in a column
with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can
appear in each column, in the same order as they appear in the text,
but spread them across columns and pages if possible.}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{icml_numpapers}}
\caption{Historical locations and number of accepted papers for International
Machine Learning Conferences (ICML 1993 -- ICML 2008) and International
Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was
produced, the number of accepted papers for ICML 2008 was unknown and instead
estimated.}
\label{icml-historical}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Figures}
You may want to include figures in the paper to illustrate
your approach and results. Such artwork should be centered,
legible, and separated from the text. Lines should be dark and at
least 0.5~points thick for purposes of reproduction, and text should
not appear on a gray background.
Label all distinct components of each figure. If the figure takes the
form of a graph, then give a name for each axis and include a legend
that briefly describes each curve. Do not include a title inside the
figure; instead, the caption should serve this function.
Number figures sequentially, placing the figure number and caption
\emph{after} the graphics, with at least 0.1~inches of space before
the caption and 0.1~inches after it, as in
\cref{icml-historical}. The figure caption should be set in
9~point type and centered unless it runs two or more lines, in which
case it should be flush left. You may float figures to the top or
bottom of a column, and you may set wide figures across both columns
(use the environment \texttt{figure*} in \LaTeX). Always place
two-column figures at the top or bottom of the page.
\subsection{Algorithms}
If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic''
environments to format pseudocode. These require
the corresponding stylefiles, algorithm.sty and
algorithmic.sty, which are supplied with this package.
\cref{alg:example} shows an example.
\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}
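Outside of \LaTeX, the same pseudocode can be transcribed directly into a general-purpose language; a minimal Python sketch of \cref{alg:example} (the function name is our choice) is:

```python
def bubble_sort(data):
    """Plain-Python transcription of the Bubble Sort pseudocode."""
    x = list(data)                      # work on a copy; leave the input untouched
    m = len(x)
    while True:                         # REPEAT ... UNTIL noChange is true
        no_change = True
        for i in range(m - 1):          # i = 1 to m-1 in the pseudocode
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]   # swap x_i and x_{i+1}
                no_change = False
        if no_change:
            return x
```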
\subsection{Tables}
You may also want to include tables that summarize material. Like
figures, these should be centered, legible, and numbered consecutively.
However, place the title \emph{above} the table with at least
0.1~inches of space before the title and the same after it, as in
\cref{sample-table}. The table title should be set in 9~point
type and centered unless it runs two or more lines, in which case it
should be flush left.
\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible
Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccr}
\toprule
Data set & Naive & Flexible & Better? \\
\midrule
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Tables contain textual material, whereas figures contain graphical material.
Specify the contents of each row and column in the table's topmost
row. Again, you may float tables to a column's top or bottom, and set
wide tables across both columns. Place two-column tables at the
top or bottom of the page.
\subsection{Theorems and such}
The preferred way is to number definitions, propositions, lemmas, etc. consecutively, within sections, as shown below.
\begin{definition}
\label{def:inj}
A function $f:X \to Y$ is injective if $f(x)\ne f(y)$ for any two distinct $x,y\in X$.
\end{definition}
Using \cref{def:inj} we immediately get the following result:
\begin{proposition}
If $f$ is injective mapping a set $X$ to another set $Y$,
the cardinality of $Y$ is at least as large as that of $X$.
\end{proposition}
\begin{proof}
Left as an exercise to the reader.
\end{proof}
\cref{lem:usefullemma} stated next will prove to be useful.
\begin{lemma}
\label{lem:usefullemma}
For injective functions $f:X \to Y$ and $g:Y\to Z$, the composition $g \circ f$ is injective.
\end{lemma}
\begin{theorem}
\label{thm:bigtheorem}
If $f:X\to Y$ is bijective, the cardinality of $X$ and $Y$ are the same.
\end{theorem}
An easy corollary of \cref{thm:bigtheorem} is the following:
\begin{corollary}
If $f:X\to Y$ is bijective,
the cardinality of $X$ is at least as large as that of $Y$.
\end{corollary}
\begin{assumption}
The set $X$ is finite.
\label{ass:xfinite}
\end{assumption}
\begin{remark}
According to some, it is only the finite case (cf. \cref{ass:xfinite}) that is interesting.
\end{remark}
\subsection{Citations and References}
Please use APA reference format regardless of your formatter
or word processor. If you rely on the \LaTeX\/ bibliographic
facility, use \texttt{natbib.sty} and \texttt{icml2023.bst}
included in the style-file package to obtain this format.
Citations within the text should include the authors' last names and
year. If the authors' names are included in the sentence, place only
the year in parentheses, for example when referencing Arthur Samuel's
pioneering work \yrcite{Samuel59}. Otherwise place the entire
reference in parentheses with the authors and year separated by a
comma \cite{Samuel59}. List multiple references separated by
semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.'
construct only for citations with three or more authors or after
listing all authors to a publication in an earlier reference \cite{MachineLearningI}.
Authors should cite their own work in the third person
in the initial version of their paper submitted for blind review.
Please refer to \cref{author info} for detailed instructions on how to
cite your own papers.
Use an unnumbered first-level section heading for the references, and use a
hanging indent style, with the first line of the reference flush against the
left margin and subsequent lines indented by 10 points. The references at the
end of this document give examples for journal articles \cite{Samuel59},
conference publications \cite{langley00}, book chapters \cite{Newell81}, books
\cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports
\cite{mitchell80}, and dissertations \cite{kearns89}.
Alphabetize references by the surnames of the first authors, with
single author entries preceding multiple author entries. Order
references for the same authors by year of publication, with the
earliest first. Make sure that each reference includes all relevant
information (e.g., page numbers).
Please put some effort into making references complete, presentable, and
consistent, e.g. use the actual current name of authors.
If using bibtex, please protect capital letters of names and
abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz
in your .bib file.
\section*{Accessibility}
Authors are kindly asked to make their submissions as accessible as possible for everyone, including people with disabilities and sensory or neurological differences.
Tips on how to achieve this and what to pay attention to will be provided on the conference website \url{http://icml.cc/}.
\section*{Software and Data}
If a paper is accepted, we strongly encourage the publication of software and data with the
camera-ready version of the paper whenever appropriate. This can be
done by including a URL in the camera-ready copy. However, \textbf{do not}
include URLs that reveal your institution or identity in your
submission for review. Instead, provide an anonymous URL or upload
the material as ``Supplementary Material'' into the CMT reviewing
system. Note that reviewers are not required to look at this material
when writing their review.
\section*{Acknowledgements}
\textbf{Do not} include acknowledgements in the initial version of
the paper submitted for blind review.
If a paper is accepted, the final camera-ready version can (and
probably should) include acknowledgements. In this case, please
place such acknowledgements in an unnumbered section at the
end of the paper. Typically, this will include thanks to reviewers
who gave useful comments, to colleagues who contributed to the ideas,
and to funding agencies and corporate sponsors that provided financial
support.
\nocite{langley00}
\section{Introduction}
In a series of influential papers, Amari and Wu proposed that one could improve the generalization performance of support vector machine (SVM) classifiers through data-dependent transformations of the kernel to expand the Riemannian volume element near decision boundaries \cite{amari1999improving,wu2002conformal,williams2007geometrical}. This proposal was based on the idea that this local magnification of areas improves class discriminability \cite{cho2011analysis,amari1999improving,burges1999geometry}.
Over the past decade, SVMs have largely been eclipsed by neural networks, whose ability to flexibly learn features from data is believed to underlie their superior generalization performance \cite{lecun2015deep,zhang2021understanding}. Previous works have explored some aspects of the geometry induced by neural network feature maps with random parameters \cite{poole2016exponential,amari2019statistical,cho2009kernel,cho2011analysis,zv2022capacity,hauser2017principles,benfenati2023singular}, but have not characterized data-dependent changes in representational geometry over training.
In this work, we explore the possibility that neural networks learn to enhance local input discriminability automatically over the course of training. Our primary contributions are:
\begin{itemize}[leftmargin=*]
\item
In \S \ref{sec:fixedweights}, we study general properties of the metric induced by shallow fully-connected neural networks. Next, in \S \ref{sec:nngp}, we compute the volume element and curvature of the metric induced by infinitely wide shallow networks with Gaussian weights and smooth activation functions, showing that it is spherically symmetric.
\item
In \S \ref{sec:experiments}, we empirically show that training shallow networks on simple classification tasks expands the volume element along decision boundaries, consistent with the hand-engineered modifications proposed by Amari and Wu. In \S \ref{sec:deep}, we provide evidence that deep residual networks trained on more complex tasks behave similarly.
\end{itemize}
In total, our results provide a preliminary picture of how feature learning shapes local input discriminability.
\section{Preliminaries}\label{sec:preliminaries}
We begin by introducing the basic idea of the Riemannian geometry of feature space representations. Our setup and notation largely follow \citet{burges1999geometry}, which in turn follows the conventions of \citet{dodson1991tensor}.
\subsection{Feature embeddings as Riemannian manifolds}
Consider $d$-dimensional data living in some submanifold $\mathcal{D} \subseteq \mathbb{R}^{d}$. Let the \emph{feature map} $\mathbf{\Phi}: \mathbb{R}^{d} \to \mathcal{H}$ be a map from $\mathbb{R}^{d}$ to some separable Hilbert space $\mathcal{H}$ of possibly infinite dimension $n$, with $\mathbf{\Phi}(\mathcal{D}) = \mathcal{M} \subseteq \mathcal{H}$. We index input space dimensions by Greek letters $\mu,\nu,\rho,\ldots \in [d]$ and feature space dimensions by Latin letters $i,j,k, \ldots \in [n]$. We use the Einstein summation convention; summation over all repeated indices is implied.
Assume that $\mathbf{\Phi}$ is $\mathcal{C}^{k}$ for $k \geq 3$, and is everywhere of rank $r=\min\{d,n\}$. If $r = d$, then $\mathcal{M}$ is a $d$-dimensional $\mathcal{C}^{k}$ manifold immersed in $\mathcal{H}$. If $k = \infty$, then $\mathcal{M}$ is a smooth manifold. In contrast, if $r < d$, then $\mathcal{M}$ is a $d$-dimensional $\mathcal{C}^{k}$ manifold submersed in $\mathcal{H}$. The flat metric on $\mathcal{H}$ can then be pulled back to $\mathcal{M}$, with components
\begin{align} \label{eqn:general_metric}
g_{\mu\nu} = \partial_{\mu} \Phi_{i} \partial_{\nu} \Phi_{i},
\end{align}
where we write $\partial_{\mu} \equiv \partial/\partial x^{\mu}$. If $r=d$ and the pullback metric $g_{\mu\nu}$ is full rank, then $(\mathcal{M},g)$ is a $d$-dimensional Riemannian manifold \cite{dodson1991tensor,burges1999geometry}. However, if the pullback $g_{\mu\nu}$ is a degenerate metric, as must be the case if $r < d$, then $(\mathcal{M},g)$ is a singular semi-Riemannian manifold \cite{benfenati2023singular,kupeli2013singular}. In this case, if we let $\sim$ be the equivalence relation defined by identifying points with vanishing pseudodistance, the quotient $(\mathcal{M}/\sim,g)$ is a Riemannian manifold \cite{benfenati2023singular}. Unless noted otherwise, our results will focus on the non-singular case. We denote the matrix inverse of the metric tensor by $g^{\mu\nu}$, and we raise and lower input space indices using the metric.
If we define the feature kernel $k(\mathbf{x},\mathbf{y}) = \Phi_{i}(\mathbf{x}) \Phi_{i}(\mathbf{y})$ for $\mathbf{x},\mathbf{y} \in \mathcal{D}$, then the resulting metric can be written in terms of the kernel as $g_{\mu\nu} = (1/2) \partial_{x_\mu} \partial_{x_\nu} k(\mathbf{x},\mathbf{x}) - [\partial_{y_{\mu}} \partial_{y_{\nu}} k(\mathbf{x},\mathbf{y})]_{\mathbf{y} = \mathbf{x}}$.
This formula applies even if $n = \infty$, giving the metric induced by the feature embedding associated to a suitable Mercer kernel \cite{burges1999geometry}.
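As a concrete illustration of \eqref{eqn:general_metric}, the pullback metric can be assembled from the Jacobian of any explicit feature map. The sketch below uses a hypothetical toy map $\mathbf{\Phi}:\mathbb{R}^2 \to \mathbb{R}^3$ of our own choosing, not one from the text:

```python
import numpy as np

def phi(x):
    # Toy smooth feature map Phi: R^2 -> R^3 (illustrative choice)
    x1, x2 = x
    return np.array([x1, x2, x1 ** 2 + x2 ** 2])

def jacobian(x):
    # Exact Jacobian with entries dPhi_i / dx_mu
    x1, x2 = x
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [2.0 * x1, 2.0 * x2]])

def pullback_metric(x):
    # g_{mu nu} = partial_mu Phi_i partial_nu Phi_i (sum over i implied)
    J = jacobian(x)
    return J.T @ J
```

The same construction applies verbatim to any feature map for which the Jacobian is available, including neural network hidden-layer maps.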
\subsection{Volume element and curvature}
With this setup, $(\mathcal{M},g)$ is a Riemannian manifold, hence we have at our disposal a powerful toolkit with which we may study its geometry. We will focus on two geometric properties of $(\mathcal{M},g)$. First, the volume element is given by
\begin{align}
dV = \sqrt{\det g}\, d^{d}x,
\end{align}
where the factor $\sqrt{\det g}$ measures how local areas in input space are magnified by the feature map \cite{dodson1991tensor,amari1999improving,burges1999geometry}. Second, we consider the intrinsic curvature of the manifold, which is characterized by the Riemann tensor $R^{\mu}_{\nu\alpha\beta}$ \cite{dodson1991tensor}.
If $R^{\mu}_{\nu\alpha\beta}= 0$, then the manifold is intrinsically flat. As a tractable measure, we focus on the Ricci curvature scalar $R = g^{\beta \nu} R^{\alpha}_{\nu \alpha \beta}$, which measures the deviation of the volume of an infinitesimal geodesic ball in the manifold from that in flat space \cite{dodson1991tensor}.
In the singular case, we can compute the volume element on $\mathcal{M} / \sim $ at a given point by taking the square root of the product of the non-zero eigenvalues of the degenerate metric $g_{\mu\nu}$ at that point \cite{benfenati2023singular}. However, the curvature in this case is generally not straightforward to compute; we will therefore leave this issue for future work.
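Numerically, both the non-singular and the singular prescriptions for the volume element reduce to an eigenvalue computation; a minimal sketch (the function name and tolerance are our choices):

```python
import numpy as np

def volume_factor(g, tol=1e-12):
    """Local volume factor sqrt(det g), generalized to a degenerate metric
    as the square root of the product of its non-zero eigenvalues."""
    evals = np.linalg.eigvalsh(g)        # g is symmetric by construction
    return np.sqrt(np.prod(evals[evals > tol]))
```

For a full-rank metric this reduces to $\sqrt{\det g}$; for a degenerate one it gives the volume factor on the quotient manifold.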
\subsection{Shallow neural network feature maps}
In this work, we consider a particular class of feature maps: those given by the hidden layer representations of neural networks \cite{williams1997computing,neal1996priors,lecun2015deep,cho2009kernel,cho2011analysis,lee2018deep,matthews2018gaussian,hauser2017principles,benfenati2023singular}. We will mostly focus on shallow fully-connected neural networks, i.e., those with only a single hidden layer followed by readout. Concretely, such a feature map is of the form
\begin{align} \label{eqn:shallow_featuremap}
\Phi_{j}(\mathbf{x}) = n^{-1/2} \phi(\mathbf{w}_{j} \cdot \mathbf{x} + b_{j})
\end{align}
for weights $\mathbf{w}_{j}$, biases $b_{j}$, and an activation function $\phi$. For convenience, we abbreviate the Euclidean inner product on feature or input space by $\cdot$, e.g., $\mathbf{w} \cdot \mathbf{x} = w_{\mu} x_{\mu}$. In this case, the feature space dimension $n$ is equal to the number of hidden units, and is referred to as the \emph{width} of the hidden layer. In \eqref{eqn:shallow_featuremap}, we scale the components of the feature map by $n^{-1/2}$ such that the associated kernel $k(\mathbf{x},\mathbf{y}) = \Phi_{i}(\mathbf{x}) \Phi_{i}(\mathbf{y})$ and metric \eqref{eqn:shallow_metric} have the form of averages over hidden units, and therefore should be well-behaved at large widths \cite{neal1996priors,williams1997computing}.
We will assume that $\phi$ is $\mathcal{C}^{k}$ for $k \geq 3$, so that this feature map satisfies the smoothness conditions required in the setup above. We will also assume that the activation function and weight vectors are such that the Jacobian $\partial_{\mu} \Phi_{j}$ is full-rank, i.e., is of rank $\min\{d,n\}$. Then, the shallow network feature map satisfies the required conditions for the feature embedding to be a (possibly singular) Riemannian manifold. These conditions extend directly to deep fully-connected networks formed by composing feature maps of the form \eqref{eqn:shallow_featuremap} \cite{hauser2017principles,benfenati2023singular}.
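For a shallow network, the Jacobian is available in closed form, $\partial_{\mu} \Phi_{j} = n^{-1/2} \phi'(z_{j}) w_{j\mu}$, so the induced metric can be evaluated exactly rather than by finite differences. A sketch with $\phi = \tanh$ and random Gaussian parameters (our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3                          # width and input dimension (arbitrary)
W = rng.standard_normal((n, d))       # rows are the weight vectors w_j
b = rng.standard_normal(n)            # biases b_j

def shallow_metric(x):
    """Pullback metric of Phi_j(x) = n^{-1/2} tanh(w_j . x + b_j)."""
    z = W @ x + b
    # Exact Jacobian rows: tanh'(z_j) w_j / sqrt(n)
    J = (1.0 / np.cosh(z) ** 2)[:, None] * W / np.sqrt(n)
    return J.T @ J                    # = (1/n) sum_j phi'(z_j)^2 w_j w_j^T
```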
\section{Related works} \label{sec:related}
Having established the geometric preliminaries of \S\ref{sec:preliminaries}, we can give a more complete overview of related works. As introduced above, our hypothesis for how the Riemannian geometry of neural network representations changes during training is directly inspired by the work of \citet{amari1999improving}. In that and subsequent works \cite{amari1999improving,wu2002conformal,williams2007geometrical}, they proposed to modify the kernel of an SVM as $\tilde{k}(\mathbf{x},\mathbf{y}) = h(\mathbf{x}) h(\mathbf{y}) k(\mathbf{x},\mathbf{y})$
for some positive scalar function $h(\mathbf{x})$ chosen such that the magnification factor $\sqrt{\det g}$ is large near the SVM's decision boundary. Concretely, they proposed to fit an SVM with some base kernel $k$, choose $h(\mathbf{x}) = \sum_{\mathbf{v} \in \textrm{SV}(k)} \exp[-\Vert \mathbf{x}-\mathbf{v}\Vert^2/2 \tau^2]$
for $\tau$ a bandwidth parameter and $\textrm{SV}(k)$ the set of support vectors for $k$, and then fit an SVM with the modified kernel $\tilde{k}$. Here, $\Vert \cdot \Vert$ denotes the Euclidean norm. This process could then be iterated, yielding a sequence of modified kernels. They found that this method can improve generalization performance relative to the original kernel \cite{amari1999improving,wu2002conformal,williams2007geometrical}. This approach is a hand-designed form of iterative feature learning.
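A literal sketch of the Amari and Wu construction follows; the RBF base kernel and parameter values are our illustrative choices, not theirs:

```python
import numpy as np

def base_kernel(x, y, gamma=0.5):
    # Generic RBF base kernel k(x, y) (illustrative choice)
    return np.exp(-gamma * np.sum((x - y) ** 2))

def conformal_factor(x, support_vectors, tau=1.0):
    # h(x): a Gaussian bump of bandwidth tau at each support vector
    return sum(np.exp(-np.sum((x - v) ** 2) / (2.0 * tau ** 2))
               for v in support_vectors)

def modified_kernel(x, y, support_vectors, tau=1.0):
    # Conformal modification: k~(x, y) = h(x) h(y) k(x, y)
    h = lambda u: conformal_factor(u, support_vectors, tau)
    return h(x) * h(y) * base_kernel(x, y)
```

Because $\tilde{k}$ is a conformal rescaling of $k$ by a positive function, it remains a valid kernel, and the bumps in $h$ magnify $\sqrt{\det g}$ near the support vectors.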
The geometry induced by common kernels was investigated in detail by \citet{burges1999geometry}, who established a broad range of technical results. He showed that translation-invariant kernels of the form $k(\mathbf{x},\mathbf{y}) = k(\Vert \mathbf{x}-\mathbf{y} \Vert^2)$ yield flat, constant metrics,\footnote{It is interesting to note that this holds for the simplest form of a method for learning data-adaptive kernels recently proposed by \citet{radhakrishnan2022recursive}; see Appendix \ref{app:invariant}.} and gave a detailed characterization of polynomial kernels $k(\mathbf{x},\mathbf{y}) = (\mathbf{x} \cdot \mathbf{y})^{q}$. \citet{cho2011analysis} subsequently analyzed the geometry induced by arc-cosine kernels, i.e., the feature kernels of infinitely-wide shallow neural networks with threshold-power law activation functions $\phi(x) = \max\{0,x\}^q$ and random parameters \cite{cho2009kernel}. Our results on infinitely-wide networks for general smooth activation functions build on these works.
The representational geometry of deep networks with random Gaussian parameters in the limit of large width and depth was studied by \citet{poole2016exponential}, and in later work by \citet{amari2019statistical}.
These works tie into a broader line of research on infinite-width limits of deep neural networks in which inference and prediction is captured by a kernel machine \cite{neal1996priors,williams1997computing,daniely2016deeper,lee2018deep,matthews2018gaussian,yang2019scaling,yang2021feature,zv2021asymptotics,zv2022capacity,bordelon2022selfconsistent}. Our results on the representational geometry of wide shallow networks with smooth activation functions build on these ideas, particularly those relating activation function derivatives to input discriminability \cite{poole2016exponential,daniely2016deeper,zv2021activation,zv2022capacity}.
Particularly closely related to our work are several recent papers that aim to study the curvature of neural network representations. \citet{hauser2017principles,benfenati2023singular} discuss formal principles of Riemannian geometry in deep neural networks, but do not characterize how training shapes the geometry. \citet{kaul2020riemannian} aimed to study the curvature of metrics induced by the outputs of pretrained classifiers. However, their work is limited by the fact that they estimate input-space derivatives using inexact finite differences under the strong assumption that the input data is confined to a \emph{known} smooth submanifold of $\mathbb{R}^{d}$. Recent works by \citet{shao2018riemannian,kuhnel2018latent,wang2021generative} have studied the Riemannian geometry of the latent representations of deep generative models. Finally, in very recent work \citet{benfenati2023reconstruction} have used the geometry induced by the full input-output mapping to reconstruct iso-response curves of deep networks. In contrast, our work focuses on hidden representations, and seeks to characterize the representational manifolds themselves.
\section{Representational geometry of shallow neural network feature maps}\label{sec:fixedweights}
We begin by studying general properties of the Riemannian metrics induced by shallow neural network feature maps.
\subsection{Finite-width networks}
We first consider finite-width networks with fixed weights, assuming that $n \geq d$. Writing $z_{j} = \mathbf{w}_{j} \cdot \mathbf{x} + b_{j}$ for the preactivation of the $j$-th hidden unit, the general formula \eqref{eqn:general_metric} for the metric yields
\begin{align} \label{eqn:shallow_metric}
g_{\mu\nu} = \frac{1}{n} \phi'(z_{j})^{2} w_{j \mu} w_{j \nu} .
\end{align}
This metric has the useful property that $\partial_{\alpha} g_{\mu\nu}$ is symmetric under permutation of its indices, hence the formula for the Riemann tensor simplifies substantially (Appendix \ref{app:riemann}).
Then, using the Leibniz formula for determinants, we show in Appendix \ref{app:fixedweights} that the determinant of the metric can be expanded as a sum over $d$-tuples of hidden units:
\begin{align} \label{eqn:finite_volume}
\det g = \frac{1}{n^{d} d!} M_{j_{1}\cdots j_{d}}^{2} \phi'(z_{j_{1}})^2 \cdots \phi'(z_{j_{d}})^2 ,
\end{align}
where
\begin{align}
M_{j_{1}\cdots j_{d}} = \det
\begin{pmatrix}
w_{j_{1} 1} & \cdots & w_{j_{1} d} \\
\vdots & \ddots & \vdots \\
w_{j_{d} 1} & \cdots & w_{j_{d} d}
\end{pmatrix}
\end{align}
is the minor of the weight matrix obtained by selecting units $j_{1}, \ldots, j_{d}$.
For the error function $\phi(x) = \erf(x/\sqrt{2})$, $\det g$ expands as a superposition of Gaussian bump functions, one for each tuple of hidden units \eqref{eqn:fixed_erf}. This is reminiscent of Amari and Wu's approach, which yields a Gaussian contribution to $\sqrt{\det g}$ from each support vector (\S \ref{sec:related}).
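The expansion \eqref{eqn:finite_volume} is straightforward to check numerically against a direct evaluation of $\det g$; a brute-force sketch at small width (the tanh activation and random parameters are our choices):

```python
import numpy as np
from itertools import product
from math import factorial

rng = np.random.default_rng(1)
n, d = 4, 2
W = rng.standard_normal((n, d))       # rows are the weight vectors w_j
b = rng.standard_normal(n)
x = rng.standard_normal(d)

phi_prime = lambda z: 1.0 / np.cosh(z) ** 2     # tanh'(z)
z = W @ x + b

# Direct route: determinant of the pullback metric g = J^T J
J = phi_prime(z)[:, None] * W / np.sqrt(n)
detg_direct = np.linalg.det(J.T @ J)

# Expansion over ordered d-tuples of hidden units; tuples with a repeated
# unit contribute zero since the corresponding minor vanishes
total = 0.0
for js in product(range(n), repeat=d):
    M = np.linalg.det(W[list(js), :])           # minor M_{j_1 ... j_d}
    total += M ** 2 * np.prod(phi_prime(z[list(js)]) ** 2)
detg_expansion = total / (n ** d * factorial(d))
```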
We can also derive similar expansions for the Riemann tensor and Ricci scalar. The resulting expressions are rather unwieldy, so we give their general forms only in Appendix \ref{app:fixedweight_riemann}. However, in two dimensions the situation simplifies, as the Riemann tensor is completely determined by the Ricci scalar \cite{dodson1991tensor,mtw2017gravitation} (Appendix \ref{app:2d}). In this case, we have the compact expression
\begin{align} \label{eqn:finite_ricci}
(\det g)^2 R &= - \frac{3}{n^3} M_{jk}^{2} M_{ij} M_{ik} \nonumber\\&\quad \times \phi'(z_{i})^{2} \phi'(z_{j}) \phi'(z_{k}) \phi''(z_{j}) \phi''(z_{k}) .
\end{align}
This shows that in $d=2$ the curvature acquires contributions from each triple of distinct hidden units, hence if $n=2$ we have $R=0$. This follows from the fact that the feature map is in this case a change of coordinates on the two-dimensional manifold \cite{dodson1991tensor}.
\subsection{Geometry of infinite shallow networks}\label{sec:nngp}
We now characterize the metric induced by infinite-width networks ($n \to \infty$) with Gaussian weights and biases
\begin{align}
\mathbf{w}_{j} \sim_{\textrm{i.i.d.}} \mathcal{N}(\mathbf{0},\sigma^2 \mathbf{I}_{d}); \quad
b_{j} \sim_{\textrm{i.i.d.}} \mathcal{N}(0,\zeta^2) ,
\end{align}
as commonly chosen at initialization \cite{lecun2015deep,lee2018deep,matthews2018gaussian,yang2019scaling,yang2021feature,poole2016exponential}. For such networks, the hidden layer representation is described by the neural network Gaussian process (NNGP) kernel \cite{neal1996priors,williams1997computing,matthews2018gaussian,lee2018deep}:
\begin{align}
k(\mathbf{x},\mathbf{y}) &= \lim_{n \to \infty} n^{-1} \bm{\Phi}(\mathbf{x}) \cdot \bm{\Phi}(\mathbf{y}) \nonumber \\
&= \mathbb{E}_{\mathbf{w},b}[\phi(\mathbf{w} \cdot \mathbf{x} + b) \phi(\mathbf{w} \cdot \mathbf{y} + b) ] .
\end{align}
This kernel also completely describes the representation after training for networks in the lazy regime \cite{yang2021feature,bordelon2022selfconsistent}.
In Appendix \ref{app:shallow_nngp}, we show that the metric associated with the NNGP kernel, $g_{\mu\nu} = \mathbb{E}_{\mathbf{w},b}[\phi'(\mathbf{w} \cdot \mathbf{x} + b)^2 w_{\mu} w_{\nu}]$, can be written more illuminatingly as
\begin{align}
g_{\mu\nu} = e^{\Omega(\Vert \mathbf{x} \Vert^2)} [\delta_{\mu\nu} + 2 \Omega'(\Vert \mathbf{x} \Vert^2) x_{\mu} x_{\nu} ] ,
\end{align}
where the function $\Omega(\Vert\mathbf{x}\Vert^2)$ is defined via
\begin{align}
e^{\Omega(\Vert \mathbf{x} \Vert^2)} = \sigma^{2} \mathbb{E}_{z \sim \mathcal{N}(0,\sigma^2 \Vert \mathbf{x} \Vert^2 + \zeta^2)}[\phi'(z)^2] .
\end{align}
For these results, we must also assume that $\phi$ and its (weak) derivatives satisfy suitable boundedness assumptions for $\Omega$ to be twice-differentiable \cite{daniely2016deeper}. Therefore, like the metrics induced by other dot-product kernels, the NNGP metric has the form of a projection \cite{burges1999geometry}. Such metrics have determinant
\begin{align}
\det g = e^{\Omega d} (1 + 2 \Vert \mathbf{x} \Vert^2 \Omega')
\end{align}
and Ricci scalar
\begin{align}
R &= - \frac{3 (d-1) e^{-\Omega} (\Omega')^2 \Vert \mathbf{x} \Vert^2}{(1 + 2 \Vert \mathbf{x} \Vert^2 \Omega')^2} \nonumber\\&\qquad \times \left[ d + 2 + 2 \Vert \mathbf{x} \Vert^2 \left( (d-2) \Omega' + 2 \frac{\Omega''}{\Omega'} \right) \right] .
\end{align}
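For completeness, the determinant formula above is an instance of the matrix determinant lemma, $\det(\mathbf{I}_{d} + c\,\mathbf{x}\mathbf{x}^{\top}) = 1 + c\,\Vert\mathbf{x}\Vert^{2}$:

```latex
\det g
= e^{\Omega d} \det\left( \mathbf{I}_{d} + 2 \Omega' \, \mathbf{x} \mathbf{x}^{\top} \right)
= e^{\Omega d} \left( 1 + 2 \Omega' \Vert \mathbf{x} \Vert^{2} \right).
```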
Thus, all geometric quantities are spherically symmetric, depending only on $\Vert \mathbf{x} \Vert^2$. Thanks to the assumption of independent Gaussian weights, the geometric quantities associated to the shallow Neural Tangent Kernel and to the deep NNGP will share this spherical symmetry (Appendix \ref{app:ntk}) \cite{lee2018deep,matthews2018gaussian,yang2019scaling,yang2021feature}. This generalizes the results of \citet{cho2011analysis} for threshold-power law functions to arbitrary smooth activation functions.
The relation between Gaussian norms of $\phi'$ and input discriminability indicated by this result is consistent with previous studies \cite{poole2016exponential,daniely2016deeper,zv2021activation,zv2022capacity}. In short, unless the task depends only on the input norm, the geometry of infinite-width networks will not be linked to the task structure.
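For instance, for $\phi(x) = \erf(x/\sqrt{2})$ we have $\phi'(z)^2 = (2/\pi) e^{-z^2}$, so the defining Gaussian integral gives $e^{\Omega} = \sigma^{2} (2/\pi) (1 + 2q)^{-1/2}$ with $q = \sigma^{2} \Vert \mathbf{x} \Vert^{2} + \zeta^{2}$. A quadrature sanity check of this identity (parameter defaults are arbitrary):

```python
import numpy as np

def exp_omega_quadrature(norm_sq, sigma=1.0, zeta=1.0, n_nodes=80):
    """e^{Omega} = sigma^2 E_{z ~ N(0, q)}[phi'(z)^2] for phi(x) = erf(x/sqrt(2)),
    evaluated by probabilists' Gauss-Hermite quadrature."""
    q = sigma ** 2 * norm_sq + zeta ** 2
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    z = np.sqrt(q) * nodes                       # z ~ N(0, q) under the weight
    integrand = (2.0 / np.pi) * np.exp(-z ** 2)  # phi'(z)^2
    return sigma ** 2 * np.sum(weights * integrand) / np.sqrt(2.0 * np.pi)

def exp_omega_closed_form(norm_sq, sigma=1.0, zeta=1.0):
    # Closed form of the same Gaussian expectation
    q = sigma ** 2 * norm_sq + zeta ** 2
    return sigma ** 2 * (2.0 / np.pi) / np.sqrt(1.0 + 2.0 * q)
```

In particular, $e^{\Omega}$ is monotonically decreasing in $\Vert \mathbf{x} \Vert^{2}$ for this activation, consistent with the behavior described in \S\ref{sec:nngp_examples}.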
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{TheoryFigures/paper_curvature_2d_convergence_to_nngp_v0.pdf}
\vspace{-2em}
\caption{Convergence of geometric quantities for finite-width networks with Gaussian random parameters to the infinite-width limit. \textbf{a}. The magnification factor $\sqrt{\det g}$ (\emph{left}) and Ricci scalar $R$ (\emph{right}) as functions of the input norm $\Vert \mathbf{x} \Vert$ for networks with $\phi(x) = \erf(x/\sqrt{2})$. Empirical results for finite networks, computed using \eqref{eqn:finite_volume} and \eqref{eqn:2dricci} are shown in blue, with solid lines showing the mean and shaded patches the standard deviation over 25 realizations of random Gaussian parameters. In all cases, $\sigma = \zeta = 1$. The infinite-width result is shown as a black dashed line. \textbf{b}. As in \textbf{a}, but for normalized quadratic activation functions $\phi(x) = x^2/\sqrt{3}$. }
\label{fig:convergence_to_nngp}
\vspace{-1em}
\end{figure}
\subsection{Examples}
In Appendix \ref{sec:nngp_examples}, we evaluate the geometric quantities of the NNGP for certain analytically tractable activation functions. The resulting expressions for $\sqrt{\det g}$ and $R$ are rather lengthy, so we discuss only their qualitative behavior here. For the error function $\phi(x) = \erf(x/\sqrt{2})$, $R$ is negative for all $d>1$, and both $R$ and $\sqrt{\det g}$ are monotonically decreasing functions of $\Vert \mathbf{x} \Vert$ for all $\zeta$ and $\sigma$. For monomials $\phi(x) \propto x^{q}$ for integer $q > 1$, $\sqrt{ \det g}$ is a monotonically increasing function of $\Vert \mathbf{x} \Vert^2$, while $R$ is again non-positive. However, in this case the behavior of $R$ depends on whether or not bias terms are present: if $\zeta = 0$, then $R$ is a non-decreasing function of $\Vert \mathbf{x} \Vert^2$ that diverges towards $-\infty$ as $\Vert \mathbf{x} \Vert^2 \downarrow 0$, while if $\zeta > 0$, $R$ may be non-monotonic in $\Vert \mathbf{x} \Vert^2$. In Figure \ref{fig:convergence_to_nngp}, we illustrate this behavior, and show convergence of the empirical geometry of finite networks with random Gaussian parameters to the infinite-width results.
\begin{figure*}[t]
\centering
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w5_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed405_e200.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w5_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed405_e5000.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w5_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed405_e10000.pdf}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w20_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed401_e200.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w20_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed401_e5000.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w20_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed401_e10000.pdf}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w250_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed401_e200.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w250_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed401_e5000.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/expansion_xor_w250_nlSigmoid_lr0.1_wd0.0_mom0.9_sindata_epochs10000_seed401_e10000.pdf}
\end{subfigure} \\
\caption{Evolution of the volume element over training in a network trained to classify points separated by a sinusoidal boundary $y=\frac{3}{5}\sin(7x - 1)$ (single hidden layer with 5 hidden units (top), 20 hidden units (middle), and 250 hidden units (bottom)). Red lines indicate the decision boundaries of the network. See Appendix \ref{app:xor} for experimental details. More hidden units offer a better approximation to the sinusoidal boundary.}
\label{fig:sinusoid}
\end{figure*}
\section{Changes in shallow network geometry during training}\label{sec:experiments}
We now consider how the geometry of the pullback metric changes during training. Changes in the volume element and curvature during gradient descent training are challenging to study analytically, because models for which the learning dynamics are solvable---deep linear networks \cite{saxe2013exact}---trivially yield flat, constant metrics. More generally, the dependence of the metric on the instantaneous configuration of parameters makes it difficult to gain intuition for its evolution over training, even for two-dimensional inputs.
\subsection{Wide Bayesian neural networks}
We can make slightly more analytical progress for Bayesian neural networks at large but finite width. This setting is convenient because there is a fixed parameter posterior; one does not need to solve kernel or metric dynamics through time \cite{bordelon2022selfconsistent}. In Appendix \ref{app:bayesian}, we use recent results on perturbative feature-learning corrections to the NNGP kernel \cite{zv2021asymptotics,roberts2022principles} to compute corrections to the posterior mean of the volume element. In general, it is not possible to evaluate these corrections in closed form \cite{zv2021asymptotics}. For networks with monomial activation functions, no bias terms, and linear readout constrained to interpolate a single training example $(\mathbf{x}_{a},\mathbf{y}_{a})$, we can show that the correction to $\sqrt{\det g}$ is maximal for $\mathbf{x} \parallel \mathbf{x}_{a}$, and minimal for $\mathbf{x} \perp \mathbf{x}_{a}$ \eqref{eqn:bayesian_monomial_detg}. The sign of the correction is positive or negative depending on whether the second moment of the prior predictive is greater than or less than the norm of the output, respectively. For example, if we train on a single point from the XOR task, $(1,1) \mapsto 0$, $\sqrt{\det g}$ will be contracted maximally along $x_1 = x_2$. This simple case shows how interpolating a single point shapes the network's global representational geometry.
\begin{figure*}[t]
\centering
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/effvols_mnist_w2000_nlSigmoid_lr0.001_wd0.0001_mom0.9_mnist_epochs200_seed404_7_6_e0.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/effvols_mnist_w2000_nlSigmoid_lr0.001_wd0.0001_mom0.9_mnist_epochs200_seed404_7_6_e60.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/effvols_mnist_w2000_nlSigmoid_lr0.001_wd0.0001_mom0.9_mnist_epochs200_seed404_7_6_e180.pdf}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[height=2.5in]{Figures/effvols_mnist_plane_w2000_nlSigmoid_lr0.001_wd0.0001_mom0.9_mnist_epochs200_seed404_7_6_1_ternary_e200.pdf}
\end{subfigure} \\
\caption{\emph{Top panel}: $\log_{10}(\sqrt{\det g})$ induced at interpolated images between 7 and 6 by a single-hidden-layer fully-connected network trained to classify MNIST digits. \emph{Bottom panel}: Digit class predictions and $\log_{10}(\sqrt{\det g})$ for the plane spanned by MNIST digits 7, 6, and 1 at the final training epoch (200). Sample images are visualized at the endpoints and midpoint of each path. Each path is colored by the class predicted at the interpolated region and endpoints. As training progresses, the volume elements bulge in the middle (near the decision boundary) and taper off towards the endpoints. See Appendix \ref{app:mnist} for experimental details and Figure \ref{fig:more_mnist} for images interpolated between other digits.}
\label{fig:mnist}
\end{figure*}
\subsection{Changes in representational geometry for networks trained on two-dimensional toy tasks}
As analytical study of these changes in geometry is intractable, we resort to numerical experiments. For details of our numerical methods, see Appendix \ref{app:numerics}.
To build intuition, we begin with networks trained on simple two-dimensional tasks, for which we can directly visualize the input space. We first consider a simple two-dimensional binary classification task with a sinusoidal boundary, inspired by that considered in the original work of \citet{amari1999improving}. We train networks with sigmoidal activation functions of varying widths to perform this task, and visualize the resulting geometry over the course of training in Figure \ref{fig:sinusoid}. At initialization, the peaks in the volume element lack a clear relation to the structure of the task, with approximate rotational symmetry at large widths, as we would expect from \S\ref{sec:nngp}. As the network's decision boundary is gradually molded to conform to the true boundary, the volume element develops peaks in the same vicinity. At all widths, the final volume elements are largest near the peaks of the sinusoidal decision boundary. At small widths, the shape of the sinusoidal curve is not well-resolved, but at large widths there is a clear peak in the close neighborhood of the decision boundary. This result is consistent with the proposal of \citet{amari1999improving}.
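To make the visualized quantity concrete, the following sketch computes $\sqrt{\det g}$, with $g = J^\top J$ the pullback of the Euclidean metric under the feature map, on a grid of two-dimensional inputs. This is illustrative only: the random weights, width, and sigmoid activation are placeholder assumptions, not the trained networks from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-hidden-layer feature map phi(x) = sigmoid(W x + b).
d_in, width = 2, 64
W = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(width, d_in))
b = rng.normal(0.0, 1.0, size=width)

def feature_map(x):
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def jacobian(x):
    # Analytic Jacobian of the sigmoid feature map: J = diag(phi * (1 - phi)) W.
    phi = feature_map(x)
    return (phi * (1.0 - phi))[:, None] * W

def volume_element(x):
    # sqrt(det g), with g = J^T J the pullback of the Euclidean metric.
    J = jacobian(x)
    g = J.T @ J
    return np.sqrt(np.linalg.det(g))

# Evaluate on a grid over the 2-D input space, as in the sinusoid experiments.
xs = np.linspace(-2, 2, 5)
vols = np.array([[volume_element(np.array([x1, x2])) for x1 in xs] for x2 in xs])
print(vols.shape)
```

For a trained network, one would substitute the learned weights and plot $\log_{10}$ of this grid, as in our figures.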
In Appendix \ref{app:numerics}, Figure \ref{fig:sinusoid_ricci}, we plot the Ricci curvature for these trained networks. Even for these small networks, the curvature computation is computationally expensive and numerically challenging. Over training, it evolves dynamically, with task-adapted structure visible at the end of training. However, the patterns here are harder to interpret than those in the volume element.
\subsection{Changes in representational geometry for shallow networks trained to classify MNIST digits}
We now provide evidence that a similar phenomenon is present in networks trained to classify MNIST images. We give details of these networks in Appendix \ref{app:mnist}; note that all reach above 95\% train and test accuracy within 200 epochs.
In Figure \ref{fig:mnist}, we plot the induced volume element at synthetic images generated by linearly interpolating between two input images (see Appendix \ref{app:numerics} for details). We emphasize that linear interpolation in pixel space of course does not respect the structure of the image data, and results in unrealistic images. However, this approach has the advantage of being straightforward, and also illustrates how small Euclidean perturbations are expanded by the feature map \cite{novak2018sensitivity}. At initialization, the volume element varies without clear structure along the interpolated path. However, as training progresses, areas near the center of the path, which roughly aligns with the decision boundary, are expanded, while those near the endpoints defined by true training examples remain relatively small. This is again consistent with the proposal of \citet{amari1999improving}. We provide additional visualizations of this behavior in Appendix \ref{app:mnist}.
To gain an understanding of the structure of the volume element beyond one-dimensional slices, in Figure \ref{fig:mnist} we also plot its value in the plane spanned by three randomly-selected example images, at points interpolated linearly within their convex hull. Here, we only show the end of training; in Appendix \ref{app:mnist} we show how the volume element in this plane changes over the course of training. The edges of the resulting ternary plot are one-dimensional slices like those shown in the top row of Figure \ref{fig:mnist}, and we observe consistent expansion of the volume element along these paths. The volume element becomes large near the centroid of the triangle, where multiple decision boundaries intersect.
Because of the computational complexity of estimating the curvature---the Riemann tensor has $d^2(d^2-1)/12$ independent components \cite{mtw2017gravitation,dodson1991tensor}---and its numerical sensitivity (Appendix \ref{app:xor}), we do not attempt to estimate it for this high-dimensional task.
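For concreteness, the growth of this component count can be tabulated with a quick illustrative computation (here $d = 784$ stands for MNIST-sized inputs):

```python
def riemann_components(d):
    # Number of algebraically independent components of the Riemann tensor
    # in d dimensions: d^2 (d^2 - 1) / 12.
    return d * d * (d * d - 1) // 12

# d = 2 -> 1, d = 3 -> 6, d = 4 -> 20.
print([riemann_components(d) for d in (2, 3, 4)])  # → [1, 6, 20]
# For d = 784 the count exceeds 10^10, making direct estimation infeasible.
print(riemann_components(784))
```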
\begin{figure*}[t]
\centering
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/eff_vol_e0.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/eff_vol_e50.pdf}
\end{subfigure}
\hfill
\begin{subfigure}
\centering
\includegraphics[width=0.32\textwidth]{Figures/eff_vol_e200.pdf}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[height=2.5in]{Figures/effvols_cifar10_plane_w34_nlGELU_lr0.01_wd0.0001_mom0.9_cifar10_epochs200_seed501_7_6_1_ternary_e200.pdf}
\end{subfigure}
\caption{\emph{Top panel}: $\log_{10}(\sqrt{\det g})$ induced at interpolated images between a horse and a frog by a ResNet-34 trained to classify CIFAR-10 images. \emph{Bottom panel}: Class predictions and $\log_{10}(\sqrt{\det g})$ for the plane spanned by images of a horse, a frog, and a car. The volume element is largest at the intersection of several binary decision boundaries, and smallest within each decision region. The one-dimensional slices along the edges of the ternary plot are consistent with the top panel. See Appendix \ref{app:resnet} for experimental details, and Figure \ref{fig:more_cifar} for linear interpolations and planes spanned by other classes, as well as how the plane evolves during training.}
\label{fig:resnet}
\end{figure*}
\section{Extensions to deep networks}\label{sec:deep}
Thus far, we have focused on the geometry of the feature maps of single-hidden-layer neural networks. However, these analyses can also be applied to deeper networks by regarding the representation at each hidden layer as defining a feature map, and studying how the geometry changes with depth \cite{hauser2017principles,benfenati2023singular}. As a simple version of this, in Figure \ref{fig:deep_sinusoid_full} we consider a network with three fully-connected hidden layers trained on the sinusoid task. The metrics induced by the feature maps of all three hidden layers show the same qualitative behavior as we observed in the shallow case in Figure \ref{fig:sinusoid}: areas near the decision boundary are magnified. As one progresses deeper into the network, the contrast between regions of low and high magnification factor increases.
As a more realistic example, we consider deep residual networks (ResNets) \cite{he2016residual} trained to classify the CIFAR-10 image dataset \cite{krizhevsky2009cifar}. To make the feature map differentiable, we replace the rectified linear unit (ReLU) activation functions used in standard ResNets with Gaussian error linear units (GELUs) \cite{hendrycks2016gelu}. With this modification, a ResNet-34---the largest model we can consider given computational constraints---achieves test accuracy (92\%) comparable to that obtained with ReLUs (Appendix \ref{app:resnet}). Importantly, the feature map defined by the input-to-final-hidden-layer mapping of a ResNet-34 gives a submersion of CIFAR-10, as the input images have $32 \times 32 \times 3 = 3072$ pixels, while the final hidden layer has 512 units. Empirically, we find that the Jacobian of this mapping is full-rank (Appendix \ref{app:resnet}); we therefore consider the volume element on $(\mathcal{M}/\sim, g)$ defined by the product of the non-zero eigenvalues of the degenerate pullback metric (\S\ref{sec:preliminaries}, Appendix \ref{app:resnet}).
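This pseudo-determinant can be computed from the Gram matrix $J J^\top$, since $J J^\top$ and $J^\top J$ share their non-zero spectrum. The sketch below illustrates the identity with toy dimensions standing in for the $3072 \to 512$ mapping; the random Jacobian is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature map from a high-dimensional input space (n_in) to a
# lower-dimensional feature space (n_feat): the pullback metric
# g = J^T J is degenerate, with rank at most n_feat.
n_in, n_feat = 48, 8  # hypothetical sizes; the paper's case is 3072 -> 512
J = rng.normal(size=(n_feat, n_in))  # Jacobian at some input point

# Product of the non-zero eigenvalues of J^T J equals det(J J^T).
gram = J @ J.T                       # n_feat x n_feat, full-rank a.s.
pseudo_det = np.linalg.det(gram)
log_vol = 0.5 * np.log(pseudo_det)   # log "volume element" on the quotient

# Cross-check against the non-zero singular values of J.
sv = np.linalg.svd(J, compute_uv=False)
log_vol_sv = np.sum(np.log(sv))
print(np.isclose(log_vol, log_vol_sv))
```

Working with the smaller Gram matrix (and in log space) is what makes this computation tractable at realistic feature dimensions.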
In Figure \ref{fig:resnet}, we visualize the resulting geometry in the same way we did for networks trained on MNIST, along 1-D interpolated slices and in a 2-D interpolated plane (see Appendix \ref{app:resnet} for details and additional figures). In the 1-D slices, we see a clear trend of large volume elements near decision boundaries, as we observed for shallow networks. However, in two dimensions the picture is less clear. The decision boundaries are more complicated than for MNIST, reflecting the more complex structure of the task. This also highlights the deficiency of our approach of linear interpolation in pixel space, which we further discuss and illustrate in Appendix \ref{app:resnet}. We observe some magnification of areas in the vicinity of decision boundaries, though here it is harder to interpret all forms of structure that are present. Thus, even in this more realistic setting, we observe shaping of geometry over training that appears consistent with the proposal of \citet{amari1999improving}.
\section{Discussion}\label{sec:discussion}
To conclude, we have shown that training on simple tasks shapes the Riemannian geometry induced by neural network representations by magnifying areas along decision boundaries, consistent with the proposal of Amari and Wu for geometrically-inspired kernel learning \cite{amari1999improving,wu2002conformal,williams2007geometrical}. Our results on the geometry induced by the NNGP kernel provide a preliminary picture of the geometric priors of neural networks, and our experimental results begin to show how representational geometry is shaped over the course of training. These results are relevant to the broad goal of leveraging non-Euclidean geometry in deep learning \cite{bronstein2021geometric,weber2020hyperbolic}. We now discuss the limitations of our work, as well as directions for future study.
Perhaps the most important limitation of our work is the fact that we focus either on toy tasks with two-dimensional input domains, or on low-dimensional slices through high-dimensional domains. This is a fundamental limitation of how we have attempted to visualize the geometry. We are also restricted by computational constraints (see in particular Appendix \ref{app:resnet}); to characterize the geometry of state-of-the-art network architectures, more efficient and numerically stable algorithms for computing these quantities must be developed.
An important question that we leave open for future work is whether expanding areas near decision boundaries generically improves generalization in deep neural networks, consistent with \citet{amari1999improving}'s original motivations. Indeed, it is easy to imagine a scenario in which the geometry is overfit, and the trained network becomes too sensitive to small changes in the input. This possibility is consistent with prior work on the sensitivity of deep networks \cite{novak2018sensitivity}, and with the related phenomenon of adversarial vulnerability \cite{szegedy2013intriguing,goodfellow2014explaining}. Investigating these links will be an important objective for future work.
Because of the smoothness conditions required by the definition of the pullback metric and the requirement that $(\mathcal{M},g)$ be a differentiable manifold \cite{hauser2017principles,benfenati2023singular}, the approach pursued in this work does not apply directly to networks with ReLU activation functions, which are not differentiable. Deep ReLU networks are continuous piecewise-linear maps, with many distinct activation regions \cite{hanin2019complexity,hanin2019few}. Within each region, the corresponding linear feature map will induce a flat metric on the input space, but the magnification factor will vary from region to region. It will be interesting to investigate the resulting geometry in future work.
One possible area of application of our results is to the general problem of how to analyze and compare neural network representations \cite{kornblith2019similarity,williams2021generalized}. Importantly, one could compute and plot the volume element induced by a feature map even when one does not have access to explicit class labels. This could allow one to study networks trained with self-supervised learning, or even differentiable approximations to biological neural networks \cite{wang2022tuning}. Exploring the rich geometry induced by these networks is an exciting avenue for future investigation.
\section*{Acknowledgements}
We thank Alexander Atanasov and Blake Bordelon for helpful comments on our manuscript. JAZV, CP and this research were supported by a Google Faculty Research Award and NSF DMS-2134157. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.
\newpage
\section{Introduction}
\label{section:introduction}
Due to the growing popularity of wireless deployments, especially the ones based on the IEEE 802.11 standard (i.e., Wi-Fi), it is very common to find independent overlapping Wireless Networks (WNs) sharing the same channel resources. \textcolor{black}{The decentralized nature of such deployments leads to a significant lack of organization and/or agreement on sharing policies. As a result, resources are typically used inefficiently. An illustrative example of this can be found in \cite{akella2007self}, where the authors show that the power level used by wireless devices is typically set, by default, to the maximum, regardless of the distance between communicating nodes and the channel occupancy. Consequently, increasing the capacity of such networks has become very challenging.}
\textcolor{black}{Wireless networks operate in three main domains: time, frequency and space. While the first two have been largely exploited, the spatial domain still shows plenty of room for improvement. According to \cite{alawieh2009improving}, Spatial Reuse (SR) can be addressed by means of Transmission Power Control (TPC), Carrier Sense Threshold (CST) adjustment, rate adaptation (related to power control), and directional transmissions. In addition, interference cancellation can play a key role in spectral efficiency optimization \cite{miridakis2013survey}. On the one hand, TPC and CST adjustment aim at increasing spectral efficiency omnidirectionally. On the other hand, beamforming is meant for directional transmissions. Both beamforming and interference cancellation can be categorized as multiple-antenna strategies: the former reduces interference levels, while the latter is useful for performing multiple simultaneous transmissions.}
\textcolor{black}{In this work, we focus on Dynamic Channel Allocation (DCA) and TPC to address the decentralized SR problem. Proper frequency planning reduces the interference between wireless devices, and tuning the transmit power adds an extra level of SR that can result in improved throughput and fairness. The application of TPC and DCA is particularly challenging by itself: the interactions among devices depend on many features (such as position, environment or transmit power) and are hard to derive.
Including beamforming and/or interference cancellation techniques \cite{dovelos2018breaking}, on the other hand, first requires a clear understanding of the performance of TPC and DCA alone, and is therefore left as future work.}
\textcolor{black}{Motivated by these challenges,} we focus attention on Reinforcement Learning (RL), which has recently emerged as a very popular method to solve many well-known problems in wireless communications. \textcolor{black}{RL helps tame the complexity of wireless environments by finding practical solutions: optimal (or near-optimal) configurations can be obtained without a full understanding of the problem in advance. One of the main goals of this paper is thus to show its feasibility for the decentralized SR problem.} Some RL-based applications can be found for packet routing \cite{littman1993distributed}, Access Point (AP) selection \cite{bojovic2011supervised, bojovic2012neural}, optimal rate sampling \cite{combes2014optimal}, or energy harvesting in heterogeneous networks \cite{miozzo2015distributed}. All these applications make use of online learning, where a learner (or agent) obtains data periodically and uses it to predict future good-performing actions. Online learning is particularly useful to cope with complex and dynamic environments. This background encourages us to approach a solution for the decentralized SR problem in WNs through online learning techniques.
From the family of online algorithms, we \textcolor{black}{are interested in analyzing the performance of} Multi-Armed Bandits (MABs) \cite{BCB12} \textcolor{black}{when applied to WNs. The MAB model is well-known} in the online learning literature for solving resource allocation problems. In MABs, a given agent seeks to learn a hidden reward distribution while maximizing the gains. This is known as the exploration-exploitation trade-off. Exploitation is meant to maximize the long-term reward given the current estimate, and exploration aims to improve the estimate. Unlike classical RL, MABs do not consider states\footnote{A state refers to a particular situation experienced by a given agent, which is defined by a set of conditions. By having an accurate knowledge of its current situation, an agent can define state-specific strategies that maximize its profits.} in general, which can be hard to define for the decentralized SR problem presented in this work. On the one hand, spatial interference cannot be treated as binary, thus leading to complex interactions among nodes. On the other hand, the adversarial setting unleashed by decentralized deployments increases the system complexity. Therefore, the obtained reward depends not only on the actions taken by a given node, but also on the adversaries' behavior.
\textcolor{black}{This article extends our previous results presented in \cite{wilhelmi2017implications}. Here we generalize those contributions by implementing several action-selection strategies to find the best combination of frequency channel and transmit power in WNs. These strategies are applied to the decentralized SR problem, where independent WNs learn selfishly, based on their own experienced performance.} On the one hand, we evaluate the impact of varying parameters intrinsic to the proposed algorithms on the resulting throughput and fairness. In addition, we analyze the effects of learning selfishly, and shed light on the future of decentralized approaches. Notably, we observe that even though players act selfishly, some of the algorithms learn to play actions that enhance the overall performance, sometimes at the cost of high temporal variability. Obtaining collaborative behaviors from selfish WNs is appealing for typically chaotic and dynamic deployments. \textcolor{black}{Finally, the adversarial setting in WNs is studied under two learning implementations, namely \textit{concurrent} and \textit{sequential}. Both procedures rule the operation followed by learners (based on the proposed action-selection strategies). In particular, WNs select an action at the same time in the concurrent approach, whereas an ordered action-selection procedure is followed in the sequential case. We study the performance of the aforementioned techniques in terms of convergence speed, average throughput and variability.} The main contributions of this work are summarized below:
\begin{itemize}
\item We devise the feasibility of applying MAB algorithms as defined in the online learning literature to solve the resource allocation problem in WNs.
\item We study the impact of different parameters intrinsic to the action-selection strategies considered (e.g., exploration coefficients, learning rates) on network performance. \textcolor{black}{In addition, we analyze the implications derived from the application of different learning procedures, referred to as \textit{concurrent} and \textit{sequential}, which rule the moment at which WNs act.}
\item \textcolor{black}{We show the impact of learning concurrently and sequentially. In particular, the former leads to a high throughput variability experienced by WNs, which is significantly reduced by the sequential approach.} \textcolor{black}{Accordingly, we envision the utilization of sequential approaches to achieve decentralized learning in adversarial wireless networks.}
\item Finally, we show that there are algorithms that learn to play collaborative actions even though the WNs act selfishly, which is appealing to practical application in chaotic and dynamic environments. In addition, we shed light on the root causes of this phenomena.
\end{itemize}
The remainder of this document is structured as follows: Section \ref{section:related_work} outlines relevant related work. Section \ref{section:mabs} introduces the proposed learning algorithms and their practical implementation for the resource allocation problem in WNs. Then, Section \ref{section:system_model} presents the simulation scenarios and the considerations taken into account. The simulation results are later presented in Section \ref{section:performance_evaluation}. Finally, Section \ref{section:conclusions} provides the final remarks.
\section{Related Work}
\label{section:related_work}
\textcolor{black}{Decentralized SR has been studied considerably by the wireless research community. The authors in \cite{argyriou2010collision} propose using relay nodes to re-transmit packets lost as a result of a collision. The relay node is able to decode different signals from the environment and to detect whether a collision took place. Then, with the aim of improving re-transmissions, it benefits from the current transmission to forward the decoded packets to their original destinations. Although this method shows performance improvements in dense scenarios where collisions are very likely to occur, its effectiveness is subject to the network topology. Regarding directional transmissions, the authors in \cite{babich2015design} propose two novel access schemes to allow multiple simultaneous transmissions. In particular, nodes' activity information is sensed, which, together with antenna directionality information, allows building a new set of channel access rules.}
\textcolor{black}{Although approaches based on directional transmissions and interference cancellation are very powerful and can significantly increase SR, they strongly rely on having multiple antennas. Such a requirement is not mandatory for SR operation based on TPC and CST adjustment. In this work, we focus on TPC because tuning the transmit power has a direct impact on the generated interference. This allows us to purely study the interactions that occur among nodes implementing decentralized SR. Moreover, we consider DCA in combination with TPC, so that further potential gains can be achieved.}
\textcolor{black}{DCA has been extensively studied from the centralized perspective, especially through techniques based on graph coloring \cite{riihijarvi2005frequency, mishra2005weighted}. Although these kinds of approaches can effectively reduce the interference between WNs, a certain degree of communication is required. Regarding decentralized methods, the authors in \cite{akl2007dynamic} propose a very simple approach in which each AP maintains an interference map of its neighbors, so that channel assignment is done through interference minimization. Unfortunately, the interactions among APs in the decentralized setting are not studied. Separately, \cite{chen2007improved} proposes two decentralized approaches that rely on the interference measured at both APs and stations (STAs) to calculate the best frequency channels for dynamic channel allocation. To do so, a WN, in addition to the interference sensed by its associated devices, considers other metrics such as the amount of traffic, so that some coordination is required at the neighbor level (e.g., periodic reporting). The authors in \cite{yue2011cacao} show that the decentralized DCA problem is NP-hard. In addition, they propose a distributed algorithm whereby APs select the best channel according to the observed traffic information (i.e., channel sensing is considered).}
\textcolor{black}{In this work we aim to extend the approach in \cite{yue2011cacao} in two ways. First, we aim to provide a flexible solution based on the performance achieved by a given WN. Second, we aim to tackle the spatial domain through TPC, which has been shown to provide large improvements in wireless networks \cite{elbatt2000power}. However, dealing with the spatial dimension leads to unpredictable interactions in terms of interference. Such complexity is illustrated in \cite{tang2014joint}, which performs power control and rate adaptation in subgroups of Wireless Local Area Networks (WLANs). The creation of clusters allows defining independent power levels between devices in the same group, which are useful to avoid asymmetric links. However, to represent all the possible combinations, graphs can become very large, especially in high-density deployments. When it comes to decentralized mechanisms, the work in \cite{gandarillas2014dynamic} applies TPC based on real-time channel measurements. The proposed mechanism (so-called Dynamic Transmission Power Control) is based on a set of triggered thresholds that increase/decrease the transmit power according to the state of the system. The main problem is that the thresholds are set empirically (based on simulations), which limits the potential of the mechanism across multiple scenarios.}
\textcolor{black}{As shown by previous research, optimal decentralized SR in WNs through TPC and DCA is very hard to derive analytically, mostly because of the adversarial setting and the lack of information at the nodes. The existing decentralized solutions barely provide flexibility with respect to the scenario, so that potential use cases are disregarded. For that reason, we focus on online learning, and more precisely Multi-Armed Bandits (MABs). The MAB framework reduces the complexity of the SR problem, since detailed information about the scenario is not required. Instead, learners gain knowledge on all the adversaries as a whole, thus facing a single environment.} To the best of our knowledge, there is very little related work on applying MAB techniques to the problem of resource allocation in WNs. In \cite{coucheney2015multi}, the authors propose modeling a resource allocation problem in Long Term Evolution (LTE) networks through MABs. In particular, a set of Base Stations (BSs) learn the best configuration of Resource Blocks (RBs) in a decentralized way. For that purpose, a variation of EXP3 (so-called Q-EXP3) is proposed, which is shown to reduce the strategy set. Although a regret bound is provided, it holds only if an optimal resource allocation exists, i.e., every BS obtains the necessary resources. In addition, a large number of iterations is required to find the optimal solution in a relatively small scenario, thus revealing the difficulties posed by decentralized settings.
More related to the problem proposed here, the authors in \cite{maghsudi2015joint} show a channel selection and power control approach in infrastructureless networks, which is modeled through bandits. In particular, two different strategies are provided to improve the performance of two Device to Device (D2D) users (each one composed of a transmitter and a receiver), which must learn the best channel and transmit power to be selected. Similarly to our problem, users do not have any knowledge of the channel or the others' configuration, so they rely on the experienced performance in order to find the best configuration. An extension of \cite{maghsudi2015joint} is provided by the same authors in \cite{maghsudi2015channel}, which includes a calibrated predictor (referred to in that work as a \textit{forecaster}) to infer the behavior of the other devices in order to counteract their actions. In each agent, the information of the forecaster is used to choose the highest-rewarding action with a certain probability, while the rest of the actions are randomly selected. Hence, assuming that all the networks use a given strategy $\mathcal{X}$, fast convergence is ensured. Results show that channel resources are optimally distributed in a very short time frame through a fully decentralized algorithm that does not require any kind of coordination. Both aforementioned works rely on the existence of a unique Nash Equilibrium, which favors convergence. In contrast, in this article we aim to extend the use of bandits to denser deployments and, more importantly, to scenarios with limited available resources in which there is not a unique Nash Equilibrium (NE) that allows fast convergence.
Thus, we aim to capture the effects of applying selfish strategies in a decentralized way (i.e., agent $i$ follows a strategy $\mathcal{X}_i$ that does not consider the strategies of the others), and we also provide insight into the importance of past information for learning in dense WNs, which has not been studied before.
\section{Multi-Armed Bandits for Improving Spatial Reuse in WNs}
\label{section:mabs}
In this work, we address the decentralized SR problem through online learning because of the uncertainty generated in an adversarial setting. The practical application of MABs in WNs is detailed next:
\subsection{The Multi-Armed Bandits Framework}
In the online learning literature, several MAB settings have been considered such as stochastic bandits \cite{thompson1933likelihood,lai1985asymptotically,auer2002finite}, adversarial bandits \cite{auer1995gambling,auer2002nonstochastic}, restless bandits \cite{whittle1988restless}, contextual bandits \cite{LCLS10} and linear bandits \cite{abe2003reinforcement,APS11}, and numerous exploration-exploitation strategies have been proposed such as \textit{$\varepsilon$-greedy} \cite{sutton1998reinforcement,auer2002finite}, \textit{upper confidence bound} (UCB) \cite{lai1985asymptotically,Agr95,BuKa96,auer2002finite}, \textit{exponential weight algorithm for exploration and exploitation} (EXP3) \cite{auer1995gambling,auer2002finite} and \textit{Thompson sampling} \cite{thompson1933likelihood}. The classical multi-armed bandit problem models a sequential interaction scheme between a learner and an environment. The learner sequentially selects one out of $K$ actions (often called \emph{arms} in this context) and earns some rewards determined by the chosen action and also influenced by the environment. Formally, the problem
is defined as a repeated game where the following steps are repeated in each round $t=1,2,\dots,T$:
\begin{enumerate}
\item The environment fixes an assignment of rewards $r_{a,t}$ for each action $a\in[K] \stackrel{\text{def}}{=} \left\{1,2,\dots,K\right\}$,
\item the learner chooses action $a_t\in[K]$,
\item the learner obtains and observes reward $r_{a_t,t}$.
\end{enumerate}
The bandit literature largely focuses on the perspective of the learner with the objective of coming up with learning algorithms that attempt to maximize the sum of the rewards gathered during the whole procedure (either with finite or infinite horizon). As noted above, this problem has been studied under various assumptions made on the environment and the structure of the arms. The most important basic cases are the \emph{stochastic} bandit problem where, for each particular arm $a$, the rewards are i.i.d.~realizations of random variables from a fixed (but unknown) distribution $\nu_a$, and the \emph{non-stochastic} (or \emph{adversarial}) bandit problem where the rewards are chosen arbitrarily by the environment. In both cases, the main challenge for the learner is the \emph{partial observability} of the rewards: the learner only gets to observe the reward associated with the chosen action $a_t$, but never observes the rewards realized for the other actions.
Let ${\rm r}_{a^*,t}$ and ${\rm r}_{a_t,t}$ be the rewards obtained at time $t$ from choosing the optimal action $a^*$ and the learner's action $a_t$, respectively. Then, the performance of learning algorithms is typically measured by the \emph{total expected regret}, defined as \[R_T = \sum_{t=1}^{T} \mathbb{E}\left[{\rm r}_{a^*,t} - {\rm r}_{a_t,t}\right].\]
An algorithm is said to \emph{learn} if it guarantees that the regret grows sublinearly in $T$, that is, if $R_T = o(T)$ is guaranteed as $T$ grows large, or, equivalently, that the average regret $R_T/T$ converges to zero. Intuitively, sublinear regret means that the learner eventually identifies the action with the highest long-term payoff. Note, as well, that the optimal action $a^*$ is the same across all the rounds. Most bandit algorithms come with some sort of a guaranteed upper bound on $R_T$ which allows for a principled comparison between various methods.
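As an illustration of this performance measure, the regret computation can be sketched in a few lines of Python (an illustrative sketch over given reward sequences; the function names are ours, not part of the paper):

```python
def total_regret(optimal_rewards, obtained_rewards):
    """Total regret R_T: cumulative gap between the reward of the
    optimal action and the reward actually obtained in each round."""
    return sum(r_opt - r for r_opt, r in zip(optimal_rewards, obtained_rewards))

def average_regret(optimal_rewards, obtained_rewards):
    """R_T / T: an algorithm is said to learn when this vanishes as T grows."""
    T = len(obtained_rewards)
    return total_regret(optimal_rewards, obtained_rewards) / T
```

A sublinear $R_T$ corresponds exactly to `average_regret` converging to zero as the reward sequences grow longer.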
\subsection{Multi-Armed Bandits Formulation for Decentralized Spatial Reuse}
\textcolor{black}{We model the decentralized SR problem through adversarial bandits. In such a model, the reward experienced by a given agent (WN) is influenced by the whole action profile, i.e., the configurations used by the other competing WNs. From a decentralized perspective, the adversarial setting poses several challenges with respect to the existence of a NE. Ideally, the problem is solved if all the competitors implement a pure strategy\footnote{A pure strategy NE consists of a set of strategies and payoffs such that no player can obtain further benefit by unilaterally deviating from its strategy.} that allows maximizing a certain performance metric. However, finding such a strategy may not be possible in unplanned deployments, due to the competition among nodes and the scarcity of the available resources. Understanding the implications derived from such an adversarial setting in the absence of a NE is one of the main goals of this paper, which, to the best of our knowledge, has barely been considered in the previous literature.}
\textcolor{black}{In particular,} we model this adversarial problem as follows. Let arm $a \in \mathcal{A}$ (we denote the size of $\mathcal{A}$ by $K$) be a configuration \textcolor{black}{in terms of channel and transmit power (e.g., $a_1$ = \{Channel: 1, TPC: -15 dBm\})}. Let $\Gamma_{i,t}$ be the throughput experienced by $\text{WN}_i$ at time $t$, and $\Gamma_{i}^*$ \textcolor{black}{the optimal throughput.}\footnote{\textcolor{black}{The optimal throughput is achieved in case of isolation (i.e., when no interference is experienced in the selected channel).}} We then define the reward $r_{i,t}$ experienced by $\text{WN}_i$ at time $t$ as:
\begin{equation}
r_{i,t} = {\frac{\Gamma_{i,t}}{{\Gamma_{i}^*}}} \leq 1.
\label{eq:reward_generation}
\nonumber
\end{equation}
In an attempt to maximize the reward, we have considered the \emph{$\varepsilon$-greedy}, \emph{EXP3}, \emph{UCB} and \emph{Thompson sampling} action-selection strategies, which are described next in this section. While $\varepsilon$-greedy and EXP3 explicitly include the concepts of \emph{exploration coefficient} and \emph{learning rate}, respectively, UCB and Thompson sampling are parameter-free policies that extend the concept of exploration (actions are explored according to their estimated value and not by commitment). \textcolor{black}{The aforementioned policies are widespread and of remarkable importance in the MAB literature.}
\subsubsection{$\varepsilon$-greedy}
\label{section:bandits_egreedy}
The \emph{$\varepsilon$-greedy} policy \cite{sutton1998reinforcement,auer2002finite} is arguably the simplest learning algorithm attempting to deal with exploration-exploitation trade-offs. In each round $t$, the $\varepsilon$-greedy algorithm explicitly decides whether to explore or exploit: with probability $\varepsilon$, the algorithm picks an arm uniformly at random (exploration), and otherwise it plays the arm with the highest empirical return $\hat{r}_{k,t}$ (exploitation).
In case $\varepsilon$ is fixed for the entire process, the expected regret is obviously going to grow linearly as $\Omega\left(\varepsilon T\right)$ in general. Therefore, in order to obtain a sublinear regret guarantee (and thus an asymptotically optimal growth rate for the total rewards), it is critical to properly adjust the exploration coefficient. Thus, in our \textcolor{black}{$\varepsilon$-greedy} implementation, we use a time-dependent exploration rate of $\varepsilon_t = \varepsilon_0 / \sqrt{t}$, as suggested in the literature \cite{auer2002finite}. The adaptation of this policy to our setting is shown as Algorithm \ref{alg:egreedy}.
\begin{algorithm}[H]
\SetAlgoLined
\KwIn{SNR: information about the Signal-to-Noise Ratio received at the STA, $\mathcal{A}$: set of possible actions in \{$a_1, ..., a_K$\}}
\textbf{Initialize:} $t=0$, $\varepsilon_t = \varepsilon_0$, $r_{k} = 0, \forall a_k \in \mathcal{A}$\\
\While{active}{
Select $a_k$ $\begin{cases}
\underset{k=1,...,K}{\text{argmax }} r_{k,t},& \text{with prob. } 1 - \varepsilon_t\\
k \sim \mathcal{U}(1, K), & \text{otherwise}
\end{cases}$\\
Observe the throughput experienced $\Gamma_t$\\
Compute the reward $r_{k,t} = \frac{\Gamma_t}{\Gamma^*}$, where $\Gamma^* = B \log_{2}(1+\text{SNR})$ \\
$\varepsilon_t \gets \varepsilon_0 / \sqrt{t}$ \\
$ t \gets t + 1$
}
\caption{Implementation of Multi-Armed Bandits ($\varepsilon$-greedy) in a WN. $\mathcal{U}(1, K)$ is a uniform distribution that randomly chooses from 1 to $K$.}
\label{alg:egreedy}
\end{algorithm}
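The arm-selection and estimate-update steps of the algorithm above can be sketched in Python as follows (a minimal, hypothetical implementation; it tracks empirical mean rewards explicitly, which the pseudocode leaves implicit):

```python
import random

def egreedy_select(t, eps0, mean_rewards, rng=random):
    """One e-greedy decision with the decaying rate eps_t = eps0 / sqrt(t):
    explore uniformly with probability eps_t, otherwise exploit the arm with
    the highest empirical mean reward."""
    eps_t = eps0 / (t ** 0.5)
    if rng.random() < eps_t:
        return rng.randrange(len(mean_rewards))  # exploration
    return max(range(len(mean_rewards)), key=mean_rewards.__getitem__)  # exploitation

def update_mean(mean_rewards, counts, k, reward):
    """Incremental running-mean update of the reward estimate for arm k."""
    counts[k] += 1
    mean_rewards[k] += (reward - mean_rewards[k]) / counts[k]
```

In a WN agent, `reward` would be the normalized throughput $\Gamma_t / \Gamma^*$ observed during the monitoring phase.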
\subsubsection{EXP3}
\label{section:bandits_exp3}
The EXP3 algorithm \cite{auer1995gambling,auer2002nonstochastic} is an adaptation of the weighted majority algorithm of \cite{LW94,FS97} to the non-stochastic bandit problem. EXP3 maintains a non-negative weight for each arm (initialized to 1 for all arms) and picks actions randomly with probability proportional to their respective weights. The aim of EXP3 is to assign higher weights to the best actions as the learning procedure progresses.
More formally, letting $w_{k,t}$ be the weight of arm $k$ at time $t \in \{1,2 ...\}$, EXP3 computes the probability $p_{k,t}$ of choosing arm $k$ in round $t$ as
\begin{equation}
\label{eq:exp3_prob}
p_{k,t} = (1-\gamma) \frac{w_{k,t}}{\sum_{i=1}^{\text{K}}w_{i,t}} + \frac{\gamma}{K},
\nonumber
\end{equation}
where $\gamma\in[0,1]$ is a parameter controlling the rate of exploration.
Having selected arm $a_t$, the learner observes the generated payoff $r_{a_t,t}$ and computes the importance-weighted reward estimates for all $k\in[K]$ as
\begin{equation}
\label{eq:exp3_estimated_reward}
\widehat{r}_{k,t} = \frac{\mathbb{I}_{\left\{a_t = k\right\}}r_{k,t}}{p_{k,t}},
\nonumber
\end{equation}
where $\mathbb{I}_{\left\{A \right\}}$ denotes the indicator function of the event $A$, taking value $1$ if $A$ is true and $0$ otherwise.
Finally, the weight of arm $k$ is updated as a function of the estimated reward:
\begin{equation}
\label{eq:exp3_weights}
w_{k,t+1}=w_{k,t} e^{\frac{\eta \cdot \widehat{r}_{k,t}}{K}},
\nonumber
\end{equation}
where $\eta>0$ is a parameter of the algorithm often called the \emph{learning rate}. Intuitively, $\eta$ regulates the rate at which the algorithm incorporates new observations. \textcolor{black}{Large values of $\eta$ correspond to more confident updates, while small values lead to more conservative behavior}. As we did for the exploration coefficient in \emph{$\varepsilon$-greedy}, we use a time-dependent learning rate of $\eta_t = \eta_0 / \sqrt{t}$ \cite{auer2002finite}. Our implementation of EXP3 is detailed in Algorithm \ref{alg:exp3}.
\begin{algorithm}[H]
\SetAlgoLined
\KwIn{SNR: information about the Signal-to-Noise Ratio received at the STA, $\mathcal{A}$: set of possible actions in \{$a_1, ..., a_K$\}}
\textbf{Initialize:} $t=0$, $\eta_t = \eta_0$, $w_{k,t} = 1, \forall a_k \in \mathcal{A}$\\
\While{active}{
$p_{k,t} \leftarrow (1-\gamma) \frac{w_{k,t}}{\sum^K_{i=1} w_{i,t}} + \frac{\gamma}{K}$ \\
Draw $a_k \sim p_{k,t} = (p_{1,t}, p_{2,t}, ... , p_{K,t})$\\
Observe the throughput experienced $\Gamma_t$\\
Compute the reward $r_{k,t} = \frac{\Gamma_t}{\Gamma^*}$, where $\Gamma^* = B \log_{2}(1+\text{SNR})$ \\
$\widehat{r}_{k,t} \leftarrow \frac{r_{k,t}}{p_{k,t}}$ \\
$w_{k,t} \leftarrow w_{k,t-1}^{\frac{\eta_{t}}{\eta_{t-1}}} \cdot e^{\eta_{t} \cdot \widehat{r}_{k,t}}$\\
$w_{k',t} \leftarrow w_{k',t-1}^{\eta_t / \eta_{t-1}}, \forall k' \neq k$\\
$\eta_{t} \leftarrow \frac{\eta_0}{\sqrt{t}}$\\
$t \leftarrow t + 1$\\
}
\caption{Implementation of Multi-Armed Bandits (EXP3) in a WN}
\label{alg:exp3}
\end{algorithm}
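A single EXP3 round can be sketched in Python as follows (an illustrative sketch with a fixed learning rate $\eta$; the weight rescaling needed for the time-varying $\eta_t$ of Algorithm \ref{alg:exp3} is omitted for brevity, and the function names are ours):

```python
import math

def exp3_probs(weights, gamma):
    """Arm-selection distribution p_{k,t}: normalized weights mixed with
    a uniform exploration term controlled by gamma."""
    total = sum(weights)
    K = len(weights)
    return [(1 - gamma) * w / total + gamma / K for w in weights]

def exp3_update(weights, probs, chosen, reward, eta):
    """Importance-weighted reward estimate and multiplicative weight update.
    The estimate is zero for every arm other than the chosen one, so only
    the chosen arm's weight changes."""
    r_hat = reward / probs[chosen]
    weights[chosen] *= math.exp(eta * r_hat)
```

Note that with $\gamma = 0$ (the setting used later in the evaluation) the distribution reduces to the normalized weights alone.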
\subsubsection{UCB}
\label{section:bandits_ucb}
The \emph{upper confidence bound} (UCB) action-selection strategy \cite{Agr95,BuKa96,auer2002finite} is based on the principle of \emph{optimism in the face of uncertainty}: in each round, UCB selects the arm with the highest statistically feasible mean reward given the past observations. Statistical feasibility here is represented by an upper confidence bound on the mean rewards, which shrinks around the empirical rewards as the number of observations increases. Intuitively, UCB trades off exploration and exploitation very effectively: each time a suboptimal arm is chosen, the corresponding confidence bound shrinks significantly, thus quickly decreasing the probability of drawing this arm in the future. The width of the confidence intervals is chosen carefully so that the true best arm never gets discarded accidentally by the algorithm, yet suboptimal arms are drawn as few times as possible. To obtain the first estimates, each arm is played once at initialization.
Formally, let $n_k$ be the number of times that arm $k$ has been played, and $\Gamma_{k,t}$ the throughput obtained by playing arm $k$ at time $t$. The average reward $\overline{r}_{k,t}$ \textcolor{black}{of} arm $k$ at time $t$ is therefore given by:
\begin{equation}
\label{eq:ucb}
\overline{r}_{k,t} = \frac{1}{n_k} \sum_{s=1}^{n_k} r_{k,s}
\nonumber
\end{equation}
Based on these average rewards, UCB selects the action that maximizes $\overline{r}_{k,t} + \sqrt{\frac{2 \ln(t)}{n_k}}$. By doing so, UCB implicitly balances exploration and exploitation, as it focuses efforts on the arms that are $i)$ the most promising (with large estimated rewards) or $ii)$ not explored enough (with small $n_k$). Our implementation of UCB is detailed in Algorithm \ref{alg:ucb}.
\begin{algorithm}[]
\SetAlgoLined
\KwIn{SNR: information about the Signal-to-Noise Ratio received at the STA, $\mathcal{A}$: set of possible actions in \{$a_1, ..., a_K$\}}
\textbf{Initialize:} $t=0$, play each arm $a_k \in \mathcal{A}$ once\\
\While{active}{
Draw $a_k = \underset{k=1,...,K}{\text{argmax }} \overline{r}_{k} + \sqrt{\frac{2 \ln(t)}{n_{k}}} $ \\
Observe the throughput experienced $\Gamma_t$\\
Compute the reward $r_{k,t} = \frac{\Gamma_t}{\Gamma^*}$, where $\Gamma^* = B \log_{2}(1+\text{SNR})$ \\
$n_k \leftarrow n_k + 1$\\
$ \overline{r}_{k} \leftarrow \frac{1}{n_{k}} \sum_{s=1}^{n_{k}} r_{k,s}$\\
$t \leftarrow t + 1$
}
\caption{Implementation of Multi-Armed Bandits (UCB) in a WN}
\label{alg:ucb}
\end{algorithm}
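The UCB selection rule can be sketched in Python as follows (a minimal, hypothetical helper; it assumes the initialization round has already been performed, so every arm has been played at least once):

```python
import math

def ucb_select(mean_rewards, counts, t):
    """Pick the arm maximizing the optimistic index
    r_bar_k + sqrt(2 * ln(t) / n_k), as in Algorithm 3.
    Requires counts[k] >= 1 for every arm (each arm played once)."""
    scores = [r + math.sqrt(2.0 * math.log(t) / n)
              for r, n in zip(mean_rewards, counts)]
    return scores.index(max(scores))
```

The confidence term dominates for rarely played arms, which is how the rule favors under-explored actions even when their current estimates are mediocre.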
\subsubsection{Thompson sampling}
\label{section:bandits_thompsons}
Thompson sampling \cite{thompson1933likelihood} is a well-studied action-selection technique that has long been known for its excellent empirical performance \cite{CL11} and was recently proven to achieve strong performance guarantees, often better than those warranted by UCB \cite{AG12,KKM12,KKM13}. Thompson sampling is a Bayesian algorithm: it constructs a probabilistic model of the rewards and assumes a prior distribution of the parameters of said model. Given the data collected during the learning procedure, this policy keeps track of the posterior distribution of the rewards, and pulls arms randomly in a way that the drawing probability of each arm matches the probability of the particular arm being optimal. In practice, this is implemented by sampling the parameter corresponding to each arm from the posterior distribution, and pulling the arm yielding the maximal expected reward under the sampled parameter value.
For the sake of practicality, we \textcolor{black}{assume that rewards follow a Gaussian distribution with a standard Gaussian prior,} as suggested in \cite{agrawal2013further}. By standard calculations, it can be verified that the posterior distribution of the rewards under this model is Gaussian with mean \textcolor{black}{and variance}
\begin{equation}
\textcolor{black}{\hat{r}_k(t) = \frac{\sum_{s=1}^{t-1} \mathbb{I}_{\left\{a_s = k\right\}} r_{k,s}}{n_k(t) + 1} \qquad \text{and} \qquad \sigma_k^2(t) = \frac{1}{n_k(t) + 1}},
\nonumber
\end{equation}
where $n_k(t)$ is the number of times that arm $k$ was drawn until the beginning of round $t$. Thus, implementing Thompson sampling in this model amounts to sampling a parameter $\theta_k$ from the Gaussian distribution $\mathcal{N}\left(\hat{r}_k(t),\sigma_k^2(t)\right)$ and choosing the action with the maximal parameter. Our implementation of Thompson sampling for the WN problem is detailed in Algorithm \ref{alg:thompsons}.
\begin{algorithm}[]
\SetAlgoLined
\KwIn{SNR: information about the Signal-to-Noise Ratio received at the STA, $\mathcal{A}$: set of possible actions in \{$a_1, ..., a_K$\}}
\textbf{Initialize:} $t=0$, for each arm $a_k \in \mathcal{A}$, set $\hat{r}_{k} = 0$ and $n_k = 0$ \\
\While{active}{
For each arm $a_k \in \mathcal{A}$, sample $\theta_k(t)$ from normal distribution $\mathcal{N}(\hat{r}_{k}, \frac{1}{n_k + 1})$ \\
Play arm $a_{k} = \underset{k=1,...,K}{\text{argmax }} \theta_k(t) $ \\
Observe the throughput experienced $\Gamma_t$\\
Compute the reward $r_{k,t} = \frac{\Gamma_t}{\Gamma^*}$, where $\Gamma^* = B \log_{2}(1+\text{SNR})$ \\
$ \hat{r}_{k,t} \leftarrow \frac{\hat{r}_{k,t} n_{k,t} + r_{k,t}}{n_{k,t} + 2}$\\
$n_{k,t} \leftarrow n_{k,t} + 1$\\
$t \leftarrow t + 1$
}
\caption{Implementation of Multi-Armed Bandits (Thompson sampling) in a WN}
\label{alg:thompsons}
\end{algorithm}
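The Gaussian Thompson sampling steps above can be sketched in Python as follows (a minimal, hypothetical implementation; the function names are ours):

```python
import random

def ts_select(mean_rewards, counts, rng=random):
    """Sample theta_k ~ N(r_hat_k, 1/(n_k + 1)) for every arm and play
    the arm with the largest sample (gauss takes the std. deviation)."""
    thetas = [rng.gauss(r, (1.0 / (n + 1)) ** 0.5)
              for r, n in zip(mean_rewards, counts)]
    return thetas.index(max(thetas))

def ts_update(mean_rewards, counts, k, reward):
    """Posterior-mean update of Algorithm 4:
    r_hat <- (r_hat * n + r) / (n + 2), accounting for the unit prior."""
    n = counts[k]
    mean_rewards[k] = (mean_rewards[k] * n + reward) / (n + 2)
    counts[k] = n + 1
```

As $n_k$ grows, the sampling variance $1/(n_k+1)$ collapses, which explains the strong preference for a single action observed later in the evaluation.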
\section{System model}
\label{section:system_model}
For the remainder of this work, we study the interactions that occur among several WNs placed in a 3-D scenario when applying MABs in a decentralized manner (with parameters described later in Section \ref{section:simulation_parameters}). For simplicity, we consider WNs composed of an AP transmitting to a single Station (STA) in a downlink manner. \textcolor{black}{Note that in typical uncoordinated wireless deployments (e.g., residential buildings), STAs are usually close to the AP to which they are associated. Thus, having several STAs associated to the same AP does not significantly impact the inter-WN interference \textcolor{black}{studied} in this work.}
\subsection{Channel modeling}
\label{section:channel_modelling}
Path-loss and shadowing effects are modeled using the log-distance model for indoor communications. The path-loss between WN $i$ and WN $j$ is given by:
\begin{equation}
\text{PL}_{i,j} = \text{P}_{{\rm tx},i} - \text{P}_{{\rm rx},j} = \text{PL}_0 + 10 \alpha \log_{10}(d_{i,j}) + \text{G}_{{\rm s}} + \frac{d_{i,j}}{d_{{\rm obs}}} \text{G}_{{\rm o}},
\nonumber
\end{equation}
where $\text{P}_{{\rm tx},i}$ is the transmitted power in dBm by the AP in $\text{WN}_i$, $\alpha$ is the path-loss exponent, $\text{P}_{{\rm rx},j}$ is the power in dBm received at the STA in $\text{WN}_j$, $\text{PL}_0$ is the path-loss at one meter in dB, $d_{i,j}$ is the distance between the transmitter and the receiver in meters, $\text{G}_{{\rm s}}$ is the log-normal shadowing loss in dB, and $\text{G}_{{\rm o}}$ is the obstacles loss in dB. Note that we include the factor $d_{{\rm obs}}$, which is the average distance between two obstacles in meters.
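For illustration, the deterministic part of this model can be sketched in Python as follows, using the mean shadowing and obstacle losses from Table \ref{tbl:simulation_parameters} instead of random draws; the path-loss exponent $\alpha$ is an assumed placeholder, since its value is not given in this excerpt:

```python
import math

def path_loss_db(d, pl0=5.0, alpha=4.0, g_s=9.5, g_o=30.0, d_obs=5.0):
    """Log-distance path loss in dB with shadowing and obstacle losses.
    d: transmitter-receiver distance in meters; pl0: loss at 1 m;
    alpha: path-loss exponent (assumed); g_s: shadowing loss;
    g_o: per-obstacle loss; d_obs: average distance between obstacles."""
    return pl0 + 10.0 * alpha * math.log10(d) + g_s + (d / d_obs) * g_o
```

In a full simulation, `g_s` and `g_o` would be drawn from the normal and uniform distributions listed in Table \ref{tbl:simulation_parameters}.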
\subsection{Throughput calculation}
\label{section:throughput_calculation}
\textcolor{black}{The throughput experienced by WN $i$ at time $t$ is given by $\Gamma_{i,t} = B \log_{2}(1 + \text{SINR}_{i, t})$,
where $B$ is the channel width and SINR is the experienced Signal to Interference plus Noise Ratio. The latter is computed as $\text{SINR}_{i,t} = \frac{\text{P}_{i,t}}{\text{I}_{i,t}+\text{N}}$,
where $\text{P}_{i,t}$ and $\text{I}_{i,t}$ are the received power and the sum of the interference at WN $i$ at time $t$, respectively, and N is the noise floor power.} Adjacent channel interference is also considered in $\text{I}_{i,t}$, so that the transmitted power leaked to adjacent channels is $20$ dB lower for each extra channel of separation. \textcolor{black}{Similarly, the optimal throughput is computed as $\Gamma_{i}^* = B \log_{2}(1 + \text{SNR}_{i})$, which corresponds to the operation of a given WN in isolation.}
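The throughput computation above can be sketched in Python as follows (an illustrative helper with our own naming; powers are converted from dBm to mW before forming the SINR):

```python
import math

def throughput_mbps(p_rx_dbm, interference_mw, b_mhz=20.0, noise_dbm=-100.0):
    """Shannon throughput B * log2(1 + SINR) in Mbps.
    p_rx_dbm: received power in dBm; interference_mw: summed interference
    in mW; b_mhz: channel width in MHz; noise_dbm: noise floor in dBm."""
    p_mw = 10 ** (p_rx_dbm / 10.0)       # dBm -> mW
    n_mw = 10 ** (noise_dbm / 10.0)      # dBm -> mW
    sinr = p_mw / (interference_mw + n_mw)
    return b_mhz * math.log2(1.0 + sinr)
```

With zero interference the expression reduces to the optimal throughput $\Gamma_i^*$ used to normalize the reward.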
\subsection{Learning procedure}
\label{section:learning_procedure}
\textcolor{black}{We frame the decentralized learning procedure in two different ways, namely \textit{concurrent} and \textit{sequential}. Figure \ref{fig:async_vs_sync} illustrates the procedure followed by agents to carry out decentralized SR learning. As shown, in each iteration\footnote{\textcolor{black}{The time between iterations ($T$) must be large enough to provide an accurate estimation of the throughput experienced for a given action profile.}} there is a monitoring phase (shown in grey), where the current selected action is analyzed by each agent to quantify the hidden reward (which depends on the adversarial setting). Such a reward is the same for all the policies presented in this work, so that a fair comparison can be provided. After the monitoring phase is completed, agents update their knowledge (shown in purple) and choose a new action (shown in yellow). Note, as well, that both approaches rely on a synchronization phase (shown in green), which can be achieved through message passing \cite{sanghavi2009message, yang2011message} and/or environment sensing.\footnote{\textcolor{black}{The IEEE 802.11k amendment, which is devoted to measurement reporting, may enable the environment sensing operation.}}}
\begin{figure}[h!]
\centering
\epsfig{file=async_vs_sync.png, width=11cm}
\caption{Concurrent and sequential procedures.}
\label{fig:async_vs_sync}
\end{figure}
\textcolor{black}{In the concurrent approach, agents (or WNs) make decisions simultaneously, thus leading to a more variable (and chaotic) environment. In practice, the fully decentralized learning process will most probably not be synchronized, but we leave the study of the possible effects of such desynchronization for future work. In contrast, \textcolor{black}{in the sequential approach, WNs must wait for their turn in order to pick a new action. As a result, the performance of their last selected action (arm) is measured over several iterations (equal to the number of overlapping networks). In particular, during the update phase, a WN computes the reward of its last selected arm according to the throughput experienced on average. Accordingly, the performance of a given action is measured against different adversarial settings, since the environment changes gradually. Although agents still learn selfishly, they can better assess how robust an action is against the joint action profile, in comparison to the concurrent approach.}}
\subsection{Simulation Parameters}
\label{section:simulation_parameters}
According to \cite{bellalta2016ax}, which provides an overview of the IEEE 802.11ax-2019 standard, a typical high-density scenario for residential buildings contains $0.0033~\text{APs}/\text{m}^3$ (i.e., 100 APs in a $100 \times 20 \times 15$ m volume). Accordingly, for simulation purposes, we define a map scenario with dimensions $10\times5\times10$ m, containing from 2 to 8 APs. In addition, for the first part of the simulations, we consider a setting containing 4 WNs that form a grid topology, in which STAs are placed at the maximum possible distance from the other networks. Table \ref{tbl:simulation_parameters} details the parameters used.
\begin{table}[h!]
\centering
\resizebox{0.6\columnwidth}{!}{
\begin{tabular}{|l|l|}
\hline
\textbf{Parameter} & \textbf{Value} \\ \hline
Map size (m) & $10\times5\times10$ \\ \hline
Number of coexistent WNs & \{2, 4, 6, 8\} \\ \hline
APs/STAs per WN & 1 / 1 \\ \hline
Distance AP-STA (m) & $\sqrt{2}$ \\ \hline
Number of \textcolor{black}{orthogonal} channels & \textcolor{black}{3} \\ \hline
Channel bandwidth (MHz) & 20 \\ \hline
Initial channel selection model & Uniformly distributed
\\ \hline
\textcolor{black}{Transmit power values (dBm)} & \textcolor{black}{\{-15, 0, 15, 30\}} \\ \hline
$\text{PL}_0$ (dB) & 5
\\ \hline
$\text{G}_s$ (dB) & Normally distributed with mean 9.5
\\ \hline
$\text{G}_o$ (dB) & Uniformly distributed with mean 30
\\ \hline
$d_{\rm obs}$ (meters between two obstacles) & 5
\\ \hline
Noise level (dBm) & -100 \\ \hline
Traffic model & Full buffer (downlink) \\ \hline
Number of learning iterations & 10,000 \\ \hline
\end{tabular}}
\caption{Simulation parameters}
\label{tbl:simulation_parameters}
\end{table}
\section{Performance Evaluation}
\label{section:performance_evaluation}
In this Section, we evaluate the performance of each action-selection strategy presented in Section \ref{section:mabs} when applied to the decentralized SR problem in WNs.\footnote{The source code used in this work is open \cite{fwilhelmi2017code}, encouraging sharing of \textcolor{black}{knowledge} with potential contributors under the GNU General Public License v3.0.} For that purpose, we first evaluate in Section \ref{section:toy_grid_scenario} the $\varepsilon$-greedy, EXP3, UCB and Thompson sampling policies in a fixed adversarial environment. \textcolor{black}{This allows us to provide insights on the decentralized learning problem in a competitive scenario. Accordingly, we are able to analyze in detail the effect of applying each learning policy on the network's performance.} Without loss of generality, we consider a symmetric configuration and analyze the competition effects when WNs have the same opportunities for accessing the channel. Finally, Section \ref{section:random} provides a performance comparison of the aforementioned scenarios with different densities and with randomly located WNs.
\subsection{Toy Grid Scenario}
\label{section:toy_grid_scenario}
The toy grid scenario contains 4 WNs and is illustrated in Figure \ref{fig:scenario}. This scenario has the particularity of being symmetric, so that adversarial WNs have the same opportunities to compete for the channel resources. \textcolor{black}{The optimal solution in terms of proportional fairness\footnote{\textcolor{black}{The proportional fairness (PF) solution maximizes the sum of the logarithms of the individual throughputs: $\max \sum_{i \in \text{WN}} \log(\Gamma_i)$.}} is achieved when channel reuse is maximized and WNs sharing the channel moderate their transmit power. The PF solution provides an aggregate performance of $440.83$ Mbps (i.e., $106.212$ Mbps per WN on average). The optimal solution is computed by brute force (i.e., trying all the combinations), and it is used as a baseline.}
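The brute-force computation of the PF baseline can be sketched in Python as follows (a minimal sketch with our own naming; `throughput_of` stands for a hypothetical function mapping a joint action profile to the per-WN throughputs):

```python
import math
from itertools import product

def pf_optimal(action_sets, throughput_of):
    """Brute-force search for the action profile maximizing the
    proportional fairness objective sum_i log(Gamma_i)."""
    best_score, best_profile = -math.inf, None
    for profile in product(*action_sets):
        tputs = throughput_of(profile)
        if any(g <= 0 for g in tputs):
            continue  # log undefined; a starved WN cannot be PF-optimal
        score = sum(math.log(g) for g in tputs)
        if score > best_score:
            best_score, best_profile = score, profile
    return best_profile
```

The search space grows as $K^{|\text{WN}|}$, which is why brute force is feasible only for small baselines such as the 4-WN grid.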
\begin{figure}[h!]
\centering
\epsfig{file=4_WLANs_scenario.pdf, width=8.5cm}
\caption{Grid scenario containing 4 WNs, each one composed by an AP and a STA.}
\label{fig:scenario}
\end{figure}
\textcolor{black}{\subsubsection{Configuration of the Learning Parameters}}
\textcolor{black}{Before comparing the performance of each algorithm, we first analyze the effect of modifying their internal parameters. Since the versions of UCB and Thompson sampling analyzed in this work are parameter-free, in this section we focus only on the $\varepsilon$-greedy and EXP3 methods.}
\textcolor{black}{Firstly, $\varepsilon$-greedy allows regulating the explicit exploration rate at which the agent operates, referred to as $\varepsilon$. In this paper, $\varepsilon$ is dynamically adjusted as $\varepsilon_t = \frac{\varepsilon_0}{\sqrt{t}}$, with the aim of exploring more efficiently. Accordingly, we study the impact of the initial exploration coefficient on the performance experienced by a WN. Secondly, when it comes to EXP3, we find two parameters, namely $\eta$ and $\gamma$. While $\eta$ controls how fast old beliefs are replaced by newer ones, $\gamma$ regulates explicit exploration by tuning the importance of the weights in the action-selection procedure. Setting $\gamma = 1$ results in completely neglecting the weights (all actions have the same probability of being chosen), whereas setting $\gamma = 0$ gives the weights their full effect. Thus, in order to clearly analyze the effect of the EXP3 weights, which directly depend on $\eta$, we fix $\gamma$ to 0. As we did for $\varepsilon$-greedy, we analyze the impact of the parameter $\eta_0$ on the WN's performance.}
\textcolor{black}{Figure \ref{fig:tuning_parameters} shows the aggregate throughput obtained in the grid scenario when applying both $\varepsilon$-greedy and EXP3 during 10,000 iterations, and for each $\varepsilon_0$ and $\eta_0$ values, respectively. The results are presented for values $\varepsilon_0$ and $\eta_0$ between 0 and 1 in 0.1 steps. The average and standard deviation of the throughput from 100 simulation runs are also shown, and compared with the proportional fair solution.}
\begin{figure}[h!]
\centering
\epsfig{file=tuning_parameters.png, width=7cm}
\caption{\textcolor{black}{Average network throughput and standard deviation obtained for each $\varepsilon_0$ and $\eta_0$ value in $\varepsilon$-greedy and EXP3, respectively. Results are from 100 simulations lasting 10,000 iterations each. The proportional fair solution is also shown (red dashed line).}}
\label{fig:tuning_parameters}
\end{figure}
\textcolor{black}{As shown, the aggregate throughput obtained on average is quite similar for all $\varepsilon_0$ and $\eta_0$ values, except for the degenerate case in which $\varepsilon_0$ and $\eta_0$ are equal to 0, where no proper learning takes place. For $\varepsilon$-greedy, the lower the $\varepsilon_0$ parameter, the less exploration is performed. Consequently, for low $\varepsilon_0$, the average throughput is highly dependent on how good or bad the actions taken at the beginning of the learning process were, which results in a higher standard deviation as $\varepsilon_0$ approaches 0. As for EXP3, the lower $\eta_0$, the more slowly the weights are updated. For $\eta_0 = 0$, the weights are never updated, so that all arms always have the same probability of being chosen. To conclude, we choose $\varepsilon_0 = 1$ and $\eta_0 = 0.1$, respectively, for the rest of the simulations, since these values provide the highest ratio between the aggregate throughput and the variability among different runs.}
\textcolor{black}{\subsubsection{Performance of the MAB-based Policies}}
\textcolor{black}{Having established the initial parameters to be used by both $\varepsilon$-greedy and EXP3, we now compare the performance of all the studied action-selection strategies when applied to decentralized WNs. First, we focus on the average throughput achieved by each WN in the toy grid scenario for each of the methods (Figure \ref{fig:results_part_1_variability_default}). As shown, the proportional fair solution is almost achieved by all the learning methods. However, Thompson sampling turns out to be much more stable than the other mechanisms, since its variability in the aggregate throughput is much lower (depicted in Figure \ref{fig:results_part_1_agg_variability_default}).}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{results_part_1_mean_tpt_default}
\caption{Mean throughput}
\label{fig:results_part_1_variability_default}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{results_part_1_agg_variability_default}
\caption{Temporal network throughput}
\label{fig:results_part_1_agg_variability_default}
\end{subfigure}
\caption{\textcolor{black}{Mean throughput achieved per WN, for each action-selection strategy (the standard deviation is shown in red). The black dashed line indicates the PF result.}}
\label{fig:results_part_1_mean_tpt}
\end{figure}
\textcolor{black}{In order to dig deeper into the behavior of agents under each policy, Figure \ref{fig:actions_probabilities} shows the probability of each WN choosing each action. Regarding $\varepsilon$-greedy, EXP3 and UCB, a large set of actions is chosen with similar probabilities. Note that there are only three frequency channels, so two WNs need to share one of them, which leads to lower performance for those two with respect to the other two. Therefore, WNs constantly change their channel and experience intermittent good/poor performance. Thus, the degree of exploration remains very high, resulting in high temporal variability. In contrast, Thompson sampling shows a clearer preference for a single action, which reduces the aforementioned variability.}
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.43\textwidth}
\includegraphics[width=\textwidth]{actions_probability_EG}
\caption{$\varepsilon$-greedy ($\varepsilon_0 = 0.1$)}
\label{fig:actions_probability_EG}
\end{subfigure}
\begin{subfigure}[b]{0.43\textwidth}
\includegraphics[width=\textwidth]{actions_probability_EXP3}
\caption{EXP3 ($\eta_0 = 0.1$)}
\label{fig:actions_probability_EXP3}
\end{subfigure}
\begin{subfigure}[b]{0.43\textwidth}
\includegraphics[width=\textwidth]{actions_probability_UCB}
\caption{UCB}
\label{fig:actions_probability_UCB}
\end{subfigure}
\begin{subfigure}[b]{0.43\textwidth}
\includegraphics[width=\textwidth]{actions_probability_TS}
\caption{TS}
\label{fig:actions_probability_TS}
\end{subfigure}
\caption{\textcolor{black}{Probability of selecting each given action for a simulation of 10,000 iterations.}}
\label{fig:actions_probabilities}
\end{figure}
\textcolor{black}{\subsubsection{Learning Sequentially}}
\label{section:proposed_method}
\textcolor{black}{In order to alleviate the strong throughput variability experienced when applying decentralized learning, we now focus on the sequential approach introduced in Section \ref{section:learning_procedure}. Now, only one WN is able to select an action at a time. With that, we aim to reduce the adversarial effect on the estimated rewards. Therefore, by having a more stable environment (not all the WNs learn simultaneously), the actual reward of a given selected action can be estimated more accurately. Figure \ref{fig:results_part_1_async_vs_sync} shows the differences between learning through concurrent and sequential mechanisms. Firstly, the throughput experienced on average along the entire simulation is depicted in Figure \ref{fig:results_part_1_async_vs_sync_meant_tpt}. Secondly, without loss of generality, Figure \ref{fig:results_part_1_async_vs_sync_variability} shows the temporal variability experienced by $\text{WN}_4$ when applying Thompson sampling. Note that showing the performance of a single WN is representative enough for the entire set of WNs (the scenario is symmetric), and allows us to analyze in detail the behavior of the algorithms.}
\textcolor{black}{On the one hand, a lower throughput is experienced on average when learning sequentially, but the differences are very small. In such a situation, WNs spend more time observing sub-optimal actions, since they need to wait for their turn. Note, as well, that the time between iterations ($T$) depends on the implementation; in this particular case, we assume that $T$ is the same for both the sequential and concurrent approaches.}
\textcolor{black}{On the other hand, the temporal variability shown by the sequential approach is much lower than that of the concurrent one (Figure \ref{fig:results_part_1_async_vs_sync_variability}). High temporal variability may negatively impact the user's experience, and the operation of upper-layer protocols (e.g., TCP) may be severely affected. Notice that a similar effect is observed for the rest of the algorithms.}
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{results_part_1_async_vs_sync}
\caption{Average throughput}
\label{fig:results_part_1_async_vs_sync_meant_tpt}
\end{subfigure}
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{results_part_1_async_vs_sync_variability}
\caption{Temporal variability in $\text{WN}_4$}
\label{fig:results_part_1_async_vs_sync_variability}
\end{subfigure}
\caption{\textcolor{black}{Concurrent vs sequential approaches performance. (a) Mean average throughput achieved for each learning procedure. (b) Temporal variability experienced by $\text{WN}_4$ for the Thompson sampling action-selection strategy and for each learning procedure.}}
\label{fig:results_part_1_async_vs_sync}
\end{figure}
\subsubsection{Learning in a Dynamic Environment}
\label{section:dynamic_environment}
\textcolor{black}{Finally, we show the performance of the proposed learning mechanisms in a dynamic scenario. For that, we propose the following situation. Firstly, $\text{WN}_1$ and $\text{WN}_2$ are active for the whole simulation. Secondly, $\text{WN}_3$ turns on at iteration 2,500, when $\text{WN}_1$ and $\text{WN}_2$ are assumed to have acquired enough knowledge to maximize SR. Finally, $\text{WN}_4$ turns on at iteration 5,000, in the same way as $\text{WN}_3$.}
\textcolor{black}{Through this simulation, we aim to show how each learning algorithm adapts to changes in the environment, which strongly impact the reward distributions. Figure \ref{fig:dynamic_enviroment} shows the temporal aggregate throughput achieved by each action-selection strategy. As done in Subsection \ref{section:proposed_method}, we only plot the results of the best-performing algorithm, i.e., Thompson sampling, both for the concurrent and the sequential procedures.}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{temporal_aggregate_tpt_dynamic_scenario_async_TS}
\caption{Concurrent approach}
\label{fig:temporal_aggregate_tpt_dynamic_scenario_async_TS}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{temporal_aggregate_tpt_dynamic_scenario_ordered_TS}
\caption{Sequential approach}
\label{fig:temporal_aggregate_tpt_dynamic_scenario_ordered_TS}
\end{subfigure}
\caption{\textcolor{black}{Temporal aggregate throughput experienced for a 10,000-iteration Thompson sampling simulation.}}
\label{fig:dynamic_enviroment}
\end{figure}
\textcolor{black}{As shown, WNs are able to adapt to the changes in the environment. In particular, for the concurrent case (see Figure \ref{fig:temporal_aggregate_tpt_dynamic_scenario_async_TS}), changes are harder to capture as the network size increases. In contrast, learning in an ordered way (see Figure \ref{fig:temporal_aggregate_tpt_dynamic_scenario_ordered_TS}) allows reducing the temporal variability, even if new WNs turn on. However, there is a slight loss in the aggregate performance with respect to the concurrent approach. The difference with respect to the maximum network performance is mostly caused by the reduced exploration of the sequential approach.}
\subsection{Random Scenarios}
\label{section:random}
We now evaluate whether the previous conclusions generalize to random scenarios with an arbitrary number of WNs. To this aim, we use the same $10\times5\times 10$ m scenario and randomly allocate N = \{2, 4, 6, 8\} WNs. \textcolor{black}{Figures \ref{fig:random_scenarios_results} and \ref{fig:random_scenarios_results_variability} show the mean throughput and the variability experienced for each learning strategy and each number of coexisting WNs, respectively. The variability is measured as the standard deviation that a given WN experiences along an entire simulation. We consider the average results of 100 different random scenarios for each number of networks. In particular, we are interested in analyzing the gains achieved by each algorithm, even if convergence cannot be guaranteed due to the competition between networks. For that, we display the average performance for the following learning intervals: [1-100, 101-500, 501-1000, 1001-2500, 2501-10000]. Note that the first intervals comprise few iterations, which allows us to observe the performance achieved during the transitory phase in more detail. In addition, the performance achieved in a static situation (i.e., when no learning is performed) is shown in Figure \ref{fig:random_scenarios_results}. With that, we aim to compare the gains obtained by each learning strategy with respect to the current IEEE 802.11 operation in unplanned and chaotic deployments.}
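The variability metric described above (the standard deviation of a WN's throughput along one simulation run) can be sketched in a few lines. This is an illustrative example with made-up throughput series, not the paper's simulation data:

```python
import statistics

def temporal_variability(throughput_series):
    """Temporal variability of one WN: population std. deviation of its
    throughput samples along an entire simulation run."""
    return statistics.pstdev(throughput_series)

# Hypothetical traces: a stable (sequential-like) run vs a highly
# fluctuating (concurrent-like) run, in Mbps.
stable   = [50.0, 51.0, 50.0, 49.0, 50.0]
unstable = [20.0, 80.0, 25.0, 75.0, 50.0]
var_stable = temporal_variability(stable)
var_unstable = temporal_variability(unstable)
```

The fluctuating trace yields a much larger deviation even though both runs may have a similar mean throughput, which is exactly the effect Figure \ref{fig:random_scenarios_results_variability} reports.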
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{random_scenarios_results}
\caption{\textcolor{black}{Average throughput experienced in each learning interval for each action-selection strategy. Results from 100 repetitions are considered for each different number of overlapping WNs (N = \{2, 4, 6, 8\}). The black dashed line indicates the default IEEE 802.11 performance (static situation).}}
\label{fig:random_scenarios_results}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.77\textwidth]{random_scenarios_results_variability}
\caption{\textcolor{black}{Average variability experienced in each learning interval for each action-selection strategy. Results from 100 repetitions are considered for each different number of overlapping WNs (N = \{2, 4, 6, 8\}).}}
\label{fig:random_scenarios_results_variability}
\end{figure}
\textcolor{black}{First of all, let us focus on the throughput improvements achieved with respect to the static situation. As shown in Figure \ref{fig:random_scenarios_results}, each learning strategy easily outperforms the static scenario for low densities (i.e., 2 and 4 overlapping WNs). However, as density increases, improving the average throughput becomes more challenging. This is clearly evidenced for N = \{6, 8\} WNs, where EXP3 performs worse than the static situation.}
\textcolor{black}{Secondly, we concentrate on the concurrent learning procedure. As shown in Figure \ref{fig:random_scenarios_results}, Thompson sampling outperforms the other action-selection strategies in all the scenarios, provided that enough exploration is done (up to 500 iterations). On the other hand, $\varepsilon$-greedy increases the average performance very quickly, but its growth stalls from iteration 200. Note that $\varepsilon$-greedy is based on the absolute throughput value, which prevents it from finding a collaborative behavior in which the scarce resources are optimally shared. Finally, EXP3 and UCB are shown to improve the average throughput linearly, but offer poor performance.}
\textcolor{black}{When it comes to the sequential approach, we find the following:}
\begin{itemize}
\item \textcolor{black}{On the one hand, the average throughput is reduced in almost all the cases in comparison with the concurrent approach (see Figure \ref{fig:random_scenarios_results}). We find that this is due to the longer phases in which agents exploit sub-optimal actions. As previously pointed out, the time between iterations is considered to be the same for both the sequential and concurrent learning approaches.}
\item \textcolor{black}{On the other hand, the sequential procedure is shown to substantially reduce the variability experienced by $\varepsilon$-greedy, EXP3 and UCB (see Figure \ref{fig:random_scenarios_results_variability}). The performance of the latter is particularly improved when the learning procedure is ordered: the sequential approach allows UCB to produce more accurate estimates of the action rewards. In contrast, learning sequentially does not improve the concurrent version of Thompson sampling in any performance metric. We attribute this suboptimal behavior to the way in which Thompson sampling estimates the rewards of the played actions, which depends on the number of times each one is selected. In particular, suboptimal actions can eventually provide good enough performance in the adversarial setting. The same issue can lead to underestimating optimal actions, so that their actual potential is not observed. Since Thompson sampling bases its estimates on the number of times each action is selected, the aforementioned effect may increase the exploitation of suboptimal actions.}
\end{itemize}
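To illustrate how Thompson sampling's estimates depend on the number of times each action is selected, the following is a minimal, generic Beta-Bernoulli sketch assuming rewards normalized to $[0, 1]$; it is not the paper's exact estimator or prior, only an illustration of the sample-then-exploit mechanism:

```python
import random

def thompson_sampling_step(alpha, beta_p):
    """One Thompson sampling iteration: sample each arm's Beta posterior
    and play the arm with the largest sample."""
    samples = [random.betavariate(a, b) for a, b in zip(alpha, beta_p)]
    return max(range(len(samples)), key=lambda k: samples[k])

def update(alpha, beta_p, arm, reward):
    # Treat the normalized reward as a fractional Bernoulli outcome,
    # so each arm's estimate sharpens with the number of times it is played.
    alpha[arm] += reward
    beta_p[arm] += 1.0 - reward

# Toy run with two arms: arm 1 pays more on average, so the posterior
# concentrates on it and it ends up selected most often.
random.seed(0)
alpha, beta_p = [1.0, 1.0], [1.0, 1.0]
pulls = [0, 0]
for _ in range(2000):
    arm = thompson_sampling_step(alpha, beta_p)
    reward = random.random() * (0.4 if arm == 0 else 0.8)
    update(alpha, beta_p, arm, reward)
    pulls[arm] += 1
```

In an adversarial setting the reward of an arm drifts with the opponents' play, so an arm that looked good early keeps being selected and reinforced, which is the exploitation bias discussed above.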
\section{Conclusions}
\label{section:conclusions}
In this paper, we provided an implementation of MABs to address the decentralized SR problem in dense WNs. Unlike previous literature, we have focused on a situation in which few resources are available, thus bringing out the competition issues arising from the adversarial setting. Our results show that decentralized learning allows improving SR in dense WNs, so that collaborative results in symmetric scenarios, sometimes close to optimal proportional fairness, can be achieved. This result is achieved even though WNs act selfishly, aiming to maximize their own throughput. In addition, this behavior is observed for random scenarios, where the effects of asymmetries cannot be controlled.
These collaborative actions are, at times, accompanied by high temporal throughput variability, which can be understood as a consequence of the rate at which networks change their configuration in response to the opponents' behavior. A high temporal variability may harm a node's performance, as its effects may propagate to higher layers of the protocol stack. For instance, a high throughput fluctuation may entail behavioral anomalies in protocols such as the Transmission Control Protocol (TCP).
We have studied this trade-off between fair resource allocation and high temporal throughput variability for the $\varepsilon$-greedy, EXP3, UCB and Thompson sampling action-selection strategies. Our results show that, while this trade-off is hard to regulate via the learning parameters in $\varepsilon$-greedy and EXP3, UCB and, especially, Thompson sampling are able to achieve fairness at a reduced temporal variability. We identify the root cause of this phenomenon in the fact that both UCB and Thompson sampling consider the probability distribution of the rewards, and not only their magnitude.
\textcolor{black}{Furthermore, for the sake of alleviating the temporal variability, we studied the effects of learning concurrently and sequentially. We have shown that learning in an ordered way is very effective to reduce the throughput variability for almost all the proposed learning strategies, even if WNs maintain a selfish behavior. By learning sequentially, more knowledge is attained on a given action, thus allowing to quickly differentiate between well- and poorly-performing actions. \textcolor{black}{Apart from that, we found that Thompson sampling grants significantly better results than the other examined algorithms, since it is able to capture meaningful information from chaotic environments.}}
We leave as future work the further study of the application of MABs to WNs through distributed (with message passing) and centralized (with complete information) approaches with a shared reward. In particular, we would like to extend this work to enhance both throughput and stability by inferring the actions of the opponents and acting in consequence, as well as to further investigate dynamic scenarios. Defining the resource allocation problem as an adversarial game is one possibility to do so. \textcolor{black}{In addition, the utilization of multiple-antenna strategies (i.e., single- and multi-user beamforming and interference cancellation) is expected to further improve the spectral efficiency of future WNs. Through these techniques, the SR problem can be relaxed in a similar way as using several non-overlapping frequency channels. However, their application would significantly increase the problem's complexity, and their analysis is also left as future work.}
\section*{Acknowledgments}
This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), by a Gift from CISCO University Research Program (CG\#890107) \& Silicon Valley Community Foundation, by the European Regional Development Fund under grant TEC2015-71303-R (MINECO/FEDER), and by the Catalan Government under grant SGR-2017-1188.
\textcolor{black}{The authors would like to thank the anonymous reviewers. Their dedication and insightful comments were of great help in improving this paper. We would also like to show our gratitude to Andrés Burbano, who contributed to improving the quality of this document through his thorough revision.}
\newpage
\bibliographystyle{unsrt}
\section{Introduction}
Big data provides an unprecedented opportunity to construct real-time and high-frequency macroeconomic indicators, which can not only play a unique role in the early warning and monitoring of macroeconomic operation, but also provide a new perspective for research in economics. However, most of China's important macroeconomic indicators, such as the Consumer Price Index (CPI), which measures the level of inflation, are released only monthly and with a lag. The development of scanner data provides a new data source for the compilation of the CPI, which can both improve the timeliness of CPI releases and enable compilation at higher time frequencies. At present, the Netherlands, Norway, Switzerland, Sweden, New Zealand and other countries have officially applied scanner data to the compilation of their CPIs.
To the best of our knowledge, there are no research results that directly apply scanner data to the compilation of the CPI in China, so we use the scanner data of supermarket retail sales provided by CAA to compile the Scanner-data Food Consumer Price Index (S-FCPI). It not only fills this gap but, owing to its unique advantages such as strong timeliness, diversified time frequencies and wide geographical coverage, also provides a new perspective for real-time monitoring of price-level changes, reflecting macroeconomic operation, measuring inflation inertia and predicting inflation, making it a beneficial supplement to China's CPI.
Scanner data refers to the highly structured data containing goods prices, sales volume, sales location and other information, formed by scanning the barcode of a good when it is sold. It not only provides a new data source for the compilation of price indexes by national statistical agencies and related research, but also makes it possible to compile superlative price indexes at detailed aggregation levels, since both prices and quantities are available (see \cite{2011Eliminating}). Among the countries that currently use scanner data to compile the CPI, the Netherlands first tried to apply scanner data to CPI compilation in 1997, starting with a price index for coffee (see \cite{1997Estimation}). After a series of tests on the index's performance, scanner data was formally applied to the compilation of the Dutch CPI in 2002. At present, the goods categories for which the Dutch CPI uses scanner data have been expanded from the original food category to six categories of COICOP.
In Norway, scanner data has been officially used to compile the CPI since 2005, with the T\"ornqvist price index applied to compile the basic classification price indexes. Now, 20\% of the data used to compile the Norwegian CPI is scanner data, covering seven categories of COICOP (see \cite{nygaard2010chain}; \cite{johansen2011dealing}; \cite{rodriguez2006use}; \cite{johansen2012various}). In Switzerland, scanner data, provided by the two largest retail chains in the country, has been officially used to compile the CPI since 2008, covering six categories of COICOP (see \cite{muller2006recent}). New Zealand began to compile the CPI of electronics in September 2013 based on the scanner data provided by GfK; considering the rapid turnover of products in electronics, the existence of seasonal products, and the price and sales volume information provided by scanner data, New Zealand uses the T\"ornqvist price index as its index formula. In addition to the practical application of scanner data to CPI compilation in some countries, many scholars have conducted research on how to overcome the potential problems of scanner data, such as the high churn rate of goods and the large fluctuations of prices and sales volumes caused by promotions. For example, \cite{ivancic2011scanner} put forward the RYGEKS (Rolling Year GEKS) price index, and \cite{2011Eliminating} introduced the T\"ornqvist price index into the RYGEKS price index, which is free from the impact of chain drift; New Zealand uses this method to compile its CPI now. Moreover, \cite{RePEc:swe:wpaper:2018-13} suggested that the CCDI index and the mean splice method should be used to update the price index in order to control the degree of chain drift.
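For reference, the bilateral T\"ornqvist index used by several of these statistical offices weights the log price relatives by the average expenditure shares of the two compared periods, $P_T = \exp\big(\sum_i \tfrac{1}{2}(s_i^0 + s_i^1)\ln(p_i^1/p_i^0)\big)$. A minimal sketch with toy prices and quantities (not official data):

```python
import math

def tornqvist(p0, p1, q0, q1):
    """Bilateral Tornqvist price index between periods 0 and 1.

    Each good's log price relative is weighted by the average of its
    expenditure shares in the two periods.
    """
    e0 = [p * q for p, q in zip(p0, q0)]   # expenditures in period 0
    e1 = [p * q for p, q in zip(p1, q1)]   # expenditures in period 1
    s0 = [e / sum(e0) for e in e0]
    s1 = [e / sum(e1) for e in e1]
    return math.exp(sum(0.5 * (a + b) * math.log(pb / pa)
                        for a, b, pa, pb in zip(s0, s1, p0, p1)))

# If every price rises by 10%, the index is 1.10 regardless of quantities,
# since the weights always sum to one.
idx = tornqvist([2.0, 5.0], [2.2, 5.5], [10, 4], [9, 5])
```

Because it uses both prices and quantities, this formula is only computable from data sources such as scanner data, which is why it recurs across the national practices surveyed above.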
However, due to information protection and other reasons, retail enterprises are often reluctant to provide their scanner data, and the cost of purchasing access to scanner data also raises the cost of conducting the relevant research. Therefore, considering the correlation between online and offline goods prices, some research has turned to online price data. Currently, the most mature results are the Billion Prices Project (BPP) carried out at MIT and the iCPI issued by researchers from Tsinghua University. \cite{cavallo2016billion}, the founders of the MIT Billion Prices Project, introduced the data sources, data processing and application value of BPP in detail. Considering that large multi-channel retailers (i.e., retailers selling online and offline at the same time, such as Walmart) account for most retail sales in most countries, BPP takes the price data of such retailers as its main data source, but the BPP of China only includes fresh food and supermarket food. \cite{HARCHAOUI2018225} showed that, since online goods prices can reflect changes in offline goods prices to a certain extent, BPP can be used as one of the indicators to enhance the timeliness of national CPIs; their research also realized mixed-frequency prediction of the U.S. CPI based on the U.S. BPP index and the MIDAS model. The iCPI released by the iCPI project team of Tsinghua University is a relatively mature online price index in China at present. \cite{LiuTaoxiong2019} discussed in detail the data sources, compilation method and application value of iCPI. iCPI takes China's online shopping platforms (such as Taobao, JD, Suning, etc.) and goods price information platforms (such as Price Quotations, Soufun, etc.) as its main data sources, and its classification of goods categories is basically consistent with China's CPI.
In addition, iCPI can produce daily and weekly high-frequency price indexes, but its main disadvantage is that it cannot reflect changes in regional price levels.
The remainder of the paper is organized as follows. Section \ref{sec:Data and S-FCPI} features the data source of S-FCPI and introduces the compilation method of S-FCPI. Section \ref{sec:CPI and S-FCPI} analyzes the correlation between S-FCPI and CPI and other macro indicators, thus proving the reliability of S-FCPI. Section \ref{sec:Predicting CPI by S-FCPI} uses S-FCPI to realize the quantitative and qualitative prediction of CPI growth rate. Section \ref{sec:Conclusion} is conclusion which summarizes the results.
\section{Data source and S-FCPI}
\label{sec:Data and S-FCPI}
The retail scanner data of Chinese supermarkets integrated by CAA provides not only massive goods price data, but also sales location, volume, amount and other detailed information, which makes it possible to construct a consumer price index. This paper employs this supermarket retail scanner big data to construct China's Scanner-data Food Consumer Price Index (S-FCPI). In this section, we elaborate on data collection, data processing and the construction of S-FCPI.
\subsection{Data collecting}
The data source for compiling S-FCPI is the retail scanner data of Chinese supermarkets provided by the China Ant Business Alliance (CAA), a retail resource alliance organization. Since its establishment in 2017 by 12 chain retail enterprises in 6 provinces of China, CAA has developed rapidly: in 2020, it had more than 100 member enterprises in 32 provinces, cities, and autonomous regions of China, with a total annual turnover of nearly 100 billion Yuan. By utilizing Alibaba Cloud and the MaxCompute platform~\cite{aliyun}, CAA collects and pre-processes the scanner big data generated by its member enterprises in real time.
The traditional survey-based method used in most countries to collect retail data has remained largely unchanged: the process is expensive, complex and often slow. The scanner big data provided by CAA has the following main advantages:
\begin{itemize}
\item Low cost. Each sale of goods is equivalent to data collection, which reduces the cost compared with market survey.
\item High frequency. The data will be uploaded to the database in real time when the goods are sold, and the sales time record is accurate to seconds, so the data can be aggregated into different time dimensions according to the research requirement.
\item Large volume. The database stores the sales data of 12 million goods covering 32 administrative regions in China, and the goods are divided by CAA into 13 first-level, 67 second-level, 419 third-level and 776 fourth-level categories.
\end{itemize}
\subsection{Data processing}
Despite the significant advantages of scanner data, processing it takes effort due to its massive volume, which puts forward higher requirements for the optimization of data processing; the most critical choice is the data aggregation route.
Specifically, the goods sales summary table (daily), the goods barcode information table and the goods category table are the basic datasets when compiling S-FCPI. Among them, the goods sales summary table (daily) is the most important: it is partitioned by date, where each partition corresponds to an independent folder on the distributed system storing the scanner data of all stores on that day. This avoids unnecessary scans of the whole dataset, optimizing queries and improving processing efficiency. When compiling S-FCPI, these datasets must be aggregated vertically along the time dimension (daily, weekly and monthly) and horizontally along the geographical dimension (city (county), provincial (district) and national levels). Therefore, two possible data aggregation routes exist: vertical aggregation along the time dimension followed by horizontal aggregation along the geographical dimension, or horizontal aggregation along the geographical dimension followed by vertical aggregation along the time dimension.
Since neither aggregation route can avoid full dataset scans, the route requiring the fewest full scans is the most efficient one, given identical aggregation results. The first route needs to scan the goods sales summary table (daily) only once to obtain daily aggregated data at the city (county) level, which is then aggregated into weekly and monthly data at the city (county) level; finally, the daily, weekly and monthly city (county) aggregates are rolled up to the provincial (district) and national levels. The second route needs to scan the goods sales summary table (daily) three times to obtain daily aggregated data at the city (county), provincial (district) and national levels, which are then aggregated into weekly and monthly data. Therefore, the first route effectively reduces the number of full dataset scans and improves aggregation efficiency, so we choose it as the final data aggregation route.
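The efficiency argument for the first aggregation route can be illustrated with a toy stand-in for the daily sales table; a counter tracks how many times the raw table is fully scanned, and the route touches it exactly once (hypothetical data and a simplified schema, not the CAA tables):

```python
from collections import defaultdict

SCANS = {"n": 0}  # counts full scans of the raw daily sales table

RAW = [  # (date, city, province, sales): toy stand-in for the daily table
    ("2022-01-01", "c1", "p1", 10.0), ("2022-01-01", "c2", "p1", 20.0),
    ("2022-01-02", "c1", "p1", 30.0), ("2022-01-02", "c2", "p1", 40.0),
]

def full_scan():
    SCANS["n"] += 1
    return RAW

def route_one():
    """Route one: aggregate to daily city level with a single raw scan,
    then roll the (much smaller) aggregate up the geographic dimension."""
    city_daily = defaultdict(float)
    for date, city, prov, sales in full_scan():   # the only raw scan
        city_daily[(date, city, prov)] += sales
    prov_daily = defaultdict(float)
    for (date, city, prov), s in city_daily.items():  # reuses the aggregate
        prov_daily[(date, prov)] += s
    return prov_daily

prov_daily = route_one()
```

Route two would call `full_scan()` once per geographic level (city, provincial, national), tripling the cost on the raw table, which is exactly the difference motivating the choice above.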
\subsection{Index construction}
To ensure the scientificity and rigor of the compilation of S-FCPI as well as the correctness and validity of the results, we mainly refer to the theories and methods of compiling CPI in China, and the ``Circulation and Consumer Price Statistical Report System (2021)'' published by the National Bureau of Statistics of China. In addition, other relevant materials such as ``Consumer Price Index Manual: Concepts and Methods (2020)'' published by the International Monetary Fund and the International Labour Organization are also referred. The detailed compilation process of S-FCPI is shown in Figure~\ref{The compilation process of S-FCPI.}.
\begin{figure}[!h]
\centering
\includegraphics[scale=.6]{figs/flow.pdf}
\caption{The construction process of S-FCPI.}
\label{The compilation process of S-FCPI.}
\end{figure}
\subsubsection{Categories of goods}
In order to make the price changes reflected by S-FCPI reliable, we mainly refer to the ``Classification of Individual Consumption According to Purpose'' published by the United Nations and the ``Circulation and Consumer Price Statistical Reporting System (2021)'' to reclassify the goods categories in the CAA database. The goods categories of S-FCPI are generally consistent with China's CPI, and the specific categories are shown in Table~\ref{Goods categories}.
\begin{table}[H]
\caption{Goods categories of S-FCPI.}\label{Goods categories}
\begin{tabular}{ll}
\toprule
Sub-category & Basic category \\ \midrule
Grain & Rice, noodles, other grains, and grain products \\
Tuber & Tuber, Tuber products \\
Bean & Dried beans, soy products \\
Edible oil & \begin{tabular}[c]{@{}l@{}}Rapeseed oil, soybean oil, peanut oil, sunflower oil, camellia oil, blend oil, \\ linseed oil, corn oil, olive oil, butter, other edible oils\end{tabular} \\
Vegetables and edible fungi & \begin{tabular}[c]{@{}l@{}}Onion, ginger, garlic and pepper, root vegetables, mushroom vegetables, \\ nightshade vegetables, leafy vegetables, \\ dried vegetables and dried bacteria and products\end{tabular} \\
Meat of animals & \begin{tabular}[c]{@{}l@{}}Pork, beef, mutton, other livestock meat and by-products, \\ livestock meat products\end{tabular} \\
Meat of poultries & Chicken, duck, other poultry meat and products \\
Aquatic products & \begin{tabular}[c]{@{}l@{}}Fish, shrimp, crab, shellfish, algae, soft pod, \\ other aquatic products\end{tabular} \\
Eggs & \begin{tabular}[c]{@{}l@{}}Eggs, duck eggs, goose eggs, quail eggs, pigeon eggs, \\ other egg products and products\end{tabular} \\
Dairy & \begin{tabular}[c]{@{}l@{}}Pure milk, pure goat milk, yogurt, milk powder, milk beverage, \\ other milk products\end{tabular} \\
Dried fresh melons and fruits & Fresh fruit, nuts, candied dried fruit, other melon and fruit products \\
Candy and pastries & Sugar, sweets and chocolate, pastries, other confectionery pastries \\
Condiments & \begin{tabular}[c]{@{}l@{}}Edible salt, soy sauce, vinegar, cooking wine, chicken essence, \\ monosodium glutamate, sesame oil, seasoning sauce, chili oil, \\ spices, other condiments\end{tabular} \\
Other food categories & Convenience food, starch, and products, puffed food, baby food \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Collection of goods prices}
There is a significant difference between S-FCPI and CPI in the way prices are collected. Specifically, the representative goods under each CPI category are selected by local investigation teams based on sales volume, supplemented by the opinions of the relevant local departments. This means that CPI constructs a fixed basket of goods and may thus suffer errors due to the incomplete selection of goods. In contrast, S-FCPI regards all goods with sales data as representative goods and uses the product ID to match the sales data across periods; that is, S-FCPI constructs a variable basket of goods to ensure that the goods in the basket remain homogeneous and comparable.
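The product-ID matching that underlies the variable basket can be sketched as follows (hypothetical IDs and prices): only goods observed in both periods enter the price comparison, so exits and new entries never produce spurious price relatives.

```python
def matched_basket(period0, period1):
    """Keep only goods present in both periods, matched by product ID,
    so price relatives compare like with like (the 'variable basket')."""
    common = period0.keys() & period1.keys()
    return {pid: (period0[pid], period1[pid]) for pid in common}

# Hypothetical unit prices keyed by product ID.
base    = {"6901": 3.5, "6902": 8.0, "6903": 2.0}   # period 0
current = {"6901": 3.8, "6902": 7.5, "6904": 4.0}   # 6903 exited, 6904 new
pairs = matched_basket(base, current)
```

Unlike a fixed basket, the matched set is recomputed every period, so the basket follows what is actually on the shelves.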
On the frequency of price collection, the data for compiling CPI are obtained through manual investigation, with different survey frequencies set according to the type of goods (1 to 5 times per month). For scanner data, each order can be regarded as a price collection of all goods in the bill; therefore, scanner data enables real-time price collection, which allows S-FCPI to be constructed not only monthly, at the same frequency as CPI, but also daily and weekly, at higher frequencies.
On the location of price collection, the National Bureau of Statistics of China determines it by sampling cities (counties) and price survey points \footnote{The sampling of cities (counties) is stratified sampling according to city size, population, and income, and some small or medium-sized cities (counties) are appropriately added to the survey to enhance the representativeness of the provincial price indexes. The sampling of price survey points is equidistant sampling after sorting the transaction volumes of farmers markets (fresh markets), supermarkets, etc. in each sampled city (county) in descending order.}. S-FCPI regards all stores of CAA member enterprises as price collection locations instead of sampling, so it can not only be constructed into national and provincial price indexes with the same geographical dimensions as CPI, but also into more detailed geographic dimensions, such as county-level price indexes.
\subsubsection{Weights setting of S-FCPI}
Since different categories of goods account for different proportions of household consumption, corresponding weights need to be set when compiling S-FCPI. The CPI weights are mainly derived from household survey data in the base year, slightly adjusted using typical survey data and expert assessments. In addition, CPI has no weight data below the basic category, only above it. Different from the CPI weights, the S-FCPI weights are calculated from the CAA database, which enables timely adjustment of the weights according to the consumption structure of consumers during the reporting period. The main idea is to select the aggregated data corresponding to the time and geographic dimensions of S-FCPI, and then obtain the weights by calculating the share of goods sales.
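The weight computation described above, i.e., each category's share of total sales within the selected aggregate, can be sketched as (hypothetical category sales):

```python
def sales_share_weights(sales_by_category):
    """Weights as each category's share of total sales in the period."""
    total = sum(sales_by_category.values())
    return {c: s / total for c, s in sales_by_category.items()}

# Toy monthly sales (in thousand Yuan) for three basic categories.
w = sales_share_weights({"Grain": 200.0, "Eggs": 50.0, "Dairy": 250.0})
```

Because the shares are recomputed from the reporting-period aggregates, the weights can be refreshed at whatever frequency (daily, weekly, monthly) the index itself is compiled.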
On the composition of weights, the CPI weights include city (county), provincial (district) and national weights, and the provincial (district) and national weights are further divided into corresponding urban and rural weights. The composition of the S-FCPI weights is roughly the same, also including city (county), provincial (district) and national weights, and all weights are available at daily, weekly, and monthly frequencies.
On the update frequency of weights, the CPI weights have been updated every five years since 2000 and are slightly adjusted with relevant data in between. However, the update frequency of the CPI weights is still relatively low, which may lead to non-sampling error. S-FCPI raises the update frequency of the weights to daily, weekly, monthly, or other frequencies, so as to reflect changes in residents' consumption structure and pattern more accurately and timely.
\subsubsection{Index calculation}
The calculation of S-FCPI can be divided into four steps: calculating the unit price, calculating the relative number of price changes, compiling the basic categorical price index, and summarizing to the high-level price index. We use the monthly chain price index as the example to introduce the process of calculating S-FCPI.
\textbf{Step one, calculating the unit price.} Since the CAA database provides both price and sales volume information, S-FCPI uses a weighted average rather than the simple arithmetic average used by CPI to calculate the unit price, thus more accurately reflecting the differences between stores. Specifically, taking the sales-volume share of the good as the weight, we first calculate the weighted average price of the good in one store, then calculate the weighted average price across multiple stores and take it as the unit price of the good. Taking good \textit{k} as an example, the calculation formula of the unit price is as follows, where $U_k^t$ is the unit price, $sales_k^t$ is total sales, and $q_k^t$ is total sales volume.
\begin{equation}
U_k^t=\frac{sales_k^t}{q_k^t}\label{eq:U}
\end{equation}
\textbf{Step two, calculating the relative number of price changes.} Consistent with the compilation method of CPI, the relative number of price changes of each good in S-FCPI is calculated by comparing the unit prices of two periods. Equation (\ref{eq:R}) is the calculation formula, where $R_k^t$ is the relative number of price changes, and $U_k^t$ and $U_k^{t-1}$ are the unit prices in period \textit{t} and period \textit{t}-1, respectively.
\begin{equation}
R_k^t=\frac{U_k^t}{U_k^{t-1}}\label{eq:R}
\end{equation}
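As an illustration of Steps one and two, the sketch below computes the unit price and the price relative from hypothetical totals of revenue and sales volume; the example figures are invented for illustration and are not from the CAA database.

```python
# Sketch of Steps one and two, assuming hypothetical aggregated records
# of total sales (revenue) and total sales volume for one good.
def unit_price(total_sales, total_volume):
    # U_k^t = sales_k^t / q_k^t: dividing total revenue by total volume
    # is equivalent to a volume-weighted average of per-store prices.
    return total_sales / total_volume

def price_relative(u_t, u_prev):
    # R_k^t = U_k^t / U_k^{t-1}
    return u_t / u_prev

# Hypothetical example: one good in periods t-1 and t.
u_prev = unit_price(1200.0, 400.0)  # revenue 1200, 400 units -> 3.0
u_t = unit_price(1350.0, 420.0)
r_t = price_relative(u_t, u_prev)
```

Note that working with revenue and volume totals directly sidesteps the need to average prices store by store when only aggregates are available.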
\textbf{Step three, compiling the basic categorical price index.} Different countries have different theoretical bases for compiling their basic categorical CPI. One is the fixed basket index theory represented by China~\cite{XU2006cpi}, whose core idea is to select a series of goods and services closely related to residents' daily life and consumption to form an abstract goods basket; the change of the overall price level is reflected by the change in the cost of purchasing this basket across periods. The other is the cost-of-living index theory represented by the United States (see \cite{konus1939problem}; \cite{abraham2003toward}; \cite{diewert2001consumer}; \cite{triplett2001should}), whose core idea is to reflect the change of the overall price level by comparing the minimum expenditure required by consumers in different periods to reach a certain utility level. Since there are these two different theoretical bases, the index calculation method adopted when constructing CPI should, theoretically, also have a different emphasis depending on the basis chosen.
The Jevons price index is an index calculation method that does not consider consumer preference and is used for the basic categorical price index of China's CPI, so we take it as one of the calculation methods for the basic categorical price index of S-FCPI. Taking the basic category \textit{j} under sub-category \textit{i} of a city (county) as an example, the calculation formula of the Jevons price index is shown in Equation (\ref{eq:J}), where $M_{jk}^t$ is the number of goods in basic category \textit{j}.
\begin{equation}
J_j^t=\prod_{k=1}^{M_{jk}^t}\left(R_{jk}^t\right)^\frac{1}{M_{jk}^t}\label{eq:J}
\end{equation}
In relevant research, index calculation methods that do not consider consumer preference (unweighted index forms) are often used due to the limited availability of weights data, while the development of scanner data makes it possible to compile the basic categorical price index while considering consumer preference (weighted index forms); the CPI Manual (2020) also points out that when detailed price and sales data are available, the weighted index form is the better choice. Therefore, we also use the T\"ornqvist (see \cite{DIEWERT1976115}; \cite{1979The}) and GFT (see \cite{basmann1983budget}; \cite{basmann2013generalized}; \cite{swann1999economic}) price indices to compile the basic categorical price index of S-FCPI. Taking the basic category \textit{j} under sub-category \textit{i} of a city (county) as an example, the calculation formulas of the T\"ornqvist and GFT price indices are as follows, where $S_{jk}^t$ is the sales share of good \textit{k}.
\begin{equation}
T_j^t=\prod_{k=1}^{M_{jk}^t}\left(R_{jk}^t\right)^\frac{S_{jk}^{t-1}+S_{jk}^t}{2}\label{eq:T}
\end{equation}
\begin{equation}
G_j^t=\prod_{k=1}^{M_{jk}^t}\left(R_{jk}^t\right)^{S_{jk}^t}\label{eq:G}
\end{equation}
\begin{equation}
S_{jk}^t=\frac{{sales}_{jk}^t}{\sum_{k=1}^{M_{jk}^t}{sales}_{jk}^t}\label{eq:Share}
\end{equation}
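The three basic categorical index formulas above can be sketched as follows; the price relatives and sales shares passed in are hypothetical inputs, and the functions mirror Equations (\ref{eq:J})--(\ref{eq:Share}) directly.

```python
import math

def jevons(relatives):
    # J_j^t: unweighted geometric mean of the price relatives R_jk^t
    n = len(relatives)
    return math.prod(r ** (1.0 / n) for r in relatives)

def tornqvist(relatives, shares_prev, shares_t):
    # T_j^t: geometric mean weighted by the average of the sales shares
    # in periods t-1 and t, (S_jk^{t-1} + S_jk^t) / 2
    return math.prod(
        r ** ((s0 + s1) / 2.0)
        for r, s0, s1 in zip(relatives, shares_prev, shares_t)
    )

def gft(relatives, shares_t):
    # G_j^t: geometric mean weighted by current-period sales shares S_jk^t
    return math.prod(r ** s for r, s in zip(relatives, shares_t))

def sales_shares(sales):
    # S_jk^t: each good's share of total sales within the basic category
    total = sum(sales)
    return [s / total for s in sales]
```

With equal shares in both periods, the T\"ornqvist and GFT forms reduce to the Jevons form, which is a quick consistency check on an implementation.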
\textbf{Step four, summarizing to the high-level price index.} China's CPI uses the Laspeyres price index for the high-level summary, with weights based on surveyed household consumption expenditure data, but the low update frequency of these weights may lead to non-sampling errors. Therefore, S-FCPI takes the share of sales in the reporting period as the weight and uses the Paasche price index for the high-level summary, so that it can reflect changes in residents' consumption structure and prices more accurately and timely. The high-level summary process of S-FCPI is shown in Figure \ref{High-level summary process of S-FCPI.}.
\begin{figure}[!h]
\centering
\includegraphics[scale=.5]{figs/flow2.pdf}
\caption{High-level summary process of S-FCPI.}
\label{High-level summary process of S-FCPI.}
\end{figure}
The first level of summary takes place within the city (county) level: the basic category S-FCPI is summarized into the sub-category S-FCPI, and then into the overall S-FCPI at the city (county) level. The second level summarizes city (county) level S-FCPI to the provincial (district) level, covering the basic category, sub-category, and overall S-FCPI. The third level summarizes provincial (district) level S-FCPI to the national level, again covering the basic category, sub-category, and overall S-FCPI. In conclusion, whether daily, weekly, or monthly, and whether chain, year-on-year, or fixed-base, S-FCPI can be constructed according to the above process, and its reliability will be verified from multiple perspectives in the next chapter.
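As a sketch of this Paasche-type summary, note that a Paasche index written in terms of reporting-period value shares is a weighted harmonic mean of the lower-level indices. The function and data below are a hypothetical illustration under that standard formulation, not the exact implementation used for S-FCPI.

```python
def paasche_aggregate(sub_indices, sales_t):
    # Paasche summary as a harmonic mean of lower-level indices,
    # weighted by reporting-period sales shares:
    #   P = 1 / sum_i( S_i^t / I_i )
    total = sum(sales_t)
    return 1.0 / sum((s / total) / idx
                     for idx, s in zip(sub_indices, sales_t))

# Hypothetical example: two sub-category indices summarized to one level.
level_index = paasche_aggregate([1.02, 0.98], [300.0, 700.0])
```

The same function can be applied recursively at each level of the hierarchy: basic categories into sub-categories, sub-categories into the city (county) index, and so on upward.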
\section{Relationship between CPI and S-FCPI}
\label{sec:CPI and S-FCPI}
S-FCPI has the significant advantages of diversified index frequency, release without lag, and wide geographical dimension; it provides a new perspective for real-time monitoring of goods price changes, reflecting macroeconomic operating conditions, measuring inflation inertia, predicting inflation (CPI), and other related research. Based on this, this chapter uses the monthly S-FCPI data from February 2018 to May 2022 and other macro indicators such as CPI to analyze the reliability of S-FCPI from multiple perspectives.
\subsection{Features selection}
Table \ref{Feature description.} shows the relevant indicators used in the following analysis and their specific meanings. The CPI, FCPI and FRPI published by the National Bureau of Statistics of China are selected to explore their correlation with S-FCPI. The online food price index published by the iCPI project is selected to analyze the correlation between S-FCPI and iCPI, as well as their ability to capture changes in CPI. The broad money supply M2 published by the National Bureau of Statistics of China and Shibor published on the official website of the Shanghai Interbank Offered Rate are selected to measure the impact of monetary shocks, that is, the impact of demand shocks on food prices. The consumer confidence index is selected to measure the correlation between consumers' subjective feelings about the current economic situation and food price changes.
Among the above indicators, the data of CPI, FCPI, FRPI and M2 are obtained from the official website of the National Bureau of Statistics of China, the data of iCPI is obtained from the official website of the iCPI project of Tsinghua University, the data of Shibor is obtained from the official website of Shanghai Interbank Offered Rate, and the consumer confidence index is obtained from the China Economic Information NET.
\begin{table}[width=.9\linewidth,pos=h]
\caption{Feature description. Here, we leave out the category $i$ in the indexes, e.g., $T$ is short for $T_i$. $i$ represents the category of foods in our study. 0: food; 1: grain; 2: tuber; 3: bean; 4: edible oil; 5: vegetables and edible fungi; 6: meat of animals; 7: meat of poultry; 8: aquatic products; 9: eggs; 10: dairy; 11: dried and fresh fruits; 12: candy and pastries; 13: condiment; 14: other food categories.}
\label{Feature description.}
\begin{tabular}{@{}lll@{}}
\toprule
Category & Index & Description \\ \midrule
\multirow{3}{*}{S-FCPI} & $T$ & Monthly chain S-FCPI in T\"ornqvist form \\
& $G$ & Monthly chain S-FCPI in GFT form \\
& $J$ & Monthly chain S-FCPI in Jevons form \\ \midrule
\multirow{4}{*}{CPI} & CPI & China monthly chain CPI \\
& FCPI & China monthly chain FCPI \\
& FRPI & China monthly chain RPI of food \\
& iCPI & Monthly chain iCPI of food \\ \midrule
\multirow{3}{*}{Other indexes} & M2 & Broad money \\
& Shibor & Shanghai Interbank Offered Rate \\
& CC & Consumer confidence index \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Month-on-month volatility analysis}
The chain index can be used to reflect the change degree of indicators in the report period compared with the previous period, therefore, to evaluate the ability of S-FCPI to reflect the fluctuation of price level, we draw the monthly chain price indexes of S-FCPI, CPI, FCPI and iCPI as the time series chart shown in Figure \ref{Month-on-month CPI and S-FCPI.}. Among them, the time range of S-FCPI, CPI, and FCPI is 52 months from February 2018 to May 2022, and the time range of iCPI is 41 months from January 2019 to May 2022.
\begin{figure}[!h]
\centering
\includegraphics[scale=.45]{figs/SFCPI-month.pdf}
\caption{Month-on-month CPI and S-FCPI.}
\label{Month-on-month CPI and S-FCPI.}
\end{figure}
In addition, to quantitatively measure the proportion of months in which S-FCPI, CPI, FCPI and iCPI change in the same direction, we calculate the co-directional rate among the different indicators and report it in Table \ref{Co-directional rate.}.
Here, co-directional rate $=\frac{m}{M}\times100\%$,
where $m$ is the total number of months with the same change direction and $M$ is the total number of months.
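The co-directional rate defined above can be sketched as follows; the two input series are hypothetical monthly chain indexes, here scaled so that a value above 1 means the index rose month-on-month (an assumption about the scaling, since the definition only speaks of change directions).

```python
def co_directional_rate(x, y):
    # Share (in %) of months in which two chain indexes move in the
    # same direction, taking 1.0 as the "no change" baseline.
    same = sum(1 for a, b in zip(x, y) if (a - 1.0) * (b - 1.0) > 0)
    return 100.0 * same / len(x)

# Hypothetical four-month example: both rise, both fall, they diverge,
# both fall -> 3 of 4 months co-directional.
rate = co_directional_rate([1.02, 0.99, 1.01, 0.98],
                           [1.01, 0.97, 0.99, 0.97])
```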
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{Co-directional rate (\%) between S-FCPI and other macro indicators.}\label{Co-directional rate.}
\begin{tabular*}{0.5\textwidth}{@{}LLLL@{} }
\toprule
& CPI & FCPI & iCPI \\
\midrule
$T$ & 65.38 & 71.15 & 58.54\\
$G$ & 61.54 & 71.15 & 51.22\\
$J$ & 61.54 & 71.15 & 58.54\\
iCPI & 69.23 & 56.10 & - \\
\bottomrule
\end{tabular*}
\end{table}
Figure \ref{Month-on-month CPI and S-FCPI.} and Table \ref{Co-directional rate.} show that the variation directions and levels of price reflected by S-FCPI and CPI are relatively close. Between February 2018 and May 2022, $T$ and CPI changed in the same direction in 34 months, accounting for 65.38\%; $G$ and CPI changed in the same direction in 27 months, accounting for 51.92\%; $J$ and CPI changed in the same direction in 27 months, accounting for 51.92\%.
The variation directions of price reflected by S-FCPI and FCPI are relatively close, but the variation level of FCPI is larger. Between February 2018 and May 2022, $T$ and FCPI changed in the same direction in 37 months, accounting for 71.15\%; $G$ and FCPI in 32 months, accounting for 61.54\%; $J$ and FCPI in 32 months, accounting for 61.54\%. However, Figure \ref{Month-on-month CPI and S-FCPI.} also shows that FCPI has a larger range of variation; we think the main reason for this difference lies in the different collection methods of the price data. Specifically, the raw data of CPI are collected on only a few days of the month, so CPI reflects price changes on those days and is more volatile. S-FCPI instead uses the price data of all goods on sale in a month, which reduces fluctuations caused by randomness, so S-FCPI shows a smaller variation level.
The variation directions and levels of price reflected by S-FCPI and iCPI are relatively close. In addition to comparing S-FCPI with CPI and FCPI, we also compare it with the food iCPI to verify the correlation between online and offline prices. Between January 2019 and May 2022, $T$ and iCPI changed in the same direction in 24 months, accounting for 58.54\%; $G$ and iCPI in 21 months, accounting for 51.22\%; $J$ and iCPI in 24 months, accounting for 58.54\%, which indicates that online and offline goods prices are related.
\subsection{Month-on-month correlation analysis}
In relevant research, the chain index growth rate is usually used when analyzing the correlation between two chain indexes. Therefore, we first calculate the logarithmic growth rate of the chain index of each indicator, then calculate the correlation coefficients between the logarithmic growth rates to analyze the correlation between them.
The calculation formula of the logarithmic growth rate is shown in Equation (\ref{eq:growth rate}), where $x_{rate,t}$ is the logarithmic growth rate of the chain index of indicator \textit{x} from period \textit{t}-1 to period \textit{t}.
\begin{equation}
x_{rate,t}=\ln{x_t}-\ln{x_{t-1}}.\label{eq:growth rate}
\end{equation}
In the following, we omit the subscript $rate$ when discussing the growth rates.
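The growth-rate transform and the Pearson correlation used in Table \ref{Correlation} can be sketched as follows; the series passed in are hypothetical, and only the plain (unadjusted) correlation coefficient is computed, without the significance tests reported in the table.

```python
import math

def log_growth(series):
    # x_rate,t = ln(x_t) - ln(x_{t-1}), one value per consecutive pair
    return [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]

def pearson(x, y):
    # Plain Pearson correlation coefficient of two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```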
\begin{table}[width=.9\linewidth,cols=5,pos=h]
\caption{Correlation between S-FCPI and other macro indicators. Note: $^{***}$, $^{**}$ and $^{*}$ are significant at the level of 1\%, 5\% and 10\% respectively.}\label{Correlation}
\begin{tabular*}{0.8\textwidth}{@{}LLLLLLLL@{}}
\toprule
& CPI & FCPI & FRPI & iCPI & M2 & Shibor & CC \\
\midrule
CPI & 1 & $0.966^{***}$ & $0.959^{***}$ & $0.509^{**}$ & $-0.449^{**}$ & $0.092$ & $-0.100$\\
FCPI & $0.966^{***}$ & $1$ & $0.992^{***}$ & $0.494^{**}$ & $-0.465^{**}$ & $0.019$ & $-0.218$\\
$T$ & $0.742^{***}$ & $0.775^{***}$ & $0.788^{***}$ & $0.415^{**}$ & $-0.484^{**}$ & $-0.128$ & $-0.263$\\
$G$ & $0.718^{***}$ & $0.761^{***}$ & $0.773^{***}$ & $0.470^{**}$ & $-0.587^{***}$ & $-0.193$ & $-0.297$\\
$J$ & $0.770^{***}$ & $0.786^{***}$ & $0.801^{***}$ & $0.375^{*}$ & $-0.381^{*}$ & $-0.146$ & $-0.189$\\
iCPI & $0.509^{**}$ & $0.494^{**}$ & $0.535^{***}$ & $1$ & $-0.703^{***}$ & $0.042$ & $-0.181$ \\
\bottomrule
\end{tabular*}
\end{table}
Table \ref{Correlation} shows a significant positive relationship between the growth rate of S-FCPI and the growth rates of CPI, FCPI, and FRPI in the same period at the 1\% significance level. Specifically, $J$, constructed with the Jevons price index (the same index formula used by the National Bureau of Statistics of China to compile CPI), has the strongest correlation with CPI, FCPI and FRPI, with Pearson correlation coefficients of 0.770, 0.786, and 0.801, respectively. $T$, constructed with the T\"ornqvist price index, is second, with Pearson correlation coefficients of 0.742, 0.775, and 0.788, respectively. $G$, constructed with the GFT price index, has the lowest correlation with CPI, FCPI, and FRPI, with Pearson correlation coefficients of 0.718, 0.761, and 0.773, respectively.
Moreover, the growth rates of S-FCPI, CPI and FCPI are consistent with the growth rates of other macro indicators in the same period in terms of correlation and direction.
Specifically, $T$, $G$, $J$, CPI and FCPI have significant positive correlations with iCPI at the 5\% significance level ($J$ at the 10\% level), with Pearson correlation coefficients ranging from 0.375 to 0.509, which indicates a positive correlation between the variation of online and offline prices. They also have significant negative correlations with M2 at the 5\% level ($J$ at the 10\% level, $G$ at the 1\% level), with Pearson correlation coefficients between -0.587 and -0.381; this indicates that when the price level rises, that is, when current inflationary pressure increases, the monetary authority reduces the contemporaneous money supply. In addition, none of them has a significant correlation with Shibor or CC.
\section{Predicting CPI by S-FCPI}
\label{sec:Predicting CPI by S-FCPI}
Considering that CPI is an important indicator for measuring the degree of inflation in China, the prediction of CPI growth rate undoubtedly has certain practical significance. In this chapter, the quantitative and qualitative predictions of CPI will be realized from two perspectives: first, predicting growth rate of CPI; second, predicting the directions and levels of CPI growth rate.
\subsection{Predicting the growth rate of CPI}
When predicting the growth rate of CPI, six machine learning models are used: Linear Regression, Random Forest, K-Neighbors, Adaboost, GBDT, and Bagging. The target variable of the model is the current growth rate of CPI, and the features are the current growth rate of S-FCPI in T\"ornqvist form and its first-order and second-order lag terms. The prediction period is set to 12 months, so the last 12 months of the data (June 2021 to May 2022) are selected as the test data, and the remaining 37 months (May 2018 to May 2021) are used as the training data.
In addition to predicting the CPI growth rate, we also predict the FCPI growth rate; except for the target variable, the setup is the same as for the CPI growth rate. Table \ref{tbl:predicting CPI growth rate} reports the evaluation indicators of the six regression models, and Figure \ref{fig:Predict CPI growth rates.} and Figure \ref{fig:Predict FCPI growth rates.} report the prediction results for the CPI and FCPI growth rates respectively, where the left side of the dotted line is the in-sample prediction and the right side is the out-of-sample prediction.
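The construction of the lagged feature matrix described above can be sketched as follows; the input series are hypothetical, and the resulting `X`, `y` would then be split into training and test windows and fed to the regression models (e.g. scikit-learn's `GradientBoostingRegressor` or `BaggingRegressor`), which are not reimplemented here.

```python
# Build [current, lag-1, lag-2] features from the S-FCPI growth rate,
# aligned with the CPI growth rate as the target (hypothetical inputs).
def make_lag_features(sfcpi_rate, cpi_rate, n_lags=2):
    X, y = [], []
    # The first n_lags months are dropped because their lags are missing.
    for t in range(n_lags, len(sfcpi_rate)):
        X.append([sfcpi_rate[t - lag] for lag in range(n_lags + 1)])
        y.append(cpi_rate[t])
    return X, y
```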
\begin{table}[H]
\caption{Evaluation indicators of predicting CPI growth rate.}\label{tbl:predicting CPI growth rate}
\begin{tabular}{lllllll}
\toprule
& \multicolumn{3}{c}{CPI} & \multicolumn{3}{c}{FCPI} \\
& MAE & RMSE & $R^2$ & MAE & RMSE & $R^2$ \\ \midrule
Linear Regression & 0.0034 & 0.0039 & 0.398 & 0.0097 & 0.0134 & 0.489 \\
Random Forest & 0.0031 & 0.0039 & 0.415 & 0.0095 & 0.0129 & 0.530 \\
K-Neighbors & 0.0040 & 0.0050 & 0.013 & 0.0156 & 0.0186 & 0.015 \\
Adaboost & 0.0036 & 0.0042 & 0.301 & 0.0117 & 0.0138 & 0.460 \\
GBDT & 0.0026 & 0.0032 & 0.585 & 0.0112 & 0.0144 & 0.413 \\
Bagging & 0.0030 & 0.0035 & 0.514 & 0.0103 & 0.0126 & 0.550 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[scale=.45]{figs/CPI-pred.pdf}
\caption{Prediction results of CPI growth rates by S-FCPI.}
\label{fig:Predict CPI growth rates.}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=.45]{figs/FCPI-pred.pdf}
\caption{Prediction results of FCPI growth rates by S-FCPI.}
\label{fig:Predict FCPI growth rates.}
\end{figure}
Table \ref{tbl:predicting CPI growth rate} and Figure \ref{fig:Predict CPI growth rates.} show that the GBDT model has the best prediction effect on the CPI growth rate: its MAE (0.0026) and RMSE (0.0032) are the smallest among the six models, while its $R^2$ (0.585) is the largest. The Bagging model is second, with MAE, RMSE and $R^2$ of 0.0030, 0.0035, and 0.514, while the prediction effects of the other models are relatively poor and will not be described in detail.
Table \ref{tbl:predicting CPI growth rate} and Figure \ref{fig:Predict FCPI growth rates.} show that the Bagging model has the best prediction effect on the FCPI growth rate: its RMSE (0.0126) is the smallest among the six models and its $R^2$ (0.550) is the largest. The Random Forest model is second, with MAE, RMSE and $R^2$ of 0.0095, 0.0129, and 0.530, respectively; the Linear Regression and Adaboost models also achieve a good prediction effect on the FCPI growth rate.
\subsection{Predicting the directions and levels of CPI growth rate}
For the study of macroeconomic operation, while the accuracy of quantitative prediction results matters, relevant research often pays more attention to capturing the characteristics of the changing trend of macroeconomic operation, that is, the qualitative prediction of the directions and levels of the CPI growth rate. Therefore, we use the S-FCPI growth rate to qualitatively predict the CPI growth rate from two perspectives: first, predicting the directions of the CPI growth rate; second, predicting the levels of the CPI growth rate.
Predicting the directions of the CPI growth rate is essentially a binary classification task, so we use Naive Bayes, Logistic Regression, Random Forest, K-Neighbors, Adaboost and GBDT for binary classification prediction. We take whether the CPI growth rate is greater than 0 as the target variable: a month with a CPI growth rate not less than 0 is labeled 1 (a positive sample), and a month with a CPI growth rate less than 0 is labeled -1 (a negative sample). The features are the current growth rate of S-FCPI in T\"ornqvist form and its first-order lag term. The prediction period is also set to 12 months, and the division between training and test data is the same as in the prediction of the CPI growth rate. The training data contain 38 months from April 2018 to May 2021, of which 23 months are labeled 1 and 15 months are labeled -1. The test data contain 12 months from June 2021 to May 2022, of which 5 months are labeled 1 and 7 months are labeled -1.
Binary classification often focuses on the prediction performance for one type of sample, but when predicting the directions of the CPI growth rate, the performance on both types of samples is equally important. Therefore, we use the F1 score to evaluate the model's performance in predicting positive samples, and Accuracy to evaluate its overall predicting performance.
In addition to predicting the directions of the CPI growth rate, we also predict the directions of the FCPI growth rate; except for the target variable, the setup is the same. Table \ref{tbl:predicting directions of CPI growth rate} reports the evaluation indicators of the prediction effects.
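The direction labeling and the Accuracy metric described above can be sketched as follows; the label lists in the example are hypothetical, and the classifiers themselves (Naive Bayes, Adaboost, etc.) are not reimplemented here.

```python
def direction_label(rate):
    # 1 when the growth rate is not less than 0 (positive sample),
    # -1 otherwise (negative sample), as defined in the text.
    return 1 if rate >= 0 else -1

def accuracy(y_true, y_pred):
    # Share of months whose predicted direction matches the true one
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```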
\begin{table}[!h]
\caption{Evaluation indicators of predicting the directions of CPI growth rate.}\label{tbl:predicting directions of CPI growth rate}
\begin{tabular}{lllll}
\toprule
& \multicolumn{2}{c}{CPI} & \multicolumn{2}{c}{FCPI} \\
& F1-score & Accuracy & F1-score & Accuracy \\ \midrule
Naive Bayes & 0.667 & 0.583 & 0.824 & 0.750 \\
Logistic Regression & 0.588 & 0.417 & 0.737 & 0.583 \\
Random Forest & 0.615 & 0.583 & 0.800 & 0.750 \\
K-Neighbors & 0.769 & 0.750 & 0.933 & 0.917 \\
Adaboost & 0.800 & 0.833 & 0.800 & 0.750 \\
GBDT & 0.727 & 0.750 & 0.800 & 0.750 \\ \bottomrule
\end{tabular}
\end{table}
Table \ref{tbl:predicting directions of CPI growth rate} shows that the Adaboost model has the best prediction effect on the directions of the CPI growth rate, with an Accuracy of 83.3\%, correctly predicting the direction in 10 of 12 months. The K-Neighbors and GBDT models are second, with an Accuracy of 75\%. For the directions of the FCPI growth rate, the K-Neighbors model has the best prediction effect, with an Accuracy of 91.7\%, correctly predicting the direction in 11 of 12 months. The Naive Bayes, Random Forest, and GBDT models are second, with an Accuracy of 75\%. In conclusion, the prediction effects of the K-Neighbors, Adaboost and GBDT models are generally excellent, and the directions of the CPI and FCPI growth rates can be accurately predicted using the S-FCPI growth rate and its first-order lag term.
Since the CPI growth rate changes to a large extent in some periods, we further divide it into three categories and take them as the target variable of the prediction model to predict the levels of the CPI growth rate. Specifically, a month with a CPI growth rate greater than 0.003 ($\mu+\sigma/2$) is labeled 1, meaning the CPI growth rate has risen sharply; a month with a CPI growth rate smaller than -0.003 ($\mu-\sigma/2$) is labeled -1, meaning it has fallen sharply; and the other months are labeled 0, meaning the CPI growth rate fluctuates within a reasonable range. The features, the division of training and test data, the prediction models and the evaluation indexes are all the same as in the prediction of the directions of the CPI growth rate and will not be described in detail. The training data contain 38 months from April 2018 to May 2021, of which 11 months are labeled 1, 17 months are labeled 0, and 10 months are labeled -1. The test data contain 12 months from June 2021 to May 2022, of which 4 months are labeled 1, 5 months are labeled 0, and 3 months are labeled -1.
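The three-way labeling with the $\mu\pm\sigma/2$ thresholds described above can be sketched as follows; the growth-rate series in the example is hypothetical, and the sketch recomputes $\mu$ and $\sigma$ from the series passed in rather than using the fixed $\pm0.003$ values reported in the text.

```python
import statistics

def level_labels(rates):
    # Label each month 1 / 0 / -1 using mu +/- sigma/2 cut-offs,
    # where mu and sigma are the mean and (sample) standard deviation
    # of the growth-rate series itself.
    mu = statistics.mean(rates)
    sigma = statistics.stdev(rates)
    hi, lo = mu + sigma / 2, mu - sigma / 2
    return [1 if r > hi else (-1 if r < lo else 0) for r in rates]
```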
In addition to predicting the levels of CPI growth rate, we also predict the levels of FCPI growth rate. Table \ref{tbl:predicting levels of CPI growth rate} reports the evaluation indicators of the prediction effects.
\begin{table}[!h]
\caption{Evaluation indicators of predicting the levels of CPI growth rate.}\label{tbl:predicting levels of CPI growth rate}
\begin{tabular}{lllll}
\toprule
& \multicolumn{2}{c}{CPI} & \multicolumn{2}{c}{FCPI} \\
& F1-score & Accuracy & F1-score & Accuracy \\ \midrule
Naive Bayes & 0.522 & 0.583 & 0.311 & 0.417 \\
Logistic Regression & 0.415 & 0.500 & 0.398 & 0.417 \\
Random Forest & 0.815 & 0.833 & 0.667 & 0.667 \\
K-Neighbors & 0.655 & 0.667 & 0.302 & 0.417 \\
Adaboost & 0.815 & 0.833 & 0.311 & 0.417 \\
GBDT & 0.739 & 0.750 & 0.667 & 0.667 \\ \bottomrule
\end{tabular}
\end{table}
Table \ref{tbl:predicting levels of CPI growth rate} shows that the Random Forest and Adaboost models have the best prediction effect on the levels of the CPI growth rate, with an Accuracy of 83.3\%, correctly predicting the level in 10 of 12 months. The GBDT model is second, with an Accuracy of 75\%. For the levels of the FCPI growth rate, the Random Forest and GBDT models have the best prediction effect, with an Accuracy of 66.7\%, correctly predicting the level in 8 of 12 months, while the other models perform poorly and will not be described in detail.
In conclusion, this chapter uses a variety of machine learning models to realize the quantitative prediction of the CPI and FCPI growth rates, as well as the qualitative prediction of their directions and levels, and obtains relatively accurate predictions. Therefore, S-FCPI, which is released without lag, can provide a new data source for research on macro- and micro-economic problems such as inflation (CPI) prediction.
\section{Conclusion}
\label{sec:Conclusion}
In the era of big data, building real-time, high-frequency macroeconomic indicators is of great significance, and the conditions for doing so are already in place. Such indicators can not only play a unique role in early warning and monitoring of macroeconomic operation, but also provide a new perspective for relevant research. The development of scanner data provides a new data source for the compilation of CPI; applying it can not only improve the timeliness and frequency of CPI release, but also make up for the lack of CPI weight data below the basic category. However, there is no such research result in China at present. Based on this, we use the supermarket retail scanner data provided by CAA to compile the Scanner-data Food Consumer Price Index (S-FCPI), and the main conclusions are as follows:
First, based on the compilation method of China's CPI and big data methods, we compile S-FCPI from scanner data. S-FCPI achieves timely release at daily, weekly, and monthly frequencies, forms an index system at the city (county), provincial (district) and national levels in the geographical dimension, and covers 75 basic categories, 14 sub-categories and the general food category in the goods category dimension. S-FCPI demonstrates the feasibility of using scanner data to compile price indexes in China and fills the gap that China currently has no direct application of scanner data to the compilation of CPI. Second, we use macro indicators to test the reliability of S-FCPI from two perspectives: month-on-month volatility analysis and month-on-month correlation analysis. The volatility analysis shows that the price changes reflected by S-FCPI and macro indicators such as CPI are consistent; the correlation analysis shows a highly positive correlation between the growth rates of S-FCPI and of macro indicators such as CPI. Third, we use multiple machine learning models to achieve quantitative and qualitative prediction of the CPI growth rate, which demonstrates the application value of S-FCPI in inflation prediction. In quantitative prediction, we use the growth rate of S-FCPI and its first-order and second-order lag terms to predict the growth rates of CPI and FCPI; the GBDT regressor has the best prediction effect on the CPI growth rate with an $R^2$ of 0.585, and the Bagging regressor has the best prediction effect on the FCPI growth rate with an $R^2$ of 0.550.
In qualitative prediction, we use the growth rate of the S-FCPI and its first-order lag to predict the direction and level of the CPI and FCPI growth rates. The AdaBoost classifier performs best on the direction of the CPI growth rate with an F1-score of 0.800 and an accuracy of 0.833, and the K-Neighbors classifier performs best on the direction of the FCPI growth rate with an F1-score of 0.933 and an accuracy of 0.917. Both the AdaBoost and Random Forest classifiers perform best on the level of the CPI growth rate with an F1-score of 0.815 and an accuracy of 0.833, and both the GBDT and Random Forest classifiers perform best on the level of the FCPI growth rate with an F1-score of 0.667 and an accuracy of 0.667.
Relying on its unique advantages of low cost, high frequency, and large volume, the S-FCPI can not only reflect changes in goods prices at a higher time frequency and across a wider geographical dimension than the CPI, but can also provide a new perspective for monitoring macroeconomic operation, forecasting inflation (CPI), and studying other macro- and microeconomic issues; it is thus a beneficial supplement to China's CPI. However, the S-FCPI still has some limitations, such as using only the food data of the CAA database and not using online price data. Future research on the S-FCPI will proceed in several directions: in index compilation, further expanding the goods categories and incorporating online price data; in index theory, studying in depth the chain drift problem and superlative indices in the context of scanner data; and in economic applications, studying the measurement of inflation inertia, the stickiness of goods prices, and the regional heterogeneity of goods price fluctuations.
\section{Introduction}
Tree decompositions and path decompositions are fundamental objects in graph theory.
For algorithmic purposes, it would be highly useful to be able to compute such decompositions of minimum width, that is, achieving the treewidth and the pathwidth of the graph, respectively.
However, both these problems are NP-hard, and remain so even when restricted to very specific graph classes \cite{ACP87,Bodlaender92,Gusted93,HM94,KBMK93,KKM95,MS88}.
The best known polynomial-time approximation algorithm for treewidth, due to Feige, Hajiaghayi, and Lee~\cite{FHL08}, computes a tree decomposition of an input graph $G$ whose width is within a factor of $O(\sqrt{\log\tw(G)})$ of the treewidth $\tw(G)$ of $G$.
A modification of that algorithm produces a path decomposition whose width is within a factor of $O(\log n\cdot\sqrt{\log\tw(G)})$ of the pathwidth $\pw(G)$ of $G$, where $n$ is the number of vertices of $G$, using the fact that $\tw(G)\leq\pw(G)\leq(\tw(G)+1)\log_2n$~\cite{FHL08}.
Combining it with existing FPT algorithms for treewidth and pathwidth~\cite{Bodlaender96,BK93} leads to a polynomial-time algorithm approximating pathwidth to within a factor of $O(\pw(G)\tw(G)\sqrt{\log\tw(G)})$.%
\footnote{Here is a brief sketch.
Instances $G$ such that $\pw(G)\tw(G)=O(\log n)$ can be solved optimally in polynomial time thanks to the following two algorithms.
(1) Bodlaender~\cite{Bodlaender96} described an algorithm that, given $G$ and a number $t$, constructs a tree decomposition of $G$ of width at most $t$ or finds that $\tw(G)>t$ in $2^{O(t^2)}n$ time.
(2) Bodlaender and Kloks~\cite{BK93} described an algorithm that, given $G$, a tree decomposition of $G$ of width $t$, and a number $p$, constructs a path decomposition of $G$ of width at most $p$ or finds that $\pw(G)>p$ in $2^{O(pt)}n$ time.
When $\pw(G)\tw(G)=\Omega(\log n)$, the aforementioned $O(\log n\cdot\sqrt{\log\tw(G)})$-approximation algorithm of Feige et~al.\ achieves the claimed bound.}
This is the best known approximation ratio for pathwidth that we are aware of.
A simple polynomial-time $f(\pw(G))$-approximation algorithm (with $f$ exponential) follows from the result of~\cite{CDF96}.
We describe a polynomial-time algorithm that approximates pathwidth to within a factor of $O(\tw(G)\sqrt{\log\tw(G)})$, thus effectively dropping the $\pw(G)$ factor in the above.
To our knowledge, this is the first algorithm to achieve an approximation ratio that depends only on treewidth.
Our approach builds on the following key insight: every graph with large pathwidth has large treewidth or contains a subdivision of a large complete binary tree.
\begin{theorem}
\label{thm1}
Every graph with treewidth\/ $t-1$ has pathwidth at most\/ $th+1$ or contains a subdivision of a complete binary tree of height\/ $h+1$.
\end{theorem}
The bound $th+1$ is best possible up to a multiplicative constant (see Section~\ref{sec:tight}).
Our original motivation for Theorem~\ref{thm1} was the following result of Kawarabayashi and Rossman~\cite{KR22} about treedepth, which is an upper bound on pathwidth: every graph with treedepth $\Omega(k^5\log^2k)$ has treewidth at least $k$, or contains a subdivision of a complete binary tree of height $k$, or contains a path of order $2^k$.
The bound $\Omega(k^5\log^2k)$ was recently lowered to $\Omega(k^3)$ by Czerwiński, Nadara, and Pilipczuk~\cite{CNP21}, who also devised an $O(\tw(G)\log^{3/2}\tw(G))$-approximation algorithm for treedepth.
Kawarabayashi and Rossman~\cite{KR22} conjectured that the third outcome of their theorem, the path of order $2^k$, could be avoided if one considered pathwidth instead of treedepth: they conjectured the existence of a universal constant $c$ such that every graph with pathwidth $\Omega(k^c)$ has treewidth at least $k$ or contains a subdivision of a complete binary tree of height $k$.
Theorem~\ref{thm1} implies their conjecture with $c=2$, which is best possible (see Section~\ref{sec:tight}).
We remark that Wood~\cite{Wood13} also conjectured a statement of this type, with a bound of the form $f(t)\cdot h$ on the pathwidth for some function $f$ (see also \cite[Lemma~6]{MW15} and \cite[Conjecture~6.4.1]{Hickingbotham19}).
Both Theorem~\ref{thm1} and the treedepth results~\cite{KR22,CNP21} are a continuation of a line of research on excluded minor characterizations of graphs with small values of their corresponding width parameters (treewidth/pathwidth/treedepth), which was started by the seminal Grid Minor Theorem~\cite{RS86} and its improved polynomial versions~\cite{CC16,CT21}.
In the last section, we propose yet another problem in a similar vein (see Conjecture~\ref{conjecture}).
Since the complete binary tree of height $h$ has pathwidth $\lceil h/2\rceil$~\cite{Scheffler89}, any subdivision of it (as a subgraph) can be used to certify that the pathwidth of a given graph is large.
The following key concept provides a stronger certificate of large pathwidth, more suitable for our purposes.
Let $(\mathcal{T}_h)_{h=0}^\infty$ be a sequence of classes of graphs defined inductively as follows: $\mathcal{T}_0$ is the class of all connected graphs, and $\mathcal{T}_{h+1}$ is the class of connected graphs $G$ that contain three pairwise disjoint sets of vertices $V_1$, $V_2$, and $V_3$ such that $G[V_1],G[V_2],G[V_3]\in\mathcal{T}_h$ and any two of $V_1$, $V_2$, and $V_3$ can be connected in $G$ by a path avoiding the third one.
Every graph in $\mathcal{T}_h$ has the following properties:
\begin{itemize}
\item it has pathwidth at least $h$ (see Lemma~\ref{lem:Th-pw}), and
\item it contains a subdivision of a complete binary tree of height $h$ (see Lemma~\ref{lem:Th-subdivision}).
\end{itemize}
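The recursive definition of $\mathcal{T}_h$ can be checked by brute force on tiny graphs. The following Python sketch is ours (not part of the paper) and is purely illustrative; it tests membership in $\mathcal{T}_1$ by enumerating all assignments of vertices to $V_1$, $V_2$, $V_3$ (or to none of them) and verifying the connectivity conditions.

```python
from itertools import product

def is_connected(adj, vs):
    """Is the subgraph induced on the vertex set vs connected (and non-empty)?"""
    vs = set(vs)
    if not vs:
        return False
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u in vs and u not in seen:
                seen.add(u)
                stack.append(u)
    return seen == vs

def reachable(adj, banned, start_set, target_set):
    """Is there a path from start_set to target_set avoiding the banned set?"""
    seen = set(start_set) - set(banned)
    stack = list(seen)
    while stack:
        v = stack.pop()
        if v in target_set:
            return True
        for u in adj[v]:
            if u not in banned and u not in seen:
                seen.add(u)
                stack.append(u)
    return False

def in_T1(adj):
    """Brute-force membership test for T_1; feasible only for tiny graphs."""
    verts = list(adj)
    if not is_connected(adj, verts):
        return False
    for labels in product(range(4), repeat=len(verts)):  # label 0 = in no V_i
        parts = [{v for v, l in zip(verts, labels) if l == i} for i in (1, 2, 3)]
        if any(not is_connected(adj, p) for p in parts):
            continue
        if all(reachable(adj, parts[k], parts[i], parts[j])
               for i, j, k in ((0, 1, 2), (0, 2, 1), (1, 2, 0))):
            return True
    return False

# A star K_{1,3} (a vertex of degree 3) is in T_1; a path is not.
assert in_T1({0: [1, 2, 3], 1: [0], 2: [0], 3: [0]})
assert not in_T1({0: [1], 1: [0, 2], 2: [1]})
```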
Theorem~\ref{thm1} has a short and simple proof (see Section~\ref{sec:proof1}).
It proceeds by showing that every connected graph with treewidth $t-1$ has pathwidth at most $th+1$ or belongs to $\mathcal{T}_{h+1}$.
The stronger assertion allows us to apply induction on $h$.
Unfortunately, this proof is not algorithmic.
To obtain the aforementioned approximation algorithm, we prove the following algorithmic version of Theorem~\ref{thm1}.
Its proof is significantly more involved (see Section~\ref{sec:proof2}).
\begin{theorem}
\label{thm2}
For every connected graph\/ $G$ with treewidth at most\/ $t-1$, there is an integer\/ $h\geq 0$ such that\/ $G\in\mathcal{T}_h$ and\/ $G$ has pathwidth at most\/ $th+1$.
Moreover, there is a polynomial-time algorithm to compute such an integer\/ $h$, a path decomposition of\/ $G$ of width at most\/ $th+1$, and a subdivision of a complete binary tree of height\/ $h$ in\/ $G$ given a tree decomposition of\/ $G$ of width at most\/ $t-1$.
\end{theorem}
Since every graph in $\mathcal{T}_h$ has pathwidth at least $h$, combining Theorem~\ref{thm2} (applied to every connected component of the input graph) with the aforementioned approximation algorithm for treewidth of Feige et al.~\cite{FHL08}, we obtain the following approximation algorithm for pathwidth.
\begin{corollary}
\label{cor:approx}
There is a polynomial-time algorithm which, given a graph\/ $G$ of treewidth\/ $t$ and pathwidth\/ $p$, computes a path decomposition of\/ $G$ of width\/ $O(t\sqrt{\log t}\cdot p)$.
Moreover, if a tree decomposition of\/ $G$ of width\/ $t'$ is also given in the input, the resulting path decomposition has width at most\/ $(t'+1)p+1$.
\end{corollary}
We remark that if we consider graphs $G$ coming from a fixed class of graphs with bounded treewidth, then we can first use an algorithm of Bodlaender~\cite{Bodlaender96} to compute an optimal tree decomposition of $G$ in linear time, and then use the above algorithm to approximate pathwidth to within a ratio of roughly $\tw(G)+1$.
We note the following two precursors of this result in the literature (with slightly better approximation ratios): Bodlaender and Fomin~\cite{BF02} gave a $2$-approximation algorithm for computing the pathwidth of outerplanar graphs (a subclass of graphs of treewidth at most $2$), and Fomin and Thilikos~\cite{FT06} gave a $3$-approximation algorithm for computing the pathwidth on Halin graphs (a subclass of graphs of treewidth at most $3$).
We conclude this introduction with a remark about parameterized algorithms, even though our focus in this paper is approximation algorithms with running time polynomial in the size of the input graph.
Bodlaender~\cite{Bodlaender96} (see also~\cite{Bodlaender12}) designed a linear-time FPT algorithm for computing pathwidth when parameterized by the pathwidth.
That is, for an $n$-vertex input graph $G$, his algorithm computes the pathwidth $\pw(G)$ and an optimal path decomposition of $G$ in $f(\pw(G))\cdot n$ time for some computable function $f$.
Bodlaender and Kloks~\cite{BK93} considered the problem of computing the pathwidth when the input graph has small treewidth.
They devised an XP algorithm for computing pathwidth when parameterized by the treewidth: given an $n$-vertex graph $G$, the algorithm computes $\pw(G)$ and an optimal path decomposition of $G$ in $n^{f(\tw(G))}$ time for some computable function $f$.
It is an old open problem whether pathwidth is fixed-parameter tractable when parameterized by the treewidth, that is, whether there exists an algorithm to compute the pathwidth of an $n$-vertex input graph $G$ in $f(\tw(G))\cdot n^{O(1)}$ time for some computable function $f$.
This question was first raised by Dean~\cite{Dean93}.
Fomin and Thilikos~\cite{FT06} pointed out that even obtaining an approximation of pathwidth when parameterized by treewidth is open.
Our approximation algorithm is a solution to the latter problem (in a strong sense---with polynomial dependence of the running time on the parameter) and can be seen as a step in the direction of Dean's question.
\section{Preliminaries and Tools}
\subsection{Basic Definitions}
Graphs considered in this paper are finite, simple, and undirected.
We use standard graph-theoretic terminology and notation.
We allow a graph to have no vertices; by convention, such a graph \emph{is not connected} and has no connected components.
The vertex set of a graph $G$ is denoted by $V(G)$.
A vertex $v$ of a graph $G$ is considered a neighbor of a set $X\subseteq V(G)$ if $v\notin X$ and $v$ is connected by an edge to some vertex in $X$.
The neighborhood (thus defined) of a set $X$ in $G$ is denoted by $N_G(X)$.
A \emph{tree decomposition} of a graph $G$ is a pair $(T,\{B_x\}_{x\in V(T)})$ where $T$ is a tree and $\{B_x\}_{x\in V(T)}$ is a family of subsets of $V(G)$ called \emph{bags}, satisfying the following two conditions:
\begin{itemize}
\item for each vertex $v$ of $G$, the set of nodes $\{x\in V(T)\colon v\in B_x\}$ induces a non-empty subtree of $T$;
\item for each edge $uv$ of $G$, there is a node $x$ in $T$ such that both $u$ and $v$ belong to $B_x$.
\end{itemize}
The \emph{width} of a tree decomposition $(T,\{B_x\}_{x\in V(T)})$ is $\max_{x\in V(T)}\size{B_x}-1$.
The \emph{treewidth} of a graph $G$ is the minimum width of a tree decomposition of $G$.
A \emph{path decomposition} and \emph{pathwidth} are defined analogously with the extra requirement that the tree $T$ is a path.
The treewidth and the pathwidth of a graph $G$ are denoted by $\tw(G)$ and $\pw(G)$, respectively.
We refer the reader to~\cite{Diestel10} for background on tree decompositions.
A \emph{complete binary tree of height} $h$ is a rooted tree in which every non-leaf node has two children and every path from the root to a leaf has $h$ edges.
Such a tree has $2^{h+1}-1$ nodes.
A \emph{complete ternary tree of height} $h$ is defined analogously but with the requirement that every non-leaf node has three children.
A \emph{subdivision} of a tree $T$ is a tree obtained from $T$ by replacing each edge $uv$ with some path connecting $u$ and $v$ whose internal nodes are new nodes private to that path.
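As a small illustration (our code, with our own naming), a complete binary tree of height $h$ in heap order can be built as an adjacency list, and the node count $2^{h+1}-1$ stated above can be checked directly.

```python
def complete_binary_tree(h):
    """Adjacency list of a complete binary tree of height h.

    Nodes are numbered 1..2**(h+1)-1 in heap order: node v has
    children 2*v and 2*v+1 whenever those numbers are in range.
    """
    n = 2 ** (h + 1) - 1
    adj = {v: [] for v in range(1, n + 1)}
    for v in range(1, n + 1):
        for c in (2 * v, 2 * v + 1):
            if c <= n:
                adj[v].append(c)
                adj[c].append(v)
    return adj

tree = complete_binary_tree(3)
assert len(tree) == 2 ** 4 - 1                      # 15 nodes in total
leaves = [v for v, nb in tree.items() if len(nb) == 1]
assert len(leaves) == 2 ** 3                        # 8 leaves at depth 3
```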
\subsection{Witnesses for Large Pathwidth}
Recall that $(\mathcal{T}_h)_{h=0}^\infty$ is the sequence of classes of graphs defined inductively as follows: $\mathcal{T}_0$ is the class of all connected graphs, and $\mathcal{T}_{h+1}$ is the class of connected graphs $G$ that contain three pairwise disjoint sets of vertices $V_1$, $V_2$, and $V_3$ such that $G[V_1],G[V_2],G[V_3]\in\mathcal{T}_h$ and any two of $V_1$, $V_2$, and $V_3$ can be connected in $G$ by a path avoiding the third one.
A \emph{$\mathcal{T}_h$-witness} for a graph $G\in\mathcal{T}_h$ is a complete ternary tree of height $h$ of subsets of $V(G)$ defined inductively following the definition of $\mathcal{T}_h$.
The $\mathcal{T}_0$-witness for a connected graph $G$ is the tree with the single node $V(G)$, denoted by $\witness{V(G)}$.
A $\mathcal{T}_{h+1}$-witness for a graph $G\in\mathcal{T}_{h+1}$ is a tree with root $V(G)$ and with three subtrees $W_1,W_2,W_3$ of the root that are $\mathcal{T}_h$-witnesses of $G[V_1],G[V_2],G[V_3]$ for some sets $V_1,V_2,V_3$ as in the definition of $\mathcal{T}_{h+1}$; it is denoted by $\witness{V(G);W_1,W_2,W_3}$.
It clearly follows from these definitions that every graph in $\mathcal{T}_h$ has at least $3^h$ vertices and every $\mathcal{T}_h$-witness of an $n$-vertex graph has $O(n)$ nodes.
The next two lemmas explain the connection of $\mathcal{T}_h$ to pathwidth and to subdivisions of complete binary trees.
\begin{lemma}
\label{lem:Th-pw}
If\/ $G\in\mathcal{T}_h$, then\/ $\pw(G)\geq h$.
\end{lemma}
\begin{proof}
The proof goes by induction on $h$.
The case $h=0$ is trivial.
Now, assume that $h\geq 1$ and the lemma holds for $h-1$.
Since $G\in\mathcal{T}_h$, there are sets $V_1,V_2,V_3\subseteq V(G)$ interconnected as in the definition of $\mathcal{T}_h$, such that $G[V_i]\in\mathcal{T}_{h-1}$ and thus $\pw(G[V_i])\geq h-1$ for $i=1,2,3$.
Let $P$ be a path decomposition of $G$.
With bags restricted to $V_i$, it becomes a path decomposition of $G[V_i]$.
It follows that for $i=1,2,3$, there is a bag $B_i$ in $P$ such that $\size{B_i\cap V_i}\geq h$.
Assume without loss of generality that $B_1,B_2,B_3$ occur in this order in $P$.
Since $G[V_1]$ and $G[V_3]$ are connected, there is a path that connects $B_1\cap V_1$ and $B_3\cap V_3$ in $G$ avoiding $V_2$.
This path must have a vertex in $B_2$, so $\size{B_2\setminus V_2}\geq 1$ and thus $\size{B_2}\geq h+1$.
This proves that $\pw(G)\geq h$.
\end{proof}
The proof of Lemma~\ref{lem:Th-pw} generalizes the well-known proof of the fact that (a subdivision of) a complete binary tree of height $h$ has pathwidth at least $\lceil h/2\rceil$.
Actually, it is straightforward to show that such a tree belongs to $\mathcal{T}_{\lceil h/2\rceil}$.
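To make these pathwidth values concrete: pathwidth coincides with the vertex separation number (a classical fact, not stated in the paper), so for tiny graphs it can be computed by brute force over vertex orderings. The sketch below is ours and only illustrative; it confirms, e.g., that the complete binary tree of height $2$ has pathwidth $\lceil 2/2\rceil=1$.

```python
from itertools import permutations

def pathwidth_bruteforce(adj):
    """Vertex separation number (= pathwidth); feasible only for tiny graphs."""
    verts = list(adj)
    best = len(verts)
    for order in permutations(verts):
        width = 0
        for i in range(len(order)):
            prefix = set(order[: i + 1])
            # count vertices of the prefix that still have a neighbour outside it
            boundary = sum(1 for v in prefix
                           if any(u not in prefix for u in adj[v]))
            width = max(width, boundary)
        best = min(best, width)
    return best

# Complete binary tree of height 2: 7 nodes in heap order.
adj = {v: [] for v in range(1, 8)}
for v in range(1, 8):
    for c in (2 * v, 2 * v + 1):
        if c <= 7:
            adj[v].append(c)
            adj[c].append(v)
assert pathwidth_bruteforce(adj) == 1   # matches ceil(h/2) for h = 2
```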
\begin{lemma}
\label{lem:Th-subdivision}
If\/ $G\in\mathcal{T}_h$, then\/ $G$ contains a subdivision of a complete binary tree of height\/ $h$ as a subgraph.
Moreover, it can be computed in polynomial time from a\/ $\mathcal{T}_h$-witness for\/ $G$.
\end{lemma}
\begin{proof}
We prove, by induction on $h$, that for every graph $G\in\mathcal{T}_h$ and every $v\in V(G)$, the following structure exists in $G$: a subdivision $S$ of a complete binary tree of height $h$ with some root $r$ and a path $P$ from $v$ to $r$ such that $V(P)\cap V(S)=\{r\}$.
This is trivial for $h=0$.
For the induction step, assume that $h\geq 1$ and the statement holds for $h-1$.
Let $G\in\mathcal{T}_h$ and $v\in V(G)$.
Let $V_1,V_2,V_3\subseteq V(G)$ be as in the definition of $\mathcal{T}_h$.
Assume without loss of generality that $v\in V_3$ or $v$ can be connected with $V_3$ by a path in $G$ avoiding $V_1\cup V_2$.
For $i=1,2$, since $G[V_3]$ is connected and $G$ has a path connecting $V_i$ with $V_3$ and avoiding $V_{3-i}$, there is also a path in $G$ from $v$ to some vertex $v_i\in V_i$ avoiding $(V_1\cup V_2)\setminus\{v_i\}$.
These paths can be chosen so that they first follow a common path $P$ from $v$ to some vertex $r$ in $G-(V_1\cup V_2)$ and then they split into a path $Q_1$ from $r$ to $v_1$ and a path $Q_2$ from $r$ to $v_2$ so that $r$ is the only common vertex of any two of $P,Q_1,Q_2$.
For $i=1,2$, the induction hypothesis provides an appropriate structure in $G[V_i]$: a subdivision $S_i$ of a complete binary tree of height $h-1$ with root $r_i$ and a path $P_i$ from $v_i$ to $r_i$ such that $V(P_i)\cap V(S_i)=\{r_i\}$.
Connecting $r$ with $S_1$ and $S_2$ by the combined paths $Q_1P_1$ and $Q_2P_2$, respectively, yields a subdivision $S$ of a complete binary tree of height $h$ with root $r$ in $G$.
The construction guarantees that $V(P)\cap V(S)=\{r\}$.
Clearly, given a $\mathcal{T}_h$-witness for $G$, the induction step described above can be performed in polynomial time, and therefore the full recursive procedure of computing a subdivision of a complete binary tree of height $h$ in $G$ works in polynomial time.
\end{proof}
\subsection{Combining Path Decompositions}
The following lemma will be used several times in the paper to combine path decompositions.
\begin{lemma}
\label{lem:combine}
Let\/ $G$ be a graph and\/ $(T,\{B_x\}_{x\in V(T)})$ be a tree decomposition of\/ $G$ of width\/ $t-1$.
\begin{enumerate}
\item\label{item:combine-vertex} If\/ $q\in V(T)$ and every connected component of\/ $G-B_q$ has pathwidth at most\/ $\ell$, then there is a path decomposition of\/ $G$ of width at most\/ $\ell+t$ which contains\/ $B_q$ in every bag.
\item\label{item:combine-subpath} If\/ $Q$ is the path connecting\/ $x$ and\/ $y$ in\/ $T$ and every connected component of\/ $G-\bigcup_{q\in V(Q)}B_q$ has pathwidth at most\/ $\ell$, then there is a path decomposition of\/ $G$ of width at most\/ $\ell+t$ which contains\/ $B_x$ in the first bag and\/ $B_y$ in the last bag.
\end{enumerate}
In either case, there is a polynomial-time algorithm to construct such a path decomposition of\/ $G$ from the path decompositions of the respective components\/ $C$ of width at most\/ $\ell$.
\end{lemma}
\begin{proof}
In case~\ref{item:combine-vertex}, the path decomposition of $G$ is obtained by concatenating the path decompositions of the connected components of $G-B_q$ (which have width at most $\ell$) and adding $B_q$ to every bag.
Now, consider case~\ref{item:combine-subpath}.
For every node $q$ of $Q$, let $T_q$ be the subtree of $T$ induced on the nodes $z$ such that the path from $q$ to $z$ in $T$ contains no other nodes of $Q$, and let $V_q=\bigcup_{z\in V(T_q)}B_z$.
Apply case~\ref{item:combine-vertex} to the graph $G[V_q]$, the tree decomposition $(T_q,\{B_z\}_{z\in V(T_q)})$ of $G[V_q]$, and the node $q\in V(T_q)$ to obtain a path decomposition of $G[V_q]$ of width at most $\ell+t$ containing $B_q$ in every bag.
Then, concatenate the path decompositions thus obtained for all nodes $q$ of $Q$ (in the order they occur on $Q$) to obtain a requested path decomposition of $G$.
\end{proof}
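Case~1 of the lemma is fully constructive. A minimal Python sketch of that case (our naming; bags represented as Python sets, and the input decompositions assumed to be lists of bags):

```python
def combine_around_bag(B_q, component_decompositions):
    """Case 1 of the combining lemma: concatenate the path decompositions
    of the connected components of G - B_q and add B_q to every bag.

    If every input bag has at most l+1 vertices and |B_q| <= t, then every
    output bag has at most l+1+t vertices, i.e. the width is at most l+t."""
    combined = []
    for decomposition in component_decompositions:
        for bag in decomposition:
            combined.append(set(bag) | set(B_q))
    return combined

# Toy example: B_q = {0}; two components with path decompositions of width 1 and 0.
B_q = {0}
comps = [[{1, 2}, {2, 3}], [{4, 5}]]
result = combine_around_bag(B_q, comps)
assert result == [{0, 1, 2}, {0, 2, 3}, {0, 4, 5}]
assert max(len(b) for b in result) - 1 == 2   # width l + t with l = 1, t = 1
```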
\section{Proof of Theorem \ref{thm1}}
\label{sec:proof1}
The statement of Theorem~\ref{thm1} on a graph $G$ follows from the same statement on every connected component of $G$.
By Lemma~\ref{lem:Th-subdivision}, every graph in $\mathcal{T}_h$ contains a subdivision of a complete binary tree of height $h$.
Thus, Theorem~\ref{thm1} is a direct corollary to the following statement.
\begin{theorem}
\label{thm:existence}
For every\/ $h\in\mathbb{N}$, every connected graph with treewidth at most\/ $t-1$ has pathwidth at most\/ $th+1$ or belongs to\/ $\mathcal{T}_{h+1}$.
\end{theorem}
A tree decomposition of $G$ is \emph{optimal} if its width is equal to $\tw(G)$.
For the proof of Theorem~\ref{thm:existence}, we need optimal tree decompositions with an additional property.
Namely, consider a connected graph $G$ and a tree decomposition $(T,\{B_x\}_{x\in V(T)})$ of $G$.
Every edge $xy$ of $T$ splits $T$ into two subtrees: $T_{x|y}$ containing $x$ and $T_{y|x}$ containing $y$.
For every oriented edge $xy$ of $T$, let $G_{x|y}$ denote the subgraph of $G$ induced on the union of the bags of the nodes in $T_{x|y}$.
The property we need is that every subgraph of the form $G_{x|y}$ is connected.
While the fact that such a tree decomposition always exists is probably known, we have not found any reference, so we prove it in the following lemma.
\begin{lemma}
\label{lem:connected-decomposition}
Every connected graph\/ $G$ has an optimal tree decomposition\/ $(T,\{B_x\}_{x\in V(T)})$ with the property that\/ $G_{x|y}$ is connected for every oriented edge\/ $xy$ of\/ $T$.
\end{lemma}
\begin{proof}
Let $t=\tw(G)+1$.
The \emph{fatness} of an optimal tree decomposition $(T,\{B_x\}_{x\in V(T)})$ of $G$ is the $t$-tuple $(k_0,\ldots,k_{t-1})$ such that $k_i$ is the number of bags $B_x$ of size $t-i$.
Let $(T,\{B_x\}_{x\in V(T)})$ be an optimal tree decomposition of $G$ with lexicographically minimum fatness.
(The idea of taking such a tree decomposition comes from the proof of existence of optimal ``lean'' tree decompositions due to Bellenbaum and Diestel \cite[Theorem~3.1]{BD02}.)
We show that it has the required property.
Suppose it does not.
Let $xy$ be an edge of $T$ such that $G_{x|y}$ is disconnected and the number of nodes in the subtree $T_{x|y}$ of $T$ is minimized.
Let $\mathcal{C}$ be the family of connected components of $G_{x|y}$, so that $\size{\mathcal{C}}\geq 2$.
Let $Z=N_T(x)\setminus\{y\}$.
For every node $z\in Z$, let $C_z$ be the connected component of $G_{x|y}$ that contains $G_{z|x}$, which exists because the choice of $xy$ guarantees that $G_{z|x}$ is connected.
We modify $(T,\{B_x\}_{x\in V(T)})$ into a new tree decomposition of $G$ as follows.
We keep all nodes other than $x$ (with their bags $B_x$) and all edges non-incident to $x$.
We replace the node $x$ by $\size{\mathcal{C}}$ nodes $x_C$ with bags $B_{x_C}=B_x\cap V(C)$ for each $C\in\mathcal{C}$.
We replace the edge $xy$ by $\size{\mathcal{C}}$ edges $x_Cy$ for each $C\in\mathcal{C}$, and we replace the edge $xz$ by an edge $x_{C_z}z$ for each $z\in Z$.
It is straightforward to verify that what we obtain is indeed a tree decomposition of $G$ and that its width is $t-1$, so it is also optimal.
Since $G$ is connected, we have $B_{x_C}=B_x\cap V(C)\neq\emptyset$ for every $C\in\mathcal{C}$.
This and the assumption that $\size{\mathcal{C}}\geq 2$ imply that $\size{B_{x_C}}<\size{B_x}$ for all $C\in\mathcal{C}$.
We conclude that the fatness of the new tree decomposition is lexicographically less than the fatness of $(T,\{B_x\}_{x\in V(T)})$, which contradicts the assumption that the latter is lexicographically minimal.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:existence}]
The proof goes by induction on $h$.
The statement is true for $h=0$: if a connected graph $G$ has a cycle or a vertex of degree at least $3$, then $G\in\mathcal{T}_1$, and otherwise $G$ is a path, so $\pw(G)\leq 1$.
For the rest of the proof, assume that $h\geq 1$ and the statement is true for $h-1$.
Let $G$ be a connected graph with treewidth at most $t-1$ and $(T,\{B_x\}_{x\in V(T)})$ be an optimal tree decomposition of $G$ obtained from Lemma~\ref{lem:connected-decomposition}.
Thus $\size{B_x}\leq t$ for every node $x$ of $T$ and $G_{x|y}$ is connected for every oriented edge $xy$ of $T$.
For every oriented edge $xy$ of $T$, let $F_{x|y}$ be the subgraph of $G$ induced on the vertices not in $G_{y|x}$, that is, on the vertices that belong only to bags in $T_{x|y}$ and to no other bags.
Let $E$ be the set of edges $xy$ of $T$ such that some connected components of both $F_{x|y}$ and $F_{y|x}$ belong to $\mathcal{T}_h$.
Suppose $E=\emptyset$.
It follows that every pair of trees of the form $T_{x|y}$ such that $F_{x|y}$ has a connected component in $\mathcal{T}_h$ has a common node.
This implies that all such trees have a common node, say $z$, by the well-known fact that subtrees of a tree have the Helly property \cite[Theorem~4.1]{Horn72}.
For every neighbor $y$ of $z$ in $T$ and every connected component $C$ of $F_{y|z}$, since $C\notin\mathcal{T}_h$, the induction hypothesis gives $\pw(C)\leq t(h-1)+1$.
Lemma~\ref{lem:combine}~\ref{item:combine-vertex} applied with $q=z$ yields $\pw(G)\leq th+1$.
For the rest of the proof, assume $E\neq\emptyset$.
Since every connected supergraph of a graph from $\mathcal{T}_h$ belongs to $\mathcal{T}_h$, the set $E$ is the edge set of some subtree $K$ of $T$.
Let $Z$ be the set of leaves of $K$.
Since $K$ has at least one edge, we have $\size{Z}\geq 2$.
Suppose $\size{Z}\geq 3$.
Choose any distinct $z_1,z_2,z_3\in Z$.
For each $i\in\{1,2,3\}$, let $C_i$ be a connected component of $F_{z_i|x_i}$ that belongs to $\mathcal{T}_h$, where $x_i$ is the unique neighbor of $z_i$ in $K$.
For each $i\in\{1,2,3\}$, the subgraph $G_{x_i|z_i}$ is connected, is vertex-disjoint from $C_i$, and contains the other two of $C_1$, $C_2$, and $C_3$.
Consequently, any two of the sets $V(C_1)$, $V(C_2)$, and $V(C_3)$ can be connected by a path in $G$ avoiding the third one.
This shows that $G\in\mathcal{T}_{h+1}$.
Now, suppose $\size{Z}=2$.
It follows that $K$ is a path $x_1\ldots x_m$, where $Z=\{x_1,x_m\}$.
For every node $x_i$ of $K$, every neighbor $y$ of $x_i$ in $T-K$, and every connected component $C$ of $F_{y|x_i}$, since $C\notin\mathcal{T}_h$, the induction hypothesis gives $\pw(C)\leq t(h-1)+1$.
Lemma~\ref{lem:combine}~\ref{item:combine-subpath} applied with $Q=K$ yields $\pw(G)\leq th+1$.
\end{proof}
\section{Proof of Theorem \ref{thm2}}
\label{sec:proof2}
By Lemma~\ref{lem:Th-subdivision}, every graph in $\mathcal{T}_h$ contains a subdivision of a complete binary tree of height $h$, which can be computed in polynomial time from a $\mathcal{T}_h$-witness of $G$.
Therefore, Theorem~\ref{thm2} is a direct corollary to the following statement, the proof of which is presented in this section.
\begin{theorem}
\label{thm:algorithm}
There is a polynomial-time algorithm which, given a connected graph\/ $G$ and a tree decomposition of\/ $G$ of width at most\/ $t-1$, computes
\begin{itemize}
\item a number\/ $h\in\mathbb{N}$ such that\/ $G\in\mathcal{T}_h$ and\/ $\pw(G)\leq th+1$,
\item a\/ $\mathcal{T}_h$-witness for\/ $G$,
\item a path decomposition of\/ $G$ of width at most\/ $th+1$.
\end{itemize}
\end{theorem}
\subsection{Normalized Tree Decompositions}
\label{subsec:normalized}
Let $G$ be a graph with a fixed rooted tree decomposition $(T',\{B'_x\}_{x\in V(T')})$ of width at most $t-1$, which we call the \emph{initial tree decomposition} of $G$.
For a node $x$ of $T'$, let $T'_x$ be the subtree of $T'$ consisting of $x$ and all nodes of $T'$ lying below $x$, and let $V'_x$ be the set of vertices of $G$ contained only in bags of $T'_x$ (i.e., in no bags of $T'-T'_x$).
We show how to turn $(T',\{B'_x\}_{x\in V(T')})$ into a \emph{normalized tree decomposition} of $G$.
Intuitively, the latter will be a cleaned-up version of the initial tree decomposition with some additional properties that will be useful in the algorithm.
For a subset $X$ of the vertex set of a graph $G$, let $\CC{G}(X)$ denote the family of connected components of the induced subgraph $G[X]$ (which is empty when $X=\emptyset$).
Let
\[\Sub(G)=\bigcup_{x\in V(T')}\CC{G}(V'_x).\]
For a connected graph $G$, the set $\Sub(G)$ will be the node set of the normalized tree decomposition of $G$.
We will use Greek letters $\alpha$, $\beta$, etc.\ to denote members of $\Sub(G)$ (nodes of the normalized tree decomposition of $G$).
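The set $\Sub(G)$ can be computed directly from this definition. The following Python sketch is ours (hypothetical helper names): for each node $x$ of the rooted tree $T'$, it collects the vertices appearing only in bags of $T'_x$ and splits them into connected components.

```python
def connected_components(vertices, adj):
    """Components of the subgraph induced on `vertices`, as frozensets."""
    vertices = set(vertices)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for u in adj[v]:
                if u in vertices and u not in seen:
                    seen.add(u)
                    stack.append(u)
        comps.append(frozenset(comp))
    return comps

def subtree_nodes(x, children):
    """Node set of T'_x in the rooted tree given by the children map."""
    nodes, stack = set(), [x]
    while stack:
        y = stack.pop()
        nodes.add(y)
        stack.extend(children.get(y, []))
    return nodes

def sub_G(adj, bags, children):
    """Sub(G): components of G[V'_x] over all nodes x of T'."""
    result = set()
    for x in bags:
        inside = subtree_nodes(x, children)
        v_x = set().union(*(bags[y] for y in inside))  # vertices in bags of T'_x...
        for y in bags:
            if y not in inside:
                v_x -= bags[y]                         # ...and in no bag outside T'_x
        result.update(connected_components(v_x, adj))
    return result

# Toy example: G is the path 1-2-3; T' has root 'r' with bag {1,2}
# and one child 'c' with bag {2,3}.
adj = {1: [2], 2: [1, 3], 3: [2]}
S = sub_G(adj, {'r': {1, 2}, 'c': {2, 3}}, {'r': ['c']})
assert frozenset({1, 2, 3}) in S    # G itself belongs to Sub(G)
assert frozenset({3}) in S          # the only component of G[V'_c]
assert len(S) == 2
```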
Here are some easy consequences of these definitions and the assumption that $(T',\{B'_x\}_{x\in V(T')})$ is a tree decomposition of $G$.
\begin{lemma}
\label{lem:Sub}
The following holds for every graph\/ $G$ and for\/ $\Sub(G)$ defined as above.
\begin{enumerate}
\item If\/ $G$ is connected, then\/ $G\in\Sub(G)$.
\item\label{item:Sub-laminar} If\/ $\alpha,\beta\in\Sub(G)$, then\/ $V(\alpha)\subseteq V(\beta)$, or\/ $V(\beta)\subseteq V(\alpha)$, or\/ $V(\alpha)\cap V(\beta)=\emptyset$.
\item If\/ $\alpha,\beta\in\Sub(G)$ and\/ $V(\alpha)\cap V(\beta)=\emptyset$, then no edge of\/ $G$ connects\/ $V(\alpha)$ and\/ $V(\beta)$.
\end{enumerate}
\end{lemma}
Now, assume that $G$ is a connected graph.
By Lemma~\ref{lem:Sub}, the members of $\Sub(G)$ are organized in a rooted tree with root $G$ and with the following properties for any nodes $\alpha,\beta\in\Sub(G)$:
\begin{itemize}
\item if $\beta$ is a descendant of $\alpha$ (i.e., $\alpha$ lies on the path from the root to $\beta$), then $V(\beta)\subseteq V(\alpha)$;
\item if neither of $\alpha$ and $\beta$ is a descendant of the other, then $V(\alpha)$ and $V(\beta)$ are disjoint and non-adjacent in $G$.
\end{itemize}
Let $T$ denote this rooted tree (not to be confused with $T'$).
The normalized tree decomposition will be built on this tree $T$.
For each $\alpha\in\Sub(G)$, let $A_\alpha$ be the set of vertices $v\in V(G)$ for which $\alpha$ is the smallest subgraph in $\Sub(G)$ containing $v$ (which must be unique), and let $B_\alpha=A_\alpha\cup N_G(V(\alpha))$.
(Recall that $N_G(V(\alpha))\subseteq V(G)\setminus V(\alpha)$ by the definition of neighborhood.)
\begin{lemma}
\label{lem:normalized}
The following holds for every connected graph\/ $G$.
\begin{enumerate}
\item\label{item:normalized-A} $\{A_\alpha\}_{\alpha\in\Sub(G)}$ is a partition of\/ $V(G)$ into non-empty sets.
\item\label{item:normalized-td} $(T,\{B_\alpha\}_{\alpha\in\Sub(G)})$ is a tree decomposition of\/ $G$.
\item\label{item:normalized-width} $\size{B_\alpha}\leq t$ for every\/ $\alpha\in\Sub(G)$.
\end{enumerate}
\end{lemma}
\begin{proof}
It is clear from the definition that $\bigcup_{\alpha\in\Sub(G)}A_\alpha=V(G)$.
Let $\alpha\in\Sub(G)$.
Since $\alpha$ is connected and the vertex sets of the children of $\alpha$ in $T$ are pairwise disjoint and non-adjacent, at least one vertex of $\alpha$ is not a vertex of any child of $\alpha$ and thus belongs to $A_\alpha$.
This shows \ref{item:normalized-A}.
For the proof of \ref{item:normalized-td}, let $\alpha\in\Sub(G)$ and $v\in A_\alpha$.
Then $v\in B_\alpha$.
If $v\in B_\beta$ for some other $\beta\in\Sub(G)$, then $v$ must be a neighbor of $V(\beta)$ in $G$, which implies that $\beta$ is a descendant of $\alpha$ in $T$.
In that case, $v$ is also a neighbor of $V(\gamma)$ (and thus $v\in B_\gamma$) for every internal node $\gamma$ on the path from $\alpha$ to $\beta$ in $T$.
This shows that the nodes $\beta$ of $T$ such that $v\in B_\beta$ form a non-empty connected subtree of $T$.
It remains to show that any two adjacent vertices $u$ and $v$ of $G$ belong to a common bag $B_\alpha$.
Let $\alpha$ be a minimal member of $\Sub(G)$ containing at least one of $u$ and $v$; say, it contains $v$.
Then $v\in A_\alpha$.
If $u\notin A_\alpha$, then $u\notin V(\alpha)$, by minimality of $\alpha$, so $u$ is a neighbor of $V(\alpha)$ in $G$.
In either case, $u,v\in B_\alpha$.
For the proof of \ref{item:normalized-width}, let $\alpha\in\Sub(G)$, and let $x$ be the lowest node of $T'$ such that $\alpha\in\CC{G}(V'_x)$.
Every vertex $v\in A_\alpha$ belongs to $B'_x$; otherwise it would belong to $V'_y$ for some child $y$ of $x$ in $T'$, and the connected component of $G[V'_y]$ containing $v$ would be a proper induced subgraph of $\alpha$, contradicting $v\in A_\alpha$.
Now, let $v$ be a neighbor of $V(\alpha)$ in $G$.
Thus $v$ is a neighbor of some vertex $u\in V(\alpha)$ while $v\notin V(\alpha)$.
It follows that $u\in V'_x$ while $v\notin V'_x$, which implies that $v$ belongs to some bag of $T'-T'_x$ as well as some bag of $T'_x$ (as $uv$ is an edge of $G$), so it belongs to $B'_x$.
This shows that $B_\alpha\subseteq B'_x$, so $\size{B_\alpha}\leq\size{B'_x}\leq t$.
\end{proof}
We call $(T,\{B_\alpha\}_{\alpha\in\Sub(G)})$ the \emph{normalized tree decomposition} of $G$.
By Lemma~\ref{lem:normalized}, it has at most $\size{V(G)}$ nodes and its width is at most $t-1$.
The following lemma is a direct consequence of the construction of the normalized tree decomposition and will be used in the next subsection.
We emphasize that by a \emph{subtree} of the rooted tree $T$ we simply mean a connected subgraph of $T$, that is, the subtree does not need to be closed under taking descendants.
\begin{lemma}
\label{lem:component}
If\/ $Q$ is a subtree of\/ $T$ and\/ $\gamma$ is a connected component of\/ $G-\bigcup_{\xi\in V(Q)}B_\xi$, then either
\begin{enumalph}
\item $\gamma$ is a node of\/ $T-Q$ that is a child in\/ $T$ of some node of\/ $Q$, or
\item $V(\gamma)\cap V(\xi)=\emptyset$ where\/ $\xi$ is the root of\/ $Q$ in\/ $T$, that is, the node of\/ $Q$ closest to the root in\/ $T$.
\end{enumalph}
\end{lemma}
The normalized tree decomposition depends on the choice of the initial tree decomposition for $G$.
In the algorithm, the graphs $G$ considered are induced subgraphs of a common \emph{input graph} $G^{\mathrm{in}}$ given with an \emph{input tree decomposition} $(T^{\mathrm{in}},\{B^{\mathrm{in}}_x\}_{x\in V(T^{\mathrm{in}})})$, and the initial tree decomposition $(T',\{B'_x\}_{x\in V(T')})$ of $G$ considered above in the definition of $\Sub(G)$ is the input tree decomposition of $G^{\mathrm{in}}$ with appropriately restricted bags (some of which may become empty):
\[(T',\{B'_x\}_{x\in V(T')})=(T^{\mathrm{in}},\{B^{\mathrm{in}}_x\cap V(G)\}_{x\in V(T^{\mathrm{in}})}).\]
The fact that the normalized tree decompositions for all induced subgraphs $G$ of $G^{\mathrm{in}}$ considered in the algorithm come from a common input tree decomposition of $G^{\mathrm{in}}$ has the following consequences, which will be used in the complexity analysis of the algorithm in Subsection~\ref{subsec:complexity}.
(We emphasize again that, in the following lemma and later in the paper, $\Sub(G)$ is always computed with respect to the initial tree decomposition that is the restriction of the input tree decomposition to $G$.)
\begin{lemma}
\label{lem:Tin}
The following holds for every induced subgraph\/ $G$ of the input graph\/ $G^{\mathrm{in}}$.
\begin{enumerate}
\item\label{item:Tin-restrict} If\/ $\alpha\in\Sub(G)$ and\/ $V(\alpha)\subseteq X\subseteq V(G)$, then\/ $\alpha\in\Sub(G[X])$.
\item\label{item:Tin-laminar} If\/ $\alpha,\beta\in\Sub(G)$, then\/ $\alpha\in\Sub(\beta)$, or\/ $\beta\in\Sub(\alpha)$, or\/ $V(\alpha)\cap V(\beta)=\emptyset$.
\item\label{item:Tin-recursion} If\/ $\alpha\in\Sub(G)$, then\/ $\Sub(\alpha)=\{\beta\in\Sub(G)\colon V(\beta)\subseteq V(\alpha)\}$.
\item\label{item:Tin-component} If\/ $\alpha\in\Sub(G)$, $X\subseteq V(G)$, and\/ $\gamma\in\CC{G}(V(\alpha)\cap X)$, then\/ $\gamma\in\Sub(G[X])$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $T^{\mathrm{in}}_x$ and $V^{\mathrm{in}}_x$ be defined like $T'_x$ and $V'_x$ but for the tree decomposition $(T^{\mathrm{in}},\{B^{\mathrm{in}}_x\}_{x\in V(T^{\mathrm{in}})})$ of $G^{\mathrm{in}}$.
Let $G$ be an induced subgraph of $G^{\mathrm{in}}$.
The fact that $\Sub(G)$ is computed with respect to the initial tree decomposition that is the restriction of the input tree decomposition to $G$ implies
\[\Sub(G)=\bigcup_{x\in V(T^{\mathrm{in}})}\CC{G^{\mathrm{in}}}(V^{\mathrm{in}}_x\cap V(G)).\]
If $\alpha\in\Sub(G)$ and $V(\alpha)\subseteq X\subseteq V(G)$, then the fact that $\alpha$ is a connected component of $G^{\mathrm{in}}[V^{\mathrm{in}}_x\cap V(G)]$ for some node $x$ of $T^{\mathrm{in}}$ implies that it is also a connected component of $G^{\mathrm{in}}[V^{\mathrm{in}}_x\cap X]$, which in turn implies $\alpha\in\Sub(G[X])$.
This shows \ref{item:Tin-restrict}.
If $\alpha,\beta\in\Sub(G)$ and $V(\alpha)\subseteq V(\beta)$, then property~\ref{item:Tin-restrict} with $X=V(\beta)$ yields $\alpha\in\Sub(\beta)$.
This implies \ref{item:Tin-laminar} by Lemma~\ref{lem:Sub}~\ref{item:Sub-laminar}, and it also implies the ``$\supseteq$'' inclusion in \ref{item:Tin-recursion} (with $\alpha$ and $\beta$ interchanged).
For the converse inclusion in \ref{item:Tin-recursion}, let $\alpha\in\Sub(G)$ and $\beta\in\Sub(\alpha)$.
Thus $V(\beta)\subseteq V(\alpha)$.
Let $x$ be a node of $T^{\mathrm{in}}$ such that $\alpha$ is a connected component of $G^{\mathrm{in}}[V^{\mathrm{in}}_x\cap V(G)]$.
Since $V(\alpha)\subseteq V^{\mathrm{in}}_x$, there is a node $y$ in $T^{\mathrm{in}}_x$ (implying $V^{\mathrm{in}}_y\subseteq V^{\mathrm{in}}_x$) such that $\beta$ is a connected component of $G^{\mathrm{in}}[V^{\mathrm{in}}_y\cap V(\alpha)]$.
It follows that $\beta$ is a connected component of $G^{\mathrm{in}}[V^{\mathrm{in}}_y\cap V(G)]$, so $\beta\in\Sub(G)$.
Finally, if $\alpha\in\Sub(G)$, $X\subseteq V(G)$, and $\gamma\in\CC{G}(V(\alpha)\cap X)$, then $\alpha$ is a connected component of $G^{\mathrm{in}}[V^{\mathrm{in}}_x\cap V(G)]$ for some node $x$ of $T^{\mathrm{in}}$, whence it follows that $\gamma$ is a connected component of $G^{\mathrm{in}}[V^{\mathrm{in}}_x\cap X]$ and thus $\gamma\in\Sub(G[X])$.
This shows \ref{item:Tin-component}.
\end{proof}
\subsection{Main Procedure}
The core of the algorithm is a recursive procedure $\solve(G,b)$, where $G$ is a connected graph with $\tw(G)\leq t-1$ and $b\in\mathbb{N}\cup\{\infty\}$ is an \emph{upper bound request}.
It computes the following data:
\begin{itemize}
\item a number $h=h(G,b)\in\mathbb{N}$ such that $h\leq b$,
\item a $\mathcal{T}_h$-witness $W(G,b)$ of $G$,
\item only when $h<b$: a path decomposition $P(G,b)$ of $G$ of width at most $th+1$.
\end{itemize}
The algorithm uses memoization to compute these data only once for each pair $(G,b)$.
A run of $\solve(G,\infty)$ produces the outcome requested in Theorem~\ref{thm:algorithm}.
The purpose of the upper bound request is optimization---we tell the procedure that if it can provide a $\mathcal{T}_b$-witness for $G$, then we no longer need any path decomposition for $G$.
This allows the procedure to save some computation, perhaps preventing many unnecessary recursive calls.
Our complexity analysis of the algorithm will rely on this optimization.
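To fix the recursion pattern (memoization keyed by the pair, with runs pruned once the request $b$ is attained), here is a toy Python sketch. The recursive rule below is a placeholder (repeated halving of an integer), not the case analysis of the actual procedure.

```python
import math

memo = {}

def solve(n, b):
    # Toy stand-in for solve(G, b): memoized on the pair (n, b),
    # with b = math.inf playing the role of "no upper bound request".
    if (n, b) in memo:
        return memo[(n, b)]
    if b == 0 or n <= 1:
        h = 0                              # pruned run, or trivial base case
    else:
        h = min(b, 1 + solve(n // 2, b))   # never report more than requested
    memo[(n, b)] = h
    return h
```

Calling the toy with \texttt{b = math.inf} mirrors the top-level call $\solve(G,\infty)$, while a finite \texttt{b} caps the reported value, mirroring a pruned run.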
Below, we present the procedure $\solve(G,b)$ for a fixed connected graph $G$ and a fixed upper bound request $b\in\mathbb{N}\cup\{\infty\}$.
The procedure assumes access to the normalized tree decomposition $(T,\{B_\alpha\}_{\alpha\in\Sub(G)})$ of $G$ of width at most $t-1$ obtained from some initial tree decomposition of $G$ as described in the definition of $\Sub(G)$.
In the next subsection, we show that a full run of $\solve(G^{\mathrm{in}},\infty)$ on an input graph $G^{\mathrm{in}}$ makes recursive calls to $\solve(G,b)$ for only polynomially many distinct pairs $(G,b)$ if the normalized tree decompositions of all induced subgraphs $G$ of $G^{\mathrm{in}}$ occurring in these calls are obtained from a common input tree decomposition of $G^{\mathrm{in}}$ as described in Subsection~\ref{subsec:normalized}.
If $b=0$, then we just set $h(G,0)=0$ and $W(G,0)=\witness{V(G)}$, and we terminate the run of $\solve(G,b)$, saying that it is \emph{pruned}.
Assume henceforth that $b\geq 1$.
Suppose $T$ has only one node, that is, $\Sub(G)=\{G\}$.
Since $V(G)$ is the bag of that node, $\size{V(G)}\leq t$.
If $G$ has a cycle or a vertex of degree at least $3$, then it has three vertices $v_1,v_2,v_3$ such that any two of them can be connected by a path in $G$ avoiding the third one.
In that case, we set $h(G,b)=1$ and $W(G,b)=\witness{V(G);\witness{\{v_1\}},\witness{\{v_2\}},\witness{\{v_3\}}}$, and (if $b>1$) we let $P(G,b)$ be the path decomposition of $G$ consisting of the single bag $V(G)$, which has width at most $t-1$.
If $G$ has no cycle or vertex of degree at least $3$, then it is a path.
In that case, we set $h(G,b)=0$ and $W(G,b)=\witness{V(G)}$, and we let $P(G,b)$ be any path decomposition of $G$ of width $1$.
Assume henceforth that $T$ has more than one node.
For each node $\alpha\in\Sub(G)\setminus\{G\}$, we run $\solve(\alpha,b)$ to compute $h(\alpha,b)$, $W(\alpha,b)$, and $P(\alpha,b)$ when $h(\alpha,b)<b$.
We call these recursive calls to $\solve$ \emph{primary}.
If any of them leads to $h(\alpha,b)=b$, we just set $h(G,b)=b$ and $W(G,b)=W(\alpha,b)$, and we terminate the run of $\solve(G,b)$, again saying that it is \emph{pruned}.
Assume henceforth that the run of $\solve(G,b)$ is not pruned, that is, we have $h(\alpha,b)<b$ for every $\alpha\in\Sub(G)\setminus\{G\}$.
Let $k$ be the maximum value of $h(\alpha,b)$ for $\alpha\in\Sub(G)\setminus\{G\}$.
Thus $k<b$.
We will consider several further cases, each leading to one of the following two outcomes:
\begin{enumerate}[label=(\Alph*),widest=A]
\item We set $h(G,b)=k$.
In that case, we let $W(G,b)$ be $W(\alpha,b)$ for any node $\alpha\in\Sub(G)\setminus\{G\}$ such that $h(\alpha,b)=k$, and we only need to provide an appropriate path decomposition $P(G,b)$.
\item\label{outcomeB} We set $h(G,b)=k+1$.
In that case, if $k+1<b$, we let $P(G,b)$ be the path decomposition of $G$ of width $t(k+1)+1$ obtained by applying Lemma~\ref{lem:combine}~\ref{item:combine-vertex} with $q$ the root of $T$, and we only need to provide an appropriate $\mathcal{T}_{k+1}$-witness $W(G,b)$.
\end{enumerate}
Let $Z$ be the set of minimal nodes $\zeta\in\Sub(G)\setminus\{G\}$ such that $h(\zeta,b)=k$.
It follows that $Z\neq\emptyset$ (by the definition of $k$) and the sets $V(\zeta)$ with $\zeta\in Z$ are pairwise disjoint and non-adjacent in $G$.
Suppose that $Z$ consists of a single node $\zeta$.
Let $Q$ be the path from the root to $\zeta$ in $T$.
Every connected component $\gamma$ of $G-\bigcup_{\xi\in V(Q)}B_\xi$ needs to satisfy Lemma~\ref{lem:component}~(a), because case~(b) is impossible when the root of $Q$ is the root $G$ of $T$; so $\gamma$ is a node of $T-Q$; in particular, $\gamma\in\Sub(G)\setminus\{G\}$ and $h(\gamma,b)<k$, so $P(\gamma,b)$ is a path decomposition of $\gamma$ of width at most $t(k-1)+1$.
We set $h(G,b)=k$ and apply Lemma~\ref{lem:combine}~\ref{item:combine-subpath} to obtain a path decomposition $P(G,b)$ of $G$ of width at most $tk+1$.
Assume henceforth that $\size{Z}\geq 2$.
Let $U=V(G)\setminus\bigcup_{\zeta\in Z}V(\zeta)$.
A \emph{$U$-path} is a path in $G$ with all internal vertices in $U$.
Consider an auxiliary graph $H$ with vertex set $Z$ where $\zeta\xi$ is an edge if and only if there is a $U$-path connecting $V(\zeta)$ and $V(\xi)$.
The graph $H$ is connected, because so is $G$.
Suppose that $H$ has a cycle or a vertex of degree at least $3$.
Then there are $\zeta_1,\zeta_2,\zeta_3\in Z$ such that any two of them can be connected by a path in $H$ avoiding the third one.
This and connectedness of the induced subgraphs in $Z$ imply that any two of the sets $V(\zeta_1),V(\zeta_2),V(\zeta_3)$ can be connected by a path in $G$ avoiding the third one.
We set $h(G,b)=k+1$ and $W(G,b)=\witness{V(G);W(\zeta_1,b),W(\zeta_2,b),W(\zeta_3,b)}$, while $P(G,b)$ is constructed as in the description of outcome~\ref{outcomeB}.
Assume henceforth that $H$ has no cycle or vertex of degree at least $3$, that is, $H$ is a path $\zeta_1\ldots\zeta_m$ with $m=\size{Z}\geq 2$.
The run of $\solve(G,b)$ is a \emph{key run} if this case is reached.
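The case distinction on $H$ made above amounts to a single check: a connected graph is a path if and only if it is acyclic (i.e., has $\size{V}-1$ edges) and has maximum degree at most $2$. A minimal sketch, with $H$ as an adjacency dict (an assumed representation):

```python
def is_path(H):
    # H: adjacency dict of a connected graph.
    # A connected graph is a path iff |E| = |V| - 1 and max degree <= 2;
    # otherwise it has a cycle or a vertex of degree at least 3.
    n = len(H)
    m = sum(len(nbrs) for nbrs in H.values()) // 2
    return m == n - 1 and all(len(nbrs) <= 2 for nbrs in H.values())
```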
For convenience of notation, let $\zeta_0=\zeta_{m+1}=G$.
Every vertex in $U$ is connected by a $U$-path to one set or to two consecutive sets among $V(\zeta_1),\ldots,V(\zeta_m)$.
Define subsets $U_{0,1},U_{1,2},\ldots,U_{m,m+1}$ of $U$ as follows:
\begin{itemize}
\item $U_{0,1}$ is the set of vertices in $U$ connected by a $U$-path to $V(\zeta_1)$ but not to $V(\zeta_2)$;
\item for $1\leq i<m$, $U_{i,i+1}$ is the set of vertices in $U$ connected by a $U$-path to $V(\zeta_i)$ and to $V(\zeta_{i+1})$;
\item $U_{m,m+1}$ is the set of vertices in $U$ connected by a $U$-path to $V(\zeta_m)$ but not to $V(\zeta_{m-1})$.
\end{itemize}
For $1\leq i\leq m$, let $B_i=B_{\zeta_i}$, let $U_i$ be the set of vertices in $U\setminus\bigcup_{j=0}^mU_{j,j+1}$ connected by a $U$-path to $V(\zeta_i)$, and let $V_i=V(\zeta_i)\cup B_i\cup U_i$.
In particular, $U_1=U_m=\emptyset$, so $V_1=V(\zeta_1)\cup B_1$ and $V_m=V(\zeta_m)\cup B_m$.
The sets $U_i$ ($1<i<m$) and $U_{i,i+1}$ ($0\leq i\leq m$) are pairwise disjoint and their union is $U$.
The following Venn diagram illustrates possible intersections and adjacencies between the sets $V(\zeta_i)$, $U_{i,i+1}$, $U_i$, and $B_i$ (i.e., only pairs of sets that touch in the diagram can be adjacent):
\begin{center}
\begin{tikzpicture}[scale=.68]
\def\V#1{(5*#1,0) arc (180:0:1)--+(0,-1.4) arc (0:-180:1)--cycle}
\def\W#1{(5*#1+3.5,0) ellipse (2 and 1.2)}
\def\U#1{(5*#1+1,1.5) ellipse (0.7 and 1.2)}
\def\B#1{(5*#1+1,0.2) ellipse (2.6 and 1.1)}
\draw[dotted]\B{1}\B{2}\B{3}\B{4};
\begin{scope}
\clip\W{0}\W{1}\U{2}\W{2}\U{3}\W{3}\W{4};
\draw[thick,black!30,fill=black!10]\B{1}\B{2}\B{3}\B{4};
\end{scope}
\draw\W{0}\W{1}\U{2}\W{2}\U{3}\W{3}\W{4};
\begin{scope}
\clip\V{1}\V{2}\V{3}\V{4};
\draw[thick,black!30,fill=black!10]\B{1}\B{2}\B{3}\B{4};
\end{scope}
\draw\V{1}\V{2}\V{3}\V{4};
\node at (6,-1.55) {$V(\zeta_1)$};
\node at (11,-1.55) {$V(\zeta_2)$};
\node at (16,-1.55) {$V(\zeta_3)$};
\node at (21,-1.55) {$V(\zeta_4)$};
\node at (2.5,-0.05) {$U_{0,1}$};
\node at (8.5,-1.65) {$U_{1,2}$};
\node at (13.5,-1.65) {$U_{2,3}$};
\node at (18.5,-1.65) {$U_{3,4}$};
\node at (24.5,-0.05) {$U_{4,5}$};
\node at (11,1.9) {$U_2$};
\node at (16,1.9) {$U_3$};
\node[black!70] at (6,0) {$B_1$};
\node[black!70] at (11,0) {$B_2$};
\node[black!70] at (16,0) {$B_3$};
\node[black!70] at (21,0) {$B_4$};
\end{tikzpicture}
\end{center}
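The $U$-path reachability underlying the sets $U_{i,i+1}$ and $U_i$ can be computed with one breadth-first search per $\zeta_i$, restricted to $U$. A sketch under an assumed adjacency-list representation (\texttt{adj}, \texttt{U}, and \texttt{Vzeta} are illustrative names):

```python
from collections import deque

def reachable_from(adj, U, Vzeta):
    """Vertices of U connected to Vzeta by a path with all internal vertices in U."""
    # Seed the BFS with the vertices of U adjacent to Vzeta (a U-path with
    # no internal vertices), then explore only within U.
    seen = set()
    queue = deque(u for u in U if any(w in Vzeta for w in adj[u]))
    seen.update(queue)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in U and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen
```

Each vertex of $U$ then lands in $U_{i,i+1}$ or $U_i$ according to which of the computed reachability sets contain it.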
For $0\leq i\leq m$, let $Q_{i,i+1}$ be the path from $\zeta_i$ to $\zeta_{i+1}$ in $T$, and let $\zeta_{i,i+1}$ be the root of $Q_{i,i+1}$ in $T$, that is, the lowest common ancestor of $\zeta_i$ and $\zeta_{i+1}$ in $T$.
(Since $\zeta_0=\zeta_{m+1}=G$, the paths $Q_{0,1}$ and $Q_{m,m+1}$ connect the root $G$ of $T$ to $\zeta_1$ and $\zeta_m$, respectively, and $\zeta_{0,1}=\zeta_{m,m+1}=G$.)
Let $\Gamma$ be the family of connected components of $G[V_i\setminus B_i]$ for $1\leq i\leq m$ and of connected components of $G[U_{i,i+1}\setminus\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi]$ for $0\leq i\leq m$.
Suppose that for each $\gamma\in\Gamma$, we have a path decomposition $P_\gamma$ of $\gamma$ of width at most $t(k-1)+1$.
Then we apply
\begin{itemize}
\item Lemma~\ref{lem:combine}~\ref{item:combine-vertex} to $G[V_i]$, the tree decomposition $(T,\{V_i\cap B_\alpha\}_{\alpha\in\Sub(G)})$ of $G[V_i]$ and the node $\zeta_i$ to obtain a path decomposition $P_i$ of $G[V_i]$ of width at most $tk+1$ containing the set $B_i$ in every bag, for $1\leq i\leq m$;
\item Lemma~\ref{lem:combine}~\ref{item:combine-subpath} to $G[U_{i,i+1}]$, the tree decomposition $(T,\{U_{i,i+1}\cap B_\alpha\}_{\alpha\in\Sub(G)})$ of $G[U_{i,i+1}]$ and the path $Q_{i,i+1}$ to obtain a path decomposition $P_{i,i+1}$ of $G[U_{i,i+1}]$ of width at most $tk+1$ containing $U_{i,i+1}\cap B_i$ in the first bag and $U_{i,i+1}\cap B_{i+1}$ in the last bag, for $0\leq i\leq m$.
\end{itemize}
We set $h(G,b)=k$, and we concatenate these path decompositions $P_{0,1},P_1,P_{1,2},\ldots,P_m,P_{m,m+1}$ to obtain a path decomposition $P(G,b)$ of $G$ of width at most $tk+1$, as shown by the following claim.
\begin{claim}
The concatenation of\/ $P_{0,1},P_1,P_{1,2},\ldots,P_m,P_{m,m+1}$ is a path decomposition of\/ $G$.
\end{claim}
\begin{proof}
If $v\in V(\zeta_i)\cup U_i$ with $1\leq i\leq m$, then $v$ belongs to bags in $P_i$, where the corresponding nodes form a non-empty subpath, and it belongs to no bags in the other path decompositions among $P_{0,1},P_1,P_{1,2},\ldots,P_m,P_{m,m+1}$.
If $v\in U_{i,i+1}$ with $0\leq i\leq m$, then $v$ belongs to bags in $P_{i,i+1}$, where the corresponding nodes form a non-empty subpath, and moreover,
\begin{itemize}
\item if $v\in B_i\cap U_{i,i+1}$ with $1\leq i\leq m$, then $v$ belongs to all bags of $P_i$ and to the first bag of $P_{i,i+1}$;
\item if $v\in U_{i,i+1}\cap B_{i+1}$ with $0\leq i<m$, then $v$ belongs to the last bag of $P_{i,i+1}$ and to all bags of $P_{i+1}$;
\item $v$ belongs to no bags in the other path decompositions among $P_{0,1},P_1,P_{1,2},\ldots,P_m,P_{m,m+1}$.
\end{itemize}
In all cases, the nodes whose bags contain $v$ form a non-empty subpath in the concatenation.
Now, consider an edge $uv$ of $G$.
If $u,v\in V_i$ ($1\leq i\leq m$) or $u,v\in U_{i,i+1}$ ($0\leq i\leq m$), then both $u$ and $v$ belong to a common bag of $P_i$ or $P_{i,i+1}$ (respectively).
Since the sets of the form $U_i$ and $U_{i,i+1}$ are pairwise non-adjacent, an edge $uv$ connecting two different sets among $U_{i,i+1}$, $U_i$, and $V(\zeta_i)$ must connect $V(\zeta_i)$ with $N_G(V(\zeta_i))$ for some $1\leq i\leq m$; the latter set is contained in $B_i$ (by definition), so $u,v\in V_i$.
Therefore, the above exhausts all the edges of $G$, showing that every edge is realized in some bag of the concatenation.
\end{proof}
It remains to provide the path decompositions $P_\gamma$ for all $\gamma\in\Gamma$ or to deal with the cases where we cannot provide them.
Let $K$ be the unique subtree of $T$ that has $G$ as the root and the nodes in $Z$ as the leaves.
Thus $h(\gamma,b)<k$ for every node $\gamma$ in $T-K$, by the definition of $Z$.
\begin{claim}
\label{claim:component}
If\/ $\gamma$ is a connected component of\/ $G[V_i\setminus B_i]$ where\/ $1\leq i\leq m$, then either
\begin{enumalph}
\item $\gamma$ is a node in\/ $T-K$ that is a child of\/ $\zeta_i$ in\/ $T$, or
\item $V(\gamma)\cap V(\zeta_i)=\emptyset$ and\/ $\gamma\in\CC{G}(U_i\setminus B_i)$, which is possible only when\/ $1<i<m$.
\end{enumalph}
If\/ $\gamma$ is a connected component of\/ $G[U_{i,i+1}\setminus\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi]$ where\/ $0\leq i\leq m$, then either
\begin{enumalph}[resume]
\item $\gamma$ is a node in\/ $T-K$ that is a child in\/ $T$ of a node from\/ $Q_{i,i+1}$, or
\item $V(\gamma)\cap V(\zeta_{i,i+1})=\emptyset$, which is possible only when\/ $1\leq i<m$.
\end{enumalph}
\end{claim}
\begin{proof}
For the proof of the first statement, let $\gamma\in\CC{G}(V_i\setminus B_i)$ where $1\leq i\leq m$.
Since $N_G(U_i)\subseteq V(\zeta_i)$ and $N_G(V(\zeta_i))\subseteq B_i$, we have $N_G(U_i\setminus B_i)\subseteq B_i$ and $N_G(V(\zeta_i)\setminus B_i)\subseteq B_i$, and thus $N_G(V_i\setminus B_i)\subseteq B_i$, which implies that $\gamma$ is a connected component of $G-B_i$.
By Lemma~\ref{lem:component} applied with $Q$ consisting of the single node $\zeta_i$, either
\begin{enumalph}
\item $\gamma$ is a child of $\zeta_i$ in $T$, so it is a node in $T-K$, as $\zeta_i$ is a leaf of $K$, or
\item $V(\gamma)\cap V(\zeta_i)=\emptyset$, which is possible only when $1<i<m$ (as $U_1=U_m=\emptyset$); in that case, $V(\gamma)\subseteq U_i\setminus B_i$, so $\gamma\in\CC{G}(U_i\setminus B_i)$, as $N_G(U_i\setminus B_i)\subseteq B_i$.
\end{enumalph}
For the proof of the second statement, let $\gamma\in\CC{G}(U_{i,i+1}\setminus\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi)$ where $0\leq i\leq m$.
Since $N_G(U_{i,i+1})\subseteq V(\zeta_i)\cup V(\zeta_{i+1})$, $N_G(V(\zeta_i))\subseteq B_i$, and $N_G(V(\zeta_{i+1}))\subseteq B_{i+1}$, we have $N_G(U_{i,i+1}\setminus\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi)\subseteq\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi$, which implies that $\gamma$ is a connected component of $G-\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi$.
By Lemma~\ref{lem:component} applied with $Q=Q_{i,i+1}$, either
\begin{enumalph}[resume]
\item $\gamma$ is a node in $T-Q_{i,i+1}$ that is a child in $T$ of some node $\xi$ of $Q_{i,i+1}$, so $\gamma$ is a node in $T-K$, as $V(\gamma)$ is disjoint from $V(\zeta_j)$ for every leaf $\zeta_j$ of $K$, or
\item $V(\gamma)\cap V(\zeta_{i,i+1})=\emptyset$, which is possible only when $1\leq i<m$ (otherwise $\zeta_{i,i+1}=G$).\qedhere
\end{enumalph}
\end{proof}
Let a component $\gamma\in\Gamma$ be called a \emph{child component} when case (a) or~(c) of Claim~\ref{claim:component} holds for $\gamma$ and a \emph{parent component} when case (b) or~(d) of Claim~\ref{claim:component} holds for $\gamma$.
Case (a) or~(c) of Claim~\ref{claim:component} implies that for every child component $\gamma$, there has been a primary recursive call to $\solve(\gamma,b)$ and $h(\gamma,b)<k$, so $P_\gamma=P(\gamma,b)$ is a path decomposition of $\gamma$ of width at most $t(k-1)+1$.
An attempt to deal with parent components $\gamma$ would be to run $\solve(\gamma,k)$ for each of them to compute $h(\gamma,k)$, $W(\gamma,k)$, and $P(\gamma,k)$ when $h(\gamma,k)<k$.
If every such recursive call led to $h(\gamma,k)<k$, then $P(\gamma,k)$ would be a requested path decomposition $P_\gamma$ of $\gamma$ of width at most $t(k-1)+1$ for every parent component $\gamma$.
If some of these recursive calls led to $h(\gamma,k)=k$, then we would set $h(G,b)=k+1$ and use $W(\gamma,k)$ to construct a requested $\mathcal{T}_{k+1}$-witness $W(G,b)$, while $P(G,b)$ would be constructed as described in outcome~\ref{outcomeB} with no need for explicit path decompositions of the parent components.
However, this approach fails in that we are unable to provide a polynomial upper bound on the number of distinct pairs $(G,b)$ for which recursive calls to $\solve(G,b)$ would be made.
We overcome this difficulty as follows.
For each parent component $\gamma$, instead of running $\solve(\gamma,k)$, we run $\solve(\hat\gamma,k)$ for an appropriately defined connected induced subgraph $\hat\gamma$ of $G[U]$ such that $V(\gamma)\subseteq V(\hat\gamma)$, to compute $h(\hat\gamma,k)$, $W(\hat\gamma,k)$, and $P(\hat\gamma,k)$ when $h(\hat\gamma,k)<k$.
We call these recursive calls \emph{secondary}.
Their role is analogous to the role of the calls to $\solve(\gamma,k)$ in the attempt above.
If every secondary call leads to $h(\hat\gamma,k)<k$, then $P(\hat\gamma,k)$ is a path decomposition of $\hat\gamma$ of width at most $t(k-1)+1$, and the requested path decomposition $P_\gamma$ of every parent component $\gamma$ is obtained from the respective $P(\hat\gamma,k)$ by restricting the bags to $V(\gamma)$.
If some secondary call leads to $h(\hat\gamma,k)=k$, then we set $h(G,b)=k+1$ and use $W(\hat\gamma,k)$ to construct a requested $\mathcal{T}_{k+1}$-witness $W(G,b)$ (as described in Claim~\ref{claim:witness} below), while $P(G,b)$ is constructed as described in outcome~\ref{outcomeB} with no need for explicit path decompositions of the parent components.
The induced subgraphs $\hat\gamma$ are defined in a way (described below) that behaves nicely in the recursion and will allow us (in Subsection~\ref{subsec:complexity}) to provide a polynomial upper bound on the number of distinct pairs $(G,b)$ for which recursive calls to $\solve(G,b)$ are made.
For $\sigma\in\Sub(G)$, let $A_\sigma$ be as in the definition of normalized tree decomposition, so that $A_\sigma\subseteq V(\sigma)$ and $B_\sigma=A_\sigma\cup N_G(V(\sigma))$.
For $1\leq i<m$, let $R_{i,i+1}$ be the set of vertices $v\in U\cap B_{\zeta_{i,i+1}}$ with the following property: if $\sigma$ is the node in $T$ (on the path from $\zeta_{i,i+1}$ to the root $G$ of $T$) such that $v\in A_\sigma$, then $v$ is connected by a $(U\cap V(\sigma))$-path to $V(\zeta_i)$ and to $V(\zeta_{i+1})$ (thus $R_{i,i+1}\subseteq U_{i,i+1}$).
For $1\leq i\leq m$, let $R_i=U\cap B_i=N_G(V(\zeta_i))$.
Let $R=R_1\cup R_{1,2}\cup R_2\cup\cdots\cup R_{m-1,m}\cup R_m$.
For each parent component $\gamma$, the induced subgraph $\hat\gamma$ used in the description above is defined as follows.
If $\gamma\in\CC{G}(U_i\setminus B_i)$ with $1<i<m$, as in case~(b) of Claim~\ref{claim:component}, then $\hat\gamma=\gamma$, which is a connected component of $G[U_i\setminus R_i]$, because $U_i\cap B_i\subseteq R_i$.
If $\gamma\in\CC{G}(U_{i,i+1}\setminus\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi)$ with $1\leq i<m$, as in case~(d) of Claim~\ref{claim:component}, then $\hat\gamma$ is the connected component of $G[U_{i,i+1}\setminus(R_i\cup R_{i,i+1}\cup R_{i+1})]$ such that $V(\gamma)\subseteq V(\hat\gamma)$, which exists because $R_i\cup R_{i,i+1}\cup R_{i+1}\subseteq\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi$.
Let each $\hat\gamma$ obtained this way from a parent component $\gamma$ be called a \emph{secondary component}.
Every secondary component is a connected component of $G[U\setminus R]$ and thus of $G-R$, as $N_G(\bigcup_{i=1}^mV(\zeta_i))\subseteq R$.
\begin{claim}
\label{claim:witness}
If\/ $h(\hat\gamma,k)=k$ for a secondary component\/ $\hat\gamma$, then the following is a\/ $\mathcal{T}_{k+1}$-witness for\/ $G$:
\begin{itemize}
\item $\witness{V(G);W(\zeta_{i-1},b),W(\hat\gamma,k),W(\zeta_{i+1},b)}$ when\/ $\hat\gamma\in\CC{G}(U_i\setminus R_i)$ with\/ $1<i<m$;
\item
$\witness{V(G);W(\zeta_i,b),W(\hat\gamma,k),W(\zeta_{i+1},b)}$ when\/ $\hat\gamma\in\CC{G}(U_{i,i+1}\setminus(R_i\cup R_{i,i+1}\cup R_{i+1}))$ with\/ $1\leq i<m$.
\end{itemize}
\end{claim}
\begin{proof}
If $\hat\gamma\in\CC{G}(U_i\setminus R_i)$ with $1<i<m$, then there are paths in $G$ connecting
\begin{itemize}
\item $V(\zeta_{i-1})$ with $V(\hat\gamma)$ via $U_{i-1,i}\cup V_i$, thus avoiding $V(\zeta_{i+1})$,
\item $V(\hat\gamma)$ with $V(\zeta_{i+1})$ via $V_i\cup U_{i,i+1}$, thus avoiding $V(\zeta_{i-1})$,
\item $V(\zeta_{i-1})$ with $V(\zeta_{i+1})$ via $U_{i-1,i}\cup V(\zeta_i)\cup U_{i,i+1}$, thus avoiding $V(\hat\gamma)$.
\end{itemize}
Now, let $\hat\gamma$ be a secondary component in $\CC{G}(U_{i,i+1}\setminus(R_i\cup R_{i,i+1}\cup R_{i+1}))$ with $1\leq i<m$.
There are paths in $G$ connecting
\begin{itemize}
\item $V(\zeta_i)$ with $V(\hat\gamma)$ via $U_{i,i+1}$, thus avoiding $V(\zeta_{i+1})$,
\item $V(\hat\gamma)$ with $V(\zeta_{i+1})$ via $U_{i,i+1}$, thus avoiding $V(\zeta_i)$.
\end{itemize}
We still need a path in $G$ connecting $V(\zeta_i)$ with $V(\zeta_{i+1})$ that avoids $V(\hat\gamma)$.
Since $\zeta_{i,i+1}$ is connected and $V(\zeta_i)\cup V(\zeta_{i+1})\subseteq V(\zeta_{i,i+1})$, there is a $(U\cap V(\zeta_{i,i+1}))$-path connecting $V(\zeta_i)$ and $V(\zeta_{i+1})$.
Let $X$ be the set of vertices in $U\cap V(\zeta_{i,i+1})$ that are connected by a $(U\cap V(\zeta_{i,i+1}))$-path to $V(\zeta_i)$ and to $V(\zeta_{i+1})$.
We claim that $N_G(X\setminus(R_i\cup R_{i,i+1}\cup R_{i+1}))\subseteq R_i\cup R_{i,i+1}\cup R_{i+1}$.
Suppose not, and let $v$ be a vertex in $V(G)\setminus(X\cup R_i\cup R_{i,i+1}\cup R_{i+1})$ with a neighbor in $X\setminus(R_i\cup R_{i,i+1}\cup R_{i+1})$.
Since $N_G(V(\zeta_i))=R_i$ and $N_G(V(\zeta_{i+1}))=R_{i+1}$, we have $v\notin V(\zeta_i)\cup V(\zeta_{i+1})$.
It follows that $v\in U$ and $v$ is connected by a $(U\cap V(\zeta_{i,i+1}))$-path to $V(\zeta_i)$ and to $V(\zeta_{i+1})$.
Since $v\notin X$, we have $v\notin V(\zeta_{i,i+1})$ and thus $v\in N_G(V(\zeta_{i,i+1}))\subseteq B_{\zeta_{i,i+1}}$.
Let $\sigma$ be the node on the path from $\zeta_{i,i+1}$ to the root in $T$ such that $v\in A_\sigma$.
Thus $V(\zeta_{i,i+1})\subseteq V(\sigma)$.
We conclude that $v\in U\cap B_{\zeta_{i,i+1}}$ and $v$ is connected by a $(U\cap V(\sigma))$-path to $V(\zeta_i)$ and to $V(\zeta_{i+1})$, which contradicts the assumption that $v\notin R_{i,i+1}$.
Since $\hat\gamma$ is a secondary component, there is a parent component $\gamma\in\CC{G}(U_{i,i+1}\setminus\bigcup_{\xi\in V(Q_{i,i+1})}B_\xi)$ such that $V(\gamma)\cap V(\zeta_{i,i+1})=\emptyset$ and $V(\gamma)\subseteq V(\hat\gamma)$, so $V(\hat\gamma)\nsubseteq X$.
This implies that $V(\hat\gamma)\cap X=\emptyset$, as $\hat\gamma\in\CC{G}(U_{i,i+1}\setminus(R_i\cup R_{i,i+1}\cup R_{i+1}))$ and $N_G(X\setminus(R_i\cup R_{i,i+1}\cup R_{i+1}))\subseteq R_i\cup R_{i,i+1}\cup R_{i+1}$.
Therefore, any $X$-path in $G$ connecting $V(\zeta_i)$ and $V(\zeta_{i+1})$ (which exists) avoids $V(\hat\gamma)$.
\end{proof}
This completes the description of the procedure $\solve(G,b)$.
Since all recursive calls of the form $\solve(\gamma,c)$ that it makes are for proper connected induced subgraphs $\gamma$ of $G$ (and for $c\leq b$), the procedure terminates and correctly computes the requested outcome.
\subsection{Complexity Analysis}
\label{subsec:complexity}
The algorithm consists in running $\solve(G^{\mathrm{in}},\infty)$ on the input graph $G^{\mathrm{in}}$.
It makes further recursive calls to $\solve(\beta,b)$ for various connected induced subgraphs $\beta$ of $G^{\mathrm{in}}$ and upper bound requests $b$.
Let every pair $(\beta,b)$ such that $\solve(\beta,b)$ is run by the algorithm (somewhere in the recursion tree) be called a \emph{subproblem}.
We show that if $G^{\mathrm{in}}$ has $n$ vertices, then there are only $O(n\log n)$ subproblems.
A \emph{pruned subproblem} is a subproblem $(\beta,b)$ for which the run of $\solve(\beta,b)$ is pruned, which implies $h(\beta,b)=b$.
Observe that unless $(\beta,b)$ is pruned, the run of $\solve(\beta,b)$ performs operations that do not depend on the value of $b$ (including the operations performed in all recursive calls).
Indeed, unless $(\beta,b)$ is pruned, no primary recursive call made by $\solve(\beta,b)$ is pruned, so (by induction) these calls perform operations and lead to results that do not depend on $b$, and every secondary recursive call made by $\solve(\beta,b)$ is of the form $\solve(\hat\gamma,k)$ where $k$ does not depend on $b$.
A \emph{key subproblem} is a subproblem $(\beta,b)$ for which the run of $\solve(\beta,b)$ is a key run, that is, $\size{Z}\geq 2$ and $H$ is a path.
In particular, a key subproblem is not pruned.
A \emph{key subgraph} is a connected induced subgraph $\beta$ of $G^{\mathrm{in}}$ such that $(\beta,b)$ is a key subproblem for some $b\in\mathbb{N}\cup\{\infty\}$.
For every key subgraph $\beta$, let $\level(\beta)$ denote the value of $k$ defined in a key run of $\solve(\beta,b)$, that is, the maximum of $h(\gamma,b)$ over all $\gamma\in\Sub(\beta)\setminus\{\beta\}$.
(As we observed above, these values do not depend on $b$, so neither does $\level(\beta)$.)
For every key subgraph $\beta$, let $R(\beta)$ be the set $R=R_1\cup R_{1,2}\cup R_2\cup\cdots\cup R_{m-1,m}\cup R_m$ defined in a key run of $\solve(\beta,b)$.
(As before, it does not depend on $b$.)
For every $k\in\mathbb{N}$, let $R^k$ be the union of the sets $R(\beta)$ over all key subgraphs $\beta$ such that $\level(\beta)=k$.
(We have $R^k=\emptyset$ if there are no such key subgraphs.)
\begin{lemma}
\label{lem:level}
The following holds for any key subgraphs\/ $\alpha$ and\/ $\beta$ such that\/ $\alpha\in\Sub(\beta)$.
\begin{enumerate}
\item\label{item:level-order} $\level(\alpha)\leq\level(\beta)$.
\item\label{item:level-less} If\/ $\level(\alpha)<\level(\beta)$, then\/ $R(\beta)\cap V(\alpha)=\emptyset$.
\item\label{item:level-equal} If\/ $\level(\alpha)=\level(\beta)$, then\/ $R(\beta)\cap V(\alpha)=R(\alpha)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $\alpha\in\Sub(\beta)$, it follows from Lemma~\ref{lem:Tin}~\ref{item:Tin-recursion} that $\Sub(\alpha)\subseteq\Sub(\beta)$.
This directly shows~\ref{item:level-order}, by the definition of level.
Let $(T,\{B_\sigma\}_{\sigma\in\Sub(\beta)})$ be the normalized tree decomposition of $\beta$ used in a key run of $\solve$ on $\beta$.
(Its construction from the input tree decomposition of $G^{\mathrm{in}}$ depends only on $\beta$.)
Lemma~\ref{lem:Tin}~\ref{item:Tin-recursion} implies that $(T^\alpha,\{B_\sigma\cap V(\alpha)\}_{\sigma\in\Sub(\alpha)})$, where $T^\alpha$ is the subtree of $T$ comprised of $\alpha$ and the nodes below $\alpha$ in $T$, is the normalized tree decomposition of $\alpha$ used in any key run of $\solve$ on $\alpha$.
Let $Z$, $m$, $H$, $U$, $\zeta_i$, $\zeta_{i,i+1}$, $B_i$, $R_i$, and $R_{i,i+1}$ be defined as in a key run of $\solve$ on $\beta$.
For the proof of~\ref{item:level-less}, assume $\level(\alpha)<\level(\beta)$.
The definition of level implies that none of the nodes in $Z$ lie below $\alpha$ in $T$.
Therefore, for $1\leq i\leq m$, we have $R_i\cap V(\alpha)=\emptyset$, as $R_i\subseteq B_i\setminus V(\zeta_i)$, and for $1\leq i<m$, we have $R_{i,i+1}\cap V(\alpha)=\emptyset$, as $R_{i,i+1}\subseteq B_{\zeta_{i,i+1}}$, where the node $\zeta_{i,i+1}$ lies above $\zeta_i$ and $\zeta_{i+1}$ in $T$.
This yields $R(\beta)\cap V(\alpha)=\emptyset$.
For the proof of~\ref{item:level-equal}, assume $\level(\alpha)=\level(\beta)$.
Let $Z^\alpha$, $H^\alpha$, and $U^\alpha$ be the respective $Z$, $H$, and $U$ defined in any key run of $\solve$ on $\alpha$.
(We keep using $Z$, $H$, $U$, and other notations with no superscript to denote what is defined in a key run of $\solve$ on $\beta$.)
It follows that $Z^\alpha$ is the set of nodes in $Z$ that lie below $\alpha$ in $T$.
Since $Z^\alpha\neq\emptyset$ and the sets $V(\zeta_1),\ldots,V(\zeta_m)$ are pairwise disjoint, we have $\alpha\notin\bigcup_{i=1}^m\Sub(\zeta_i)$.
This and Lemma~\ref{lem:Tin}~\ref{item:Tin-laminar} imply $V(\zeta_i)\cap V(\alpha)=\emptyset$ for every $\zeta_i\in Z\setminus Z^\alpha$.
It follows that $U^\alpha=U\cap V(\alpha)$.
Consequently, the path $H^\alpha$ is a subpath of $H$, and $Z^\alpha=\{\zeta_r,\ldots,\zeta_s\}$, where $1\leq r<s\leq m$.
Let $R^\alpha_r,R^\alpha_{r,r+1},R^\alpha_{r+1},\ldots,R^\alpha_{s-1,s},R^\alpha_s$ denote the sets $R_1,R_{1,2},R_2,\ldots,R_{m-1,m},R_m$ defined in a key run of $\solve$ on $\alpha$ in this or the reverse order matching the order of $\zeta_r,\ldots,\zeta_s$.
Let $1\leq i\leq m$.
If $\zeta_i\in Z^\alpha$, then $R^\alpha_i=U^\alpha\cap B_i\cap V(\alpha)=U\cap B_i\cap V(\alpha)=R_i\cap V(\alpha)$.
If $\zeta_i\notin Z^\alpha$, then $\zeta_i$ does not lie below $\alpha$ in $T$, so $R_i\cap V(\alpha)=(B_i\setminus V(\zeta_i))\cap V(\alpha)=\emptyset$.
Now, let $1\leq i<m$.
Suppose $\zeta_i,\zeta_{i+1}\in Z^\alpha$.
It follows that $\zeta_{i,i+1}$ is a node in $T^\alpha$.
For each node $\sigma$ on the path between $\zeta_{i,i+1}$ and $\alpha$ in $T$, the set $A_\sigma$ contributes the same vertices to both $R^\alpha_{i,i+1}$ and $R_{i,i+1}$, namely, the vertices connected by a $(U\cap V(\sigma))$-path to $V(\zeta_i)$ and to $V(\zeta_{i+1})$, as $V(\sigma)\subseteq V(\alpha)$.
For each node $\sigma$ above $\alpha$ in $T$, the set $A_\sigma$ contributes no vertices to $R^\alpha_{i,i+1}$, and the vertices it contributes to $R_{i,i+1}$ are not in $V(\alpha)$, as $A_\sigma\cap V(\alpha)=\emptyset$.
Thus $R^\alpha_{i,i+1}=R_{i,i+1}\cap V(\alpha)$.
On the other hand, if $\zeta_i\notin Z^\alpha$ or $\zeta_{i+1}\notin Z^\alpha$, then $\zeta_{i,i+1}$ does not lie in $T^\alpha$, so $B_{\zeta_{i,i+1}}\cap V(\alpha)=\emptyset$ and thus $R_{i,i+1}\cap V(\alpha)=\emptyset$.
We conclude that
\[\begin{split}
R(\alpha)&=\smash[b]{R^\alpha_r\cup R^\alpha_{r,r+1}\cup R^\alpha_{r+1}\cup\cdots\cup R^\alpha_{s-1,s}\cup R^\alpha_s}\\
&=(\smash[b]{R_1\cup R_{1,2}\cup R_2\cup\cdots\cup R_{m-1,m}\cup R_m})\cap V(\alpha)\\
&=R(\beta)\cap V(\alpha).\qedhere
\end{split}\]
\end{proof}
The following lemma will finally allow us to bound the total number of subproblems.
\begin{lemma}
\label{lem:subproblems}
For every\/ $k\in\mathbb{N}\cup\{\infty\}$, every subproblem\/ $(\gamma,k)$ satisfies\/ $\gamma\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$.
\end{lemma}
\begin{proof}
We prove the following two statements for all $k\in\mathbb{N}\cup\{\infty\}$, the first of which is the statement of the lemma.
\begin{enumroman}
\item\label{item:subproblems-1} Every subproblem $(\gamma,k)$ satisfies $\gamma\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$.
\item\label{item:subproblems-2} Every key subproblem $(\alpha,a)$ with $a\geq k>\level(\alpha)$ satisfies $\alpha\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$.
\end{enumroman}
For every $k\in\mathbb{N}$ greater than the maximum finite upper bound request used by the algorithm, \ref{item:subproblems-1} is vacuous and \ref{item:subproblems-2} is equivalent to the case $k=\infty$.
Therefore, we can prove \ref{item:subproblems-1} and \ref{item:subproblems-2} by ``downward induction'' with the above-mentioned values of $k$ being the ``base case''.
Specifically, let $k\in\mathbb{N}\cup\{\infty\}$ be maximum for which \ref{item:subproblems-1} or \ref{item:subproblems-2} supposedly fails.
In particular, we assume that \ref{item:subproblems-2} holds for values greater than $k$, which is equivalent to the following.
\begin{enumerate}[label={($*$)}]
\item\label{item:subproblems-IH} Every key subproblem $(\alpha,a)$ with $a>k\geq\level(\alpha)$ satisfies $\alpha\in\Sub(G^{\mathrm{in}}-\bigcup_{i>k}R^i)$.
\end{enumerate}
For the proof of \ref{item:subproblems-1}, consider a subproblem $(\gamma,k)$, and assume without loss of generality that \ref{item:subproblems-1} holds for all subproblems $(\beta,k)$ with $V(\gamma)\subset V(\beta)$.
Since $G^{\mathrm{in}}\in\Sub(G^{\mathrm{in}})$, \ref{item:subproblems-1} holds when $(\gamma,k)=(G^{\mathrm{in}},\infty)$.
Thus assume $(\gamma,k)\neq(G^{\mathrm{in}},\infty)$, so that a primary or secondary recursive call to $\solve(\gamma,k)$ occurs in the algorithm.
If there is a primary call to $\solve(\gamma,k)$ from $\solve(\beta,k)$ where $\gamma\in\Sub(\beta)\setminus\{\beta\}$, then $\beta\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$ implies $\gamma\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$ by Lemma~\ref{lem:Tin}~\ref{item:Tin-recursion}.
Thus assume the other case, namely, that the algorithm makes a secondary call to $\solve(\gamma,k)$ from $\solve(\alpha,a)$ for some key subproblem $(\alpha,a)$ with $a>k=\level(\alpha)$.
By \ref{item:subproblems-IH}, we have $\alpha\in\Sub(G^{\mathrm{in}}-\bigcup_{i>k}R^i)$.
We claim that $R^k\cap V(\alpha)=R(\alpha)$.
It is clear that $R^k\cap V(\alpha)\supseteq R(\alpha)$.
Now, let $v\in R^k\cap V(\alpha)$.
Let $(\beta,b)$ be a key subproblem with $b>k=\level(\beta)$ such that $v\in R(\beta)$ (which exists by the definition of $R^k$).
By \ref{item:subproblems-IH}, we have $\beta\in\Sub(G^{\mathrm{in}}-\bigcup_{i>k}R^i)$.
The fact that $v\in V(\alpha)\cap V(\beta)$ and Lemma~\ref{lem:Tin}~\ref{item:Tin-laminar} yield $\alpha\in\Sub(\beta)$ or $\beta\in\Sub(\alpha)$.
If $\alpha\in\Sub(\beta)$, then Lemma~\ref{lem:level}~\ref{item:level-equal} yields $v\in R(\beta)\cap V(\alpha)=R(\alpha)$.
If $\beta\in\Sub(\alpha)$, then Lemma~\ref{lem:level}~\ref{item:level-equal} (with $\alpha$ and $\beta$ interchanged) yields $v\in R(\beta)\subseteq R(\alpha)$.
This completes the proof of the claim.
Being a secondary component in $\solve(\alpha,a)$, $\gamma$ is a connected component of $\alpha-R(\alpha)$ and thus of $\alpha-R^k$, as $R^k\cap V(\alpha)=R(\alpha)$.
Since $\alpha\in\Sub(G^{\mathrm{in}}-\bigcup_{i>k}R^i)$, Lemma~\ref{lem:Tin}~\ref{item:Tin-component} applied with $G=G^{\mathrm{in}}-\bigcup_{i>k}R^i$ and $X=V(G^{\mathrm{in}})\setminus\bigcup_{i\geq k}R^i$ yields $\gamma\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$.
For the proof of \ref{item:subproblems-2}, let $(\alpha,a)$ be a key subproblem with $a\geq k>\level(\alpha)$.
If $a=k$, then $\alpha\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$ by \ref{item:subproblems-1}, so assume $a>k$.
By~\ref{item:subproblems-IH}, we have $\alpha\in\Sub(G^{\mathrm{in}}-\bigcup_{i>k}R^i)$.
Suppose there is a vertex $v\in R^k\cap V(\alpha)$.
As before, let $(\beta,b)$ be a key subproblem with $b>k=\level(\beta)$ such that $v\in R(\beta)$.
By~\ref{item:subproblems-IH}, we have $\beta\in\Sub(G^{\mathrm{in}}-\bigcup_{i>k}R^i)$.
The fact that $v\in V(\alpha)\cap V(\beta)$ and Lemma~\ref{lem:Tin}~\ref{item:Tin-laminar} yield $\alpha\in\Sub(\beta)$ or $\beta\in\Sub(\alpha)$.
If $\alpha\in\Sub(\beta)$, then Lemma~\ref{lem:level}~\ref{item:level-less} yields $R(\beta)\cap V(\alpha)=\emptyset$, which is a contradiction.
If $\beta\in\Sub(\alpha)$, then $\level(\beta)\leq\level(\alpha)$ by Lemma~\ref{lem:level}~\ref{item:level-order} (with $\alpha$ and $\beta$ interchanged), which is again a contradiction.
Thus $R^k\cap V(\alpha)=\emptyset$, which implies $\alpha\in\Sub(G^{\mathrm{in}}-\bigcup_{i\geq k}R^i)$ by Lemma~\ref{lem:Tin}~\ref{item:Tin-restrict}.
\end{proof}
Let $n=\size{V(G^{\mathrm{in}})}$.
Lemma~\ref{lem:normalized}~\ref{item:normalized-A} implies that $\size{\Sub(G^{\mathrm{in}}[X])}\leq\size{X}\leq n$ for every $X\subseteq V(G^{\mathrm{in}})$.
This and Lemma~\ref{lem:subproblems} imply that there are at most $n$ subproblems of the form $(\gamma,k)$ for every $k\in\mathbb{N}\cup\{\infty\}$.
Every finite upper bound request $k$ used in the algorithm is $O(\log n)$, because it occurs on some secondary call to $\solve(\gamma,k)$ from $\solve(\alpha,a)$ where $\alpha$ has a $\mathcal{T}_k$-witness, so $\size{V(\alpha)}\geq 3^k$.
Therefore, the total number of subproblems is $O(n\log n)$.
Since the operations performed in a single~pass of $\solve$ (excluding the recursive calls) clearly take polynomial time, we conclude that the full run of $\solve(G^{\mathrm{in}},\infty)$ takes polynomial time.
This completes the proof of Theorem~\ref{thm:algorithm}.
\section{Tightness}
\label{sec:tight}
Theorem~\ref{thm1} asserts that every graph with pathwidth at least $th+2$ has treewidth at least $t$ or contains a subdivision of a complete binary tree of height $h+1$.
While this statement is true for all positive integers $t$ and $h$, we remark that the interesting case is when $h>\log_2t-2$.
Indeed, if $h\leq\log_2t-2$, then the second outcome is known to hold for every graph with pathwidth at least $t$; this follows from a result of Bienstock, Robertson, Seymour, and Thomas~\cite{BRST91}.%
\footnote{In~\cite{BRST91}, it is proved that for every forest $F$, graphs with no $F$ minors have pathwidth at most $\size{V(F)}-2$.
In particular, if $G$ contains no subdivision of a complete binary tree of height $h+1$, then $\pw(G)<2^{h+2}\leq t$.}
We now show that Theorem~\ref{thm1} is tight up to a multiplicative factor when $h>\log_2t-2$.
\begin{theorem}
\label{thm:tight}
For any positive integers\/ $t$ and\/ $h$, there is a graph with treewidth\/ $t$ and pathwidth at least\/ $t(h+1)-1$ that contains no subdivision of a complete binary tree of height\/ $3\max(h+1,\lceil\log_2t\rceil)$.
\end{theorem}
Fix a positive integer $t$.
For a tree $T$, let $\blowup{T}$ be a graph obtained from $T$ by replacing every node of $T$ with a clique on $t$ vertices and replacing every edge of $T$ with an arbitrary perfect matching between the corresponding cliques.
For $h\in\mathbb{N}$, let $T_h$ be a complete ternary tree of height $h$.
The following three claims show that the graph $\blowup{T_h}$ satisfies the three conditions stated in Theorem~\ref{thm:tight}, thus proving the theorem.
\begin{claim}
If\/ $T$ is a tree on at least two vertices, then\/ $\tw(\blowup{T})=t$.
\end{claim}
\begin{proof}
For each node $x$ of $T$, let $B_x$ be the clique of $t$ vertices in $\blowup{T}$ corresponding to $x$.
A tree decomposition of $\blowup{T}$ of width $t$ is obtained from $T$ by taking $B_x$ as the bag of every node $x$ of $T$ and by subdividing every edge $xy$ of $T$ into a path of length $t+1$ with the following sequence of bags, assuming that the vertices $u_1,\ldots,u_t$ in $B_x$ are matched to $v_1,\ldots,v_t$ in $B_y$, respectively:
\[\{u_1,\ldots,u_t\},\enspace\{u_1,\ldots,u_t\}\cup\{v_1\},\enspace\{u_2,\ldots,u_t\}\cup\{v_1,v_2\},\enspace\ldots,\enspace\{u_t\}\cup\{v_1,\ldots,v_t\},\enspace\{v_1,\ldots,v_t\}.\]
This way, for every matching edge $u_iv_i$ with $1\leq i\leq t$, there is a bag containing its two endpoints.
Consequently, this is a valid tree decomposition of $\blowup{T}$ with bags of size at most $t+1$.
For the proof that $\tw(\blowup{T})\geq t$, let $xy$ be an edge of $T$, and assume that the vertices $u_1,\ldots,u_t$ of $B_x$ are matched to $v_1,\ldots,v_t$ in $B_y$, respectively, as before.
In any tree decomposition of $\blowup{T}$, there is a node $x'$ whose bag contains the clique $B_x$ and a node $y'$ whose bag contains the clique $B_y$.
Walk on the path from $x'$ to $y'$ and stop at the first node whose bag contains some vertex in $B_y$.
This bag must also contain all of $B_x$, so it has size at least $t+1$.
\end{proof}
\begin{claim}
For every\/ $h\in\mathbb{N}$, we have\/ $\pw(\blowup{T_h})\geq t(h+1)-1$.
\end{claim}
\begin{proof}
We define the \emph{root clique} of $\blowup{T_h}$ as the clique in $\blowup{T_h}$ corresponding to the root of $T_h$.
We prove the following slightly stronger statement, by induction on $h$: in every path decomposition of $\blowup{T_h}$, there are a bag $B$ of size at least $t(h+1)$ and $t$ vertex-disjoint paths in $\blowup{T_h}$ each having one endpoint in the root clique and the other endpoint in $B$.
For the base case $h=0$, the graph $\blowup{T_0}$ is simply a complete graph on $t$ vertices, and the statement is trivial.
For the induction step, assume that $h\geq 1$ and the statement is true for $h-1$.
Let $R$ be the root clique of $\blowup{T_h}$.
Let $(P,\{B_x\}_{x\in V(P)})$ be a path decomposition of $\blowup{T_h}$ of minimum width.
The graph $\blowup{T_h}-R$ has three connected components $C_1$, $C_2$, and $C_3$ that are copies of $\blowup{T_{h-1}}$ with root cliques $R_1$, $R_2$, and $R_3$, respectively.
For each $i\in\{1,2,3\}$, the induction hypothesis applied to the path decomposition $(P,\{B_x\cap V(C_i)\}_{x\in V(P)})$ of $C_i$ provides a node $x_i$ of $P$ such that
\begin{itemize}
\item $\size{B_{x_i}\cap V(C_i)}\geq th$, and
\item there are $t$ vertex-disjoint paths in $C_i$ between $B_{x_i}\cap V(C_i)$ and the root clique $R_i$ of $C_i$.
\end{itemize}
Assume without loss of generality that the node $x_2$ occurs between $x_1$ and $x_3$ on $P$.
We prove the induction statement for $B=B_{x_2}$.
For each $i\in\{1,2,3\}$, we take the $t$ vertex-disjoint paths from $B_{x_i}$ to $R_i$ in $C_i$ and extend them by the matching between $R_i$ and $R$ to obtain $t$ vertex-disjoint paths from $B_{x_i}$ to $R$ in $\blowup{T_h}[R\cup V(C_i)]$.
In particular, there are $t$ vertex-disjoint paths from $B_{x_2}$ to $R$ in $\blowup{T_h}$, as required in the induction statement.
Since $\size{R}=t$, the $t$ paths from $B_{x_1}$ to $R$ and the $t$ paths from $B_{x_3}$ to $R$ together form $t$ vertex-disjoint paths from $B_{x_1}$ to $B_{x_3}$ in $\blowup{T_h}[V(C_1)\cup R\cup V(C_3)]$, which therefore avoid $V(C_2)$.
Since $x_2$ lies between $x_1$ and $x_3$ on $P$, the set $B_{x_2}\setminus V(C_2)$ must contain at least one vertex from each of these $t$ paths.
Thus $\size{B_{x_2}\setminus V(C_2)}\geq t$.
Since $\size{B_{x_2}\cap V(C_2)}\geq th$, we conclude that $\size{B_{x_2}}\geq t(h+1)$, as required in the induction statement.
\end{proof}
\begin{claim}
For any\/ $h\in\mathbb{N}$, the graph\/ $\blowup{T_h}$ contains no subdivision of a complete binary tree of height\/ $3\max(h+1,\lceil\log_2t\rceil)$.
\end{claim}
\begin{proof}
A simple calculation shows that $T_h$ has $\frac{3^{h+1}-1}{2}$ nodes.
Thus $\size{V(\blowup{T_h})}+1\leq 3^{h+1}t$ and so
\[\log_2\bigl(\size{V(\blowup{T_h})}+1\bigr)\leq\log_2(3^{h+1}t)\leq 3\max(h+1,\lceil\log_2t\rceil).\]
If a graph $G$ contains a subdivision of a complete binary tree of height $c$, then $\size{V(G)}\geq 2^{c+1}-1$ and so $\log_2(\size{V(G)}+1)\geq c+1$.
Therefore, $\blowup{T_h}$ cannot contain a subdivision of a complete binary tree of height $3\max(h+1,\lceil\log_2t\rceil)$.
\end{proof}
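The counting in the last proof is elementary arithmetic; as a quick sanity check (an editorial sketch, not part of the proof), one can verify the two displayed inequalities numerically for small $t$ and $h$:

```python
import math

# Check, for small t and h:
#  (i)  |V(blow-up of T_h)| + 1 = t*(3^(h+1)-1)/2 + 1 <= 3^(h+1)*t, and
#  (ii) log2(3^(h+1)*t) <= 3*max(h+1, ceil(log2 t)).
for t in range(1, 101):
    for h in range(0, 21):
        nodes = t * (3 ** (h + 1) - 1) // 2   # vertices of the blown-up ternary tree
        assert nodes + 1 <= 3 ** (h + 1) * t, (t, h)
        lhs = math.log2(3 ** (h + 1) * t)
        rhs = 3 * max(h + 1, math.ceil(math.log2(t)))
        assert lhs <= rhs + 1e-9, (t, h)
print("both inequalities hold for t <= 100, h <= 20")
```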
\section{Open Problem}
In Theorem~\ref{thm1}, we bound pathwidth by a function of treewidth in the absence of a subdivision of a large complete binary tree.
In~\cite{KR22,CNP21}, the authors bound treedepth by a function of treewidth in the absence of a subdivision of a large complete binary tree and of a long path.
Specifically, Czerwiński, Nadara, and Pilipczuk~\cite{CNP21} proved the following bound.\footnote{This bound is stated in~\cite{CNP21} for the case $t=h=\ell$, but the proof works as well for the general case.}
\begin{theorem}
\label{thm:CNP}
Every graph with treewidth\/ $t$ that contains no subdivision of a complete binary tree of height\/ $h$ and no path of order\/ $2^\ell$ has treedepth\/ $O(th\ell)$.
\end{theorem}
It is natural to ask how large treedepth can be as a function of pathwidth when there is no long path.
We offer the following conjecture.
\begin{conjecture}
\label{conjecture}
Every graph with pathwidth\/ $p$ that contains no path of order\/ $2^\ell$ has treedepth\/ $O(p\ell)$.
\end{conjecture}
This conjecture and Theorem~\ref{thm1} would directly imply Theorem~\ref{thm:CNP}.
We note that an $O(p^2\ell)$ bound on the treedepth follows from Theorem~\ref{thm:CNP}, as every graph with pathwidth $p$ has treewidth at most $p$ and contains no subdivision of a complete binary tree of height $2p+1$.
We also note that an $O(p\ell)$ bound on the treedepth would be best possible; see \cite[Section~7]{CNP21}.
\section*{Acknowledgments}
This research was started at the Structural Graph Theory workshop in Gułtowy, Poland, in June~2019.
We thank the organizers and the other workshop participants for creating a productive working atmosphere.
We also thank the anonymous referees for their helpful comments.
We are particularly grateful to one referee for pointing out a serious error in an earlier version of the paper.
\bibliographystyle{plain}
\section{Introduction}
A trader on the stock market is usually assumed to make
his decisions relying on all the information generated
by market events.
However, it is well documented that some people have more
detailed information than others, in the sense that they
act with present-time knowledge of some future event.
This is the so-called insider information, and the dealers
taking advantage of it are called insiders.
Financial markets with economic agents possessing additional knowledge have been studied in
a number of papers (see e.g.\ \cite{A00, AIS98, GP99, Z07}).
We take the approach originated in \cite{DuffieHuang} and \cite{Pikovsky},
assuming that the insider possesses some extra information, encoded in a random variable $G$,
which is known at the beginning of the trading interval and not available to the regular trader.
The typical examples of $G$ are $G=S_{T+\delta}$, $G=\mathbf{1}_{[a,b]}(S_{T+\delta})$ or
$G=\sup_{t\in[0,T+\delta]}S_t$ ($\delta >0$),
where $S$ is a semimartingale representing the discounted stock price process and $T$ is a fixed time
horizon until which the insider is allowed to trade.
In this paper we show how much better an insider can perform
on the market, and with which strategies, if he optimally uses
the extra information at his disposal.
The problem of pricing and perfect hedging of contingent claims is well understood in the
context of arbitrage-free models which are complete. In such models every contingent
claim can be replicated by a self-financing trading strategy. The cost
of replication equals the discounted expectation of
the claim under the unique equivalent martingale measure.
Moreover, this cost is the same for the insider and the regular trader.
Therefore, instead of this strategy we will employ the insider's quantile
hedging strategy for the replication, following the idea of
F\"{o}llmer and Leukert \cite{FL99, FL00}.
That is, we will seek the self-financing strategy that
\begin{itemize}
\item maximizes the probability of success of hedge under a given initial capital or
\item minimizes the initial capital under a given lower bound of the probability of the successful hedge.
\end{itemize}
This is the case when the insider is unwilling to put up the initial
capital required by perfect hedging.
This approach may also be seen as a
dynamic version of Value at Risk (VaR).
We use the powerful technique of enlargement of filtrations
({\it grossissement de filtrations}) developed by Yor, Jeulin, and Jacod \cite{J80, JY85},
and utilize the results of Amendinger
\cite{A00} and Amendinger et al.\ \cite{AIS98}.
The paper is organized as follows. In Section \ref{market} we present the main results. In Section \ref{numerical}
we analyze in detail some examples. Finally, in Section \ref{proofs} we give the proofs of the main results.
\section{Main results}\label{market}
Let $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a complete probability space
and $S=(S_t)_{t\geq 0}$ be an $(\mathbb{F},\mathbb{P})$-semimartingale representing the discounted stock price process.
Assume that the filtration $\mathbb{F}=(\mathcal{F}_t)_{t\geq 0}$ is the natural filtration of $S$ satisfying usual conditions with the trivial $\sigma$-algebra $\mathcal{F}_0$. Thus, the regular trader makes his portfolio decisions according to the information flow
$\mathbb{F}$. In addition to the regular trader we will consider the insider, whose knowledge will be modelled by the \textit{initial enlargement} of $\mathbb{F}$, that is, the filtration $\mathbb{G}=(\mathcal{G}_t)_{t\geq 0}$ given by:
$$\mathcal{G}_t=\mathcal{F}_t \vee
\sigma (G),
$$
where $G$ is an $\mathcal{F}$-measurable random variable. In particular, $G$ can be an
$\mathcal{F}_{T+\delta}$-measurable random variable ($\delta >0$) for $T$ being a fixed time horizon representing
the expiry date of the hedged contingent claim.
We will assume that the market is complete and arbitrage-free for the regular trader, hence there exists a unique equivalent martingale measure $\mathbb{Q}_{\mathbb{F}}$ such that $S$ is an $(\mathbb{F},\mathbb{Q}_{\mathbb{F}})$-martingale on $[0,T]$.
Denote by $(Z_{t}^{\mathbb{F}})_{t\in[0,T]}$ the density process of $\mathbb{Q}_\mathbb{F}$ with respect to $\mathbb{P}$, i.e.:
$$Z_{t}^{\mathbb{F}}=\left.\frac{{\rm d}\mathbb{Q}_\mathbb{F}}{{\rm d}\mathbb{P}}\right|_{\mathcal{F}_t}.$$
We will consider a contingent claim $H$, an $\mathcal{F}_T$-measurable nonnegative random variable, and the replicating
investment strategies for the insider, which are expressed in terms of integrals with respect to $S$.
To define them properly we assume that $S$ is a $(\mathbb{G},\mathbb{P})$-semimartingale
which follows from the requirement:
\begin{equation}
\mathbb{P}(G\in\cdot|\mathcal{F}_t)\ll \mathbb{P}(G\in\cdot)\qquad \mathbb{P}-{\rm a.s.} \label{abs_cont}
\end{equation}
for all $t\in [0,T]$ (see e.g. \cite{A00} and \cite{JY85}).
In fact, we assume from now on more, that is that the measure $\mathbb{P}(G\in\cdot|\mathcal{F}_t)$ and the law of $G$ are equivalent
for all $t\in [0,T]$:
\begin{equation}
\mathbb{P}(G\in\cdot|\mathcal{F}_t)\sim \mathbb{P}(G\in\cdot)\qquad \mathbb{P}-{\rm a.s.} \label{equiv}
\end{equation}
Under the condition (\ref{equiv}) there exists an equivalent
$\mathbb{G}$-martingale measure $\mathbb{Q}_{\mathbb{G}}$ defined by:
\begin{equation}\label{QG}
\mathbb{Q}_\mathbb{G}(A)=\int_{A}\frac{Z_{T}^{\mathbb{F}}}{p_{T}^{G}}(\omega){\rm d}\mathbb{P}(\omega),\qquad A\in\mathcal{G}_T,\end{equation}
where
$$p_{t}^{x}\;\mathbb{P}(G\in {\rm d}x)$$
is a version of $\mathbb{P}(G\in {\rm d}x|\mathcal{F}_t);$
see \cite{A00} and \cite{AIS98} (and also Theorems \ref{ther1} and \ref{ther2}).
For $\mathbb{H}\in\{\mathbb{F},\mathbb{G}\}$ we will consider only self-financing admissible trading strategies $(V_0,\xi)$ on $[0,T]$
for which the value process
$$V_{t}=V_{0}+\int_{0}^{t}\xi_u\;{\rm d}S_u,\qquad t\in[0,T],$$
is well defined, where an initial capital $V_{0}\geq 0$ is $\mathcal{H}_0$-measurable, a process $\xi$ is $\mathbb{H}$-predictable
and
$$V_t\geq 0,\qquad \mathbb{P}-{\rm a.s.}$$
for all $t\in [0,T]$.
Denote by $\mathcal{A}^{\mathbb{H}}$ the set of all admissible strategies associated with the filtration $\mathbb{H}\in\{\mathbb{F},\mathbb{G}\}$.
Under assumption (\ref{equiv}) the insider can perfectly replicate the contingent
claim $H\in L^{1}(\mathbb{Q}_\mathbb{F})\cap L^{1}(\mathbb{Q}_\mathbb{G})$:
$$\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(H|\mathcal{G}_t)=H_{0}+\int_{0}^{t}\xi_{u}\;{\rm d}S_{u},\qquad\mathbb{P}-{\rm a.s.}$$
where $H_{0}=\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(H|\mathcal{G}_0)$. Moreover, from \cite{A00} and \cite{AIS98} it follows that $H_{0}=\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H$ (see Theorem \ref{ther1}).
In this paper we will analyze the case when the insider is unwilling to pay the initial capital
$H_0$ required by a perfect hedge.
We will consider the following pair of dual problems.
\begin{problem}\label{P1}
Let $\alpha$ be a given $\mathcal{G}_0$-measurable random variable taking values in $[0,1]$. We are looking for a strategy $(\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H,\xi)\in\mathcal{A}^\mathbb{G}$ which maximizes for any realization of $G$ the insider's probability of a successful hedge
$$\mathbb{P}\left(\left.\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_t \;{\rm d}S_t\geq H\right|\mathcal{G}_0\right),\qquad \mathbb{P}-{\rm a.s.}$$
\end{problem}
\begin{problem}\label{P2}
Let $\epsilon$ be a given $\mathcal{G}_0$-measurable random variable taking values in $[0,1]$. We are looking for a minimal $\mathcal{G}_0$-measurable random variable $\alpha$ for which there exists $\xi$ such that $(\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H,\xi)\in\mathcal{A}^\mathbb{G}$ and
\begin{equation}\label{P2inequality}\mathbb{P}\left(\left.\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_t \;{\rm d}S_t\geq H \right|\mathcal{G}_0\right)\geq 1-\epsilon,\qquad \mathbb{P}-{\rm a.s.}\end{equation}
\end{problem}
\begin{remark}
Recall that in the quantile hedging problem for the regular trader we maximize the objective probability $\mathbb{P}(\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{F}}H+\int_0^T\xi_t \;{\rm d}S_t\geq H)$, where $\alpha$ is a number in $[0,1]$.
In Problems \ref{P1}--\ref{P2} we use conditional probability, since
the insider's perception of the market at time $t=0$ depends on the knowledge described by $\mathcal{G}_0$.
\end{remark}
We will call the set
\begin{equation}\label{succset}\left\{\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_t \;{\rm d}S_t\geq H\right\}\end{equation}
the \textit{success set}.
Denote
$$\mathbb{Q}^*(A)=\frac{\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(H\mathbf{1}_A)}{\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(H)},\qquad A\in\mathcal{G}_T.$$
The following theorems solve Problems \ref{P1} and \ref{P2}.
\begin{theorem}\label{maxprobab}
Suppose that there exists a $\mathcal{G}_0$-measurable random variable $k$ such that
\begin{equation}\label{eqk}\mathbb{Q}^*\left(\left.\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right)=\alpha.\end{equation}
Then the maximal probability of a success set solving Problem \ref{P1} equals:
$$\mathbb{P}\left(\left.\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right)$$
and it is realized by the strategy
$$\left(\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left[\left.H\mathbf{1}_{\left\{\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_0\right],\tilde{\xi}\right),$$
which replicates the payoff $H\mathbf{1}_{\left\{\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}$.
\end{theorem}
\begin{theorem}\label{minCostQH}
Suppose that there exists a $\mathcal{G}_0$-measurable random variable $k$ such that
\begin{equation}\label{eqk2}\mathbb{P}\left(\left.\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right)=1-\epsilon.\end{equation}
Then the minimal $\mathcal{G}_0$-measurable random variable $\alpha$ solving Problem \ref{P2} equals:
$$\mathbb{Q}^*\left(\left.\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right)$$
and it is realized by the strategy
$$\left(
\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left[\left.H\mathbf{1}_{\left\{\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_0\right],\tilde{\xi}\right)$$ being the perfect hedge of $H\mathbf{1}_{\left\{\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}.$
\end{theorem}
\begin{remark}
The assumption that there exists $k$ satisfying (\ref{eqk}) (respectively, (\ref{eqk2})) is satisfied if
$\mathbb{P}(Z_T^{\mathbb{F}}H=0|\mathcal{G}_0)<\alpha$ (respectively, $\mathbb{P}(Z_T^{\mathbb{F}}H=0|\mathcal{G}_0)<1-\epsilon$) and
$Z_T^{\mathbb{F}}H$ has a conditional density on $\mathbb{R}_+$ given $G=g$; see Section \ref{numerical} for examples.
\end{remark}
The proofs of these theorems are given in Section \ref{proofs}.
\section{Numerical examples}\label{numerical}
In this section we consider the standard Black-Scholes model
in which the price evolution is described by the equation
$${\rm d}S_{t}=\sigma S_{t}{\rm d}W_{t}+\mu S_{t}{\rm d}t,$$
where $W$ is a Brownian motion, $\sigma,\mu>0$.
For simplicity we assume that the interest rate is zero.
We analyze Problem \ref{P2} for two examples of insider information
and provide numerical results for pricing the vanilla call option, where
$$H=(S_T-K)^+$$
and $K$ is a strike price.
\subsection{The case of $G=W_{T+\delta}$}
This means that the insider knows the stock price $S_{T+\delta}$
after the expiry date $T$ (in the Black--Scholes model, knowing $W_{T+\delta}$ is equivalent to knowing $S_{T+\delta}$).
In this case we have:
\begin{eqnarray*}
\lefteqn{\mathbb{P}(W_{T+\delta}\in {\rm d}z|\mathcal{F}_t)=\mathbb{P}(W_{T+\delta}-W_{t}+W_{t}\in {\rm d}z|\mathcal{F}_t)}\\
&&= \frac{1}{\sqrt{2\pi(T+\delta-t)}}\exp\Big(-\frac{(z-W_{t})^{2}}{2(T+\delta-t)}\Big){\rm d}z\\
&& = p_{t}^{z}\mathbb{P}(W_{T+\delta}\in {\rm d}z),
\end{eqnarray*}
where
$$p_{t}^{z}=\sqrt{\frac{T+\delta}{T+\delta-t}}
\exp\Big(-\frac{(z-W_{t})^{2}}{2(T+\delta-t)}+\frac{z^{2}}{2(T+\delta)}\Big).$$
Therefore:
\begin{eqnarray*}
\lefteqn{ \left.\frac{{\rm d}\mathbb{Q}_\mathbb{G}}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}=\frac{Z_T^{\mathbb{F}}}{p_T^G}=
\frac{\left.\frac{{\rm d}\mathbb{Q}_\mathbb{F}}{{\rm d}\mathbb{P}}\right|_{\mathcal{F}_T}}{p_T^G}=
\frac{\exp\Big(-\frac{\mu}{\sigma}W_T-\frac{1}{2}\big(\frac{\mu}{\sigma}\big)^{2}T\Big)
}{\sqrt{\frac{T+\delta}{\delta}}\exp\Big(-\frac{(W_{T+\delta}-W_T)^{2}}{2\delta}
+\frac{W_{T+\delta}^{2}}{2(T+\delta)}\Big)}}\\
&& = \sqrt{\frac{\delta}{T+\delta}}\exp\Big(-\frac{\mu}{\sigma}W_T-\frac{1}{2}
\Big(\frac{\mu}{\sigma}\Big)^{2}T+\frac{(W_{T+\delta}-W_T)^{2}}{2\delta}-\frac{W_{T+\delta}^{2}}{2(T+\delta)}\Big)
\end{eqnarray*}
and
\begin{equation}\label{eq1ex1}\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}
=\frac{H}{\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H}\left.\frac{{\rm d}\mathbb{Q}_\mathbb{G}}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}.\end{equation}
Note that
$\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}$ is a random variable with the conditional density
on $\mathbb{R}_+$ given $G=g$ and for given $\epsilon\in[0,1]$ we can find a $\mathcal{G}_0$-measurable random variable $k$ such that
$\mathbb{P}\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right)=1-\epsilon$
if $\mathbb{P}(H=0|\mathcal{G}_0)<1-\epsilon$. Therefore, by Theorem \ref{minCostQH} the cost of quantile hedging for the insider can be reduced in this case by the factor:
\begin{equation}\label{eq2ex1}\alpha=\mathbb{Q}^*\left(\left.\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right).\end{equation}
Below, we provide the values of $\alpha$ for $\mu=0.08$, $\sigma=0.25$, $S_0=100$, $K=110$, $T=0.25$, $\delta=0.02$, and different values of $G$ and $\epsilon$. In the programme we use the simple fact that $\mathbb{E}[f(W_T)|G=g]=f(g-W(\delta))$ for a measurable function $f$.\newline
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
~ & \multicolumn{12}{|c|}{$G$}\\
\hline
~ & ~ & 105 & 106 & 107 & 108 & 109 & 110 & 111 & 112 & 113 & 114 & 115\\
\hline
\multirow{6}{*}{$\epsilon$} & 0.01 & 0.05 & 0.09 & 0.13 & 0.17 & 0.22 & 0.27 & 0.32 & 0.37 & 0.42 & 0.46 & 0.51\\
& 0.05 & $<0.01$ & 0.01 & 0.04 & 0.07 & 0.10 & 0.14 & 0.18 & 0.23 & 0.28 & 0.33 & 0.38\\
& 0.10 & $<0.01$ & $<0.01$ & 0.01 & 0.03 & 0.05 & 0.08 & 0.12 & 0.16 & 0.21 & 0.25 & 0.30\\
& 0.15 & $<0.01$ & $<0.01$ & $<0.01$ & 0.01 & 0.03 & 0.05 & 0.08 & 0.12 & 0.16 & 0.21 & 0.25\\
& 0.20 & $<0.01$ & $<0.01$ & $<0.01$ & $<0.01$ & 0.01 & 0.03 & 0.06 & 0.09 & 0.13 & 0.17 & 0.21\\
& 0.25 & $<0.01$ & $<0.01$ & $<0.01$ & $<0.01$ & $<0.01$ & 0.02 & 0.04 & 0.07 & 0.10 & 0.14 & 0.18\\
\hline
\end{tabular}
\end{center}
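The entries of the table can be reproduced, at least approximately, by a short Monte Carlo routine. The sketch below is only an illustration and not the authors' programme: the function names are ours, the conditional law of $W_T$ given $G=g$ is simulated by the recipe $g-W(\delta)$ stated above, the tabulated stock-price level $G$ is converted to a Brownian level assuming $S_{T+\delta}=S_0\exp\big((\mu-\sigma^2/2)(T+\delta)+\sigma W_{T+\delta}\big)$, and $\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H$ is taken to be the zero-rate Black--Scholes call price (which is consistent with Theorem \ref{ther1}, since $\mathbb{Q}_\mathbb{G}=\mathbb{Q}_\mathbb{F}$ on $\mathcal{F}_T$).

```python
import math
import random

def bs_call_zero_rate(S0, K, sigma, T):
    """Zero-interest-rate Black-Scholes call price, i.e. E_{Q_F}(S_T - K)^+."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    return S0 * Phi(d1) - K * Phi(d1 - sigma * math.sqrt(T))

def alpha_insider(G, eps, mu=0.08, sigma=0.25, S0=100.0, K=110.0,
                  T=0.25, delta=0.02, n_sim=50_000, seed=1):
    """Monte Carlo sketch of the reduction factor alpha of (eq2ex1),
    conditionally on the insider's information G = S_{T+delta}."""
    rng = random.Random(seed)
    # stock-price level G -> Brownian level g = W_{T+delta}
    g = (math.log(G / S0) - (mu - 0.5 * sigma ** 2) * (T + delta)) / sigma
    EH = bs_call_zero_rate(S0, K, sigma, T)    # E_{Q_G} H
    L = []
    for _ in range(n_sim):
        z = rng.gauss(0.0, 1.0)
        wT = g - math.sqrt(delta) * z          # recipe: W_T given G = g
        ST = S0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * wT)
        H = max(ST - K, 0.0)
        # dQ_G/dP restricted to G_T; here (W_{T+delta}-W_T)^2/(2 delta) = z^2/2
        Z = math.sqrt(delta / (T + delta)) * math.exp(
            -(mu / sigma) * wT - 0.5 * (mu / sigma) ** 2 * T
            + 0.5 * z * z - g * g / (2.0 * (T + delta)))
        L.append(H * Z / EH)                   # dQ*/dP restricted to G_T
    L.sort()
    k = L[min(int((1.0 - eps) * n_sim), n_sim - 1)]  # P(L <= k | G_0) ~ 1 - eps
    return sum(x for x in L if x <= k) / n_sim       # alpha = E_P[L 1_{L<=k} | G_0]
```

Up to Monte Carlo error and the approximation in the simulated conditional law, a call such as `alpha_insider(110.0, 0.05)` should be of the same order as the tabulated values; we make no claim that it matches them exactly.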
~\newline
\subsection{The case of $G=\mathbf{1}_{\left\{W_{T+\delta}\in[a,b]\right\}}$}
In this example the insider knows whether the stock price $S_{T+\delta}$
lies in a given range after the expiry date $T$.
A straightforward calculation yields:
\begin{eqnarray*}
\mathbb{P}(G=1|\mathcal{F}_t) & = & \frac{1}{\sqrt{2\pi(T+\delta-t)}}\int_{a-W_t}^{b-W_t}\exp\Big(-\frac{u^2}{2(T+\delta-t)}\Big)\;{\rm d}u\\
~ & = & \Phi\left(\frac{b-W_t}{\sqrt{T+\delta-t}}\right)-\Phi\left(\frac{a-W_t}{\sqrt{T+\delta-t}}\right),\\
\end{eqnarray*}
where $\Phi$ is the c.d.f. of the standard normal distribution. Thus,
$$p_t^1=\frac{\mathbb{P}(G=1|\mathcal{F}_t)}{\mathbb{P}(G=1)}=\frac{\Phi\left(\frac{b-W_t}{\sqrt{T+\delta-t}}\right)-\Phi\left(\frac{a-W_t}{\sqrt{T+\delta-t}}\right)}{\Phi\left(\frac{b}{\sqrt{T+\delta}}\right)-\Phi\left(\frac{a}{\sqrt{T+\delta}}\right)},$$
and similarly
$$p_t^0=\frac{1+\Phi\left(\frac{a-W_t}{\sqrt{T+\delta-t}}\right)-\Phi\left(\frac{b-W_t}{\sqrt{T+\delta-t}}\right)}{1+\Phi\left(\frac{a}{\sqrt{T+\delta}}\right)-\Phi\left(\frac{b}{\sqrt{T+\delta}}\right)}.$$
Hence
\begin{eqnarray*}
\lefteqn{\left.\frac{{\rm d}\mathbb{Q}_\mathbb{G}}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T} = \frac{\exp\left(-\frac{\mu}{\sigma}W_T-\frac{1}{2}\left(\frac{\mu}{\sigma}\right)^2T\right)}{
p_T^0\mathbf{1}_{\{G=0\}}+p_T^1\mathbf{1}_{\{G=1\}}}}\\
&& = \left.\exp\left(-\frac{\mu}{\sigma}W_T-\frac{1}{2}\left(\frac{\mu}{\sigma}\right)^2T\right)\right/
\Bigg(\mathbf{1}_{\{G=0\}}\frac{1+\Phi\left(\frac{a-W_T}{\sqrt{\delta}}\right)-\Phi\left(\frac{b-W_T}{
\sqrt{\delta}}\right)}{1+\Phi\left(\frac{a}{\sqrt{T+\delta}}\right)-\Phi\left(\frac{b}{\sqrt{T+\delta}}\right)}\\
&&\hspace{7cm} +\mathbf{1}_{\{G=1\}}\frac{\Phi\left(\frac{b-W_T}{\sqrt{\delta}}\right)-\Phi\left(\frac{a-W_T}{\sqrt{\delta}}\right)}{
\Phi\left(\frac{b}{\sqrt{T+\delta}}\right)-\Phi\left(\frac{a}{\sqrt{T+\delta}}\right)}\Bigg)
\end{eqnarray*}
and $\mathbb{Q}^*$ and $\alpha$ are defined in (\ref{eq1ex1})-(\ref{eq2ex1}).
The table below provides the values of the optimal $\alpha$ for $\mu=0.08$, $\sigma=0.25$, $S_0=100$, $K=110$, $T=0.25$, $\delta=0.02$, $G=1$ and different values of $\epsilon$ and of the endpoints of the interval $[a,b]$ for $S_{T+\delta}$. In the programme we use the simple fact that $\mathbb{E}[f(W_T)|G=1]=\mathbb{E}[f(W_T), W_{T+\delta}\in[a,b]]/\mathbb{P}(G=1)$ for a measurable function $f$,
and to simulate the numerator we keep only those trajectories for which $W_{T+\delta}\in[a,b]$.
\newline
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
~ & \multicolumn{6}{|c|}{$[a,b]$}\\
\hline
~ & ~ & [109,111] & [108,112] & [107,113] & [112,114] & [106,108] \\
\hline
\multirow{6}{*}{$\epsilon$} & 0.01 & 0.272 & 0.284 & 0.296 & 0.413 & 0.135 \\
& 0.05 & 0.142 & 0.150 & 0.157 & 0.277 & 0.039 \\
& 0.10 & 0.087 & 0.088 & 0.095 & 0.209 & 0.010 \\
& 0.15 & 0.053 & 0.055 & 0.059 & 0.164 & 0.001 \\
& 0.20 & 0.032 & 0.033 & 0.034 & 0.129 & $<0.001$ \\
& 0.25 & 0.017 & 0.019 & 0.020 & 0.102 & $<0.001$ \\
\hline
\end{tabular}
\end{center}
\section{Proofs}\label{proofs}
Before we give the proofs of the main Theorems \ref{maxprobab} and \ref{minCostQH} we present a few
introductory lemmas and theorems. We start with the results of \cite{A00} and \cite{AIS98} concerning the properties of the equivalent martingale measure $\mathbb{Q}_\mathbb{G}$ for the insider. Recall that we assume throughout that
condition (\ref{equiv}) is satisfied.
\begin{theorem}\label{ther1}
\begin{itemize}
\item[(i)] The process $Z_t^{\mathbb{G}}:=\frac{Z^{\mathbb{F}}_t}{p_t^G}$ is a $(\mathbb{G},\mathbb{P})$-martingale.
\item[(ii)] The measure $\mathbb{Q}_\mathbb{G}$ defined in (\ref{QG}) has the following properties:
\subitem(a) $\mathcal{F}_T$ and $\sigma(G)$ are independent under $\mathbb{Q}_\mathbb{G}$;
\subitem(b) $\mathbb{Q}_\mathbb{G}=\mathbb{Q}_\mathbb{F}$ on $(\Omega,\mathcal{F}_T)$ and $\mathbb{Q}_\mathbb{G}=\mathbb{P}$ on $(\Omega,\sigma(G))$.
\end{itemize}
\end{theorem}
We are now in a position to state the theorem
which relates the martingale measures of the insider and the regular trader.
\begin{theorem}\label{ther2}
Let $X=(X_{t})_{t\geq 0}$ be an $\mathbb{F}$-adapted process. The following statements are equivalent:
\begin{itemize}
\item[(i)]$X$ is an $(\mathbb{F},\mathbb{Q}_\mathbb{F})$-martingale;
\item[(ii)]$X$ is an $(\mathbb{F},\mathbb{Q}_\mathbb{G})$-martingale;
\item[(iii)]$X$ is a $(\mathbb{G},\mathbb{Q}_\mathbb{G})$-martingale.
\end{itemize}
\end{theorem}
\begin{proof}
The equivalence of (i) and (ii) follows from the fact that
$\mathbb{Q}_\mathbb{F}=\mathbb{Q}_\mathbb{G}$ on $\mathcal{F}_T$. The implication $(iii)\Rightarrow (ii)$ is a consequence of the
tower property of conditional expectation.
Finally, taking $A=A_{s}\cap\{\omega\in \Omega: G(\omega)\in B\}$ ($A_{s}\in\mathcal{F}_s$, $B$ a Borel set), the
implication $(ii)\Rightarrow (iii)$ follows from standard monotone class arguments and the following equalities:
\begin{eqnarray*}
\lefteqn{\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(\mathbf{1}_{A}X_{t})=\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(\mathbf{1}_{A_s}\mathbf{1}_{\{G\in B\}}X_{t})=
\mathbb{Q}_\mathbb{G}(G\in B)
\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(\mathbf{1}_{A_s}X_{t})}
\\&&=\mathbb{Q}_\mathbb{G}(G\in B)
\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(\mathbf{1}_{A_s}X_{s})=
\mathbb{E}_{\mathbb{Q}_\mathbb{G}}(\mathbf{1}_{A}X_{s}),\qquad s\leq t,
\end{eqnarray*}
where in the second equality we use Theorem \ref{ther1}(ii).
\end{proof}
\begin{remark}
We need Theorem \ref{ther2} to guarantee the martingale representation for the insider's replicating strategy. Moreover, from this representation and Theorem \ref{ther1} we can deduce that the cost of perfect
hedging for the insider is the same as for the regular trader, that is, $\mathbb{E}_{\mathbb{Q}_\mathbb{G}}[H|\mathcal{G}_0]=\mathbb{E}_{\mathbb{Q}_\mathbb{F}}H$.
\end{remark}
\begin{remark}
In general, Theorem \ref{ther2} does not hold for local martingales, since a localizing sequence $(\tau_{n})$ of $\mathbb{G}$-stopping times need not be a sequence of $\mathbb{F}$-stopping times.
\end{remark}
\begin{lemma}\label{NPL}
Let $k$ be a positive $\mathcal{G}_t$-measurable random variable. For every $A\in\mathcal{F}$ such that
$$\mathbb{Q}^*\left(\left.A\right|\mathcal{G}_t\right)\leq\mathbb{Q}^*
\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_t\right),\qquad \mathbb{P}-{\rm a.s.}$$
we have:
$$\mathbb{P}\left(\left.A\right|\mathcal{G}_t\right)\leq\mathbb{P}
\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_t\right),\qquad \mathbb{P}-{\rm a.s.}$$
Similarly, if
$$\mathbb{P}\left(\left.A\right|\mathcal{G}_t\right)\geq\mathbb{P}
\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_t\right),\qquad \mathbb{P}-{\rm a.s.}$$
then
$$\mathbb{Q}^*\left(\left.A\right|\mathcal{G}_t\right)\geq\mathbb{Q}^*
\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_t\right),\qquad \mathbb{P}-{\rm a.s.}$$
\end{lemma}
\begin{proof}
Denote $\tilde{A}:=\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k
\right\}$. Note that $$(\mathbf{1}_{\tilde{A}}-\mathbf{1}_A)
\left(k-\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\right)
\geq 0.$$ Thus
\begin{eqnarray*}
\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_t}\left(\mathbb{Q}^*(\tilde{A}|\mathcal{G}_t)-\mathbb{Q}^*(A|\mathcal{G}_t)\right)
&=&\left.\frac{\;{\rm d}\mathbb{Q}^*}{\;{\rm d}\mathbb{P}}\right|_{\mathcal{G}_t}\mathbb{E}_{\mathbb{Q}^*}\left((\mathbf{1}_{\tilde{A}}-\mathbf{1}_{A})\left.\right|\mathcal{G}_t\right)\\
&=&\mathbb{E}_\mathbb{P}\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}(\mathbf{1}_{\tilde{A}}-\mathbf{1}_{A})\right|\mathcal{G}_t\right)\\
&\leq&k\left(\mathbb{P}(\tilde{A}|\mathcal{G}_t)-\mathbb{P}(A|\mathcal{G}_t)\right),
\end{eqnarray*}
which completes the proof.
\end{proof}
\begin{lemma}\label{equalone}
The following holds true:
$$\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{Q}_\mathbb{G}}\right|_{\mathcal{G}_0}=1.$$
\end{lemma}
\begin{proof}
Note that for $A=\{\omega\in\Omega: G\in B\}\in \mathcal{G}_0$ ($B$ is a Borel set) we have
\begin{eqnarray*}
\lefteqn{\mathbb{Q}^*(A)=\mathbb{E}_{\mathbb{Q}_{\mathbb{G}}}\left[\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{Q}_\mathbb{G}}\mathbf{1}_{A}\right]=\frac{\mathbb{E}_{\mathbb{Q}_{\mathbb{G}}}\left[H\mathbf{1}_{A}\right]}{\mathbb{E}_{\mathbb{Q}_{\mathbb{G}}}H}}\\
&&=
\frac{\mathbb{E}_{\mathbb{Q}_{\mathbb{G}}}\left[H\right]\mathbb{Q}_{\mathbb{G}}(A)}{\mathbb{E}_{\mathbb{Q}_{\mathbb{G}}}H}=\mathbb{Q}_{\mathbb{G}}(A),
\end{eqnarray*}
where in the last but one equality we use Theorem \ref{ther1}.
\end{proof}
{\it Proof of Theorem \ref{maxprobab}}
Consider the value process $V_{t}=\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_{0}^{t}\xi_{u}\;{\rm d}S_{u}$ for
any strategy $(\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H,\xi)\in\mathcal{A}^\mathbb{G}$. Note that for its success set $A$ defined in (\ref{succset})
we have:
$$V_T\geq H\mathbf{1}_A.$$
Moreover, by Theorem \ref{ther2} the process $V$ is a nonnegative $(\mathbb{G},\mathbb{Q}_\mathbb{G})$-local martingale, hence a $(\mathbb{G},\mathbb{Q}_\mathbb{G})$-supermartingale, and
$$\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H= V_0\geq \mathbb{E}_{\mathbb{Q}_\mathbb{G}}(V_T|\mathcal{G}_0)\geq \mathbb{E}_{\mathbb{Q}_\mathbb{G}}(H\mathbf{1}_{A}|\mathcal{G}_0).$$
Thus, from Lemmas \ref{equalone} and \ref{NPL},
$$\mathbb{Q}^*(A|\mathcal{G}_0)\leq\frac{\alpha}{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{Q}_\mathbb{G}}\right|_{\mathcal{G}_0}}=\alpha,\qquad \mathbb{P}-{\rm a.s.}$$
and therefore
\begin{eqnarray}
\mathbb{P}(A|\mathcal{G}_0)\leq\mathbb{P}\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right),\qquad \mathbb{P}-{\rm a.s.} \label{P_ineq}
\end{eqnarray}
It remains to show that $\left(\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left[H \left.\mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_0\right],\tilde{\xi}\right)\in\mathcal{A}^\mathbb{G}$, and that this strategy attains the upper bound
(\ref{P_ineq}). The first statement
follows directly from the definition of $\tilde{\xi}$:
$$
\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left(H \left.\mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_0\right)
+\int_0^t\tilde{\xi}_u\;{\rm d}S_u=\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left(H \left.\mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_t\right)\geq 0.$$
Moreover,
\begin{eqnarray}
\lefteqn{\mathbb{P}\left(\left.\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left[H \left.\mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_0\right]
+\int_0^T\tilde{\xi}_u\;{\rm d}S_u\geq H\right|\mathcal{G}_0\right)}\nonumber\\&&=
\mathbb{P}\left(\left.H \mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\geq H\right|\mathcal{G}_0\right)\geq
\mathbb{P}\left(\left. \left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right),\nonumber
\end{eqnarray}
which completes the proof in view of (\ref{P_ineq}).
\vspace{3mm} \hfill $\Box$
{\it Proof of Theorem \ref{minCostQH}}
Observe that for any $(\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H,\xi)\in\mathcal{A}^\mathbb{G}$ we
have: \begin{eqnarray}
\lefteqn{\mathbb{Q}^*\left(\left.\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_u\;{\rm
d}S_u\geq H\right|\mathcal{G}_0\right)}\nonumber\\&&
=\frac{1}{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm
d}\mathbb{Q}_\mathbb{G}}\right|_{\mathcal{G}_0}}
\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm
d}\mathbb{Q}_\mathbb{G}}\right|_{\mathcal{G}_T}\mathbf{1}_{\left\{\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_u\;{\rm
d}S_u\geq H\right\}}\right|\mathcal{G}_0\right)\nonumber\\&&
\leq\frac{\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left(\left.\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_u\;{\rm
d}S_u\right|\mathcal{G}_0\right)}{\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H}=\alpha.\nonumber \end{eqnarray}
Applying the second part of Lemma \ref{NPL} to the success set
$A=\left\{\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_u\;{\rm d}S_u\geq
H\right\}$ and using the required inequality (\ref{P2inequality}) and the
definition of $k$ given in (\ref{eqk2}), we derive: \begin{eqnarray}
\alpha&\geq&\mathbb{Q}^*\left(\left.\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\xi_u\;{\rm d}S_u\geq H\right|\mathcal{G}_0\right)\nonumber\\
&\geq&\mathbb{Q}^*\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right).\nonumber
\label{rhs}
\end{eqnarray}
We now prove that for the minimal choice of $\alpha$, namely the right-hand side of (\ref{rhs}),
the strategy $\left(\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left[H \left.\mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_0\right],\tilde{\xi}\right)$ satisfies inequality (\ref{P2inequality}) of Problem \ref{P2}:
\begin{eqnarray}
\lefteqn{\mathbb{P}\left(\left.\alpha\mathbb{E}_{\mathbb{Q}_\mathbb{G}}H+\int_0^T\tilde{\xi}_u\;{\rm d}S_u\geq H\right|\mathcal{G}_0\right)}\nonumber\\
&&=\mathbb{P}\left(\left.\mathbb{E}_{\mathbb{Q}_\mathbb{G}}\left.\left[H \mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}\right|\mathcal{G}_0\right]
+\int_0^T\tilde{\xi}_u\;{\rm d}S_u\geq H\right|\mathcal{G}_0
\right)\nonumber\\
&&=
\mathbb{P}\left(\left. H \mathbf{1}_{\left\{\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right\}}
\geq H\right|\mathcal{G}_0
\right)
\nonumber\\
&&\geq\mathbb{P}\left(\left.\left.\frac{{\rm d}\mathbb{Q}^*}{{\rm d}\mathbb{P}}\right|_{\mathcal{G}_T}\leq k\right|\mathcal{G}_0\right)=1-\epsilon.\nonumber
\end{eqnarray}
This completes the proof.
\vspace{3mm} \hfill $\Box$
\section*{Acknowledgements}
This work is partially supported by the Ministry of Science and Higher Education of Poland under the grant N N2014079 33 (2007-2009).
\bibliographystyle{abbrv}
\section{Juggling with one or two balls}
Mathematically, juggling is studied via siteswap sequences (see \cite{jugggle}), which are sequences of numbers $t_1t_2\ldots$ indicating a throw of a ball to ``height'' $t_i$ at time step $i$ (i.e., the ball will land at time $i+t_i$). A periodic juggling pattern is one which repeats, in which case we only need to include enough of the sequence to get us started. A siteswap with period $n$ is then a sequence $t_1t_2\ldots t_n$ where at time $i+kn$ we throw a ball to height $t_i$. One simple assumption to make is that no two balls {\em land}\/ at the same time (in multiplex juggling this condition is relaxed). This translates into the condition
\[
\{1,2,\ldots,n\}=\bigcup_{i=1}^n\{i+t_i\pmod n\}.
\]
A simple consequence of this is that $(t_1+t_2+\cdots+t_n)/n$ is an integer and corresponds to the number of balls needed to juggle the sequence. (When $t_i=0$ this indicates an empty hand, or no throw.)
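The landing condition is easy to check mechanically. The sketch below is our own illustration (the function names are not from the literature); note that using $0$-indexed times instead of the $1$-indexed times above shifts all residues by one and does not affect distinctness.

```python
def is_siteswap(t):
    """Landing condition: the values i + t_i must be distinct mod n."""
    n = len(t)
    return len({(i + h) % n for i, h in enumerate(t)}) == n

def num_balls(t):
    """For a valid siteswap, (t_1 + ... + t_n)/n is the number of balls."""
    return sum(t) // len(t)
```

For instance, `is_siteswap([5, 3, 1])` holds with `num_balls([5, 3, 1]) == 3`, while `[4, 3, 2]` fails since two balls land at the same time.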
Another type of juggling has recently been introduced by Akihiro Matsuura \cite{spherical}, which involves juggling inside a sphere where balls are ``thrown'' from the north pole along great circles and return after some number of time steps. Such juggling sequences are again denoted by a siteswap $t_1t_2\ldots t_n$, and in addition to the condition that the $i+t_i \pmod n$ are distinct (avoiding collisions at the north pole) we also need that the $i+(t_i/2) \pmod n$ are distinct for $t_i\neq 0$ (to avoid collisions at the south pole).
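Both collision conditions can be tested together. In the sketch below (ours; the function name is hypothetical) the half-integer times $i+t_i/2$ are doubled so that the south-pole check can be done in integer arithmetic modulo $2n$.

```python
def is_spherical_siteswap(t):
    n = len(t)
    # north pole: the landing times i + t_i must be distinct mod n
    if len({(i + h) % n for i, h in enumerate(t)}) != n:
        return False
    # south pole: the times i + t_i/2 (for t_i != 0) must be distinct mod n;
    # doubling turns this into distinctness of 2i + t_i mod 2n
    south = [(2 * i + h) % (2 * n) for i, h in enumerate(t) if h != 0]
    return len(south) == len(set(south))
```

For example, `is_spherical_siteswap([2, 0])` and `is_spherical_siteswap([2, 2])` hold, while `[3, 1]` fails at the south pole.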
We now count the number of one ball siteswap sequences of length $n$ for both forms of juggling; these have the added advantage that collisions are impossible. In this case, since there is only one ball, we have $t_1+t_2+\cdots+t_n=n$, i.e., no ball can be thrown further than length $n$. In particular, the only thing that we need to know is at what times we have the ball in our hand and at what times our hand is empty. This can be done by associating such juggling sequences with necklaces with two colors of beads (with position $1$ at the top and positions increasing clockwise). The black beads indicate when the ball is in our hand and the white beads indicate when the ball is in the air. For example, in Figure~\ref{fig:jug} we have a necklace with black and white beads; to translate this into a juggling pattern we associate each white bead with a $0$ and each black bead with the number of steps until the next black bead (note that in one ball juggling we must throw to the next occurrence of a black bead). So this particular necklace is associated with the juggling sequence $30020300$.
\begin{figure}[hftb]
\centering
\includegraphics{necklaces.eps}
\caption{A necklace corresponding to the juggling siteswap sequence $30020300$.}
\label{fig:jug}
\end{figure}
Since we must have at least one black bead and there are $n$ beads, in total we have $2^n-1$ different possible necklaces (i.e., everything but the all-white necklace) and so $2^n-1$ different siteswap sequences (note that we consider rotations of necklaces distinct, just as rotations of siteswap sequences are considered distinct). This analysis also works for spherical juggling.
Now let us consider the number of two ball juggling siteswap sequences with the added condition that a non-zero throw is made at each time step (i.e., $t_i\geq 1$ for all $i$). There is a natural correspondence between one ball juggling siteswaps and two ball juggling siteswaps with a non-zero throw made at each time step. Namely, if $t_1t_2\ldots t_n$ is a one ball juggling siteswap sequence then we associate it with $t_1't_2'\ldots t_n'$ by letting $t_i'=t_i+1$. Clearly this does not create any collisions where two balls land at the same time, and so we can conclude that the number of two ball siteswap sequences of length $n$ with no zero throws is $2^n-1$.
For spherical juggling, however, the map $t_1t_2\ldots t_n \mapsto t_1't_2'\ldots t_n'$ {\em might}\/ create collisions at the south pole. For example the sequence $20\mapsto 31$, and this latter sequence is not a spherical juggling sequence since $1+3/2\equiv 2+1/2\pmod 2$. Let us determine which one ball juggling sequences produce valid two ball spherical juggling sequences in which a throw is made at every time step. The only problem will be the creation of a collision at the south pole.
Suppose that we have the necklace associated with the one ball siteswap sequence $t_1t_2\ldots t_n$. The halfway points between two consecutive black beads mark the times when the ball is at the south pole. Now when we pass to $t_1't_2'\ldots t_n'$, all the occurrences of the ball at the south pole shift clockwise by exactly half a step, {\em and}\/ every white bead adds a new ball at the south pole exactly half a step after the white bead. So collisions at the south pole exist precisely when the halfway point between two consecutive black beads lands on top of a white bead.
Therefore, in terms of necklaces, we need to have at least one black bead and between any two consecutive black beads the number of white beads must be {\em even}.
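This characterization can be verified by brute force for small $n$: counting bit-string necklaces satisfying the even-gap condition agrees with directly enumerating the two ball spherical siteswaps of period $n$ with all $t_i\geq 1$ (the constraints $\sum t_i=2n$ and $t_i\geq 1$ already force $t_i\leq n+1$). The code below is our own illustration, with names of our choosing.

```python
from itertools import product

def admissible_necklaces(n):
    """Necklaces with >= 1 black bead and an even number of white
    beads between consecutive black beads (rotations counted as distinct)."""
    count = 0
    for bits in product((0, 1), repeat=n):        # 1 = black bead
        blacks = [i for i, b in enumerate(bits) if b]
        if not blacks:
            continue
        k = len(blacks)
        # number of white beads between consecutive black beads, cyclically
        gaps = [(blacks[(j + 1) % k] - blacks[j] - 1) % n for j in range(k)]
        if all(g % 2 == 0 for g in gaps):
            count += 1
    return count

def two_ball_spherical(n):
    """Two ball spherical siteswaps of period n with a throw at each step."""
    count = 0
    for t in product(range(1, n + 2), repeat=n):
        if sum(t) != 2 * n:                       # two balls: heights sum to 2n
            continue
        north = {(i + h) % n for i, h in enumerate(t)}
        south = {(2 * i + h) % (2 * n) for i, h in enumerate(t)}
        if len(north) == n and len(south) == n:
            count += 1
    return count
```

For $n=2,\ldots,5$ both counts give $1,4,5,11$, in line with the recurrence derived in the next section.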
\section{Counting admissible necklaces}
So to find the number of two ball spherical juggling siteswap sequences of length $n$ with a throw made at each time step, we need to count the number of necklaces on $n$ beads where (i) there is at least one black bead; (ii) between any two (consecutive) black beads there is an even number of white beads; and (iii) rotations/flippings of necklaces are considered distinct. To solve this we will find a recurrence, which is done in the following more general theorem.
\begin{theorem}\label{mainresult}
Let $a_q(n)$ be the number of arrangements of black and white beads on a necklace with a total of $n$ beads satisfying the following:
(i) there is at least one black bead;
(ii) between any two black beads the number of white beads is divisible by $q$;
(iii) rotations/flippings of a necklace are considered distinct.
Then $a_q(0)=0$, $a_q(1)=\cdots=a_q(q)=1$ and for $n\ge q+1$
\begin{equation}\label{recurq}
a_q(n)=\left\{\begin{array}{l@{\qquad}l}
a_q(n-1)+a_q(n-q)+q&\mbox{if }n\equiv 1\pmod{q};\\
a_q(n-1)+a_q(n-q)&\mbox{otherwise.}
\end{array}\right.
\end{equation}
\end{theorem}
We are interested in the case $q=2$, which produces the sequence
\[
0, 1, 1, 4, 5, 11, 16, 29, 45, 76, 121, 199, 320, 521, 841, 1364, 2205, 3571, 5776, 9349, 15125,\ldots.
\]
These are the ``associated Mersenne numbers'' ({\tt A001350} in Sloane \cite{sloane}), which satisfy the recurrence
\[
a_2(n)=a_2(n-1)+a_2(n-2)+1-(-1)^n.
\]
Unsurprisingly these are connected to Lucas numbers, $L_n$, which satisfy a similar recurrence. In particular the Lucas numbers start
\[
2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199, 322, 521, 843, 1364, 2207, 3571, 5778, 9349, 15127,\ldots.
\]
Comparing the two sequences it is easy to see (and show) that
\[
a_2(n)=\left\{\begin{array}{l@{\qquad}l}L_n&\mbox{if $n$ is odd;}\\
L_n-2&\mbox{if $n$ is even.}\end{array}\right.
\]
In particular, the rate of growth of the number of two ball spherical juggling siteswap sequences of length $n$ with a throw made at each time step is approximately $c\phi^n$, where $\phi$ is the golden ratio. By comparison, the number of two ball juggling siteswap sequences of length $n$ with a throw made at each time step grows at a rate of $c2^n$, significantly faster.
For $n$ a prime, besides the one necklace with all black beads, the remaining necklaces can be grouped into sets of size $n$ related by rotation. This gives the following result.
\begin{corollary}\label{prime}
If $n$ is prime then $a_q(n)\equiv 1\pmod n$. In particular, $L_n\equiv 1\pmod n$ for $n$ a prime.
\end{corollary}
\begin{proof}
We have already shown that $a_q(n)\equiv 1\pmod n$ for $n$ prime. To establish the result for Lucas numbers we note that for $n=2$ we have $L_2=3\equiv1\pmod2$; otherwise, $L_n=a_2(n)\equiv1\pmod n$.
\end{proof}
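The congruence of Corollary \ref{prime} can likewise be checked directly for small primes (the sketch is ours):

```python
def lucas(n):
    """Lucas numbers: L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# L_p = 1 (mod p) for each small prime p
for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]:
    assert lucas(p) % p == 1
```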
This result is one of the well known properties of Lucas numbers (see \cite{bicknell}) and this proof is essentially the same as the proof given by Benjamin and Quinn \cite{count} where they relate Lucas numbers to tilings of necklaces with curved squares (which correspond to black beads) and curved dominoes (which correspond to pairs of white beads).
\begin{proof}[Proof of Theorem~\ref{mainresult}]
We first observe that the initial conditions are satisfied: $a_q(0)=0$ since an admissible necklace must have at least one black bead, and $a_q(1)=\cdots=a_q(q)=1$ since the only admissible necklace in these cases is the one with all black beads (there is not enough space to fit $q$ white beads and at least one black bead on the necklace).
There is a $1$-$1$ correspondence between necklaces on $n-q$ beads and necklaces on $n$ beads with white beads in positions $1,2,\ldots,q$. Namely, given such a necklace on $n$ beads we remove the white beads in positions $1,2,\ldots,q$ to get an admissible necklace on $n-q$ beads; this process can also be reversed.
Similarly there is a $1$-$1$ correspondence between necklaces on $n-1$ beads and necklaces on $n$ beads with $2$ or more black beads and at least one black bead among the first $q$ positions. Namely, given such a necklace we remove one of the black beads in positions $1,2,\ldots,q$ to get an admissible necklace on $n-1$ beads; this process can be reversed.
(Note it is not as clear that the reverse process results in a unique necklace. To see this, note that for the necklace on $n-1$ beads there are essentially two cases among the first $q-1$ beads: (1) if all of these beads are white, then to satisfy the condition that between any two black beads the number of white beads is a multiple of $q$ there is a unique position where the black bead can be placed; (2) if some of these beads are black, then the new black bead must be placed among/adjacent to the existing black beads so that, again, the condition that the number of white beads is a multiple of $q$ is satisfied. In either case the reverse process results in a unique necklace.)
Finally, we count what is left. These are necklaces on $n$ beads with one black bead and it occurs in one of the positions $1,2,\ldots,q$. This is only possible if $n\equiv 1\pmod q$ (since there are exactly $n-1$ white beads between the black bead and itself). So there are $q$ remaining necklaces when $n\equiv 1\pmod q$ and $0$ otherwise.
Combining all of this, we conclude that the recursion given in \eqref{recurq} holds.
\end{proof}
\section{Introduction}
The purpose of this text is to characterize fundamental groups of compact K\"ahler manifolds which are {\it cubulable}, i.e. which act properly discontinuously and cocompactly on a ${\rm CAT}(0)$ {\it cubical complex}. For a survey concerning fundamental groups of compact K\"ahler manifolds, referred to as {\it K\"ahler groups} below, we refer the reader to~\cite{abckt,burger}. For basic definitions about cubical complexes, we refer the reader to~\cite[I.7]{bh} or~\cite{sageev1}. We only mention here that cubical complexes form a particular type of polyhedral complexes, and provide an important source of examples in geometric group theory.
From now on we will only deal with finite dimensional cubical complexes. Such complexes have a natural complete and geodesic metric. In the 80's, Gromov gave a combinatorial criterion for this metric to be ${\rm CAT}(0)$~\cite[\S 4.2]{gromov}. His work on {\it hyperbolization procedures}~\cite{gromov}, as well as Davis' work on reflection groups~\cite{davis} drew attention to these complexes; see also~\cite{basw,dj,djs}. Later, Sageev established a link between actions of a group $G$ on a ${\rm CAT}(0)$ cubical complex and the existence of a {\it multi-ended} subgroup of $G$~\cite{sageev0}. More recently, under the influence of Agol, Haglund and Wise, ${\rm CAT}(0)$ cubical complexes received a lot of attention as the list of examples of {\it cubulated groups} increased dramatically and thanks to their applications to $3$-dimensional topology. We refer the reader to~\cite{agol,bhw,bwise,hagenwise,haglundwise,wise0,wise1} for some of these developments.
On the other hand, as we will see, actions of K\"ahler groups on ${\rm CAT}(0)$ cubical complexes are very constrained. Even more, K\"ahler groups are in a sense orthogonal to groups acting on ${\rm CAT}(0)$ cubical complexes. The first results in this direction go back to the work of the first author, together with Gromov~\cite{delzantgromov}. Note that most of the results of~\cite{delzantgromov} are not formulated in terms of actions on ${\rm CAT}(0)$ cubical complexes but can be interpreted in terms of such actions thanks to the work of Sageev~\cite{sageev0}. In~\cite{delzantgromov}, the authors studied codimension one subgroups of hyperbolic K\"ahler groups. More generally they studied {\it filtered ends}, or {\it cuts}, of hyperbolic K\"ahler groups; see section~\ref{filterede} for the definitions. Some of their results were later generalized by Napier and Ramachandran~\cite{nr2}. Stated informally, the results of~\cite{delzantgromov} show that under suitable hyperbolicity assumptions, the presence of sufficiently many subgroups of a K\"ahler group $\Gamma$ whose numbers of filtered ends are greater than $2$ implies that a finite index subgroup $\Gamma_{0}$ of $\Gamma$ is a {\it subdirect product} of a certain number of surface groups with a free Abelian group. Throughout this text, the expression {\it surface group} stands for the fundamental group of a closed oriented hyperbolic surface. Recall that a subdirect product of groups $G_{1}, \ldots ,G_{m}$ is a subgroup of $G_{1}\times \cdots \times G_{m}$ which surjects onto each factor. So, in the previous statement, we mean that $\Gamma_{0}$ is a subdirect product of $G_{1},\ldots , G_{m}$ where all the $G_{i}$'s are surface groups, except possibly for one of them which could be free Abelian.
There is also a particular class of actions of K\"ahler groups on ${\rm CAT}(0)$ cubical complexes which are easy to describe. These are the ones given by homomorphisms into right-angled Artin groups. For homomorphisms from K\"ahler groups to right-angled Artin groups, it is easy to obtain a factorization result through a subdirect product of surface groups, possibly with a free Abelian factor. This relies on the facts that right-angled Artin groups embed into Coxeter groups and that Coxeter groups act properly on the product of finitely many trees. This was observed by the second author in~\cite{py}.
These results left open the question of describing actions of K\"ahler groups on general ${\rm CAT}(0)$ cubical complexes. We answer this question here, assuming that the cubical complexes are locally finite and that the actions have no fixed point in the visual boundary. We will briefly discuss in section~\ref{ques} the need for these two hypotheses. From this, one deduces easily a description of cubulable K\"ahler groups.
The following statements involve various more or less standard notions about cubical complexes: essential actions, irreducible complexes, visual boundaries... We recall all these definitions in section~\ref{cch}.
\begin{main}\label{fac-pv} Let $M$ be a closed K\"ahler manifold whose fundamental group $\pi_{1}(M)$ acts on a finite dimensional locally finite irreducible ${\rm CAT}(0)$ cubical complex $X$. Assume that:
\begin{itemize}
\item the action is essential,
\item $\pi_{1}(M)$ has no fixed point in the visual boundary of $X$,
\item $\pi_{1}(M)$ does not preserve a Euclidean flat in $X$.
\end{itemize}
Then there exists a finite cover $M_{1}$ of $M$ which fibers: there exists a holomorphic map with connected fibers $F : M_{1}\to \Sigma$ to a hyperbolic Riemann surface such that the induced map $F_{\ast} : \pi_{1}(M_{1}) \to \pi_{1}(\Sigma)$ is surjective. Moreover the kernel of the homomorphism $F_{\ast}$ acts elliptically on $X$, i.e. its fixed point set in $X$ is nonempty.
\end{main}
Hence, up to restricting the action to a convex subset of $X$, the action factors through the fundamental group of a hyperbolic Riemann surface. Indeed, as a convex subspace one can take the fixed point set of the subgroup ${\rm Ker}(F_{\ast})$; this is a subcomplex of the first cubical subdivision of $X$.
From Theorem~\ref{fac-pv}, and using results due to Caprace and Sageev~\cite{cs} and to Bridson, Howie, Miller and Short~\cite{bhms2002}, we deduce a characterization of cubulable K\"ahler groups. In the next three statements, we implicitly assume that the cubical complexes are locally finite.
\begin{main}\label{fac-complet} Suppose that a K\"ahler group $\Gamma$ acts properly discontinuously and cocompactly on a ${\rm CAT}(0)$ cubical complex $X$. Let $X=X_{1}\times \cdots \times X_{r}$ be the decomposition of $X$ into irreducible factors. Assume moreover that the action of $\Gamma$ on $X$ is essential and that each $X_{i}$ is unbounded. Then $\Gamma$ has a finite index subgroup $\Gamma_{\ast}$ which is isomorphic to a direct product
\begin{equation}\label{peiso}
\Gamma_{\ast}\simeq H_{1}\times \cdots \times H_{k}\times G_{1} \times \cdots \times G_{m},
\end{equation}
where each $H_{j}$ is isomorphic to $\mathbb{Z}$, each $G_{j}$ is a surface group, and $k+m=r$ is the number of irreducible factors of $X$. Moreover the isomorphism~\eqref{peiso} can be chosen in such a way that for each $i$, $X_{i}$ contains a convex $\Gamma_{\ast}$-invariant subset $Y_{i}$ on which the action factors through one of the projections $\Gamma_{\ast}\to G_{j}$ or $\Gamma_{\ast}\to H_{j}$.
\end{main}
If a group $G$ acts properly discontinuously and cocompactly on a ${\rm CAT}(0)$ cubical complex $X$, one can always find a $G$-invariant subcomplex $Y$ inside $X$, on which the action is essential and all of whose irreducible factors are unbounded. This follows from Corollary 6.4 in~\cite{cs}. This implies the following corollary:
\begin{cor}\label{cckcc} Suppose that a K\"ahler group $\Gamma$ acts properly discontinuously and cocompactly on a ${\rm CAT}(0)$ cubical complex $X$. Then $\Gamma$ has a finite index subgroup which is isomorphic to a direct product of surface groups with a free Abelian group. The conclusion of Theorem~\ref{fac-complet} holds after replacing $X$ by a suitable invariant subcomplex.
\end{cor}
In what follows, we will say that a closed manifold $M$ is {\it cubulable} if it has the same homotopy type as the quotient of a finite dimensional ${\rm CAT}(0)$ cubical complex by a properly discontinuous, free, cocompact group action. Using an argument going back to Siu~\cite{siu}, and the fact that a cubulable manifold is aspherical, we will finally prove the following theorem.
\begin{main}\label{manifold} If a K\"ahler manifold $M$ is cubulable, it admits a finite cover which is biholomorphic to a topologically trivial principal torus bundle over a product of Riemann surfaces. If $M$ is algebraic, it admits a finite cover which is biholomorphic to a direct product of an Abelian variety with a product of Riemann surfaces.
\end{main}
We mention that the factorization result in Theorem~\ref{fac-pv} does {\it not} rely on the use of harmonic maps, unlike most of the factorization results for actions of K\"ahler groups on symmetric spaces, trees or more general buildings. Indeed, although there is a general theory of harmonic maps with values into ${\rm CAT}(0)$ spaces~\cite{gs,ks}, the crucial step
$$\text{harmonic} \Rightarrow \text{pluriharmonic}$$
namely the use of a Bochner formula to prove that a harmonic map from a K\"ahler manifold to a certain nonpositively curved space is automatically pluriharmonic is not available when the target space is a ${\rm CAT}(0)$ cubical complex. Thus, our proof follows a different scheme. The idea is to produce fibrations of our K\"ahler manifold $M$ over a Riemann surface, in such a way that the kernel $N\lhd \pi_{1} (M)$ of the homomorphism induced by the fibration preserves a certain hyperplane $\hat{\mathfrak{h}}$ of the cubical complex. Since $N$ is normal it will have to preserve each hyperplane of the $\pi_{1}(M)$-orbit of $\hat{\mathfrak{h}}$. From this we will deduce that $N$ actually acts elliptically on the ${\rm CAT}(0)$ cubical complex, i.e. has a fixed point.
From the K\"ahler point of view, the new idea of this paper appears in Section~\ref{pf}. It can be summarized as follows. Consider an infinite covering $Y$ of a closed K\"ahler manifold. Given a proper pluriharmonic function $u$ on $Y$, one can look for conditions under which the foliation induced by the holomorphic form $du^{1,0}$ is actually a fibration. Our observation is that it is enough to find on $Y$ a second {\it plurisubharmonic} function which is not a function of $u$. This differs from other fibration criterions in the study of K\"ahler groups as the second function we need is only required to be plurisubharmonic. In the various Castelnuovo-de Franchis type criterions known so far, one needs two pluriharmonic functions, or two holomorphic $1$-forms, see~\cite{nrjta} and the references mentioned in the introduction there.
The text is organized as follows. In section~\ref{gaoccc}, we recall basic facts on ${\rm CAT}(0)$ cubical complexes as well as more advanced results due to Caprace and Sageev. Given a group $G$ acting on a ${\rm CAT}(0)$ cubical complex and satisfying suitable hypotheses, we also explain how to construct a hyperplane $\hat{\mathfrak{h}}$ whose stabilizer $H$ in $G$ has the property that the Schreier graph of $H\backslash G$ is non-amenable. In section~\ref{rems}, we explain how to construct some nontrivial plurisubharmonic functions on certain covering spaces of a K\"ahler manifold whose fundamental group acts on a ${\rm CAT}(0)$ cubical complex. The proof of Theorem~\ref{fac-pv} is concluded in section~\ref{pf}. In section~\ref{ckmag}, we prove Theorems~\ref{fac-complet} and~\ref{manifold}. Finally, section~\ref{ques} contains a few comments about possible improvements of our results.
{\bf Acknowledgements.} We would like to thank Pierre-Emmanuel Caprace, Yves de Cornulier and Misha Sageev for many explanations about ${\rm CAT}(0)$ cubical complexes as well as Martin Bridson for discussions motivating this work.
\section{Groups acting on ${\rm CAT}(0)$ cubical complexes}\label{gaoccc}
In this section we recall some basic properties of ${\rm CAT}(0)$ cubical complexes and some advanced results due to Caprace and Sageev~\cite{cs}. For a more detailed introduction to these spaces, we refer the reader to~\cite[I.7]{bh} and \cite{sageev1} for instance. From now on and until the end of the text, all ${\rm CAT}(0)$ cubical complexes will be finite dimensional; we will not mention this hypothesis anymore. By convention, we assume that all cubes of our cubical complexes are isometric to $[-1,1]^{n}$ for some $n$.
\subsection{Cubical complexes and hyperplanes}\label{cch}
A cubical complex $X$ can be naturally endowed with a distance as follows. If $x$ and $y$ are in $X$, one considers chains
$$x_{0}=x, x_{1}, \ldots , x_{n}=y$$
where each pair $(x_{i}, x_{i+1})$ is contained in a cube $C_{i}$ of $X$ and one defines
$$d(x,y)=\inf \sum_{i=0}^{n-1}d_{C_{i}}(x_{i},x_{i+1})$$
where the infimum is taken over all possible chains. Here $d_{C_{i}}$ is the intrinsic distance of the cube; the number $d_{C_{i}}(x_{i},x_{i+1})$ does not depend on the choice of the cube containing $x_{i}$ and $x_{i+1}$. The function $d$ is a distance which is complete and geodesic thanks to the finite dimensionality hypothesis~\cite[I.7]{bh}. From now on, we will always assume that the cube complexes we consider are ${\rm CAT}(0)$ spaces, when endowed with the previous metric. According to a classical theorem by Gromov~\cite[\S 4.2]{gromov}, this is equivalent to the fact that the complex is simply connected and that the link of each vertex is a flag complex. The visual boundary~\cite[II.8]{bh} of a ${\rm CAT}(0)$ cubical complex $X$ is denoted by $\partial_{\infty}X$.
We now recall the definition of {\it hyperplanes}, first introduced by Sageev~\cite{sageev0}. We fix a ${\rm CAT}(0)$ cubical complex $X$. Let $\Box$ be the equivalence relation on edges of our complex,
defined as follows. One says that two edges $e$ and $f$ are equivalent, denoted $e\Box f$, if there exists a chain $$e_{1}=e, \ldots , e_{n}=f$$
such that for each $i$, $e_{i}$ and $e_{i+1}$ are opposite edges of some $2$-dimensional cube. If $e$ is an edge we will denote by $[e]$ its equivalence class.
A {\it midcube} of a cube $C$, identified with $[-1,1]^{n}$, is the subset of $C$ defined by setting one of the coordinates equal to $0$. One says that a midcube and an edge $e$ of a cube $C \simeq [-1,1]^{n}$ are {\it transverse} if the midcube is
defined by $t_{i}=0$ and the coordinate $t_{i}$ is the only nonconstant coordinate of the edge $e$. Now the hyperplane associated to an equivalence class of edges $[e]$ is the union of all midcubes which are transverse to an edge of the class $[e]$. It is denoted by $\hat{\mathfrak{h}}([e])$. If we want to denote a hyperplane without referring to the edge used to define it, we will denote it by $\hat{\mathfrak{h}}$. Finally we will denote by $N(\hat{\mathfrak{h}})$ the union of the interiors of all cubes which intersect a hyperplane $\hat{\mathfrak{h}}$.
One can prove that hyperplanes enjoy the following properties~\cite{sageev0}:
\begin{enumerate}
\item Each hyperplane intersects a cube in at most one midcube.
\item Each hyperplane $\hat{\mathfrak{h}}$ separates $X$ into two connected components; the closures of the two connected components of $X-\hat{\mathfrak{h}}$ are called the two {\it halfspaces} associated to $\hat{\mathfrak{h}}$.
\item Every hyperplane as well as every halfspace is a convex subset of $X$ for the distance $d$.
\item For every hyperplane $\hat{\mathfrak{h}}$, the set $N(\hat{\mathfrak{h}})$ is convex and naturally isometric to $\hat{\mathfrak{h}} \times (-1,1)$. If $t\in (-1,1)$, the subset of $N(\hat{\mathfrak{h}})$ corresponding to $\hat{\mathfrak{h}} \times \{t\}$ under this isometry is called a {\it translated hyperplane} of $\hat{\mathfrak{h}}$.
\item If a geodesic segment intersects a hyperplane in two points, it is entirely contained in it.
\end{enumerate}
The group ${\rm Aut}(X)$ of automorphisms of a ${\rm CAT}(0)$ cubical complex $X$ is the group of all permutations $X \to X$ which send isometrically a cube of $X$ to another cube of $X$. An automorphism of $X$ is automatically an isometry for the distance $d$ introduced earlier. In what follows, whenever we say that a group $G$ acts on a ${\rm CAT}(0)$ cubical complex, we mean that we have a homomorphism from the group $G$ to the group ${\rm Aut}(X)$ of automorphisms of $X$. In this case, if $\hat{\mathfrak{h}}$ is a hyperplane, we will denote by
$$Stab_{G}(\hat{\mathfrak{h}})$$
the subgroup of $G$ of elements which globally preserve $\hat{\mathfrak{h}}$ and which also preserve each component of the complement of $\hat{\mathfrak{h}}$.
Following~\cite{cs}, we say that an action of a group $G$ on a ${\rm CAT}(0)$ cubical complex $X$ is {\it essential} if no $G$-orbit is contained in a bounded neighborhood of a halfspace. This implies in particular that $G$ has no fixed point in $X$. We will also use the following convention. If $\mathfrak{h}$ is a halfspace of $X$, we will denote by $\hat{\mathfrak{h}}$ the associated hyperplane (the boundary of $\mathfrak{h}$) and by $\mathfrak{h}^{\ast}$ the opposite halfspace, i.e. the closure of $X\setminus\mathfrak{h}$.
Finally, we mention that there is a natural notion of irreducibility for a ${\rm CAT}(0)$ cubical complex, see~\cite[\S 2.5]{cs}. Any finite dimensional ${\rm CAT}(0)$ cube complex $X$ has a canonical decomposition as a product of finitely many irreducible ${\rm CAT}(0)$ cubical complexes. Moreover every automorphism of $X$ preserves this decomposition, up to a permutation of possibly isomorphic factors. We refer the reader to~\cite[\S 2.5]{cs} for a proof of these facts. We will use this decomposition to deduce Theorem~\ref{fac-complet} from Theorem~\ref{fac-pv}.
\subsection{Existence of strongly separated hyperplanes}\label{essh}
The following definition is due to Behrstock and Charney~\cite{berch} and is a key notion to study {\it rank $1$ phenomena} for group actions on ${\rm CAT}(0)$ cubical complexes.
\begin{defi} Two hyperplanes $\hat{\mathfrak{h}}_{1}$ and $\hat{\mathfrak{h}}_{2}$ in a ${\rm CAT}(0)$ cubical complex are strongly separated if they are disjoint and if there is no hyperplane meeting both $\hat{\mathfrak{h}}_{1}$ and $\hat{\mathfrak{h}}_{2}$.
\end{defi}
\noindent If $X$ is a ${\rm CAT}(0)$ cubical complex and $Y$ is a closed convex subset of $X$, we will denote by $\pi_{Y} : X \to Y$ the projection onto $Y$~\cite[II.2]{bh}. The following proposition is also taken from~\cite{berch}.
\begin{prop}\label{projunique}
If the hyperplanes $\hat{\mathfrak{h}}_{1}$ and $\hat{\mathfrak{h}}_{2}$ are strongly separated, there exists a unique pair of points $(p_{1}, p_{2}) \in \hat{\mathfrak{h}}_{1} \times \hat{\mathfrak{h}}_{2}$ such that $d(p_{1},p_{2})=d(\hat{\mathfrak{h}}_{1},\hat{\mathfrak{h}}_{2})$. The projection of any point of $\hat{\mathfrak{h}}_{2}$ (resp. $\hat{\mathfrak{h}}_{1}$) onto $\hat{\mathfrak{h}}_{1}$ (resp. $\hat{\mathfrak{h}}_{2}$) is $p_{1}$ (resp. $p_{2}$).
\end{prop}
{\it Proof.} The first claim is Lemma 2.2 in~\cite{berch}. The proof given there also shows that if $\hat{\mathfrak{h}}$ is a hyperplane distinct from $\hat{\mathfrak{h}}_{1}$ and $\hat{\mathfrak{h}}_{2}$, no translated hyperplane of $\hat{\mathfrak{h}}$ can intersect both $\hat{\mathfrak{h}}_{1}$ and $\hat{\mathfrak{h}}_{2}$. We now prove the last assertion of the proposition. It is enough to prove that the projection on $\hat{\mathfrak{h}}_{1}$ of each point of $\hat{\mathfrak{h}}_{2}$ is the middle of an edge of the cubical complex. Indeed, since $\hat{\mathfrak{h}}_{2}$ is connected and the projection is continuous with values in the discrete set of such middles, this implies that all points of $\hat{\mathfrak{h}}_{2}$ have the same projection, which must necessarily be the point $p_{1}$. Let $x\in \hat{\mathfrak{h}}_{2}$. If the projection $q$ of $x$ onto $\hat{\mathfrak{h}}_{1}$ is not the middle of an edge, then there exists a cube $C$ of dimension at least $3$ which contains $q$ as well as the germ at $q$ of the geodesic from $q$ to $x$. One can identify $C$ with $[-1,1]^{n}$ in such a way that
$$q=(0,s,s_{3}, \ldots , s_{n})$$
with $\vert s \vert <1$. Since the germ of geodesic going from $q$ to $x$ is orthogonal to $\hat{\mathfrak{h}}_{1}$, it must be contained in $[-1,1]\times \{s\}\times [-1,1]^{n-2}$. We call $\hat{\mathfrak{m}}$ the hyperplane associated to any edge of $C$ parallel to $\{0\}\times [-1,1]\times \{0\}^{n-2}$. Hence the germ of $[q,x]$ at $q$ is contained in a translated hyperplane of $\hat{\mathfrak{m}}$. This implies that $[q,x]$ is entirely contained in this translated hyperplane. This contradicts the fact that no translated hyperplane can intersect both $\hat{\mathfrak{h}}_{1}$ and $\hat{\mathfrak{h}}_{2}$.\hfill $\Box$
\noindent Note that the second point of the proposition can be stated in the following slightly stronger way. If $\mathfrak{h}_{1}$ is the half-space defined by $\hat{\mathfrak{h}}_{1}$ and which does not contain $\hat{\mathfrak{h}}_{2}$, then the projection of any point of $\mathfrak{h}_{1}$ onto $\hat{\mathfrak{h}}_{2}$ is equal to $p_{2}$. Indeed if $q\in \mathfrak{h}_{1}$ and if $\gamma$ is the geodesic from $q$ to $\pi_{\hat{\mathfrak{h}}_{2}}(q)$, there must exist a point $q'$ on $\gamma$ which belongs to $\hat{\mathfrak{h}}_{1}$. Hence $\pi_{\hat{\mathfrak{h}}_{2}}(q)=\pi_{\hat{\mathfrak{h}}_{2}}(q')=p_{2}$. This proves the claim.
The following theorem is due to Caprace and Sageev~\cite[\S 1.2]{cs}. It gives sufficient conditions for the existence of strongly separated hyperplanes in a ${\rm CAT}(0)$ cubical complex $X$.
\begin{theorem}\label{existence-strong-s} Assume that the ${\rm CAT}(0)$ cubical complex $X$ is irreducible and that the group ${\rm Aut}(X)$ acts essentially on $X$ and without fixed point in the visual boundary. Then $X$ contains two strongly separated hyperplanes.
\end{theorem}
For the remainder of this section, we consider a ${\rm CAT}(0)$ cubical complex $X$, two strongly separated hyperplanes $\hat{\mathfrak{h}}$ and $\hat{\mathfrak{k}}$ in $X$, and a group $G$ acting on $X$. We prove a few lemmas which will be used in the next sections. We denote by $\mathfrak{k}$ the halfspace delimited by $\hat{\mathfrak{k}}$ which does not contain $\hat{\mathfrak{h}}$.
\begin{lemma} Let $p$ be the projection of $\hat{\mathfrak{k}}$ (or $\mathfrak{k}$) on $\hat{\mathfrak{h}}$. Let $h$ be an element of $Stab_{G}(\hat{\mathfrak{h}})$. If $h(\mathfrak{k})\cap \mathfrak{k}$ is nonempty, then $h$ fixes $p$.
\end{lemma}
{\it Proof.} This is an easy consequence of Proposition~\ref{projunique}. Let $\pi_{\hat{\mathfrak{h}}} : X \to \hat{\mathfrak{h}}$ be the projection. We have seen that $\pi_{\hat{\mathfrak{h}}} (\mathfrak{k})=p$. Since $h\in Stab_{G}(\hat{\mathfrak{h}})$, the map $\pi_{\hat{\mathfrak{h}}}$ is $h$-equivariant. Let $x\in \mathfrak{k}$ be such that $h(x)\in \mathfrak{k}$. We have:
$$\pi_{\hat{\mathfrak{h}}}(h(x))=p$$
since $h(x)\in \mathfrak{k}$. But we also have $\pi_{\hat{\mathfrak{h}}}(h(x))=h(\pi_{\hat{\mathfrak{h}}}(x))=h(p)$ since $x\in \mathfrak{k}$. Hence $h(p)=p$.\hfill $\Box$
\medskip
We now define
$$\Sigma=\{h\in Stab_{G}(\hat{\mathfrak{h}}), h(\mathfrak{k})\cap \mathfrak{k} \neq \emptyset \}$$
and let $A$ be the subgroup of $Stab_{G}(\hat{\mathfrak{h}})$ generated by $\Sigma$. According to the previous lemma, every element of $A$ fixes $p$. Let
$$U=\bigcup_{a\in A}a(\mathfrak{k})$$
be the union of all the translates by $A$ of the halfspace $\mathfrak{k}$.
\begin{lemma}\label{separea} Let $h$ be an element of $Stab_{G}(\hat{\mathfrak{h}})$. If $h(U)\cap U$ is nonempty, then $h\in A$ and $h(U)=U$.
\end{lemma}
{\it Proof.} Let $h$ be such that $h(U)\cap U$ is nonempty. Then there exist $a_{1}$ and $a_{2}$ in $A$ such that $ha_{1}(\mathfrak{k})\cap a_{2}(\mathfrak{k})\neq \emptyset$. This implies that $a_{2}^{-1}ha_{1}$ is in $\Sigma$, hence in $A$. In particular $h$ is in $A$. Since $U$ is $A$-invariant by construction, we also get $h(U)=U$.\hfill $\Box$
If the space $X$ is proper (which is the case when $X$ is locally finite, being finite dimensional), there are only finitely many hyperplanes passing through a given ball of $X$. Since the group $A$ fixes the point $p$, in this case we get that the family of hyperplanes
$$\left( a(\hat{\mathfrak{k}} )\right)_{a\in A}$$
is actually a {\it finite} family. This implies that the family of halfspaces $(a(\mathfrak{k}))_{a\in A}$ is also finite. Note that the same conclusion holds if we assume that the action of $G$ has finite stabilizers instead of assuming the properness of $X$. Indeed, the whole group $A$ is finite in that case. We record this observation in the following:
\begin{prop}\label{finitude} If $X$ is locally finite or if the $G$-action on $X$ has finite stabilizers, then there exists a finite set $A_{1}\subset A$ such that for all $a\in A$, there exists $a_{1}\in A_{1}$ such that $a_{1}(\mathfrak{k})=a(\mathfrak{k})$.
\end{prop}
We will make important use of this Proposition in section~\ref{sphm}. Although we will only use it under the hypothesis that $X$ is locally finite, we record the fact that the conclusion still holds for actions with finite stabilizers, as this might be useful to study proper actions of K\"ahler groups on non-locally compact ${\rm CAT}(0)$ cubical complexes.
\subsection{Non-amenability of certain Schreier graphs}
In this section we consider an irreducible ${\rm CAT}(0)$ cubical complex $X$ and a finitely generated group $G$ which acts essentially on $X$, does not preserve any Euclidean subspace of $X$, and has no fixed point in $\partial_{\infty}X$. A consequence of the work of Caprace and Sageev~\cite{cs} is that under these hypotheses, the group $G$ contains a nonabelian free group. We will need the following slight modification of this important fact; see also~\cite{kasa} for a similar statement.
To state the next theorem, we need the following definition. Following Caprace and Sageev~\cite{cs}, we say that three halfspaces $\mathfrak{a}$, $\mathfrak{b}$, $\mathfrak{c}$ form a {\it facing triple of halfspaces} if they are pairwise disjoint.
\begin{theorem}\label{groupeslibres} Let $X$ be an irreducible ${\rm CAT}(0)$ cubical complex. Assume that $G$ is a finitely generated group of automorphisms of $X$ which satisfies the following three conditions: $G$ does not fix any point in the visual boundary of $X$, does not preserve any Euclidean subspace of $X$ and acts essentially.
Then $X$ contains a facing triple of halfspaces $\mathfrak{k}$, $\mathfrak{h}$, $\mathfrak{l}$ such that the three hyperplanes $\hat{\mathfrak{k}}$, $\hat{\mathfrak{h}}$, $\hat{\mathfrak{l}}$ are strongly separated and such that there exists a non-Abelian free group $F<G$ with the property that $F\cap gStab_{G}(\hat{\mathfrak{h}})g^{-1} =\{1\}$ for all $g\in G$.
\end{theorem}
Besides the facts about ${\rm CAT}(0)$ cubical complexes already recalled in the previous sections, we will further use the following three results from~\cite{cs}, which apply under the hypotheses of the previous theorem.
\begin{enumerate}
\item The space $X$ contains a facing triple of halfspaces, see Theorem E in~\cite{cs}.\label{exft}
\item We will use the {\it flipping lemma} from~\cite[\S 1.2]{cs}: for any halfspace $\mathfrak{h}$, there exists $g\in G$ such that $\mathfrak{h}^{\ast} \subsetneq g(\mathfrak{h})$.
\item Finally we will also use the {\it double skewer lemma} from~\cite[\S 1.2]{cs}: for any two halfspaces $\mathfrak{k}\subset \mathfrak{h}$, there exists $g\in G$ such that $g(\mathfrak{h})\subsetneq \mathfrak{k} \subset \mathfrak{h}$.
\end{enumerate}
We now turn to the proof of the theorem.
\noindent {\it Proof of Theorem~\ref{groupeslibres}.} By~\ref{exft} above, one can choose a facing triple of halfspaces
$$\mathfrak{h}, \mathfrak{h}_{1}, \mathfrak{h}_{2}$$
in $X$. By the flipping lemma, there exists an element $k\in G$ such that $k (\mathfrak{h}^{\ast})\subsetneq \mathfrak{h}$. We now define $\mathfrak{h}_{3}=k(\mathfrak{h}_{1})$ and $\mathfrak{h}_{4}=k(\mathfrak{h}_{2})$. By construction, $\mathfrak{h}_{1}$, $\mathfrak{h}_{2}$, $\mathfrak{h}_{3}$, $\mathfrak{h}_{4}$ is a facing quadruple of halfspaces. We will need to assume moreover that these four halfspaces are strongly separated. This can be done thanks to the following lemma.
\begin{lemma}\label{interm} There exist halfspaces $\mathfrak{h}_{j}'\subset \mathfrak{h}_{j}$ ($1\le j \le 4$) such that the $\mathfrak{h}_{j}'$ are pairwise strongly separated.
\end{lemma}
{\it Proof of Lemma~\ref{interm}.} According to Theorem~\ref{existence-strong-s}, we can find two halfspaces $\mathfrak{a}_{1}\subset \mathfrak{a}_{2}$ such that the corresponding hyperplanes $\hat{\mathfrak{a}}_{i}$ are strongly separated. We claim the following:
\begin{center}
{\it Up to replacing the pair $(\mathfrak{a}_{1},\mathfrak{a}_{2})$ by the pair $(\mathfrak{a}_{2}^{\ast},\mathfrak{a}_{1}^{\ast})$, there exists $i\in \{ 1, 2, 3, 4\}$ and $x\in G$ such that $x(\mathfrak{a}_{1})\subset x(\mathfrak{a}_{2})\subset \mathfrak{h}_{i}^{\ast}$.}
\end{center}
Let us prove this claim. First we prove that one of the four halfspaces $\mathfrak{a}_{1}$, $\mathfrak{a}_{2}$, $\mathfrak{a}_{1}^{\ast}$, $\mathfrak{a}_{2}^{\ast}$ is contained in $\mathfrak{h}_{j}^{\ast}$ for some $j$. If this is false, each of these four halfspaces intersects the interior of each of the $\mathfrak{h}_{j}$. In particular $\mathfrak{a}_{k}$ and $\mathfrak{a}_{k}^{\ast}$ intersect the interior of $\mathfrak{h}_{j}$. Since $\mathfrak{h}_{j}$ is convex, $\hat{\mathfrak{a}}_{k}$ also intersects it. Considering two indices $j\neq j'$ and a geodesic from a point in $\hat{\mathfrak{a}}_{k}\cap \mathfrak{h}_{j}$ to a point in $\hat{\mathfrak{a}}_{k}\cap \mathfrak{h}_{j'}$, one sees that $\hat{\mathfrak{a}}_{k}$ intersects each of the hyperplanes $\hat{\mathfrak{h}}_{j}$. Since this is true for $k=1, 2$, this contradicts the fact that the hyperplanes $\hat{\mathfrak{a}}_{1}$ and $\hat{\mathfrak{a}}_{2}$ are strongly separated. This concludes the proof that one of the four halfspaces $\mathfrak{a}_{1}$, $\mathfrak{a}_{2}$, $\mathfrak{a}_{1}^{\ast}$, $\mathfrak{a}_{2}^{\ast}$ is contained in $\mathfrak{h}_{j}^{\ast}$ for some $j$. If $\mathfrak{a}_{2}$ or $\mathfrak{a}_{1}^{\ast}$ is contained in one of the $\mathfrak{h}_{j}^{\ast}$, this proves the claim (with $x=1$). Otherwise we assume that $\mathfrak{a}_{1}\subset \mathfrak{h}_{j}^{\ast}$ (the last case $\mathfrak{a}_{2}^{\ast}\subset \mathfrak{h}_{j}^{\ast}$ being similar). In this case the double skewer lemma applied to $\mathfrak{a}_{1}\subset \mathfrak{a}_{2}$ implies that there exists $x\in G$ such that:
$$x(\mathfrak{a}_{1})\subset x(\mathfrak{a}_{2})\subset \mathfrak{a}_{1}\subset \mathfrak{h}_{j}^{\ast}.$$
This proves our claim. We now write $\mathfrak{b}_{1}:=x(\mathfrak{a}_{1})$, $\mathfrak{b}_{2}:=x(\mathfrak{a}_{2})$.
Since $\mathfrak{h}_{1}\subset \mathfrak{h}_{2}^{\ast}$, the double skewer lemma implies that there exists $g\in G$ such that $g(\mathfrak{h}_{2}^{\ast})\subsetneq \mathfrak{h}_{1}$. Similarly there exists $h\in G$ such that $\mathfrak{h}_{3} \supsetneq h(\mathfrak{h}_{4}^{\ast})$. Applying one of the four elements $g, g^{-1}, h, h^{-1}$ to $\mathfrak{b}_{1}$ and $\mathfrak{b}_{2}$, we obtain two halfspaces which are contained in one of the $\mathfrak{h}_{j}$. For instance if $i=2$ in the claim above, one has $g(\mathfrak{b}_{1})\subset g(\mathfrak{b}_{2})\subset \mathfrak{h}_{1}$. In what follows we continue to assume that we are in this case, the other ones being similar. Since $\mathfrak{h}_{1}\subset \mathfrak{h}_{4}^{\ast}$ and since $h(\mathfrak{h}_{4}^{\ast})\subset \mathfrak{h}_{3}$, one has
$$hg(\mathfrak{b}_{1})\subset hg(\mathfrak{b}_{2})\subset \mathfrak{h}_{3}.$$
Finally by a similar argument we have:
$$g^{-1}hg(\mathfrak{b}_{1})\subset g^{-1}hg(\mathfrak{b}_{2})\subset \mathfrak{h}_{2}\quad \text{and} \quad h^{-1}g(\mathfrak{b}_{1})\subset h^{-1}g(\mathfrak{b}_{2})\subset \mathfrak{h}_{4}.$$
We now define $\mathfrak{h}_{1}'=g(\mathfrak{b}_{1})$, $\mathfrak{h}_{2}'=g^{-1}hg(\mathfrak{b}_{1})$, $\mathfrak{h}_{3}'=hg(\mathfrak{b}_{1})$ and $\mathfrak{h}_{4}'=h^{-1}g(\mathfrak{b}_{1})$. We check that the $\mathfrak{h}_{j}'$ are strongly separated halfspaces. It is clear that they are pairwise disjoint. We check that the corresponding hyperplanes are strongly separated. We do this for the pair $\{ \hat{\mathfrak{h}}_{1}' , \hat{\mathfrak{h}}_{2}' \}$, the other cases being similar. If a hyperplane $\hat{\mathfrak{u}}$ intersects both $\hat{\mathfrak{h}}_{1}'$ and $\hat{\mathfrak{h}}_{2}'$, it must also intersect $g(\hat{\mathfrak{b}}_{2})$. This contradicts the fact that the pair
$$g(\hat{\mathfrak{b}}_{1}), g(\hat{\mathfrak{b}}_{2})$$ is strongly separated. Hence $\hat{\mathfrak{h}}_{1}'$ and $\hat{\mathfrak{h}}_{2}'$ are strongly separated.\hfill $\Box$
We now continue our proof using the facing quadruple of strongly separated hyperplanes constructed in the previous lemma. So, up to replacing $\mathfrak{h}_{j}$ by $\mathfrak{h}_{j}'$, we assume that $$(\mathfrak{h}_{j})_{1\le j\le 4}$$ is a facing quadruple of strongly separated halfspaces which moreover do not intersect $\hat{\mathfrak{h}}$.
Exactly as in the proof of the previous lemma, the double skewer lemma implies that there exists $g\in G$ such that $g(\mathfrak{h}_{2}^{\ast})\subsetneq \mathfrak{h}_{1}$ and $h\in G$ such that $\mathfrak{h}_{3} \supsetneq h(\mathfrak{h}_{4}^{\ast})$. Define $U=\mathfrak{h}_{1}\cup \mathfrak{h}_{2}$ and $V=\mathfrak{h}_{3}\cup \mathfrak{h}_{4}$. We now see that we have a Schottky pair: since $g(V)\subset \mathfrak{h}_{1} \subset \mathfrak{h}_{2}^{\ast}$, for any positive integer $n$, we have $g^{n}(V)\subset \mathfrak{h}_{1}\subset U$. Also, since $g^{-1}(V)\subset \mathfrak{h}_{2}\subset \mathfrak{h}_{1}^{\ast}$ we have $g^{-n}(V)\subset \mathfrak{h}_{2}\subset U$ for any positive integer $n$. Similarly $h^{n}(U)\subset V$ for any nonzero integer $n$. The ping-pong lemma implies that $g$ and $h$ generate a free subgroup of $G$. Note that this argument is borrowed from the proof of Theorem F in~\cite{cs}.
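For the reader's convenience, we recall the version of the ping-pong lemma used above; this is a standard formulation, whose hypotheses are exactly the inclusions just verified (note that these inclusions force $g$ and $h$ to have infinite order, since $U$ and $V$ are disjoint and nonempty).

```latex
\noindent {\it Ping-pong lemma.} Let a group $\Gamma$ act on a set $Z$, and let
$U, V \subset Z$ be disjoint nonempty subsets. If two elements $g, h\in \Gamma$
satisfy
$$g^{n}(V)\subset U \quad \text{and} \quad h^{n}(U)\subset V$$
for every nonzero integer $n$, then the subgroup $\langle g, h\rangle$ of
$\Gamma$ is free of rank $2$.
```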
Now we observe that $\hat{\mathfrak{h}} \subset U^{c}\cap V^{c}$. If we apply one of the four elements $\{g,g^{-1},h,h^{-1}\}$ to $\hat{\mathfrak{h}}$ we obtain a subset of $U$ or a subset of $V$. This implies that for any nontrivial element $\gamma$ of $\langle g, h\rangle$ we have $\gamma (\hat{\mathfrak{h}}) \subset U$ or $\gamma (\hat{\mathfrak{h}}) \subset V$. In particular $\gamma (\hat{\mathfrak{h}})\cap \hat{\mathfrak{h}}=\emptyset$, and the intersection of the groups $\langle g, h\rangle $ and $Stab_{G}(\hat{\mathfrak{h}})$ is trivial.
We need to prove something slightly stronger. Namely we are looking for a free subgroup of $G$ which intersects trivially every conjugate of $Stab_{G}(\hat{\mathfrak{h}})$. So we first make the following observation. Let $x$ be an element of $\langle g,h\rangle$ which is not conjugate to a power of $g$ or of $h$. We will prove that $x$ does not belong to any conjugate of $Stab_{G}(\hat{\mathfrak{h}})$. Up to changing $x$ into $x^{-1}$ and up to conjugacy, we can assume that this element has the form:
$$x=g^{a_{1}}h^{b_{1}}\cdots g^{a_{r}}h^{b_{r}}$$
with $a_{i}$, $b_{i}$ nonzero integers and $r\ge 1$. This implies that any positive power $x^{m}$ of $x$ satisfies $x^{m}(U)\subset U$. In fact, since $x(U)\subset g^{a_{1}}(V)$, it follows from the properties of $g$ and of the $\mathfrak{h}_{i}$'s that the distance of $x(U)$ to the boundary of $U$ is bounded below by some positive number $\delta$. This implies that
\begin{equation}\label{distu}
d(x^{m}(U),\partial U)\ge m\delta.
\end{equation}
Similarly one proves that
\begin{equation}\label{distv}
d(x^{-m}(V),\partial V)\ge m\delta'
\end{equation} for any $m\ge 1$ and for some positive number $\delta'$. Suppose now by contradiction that $x$ stabilizes a hyperplane $\hat{\mathfrak{u}}$. Assume that $\hat{\mathfrak{u}}$ does not intersect $U$. For $p\in \hat{\mathfrak{u}}$ we have
$$d(p,U)=d(x^{m}(p),x^{m}(U))$$
for any integer $m\ge 1$. But since $x^{m}(p)$ is never in $U$, the quantity $d(x^{m}(p),x^{m}(U))$ is greater than or equal to the distance from $x^{m}(U)$ to $\partial U$. Hence we obtain, using Equation~\eqref{distu}:
$$d(p,U)\ge m\delta$$
which is a contradiction since the left hand side does not depend on $m$. This proves that $\hat{\mathfrak{u}}$ must intersect $U$. In a similar way, using Equation~\eqref{distv}, one proves that $\hat{\mathfrak{u}}$ must intersect $V$. But this contradicts the fact that the halfspaces $(\mathfrak{h}_{j})_{1\le j \le 4}$ are pairwise strongly separated. Hence $x$ does not preserve any hyperplane. In particular $x$ is not contained in any $G$-conjugate of $Stab_{G}(\hat{\mathfrak{h}})$.
Now one can consider a normal subgroup $N$ of the free group $\langle g,h\rangle$ which does not contain any nontrivial power of $g$ or $h$ (for instance its derived subgroup). Any finitely generated subgroup $F<N$ intersects trivially every $G$-conjugate of $Stab_{G}(\hat{\mathfrak{h}})$; taking $F$ to be a non-Abelian free subgroup of $N$ proves our claim.
We finally construct two halfspaces $\mathfrak{k}$ and $\mathfrak{l}$ as in the statement of the Theorem. We simply have to repeat the arguments used in the proof of Lemma~\ref{interm}. Exactly as in the proof of that lemma, one can find two halfspaces $\mathfrak{b}_{1}\subset \mathfrak{b}_{2}$ which are contained in $\mathfrak{h}_{j}^{\ast}$ for some $j$ and such that $\hat{\mathfrak{b}}_{1}$ and $\hat{\mathfrak{b}}_{2}$ are strongly separated. We assume that $j=2$ to simplify. We continue as in the proof of the lemma. Applying the element $g$ to $\mathfrak{b}_{1}$ and $\mathfrak{b}_{2}$, we obtain two halfspaces which are contained in $\mathfrak{h}_{1}$: $g(\mathfrak{b}_{1})\subset g(\mathfrak{b}_{2})\subset \mathfrak{h}_{1}$. Since $\mathfrak{h}_{1}\subset \mathfrak{h}_{4}^{\ast}$ and since $h(\mathfrak{h}_{4}^{\ast})\subset \mathfrak{h}_{3}$, one has
$$hg(\mathfrak{b}_{1})\subset hg(\mathfrak{b}_{2})\subset \mathfrak{h}_{3}.$$
Finally by a similar argument we have:
$$g^{-1}hg(\mathfrak{b}_{1})\subset g^{-1}hg(\mathfrak{b}_{2})\subset \mathfrak{h}_{2}.$$ We now define $\mathfrak{k}=g(\mathfrak{b}_{1})$ and $\mathfrak{l}=g^{-1}hg(\mathfrak{b}_{1})$. One now checks that $\hat{\mathfrak{h}}$, $\hat{\mathfrak{l}}$, $\hat{\mathfrak{k}}$ are strongly separated exactly as in the end of the proof of Lemma~\ref{interm}.\hfill $\Box$
In what follows, we will use the following definition.
\begin{defi} Let $G$ be a finitely generated group acting on a ${\rm CAT}(0)$ cubical complex $X$. We say that a hyperplane $\hat{\mathfrak{h}}$ of $X$ is stable for $G$ if the Schreier graph $Stab_{G}(\hat{\mathfrak{h}})\backslash G$ is nonamenable, i.e. satisfies a linear isoperimetric inequality. We say that a halfspace $\mathfrak{h}$ is stable for $G$ if the corresponding hyperplane $\hat{\mathfrak{h}}$ is stable for $G$.
\end{defi}
Note that to define the Schreier graph of $Stab_{G}(\hat{\mathfrak{h}})\backslash G$, one needs to pick a finite generating set for $G$, but the (non)amenability of this graph is independent of this choice. We refer the reader to~\cite{ikapovich} for the discussion of various equivalent notions of amenability for Schreier graphs. Here we will only need the following:
\begin{lemma}\label{joli} Let $G$ be a group, $H<G$ a subgroup, and $F<G$ a finitely generated nonabelian free group such that $F\cap gHg^{-1}=\{1\}$ for all $g\in G$. Then any Schreier graph of $H\backslash G$ satisfies a linear isoperimetric inequality, i.e.\ is nonamenable.
\end{lemma}
{\it Proof.} It is well-known that the Schreier graph of $H\backslash G$ satisfies a linear isoperimetric inequality if and only if the $G$-action on $\ell^{2}(G/H)$ does not have almost invariant vectors. So we will prove this last fact. For this it is enough to prove that the $F$-action on $\ell^{2}(G/H)$ does not have almost invariant vectors. But the hypothesis on $F$ implies that $F$ acts freely on $G/H$. Hence, as an $F$-representation, $\ell^{2}(G/H)$ is isomorphic to a direct sum of copies of the regular representation of $F$ on $\ell^{2}(F)$. This proves the claim.\hfill $\Box$
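For the reader's convenience, we spell out the last isomorphism. Since $F$ acts freely on $G/H$, this set decomposes as a disjoint union of free $F$-orbits, each of which is $F$-equivariantly isomorphic to $F$ itself. Hence, as $F$-representations,
$$\ell^{2}(G/H)=\bigoplus_{\mathcal{O}\in F\backslash (G/H)}\ell^{2}(\mathcal{O})\cong \bigoplus_{\mathcal{O}\in F\backslash (G/H)}\ell^{2}(F).$$
A nonabelian free group is nonamenable, hence its regular representation has no almost invariant vectors, and the same then holds for any direct sum of copies of it.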
We will need to record the following corollary of the previous theorem.
\begin{coro}\label{allstable} Under the hypotheses of Theorem~\ref{groupeslibres}, we have:
\begin{itemize}
\item Any halfspace which is part of a facing triple of halfspaces is stable.
\item For any halfspace $\mathfrak{h}$ which is part of a facing triple of halfspaces, and any finite index subgroup $G_{2}$ of $G$, there exists $\gamma \in G_{2}$ such that $\mathfrak{h}$ and $\gamma (\mathfrak{h})$ are strongly separated.
\end{itemize}
\end{coro}
\noindent {\it Proof.} This is contained in the proof of the previous theorem. Indeed for the first point of the corollary, we observe that we started the proof of the previous theorem with any facing triple of halfspaces and proved that a given halfspace among the three is stable, as a consequence of Lemma~\ref{joli}.
For the second point, we consider the triple $\mathfrak{h}, \mathfrak{l}, \mathfrak{k}$ constructed in the previous theorem. We have $\mathfrak{l} \subset \mathfrak{h}^{\ast}$. Applying the double skewer lemma to this last pair, we find $x\in G$ such that $x(\mathfrak{h}^{\ast})\subsetneq \mathfrak{l}$. This implies $x^{n}(\mathfrak{h}^{\ast})\subset \mathfrak{l}$ for all $n\ge 1$. We pick $n_{0}\ge 1$ such that $x^{n_{0}}\in G_{2}$. In particular the hyperplane $x^{n_{0}}(\hat{\mathfrak{h}})$ is contained in $\mathfrak{l}$. Any hyperplane meeting both $x^{n_{0}}(\hat{\mathfrak{h}})$ and $\hat{\mathfrak{h}}$ would have to meet $\hat{\mathfrak{l}}$, which is impossible since $\hat{\mathfrak{h}}$ and $\hat{\mathfrak{l}}$ are strongly separated. Hence $x^{n_{0}}(\hat{\mathfrak{h}})$ and $\hat{\mathfrak{h}}$ are strongly separated.\hfill $\Box$
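The inclusion $x^{n}(\mathfrak{h}^{\ast})\subset \mathfrak{l}$ used above follows by induction from the chain $x(\mathfrak{h}^{\ast})\subset \mathfrak{l}\subset \mathfrak{h}^{\ast}$: applying $x$ repeatedly to the inclusion $x(\mathfrak{h}^{\ast})\subset \mathfrak{h}^{\ast}$ gives
$$x^{n}(\mathfrak{h}^{\ast})\subset x^{n-1}(\mathfrak{h}^{\ast})\subset \cdots \subset x(\mathfrak{h}^{\ast})\subset \mathfrak{l}\qquad (n\ge 1).$$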
We will also use the following classical fact.
\begin{lemma}\label{inisop} Let $M$ be a closed Riemannian manifold with fundamental group $G$. Let $H<G$ be a subgroup and let $M_{1}\to M$ be the covering space associated to $H$. Then the Schreier graph $H\backslash G$ satisfies a linear isoperimetric inequality if and only if $M_{1}$ satisfies a linear isoperimetric inequality.
\end{lemma}
A proof can be found in~\cite[Ch. 6]{gromovmetric}. Note that the proof in~\cite{gromovmetric} is given only in the case when $H$ is trivial, but the arguments apply in general. Combining Theorem~\ref{groupeslibres}, Corollary~\ref{allstable}, and Lemma~\ref{inisop}, we obtain:
\begin{prop}\label{propstabi} Let $M$ be a closed Riemannian manifold with fundamental group $G$. Suppose that $G$ acts on a ${\rm CAT}(0)$ cubical complex $X$, satisfying the hypothesis of Theorem~\ref{groupeslibres}. Let $\mathfrak{h}$ be a halfspace of $X$ which is part of a facing triple of halfspaces and let $\hat{\mathfrak{h}}$ be the corresponding hyperplane. Let $M_{\hat{\mathfrak{h}}}$ be the covering space of $M$ corresponding to the subgroup
$$Stab_{G}(\hat{\mathfrak{h}})<G.$$
Then, $M_{\hat{\mathfrak{h}}}$ satisfies a linear isoperimetric inequality.
\end{prop}
\section{Fibering K\"ahler groups acting on ${\rm CAT}(0)$ cubical complexes}\label{rems}
In this section we first give a criterion to produce fibrations of certain open K\"ahler manifolds over Riemann surfaces (Proposition~\ref{filteredinuse}). Although this criterion is well-known to experts, we explain how to deduce its proof from known results about {\it filtered ends} of K\"ahler manifolds. This serves as a pretext to survey this notion and its applications in the study of K\"ahler groups. Later, in sections~\ref{first} and~\ref{sphm}, we explain how to construct pluriharmonic or plurisubharmonic functions on certain covering spaces of a compact K\"ahler manifold, starting from an action of its fundamental group on a ${\rm CAT}(0)$ cubical complex. We finally prove Theorem~\ref{fac-pv} in section~\ref{pf}.
\subsection{Filtered ends}\label{filterede}
The aim of this subsection is to recall the proof of the following classical proposition.
\begin{prop}\label{filteredinuse} Let $M$ be a closed K\"ahler manifold. Let $M_{1}\to M$ be a covering space of $M$ and let $\pi : \widetilde{M}\to M_{1}$ be the universal cover. We assume that there exists a proper, pluriharmonic map $e : M_{1} \to I$ where $I$ is an open interval of $\mathbb{R}$. Let $\widetilde{e}:= e\circ \pi$ be the lift of $e$ to the universal cover $\widetilde{M}$. If some level set of $\widetilde{e}$ is not connected, then there exists a proper holomorphic map with connected fibers
$$M_{1}\to \Sigma$$
where $\Sigma$ is a Riemann surface. This applies in particular if some level set of $e$ is not connected.
\end{prop}
Before turning to the proof of the proposition, we recall briefly various notions of ends in topology and group theory.
Let $Y$ be a noncompact manifold. Recall that the number of ends of $Y$, denoted by $e(Y)$, is the supremum of the number of unbounded connected components of $Y-K$ where $K$ runs over the relatively compact open sets of $Y$ with smooth boundary. Now if $M$ is a closed manifold, $M_{1}\to M$ a covering space, and $\pi : \widetilde{M}\to M_{1}$ the universal covering space, one can look at the number of ends of various spaces, each of which also admits a purely group theoretical description.
\begin{itemize}
\item There is the number of ends of $\widetilde{M}$; this is also the number of ends of the fundamental group of $M$.
\item There is the number of ends of the space $M_{1}$, which is an invariant of the pair $\pi_{1}(M_{1})< \pi_{1}(M)$~\cite{houghton,scott}. When this number is greater than $1$, one says that $\pi_{1}(M_{1})$ has codimension $1$ in $\pi_{1}(M)$ or that it is multi-ended.
\item There is also a third, less well-known notion, that of filtered ends, which can be associated to a continuous map between two manifolds. Here we will only consider the case where this map is the universal covering map. This leads to the following definition.
\end{itemize}
\begin{defi} Let $M_{1}$ be an open manifold and let $\pi : \widetilde{M}\to M_{1}$ be its universal covering space. A filtered end of $M_{1}$ (or of $\pi : \widetilde{M} \to M_{1}$) is an element of the set
$$\underset{\leftarrow}{{\rm lim}}\, \pi_{0}(\widetilde{M}-\pi^{-1}(K))$$
where $K$ runs over the relatively compact open sets of $M_{1}$ with smooth boundary. The number of filtered ends of $M_{1}$ is denoted by $\widetilde{e}(M_{1})$.
\end{defi}
As for the previous notions, one can show that in the case where $M_{1}$ covers a compact manifold $M$, this number only depends on the pair $\pi_{1}(M_{1})< \pi_{1}(M)$. Also, one always has $\widetilde{e}(M_{1})\ge e(M_{1})$. The interest in this notion is that the number of filtered ends of $\pi : \widetilde{M}\to M_{1}$ can be greater than $1$ even if $M_{1}$ is $1$-ended. A simple example of this situation is obtained as follows. Take $M$ to be a genus $2$ closed surface and $\Sigma$ to be a subsurface of genus $1$ with one boundary component. The covering space $M_{1}$ of $M$ defined by the subgroup $\pi_{1}(\Sigma)<\pi_{1}(M)$ has this property.
This notion was first introduced by Kropholler and Roller~\cite{kr} in a purely group theoretical context. A topological approach to it was later given in the book by Geoghegan~\cite{ge}. Filtered ends were studied in the context of K\"ahler groups by Gromov and the first author~\cite{delzantgromov}. There, the name {\it cut} was used instead of filtered ends, or rather the term cut was used to indicate the presence of at least two filtered ends for a certain map or covering space, whereas the term {\it Schreier cut} referred to the classical notion of relative ends of a pair of groups in~\cite{delzantgromov}.
With the notion of filtered ends at our disposal, Proposition~\ref{filteredinuse} will be a simple application of the following theorem.
\begin{theorem}\label{dgnr} Let $M$ be a closed K\"ahler manifold and let $M_{1}\to M$ be an infinite covering space of $M$. If the number of filtered ends of $M_{1}$ is greater than or equal to $3$, then there exists a proper holomorphic mapping with connected fibers $M_{1}\to \Sigma$, where $\Sigma$ is a Riemann surface.
\end{theorem}
This result was proved in~\cite{delzantgromov} under an additional hypothesis of ``stability''. A more general version was later proved by Napier and Ramachandran~\cite{nr2}. Their version includes the theorem stated above but also more general statements which apply to K\"ahler manifolds that are not necessarily covering spaces of a compact manifold.
Before turning to the proof of Proposition~\ref{filteredinuse}, we make the following easy observation.
\begin{lemma}\label{lieucritnd} Let $V$ be a complex manifold and $f : V \to \mathbb{R}$ be a nonconstant smooth pluriharmonic function. Denote by $Crit(f)$ the set of critical points of $f$. Then for each $t\in \mathbb{R}$, the set $Crit(f)\cap f^{-1}(t)$ is nowhere dense in $f^{-1}(t)$.
\end{lemma}
The proof is straightforward, once one remembers that the function $f$ is locally the real part of a holomorphic function $F$ and that the critical set of $f$ locally coincides with that of $F$.
\noindent {\it Proof of Proposition~\ref{filteredinuse}.} Note that if $M_{1}$ has at least three ends, the result follows from much older results, see~\cite{nr0}. Thus, we could assume that $M_{1}$ has only two ends, although this is not necessary.
Let $t_{1}$ be a real number such that $\widetilde{e}^{-1}(t_{1})$ is not connected. Let $x$ and $y$ be two points in distinct connected components of $\widetilde{e}^{-1}(t_{1})$. By Lemma~\ref{lieucritnd}, we can assume that $x$ and $y$ are not critical points of $\widetilde{e}$. We claim that at least one of the two sets $$\{ \widetilde{e}>t_{1}\}, \;\;\;\; \{ \widetilde{e} < t_{1}\}$$
is not connected. Let us assume that this is false. Then one can find a path $\alpha$ from $x$ to $y$ contained in the set $\{ \widetilde{e}\ge t_{1}\}$ and a path $\beta$ from $y$ to $x$ contained in the set $\{\widetilde{e} \le t_{1}\}$. We can assume that $\alpha$ and $\beta$ intersect $\widetilde{e}^{-1}(t_{1})$ only at their endpoints. Let $\gamma$ be the path obtained by concatenating $\alpha$ and $\beta$. Let $\gamma_{1} : S^{1}\to \widetilde{M}$ be a smooth map freely homotopic to $\gamma$, intersecting $\widetilde{e}^{-1}(t_{1})$ only at the points $x$ and $y$ and transverse to $\widetilde{e}^{-1}(t_{1})$ there. Since $\widetilde{M}$ is simply connected, we can find a smooth map
$$u : D^{2}\to \widetilde{M}$$
coinciding with $\gamma_{1}$ on the boundary. Pick $\varepsilon >0$ small enough, such that $t_{1}+\varepsilon$ is a regular value of $\widetilde{e}\circ u$. The intersection of the level set $\{\widetilde{e}\circ u = t_{1}+\varepsilon\}$ with the boundary of the disc consists of two points $a_{\varepsilon}$ and $b_{\varepsilon}$ such that $u(a_{\varepsilon})\to x$ and $u(b_{\varepsilon})\to y$ when $\varepsilon$ goes to $0$. Let $I_{\varepsilon}$ be the connected component of the set $\{\widetilde{e}\circ u = t_{1}+\varepsilon\}$ whose boundary is $\{a_{\varepsilon},b_{\varepsilon}\}$. Up to taking a subsequence, we can assume that $I_{\varepsilon}$ converges as $\varepsilon$ goes to $0$ to a compact connected subset $I_{0}\subset \{\widetilde{e}\circ u =t_{1}\}$. Now the set $u(I_{0})$ is connected, contained in $\widetilde{e}^{-1}(t_{1})$ and contains $x$ and $y$. This contradicts the fact that $x$ and $y$ are in distinct components of $\widetilde{e}^{-1}(t_{1})$. Hence at least one of the two sets $\{ \widetilde{e}>t_{1}\}$ and $\{ \widetilde{e}<t_{1}\}$ is not connected. Note that the maximum principle implies that no connected component of these open sets can be at bounded distance from the set $\widetilde{e}^{-1}(t_{1})$. This is easily seen to imply that $M_{1}$ has at least three filtered ends. The conclusion now follows from Theorem~\ref{dgnr}.\hfill $\Box$
\begin{rem} Note that our Proposition~\ref{filteredinuse} can be applied to the case where $e$ is the primitive of the lift of a harmonic $1$-form $\alpha$ on $M$ with integral periods and where $M_{1}$ is the covering space associated to the kernel of the homomorphism $\pi_{1}(M)\to \mathbb{Z}$ induced by $\alpha$. In this case one recovers a particular case of a result due to Simpson~\cite{simpson}. Hence Theorem~\ref{dgnr} about filtered ends implies this particular case of the result of Simpson. Proposition~\ref{filteredinuse} can also be thought of as a nonequivariant version of Simpson's result.
\end{rem}
\subsection{The first pluriharmonic map}\label{first}
We now consider a ${\rm CAT}(0)$ cubical complex $X$, assumed to be irreducible and locally finite. Let $M$ be a compact K\"ahler manifold and let $\Gamma$ be its fundamental group. We consider a homomorphism
$$\varrho : \Gamma \to {\rm Aut}(X).$$
We suppose that the $\Gamma$-action on $X$ satisfies the three hypotheses of Theorem~\ref{fac-pv}, although this will not be used until section~\ref{sphm}. We denote by $\widetilde{M}$ the universal cover of $M$. If $\hat{\mathfrak{h}}$ is a hyperplane of $X$ we will write $M_{\hat{\mathfrak{h}}}$ for the quotient of $\widetilde{M}$ by the group $Stab_{\Gamma}(\hat{\mathfrak{h}})$:
\begin{equation}\label{notationrev}
M_{\hat{\mathfrak{h}}}=\widetilde{M}/Stab_{\Gamma}(\hat{\mathfrak{h}}).
\end{equation}
Since $X$ is contractible, one can choose a Lipschitz $\varrho$-equivariant map $\psi : \widetilde{M}\to X$. We fix a hyperplane $\hat{\mathfrak{h}}$ of $X$ and a halfspace $\mathfrak{h}$ associated to $\hat{\mathfrak{h}}$. Define a function $d_{\mathfrak{h}}$ on $X$ by
$$d_{\mathfrak{h}}(x)=\left\{ \begin{array}{ll}
d(x,\hat{\mathfrak{h}}) & {\rm if}\;\; x\in \mathfrak{h}, \\
-d(x,\hat{\mathfrak{h}}) & {\rm if}\;\; x\in \mathfrak{h}^{\ast}. \\
\end{array}\right.$$
Let $f : \widetilde{M}\to \mathbb{R}$ be the composition $f=d_{\mathfrak{h}}\circ \psi$. Finally let $\overline{f}$ be the function induced by $f$ on $M_{\hat{\mathfrak{h}}}$.
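Note that the function $d_{\mathfrak{h}}$ is $1$-Lipschitz. Indeed, if $x\in \mathfrak{h}$ and $y\in \mathfrak{h}^{\ast}$, any geodesic from $x$ to $y$ meets $\hat{\mathfrak{h}}$ at some point $z$, so that
$$\vert d_{\mathfrak{h}}(x)-d_{\mathfrak{h}}(y)\vert =d(x,\hat{\mathfrak{h}})+d(y,\hat{\mathfrak{h}})\le d(x,z)+d(z,y)=d(x,y),$$
while if $x$ and $y$ lie on the same side of $\hat{\mathfrak{h}}$ the same bound follows from the triangle inequality. In particular $f$ is Lipschitz, since $\psi$ is.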
\begin{prop}\label{proprete} The map $\overline{f}$ is proper. In particular the manifold $M_{\hat{\mathfrak{h}}}$ has at least two ends.
\end{prop}
{\it Proof.} In the following proof, we let $H=Stab_{\Gamma}(\hat{\mathfrak{h}})$. We pick a point $x_{0}$ in $\widetilde{M}$. We consider the map
$$F : H\backslash \Gamma \to \mathbb{R}$$
defined by $F(Hg)=f(g\cdot x_{0})$. It is enough to prove that $F$ is proper; this is easily seen to imply that $\overline{f}$ is proper. So let $(g_{n})$ be a sequence of elements of $\Gamma$ such that
\begin{equation}\label{borne}
\vert F(Hg_{n})\vert \le C,\end{equation}
for some constant $C$. We need to prove that the set $\{Hg_{n}\}$ is a finite subset of $H\backslash \Gamma$. But Equation~\eqref{borne} implies that the distance of $\varrho (g_{n})(\psi(x_{0}))$ to $\hat{\mathfrak{h}}$ is bounded by $C$. Put differently, this says that each of the hyperplanes $\varrho (g_{n})^{-1}(\hat{\mathfrak{h}})$ intersects the ball of radius $C$ centered at $\psi (x_{0})$. By our hypothesis of local finiteness of $X$, this implies that the family of hyperplanes $(\varrho (g_{n})^{-1}(\hat{\mathfrak{h}}))$ is {\it finite}. Hence there exist elements $x_{1}, \ldots , x_{r}$ in $\Gamma$ such that for all $n$ there exists $i$ such that $\varrho (g_{n})^{-1} (\hat{\mathfrak{h}} ) = \varrho (x_{i})(\hat{\mathfrak{h}})$. This implies $Hg_{n}=Hx_{i}^{-1}$.\hfill $\Box$
In what follows we will denote by $Ends(V)$ the space of ends of an open manifold $V$. We recall that $V\cup Ends(V)$ carries a natural topology, see for instance~\cite[Ch.6]{dk}. We now make use of the following classical theorem.
\begin{theorem}\label{tresjoli} Let $V$ be a complete Riemannian manifold of bounded geometry and satisfying a linear isoperimetric inequality. Let $\chi : Ends(V)\to \{-1,1\}$ be a continuous map. Then there exists a unique continuous map $h_{\chi} : V\cup Ends(V)\to [-1,1]$ which extends $\chi$, is harmonic on $V$, and has finite energy:
$$\int_{V}\vert \nabla h_{\chi}\vert^{2}<\infty.$$
\end{theorem}
Note that we will not need the precise definition of a Riemannian manifold of bounded geometry; the only important point for us is that a covering space of a closed Riemannian manifold is of bounded geometry. This theorem was proved by Kaimanovich and Woess~\cite{kw} and by Li and Tam~\cite{litam} independently. A simple, beautiful proof due to M. Ramachandran can be found in the paper by Kapovich~\cite[\S 9]{mkapovich}. In the case when $V$ is K\"ahler, the map $h_{\chi}$ is pluriharmonic. This follows from a standard integration by parts argument, which is valid here because $h_{\chi}$ has finite energy; see Lemma 3.1 in~\cite{li} or~\cite[\S 1.1.B]{khgromov}.
We now assume that $\hat{\mathfrak{h}}$ is stable for $\Gamma$ and apply the above to the manifold $V=M_{\hat{\mathfrak{h}}}$. The proper function $\overline{f}$ defines a partition of the space of ends of this manifold into two open sets: the set of ends of $\{\overline{f}\ge 0\}$ and the set of ends of $\{\overline{f}\le 0\}$. Let $\chi : Ends(V)\to \{-1,1\}$ be the function taking the value $1$ on the first open set and $-1$ on the second open set. Since $M_{\hat{\mathfrak{h}}}$ satisfies a linear isoperimetric inequality, we obtain:
\begin{prop}\label{tropcool} There exists a unique function
$$u_{\mathfrak{h}} : M_{\hat{\mathfrak{h}}} \to (-1,1)$$
which is pluriharmonic, of finite energy and satisfies $u_{\mathfrak{h}}(x)\to 1$ when $x\to \infty$ and $\overline{f}(x) >0$ and $u_{\mathfrak{h}}(x)\to -1$ when $x\to \infty$ and $\overline{f}(x) <0$.
\end{prop}
We denote by $p$ the projection from $\widetilde{M}$ to $M_{\hat{\mathfrak{h}}}$. For $t_{0}\in (0,1)$ we define: $$U_{\mathfrak{h},t_{0}} =\{x\in \widetilde{M}, u_{\mathfrak{h}}(p(x))>t_{0}\}.$$ We have:
\begin{lemma}\label{bhd}
If $t_{0}$ is close enough to $1$ the set $U_{\mathfrak{h},t_{0}}$ satisfies $\psi (U_{\mathfrak{h},t_{0}}) \subset \mathfrak{h}$.
\end{lemma}
{\it Proof.} By the continuity of $u_{\mathfrak{h}}$ on $M_{\hat{\mathfrak{h}}}\cup Ends(M_{\hat{\mathfrak{h}}})$, we have that if $t_{0}$ is close enough to $1$, one has the inclusion $\{u_{\mathfrak{h}}> t_{0}\}\subset \{\overline{f}\ge 0\}$. This implies the conclusion of the lemma.\hfill $\Box$
We will ultimately produce a fibration of the manifold $M_{\hat{\mathfrak{h}}}$ onto a Riemann surface. To achieve this, we will construct a second plurisubharmonic function defined on $M_{\hat{\mathfrak{h}}}$, which is not a function of $u_{\mathfrak{h}}$. We build this second map in the next section.
\subsection{A second plurisubharmonic map}\label{sphm}
We consider a facing triple of strongly separated halfspaces $\mathfrak{h}$, $\mathfrak{k}$, $\mathfrak{l}$ as given by Theorem~\ref{groupeslibres}. According to Corollary~\ref{allstable}, $\mathfrak{h}$, $\mathfrak{k}$ and $\mathfrak{l}$ are stable for $\Gamma$. Thus, we can apply the results of section~\ref{first} to any one of the three covering spaces $M_{\hat{\mathfrak{h}}}$, $M_{\hat{\mathfrak{k}}}$, $M_{\hat{\mathfrak{l}}}$. We recall that these covering spaces were defined by Equation~\eqref{notationrev}. We also make use of the various lemmas proved in section~\ref{essh}. In particular we consider the subgroup
$$A< Stab_{\Gamma}(\hat{\mathfrak{h}})$$
generated by $\Sigma=\{h\in Stab_{\Gamma}(\hat{\mathfrak{h}}), h(\mathfrak{k})\cap \mathfrak{k}\neq \emptyset \}$. If $a\in A$, the hyperplane $a(\hat{\mathfrak{k}})$ is stable for $\Gamma$, as $\hat{\mathfrak{k}}$ is. Thus, we can also apply the results of section~\ref{first} to the covering space $M_{a(\hat{\mathfrak{k}})}$. So we denote by
$$u_{\mathfrak{h}} : M_{\hat{\mathfrak{h}}}\to (-1,1)\;\;\;\;\; {\rm and} \;\;\;\;\; u_{a(\mathfrak{k})} : M_{a(\hat{\mathfrak{k}})} \to (-1,1)$$
the proper pluriharmonic maps of finite energy provided by Proposition~\ref{tropcool}.
For each $a\in A$, let $\widetilde{u_{a(\mathfrak{k})}}$ be the lift of $u_{a(\mathfrak{k})}$ to $\widetilde{M}$. Now define a function $\beta_{\mathfrak{k}} : \widetilde{M}\to \mathbb{R}$ by:
$$\beta_{\mathfrak{k}}(x)=\underset{a\in A}{{\rm sup}} \, \widetilde{u_{a(\mathfrak{k})}}(x).$$
Proposition~\ref{finitude} implies that the $A$-orbit of $\mathfrak{k}$ is finite, hence the supremum above is actually the supremum of a finite number of smooth pluriharmonic functions. Hence $\beta_{\mathfrak{k}}$ is a continuous plurisubharmonic function. We now fix some constant $C$ in $(0,1)$. Let $V_{\mathfrak{k},C}=\{\beta_{\mathfrak{k}}>C\}$.
\begin{lemma} \label{lemmeinterm}
\begin{enumerate}
\item The function $\beta_{\mathfrak{k}} : \widetilde{M}\to \mathbb{R}$ is $A$-invariant. In particular the open set $V_{\mathfrak{k},C}\subset \widetilde{M}$ is $A$-invariant.
\item For each $a\in A$, there exists $C_{a}\in(0,1)$ such that $\psi (\{\widetilde{u_{a(\mathfrak{k})}}>C_{a}\})\subset a(\mathfrak{k})$.
\item If $C$ is close enough to $1$, one has $\psi(V_{\mathfrak{k},C})\subset \cup_{a\in A} a(\mathfrak{k})$.\label{lemmeinterm:3}
\end{enumerate}
\end{lemma}
{\it Proof.} Note that by the uniqueness of the map $u_{a(\mathfrak{k})}$, we have:
$$\widetilde{u_{a(\mathfrak{k})}}=\widetilde{u_{\mathfrak{k}}}\circ a^{-1}.$$
So the function $\beta_{\mathfrak{k}}$ is also equal to
$$\underset{a\in A}{{\rm sup}}\, \widetilde{u_{\mathfrak{k}}}\circ a^{-1},$$
which is $A$-invariant. This proves the first point. The second point was already proved in Lemma~\ref{bhd} for the harmonic function $u_{\mathfrak{h}} : M_{\hat{\mathfrak{h}}}\to (-1,1)$ and the proof is similar here. As for the third point, if $A_{1}\subset A$ is a finite set such that $\{a(\mathfrak{k})\}_{a\in A}=\{a(\mathfrak{k})\}_{a\in A_{1}}$, the constant $C=\underset{a\in A_{1}}{{\rm max}} \, C_{a}$ has the desired property.\hfill $\Box$
We now assume that the constant $C$ satisfies the conclusion of the previous lemma. Note that the open set $V_{\mathfrak{k},C}$ naturally defines an open set $V^{\ast}_{\mathfrak{k},C}=V_{\mathfrak{k},C}/A$ inside $\widetilde{M}/A$. This open set actually embeds into the quotient of $\widetilde{M}$ by the bigger subgroup $Stab_{\Gamma}(\hat{\mathfrak{h}})$. Namely:
\begin{prop} The natural map $\pi : \widetilde{M}/A\to M_{\hat{\mathfrak{h}}}$ is injective on the closure of $V^{\ast}_{\mathfrak{k},C}$.
\end{prop}
{\it Proof.} We will check that if $h\in Stab_{\Gamma}(\hat{\mathfrak{h}})-A$ then $h(\overline{V_{\mathfrak{k},C}})\cap \overline{V_{\mathfrak{k},C}}$ is empty. Let us suppose by contradiction that this is not empty for some $h\in Stab_{\Gamma}(\hat{\mathfrak{h}})-A$. Let $x\in \overline{V_{\mathfrak{k},C}}$ be such that $h(x)\in \overline{V_{\mathfrak{k},C}}$. By Lemma~\ref{lemmeinterm}~\eqref{lemmeinterm:3}, there exist $a_{1}, a_{2}\in A$ such that $\psi(x)\in a_{1}(\mathfrak{k})$ and $\psi(h(x))\in a_{2}(\mathfrak{k})$. This implies that $h(a_{1}(\mathfrak{k}))\cap a_{2}(\mathfrak{k})\neq \emptyset$. By Lemma~\ref{separea}, $h$ must be in $A$, a contradiction.\hfill $\Box$
\medskip
We now define a function $\gamma : M_{\hat{\mathfrak{h}}} \to \mathbb{R}_{+}$ as follows. If $x\in \pi(V^{\ast}_{\mathfrak{k},C})$, define $\gamma (x)=\beta_{\mathfrak{k}}(y)-C$, where $y$ is a lift of $x$ inside $V_{\mathfrak{k},C}$. If $x\notin \pi (V^{\ast}_{\mathfrak{k},C})$, define $\gamma (x)=0$. We have:
\begin{lemma}
If $C$ is close enough to $1$, the function $\gamma$ is continuous and plurisubharmonic. Hence, $V^{\ast}_{\mathfrak{k},C}$ is a plurimassive set in the sense of~\cite[Def. 1.1]{nr2}.
\end{lemma}
{\it Proof.} As in the previous Proposition, we denote by $\pi$ the projection $\widetilde{M}/A\to M_{\hat{\mathfrak{h}}}$. Up to taking $C$ closer to $1$, we can always assume that there exists $C_{0}\in (0,C)$ such that the conclusion of the third point of Lemma~\ref{lemmeinterm} holds for $C_{0}$. We claim that $F:=\pi(\overline{V_{\mathfrak{k},C}}/A)$ is closed. We first conclude the proof of the lemma using this claim. We consider the two open sets
$$\pi(V_{\mathfrak{k},C_{0}}/A)\;\;\;\;\; {\rm and} \;\;\;\;\; M_{\hat{\mathfrak{h}}}-F.$$
They cover $M_{\hat{\mathfrak{h}}}$, hence it is enough to prove that $\gamma$ is continuous and plurisubharmonic on each of them. On $M_{\hat{\mathfrak{h}}}-F$, $\gamma$ is $0$ so there is nothing to prove. On $\pi(V_{\mathfrak{k},C_{0}}/A)$, the function $\gamma$ is constructed as follows: one considers the plurisubharmonic function ${\rm max}(\beta_{\mathfrak{k}}-C,0)$ on $\widetilde{M}$. It descends to a function $q : \widetilde{M}/A\to \mathbb{R}_{+}$. The restriction of $\gamma$ to $\pi(V_{\mathfrak{k},C_{0}}/A)$ is obtained by composing the inverse of the map $\pi : V_{\mathfrak{k},C_{0}}/A \to \pi(V_{\mathfrak{k},C_{0}}/A)$ with $q$. Hence it is continuous and plurisubharmonic.
We finally prove that $F:=\pi(\overline{V_{\mathfrak{k},C}}/A)$ is closed. It is enough to see that
$$\bigcup_{h\in Stab_{\Gamma}(\hat{\mathfrak{h}})}h(\overline{V_{\mathfrak{k},C}})$$
is closed. For this it is enough to check that there exists $\varepsilon >0$ such that for any $h\in Stab_{\Gamma}(\hat{\mathfrak{h}})-A$, the distance from $h(\overline{V_{\mathfrak{k},C}})$ to $\overline{V_{\mathfrak{k},C}}$ is greater than or equal to $\varepsilon$. Applying the map $\psi$, we see that it is enough to find a positive lower bound, independent of $h\in Stab_{\Gamma}(\hat{\mathfrak{h}})-A$, for the distance from $h(U)$ to $U$ in $X$, where $U=\cup_{a\in A}a(\mathfrak{k})$, as in section~\ref{essh}. But this follows from the fact that there is a uniform positive lower bound for the distance between two disjoint halfspaces in a ${\rm CAT}(0)$ cubical complex.\hfill $\Box$
We started this section considering a facing triple of strongly separated halfspaces $\mathfrak{h}$, $\mathfrak{k}$ and $\mathfrak{l}$. So far, we have only used $\mathfrak{h}$ and $\mathfrak{k}$. In the next proposition, we make use of the third halfspace $\mathfrak{l}$.
\begin{prop}\label{supercool} Assume that the level sets of $\widetilde{u_{\mathfrak{h}}}$ (the lift of $u_{\mathfrak{h}}$ to $\widetilde{M}$) are connected. Then, there exists a finite cover $M_{2} \to M_{\hat{\mathfrak{h}}}$ (possibly equal to $M_{\hat{\mathfrak{h}}}$) and a continuous plurisubharmonic function $\delta : M_{2}\to \mathbb{R}_{+}$ such that there exists a level set of the lift of $u_{\mathfrak{h}}$ to $M_{2}$ on which $\delta$ is not constant.
\end{prop}
{\it Proof.} Note that the function $u_{\mathfrak{h}}$ is surjective. Assume that the conclusion of the proposition is false when $M_{2}=M_{\hat{\mathfrak{h}}}$ and $\delta=\gamma$. Then there exists a function
$$\varphi : (-1,1) \to \mathbb{R}_{+}$$
such that $\gamma =\varphi \circ u_{\mathfrak{h}}$. We claim that the function $\varphi$ is continuous, convex, and vanishes on $[t_{0},1)$ for $t_{0}$ close enough to $1$. Let us prove these claims.
First we note that $\varphi$ is continuous in a neighborhood of every real number $t$ such that $u_{\mathfrak{h}}^{-1}(t)$ is not contained in the critical set of $u_{\mathfrak{h}}$. But every real number $t$ has this property according to Lemma~\ref{lieucritnd}. Hence $\varphi$ is continuous on $(-1,1)$. Note that this argument proves that for each $t\in (-1,1)$, one can find $q\in M_{\hat{\mathfrak{h}}}$ and local coordinates $(z_{1},\ldots , z_{n})$ centered at $q$ such that $\gamma(z)=\varphi(u_{\mathfrak{h}}(q)+{\rm Re}(z_{1}))$. The convexity of $\varphi$ then follows from~\cite[5.13]{demailly}. For the last claim, we pick a point $x\in \widetilde{M}$ such that $u_{\mathfrak{h}}(p(x))>t_{0}$. Here and as before, $p$ denotes the projection $\widetilde{M}\to M_{\hat{\mathfrak{h}}}$. If $t_{0}$ is close enough to $1$, Lemma~\ref{bhd} implies that $\psi (x)\in \mathfrak{h}$. Since $\mathfrak{h}$ and
$$\bigcup_{a\in A}a(\mathfrak{k})$$
are disjoint, Lemma~\ref{lemmeinterm} implies that $h(x)\notin V_{\mathfrak{k}, C}$ for any $h$ in $Stab_{\Gamma}(\hat{\mathfrak{h}})$. Hence $\gamma (p(x))=0$. This proves that $\varphi$ vanishes on $[t_{0},1)$.
These three properties of $\varphi$ imply that the level sets of $\varphi$ are connected. In fact the level $\varphi = c$ is a point for $c>0$ and is an interval of the form $[a_{0},1)$ for $c=0$. This implies, together with the hypothesis on the level sets of $\widetilde{u_{\mathfrak{h}}}$, that the level sets of $\gamma \circ p : \widetilde{M} \to \mathbb{R}_{+}$ are connected. But this implies $Stab_{\Gamma}(\hat{\mathfrak{h}})=A$. Now since the $A$-orbit of $\mathfrak{k}$ is finite, the group
$$H_{2}=Stab_{\Gamma}(\hat{\mathfrak{h}})\cap Stab_{\Gamma}(\hat{\mathfrak{k}})$$
is of finite index in $Stab_{\Gamma}(\hat{\mathfrak{h}})$. Let $M_{2}\to M_{\hat{\mathfrak{h}}}$ be the corresponding cover and $u_{2}$ be the lift of $u_{\mathfrak{h}}$ to $M_{2}$. Let $\delta$ be the lift of the function $u_{\mathfrak{k}}$ to $M_{2}$. We claim that $\delta$ satisfies the conclusion of the proposition.
If this is not the case, then $\delta$ is a function of $u_{2}$. As before one sees that $\delta =\varphi_{0} \circ u_{2}$ where $\varphi_{0}$ is continuous and convex. Actually, since $\delta$ is pluriharmonic and not only plurisubharmonic, $\varphi_{0}$ must be affine. So there exist real numbers $a$ and $b$ such that:
$$\delta = au_{2}+b.$$
Since $u_{2}$ and $\delta$ both take values in $(-1,1)$ and are onto, one sees that $b$ must be $0$ and that $a=\pm 1$. We now obtain a contradiction from this fact, making use of the halfspace $\mathfrak{l}$. Let $s_{n}$ be a sequence of points of $\widetilde{M}$ such that $d(\psi (s_{n}),\hat{\mathfrak{l}})$ goes to infinity and such that $\psi (s_{n})\in \mathfrak{l}$. Such a sequence exists because the action is essential. Since $d(\psi(s_{n}),\mathfrak{h})\ge d(\psi (s_{n}),\hat{\mathfrak{l}})$, we must have $\overline{f}(p(s_{n}))\to -\infty$, hence also $u_{\mathfrak{h}}(p(s_{n}))\to -1$.
In the next paragraph, we denote by $x\mapsto [x]$ the covering map $\widetilde{M}\to M_{2}$.
If $a=-1$, one sees that $\delta ([s_{n}])\to 1$, which implies that $\psi (s_{n})\in \mathfrak{k}$ for $n$ large enough. Hence $\psi (s_{n})\in \mathfrak{k}\cap \mathfrak{l}$. This is a contradiction since $\mathfrak{k}$ and $\mathfrak{l}$ are disjoint. If $a=1$ we argue in a similar way with the pair $(\mathfrak{h},\mathfrak{k})$. We take a sequence $q_{n}$ of points of $\widetilde{M}$ such that $\psi (q_{n})\in \mathfrak{h}$ and $d(\psi(q_{n}),\hat{\mathfrak{h}})\to \infty$. This implies $u_{2}([q_{n}])\to 1$. Since $\delta = u_{2}$, this implies that $\psi (q_{n})\in \mathfrak{k}$ for $n$ large enough. Since $\mathfrak{k}\cap \mathfrak{h}$ is empty, we get a contradiction again. This proves the proposition.\hfill $\Box$
\subsection{Producing fibrations}\label{pf}
We continue with the notation and hypotheses from section~\ref{sphm}. Our aim is now to prove the following:
\begin{prop}\label{onemore} The manifold $M_{\hat{\mathfrak{h}}}$ {\it fibers}: there exists a proper holomorphic map $M_{\hat{\mathfrak{h}}}\to \Sigma$ with connected fibers onto a Riemann surface $\Sigma$.
\end{prop}
\noindent {\it Proof.} By Proposition~\ref{filteredinuse}, we can assume that the level sets of the lift of the map
$$u_{\mathfrak{h}} : M_{\hat{\mathfrak{h}}}\to (-1,1)$$
to the universal cover are connected, otherwise the conclusion is already known. By Proposition~\ref{supercool}, we can first replace $M_{\hat{\mathfrak{h}}}$ by a finite cover $p : M_{2}\to M_{\hat{\mathfrak{h}}}$ such that there exists a function $\delta : M_{2} \to \mathbb{R}_{+}$ which is continuous, plurisubharmonic and not constant on the set $\{u_{\mathfrak{h}}\circ p =t\}$ for some real number $t$. Note that if this is true for some number $t$, one can always find a number $t'$, close to $t$, which has the same property and moreover satisfies that $t'$ is a regular value of $u_{\mathfrak{h}}\circ p$. Let $\mathscr{F}$ be the foliation of $M_{2}$ defined by the $(1,0)$ part of the differential of $u_{\mathfrak{h}}\circ p$. Note that $d(u_{\mathfrak{h}}\circ p)$ might have zeros; we refer to~\cite[p.55]{abckt} for the precise definition of $\mathscr{F}$. We now consider the manifold
$$X_{t'}=(u_{\mathfrak{h}}\circ p)^{-1}(t').$$
It is invariant under the foliation $\mathscr{F}$. Hence on $X_{t'}$ we have a real codimension $1$ foliation defined by the nonsingular closed $1$-form which is the restriction of $d^{\mathbb{C}}(u_{\mathfrak{h}}\circ p)$ to $X_{t'}$. Such a foliation has all its leaves closed or all its leaves dense; this is an elementary special case of the theory of Riemannian foliations. We must thus show that the restriction of $\mathscr{F}$ to $X_{t'}$ cannot have all its leaves dense. Let $q$ be a point where the restriction of $\delta$ to $X_{t'}$ reaches its maximum $m$. Let $L(q)$ be the leaf of $\mathscr{F}$ through $q$. The maximum principle implies that $\delta$ is constant on $L(q)$. Hence the closure of $L(q)$ is contained in the set $\delta =m$. Since $\delta$ is not constant on $X_{t'}$, $L(q)$ cannot be dense in $X_{t'}$, hence is closed. We have found a compact leaf of the foliation $\mathscr{F}$. This compact leaf projects to a compact leaf of the foliation $\mathscr{F}_{\hat{\mathfrak{h}}}$ defined by $du_{\mathfrak{h}}^{1,0}$ on $M_{\hat{\mathfrak{h}}}$. But it is now classical that this implies that the foliation $\mathscr{F}_{\hat{\mathfrak{h}}}$ is actually a fibration. See for instance~\cite[(7.4)]{carlsontoledo} or~\cite[\S 4.1]{delzantgromov}.\hfill $\Box$
Now we will apply the following result:
{\it Let $V$ be a closed K\"ahler manifold. Assume that $V$ has a covering space $V_{1}\to V$ which admits a proper holomorphic mapping to a Riemann surface, with connected fibers: $\pi_{1} : V_{1}\to \Sigma_{1}$. Then $\pi_{1}$ descends to a finite cover of $V$: there exists a finite cover $V_{2}$ of $V$ such that $V_{1}\to V$ decomposes as $V_{1}\to V_{2}\to V$, $V_{2}$ admits a holomorphic mapping $\pi_{2} : V_{2} \to \Sigma_{2}$ with connected fibers and there exists a holomorphic map $\Sigma_{1}\to \Sigma_{2}$ which makes the following diagram commutative:
\begin{displaymath}
\xymatrix{V_{1} \ar[r] \ar[d] & \Sigma_{1} \ar[d] \\
V_{2} \ar[r] & \Sigma_{2}. \\}
\end{displaymath} }
This fact is now well-known; we refer the reader to~\cite[\S 5.6]{delzantgromov} or~\cite[Prop. 4.1]{nr1} for a proof. Applying this result to $V=M$ and to the cover $V_{1}=M_{\hat{\mathfrak{h}}}$ we obtain a finite cover $M_{2}\to M$ and a fibration $\pi_{2} : M_{2}\to \Sigma$ onto a Riemann surface. By replacing $M_{2}$ by another finite cover, we can assume that the fundamental group of a smooth fiber of $\pi_{2}$ surjects onto the kernel of the map $(\pi_{2})_{\ast} : \pi_{1}(M_{2})\to \pi_{1}(\Sigma)$. Note that a smooth fiber of the fibration of $M_{\hat{\mathfrak{h}}}$ obtained in Proposition~\ref{onemore} projects onto a smooth fiber of $\pi_{2}$. This implies that the normal subgroup $$N:=Ker((\pi_{2})_{\ast} : \pi_{1}(M_{2})\to \pi_{1}(\Sigma))$$
is contained in the stabilizer of $\hat{\mathfrak{h}}$ inside $\pi_{1}(M_{2})$. In what follows, we write $\Gamma_{2}=\pi_{1}(M_{2})$. To conclude the proof of Theorem~\ref{fac-pv}, we only have to establish the next proposition.
\begin{prop} The normal subgroup $N$ acts as an elliptic subgroup of ${\rm Aut}(X)$, i.e. the fixed point set of $N$ in $X$ is nonempty.
\end{prop}
{\it Proof.} We know that $N$ is contained in the group $Stab_{\Gamma_{2}}(\hat{\mathfrak{h}})$. But since $N$ is normal in $\Gamma_{2}$, $N$ is contained in
$$Stab_{\Gamma_{2}}(\gamma(\hat{\mathfrak{h}}))$$
for all $\gamma$ in $\Gamma_{2}$. By Corollary~\ref{allstable}, we can pick $\gamma \in \Gamma_{2}$ such that $\hat{\mathfrak{h}}$ and $\gamma (\hat{\mathfrak{h}})$ are strongly separated. Since $N$ preserves $\hat{\mathfrak{h}}$ and $\gamma (\hat{\mathfrak{h}})$ it must preserve the projection of $\hat{\mathfrak{h}}$ onto $\gamma(\hat{\mathfrak{h}})$, which is a point according to Proposition~\ref{projunique}. Hence $N$ is elliptic.\hfill $\Box$
\section{Cubulable K\"ahler manifolds and groups}\label{ckmag}
\subsection{Cubulable K\"ahler groups}
We first recall a few definitions concerning finiteness properties of groups as well as a result by Bridson, Howie, Miller and Short~\cite{bhms2002}. These will be used in the proof of Theorem~\ref{fac-complet}. A group $G$ is of type ${\rm FP}_{n}$ if there is an exact sequence
$$P_{n}\to P_{n-1}\to \cdots \to P_{0}\to \mathbb{Z} \to 0$$
\noindent of $\mathbb{Z} G$-modules, where the $P_{i}$ are finitely generated and projective and where $\mathbb{Z}$ is considered as a trivial $\mathbb{Z} G$-module. It is of type ${\rm FP}_{\infty}$ if it is of type ${\rm FP}_{n}$ for all $n$. See~\cite[\S VIII.5]{browncoho} for more details on these notions. We simply mention that the fundamental group of a closed aspherical manifold is of type ${\rm FP}_{\infty}$.
It is proved in~\cite{bhms2002} that if $H_{1}, \ldots , H_{n}$ are either finitely generated free groups or surface groups and if $G$ is a subgroup of the direct product
$$H_{1}\times \cdots \times H_{n}$$
which is of type ${\rm FP}_{n}$, then $G$ is virtually isomorphic to a direct product of at most $n$ groups, each of which is either a surface group or a finitely generated free group. We also refer the reader to~\cite{bridsonhowie,bhms2009} for more general results. Note that the idea of applying the results from~\cite{bhms2002} to K\"ahler groups is not new. This possibility was already discussed in~\cite{bridsonhowie,bhms2009}, and put into use in~\cite{py}. We also mention here that there exist K\"ahler groups which are subgroups of direct products of surface groups but which are not of type ${\rm FP}_{\infty}$, see~\cite{dps,cli}.
We now prove Theorem~\ref{fac-complet}. So let $\Gamma$ and $X$ be as in the statement of the theorem. Let
$$X= X_{1}\times \cdots \times X_{r}$$
be the decomposition of $X$ into a product of irreducible ${\rm CAT}(0)$ cubical complexes. There is a finite index subgroup $\Gamma_{1}$ of $\Gamma$ which preserves this decomposition. Note that the action of $\Gamma_{1}$ on each of the $X_{i}$ is essential since the original action is essential on $X$. We will make use of the following two results from~\cite{cs}:
\begin{itemize}
\item The group $\Gamma_{1}$ contains an element $\gamma_{0}$ which acts as a rank $1$ isometry on each irreducible factor.
\item The group $\Gamma_{1}$ contains a copy of $\mathbb{Z}^{r}$.
\end{itemize}
See~\cite{cs}, Theorem C and Corollary D for these statements. We recall here that a rank $1$ isometry of a ${\rm CAT}(0)$ space is a hyperbolic isometry none of whose axes bounds a flat half-plane.
\begin{prop}\label{tuerparab} Let $i\in \{1, \ldots , r\}$. Exactly one of the following two cases occurs:
\begin{enumerate}
\item The action of $\Gamma_{1}$ preserves a geodesic line in $X_{i}$.
\item The action of $\Gamma_{1}$ has no invariant Euclidean flat in $X_{i}$ and fixes no point in the visual boundary of $X_{i}$.
\end{enumerate}
\end{prop}
{\it Proof.} Since $\Gamma_{1}$ contains a rank $1$ isometry, if $\Gamma_{1}$ preserves a Euclidean flat in $X_{i}$, this flat must be a geodesic line. According to Proposition 7.3 from~\cite{cs} (see also the proof of that proposition), if $\Gamma_{1}$ does not preserve a geodesic line in $X_{i}$, it does not have any fixed point in the visual boundary of $X_{i}$.\hfill $\Box$
Up to replacing $\Gamma_{1}$ by a finite index subgroup, we assume that whenever $\Gamma_{1}$ preserves a geodesic line $L_{i}$ in some $X_{i}$, it acts by translations on it. In this case, the translation group is discrete as follows from~\cite{bridson} for instance. Hence the action of $\Gamma_{1}$ on $L_{i}$ factors through a homomorphism to $\mathbb{Z}$.
We now continue the proof of Theorem~\ref{fac-complet}. We change the numbering of the factors $X_{i}$ so that for $1\le i \le k$, $\Gamma_{1}$ preserves a geodesic line in $X_{i}$ and acts by translations on it, whereas for $i>k$, it fixes no point in the visual boundary of $X_{i}$ and preserves no flat. Hence for $j>k$, the $\Gamma_{1}$-action on $X_{j}$ satisfies all the hypotheses of Theorem~\ref{fac-pv}. Using Theorem~\ref{fac-pv} we get a finite index subgroup $\Gamma_{2}<\Gamma_{1}$ and a homomorphism
\begin{equation}\label{ssttrr}
\psi : \Gamma_{2}\to \mathbb{Z}^{k}\times \pi_{1}(\Sigma_{1})\times \cdots \times \pi_{1}(\Sigma_{r-k})
\end{equation}
with the following properties:
\begin{enumerate}
\item The homomorphism $\Gamma_{2}\to \pi_{1}(\Sigma_{j})$ induced by $\psi$ is surjective for each $k+1\le j \le r$.
\item For each $i$, there exists a convex cobounded subset $Y_{i}\subset X_{i}$ on which the $\Gamma_{2}$-action factors through the $i$-th coordinate of the homomorphism $\Gamma_{2}\to \mathbb{Z}^{k}$ if $i\le k$ or through the homomorphism $\Gamma_{2} \to \pi_{1}(\Sigma_{i-k})$ if $i>k$.
\end{enumerate}
\noindent Since the action of $\Gamma_{2}$ is proper, the kernel $N$ of $\psi$ is finite. By replacing $\Gamma_{2}$ by a finite index subgroup $\Gamma_{3}$, we can assume that $N\cap \Gamma_{3}$ is central in $\Gamma_{3}$. We thus have a central extension:
\begin{displaymath}
\xymatrix{ \{0\} \ar[r] & N\cap \Gamma_{3} \ar[r] & \Gamma_{3} \ar[r] & \psi(\Gamma_{3}) \ar[r] & \{0\}. \\
}
\end{displaymath}
\begin{lemma}\label{fpfpfp} The group $\psi(\Gamma_{3})$ is of type ${\rm FP}_{\infty}$.
\end{lemma}
{\it Proof.} Let $Y\subset X$ be the fixed point set of $N$. This is a subcomplex of the first cubical subdivision of $X$. Since $\psi(\Gamma_{3})$ is torsion-free, it acts freely on $Y$. The quotient $Y/\psi(\Gamma_{3})$ is a finite complex, hence $\psi(\Gamma_{3})$ is of type ${\rm FL}$, in particular ${\rm FP}_{\infty}$. See~\cite[VIII.6]{browncoho} for the definition of the ${\rm FL}$ condition and its relation to the ${\rm FP}_{n}$ and ${\rm FP}_{\infty}$ conditions.\hfill $\Box$
Now the result of~\cite{bhms2002} implies that $\psi(\Gamma_{3})$ itself is isomorphic to a direct product of surface groups and finitely generated free groups. No non-Abelian free factor can appear, but since this is not needed for the next lemma, we postpone the explanation of this fact for a moment.
\begin{lemma} The group $\Gamma_{3}$ has a finite index subgroup $\Gamma_{4}$ which does not intersect $N$. In particular $\Gamma_{4}\simeq \psi(\Gamma_{4})$.
\end{lemma}
{\it Proof.} It is enough to prove that the central extension of $\psi (\Gamma_{3})$ by $N\cap \Gamma_{3}$ appearing above becomes trivial on a finite index subgroup of $\psi (\Gamma_{3})$. Being isomorphic to a direct product of surface groups and free groups, $\psi(\Gamma_{3})$ has torsion-free $H_{1}$. Hence the universal coefficient theorem implies that
$$H^{2}(\psi (\Gamma_{3}),N\cap \Gamma_{3})$$
is isomorphic to $Hom(H_{2}(\psi(\Gamma_{3}),\mathbb{Z}),N\cap \Gamma_{3})$. If $H<\psi(\Gamma_{3})$ is a subgroup of finite index such that every element in the image of the map $H_{2}(H,\mathbb{Z})\to H_{2}(\psi (\Gamma_{3}),\mathbb{Z})$ is divisible by a large enough integer $p$, the pull-back map $Hom(H_{2}(\psi(\Gamma_{3}),\mathbb{Z}),N\cap \Gamma_{3})\to Hom(H_{2}(H,\mathbb{Z}),N\cap \Gamma_{3})$ is trivial. Hence the desired central extension is trivial on $H$.\hfill $\Box$
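For the reader's convenience, the universal coefficient argument used above can be made explicit. Writing $Q=\psi(\Gamma_{3})$ and $A=N\cap \Gamma_{3}$ (viewed as a trivial $Q$-module), the universal coefficient theorem gives the exact sequence

```latex
0 \longrightarrow {\rm Ext}^{1}_{\mathbb{Z}}\big(H_{1}(Q,\mathbb{Z}),A\big)
\longrightarrow H^{2}(Q,A)
\longrightarrow Hom\big(H_{2}(Q,\mathbb{Z}),A\big)
\longrightarrow 0,
```

and the ${\rm Ext}$ term vanishes because $H_{1}(Q,\mathbb{Z})$ is torsion-free (hence free, being finitely generated), yielding the isomorphism used in the proof.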
As in Lemma~\ref{fpfpfp}, one proves that $\Gamma_{4}$ is of type ${\rm FP}_{\infty}$. Applying again the result of~\cite{bhms2002}, we get that $\Gamma_{4}$ is isomorphic to a direct product of surface groups, possibly with a free Abelian factor. To obtain the more precise statement of Theorem~\ref{fac-complet}, and to justify the fact that $\Gamma_{4}$ does not contain any free non-Abelian factor, we argue as follows. We will call a {\it direct factor} of the product
\begin{equation}\label{onemooore}
\mathbb{Z}^{k}\times \pi_{1}(\Sigma_{1})\times \cdots \times \pi_{1}(\Sigma_{r-k}).
\end{equation}
either one of the groups $\pi_{1}(\Sigma_{s})$ or one of the $\mathbb{Z}$ factors of $\mathbb{Z}^{k}$. The intersection of $\Gamma_{4}$ with each of the $r$ direct factors in~\eqref{onemooore} must be nontrivial because $\Gamma_{4}$ contains a copy of $\mathbb{Z}^{r}$. Indeed if one of these intersections were trivial, $\Gamma_{4}$ would embed into a direct product which does not contain $\mathbb{Z}^{r}$. We denote these intersections by $L_{1}, \ldots , L_{r}$, where $L_{i} <\mathbb{Z}$ for $1\le i \le k$ and where $L_{i}<\pi_{1}(\Sigma_{i-k})$ for $i\ge k+1$. The proof of~\cite[p. 101]{bhms2002} shows that for $i\ge k+1$, $L_{i}$ is finitely generated and of finite index inside $\pi_{1}(\Sigma_{i-k})$. This implies that $\Gamma_{4}$ contains a finite index subgroup $\Gamma_{\ast}$ which is the direct product of each of its intersections with the factors in~\eqref{onemooore}. The group $\Gamma_{\ast}$ now satisfies the conclusion of Theorem~\ref{fac-complet}.
\subsection{Cubulable K\"ahler manifolds}\label{sscm}
We now turn to the proof of Theorem~\ref{manifold}. Let $M$ be as in the statement of the theorem. Note that in particular, $M$ is aspherical. Applying Corollary~\ref{cckcc}, we see that a finite cover $M_{1}$ of $M$ has fundamental group isomorphic to a product of the form
\begin{equation}\label{eeeeeee}
\mathbb{Z}^{2l}\times \pi_{1}(S_{1})\times \cdots \times \pi_{1}(S_{m})
\end{equation}
where the $S_{i}$'s are closed surfaces of genus greater than $1$. We fix such an isomorphism and we denote by $\pi_{i}$ ($1\le i \le m$) the projection from $\pi_{1}(M_{1})$ onto $\pi_{1}(S_{i})$. From now on the proof will not make any further use of ${\rm CAT}(0)$ cubical complexes. We only use arguments from K\"ahler geometry. First, we will need the following classical result, see Theorem 5.14 in~\cite{catanese}:
\begin{theorem} Let $X$ be a K\"ahler manifold, $S$ a topological surface of genus $\geqslant 2$, and $\pi : \pi_1(X) \rightarrow \pi_1(S) $ a surjective homomorphism with finitely generated kernel. Then, there exists a complex structure on $S$ and a holomorphic map with connected fibers $p : X \rightarrow S$ such that the map
$$p_{\ast} : \pi_1(X) \rightarrow \pi_1(S)$$
induced by $p$ is equal to $\pi$.
\end{theorem}
Applying this theorem to the various $\pi_{i}$'s we obtain that the surfaces $S_{i}$ can be endowed with complex structures such that one has holomorphic maps $p_{i} : M_{1} \to S_{i}$ inducing the homomorphisms $\pi_{i}$ at the level of fundamental groups.
Let $A$ be the Albanese torus of $M_{1}$ and let $\alpha : M_{1} \to A$ be the Albanese map, which is well-defined up to translation. We also denote by $A_{i}$ the Albanese torus (or Jacobian) of the Riemann surface $S_{i}$ and by $\alpha_{i} : S_{i} \to A_{i}$ the corresponding map. By definition of the Albanese maps, for each $i$ there exists a holomorphic map $\varphi_{i} : A \to A_{i}$ which makes the following diagram commutative:
\begin{displaymath}
\xymatrix{M_{1} \ar[r]^{\alpha} \ar[d]^{p_{i}} & A \ar[d]^{\varphi_{i}} \\
S_{i} \ar[r]^{\alpha_{i}} & A_{i}. \\}
\end{displaymath}
We denote by $\varphi : A \to A_{1}\times \cdots \times A_{m}$ the product of the maps $\varphi_{i}$ and by $\beta : S_{1}\times \cdots \times S_{m}\to A_{1}\times \cdots \times A_{m}$ the product of the maps $\alpha_{i}$. Up to composing the maps $\alpha_{i}$'s with some translations, we can and do assume that $\varphi$ maps the origin of $A$ to the origin of $A_{1}\times \cdots \times A_{m}$, hence is a group homomorphism.
Let $Y$ be the following submanifold of $A$:
$$Y=\{y\in A,\ \varphi (y)\in Im (\beta)\}.$$
This is indeed a submanifold since $\beta$ is an embedding and $\varphi$ is a submersion. Now by construction the Albanese map $\alpha$ of $M_{1}$ can be written as:
$$\alpha = i\circ \Phi$$
where $i$ is the inclusion of $Y$ in $A$ and $\Phi: M_{1} \to Y$ is holomorphic. We now have:
\begin{lemma} The map $\Phi$ is a homotopy equivalence between $M_{1}$ and $Y$.
\end{lemma}
{\it Proof.} Let $B$ be the kernel of the map $\varphi$. The complex dimension of $B$ is equal to
$$\frac{1}{2}\left(b_{1}(M_{1})-\sum_{j=1}^{m}b_{1}(S_{j})\right)$$
which equals the number $l$ appearing in Equation~\eqref{eeeeeee}. One can find a $C^{\infty}$ diffeomorphism $\theta : A\to B \times A_{1}\times \cdots \times A_{m}$ such that $\varphi \circ \theta^{-1}$ is equal to the natural projection
$$ B \times A_{1}\times \cdots \times A_{m}\to A_{1}\times \cdots \times A_{m}.$$
This implies that the complex manifold $Y$ is $C^{\infty}$ diffeomorphic to $B \times S_{1}\times \cdots \times S_{m}$. In particular, $Y$ and $M_{1}$ have isomorphic fundamental groups. We finally prove that $\Phi$ induces an isomorphism on fundamental groups. It follows from our description of $Y$ that one can choose identifications of $\pi_{1}(M_{1})$ and $\pi_{1}(Y)$ with
$$\mathbb{Z}^{2l}\times \pi_{1}(S_{1})\times \cdots \times \pi_{1}(S_{m})$$
in such a way that $\Phi_{\ast}$ preserves each of the projections onto the groups $\pi_{1}(S_{j})$. This implies that $\Phi_{\ast}$ induces an isomorphism between the quotients of $\pi_{1}(M_{1})$ and $\pi_{1}(Y)$ by their respective centers. To conclude, it is enough to check that $\Phi_{\ast}$ induces an isomorphism between the centers of $\pi_{1}(M_{1})$ and $\pi_{1}(Y)$. But the composition of $\Phi_{\ast}$ with the projection from $\pi_{1}(Y)$ onto its abelianization coincides with the map $\alpha_{\ast} : \pi_{1}(M_{1})\to \pi_{1}(A)\simeq H_{1}(M_{1},\mathbb{Z})$. Since the center of $\pi_{1}(M_{1})$ injects into $H_{1}(M_{1},\mathbb{Z})$, the restriction of $\Phi_{\ast}$ to the center of $\pi_{1}(M_{1})$ must be injective. Now using the fact that the quotient of $H_{1}(M_{1},\mathbb{Z})$ by the image of the center of $\pi_{1}(M_{1})$ in $H_{1}(M_{1},\mathbb{Z})$ is torsion-free, one sees easily that $\Phi_{\ast}(Z(\pi_{1}(M_{1})))$ must be equal to the center of $\pi_{1}(Y)$. This concludes the proof that $\Phi_{\ast}$ is an isomorphism.
The manifold $M_{1}$ is aspherical by hypothesis. The manifold $Y$ is also aspherical since it is diffeomorphic to $B\times S_{1} \times \cdots \times S_{m}$. Since $\Phi$ induces an isomorphism on fundamental groups, it is a homotopy equivalence. This concludes the proof of the lemma.\hfill $\Box$
We now conclude the proof using the following fact due to Siu and contained in the proof of Theorem~8 from~\cite{siu}:
\begin{center} {\it Let $f : Z_{1}\to Z_{2}$ be a holomorphic map between two compact K\"ahler manifolds of dimension $n$. Assume that $f$ is of degree $1$ and that the induced map $H_{2n-2}(Z_{1},\mathbb{R})\to H_{2n-2}(Z_{2},\mathbb{R})$ is injective. Then $f$ is a holomorphic diffeomorphism. The conclusion holds in particular if $f$ is a homotopy equivalence.}
\end{center}
\noindent Applying Siu's result to the map $\Phi : M_{1} \to Y$ we obtain that $M_{1}$ and $Y$ are biholomorphic. This proves the first statement in Theorem~\ref{manifold}. When the original manifold $M$ is algebraic, an easy application of Poincar\'e's reducibility theorem~\cite[VI.8]{debarre} shows that a finite cover of $M_{1}$ is actually biholomorphic to a product of a torus with finitely many Riemann surfaces. This concludes the proof of Theorem~\ref{manifold}.
\section{Comments}\label{ques}
We discuss here some possible improvements to our results.
First, one would like to remove the hypothesis of local finiteness in Theorem~\ref{fac-pv}. We summarize at which points this hypothesis was used:
\begin{enumerate}
\item It was used for the first time in Proposition~\ref{finitude}. However in this place, we have seen that it can be replaced by the hypothesis that the group action under consideration has finite stabilizers. Note that Proposition~\ref{finitude} is used later in section~\ref{sphm} to prove that a certain function defined as a supremum of continuous plurisubharmonic functions is actually the supremum of a finite number of continuous functions, hence is continuous.
\item It is also used in the proof of Proposition~\ref{proprete} to show that the {\it signed distance function} to a hyperplane induces a proper function on a suitable covering space of the manifold under consideration.
\end{enumerate}
Second, one could try to remove the hypothesis that there is no fixed point in the visual boundary in Theorem~\ref{fac-pv}. There is a well-known strategy to achieve this, see~\cite[\S 2.H]{cif} as well as appendix B by Caprace in~\cite{cif}. If a group $G$ acts without fixed point on a ${\rm CAT}(0)$ cubical complex $X$ but with a fixed point in the visual boundary $\partial_{\infty}X$, one can construct another ${\rm CAT}(0)$ cubical complex $X_{1}$, of smaller dimension, on which $G$ acts and such that $X_{1}$ embeds in the Roller boundary of $X$. By applying this construction repeatedly, one can obtain a description of actions having a fixed point in the visual boundary. The reason why we cannot use this method here is that the passage from $X$ to $X_{1}$ does {\it not} preserve the local finiteness of the complex. We thank Pierre-Emmanuel Caprace for useful discussions concerning this point. Thus, one sees that removing the hypothesis of local finiteness from Theorem~\ref{fac-pv} should also allow one to describe {\it parabolic} actions of K\"ahler groups on ${\rm CAT}(0)$ cubical complexes. Note that parabolic actions of K\"ahler groups on trees are already understood~\cite{delzant2}.
\section{Introduction}
Recent work suggests that Loop Quantum Gravity (LQG) may be capable of resolving the singularities that are inevitable in classical general relativity. Because of the inherent difficulty in solving the complete system, the focus has been
on dimensionally reduced, mini-superspace models of quantum cosmology \cite{lqc} and spherically symmetric black hole spacetimes \cite{Ashtekar05, modesto06, boehmer07, pullin08}. As observed in \cite{Ashtekar05} and \cite{modesto06}, for example, singularity avoidance in these two cases is deeply connected because the interior Schwarzschild spacetime coincides with the homogeneous anisotropic Kantowski-Sachs cosmology. While significant progress has been made, there is as yet no clear evidence as to which theory of quantum gravity will ultimately be proven correct, LQG, string theory, or perhaps something else. Nor is there, to the best of our knowledge, a rigorous and unambiguous path from the full loop quantum gravity theory to the quantization techniques used in the mini-superspace models. One particularly fruitful technique that has been used recently to great effect \cite{boehmer07, pullin08} is a semi-classical polymerization that preserves aspects of the underlying discreteness of spacetime suggested by LQG but considers the limit in which quantum effects are vanishingly small. Different polymerizations can give qualitatively different regularized spacetimes, so that it is of great interest to examine more fully a wider class of models and methods.
In the following, we describe quantum corrections that arise from the semi-classical polymerization of the interior of generic black holes in a family of theories known collectively as generic 2-D dilaton gravity. Of prime importance for the present work is that this family includes spherically symmetric black holes in spacetime dimension three or higher. We investigate two different polymerization schemes, and show that the results differ qualitatively: in one case the resulting non-singular spacetime generically has only a single horizon while in the second there are multiple horizons.
In our considerations, the polymerization scale is taken to be a constant but we note that in the context of loop quantum cosmology, as well as in some black hole scenarios \cite{nelson}, a dynamical polymerization scale has been considered.
Our key result is the analytic solution of the semi-classical equations that is obtained in 4-D when only area is polymerized. This solution can be extended analytically to a complete non-singular spacetime with only a single horizon. This has the advantage over other candidates for loop quantum corrected black holes \cite{boehmer07, pullin08} of avoiding the problem of mass inflation \cite{mass_inflation} normally associated with inner horizons. The exterior has non-zero quantum stress energy but closely approximates the classical spacetime for macroscopic black holes. There are two interior regions, one in the past and one in the future. Both exhibit a bounce at a microscopic scale and then asymptote (one in the infinite past and the other in the infinite future) to a non-singular product Kantowski-Sachs \cite{kantowski-sachs} type cosmological spacetime containing an anisotropic fluid, with product topology of a spacelike $\mathbb{R}$ and an expanding 2+1 positive curvature FRW cosmology. The polymer dynamics thus drives the system into an asymptotic interior end-state that is not a small correction to the classical spacetime. In the limit that the polymerization scale goes to zero, the interior cosmological regions ``pinch off'' leaving behind the standard singular Schwarzschild interior. The complete, non-singular semi-classical spacetime is suggestive of past proposals for ``universe creation'' inside black holes \cite{frolov90}.
\section{Classical Theory}
Our formalism begins with the most general (up to point reparametrizations) $1+1$-dimensional, second order, diffeomorphism
invariant action that can be built from a 2-metric $g_{\mu \nu}$ and a scalar $\phi$ \cite{gru,dil1}:
\begin{equation} \label{eq:act} S[g,\phi ] = \frac{1}{2G}\int d^2x \sqrt{-g}\, \bigg( \phi R(g)
+ \frac{V(\phi)}{l^2}\bigg),
\end{equation}
where $l$ is a positive constant with
a dimension of length and $G$ is the dimensionless two-dimensional Newton's constant. This action provides a convenient representation of spherically symmetric black hole spacetimes in $D\equiv n+2$ dimensions. It obeys a generalized Birkhoff theorem \cite{dil1} with general solution:
\begin{equation} \label{eq:ds} ds^2 = -\big[ 2lGM-j(\phi )\big]^{-1}l^2 d\phi^2+\big[ 2lGM-j(\phi )\big]dx^2,
\end{equation}
where $j(\phi )$ satisfies $dj/d\phi=V(\phi )$. For our purposes, it is convenient to assume that $j(\phi)\rightarrow 0$ when \mbox{$\phi\rightarrow 0$}. The integration constant $M$ is the Arnowitt-Deser-Misner (ADM) mass, and we take $M>0$.
For monotonic functions $j(\phi )$ the solution contains precisely one Killing horizon \cite{dil1} at $\phi_\text{H}$, such that $j(\phi_\text{H}) = 2lGM$. The metric (\ref{eq:ds}) and dilaton can be related to a spherically symmetric metric in $D$ dimensions as follows:
\begin{equation}
ds^2_\text{phys}= \frac{ds^2}{j(\phi )} + r^2(\phi) d\Omega^2_{(D-2)},
\label{eq:physical_metric}
\end{equation}
where $d\Omega^2_{(D-2)}$ is the line element of the unit $(D-2)$-sphere. Eq. (\ref{eq:physical_metric}) corresponds in particular to the interior of the 4-D Schwarzschild solution with the identifications $2Gl^2=G^{\text{(4)}}$, $V=1/(2\sqrt{\phi})$ and $\phi=r^2/(4l^2)$.
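As an independent sanity check (our own illustration, not part of the original text), the 4-D identifications above can be verified symbolically: substituting $\phi=r^{2}/(4l^{2})$, $j=\sqrt{\phi}$ into (\ref{eq:ds}) and (\ref{eq:physical_metric}) should reproduce the Schwarzschild line element with $G^{(4)}=2Gl^{2}$ and $x$ playing the role of Schwarzschild time. A short sympy sketch:

```python
import sympy as sp

r, l, G, M = sp.symbols('r l G M', positive=True)
G4 = 2*G*l**2                         # 4-D Newton constant, G^(4) = 2 G l^2

phi = r**2/(4*l**2)                   # dilaton in terms of the areal radius
j = sp.sqrt(phi)                      # dj/dphi = V = 1/(2 sqrt(phi))
F = 2*l*G*M - j                       # metric function appearing in eq. (eq:ds)

# coefficients of ds^2_phys = ds^2/j + r^2 dOmega^2 after substituting phi(r)
g_rr = sp.simplify(-l**2*sp.diff(phi, r)**2/(F*j))
g_xx = sp.simplify(F/j)

schw = 1 - 2*G4*M/r                   # Schwarzschild factor
assert sp.simplify(g_rr - 1/schw) == 0
assert sp.simplify(g_xx + schw) == 0  # x plays the role of Schwarzschild time
```

In particular the Killing horizon condition $j(\phi_{\text{H}})=2lGM$ becomes $r_{\text{H}}=2G^{(4)}M$, as it should.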
In order to address the question of singularity resolution in the semi-classical polymerized theory, we restrict to homogeneous slices with metric parametrization
\begin{equation} \label{eq:adm}
ds^2 = e^{2\rho(t)}\big( -\sigma^2(t)dt^2+dx^2\big).
\end{equation}
After suppressing an irrelevant integration over the spatial coordinate, the resulting action is that of a parametrized system:
\begin{equation} I=\frac{1}{2G}\int dt\Big(\Pi_\rho\dot{\rho}+\Pi_\phi\dot{\phi} +\sigma \mathcal{G}\Big),
\label{eq:action}
\end{equation}
where a dot denotes a derivative with respect to the time coordinate $t$ and the (Hamiltonian) constraint is
\begin{equation}
\mathcal{G}=G\Pi_\rho \Pi_\phi +e^{2\rho} \frac{V}{2l^2G}\approx 0.
\label{eq:constraint}
\end{equation}
In spherically symmetric 4-D spacetime, this Hamiltonian can be converted to the LQG Hamiltonian of \cite{pullin08} by a simple point canonical transformation.
The simplicity of the Hamiltonian in (\ref{eq:constraint}) makes it rather straightforward to find analytic solutions for the components of the physical metric. These solutions depend on two parameters, the ADM mass $M$, and its canonical conjugate $P_M$. In the full (inhomogeneous) spherically symmetric theory the latter corresponds to the Schwarzschild time separation of spatial slices \cite{kuchar94}. In the present case, the arbitrariness of $P_M$ represents the residual invariance of the theory under rescalings of the Schwarzschild ``time'' coordinate $x$.
\section{Semi-Classical Polymer Approach}
In the polymer representation of quantum mechanics \cite{ash,halvorson} one effectively studies
the Hamiltonian dynamics on a discrete spatial lattice.
In many cases the full polymer theory is rather challenging to analyze
but fortunately one can get interesting results by investigating the
semi-classical limit of the theory, which corresponds formally to the limit in which
quantum effects are small ($\hbar\to 0$), but the polymerization scale $\mu$ stays finite.
This approximation is the basis for recent analyses of black hole interiors \cite{boehmer07,pullin08}. It can be derived by studying the action of the fully quantized operators on coherent states and expanding in the width of the states \cite{husain:semiclass,ding08}. The end result is to simply replace the classical momentum variable $p$ in the classical Hamiltonian function by $\sin (\mu p)/\mu$. After the replacement, one studies the \hbox{(semi-)classical} dynamics of the resulting polymer Hamiltonian by means of standard techniques. We assume in the following that this effective description is valid without further state-dependent corrections, at least for states that approach semi-classical behaviour at scales large compared to $\mu$.
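The substitution $p\mapsto \sin(\mu p)/\mu$ and its classical limit are easy to illustrate symbolically (a sketch of our own, not part of the original derivation): the polymerized momentum reduces to $p$ as $\mu\to 0$, with leading corrections of order $\mu^{2}$.

```python
import sympy as sp

p, mu = sp.symbols('p mu', positive=True)
poly_p = sp.sin(mu*p)/mu              # polymerized momentum

# the substitution is transparent in the classical limit mu -> 0
assert sp.limit(poly_p, mu, 0) == p

# leading polymer correction is O(mu^2)
expansion = sp.series(poly_p, mu, 0, 4).removeO()
assert sp.simplify(expansion - (p - mu**2*p**3/6)) == 0
```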
We first polymerize only the generalized area variable $\phi$. While this may seem {\it ad hoc}, it is perhaps not unreasonable to introduce a fundamental discreteness for the geometrical variable that corresponds to area in the spherically symmetric theory while leaving the coordinate dependent conformal mode of the metric continuous. This approach is motivated by (but not derived from) full LQG, where it is the area operator that is naturally discretised. For completeness we will subsequently illustrate the result of polymerizing both variables. Details of both polymerizations will be presented elsewhere \cite{AK:prep}. Note that we choose to work with a state-independent (constant) polymerization scale, despite the fact that in the context of loop quantum cosmology consistency with predictions requires a state-dependent discreteness scale~\cite{lqc}. Here we choose the simplest approach that produces reasonable semi-classical behaviour and leave the study of state-dependent polymerization scales for future research.
The partially polymerized Hamiltonian constraint is:
\begin{equation} \label{eq:polyH}\mathcal{G} = G \Pi_\rho\frac{\sin (\mu\Pi_\phi)}{\mu}
+ e^{2\rho}\frac{V(\phi)}{2l^2G}\approx 0.
\end{equation}
In this equation $\mu$ has a dimension of length, which means that $\phi$ has a discrete polymer structure with edge length of $\mu/l$. The essence of the singularity resolution mechanism is evident from the equation of motion for $\phi$:
\begin{eqnarray}
\frac{\dot{\phi}}{G\sigma}&=& -\Pi_\rho \cos(\mu\Pi_\phi).
\label{eq:phidot}
\end{eqnarray}
$\dot{\phi}$ now vanishes at two turning points: the ``classical'' turning point $\Pi_\rho=0$ and the semi-classical turning point $\cos(\mu\Pi_\phi)=0$. The former condition turns out to be satisfied at the horizon, as expected, while the latter first occurs at a microscopic scale proportional to $\mu$.
The solution to the corresponding H-J equation is:
\begin{equation} \label{eq:HJF} S=-\frac{\alpha}{4G}\, e^{2\rho}+\frac{l}{\mu}\int \arcsin
\bigg( \frac{\mu V}{\alpha lG}\bigg)\, d\phi +C,
\end{equation}
where $\alpha$ and $C$ are constants. For convenience, we shall take $\alpha >0$.
The expression for $\Pi_\phi$ now reads:
\begin{equation}
\label{eq:pi1}\Pi_\phi =\frac{1}{l}\frac{\partial S}{\partial \phi } =
\frac{1}{\mu}\arcsin \bigg( \frac{\mu V}{\alpha lG}\bigg).
\end{equation}
We concentrate henceforth on spherically symmetric gravity, for which $V\propto\phi^{-1/n}$. In this case $\phi$ has a minimum given by $\mu V(\phi_{\text{min}})=\alpha lG$. Note that $\phi_\text{min}$ is located at the roots of the cosine function, as expected from (\ref{eq:phidot}).
To find the solutions, we require:
\begin{equation} \label{eq:beta2} \frac{\partial S}{\partial\alpha}=-\beta,
\end{equation}
where $\beta$ is again a constant of motion that is conjugate to $\alpha$. The constants $\beta$ and $\alpha$ are related to the usual canonical pair $(M,P_M)$ by a simple canonical transformation. Eq. (\ref{eq:beta2}) yields a solution for $e^{2\rho}$ in terms of $\phi$ and the two constants $\alpha$ and $\beta$ that are determined by initial conditions. The solution is:
\begin{equation} \label{eq:rho soln} \frac{1}{4G}\, e^{2\rho}+\frac{l}{\mu} I^{(n)}(\phi ) = \beta ,
\end{equation}
where
\begin{eqnarray}
I^{(n)}&:=& \frac{\epsilon}{\alpha}\int\frac{V}{\sqrt{\left(\alpha lG/\mu\right)^2-V^2}}\,d\phi \, .
\label{eq:I}
\end{eqnarray}
In (\ref{eq:I}), the value of $\epsilon=\pm 1$ depends on the branch of $\mu\Pi_\phi$. The upper sign is valid in the branches where the cosine function is positive, which include the principal branch $(-\pi/2,\pi/2)$, whereas the lower sign is used elsewhere.
One now has a complete solution in terms of a single arbitrary function that must be fixed by specifying a time variable. It is illustrative to write the physical metric using $\phi$ as a coordinate, which corresponds to the area of the throat in the interior of the extended Schwarzschild solution of the unpolymerized theory. Suppressing the angular part, one obtains
\begin{equation} ds^2_\text{phys}=\frac{1}{j(\phi)}\Big(\frac{-4l^2d\phi^2}{\alpha^2
e^{2\rho}[1-(\mu V/(\alpha lG))^2]}+e^{2\rho}dx^2\Big),
\end{equation}
which shows that the solution has a horizon at $\phi=\phi_{\text{H}}$ for which $e^{2\rho}=0$.
Taking $\Pi_\phi$ as the time variable, one can deduce, generically, the following qualitative time evolution. One starts at the horizon $\phi=\phi_\text{H}$, with corresponding initial $\Pi_\phi^\text{H}$ given by (\ref{eq:pi1}). Without loss of generality, we can take $\mu\Pi_\phi^\text{H}$ to be in the principal branch of the arcsin function, and because $\phi$ is positive, it takes values in $(0,\pi/2)$. As $\mu\Pi_\phi$ increases, $\phi$ decreases until it reaches its minimum value at $\mu\Pi_\phi =\pi/2$. After that, $\phi$ increases as $\Pi_\phi$ increases. When $\mu\Pi_\phi$ is in the range $(\pi/2, \pi)$, $\epsilon$ in (\ref{eq:I}) necessarily changes sign, and one can verify that after the bounce $e^{2\rho}$ does not vanish again; instead the throat area expands to $\phi\to\infty$ in finite coordinate time $\Pi_\phi$. The expansion takes an infinite amount of proper time, so our semi-classical polymerization has produced a solution that avoids the singularity but does not oscillate. The time evolution of the physical conformal mode, $e^{2\rho}/j(\phi)$, is illustrated in Fig. \ref{fig:bounce}.
\begin{figure}[htb!]
\begin{center}
\includegraphics{4Dbounce2.eps}
\caption{The physical conformal mode in 4-D spacetime. The calculations use $\alpha=G^\text{(4)}=M=1$ and $\mu=0.1$.}
\label{fig:bounce}
\end{center}
\end{figure}
We now contrast the above behavior with that of the fully polymerized theory. The Hamiltonian constraint is:
\begin{equation} \label{eq:fullyH}\mathcal{G} = G \frac{\sin(\mu\Pi_\rho)}{\mu}\frac{\sin (\mu\Pi_\phi)}{\mu}
+ e^{2\rho}\frac{V}{2l^2G}\approx 0.
\end{equation}
Going through precisely the same Hamilton-Jacobi analysis as before, we are led to the following expression for the
conformal mode of the metric:
\begin{equation}
e^{2\rho}= \frac{2lG}{\alpha\mu}\sin\big( 2\mu\alpha\beta/l-2\alpha I^{(n)}(\phi)\big),
\label{eq:fullrho}
\end{equation}
where $I^{(n)}$ is given in Eq. (\ref{eq:I}). All other $\phi$-dependence is unchanged, so the net change from the partially polymerized case is that the conformal mode is now an oscillating function of $\phi$. There will now be inner horizons wherever the sine function vanishes, giving rise to a qualitatively different quantum corrected spacetime. In fact the number of horizons will vary depending on the relative magnitude of $M$ and the quantity $\mu /\alpha$ \cite{AK:prep}.
\section{Four-Dimensional Schwarzschild Black Hole}
We now write the 4-D partially polymerized metric in terms of the radius $r$ and the Schwarzschild ``time'' $x$:
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!ds^2_\text{phys}\!\!&=&\! -\frac{dr^2}{\Big(\frac{2MG^{(4)}}{r} - \epsilon\sqrt{1-\frac{k^2}{r^2}}\Big)\Big(1-\frac{k^2}{r^2}\Big)} \nonumber\\
& &+ \bigg(\!\frac{2MG^{(4)}}{r} - \epsilon\sqrt{1-\frac{k^2}{r^2}}\,\bigg)\!\bigg(\!\frac{2dx}{\alpha}\!\bigg)^{\!\!\!2} \!+\!r^2d\Omega^2.
\label{eq:four_metric}
\end{eqnarray}
In the above, $M \equiv\alpha^2 \beta/(2l)$ and $k\equiv \mu/(\alpha G)$. As per our earlier claim, $P_M=2l/\alpha$ completes the canonical transformation from the pair $(\alpha,\beta)$ to $(M,P_M)$, and hence the conjugate $P_M$ does indeed rescale the $x$ coordinate. These rescalings do affect the bounce radius $k$, which is a consequence of the fact that the introduction of the discrete scale has broken the scale invariance of the theory.
The metric (\ref{eq:four_metric}) has remarkable properties. There is a single bifurcative horizon at $r_\text{H} := \sqrt{(2MG^{(4)})^2+k^2}$ which exhibits a quantum correction due to the polymerization. The solution evolves from the horizon at $r_\text{H}$ to the minimum radius $k$ in finite proper time, and then expands to $r=\infty$ in infinite proper time. As the throat expands in the interior, the metric approaches:
\begin{equation}
ds^2_\text{phys} = -\frac{dr^2}{1+2MG^{(4)}/r}+\Big(1+\frac{2MG^{(4)}}{r}\Big)dx^2,
\label{eq:asympt_interior}
\end{equation}
where we have absorbed $2/\alpha$ into $x$ and suppressed the angles. This asymptotic interior solution does not obey the vacuum Einstein equations, but has a non-vanishing stress tensor with $T^{\,r}_r = T^{\,x}_x \propto -1/r^2$, independent of $k$. This corresponds to an anisotropic perfect fluid that was recently considered in a model of the Schwarzschild interior \cite{culetu}.
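As a consistency check, the horizon radius quoted above follows immediately from setting the $dx^2$ coefficient of (\ref{eq:four_metric}) to zero on the $\epsilon=+1$ branch:
\[
\frac{2MG^{(4)}}{r_\text{H}}=\sqrt{1-\frac{k^2}{r_\text{H}^2}}
\;\Longrightarrow\;
\big(2MG^{(4)}\big)^2=r_\text{H}^2-k^2
\;\Longrightarrow\;
r_\text{H}=\sqrt{\big(2MG^{(4)}\big)^2+k^2},
\]
which reduces to the classical value $2MG^{(4)}$ as $k\to 0$.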
It is possible to continue the metric (\ref{eq:four_metric}) analytically across the horizon to the exterior region. The validity of this extension is an open question given that our chosen foliation does not extend to the exterior, but the procedure seems natural in the present context and has been used before to construct a complete semi-classical black hole spacetime. In the present case, the resulting black hole exterior has interesting and physically reasonable properties: in particular, it is asymptotically flat and closely approximates a Schwarzschild black hole of mass $M$.
The fact that $r=k$ in (\ref{eq:four_metric}) is a coordinate singularity can be explicitly illustrated by the coordinate transformation $r=k\cosh(y)$ \cite{ftnote}. The resulting metric is regular at the bounce $y=0$ and has a horizon at $\sinh(y_\text{H})=2MG^{(4)}/k$. For large $y$, the metric describes the exterior of the black hole, while the asymptotic interior region corresponds to $y\to -\infty$. The conformal diagram of the complete spacetime is shown in Fig. \ref{fig:conformal}. It includes two exterior regions (I and I'), the black hole and the white hole interior regions (II and II'), and two ``quantum corrected'' interior regions (III and III'). The classical singularity is replaced by a bounce at $r=r_\text{min}$ and subsequent expansion to $r=\infty$.
The Ricci and Kretschmann scalars are everywhere non-singular and vanish rapidly in the exterior (large, positive $y$).
A calculation of the Einstein tensor reveals that while the solutions violate the classical energy conditions, the violations are of order $k^2/r^4$ and hence vanish far from the bounce radius $r=k$. This means in particular that the exterior is endowed with non-zero quantum stress energy that is vanishingly small for macroscopic black holes ($r_\text{H}\gg k$) so that the Schwarzschild solution is well approximated everywhere outside the horizon. This also leads naturally to a quantum corrected Hawking temperature of order $\mathcal{O}(k^2/M^3)$ as well as a logarithmic correction to the Bekenstein-Hawking entropy \cite{AK:prep}.
The interior spacetime on the other hand describes an expanding anisotropic cosmology with stress energy that approximately satisfies the energy conditions but does not vanish far from $k$, nor does it vanish in the limit that $k \to 0$. Instead, the asymptotic region ``pinches off'' in this limit at the curvature singularity at $r=0$, leaving behind the standard, complete but singular Schwarzschild spacetime and two disconnected, time-reversed copies of the (singular) cosmological spacetime.
\begin{figure}[htb!]
\begin{center}
\includegraphics{ConfDiag2.eps}
\caption{Conformal diagram of the partially polymerized Schwarzschild spacetime. }
\label{fig:conformal}
\end{center}
\end{figure}
\section{Conclusion}
We have presented an intriguing candidate for a complete, non-singular quantum corrected black hole spacetime. This spacetime was derived by the semi-classical polymerization of only the area in the interior of spherically symmetric black hole spacetimes. Our treatment neglects polymeric corrections to the conformal mode of the metric, but this somewhat speculative procedure is perhaps justified by the interesting results that emerge. The singularity is resolved at a bounce radius determined by the polymerization scale and the exterior black hole spacetime has small, but non-zero quantum-energy. Remarkably, the solution in the interior does not oscillate, but instead re-expands indefinitely to a Kantowski-Sachs spacetime with anisotropic fluid stress-energy that is non-vanishing in the limit that the polymerization scale vanishes. The generation via polymerization of an interior cosmology is reminiscent of earlier work that explored universe creation in black hole interiors \cite{frolov90}.
One may also note that the solution does not reduce to Minkowski space in the limit that the Schwarzschild mass $M$ goes to zero. In fact, Eq. (\ref{eq:four_metric}) shows that there is still a horizon in this limit, located at the bounce radius $r=r_\text{min}$. While it is tempting to speculate about quantum remnants, it must be remembered that the semi-classical approximation employed here will likely break down for microscopic black holes. This is certainly worthy of further investigation.
{\bf Note Added:} After this paper was completed, a paper by Modesto \cite{modesto08} appeared which presents an interesting and thorough analysis of the complete LQG corrected, multiple horizon 4-D black hole spacetimes that emerge from a generalization of the procedure in \cite{pullin08}.
\section*{Acknowledgements}
We thank J. Ziprick, R. Daghigh, J. Gegenberg, J. Louko, the theory group at CECS and H. Maeda in particular for useful discussions. GK acknowledges the kind hospitality of the University of Nottingham, the University of New Brunswick and CECS where parts of this work were carried out. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction}
A Brownian particle moving in a periodic potential and subjected
to a spatially non-uniform temperature profile gives rise
to a net current, acting like a Brownian motor. This device
is autonomous, since it is driven entirely by thermal fluctuations.
Several properties of such a Brownian motor, such as the current,
heat flow and thermodynamic efficiency, have been studied.
Another important performance characteristic of such a Brownian
device is the transport coherence as measured by the Peclet
number, which is the ratio of the mean velocity times the
period of the substrate potential to the effective diffusion
coefficient of the motor. It is desirable to design motors
which produce the maximum velocity with the minimum dispersion,
i.e., a small diffusion coefficient.
While the net current of such a Brownian ratchet was derived
for an overdamped system in the works of Landauer, Van Kampen and
B{\"u}ttiker, the determination of the effective diffusion coefficient
remained a challenging task until the early twenty-first century, when
Reimann {\it{et al.}} first determined the effective diffusion coefficient
for a Brownian particle in a tilted periodic potential.
In the past decade several studies have appeared regarding the
coherent transport of Brownian motors, most of them restricted to a
uniform temperature. In this work we obtain exact expressions for the current
and the effective diffusion coefficient, and verify them numerically. In the low
temperature regime, we determine the transition rates and, from these, the
current and the effective diffusion coefficient.
\section{System}
The potential and temperature profiles are, respectively,
$$
U(x) =
\begin{cases}
\frac{U_{0}x}{\alpha L}, & \text{for } 0 \leq x < \alpha L \\
\newline\\
\frac{U_{0}(L-x)}{(1-\alpha)L} & \text{for } \alpha L \leq x < L \\
\end{cases}
$$
$$
T(x) =
\begin{cases}
T_H, & \text{for } 0 \leq x < \alpha L \\
\newline\\
T_C & \text{for } \alpha L \leq x < L \\
\end{cases}
$$
Both the potential and the temperature profiles are periodic, i.e. $U(x+L)=U(x)$ and $T(x+L)=T(x)$.
\begin{figure}[tbh]
\includegraphics[width=3.0in]{fig1drawing.eps}
\caption{\label{fig:profile}(Color online) Schematic of the piecewise linear potential alternately subjected to hot and cold baths. }
\end{figure}
$\alpha$ is the potential asymmetry parameter, $U_{0}$ is the barrier height and $L$ is the spatial period of the potential. Without loss of generality we consider $U_{0}=1$ and $L=1$.
The Langevin equation used to study the motion of the Brownian particle is
given by,
\begin{equation}
m\ddot{x}=-\gamma\dot{x}-U^\prime(x)+\sqrt{2k_{B}T(x)\gamma}\xi(t)
\end{equation}
We set $k_{B}=1$ and $\gamma=1$.
\begin{figure}
\includegraphics[width=3.0in]{figpot.eps}
\caption{\label{fig:potential}(Color online) $U(x)$, $\psi(x)$, and $\phi(x)$.}
\end{figure}
In the overdamped limit, we ignore the inertial term. However, for a position-dependent temperature an additional drift term must be added, as pointed out in earlier works. In the Stratonovich interpretation, the overdamped Langevin equation is given by,
\begin{equation}
\dot{x}=-U^\prime(x) -\frac{1}{2}\frac{dT(x)}{dx} + \sqrt{2T(x)\gamma}\xi(t)
\end{equation}
Here, $<\xi(t)>=0$ and $<\xi(t)\xi(t^\prime)>=\delta(t-t^\prime)$, and, as before,
we set $\gamma=1$.
The Fokker-Planck equation corresponding to this overdamped Langevin equation is given by,
\begin{equation}
\frac{\partial P(x,t)}{\partial t}=\frac{\partial }{\partial x}\left[U^\prime(x)P(x,t)\right]+
\frac{\partial^{2}}{\partial x^{2}}\left[T(x) P(x,t)\right]
\end{equation}
The current is given by,
\begin{equation}
<\dot{x}>=\frac{<x(t_f)-x(t_s)>}{t_f - t_s}
\end{equation}
where $t_s \gg 1$ is the time taken to reach the steady state and $t_f$
is the final time up to which the simulations are carried out.
The effective diffusion coefficient is computed as per the following relation:
\begin{equation}
D_{eff}=\frac{<(x(t_f)-x(t_s))^{2}>-<x(t_f)-x(t_s)>^2}{2(t_f-t_s)}
\end{equation}
Our numerical calculations are carried out using the stochastic
Euler-Maruyama algorithm.
The analytical calculations were carried out using the following formulas
provided in Ref.~\cite{lindner}.
\begin{equation}
<\dot{x}>=L\frac{1-\exp(\psi(L))}{\int_{0}^{L} dx I_+(x)/g(x)}
\end{equation}
where, $g(x)=\sqrt{T(x)}$ and
\begin{equation}
I_+(x)=\exp(-\psi(x))\int_{x}^{x+L} dy \exp(\psi(y))/g(y)
\end{equation}
The analytical expression for the effective diffusion coefficient is,
\begin{equation}
D_{eff}=(L^{2})\frac{\int_{0}^{L} dx I_+(x)^{2}I_-(x)/g(x)}{[\int_{0}^{L} dx I_+(x)/g(x)]^{3}}
\end{equation}
where,
\begin{equation}
I_-(x)=\exp(\psi(x))\int_{x-L}^{x} dy \exp(-\psi(y))/g(y)
\end{equation}
We can also determine the Peclet number, which quantifies the coherence of transport. It is
given by,
\begin{equation}
Pe=L <\dot{x}>/D_{eff}
\end{equation}
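For concreteness, the quadratures in Eqs.~(6)--(9) can be evaluated in a few lines of code. The sketch below is an illustration only (it is not the code used for our figures): it builds the generalized potential $\psi(x)$ from its closed form for the sawtooth potential with piecewise-constant temperature, derived in the next section, and uses a simple trapezoidal rule.

```python
import math

U0, L = 1.0, 1.0  # barrier height and spatial period, as in the text

def psi_factory(alpha, TH, TC):
    """Closed-form generalized potential psi(x) for the sawtooth potential
    with piecewise-constant temperature.  psi gains a fixed bias
    U0*(1/TH - 1/TC) per period, plus (1/2)*log jumps where T(x) jumps."""
    bias = U0 / TH - U0 / TC
    def psi(x):
        n = math.floor(x / L)
        xm = x - n * L
        if xm < alpha:
            val = U0 * xm / (alpha * TH)
        else:
            val = (U0 / TH - U0 * (xm - alpha) / ((1.0 - alpha) * TC)
                   + 0.5 * math.log(TC / TH))
        return val + n * bias
    return psi

def trapz(f, a, b, n=600):
    """Plain trapezoidal rule (adequate for a consistency check)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def transport(alpha, TH, TC):
    """Current, effective diffusion coefficient and Peclet number
    from the quadrature formulas quoted above."""
    psi = psi_factory(alpha, TH, TC)
    g = lambda x: math.sqrt(TH if (x % L) < alpha else TC)
    Ip = lambda x: math.exp(-psi(x)) * trapz(lambda y: math.exp(psi(y)) / g(y), x, x + L)
    Im = lambda x: math.exp(psi(x)) * trapz(lambda y: math.exp(-psi(y)) / g(y), x - L, x)
    Id = trapz(lambda x: Ip(x) / g(x), 0.0, L, n=120)
    num = trapz(lambda x: Ip(x) ** 2 * Im(x) / g(x), 0.0, L, n=120)
    v = L * (1.0 - math.exp(psi(L))) / Id
    Deff = L ** 2 * num / Id ** 3
    return v, Deff, L * v / Deff
```

For $T_H=T_C$ the bias $\psi(L)-\psi(0)$ vanishes and the routine returns zero current exactly; for $T_H>T_C$ it yields a positive current, consistent with the sign of the bias.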
We also carried out numerical simulations to test the validity of our analytical
results using the Euler-Maruyama algorithm. The time step chosen was $h=0.001$ and
the number of realizations was $5000$.
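Such a simulation is simple to set up. One can check that for the Stratonovich equation (2) the noise-induced drift exactly cancels the $-T^\prime(x)/2$ term, so the corresponding It\^o equation is simply $dx=-U^\prime(x)\,dt+\sqrt{2T(x)}\,dW_t$, which a standard Euler-Maruyama step integrates directly. The sketch below (illustrative parameters, much smaller than those quoted above) estimates the drift and effective diffusion coefficient from an ensemble of trajectories:

```python
import math, random

# illustrative parameters; uniform temperature serves as a control case
U0, L, alpha = 1.0, 1.0, 0.5
TH, TC = 0.5, 0.5

def Uprime(x):
    """Slope of the sawtooth potential."""
    return U0 / (alpha * L) if (x % L) < alpha else -U0 / ((1.0 - alpha) * L)

def temperature(x):
    return TH if (x % L) < alpha else TC

def simulate(n_part=300, h=0.005, t_s=5.0, t_f=25.0, seed=1):
    """Euler-Maruyama for the Ito form dx = -U'(x) dt + sqrt(2 T(x)) dW;
    returns the drift and diffusion estimators of Eqs. (4)-(5)."""
    rng = random.Random(seed)
    steps_s, steps_f = int(t_s / h), int(t_f / h)
    xs, xf = [], []
    for _ in range(n_part):
        x = 0.0
        for step in range(steps_f):
            if step == steps_s:
                xs.append(x)          # record position at the burn-in time
            x += -Uprime(x) * h + math.sqrt(2.0 * temperature(x) * h) * rng.gauss(0.0, 1.0)
        xf.append(x)
    d = [b - a for a, b in zip(xs, xf)]
    mean = sum(d) / len(d)
    var = sum((di - mean) ** 2 for di in d) / len(d)
    return mean / (t_f - t_s), var / (2.0 * (t_f - t_s))
```

With the uniform-temperature control values above the drift vanishes within statistical error, while the measured $D_{eff}$ lies below the free value $T$; setting $T_H\neq T_C$ in the same routine produces a non-zero drift.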
\section{Exact Expressions}
The first step in solving this model is to recognize that the
non-uniform temperature breaks the spatial symmetry and simultaneously produces
a non-equilibrium condition, the minimal ingredients needed to
generate directed motion. To break the left-right symmetry, the temperature profile
must be out of phase with the potential. Due to the spatial dependence of the temperature
profile the noise term is multiplicative and Brownian particles subject
to such a temperature profile move under the influence of the generalized
potential given by,
\begin{equation}
\psi(x)=\int_{0}^{x} dx^\prime [U^{\prime}(x^{\prime})+(1/2)T^{\prime}(x^{\prime})]/T(x^{\prime})
\end{equation}
The condition to achieve directed transport ($ <\dot{x}>\neq 0$) is for this potential to have an
effective bias such that $\psi(L=1)-\psi(0) \neq 0$.
For the piecewise linear potential and piecewise constant temperature profile,
it is simple to calculate $\psi(x)$, which is given by,
\begin{widetext}
$$
\psi(x) =
\begin{cases}
\frac{U_0}{T_C}+\frac{U_{0}(x+1-\alpha)}{\alpha T_H}, & \text{for } -1 \leq x < -1+\alpha \\
\newline\\
\frac{-U_{0}x}{(1-\alpha)T_C} +\frac{1}{2}\log(\frac{T_C}{T_H}) & \text{for } -1+\alpha \leq x < 0 \\
\newline \\
\frac{U_{0}x}{\alpha T_H} & \text{for } 0 \leq x <\alpha \\
\newline \\
\frac{U_0}{T_H}-\frac{U_{0}(x-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log(\frac{T_C}{T_H}) & \text{for } \alpha \leq x < 1 \\
\newline \\
\frac{U_0}{T_H}-\frac{U_0}{T_C}+\frac{U_0(x-1)}{\alpha T_H} & \text{for } 1 \leq x < 1+\alpha \\
\newline \\
\frac{2U_0}{T_H}-\frac{U_0}{T_C}-\frac{U_{0}(x-1-\alpha)}{(1-\alpha)T_C}+\frac{1}{2}\log(\frac{T_C}{T_H}) & \text{for } 1+\alpha \leq x < 2 \\
\end{cases}
$$
\end{widetext}
Using a suitable transformation of variables, one can convert the overdamped Langevin equation with multiplicative noise into one with additive noise.
The required transformation is given by,
\begin{equation}
y(x)=\int_{0}^{x} \,dz/\sqrt{T(z)}
\end{equation}
and the corresponding Langevin equation is given by,
\begin{equation}
\dot{y}=\dot{x}/\sqrt{T(x)}=-\frac{d\phi(y)}{dy}+\sqrt{2}\xi(t)
\end{equation}
where,
\begin{equation}
\phi(y)=\int_{0}^{y} \,dy^{*} \frac{U^{\prime}[x(y^{*})]+(1/2)T^{\prime}[x(y^{*})]}{\sqrt{T[x(y^{*})]}}
\end{equation}
and $\phi(y)=\psi[x(y)]\,.$
For our potential and temperature profiles, the relation between the original and transformed coordinates is given by,
$$
y =
\begin{cases}
\frac{\alpha-1}{\sqrt{T_C}}-\frac{\alpha-x-1}{\sqrt{T_H}}, & \text{for } -1 \leq x < -1+\alpha \\
\newline\\
\frac{x}{\sqrt{T_C}}& \text{for } -1+\alpha \leq x < 0 \\
\newline \\
\frac{x}{\sqrt{T_H}} & \text{for } 0 \leq x <\alpha \\
\newline \\
\frac{\alpha}{\sqrt{T_H}}+\frac{x-\alpha}{\sqrt{T_{C}}} & \text{for } \alpha \leq x < 1 \\
\newline \\
\frac{\alpha}{\sqrt{T_H}}+\frac{1-\alpha}{\sqrt{T_C}}+\frac{(x-1)}{\sqrt{T_H}} & \text{for } 1 \leq x < 1+\alpha \\
\newline \\
\frac{2\alpha}{\sqrt{T_H}}+\frac{1-\alpha}{\sqrt{T_C}}+\frac{(x-1-\alpha)}{\sqrt{T_C}} & \text{for } 1+\alpha \leq x < 2 \\
\end{cases}
$$
Finally, the effective potential in the transformed coordinates is given by,
\begin{widetext}
$$
\phi(y) =
\begin{cases}
\frac{U_0}{T_C}+\frac{U_{0}(y+[(1-\alpha)/\sqrt{T_C}])}{\alpha \sqrt{T_H}}, & \text{for } \frac{\alpha-1}{\sqrt{T_C}}-\frac{\alpha}{\sqrt{T_H}} \leq y < \frac{\alpha-1}{\sqrt{T_C}} \\
\newline\\
\frac{-U_{0}y}{(1-\alpha)\sqrt{T_C}} +\frac{1}{2}\log(\frac{T_C}{T_H}) & \text{for } \frac{-1+\alpha}{\sqrt{T_C}} \leq y < 0 \\
\newline \\
\frac{U_{0}y}{\alpha \sqrt{T_H}} & \text{for } 0 \leq y < \frac{\alpha}{\sqrt{T_H}} \\
\newline \\
\frac{U_0}{T_H}-\frac{U_{0}(y-\alpha/\sqrt{T_H})}{(1-\alpha)\sqrt{T_{C}}}+\frac{1}{2}\log(\frac{T_C}{T_H}) & \text{for } \frac{\alpha}{\sqrt{T_H}} \leq y < \frac{\alpha}{\sqrt{T_H}} + \frac{1-\alpha}{\sqrt{T_C}} \\
\newline \\
-\frac{U_0}{T_C}+\frac{U_0(y+(\alpha-1)/\sqrt{T_C})}{\alpha \sqrt{T_H}} & \text{for } \frac{\alpha}{\sqrt{T_H}} + \frac{1-\alpha}{\sqrt{T_C}} \leq y < \frac{2\alpha}{\sqrt{T_H}} + \frac{1-\alpha}{\sqrt{T_C}} \\
\newline \\
\frac{2U_0}{T_H}-\frac{U_0}{T_C}-\frac{U_{0}\big(y-2\alpha/\sqrt{T_H}-(1-\alpha)/\sqrt{T_C}\big)}{(1-\alpha)\sqrt{T_C}}+\frac{1}{2}\log(\frac{T_C}{T_H}) & \text{for } \frac{2\alpha}{\sqrt{T_H}} + \frac{1-\alpha}{\sqrt{T_C}} \leq y < \frac{2\alpha}{\sqrt{T_H}} + \frac{2(1-\alpha)}{\sqrt{T_C}} \\
\end{cases}
$$
\end{widetext}
The period of $\phi(y)$ is $L_{y}=\frac{\alpha}{\sqrt{T_H}}+\frac{1-\alpha}{\sqrt{T_C}}$.
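As a cross-check of the two tables, the identity $\phi(y(x))=\psi(x)$ can be verified numerically on the principal period. The sketch below implements $\psi$, the map $y(x)$, and $\phi$ independently (helper names are ours) and confirms that they agree away from the temperature discontinuity at $x=\alpha$:

```python
import math

U0 = 1.0

def check_phi_psi(alpha, TH, TC, npts=200):
    """Max deviation of phi(y(x)) from psi(x) over the principal period,
    skipping the temperature discontinuity at x = alpha."""
    sH, sC = math.sqrt(TH), math.sqrt(TC)
    def psi(x):                       # generalized potential, 0 <= x < 1
        if x < alpha:
            return U0 * x / (alpha * TH)
        return (U0 / TH - U0 * (x - alpha) / ((1.0 - alpha) * TC)
                + 0.5 * math.log(TC / TH))
    def ymap(x):                      # y(x) = int_0^x dz / sqrt(T(z))
        if x < alpha:
            return x / sH
        return alpha / sH + (x - alpha) / sC
    def phi(y):                       # effective potential in y
        if y < alpha / sH:
            return U0 * y / (alpha * sH)
        return (U0 / TH - U0 * (y - alpha / sH) / ((1.0 - alpha) * sC)
                + 0.5 * math.log(TC / TH))
    worst = 0.0
    for i in range(1, npts):
        x = i / npts
        if abs(x - alpha) < 1e-9:
            continue                  # phi and psi both jump here
        worst = max(worst, abs(phi(ymap(x)) - psi(x)))
    return worst
```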
The effective diffusion coefficient computed from the transformed dynamics with additive noise
is given by,
\begin{equation}
D_{eff,y}=\frac{\int_{0}^{L_y} \,dx [I_-(x)]^{2}I_+(x)/L_y}{[\int_{0}^{L_y} \,dx I_-(x)/L_y]^{3}}
\end{equation}
and
\begin{equation}
I_\pm (x)=\mp e^{\pm \phi(x)}\int_{x}^{x \mp L_y} \,dy \, e^{\mp \phi(y)}
\end{equation}
The velocity in the transformed coordinates is given by,
\begin{equation}
v_{y}=\frac{1-e^{\phi(L_y)}}{[\int_{0}^{L_y} \,dx \, I_-(x)/L_y]}
\end{equation}
The relation between the effective diffusion coefficient and the particle current
in the original and transformed coordinates is given by,
\begin{equation}
D_{eff}=\frac{D_{eff,y}}{{L_{y}}^{2}} , \,\,\, <\dot{x}>=\frac{v_{y}}{L_{y}}
\end{equation}
We now calculate the effective diffusion coefficient using Eq.~(8).
The integral in the denominator is given by,
\begin{widetext}
\begin{equation}
I_d=\int_{0}^{\alpha} \,dx \frac{e^{-\psi(x)}}{\sqrt{T_H}} \int_{x}^{x+1} \,dy \frac{e^{\psi(y)}}{\sqrt{T(y)}} + \int_{\alpha}^{1} \,dx \frac{e^{-\psi(x)}}{\sqrt{T_C}} \int_{x}^{x+1} \,dy \frac{e^{\psi(y)}}{\sqrt{T(y)}}= A+B
\end{equation}
\end{widetext}
such that
\begin{equation}
A=\int_{0}^{\alpha} \,dx \frac{e^{-\psi(x)}}{\sqrt{T_H}} \int_{x}^{x+1} \,dy e^{\psi(y)}/\sqrt{T(y)}
\end{equation}
and,
\begin{equation}
B=\int_{\alpha}^{1} \,dx \frac{e^{-\psi(x)}}{\sqrt{T_C}} \int_{x}^{x+1} \,dy e^{\psi(y)}/\sqrt{T(y)}
\end{equation}
We find that,
\begin{widetext}
\begin{equation}
A=\int_{0}^{\alpha} \,dx \frac{e^{-\frac{U_0{x}}{\alpha T_H}}}{\sqrt{T_H}}\left[ \int_{x}^{\alpha} \,dy \frac{e^{\frac{U_0 {y}}{\alpha T_H}}}{\sqrt{T_H}} + \int_{\alpha}^{1} \,dy \frac{e^{\frac{U_0}{T_H}-\frac{U_{0}(y-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log\left(\frac{T_C}{T_H}\right) }}{\sqrt{T_C}} + \int_{1}^{x+1} \,dy \frac{e^{\frac{U_0}{T_H}-\frac{U_0}{T_C}+\frac{U_0(y-1)}{\alpha T_H}}}{\sqrt{T_H}} \right]
\end{equation}
and,
\begin{equation}
\begin{split}
B = \int_{\alpha}^{1} \,dx \frac{e^{-\left[{\frac{U_0}{T_H}-\frac{U_{0}(x-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log(\frac{T_C}{T_H}) }\right]}}{\sqrt{T_C}} \left[ \int_{x}^{1} \,dy \frac{e^{\frac{U_0}{T_H}-\frac{U_{0}(y-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log\left(\frac{T_C}{T_H}\right) }}{\sqrt{T_C}} +\int_{1}^{1+\alpha} \,dy \frac{e^{\frac{U_0}{T_H}-\frac{U_0}{T_C}+\frac{U_0(y-1)}{\alpha T_H}}}{\sqrt{T_H}} \right. \\
\left. +\int_{1+\alpha}^{x+1}\,dy \frac{e^{\frac{2U_0}{T_H}-\frac{U_0}{T_C}-\frac{U_{0}(y-1-\alpha)}{(1-\alpha)T_C}+\frac{1}{2}\log(\frac{T_C}{T_H})}} {\sqrt{T_C}} \right]
\end{split}
\end{equation}
\end{widetext}
Here,
\begin{equation}
a_{11}=\int_{x}^{\alpha} \,dy \frac{e^{\frac{U_0 {y}}{\alpha T_H}}}{\sqrt{T_H}}=-{\frac {\alpha\,\sqrt {T_{H}}}{U_{0}} \left( {{\rm e}^{{\frac {U_{0}
\,x}{\alpha\,T_{H}}}}}-{{\rm e}^{{\frac {U_{0}}{T_{H}}}}} \right) }
\end{equation}
\begin{widetext}
\begin{equation}
a_{12}=\int_{\alpha}^{1} \,dy \frac{e^{\frac{U_0}{T_H}-\frac{U_{0}(y-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log\left(\frac{T_C}{T_H}\right) }}{\sqrt{T_C}} =
-{\frac { \left( \alpha-1 \right) T_{C}}{U_{0}\,\sqrt {T_{H}}}{{\rm e}
^{{\frac {U_{0}\, \left( T_{C}-T_{H} \right) }{T_{C}\,T_{H}}}}}
\left( {{\rm e}^{{\frac {U_{0}}{T_{C}}}}}-1 \right) }
\end{equation}
\end{widetext}
and,
\begin{widetext}
\begin{equation}
a_{13}=\int_{1}^{x+1} \,dy\, \frac{e^{\frac{U_0}{T_H} -\frac{U_0}{T_C} + \frac{U_{0}(y-1)}{\alpha T_H} } } {\sqrt{T_H}} = {\frac {\alpha\,\sqrt {T_{H}}}{U_{0}} \left( {{\rm e}^{{\frac {U_{0}\,
\left( \alpha+x \right) }{\alpha\,T_{H}}}}}-{{\rm e}^{{\frac {U_{0}}{
T_{H}}}}} \right) {{\rm e}^{-{\frac {U_{0}}{T_{C}}}}}}
\end{equation}
\end{widetext}
\begin{widetext}
Similarly,
\begin{equation}
b_{11}={\frac { \left( \alpha-1 \right) T_{C}}{U_{0}\,\sqrt {T_{H}}} \left( -
{{\rm e}^{{\frac {U_{0}\, \left( T_{C}\,\alpha+T_{H}\,x-T_{C}-T_{H}
\right) }{T_{H}\, \left( \alpha-1 \right) T_{C}}}}}+{{\rm e}^{{\frac
{U_{0}}{T_{H}}}}} \right) {{\rm e}^{-{\frac {U_{0}}{T_{C}}}}}}
\end{equation}
\begin{equation}
b_{12}={\frac {\alpha\,\sqrt {T_{H}}}{U_{0}}{{\rm e}^{{\frac {U_{0}\, \left(
T_{C}-T_{H} \right) }{T_{C}\,T_{H}}}}} \left( {{\rm e}^{{\frac {U_{0}
}{T_{H}}}}}-1 \right) }
\end{equation}
\begin{equation}
b_{13}={\frac { \left( \alpha-1 \right) T_{C}}{U_{0}\,\sqrt {T_{H}}} \left( {
{\rm e}^{{\frac {U_{0}\, \left( 2\,T_{C}\,\alpha-\alpha\,T_{H}+T_{H}\,
x-2\,T_{C} \right) }{T_{H}\, \left( \alpha-1 \right) T_{C}}}}}-{
{\rm e}^{2\,{\frac {U_{0}}{T_{H}}}}} \right) {{\rm e}^{-{\frac {U_{0}
}{T_{C}}}}}}
\end{equation}
\begin{equation}
\begin{split}
A=-\frac {\alpha}{{U_{0}}^{2}} \left( \left( \left( -T_{C}+T_{H}-U_{0
} \right) \alpha+T_{C} \right) {{\rm e}^{{\frac {U_{0}\, \left( T_{C}-
T_{H} \right) }{T_{C}\,T_{H}}}}}+ \right. \\
\left. \left( \left( T_{C}-T_{H} \right)
\alpha-T_{C} \right) {{\rm e}^{-{\frac {U_{0}}{T_{C}}}}}+
\left(
\left( T_{C}-T_{H} \right) \alpha-T_{C} \right) {{\rm e}^{{\frac {U_{
0}}{T_{H}}}}}+ \left( -T_{C}+T_{H}+U_{0} \right) \alpha+T_{C} \right)
\end{split}
\end{equation}
\begin{equation}
\begin{split}
B=\frac {\alpha-1}{{U_{0}}^{2}} \left( \left( \left( -T_{C}+T_{H}-U_{
0} \right) \alpha+T_{C}+U_{0} \right) {{\rm e}^{{\frac {U_{0}\,
\left( T_{C}-T_{H} \right) }{T_{C}\,T_{H}}}}}+ \right. \\
\left. \left( \left( T_{C}-T
_{H} \right) \alpha-T_{C} \right) {{\rm e}^{-{\frac {U_{0}}{T_{C}}}}}+
\left( \left( T_{C}-T_{H} \right) \alpha-T_{C} \right) {{\rm e}^{{
\frac {U_{0}}{T_{H}}}}}+ \left( -T_{C}+T_{H}+U_{0} \right) \alpha+T_{C
}-U_{0} \right)
\end{split}
\end{equation}
Then $I_d$ is given by,
\begin{equation}
\begin{split}
I_d=\frac {1}{{U_{0}}^{2}} \left( \left( \left( T_{C}-T_{H}+2\,U_{0}
\right) \alpha-T_{C}-U_{0} \right) {{\rm e}^{{\frac {U_{0}\, \left( T
_{C}-T_{H} \right) }{T_{C}\,T_{H}}}}}+ \right. \\
\left. \left( \left( -T_{C}+T_{H}
\right) \alpha+T_{C} \right) {{\rm e}^{-{\frac {U_{0}}{T_{C}}}}}+
\left( \left( -T_{C}+T_{H} \right) \alpha+T_{C} \right) {{\rm e}^{{
\frac {U_{0}}{T_{H}}}}}+ \left( T_{C}-T_{H}-2\,U_{0} \right) \alpha-T_
{C}+U_{0} \right)
\end{split}
\end{equation}
\end{widetext}
The numerator in the expression for effective diffusion coefficient can be written as,
\begin{widetext}
\begin{equation}
\begin{split}
num=\int_{0}^{\alpha} \,dx \frac{e^{-\frac{U_0{x}}{\alpha T_H}}}{\sqrt{T_H}} \left[\int_{x}^{x+L}\,dy \frac{\exp(\psi(y))}{\sqrt{T(y)}}\right]^{2}\left(\int_{x-L}^{x}\,dy \frac{\exp[-\psi(y)]}{\sqrt{T(y)}}\right) + \\
\int_{\alpha}^{1} \,dx \frac{e^{-\left[{\frac{U_0}{T_H}-\frac{U_{0}(x-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log(\frac{T_C}{T_H}) }\right]}}{\sqrt{T_C}}\left[\int_{x}^{x+L}\,dy \frac{\exp(\psi(y))}{\sqrt{T(y)}}\right]^{2}\left(\int_{x-L}^{x}\,dy \frac{\exp[-\psi(y)]}{\sqrt{T(y)}}\right)
\end{split}
\end{equation}
such that,
\begin{equation}
\begin{split}
num=\int_{0}^{\alpha} \,dx \frac{e^{-\frac{U_0{x}}{\alpha T_H}}}{\sqrt{T_H}} \left[ a_{11}+a_{12}+a_{13} \right]^{2}\left(\int_{x-L}^{x}\,dy \frac{\exp[-\psi(y)]}{\sqrt{T(y)}}\right) + \\
\int_{\alpha}^{1} \,dx \frac{e^{-\left[{\frac{U_0}{T_H}-\frac{U_{0}(x-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log(\frac{T_C}{T_H}) }\right]}}{\sqrt{T_C}}\left[ b_{11}+b_{12}+b_{13}\right]^{2}\left(\int_{x-L}^{x}\,dy \frac{\exp[-\psi(y)]}{\sqrt{T(y)}}\right)
\end{split}
\end{equation}
\end{widetext}
For $0 \leq x < \alpha$,
\begin{equation}
P=\int_{x-1}^{x} \,dy \frac{\exp[-\psi(y)]}{\sqrt{T(y)}}=p_{11}+p_{12}+p_{13}
\end{equation}
where,
\begin{equation}
p_{11}=
{\frac {\alpha\,\sqrt {T_{H}}}{U_{0}} \left( {{\rm e}^{{\frac {U_{0}\,
\left( \alpha-x \right) }{\alpha\,T_{H}}}}}-1 \right) {{\rm e}^{-{
\frac {U_{0}}{T_{C}}}}}}
\end{equation}
\begin{equation}
p_{12}=-{\frac {\sqrt {T_{H}} \left( \alpha-1 \right) }{U_{0}} \left( {
{\rm e}^{{\frac {U_{0}}{T_{C}}}}}-1 \right) {{\rm e}^{-{\frac {U_{0}}{
T_{C}}}}}}
\end{equation}
\begin{equation}
p_{13}=-{\frac {\alpha\,\sqrt {T_{H}}}{U_{0}} \left( {{\rm e}^{-{\frac {U_{0}
\,x}{\alpha\,T_{H}}}}}-1 \right) }
\end{equation}
For $\alpha \leq x < 1$,
\begin{equation}
Q=\int_{x-1}^{x} \,dy \frac{\exp[-\psi(y)]}{\sqrt{T(y)}}=q_{11}+q_{12}+q_{13}
\end{equation}
where,
\begin{widetext}
\begin{equation}
q_{11}=\int_{x-1}^{0} \,dy \frac{e^{\frac{U_{0}y}{(1-\alpha)T_C}-\frac{1}{2}\log(T_{C}/T_{H})}}{\sqrt{T_C}}=
{\frac { \left( \alpha-1 \right) \sqrt {T_{H}}}{U_{0}} \left( {{\rm e}
^{-{\frac {U_{0}\, \left( x-1 \right) }{ \left( \alpha-1 \right) T_{C}
}}}}-1 \right) }
\end{equation}
\end{widetext}
\begin{equation}
q_{12}=\int_{0}^{\alpha} \,dy \frac{e^{\frac{-U_{0}y}{\alpha T_H}}}{\sqrt{T_H}}=
{\frac {\alpha\,\sqrt {T_{H}}}{U_{0}} \left( {{\rm e}^{{\frac {U_{0}}{
T_{H}}}}}-1 \right) {{\rm e}^{-{\frac {U_{0}}{T_{H}}}}}}
\end{equation}
\begin{equation}
\begin{split}
q_{13}=\int_{\alpha}^{x}\,dy \frac {e^{\frac{-U_{0}}{T_H}+\frac{U_{0}(y-\alpha)}{(1-\alpha)T_{C}}-\frac{1}{2}\log(\frac{T_C}{T_H})} } {\sqrt{T_C}}
=\\
-{\frac { \left( \alpha-1 \right) \sqrt {T_{H}}}{U_{0}} \left( {
{\rm e}^{{\frac {U_{0}\, \left( \alpha-x \right) }{ \left( \alpha-1
\right) T_{C}}}}}-1 \right) {{\rm e}^{-{\frac {U_{0}}{T_{H}}}}}}
\end{split}
\end{equation}
Finally, we have
\begin{widetext}
\begin{equation}
\begin{split}
num=\int_{0}^{\alpha} \,dx \frac{e^{-\frac{U_0{x}}{\alpha T_H}}}{\sqrt{T_H}} \left[ a_{11}+a_{12}+a_{13} \right]^{2}( p_{11}+p_{12}+p_{13} ) + \\
\int_{\alpha}^{1} \,dx \frac{e^{-\left[{\frac{U_0}{T_H}-\frac{U_{0}(x-\alpha)}{(1-\alpha)T_{C}}+\frac{1}{2}\log(\frac{T_C}{T_H}) }\right]}}{\sqrt{T_C}}\left[ b_{11}+b_{12}+b_{13}\right]^{2} ( q_{11}+q_{12}+q_{13})
\end{split}
\end{equation}
\end{widetext}
The final expression is obtained as,
\begin{equation}
num=num1+num2
\end{equation}
where,
\begin{widetext}
\begin{equation}
\begin{split}
num1=\frac{1}{\sqrt{T_H}}\left({\varphi_{0}}^{3}\alpha +\frac{\alpha T_H}{U_0}\left(\exp\left(\frac{U_{0}}{T_H}\right)-1\right)\left[ {\varphi_{0}}^{2}{\tilde{\varphi_{1}}}{\varphi_{c}}+\varphi_{1}\varphi_{c}(2{\varphi_{0}}^{2} +\varphi_{1} {\tilde{\varphi_{1}}} {{\varphi_{c}}^{2}} \exp\left(\frac{U_0}{T_H}\right) \right] \right. \\
\left. +2\alpha\varphi_{0}\varphi_{1}\tilde{\varphi_{1}}{\varphi_{c}}^{2}\exp(U_{0}/T_{H}) +\varphi_{0}{\varphi_{1}}^{2}{\varphi_{c}}^{2}\frac{\alpha T_H}{2 U_0} \left(\exp(2U_{0}/T_{H})-1 \right) \right)
\end{split}
\end{equation}
\begin{equation}
\begin{split}
num2=\frac{{{\mu_{0}}^{2}}\lambda_{0}(1-\alpha)T_{C}} {2U_0} \left[ \exp \left( \frac{2U_0}{T_C} \right) -1\right] +\frac{\xi_{1}(1-\alpha)T_C}{U_0} \left[ \exp \left(\frac{U_0}{T_C} \right) -1\right] +\xi_{0}(1-\alpha) \\
+\mu_{2}\lambda_{1}\frac{\alpha-1}{U_0}T_{C}\left[\exp\left( -\frac{U_0}{T_C} \right)-1 \right]
\end{split}
\end{equation}
\end{widetext}
The effective diffusion coefficient is then obtained as
\begin{equation}
D_{eff}=\frac{num}{{I_d}^3}
\end{equation}
The various terms are provided in the Appendix.
The current is calculated as,
\begin{equation}
\langle\dot{x}\rangle=L\frac{1-\exp(\psi(L))}{I_d}
\end{equation}
In the low temperature limit, the particle current and effective diffusion coefficient
can be obtained in terms of the transition rates in forward and reverse directions given
by,
\begin{equation}
r_{f}=\frac{1}{\alpha}\frac{1}{\alpha T_{H} Y^2 +(\alpha-1)T_{C}Z}
\end{equation}
where,
\begin{equation}
Y=\exp(\frac{U_{0}}{2T_{H}})-\exp(\frac{U_{0}}{2T_{C}})
\end{equation}
and,
\begin{equation}
Z=(\exp(\frac{-U_{0}}{T_{C}})-1)(\exp(\frac{U_{0}}{T_{H}})-1)
\end{equation}
and,
\begin{equation}
r_{b}=\frac{1}{1-\alpha}\frac{1}{\alpha T_{H} X +(1-\alpha) T_{C} R^{2}}
\end{equation}
where,
\begin{equation}
X=(1-\exp(-\frac{U_{0}}{T_{H}}))(\exp(\frac{U_{0}}{T_{C}})-1)
\end{equation}
and,
\begin{equation}
R=\exp(\frac{U_{0}}{2T_{C}})-\exp(-\frac{U_{0}}{2T_{C}})
\end{equation}
So, the current and effective diffusion coefficient are given by,
\begin{equation}
\langle\dot{x}\rangle=r_{f}\alpha-r_{b}(1-\alpha)
\end{equation}
and,
\begin{equation}
D_{eff}=\frac{r_{f}\alpha+r_{b}(1-\alpha)}{2}
\end{equation}
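The low-temperature expressions above can be evaluated directly. The sketch below transcribes the rates $r_f$ and $r_b$ and the resulting current and effective diffusion coefficient; the parameter values are illustrative, and, following the pattern of $Y$, $Z$ and $R$, the exponents in $X$ are taken as $U_0/T_H$ and $U_0/T_C$.

```python
import numpy as np

def rates(alpha, U0, TH, TC):
    # Forward and backward transition rates in the low-temperature limit,
    # transcribed from the text (exponents in X taken as U0/T, matching Y, Z, R).
    Y = np.exp(U0 / (2 * TH)) - np.exp(U0 / (2 * TC))
    Z = (np.exp(-U0 / TC) - 1.0) * (np.exp(U0 / TH) - 1.0)
    X = (1.0 - np.exp(-U0 / TH)) * (np.exp(U0 / TC) - 1.0)
    R = np.exp(U0 / (2 * TC)) - np.exp(-U0 / (2 * TC))
    rf = 1.0 / (alpha * (alpha * TH * Y**2 + (alpha - 1.0) * TC * Z))
    rb = 1.0 / ((1.0 - alpha) * (alpha * TH * X + (1.0 - alpha) * TC * R**2))
    return rf, rb

alpha, U0, TH, TC = 0.4, 4.0, 2.0, 1.0   # illustrative parameters
rf, rb = rates(alpha, U0, TH, TC)
current = rf * alpha - rb * (1.0 - alpha)
D_eff = (rf * alpha + rb * (1.0 - alpha)) / 2.0
```

Since $\langle\dot{x}\rangle = r_f\alpha - r_b(1-\alpha)$ and $D_{eff}$ is half the sum of the same two terms, one always has $|\langle\dot{x}\rangle| \leq 2 D_{eff}$.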
{\it{\underline{Results.-}}} In Fig.~2, we plot the current as a function of the asymmetry parameter.
In Fig.~3 we show the effective diffusion coefficient as a function of the temperature
of the hot bath. Fig.~4 shows the Peclet number. Good agreement is obtained between theory
and simulation results.
\begin{figure}[htb]
\includegraphics[width=3.0in]{deff1.eps}
\caption{\label{fig:Q-F_fridge}(Color online) Effective diffusion coefficient as a function
of the asymmetry parameter $\alpha$.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=3.0in]{deff_vs_Thot.eps}
\caption{\label{fig:Q-F_fridge}(Color online) Effective diffusion coefficient as a function
of the temperature of the hot bath.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=3.0in]{vel1.eps}
\caption{\label{fig:Q-F_fridge}(Color online) Current as a function
of the asymmetry parameter $\alpha$.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=3.0in]{pec1.eps}
\caption{\label{fig:Q-F_fridge}(Color online) Peclet number as a function
of the asymmetry parameter $\alpha$.}
\end{figure}
{\it{\underline{Conclusion.-}}} In this work, we have analytically and numerically obtained
the current and effective diffusion coefficient of a Brownian particle in a piecewise
linear potential subject alternately to hot and cold baths. Good agreement is obtained between
our numerical and analytical results. In some parameter regimes transport is enhanced.
\begin{acknowledgments}
The author acknowledges
\end{acknowledgments}
\section{Introduction}
Outer automorphisms of a free group divide into two categories, polynomially
growing and exponentially growing, according to the behaviour of word lengths
under iteration. Subgroups of $\mathrm{Out}(F_r)$ that consist of only polynomially
growing outer automorphisms are known as \emph{Kolchin} subgroups and are
the $\mathrm{Out}(F_r)$ analog of unipotent subgroups of a linear group.
Clay and Pettet~\cite{clay-pettet} and Gultepe~\cite{gultepe} give sufficient
conditions for when a subgroup of $\mathrm{Out}(F_r)$ generated by two Dehn twists
contains an exponentially growing outer automorphism. This article gives an
algorithmic criterion that characterizes when a subgroup of $\mathrm{Out}(F_r)$
generated by two Dehn twists is Kolchin, in terms of a combinatorial
invariant of the generators known as the \emph{edge-twist} directed graph (see
Definition~\ref{def:et}).
\begin{theorem}[Main Theorem]
Suppose $\sigma, \tau \in \mathrm{Out}(F_r)$ are Dehn twists. The subgroup
$\langle \sigma,\tau\rangle$ is Kolchin if and only if the edge
twist digraph of the defining graphs of groups is directed acyclic.
\end{theorem}
\section{Background}
\subsection{Graphs of groups and Dehn twists}
A \emph{graph} $\Gamma$ is a collection of vertices $V(\Gamma)$, edges
$E(\Gamma)$, initial and terminal vertex maps $o, t: E\to V$ and an involution
$\bar{\cdot}: E\to E$ satisfying $\bar{e}\neq e$ and $o(\bar{e}) = t(e)$. A
\emph{directed graph (digraph)} omits the involution.
\begin{definition}
A \emph{graph of groups} is a pair $(G,\Gamma)$ where
$\Gamma$ is a connected graph and $G$ is an assignment of groups to the
vertices and edges of $\Gamma$ satisfying $G_e = G_{\bar{e}}$ and
injections $\iota_e: G_e\to G_{t(e)}$. The assignment will often be
suppressed and $\Gamma_v, \Gamma_e$ used instead.
\end{definition}
\begin{definition}
The \emph{fundamental groupoid} $\pi_1(\Gamma)$ of a graph
of groups $\Gamma$ is the groupoid with vertex set $V(\Gamma)$
generated by the path groupoid of $\Gamma$ and the groups $G_v$ subject
to the following relations. We require that for each $v\in V(\Gamma)$
the group $G_v$ is a subgroupoid based at $v$ and that the group and
groupoid structures agree. Further, for all $e\in E(\Gamma)$ and $g\in
G_e$ we have $\bar{e}\iota_{\bar{e}}(g)e = \iota_e(g)$.
\end{definition}
The \emph{fundamental group} of $\Gamma$ based at $v$, $\pi_1(\Gamma, v)$ is
the vertex subgroup of $\pi_1(\Gamma)$ based at $v$. It is standard that
changing the basepoint gives an isomorphic group~\cites{trees, higgins}.
Let $(e_1,\ldots,e_n)$ be a possibly empty edge path in $\Gamma$ starting at
$v$ and $(g_0,\ldots, g_n)$ be a sequence of elements $g_i\in G_{t(e_i)}$ with
$g_0 \in G_v$. These data represent an arrow of $\pi_1(\Gamma)$ by the groupoid
product \[ g_0e_1g_1\cdots e_ng_n.\] A non-identity element of $\pi_1(\Gamma)$
expressed this way is \emph{reduced} if either $n = 0$ and $g_0\neq \mathrm{id}$, or $n
> 0$ and for all $i$ such that $e_i = \bar{e}_{i+1}$, $g_i \not\in
\iota_{e_i}(G_{e_i})$. The edges appearing in a reduced arrow are uniquely
determined.
Further, if $t(e_n) = o(e_1)$ the arrow is \emph{cyclically reduced} if either $e_n
\neq \bar{e}_1$, or $e_n = \bar{e}_1$ and $g_ng_0\not\in \iota_{e_n}(G_{e_n})$.
For an element $g\in \pi_1(\Gamma, v)$, the set of edges appearing in a cyclically
reduced arrow conjugate to $g$ in $\pi_1(\Gamma)$ is a conjugacy class
invariant.\footnote{These edges are covered by the axis of $g$ in the
Bass-Serre tree of $\Gamma$.}
\begin{definition}
Given a graph of groups $\Gamma$ a subset of edges
$E'\subseteq E(\Gamma)$ and edge-group elements $\{z_e\}_{e\in E'}$
satisfying $z_e \in Z(\Gamma_e)$ and $z_{\bar{e}} = z_e^{-1}$, the
\emph{Dehn twist} of $\Gamma$ about $E'$ by $\{z_e\}$ is the
fundamental groupoid
automorphism $D_z$ given on the generators by
\begin{align*}
D_z(e) &= ez_e & e\in E' \\
D_z(e) &= e & e\not\in E' \\
D_z(g) &=g & g\in \Gamma_v \\
\end{align*}
As this groupoid automorphism preserves vertex subgroups it induces a
well-defined outer automorphism class $D_z\in\mathrm{Out}(\pi_1(\Gamma, v))$,
which we will also refer to as a Dehn twist.
\end{definition}
For a group $G$ we say $\sigma\in\mathrm{Out}(G)$ is a \emph{Dehn twist} if it can be realized
as a Dehn twist about some graph of groups $\Gamma$ with $\pi_1(\Gamma, v)
\cong G$.
Specializing to $\mathrm{Out}(F_r)$, when $\sigma\in\mathrm{Out}(F_r)$ is a Dehn twist there
are many graphs of groups $\Gamma$ with $\pi_1(\Gamma, v) \cong F_r$ that can
be used to realize $\sigma$. However, Cohen and Lustig~\cite{cohen-lustig}
define the notion of an \emph{efficient graph of groups} representative of a
Dehn twist and show that each Dehn twist in $\mathrm{Out}(F_r)$ has a unique efficient
representative. For a fixed $\sigma$ let $\mathcal{G}(\sigma)$ denote the graph
of groups of its efficient representative; edge groups of
$\mathcal{G}(\sigma)$ are infinite cyclic~\cite{cohen-lustig}.
\begin{remark}\label{rem:twist-power}
If $\sigma,\tau\in\mathrm{Out}(F_r)$ are Dehn twists with a common power, then
$\mathcal{G}(\sigma) = \mathcal{G}(\tau)$.
\end{remark}
\subsection{Topological representatives and the Kolchin theorem}
Given a graph $\Gamma$ the \emph{topological realization} of $\Gamma$ is a
simplicial complex with zero-skeleton $V(\Gamma)$ and one-cells joining $o(e)$
and $t(e)$ for each edge in a set of $\bar{\cdot}$ orbit representatives. It
will not cause confusion to use $\Gamma$ for both a graph and its topological
representative. If $\gamma \subset \Gamma$ is a based loop, denote the
associated element of $\pi_1(\Gamma)$ by $\gamma^\ast$. Given $\sigma\in
\mathrm{Out}(F_r)$, a \emph{topological realization} is a homotopy equivalence
$\hat{\sigma}:\Gamma\to\Gamma$ so that $\hat{\sigma}_\ast : \pi_1(\Gamma,v)\to
\pi_1(\Gamma,\hat{\sigma}(v))$ is a representative of $\sigma$. A homotopy
equivalence $\hat{\sigma}:\Gamma\to\Gamma$ is \emph{filtered} if there is a
filtration $\emptyset = \Gamma_0\subsetneq \Gamma_1\subsetneq \cdots\subsetneq
\Gamma_k = \Gamma$ preserved by $\hat{\sigma}$.
\begin{definition}
A filtered homotopy equivalence $\hat{\sigma}:\Gamma\to\Gamma$ is
\emph{upper triangular} if
\begin{enumerate}
\item $\hat{\sigma}$ fixes the vertices of $\Gamma$,
\item Each stratum of the filtration $\Gamma_i\setminus
\Gamma_{i-1} = e_i$ is a single topological edge,
\item Each edge $e_i$ has a preferred orientation and with this
orientation there is a closed path $u_i\subseteq
\Gamma_{i-1}$ based at $t(e_i)$ so that
$\hat{\sigma}(e_i) = e_iu_i$.
\end{enumerate}
\end{definition}
The path $u_i$ is called the \emph{suffix} associated to $e_i$. A filtration assigns
each edge a height, the $i$ such that $e\in \Gamma_i\setminus\Gamma_{i-1}$, and
taking a maximum this definition extends to edge paths.
Every Dehn twist in $\mathrm{Out}(F_r)$ has an upper-triangular
representative~\cites{bfh-ii, cohen-lustig}. In a previous
paper~\cite{bering-lg} I describe how to construct $\mathcal{G}(\sigma)$ from
an upper-triangular representative, following a similar construction of
Bestvina, Feighn, and Handel~\cite{bfh-ii}. The following is an immediate
consequence of my construction.
\begin{lemma}\label{lem:filter}
Suppose $\Gamma$ is a filtered graph and $\sigma\in\mathrm{Out}(F_r)$ is a Dehn
twist that is upper triangular with respect to $\Gamma$. Then
\begin{enumerate}
\item there is a height function $ht: E(\mathcal{G}(\sigma))\to
\mathbb{N}$ so that for any loop
$\gamma\subseteq\Gamma_i$ the height of the edges in a
cyclically reduced representative of the conjugacy
class of $\gamma^\ast$ in
$\pi_1(\mathcal{G}(\sigma))$ is at most $i$,
\item For each edge $e\in E(\mathcal{G}(\sigma))$, $ht(e) =
ht(\bar{e})$, and the edge
group $\mathcal{G}(\sigma)_e$ is a conjugate of a
maximal cyclic subgroup of $F_r$ that contains
$u_i^\ast$ for some suffix $u_i$, and $ht(e) >
\min_{\gamma \sim u_i} \{ht_\Gamma(\gamma)\}$, where
$\gamma$ ranges over loops freely homotopic to $u_i$.
\end{enumerate}
\end{lemma}
Bestvina, Feighn, and Handel proved an $\mathrm{Out}(F_r)$ analog of the classical
Kolchin theorem for $\mathrm{Out}(F_r)$, which provides simultaneous upper-triangular
representatives for Kolchin-type subgroups of $\mathrm{Out}(F_r)$.
\begin{theorem}[\cite{bfh-ii}]\label{thm:kolchin}
Suppose $H\leq\mathrm{Out}(F_r)$ is a Kolchin subgroup. Then there is a finite
index subgroup $H' \leq H$ and a filtered graph $\Gamma$ so that each
$\sigma \in H'$ is upper triangular with respect to $\Gamma$.
\end{theorem}
\subsection{Twists and polynomial growth}
In a previous paper~\cite{bering-lg} I introduced the edge-twist digraph of two
Dehn twists and used it to provide a sufficient condition for a subgroup of
$\mathrm{Out}(F_r)$ generated by two Dehn twists to be Kolchin.
\begin{definition}[\cite{bering-lg}]\label{def:et}
The edge-twist digraph $\mathcal{ET}(A,B)$ of two graphs of groups $A$,
$B$ with isomorphic fundamental groups and infinite cyclic edge
stabilizers is the digraph with vertex set
\[ V(\mathcal{ET}(A,B)) = \{(e, \bar{e}) | e\in E(A)\}\cup
\{(f,\bar{f})|f\in E(B)\} \]
directed edges $((e,\bar{e}), (f,\bar{f}))$ for $e\in E(A), f\in E(B)$ when a generator of $A_e$
uses $f$ or $\bar{f}$ in its cyclically reduced representation in
$\pi_1(B)$, and directed edges $((f,\bar{f}),(e,\bar{e}))$, $f\in E(B),
e\in E(A)$ when a generator of $B_f$ uses $e$ or $\bar{e}$ in a
cyclically reduced representation in $\pi_1(A)$.
\end{definition}
\begin{remark}
This is well-defined, using an edge is a conjugacy invariant, and using
an edge or its reverse is preserved under taking inverses.
\end{remark}
\begin{lemma}[\cite{bering-lg}]\label{lem:pg}
If $\sigma, \tau\in\mathrm{Out}(F_r)$ are Dehn twists and
$\mathcal{ET}(\mathcal{G}(\sigma),\mathcal{G}(\tau))$ is directed acyclic, then
$\langle \sigma,\tau\rangle$ is Kolchin.
\end{lemma}
\section{Proof of the main theorem}
\begin{proof}
It suffices to prove the converse to Lemma \ref{lem:pg}. Suppose
$\langle \sigma, \tau\rangle$ is Kolchin.
By Theorem \ref{thm:kolchin}, there is a finite
index subgroup $H\leq \langle \sigma, \tau\rangle$
where every element of $H$ is upper triangular with respect
to a fixed filtered graph $\Gamma$. Since $H$ is finite index, there are
powers $m, n$ so that $\sigma^m, \tau^n \in H$, so that $\sigma^m$
and $\tau^n$ are upper triangular with respect to $\Gamma$.
By Lemma \ref{lem:filter}, the filtration of $\Gamma$ induces height
functions on $E(\mathcal{G}(\sigma^m))$ and $E(\mathcal{G}(\tau^n))$,
combining these gives a height function on the vertices of
$\mathcal{ET}(\mathcal{G}(\sigma^m),\mathcal{G}(\tau^n))$. Every
directed edge $((e,\bar{e}), (f,\bar{f}))$ satisfies $ht(e) > ht(f)$.
Indeed, suppose $(e,\bar{e}) \in E(\mathcal{G}(\sigma^m))$. Let
$[g] \subset F_r$ be the conjugacy class of a generator of
$\mathcal{G}(\sigma^m)_e$. By
Lemma \ref{lem:filter} (ii), there is a representative $g\in [g]$ such that
$g^k = u_i^\ast$ for some $\sigma$-suffix $u_i$. Take a minimum
height loop $\gamma$ representing $[g]$, so that $\gamma \subseteq
\Gamma_{ht(\gamma)}$. Again by Lemma \ref{lem:filter} (ii), $ht(e) >
ht(\gamma)$. Finally, by Lemma \ref{lem:filter} (i), each edge $f$ in a
cyclically reduced $\pi_1(\mathcal{G}(\tau^n))$ representative of
$[g]$ satisfies $ht(f) \leq ht(\gamma)$. Thus $ht(e) > ht(f)$ for each
directed edge with origin $(e,\bar{e})$, as required. The argument for
generators of the edge groups of
$\mathcal{G}(\tau^n)$ is symmetric. Therefore any directed path
in $\mathcal{ET}(\mathcal{G}(\sigma^m),\mathcal{G}(\tau^n))$ has
monotone decreasing vertex height, which implies that $\mathcal{ET}$
is directed acyclic. To conclude, observe that by
Remark~\ref{rem:twist-power}
$\mathcal{ET}(\mathcal{G}(\sigma^m),\mathcal{G}(\tau^n)) =
\mathcal{ET}(\mathcal{G}(\sigma), \mathcal{G}(\tau))$.
\end{proof}
Cohen and Lustig~\cite{cohen-lustig-ii} give an algorithm to find efficient
representatives. Computing the edge-twist digraph and testing if it is acyclic
are straightforward computations, so the criterion in the main theorem is
algorithmic.
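To make the last claim concrete, the acyclicity test can be carried out with Kahn's algorithm. The sketch below is a minimal implementation; the two example digraphs are hypothetical edge-twist digraphs on edge-pair vertices, not computed from actual graphs of groups.

```python
from collections import defaultdict, deque

def is_directed_acyclic(edges):
    # Kahn's algorithm: repeatedly delete vertices of in-degree zero;
    # the digraph is acyclic iff every vertex is eventually deleted.
    succ = defaultdict(list)
    indeg = defaultdict(int)
    verts = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        verts.update((u, v))
    queue = deque(v for v in verts if indeg[v] == 0)
    removed = 0
    while queue:
        u = queue.popleft()
        removed += 1
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return removed == len(verts)

# Hypothetical examples: vertices e_i come from edges of A, f_j from edges of B.
dag_example = [("e1", "f1"), ("e2", "f1"), ("f2", "e1")]   # Kolchin case
cycle_example = [("e1", "f1"), ("f1", "e1")]               # exponential growth
```

By the main theorem, the first example would correspond to a Kolchin pair of twists and the second to a pair generating exponential growth.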
\section*{Acknowledgements}
I thank Mladen Bestvina for a helpful conversation that led to this note.
\begin{bibdiv}
\begin{biblist}
\bibselect{bibliography}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
In the theory of perfect plasticity, the deformation of a material is mainly decomposed into two components:
an elastic deformation due to reversible microscopic processes for which there is a one-to-one map between
the stress and the strain, and an irreversible plastic deformation for which such a map is lost.
The calculation of deformations of an elasto-plastic material must therefore take into account not only
its current state, but also its history. For this reason, the common approach consists in finding this
deformation as the result of infinitesimal variations of the strain and the stress tensors (see, e. g.,
\cite{lubliner1970}, \cite{hashi2012}, \cite{hanreddy}). \\
$\;$\\
Before describing the aim and main results of this work, let us start with a simple reminder of the general principles of elasto-plasticity. Leaving aside thermal effects and hardening (the latter will be the subject of a specific
study in a future paper), the deformation of a material occupying - in its undeformed state -
a domain $\Omega \subset {\mathbb R}^3$ is described by the knowledge of
the displacement vector field $\vecc{u}(x, t)$ characterizing the geometry,
and the Cauchy stress tensor ${\ggrec \sigma}(x, t)$ characterizing the state of
the material (where $x$ is any point of $\Omega$ and $t$ is time). In the incremental
elastic perfectly plastic model, the displacement $\vecc{u}$ (assumed to be small) and the stress
${\ggrec \sigma}$ are governed by the following usual
principles:
\begin{itemize}
\item {\it (additive decomposition)} The strain rate tensor $\drv{{\ggrec \varepsilon}}$ is the sum of the elastic strain rate $\drv{\teps^{\rm e}} $ and the plastic strain rate $\drv{\teps^{\rm p}}$:
\begin{equation}\label{sumdecompo0}
\drv{{\ggrec \varepsilon}}= \drv{ \teps^{\rm e}} +\drv{\teps^{\rm p}}.
\end{equation}
\item {\it (the yield criterion) } The stress tensor satisfies
\begin{equation}\label{yield_principle}
{\ggrec \sigma} \in {C},
\end{equation}
where $ {C} $ is a nonempty closed {\it convex} subset of three-by-three symmetric tensors, ${\mathbb M}^{3\times 3}_{\rm{sym}}$ (see, e. g., \cite{drucker1949}, \cite{hill56}, \cite{lubliner1970}).
It is assumed that the material is perfectly plastic, that is, ${C}$ is constant during the deformation process (no hardening or softening occurs). When the stress is strictly inside ${C}$, the strain rate is purely
elastic, $\drv{{\ggrec \varepsilon}} = \drv{\teps^{\rm e}}$ and $\drv{\teps^{\rm p}} = 0$. The plastic onset
occurs when the stress reaches the boundary $\partial {C}$ of ${C}$ ($\partial {C}$ is called
the yield surface). Note that many yield criteria used in practice are commonly defined by functional constraints of the form
\begin{equation}\label{inequal_yield}
f_i({\ggrec \sigma}) \leq 0 \mbox{ for } 1 \leq i \leq m,
\end{equation}
where the yield functions $f_i$ depend only on the principal stresses of ${\ggrec \sigma}$. \\
There is a large number of criteria describing the yield of materials in the literature.
The most commonly used are the Tresca criterion (\cite{tresca1964}) and the von Mises criterion \cite{mises1928}.
\item {\it (Hooke's Law)} The elastic strain rate $\drv{\teps^{\rm e}}$ is related to the stress rate by
\begin{equation}\label{hookzero}
\drv{ {\ggrec \sigma}} = {\mathbb H}(\drv{\teps^{\rm e}}),
\end{equation}
where ${\mathbb H}$ is the fourth-order isotropic elasticity tensor given by \eqref{law_hooke4}.
\item {\it (Principle of maximum work)} When ${\ggrec \sigma} \in\partial {C}$, the pair $({\ggrec \sigma}, \drv{\teps}^{\rm p})$ satisfies
\begin{equation}\label{intro_normal}
\drv{\teps}^{\rm p}:{\ggrec \sigma} \geq \drv{\teps}^{\rm p}:{{\ggrec \sigma^\star}} \mbox{ for all } {\ggrec \sigma^\star} \in {C}.
\end{equation}
(see \cite{hill48, hill50}, \cite{koiter1953, koiter1960}, \cite{lubliner1970}).
\item {\it (Consistency condition) } When ${\ggrec \sigma} \in\partial {C}$, $ \drv{\teps}^{\rm p}$ and $ \drv{{\ggrec \sigma}}$ verify
\begin{equation}\label{consistancy0}
\drv{\teps}^{\rm p} : \drv{{\ggrec \sigma}} = 0,
\end{equation}
at all times.
As indicated in \cite{DuvautLions}, condition \eqref{consistancy0} results from
\eqref{intro_normal} under strong time regularity assumptions on ${\ggrec \sigma}$ (here
${\ggrec \sigma}$ is a tensor valued function depending on time $t$ and position $x$).
This might explain why this condition
is often sidelined and not taken into account by many authors.
\end{itemize}
In view of \eqref{sumdecompo0} and \eqref{hookzero}, Condition \eqref{intro_normal} is often written in the following
form, called the normality rule,
\begin{equation}\label{flowrule0}
\drv{\teps}^{\rm p} \in \NC{{C}}{{\ggrec \sigma}},
\end{equation}
or, equivalently,
\begin{equation}\label{orthoPDS}
\drv{{\ggrec \varepsilon}} - {\mathbb H}^{-1}(\drv{{\ggrec \sigma}}) \in \NC{{C}}{{\ggrec \sigma}},
\end{equation}
where $\NC{{C}}{{\ggrec \sigma}}$ denotes the normal cone of ${C}$ at ${\ggrec \sigma}$ and ${\mathbb H}^{-1}$ is
the inverse of the operator ${\mathbb H}$. \\
Because of the inherent irreversibility of plastic deformations, it is more meaningful
to describe elasto-plastic deformation processes by their infinitesimal variations, i.e. by the strain rate and stress rate tensors.
Besides, these principles are complemented by the local equations governing the displacement of the material elements, given
in the general form
\begin{equation}\label{evomotion_eq}
\rho \ddrv{\vecc{u}} - \mathrm{div}\, {\ggrec \sigma} - h = 0,
\end{equation}
where $u$ is the displacement of the material element with respect to its original spatial coordinate $x$ (the two dots on its top denoting the second derivative in time), $h$ is the volume density of forces, $\rho$ the density of the material, and $\mathrm{div}\, {\ggrec \sigma}$ the vector field
given by
$
(\mathrm{div}\, {\ggrec \sigma})_i = \sum_{j=1}^3 \frac{\partial \sigma_{i j}}{\partial x_j} \mbox{ for } 1 \leq i \leq 3,
$
augmented by appropriate boundary and initial conditions set by the forces acting on the boundary of the material and its initial state. These physical constraints, although necessary in order to obtain a meaningful solution to this system of equations, are not important for the present work. The proposed formulation can nevertheless accommodate any type of boundary and initial conditions.
It is worthwhile mentioning that under the assumption of a quasi-static evolution, equation \eqref{evomotion_eq} becomes
\begin{equation}\label{qsmotion_eq}
\mathrm{div}\, {\ggrec \sigma} + h = 0,
\end{equation}
and that this type of assumption is often made in practice when the elasto-plastic times scales of the material are much faster than those of the underlying volume and boundary forces.
The elastic and perfectly plastic time-dependent problem of a material occupying a geometrical domain is
often written as a variational inequality (see, e. g., \cite{DuvautLions}).
This leads to a cumbersome system of a large number of non-linear equations, although it has been studied by several authors, both in the case of quasi-static evolution and in the case of full dynamics (see, e. g., \cite{DuvautLions}, \cite{johnson1976}, \cite{temam1986}, \cite{fuchs}, \cite{dalmaso2006}, \cite{baba2021}).
The present work aims at reformulating the local principles of elasto-plasticity
into a smaller number of equations, which allows in particular
to get rid of the inequalities (and thus of the variational inequalities
associated with the global time evolution problem).
Our approach is based on the following statement: behind the system lie
the unique orthogonal decompositions of the strain rate $\drv{{\ggrec \varepsilon}}$ and its image
${\mathbb H}(\drv{{\ggrec \varepsilon}})$ in the form $ {\ggrec \tau}+{\ggrec \eta}$ with ${\ggrec \tau}$ belonging
to the tangent cone of ${C}$ at ${\ggrec \sigma}$ and ${\ggrec \eta}$ belonging
to its polar cone (i. e. the normal cone of ${C}$ at ${\ggrec \sigma}$). More
precisely, we shall prove the following.
$$
\drv{\teps}^{\rm e} = \PRJ{\TG{{C}}{{\ggrec \sigma}}} \drv{{\ggrec \varepsilon}}, \; \; \drv{\teps}^{\rm p} = \displaystyle{ \PRJ{\NC{{C}}{{\ggrec \sigma}}} \drv{{\ggrec \varepsilon}}}, \; \; \drv{{\ggrec \sigma}} = \PRJ{\TG{{C}}{{\ggrec \sigma}}}{\mathbb H} (\drv{{\ggrec \varepsilon}}),
$$
where $ \PRJ{\TG{{C}}{{\ggrec \sigma}}}$ (resp. $\PRJ{\NC{{C}}{{\ggrec \sigma}}}$) represents the orthogonal
projection on $\TG{{C}}{{\ggrec \sigma}}$ (resp. on ${\NC{{C}}{{\ggrec \sigma}}}$). This will allow us in particular to formulate the system as an augmented evolution system of the form
\begin{equation}\label{evol_sys}
\frac{\partial}{\partial t}
\left(
\begin{array}{c}
v \\
{\ggrec \sigma}
\end{array}
\right) -
\left(
\begin{array}{c}
\mathrm{div}\, {\ggrec \sigma} \\
{\boldsymbol {\mathscr H}}({\ggrec \sigma}, \drv{{\ggrec \varepsilon}} (v))
\end{array}
\right) =
\left(
\begin{array}{c}
h \\
0
\end{array}
\right),
\end{equation}
with ${\boldsymbol {\mathscr H}}({\ggrec \sigma}, \drv{{\ggrec \varepsilon}}(v)) = \PRJ{\TG{{C}}{{\ggrec \sigma}}}{\mathbb H}(\drv{{\ggrec \varepsilon}}(v))$ and $v=\dot u$ is the flow velocity of the material elements. The nonlinear equality and inequality constraints associated with the yield criterion, which are inevitable when using a variational method for example, are replaced by easy to calculate projections onto the tangent and normal cones, as demonstrated by the application of this new methodology to the von Mises and Tresca criteria. \\
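To illustrate how these projections become simple computations, here is a minimal numerical sketch for the von Mises domain $C=\{{\ggrec \sigma} \;|\; \|\overline{\tsig}\|\le k\}$ (treated in detail later in the paper): at a smooth boundary point the normal cone is the ray spanned by the deviator of ${\ggrec \sigma}$, so the Moreau decomposition of a strain rate reduces to a one-dimensional split. The function name and the test data are illustrative.

```python
import numpy as np

def moreau_split_vonmises(eps_dot, sigma, k, tol=1e-9):
    # Decompose a symmetric strain-rate tensor into its projections onto
    # the tangent and normal cones of C = {s : ||dev(s)|| <= k} at sigma.
    # Interior point: the normal cone is {0}; smooth boundary point: it is
    # the ray spanned by the deviator of sigma.
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    nrm = np.linalg.norm(dev)
    if nrm < k - tol:                      # sigma strictly inside C
        return eps_dot, np.zeros((3, 3))
    n = dev / nrm                          # outward unit normal to C at sigma
    coef = max(0.0, float(np.tensordot(eps_dot, n)))
    normal_part = coef * n                 # projection onto the normal cone
    return eps_dot - normal_part, normal_part

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); sigma = (A + A.T) / 2.0
k = np.linalg.norm(sigma - np.trace(sigma) / 3.0 * np.eye(3))  # sigma on the boundary
B = rng.standard_normal((3, 3)); eps_dot = (B + B.T) / 2.0
tang, norm_part = moreau_split_vonmises(eps_dot, sigma, k)
```

By construction the two parts sum to the strain rate and are orthogonal, which is exactly the decomposition underlying the formulation above.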
Time differentiation of the first component of the system leads to an evolution equation involving only the velocity
\begin{equation}
\frac{\partial^2 v }{\partial t^2} - \mathrm{div}\, {\boldsymbol {\mathscr H}}({\ggrec \sigma}, \drv{{\ggrec \varepsilon}} (v)) = \frac{\partial h}{\partial t}.
\end{equation}
From all these elements, it follows that it is necessary to calculate the projectors
$\PRJ{\TG{{C}}{.}}$ and $\PRJ{\NC{{C}}{.}}$
on the tangent cone and the normal cone. \\
This approach is quite general, and does not require assumptions about the regularity of ${C}$,
nor restrictions on the number of functions that define it. However, we will present the
calculations in the cases where ${C}$ is defined by one or two yield functions only, to demonstrate the usefulness of the new methodology in practical applications. The case of the Von
Mises criterion will be in particular well detailed. The Tresca criterion, for
which the domain has degenerate corners, will also be carefully examined.
Finally, it will be discussed how invariance by similarity of the yield functions can be exploited to compute
easily and efficiently the projectors involved in the formulation.
The rest of this paper is organized as follows. This section will end with preliminaries and notations.
In Section \ref{sec_main_res}, we present the main results allowing to reformulate the equations of the elastic perfectly plastic model in terms of the projectors on the normal and tangent cones. In Section \ref{consti_laws}, we will treat in more details the case where we have one or two functions defining the yield surface. Section \ref{VM-T-cr} is devoted to the Von Mises's and Tresca's criteria.
The last section is devoted to some concluding remarks concerning invariance by similarity of
yield functions and upcoming extensions.
{\it Notations and preliminaries}\label{sect_notat}
In the sequel, all the elements of ${\mathbb R}^3$ will be considered as column vectors.
For $x, \, y \in {\mathbb R}^3$, we denote by $\displaystyle{x}\cdot{y} \in {\mathbb R}^3$ their Euclidian scalar product,
by $x \otimes y= x y^t$ their tensor product and by $x \odot y= \frac{1}{2}(x \otimes y +y \otimes x)$
their symmetric tensor product. The superscript $t$ denotes the matrix or vector transpose. The same notation will be used for any
second order tensor ${\ggrec \sigma}$ and for the matrix of its components $(\sigma_{ij})_{1 \leq i, \; j \leq n}$
(with respect to the canonical basis of ${\mathbb R}^3$). Given two second order tensors ${\ggrec \sigma}$
and ${\ggrec \varepsilon}$, we denote by ${\ggrec \sigma} {\ggrec \varepsilon}$ their products, by ${\ggrec \sigma}:{\ggrec \varepsilon} = {\rm tr}{({\ggrec \sigma} {\ggrec \varepsilon}^T)}$
their scalar product and by $\|.\|$ the associated (Frobenius) tensor norm. When ${\ggrec \sigma}$ and ${\ggrec \varepsilon}$ are symmetric,
we have
$$
{\ggrec \sigma} : {\ggrec \varepsilon} = \sum_{i,j=1}^3 \sigma_{ij} \varepsilon_{ij} = \sum_{i =1}^3 \sigma_{i i}\varepsilon_{ii} + 2 \sum_{1 \leq i < j \leq 3} \sigma_{i j}\varepsilon_{ij},
$$
and
$$
\|{\ggrec \sigma}\| = \{\sigma_{11}^2 + \sigma_{22}^2 + \sigma_{33}^2 + 2 \sigma_{12}^2 + 2 \sigma_{13}^2 + 2 \sigma_{23}^2 \}^{1/2}.
$$
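These component formulas are easy to verify numerically; the tensors below are arbitrary symmetric examples.

```python
import numpy as np

S = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 5.0],
              [3.0, 5.0, 6.0]])     # an arbitrary symmetric tensor
E = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 4.0],
              [1.0, 4.0, 5.0]])     # another symmetric tensor

# sigma : eps = tr(sigma eps^t), expanded over components for symmetric tensors
dot_trace = np.trace(S @ E.T)
dot_formula = (sum(S[i, i] * E[i, i] for i in range(3))
               + 2 * (S[0, 1] * E[0, 1] + S[0, 2] * E[0, 2] + S[1, 2] * E[1, 2]))

# Frobenius norm, expanded over components of a symmetric tensor
norm_formula = (S[0, 0]**2 + S[1, 1]**2 + S[2, 2]**2
                + 2 * S[0, 1]**2 + 2 * S[0, 2]**2 + 2 * S[1, 2]**2) ** 0.5
```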
The identity tensor of second order (resp. of order four) will be denoted by ${\rm{\bf{I}}}$ (resp. ${\mathbb{Id}}$).
Given an integer $n \geq 1$, ${\mathbb M}^{n\times n}_{\rm{sym}}$ denotes the space of $ n\times n$ symmetric matrices, ${\mathbb M}^{n\times n}_{\rm{sym, +}}$ is the subset of ${\mathbb M}^{n\times n}_{\rm{sym}}$ comprised of positive semidefinite matrices and
$\ORT{n}$ is the group of $n \times n$ orthogonal matrices. Given a symmetric tensor ${\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$, $ \lambda_1({\ggrec \sigma})$, $ \lambda_2({\ggrec \sigma})$ and $ \lambda_3({\ggrec \sigma})$
denote its principal values (or eigenvalues)
arranged in decreasing order: $ \lambda_1({\ggrec \sigma}) \geq \lambda_2({\ggrec \sigma}) \geq \lambda_3({\ggrec \sigma})$. We set
$$
\Lambda ({\ggrec \sigma}) = ( \lambda_1({\ggrec \sigma}), \lambda_2({\ggrec \sigma}), \lambda_3({\ggrec \sigma})).
$$
We denote by $I_1({\ggrec \sigma})$, $I_2({\ggrec \sigma})$ and $I_3({\ggrec \sigma})$ the principal invariants of ${\ggrec \sigma}$ such that
the characteristic polynomial of ${\ggrec \sigma}$ is given by
\begin{equation}
\det(\lambda {\rm{\bf{I}}} - {\ggrec \sigma}) = \lambda^3 - I_1({\ggrec \sigma})\lambda^2 + I_2({\ggrec \sigma})\lambda - I_3({\ggrec \sigma}) \mbox{ for all } \lambda \in {\mathbb R}.
\end{equation}
We have
\begin{equation}
I_1({\ggrec \sigma}) = {\rm tr}({\ggrec \sigma}), \; I_2({\ggrec \sigma}) = \frac{1}{2} ( {\rm tr}({\ggrec \sigma})^2 - {\rm tr}({\ggrec \sigma}^2)), \; I_3({\ggrec \sigma}) = \det({\ggrec \sigma}).
\end{equation}
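As a quick numerical check of these invariant formulas, one can compare the characteristic polynomial built from $I_1$, $I_2$, $I_3$ with a direct determinant evaluation; the matrix and the value of $\lambda$ below are arbitrary.

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])     # an arbitrary symmetric tensor
I1 = np.trace(S)
I2 = 0.5 * (np.trace(S)**2 - np.trace(S @ S))
I3 = np.linalg.det(S)

lam = 1.7                            # arbitrary test value of lambda
char_det = np.linalg.det(lam * np.eye(3) - S)
char_poly = lam**3 - I1 * lam**2 + I2 * lam - I3
```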
If $f\; :\;{\mathbb M}^{3\times 3}_{\rm{sym}} \to {\mathbb R}$ is differentiable at ${\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}} $, then ${\boldsymbol \nabla} f({\ggrec \sigma})$ will be its symmetric
gradient at ${\ggrec \sigma}$, that is ${\boldsymbol \nabla} f({\ggrec \sigma})$ is the unique second order tensor satisfying
$$
\forall {\ggrec \sigma^\star} \in {\mathbb M}^{3\times 3}_{\rm{sym}}, \; df({\ggrec \sigma}){\ggrec \sigma^\star} = ({\boldsymbol \nabla} f({\ggrec \sigma}) ) : {\ggrec \sigma^\star},
$$
where $df({\ggrec \sigma})$ is the differential of $f$ at ${\ggrec \sigma}$. It is worth noting
that when the function $f$ is defined over the nine dimensional space ${\mathbb M}^{3\times 3}$ of $3 \times 3$
matrices with real entries, its symmetric gradient ${\boldsymbol \nabla} f({\ggrec \sigma})$ is different from its
gradient as a function on ${\mathbb M}^{3\times 3}$ (the former is the symmetric part of the latter).
More precisely, the components of ${\boldsymbol \nabla} f({\ggrec \sigma})$ are given by
$$
{\boldsymbol \nabla} f({\ggrec \sigma}) = \sum_{1 \leq i \leq j \leq 3} \frac{\partial f}{\partial \sigma_{ij}}({\ggrec \sigma}) \vecc{e}_i \odot \vecc{e}_j.
$$
We introduce the subspace of deviatoric tensors (or matrices):
\begin{equation}
{\mathbb M}^{3\times 3}_{\rm{D}} = \{{\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; {\rm tr}({\ggrec \sigma}) = 0\}.
\end{equation}
Obviously, ${\mathbb M}^{3\times 3}_{\rm{sym}} = {\R \id} \oplus^\perp {\mathbb M}^{3\times 3}_{\rm{D}}$ and for all ${\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$, we can write
\begin{equation}
{\ggrec \sigma} = \overline{\tsig}+ \frac{{\rm tr}{({\ggrec \sigma})}}{3} {\rm{\bf{I}}},
\end{equation}
where ${\rm{\bf{I}}}$ is the identity tensor and $\overline{\tsig} \in {\mathbb M}^{3\times 3}_{\rm{D}} $ is the deviator of ${\ggrec \sigma}$. Obviously $I_1(\overline{\tsig})=0$ and
\begin{equation}
I_2(\overline{\tsig}) = \displaystyle{ I_2({\ggrec \sigma}) - \frac{I_1({\ggrec \sigma})^2}{3}}, \;
I_3(\overline{\tsig}) = \displaystyle{\frac{ {\rm tr}(\overline{\tsig}^3) }{3} = \frac{2 I_1({\ggrec \sigma})^3}{27} - \frac{I_1({\ggrec \sigma}) I_2({\ggrec \sigma}) }{3}+ I_3({\ggrec \sigma}).}
\end{equation}
It is customary in solid mechanics to set $J_2({\ggrec \sigma}) = - I_2(\overline{\tsig})$ and $J_3({\ggrec \sigma}) = I_3(\overline{\tsig})$.
We have
\begin{eqnarray}
J_2({\ggrec \sigma}) & =& \displaystyle{ \frac{1}{6} (( \sigma_{11} - \sigma_{22})^2 + ( \sigma_{22} - \sigma_{33})^2 + ( \sigma_{11} - \sigma_{33})^2) } \nonumber \\
&& + \sigma_{12}^2 + \sigma_{23}^2 + \sigma_{13}^2, \\
&=& \displaystyle{ \frac{1}{6} (( \lambda_{1} - \lambda_{2})^2 + ( \lambda_{2} - \lambda_{3})^2 +( \lambda_{1} - \lambda_{3})^2 ), } \\
&=& \frac{1}{2} \|\overline{\tsig}\|^2.
\end{eqnarray}
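The three expressions for $J_2$ above (in terms of components, of eigenvalues, and of the deviator norm) can be cross-checked numerically; the sketch below assumes NumPy and uses an arbitrary symmetric test matrix:

```python
import numpy as np

s = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.5]])
dev = s - np.trace(s) / 3.0 * np.eye(3)   # deviator of s

# J2 from the components.
J2_comp = ((s[0, 0] - s[1, 1]) ** 2 + (s[1, 1] - s[2, 2]) ** 2
           + (s[0, 0] - s[2, 2]) ** 2) / 6.0 \
          + s[0, 1] ** 2 + s[1, 2] ** 2 + s[0, 2] ** 2
# J2 from the eigenvalues (principal values).
w = np.linalg.eigvalsh(s)
J2_eig = ((w[0] - w[1]) ** 2 + (w[1] - w[2]) ** 2 + (w[0] - w[2]) ** 2) / 6.0
# J2 as half the squared Frobenius norm of the deviator.
J2_dev = 0.5 * np.sum(dev * dev)

assert abs(J2_comp - J2_eig) < 1e-10 and abs(J2_comp - J2_dev) < 1e-10
```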
Also, by the Cayley-Hamilton theorem, which states that every matrix is a solution of its characteristic equation, we have
\begin{equation}
{\ggrec \sigma}^3 = I_1({\ggrec \sigma}){\ggrec \sigma}^2 - I_2({\ggrec \sigma}){\ggrec \sigma} + I_3({\ggrec \sigma}) {\rm{\bf{I}}},
\end{equation}
and thus \[\overline{\tsig}^3 = J_2({\ggrec \sigma}) \overline{\tsig} + J_3({\ggrec \sigma}) {\rm{\bf{I}}}. \]
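The deviatoric Cayley-Hamilton identity just stated can be confirmed numerically (a sketch assuming NumPy, with an arbitrary test matrix):

```python
import numpy as np

s = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.5]])
dev = s - np.trace(s) / 3.0 * np.eye(3)
J2 = 0.5 * np.sum(dev * dev)      # J2 = -I2(dev)
J3 = np.linalg.det(dev)           # J3 = I3(dev)

# Cayley-Hamilton for the traceless deviator: dev^3 = J2*dev + J3*I.
assert np.allclose(dev @ dev @ dev, J2 * dev + J3 * np.eye(3))
```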
In the sequel, we also need to express the gradient of these invariants. The reader can easily verify the
following identities
\begin{eqnarray}
{\boldsymbol \nabla} I_1({\ggrec \sigma}) &=& {\rm{\bf{I}}}, \;\\
{\boldsymbol \nabla} I_2({\ggrec \sigma}) &=& I_1({\ggrec \sigma}) {\rm{\bf{I}}}- {\ggrec \sigma}, \; \\
{\boldsymbol \nabla} I_3({\ggrec \sigma}) &=& {\ggrec \sigma}^2 - I_1({\ggrec \sigma}) {\ggrec \sigma} + I_2({\ggrec \sigma}) {\rm{\bf{I}}}, \\
{\boldsymbol \nabla} J_2({\ggrec \sigma}) &=& \overline{\tsig}, \;\\
{\boldsymbol \nabla} J_3({\ggrec \sigma}) &=& \overline{\tsig}^2 -\frac{2}{3} J_2({\ggrec \sigma}) {\rm{\bf{I}}}. \label{gradJ2J3}
\end{eqnarray}
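The gradient identities for $J_2$ and $J_3$ can be verified through directional derivatives, using $df({\ggrec \sigma}){\ggrec \sigma^\star} = ({\boldsymbol \nabla} f({\ggrec \sigma})) : {\ggrec \sigma^\star}$ for symmetric directions. The following finite-difference sketch assumes NumPy; the matrices are arbitrary:

```python
import numpy as np

def J2(s):
    dev = s - np.trace(s) / 3.0 * np.eye(3)
    return 0.5 * np.sum(dev * dev)

def J3(s):
    dev = s - np.trace(s) / 3.0 * np.eye(3)
    return np.linalg.det(dev)

s = np.array([[2.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.5]])
tau = np.array([[0.3, -0.1, 0.4], [-0.1, 0.7, 0.2], [0.4, 0.2, -0.5]])
dev = s - np.trace(s) / 3.0 * np.eye(3)

gradJ2 = dev                                           # claimed gradient of J2
gradJ3 = dev @ dev - (2.0 / 3.0) * J2(s) * np.eye(3)   # claimed gradient of J3

h = 1e-6
for f, g in ((J2, gradJ2), (J3, gradJ3)):
    dd = (f(s + h * tau) - f(s - h * tau)) / (2 * h)   # directional derivative
    assert abs(dd - np.sum(g * tau)) < 1e-6            # matches grad f : tau
```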
Now, given a nonempty convex set $K \subset {\mathbb M}^{3\times 3}_{\rm{sym}}$ and ${\ggrec \sigma}\in K$, the tangent cone $\TG{K}{{\ggrec \sigma}}$ to $K$ at ${\ggrec \sigma}$ is defined by
\begin{equation}
\TG{K}{{\ggrec \sigma}} = \overline{\{\alpha ({\ggrec \eta}-{\ggrec \sigma})\;|\; {\ggrec \eta} \in K \mbox{ and } \alpha > 0\}},\label{tc1}
\end{equation}
where the overline denotes the topological closure of the underlying set. This is a closed convex cone of
${\mathbb M}^{3\times 3}_{\rm{sym}}$.
The normal cone $\NC{K}{{\ggrec \sigma}}$ of $K$ at ${\ggrec \sigma}$ is defined as the dual cone of $\TG{K}{{\ggrec \sigma}}$, that is
\begin{equation}
\NC{K}{{\ggrec \sigma}} = \{{\ggrec \eta} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; {\ggrec \tau} : {\ggrec \eta} \leq 0 \mbox{ for all } {\ggrec \tau} \in \TG{K}{{\ggrec \sigma}} \}. \label{nc1}
\end{equation}
We recall that $\TG{K}{{\ggrec \sigma}} = {\mathbb M}^{3\times 3}_{\rm{sym}}$ and $\NC{K}{{\ggrec \sigma}} = \{\vecc{0}\}$ when $ {\ggrec \sigma}$ belongs to the interior of $K$.
We have the following two elementary properties:
\begin{itemize}
\item For all $\alpha > 0$, $\TG{\alpha K}{\alpha {\ggrec \sigma}} = \TG{K}{{\ggrec \sigma}}$, $\NC{\alpha K}{\alpha {\ggrec \sigma}} = \NC{K}{{\ggrec \sigma}}$.
\item For all ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$, $\TG{ K+{\ggrec \tau}}{{\ggrec \sigma}+{\ggrec \tau}} = \TG{K}{{\ggrec \sigma}}$, $\NC{ K+{\ggrec \tau}}{{\ggrec \sigma}+{\ggrec \tau}} = \NC{K}{{\ggrec \sigma}}$.
\end{itemize}
Finally, we denote by $\PRJ{K}$ the
orthogonal projection on the convex $K$ as an operator on ${\mathbb M}^{3\times 3}_{\rm{sym}}$. For ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$, we have
\begin{equation}
\PRJ{K} {\ggrec \tau} = {\rm argmin}\,_{{\ggrec \eta} \in K }\|{\ggrec \tau} - {\ggrec \eta}\|. \label{prc1}
\end{equation}
\section{Reformulating elasto-plasticity equations}\label{sec_main_res}
Consider an elasto-plastic material occupying a region $\Omega \subset {\mathbb R}^3$.
When volume and/or surface forces are applied to the material body, the deformation and the state of this material
can be characterized by the evolution of the displacement vector field $(x, t) \in \Omega \times I \mapsto u(x, t)$
and the Cauchy (internal) stress tensor $(x, t) \in \Omega \times I \mapsto {\ggrec \sigma}(x, t)$, respectively. Here, $I$ is the time interval during which the deformation takes place. For convenience, we assume that $I$ is semi-open, of the form $I=[0,T)$.
We denote by $v = \partial u/{\partial t}$ the velocity vector field, and $\drv{{\ggrec \sigma}}$ the time derivative of ${\ggrec \sigma}$ while ${\ggrec \varepsilon}$ and $\drv{{\ggrec \varepsilon}}$ are respectively the strain and the strain rate tensors:
\begin{equation}\label{strainsrate_def}
{\varepsilon}_{ij} = \frac{1}{2} \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j }{\partial x_i} \right), \; {\drv{{\varepsilon}}}_{ij} = \frac{1}{2} \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j }{\partial x_i} \right), \; 1 \leq i, j \leq 3.
\end{equation}
We are interested in the evolution in time of these quantities, locally, at all points
$x \in \Omega$. The focus is on the constitutive law which links
the strain, the strain rate, the stress, and the stress rate tensors. \\
As stated in the introduction section, for many materials, it is meaningful to assume that the internal stress is restricted to a closed convex set ${C}$ of ${\mathbb M}^{3\times 3}_{\rm{sym}}$ and that
${C}$ is insensitive to hydrostatic pressure, that is
\begin{equation}\label{condition_indiff_hydro}
\forall {\ggrec \sigma} \in {C}, \forall p \in {\mathbb R}, \; {\ggrec \sigma} + p {\rm{\bf{I}}} \in {C}.
\end{equation}
Geometrically, this means that ${C}$ is an infinite cylinder whose axis is directed along the identity tensor. In particular,
${C}$ is unbounded. The extension to materials not complying with \eqref{condition_indiff_hydro}
will be the subject of future work.
This indifference assumption has a direct implication on the normal and the tangent cones. We have
\begin{equation}\label{indif_conesNT}
{\R \id} \subset \TG{ {C}}{{\ggrec \sigma}}, \; \NC{ {C}}{{\ggrec \sigma}} \subset ({\R \id})^\perp
\end{equation}
and, in particular,
\begin{equation}\label{indif_conesNT2}
{\rm tr}({\ggrec \tau}) = 0 \mbox{ for all } {\ggrec \tau} \in \NC{ {C}}{{\ggrec \sigma}}.
\end{equation}
Here and subsequently, we drop the $(x, t)$ arguments for simplicity of notation. \\
Inside ${C}$, the internal stress ${\ggrec \sigma}$ is related to the elastic strain through Hooke's law \eqref{hookzero} with
\begin{equation}\label{law_hooke4}
{\mathbb H}({\ggrec \tau}) = 2 \mu {\ggrec \tau}+ \lambda {\rm tr}({\ggrec \tau}) {\rm{\bf{I}}}, \; \; \mbox{ for all } {\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}},
\end{equation}
where $\lambda > 0$ and $\mu >0$ are the Lam\'e coefficients, which are assumed to be constant. We can also write
$\teps^{\rm e} = {\mathbb H}^{-1}({\ggrec \sigma})$ with
\begin{equation}\label{law_hooke1}
{\mathbb H}^{-1}({\ggrec \sigma}) = - \frac{\lambda}{2 \mu (3 \lambda + 2 \mu)} {\rm tr}({\ggrec \sigma}) {\rm{\bf{I}}} + \frac{1}{2 \mu} {\ggrec \sigma},
\end{equation}
or
\begin{equation}\label{law_hooke2}
{\mathbb H}^{-1}({\ggrec \sigma}) = - \frac{\nu}{E} {\rm tr}({\ggrec \sigma}) {\rm{\bf{I}}} + \frac{1+\nu}{E} {\ggrec \sigma},
\end{equation}
where $E$ and $\nu$ are respectively the Young's modulus and Poisson's ratio, given by
\begin{equation}\label{Hooke_consts}
E = \frac{ \mu(3 \lambda + 2 \mu)}{ \lambda +\mu}, \; \nu = \frac{ \lambda}{2( \lambda +\mu)}.
\end{equation}
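The two forms of the inverse of Hooke's law can be cross-checked numerically; in the sketch below (assuming NumPy) the Lam\'e coefficients are arbitrary positive assumed values:

```python
import numpy as np

lam, mu = 3.0, 2.0   # arbitrary positive Lame coefficients (assumed values)

def H(t):
    """Hooke's law: H(tau) = 2*mu*tau + lam*tr(tau)*I."""
    return 2 * mu * t + lam * np.trace(t) * np.eye(3)

def Hinv(s):
    """Inverse of Hooke's law in terms of the Lame coefficients."""
    return -lam / (2 * mu * (3 * lam + 2 * mu)) * np.trace(s) * np.eye(3) + s / (2 * mu)

E = mu * (3 * lam + 2 * mu) / (lam + mu)   # Young's modulus
nu = lam / (2 * (lam + mu))                # Poisson's ratio

t = np.array([[1.0, 0.2, 0.0], [0.2, -0.5, 0.3], [0.0, 0.3, 2.0]])
s = H(t)
assert np.allclose(Hinv(s), t)
# Same inverse written with E and nu:
assert np.allclose(Hinv(s), -nu / E * np.trace(s) * np.eye(3) + (1 + nu) / E * s)
```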
\begin{theorem}\label{moreau_elastoplast}
The following two statements are equivalent:
\begin{enumerate}
\item $({\ggrec \varepsilon}, \teps^{\rm e}, \teps^{\rm p}, {\ggrec \sigma})$ satisfy \eqref{sumdecompo0}, \eqref{yield_principle}, \eqref {hookzero}, \eqref{intro_normal} and \eqref{consistancy0}.
\item $({\ggrec \varepsilon}, \teps^{\rm e}, \teps^{\rm p}, {\ggrec \sigma})$ satisfy \eqref{yield_principle} at $t=0$
and the following identities hold
\begin{eqnarray}
\drv{\teps}^{\rm e} &=& \PRJ{\TG{{C}}{{\ggrec \sigma}}} \drv{{\ggrec \varepsilon}}, \; \label{str_tangent0} \\
\drv{\teps}^{\rm p} &=& \displaystyle{ \PRJ{\NC{{C}}{{\ggrec \sigma}}} \drv{{\ggrec \varepsilon}}},\label{str_normal} \\
\drv{{\ggrec \sigma}} &=& \PRJ{\TG{{C}}{{\ggrec \sigma}}}{\mathbb H} (\drv{{\ggrec \varepsilon}}). \; \label{const_tangent0}
\end{eqnarray}
\end{enumerate}
When these statements are true, we also have
\begin{eqnarray}
\drv{{\ggrec \sigma}} &=& {\mathbb H} (\drv{{\ggrec \varepsilon}}) - 2\mu \PRJ{\NC{{C}}{{\ggrec \sigma}}} \drv{{\ggrec \varepsilon}}, \label{projHookEpsP1} \\
\PRJ{\NC{{C}}{{\ggrec \sigma}}}{\mathbb H} (\drv{{\ggrec \varepsilon}}) &=& 2\mu \PRJ{\NC{{C}}{{\ggrec \sigma}}} \drv{{\ggrec \varepsilon}}, \label{projHookEpsP2} \\
\drv{\teps}^{\rm e}:\drv{\teps}^{\rm p} &=& 0, \label{2.14}\\
\| \drv{{\ggrec \varepsilon}}\|^2 &=& \| \drv{\teps}^{\rm e} \|^2+ \| \drv{\teps}^{\rm p} \|^2. \label{2.15}
\end{eqnarray}
\end{theorem}
Before giving the proof of this theorem, let us make some comments. An important point that emerges from this theorem is the following constitutive law
relating the stress rate and the strain rate:
\begin{equation}\label{const_main_law}
\drv{{\ggrec \sigma}} = \PRJ{\TG{{C}}{{\ggrec \sigma}}}{\mathbb H} (\drv{{\ggrec \varepsilon}}).
\end{equation}
It can be observed that this law unifies the elastic and plastic regimes.
Indeed, in the elastic regime, ${\ggrec \sigma}$ is inside ${C}$, so $ \PRJ{\TG{{C}}{{\ggrec \sigma}}} = {\mathbb{Id}}$
and we recover the equations of linear elasticity. If, on the other hand, ${\ggrec \sigma}$ is on
the boundary of the yield domain, then ${\TG{{C}}{{\ggrec \sigma}}} \ne {\mathbb M}^{3\times 3}_{\rm{sym}}$. In this case,
it becomes important to give an explicit expression for the projector
$\PRJ{\TG{{C}}{{\ggrec \sigma}}}$. This will be done in Section \ref{consti_laws}
for a yield domain defined by one or more yield functions.
The examples of the Von Mises and Tresca criteria will be treated in detail in Section \ref{VM-T-cr}. \\
$\;$\\
Let us briefly describe the impact of the characterization in \eqref{const_main_law} on the
equations of motion governing the material deformation in
unsteady and quasi-static regimes. The evolution equations in \eqref{evomotion_eq} and \eqref{const_main_law}
can be gathered into the following system.
\begin{equation}\label{new_evosyst}
\frac{\partial}{\partial t}
\left(
\begin{array}{c}
\rho v \\
{\ggrec \sigma}
\end{array}
\right) +
\left(
\begin{array}{c}
- \mathrm{div}\, {\ggrec \sigma} \\
\PRJ{\TG{{C}}{{\ggrec \sigma}}}{\mathbb H} (\drv{{\ggrec \varepsilon}})
\end{array}
\right) =
\left(
\begin{array}{c}
h \\
0
\end{array}
\right).
\end{equation}
According to Lemma \ref{moreau_decompo} below, we have
$\PRJ{\TG{{C}}{{\ggrec \sigma}}} ({\ggrec \tau})+ \PRJ{\NC{{C}}{{\ggrec \sigma}}} ({\ggrec \tau}) = {\ggrec \tau}$
for all ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$.
Combining this with \eqref{projHookEpsP2} gives
$$
\PRJ{\TG{{C}}{{\ggrec \sigma}}}{\mathbb H} (\drv{{\ggrec \varepsilon}}) = {\mathbb H} (\drv{{\ggrec \varepsilon}}) - \PRJ{\NC{{C}}{{\ggrec \sigma}}} {\mathbb H} (\drv{{\ggrec \varepsilon}}) = {\mathbb H} (\drv{{\ggrec \varepsilon}}) - 2\mu \PRJ{\NC{{C}}{{\ggrec \sigma}}} (\drv{{\ggrec \varepsilon}}) .
$$
Thus,
\begin{equation}
\frac{\partial}{\partial t}
\left(
\begin{array}{c}
\rho v \\
{\ggrec \sigma}
\end{array}
\right) -
\left(
\begin{array}{c}
\mathrm{div}\, {\ggrec \sigma} \\
{\boldsymbol {\mathscr H}} ({\ggrec \sigma}, \drv{{\ggrec \varepsilon}})
\end{array}
\right) =
\left(
\begin{array}{c}
h \\
0
\end{array}
\right),
\end{equation}
with
\begin{equation}
{\boldsymbol {\mathscr H}}({\ggrec \sigma}, {\ggrec \tau}) = {\mathbb H} ({\ggrec \tau}) - 2\mu \PRJ{\NC{{C}}{{\ggrec \sigma}}}({\ggrec \tau}).
\end{equation}
In other words, from a mathematical point of view,
the elasto-plastic incremental model of the material is obtained from
the incremental elasticity model by replacing the linear elasticity operator
${\mathbb H}$ by the non-linear operator ${\boldsymbol {\mathscr H}}({\ggrec \sigma}, .)$. Moreover,
unlike ${\mathbb H}$, this non-linear operator ${\boldsymbol {\mathscr H}}({\ggrec \sigma}, .)$ obviously depends
on the current state of stress ${\ggrec \sigma}$.
Time differentiation of the first component of the system above leads to an evolution equation involving only the velocity field:
\begin{equation} \label{Ewaves}
\rho \frac{\partial^2 v }{\partial t^2} - \mathrm{div}\, {\boldsymbol {\mathscr H}}({\ggrec \sigma}, \drv{{\ggrec \varepsilon}}) = \frac{\partial h}{\partial t}.
\end{equation}
The equations in (\ref{Ewaves}) extend the
usual elastic wave equations, which are valid only within the elastic regime. We note, however, that ${\boldsymbol {\mathscr H}}({\ggrec \sigma}, .)$ depends directly on ${\ggrec \sigma}$, so that in the general case the system must be complemented with \eqref{const_main_law}.\\
For practical applications, it only remains to express the operator $ {\boldsymbol {\mathscr H}}$ in terms of its arguments. As will be elucidated below, in the case of the Von Mises criterion \eqref{VonMisesDom}, for example, formula \eqref{onefunc_strSIGCM} gives
\begin{equation} \label{onefunc_strSIGCM2}
{\boldsymbol {\mathscr H}}({\ggrec \sigma}, \drv{{\ggrec \varepsilon}}) = \displaystyle{ \lambda {\rm tr}(\drv{{\ggrec \varepsilon}} ) {\rm{\bf{I}}}+ 2\mu \drv{{\ggrec \varepsilon}}
- \frac{\mu}{k^2} \max(0, \drv{{\ggrec \varepsilon}}: \overline{\tsig}) \chi(\|\overline{\tsig}\|^2 - 2 k^2) \overline{\tsig}, }
\end{equation}
where $\chi \; :\; {\mathbb R} \to {\mathbb R}$ is the Heaviside type function defined by
$\chi(t) = 1$ if $ t \geq 0$ and $\chi(t) = 0$ otherwise.
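A direct implementation of \eqref{onefunc_strSIGCM2} is straightforward. The sketch below (assuming NumPy; the material constants $\lambda$, $\mu$, $k$ are arbitrary assumed values) checks that the operator reduces to Hooke's law inside the yield surface, and that the stress rate vanishes for purely plastic loading on the surface:

```python
import numpy as np

lam, mu, k = 3.0, 2.0, 1.0   # assumed material constants and yield parameter

def script_H(sig, deps):
    """Von Mises elasto-plastic operator (sketch of the formula above)."""
    dev = sig - np.trace(sig) / 3.0 * np.eye(3)
    elastic = lam * np.trace(deps) * np.eye(3) + 2 * mu * deps
    chi = 1.0 if np.sum(dev * dev) - 2 * k ** 2 >= 0.0 else 0.0
    plastic = mu / k ** 2 * max(0.0, np.sum(deps * dev)) * chi * dev
    return elastic - plastic

# Inside the yield surface (here dev = 0) the law reduces to Hooke's law.
deps = np.array([[1.0, 0.0, 0.0], [0.0, -0.3, 0.0], [0.0, 0.0, 0.2]])
sig_in = 0.1 * np.eye(3)
assert np.allclose(script_H(sig_in, deps),
                   lam * np.trace(deps) * np.eye(3) + 2 * mu * deps)

# On the yield surface (||dev||^2 = 2k^2) with deps along dev, the stress
# rate vanishes: purely plastic flow.
sig_on = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, 0.0]])
assert np.allclose(script_H(sig_on, sig_on), 0.0)
```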
We now prove the theorem.
\begin{ourproof}{of Theorem \ref{moreau_elastoplast}}
We need the following two lemmas. The first one is due to Moreau \cite{jjmoreau}. For a straightforward proof, see for example Theorem 6.29 and Corollary 6.30 of \cite{combettes_livre}.
\begin{lemma}\label{moreau_decompo}
Let $({\mathscr H}, \<,\>)$ be a real Hilbert space and $K$ a closed convex cone in ${\mathscr H}$. Let $K^*$ be its polar cone defined by
$$
K^* = \{v \in {\mathscr H} \;|\; \forall \vecc{w} \in K, \; \< v, \vecc{w}\> \leq 0\}.
$$
Then, for any $z \in {\mathscr H}$ we have
\begin{enumerate}
\item $ z = \PRJ{K} z + \PRJ{K^*} z$,
\item $ \< \PRJ{K} z, \PRJ{K^*} z\> = 0$,
\item If $z = x + y$ with $x \in K$ and $y \in K^*$ such that $x \ne \PRJ{K} z$ and $y \ne \PRJ{K^*} z$, then $\<x, y\> <0$.
\end{enumerate}
\end{lemma}
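As a minimal illustration of Lemma \ref{moreau_decompo} (a sketch assuming NumPy), take $K$ to be the nonnegative orthant of ${\mathbb R}^5$, whose polar cone is the nonpositive orthant; both projections are then simple clippings:

```python
import numpy as np

z = np.array([1.5, -0.3, 0.0, -2.0, 0.7])
pK = np.maximum(z, 0.0)    # projection onto K (nonnegative orthant)
pKs = np.minimum(z, 0.0)   # projection onto the polar cone K*

assert np.allclose(pK + pKs, z)        # z = P_K z + P_{K*} z
assert abs(np.dot(pK, pKs)) < 1e-15    # <P_K z, P_{K*} z> = 0
```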
\begin{lemma}\label{decompo_Hook_lem}
For all ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ and ${\ggrec \eta} \in {C}$, one has
\begin{eqnarray}
\PRJ{\TG{{C}}{{\ggrec \eta}}}{{\mathbb H} ({\ggrec \tau})} &= & {\mathbb H} (\PRJ{\TG{{C}}{{\ggrec \eta}}} {\ggrec \tau}), \label{decompoTHook1} \\
\PRJ{\NC{{C}}{{\ggrec \eta}}}{{\mathbb H} ({\ggrec \tau})} &= & {\mathbb H} (\PRJ{\NC{{C}}{{\ggrec \eta}}} {\ggrec \tau}), \label{decompoNHook1}\\
&= & 2\mu \PRJ{\NC{{C}}{{\ggrec \eta}}} {\ggrec \tau}. \label{decompoNHook2}
\end{eqnarray}
\end{lemma}
\begin{ourproof}{}
Let ${{\ggrec \tau}}_T = \PRJ{\TG{{C}}{{\ggrec \eta}}}{\ggrec \tau}$ and $ \; {\ggrec \tau}_N = \PRJ{\NC{{C}}{{\ggrec \eta}}} {\ggrec \tau}$. Then, ${\ggrec \tau} = {\ggrec \tau}_T + {\ggrec \tau}_N$. In view of \eqref{indif_conesNT2} we have
\begin{equation}
{\mathbb H} ({\ggrec \tau}) = ( \lambda {\rm tr}({{\ggrec \tau}_T}) {\rm{\bf{I}}} + 2 \mu {{\ggrec \tau}_T} ) + 2 \mu {{\ggrec \tau}_N}.
\end{equation}
We observe that $ \lambda {\rm tr}({{\ggrec \tau}_T}) {\rm{\bf{I}}} + 2 \mu {{\ggrec \tau}_T} \in \TG{{C}}{{\ggrec \eta}}$ (since $\TG{{C}}{{\ggrec \eta}}$ is a convex cone containing ${\rm{\bf{I}}}$),
that $2 \mu {{\ggrec \tau}_N} \in \NC{{C}}{{\ggrec \eta}}$ and that $( \lambda {\rm tr}({{\ggrec \tau}_T}) {\rm{\bf{I}}} + 2 \mu {{\ggrec \tau}_T} ):(2 \mu {{\ggrec \tau}_N}) = 0$. In view
of Lemma \ref{moreau_decompo}, we deduce that $ \lambda {\rm tr}({{\ggrec \tau}_T}) {\rm{\bf{I}}} + 2 \mu {{\ggrec \tau}_T} = \PRJ{\TG{{C}}{{\ggrec \eta}}} {\mathbb H}({\ggrec \tau})$ and
$2 \mu {{\ggrec \tau}_N} = \PRJ{\NC{{C}}{{\ggrec \eta}}}{{\mathbb H} ({\ggrec \tau})}$. This ends the proof of Lemma \ref{decompo_Hook_lem}.
\end{ourproof}
Back to the proof of Theorem \ref{moreau_elastoplast}. We proceed in three separate steps.
\begin{itemize}
\item[] First, we show that (1) implies (2). Assume that \eqref{sumdecompo0}, \eqref{yield_principle}, \eqref {hookzero}, \eqref{intro_normal} and \eqref{consistancy0} are satisfied. On the one hand, since ${\ggrec \sigma}(t) \in {C}$ for all
$ t \in I$, we have, for every $t\in I$ and every $h > 0$ small enough that $t+h\in I$, $({\ggrec \sigma}(t+h) -{\ggrec \sigma}(t))/h \in \TG{{C}}{{\ggrec \sigma}(t)}$. Taking the limit
as $h \to 0^+$ implies that $ \drv{{\ggrec \sigma}} (t)\in \TG{{C}}{{\ggrec \sigma}(t)}$.
On the other hand, \eqref{hookzero} gives
$$
\drv{\teps}^{\rm e}(t) = {\mathbb H}^{-1} (\drv{{\ggrec \sigma}}(t)).
$$
Thus, using Hooke's law in (\ref{law_hooke1}) and the fact that the tangent cone contains all real multiples of the identity (see \eqref{indif_conesNT}), we deduce that $ \drv{\teps}^{\rm e} \in \TG{{C}}{{\ggrec \sigma}}$.
Moreover, we have $ \drv{\teps}^{\rm p} \in \NC{{C}}{{\ggrec \sigma}}$, thanks to \eqref{flowrule0}. Furthermore,
$$
\begin{array}{rcl}
\drv{\teps}^{\rm e} : \drv{\teps}^{\rm p} &=& \displaystyle{ {\mathbb H}^{-1} (\drv{{\ggrec \sigma}}): \drv{\teps}^{\rm p}} \\
&=&\displaystyle{ \frac{1+\nu}{E} \drv{{\ggrec \sigma}}: \drv{\teps}^{\rm p} - \frac{\nu}{E} {\rm tr}(\drv{{\ggrec \sigma}}) {\rm{\bf{I}}}: \drv{\teps}^{\rm p}.}
\end{array}
$$
Using the consistency condition in \eqref{consistancy0} and the fact that $\drv{\teps}^{\rm p}$ is traceless according to \eqref{indif_conesNT2} yields
$$
\drv{\teps}^{\rm e} : \drv{\teps}^{\rm p} = 0.
$$
It follows that $\drv{\teps}^{\rm e} + \drv{\teps}^{\rm p}$ is the Moreau
decomposition of $\drv{{\ggrec \varepsilon}}$ described in Lemma \ref{moreau_decompo}
with $K = \TG{{C}}{{\ggrec \sigma}}$. This entails \eqref{str_tangent0} and
\eqref{str_normal}. \\
Combining \eqref{str_tangent0}, \eqref{hookzero} and \eqref{decompoTHook1} yields
$$
\drv{{\ggrec \sigma}} = {\mathbb H}(\drv{\teps}^{\rm e})= {\mathbb H}( \PRJ{\TG{{C}}{{\ggrec \sigma}}} \drv{{\ggrec \varepsilon}}) = \PRJ{\TG{{C}}{{\ggrec \sigma}}} {\mathbb H}(\drv{{\ggrec \varepsilon}}).
$$
This is exactly the identity in \eqref{const_tangent0}.
\item[] Second, we show that (2) implies (1). Conversely, assume that \eqref{str_tangent0},
\eqref{str_normal} and \eqref{const_tangent0} are true and that \eqref{yield_principle} is satisfied at $t=0$. We need to show that \eqref{sumdecompo0}, \eqref{yield_principle}, \eqref{intro_normal} and \eqref{consistancy0} are true.
From
\eqref{const_tangent0} we have that $ \drv{{\ggrec \sigma}} \in {\TG{{C}}{{\ggrec \sigma}}}$.
Since ${\ggrec \sigma}(0)\in {{C}}$, we deduce that ${{\ggrec \sigma}} (t)\in {{C}}$ for all $t\in I$, meaning that \eqref{yield_principle} is satisfied.
Identity \eqref{sumdecompo0} follows from \eqref{str_tangent0}, \eqref{str_normal} and Lemma \ref{moreau_decompo}.
Combining \eqref{str_tangent0}, \eqref{const_tangent0} and \eqref{decompoTHook1} yields \eqref{hookzero}.
For \eqref{consistancy0} we observe that
$$
\drv{\teps}^{\rm p} : \drv{{\ggrec \sigma}} = \drv{\teps}^{\rm p} : {\mathbb H}(\drv{\teps}^{\rm e}) = 2 \mu \drv{\teps}^{\rm p} : \drv{\teps}^{\rm e}+ \lambda {\rm tr}(\drv{\teps}^{\rm e} ) \drv{\teps}^{\rm p} : {\rm{\bf{I}}} = 0,
$$
thanks to Lemma \ref{moreau_decompo} and property \eqref{indif_conesNT}. \\
\item[] Finally, identities \eqref{projHookEpsP1} and \eqref{projHookEpsP2} follow from \eqref{str_normal} and \eqref{decompoNHook2} while \eqref{2.14} and \eqref{2.15} result from \eqref{str_tangent0}, \eqref{str_normal} and Lemma \ref{moreau_decompo}, and this concludes the proof of the theorem.
\end{itemize}
\end{ourproof}
\begin{remark}
The hydrostatic pressure invariance property in \eqref{condition_indiff_hydro} implies that
for all ${\ggrec \sigma} \in {C}$, ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$,
$\alpha > 0$ and $\beta \in {\mathbb R}$, we have
\begin{equation}
\; \PRJ{\NC{{C}}{{\ggrec \sigma}}}(\alpha {\ggrec \tau} + \beta {\rm{\bf{I}}}) = \alpha \PRJ{\NC{{C}}{{\ggrec \sigma}}}{{\ggrec \tau}}.
\end{equation}
Indeed, for ${\ggrec \eta} \in \NC{{C}}{{\ggrec \sigma}}$
$$
\|(\alpha {\ggrec \tau} +\beta {\rm{\bf{I}}}) - {\ggrec \eta}\|^2 = \alpha^2 \| {\ggrec \tau} - \frac{1}{\alpha} {\ggrec \eta}\|^2 +\beta^2 \|{\rm{\bf{I}}}\|^2 + 2 \alpha \beta {\rm tr}({\ggrec \tau}),
$$
(thanks to \eqref{indif_conesNT2}). Therefore, minimizing $\|(\alpha {\ggrec \tau} +\beta {\rm{\bf{I}}}) - {\ggrec \eta}\|$ with respect to $ {\ggrec \eta}$
is equivalent to minimizing $ \| {\ggrec \tau} - \alpha^{-1} {\ggrec \eta}\|$.
\end{remark}
\section{Explicit constitutive laws in the case of one or two yield functions}\label{consti_laws}
\subsection{The main practical results}
In this section we write more explicitly the constitutive laws \eqref{str_tangent0}, \eqref{str_normal} and \eqref{const_tangent0} when the yield domain is defined by functional constraints.
This is the case in most criteria used in practice where the yield domain is often defined by a single function.
Nevertheless, Theorems \ref{explicit_const_law_onefunc} and \ref{explicit_const_law_twofunc} stated below
deal with domains defined by several functions whose boundary points are characterized by exactly one function or exactly two functions, respectively,
thus covering most practical cases. For streamlining, the proofs of these theorems are deferred to
Sections \ref{proj_onefunc} and \ref{proj_twofunc}, where the explicit calculations are presented. As one would expect, it is essentially a matter of explicitly computing the
projection onto the tangent and normal cones invoked in \eqref{str_tangent0}, \eqref{str_normal} and \eqref{const_tangent0}.
We also note that these results are applied in Section \ref{VM-T-cr} to the cases of the Von Mises and Tresca criteria for illustration. Besides, the case of the Tresca criterion has some particularities which do not fit
completely into the framework of Theorems \ref{explicit_const_law_onefunc} and \ref{explicit_const_law_twofunc}.
$\;$\\
As done in \cite{hanreddy},
\cite{koiter1953}, and \cite{koiter1960}, for example, we consider in what follows a yield domain of the form:
\begin{equation}\label{domain_rdc}
{C} = \{ {\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; f_i({\ggrec \sigma}) \leq k_i \mbox{ for } i = 1, \cdots, m\}, \; \;
\end{equation}
where $f_1, \cdots, f_m$ are $m$ convex differentiable functions of class ${\mathscr C}^1$ and $k_1, \cdots, k_m$ are real constants. We assume that
there exists at least one element ${\ggrec \sigma^\star} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ such that
\begin{equation}\label{slater_condition}
f_i({\ggrec \sigma^\star}) <k_i \mbox{ for all } i = 1, \cdots, m.
\end{equation}
From a mathematical point of view, this condition amounts to saying that the interior of ${C}$ is nonempty; it is usually called Slater's condition (see, e.g., \cite{borweinlewis} or
\cite{LemareLivre}). \\
Now, given ${\ggrec \sigma} \in {C} $, define $\Sat{{\ggrec \sigma}}$ as the set of indices of the saturated constraints
at ${\ggrec \sigma}$ (see, e.g., \cite{borweinlewis} or
\cite{LemareLivre}):
$$
\Sat{{\ggrec \sigma}} = \{ i \;|\; 1 \leq i \leq m \mbox{ and } f_i({\ggrec \sigma}) = k_i\}.
$$
The set $\Sat{{\ggrec \sigma}}$ is empty when ${\ggrec \sigma}$ is strictly inside the yield domain ${C}$,
whereas $\Sat{{\ggrec \sigma}} \ne \emptyset $ for all ${\ggrec \sigma}$ on the boundary of ${C}$. We may observe that
$$
\forall i \in \Sat{{\ggrec \sigma}}, \; {\boldsymbol \nabla} f_i({\ggrec \sigma}) \ne 0.
$$
This is a straightforward consequence of the assumption in \eqref{slater_condition}, owing to the convexity of the functions $f_i$.
\\
Also, in view of Slater's condition \eqref{slater_condition}, we know that for all ${\ggrec \sigma} \in {C}$ such that $\Sat{{\ggrec \sigma}} \ne \emptyset$
$$
\begin{array}{rcl}
\TG{{C}}{{\ggrec \sigma}} &=& \{ {\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; \forall i \in \Sat{{\ggrec \sigma}}, {\ggrec \tau} : {\boldsymbol \nabla} f_i({\ggrec \sigma}) \leq 0 \}, \\
\NC{{C}}{{\ggrec \sigma}} &=& \displaystyle{ \{ \sum_{i \in \Sat{{\ggrec \sigma}} } \alpha_i {\boldsymbol \nabla} f_i({\ggrec \sigma}) \;|\; \alpha_i \geq 0 \mbox{ for all } i \in \Sat{{\ggrec \sigma}} \}.}
\end{array}
$$
When $\Sat{{\ggrec \sigma}} = \emptyset$, $\TG{{C}}{{\ggrec \sigma}} = {\mathbb M}^{3\times 3}_{\rm{sym}}$ and $\NC{{C}}{{\ggrec \sigma}} = \{\vecc{0}\}$.
We recall that we are working under the assumption that the yield functions are insensitive to the hydrostatic pressure, that is, the convex set ${C}$ in \eqref{domain_rdc} satisfies the condition in (\ref{condition_indiff_hydro}).
A direct consequence of this assumption is that for all ${\ggrec \sigma} \in {C}$, we have
\begin{equation}\label{indiff_grads}
\forall i \in \Sat{{\ggrec \sigma}}, \; {\boldsymbol \nabla} f_i({\ggrec \sigma}):{\rm{\bf{I}}} = 0.
\end{equation}
Assumption \eqref{condition_indiff_hydro} is in particular valid when the functions $f_i, i=1,\cdots,m$, depend only on the invariants
$J_2$ and $J_3$, that is
$$
f_i({\ggrec \sigma}) = F_i(J_2({\ggrec \sigma}), J_3({\ggrec \sigma})) \mbox{ for } 1 \leq i \leq m,
$$
where $F_1, \cdots, F_m$ are given functions of two variables. We have the following two theorems.
\begin{theorem}\label{explicit_const_law_onefunc}
Let ${C}$ be a yield domain defined by $m$ smooth functions as in \eqref{domain_rdc}. Assume that
${C}$ satisfies the hydrostatic pressure stability condition \eqref{condition_indiff_hydro}. Let ${\ggrec \sigma}\in {C}$ such that $f_1({\ggrec \sigma}) = k_1$ and $f_i({\ggrec \sigma}) < k_i$ for all $i \geq 2$. Then, the constitutive laws in \eqref{str_tangent0}, \eqref{str_normal}, and \eqref{const_tangent0} can be rewritten as
\begin{eqnarray}
\drv{ \teps^{\rm p}} &=& \displaystyle{ \frac{\max(0,\drv{ {\ggrec \varepsilon}} : {\boldsymbol \nabla} f_1({\ggrec \sigma}))}{\|{\boldsymbol \nabla} f_1({\ggrec \sigma})\|^2} {\boldsymbol \nabla} f_1({\ggrec \sigma}), }\label{onefunc_strP}\\
\drv{ \teps^{\rm e}} &=& \displaystyle{ \drv{ {\ggrec \varepsilon}} - \drv{ \teps^{\rm p}}, } \label{onefunc_strE}\\
\drv{{\ggrec \sigma}} &=& \displaystyle{ \lambda {\rm tr}(\drv{{\ggrec \varepsilon}} ) {\rm{\bf{I}}}+
2 \mu \left( \drv{{\ggrec \varepsilon}} - \frac{\max(0,
\drv{{\ggrec \varepsilon}}: {\boldsymbol \nabla} f_1({\ggrec \sigma}) )}{\|{\boldsymbol \nabla} f_1({\ggrec \sigma})\|^2} {\boldsymbol \nabla} f_1({\ggrec \sigma})\right)}. \label{constitu_law1}
\end{eqnarray}
\end{theorem}
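When a single constraint is active, the normal cone is the ray spanned by ${\boldsymbol \nabla} f_1({\ggrec \sigma})$, and \eqref{onefunc_strP}--\eqref{onefunc_strE} are explicit projections. A quick numerical check of the orthogonal split (assuming NumPy; the gradient below is a hypothetical traceless tensor, not derived from a specific yield function):

```python
import numpy as np

# Hypothetical (traceless) gradient of the single active yield function.
g = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
deps = np.array([[0.5, 0.8, 0.0], [0.8, -0.2, 0.1], [0.0, 0.1, -0.3]])

deps_p = max(0.0, np.sum(deps * g)) / np.sum(g * g) * g   # plastic strain rate
deps_e = deps - deps_p                                     # elastic strain rate

assert abs(np.sum(deps_e * deps_p)) < 1e-12   # orthogonal split, as in the theorem
assert np.sum(deps_e * g) <= 1e-12            # deps_e lies in the tangent half-space
```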
\begin{theorem}\label{explicit_const_law_twofunc}
Let ${C}$ be a yield domain defined by $m$ smooth functions \eqref{domain_rdc}. Assume that
${C}$ satisfies the hydrostatic pressure stability condition \eqref{condition_indiff_hydro}. Let ${\ggrec \sigma}\in {C}$ such that $f_i({\ggrec \sigma}) = k_i$ for $i=1,2$, and that $f_i({\ggrec \sigma}) < k_i$ for all $i \geq 3$. Assume also that $ {\boldsymbol \nabla} f_1({\ggrec \sigma})$ and $ {\boldsymbol \nabla} f_2({\ggrec \sigma})$ are not collinear.
Let
$$\begin{array}{rclcrcl}
\alpha_i &=& \displaystyle{ \frac{ \drv{{\ggrec \varepsilon}} : {\boldsymbol \nabla} f_i({\ggrec \sigma}) }{\|{\boldsymbol \nabla} f_i({\ggrec \sigma})\|} }\;\;, i = 1, 2,
&& \eta_1 &=& \displaystyle{ \frac{ \alpha_1 - \delta \alpha_2 }{1-\delta^2} }, \\
\eta_2 &= & \displaystyle{ \frac{ \alpha_2 - \delta \alpha_1 }{1-\delta^2} },
&\mbox{ and }& \delta &=& \displaystyle{ \frac{ {\boldsymbol \nabla} f_1({\ggrec \sigma}) : {\boldsymbol \nabla} f_2({\ggrec \sigma}) }{ \|{\boldsymbol \nabla} f_1 ({\ggrec \sigma})\| \|{\boldsymbol \nabla} f_2 ({\ggrec \sigma})\|}}.
\end{array}
$$
Then, the constitutive laws \eqref{str_tangent0}, \eqref{str_normal}, and \eqref{const_tangent0} can be rewritten as
\begin{eqnarray}
\drv{ \teps^{\rm p}} &=&
\left\{
\begin{array}{ll}
\displaystyle{ \eta_1 \frac{ {\boldsymbol \nabla} f_1({\ggrec \sigma}) }{\|{\boldsymbol \nabla} f_1({\ggrec \sigma})\|} +
\eta_2 \frac{ {\boldsymbol \nabla} f_2({\ggrec \sigma}) }{\|{\boldsymbol \nabla} f_2({\ggrec \sigma})\|} }, & \mbox{ if } \eta_i \geq 0 \mbox{ for } i=1, 2, \\
\displaystyle{ \max(\alpha_1, 0) \frac{ {\boldsymbol \nabla} f_1({\ggrec \sigma}) }{\|{\boldsymbol \nabla} f_1({\ggrec \sigma})\|} } ,& \mbox{ if } \min(\eta_1, \eta_2) < 0 \mbox{ and } \alpha_1 \geq \alpha_2, \\
\displaystyle{ \max(\alpha_2, 0) \frac{ {\boldsymbol \nabla} f_2({\ggrec \sigma}) }{\|{\boldsymbol \nabla} f_2({\ggrec \sigma})\|} } , & \mbox{ if } \min(\eta_1, \eta_2) < 0 \mbox{ and } \alpha_1 \leq \alpha_2,
\end{array}
\right. \;\;\;\; \\
\drv{ \teps^{\rm e}} &=& \displaystyle{ \drv{ {\ggrec \varepsilon}} - \drv{ \teps^{\rm p}}}, \\
\drv{{\ggrec \sigma}} &=& \displaystyle{ \lambda {\rm tr}(\drv{{\ggrec \varepsilon}} ) {\rm{\bf{I}}}+
2 \mu \drv{{\ggrec \varepsilon}} - 2\mu \drv{ \teps^{\rm p}}.}
\end{eqnarray}
\end{theorem}
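The case split in Theorem \ref{explicit_const_law_twofunc} can be sanity-checked against a brute-force projection onto the cone generated by the two gradients. In the sketch below (assuming NumPy), toy vectors in ${\mathbb R}^3$ stand in for the tensors, since the formulas involve inner products only:

```python
import numpy as np

g1 = np.array([1.0, 0.0, 0.0])        # toy gradients, assumed non-collinear
g2 = np.array([0.6, 0.8, 0.0])
tau = np.array([0.9, -0.4, 0.3])      # plays the role of the strain rate

a1 = tau @ g1 / np.linalg.norm(g1)
a2 = tau @ g2 / np.linalg.norm(g2)
d = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
e1 = (a1 - d * a2) / (1 - d ** 2)
e2 = (a2 - d * a1) / (1 - d ** 2)

# Case split of the theorem for the plastic rate (projection onto the cone).
if e1 >= 0 and e2 >= 0:
    p = e1 * g1 / np.linalg.norm(g1) + e2 * g2 / np.linalg.norm(g2)
elif a1 >= a2:
    p = max(a1, 0.0) * g1 / np.linalg.norm(g1)
else:
    p = max(a2, 0.0) * g2 / np.linalg.norm(g2)

# Brute-force projection onto cone{b1*g1 + b2*g2 : b1, b2 >= 0} on a grid.
best = min(np.linalg.norm(tau - (b1 * g1 + b2 * g2))
           for b1 in np.linspace(0, 3, 151) for b2 in np.linspace(0, 3, 151))
assert np.linalg.norm(tau - p) <= best + 1e-3
```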
Theorems \ref{explicit_const_law_onefunc}
and \ref{explicit_const_law_twofunc} are part of the main results of this contribution.
In view of Theorem \ref{moreau_elastoplast}, in order to establish these results
it is sufficient to find the expressions of the projectors $ \PRJ{\TG{{C}}{{\ggrec \sigma}}}$ and $ \PRJ{\NC{{C}}{{\ggrec \sigma}}}$, for yield domains as in (\ref{domain_rdc}), in the two simple cases when $\Sat{{\ggrec \sigma}}$ is reduced to one or two indices, respectively. This problem is technical and purely computational in nature, and it is almost independent of the mechanical modeling: it is the more general issue
of characterizing the projection of any vector on the normal and tangent cones of $C$.
This is the subject of the following subsections. \\
Proofs of Theorems \ref{explicit_const_law_onefunc} and \ref{explicit_const_law_twofunc} are given in Sections \ref{proj_onefunc} and \ref{proj_twofunc} hereafter.
\subsection{A simple lemma on projections}
For ${\ggrec \sigma}\in {C}$ consider the subspace ${\mathscr N}({\ggrec \sigma})$ defined by
$$
{\mathscr N}({\ggrec \sigma}) = {\rm{span}} \{ {\boldsymbol \nabla} f_i({\ggrec \sigma}) \;|\; i \in \Sat{{\ggrec \sigma}}\},
$$
with the convention ${\mathscr N}({\ggrec \sigma}) = \{\vecc{0}\}$ if $\Sat{{\ggrec \sigma}} = \emptyset$. Denote by $\PRJP{{\ggrec \sigma}}$ (resp. $\PRJO{{\ggrec \sigma}}$) the orthogonal projection on
${\mathscr N}({\ggrec \sigma})$ (resp. on ${\mathscr N}({\ggrec \sigma})^\perp$, where ${\mathscr N}({\ggrec \sigma})^\perp$ represents the orthogonal
subspace of ${\mathscr N}({\ggrec \sigma})$).
It is easy to check that
$$
\NC{{C}}{{\ggrec \sigma}} \subset {\mathscr N}({\ggrec \sigma}) \mbox{ and } {\mathscr N}({\ggrec \sigma})^\perp \subset \TG{{C}}{{\ggrec \sigma}}.
$$
We have the following technical lemma.
\begin{lemma}\label{prop_carac_proj1}
Let ${\ggrec \sigma} \in {C}$. Then, the following identities hold true
\begin{equation}\label{identity_proj}
\PRJ{\NC{C}{{\ggrec \sigma}}} = \PRJ{\NC{C}{{\ggrec \sigma}}} \circ \PRJP{{\ggrec \sigma}} \mbox{ and } \PRJ{\TG{C}{{\ggrec \sigma}}} = {\mathbb{Id}} - \PRJ{\NC{C}{{\ggrec \sigma}}} \circ \PRJP{{\ggrec \sigma}}.
\end{equation}
Here, the symbol $\circ$ denotes the composition operator of functions.
\end{lemma}
This lemma will simplify the search for the closed form expressions of the projections on the normal and tangent cones. Indeed, given ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$, $\PRJ{\NC{C}{{\ggrec \sigma}}} {\ggrec \tau}$ can be computed by following the two steps:
\begin{itemize}
\item[i.] The first step consists of finding the orthogonal projection $ {\ggrec \tau}^{{\mathscr N}}$ of ${\ggrec \tau}$ on ${\mathscr N}({\ggrec \sigma})$:
$$
{\ggrec \tau}^{{\mathscr N}} = \sum_{i \in \Sat{{\ggrec \sigma}}} \beta_i {\boldsymbol \nabla} f_i({\ggrec \sigma}),
$$
where the $\beta_i$, $i \in \Sat{{\ggrec \sigma}}$, are real numbers (of arbitrary sign) such that
$$
\sum_{i \in \Sat{{\ggrec \sigma}}} \beta_i {\boldsymbol \nabla} f_i({\ggrec \sigma}) : {\boldsymbol \nabla} f_k({\ggrec \sigma}) = {\ggrec \tau}: {\boldsymbol \nabla} f_k({\ggrec \sigma}) \mbox{ for all } k \in \Sat{{\ggrec \sigma}}.
$$
\item[ii.] The second step consists of projecting $ {\ggrec \tau}^{{\mathscr N}}$ (which lives in the finite dimensional space ${\mathscr N}({\ggrec \sigma})$) onto $\NC{{C}}{{\ggrec \sigma}}$. This projection coincides with
$\PRJ{\NC{C}{{\ggrec \sigma}}} {\ggrec \tau}$, the projection of ${\ggrec \tau}$ on $\NC{C}{{\ggrec \sigma}}$, according to Lemma \ref{prop_carac_proj1}.
\end{itemize}
The projection $\PRJ{\TG{C}{{\ggrec \sigma}}}$ is then obtained from the second identity in \eqref{identity_proj}.
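The first step above is directly implementable. The following Python sketch (an illustration of ours, not part of the original development; it assumes NumPy and linearly independent gradients) carries out step i by solving the Gram system for the coefficients $\beta_i$:

```python
import numpy as np

def project_on_span(tau, grads):
    """Step i: orthogonal projection of tau on N(sigma) = span{grad f_i(sigma)}.

    tau   : (3, 3) symmetric matrix
    grads : nonempty list of linearly independent (3, 3) symmetric matrices
    Returns the coefficients beta_i and the projection tau^N.
    """
    # Gram system: sum_i beta_i (grad_i : grad_k) = tau : grad_k for every k
    G = np.array([[np.tensordot(gi, gk) for gk in grads] for gi in grads])
    b = np.array([np.tensordot(tau, gk) for gk in grads])
    beta = np.linalg.solve(G, b)
    tau_N = sum(bi * gi for bi, gi in zip(beta, grads))
    return beta, tau_N
```

Here `np.tensordot` with its default arguments computes the full contraction ${\ggrec \tau} : {\boldsymbol \nabla} f_k$; step ii (the projection of ${\ggrec \tau}^{{\mathscr N}}$ onto the cone itself) depends on $\card{\Sat{{\ggrec \sigma}}}$ and is treated case by case below.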
\begin{ourproof}{of Lemma \ref{prop_carac_proj1}}
Let ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ and set $ {\ggrec \tau}^{\small{\mathbin{\!/\mkern-5mu/\!}}} = \PRJP{{\ggrec \sigma}} {\ggrec \tau}$ and
${\ggrec \tau}^{\perp} = \PRJO{{\ggrec \sigma}} {\ggrec \tau}$. Thus, ${\ggrec \tau} = {\ggrec \tau}^{\small{\mathbin{\!/\mkern-5mu/\!}}} + {\ggrec \tau}^{\perp}$. In view of Lemma \ref{moreau_elastoplast}, we can also write
$$
{\ggrec \tau}^{\small{\mathbin{\!/\mkern-5mu/\!}}} = {\ggrec \tau}_N + {\ggrec \tau}_T,
$$
where ${\ggrec \tau}_N = \PRJ{\NC{C}{{\ggrec \sigma}}} {\ggrec \tau}^{\small{\mathbin{\!/\mkern-5mu/\!}}}$ and ${\ggrec \tau}_T = \PRJ{\TG{C}{{\ggrec \sigma}}} {\ggrec \tau}^{\small{\mathbin{\!/\mkern-5mu/\!}}}$. Set
${\ggrec \tau}^\star_T = {\ggrec \tau}_T+ {\ggrec \tau}^{\perp}$. Obviously, ${\ggrec \tau}^\star_T\in \TG{C}{{\ggrec \sigma}}$ since ${\mathscr N}({\ggrec \sigma})^\perp \subset \TG{{C}}{{\ggrec \sigma}}$ and $\TG{C}{{\ggrec \sigma}}$ is a convex cone.
In addition, ${\ggrec \tau}^\star_T : {\ggrec \tau}_N = 0$ and ${\ggrec \tau} = {\ggrec \tau}_N +{\ggrec \tau}^\star_T$. Thus, $ {\ggrec \tau}_N = \PRJ{\NC{C}{{\ggrec \sigma}}} {\ggrec \tau}$
and ${\ggrec \tau}^\star_T = \PRJ{\TG{C}{{\ggrec \sigma}}} {\ggrec \tau}$, thanks to Lemma \ref{moreau_elastoplast}. This ends the proof.
\end{ourproof}
\begin{remark}
The hypothesis that ${C}$ is insensitive to hydrostatic pressure is not required in Lemma \ref{prop_carac_proj1}.
\end{remark}
\subsection{The case of one saturated yield function: Proof of Theorem \ref{explicit_const_law_onefunc} }\label{proj_onefunc}
In this subsection we consider the case of a single saturated yield function, that of points ${\ggrec \sigma}\in {C}$ for which
$
\card{\Sat{{\ggrec \sigma}}} = 1,
$
where $\card{\Sat{{\ggrec \sigma}}}$ denotes the cardinality of the set $\Sat{{\ggrec \sigma}}$. Without loss of generality, we assume that
\begin{equation}\label{one_satu}f_1({\ggrec \sigma}) = k_1 \mbox{ and }f_i({\ggrec \sigma}) < k_i \mbox{ for }2 \leq i \leq m.\end{equation} Thus,
$$
\TG{{C}}{{\ggrec \sigma}} = \{ {\ggrec \varphi} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; {\ggrec \varphi}:{\boldsymbol \nabla} f_1({\ggrec \sigma}) \leq 0\},
$$
and
$$
\NC{{C}}{{\ggrec \sigma}} = \{ p {\boldsymbol \nabla} f_1({\ggrec \sigma}) \;|\; p\geq 0\}.
$$
\begin{proposition}\label{projec_sat1}
Let ${\ggrec \sigma} \in {C}$ such that \eqref{one_satu} holds true. Let ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$. Then,
\begin{eqnarray}
\PRJ{\NC{C}{{\ggrec \sigma}}}({\ggrec \tau}) &=& \displaystyle{ \frac{\max(0, {\ggrec \tau}:{\boldsymbol \nabla} f_1({\ggrec \sigma}))}{\|{\boldsymbol \nabla} f_1({\ggrec \sigma})\|^2} {\boldsymbol \nabla} f_1({\ggrec \sigma}),} \label{form_projN_onefunc} \\
\PRJ{\TG{C}{{\ggrec \sigma}}}({\ggrec \tau}) &=& {\ggrec \tau} - \frac{\max(0, {\ggrec \tau}:{\boldsymbol \nabla} f_1({\ggrec \sigma}))}{\|{\boldsymbol \nabla} f_1({\ggrec \sigma})\|^2} {\boldsymbol \nabla} f_1({\ggrec \sigma}). \label{form_projT_onefunc}
\end{eqnarray}
\end{proposition}
We recall that as a consequence of Slater's condition in \eqref{slater_condition} and convexity, we have ${\boldsymbol \nabla} f_1({\ggrec \sigma}) \neq 0$ whenever $f_1({\ggrec \sigma})= k_1$. We may also observe that
the condition in \eqref{condition_indiff_hydro} is not needed in Proposition \ref{projec_sat1}.
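As a concrete illustration, formulas \eqref{form_projN_onefunc} and \eqref{form_projT_onefunc} translate into a few lines of Python (a sketch of ours, using NumPy; the function name and interface are not from the text):

```python
import numpy as np

def proj_cones_one_constraint(tau, grad_f1):
    """Projections of tau on the normal and tangent cones of C at sigma
    when a single yield function is saturated; grad_f1 = grad f_1(sigma),
    assumed nonzero (a consequence of Slater's condition)."""
    c0 = max(0.0, np.tensordot(tau, grad_f1)) / np.tensordot(grad_f1, grad_f1)
    tau_N = c0 * grad_f1   # projection on the normal cone
    tau_T = tau - tau_N    # projection on the tangent cone
    return tau_N, tau_T
```

When ${\ggrec \tau}:{\boldsymbol \nabla} f_1({\ggrec \sigma}) \leq 0$, the coefficient $c_0$ vanishes and ${\ggrec \tau}$ is returned unchanged as its own tangential projection, as expected.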
\begin{ourproof}
Let $ {\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ and set
$$
{\ggrec \eta} = {\ggrec \tau} - c_0 {\boldsymbol \nabla} f_1({\ggrec \sigma}) \mbox{ with }c_0 = \frac{\max(0, {\ggrec \tau}:{\boldsymbol \nabla} f_1({\ggrec \sigma}))}{\|{\boldsymbol \nabla} f_1({\ggrec \sigma})\|^2}.
$$
Obviously ${\ggrec \eta} \in \TG{C}{{\ggrec \sigma}}$. We observe that $c_0 {\ggrec \eta} : {\boldsymbol \nabla} f_1({\ggrec \sigma}) = 0$. Let ${\ggrec \kappa}$ be an element of $\TG{C}{{\ggrec \sigma}}$. Then,
$$
\begin{array}{rcl}
\|{\ggrec \tau} - {\ggrec \kappa} \|^2 - \|{\ggrec \tau} - {\ggrec \eta} \|^2 &= & \|({\ggrec \tau} - {\ggrec \eta}) + ({\ggrec \eta} - {\ggrec \kappa}) \|^2 - \|{\ggrec \tau} - {\ggrec \eta} \|^2 \\
&=& 2 ({\ggrec \tau} - {\ggrec \eta}):({\ggrec \eta} - {\ggrec \kappa})+ \|{\ggrec \eta} -{\ggrec \kappa} \|^2 \\
&=& 2c_0 {\boldsymbol \nabla} f_1({\ggrec \sigma}):({\ggrec \eta} - {\ggrec \kappa})+ \|{\ggrec \eta} -{\ggrec \kappa} \|^2 \\
&=& -2c_0 {\boldsymbol \nabla} f_1({\ggrec \sigma}):{\ggrec \kappa}+ \|{\ggrec \eta} -{\ggrec \kappa} \|^2 \\
&\geq& 0.
\end{array}
$$
It follows that ${\ggrec \eta}$ minimizes $\|{\ggrec \tau} - {\ggrec \kappa} \|$ over $\TG{C}{{\ggrec \sigma}}$.
The rest of the proof follows from Lemma \ref{moreau_decompo}, since $c_0{\boldsymbol \nabla} f_1({\ggrec \sigma}) \in \NC{C}{{\ggrec \sigma}}$ and this quantity is perpendicular to ${\ggrec \eta}$.
\end{ourproof}
\begin{remark}
A sufficient condition for the yield domain ${C}$ to be insensitive to hydrostatic pressure variations (condition \eqref{condition_indiff_hydro}) is that the functions $f_1,f_2,\cdots,f_m$ depend only on the invariants $J_2$ and $J_3$. We note here that
if $f_1({\ggrec \sigma}) = F_1(J_2({\ggrec \sigma}), J_3({\ggrec \sigma}))$ for some differentiable
function $F_1$, then,
$$
\begin{array}{rcl}
{\boldsymbol \nabla} f_1({\ggrec \sigma}) =\displaystyle{ \frac{\partial F_1}{\partial J_2}(J_2({\ggrec \sigma}), J_3({\ggrec \sigma})) \overline{\tsig} + \frac{\partial F_1}{\partial J_3}(J_2({\ggrec \sigma}), J_3({\ggrec \sigma})) (\overline{\tsig}^2 - \frac{2}{3} J_2({\ggrec \sigma}){\rm{\bf{I}}}).}
\end{array}
$$
(thanks to formula \eqref{gradJ2J3}), which yields a straightforward formula for computing the tangent and normal cones.
\end{remark}
The proof of Theorem \ref{explicit_const_law_onefunc} follows from formulas \eqref{form_projN_onefunc} and \eqref{form_projT_onefunc}.
\subsection{The case of two saturated yield functions: Proof of Theorem \ref{explicit_const_law_twofunc} }\label{proj_twofunc}
In this subsection we consider the case of two saturated yield functions, that is when
$$
\card{\Sat{{\ggrec \sigma}}} = 2.
$$
Assume that
\begin{equation}
\label{two_satu}
f_1({\ggrec \sigma}) = k_1, f_2({\ggrec \sigma}) = k_2, \mbox{ and }f_i({\ggrec \sigma}) < k_i \mbox{ for } 3 \leq i \leq m. \end{equation}
We necessarily have
\begin{equation}\label{inde_grads}
{\boldsymbol \nabla} f_1({\ggrec \sigma}) \mbox{ and } {\boldsymbol \nabla} f_2({\ggrec \sigma}) \mbox{ are not collinear,}
\end{equation}
otherwise, the two functions could be combined into one and we would be back to the case of a single constraint.
We know that
$$
\TG{{C}}{{\ggrec \sigma}} = \{ {\ggrec \varphi} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; {\ggrec \varphi}:{\boldsymbol \nabla} f_1({\ggrec \sigma}) \leq 0 \mbox{ and } {\ggrec \varphi}:{\boldsymbol \nabla} f_2({\ggrec \sigma}) \leq 0\},
$$
$$
\NC{{C}}{{\ggrec \sigma}} = \{ \eta_1 {\boldsymbol \nabla} f_1({\ggrec \sigma}) + \eta_2 {\boldsymbol \nabla} f_2({\ggrec \sigma}) \;|\; \eta_1\geq 0 \mbox{ and } \eta_2\geq 0\}.
$$
As per Lemma \ref{prop_carac_proj1}, let
$$
{\mathscr N}({\ggrec \sigma}) = {\rm{span}} \{ {\boldsymbol \nabla} f_1({\ggrec \sigma}), {\boldsymbol \nabla} f_2({\ggrec \sigma})\}.
$$
Obviously $\NC{{C}}{{\ggrec \sigma}} \subset {\mathscr N}({\ggrec \sigma})$. Consider the unit vectors
$$
{\ggrec \tau}_i = \frac{ 1 }{\|{\boldsymbol \nabla} f_i({\ggrec \sigma})\|} {\boldsymbol \nabla} f_i({\ggrec \sigma}), \; i=1, 2, \;
$$
and set
$$
\delta = {\ggrec \tau}_1:{\ggrec \tau}_2 \in (-1, 1).
$$
The orthogonal projection on ${\mathscr N}({\ggrec \sigma})$ is given by
\begin{equation}
\PRJP{{\ggrec \sigma}} {\ggrec \tau} = \alpha_1({\ggrec \tau}) {\ggrec \tau}_1 + \alpha_2({\ggrec \tau}) {\ggrec \tau}_2,
\end{equation}
where
$$
\alpha_1({\ggrec \tau}) = \frac{({\ggrec \tau}:{\ggrec \tau}_1)- \delta ({\ggrec \tau}:{\ggrec \tau}_2) }{1-\delta^2} , \; \alpha_2({\ggrec \tau}) = \frac{({\ggrec \tau}:{\ggrec \tau}_2)- \delta ({\ggrec \tau}:{\ggrec \tau}_1) }{1-\delta^2}.
$$
\begin{proposition}\label{proj_2grads}
Under the assumptions in \eqref{two_satu} and \eqref{inde_grads}, we have, for ${\ggrec \tau}\in {\mathbb M}^{3\times 3}_{\rm{sym}}$,
\begin{equation}
\PRJ{\NC{C}{{\ggrec \sigma}}}({\ggrec \tau}) =
\left\{
\begin{array}{ll}
\PRJP{{\ggrec \sigma}} {\ggrec \tau} & \mbox{ if } \alpha_1({\ggrec \tau}) \geq 0 \mbox{ and } \alpha_2({\ggrec \tau}) \geq 0, \\
\max({\ggrec \tau}:{\ggrec \tau}_1, 0) {\ggrec \tau}_1 & \mbox{ if } \min(\alpha_1({\ggrec \tau}),\alpha_2({\ggrec \tau}))<0 \mbox{ and } {\ggrec \tau}:{\ggrec \tau}_1 \geq {\ggrec \tau}:{\ggrec \tau}_2, \\
\max({\ggrec \tau}:{\ggrec \tau}_2, 0) {\ggrec \tau}_2 & \mbox{ if } \min(\alpha_1({\ggrec \tau}),\alpha_2({\ggrec \tau}))<0 \mbox{ and } {\ggrec \tau}:{\ggrec \tau}_1 \leq {\ggrec \tau}:{\ggrec \tau}_2.
\end{array}
\right.
\end{equation}
\end{proposition}
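Proposition \ref{proj_2grads} can be checked, and used, numerically. The sketch below (our own illustrative Python, assuming non-collinear gradients; it is not part of the original text) mirrors the three cases of the statement:

```python
import numpy as np

def proj_normal_two_constraints(tau, g1, g2):
    """Projection of tau on the normal cone when two yield functions are
    saturated; g1, g2 are the (non-collinear) gradients grad f_1, grad f_2."""
    t1 = g1 / np.linalg.norm(g1)             # unit "vectors" tau_1, tau_2
    t2 = g2 / np.linalg.norm(g2)
    delta = np.tensordot(t1, t2)             # delta in (-1, 1)
    s1, s2 = np.tensordot(tau, t1), np.tensordot(tau, t2)
    a1 = (s1 - delta * s2) / (1 - delta**2)  # alpha_1(tau)
    a2 = (s2 - delta * s1) / (1 - delta**2)  # alpha_2(tau)
    if a1 >= 0 and a2 >= 0:                  # projection on N(sigma) is in the cone
        return a1 * t1 + a2 * t2
    if s1 >= s2:                             # project on the ray R+ t1
        return max(s1, 0.0) * t1
    return max(s2, 0.0) * t2                 # project on the ray R+ t2
```

The tangential projection is then recovered as ${\ggrec \tau}$ minus this result, by Lemma \ref{moreau_elastoplast}.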
\begin{ourproof}{}
Let ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ and set ${\ggrec \tau}_{0} = \PRJP{{\ggrec \sigma}} {\ggrec \tau} \in {\mathscr N}({\ggrec \sigma})$. From Lemma \ref{prop_carac_proj1} we know that
$$
\PRJ{\NC{C}{{\ggrec \sigma}}}{\ggrec \tau} = \PRJ{\NC{C}{{\ggrec \sigma}}}({\ggrec \tau}_{0}).
$$
We have four distinct cases:
\begin{enumerate}
\item ${\ggrec \tau}_{0} \in \NC{{C}}{{\ggrec \sigma}}$, that is $\alpha_i({\ggrec \tau}) \geq 0$ for $ 1\leq i \leq 2$. Then, \\
$\PRJ{\NC{C}{{\ggrec \sigma}}} ({\ggrec \tau}) = \PRJ{\NC{C}{{\ggrec \sigma}}}({\ggrec \tau}_{0}) = {\ggrec \tau}_{0}$.
\item ${\ggrec \tau}_{0} \in \TG{{C}}{{\ggrec \sigma}}$, that is, ${\ggrec \tau}:{\ggrec \tau}_i = {\ggrec \tau}_{0}:{\ggrec \tau}_i \leq 0$ for $ 1\leq i \leq 2$. Then $\PRJ{\TG{{C}}{{\ggrec \sigma}}}({\ggrec \tau}_{0}) = {\ggrec \tau}_{0} $ and $\PRJ{\NC{{C}}{{\ggrec \sigma}}}({\ggrec \tau}_{0}) = \vecc{0} $.
\item ${\ggrec \tau}_{0} \not \in \NC{{C}}{{\ggrec \sigma}}$, ${\ggrec \tau}_{0} \not \in \TG{{C}}{{\ggrec \sigma}}$ and ${\ggrec \tau}:{\ggrec \tau}_1 \geq {\ggrec \tau}:{\ggrec \tau}_2$. In this case
$$
\alpha_1({\ggrec \tau}) - \alpha_2({\ggrec \tau}) = \frac{({\ggrec \tau}:{\ggrec \tau}_1)- ({\ggrec \tau}:{\ggrec \tau}_2) }{1-\delta} \geq 0.
$$
Thus $\alpha_1({\ggrec \tau}) \geq \alpha_2({\ggrec \tau})$. Necessarily ${\ggrec \tau}:{\ggrec \tau}_1 > 0$ (since ${\ggrec \tau}_{0} \not \in \TG{{C}}{{\ggrec \sigma}}$) and $\alpha_2({\ggrec \tau}) < 0$ (since ${\ggrec \tau}_{0} \not \in \NC{{C}}{{\ggrec \sigma}}$). We also have
$$
\PRJP{{\ggrec \sigma}} {\ggrec \tau} - ({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1 = \alpha_2({\ggrec \tau})(-\delta {\ggrec \tau}_1+{\ggrec \tau}_2).
$$
Hence, $(\PRJP{{\ggrec \sigma}} {\ggrec \tau} - ({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1) :{\ggrec \tau}_i \leq 0$ for $i =1, 2$. Thus, $\PRJP{{\ggrec \sigma}} {\ggrec \tau} - ({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1 \in \TG{{C}}{{\ggrec \sigma}}$.
Since $({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1 \in \NC{{C}}{{\ggrec \sigma}}$ and $({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1 \perp \PRJP{{\ggrec \sigma}} {\ggrec \tau} - ({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1$, we deduce that
$\PRJ{\NC{{C}}{{\ggrec \sigma}}}({\ggrec \tau}_{0}) = ({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1$ and $\PRJ{\TG{{C}}{{\ggrec \sigma}}}({\ggrec \tau}_{0}) = \PRJP{{\ggrec \sigma}} {\ggrec \tau} - ({\ggrec \tau}:{\ggrec \tau}_1) {\ggrec \tau}_1$.
\item The case ${\ggrec \tau}_{0} \not \in \NC{{C}}{{\ggrec \sigma}}$, ${\ggrec \tau}_{0} \not \in \TG{{C}}{{\ggrec \sigma}}$ and ${\ggrec \tau}:{\ggrec \tau}_2 \geq {\ggrec \tau}:{\ggrec \tau}_1$ is obtained by simple symmetry.
\end{enumerate}
\end{ourproof}
The proof of Theorem \ref{explicit_const_law_twofunc} follows from
Lemma \ref{moreau_elastoplast} and Proposition \ref{proj_2grads}.
\section{Examples: Von Mises and Tresca criteria }\label{VM-T-cr}
\subsection{Von Mises criterion}
In the case of the Von Mises criterion, the yield domain is defined as
\begin{equation}\label{VonMisesDom}
{C} = \{ {\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; \sqrt{J_2({\ggrec \sigma})} \leq k\},
\end{equation}
for some given constant $k > 0$.
\begin{corollary}
Assume that ${C}$ is defined by \eqref{VonMisesDom} and that
$\sqrt{J_2({\ggrec \sigma})} = k$. Then, the constitutive laws in \eqref{str_tangent0}, \eqref{str_normal}, and \eqref{const_tangent0}
are reduced to
\begin{eqnarray}
\drv{ \teps^{\rm p}} &=& \displaystyle{ \frac{\max(0,\drv{ {\ggrec \varepsilon}} : \overline{\tsig})}{2k^2} \overline{\tsig}, } \label{onefunc_strPCM}\\
\drv{ \teps^{\rm e}} &=& \displaystyle{ \drv{ {\ggrec \varepsilon}} - \frac{\max(0,\drv{{\ggrec \varepsilon}} : \overline{\tsig})}{2k^2} \overline{\tsig}}, \label{onefunc_strECM}\\
\drv{{\ggrec \sigma}} &=& \displaystyle{ \lambda {\rm tr}(\drv{{\ggrec \varepsilon}} ) {\rm{\bf{I}}}+ 2\mu \drv{{\ggrec \varepsilon}}
- \frac{\mu}{k^2} \max(0, \drv{{\ggrec \varepsilon}}: \overline{\tsig}) \overline{\tsig}. } \label{onefunc_strSIGCM}
\end{eqnarray}
\end{corollary}
\begin{ourproof}{}
This case corresponds to a single yield function
$$
f_1({\ggrec \sigma}) = J_2({\ggrec \sigma}) = \frac{1}{2} \|\overline{\tsig}\|^2.
$$
Using \eqref{gradJ2J3} gives
$$
{\boldsymbol \nabla} f_1({\ggrec \sigma}) = \overline{\tsig}.
$$
Hence,
$$
\| {\boldsymbol \nabla} f_1({\ggrec \sigma}) \|^2 = \| \overline{\tsig} \|^2 = 2 J_2({\ggrec \sigma}) = 2k^2.
$$
Replacing in \eqref{onefunc_strP}, \eqref{onefunc_strE} and \eqref{constitu_law1} gives formula
\eqref{onefunc_strPCM}, \eqref{onefunc_strECM} and \eqref{onefunc_strSIGCM}.
\end{ourproof}
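As a sanity check, the rate law \eqref{onefunc_strSIGCM} can be coded directly. In the Python sketch below (ours, for illustration only), \texttt{lam} and \texttt{mu} denote the Lam\'e coefficients and \texttt{sig\_dev} the stress deviator $\overline{\tsig}$ at a state with $\sqrt{J_2} = k$:

```python
import numpy as np

def von_mises_stress_rate(deps, sig_dev, k, lam, mu):
    """Stress rate at a stress state on the Von Mises yield surface
    (sqrt(J2(sigma)) = k); deps is the total strain rate tensor."""
    gamma = max(0.0, np.tensordot(deps, sig_dev))   # deps : sig_dev, clipped at 0
    return (lam * np.trace(deps) * np.eye(3) + 2.0 * mu * deps
            - (mu / k**2) * gamma * sig_dev)
```

A purely volumetric strain rate gives back the elastic law (since $\drv{{\ggrec \varepsilon}}:\overline{\tsig} = 0$), while a strain rate proportional to $\overline{\tsig}$ is entirely absorbed by plastic flow and produces a vanishing stress rate.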
\subsection{Tresca criterion}\label{Tresca}
The well-known Tresca criterion (or maximum shear stress criterion) corresponds to the
yield domain
\begin{equation}\label{tresca_domain}
{C} = \{ {\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; f_{{T}}({\ggrec \sigma}) \leq k\},
\end{equation}
where
\begin{equation}\label{trsc_function}
f_{{T}}({\ggrec \sigma}) = \frac{1}{2} ( \lambda_1({\ggrec \sigma}) - \lambda_3({\ggrec \sigma})).
\end{equation}
This is a convex function since
\begin{equation}\label{expression_ftresca}
f_{{T}}({\ggrec \sigma}) = \frac{1}{2} ( \lambda_1({\ggrec \sigma}) + \lambda_1(-{\ggrec \sigma})) \mbox{ and } \lambda_1({\ggrec \sigma}) = \max_{\|u\| = 1} {\ggrec \sigma}: (u \otimes u).
\end{equation}
Also, the hydrostatic pressure stability condition in \eqref{condition_indiff_hydro} is satisfied since
$$
\forall {\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}} , \forall \theta \in {\mathbb R}, \; f_{{T}}({\ggrec \sigma} + \theta {\rm{\bf{I}}}) = f_{{T}}({\ggrec \sigma}).
$$
However, it is easy to see that the function $f_{{T}}$ is not everywhere differentiable.
\begin{remark}
Of course, the definition in \eqref{expression_ftresca} is based on the assumption
that the eigenvalues of ${\ggrec \sigma}$ are ordered ($ \lambda_1({\ggrec \sigma}) \geq \lambda_2({\ggrec \sigma}) \geq \lambda_3({\ggrec \sigma})$). However, one can also use the definition
$$
f_{{T}}({\ggrec \sigma}) = \frac{1}{4} (| \lambda_1({\ggrec \sigma}) - \lambda_2({\ggrec \sigma})|+| \lambda_2({\ggrec \sigma}) - \lambda_3({\ggrec \sigma})|+| \lambda_1({\ggrec \sigma}) - \lambda_3({\ggrec \sigma})|),
$$
which does not depend on the order of the eigenvalues but makes it clearer that $f_{{T}}$ is not everywhere differentiable.
\end{remark}
Consistent with the condition in \eqref{condition_indiff_hydro}, we have
\begin{equation}
f_{{T}}({\ggrec \sigma}) = \frac{1}{2} ( \lambda_1(\overline{\tsig}) - \lambda_3(\overline{\tsig})).
\end{equation}
It is natural to apply the results of Proposition \ref{projec_sat1}, corresponding to the case of a single saturated function, to express the constitutive laws for the Tresca yield criterion. However, since the function is not differentiable everywhere, we will have to treat separately the points where $f_{{T}}$ is not differentiable.
In order to use Theorem \ref{explicit_const_law_onefunc}, we would like to express the yield function directly in terms of the components of ${\ggrec \sigma}$. One sometimes encounters in the literature the smooth function (see, e.g., \cite{lubliner1970}, p. 137):
$$
\begin{array}{rcl}
F({\ggrec \sigma}) &=& \displaystyle{ \prod_{ 1 \leq i < j \leq 3} (( \lambda_i({\ggrec \sigma})- \lambda_j({\ggrec \sigma}))^2 - 4k^2), }\\
&=& 4 J_2({\ggrec \sigma})^3 - 27 J_3({\ggrec \sigma})^2 - 36 k^2 J_2({\ggrec \sigma})^2 + 96 k^4 J_2({\ggrec \sigma}) - 64 k^6,
\end{array}
$$
which satisfies
$$
f_{{T}}({\ggrec \sigma}) \leq k \mbox{ (resp. $f_{{T}}({\ggrec \sigma}) = k$)} \Longrightarrow F({\ggrec \sigma}) \leq 0 \mbox{ (resp. $F({\ggrec \sigma}) = 0$)}.
$$
This implication is one-directional and clearly not an equivalence. A simple way to be convinced of this
is to observe that the domain $\{{\ggrec \sigma} \;|\; F({\ggrec \sigma}) \leq 0\}$ has a smooth boundary,
unlike the domain \eqref{tresca_domain} defined by the Tresca criterion
which has corners, or from a mathematical point of view, points on the boundary
where the normal cone degenerates because the function $f_{{T}}$ is not differentiable there. \\
$\;$\\
It is enough to characterize the constitutive laws
when ${\ggrec \sigma}$ is at the boundary of the yield domain, i.e., when
\begin{equation}\label{satu_bord_tresca}
f_{{T}}({\ggrec \sigma}) = k, (k>0),
\end{equation}
i.e., when the plastic yield limit is attained.
This characterization goes through two steps: the computation of the normal cone,
followed by the computation of the projections on the normal and the tangent cones. The former
step requires the calculation of the sub-differential of $f_{{T}}$.
We have two distinct cases:
\begin{enumerate}
\item The three principal stresses are distinct, i.e., $ \lambda_1({\ggrec \sigma}) > \lambda_2({\ggrec \sigma}) > \lambda_3({\ggrec \sigma})$. In this case, the function
$f_{{T}}$ is differentiable at ${\ggrec \sigma}$ (and its sub-differential is
reduced to a singleton). The projection calculation in this case falls under Proposition \ref{projec_sat1} and Theorem \ref{explicit_const_law_onefunc}.
\item Two of the principal stresses are equal, i.e., $ \lambda_2({\ggrec \sigma}) = \lambda_1({\ggrec \sigma})\ne \lambda_3({\ggrec \sigma}) $ or
$ \lambda_2({\ggrec \sigma}) = \lambda_3({\ggrec \sigma})\ne \lambda_1({\ggrec \sigma})$. This case is more complex because the subdifferential is not reduced to a point. The computation of the resulting projection is also more involved,
and it is covered neither by Proposition \ref{projec_sat1} for a single function nor by Proposition \ref{proj_2grads} for the case of two functions.
\end{enumerate}
We note that the case where the three principal stresses are equal is obviously excluded because
of \eqref{satu_bord_tresca}.
\begin{theorem}\label{theo_vonmises}
Assume that ${C}$ is given by \eqref{tresca_domain}. Assume that ${\ggrec \sigma}$ satisfies \eqref{satu_bord_tresca}
and that $ \lambda_1({\ggrec \sigma}) > \lambda_2({\ggrec \sigma}) > \lambda_3({\ggrec \sigma})$.
Let $v_1({\ggrec \sigma})$ and $v_3({\ggrec \sigma})$ be the unitary eigenvectors associated with $ \lambda_1({\ggrec \sigma})$ and $ \lambda_3({\ggrec \sigma})$, respectively.
Then, the rules \eqref{str_tangent0}, \eqref{str_normal} and \eqref{const_tangent0} can be rewritten as
\begin{eqnarray}
\drv{ \teps^{\rm p}} &=& \displaystyle{ q(\drv{{\ggrec \varepsilon}}; {\ggrec \sigma}) [ v_1({\ggrec \sigma}) \otimes v_1({\ggrec \sigma}) - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma})], }\\
\drv{ \teps^{\rm e}} &=& \displaystyle{ \drv{ {\ggrec \varepsilon}} - \drv{ \teps^{\rm p}}, } \\
\drv{{\ggrec \sigma}}&= &\displaystyle{ \lambda {\rm tr}(\drv{{\ggrec \varepsilon}} ) {\rm{\bf{I}}} + 2\mu\left(\drv{{\ggrec \varepsilon}} - q(\drv{{\ggrec \varepsilon}}; {\ggrec \sigma}) [ v_1({\ggrec \sigma}) \otimes v_1({\ggrec \sigma}) - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma})] \right), }
\end{eqnarray}
with
\begin{eqnarray}
q(\drv{{\ggrec \varepsilon}}; {\ggrec \sigma}) & = & \frac{1}{2} \max\left( 0, \drv{{\ggrec \varepsilon}}:( v_1({\ggrec \sigma}) \otimes v_1({\ggrec \sigma}) - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma}))\right), \\
&=& \frac{1}{2} \max\left( 0, \overline{\drv{{\ggrec \varepsilon}}}:( v_1({\ggrec \sigma}) \otimes v_1({\ggrec \sigma}) - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma}))\right).
\end{eqnarray}
\end{theorem}
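At such a regular point ($\lambda_1({\ggrec \sigma}) > \lambda_2({\ggrec \sigma}) > \lambda_3({\ggrec \sigma})$), the flow rule of the theorem is straightforward to implement via an eigendecomposition. A minimal Python sketch of ours (note that \texttt{numpy.linalg.eigh} returns eigenvalues in ascending order, the opposite of the ordering convention used here):

```python
import numpy as np

def tresca_plastic_rate(deps, sig):
    """Plastic strain rate for the Tresca criterion at a stress state
    with three distinct principal stresses (regular point of f_T)."""
    w, V = np.linalg.eigh(sig)               # eigh: ascending eigenvalues
    v3, v1 = V[:, 0], V[:, 2]                # lambda_3 = w[0], lambda_1 = w[2]
    M = np.outer(v1, v1) - np.outer(v3, v3)  # = 2 grad f_T(sigma)
    q = 0.5 * max(0.0, np.tensordot(deps, M))
    return q * M
```

The sign ambiguity of the computed eigenvectors is harmless here, since only the dyadic products $v_i \otimes v_i$ enter the formula.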
The proof of Theorem \ref{theo_vonmises} is based on combining Proposition \ref{projec_sat1} with the following lemma.
\begin{lemma}\label{diff_tresca_diff}
Let ${\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ such that $ \lambda_1({\ggrec \sigma}) > \lambda_2({\ggrec \sigma}) > \lambda_3({\ggrec \sigma})$.
Let $v_1({\ggrec \sigma})$ and $v_3({\ggrec \sigma})$ be the unitary eigenvectors of ${\ggrec \sigma}$ corresponding
to $ \lambda_1({\ggrec \sigma})$ and $ \lambda_3({\ggrec \sigma})$, respectively. Then,
\begin{equation}
{\boldsymbol \nabla}f_{{T}}({\ggrec \sigma})= \frac{1}{2} ( v_1({\ggrec \sigma}) \otimes v_1({\ggrec \sigma}) - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma})),
\end{equation}
and thus, $\|{\boldsymbol \nabla} f_{{T}} ({\ggrec \sigma})\|^2 = 1/2$.
\end{lemma}
\begin{ourproof}{}
Since the $ \lambda_i(\overline{\tsig})$, $1 \leq i \leq 3$, are solutions of the equation
$$
\lambda^3 - J_2({\ggrec \sigma}) \lambda - J_3({\ggrec \sigma}) = 0,
$$
we have
\begin{eqnarray}\label{eigenvalues_tresca}
\lambda_1(\overline{\tsig}) &=& \displaystyle{ \Lambda_0({\ggrec \sigma}) \cos \frac{\varphi_0({\ggrec \sigma})}{3}, } \nonumber \\
\lambda_2(\overline{\tsig}) &=& \displaystyle{ \Lambda_0({\ggrec \sigma}) \cos \left( \frac{2\pi-\varphi_0({\ggrec \sigma})}{3}\right), } \nonumber \\
\lambda_3(\overline{\tsig})&=& \displaystyle{ \Lambda_0({\ggrec \sigma}) \cos \left( \frac{2\pi+\varphi_0({\ggrec \sigma})}{3}\right), } \nonumber
\end{eqnarray}
where
$$
\Lambda_0({\ggrec \sigma}) = \sqrt{ \frac{4 J_2({\ggrec \sigma})}{3}} , \; \varphi_0({\ggrec \sigma}) = \arccos \left (\frac{3\sqrt{3} J_3({\ggrec \sigma})}{2J_2({\ggrec \sigma})^{3/2}} \right) \in [0, \pi].
$$
It follows that
$$
f_{{T}}({\ggrec \sigma}) = \frac{1}{2} ( \lambda_1(\overline{\tsig}) - \; \lambda_3(\overline{\tsig})) = \sqrt{J_2({\ggrec \sigma})} \sin (\theta({\ggrec \sigma})) \mbox{ with } \theta({\ggrec \sigma}) = \frac{\pi+\varphi_0({\ggrec \sigma})}{3}.
$$
Let $v_1({\ggrec \sigma})$, $v_2({\ggrec \sigma})$ and $v_3({\ggrec \sigma})$ be the unitary eigenvectors of ${\ggrec \sigma}$ (and of $\overline{\tsig}$) corresponding to
$ \lambda_1({\ggrec \sigma})$, $ \lambda_2({\ggrec \sigma})$ and $ \lambda_3({\ggrec \sigma})$, respectively. Let $P({\ggrec \sigma})$ be the orthogonal matrix whose column vectors are $v_1({\ggrec \sigma})$, $v_2({\ggrec \sigma})$ and $v_3({\ggrec \sigma})$. Then, $\overline{\tsig} = P({\ggrec \sigma}) D(\overline{\tsig}) P({\ggrec \sigma})^t $ with
$$
D(\overline{\tsig}) = \diagn{ \lambda_1(\overline{\tsig})}{ \lambda_2(\overline{\tsig})}{ \lambda_3(\overline{\tsig})} = \frac{2 \sqrt{J_2}}{\sqrt{3}} \diagn{\cos(\alpha)}{\cos(\beta)}{\cos(\gamma)}
$$
and $\alpha = \theta - \pi/3$, $\beta = \pi-\theta$ and $\gamma = \theta + \pi/3$. We have
$$
\begin{array}{rcl}
{\boldsymbol \nabla} f_{{T}}({\ggrec \sigma}) &= & \displaystyle{ \frac{\sin (\theta({\ggrec \sigma})) }{2\sqrt{ J_2({\ggrec \sigma}) }} {\boldsymbol \nabla} J_2({\ggrec \sigma}) + \frac{\sqrt{J_2({\ggrec \sigma})}}{3} \cos(\theta({\ggrec \sigma})) {\boldsymbol \nabla} \varphi_0({\ggrec \sigma}),} \\
&= & \displaystyle{ \frac{\sin (\theta({\ggrec \sigma})) }{ 2 \sqrt{ J_2({\ggrec \sigma}) }} {\boldsymbol \nabla} J_2({\ggrec \sigma}) }\\
& + &\displaystyle{\frac{\sqrt{3}\cos(\theta({\ggrec \sigma}))}{2 J_2({\ggrec \sigma})^2 \sin(\varphi_0({\ggrec \sigma}))} \left(\frac{3}{2} J_3({\ggrec \sigma}) {\boldsymbol \nabla} J_2({\ggrec \sigma}) - J_2({\ggrec \sigma}) {\boldsymbol \nabla} J_3({\ggrec \sigma})\right). }
\end{array}
$$
Since $J_3 = 2\cos(\varphi_0) J^{3/2}_2/(3\sqrt{3})$ and $ \varphi_0 = 3\theta - \pi$, we get
\begin{equation}\label{fonc_grad}
{\boldsymbol \nabla} f_{{T}}({\ggrec \sigma}) = \frac{\cos (2\theta) }{2\sqrt{ J_2 }\sin(3\theta)} {\boldsymbol \nabla} J_2({\ggrec \sigma}) + \frac{\sqrt{3}\cos(\theta)}{2J_2 \sin(3\theta)} {\boldsymbol \nabla} J_3({\ggrec \sigma}).
\end{equation}
Combining with \eqref{gradJ2J3} gives
$$
{\boldsymbol \nabla} J_3({\ggrec \sigma}) = \frac{2}{3} J_2 P({\ggrec \sigma}) \diagn{\cos(2\alpha)}{\cos(2\beta)}{\cos(2\gamma)} P({\ggrec \sigma})^t.
$$
Finally,
$$
{\boldsymbol \nabla} f_{{T}} ({\ggrec \sigma})= \frac{1}{2} P({\ggrec \sigma})
\diagn{1}{0}
{-1} P({\ggrec \sigma})^t = \frac{1}{2} ( v_1({\ggrec \sigma}) v_1({\ggrec \sigma})^t - v_3({\ggrec \sigma}) v_3({\ggrec \sigma})^t).
$$
\end{ourproof}
\begin{remark}
$\varphi_0({\ggrec \sigma})/{3}$ is called the Lode angle (see \cite{wlode}).
\end{remark}
\begin{remark}
An alternative proof of the Lemma can be formulated using the characterization in \eqref{lewis_caract}
for the subdifferential of $f_{{T}}$.
\end{remark}
We now deal with the case of a double eigenvalue.
Assume that $ \lambda_2({\ggrec \sigma}) = \lambda_\ell({\ggrec \sigma})$ for some $\ell \in \{1, 3\}$ and
$ \lambda_2({\ggrec \sigma}) \ne \lambda_{4-\ell}({\ggrec \sigma})$. Set $ m = 4 -\ell \in \{1, 3\}$. Let $\{v_1({\ggrec \sigma}) , v_2({\ggrec \sigma}) , v_3({\ggrec \sigma})\}$
be a corresponding orthonormal basis of eigenvectors of ${\ggrec \sigma}$. Thus,
\begin{equation}
\begin{array}{rcl}
{\ggrec \sigma} &=& \sum_{k=1}^3 \lambda_k({\ggrec \sigma}) v_k({\ggrec \sigma}) \otimes v_k({\ggrec \sigma})\\
&=& \lambda_2({\ggrec \sigma}) (v_2({\ggrec \sigma}) \otimes v_2({\ggrec \sigma}) + v_\ell({\ggrec \sigma}) \otimes v_\ell({\ggrec \sigma}))+ \lambda_m({\ggrec \sigma}) v_m({\ggrec \sigma}) \otimes v_m({\ggrec \sigma})\\
\end{array}
\end{equation}
Define the subspace
\begin{equation}
G_m({\ggrec \sigma}) = \{{\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}} \;|\; {\ggrec \tau} v_m ({\ggrec \sigma}) = 0\},
\end{equation}
and set
\begin{equation}
\begin{array}{rcl}
W_{m, i}({\ggrec \sigma}) &=& v_i({\ggrec \sigma}) \otimes v_i({\ggrec \sigma}), \; \mbox{ for } 1 \leq i \leq 3 \mbox{ and } i \ne m, \\
W_{m, m}({\ggrec \sigma}) &=& \sqrt{2}\, v_2({\ggrec \sigma})\odot v_\ell({\ggrec \sigma}).
\end{array}
\end{equation}
We will use the following lemma, whose proof is elementary but is given in Appendix \ref{A1} for completeness.
\begin{lemma}\label{lem_base_G}
Elements of $G_m({\ggrec \sigma})$ commute with ${\ggrec \sigma}$ and $\{W_{m, 1}({\ggrec \sigma}), W_{m, 2}({\ggrec \sigma}), W_{m, 3}({\ggrec \sigma})\}$ is an orthonormal basis of $G_m({\ggrec \sigma})$.
\end{lemma}
In what follows $\prj{G_m({\ggrec \sigma})}$ denotes the orthogonal projection on $G_m({\ggrec \sigma})$. Obviously, for all ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$
\begin{equation}
\prj{G_m({\ggrec \sigma})} {\ggrec \tau} = \sum_{i=1}^3 ({\ggrec \tau} : W_{m, i}({\ggrec \sigma})) W_{m, i}({\ggrec \sigma}).
\end{equation}
We set
\begin{equation}
{\mathbb S}_m ({\ggrec \sigma}; {\ggrec \tau}) = \prj{G_m({\ggrec \sigma})} \overline{\ttau} - \lambda_m(\prj{G_m({\ggrec \sigma})} \overline{\ttau}) {\rm{\bf{I}}}.
\end{equation}
We observe that for all ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$
\begin{itemize}
\item ${\mathbb S}_3 ({\ggrec \sigma}; {\ggrec \tau})$ (resp. ${\mathbb S}_1 ({\ggrec \sigma}; {\ggrec \tau})$) is symmetric positive semidefinite (resp. negative semidefinite) (since $ \lambda_1(\prj{G_3({\ggrec \sigma})} \overline{\ttau}) \geq \lambda_2(\prj{G_3({\ggrec \sigma})} \overline{\ttau}) \geq \lambda_3(\prj{G_3({\ggrec \sigma})} \overline{\ttau})$).
\item $\prj{G_m({\ggrec \sigma})} {\ggrec \tau}$ and ${\mathbb S}_m ({\ggrec \sigma}; {\ggrec \tau})$ commute with ${\ggrec \sigma}$.
\item $0$ is an eigenvalue of $\prj{G_m({\ggrec \sigma})} \overline{\ttau}$ (it is an eigenvalue of all elements of $G_m({\ggrec \sigma})$).
\end{itemize}
For $\prj{G_m({\ggrec \sigma})} \overline{\ttau} \ne 0$, we set
\begin{equation}
\rho_m ({\ggrec \sigma}; {\ggrec \tau})= \max(\frac{1}{4} + (-1)^{(3-m)/2} \frac{3}{4}\frac{\sum_{k=1}^3 \lambda_k(\prj{G_m({\ggrec \sigma})} \overline{\ttau})}{\sum_{k=1}^3 | \lambda_k(\prj{G_m({\ggrec \sigma})} \overline{\ttau})|}, 0) \in [0, 1],
\end{equation}
and, by convention, we set $\rho_m ({\ggrec \sigma}; {\ggrec \tau})= 0$ when $\prj{G_m({\ggrec \sigma})} \overline{\ttau} = 0$.
\begin{theorem}\label{theo_tr_casegal}
Assume that ${C}$ is given by \eqref{tresca_domain}, that ${\ggrec \sigma}$ satisfies \eqref{satu_bord_tresca} and that $ \lambda_2({\ggrec \sigma}) = \lambda_\ell({\ggrec \sigma})$ for some $\ell \in \{1, 3\}$ and $ \lambda_2({\ggrec \sigma}) \ne \lambda_{m}({\ggrec \sigma})$ with $m = 4-\ell$. Then, the rules \eqref{str_tangent0}, \eqref{str_normal} and \eqref{const_tangent0} can be rewritten as
\begin{eqnarray}
\drv{ \teps^{\rm p}} &=& \displaystyle{ \rho_m(\drv{{\ggrec \varepsilon}}; {\ggrec \sigma}) [{\mathbb S}_m ({\ggrec \sigma}; \drv{ {\ggrec \varepsilon}} ) - {\rm tr}({\mathbb S}_m ({\ggrec \sigma}; \drv{ {\ggrec \varepsilon}} )) v_m ({\ggrec \sigma}) \otimes v_m({\ggrec \sigma})], }\\
\drv{ \teps^{\rm e}} &=& \displaystyle{ \drv{{\ggrec \varepsilon}} - \drv{ \teps^{\rm p}}, } \\
\drv{{\ggrec \sigma}}&= &\displaystyle{ \lambda {\rm tr}(\drv{{\ggrec \varepsilon}} ) {\rm{\bf{I}}} }\\
&& \displaystyle{ + 2\mu\left(\drv{{\ggrec \varepsilon}} - \rho_m(\drv{{\ggrec \varepsilon}}; {\ggrec \sigma}) [{\mathbb S}_m ({\ggrec \sigma}; \drv{ {\ggrec \varepsilon}} ) - {\rm tr}({\mathbb S}_m ({\ggrec \sigma}; \drv{ {\ggrec \varepsilon}} )) v_m ({\ggrec \sigma}) \otimes v_m ({\ggrec \sigma})]\right). }
\end{eqnarray}
\end{theorem}
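To illustrate formula \eqref{formula_proj_trs_bis} in the case $m = 3$ (that is, $\lambda_1({\ggrec \sigma}) = \lambda_2({\ggrec \sigma}) > \lambda_3({\ggrec \sigma})$), here is a Python sketch of ours of the projection on the normal cone; the eigenvectors are assumed orthonormal and the function interface is not from the text:

```python
import numpy as np

def proj_normal_tresca_double(tau_dev, v1, v2, v3):
    """Projection on the normal cone at a Tresca stress state with
    lambda_1 = lambda_2 > lambda_3 (case m = 3, l = 1).
    v1, v2, v3: orthonormal eigenvectors of sigma; tau_dev: deviator of tau."""
    # Orthonormal basis W_{3,i} of G_3(sigma) = {tau : tau v3 = 0}
    W = [np.outer(v1, v1), np.outer(v2, v2),
         (np.outer(v1, v2) + np.outer(v2, v1)) / np.sqrt(2.0)]
    P = sum(np.tensordot(tau_dev, Wi) * Wi for Wi in W)  # projection on G_3
    lam = np.linalg.eigvalsh(P)          # ascending; 0 is always an eigenvalue
    S = P - lam[0] * np.eye(3)           # S_3(sigma; tau), lam[0] = lambda_3(P)
    denom = np.sum(np.abs(lam))
    rho = 0.0 if denom == 0 else max(0.25 + 0.75 * np.sum(lam) / denom, 0.0)
    return rho * (S - np.trace(S) * np.outer(v3, v3))
```

When the deviator already lies in the normal cone (an element ${\ggrec \kappa} - {\rm tr}({\ggrec \kappa}) v_3 \otimes v_3$ with ${\ggrec \kappa}$ positive semidefinite and ${\ggrec \kappa} v_3 = 0$), one checks that $\rho_3 = 1$ and the sketch returns its input unchanged, as it should.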
The proof of Theorem \ref{theo_tr_casegal} is mainly based on the following proposition.
\begin{proposition}\label{propo_casegal}
Assume that $f_{{T}}({\ggrec \sigma}) = k$ and that $ \lambda_1({\ggrec \sigma}) = \lambda_2({\ggrec \sigma}) > \lambda_3({\ggrec \sigma}) $. Then,
\begin{itemize}
\item[(a)] $ \NC{C_{T}}{{\ggrec \sigma}} =\{ {\ggrec \kappa} - {\rm tr}({\ggrec \kappa} ) v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma}) \;|\;
{\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym, +}}, {\ggrec \kappa} v_3({\ggrec \sigma}) = 0\}. $
\item[(b)] $ \NC{C_{T}}{{\ggrec \sigma}} =\{ {\ggrec \kappa} - {\rm tr}({\ggrec \kappa} ) v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma}) \;|\;
{\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym, +}}, {\ggrec \kappa} {\ggrec \sigma} = {\ggrec \sigma} {\ggrec \kappa}\}. $
\item[(c)] $\PRJ{\NC{C_{T}}{{\ggrec \sigma}}}({\ggrec \tau}) = \PRJ{\NC{C_{T}}{{\ggrec \sigma}}}(\overline{\ttau})$ for all ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$.
\item[(d)] For all ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$
\begin{equation}\label{formula_proj_trs_bis}
\PRJ{\NC{C_{T}}{{\ggrec \sigma}}}({\ggrec \tau}) = \rho_3 ({\ggrec \sigma}; {\ggrec \tau}) [{\mathbb S}_3 ({\ggrec \sigma}; {\ggrec \tau}) - {\rm tr}({\mathbb S}_3 ({\ggrec \sigma}; {\ggrec \tau})) v_3 ({\ggrec \sigma}) \otimes v_3 ({\ggrec \sigma})].
\end{equation}
\end{itemize}
\end{proposition}
\begin{ourproof}{of Proposition \ref{propo_casegal}}
\begin{itemize}
\item[(a)] This characterization of the normal cone can be found in \cite{boulmezaoud_khouider2}. It can also be deduced from the characterization in \eqref{lewis_caract} due to \cite{lewis96} (Theorem 8.1).
\item[(b)] If $ {\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym, +}}$ and ${\ggrec \kappa} v_3({\ggrec \sigma}) = 0$, then $0$ is an eigenvalue of ${\ggrec \kappa}$. The spectral decomposition of ${\ggrec \kappa}$ gives ${\ggrec \kappa} = \alpha_1 w_1 \otimes w_1+ \alpha_2 w_2 \otimes w_2$
with $\alpha_i \in {\mathbb R}$ and $w_i \in \{ v_3({\ggrec \sigma})\}^\perp = {\rm{span}}\{ v_1({\ggrec \sigma}), v_2({\ggrec \sigma})\} = N({\ggrec \sigma} - \lambda_1({\ggrec \sigma}){\rm{\bf{I}}})$, $1 \leq i \leq 2$. It is then easy to check that ${\ggrec \sigma}$ and ${\ggrec \kappa}$ commute. \\
Conversely, if ${\ggrec \sigma}$ and ${\ggrec \kappa}$ commute then ${\ggrec \kappa} v_3({\ggrec \sigma})$ is
an eigenvector of ${\ggrec \sigma}$ corresponding to the eigenvalue $ \lambda_3$. Thus,
$v_3({\ggrec \sigma})$ is an eigenvector of ${\ggrec \kappa}$. Let $\alpha_3$ be the corresponding
eigenvalue. Then, ${\ggrec \kappa} _0 = {\ggrec \kappa} - \alpha_3 v_3 \otimes v_3$ is also positive semidefinite
and satisfies ${\ggrec \kappa} _0 v_3({\ggrec \sigma}) = 0$.
\item[(c)] This is a direct consequence of \eqref{indif_conesNT}.
\item[(d)] We need to calculate the projection of any ${\ggrec \tau} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ on $\NC{C_{T}}{{\ggrec \sigma}}$. In view of the characterization of $\NC{C_{T}}{{\ggrec \sigma}}$ above, one is led to consider the minimization problem
\begin{equation}\label{prj_pb_trs1}
\min_{{\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym, +}} \cap G_3({\ggrec \sigma})} \| {\ggrec \tau} - {\ggrec \kappa} + {\rm tr}({\ggrec \kappa}) v_3 \otimes v_3\|^2.
\end{equation}
Since ${\rm tr}( {\ggrec \kappa} - {\rm tr}( {\ggrec \kappa}) v_3 \otimes v_3) = 0$ for all ${\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$, problem \eqref{prj_pb_trs1} can be reformulated in terms of the deviatoric part $\overline{\ttau}$ of ${\ggrec \tau}$ as follows
\begin{equation}\label{prj_pb_trs2}
\min_{ {\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym, +}} \cap G_3({\ggrec \sigma})} \| \overline{\ttau} - {\ggrec \kappa} + {\rm tr}({\ggrec \kappa}) v_3 \otimes v_3\|^2.
\end{equation}
Set
$$
F = {\rm span}\{v_3 \otimes v_3\}, \; H = (F \oplus G_3({\ggrec \sigma}))^\perp,
$$
where $(F \oplus G_3({\ggrec \sigma}))^\perp$ denotes the orthogonal complement of $F \oplus G_3({\ggrec \sigma})$ as a subspace of ${\mathbb M}^{3\times 3}_{\rm{sym}}$.
We may observe that $G_3({\ggrec \sigma})$ is orthogonal to $F$ and
\begin{equation}
{\mathbb M}^{3\times 3}_{\rm{sym}} = F \oplus^\perp G_3({\ggrec \sigma})\oplus^\perp H.
\end{equation}
Besides, ${\rm{\bf{I}}} \in F \oplus G_3({\ggrec \sigma})$ since
$$
{\rm{\bf{I}}} = \sum_{k=1}^3 v_k \otimes v_k = v_3 \otimes v_3 + W_1 + W_2.
$$
Since $H = (F \oplus G_3({\ggrec \sigma}))^\perp$ and ${\rm{\bf{I}}} \in F \oplus G_3({\ggrec \sigma})$ we get
\begin{equation}\label{tita_ortho}
{\rm tr}({\ggrec \eta}) ={\ggrec \eta}: {\rm{\bf{I}}} = 0 \mbox{ for all } {\ggrec \eta} \in H.
\end{equation}
Now, we can write
$$
\overline{\ttau} = {\ggrec \tau}_F + {\ggrec \tau}_G + {\ggrec \tau}_H,
$$
with ${\ggrec \tau}_F$, ${\ggrec \tau}_G$ and ${\ggrec \tau}_H$ the orthogonal projections of $ \overline{\ttau}$ on the subspaces
$F$, $G_3({\ggrec \sigma})$ and $H$ respectively. We may observe that
$$
\begin{array}{rcl}
{\ggrec \tau}_F &=& (\overline{\ttau} : v_3 \otimes v_3)\, v_3 \otimes v_3= a_3 \, v_3 \otimes v_3 \;\;\; \mbox{ with } a_3 = \overline{\ttau} : v_3 \otimes v_3.
\end{array}
$$
Observing that ${\rm tr}(\overline{\ttau}) = 0$ and ${\rm tr}( {\ggrec \tau}_H)=0$ (thanks to \eqref{tita_ortho}),
we see that $a_3 = {\rm tr}({\ggrec \tau}_F) = - {\rm tr}( {\ggrec \tau}_G)$. Problem \eqref{prj_pb_trs2} becomes
$$
\min_{{\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym, +}} \cap G_3({\ggrec \sigma})} \| {\ggrec \tau}_F + {\rm tr}({\ggrec \kappa}) v_3 \otimes v_3\|^2 + \| {\ggrec \tau}_G - {\ggrec \kappa}\|^2,
$$
or
\begin{equation}\label{prj_pb_trs3}
\min_{{\ggrec \kappa} \in {\mathbb M}^{3\times 3}_{\rm{sym, +}} \cap G_3({\ggrec \sigma})} ({\rm tr}({\ggrec \kappa}) - {\rm tr}({\ggrec \tau}_G) )^2 + \| {\ggrec \tau}_G - {\ggrec \kappa}\|^2.
\end{equation}
Let $\{w_1, w_2, v_3\}$ be an orthonormal basis of eigenvectors of ${\ggrec \tau}_G$ with $\mu_1$, $\mu_2$ and $0$ the corresponding eigenvalues (the vector $v_3$
is already an eigenvector). We can write
$$
{\ggrec \tau}_G = \mu_1 w_1 \otimes w_1 + \mu_2 w_2 \otimes w_2.
$$
Since $a_3 = - {\rm tr}( {\ggrec \tau}_G)$ we have
\begin{equation}
a_3 = - \mu_1 - \mu_2.
\end{equation}
Any tensor ${\ggrec \kappa}$ of $G_3({\ggrec \sigma})$ can be written in the form
$$
{\ggrec \kappa} = x w_1 \otimes w_1 + y w_2 \otimes w_2 + z w_1\odot w_2,
$$
with $x = {\ggrec \kappa} : w_1 \otimes w_1 $, $y = {\ggrec \kappa} : w_2 \otimes w_2$ and $z = 2 {\ggrec \kappa} : w_1\odot w_2 = 2 {\ggrec \kappa} : w_1\otimes w_2$. The corresponding matrix is semidefinite positive if and only if
$$
x \geq 0, \; y \geq 0 \mbox{ and } z^2 \leq 4 xy.
$$
In view of these elements, we can reformulate \eqref{prj_pb_trs2} as follows
\begin{equation}\label{prj_pb_trs4}
\min_{x \geq 0, y \geq 0, z^2 \leq 4 xy} (x+y -\mu_1-\mu_2)^2 + (\mu_1-x)^2 + (\mu_2-y)^2 + z^2/2.
\end{equation}
Clearly, $z=0$ when the minimum is reached, and the problem becomes
\begin{equation}\label{prj_pb_trs5}
\min_{x \geq 0, y \geq 0} (x+y -\mu_1-\mu_2)^2 + (x-\mu_1)^2 + (y-\mu_2)^2.
\end{equation}
This quadratic optimization problem can be solved by writing usual
Karush-Kuhn-Tucker (KKT) conditions. The minimizer
is ${\ggrec \kappa}_0 = x_0 w_1 \otimes w_1 + y_0 w_2 \otimes w_2+ z_0 w_1\odot w_2$ with
\begin{equation}\label{formule_x0y0z0}
(x_0, y_0, z_0) =
\left\{
\begin{array}{ll}
(\mu_1, \mu_2, 0) & \mbox{ if } \mu_1 \geq 0 \mbox{ and } \;\mu_2 \geq 0, \\
(0, 0, 0) & \mbox{ if } \;\mu_1 + \mu_2/2 \leq 0 \mbox{ and } \;\mu_2 + \mu_1/2 \leq 0,\\
(\mu_1 +\mu_2/2, 0, 0) & \mbox{ if } \mu_2 < 0 \mbox{ and } \;\mu_1 + \mu_2/2 > 0, \\
(0, \mu_2 + \mu_1/2, 0) & \mbox{ if } \mu_1 < 0 \mbox{ and } \;\mu_2 + \mu_1/2 > 0. \\
\end{array}
\right.
\end{equation}
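As a sanity check (not part of the proof), the case-by-case formula \eqref{formule_x0y0z0} can be compared with a brute-force minimization of \eqref{prj_pb_trs5}. The following Python sketch (the function names are ours) does this on a grid:

```python
# Numerical check of the closed-form minimizer of
#   F(x, y) = (x + y - m1 - m2)^2 + (x - m1)^2 + (y - m2)^2,  x, y >= 0.

def objective(x, y, m1, m2):
    return (x + y - m1 - m2) ** 2 + (x - m1) ** 2 + (y - m2) ** 2

def closed_form(m1, m2):
    """Case-by-case KKT minimizer (x0, y0)."""
    if m1 >= 0 and m2 >= 0:
        return (m1, m2)
    if m1 + m2 / 2 <= 0 and m2 + m1 / 2 <= 0:
        return (0.0, 0.0)
    if m2 < 0 and m1 + m2 / 2 > 0:
        return (m1 + m2 / 2, 0.0)
    return (0.0, m2 + m1 / 2)

def grid_min(m1, m2, n=200, r=3.0):
    """Brute-force minimum of F over a regular grid on [0, r]^2."""
    return min(objective(r * i / n, r * j / n, m1, m2)
               for i in range(n + 1) for j in range(n + 1))

for m1, m2 in [(1.0, 0.5), (-1.0, -1.0), (1.0, -0.5), (-0.5, 1.0), (2.0, -1.5)]:
    x0, y0 = closed_form(m1, m2)
    assert x0 >= 0 and y0 >= 0
    f0 = objective(x0, y0, m1, m2)
    g = grid_min(m1, m2)
    assert f0 <= g + 1e-12   # the closed form is at least as good as any grid point
    assert g <= f0 + 1e-2    # and the grid confirms it is (near-)optimal
```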
We would now like to write ${\ggrec \kappa}_0 $ as
$$
{\ggrec \kappa}_0 = \alpha {\ggrec \tau}_G + \beta (w_1 \otimes w_1 + w_2 \otimes w_2 ) = \alpha {\ggrec \tau}_G + \beta ({\rm{\bf{I}}} - v_3 \otimes v_3).
$$
This is possible if and only if
$$
\alpha \mu_1 + \beta = x_0, \; \alpha \mu_2 + \beta = y_0.
$$
Hence, if $\mu_2\neq \mu_1$ this system has one and only one solution
\begin{equation}
\alpha = \frac{y_0-x_0}{\mu_2-\mu_1}, \; \beta = \frac{\mu_2 x_0 - \mu_1 y_0}{\mu_2-\mu_1}.
\end{equation}
When $\mu_2=\mu_1$ we know that $x_0 = y_0$ (thanks to \eqref{formule_x0y0z0}) and we can take
\begin{equation}
\alpha = 1 \mbox{ and } \beta = -\min(\mu_1, 0).
\end{equation}
Using \eqref{formule_x0y0z0}, we can prove that if $(\mu_1, \mu_2) \neq (0, 0)$ then
\begin{equation}
\alpha = \max\left(\frac{1}{4} + \frac{3}{4}\,\frac{\mu_1+\mu_2}{|\mu_1|+|\mu_2|},\, 0\right), \qquad \beta = -\min(\mu_1, \mu_2, 0)\, \alpha.
\end{equation}
In this case, since ${\rm tr}({\ggrec \kappa}_0) = \alpha {\rm tr}({\ggrec \tau}_G) + 2 \beta$, we can write
$$
\begin{array}{rcl}
{\ggrec \kappa}_0 - {\rm tr}({\ggrec \kappa}_0) v_3 \otimes v_3 &=& \displaystyle{ \alpha {\ggrec \tau}_G + \beta {\rm{\bf{I}}} -(\alpha {\rm tr}({\ggrec \tau}_G) + 3 \beta)v_3 \otimes v_3}, \\
&=& \displaystyle{ \alpha [ {\ggrec \tau}_G - {\rm tr}({\ggrec \tau}_G) v_3 \otimes v_3] + \beta ({\rm{\bf{I}}} - 3 v_3 \otimes v_3)},\\
&=& \displaystyle{ \alpha [ {\ggrec \tau}^\star_G - {\rm tr}({\ggrec \tau}^\star_G) v_3 \otimes v_3],}
\end{array}
$$
with $ {\ggrec \tau}^\star_G = {\ggrec \tau}_G -\min(\mu_1, \mu_2, 0) {\rm{\bf{I}}}$. This ends the proof of formula \eqref{formula_proj_trs_bis}.
\end{itemize}
\end{ourproof}
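The unified expression for $(\alpha, \beta)$ used in the proof of (d) can also be checked numerically against the case-by-case minimizer \eqref{formule_x0y0z0}. The Python sketch below (our own naming, assuming $(\mu_1, \mu_2) \neq (0,0)$) samples random pairs and verifies that $\alpha \mu_i + \beta$ reproduces the corresponding component of the minimizer:

```python
import random

def minimizer(m1, m2):
    """Case-by-case KKT minimizer (x0, y0) of the projected problem."""
    if m1 >= 0 and m2 >= 0:
        return m1, m2
    if m1 + m2 / 2 <= 0 and m2 + m1 / 2 <= 0:
        return 0.0, 0.0
    if m2 < 0 and m1 + m2 / 2 > 0:
        return m1 + m2 / 2, 0.0
    return 0.0, m2 + m1 / 2

def alpha_beta(m1, m2):
    """Unified expression, valid whenever (m1, m2) != (0, 0)."""
    a = max(0.25 + 0.75 * (m1 + m2) / (abs(m1) + abs(m2)), 0.0)
    return a, -min(m1, m2, 0.0) * a

random.seed(1)
for _ in range(10000):
    m1, m2 = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)
    x0, y0 = minimizer(m1, m2)
    a, b = alpha_beta(m1, m2)
    # kappa_0 = a * tau_G + b * (I - v3 (x) v3), written on the eigenbasis
    assert abs(a * m1 + b - x0) < 1e-9
    assert abs(a * m2 + b - y0) < 1e-9
```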
\begin{ourproof}{of Theorem \ref{theo_tr_casegal}}
When $ \lambda_1({\ggrec \sigma}) = \lambda_2({\ggrec \sigma}) > \lambda_3({\ggrec \sigma})$ the result is a straightforward consequence
of Theorem \ref{moreau_elastoplast} and Proposition \ref{propo_casegal}. \\
Assume that $ \lambda_1({\ggrec \sigma}) > \lambda_2({\ggrec \sigma}) = \lambda_3({\ggrec \sigma})$. We have $ \lambda_k(-{\ggrec \sigma}) = - \lambda_{4-k}({\ggrec \sigma})$ for $ 1 \leq k \leq 3$ and
$$
\NC{C_{T}}{-{\ggrec \sigma}} = - \NC{C_{T}}{{\ggrec \sigma}}.
$$
Hence $\PRJ{\NC{C_{T}}{{\ggrec \sigma}}}({\ggrec \tau}) = - \PRJ{\NC{C_{T}}{-{\ggrec \sigma}}}(-{\ggrec \tau})$ and we get
$$
\PRJ{\NC{C_{T}}{{\ggrec \sigma}}}({\ggrec \tau}) = -\rho_3 (-{\ggrec \sigma}; -{\ggrec \tau}) [{\mathbb S}_3 (-{\ggrec \sigma}; -{\ggrec \tau}) - {\rm tr}({\mathbb S}_3 (-{\ggrec \sigma}; -{\ggrec \tau})) v_3 (-{\ggrec \sigma}) \otimes v_3 (-{\ggrec \sigma})].
$$
Since $G_3(-{\ggrec \sigma}) = G_1({\ggrec \sigma})$ we deduce that
$$
{\mathbb S}_3 (-{\ggrec \sigma}; -{\ggrec \tau}) = - {\mathbb S}_1 ({\ggrec \sigma}; {\ggrec \tau}), \;\; \rho_3 (-{\ggrec \sigma}; -{\ggrec \tau}) =\rho_1 ({\ggrec \sigma}; {\ggrec \tau}).
$$
Hence,
$$
\PRJ{\NC{C_{T}}{{\ggrec \sigma}}}({\ggrec \tau}) = \rho_1 ({\ggrec \sigma}; {\ggrec \tau}) [{\mathbb S}_1 ({\ggrec \sigma}; {\ggrec \tau}) - {\rm tr}({\mathbb S}_1 ({\ggrec \sigma};{\ggrec \tau})) v_1 ({\ggrec \sigma}) \otimes v_1 ({\ggrec \sigma})].
$$
Combining with Theorem \ref{moreau_elastoplast} ends the proof.
\end{ourproof}
\begin{remark}
We have
$$
\prj{G_3({\ggrec \sigma})} {\rm{\bf{I}}} =
\displaystyle{ v_1({\ggrec \sigma}) \otimes v_1({\ggrec \sigma}) + v_2({\ggrec \sigma})\otimes v_2({\ggrec \sigma}) = {\rm{\bf{I}}} - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma})}.
$$
Hence, we get the identity
\begin{equation}
\prj{G_3({\ggrec \sigma})} (\overline{\ttau}) = \prj{G_3({\ggrec \sigma})} ({\ggrec \tau}) - \frac{{\rm tr}({\ggrec \tau})}{3} \left({\rm{\bf{I}}} - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma})\right).
\end{equation}
We also deduce the identities
$$
\begin{array}{rcl}
\displaystyle{ \sum_{k=1}^3 \lambda_k(\prj{G_3({\ggrec \sigma})} \overline{\ttau}) } &=& \displaystyle{ \sum_{k=1}^3 \lambda_k(\prj{G_3({\ggrec \sigma})} {\ggrec \tau}) - \frac{2{\rm tr}({\ggrec \tau})}{3}, }\\
\displaystyle{ \sum_{k=1}^3 | \lambda_k(\prj{G_3({\ggrec \sigma})} \overline{\ttau})| } &=&\displaystyle{ \sum_{k=1}^3 | \lambda_k(\prj{G_3({\ggrec \sigma})} {\ggrec \tau}) - \frac{{\rm tr}({\ggrec \tau})}{3}| - \frac{|{\rm tr}({\ggrec \tau})|}{3}. }
\end{array}
$$
\end{remark}
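These identities are easy to verify numerically. The sketch below (our own naming) assumes, as the decomposition of elements of $G_3({\ggrec \sigma})$ used in the proof above suggests, that the orthogonal projection on $G_3({\ggrec \sigma})$ is ${\ggrec \tau} \mapsto P {\ggrec \tau} P$ with $P = {\rm{\bf{I}}} - v_3({\ggrec \sigma}) \otimes v_3({\ggrec \sigma})$:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal frame
v3 = Q[:, 2]
P = np.eye(3) - np.outer(v3, v3)

def prj_G3(t):
    # Orthogonal projection onto symmetric matrices supported on {v3}^perp.
    return P @ t @ P

A = rng.normal(size=(3, 3))
tau = (A + A.T) / 2                              # random symmetric tensor
tau_dev = tau - np.trace(tau) / 3 * np.eye(3)    # deviatoric part

lhs = prj_G3(tau_dev)
rhs = prj_G3(tau) - np.trace(tau) / 3 * P
assert np.allclose(lhs, rhs)                     # first identity
# trace identity: the eigenvalue sum drops by 2 tr(tau) / 3
assert np.isclose(np.trace(lhs), np.trace(prj_G3(tau)) - 2 * np.trace(tau) / 3)
```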
\section{Concluding remarks}
\subsection{Main results}
The reformulation of the perfectly elasto-plastic model described in Section \ref{sec_main_res} is generally based on two main steps:
\begin{itemize}
\item[(a)] the characterization of the normal and tangent cones at any given point of the yield surface,
\item[(b)] the projection of the strain rate and its transform under Hooke's law on these cones at each point of the
yield domain, according to Equations \ref{str_tangent0}, \ref{str_normal}, and \ref{const_tangent0}.
\end{itemize}
The result is an explicit evolution equation for the internal stress ${\ggrec \sigma}$ (\ref{projHookEpsP1}) which, together with the flow evolution equations, forms a closed system of PDEs (\ref{evol_sys}). \\
In Sections \ref{consti_laws} and \ref{VM-T-cr}, we have considered the particular case where the yield domain is described by an arbitrary (finite) number of functional constraints (convex and differentiable) and obtained the explicit expression of these projections when only one or only two of these constraints are saturated at a given boundary point. We then looked at the most common practical examples, the Von Mises and Tresca yield criteria. As we saw, while the Von Mises example falls into the case of a single functional constraint, the Tresca criterion turned out to be much more complex and required a separate treatment. Nevertheless, using several convex analysis and linear algebra tools, we were able to reduce it explicitly to the form in (\ref{evol_sys}). This demonstrates that our approach is general and can be applied in principle to all imaginable yield criteria. \\
\subsection{General case of spectral yield functions}
In the light of the Von Mises and Tresca examples considered here, it could be observed, however,
that in most practical situations the yield functions are {\it spectral}. A function $f \; : \; {\mathbb M}^{3\times 3}_{\rm{sym}} \to {\mathbb R}$ is said to be spectral (or weakly orthogonally invariant) if $f({\ggrec \tau} {\ggrec \sigma} {\ggrec \tau}^{-1}) = f({\ggrec \sigma})$ for any ${\ggrec \sigma} \in {\mathbb M}^{3\times 3}_{\rm{sym}}$ and ${\ggrec \tau} \in \ORT{3}$ (see, e. g., \cite{lewis96} and \cite{jarre2000}). Thus, $f$ is spectral if and only if there exists a {\it permutation invariant } function ${\widehat{f}} \;:\;{\mathbb R}^n \to {\mathbb R}$ such that
\begin{equation}
f({\ggrec \sigma}) = {\widehat{f}}( \lambda_1({\ggrec \sigma}), \cdots, \lambda_n({\ggrec \sigma})).
\end{equation}
(${\widehat{f}}$ is said to be permutation invariant if ${\widehat{f}}(v_{\pi(1)}, \cdots, v_{\pi(n)}) = {\widehat{f}}(v)$ for any $v \in {\mathbb R}^n$ and any permutation $\pi$ of $\{1, \cdots, n\}$). Obviously, such a function ${\widehat{f}}$ is unique since
\begin{equation}
{\widehat{f}}(v_1, \cdots, v_n) = f(\Diag{v}).
\end{equation}
For example, in the case of the Von Mises yield criterion \eqref{VonMisesDom} we have
$$
{\widehat{f}}_{M}(v_1, v_2, v_3) = \frac{1}{6} \sum_{1 \leq i < j \leq 3} (v_i - v_j)^2,
$$
while for the Tresca criterion \eqref{trsc_function} we have
$$
{\widehat{f}}_{T}(v_1, v_2, v_3) = \frac{1}{2} \sum_{1 \leq i < j \leq 3} |v_i -v_j|.
$$
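Both functions are indeed permutation invariant, and simple algebra shows that ${\widehat{f}}_{M}(v)$ equals half the squared Euclidean norm of the centred vector $v - \frac{1}{3}(\sum_k v_k)(1,1,1)$. A quick Python check (our own naming):

```python
import itertools

def f_M(v1, v2, v3):
    # Von Mises: (1/6) * sum over pairs of (v_i - v_j)^2
    return sum((a - b) ** 2 for a, b in itertools.combinations((v1, v2, v3), 2)) / 6

def f_T(v1, v2, v3):
    # Tresca: (1/2) * sum over pairs of |v_i - v_j|
    return sum(abs(a - b) for a, b in itertools.combinations((v1, v2, v3), 2)) / 2

v = (1.7, -0.3, 0.9)
for p in itertools.permutations(v):          # permutation invariance
    assert abs(f_M(*p) - f_M(*v)) < 1e-12
    assert abs(f_T(*p) - f_T(*v)) < 1e-12

mean = sum(v) / 3                            # f_M = half squared norm of centred v
assert abs(f_M(*v) - 0.5 * sum((x - mean) ** 2 for x in v)) < 1e-12
```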
Spectral functions have been the subject of much mathematical work
which is not fully exploited in plasticity or fracture mechanics (see, e. g., \cite{lewis96}, \cite{jarre2000}, \cite{hornBook}, \cite{dani2008} and references therein). The purpose of this section is
to show that one can go much further in characterizing projections on the normal and tangent cones for spectral yield domains.
\begin{proposition}
Assume that ${C}$ is defined by inequalities \eqref{inequal_yield} with $f_i$ spectral for all $1 \leq i \leq m$, and that ${\rm int}{({C})} \ne \emptyset$. Then, for all ${\ggrec \sigma}$,
\begin{equation}\label{nrmlC_sublev}
\begin{array}{rcl}
\NC{{C}}{{\ggrec \sigma}} &=& \{ \sum_{i=1}^m \alpha_i {\ggrec \tau}_i^t \Diag{v_i} {\ggrec \tau}_i \;|\; {\ggrec \tau}_i \in \ORT{3}, \; \alpha_i \geq 0, \\
&& \; v_i \in \partial {\widehat{f}}_i ( \lambda({\ggrec \sigma})), \alpha_i f_i({\ggrec \sigma}) = 0, \ {\ggrec \tau}_i^t \Diag{ \lambda({\ggrec \sigma})} {\ggrec \tau}_i = {\ggrec \sigma} \}.
\end{array}
\end{equation}
\end{proposition}
\begin{ourproof}{}
The proof is a straightforward consequence of well-known results in convex analysis. Set
\begin{equation}
F({\ggrec \sigma}) = \max_{1 \leq i \leq m} f_i({\ggrec \sigma}).
\end{equation}
Obviously, $F$ is convex, spectral and ${C} = \{F \leq 0\}$.
If $F({\ggrec \sigma}) = 0$ then (see, e. g., \cite{LemareLivre}, Theorem 1.3.5, p.~172)
$$
\NC{{C}}{{\ggrec \sigma}} = \{\mu {\ggrec \tau} \;|\; {\ggrec \tau} \in \partial F ({\ggrec \sigma}) \mbox{ and } \mu \geq 0\}.
$$
We also know that (see, e. g., \cite{LemareLivre}, Lemma 4.4.1)
$$
\partial F ({\ggrec \sigma}) = {\rm co} \left(\cup_{i \in J({\ggrec \sigma})} \partial f_i ({\ggrec \sigma})\right), \; \;\mbox{ with } J({\ggrec \sigma}) = \{ i \;|\; 1 \leq i \leq m \mbox{ and } f_i({\ggrec \sigma}) = 0\},
$$
(where ${\rm co}(A)$ denotes the convex hull of a subset $A$ of ${\mathbb M}^{3\times 3}_{\rm{sym}}$). Thus,
$$
\NC{{C}}{{\ggrec \sigma}} = \left\{\sum_{i=1}^m \mu_i {\ggrec \eta}_i \;|\; \mu_i \geq 0, \; {\ggrec \eta}_i \in \partial f_i ({\ggrec \sigma}) \mbox{ and } \mu_i f_i ({\ggrec \sigma}) = 0\right\}.
$$
We then obtain \eqref{nrmlC_sublev} by using the following characterization of $\partial f_i$ (see \cite{lewis96}, Theorem 8.1):
\begin{equation}\label{lewis_caract}
\begin{array}{rcl}
\partial f_i ({\ggrec \sigma}) &=& \{{\ggrec \tau}^t \Diag{w} {\ggrec \tau} \;|\; w \in \partial {\widehat{f}}_i ( \lambda({\ggrec \sigma})), \\
&& \; {\ggrec \tau} \in \ORT{3},\; {\ggrec \tau}^t \Diag{ \lambda({\ggrec \sigma})} {\ggrec \tau} = {\ggrec \sigma} \}.
\end{array}
\end{equation}
\end{ourproof}
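For a smooth spectral function, the characterization \eqref{lewis_caract} reduces to $\nabla f({\ggrec \sigma}) = {\ggrec \tau}^t \Diag{\nabla {\widehat{f}}( \lambda({\ggrec \sigma}))} {\ggrec \tau}$. As an illustration (a numerical sketch, not part of the proof), the following Python code checks this for the Von Mises function, whose spectral gradient at $\lambda$ has components $\lambda_i - \frac{1}{3}\sum_k \lambda_k$, against finite-difference directional derivatives:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 3))
sigma = (A + A.T) / 2                          # random symmetric "stress" tensor

def f_M(s):
    """Spectral Von Mises function f(sigma) = fhat(lambda(sigma))."""
    lam = np.linalg.eigvalsh(s)
    return sum((lam[i] - lam[j]) ** 2
               for i in range(3) for j in range(i + 1, 3)) / 6

# Spectral gradient: transport grad fhat(lambda) = lambda - mean(lambda)
# back with the eigenvectors of sigma.
lam, U = np.linalg.eigh(sigma)
grad_spectral = U @ np.diag(lam - lam.mean()) @ U.T

# It coincides with the deviatoric part of sigma, as expected for Von Mises.
assert np.allclose(grad_spectral, sigma - np.trace(sigma) / 3 * np.eye(3))

# Cross-check against central differences along random symmetric directions.
h = 1e-6
for _ in range(5):
    B = rng.normal(size=(3, 3))
    D = (B + B.T) / 2
    dd = (f_M(sigma + h * D) - f_M(sigma - h * D)) / (2 * h)
    assert abs(dd - np.sum(grad_spectral * D)) < 1e-5
```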
\subsection{Outlook}
The formulation we proposed here clearly reveals the nature of the nonlinearity of perfect elasto-plasticity. Through the expressions in \eqref{str_tangent0} to \eqref{2.15}, the rules governing the behaviour of an elasto-plastic material are effectively reduced to the calculation of the projectors on the tangent and normal cones of the yield domain. This led us to propose the new equation of motion
\begin{equation}
\frac{\partial^2 v }{\partial t^2} - \mathrm{div}\, {\boldsymbol {\mathscr H}}({\ggrec \sigma}, \drv{{\ggrec \varepsilon}} (v)) = \frac{\partial h}{\partial t}.
\end{equation}
The study of this equation of elasto-plastic waves remains to be done.
The authors also plan to extend the approach proposed here to
elasto-plastic deformations with hardening. This is the subject of a paper in preparation.
Moreover, a near-future goal is to apply the results obtained here to the modelling and study of sea ice dynamics, since sea ice is known to behave like an elasto-plastic material \cite{coon1}. However, because of the apparent technical difficulty, earlier authors who worked on this subject either assumed that elastic deformations can be ignored once the plastic regime is reached \cite{coon1}, or assumed that sea ice behaves as a viscous plastic material \cite{hibler}. Also, due to the large difference between the horizontal and vertical (thickness) extents of sea ice, it is effectively considered a 2D material.
\section*{Acknowledgement}
This work was conducted in 2020--2021, when T. Z. B. was a visiting professor at the University of Victoria. This visit was funded by the French Government through the Centre National de la Recherche Scientifique (CNRS)-Pacific Institute for the Mathematical Sciences (PIMS) mobility program. The research of B. K. is partially funded by a Natural Sciences and Engineering Research Council of Canada Discovery grant.
\section{Introduction}
\label{sec:intro}
Nowadays, terrorism represents a worldwide threat, illustrated by the ongoing deadly attacks perpetrated by the Islamic State (also called ISIS, ISIL, or Daesh) in Iraq and Syria, Boko Haram in Nigeria, and the al-Nusra Front (also called Jabhat al-Nusra) in Syria for example. In response to this threat, states may use a combination of counterterrorism policies, which include the use of criminal justice, military power, intelligence, psychological operations, and preventive measures \citep[p.~45]{Crelinsten2009}. In particular, following the tragic terrorist attacks on September 11, 2001 (9/11) in New York, states have tended to increase their spending to counter terrorism. From 2001 to 2008, the expenditure of worldwide homeland security increased by US\$\,70 billion \citep{Nato2008}. In the US alone, the 2013 federal budget devoted to combating terrorism reached around US\$\,17.2 billion \citep{Washington2013}.
Theoretical work and empirical studies at country-level pointed out that the causes of terrorism are complex and multidimensional and include economic, political, social, cultural, and environmental factors (\citealp{Brynjar2000}; \citealp[p.~60]{Richardson2006}; \citealp{Gassebner2011}; \citealp{Krieger2011}; \citealp{Hsiang2013}). Moreover, the relationship between specific factors and terrorism is often not straightforward. For example, \citet[p.~194]{Hoffman2006} described the role of media in covering terrorism as a ``double-edged sword'': publicity promotes terrorist groups, which facilitates the recruitment of new members and strengthens the cohesion of groups, but in turn, encourages society to marshal resources to combat terrorism \citep{Rapoport1996}. At the individual level, the actions and beliefs of each member of terrorist groups are important drivers of terrorism as well (\citealp[p.~29]{Crenshaw1983}; \citealp[p.~141]{Wilkinson1990}; \citealp[pp.~92--93]{Richardson2006}).
From a modelling perspective, terrorism is therefore not an entirely random process. Similar to contagious diseases \citep{Jacquez1996, Mantel1967, Loftin1986} or seismic activity \citep{Mohler2011,Crough1980,Courtillot2003}, terrorist events are rarely homogeneously distributed in space. In contrast, they tend to exhibit high concentration levels in specific locations (so-called ``hot-spots'') \citep{LaFree2009b,LaFree2010,LaFree2012, Nacos2010, Steen2006, Piegorsch2007}, and can spread from one location to another \citep{Midlarsky1980, Neumayer2010, Mohler2013}, as in a ``diffusion'' process \citep{Cohen1999, Forsberg2014}. Moreover, the activity of terrorism may also vary over time and often exhibits temporally clustered patterns like crime or insurgencies \citep{Anselin2000, Eck2005,Zammit2012}.
Despite successful applications of space-time Bayesian models in similar fields of research, such as crime and conflict \citep{Zammit2012, Zammit2013, Mohler2013, Lewis2012, Rodrigues2010}, these models have not yet been applied in terrorism research. Most empirical research in terrorism has focused on its temporal dimension \citep{Hamilton1983, Porter2012, Brandt2012, Barros2003, Enders1999, Suleman2012, Bilal2012, Holden1986, Enders1993, Enders2011, Enders2005b, Enders2000, Weimann1988, Raghavan2013}, or has considered purely spatial models only \citep{Braithwaite2007, Savitch2001, Brown2004}. Moreover, studies that have explicitly integrated both space and time dimensions have been carried out at country or higher level of analysis \citep{LaFree2010, Midlarsky1980, Neumayer2010, Enders2006, Gao2013}, or at subnational level of analysis but within specific study areas \citep{LaFree2012, Behlendorf2012, Nunn2007, Piegorsch2007, Oecal2010, Medina2011, Siebeneck2009, Mohler2013}.
As a result, scholars have failed to systematically capture the fine-scale spatial dynamics of terrorism. Local drivers of terrorism have not been identified and their effects have not been systematically assessed. In an effort to address this shortcoming, we use space-time Bayesian models based on the stochastic partial differential equation (SPDE) approach implemented through computationally efficient integrated nested Laplace approximation (INLA) techniques \citep{Rue2009, Lindgren2011}. Our approach, which integrates spatially explicit covariates and data on terrorist events, allows us to capture local-scale spatial patterns of both the capacity of terrorist attacks to cause death and the number of lethal attacks, which we will hereafter refer to as the \textit{lethality} of terrorism and the \textit{frequency} of lethal attacks, respectively.
Moreover, this study provides a measure of the effects of local-scale factors involved in the spatial and temporal variations of the lethality and the frequency of terrorism across the world from 2002 to 2013. The results of this study could benefit policy makers needing a systematic and accurate assessment of the security threat posed by deadly terrorist activity at subnational level. The paper is structured as follows. Section~\ref{sec:data} briefly introduces the data used for the analysis. The statistical models are described in Section~\ref{sec:model} and the results are provided in Section~\ref{sec:result}. Finally, conclusions and recommendations for further research are discussed in Section~\ref{sec:discussion}. The computer code used in this paper is available in supplementary material.
\section{Data Selection}
\label{sec:data}
\subsection{Terrorism database}
\label{subsec:terrorismdatabase}
In order to build valuable, empirically-based models, it is crucial to base an analysis on a data source that is as suitable as possible for a given study \citep{Zammit2012}. There are currently four major databases that provide data on worldwide \textit{non-state} terrorism (terrorism perpetrated by non-state actors): the \textit{Global Terrorism Database} (GTD), the \textit{RAND Database of Worldwide Terrorism Incidents} (RDWTI), the \textit{International Terrorism: Attributes of Terrorist Events} (ITERATE), and the \textit{Global Database of Events, Language, and Tone} (GDELT). ITERATE has been extensively referred to in terrorism research \citep{Enders2011}; however, its events are geolocalised at the country level, which does not allow subnational processes to be captured. GDELT is not suitable for our purpose since it does not provide the number of fatalities or information on the lethality of terrorism. Equally problematic, GDELT uses a fully automated coding system based on Conflict and Mediation Event Observations (CAMEO) (for further information on CAMEO, see: \url{http://data.gdeltproject.org/documentation/CAMEO.Manual.1.1b3.pdf}), which may lead to a strong geographic bias, as mentioned by \citet{Hammond2014}.
Hence, RDWTI and GTD are the only potentially relevant databases that provide geolocalised terrorist events across the world. Drawing from \citeauthor{Sheehan2012}'s approach to compare terrorism databases (\citeyear{Sheehan2012}), we defined four criteria to select the one which will be used in our study: \textit{conceptual clarity}, \textit{scope}, \textit{coding method}, and \textit{spatial accuracy}. Given that the concept of terrorism is intrinsically ambiguous and being debated to this day \citep{Beck2013}, \textit{conceptual clarity} in both the definition of terrorist events and the \textit{coding method} used to gather data are crucial. In both GTD and RDWTI, the definition used to class an event as terrorism is clearly specified. The \textit{coding method} of GTD appears more rigorous, since events are gathered from numerous sources and articles (more than 4,000,000 news articles and 25,000 news sources used from 1998 and 2013 \citep{GTD2014}), which limits the risk of bias resulting from inconsistent ways of reporting the number of fatalities in different media \citep{Drakos2006a, Drakos2007}. The data collection methodology used in RDWTI is less reliable since some events are gathered from two sources only. Moreover, the \textit{scope} of GTD is wider than RDWTI. GTD is updated annually and includes more than 140,000 events from 1970 until 2014, whereas RDWTI was not updated after 2009 and includes 40,129 events from 1969 to 2009 only \citep{Start2014, Rand2011}.
Since this research investigates fine-scale spatial phenomena, we put particular emphasis on the \textit{spatial accuracy} of the data. GTD is the only database that includes a variable assigning the spatial accuracy of each individual observation. Spatial accuracy is represented by an ordinal variable called \textit{specificity}, with 5 possible levels of spatial accuracy (for further information on \textit{specificity}, see the GTD codebook: \url{https://www.start.umd.edu/gtd/downloads/Codebook.pdf}) \citep{GTD2014}. Based on all these considerations, we have chosen GTD as the appropriate data source for this study. The dataset contains 35,917 spatially accurate events (events corresponding to the highest level of spatial accuracy, with \textit{specificity}=1), occurring between 2002 and 2013.
\subsection{Covariates}
\label{subsec:covariates}
Potentially relevant covariates were identified based on a thorough review of 43 studies carried out at country level by \citet{Gassebner2011}, which highlighted the main explanatory factors among 65 potential determinants of terrorism. Among those factors, we consider covariates that satisfy two essential characteristics: (i) potential relationship with the lethality of terrorism and/or the frequency of lethal terrorist attacks; (ii)
availability at high spatial resolution, in order to model fine-scale spatial dynamics of terrorism worldwide. Seven spatial and space-time covariates met these criteria and their potential association with terrorism is described in more detail below: satellite night light ($lum$), population density ($pop$), political regime ($pol$), altitude ($alt$), slope ($slo$), travel time to the nearest large city ($tt$), and distance to the nearest national border ($distb$).
First, we assess the role of economic factors on the lethality of terrorism, whose possible effects are still under debate. Most country-level empirical studies have not provided any evidence of a linear relationship between terrorism and gross domestic product (GDP) \citep{Abadie2006, Drakos2006a, Gassebner2011, Krueger2008, Krueger2003, Piazza2006}, without excluding a possible non-linear relationship \citep{Enders2012}. Case studies focused on the Middle East, including Israel and Palestine, showed that GDP is not significantly related to the number of suicide terrorist attacks \citep{Berman2008}. Few studies, however, found that countries with high per capita GDP may encounter high levels of terrorist attacks \citep{Tavares2004, Blomberg2009}. In line with the subnational nature of our study, we use NOAA satellite lights at night (Version 4 DMSP-OLS) as a covariate, which provides information about worldwide human activities on a yearly basis and at a high spatial resolution (30 arc-second grid) \citep{Chen2011, NOAA2014}. This variable has been used as a proxy for socio-economic development measures such as per capita GDP estimation \citep{Sutton2002, Sutton2007, Elvidge2007, Ebener2005, Henderson2009}. Note that three versions are available: \textit{Average Visible}, \textit{Stable Lights}, and \textit{Cloud Free Coverages}. We use \textit{Stable Lights}, which filters background noise and identifies zero-cloud free observations \citep{NOAA2014}. In order to compare values of different years, we calibrate the data according to the procedure described in \citet[Chap.~6]{Elvidge2013}.
Second, we assess the role of demography. Cities may provide more human mobility, anonymity, audiences and a larger recruitment pool in comparison to rural areas (\citealp[p.~115]{Crenshaw1990}; \citealp{Savitch2001}). Large cities, in particular, offer a high degree of anonymity for terrorists to operate \citep[p.~41]{Laqueur1999}. More specifically, densely populated areas appear vulnerable and are usually more prone to terrorism than sparsely populated areas \citep{Ross1993, Savitch2001, Crenshaw1981, Swanstrom2002, Coaffee2010}. In addition, locations that shelter high-value symbolic targets (buildings or installations), human targets (government officials, mayors, etc.), and public targets (public transports, shopping centres, cinemas, sport arenas, public venues, etc.) are particularly vulnerable to suicide terrorism \citep[p.~167]{Hoffman2006}. Therefore, we use the Gridded Population of the World (v3), which provides population density on a yearly basis and at high-resolution (2.5 arc-minute grid) \citep{CIESIN2005}. Moreover, terrorists usually require free and rapid movement by rail or road in order to move from and to target points (\citealp{Heyman1980}; \citealp[p.~189]{Wilkinson1979}). We compute the travel time from each terrorist event to the nearest large city (more than 50,000 inhabitants) based on Travel Time to Major Cities \citep{Nelson2008} at a high spatial resolution (30 arc-second grid).
Third, we assess the role of geographical variables: altitude, surface topography (slope), and distance to the nearest national border. Although the relationship between altitude, slope, and the lethality of terrorism is not straightforward, both variables provide an indication of the type of the geographical location, which could be a determining factor for terrorists regarding their choice of target \citep{Ross1993}. Moreover, \citet{Nemeth2014} suggested that distance to the nearest national border and altitude or slope might have an impact on terrorist activity. We extract both variables from NOAA Global Relief Model (ETOPO1), which provides altitude values at high spatial resolution (1 arc-minute grid) \citep{Amante2009}.
Fourth, we assess the role of democracy. Under-reporting biases may occur especially in non-democratic countries, where the press is often not free \citep{Drakos2006a, Drakos2007}. We extract the level of democracy from the Polity IV Project, Political Regime Characteristics and Transitions, 1800-2014 (\text{Polity IV}) \citep{Marshall2014}. \text{Polity IV} provides information on the freedom of the press and captures the level of democracy from $-10$ (hereditary monarchy) to $+10$ (consolidated democracy) for most independent countries from 1800 to 2014. Therefore, it is commonly referred to as a proxy for the type of regime or the extent of constraints on democratic institutions \citep{Gleditsch2007,Li2005,Piazza2006}.
Fifth, we assess the role of ethnicity. We compute the number of different ethnic groups from the ``Georeferencing of ethnic groups'' (GREG) database. GREG is the digitalised version of the Soviet Atlas Narodov Mira (ANM) and counts 1,276 ethnic groups around the world \citep{Weidmann2010b}. Although ANM includes information dating back to the 1960s, it is still regarded as a reliable source on ethnicity across the world \citep{Bhavnani2012, Morelli2014}. Although ethnic diversity does not necessarily lead to violence per se \citep[p.~68]{Silberfein2003}, studies at the country level suggest that more terrorism may occur in ethnically fragmented societies \citep{Kurrild2006, Gassebner2011} and in countries with strong ethnic tensions, or may originate from ethnic conflicts in other regions \citep{Basuchoudhary2010}.
\section{Modelling the Spatial Dynamics of Terrorism}
\label{sec:model}
\subsection{SPDE Framework}
\label{subsec:spdemodel}
We assume that both the lethality of terrorism and the frequency of lethal terrorist attacks are continuous phenomena with Gaussian properties, which exhibit dependencies in both space and time. We suggest modelling their spatial dynamics through the SPDE approach introduced by \citet{Lindgren2011}. The solution of the SPDE given in equation~(\ref{eq:spde}) is a Gaussian field (GF) \citep{Lindgren2011}, whose approximation represents a Gaussian Markov random field (GMRF) used herein to model the spatio-temporal dependencies inherent in the data. The linear SPDE can be formulated as:
\begin{equation}
(\kappa^2 - \Delta )^{\alpha/2}(\tau \zeta(\bm{s})) = \mathcal{W}(\bm{s}),\quad \bm{s}\in \mathcal{D}\;,\label{eq:spde}
\end{equation}
with the Laplacian $\Delta$, smoothness parameter $\alpha=\lambda+1$ (for two-dimensional processes), scale parameter $\kappa>0$, variance parameter $\tau$, domain $\mathcal{D}$, and Gaussian spatial white noise $\mathcal{W}(\bm{s})$ \citep[chap 6]{Blangiardo2015}. The stationary solution of equation~(\ref{eq:spde}) is the GF $\zeta(\bm{s})$ with Mat\'{e}rn covariance function:
\begin{equation}
Cov(\zeta(\bm{s}_{\bm{i}}),\zeta(\bm{s}_{\bm{j}})) =
\sigma^2_{\zeta}\frac{1}{\Gamma(\lambda)2^{\lambda-1}}\bigg(\kappa\left\|\bm{s}_{\bm{i}}-\bm{s}_{\bm{j}}\right\|\bigg)^\lambda K_\lambda\bigg(\kappa\left\|\bm{s}_{\bm{i}}-\bm{s}_{\bm{j}}\right\|\bigg)
\;,\label{eq:cov}
\end{equation}
where $\left\|\bm{s}_{\bm{i}}-\bm{s}_{\bm{j}}\right\|$ is the Euclidean distance between two locations, $\sigma^2_{\zeta}$ is the marginal variance, and $K_\lambda$ is the modified Bessel function of the second kind and order $\lambda>0$. The distance beyond which the spatial correlation becomes negligible (for $\lambda>0.5$) is given by the range $r$ (vertical dotted line in figure~\ref{fig:mesh}, \textit{centre} and \textit{right}), which can be empirically derived from the scale parameter to be estimated as $r=\sqrt{8\lambda}/\kappa$. The GF $\zeta(\bm{s})$ is approximated as a GMRF $\tilde{\zeta}(\bm{s})$ through a finite element method using basis functions defined on a Constrained Refined Delaunay Triangulation (mesh) over the earth, modelled as a sphere (figure~\ref{fig:mesh}, \textit{left}) \citep{Lindgren2011}. Here, we use a three-stage Bayesian hierarchical modelling framework \citep{Banerjee2014} to model the lethality of terrorism as a Bernoulli process (Section~\ref{subsec:bernoullimodel}) and the frequency of lethal terrorist attacks as a Poisson process (Section~\ref{subsec:poissonmodel}).
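As a rough numerical check of the relation between the range and the scale parameter, the following pure-Python sketch evaluates the Mat\'{e}rn correlation with $\lambda=1$ at the empirical range $r=\sqrt{8\lambda}/\kappa$ (the study itself uses \texttt{R-INLA}; the quadrature-based Bessel function below is an illustrative assumption of this sketch, not part of the original analysis):

```python
import math

def bessel_k(nu, x, n=2000, t_max=12.0):
    """Modified Bessel function of the second kind via the integral
    representation K_nu(x) = int_0^inf exp(-x*cosh(t)) * cosh(nu*t) dt,
    approximated with the trapezoidal rule (adequate for x > 0.1)."""
    h = t_max / n
    total = 0.5 * (math.exp(-x)
                   + math.exp(-x * math.cosh(t_max)) * math.cosh(nu * t_max))
    for i in range(1, n):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def matern_corr(d, kappa, lam=1.0):
    """Matern correlation (the covariance above with sigma^2 = 1) at distance d."""
    if d == 0.0:
        return 1.0
    u = kappa * d
    return (u ** lam) * bessel_k(lam, u) / (math.gamma(lam) * 2.0 ** (lam - 1.0))

kappa_bern = 23.32                          # posterior mean (Bernoulli model)
r = math.sqrt(8.0 * 1.0) / kappa_bern       # empirical range r = sqrt(8*lambda)/kappa
corr_at_range = matern_corr(r, kappa_bern)  # roughly 0.1, as in the figure
```

The sketch confirms that at distance $r$ the correlation has decayed to approximately 0.1, the conventional "negligible" level marked by the dotted lines in figure~\ref{fig:mesh}.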
\vspace{0pt}
\begin{figure}[hb]
\centering
\raisebox{.2\height}{\includegraphics[scale=0.17]{mesh1}}
\includegraphics[scale=0.17]{Matern}
\includegraphics[scale=0.17]{Matern2}
\caption{\textit{Left}: Constrained Refined Delaunay Triangulation (mesh) with 9,697 vertices, on top of which the SPDE and its discretised solutions (GMRF) are built \citep{Lindgren2012}. Mat\'{e}rn correlation function (solid line) with parameter $\lambda=1$ and its $CI_{95\%}$ (dashed lines) for the Bernoulli model with posterior mean $\kappa\approx 23.32$ (\textit{centre}) and the Poisson model with posterior mean $\kappa\approx 76.03$ (\textit{right}). Note the differences in the posterior mean range (vertical dotted line), which corresponds to a correlation of $\approx 0.1$ (horizontal dotted line), between the Bernoulli (\textit{centre}) and Poisson (\textit{right}) models}
\label{fig:mesh}
\end{figure}
\subsection{Bernoulli Model}
\label{subsec:bernoullimodel}
While terrorist attacks are inherently discrete in space (they occur at specific locations on earth), we consider their lethality $Y$ as a continuous phenomenon (in the sense of geostatistics \citep{Cressie1991}), which is observed at locations $\bm{s}$ (attacks) over the surface of the earth $\mathbb{S}^2$ at time $t\in \mathbb{R}$. The lethality is assumed to be the realisation of a continuously indexed space-time process $Y(\bm{s},t)\equiv \{y(\bm{s},t): (\bm{s}, t) \in \mathcal{D} \subseteq \mathbb{S}^2 \times \mathbb{R}\}$, from which inference can be made about the process at any desired location in $\mathcal{D}$ \citep{Cameletti2013}. Hence, our hierarchical modelling framework is composed of three levels:
\begin{subequations}
\begin{align}
y(\bm{s}_i,t)\vert \bm{\theta}, \tilde{\zeta}\sim \text{Bernoulli($\pi(\bm{s}_i,t)$)}\label{eq:BHM1} \\
\text{logit($\pi(\bm{s}_i,t)$)}\vert \bm{\theta}=\eta(\bm{s}_i,t)\vert \bm{\theta}=\beta_{0} + \bm{z}(\bm{s}_i,t)\bm{\beta} + \tilde{\zeta}(\bm{s}_i,t) + \epsilon(\bm{s}_i,t) \label{eq:BHM2} \\
\bm{\theta}=p(\bm{\theta}),\label{eq:BHM3}
\end{align}
\label{eq:BHM}
\end{subequations}
where the lethality $y(\bm{s}_i,t)$ is a dichotomous variable that takes the value 1 if the attack generated one or more deaths, and 0 if not (equation~(\ref{eq:BHM1})). The parameters to be estimated are $\bm{\theta}=\{\beta_{0}, \bm{\beta}, \sigma^2_\epsilon,\tau,\kappa,\rho\}$, which include the precision of the GMRF $\tau=1/\sigma^2_{\tilde{\zeta}}$, the scale parameter $\kappa>0$ of its Mat\'{e}rn covariance function (equation~(\ref{eq:cov})), and the temporal autocorrelation parameter $\left|\rho\right|<1$ (described in more detail below) (equations~(\ref{eq:BHM1}),~(\ref{eq:BHM2}),~(\ref{eq:BHM3})). The conditional distribution of the linear predictor $\eta(\bm{s}_i,t)=\;$logit($\pi(\bm{s}_i,t)$), given the parameters $\bm{\theta}$ (equation~(\ref{eq:BHM2})), includes Gaussian white noise $\epsilon(\bm{s}_i,t)\sim \mathcal{N}(0,\sigma^2_\epsilon)$, with measurement error variance $\sigma^2_\epsilon$, a vector of $k$ covariates $\bm{z}(\bm{s}_i,t)=(z_1(\bm{s}_i,t),\dots, z_k(\bm{s}_i,t))$ with coefficient vector $\bm{\beta}=(\beta_1,\dots,\beta_k)^{\prime}$ and the GMRF $\tilde{\zeta}(\bm{s}_i,t)$.
Based on GTD \citep{GTD2014}, we extract the lethality of 35,917 accurately geolocalised terrorist attacks that occurred from year $t=2002$ to $t=2013$ in $i=1,\dots,35,917$ space-time locations $\bm{s}_i$. In equation~(\ref{eq:BHM1}), we assume that $y(\bm{s}_i,t)$ follows a conditional Bernoulli distribution with probability $\pi(\bm{s}_i,t)$ of observing a lethal event and $1-\pi(\bm{s}_i,t)$ of observing a non-lethal event, given the GMRF $\tilde{\zeta}$ and parameters $\bm{\theta}$.
In order to minimise the complexity of the models, and consequently reduce the computing time required to fit them, we assume a separable space-time covariance \citep[chap 7]{Blangiardo2015}. Hence, the GMRF $\tilde{\zeta}(\bm{s}_i,t)$ follows a first-order autoregressive process AR(1): $\tilde{\zeta}(\bm{s}_i,t)=\rho\;\tilde{\zeta}(\bm{s}_i,t-1)+ \psi(\bm{s}_i,t)$, with a time-independent zero-mean Gaussian field $\psi(\bm{s}_i,t)$ such that $Cov(\psi(\bm{s}_i,t),\psi(\bm{s}_j,u)) = 0$ if $t\neq u$, and $Cov(\psi(\bm{s}_i),\psi(\bm{s}_j))$ if $t=u$, $\forall t, u\in \{2002,\dots,2013\}$.
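The temporal side of this separable structure is an ordinary AR(1) recursion at each mesh vertex. A minimal sketch, simulating one hypothetical vertex over the 12 study years (the values and the pure-Python simulation are illustrative only; inference in the study is done with \texttt{R-INLA}):

```python
import math
import random

def simulate_ar1(rho, sigma_psi, n_years, seed=1):
    """One mesh vertex of the separable space-time GMRF:
    zeta_t = rho * zeta_{t-1} + psi_t, with psi_t ~ N(0, sigma_psi^2),
    started from the stationary distribution N(0, sigma_psi^2 / (1 - rho^2))."""
    rng = random.Random(seed)
    stat_sd = sigma_psi / math.sqrt(1.0 - rho ** 2)
    z = rng.gauss(0.0, stat_sd)
    path = [z]
    for _ in range(n_years - 1):
        z = rho * z + rng.gauss(0.0, sigma_psi)
        path.append(z)
    return path

rho = 0.91                                # posterior mean of the AR(1) parameter
path = simulate_ar1(rho, sigma_psi=1.0, n_years=12)
stationary_var = 1.0 / (1.0 - rho ** 2)   # variance implied by the recursion
```

With $\rho\approx 0.91$ (table~\ref{tab:bincoef}), the stationary variance $1/(1-\rho^2)$ is almost six times the innovation variance, reflecting strong year-to-year persistence of the latent field.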
Prior distributions (equation~(\ref{eq:BHM3})) are set for the parameters to be estimated ($\bm{\theta}=\{\beta_{0}, \bm{\beta}, \sigma^2_\epsilon,\tau,\kappa,\rho\}$), which include the GMRF's precision $\tau=1/\sigma^2_{\tilde{\zeta}}$, the scale parameter $\kappa>0$ of its Mat\'{e}rn covariance function (equation~(\ref{eq:cov})), and the temporal autocorrelation parameter $\left|\rho\right|<1$. We use the default priors of the stationary model in \texttt{R-INLA} for $\tau$ and $\kappa$, namely $\log(\tau), \log(\kappa) \sim\mathcal{N}(0,1)$, so that $\log(\kappa(\bm{s})) =\log(\kappa)$ and $\log(\tau(\bm{s}))=\log(\tau)$. For prior sensitivity analysis, we compare our results with alternative priors on $\tau$ and $\kappa$, as discussed further in Section~\ref{sec:result}.
The modelling approach is used to identify regions of abnormally high values (hot-spots) of the lethality of terrorism from the estimated posterior distribution of the probability of lethal attack interpolated in all space-time locations $\bm{s},t \in \mathcal{D}$. For each year ($t=2002,\dots,2013$), we identify locations where the 95\% credible interval (CI) for the probability of lethal attack is above a threshold $\epsilon$ ($0<\epsilon<1$):
\begin{equation}
\label{eq:hotspot}
L_{CI_{95\%}} \pi(\bm{s},t)>\epsilon, \quad \bm{s},t \in \mathcal{D},
\end{equation}
where $L_{CI_{95\%}}$ is the lower bound of the 95\% CI, $\pi(\bm{s},t)$ is the probability that the attack(s) at location $\bm{s}$ and time $t$ are lethal, $\epsilon$ is the threshold, and $\mathcal{D}$ is the domain. We define a hot-spot as a group of contiguous locations $\bm{s}$ that satisfy equation~(\ref{eq:hotspot}). Here we use $\epsilon=0.5$, which means that hot-spots are areas where lethal attacks are more likely than non-lethal attacks. In other words, we are 95\% confident that the true probability of a lethal attack is greater than 50\%.
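The hot-spot rule of equation~(\ref{eq:hotspot}) can be sketched directly from posterior draws. The following is a minimal illustration, assuming posterior samples of $\pi$ are available per grid cell (the cell ids, the 1-D notion of contiguity, and the fabricated draws are simplifying assumptions of this sketch):

```python
def hotspot_cells(posterior_draws, eps=0.5, alpha=0.05):
    """Flag cells where the lower bound of the (1 - alpha) equal-tailed
    credible interval for pi(s, t) exceeds the threshold eps.
    posterior_draws: dict cell_id -> list of posterior draws of pi."""
    flagged = set()
    for cell, draws in posterior_draws.items():
        xs = sorted(draws)
        lower = xs[int((alpha / 2.0) * len(xs))]  # empirical 2.5th percentile
        if lower > eps:
            flagged.add(cell)
    return flagged

def group_contiguous(cells):
    """Merge flagged integer cell ids into runs of contiguous ids
    (a 1-D stand-in for contiguous regions on the map)."""
    groups, run = [], []
    for c in sorted(cells):
        if run and c != run[-1] + 1:
            groups.append(run)
            run = []
        run.append(c)
    if run:
        groups.append(run)
    return groups
```

A cell is flagged only when the entire lower tail of its credible interval clears $\epsilon$, which is a deliberately conservative criterion: high posterior means with wide intervals are excluded.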
\subsection{Poisson Model}
\label{subsec:poissonmodel}
In addition to modelling the lethality of terrorist attacks, one might wish to identify locations that are more likely to encounter a higher number of lethal attacks over a year, which we further refer to as the \textit{frequency} of lethal terrorist attacks. The identification of such locations could be crucial for city planners, emergency managers, insurance companies, and property administrators, for example, since this information could be used to better allocate resources for preventing and countering terrorism \citep{Piegorsch2007,Nunn2007}. As in equation~(\ref{eq:BHM}), we use a three-stage Bayesian hierarchical modelling framework \citep{Banerjee2014}:
\begin{subequations}
\begin{align}
y(\bm{s}_i,t)\vert \bm{\theta}, \tilde{\zeta}\sim \text{Poisson($\mu(\bm{s}_i,t)$)} \label{eq:PHM1} \\
\text{log($\mu(\bm{s}_i,t)$)}\vert \bm{\theta}=\eta(\bm{s}_i,t)\vert \bm{\theta}=\beta_{0} + \bm{z}(\bm{s}_i,t)\bm{\beta} + \tilde{\zeta}(\bm{s}_i,t) + \epsilon(\bm{s}_i,t)\label{eq:PHM2} \\
\bm{\theta}=p(\bm{\theta}). \label{eq:PHM3}
\end{align}
\label{eq:PHM}
\end{subequations}
Based on \citet{GTD2014}, we consider the observed number of lethal attacks ($y(\bm{s}_i,t)$ in equation~(\ref{eq:PHM1})) that occurred over a period of 12 years ($t=2002,\dots,2013$) in 6,386 locations $\bm{s}_i$ within a $0.5\degree$ radius of cities' centroids. Since we model a ``count'' variable ($y(\bm{s}_i,t)$), it is convenient to aggregate events that occurred at very close locations, for example within identical municipality areas. This spatial aggregation reduces the number of observations from 35,917 (equation~(\ref{eq:BHM})) to 6,386 (equation~(\ref{eq:PHM})). Moreover, we assume that $y(\bm{s}_i,t)$ follows a Poisson distribution with parameter $\mu(\bm{s}_i,t)$, with $\log(\mu(\bm{s}_i,t))=\eta(\bm{s}_i,t)$ and expected value $\mathbb{E}(y(\bm{s}_i,t))= \mu(\bm{s}_i,t)$.
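The aggregation step can be sketched as assigning each lethal event to the nearest city centroid within the chosen radius and counting per city and year. This is an illustrative simplification: it uses naive planar degrees, whereas the study works on the sphere, and the event and centroid data below are fabricated:

```python
import math
from collections import Counter

def aggregate_lethal_counts(events, centroids, radius_deg=0.5):
    """Assign each lethal event to the nearest centroid within radius_deg
    and count lethal attacks per (city, year).
    events: iterable of (lon, lat, year, is_lethal);
    centroids: dict city -> (lon, lat)."""
    counts = Counter()
    for lon, lat, year, is_lethal in events:
        if not is_lethal:
            continue
        best, best_d = None, radius_deg
        for city, (clon, clat) in centroids.items():
            d = math.hypot(lon - clon, lat - clat)  # planar stand-in distance
            if d <= best_d:
                best, best_d = city, d
        if best is not None:
            counts[(best, year)] += 1
    return counts
```

Events farther than the radius from every centroid are dropped, and non-lethal events never contribute, mirroring how the Poisson model observes only counts of lethal attacks around cities.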
Equation~(\ref{eq:PHM2}) is structurally identical to equation~(\ref{eq:BHM2}). As in the Bernoulli model (Section~\ref{subsec:bernoullimodel}), we use the default priors of the stationary model in \texttt{R-INLA} for $\tau$ and $\kappa$, namely $\log(\tau), \log(\kappa) \sim\mathcal{N}(0,1)$, so that $\log(\kappa(\bm{s})) =\log(\kappa)$ and $\log(\tau(\bm{s}))=\log(\tau)$. To assess prior sensitivity, we compare our results with elicited priors on $\kappa$ and $\tau$, as discussed further in Section~\ref{sec:result}. As in the Bernoulli model, we identify hot-spots of high numbers of lethal attacks by replacing the posterior probability $\pi$ (equation~(\ref{eq:hotspot})) with the posterior expected number of lethal events $\mu(\bm{s}_i,t)$ (equation~(\ref{eq:PHM1})). We set $\epsilon=5$, which highlights cells with an expected number of lethal attacks greater than 5 (Figure~\ref{fig:hotspot}). This threshold corresponds to the 90\textsuperscript{th} percentile of the number of lethal attacks observed in the sample ($n=6,386$).
\section{Results}
\label{sec:result}
\subsection{Explaining the Spatial Dynamics of Terrorism}
\label{subsec:explain}
We use INLA as an accurate and computationally efficient model fitting method \citep{Rue2009, Held2010, Simpson2016}. With a 12-core Linux machine (99 GB of RAM), \texttt{R-INLA} requires approximately 7 days to fit each model. It is likely that fitting a similar model with MCMC methods would take too long to be practically feasible. We select one Bernoulli model (among models with 0, 1, 2, 3, 4, and 5 covariates) and one Poisson model (among models with 3, 4, and 5 covariates), which exhibit the lowest Deviance Information Criterion (DIC) and Watanabe-Akaike Information Criterion (WAIC) \citep{Watanabe2010, Spiegelhalter2002}. The selected Bernoulli model includes $k=4$ standardised covariates with corresponding coefficients: satellite night light ($\beta_{lum}$), altitude ($\beta_{alt}$), ethnicity ($\beta_{greg}$), and travel time to the nearest large city ($\beta_{tt}$). The selected Poisson model includes $k=5$ standardised covariates with corresponding coefficients: satellite night light ($\beta_{lum}$), altitude ($\beta_{alt}$), democracy level ($\beta_{pol}$), population density ($\beta_{pop}$), and travel time to the nearest large city ($\beta_{tt}$).
The Bernoulli model (table~\ref{tab:bincoef}, columns ``Bernoulli'') suggests that terrorist attacks are more likely to be lethal far away from large cities, at higher altitude, and in locations with higher ethnic diversity ($CI_{95\%}\;\beta_{tt}, \beta_{greg}, \beta_{alt}>0$), and less likely in areas with higher human activity ($CI_{95\%}\;\beta_{lum}<0$). As an illustration, we compute the effect of a hypothetical 50\% increase in the mean of luminosity ($lum$) on the probability of encountering lethal attacks ($\pi$). Since the covariates are standardised, particular care should be taken in estimating such an effect. Recall that NOAA satellite data on lights at night provide non-standardised values of luminosity from 0 (min) to 63 (max) \citep{NOAA2014}. Based on the 35,917 observations used in the Bernoulli model, a 50\% increase in the mean of luminosity corresponds to an increase from 34.3 to 51.5, or equivalently, from 0 to 0.73 on the standardised scale. In the benchmark scenario, all predictors, including $lum$, the other covariates $tt$, $greg$, $alt$, and the GMRF ($\tilde{\zeta}$), are held equal to 0. Hence, the linear predictor $\eta$ (equation~(\ref{eq:BHM2})) equals 0 and $\pi=1/(1+\exp{(-\eta)})=1/(1+1)=0.5$. In the scenario with a 50\% increase in the mean of luminosity, $lum=0.73$ and $\eta=\beta_{lum}\times lum=-0.11\times0.73=-0.0803$, given that $\beta_{lum}=-0.11$ (table~\ref{tab:bincoef}, columns ``Bernoulli''). Hence, $\pi=1/(1+\exp{(0.0803)})\cong0.48$. As a result, a 50\% increase in the mean of luminosity decreases the probability of lethal attacks by approximately 2 percentage points (from 0.50 to 0.48).
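This worked example amounts to two evaluations of the inverse-logit link with the posterior mean coefficient from the table (a minimal sketch of the arithmetic only, not of the fitted model):

```python
import math

def logistic(eta):
    """Inverse-logit link: pi = 1 / (1 + exp(-eta))."""
    return 1.0 / (1.0 + math.exp(-eta))

beta_lum = -0.11   # posterior mean of the luminosity coefficient (Bernoulli)
lum_std = 0.73     # the +50% mean-luminosity shift on the standardised scale

pi_base = logistic(0.0)                # benchmark: all predictors at 0
pi_new = logistic(beta_lum * lum_std)  # scenario with increased luminosity
```

Evaluating both scenarios reproduces the drop from 0.50 to approximately 0.48 reported above.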
In contrast, the Poisson model (table~\ref{tab:bincoef}, columns ``Poisson'') suggests that more economically developed areas, locations with higher levels of democracy ($CI_{95\%}\;\beta_{lum}, \beta_{pol}>0$), and locations close to large cities ($CI_{95\%}\;\beta_{tt}<0$) are more likely to encounter a higher number of lethal attacks. However, we did not find a significant relationship between altitude, population density, and the number of lethal attacks ($0 \in CI_{95\%}\; \beta_{alt}, \beta_{pop}$). The interpretation of these results is discussed in further detail in Section~\ref{sec:discussion}. Note that the results of the two models cannot be directly compared since they are based on different numbers of observations, sets of covariates, and spatial aggregations.
As with the Bernoulli model, we analyse the effect of a 50\% increase in the mean of luminosity on the expected number of lethal attacks ($\mu$) estimated by the Poisson model, with all predictors, including $lum$, the other covariates $tt$, $alt$, $pol$, $pop$, and the GMRF ($\tilde{\zeta}$), held equal to 0. In the benchmark scenario, $\eta$ (equation~(\ref{eq:PHM2})) equals 0, therefore the expected number of lethal attacks is $\mu=\exp{(0)}=1$. Based on the 6,386 observations used in the Poisson model, a 50\% increase in the mean of luminosity corresponds to an increase from 7.45 to 11.2, or equivalently, from 0 to 0.25 on the standardised scale. Hence, $\eta=\beta_{lum}\times lum$, with $\beta_{lum}=0.51$ (table~\ref{tab:bincoef}, columns ``Poisson''). We obtain $\mu=\exp{(0.51\times 0.25)}\cong1.14$. Therefore, a 50\% increase in the mean of luminosity increases the expected number of lethal attacks by approximately 0.14.
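The Poisson counterpart of the worked example is a single exponentiation of the log-link (again, a sketch of the arithmetic using the posterior mean from the table, not a refit of the model):

```python
import math

beta_lum = 0.51   # posterior mean of the luminosity coefficient (Poisson)
lum_std = 0.25    # the +50% mean-luminosity shift on the standardised scale

mu_base = math.exp(0.0)                # benchmark scenario: eta = 0
mu_new = math.exp(beta_lum * lum_std)  # scenario with increased luminosity
```

This reproduces the increase from 1 to approximately 1.14 expected lethal attacks.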
\begin{table}[h!]
\caption{\label{tab:bincoef}Posterior mean, standard deviation, and 95\% credible intervals (CI) of the intercept $\beta_0$, the coefficients of the standardised covariates ($\bm{\beta}$), the temporal ($\rho$), and the spatial parameters of the GMRF $\tilde{\zeta}$ ($\kappa$, $\sigma^2_{\tilde{\zeta}}$, and range $r$) estimated in the Bernoulli ($n=35,917$) and Poisson ($n=6,386$) models. Note that the 95\% CI associated with the spatial parameters correspond to 95\% highest probability density intervals.}
\centering
\fbox{
\begin{tabular}{lcccccc}
& \multicolumn{3}{l}{Bernoulli (n=35,917)}& \multicolumn{3}{l}{Poisson (n=6,386)} \\
\cmidrule(l){2-4} \cmidrule(l){5-7}
& mean&sd& 95\% CI &mean&sd& 95\% CI \\
Cov. ($\beta$) & & & &&& \\
\cmidrule(l){1-1}
$\beta_0 $ & -0.58 & 0.13 & (-0.83; -0.32) & -1.37 & 0.09 & (-1.55; -1.19)\\
$\beta_{lum}$&-0.11& 0.02 & (-0.15; -0.06) &0.51& 0.02 & (\;0.47; \;0.54)\\
$\beta_{tt}$&0.06& 0.02 & (\;0.02; \;0.09) &-0.38& 0.02 & (-0.42; -0.34)\\
$\beta_{greg}$ &0.04 &0.02 & (0.003; 0.08) & & & \\
$\beta_{alt}$ & 0.08& 0.03& (\;0.03; \;0.13) & 0.03& 0.02& (-0.001; 0.07)\\
$\beta_{pol}$ & & & & 0.42 & 0.01 &(\;0.39; \;0.45)\\
$\beta_{pop}$ & & & & 0.009 & 0.02 &(-0.03; \;0.04)\\
\\
GMRF ($\tilde{\zeta}$) & & & &&&\\
\cmidrule(l){1-1}
$\rho$ (AR1) & 0.91 & 0.01 & (0.88; 0.91) & 0.91 & 0.07 & (0.90; 0.93)\\
$\kappa$ & 23.3 & & (19.4; 27.6) & 76.0 & & (66.9; 85.8) \\
$\sigma^2_{\tilde{\zeta}}$ & 2.27 & & (1.83; 2.73) & 4.61 & & (4.11; 5.13) \\
$r$ $\left[km\right]$ & 779 & & (643; 915) & 238 & & (208; 267) \\
\\
\end{tabular}}
\end{table}
\subsection{Quantifying the Uncertainty}
\label{subsec:uncertainty}
As a further step, we explore the spatial dynamics of both the lethality of terrorism and the frequency of lethal terrorist attacks by visualising relevant parameters that vary in both space and time. For this purpose, the posterior mean ($\tilde{\zeta}(\bm{s},t)$) and standard deviation ($\sigma_{\tilde{\zeta}}(\bm{s},t)$) of the GMRF provide valuable insight into the spatial dynamics of terrorism and, more particularly, into the uncertainty in the predictions of both the lethality of terrorism ($\pi(\bm{s},t)$) and the frequency of lethal terrorist attacks ($\mu(\bm{s},t)$). High values of $\sigma_{\tilde{\zeta}}(\bm{s},t)$ signify high uncertainty with regard to the estimated values of $\tilde{\zeta}(\bm{s},t)$, mainly due to the scarcity or absence of data. Some areas have not encountered any terrorist attack during the entire period and therefore exhibit persistently high values, such as Siberia, the Amazonian region, Central Australia, or Greenland (figures~\ref{fig:Bbisdrf1}, \ref{fig:Bbisdrf12}, \ref{fig:Pbisdrf1}, \ref{fig:Pbisdrf12}). In contrast, several regions in South America, Africa, the Gulf region, India, and Pakistan show an increase in the lethality of terrorism and the frequency of lethal terrorist attacks (figures~\ref{fig:Bprobsurf1}, \ref{fig:Bprobsurf12}, \ref{fig:Pprobsurf1}, \ref{fig:Pprobsurf12}) between 2002 and 2013, accompanied by lower values of $\sigma_{\tilde{\zeta}}(\bm{s},t)$. As expected, uncertainty is much lower in regions that encountered more attacks.
\begin{sidewaysfigure}
\centering
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{birf1}
\vspace{-2em}
\caption{posterior mean $\tilde{\zeta}(\bm{s},2002)$}
\label{fig:Bbirf1}
\end{subfigure}
\vspace{0.8em}
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{bisdrf1}
\vspace{-3em}
\caption{posterior standard deviation $\sigma_{\tilde{\zeta}(\bm{s},2002)}$}
\label{fig:Bbisdrf1}
\end{subfigure}
\vspace{0.5em}
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{probsurf1}
\vspace{-3.2em}
\caption{posterior mean $\pi(\bm{s},2002)$}
\label{fig:Bprobsurf1}
\end{subfigure}
\vspace{-2.5em}
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{birf12}
\vspace{-2em}
\caption{posterior mean $\tilde{\zeta}(\bm{s},2013)$}
\label{fig:Bbirf12}
\end{subfigure}
\vspace{0.3em}
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{bisdrf12}
\vspace{-2.9em}
\caption{posterior standard deviation $\sigma_{\tilde{\zeta}(\bm{s},2013)}$}
\label{fig:Bbisdrf12}
\end{subfigure}
\vspace{0em}
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{probsurf12}
\vspace{-3.2em}
\caption{posterior mean $\pi(\bm{s},2013)$}
\label{fig:Bprobsurf12}
\end{subfigure}
\vspace{1em}
\caption{Bernoulli model of the lethality of terrorist attacks with GMRF posterior mean $\tilde{\zeta}(\bm{s},t)$ (\textit{left}), posterior standard deviation $\sigma_{\tilde{\zeta}(\bm{s},t)}$ (\textit{centre}), and posterior mean probability of lethal attack $\pi(\bm{s},t)$ (\textit{right}) estimated at the 9,697 mesh vertices and interpolated at all locations on the land surface $\bm{s} \in \mathbb{S}^2$. Illustrative projected maps provide values on the land surface for years $t=2002$ (\textit{top}) and $t=2013$ (\textit{bottom}). Note the high uncertainty, expressed through high values of $\sigma_{\tilde{\zeta}(\bm{s},t)}$, in e.g.\ Siberia or the Amazonian region due to the sparsity or absence of terrorist events (\textit{top centre and bottom centre}). Moreover, one can observe an increase in the probability of lethal attack ($\pi(\bm{s},t)$) from 2002 (\textit{top right}) to 2013 (\textit{bottom right}) in some African and Middle Eastern areas}
\label{fig:prob}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\centering
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{pbirf1}
\vspace{-2.2em}
\caption{posterior mean $\tilde{\zeta}(\bm{s},2002)$}
\label{fig:Pbirf1}
\end{subfigure}
\vspace{1em}
\begin{subfigure}{0.32\textheight}
\vspace{0.2em}
\includegraphics[width=0.32\textheight,height=6cm]{pbisdrf1}
\vspace{-3.5em}
\caption{posterior standard deviation $\sigma_{\tilde{\zeta}(\bm{s},2002)}$}
\label{fig:Pbisdrf1}
\end{subfigure}
\begin{subfigure}{0.32\textheight}
\vspace{0.2em}
\includegraphics[width=0.32\textheight,height=6.2cm]{pprobsurf1}
\vspace{-4.3em}
\caption{posterior mean log($\mu(\bm{s},2002)$)}
\label{fig:Pprobsurf1}
\end{subfigure}
\vspace{-2.0em}
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{pbirf12}
\vspace{-2em}
\caption{posterior mean $\tilde{\zeta}(\bm{s},2013)$}
\label{fig:Pbirf12}
\end{subfigure}
\vspace{1em}
\begin{subfigure}{0.32\textheight}
\includegraphics[width=0.32\textheight,height=6cm]{pbisdrf12}
\vspace{-3.1em}
\caption{posterior standard deviation $\sigma_{\tilde{\zeta}(\bm{s},2013)}$}
\label{fig:Pbisdrf12}
\end{subfigure}
\begin{subfigure}{0.32\textheight}
\vspace{0.0em}
\includegraphics[width=0.32\textheight,height=6.2cm]{pprobsurf12}
\vspace{-4em}
\caption{posterior mean log($\mu(\bm{s},2013)$)}
\label{fig:Pprobsurf12}
\end{subfigure}
\vspace{1em}
\caption{Poisson model of the frequency of lethal terrorist attacks with GMRF posterior mean $\tilde{\zeta}(\bm{s},t)$ (\textit{left}), posterior standard deviation $\sigma_{\tilde{\zeta}(\bm{s},t)}$ (\textit{centre}), and posterior mean of the frequency of lethal attacks $\mu(\bm{s},t)$ (\textit{right}) (logarithmic scale) estimated at the 9,697 mesh vertices and interpolated at all locations $\bm{s} \in \mathbb{S}^2$. Illustrative projected maps provide values on the land surface for years $t=2002$ (\textit{top}) and $t=2013$ (\textit{bottom})}
\label{fig:Pprob}
\end{sidewaysfigure}
\subsection{Hot-spots}
\label{subsec:hotspots}
The identification of hot-spots of lethality and frequency of lethal attacks is highly valuable since it highlights areas more vulnerable to terrorism, which call for increased vigilance. For example, one can observe important changes in both the lethality of terrorism (figures~\ref{fig:Bhot1}, \ref{fig:Bhot12}) and the frequency of lethal terrorist attacks (figures~\ref{fig:Phot1}, \ref{fig:Phot12}) in various locations in Iraq from 2002 to 2013. Both the number and the size of hot-spots in Iraq increased from 2002 to 2013, which reflects the intensification of terrorist activity that followed the coalition's invasion of Iraq in 2003 (Operation Iraqi Freedom). As early as 2002, CIA director George Tenet and the US National Intelligence Council warned the US and UK governments that radicalisation and terrorist activity would increase in Iraq and beyond its borders as a result of the invasion. This was later confirmed by Britain's Royal Institute of International Affairs (Chatham House) shortly after the 2005 London bombings, as well as by various reports from an Israeli think tank and from Saudi and French intelligence services in particular \citep[pp.~18-21]{Chomsky2006}.
Similarly, we observe an increase in both the lethality of terrorism and the frequency of lethal terrorist attacks in some regions of Afghanistan. Indeed, Afghanistan has been the theatre of numerous terrorist attacks since the US-led invasion in 2001 following 9/11. From 2003 to 2013, 3,539 terrorist attacks occurred, mainly in the centre-east of Afghanistan, including the cities of Kabul, Jalalabad, Khogyani, and Sabari. Over the same period, 243 events (of which 157 were lethal) occurred in Kabul alone. Most of these attacks were perpetrated by the Taliban. Even after the Taliban's withdrawal from Kabul in November 2001, lethal terrorist attacks did not cease in the city or in the rest of the country \citep{aljazeera2009}. Indeed, highly lethal suicide bombings intensified from 2006 to 2013 (and beyond) \citep{GTD2014}.
\begin{sidewaysfigure}
\centering
\vspace{-5em}
\begin{subfigure}{0.48\textheight}
\includegraphics[width=0.48\textheight,height=8cm]{hotspot1}
\vspace{-4em}
\caption{Hot-spot of lethality in 2002 ($L_{CI_{95\%}} \pi(\bm{s},2002)>0.5$)}
\label{fig:Bhot1}
\end{subfigure}
\begin{subfigure}{0.49\textheight}
\includegraphics[width=0.49\textheight,height=8cm]{hotspot12}
\vspace{-5.2em}
\caption{Hot-spot of lethality in 2013 ($L_{CI_{95\%}} \pi(\bm{s},2013)>0.5$)}
\label{fig:Bhot12}
\end{subfigure}
\begin{subfigure}{0.49\textheight}
\vspace{-3em}
\includegraphics[width=0.488\textheight,height=8cm]{photspot1}
\vspace{-4em}
\caption{Hot-spot of frequency in 2002 ($L_{CI_{95\%}} \mu(\bm{s},2002)>5$)}
\label{fig:Phot1}
\end{subfigure}
\begin{subfigure}{0.49\textheight}
\vspace{-3em}
\includegraphics[width=0.488\textheight,height=8cm]{photspot12}
\vspace{-5.1em}
\caption{Hot-spot of frequency in 2013 ($L_{CI_{95\%}} \mu(\bm{s},2013)>5$)}
\label{fig:Phot12}
\end{subfigure}
\caption[Hot-spots of lethality of terrorism and frequency of lethal terrorist attacks]{Hot-spots of the lethality of terrorism across the world in 2002 (figure~\ref{fig:Bhot1}) and 2013 (figure~\ref{fig:Bhot12}), with the lower bound of the 95\% credible interval of the posterior probability of lethal attack ($L_{CI_{95\%}}$) greater than 0.5. Hot-spots of the frequency of lethal terrorist attacks across the world in 2002 (figure~\ref{fig:Phot1}) and 2013 (figure~\ref{fig:Phot12}), with $L_{CI_{95\%}}$ of the posterior expected number of lethal attacks greater than 5. For illustrative purposes, we use a logarithmic scale ranging from 1.61 (log(5)) to 7.7 (log(2,153)). One may notice a general increase in terrorist activity in the Middle East and some African countries from 2002 to 2013, as illustrated by the presence of lethality (figure~\ref{fig:Bhot12}) and frequency (figure~\ref{fig:Phot12}) hot-spots in North-East Nigeria.}
\label{fig:hotspot}
\end{sidewaysfigure}
\subsection{Robustness tests}
\label{subsec:robustness}
The present study used the default Gaussian priors (multivariate Normal) provided by \texttt{R-INLA} for the parameters of the GMRF ($\kappa$ and $\tau$). As a robustness test, we run a prior sensitivity analysis for both the Bernoulli and the Poisson model, changing the prior distributions of $\kappa$ and $\tau$. In practice, we changed the prior distributions of the variance of the GMRF $\sigma^2_{\tilde{\zeta}}$ and the range $r$, which are more easily interpreted and can therefore be assigned a prior distribution more readily. For both the Bernoulli and the Poisson models, we set $\sigma^2_{\tilde{\zeta}}=50$, which corresponds to a relatively large variance of the GMRF, since we assume that the spatial structure might exhibit considerable variation between areas that encountered a high number of lethal terrorist attacks (e.g.\ some locations in Iraq, Pakistan or Afghanistan) and those that encountered only a few or none (e.g.\ some locations in Portugal, Brazil or Alaska).
Assuming that the lethality of terrorist attacks can spread over relatively large areas (e.g.\ through demonstration and imitation processes promoted by the media \citep{Brosius1991, Enders1992, Brynjar2000}), we set $r=500$ [km] in the Bernoulli model. In contrast, we set $r=100$ [km] in the Poisson model, since we believe that the number of lethal attacks is very specific to the characteristics of the close neighbourhood in which they occur. The number of potential high-value symbolic targets, human targets, and public targets (see Section~\ref{subsec:covariates}) can vary widely across distant areas. For example, one might reasonably assume that Baghdad shares important similarities with nearby Iraqi cities such as Abu-Grahib or Al-Fallujah (approximately 30-60 km from Baghdad). However, distant cities such as Al-Kasrah, Iraq (approximately 300 km from Baghdad) might exhibit markedly different characteristics, and therefore numbers of lethal attacks that are almost independent of the levels observed in Baghdad. Hence, we compute the corresponding parameters $\kappa=\frac{\sqrt{8}}{r}$ and $\tau=\frac{1}{\sqrt{4\pi}\kappa\sigma}$ \citep{Lindgren2011, Lindgren2015, Bivand2015b}.
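The conversion from the interpretable quantities $(r, \sigma^2_{\tilde{\zeta}})$ to the SPDE parameters $(\kappa, \tau)$ can be sketched as follows (a minimal illustration of the two formulas above for $\lambda=1$; the elicited values match the sensitivity analysis, but the function name is ours):

```python
import math

def spde_hyperpars(range_km, sigma2):
    """Map an interpretable range r and GMRF variance sigma^2 to the SPDE
    parameters (lambda = 1): kappa = sqrt(8)/r, tau = 1/(sqrt(4*pi)*kappa*sigma)."""
    kappa = math.sqrt(8.0) / range_km
    tau = 1.0 / (math.sqrt(4.0 * math.pi) * kappa * math.sqrt(sigma2))
    return kappa, tau

kappa_b, tau_b = spde_hyperpars(500.0, 50.0)  # elicited prior, Bernoulli model
kappa_p, tau_p = spde_hyperpars(100.0, 50.0)  # elicited prior, Poisson model
```

Note that shrinking the range from 500 to 100 km multiplies $\kappa$ by five and, at fixed variance, divides $\tau$ by the same factor.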
\vspace{0pt}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.32]{binpriorcompa}\hspace{1em}
\includegraphics[scale=0.32]{poisspriorcompa}
\caption{Estimates of the mean and the 95\% credible intervals (line segments) of the posterior distribution of the parameters: intercept $\beta_0$, covariates $\bm{\beta}$, and spatial parameters (range $r$ and precision $\tau$ of the GMRF). The estimated values are provided for both the Bernoulli (\textit{left}) and the Poisson (\textit{right}) models. The mean of the posterior distribution of each parameter is illustrated for the default prior models (\mycircle{black}) and the models with the modified priors (\mytriangle{black}). The modified priors are set to $\sigma^2_{\tilde{\zeta}}=50$ and $r=500$ [km] in the Bernoulli model, and to $\sigma^2_{\tilde{\zeta}}=50$ and $r=100$ [km] in the Poisson model. In both models, the results using the modified priors are consistent with those using the default priors, both in the point estimates (means) and in the direction of the effects.}
\label{fig:priorcompar}
\end{figure}
The means and the credible intervals of the estimated coefficients $\bm{\beta}$ and the parameters of the GMRF ($\tau$, $r$) are illustrated for both the Bernoulli (Figure~\ref{fig:priorcompar}, \textit{left}) and the Poisson model (Figure~\ref{fig:priorcompar}, \textit{right}), under the default priors (\mycircle{black}) and the modified priors (\mytriangle{black}). The parameters of the GMRF ($\tau$, $r$) are not affected by the change in priors in either model, although the estimated posterior mean of $\tau$ is less robust in the Poisson model. In the Bernoulli model, both the means and the credible intervals of all parameters are almost identical under the default and modified priors. In the Poisson model, the direction of the effect of the estimated coefficients is not affected by the change in priors, although the estimated posterior means of some parameters (especially $\beta_{tt}$ and $\beta_{lum}$) are less robust. A higher sensitivity to the choice of priors is expected in the Poisson model, since its number of observations ($n=6,386$) is considerably smaller than that of the Bernoulli model ($n=35,917$); the prior distribution therefore exerts a greater influence on the posterior estimates, as illustrated by these results.
\section{Discussion}
\label{sec:discussion}
This study proposes a Bayesian hierarchical framework to model both the lethality of terrorism and the frequency of lethal terrorist attacks across the world between 2002 and 2013. The statistical framework integrates spatial and temporal dependencies through a GMRF whose parameters have been estimated with \texttt{R-INLA}. The novelty of this study lies in its ability to systematically capture the effects of factors that explain the lethality of terrorism and the frequency of lethal terrorist attacks worldwide and at subnational levels. Moreover, the analysis of hot-spots at the subnational level provides key insight into the spatial dynamics of both the lethality of terrorism and the frequency of terrorist events worldwide from 2002 to 2013. In this section, we highlight the main findings and limitations of this study, and suggest potential improvements that could be pursued in future work.
Most country-level studies did not find a significant linear relationship between the number of terrorist attacks and economic variables \citep{Krueger2003, Abadie2006, Drakos2006a, Krueger2008, Gassebner2011}. On a local scale, we showed that more economically developed areas tend to encounter more lethal attacks, which provides support for the theory advanced by \citet{Piazza2006}. The author suggested that more economically developed and literate societies with high standards of living may offer more lucrative targets, and are therefore expected to be targeted more often by terrorist attacks. However, we also showed that terrorist attacks, when they occur, are less likely to be lethal in more economically developed areas. Although more economically developed areas are more prone to terrorism, they provide better protection for their targets, which reduces the risk of a deadly attack. One therefore expects a smaller proportion of lethal attacks in more economically developed areas, as confirmed by our results.
Consistent with most country-level studies \citep{Abadie2006, Eubank2001, Li2005, Schmid1992}, we found that areas in democratic countries tend to encounter more lethal attacks than those in autocracies. The presence of freedom of speech, movement, and association in democratic countries might reduce the costs of conducting terrorist activities compared to autocratic countries \citep{Li2005}. In line with country-level \citep{Gassebner2011,Kurrild2006} and subnational \citep{Nemeth2014} findings, we found that terrorist attacks are more likely to be lethal in ethnically diverse locations, perhaps due to stronger ethnic tensions \citep{Basuchoudhary2010}. As pointed out by \citet{Esteban2012}, one should acknowledge that ethnic diversity is only one possible measure of ethnic division among others, including \textit{ethnic fractionalization} and \textit{ethnic polarization}. Further analysis using different measures of ethnic division is required in order to assess the role of ethnic division in its wider sense.
While most country-level studies found a significant positive linear relationship between population density and the number of terrorist attacks \citep{Ross1993, Savitch2001, Crenshaw1981, Swanstrom2002, Coaffee2010}, we did not find evidence that terrorist attacks are more likely to be lethal, or that lethal attacks are more frequent, in densely populated areas. However, the Euclidean distance from terrorist events to the nearest large city is positively associated with the lethality of terrorist attacks but negatively associated with the number of lethal attacks. Since targets are usually less secure in small cities and rural areas, this might facilitate deadly terrorist operations, which is consistent with our findings. In contrast, large cities offer greater anonymity and a larger recruitment pool, which might open the door to a high number of lethal attacks.
In addition, more lethal attacks are expected within or close to large cities since they have an impact on a larger audience, which is often a desired outcome (\citealp[p.~41]{Laqueur1999}; \citealp[p.~115]{Crenshaw1990}; \citealp{Savitch2001}). Furthermore, terrorists benefit from the high-density communication networks (road and rail) of large cities to move freely and rapidly to and from target points (\citealp{Heyman1980}; \citealp[p.~189]{Wilkinson1979}). It is also not uncommon for terrorists to target communication network infrastructure, as exemplified by the simultaneous attacks of March 11, 2004 on several commuter trains in Madrid, Spain, which killed 191 people \citep{LAT2014}.
Even though the number of lethal attacks itself does not appear to be associated with altitude, we found that terrorist attacks, when they occur, tend to be more lethal at higher altitudes. Terrorists might be less constrained by governmental forces during operations launched in less accessible regions, such as mountains, which can provide safe havens \citep{Abadie2006, Ross1993}. Terrorist groups can therefore benefit from knowledge of ``rough'' terrain (mountainous regions) to defeat the enemy, as illustrated by the successful attacks carried out by the Mujahedeen groups against the Soviet Union and, later, the Taliban against NATO \citep{Buhaug2009}. Therefore, one might reasonably assume that terrorists increase their killing efficiency in mountains, which is reflected in a higher proportion of ``successful'' lethal attacks. However, since human targets are usually scarcer in mountainous regions, the advantage of the terrain might not suffice to compensate for the lack of targets, which in turn reduces the number of possible lethal attacks that can be planned and carried out. Nevertheless, complementary analysis would be required to confirm the plausibility of this interpretation of the results.
As with any statistical analysis of complex social phenomena, the outcome of this study should be taken with caution. First, since we aim to investigate terrorism across the entire world and at high spatial resolution, the availability of suitable covariates is limited. As a result, our study has ineluctably omitted numerous relevant drivers of both the lethality of terrorist attacks and their frequency, including characteristics (e.g.\ psychological processes) of individual members of terrorist groups, ideology, beliefs, and cultural factors (\citealp[p.~29]{Crenshaw1983}; \citealp[p.~151]{Wilkinson1990}; \citealp{Brynjar2000}; \citealp[pp.~92-93]{Richardson2006}), or reciprocal interactions between counterterrorism and terrorism \citep{English2010, Hoffman2002}, for example. Although our models do not allow for estimating the marginal effect of each potential unobserved factor, their aggregated effect has nevertheless been taken into account through the space-time dependence structure represented by the GMRF ($\tilde{\zeta}$ in equations~(\ref{eq:BHM2}) and (\ref{eq:PHM2})).
Second, one could reasonably expect some spatial variability in the lethality and the frequency of terrorist attacks, especially within large cities that are regularly targeted by terrorists. However, since terrorist events in GTD are reported at the centroid of the nearest city in which they occurred, spatial variability of terrorism's lethality and frequency within cities cannot be captured. Moreover, we assume that the spatial correlation in both the lethality of terrorism and the frequency of lethal terrorist attacks depends only on the distance between the locations of terrorist attacks (stationarity) and is invariant to rotation (isotropy). This assumption might be too restrictive, since it would be equally reasonable to assume that the spatial correlation related to mass-casualty attacks extends over a larger spatial range, for example via a broader diffusion through the media. Further studies might investigate the use of non-stationary models, which are currently being developed for the model class that may be fitted with INLA \citep{Lindgren2011}.
Third, the temporal unit exhibits limitations as well, since the study period is discretised into 12 years (2002-2013), even though GTD provides day, month, and year for most events. Access to more computational power may allow further analysis to investigate variation on a monthly or weekly basis. Moreover, for computational reasons, we assume no interaction between the spatial and temporal dependencies of the lethality and frequency of terrorist attacks, i.e.\ \textit{separable} space-time models, where the covariance structure can be written as the product of a purely spatial and a purely temporal covariance function for all space-time locations \citep{Gneiting2006}. In our models, $\tilde{\zeta}$ follows a simple autoregressive process (AR(1)) in time. In non-separable models, the dependence structure in both space and time is usually highly complex \citep{Harvill2010}, and therefore more computationally demanding.
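To make the separability assumption concrete, a schematic form of the covariance is (in notation we introduce here for illustration only, with $C_s$ the stationary spatial correlation function of the GMRF and $a$ the AR(1) coefficient)

```latex
\[
\mathrm{Cov}\bigl(\tilde{\zeta}(s_i,t),\,\tilde{\zeta}(s_j,t')\bigr)
  = \sigma^2_{\tilde{\zeta}}\, C_s\bigl(\|s_i - s_j\|\bigr)\, a^{|t-t'|},
\qquad
\tilde{\zeta}(s,t) = a\,\tilde{\zeta}(s,t-1) + \varepsilon(s,t),
\]
```

so that the purely spatial factor $C_s$ and the purely temporal factor $a^{|t-t'|}$ enter multiplicatively for all pairs of space-time locations.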
Fourth, subjective choices have been made throughout the entire modelling process, which might affect both the internal and external validity of our results. A major concern is the absence of consensus on the definition of terrorism \citep{Beck2013, Jackson2016}; subjectivity is therefore inevitable \citep[p.~23]{Hoffman2006}. In line with \citet[pp. 24-25]{English2010}, we agree that it is all the more important that studies on terrorism clearly state how terrorism is understood. Accordingly, we use data from GTD, which clearly states the definition used to classify acts as terrorist events. Moreover, as with any Bayesian analysis, our study involves a degree of subjectivity with regard to the choice of priors. Because of our relatively large dataset, we are confident that the choice of priors does not influence our results, as confirmed by our prior sensitivity analysis (Section~\ref{sec:result}). However, subjectivity remains in the definition of the thresholds for hot-spots. We have chosen thresholds that ensure a probability of a lethal attack higher than that of a non-lethal one (Bernoulli) and an expected number of lethal attacks corresponding to a high percentile (Poisson). We recommend that practitioners and researchers in the field of terrorism take particular care in choosing cut-off values, which might vary according to the purpose of their study.
Despite its aforementioned shortcomings, this study provides a rigorous framework for investigating the spatial dynamics of the lethality of terrorism and the frequency of lethal terrorist attacks across the world and on a local scale. It assesses the uncertainty of the predictions, which is crucial for policy-makers to make informed decisions \citep[p.~64]{Zammit2013} or to evaluate the impact of counterterrorism policies \citep{Perl2007}, for example. Ultimately, this research may provide complementary tools to enhance the efficacy of preventive counterterrorism policies.
\clearpage
\small
\bibliographystyle{chicago}
\section*{Appendix}
\parskip0.0ex
\let\environmentsize=\appenvironmentsize
\let\proofsize=\appproofsize
\environmentsize
\renewcommand{\thesection}{\Alph{section}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{section}{0}
}
\newcommand\AppendixOut{
\renewcommand{\thesection}{\arabic{section}}
\renewcommand{\theequation}{\arabic{equation}}
\parskip1.5ex
\let\environmentsize=\bodyenvironmentsize
\let\proofsize=\bodyproofsize
\environmentsize
}
\date{
This version: \today \\
}
\title{
\sc Overcoming Free-Riding in Bandit Games\thanks{
This paper supersedes our earlier paper ``Strongly Symmetric Equilibria in Bandit Games''
(circulated in 2014 as Cowles Discussion Paper No.~1956
and SFB/TR 15 Discussion Paper No.~469)
which considered pure Poisson learning only.
Thanks for comments and suggestions are owed to
the editor,
three anonymous referees,
and seminar participants at
Aalto University,
Austin,
Berlin,
Bonn,
City University of Hong Kong,
Collegio Carlo Alberto,
Duisburg-Essen,
Edinburgh,
Exeter,
Frankfurt
(Goethe University, Frankfurt School of Finance and Management),
London
(Queen Mary, LSE),
Lund,
Maastricht,
Mannheim,
McMaster University,
Microsoft Research New England,
Montreal,
Oxford,
Paris
(S\'{e}minaire Roy,
S\'{e}minaire Parisien de Th\'{e}orie des Jeux,
Dauphine),
Southampton,
St.\ Andrews,
Sydney,
Toronto,
Toulouse,
University of Western Ontario,
Warwick,
Zurich,
the 2012 International Conference on Game Theory at Stony Brook,
the 2013 North American Summer Meeting of the Econometric Society,
the 2013 Annual Meeting of the Society for Economic Dynamics,
the 2013 European Meeting of the Econometric Society,
the 4th Workshop on Stochastic Methods in Game Theory at Erice,
the 2013 Workshop on Advances in Experimentation at Paris II,
the 2014 Canadian Economic Theory Conference,
the 8th International Conference on Game Theory and Management in St.\ Petersburg,
the SING 10 Conference in Krakow,
the 2015 Workshop on Stochastic Methods in Game Theory in Singapore,
the 2017 Annual Meeting of the Society for the Advancement of Economic Theory in Faro,
the 2019 Annual Conference of the Royal Economic Society,
and
the 2020 International Conference on Game Theory at Stony Brook.
Part of this paper was written during a visit to the Hausdorff Research Institute for Mathematics
at the University of Bonn under the auspices of the Trimester Program
``Stochastic Dynamics in Economics and Finance''.
Johannes H\"{o}rner acknowledges funding from the Cowles Foundation and the Agence nationale de la recherche under grant ANR-17-EURE-0010 (Investissements d'Avenir program).
Nicolas Klein acknowledges financial support from the Fonds de Recherche du Qu\'{e}bec Soci\'{e}t\'{e} et Culture
and the Social Sciences and Humanities Research Council of Canada.
Sven Rady acknowledges financial support from Deutsche Forschungsgemeinschaft through SFB/TR 15 (project A08) and SFB/TR 224 (project B04).
}}
\author{
Johannes H\"{o}rner\thanks{Yale University, 30 Hillhouse Ave., New Haven, CT 06520, USA, and TSE (CNRS), and CEPR, {\tt johannes.horner@yale.edu}.}
\and
Nicolas Klein\thanks{Universit\'{e} de Montr\'{e}al, D\'{e}partement de Sciences \'{E}conomiques,
C.P.\ 6128 succursale Centre-ville; Montr\'{e}al, H3C 3J7, Canada, and CIREQ, {\tt kleinnic@yahoo.com}.}
\and
Sven Rady\thanks{University of Bonn, Adenauerallee 24-42, D-53113 Bonn, Germany, and CEPR, {\tt rady@hcm.uni-bonn.de}.}
}
\begin{document}
\maketitle
\setcounter{page}{0}
\thispagestyle{empty}
\setstretch{1.00}
\newpage
\vspace{4ex}
\begin{abstract}
\noindent
This paper considers a class of experimentation games with L\'{e}vy bandits encompassing those of Bolton and Harris (1999) and Keller, Rady and Cripps (2005). Its main result is that efficient (perfect Bayesian) equilibria exist whenever players' payoffs have a diffusion component. Hence, the trade-offs emphasized in the literature do not rely on the intrinsic nature of bandit models but on the commonly adopted solution concept (Markov perfect equilibrium). This is not an artifact of continuous time: we prove that efficient equilibria arise as limits of equilibria in the discrete-time game. Furthermore, it suffices to relax the solution concept to strongly symmetric equilibrium.
\vspace{1ex}
\noindent {\sc Keywords:}
Two-Armed Bandit, Bayesian Learning, Strategic Experimentation, Strongly Symmetric Equilibrium.
\vspace{1ex}
\noindent
{\em JEL} {\sc Classification Numbers:}
C73,
D83.
\end{abstract}
\newpage
\AppendixOut
\setstretch{1.20}
\section{Introduction
\label{sec:intro}}
The goal of this paper is to evaluate the role of the Markov assumption in strategic bandit models. Our main finding is that it is the driving force behind the celebrated trade-off between the free-riding and encouragement effects (Bolton and Harris, 1999). More precisely, we show that free-riding does not prevent efficiency from being achievable in equilibrium when learning involves a Brownian component, as in the Bolton-Harris model.
In the pure Poisson case, free-riding can be overcome entirely if payoff arrivals are not very informative, and partially if they are.
Our framework follows Bolton and Harris (1999), Keller et al.\ (2005)\ and Keller and Rady (2010). Impatient players repeatedly choose between a risky and a safe arm. They share a common prior about the risky arm, which can be of one of two types.
Learning occurs via the players' payoffs, which are publicly observable and, in the case of the risky arm, type-dependent.
In terms of expected payoffs, a risky arm of the good type dominates the safe arm, which in turn dominates a risky arm of the bad type.
The payoff process which we assume for the risky arm is the simplest that encompasses both the Brownian motion in Bolton and Harris (1999) and the Poisson process in Keller et al.\ (2005)\ and Keller and Rady (2010). Unlike ours, these three papers focus on Markov perfect equilibria of the game in continuous time with the posterior probability of a good risky arm as the state variable.
Understanding the role of the Markov refinement requires discretizing the timeline of the game because defining standard game-theoretic notions, such as perfect Bayesian equilibrium, raises conceptual problems in continuous-time games with observable actions (Simon and Stinchcombe, 1989). We then let the time interval between successive opportunities to revise actions vanish in order to get a clean characterization and a meaningful comparison with the literature.
The efficient behavior in this frequent-action limit is for all players to use the risky arm when they are sufficiently confident that it is of the good type, and for all of them to use the safe arm otherwise.
To give some very rough intuition for our results, the best equilibria have a flavor of ``grim trigger.''
Efficiency obtains if, and only if, the following holds when the common belief is right above the threshold where a social planner would stop all experimentation: a player that deviates from the risky to the safe arm would find it best to always use the safe arm thereafter---independent of the outcome of all other players' choices at that instant, and \textit{assuming} that all other players react to the deviation by using the safe arm exclusively thereafter.
Intuitively, having all other players stop experimenting forever is the worst punishment a defecting player can face.
If the best response to this punishment is to also play the safe arm, a unilateral deviation from the risky to the safe arm stops \emph{all} experimentation.
Then, each player effectively faces the same trade-off as the social planner, weighing the informational benefits of all experiments against the cost (in terms of current expected payoffs) of a single one.
A complicating factor, however, is that bandit models are stochastic games: the common belief about the risky arm evolves.
Will the threat be carried out in the case ``good news'' obtains, as the posterior belief might be so optimistic that all players find it preferable to adopt the risky arm, even in case of a single deviation?
Also, if a player expects experimentation to stop in the next instant
absent any good news about the risky arm, it had better be that the good-news event is sufficiently likely; otherwise, what good is the threat that other players stop experimenting?
Hence, the question is whether the threat of the punishment in case of good news is both credible and the corresponding event sufficiently likely.
When there is a diffusion component, it is the leading force in players' belief updating, and good news is likely: the Brownian path is just as likely to go up as it is to go down (in contrast, the Poisson component is much less likely to yield good news).
The threat is credible, moreover: the belief does not jump up, but rather ticks up slightly, so that,
in the next period, it will almost certainly stay in a tight neighborhood of the current belief.
As the efficient belief threshold is strictly smaller than that of a single player experimenting in isolation,
the belief will thus remain in a region where playing safe is a best response to everyone else doing so.
Our criterion is then satisfied.
The situation is less clear in the pure Poisson case.
If good news arrives there, the belief jumps up and may reach a region in which the deviating player would find it optimal to use the risky arm.
This may increase the incentive to deviate and reduce the other player's ability to punish.
If good news is conclusive, for example, it leads all players to play risky forever, so there is no scope for any punishment whatsoever.
For inconclusive news, ``large'' jumps in beliefs allow for some punishment, but not enough for efficiency.
``Small'' jumps, however, are compatible with efficiency, and for the same reason as above: good news generated by other players keeps the posterior belief in a region where each player finds it optimal to use the safe arm if everybody else does so.
At a more technical level, the difference between our findings for payoff processes with and without a Brownian component has to do with the standard deviation of the ``noise'' in observed payoffs, which determines how fast the informational benefit of an experiment with the risky arm vanishes as the discretization period shrinks.
In view of the details needed to formulate an intuition along these lines, we postpone this until after the statement of our main result in Section \ref{sec:result}.
Irrespective of whether efficiency can be achieved or not, we show that both the highest and the lowest average equilibrium payoffs are attainable with strongly symmetric equilibria (SSEs),
that is, equilibria in which all players use the same continuation strategy for any given history, independent of their identity (\textit{e.g.}, regardless of whether they had been the sole deviator).
Moreover, both the highest and lowest payoff are obtained by an alternation between the same two Markov strategies: one that yields the highest payoff for any belief, and which governs play as long as no player deviates, and one that yields the lowest payoff, given that play reverts to the other Markov strategy at some random time.
Both these Markov strategies are cutoff strategies that have all players use the risky arm if and only if the current belief exceeds some threshold.
There is no need to resort to more complicated perfect Bayesian equilibria (PBEs). Of course, PBE need not involve symmetric payoffs, but we show that in terms of total payoffs across players, there is no difference between SSE and PBE: the best and worst total (and so average) equilibrium payoffs coincide.\footnote{One appealing property of SSEs is that payoffs can be studied via a coupled pair of functional equations that extends the functional equation characterizing MPE payoffs (see Proposition \ref{prop:chara-discrete}).}
We further show that the worst average equilibrium payoff equals the optimal payoff of a single player experimenting in isolation.
Two caveats are in order.
First, we have pointed out the importance of studying the discrete-time game to make sense of PBE (as well as to factor out equilibria that are continuous-time quirks, such as the ``infinite-switching equilibria'' of Keller et al.\ (2005), which have no equivalent in discrete time). However, our results are asymptotic to the extent that they only hold when the time interval between rounds is small enough. There is no qualitative difference between an arbitrarily small uptick and a discrete jump when the interval length is bounded away from zero. Our results rely heavily on what is known about the continuous-time limits, and especially on the analyses in Bolton and Harris (1999), Keller and Rady (2010), and Cohen and Solan (2013).
To the extent that some of our proofs are involved, it is because they require careful comparison and convergence arguments. Because we rely on discrete time, we must settle on a particular discretization. We believe that our choice is natural:
players may revise their action choices at equally spaced time
opportunities, while payoffs and information accrue in continuous
time, independent of the duration of the intervals.\footnote{That is, ours is
the simplest version of inertia strategies as introduced by Bergin
and MacLeod (1993).} Nonetheless, other discretizations might conceivably yield different
predictions.
Second, our results do not cover all bandit games. Indeed, an explicit characterization of the single-agent and planner solutions, on which we build, requires some restrictions on the payoff process.
Here, we follow Cohen and Solan (2013) in ruling out bad-news jumps.\footnote{
Unlike Cohen and Solan (2013), we further rule out learning from the size of a lump-sum payoff.
We believe that this restriction is inconsequential; see the concluding comments for further details.}
Moreover, our framework does not subsume the ``breakdowns'' model of Keller and Rady (2015).\footnote{
The technical difficulty
there
is that the value functions cannot be solved in closed form. They are defined recursively, with the functional form depending on the number of
breakdowns
triggering an end to all experimentation. We return to this scenario in the concluding comments.}
Our paper belongs to the growing literature on strategic bandits.
We have already mentioned the standard references in this literature.
Studying the undiscounted limit of the experimentation game, Bolton and Harris (2000) consider the Brownian motion case, while Keller and Rady (2020) allow for L\'{e}vy processes.
A number of authors have extended the exponential bandit framework of Keller et al.\ (2005).
Klein and Rady (2011) and Das, Klein and Schmid (2020) investigate games in which the quality of the risky arm is heterogeneous across players.
Dong (2018) endows one player with superior information regarding the state of the world,
Marlats and M\'{e}nager (2019) examine strategic monitoring,
and Thomas (2021) analyzes congestion on the safe arm.
All these papers work in continuous time and rely on MPE as the solution concept.\footnote{
To the best of our knowledge, the MPE concept is adopted by all papers in the literature on strategic bandits unless they consider agency models (the principal having commitment, one solves for a constrained optimum rather than an equilibrium) or drop the assumption of perfect monitoring of actions and payoffs. Examples of the latter include Bonatti and H\"{o}rner (2011), Heidhues et al.\ (2015), and Rosenberg et al.\ (2007). As there is no common belief that could serve as a state variable, these authors use Nash equilibrium or one of its refinements (such as perfect Bayesian equilibrium).}
Here, we focus on the canonical bandit game with discounting and homogeneous risky arms,
but relax the solution concept by considering a sequence of discrete-time games.\footnote{
Hoelzemann and Klein (2020) suggest that MPE may be a decent predictor of subjects' behavior in a laboratory experiment of the Keller et al.\ (2005)\ setting. They reject the hypothesis that subjects played according to the welfare-maximizing PBE constructed here. Rather, subjects adopted non-cutoff and turn-taking behaviors, which are quite reminiscent of Keller et al.\ (2005)'s simple MPEs. The paper leaves open the question of what deterred subjects from the simple on-path cutoff behavior of the best equilibrium.}
Our results show that the conclusions drawn from strategic-experimentation models may crucially depend on the equilibrium concept being used.
When strategic experimentation is embedded in a richer environment, however, the robustness of MPE-based findings depends on the fine details of the game.
Two papers from the industrial organization literature that build on Keller et al.\ (2005)\ may serve to illustrate this.
Besanko and Wu (2013) study how learning and product-market externalities affect incentives to cooperate in R\&D.
Our results apply to the case that the overall externality is positive (so there is an incentive to free-ride on other firms' R\&D efforts); the best SSE then involves experimentation at full intensity down to the same cutoff as in the symmetric MPE.
The comparison between research competition and research cooperation thus becomes simpler, but the main insights remain unchanged.
In Besanko, Tong and Wu's (2018) analysis of research subsidies, our results would again reduce firms' free-riding (see their footnote 19), and hence increase the incentives to invest under the different subsidy types that they consider.
If there is no shadow cost of public funds, the funding agency can overcome free-riding through the design of its subsidy program, so MPE is not restrictive in this case.
With such a shadow cost, by contrast, this is no longer true: even if the agency chooses the subsidy that minimizes the risk of underinvestment in R\&D, this investment is not ``flat-out'' above the resulting cutoff, so the best SSE would again improve matters here.
Our paper also contributes to the literature on SSE.
Equilibria of this kind have been studied in repeated games since Abreu (1986). They are known to be
restrictive. First, they make no sense if the model itself fails to be
symmetric. However, as Abreu (1986) notes for repeated games, they are
(i) easily calculated, being completely characterized by two
simultaneous scalar equations; (ii) more general than static Nash, or even
Nash reversion; and even (iii) without loss in terms of total welfare,
at least in some cases, as in ours. See also Abreu, Pearce and
Stacchetti (1986) for the optimality of symmetric equilibria within a
standard oligopoly framework and Abreu, Pearce and Stacchetti (1993)
for a motivation of the solution concept based on a notion of equal
bargaining power. Cronshaw and Luenberger (1994) conduct a more
general analysis for repeated games with perfect monitoring, showing
how the set of SSE payoffs can be obtained by finding the largest
scalar that solves a certain equation. Our paper shows that
properties (i)--(iii) extend to bandit games, with ``Markov perfect''
replacing ``Nash'' in statement (ii) and ``functional'' replacing ``scalar'' in (i): as mentioned above, a pair of functional equations
replaces the usual Hamilton-Jacobi-Bellman (HJB) (or Isaacs) equation from optimal control.
The paper is organized as follows.
Section \ref{sec:model} introduces the model.
Section \ref{sec:continuous-time} characterizes the efficient solution when actions can be chosen in continuous time and shows that MPEs cannot achieve efficiency.
Section \ref{sec:discrete-time} presents the game in which actions can
only be adjusted at regularly spaced points in time, the discrete-time game or discrete game for short.
Section \ref{sec:result} contains the main results regarding the set of equilibrium payoffs in the discrete game as the time between consecutive choices tends to zero.
Section \ref{sec:construction} is devoted to the construction of SSE in the discrete game.
Section \ref{sec:functional-equations} studies functional equations
that characterize SSE payoffs in both the discrete game and the continuous-time limit.
Section \ref{sec:conclu} concludes the paper.
Appendix \ref{app:auxiliary} presents auxiliary results on the evolution of beliefs and on various payoff functions.
The proofs of all other results are relegated to Appendix \ref{app:proofs}.
\section{The Model
\label{sec:model}}
Time $t \in [0,\infty)$ is continuous.
There are $N \geq 2$ players,
each facing the same two-armed bandit problem with one safe and one risky arm.
The safe arm generates a known constant payoff $s > 0$ per unit of time.
The distribution of the payoffs generated by the risky arm depends on the state of the world, $\theta \in \{0,1\}$, which nature draws at the outset with $\mathbb{P}\left[ \theta = 1 \right] = p$. Players do not observe $\theta$, but they know $p$.
They also understand that the evolution of the risky payoffs depends on $\theta$.
Specifically, the payoff process $X^n$ associated with player $n$'s risky arm evolves according to
$$
dX^n_t = \alpha_\theta\,dt+\sigma\,dZ^n_t + h\,dN^n_t,
$$
where
$Z^n$ is a standard Wiener process,
$N^n$ is a Poisson process with intensity $\lambda_\theta$,
and the scalar parameters $\alpha_0, \alpha_1, \sigma, h, \lambda_0, \lambda_1$ are known to all players.
Conditional on $\theta$, the processes $Z^1,\ldots,Z^N,N^1,\ldots,N^N$ are independent.
As $Z^n$ and $N^n - \lambda_\theta t$ are martingales, the expected payoff increment from using the risky arm over an interval of time $[t, t + dt)$ is
$m_\theta \, dt$ with $m_\theta = \alpha_\theta + \lambda_\theta h$.
Players share a common discount rate $r>0$.
We write $k_{n,t} = 0$ if player $n$ uses the safe arm at time $t$ and $k_{n,t} = 1$ if the player uses the risky arm at time $t$.
Given actions $(k_{n,t})_{t \geq 0}$ such that $k_{n,t} \in \{0,1\}$ is measurable with respect to the information available at time $t$,
player $n$'s total expected discounted payoff, expressed in per-period units, is
$$
\mathbb{E}\! \left[ \int_0^\infty r e^{-rt} \left[(1-k_{n,t}) s + k_{n,t} m_\theta\right] \, dt \right],
$$
where the expectation is over both the random variable $\theta$ and the stochastic process $(k_{n,t})$.\footnote{
Note that we have not yet defined the set of strategies available to each player and hence are silent at this point on how the players' strategy profile actually induces a stochastic process of actions
$(k_{n,t})_{t \geq 0}$ for each of them.
We will close this gap in two different ways in Sections \ref{sec:continuous-time} and \ref{sec:discrete-time}: by imposing Markov perfection in the former and a discrete time grid of revision opportunities in the latter.}
We make the following assumptions:
(i) $m_0 < s < m_1$, so each player prefers the risky arm to the safe arm in state $\theta=1$ and the safe arm to the risky arm in state $\theta=0$;
(ii) $\sigma > 0$ and $h > 0$, so the Brownian payoff component is always present and jumps of the Poisson component entail positive lump-sum payoffs;
(iii) $\lambda_1 \geq \lambda_0 \geq 0$, so jumps are at least as frequent in state $\theta=1$ as in state $\theta=0$.
Players begin with a common prior belief about $\theta$, given by the probability $p$ with which nature draws state $\theta = 1$.
Thereafter, they learn about this state in a Bayesian fashion by observing one another's actions and payoffs; in particular, they hold common posterior beliefs throughout time.
A detailed description of the evolution of beliefs is presented in Appendix \ref{app:beliefs}.
When $\lambda_1=\lambda_0$ (and hence $\alpha_1 > \alpha_0$), the
arrival of a lump-sum payoff contains no information about the state
of the world, and our setup is equivalent to that in Bolton and Harris
(1999), with the learning being driven entirely by the Brownian payoff component.
When $\alpha_1=\alpha_0$ (and hence $\lambda_1 > \lambda_0$), the
Brownian payoff component contains no information, and our setup is
equivalent to that in Keller et al.\ (2005)\ or Keller and Rady (2010), depending on whether $\lambda_0 =
0$ or $\lambda_0 > 0$, with the learning being driven entirely by the arrival of lump-sum payoffs.\footnote{
Keller et al.\ (2005)\ and Keller and Rady (2010)\ consider compound Poisson processes where the distribution of lump-sum payoffs (and their mean $h$) at the time of a Poisson jump is independent of, and hence uninformative about, the state of the world.
By contrast, Cohen and Solan (2013) allow for L\'{e}vy processes where
the size of lump-sum payoffs contains information about the state, but
a lump sum of any given size arrives weakly more frequently in state $\theta=1$.}
\section{Efficiency and Markov Perfect Equilibria in Continuous Time
\label{sec:continuous-time}}
The authors cited in the previous paragraph assume that players use pure
Markov strategies in continuous time with the posterior belief as the state variable,
so that $k_{n,t}$ is a time-invariant deterministic function of the probability $p_t$ assigned to state $\theta = 1$ at time $t$.\footnote{
In the presence of discrete payoff increments, one actually has to
take the left limit $p_{t-}$ as the state variable, owing to the
informational constraint that the action chosen at time $t$ cannot
depend on the arrival of a lump sum at $t$.
In the following, we simply write $p_t$ with the understanding that the left limit is meant whenever this distinction is relevant.
Note that $p_{0-}=p_0$ by convention.}
In this section, we show how some of their
insights generalize to the present setting.
First, we present the efficient benchmark.
Second, we show that efficient behavior cannot be sustained as an MPE.
Consider a planner who maximizes the \emph{average} of the players' expected payoffs in continuous time by selecting an entire action profile $(k_{1,t},\ldots,k_{N,t})$ at each time $t$.
The corresponding average expected payoff increment is
$$
\left[ \left( 1 - \frac{K_t}{N} \right) s + \frac{K_t}{N} m_\theta \right] dt \qquad \text{with} \qquad K_t = \sum_{n=1}^{N} k_{n,t}.
$$
A straightforward extension of the main results of Cohen and Solan (2013) shows that the evolution of beliefs also depends on $K_t$ only\footnote{Cf.\ Appendix \ref{app:beliefs}.}
and that the planner's value function, denoted by $V_N^*$, has the following properties.
First, $V_N^*$ is the unique once-continuously differentiable solution of the HJB equation
$$
v(p) = s + \max_{K \in \{0,1, \ldots, N\}} K \left[b(p,v) - \frac{c(p)}{N} \right]
$$
on the open unit interval subject to the boundary conditions $v(0) = m_0$ and $v(1) = m_1$.
Here,
\begin{equation} \label{eq:b}
b(p,v)
= \frac{\rho}{2r}p^2(1-p)^2 v''(p)
- \frac{\lambda_1-\lambda_0}{r} \, p(1-p) \, v'(p)
+ \frac{\lambda(p)}{r} \, \left[ v(j(p)) - v(p) \right]
\end{equation}
can be interpreted as the expected informational benefit of using the risky arm when continuation payoffs are given by a (sufficiently regular) function $v$.\footnote{
Up to division by $r$, this is the infinitesimal generator of the process of posterior beliefs for $K = 1$, applied to the function $v$; cf.\ Appendix \ref{app:beliefs} for details.}
The first term on the right-hand side of \eref{eq:b} reflects Brownian learning, with
$$
\rho = \frac{(\alpha_1-\alpha_0)^2}{\sigma^2}
$$
representing the signal-to-noise ratio for the continuous payoff component.
The second term captures the downward drift in the belief when no
Poisson lump sum arrives.
The third term expresses the discrete change in the overall payoff
once such a lump sum arrives, with the belief jumping up from $p$ to
$$
j(p) = \frac{\lambda_1 p}{\lambda(p)};
$$
this occurs at the expected rate
$$
\lambda(p) = p \lambda_1 + (1-p) \lambda_0.
$$
The function
$$
c(p) = s - m(p)
$$
captures the opportunity cost of playing the risky arm in terms of expected current payoff forgone;
here,
$$
m(p) = p m_1 + (1-p) m_0
$$
denotes the risky arm's expected flow payoff given the belief $p$.
Thus, the planner weighs the shared opportunity cost of each experiment on the risky arm against the learning benefit, which accrues fully to each agent because of the perfect informational spillover.
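For concreteness, the belief jump $j(p)$ is simply Bayes' rule conditional on observing a lump-sum arrival; the following minimal Python sketch of $j(p)$ and $\lambda(p)$ is our own illustration (function names are not from the paper):

```python
def arrival_rate(p, lam1, lam0):
    # lambda(p): expected Poisson arrival rate under belief p
    return p * lam1 + (1 - p) * lam0

def post_jump(p, lam1, lam0):
    # j(p): Bayes' rule conditional on observing a lump sum,
    # P(theta = 1 | jump) = lam1 * p / (lam1 * p + lam0 * (1 - p))
    return lam1 * p / arrival_rate(p, lam1, lam0)
```

With $\lambda_1 \geq \lambda_0$, the posterior jumps weakly upward, consistent with lump sums being (weakly) good news; when $\lambda_1 = \lambda_0$, arrivals are uninformative and $j(p) = p$.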
Second, there exists a cutoff $p_N^*$ such that all agents using the safe arm ($K = 0$) is optimal for the planner when $p \leq p_N^*$, and all agents using the risky arm ($K = N$) is optimal when $p > p_N^*$.
This cutoff is given by
$$
p_N^* = \frac{\mu_N (s - m_0)}{(\mu_N + 1) (m_1 - s) + \mu_N (s - m_0)}\,,
$$
where $\mu_N$ is the unique positive solution of the equation
$$
\frac{\rho}{2}\mu(\mu+1)+(\lambda_1-\lambda_0)\mu+\lambda_0\left(\frac{\lambda_0}{\lambda_1}\right)^\mu-\lambda_0-\frac{r}{N} = 0.
$$
Both $\mu_N$ and $p_N^*$ increase in $r/N$.
Thus, the interval of beliefs for which all agents using the risky arm is efficient widens with the number of agents and their patience.
Third, the value function satisfies $V_N^*(p)=s$ for $p \leq p_N^*$, and
\begin{equation}\label{eq:coopval}
V_N^*(p)=m(p)+\frac{c(p_N^*)}{u(p_N^*;\mu_N)}\ u(p;\mu_N) > s,
\end{equation}
for $p > p_N^*$, where
$$
u(p;\mu) = (1-p) \left(\frac{1-p}{p}\right)^\mu
$$
is strictly decreasing and strictly convex for $\mu > 0$. The function
$V_N^*$ is strictly increasing and strictly convex on $[p_N^*,1]$.
By setting $N=1$, one obtains the single-agent value function $V_1^*$ and corresponding cutoff $p_1^* > p_N^*$.
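As a numerical illustration, $\mu_N$ and $p_N^*$ can be computed by bisecting the defining equation for $\mu_N$, whose left-hand side is strictly increasing in $\mu$; the sketch below is our own code (not from the paper) and lets one verify the comparative statics in $r/N$ stated above:

```python
def mu_root(rho, lam1, lam0, r_over_n):
    # unique positive root of
    # (rho/2) mu (mu+1) + (lam1-lam0) mu + lam0 (lam0/lam1)^mu - lam0 - r/N = 0
    def f(mu):
        return (rho / 2) * mu * (mu + 1) + (lam1 - lam0) * mu \
            + lam0 * (lam0 / lam1) ** mu - lam0 - r_over_n
    lo, hi = 0.0, 1.0
    while f(hi) < 0:        # expand the bracket until f changes sign
        hi *= 2
    for _ in range(200):    # bisection: f(lo) < 0 <= f(hi)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def cutoff(mu, s, m0, m1):
    # p_N^* as a function of mu_N
    return mu * (s - m0) / ((mu + 1) * (m1 - s) + mu * (s - m0))
```

For instance, with the parameters of Figure 1 ($\rho = 0.01$, $\lambda_1 = 1$, $\lambda_0 = 0.2$, $s = 1$, $m_0 = 0.3$, $m_1 = 1.6$), one can check numerically that $\mu_N$ and the cutoff increase in $r/N$, so that $p_N^* < p_1^*$.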
Now consider $N \geq 2$ players acting noncooperatively.
Suppose that each of them uses a Markov strategy with the common belief as the state variable.
As in Bolton and Harris (1999), Keller et al.\ (2005)\ and Keller and Rady (2010), the HJB equation for
player $n$ when he faces opponents who use Markov strategies is given by
$$
v_n(p) = s + K_{\neg n}(p) b(p,v_n)+ \max_{k_n \in \{0,1\}} k_n \left[b(p,v_n) - c(p) \right],
$$
where $K_{\neg n}(p)$ is the number of $n$'s opponents that use the risky arm.
That is, when playing a best response, each player weighs the
opportunity cost of playing risky against his own informational benefit only.
Consequently, $V_N^*$ does not solve the above HJB equation when player $n$'s opponents use the efficient strategy.
Efficient behavior therefore cannot be sustained in MPE.
To obtain existence of a symmetric MPE, the above authors actually allow the players to allocate one unit of a perfectly divisible resource across the two arms at each point in time, so the fraction allocated to the risky arm can be $k_{n,t} \in [0,1]$.
The symmetric MPE is unique and has all players play safe on an interval $[0,\tilde{p}_N]$ with $p_N^* < \tilde{p}_N < p_1^*$, play risky on an interval $[p_N^\dagger,1]$ with $p_N^\dagger > \tilde{p}_N$, and use an interior allocation on $(\tilde{p}_N,p_N^\dagger)$; see Keller and Rady (2010, Proposition 4), for example.
An adaptation of the proof of that proposition yields the same result for the payoff processes that we consider here.\footnote{Details are available from the authors on request.}
Figure 1 illustrates the payoff function $\tilde{V}_N$ of the symmetric MPE together with the cooperative value function $V_N^*$ and the single-agent value function $V_1^*$ for the parameters $(r,s,\sigma,\alpha_1,\alpha_0,h,\lambda_1,\lambda_0,N)=(1,1,1,0.1,0,1.5,1,0.2,5)$,
implying $\rho = 0.01$, $m_1 = 1.6$, $m_0 = 0.3$ and $(p_N^*,\tilde{p}_N,p_1^*,p_N^\dagger) \simeq (0.27,0.40,0.45,0.53)$.
The comparatively large gap between $\tilde{V}_N$ and $V_N^*$ reflects the double inefficiency of the MPE:
it not only involves a higher cutoff (hence, an earlier stop to all use of the risky arms) but also entails too low an intensity of experimentation on an intermediate range of beliefs.
\begin{figure}[h]
\centering
\begin{picture}(175.00,100.00)(0,0)
\put(10,00){\scalebox{1.45}{\includegraphics{figure1.eps}}}
\end{picture}
\begin{quote}
\caption{Payoffs $V_N^*$ (upper solid curve), $\tilde{V}_N$ (dotted) and $V^*_1$ (lower solid curve) for
$(r,s,\sigma,\alpha_1,\alpha_0,h,\lambda_1,\lambda_0,N)=(1,1,1,0.1,0,1.5,1,0.2,5)$.
}
\end{quote}
\end{figure}
\section{The Discrete Game
\label{sec:discrete-time}}
Henceforth, we restrict players to changing their actions $k_{n,t} \in \{0,1\}$
only at the times $t=0, \Delta, 2 \Delta, \ldots$ for some fixed $\Delta > 0$.
This yields a discrete-time game evolving in a continuous-time framework; in particular, the payoff processes are observed continuously.\footnote{ \label{fn:discretization}
While arguably natural, our discretization remains nonetheless \textit{ad hoc}, and other discretizations might yield different results.
Not only is it well known that the limits of the solutions of the discrete-time models
might differ from the continuous-time solutions, but the particular
discrete structure might also matter; see, among others, M\"{u}ller (2000), Fudenberg and Levine (2009), H\"{o}rner and Samuelson (2013), and Sadzik and Stacchetti (2015).
In H\"{o}rner and Samuelson (2013), for instance, there are multiple solutions to the optimality equations, corresponding to different boundary conditions, and to select among them, it is necessary to investigate in detail the discrete-time game (see their Lemma 3).
However, the role of the discretization goes well beyond selecting the ``right'' boundary condition; see Sadzik and Stacchetti (2015). }
Moreover, we allow for non-Markovian strategies.
The expected discounted payoff increment from using the safe arm for the length of time $\Delta$ is
$\int_0^\Delta r\, e^{-r\,t}\, s \, dt = (1-\delta) s$
with $\delta = e^{-r\,\Delta}$.
Conditional on $\theta$, the expected discounted payoff increment from using the risky arm is
$\int_0^\Delta r\, e^{-r\,t}\, m_\theta \, dt = (1-\delta) m_\theta$.
Given the probability $p$ assigned to $\theta=1$, the expected discounted payoff increment from the risky arm conditional on all available information is
$(1-\delta) m(p)$.
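These increments follow from integrating the discounted flow over one period; a quick numerical check of $\int_0^\Delta r e^{-rt} s \, dt = (1-\delta) s$ with $\delta = e^{-r\Delta}$ (our own code, for illustration only):

```python
import math

def discounted_increment(r, delta_t, flow, steps=10000):
    # left Riemann sum of  ∫_0^Δ r e^{-rt} * flow dt
    dt = delta_t / steps
    return sum(r * math.exp(-r * i * dt) * flow * dt for i in range(steps))
```

The sum converges to $(1 - e^{-r\Delta}) \cdot \text{flow}$ as the step size shrinks.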
A history of length $t=\Delta,2\Delta,\ldots$ is a sequence
$$
h_t = \left(
\big(k_{n,0},\widetilde Y^n_{[0,\Delta)}\big)_{n=1}^N,
\big(k_{n,\Delta},\widetilde Y^n_{[\Delta,2\Delta)}\big)_{n=1}^N,
\ldots,
\big(k_{n,t-\Delta},\widetilde Y^n_{[t-\Delta,t)}\big)_{n=1}^N
\right),
$$
where $k_{n,\ell \Delta}=1$ if player $n$ uses the risky arm on the time interval $[\ell \Delta, (\ell+1)\Delta)$;
$k_{n,\ell\Delta}=0$ if player $n$ uses the safe arm on this interval;
$\widetilde Y^n_{[\ell \Delta, (\ell+1)\Delta)}$ is the observed sample path $Y^n_{[\ell \Delta, (\ell+1)\Delta)}$ on the interval $[\ell \Delta, (\ell+1)\Delta)$ of the payoff process associated with player $n$'s risky arm if $k_{n,\ell \Delta}=1$;
and $\widetilde Y^n_{[\ell \Delta, (\ell+1)\Delta)}$ equals the empty set if $k_{n,\ell \Delta}=0$.
We write $H_t$ for the set of all histories of length $t$, set $H_0=\{\emptyset\}$, and let $H = \bigcup_{t=0,\Delta,2\Delta,\ldots}^\infty H_t$.
In addition, we assume that players have access to a public randomization device in every period, namely, a draw from the uniform distribution on $[0,1]$, which is assumed to be independent of $\theta$ and across periods. Following standard practice, we omit its realizations from the description of histories.
A behavioral strategy $\sigma_n$ for player $n$ is a sequence $(\sigma_{n,t})_{t=0,\Delta,2\Delta,\ldots}$, where $\sigma_{n,t}$ is a measurable map from $H_t$ to the set of probability distributions on $\{0,1\}$;
a pure strategy takes values in the set of degenerate distributions only.
Along with the prior probability $p_0$ assigned to $\theta = 1$, each profile of strategies induces a distribution over $H$.
Given his opponents' strategies $\sigma_{-n}$, player $n$ seeks to maximize
$$
(1-\delta)\, \mathbb{E}\!^{\ \sigma_{-n},\sigma_n}\left[\sum_{\ell=0}^\infty \delta^\ell \left\{ \astrut{2.75}
[1-\sigma_{n,\ell\Delta}(h_{\ell\Delta})] s
+ \sigma_{n,\ell\Delta}(h_{\ell\Delta}) m_\theta
\right\}\right].
$$
By the law of iterated expectations, this equals
$$
(1-\delta)\, \mathbb{E}\!^{\ \sigma_{-n},\sigma_n}\left[\sum_{\ell=0}^\infty \delta^\ell \left\{ \astrut{2.75}
[1-\sigma_{n,\ell\Delta}(h_{\ell\Delta})] s
+ \sigma_{n,\ell\Delta}(h_{\ell\Delta}) m(p_{\ell\Delta})
\right\}\right].
$$
Nash equilibrium and PBE are defined in the usual way; an MPE is a PBE in which actions after any history $h_t$ depend only on the associated posterior belief $p_t$.
Under the standard ``no signaling what you don't know'' refinement, beliefs are pinned down after all histories, both on and off the equilibrium path.\footnote{
While we could equivalently define this Bayesian game as a stochastic game with the common posterior belief as a state variable, no characterization or folk theorem applies to our setup, as the Markov chain (over consecutive states) does not satisfy the sufficient ergodicity assumptions;
see Dutta (1995) and H\"{o}rner, Sugaya, Takahashi and Vieille (2011).
}
An SSE is a PBE in which all players use the same strategy: $\sigma_n(h_t)=\sigma_{n'}(h_t)$ for all $n,n'$ and $h_t \in H$.
This implies symmetry of behavior after \emph{any} history, not just on the equilibrium path of play. By definition, any symmetric MPE is an SSE, and any SSE is a PBE.
\section{Main Results}
\label{sec:result}
Fix $\Delta > 0$.
For $p \in [0,1]$, let $\Wsup_{\rm PBE}(p)$ and $\Winf_{\rm \, PBE}(p)$ denote the supremum and infimum, respectively, of the set of average payoffs (per player) over all PBE, given prior belief $p$.
Let $\Wsup_{\rm SSE}(p)$ and $\Winf_{\rm \, SSE}(p)$ be the corresponding supremum and infimum over all SSE. If such equilibria exist,
\begin{equation}\label{eq:payoffs}
\Wsup_{\rm PBE}(p) \ge \Wsup_{\rm SSE}(p) \ge \Winf_{\rm \, SSE}(p) \ge \Winf_{\rm \, PBE}(p).
\end{equation}
Given that we assume a public randomization device, these upper and lower bounds define the corresponding equilibrium average payoff sets.
As any player can choose to ignore the information contained in the
other players' experimentation results, the value function
$W_1^\Delta$ of a single agent experimenting in isolation constitutes
a lower bound on a player's payoff in any PBE.
Lemma \ref{lem:convergence-single-agent} establishes that this lower bound converges to $V_1^*$ as $\Delta \rightarrow 0$. Hence, we obtain a lower bound on the limits of all terms in \eqref{eq:payoffs}, namely $\liminf_{\Delta\rightarrow 0}\Winf_{\rm \, PBE} \geq V_1^*$.
An upper bound is also easily found. As any discrete-time strategy profile is feasible for the continuous-time planner from the previous section,
it holds that $\Wsup_{\rm PBE} \leq V_N^*$.
The main theorem provides an exact characterization of the limits of
all four functions. It requires introducing a new family of
payoffs. Namely, we define the players' common payoff in continuous time when they all use the
risky arm if, and only if, the belief exceeds a given threshold $\hat{p}$. This
function admits a closed form that generalizes the first-best payoff $V_N^*$ (cf.\ Section \ref{sec:continuous-time}).
It is given by
\[
V_{N,\phat}(p)=m(p)+\frac{c(\hat{p})}{u(\hat{p};\mu_N)}\ u(p;\mu_N)
\]
for $p > \hat{p}$, and by $V_{N,\phat}(p)=s$ otherwise.
For $\hat{p}=p_N^*$, $V_{N,\phat}$ coincides with the cooperative value function $V_N^*$.
For $\hat{p}>p_N^*$, it satisfies $V_{N,\phat} < V_N^*$ on $(p_N^*,1)$, is continuous, strictly increasing and strictly convex on $[\hat{p},1]$, and continuously differentiable except for a convex kink at $\hat{p}$.
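A direct way to see the boundary behavior of this family is to evaluate the closed form; the sketch below (our own helper code, under the parameter names used in the text) checks that $V_{N,\hat{p}}$ pastes continuously to $s$ at $\hat{p}$ and equals $m_1$ at $p = 1$:

```python
def u(p, mu):
    # u(p; mu) = (1-p) ((1-p)/p)^mu, strictly decreasing and convex for mu > 0
    return (1 - p) * ((1 - p) / p) ** mu

def v_threshold(p, phat, mu, s, m0, m1):
    # V_{N, phat}: safe payoff below the threshold, closed form above it
    m = p * m1 + (1 - p) * m0                    # expected risky flow m(p)
    c_hat = s - (phat * m1 + (1 - phat) * m0)    # opportunity cost c(phat)
    if p <= phat:
        return s
    return m + c_hat / u(phat, mu) * u(p, mu)
```

Continuity at $\hat{p}$ follows because $m(\hat{p}) + c(\hat{p}) = s$, and $V_{N,\hat{p}}(1) = m_1$ because $u(1;\mu) = 0$.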
\begin{thm}\label{thm}
{\rm (i)} There exists $\hat{p} \in [p_N^*,p_1^*]$ such that
$$\lim_{\Delta \rightarrow 0} \Wsup_{\rm PBE} = \lim_{\Delta \rightarrow 0} \Wsup_{\rm SSE} = V_{N,\phat},$$
and
$$\lim_{\Delta \rightarrow 0} \Winf_{\rm \, PBE} = \lim_{\Delta \rightarrow 0} \Winf_{\rm \, SSE} = V_1^*,$$
uniformly on $[0,1]$.
{\rm (ii)} If $\rho > 0$, then $\hat{p} = p_N^*$ (and hence $V_{N,\phat}= V_N^*$).
{\rm (iii)} If $\rho = 0$, then $\hat{p}$ is the unique belief in $[p_N^*,p_1^*]$ satisfying
\begin{equation} \label{eq:phat}
N \lambda(\hat{p}) \left[ V_{N,\phat}(j(\hat{p})) - s\right] - (N-1) \lambda(\hat{p}) \left[V_1^*(j(\hat{p})) - s \right]=rc(\hat{p});
\end{equation}
moreover,
$\hat{p} = p_N^*$ if, and only if, $j(p_N^*) \leq p_1^*$,
and
$\hat{p} = p_1^*$ if, and only if, $\lambda_0 = 0$.
\end{thm}
This result is proved in Section \ref{sec:construction}, where we construct SSEs that get arbitrarily close to the highest and lowest possible average PBE payoffs for sufficiently short discretization intervals.
Given the fundamental difference between learning from a Brownian component ($\rho > 0$) and learning from jumps only ($\rho = 0)$, we treat these scenarios separately: the former case is covered by Propositions \ref{prop:SSE-Brownian}--\ref{prop:limit-Brownian} in Section \ref{sec:construction-Brownian}, the latter by Propositions \ref{prop:thresh}--\ref{prop:limit-Poisson} in Section \ref{sec:construction-Poisson}.
Pure Poisson learning needs two more intermediate results than the case with a Brownian component: one to identify the highest possible average PBE payoff in the frequent-action limit (Proposition \ref{prop:thresh}), another (Proposition \ref{prop:SSE-exponential}) to cover the case of fully conclusive Poisson news ($\rho = 0$ and $\lambda_0 = 0$), which requires a different approach to equilibrium construction than either Brownian or inconclusive Poisson learning.
When $\rho > 0$ or $\lambda_0 > 0$, in fact, we can construct SSEs of the discrete game via two-state automata with a ``normal'' and a ``punishment'' state.
In the normal state, players are supposed to use the risky arm at all beliefs above some threshold $\underline{p}$;
in the punishment state, the players are again supposed to use a cutoff strategy, but with a higher threshold $\bar{p}$.
The idea here is that the normal state has all players experiment over as large a range of beliefs as possible,
whereas the punishment state has all players refrain from experimentation---and thus from the production of valuable information---at all beliefs except the most optimistic ones.
A unilateral deviation in the normal state triggers a transition to the punishment state; otherwise, the normal state persists.
The punishment state persists after a unilateral deviation there; otherwise, a public randomization device determines whether play reverts to the normal state.
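The two-state automaton just described can be summarized as a transition function; the following sketch is our own stylization (names and signature are hypothetical) encoding exactly these rules:

```python
NORMAL, PUNISH = "normal", "punishment"

def next_state(state, actions, thresholds, p, revert):
    # prescribed play: risky (1) iff the current belief exceeds the
    # state's cutoff (the lower threshold in NORMAL, the higher in PUNISH)
    prescribed = 1 if p > thresholds[state] else 0
    unilateral_dev = sum(a != prescribed for a in actions) == 1
    if state == NORMAL:
        # a unilateral deviation triggers punishment; otherwise stay normal
        return PUNISH if unilateral_dev else NORMAL
    if unilateral_dev:
        # the punishment state persists after a unilateral deviation there
        return PUNISH
    # otherwise a public randomization draw decides whether play reverts
    return NORMAL if revert else PUNISH
```

The `revert` flag stands in for the realization of the public randomization device mentioned in Section \ref{sec:discrete-time}.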
When $\rho = 0$ and $\lambda_0 = 0$, by contrast, our proof relies on the existence of two symmetric mixed-strategy equilibria of the discrete game for beliefs close to the single-agent cutoff.
Choosing continuation play as a function of history in the appropriate way, we can then construct SSEs with suitable properties at higher beliefs.
Turning to part (i) of the theorem, the fact that the best SSE payoff and the best average PBE payoff coincide---and equal the payoff of a cutoff strategy---in the frequent-action limit is plausible (though not obvious) because efficiency in continuous time requires symmetric play of a cutoff strategy; cf.\ Section \ref{sec:continuous-time}.
As to the worst payoffs, the requisite punishments can also be implemented in a strongly symmetric fashion.
At a belief below the threshold $\bar{p}$ in the punishment state of the above automaton, for instance, either everybody playing safe forever is already an equilibrium of the game, or a unilateral deviation to the risky arm provides a higher payoff.
In the latter case, the promise to revert to joint risky play at a later time serves to compensate the players for the flow payoff deficit that playing safe causes in the meantime.
Part (ii) of the theorem states that efficiency can be achieved in the frequent-action limit whenever the Brownian component of risky payoffs is informative about the true state ($\rho > 0$).
The reason is that the resulting diffusion component in the stochastic process of posterior beliefs is the dominant force in belief updating for small discretization steps $\Delta$.
To gain some intuition, consider the above two-state automaton with belief thresholds $\underline{p} \in (p_N^*,p_1^*)$ and $\bar{p} \in (p_1^*,1)$ in the normal and punishment state, respectively, and think of $\bar{p}$ as being very close to 1, so that punishment essentially means autarky.
At a belief $p$ to the immediate right of $\underline{p}$, a player contemplating a deviation from the risky to the safe arm in the normal state then faces the following trade-off.
On the one hand, the deviation saves the player the opportunity cost of experimentation, $(1-\delta) c(p)$, which is $O(\Delta)$ as $\Delta$ vanishes.
On the other hand, the deviation changes the lottery over continuation values to the player's disadvantage.
In fact, use of the safe arm triggers a transition to the punishment state, with an expected continuation value of at most the safe payoff level $s$ plus a term that is linear in $\Delta$.
This is because the probability that the opponents' experiments lift the posterior belief close to $p_1^*$ (the only scenario in which all experimentation does not stop for good) within the length of time $\Delta$ is $O(\Delta)$.\footnote{
In the absence of any lump-sum payoff, this is a consequence of Chebyshev's inequality; the probability of a lump-sum payoff within $\Delta$ units of time, moreover, is itself of order $\Delta$ for small $\Delta$.}
Staying with the risky arm, by contrast, would mean that continuation values above $s$ are always within immediate reach---no matter how small $\Delta$ becomes and even if one takes $p$ all the way down to $\underline{p}$---implying an expected continuation value of at least $s$ plus a term that is $O(\Delta^\gamma)$ with $\gamma < 1$.\footnote{
The proof of Proposition \ref{prop:SSE-Brownian} shows that one can take $\gamma = \divn{3}{4}$.}
Roughly speaking, this is because the payoff function $V_{N,\underline{p}}$ (to which continuation payoffs in the normal state converge as $\Delta$ vanishes) has a convex kink at $\underline{p}$ and, owing to the diffusion part of the belief dynamics, the probability of reaching its upward-sloping part stays bounded away from 0 as $\Delta$ vanishes.
For sufficiently small $\Delta$, therefore, the loss in expected continuation value after a deviation from risky to safe play outweighs the saved opportunity cost, and the deviation is unprofitable.
As this argument works as long as $V_{N,\underline{p}}$ has a convex kink, one can take $\underline{p}$ arbitrarily close to $p_N^*$.
It thus follows that $\hat{p}$, the infimum of possible thresholds $\underline{p}$, equals $p_N^*$.\footnote{
This argument is reminiscent of the intuitive explanation for smooth pasting in stopping problems for diffusion processes as given in Dixit and Pindyck (1994), for example.}
We can therefore reinterpret Figure 1 as depicting the best and worst SSE and average PBE payoffs in the frequent-action limit for the given parameter values, with the payoffs of the symmetric MPE of the continuous-time game sandwiched in between them.
Part (iii) of the theorem characterizes $\hat{p}$ when all updating is driven by the Poisson component of risky payoffs ($\rho = 0$).
The fundamental difference from Brownian learning (and the reason that asymptotic efficiency may be out of reach) is that the cost of a deviation is of the \emph{same} order in $\Delta$ as its benefit.
To see this, consider a two-state automaton with thresholds $\underline{p}$ and $\bar{p}$ as above.
In the normal state, the players' temptation to deviate to the safe arm is strongest when the belief is so close to $\underline{p}$ that the lack of good news over a period of $\Delta$ makes the belief drop below $\underline{p}$, and thus into a region where safe prevails in either state---whether a single player has deviated or not.
Absent a success in the current round, therefore, deviations cannot be punished in the future.
The cost of deviating thus arises only if good news arrives.
Starting out from $\underline{p}$, this is expected to happen with probability $N \lambda(\underline{p}) \Delta + o(\Delta)$ if no player deviates; a deviation reduces this probability to $(N-1) \lambda(\underline{p}) \Delta + o(\Delta)$.
Without a deviation, moreover, a player's continuation payoff then amounts at most to the cooperative payoff---evaluated at the posterior belief after the news event---given no use of the risky arm below $\underline{p}$; the resulting payoff improvement relative to the case that no news arrives converges to $V_{N,\pr}(j(\underline{p})) - s$ as $\Delta$ vanishes.
In the event of a deviation, the continuation payoff is at least the single-player payoff, and the corresponding payoff increment converges to $V_1^*(j(\underline{p})) - s$.
The cost of the deviation in terms of expected continuation value forgone is therefore at most
$$
\left\{ N \lambda(\underline{p}) [V_{N,\pr}(j(\underline{p})) - s] - (N-1) \lambda(\underline{p}) [V_1^*(j(\underline{p})) - s] \right\} \Delta + o(\Delta).
$$
A necessary condition for equilibrium is that this again exceed the saved opportunity cost of playing risky, $(1-\delta) c(\underline{p}) = r c(\underline{p}) \Delta + o(\Delta)$.
At the infimum of possible thresholds $\underline{p}$, the leading (that is, first-order) terms in the cost and benefit of a deviation are just equalized, hence the equation \eref{eq:phat} for $\hat{p}$.
Asymptotic efficiency means that $\hat{p}$ coincides with the efficient continuous-time cutoff $p_N^*$,
which equates the \emph{social} benefit of an experiment with its opportunity cost.
When $\rho = 0$, this benefit is $N \lambda(p_N^*) [V_N^*(j(p_N^*)) - s]$ as the experiment contributes to the arrival of news at rate $\lambda(p_N^*)$, with all $N$ players then reaping the gain $V_N^*(j(p_N^*)) - s$;
this is also the first term on the left-hand side of \eref{eq:phat} when $\hat{p} = p_N^*$ and hence $V_{N,\phat} = V_N^*$.
The opportunity cost is $r c(p_N^*)$, the term on the right-hand side of \eref{eq:phat}.
So, $p_N^*$ solves \eref{eq:phat} if, and only if, the second term on the left-hand side of \eref{eq:phat} vanishes at $\hat{p} = p_N^*$.
As $V_1^* = s$ on $[0,p_1^*]$, this is tantamount to the inequality $j(p_N^*) \le p_1^*$.
Intuitively, this condition means that a deviation from efficient play can be punished by a complete stop to all experimentation---even after good news.
A player's incentives are then perfectly aligned with the social planner's: the individual action effectively dictates the collective action choice.
When $\rho = 0$ and $\lambda_0 = 0$, finally, the arrival of good news freezes the belief at 1, and the resulting cooperative and single-player payoffs both equal $m_1 = \lambda_1 h$.
In this case, \eref{eq:phat} reduces to $\lambda(\hat{p}) [m_1 - s] = r c(\hat{p})$, which equates the benefit of an experiment to a \emph{single} agent with its opportunity cost.
Hence, the solution is $\hat{p} = p_1^*$.
The intuition is straightforward: when a player's continuation payoffs coincide with those of a single agent whether he deviates or not, it is impossible to sustain experimentation below the single-agent cutoff.
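In the case $\rho = 0$, $\lambda_0 = 0$, the reduced equation $\lambda(\hat{p}) [m_1 - s] = r c(\hat{p})$ is explicit and linear in $\hat{p}$, so it can be solved directly. The sketch below is purely illustrative: it assumes the standard specification $\lambda(p) = p\lambda_1$ and $m(p) = p\,m_1$ with $m_1 = \lambda_1 h$ (so $c(p) = s - p\,m_1$), the closed form is our own rearrangement of the linear equation, and the parameter values are hypothetical.

```python
# Illustrative solver for the reduced threshold equation
#   lambda(phat) * (m1 - s) = r * c(phat)
# in the case rho = 0, lambda0 = 0. Assumes lambda(p) = p * lambda1 and
# m(p) = p * m1 with m1 = lambda1 * h; parameter values are illustrative.

def solve_phat(r, s, h, lambda1, tol=1e-12):
    m1 = lambda1 * h
    def f(p):  # excess benefit of an experiment to a single agent
        return p * lambda1 * (m1 - s) - r * (s - p * m1)
    lo, hi = 0.0, 1.0  # f is increasing, f(0) = -r*s < 0, f(1) > 0 when m1 > s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def phat_closed_form(r, s, h, lambda1):
    # rearranging p*lambda1*(m1 - s) = r*(s - p*m1) for p
    m1 = lambda1 * h
    return r * s / (lambda1 * (m1 - s) + r * m1)

if __name__ == "__main__":
    print(solve_phat(1.0, 1.0, 1.5, 1.0))        # 0.5 for these parameters
    print(phat_closed_form(1.0, 1.0, 1.5, 1.0))  # 0.5
```

With $(r,s,h,\lambda_1)=(1,1,1.5,1)$, both routes give $\hat{p} = 0.5$, which by the argument above is also $p_1^*$ for this specification.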
Figure 2 shows the cooperative value function $V_N^*$,
the supremum $V_{N,\phat}$ and infimum $V_1^*$ of the limit SSE and average PBE payoffs,
and the payoff function $\tilde{V}_N$ of the symmetric continuous-time MPE for a parameter configuration that implies $p_N^* < \hat{p} < p_1^*$.
For $\rho = 0$ and $(r,s,h,\lambda_1,\lambda_0,N)=(1,1,1.5,1,0.2,5)$, we indeed have $(p_N^*,\hat{p},\tilde{p}_N,p_1^*,p_N^\dagger)\simeq (0.270,0.399,0.450,0.455,0.571)$.
\begin{figure}[h]
\centering
\begin{picture}(175.00,100.00)(0,0)
\put(10,00){\scalebox{1.45}{\includegraphics{figure2.eps}}}
\end{picture}
\begin{quote}
\caption{Payoffs $V_N^*$ (upper solid curve), $V_{N,\phat}$ (dashed), $\tilde{V}_N$ (dotted) and $V^*_1$ (lower solid curve) for $\rho = 0$ and $(r,s,h,\lambda_1,\lambda_0,N)=(1,1,1.5,1,0.2,5)$.}
\end{quote}
\end{figure}
The figure suggests that, even when the first-best is not achievable in the frequent-action limit, the best SSE performs strictly better than the symmetric continuous-time MPE, maintaining experimentation---at maximal intensity---on a larger set of beliefs.
The following result confirms this ordering of belief thresholds.
It is formulated for pure Poisson learning with inconclusive news since $\hat{p} = \tilde{p}_N = p_1^*$ when $\rho = 0$ and $\lambda_0 = 0$.
\begin{prop}\label{prop:comparison_sym_MPE}
Let $\rho = 0$ and $\lambda_0>0$. Then $\hat{p} < \tilde{p}_N$.
\end{prop}
Figure 2 further shows that relative to the symmetric MPE of the continuous-time game, the best SSE internalizes a much larger part of the informational externality that the players exert on each other.
If we take the payoff difference $V_N^*(p)-V_1^*(p)$ as a measure of the size of that externality,
we can interpret the ratios
$[V_{N,\phat}(p)-V_1^*(p)]/[V_N^*(p)-V_1^*(p)]$
and
$[\tilde{V}_N(p)-V_1^*(p)]/[V_N^*(p)-V_1^*(p)]$
as the fraction of the externality that is internalized by the best SSE and the symmetric MPE, respectively.\footnote{
We set both ratios to 100\% whenever the denominator vanishes, that is, whenever it is efficient for all players to play safe.}
At $\tilde{p}_N$, for example, the latter ratio is 0\% whereas the former is 52.5\% for the parameters underlying Figure 2---
in a scenario with $\hat{p} = p_N^*$, it would even be 100\%.
It was said in Section \ref{sec:continuous-time} that the efficient cutoff $p_N^*$ decreases with the number of players.
For $\rho = 0 $ and $\lambda_0=0$, the threshold $\hat{p}$ is independent of $N$; for inconclusive news, however, it behaves like $p_N^*$.
\begin{prop}\label{prop:comp-statics-N}
For $\rho = 0$ and $\lambda_0>0$, $\hat{p}$ is decreasing in $N$.
\end{prop}
The last result of this section characterizes the area (in the $(\lambda_1, \lambda_0)$-plane) where asymptotic efficiency obtains under pure Poisson learning.
\begin{prop} \label{prop:efficiency-region-Poisson}
Let $\rho = 0$.
Then, $j(p_N^*) > p_1^*$ whenever $\lambda_0 \leq \lambda_1/N$.
On any ray in $\mathbbm{R}_+^2$ emanating from the origin $(0,0)$ with a slope strictly between $1/N$ and 1, there is a unique critical point $(\lambda_1^*,\lambda_0^*)$ at which $j(p_N^*) = p_1^*$;
moreover, $j(p_N^*) > p_1^*$ at all points of the ray that are closer to the origin than $(\lambda_1^*,\lambda_0^*)$,
and $j(p_N^*) < p_1^*$ at all points that are farther from the origin than $(\lambda_1^*,\lambda_0^*)$.
These critical points form a continuous curve that is bounded away from the origin and asymptotes to the ray of slope $1/N$.
The curve shifts downward as $r$ falls or $N$ rises.
\end{prop}
This result is illustrated in Figure 3.
\setlength {\unitlength} {1mm}
\begin{figure}[t]
\begin{picture}(175.00,100.00)(0,0)
\put(30,00){\scalebox{.5}{\includegraphics{figure3.eps}}}
\put(100,40){\makebox(0,0)[cc]{$j(p_N^*) < p_1^*$}}
\put(100,10){\makebox(0,0)[cc]{$j(p_N^*) > p_1^*$}}
\end{picture}
\begin{quote}
\caption{
Asymptotic efficiency is achieved for parameter combinations $(\lambda_1,\lambda_0)$ between the diagonal and the curve but not below the curve. The dashed line is the ray of slope $1/N$.
Parameter values: $r=1$, $N=5$.
}
\end{quote}
\end{figure}
As is intuitive, having more players, or more patience, increases the scope for the first-best.
When $r \to 0$, the curve in Figure 3 converges to the ray of slope $1/N$; given Poisson rates $(\lambda_1,\lambda_0)$, therefore, asymptotic efficiency can always be achieved with a sufficiently large number of players.
When $r \to \infty$, by contrast, the curve in Figure 3 shifts ever higher: as myopic players do not react to future rewards and punishments,
it is no surprise that asymptotic efficiency cannot be attained then.
\section{Construction of Equilibria}
\label{sec:construction}
We first consider the case of a diffusion component (Section \ref{sec:construction-Brownian}) and then turn to the case of pure jump processes (Section \ref{sec:construction-Poisson}).
We need the following notation.
Let $F_K^\Delta(\cdot\vert p)$ denote the cumulative distribution function of the posterior belief $p_\Delta$ when $p_0 = p$ and $K$ players use the risky arm on the time interval $[0,\Delta)$.
For any measurable function $w$ on $[0,1]$ and $p \in [0,1]$, we write
$$
{\cal E}^\Delta_K w(p) = \int_0^1 w(p') \, F_K^\Delta(dp'\vert p),
$$
whenever this integral exists.
Thus, ${\cal E}^\Delta_K w(p)$ is the expectation of $w(p_\Delta)$ given the prior $p$ and $K$ experimenting players.
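For pure Poisson learning, ${\cal E}^\Delta_K w(p)$ can be computed by conditioning on the number of lump sums observed during the period. The following sketch is an illustrative implementation (not taken from the paper), assuming lump sums arrive at total rate $K\lambda_1$ in the good state and $K\lambda_0$ in the bad state, with the posterior given by Bayes' rule.

```python
# Sketch of the expectation operator E^Delta_K w(p) for pure Poisson learning
# (rho = 0). Over a period of length Delta, K experimenting players generate
# lump sums at total rate K*lambda1 (good state) or K*lambda0 (bad state).
import math

def expect_w(w, p, K, Delta, lambda1, lambda0, n_max=80):
    """E[w(p_Delta)] given prior p and K players using the risky arm."""
    mu1, mu0 = K * lambda1 * Delta, K * lambda0 * Delta
    total = 0.0
    for n in range(n_max + 1):
        like1 = math.exp(-mu1) * mu1**n / math.factorial(n)  # P(n | good)
        like0 = math.exp(-mu0) * mu0**n / math.factorial(n)  # P(n | bad)
        prob_n = p * like1 + (1 - p) * like0                 # predictive prob.
        if prob_n == 0.0:
            continue
        posterior = p * like1 / prob_n                       # Bayes' rule
        total += prob_n * w(posterior)
    return total
```

Two sanity checks follow from first principles: for constant $w$ the operator returns that constant, and for $w(p') = p'$ it returns $p$, since the posterior belief is a martingale.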
\subsection{Learning with a Brownian Component ($\rho > 0$)
\label{sec:construction-Brownian}}
For a sufficiently small $\Delta > 0$, we specify an SSE that can be summarized by two functions,
$\overline{\kappa}$ and $\underline{\kappa}$,
which do not depend on $\Delta$.
The equilibrium strategy is characterized by a two-state automaton.
In the ``good'' state, play proceeds according to $\overline{\kappa}$, and the equilibrium payoff satisfies
\begin{equation} \label{eq:SSE-good-state}
\overline{w}^\Delta(p) = (1-\delta)[(1-\overline{\kappa}(p))s + \overline{\kappa}(p) m(p)]+ \delta {\cal E}^\Delta_{N\overline{\kappa}(p)} \overline{w}^\Delta(p),
\end{equation}
while in the ``bad'' state, play proceeds according to $\underline{\kappa}$, and the payoff satisfies
\begin{equation} \label{eq:SSE-bad-state}
\underline{w}^\Delta(p) = \max_k \left\{\astrut{2.5}
(1-\delta)[(1-k)s + k m(p)] + \delta {\cal E}^\Delta_{(N-1)\underline{\kappa}(p)+k} \underline{w}^\Delta(p)
\right\}.
\end{equation}
That is, $\underline{w}^\Delta$ is the value from the best response to all other players following $\underline{\kappa}$.
A unilateral deviation from $\overline{\kappa}$ in the good state is punished by a transition to the bad state in the following period; otherwise, we remain in the good state.
If there is a unilateral deviation from $\underline{\kappa}$ in the bad state, we remain in the bad state.
Otherwise, a draw of the public randomization device determines whether the state next period is good or bad; this probability is chosen such that the expected payoff is indeed given by $\underline{w}^\Delta$ (see below).
With continuation payoffs given by $\overline{w}^\Delta$ and $\underline{w}^\Delta$, the common action $\kappa \in \{0,1\}$
is incentive compatible at a belief $p$ if, and only if,
\begin{eqnarray} \label{eq:SSE-incentive-constraint}
\lefteqn{(1-\delta)[(1-\kappa)s + \kappa m(p)] + \delta {\cal E}^\Delta_{N\kappa} \overline{w}^\Delta(p)} \\
& \geq & \astrut{3} (1-\delta)[\kappa s + (1-\kappa) m(p)] + \delta {\cal E}^\Delta_{(N-1)\kappa + 1 - \kappa} \underline{w}^\Delta(p). \nonumber
\end{eqnarray}
Therefore, the functions $\overline{\kappa}$ and $\underline{\kappa}$ define an SSE if, and only if, \eref{eq:SSE-incentive-constraint} holds for $\kappa = \overline{\kappa}(p)$ and $\kappa = \underline{\kappa}(p)$ at all $p$.
The probability $\eta^\Delta(p)$ of a transition from the bad to the good state in the absence of a unilateral deviation from $\underline{\kappa}(p)$ is pinned down by the requirement that
\begin{eqnarray} \label{eq:SSE-randomization}
\underline{w}^\Delta(p) & = & (1-\delta)[(1-\underline{\kappa}(p))s + \underline{\kappa}(p) m(p)] \\
& & \astrut{3.5} \mbox{} + \delta \left\{ \astrut{2.5}
\eta^\Delta(p)\, {\cal E}^\Delta_{N\underline{\kappa}(p)} \overline{w}^\Delta(p)
+ [1-\eta^\Delta(p)]\, {\cal E}^\Delta_{N\underline{\kappa}(p)} \underline{w}^\Delta(p)
\right\}. \nonumber
\end{eqnarray}
If $k = \underline{\kappa}(p)$ is optimal in \eref{eq:SSE-bad-state}, we simply set $\eta^\Delta(p) = 0$.
Otherwise, \eref{eq:SSE-bad-state} and \eref{eq:SSE-incentive-constraint} imply
$$
\delta {\cal E}^\Delta_{N\underline{\kappa}(p)} \overline{w}^\Delta(p)
\geq \underline{w}^\Delta(p) - (1-\delta)[(1-\underline{\kappa}(p))s + \underline{\kappa}(p) m(p)]
> \delta {\cal E}^\Delta_{N\underline{\kappa}(p)} \underline{w}^\Delta(p),
$$
so \eref{eq:SSE-randomization} holds with
$$\eta^\Delta(p)
= \frac{\underline{w}^\Delta(p) - (1-\delta)[(1-\underline{\kappa}(p))s + \underline{\kappa}(p) m(p)] - \delta {\cal E}^\Delta_{N\underline{\kappa}(p)} \underline{w}^\Delta(p)}
{\delta {\cal E}^\Delta_{N\underline{\kappa}(p)} \overline{w}^\Delta(p) - \delta {\cal E}^\Delta_{N\underline{\kappa}(p)} \underline{w}^\Delta(p)}
\in (0,1].$$
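The formula for $\eta^\Delta(p)$ is a direct transcription of the randomization requirement \eref{eq:SSE-randomization}. As a quick consistency check, the sketch below (with purely hypothetical inputs standing in for the one-period flow payoff and the continuation expectations) verifies that plugging $\eta^\Delta(p)$ back into that requirement recovers $\underline{w}^\Delta(p)$.

```python
# Sketch: the transition probability eta^Delta(p) solving the randomization
# requirement. The inputs below are hypothetical numbers; in the equilibrium
# construction they come from the functions w_bar and w_low.

def eta(w_low_p, flow_low, E_wbar, E_wlow, delta):
    """eta such that w_low_p = (1-delta)*flow_low
                               + delta*(eta*E_wbar + (1-eta)*E_wlow)."""
    num = w_low_p - (1 - delta) * flow_low - delta * E_wlow
    den = delta * (E_wbar - E_wlow)
    return num / den

delta, flow_low = 0.95, 1.0
E_wbar, E_wlow = 1.6, 1.2
w_low_p = 1.25  # chosen to lie between the two bracketing expressions above
e = eta(w_low_p, flow_low, E_wbar, E_wlow, delta)
reconstructed = (1 - delta) * flow_low + delta * (e * E_wbar + (1 - e) * E_wlow)
```

By construction, `reconstructed` equals `w_low_p`, and the bracketing inequalities guarantee `e` lies in $(0,1]$.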
It remains to specify $\overline{\kappa}$ and $\underline{\kappa}$.
Let
$$
p^m = \frac{s-m_0}{m_1 - m_0}\,.
$$
As $m(p^m) = s$, this is the belief at which a myopic agent is indifferent between the two arms.
It is straightforward to verify that $p_1^* < p^m$.
Fixing $\underline{p} \in (p_N^*,p_1^*)$ and $\bar{p} \in (p^m,1)$, we let
$\overline{\kappa}(p)=\mathbbm{1}_{p > \underline{p}}$
and $\underline{\kappa}(p)=\mathbbm{1}_{p > \bar{p}}$.\footnote{
$\mathbbm{1}_A$ denotes the indicator function of the event $A$.}
Note that punishment and reward strategies coincide outside of $(\underline{p},\bar{p})$.
Also note that $\bar{p}>p^m$, \textit{i.e.}, in the punishment state,
less experimentation is enforced than would be myopically optimal.
In fact, for Proposition \ref{prop:limit-Brownian}, we are letting $\bar{p}\rightarrow 1$.
This way, we can, in a strongly symmetric way, exact the harshest conceivable punishment, which, for any given player, entails a vanishing information spill-over from the other players.
Indeed, this construction, in the limit, pushes players' payoffs down to their single-agent value $V_1^*$.
As players cannot be prevented from best-responding to the vanishing information spill-overs, it is not possible to push their continuation values below $V_1^*$.
\begin{prop} \label{prop:SSE-Brownian}
For $\rho > 0$, there are beliefs $p^\flat \in (p_N^*, p_1^*)$ and $p^\sharp \in (p^m,1)$ such that for all $\underline{p} \in (p_N^*,p^\flat)$ and $\bar{p} \in (p^\sharp,1)$,
there exists $\bar{\Delta} > 0$ such that for all $\Delta \in (0,\bar{\Delta})$, the two-state automaton with functions $\overline{\kappa}$ and $\underline{\kappa}$ defines an SSE of the experimentation game with period length $\Delta$.
\end{prop}
The proof consists of verifying that, for a sufficiently small $\Delta$, the actions $\overline{\kappa}(p)$ and $\underline{\kappa}(p)$ satisfy the incentive-compatibility constraint \eref{eq:SSE-incentive-constraint} at all $p$.
First, we find $\varepsilon>0$ small enough that $\underline{w}^\Delta=s$ in a neighborhood of $\underline{p}+\varepsilon$.
The payoff functions $\overline{w}^\Delta$ and $\underline{w}^\Delta$ resulting from the two-state automaton are
then bounded away from one another on $[\underline{p}+\varepsilon,\bar{p}]$ for small $\Delta$.
In this range, therefore, the difference in expected continuation values across states does not vanish as $\Delta$ tends to 0, whereas the difference in current expected payoffs across actions is of order $\Delta$, rendering deviations unattractive for small enough $\Delta$.
On $(\bar{p},1]$ and $[0,\underline{p}]$, $\overline{\kappa}$ and $\underline{\kappa}$ both prescribe the myopically optimal action.
Given that continuation payoffs are weakly higher in the good state, it is easy to show that there are no incentives to deviate on these intervals.
For beliefs in $(\underline{p},\underline{p}+\varepsilon)$, $\underline{\kappa}$ again prescribes the myopically optimal action.
The proof of incentive compatibility of $\overline{\kappa}$ on this interval crucially relies on the fact that, for small $\Delta$, $\overline{w}^\Delta$ is bounded below by $V_{N,\underline{p}}$\hspace{0.05em}, which has a convex kink at $\underline{p}$.
This, together with the fact that, conditional on no lump sum arriving, the log-likelihood ratio of posterior beliefs is Gaussian, allows us to demonstrate the existence of some constant $C_1 > 0$ such that,
for $\Delta$ small enough,
$
{\cal E}^\Delta_N \overline{w}^\Delta(p) \geq s + C_1 \Delta^{\frac{3}{4}}
$
to the immediate right of $\underline{p}$, whereas
${\cal E}^\Delta_{N-1}\underline{w}^\Delta(p) \leq s + C_0 \Delta$
with some constant $C_0>0$.
For small $\Delta$, therefore, the linearly vanishing current-payoff advantage of the safe over the risky arm is dominated by the incentives provided through continuation payoffs.
The next result is the last remaining step in the proof of Theorem \ref{thm} for the case $\rho>0$; it
essentially follows from letting $\underline{p}\rightarrow p_N^*$ and $\bar{p}\rightarrow 1$ in Proposition \ref{prop:SSE-Brownian}.
\begin{prop} \label{prop:limit-Brownian} For $\rho > 0$,
$\lim_{\Delta \rightarrow 0} \Wsup_{\rm SSE} = V_N^*$
and
$\lim_{\Delta \rightarrow 0} \Winf_{\rm \, SSE} = V_1^* $, uniformly on $[0,1]$.
\end{prop}
\subsection{Pure Poisson Learning ($\rho = 0$)
\label{sec:construction-Poisson}}
Let $\rho = 0$, and take $\hat{p}$ as in
part (iii)
of Theorem \ref{thm}.
\begin{prop} \label{prop:thresh}
Let $\rho = 0$.
For any $\varepsilon > 0$, there is a $\Delta_\varepsilon > 0$ such that for all $\Delta \in (0,\Delta_\varepsilon)$, the set of beliefs at which experimentation can be sustained in a PBE of the discrete game with period length $\Delta$ is contained in the interval $(\hat{p}-\varepsilon,1]$.
In particular,
$\limsup_{\Delta \rightarrow 0} \Wsup_{\rm PBE}(p) \leq V_{N,\phat}(p)$.
\end{prop}
For a heuristic explanation of the logic behind this result, consider a sequence of pure-strategy PBEs for vanishing $\Delta$ such that the infimum of the set of beliefs at which at least one player experiments converges to some limit $\tilde{p}$.
Selecting a subsequence of $\Delta$s and relabeling players, if necessary, we can assume without loss of generality that players $1,\ldots,L$ play risky immediately to the right of $\tilde{p}$, while players $L+1,\ldots,N$ play safe.
In the limit, players' individual continuation payoffs are bounded below by the single-agent value function $V_1^*$ and cannot sum to more than $NV_{N,\plow}$, so the sum of the continuation payoffs of players $1, \ldots, L$ is bounded above by $NV_{N,\plow}-(N-L)V_1^*$.
Averaging these players' incentive-compatibility constraints thus yields
$$
L\lambda(\tilde{p}) \left[ \frac{N V_{N,\plow}(j(\tilde{p}))-(N-L)V_1^*(j(\tilde{p}))}{L}- s \right] - rc(\tilde{p})
\geq (L-1)\lambda(\tilde{p}) \left[ V_1^*(j(\tilde{p}))- s \right].
$$
Simplifying the left-hand side, adding $(N-L)\lambda(\tilde{p}) \left[ V_1^*(j(\tilde{p}))- s \right]$ to both sides and re-arranging, we obtain
$$
N\lambda(\tilde{p}) \left[ V_{N,\plow}(j(\tilde{p}))- s \right] - rc(\tilde{p})
\geq (N-1)\lambda(\tilde{p}) \left[ V_1^*(j(\tilde{p}))- s \right],
$$
which in turn implies $\tilde{p} \geq \hat{p}$, as we show in Lemma \ref{lem:thresh} in the appendix.
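For completeness, the simplification and rearrangement just described amount to the following elementary algebra (a sketch; notation as in the two displays above). Adding $(N-L)\lambda(\tilde{p})\bigl[V_1^*(j(\tilde{p})) - s\bigr]$ to both sides of the first inequality,

```latex
\begin{align*}
\mbox{LHS} &= \lambda(\tilde{p})\bigl[ N V_{N,\plow}(j(\tilde{p})) - (N-L) V_1^*(j(\tilde{p})) - Ls \bigr] - rc(\tilde{p})
             + (N-L)\lambda(\tilde{p})\bigl[ V_1^*(j(\tilde{p})) - s \bigr] \\
           &= N\lambda(\tilde{p})\bigl[ V_{N,\plow}(j(\tilde{p})) - s \bigr] - rc(\tilde{p}), \\
\mbox{RHS} &= (L-1+N-L)\,\lambda(\tilde{p})\bigl[ V_1^*(j(\tilde{p})) - s \bigr]
            = (N-1)\lambda(\tilde{p})\bigl[ V_1^*(j(\tilde{p})) - s \bigr],
\end{align*}
```

which is exactly the second display.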
The proof of Proposition \ref{prop:thresh} makes this heuristic argument rigorous and extends it to mixed equilibria.
For non-revealing jumps ($\lambda_0 > 0$), the construction of SSEs that achieve the above bounds in the limit relies on the same two-state automaton as in Proposition \ref{prop:SSE-Brownian}, the only difference being that the threshold $\underline{p}$ is now restricted to exceed $\hat{p}$.
\begin{prop} \label{prop:SSE-Poisson}
Let $\rho = 0$ and $\lambda_0 >0$.
There are beliefs $p^\flat \in (\hat{p}, p_1^*)$ and $p^\sharp \in (p^m,1)$ such that for all $\underline{p} \in (\hat{p},p^\flat)$ and $\bar{p} \in (p^\sharp,1)$,
there exists $\bar{\Delta} > 0$ such that for all $\Delta \in (0,\bar{\Delta})$, the two-state automaton with functions $\overline{\kappa}$ and $\underline{\kappa}$ defines an SSE of the experimentation game with period length $\Delta$.
\end{prop}
The strategy for the proof of this proposition is the same as that of Proposition \ref{prop:SSE-Brownian}, except for the belief region to the immediate right of $\underline{p}$,
where incentives are now provided through terms of first order in $\Delta$,
akin to those in equation (3).
In the case $\lambda_0>0$, we are able to provide incentives in the
potentially last round of experimentation by threatening punishment
\emph{conditional on there being a success} (that is, a successful
experiment). This option is no longer available in the case of
$\lambda_0=0$. Indeed, any success now takes us to a posterior of one, so that everyone plays risky forever after.
This means that, irrespective of whether a success occurs in that round, continuation strategies are independent of past behavior, conditional on the players' belief.
This raises the possibility of unravelling.
If incentives just above the candidate threshold at which players give up on the risky arm cannot be provided, can this threshold be lower than in the MPE?
To settle whether unravelling occurs requires us to study the discrete game in considerable detail.
We start by noting that
for $\lambda_0 = 0$,
we can strengthen Proposition \ref{prop:thresh} as follows:
there is no PBE with any experimentation at beliefs below the discrete-time single-agent cutoff $p_1^\Delta = \inf\{p\!: W_1^\Delta(p) > s\}$; see Heidhues et al. (2015).\footnote{
In particular, this excludes the possibility that the asymmetric MPE of Keller et al.\ (2005)\ with an infinite number of switches between the two arms below $p_1^*$ can be approximated in the discrete game.}
The highest average payoff that can be hoped for, then, involves all players experimenting above $p_1^\Delta$.
Unlike in the case of $\lambda_0>0$ (see Proposition \ref{prop:SSE-Poisson}), an explicit description of a two-state automaton implementing SSEs whose payoffs converge to the obvious upper and lower bounds appears elusive.
This is partly because equilibrium strategies are, as it turns out, necessarily mixed for beliefs that are arbitrarily close to (but above) $p_1^\Delta$.
The proof of the next proposition establishes that the length of the interval of beliefs for which this is the case vanishes as $\Delta \rightarrow 0$.
In particular, for higher beliefs (except for beliefs arbitrarily close to 1, when playing risky is strictly dominant), both pure actions can be enforced in some equilibrium.
\begin{prop}\label{prop:SSE-exponential}
Let $\rho = 0$ and $\lambda_0 = 0$.
For any beliefs $\underline{p}$ and $\bar{p}$ such that $p_1^* < \underline{p} < p^m < \bar{p} < 1$,
there exists a $\bar{\Delta} > 0$ such that for all $\Delta \in (0,\bar{\Delta})$, there exists
\begin{itemize}
\vspace{-2ex}
\item[-] an SSE in which, starting from a prior above $\underline{p}$, all players
use the risky arm
on the path of play as long as the belief remains above $\underline{p}$ and use
the safe arm for beliefs below $p_1^*$; and
\vspace{-1ex}
\item[-] an SSE in which, given a prior between $\underline{p}$ and $\bar{p}$, the players' payoff is no larger than their best-reply payoff against opponents who use the risky arm
if, and only if, the belief lies in $[p_1^*,\underline{p}] \cup [\bar{p},1]$.
\end{itemize}
\end{prop}
While this is somewhat weaker than Proposition \ref{prop:SSE-Poisson}, its implications for limit payoffs as $\Delta \rightarrow 0$ are the same.
Intuitively, given that the interval $[p_1^*,\underline{p}]$ can be chosen arbitrarily small (actually, of the order $\Delta$, as the proof establishes), its impact on equilibrium payoffs starting from priors above $\underline{p}$ is of order $\Delta$.
This suggests that for the equilibria whose existence is stated in Proposition \ref{prop:SSE-exponential}, the payoff converges to the payoff from all players experimenting above $p_1^*$ and to the best-reply payoff against none of the opponents experimenting.
Indeed, we have the following result, covering both inconclusive and conclusive jumps.
\begin{prop} \label{prop:limit-Poisson} For $\rho = 0$,
$\lim_{\Delta \rightarrow 0} \Wsup_{\rm SSE} = V_{N,\phat}$
and
$\lim_{\Delta \rightarrow 0} \Winf_{\rm \, SSE} = V_1^* $, uniformly on $[0,1]$.
\end{prop}
\section{Functional Equations for SSE Payoffs}
\label{sec:functional-equations}
While it is possible, at least asymptotically, to derive explicit solutions for the equilibrium payoff sets of interest, a characterization in terms of optimality equations, which defines the correspondence of SSE payoffs, can be obtained already in the discrete game. As discussed in the introduction, these generalize the familiar equation characterizing the value function of the symmetric MPE. Instead of a single (HJB) equation, the characterization of SSE payoffs involves two coupled functional equations, whose solution delivers the highest and lowest equilibrium payoffs. Proposition \ref{prop:chara-discrete} states this for the discrete game, while Proposition \ref{prop:chara-continuous} gives the continuous-time limit. As these propositions do not rely heavily on the specific structure of our game, we believe that they might be useful for analyzing SSE payoffs under more general processes or in other stochastic games.
Fix $\Delta > 0$.
For $p \in [0,1]$, let $\overline{W}^\Delta(p)$ and $\underline{W}^\Delta(p)$ denote the supremum
and infimum, respectively, of the set of payoffs over
\emph{pure-strategy} SSEs, given prior belief $p$.\footnote{For the existence of various types of equilibria in discrete-time stochastic games, see Mertens, Sorin and Zamir (2015), Chapter 7.}
If such an equilibrium exists,
these extrema are achieved,
and $\overline{W}^\Delta(p) \ge \underline{W}^\Delta(p)$.
For $\rho > 0$ or $\lambda_0 > 0$, we have shown in Sections \ref{sec:construction-Brownian}--\ref{sec:construction-Poisson} that in the limit as $\Delta \rightarrow 0$, the best and worst average payoffs (per player) over all PBEs are achieved by SSEs in pure strategies.
The following result characterizes $\overline{W}^\Delta$ and $\underline{W}^\Delta$ via a pair of coupled functional equations.
\begin{prop}\label{prop:chara-discrete}
Suppose that the discrete game with time increment $\Delta > 0$ admits a pure-strategy SSE for any prior belief.
Then, the pair of functions $(\overline{w},\underline{w}) = (\overline{W}^\Delta, \underline{W}^\Delta)$ solves the functional equations
\begin{eqnarray}
\overline{w}(p) & = & \max_{\kappa\in{\cal K}(p;\overline{w},\underline{w})} \left\{ \astrut{2.5}
(1-\delta)[(1-\kappa)s + \kappa m(p)] + \delta {\cal E}^\Delta_{N\kappa}\overline{w}(p)
\right\}, \label{eq:wbar} \\
\underline{w}(p) & = & \min_{\kappa\in{\cal K}(p;\overline{w},\underline{w})} \max_{k\in\{0,1\}} \left\{ \astrut{2.5}
(1-\delta)[(1-k)s + k m(p)] + \delta {\cal E}^\Delta_{(N-1)\kappa+k} \underline{w}(p) \right\}, \label{eq:wlow}
\end{eqnarray}
where ${\cal K}(p;\overline{w},\underline{w}) \subseteq \{0, 1\}$ denotes the set of all $\kappa$ such that
\begin{eqnarray}
\lefteqn{(1-\delta)[(1-\kappa)s + \kappa m(p)] + \delta {\cal E}^\Delta_{N\kappa}\overline{w}(p)} \label{eq:kappa} \\
& \geq & \astrut{3} \max_{k\in\{0,1\}} \left\{ \astrut{2.5}
(1-\delta)[(1-k)s + k m(p)] + \delta {\cal E}^\Delta_{(N-1)\kappa+k} \underline{w}(p) \right\}. \nonumber
\end{eqnarray}
Moreover, $\underline{W}^\Delta \leq \underline{w} \leq \overline{w} \leq \overline{W}^\Delta$ for any solution $(\overline{w},\underline{w})$ of \eref{eq:wbar}--\eref{eq:kappa}.
\end{prop}
This result relies on arguments that are familiar from Cronshaw and Luenberger (1994).
We briefly sketch them here.
The above equations can be understood as follows. The ideal condition for a given (symmetric) action profile to be incentive compatible is that if each player conforms to it, the continuation payoff is the highest possible, while a deviation triggers the lowest possible continuation payoff. These actions are precisely the elements of ${\cal K}(p;\wbar,\wlow)$, as defined by equation \eref{eq:kappa}. Given this set of actions, equation \eqref{eq:wlow} provides the recursion that characterizes the constrained minmax payoff under the assumption that if a player were to deviate to his myopic best reply to the constrained minmax action profile, the punishment would be restarted next period, resulting in a minimum continuation payoff. Similarly, equation \eqref{eq:wbar} yields the highest payoff under this constraint, but here, playing the best action (within the set) is on the equilibrium path.
Note that in \textit{any} SSE, given $p$, the action $\kappa(p)$ must be an element of $\mathcal{K}(p;\overline{W}^\Delta,\underline{W}^\Delta)$. This is because the left-hand side of \eqref{eq:kappa} with $\overline{w}=\overline{W}^\Delta$ is an upper bound on the continuation payoff if no player deviates, and the right-hand side with $\underline{w}=\underline{W}^\Delta$ a lower bound on the continuation payoff after a unilateral deviation. Consider the equilibrium that achieves $\overline{W}^\Delta$. Then,
\[
\overline{W}^\Delta(p) \le \max_{\kappa \in \mathcal{K}(p;\overline{W}^\Delta,\underline{W}^\Delta)} \left\{ \astrut{2.5}
(1-\delta)[(1-\kappa)s + \kappa m(p)] + \delta {\cal E}^\Delta_{N\kappa}\overline{W}^\Delta(p)
\right\},
\]
as the action played must be in $\mathcal{K}(p;\overline{W}^\Delta,\underline{W}^\Delta)$, and the continuation payoff is at most given by $\overline{W}^\Delta$. Similarly, $\underline{W}^\Delta$ must satisfy \eref{eq:wlow} with ``$\ge$'' instead of ``$=$.'' Suppose now that the ``$\le$'' were strict. Then, we can define a strategy profile given prior $p$ that (i) in period 0, plays the maximizer of the right-hand side, and (ii) from $t=\Delta$ onward, abides by the continuation strategy achieving $\overline{W}^\Delta(p_\Delta)$. Because the initial action is in $\mathcal{K}(p;\overline{W}^\Delta,\underline{W}^\Delta)$, this constitutes an equilibrium, and it achieves a payoff strictly larger than $\overline{W}^\Delta(p)$, a contradiction. Hence, \eref{eq:wbar} must hold with equality for $\overline{W}^\Delta$. The same reasoning applies to $\underline{W}^\Delta$ and \eref{eq:wlow}.
Fix a pair $(\overline{w},\underline{w})$ that satisfies
\eref{eq:wbar}--\eref{eq:kappa}. Note that this implies $\underline{w} \le
\overline{w}$. Given such a pair and any prior $p$, we specify two SSEs whose
payoffs are $\overline{w}$ and $\underline{w}$, respectively. It then follows that
$\underline{W}^\Delta \leq \underline{w} \leq \overline{w} \leq \overline{W}^\Delta$. Let $\overline{\kappa}$ and $\underline{\kappa}$ denote a
selection of the maximum and minimum of
\eref{eq:wbar}--\eref{eq:wlow}. The equilibrium strategies are
described by a two-state automaton, whose states are referred to as
``good'' or ``bad.'' The difference between the two equilibria lies in
the initial state: $\overline{w}$ is achieved when the initial state is good,
$\underline{w}$ is achieved when it is bad.
In the good state, play proceeds according to $\overline{\kappa}$; in the bad state,
it proceeds according to $\underline{\kappa}$. Transitions are exactly as in the equilibria described in Sections \ref{sec:construction-Brownian}--\ref{sec:construction-Poisson}.
This structure precludes profitable one-shot deviations in either state, so that the automaton describes equilibrium strategies, and the desired payoffs are obtained.
Figure 4 presents the result of a numerical computation of $\overline{W}^\Delta$ and $\underline{W}^\Delta$ based on Proposition \ref{prop:chara-discrete}.\footnote{
We thank Kai Echelmeyer and Martin Rumpf from the Institute for Numerical Simulation at the University of Bonn for the implementation of the underlying algorithm.
Starting from the pair of functions $(\overline{w}^0,\underline{w}^0)=(V_N^*, V_1^*)$, it computes $(\overline{w}^{k+1},\underline{w}^{k+1})$ by evaluating the right-hand sides of \eref{eq:wbar}--\eref{eq:wlow} at $(\overline{w}^k,\underline{w}^k)$.
Because of the incentive-compatibility constraint \eref{eq:SSE-incentive-constraint}, the corresponding value-iteration operator does not appear to be a contraction mapping.
While we do not have a convergence proof for this algorithm, it converged reliably for sufficiently small $\Delta$.
}
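The iteration described in the preceding footnote can be sketched as follows, here specialized to pure Poisson learning on a belief grid. This is a simplified, illustrative reimplementation, not the authors' algorithm: payoffs at off-grid posterior beliefs are linearly interpolated, the period is resolved by conditioning on the number of lump sums, we fall back to $\kappa = 0$ wherever the constraint set would be empty on the grid, and (as the footnote notes) convergence is not guaranteed in general. All parameter values are illustrative.

```python
# Illustrative value iteration for the coupled equations (eq:wbar)-(eq:wlow),
# specialized to pure Poisson learning (rho = 0, lambda0 > 0). A hedged
# sketch, not the authors' implementation.
import math
import numpy as np

r, s, h, lam1, lam0, N, Delta = 1.0, 1.0, 1.5, 1.0, 0.2, 5, 0.5
delta = math.exp(-r * Delta)
grid = np.linspace(0.0, 1.0, 201)
m = (grid * lam1 + (1 - grid) * lam0) * h  # expected flow payoff, risky arm

def expect(w, K, n_max=40):
    """E^Delta_K w on the grid, conditioning on the number of lump sums."""
    mu1, mu0 = K * lam1 * Delta, K * lam0 * Delta
    out = np.zeros_like(grid)
    for n in range(n_max + 1):
        l1 = math.exp(-mu1) * mu1 ** n / math.factorial(n)  # P(n | good)
        l0 = math.exp(-mu0) * mu0 ** n / math.factorial(n)  # P(n | bad)
        prob = grid * l1 + (1 - grid) * l0                  # predictive prob.
        post = np.where(prob > 0, grid * l1 / np.maximum(prob, 1e-300), grid)
        out += prob * np.interp(post, grid, w)
    return out

wbar = np.full_like(grid, max(s, lam1 * h))  # optimistic initial guess
wlow = np.full_like(grid, s)                 # pessimistic initial guess
for _ in range(120):
    # conformity payoffs with "reward" continuation wbar, kappa in {0, 1}
    cb0 = (1 - delta) * s + delta * expect(wbar, 0)
    cb1 = (1 - delta) * m + delta * expect(wbar, N)
    # best-reply payoffs with "punishment" continuation wlow
    dv0 = np.maximum((1 - delta) * s + delta * expect(wlow, 0),
                     (1 - delta) * m + delta * expect(wlow, 1))
    dv1 = np.maximum((1 - delta) * s + delta * expect(wlow, N - 1),
                     (1 - delta) * m + delta * expect(wlow, N))
    ic0 = cb0 >= dv0 - 1e-12   # kappa = 0 enforceable
    ic1 = cb1 >= dv1 - 1e-12   # kappa = 1 enforceable
    wbar = np.where(ic0 & ic1, np.maximum(cb0, cb1), np.where(ic1, cb1, cb0))
    wlow = np.where(ic0 & ic1, np.minimum(dv0, dv1), np.where(ic1, dv1, dv0))
```

At the boundary beliefs the limits are pinned down by first principles: at $p=1$ all players use the risky arm forever, so both payoffs converge to $m_1 = \lambda_1 h$, while at $p=0$ both converge to $s$.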
In between the thresholds $\underline{p}^\Delta$ and $\bar{p}^\Delta$, both risky and safe play can be sustained in an SSE; the former is chosen in the best SSE, the latter in the worst.
Only risky play can be sustained above $\bar{p}^\Delta$, and only safe play below $\underline{p}^\Delta$.
These changes in the set of enforceable actions manifest themselves in jump discontinuities of $\overline{W}^\Delta$ and $\underline{W}^\Delta$ at $\underline{p}^\Delta$ and $\bar{p}^\Delta$, respectively.
Note also that $\underline{W}^\Delta$ dips below $V_1^*$ to the immediate right of $p_1^*$.
The worst punishment can thus be harsher for positive $\Delta$ than in the frequent-action limit, and convergence of the corresponding payoff function is non-monotonic.
While the example shown is one of pure Brownian learning, these patterns also emerge when conclusive lump-sums are added to the payoff process.
\begin{figure}[h]
\centering
\begin{picture}(175.00,110.00)(0,0)
\put(10,-45){\scalebox{1.45}{\includegraphics{figure4.pdf}}}
\put(28,97){\makebox(0,0)[cc]{payoff}}
\put(135,6){\makebox(0,0)[cc]{$p$}}
\end{picture}
\begin{quote}
\caption{Payoffs $V_N^*$ (upper dashed curve), $\overline{W}^\Delta$ (upper solid curve), $\underline{W}^\Delta$ (lower solid curve) and $V^*_1$ (lower dashed curve) for $\Delta = 0.1$ and $(r,s,\sigma,\alpha_1,\alpha_0,h,\lambda_1,\lambda_0,N)=(1,2,1,2.5,1.5,0,0,0,5)$.}
\end{quote}
\end{figure}
As $\Delta$ tends to 0, equations \eref{eq:wbar}--\eref{eq:wlow} transform into differential-difference equations involving terms that are familiar from the continuous-time analysis in Section \ref{sec:continuous-time}.
A formal Taylor approximation shows that for any $\kappa \in \{0,1\}$, $K \in \{0,1,\ldots,N\}$ and a sufficiently regular function $w$ on the unit interval,
\begin{eqnarray*}
\lefteqn{(1-\delta)[(1-\kappa)s + \kappa m(p)] + \delta {\cal E}^\Delta_K w(p)} \\
& = & \astrut{3} w(p) + r \left\{ \astrut{2.5}
(1-\kappa)s + \kappa m(p) + K\, b(p,w) - w(p) \right\} \Delta
+ o(\Delta).
\end{eqnarray*}
Applying this approximation to \eref{eq:wbar}--\eref{eq:wlow}, cancelling the terms of order 0 in $\Delta$, dividing through by $\Delta$, letting $\Delta \rightarrow 0$ and recalling the notation
$c(p) = s - m(p)$
for the opportunity cost of playing risky, we obtain the coupled differential-difference equations that appear in the following result.
\begin{prop}\label{prop:chara-continuous}
Let $\rho > 0$ or $\lambda_0 > 0$.
As $\Delta \rightarrow 0$, the pair of functions $(\overline{W}^\Delta,\underline{W}^\Delta)$ converges uniformly (in $p$) to a pair of functions $(\overline{w},\underline{w})$ solving
\begin{eqnarray}
\overline{w}(p) & = & s + \max_{\kappa \in \overline{{\cal K}}(p)} \kappa \left[N b(p,\overline{w}) - c(p) \right], \label{eq:wbar-ct-b} \\
\underline{w}(p) & = & s + \min_{\kappa \in \overline{{\cal K}}(p)} (N-1) \kappa\, b(p,\underline{w}) + \max_{k\in\{0,1\}} k \left[ b(p,\underline{w}) - c(p) \right] \label{eq:wlow-ct-b},
\end{eqnarray}
where
\begin{equation} \label{eq:K-ct}
\overline{{\cal K}}(p) = \left\{
\begin{array}{lll}
\{0\} & \mbox{for} & p \leq \hat{p}, \\
\{0,1\} & \mbox{for} & \hat{p} < p < 1, \\
\{1\} & \mbox{for} & p = 1,
\end{array} \right.
\end{equation}
and
$\hat{p}$ is as in parts (ii) and (iii) of Theorem \ref{thm}.
\end{prop}
This result is an immediate consequence of what we have already established.
It follows from Sections \ref{sec:construction-Brownian}--\ref{sec:construction-Poisson} that,
except when $\rho = \lambda_0 = 0$, there exist pure-strategy SSEs and
the pair $(\overline{W}^\Delta,\underline{W}^\Delta)$ converges uniformly to $(V_{N,\phat},V_1^*)$.
It is straightforward to verify that $(\overline{w},\underline{w})=(V_{N,\phat},V_1^*)$ solves \eref{eq:wbar-ct-b}--\eref{eq:K-ct}.
First, as $V_N^*$ satisfies\footnote{
This equation follows from the HJB equation in Section \ref{sec:continuous-time}: because the maximand is linear in $K$, the continuous-time planner finds it optimal to set $K=0$ or $K=N$ at any given belief.}
$$
V_N^*(p) = s + \max_{\kappa \in \{0,1\}} \kappa \left[N b(p,V_N^*) - c(p) \right],
$$
with $N b(p,V_N^*) - c(p) > 0$ to the right of $p_N^*$, \eref{eq:wbar-ct-b} is trivially solved by $V_N^*$ whenever $\hat{p} = p_N^*$.
Second, for $\hat{p} > p_N^*$, the function $V_{N,\phat}$
satisfies
$$
V_{N,\phat}(p) = s + \mathbbm{1}_{p > \hat{p}} \left[Nb(p,V_{N,\phat}) - c(p) \right],
$$
with $N b(p,V_{N,\phat}) - c(p) > 0$ on $(\hat{p}, 1)$.
This implies that $V_{N,\phat}$ solves \eref{eq:wbar-ct-b} when $\hat{p} > p_N^*$.
Third, $V_1^*$ always solves \eref{eq:wlow-ct-b}.
In fact, as $b(p,V_1^*) \geq 0$ everywhere, we have $\min_{\kappa \in \{0,1\}} (N-1) \kappa\, b(p,V_1^*) = 0$, and \eref{eq:wlow-ct-b} with this minimum set to zero is just the HJB equation for $V_1^*$.
Note that the continuous-time functional equations \eref{eq:wbar-ct-b}--\eref{eq:wlow-ct-b} would be equally easy to solve for any \emph{arbitrary} $\hat{p}$ in \eref{eq:K-ct}.
However, only the solution with $\hat{p}$ as in Theorem \ref{thm} captures the asymptotics of our discretization of the experimentation game.
\section{Concluding Comments}
\label{sec:conclu}
We have shown that the inefficiencies arising in strategic bandit problems are driven by the solution concept, MPE.
Inefficiencies disappear entirely when news has a Brownian component or good-news events are not too informative. The best PBE payoff can then be achieved with an SSE that specifies a simple rule of conduct (unlike an MPE), namely on-path play of the cutoff type.
Of course, we do not expect the finding that SSE and PBE payoffs coincide to generalize to all symmetric stochastic games.
For instance, SSE can be restrictive when actions are imperfectly monitored, as shown by Fudenberg, Levine and Takahashi (2007). Nonetheless, SSE is a class of equilibria that both allows for ``stick-and-carrot'' incentives, as in standard discrete-time repeated (or stochastic) games, and is amenable to continuous-time optimal control techniques, as illustrated by Proposition \ref{prop:chara-continuous} (given a transversality condition that must be derived from independent considerations, such as a discretized version of the game).
The information/payoff processes that we consider are a subset of those in Cohen and Solan (2013).
There, the size of a lump-sum payoff is allowed to contain information about the state of the world, so that the arrival of a discrete payoff increment makes the belief jump to a posterior that depends on the size of the increment.
As lump sums of any size are assumed to arrive more frequently in state $\theta = 1$, however, they are always good news.
For processes with an informative Brownian component, our proof that risky play is incentive compatible immediately to the right of the threshold $p_N^*$ only exploits the properties of the posterior belief process \emph{conditional on no lump sum arriving}.
As these properties are the same whether lump sums are informative or not, asymptotic efficiency in the presence of a Brownian component should obtain more generally---and even when lump sums of certain sizes are bad news (meaning that they are less frequent in state $\theta = 1$).
When learning is driven by lump-sum payoffs only, inspection of equation \eref{eq:phat} suggests that efficiency requires that a lump sum \emph{of any size} arriving at the initial belief $p_N^*$ lead to a posterior belief no higher than $p_1^*$.
This is a constraint on the maximal amount of good news that a lump sum can carry; lump-sum sizes that carry bad news should again be innocuous here.
The ``breakdowns'' variant of pure Poisson learning in Keller and Rady (2015) is one of cost minimization.
In our payoff-maximization setting, this corresponds to letting both the safe flow payoff and the average size of lump-sum payoffs be negative with $\lambda_1 h < s < \lambda_0 h \leq 0$.
Now, $\theta = 1$ is the \emph{bad} state of the world, and the efficient and single-player solution cutoffs in continuous time satisfy $p_N^* > p_1^*$, with the stopping region lying to the \emph{right} of the cutoff in either case.
The associated value functions $V_1^*$ and $V_N^*$ solve the same HJB equations as in Section \ref{sec:continuous-time}; except for the case $\lambda_0 = 0$, they are not available in closed form, however.
Starting from $p_N^*$, the belief now always remains in the single-agent stopping region for small $\Delta$: either there is a breakdown and the belief jumps up to $j(p_N^*) > p_N^* > p_1^*$, or there is no breakdown and the belief slides down to somewhere close to $p_N^*$, and hence still above $p_1^*$.
This means that the harshest possible punishment, consisting of all other players playing safe forever, can be meted out to any potential deviator, whether there is a breakdown or not.
Thus, we conjecture that asymptotic efficiency also obtains in this framework.
The intricacies of our proofs derive to some extent from specific features of the benchmark models in the literature on strategic bandit problems: these are games of informational externalities only, and safe play halts learning.
In many applications, payoff externalities are present as well, as might be exogenous background learning. Extraneous instruments that players might have at their disposal to provide incentives will certainly facilitate cooperation, and thus efficiency. Similarly, background learning presumably helps with efficiency, as a deviating player can never be too sure ``what the future holds''.
Nonetheless, such changes to the model would blur our point that informational externalities by themselves suffice for first-best to be achievable.
While the environment (perfect monitoring and lack of commitment) called for a discretization of the game, and a meticulous analysis of the convergence of payoffs and strategies as the mesh vanishes, one might wonder whether there is a shortcut that would allow the analysis to be carried out directly in continuous time.
For the case of pure Poisson learning, an attempt to describe continuous-time strategies which achieve the extremal payoffs can be found in H\"{o}rner et al. (2014, Appendix A).
It involves an independent ``Poisson clock'' that determines the random times at which play reverts to the normal state of the automaton.
However, we do not believe that there is an easy way to determine optimality of such a strategy profile, as the key boundary condition that determines how much experimentation takes place (independent of whether efficiency obtains) seems difficult to identify in continuous time; and indeed, as discussed in footnote \ref{fn:discretization}, we cannot rule out that other discretizations might yield other boundary conditions.
Finally, our study is limited to solving for the best---and worst---average equilibrium payoff across players. As we have stressed, the symmetry in the strategy profiles that achieve these payoffs is a result, not an assumption. Nonetheless, it would be interesting to characterize the entire equilibrium payoff set, especially in view of a potential generalization to asymmetric games. For instance, what is the equilibrium that maximizes, say, player 1's payoff? A careful analysis of this problem would take us too far afield, but we note that the findings of this paper provide a foundation for it, as we identify the minmax value of the game, and equilibrium strategies that support it. Since the game has observable actions, this is the punishment that should be used to support any Pareto efficient equilibrium, leaving us with the task of identifying the equilibrium paths of such equilibria.
\AppendixIn
\setcounter{equation}{0}
\section{Auxiliary Results} \label{app:auxiliary}
\subsection{Evolution of Beliefs} \label{app:beliefs}
For the description of the evolution of beliefs, it is convenient to work with the log odds ratio
$$
\ell_t = \ln \frac {p_t}{1-p_t}\,.
$$
Suppose that starting from $\ell_0=\ell$, the players use the fixed action profile $(k_1,\ldots,k_N) \in \{0,1\}^N$.
By Peskir and Shiryayev (2006, pp.\ 287--289 and 334--338), the log odds ratio at time $t>0$ is then
$$
\ell_t = \ell + \sum_{\{n: k_n=1\}} \left\{ \frac{\alpha_1 - \alpha_0}{\sigma^2} \left( X^n_t - \alpha_0 t - h N^n_t \right) - \left[ \frac{(\alpha_1 - \alpha_0)^2}{2 \sigma^2} + \lambda_1 - \lambda_0 \right] t + \ln \frac{\lambda_1}{\lambda_0} \, N^n_t \right\},
$$
where $X^n$ and $N^n$ are the payoff and Poisson processes, respectively, associated with player $n$'s risky arm.
The terms involving $\alpha_1, \alpha_0$ and $\sigma$ capture learning from the continuous component, $X^n_t - h N^n_t$, of the payoff process, with higher realizations making the players more optimistic.
The terms involving $\lambda_1$ and $\lambda_0$ capture learning from lump-sum payoffs, with the players becoming more pessimistic on average as long as no lump sum arrives,
and each arrival increasing the log odds ratio by the fixed increment $\ln (\lambda_1/\lambda_0)$.\footnote{
Here, $\lambda_1/\lambda_0$ is understood to be 1 when $\lambda_1 = \lambda_0 = 0$.
When $\lambda_1 > \lambda_0 = 0$, we have $\ell_t = \infty$ and $p_t = 1$ from the arrival time of the first lump-sum on.}
Under the probability measure $\mathbb{P}_\theta$ associated with state $\theta \in \{0,1\}$,
$X^n_t - \alpha_0 t - h N^n_t$ is Gaussian with mean $(\alpha_\theta-\alpha_0) t$ and variance $\sigma^2 t$,
so that
$\sum_{\{n: k_n=1\}} (\alpha_1 - \alpha_0) \sigma^{-2} \left( X^n_t - \alpha_0 t - h N^n_t \right)$
is Gaussian with mean $K (\alpha_1 - \alpha_0) (\alpha_\theta-\alpha_0) \sigma^{-2} t$ and variance $K \rho t$,
where
$K = \sum_{n=1}^{N} k_n$ and $\rho = (\alpha_1-\alpha_0)^2 \sigma^{-2}$.
Conditional on the event that $\sum_{\{n: k_n=1\}} N^n_t = J$, therefore, $\ell_t$ is normally distributed with mean
$\ell - K \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) t + J \ln (\lambda_1/\lambda_0)$
and variance $K \rho t$ under $\mathbb{P}_1$,
and normally distributed with mean
$\ell - K \left(\lambda_1 - \lambda_0 + \frac{\rho}{2}\right) t + J \ln (\lambda_1/\lambda_0)$
and variance $K \rho t$ under $\mathbb{P}_0$.
Finally, the probability under measure $\mathbb{P}_\theta$ that $\sum_{\{n: k_n=1\}} N^n_t = J$ equals
$\frac{(K \lambda_\theta t)^J}{J!} e^{-K \lambda_\theta t}$
by the sum property of the Poisson distribution.
Taken together, these facts make it possible to explicitly compute the distribution of
$$
p_t = \frac{e^{\ell_t}}{1+e^{\ell_t}}
$$
under the players' measure $\mathbb{P}_p = p \mathbb{P}_1 +(1-p) \mathbb{P}_0$.
As this explicit representation is not needed in what follows, we omit it here.
Instead, we turn to the characterization of infinitesimal changes of $p_t$, once more assuming a fixed action profile with $K$ players using the risky arm.
Arguing as in Cohen and Solan (2013, Section 3.3), one shows that, with respect to the players' information filtration, the process of posterior beliefs is a Markov process whose infinitesimal generator ${\cal L}^K$ acts as follows on real-valued functions $v$ of class $C^2$ on the open unit interval:
$$
{\cal L}^K v(p)
= K \left\{ \astrut{3}
\frac{\rho}{2} p^2(1-p)^2 v''(p)
- (\lambda_1-\lambda_0) p(1-p) v'(p)
+ \lambda(p) \left[ v(j(p)) - v(p) \right]
\right\}.
$$
In particular, instantaneous changes in beliefs exhibit linearity in $K$ in the sense that ${\cal L}^K = K {\cal L}^1$.
By the very nature of Bayesian updating, finally, the process of posterior beliefs is a martingale with respect to the players' information filtration.
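As an informal consistency check, the martingale property can be verified by simulation. The following minimal sketch (pure Brownian learning, $\lambda_1 = \lambda_0 = 0$; all parameter values are illustrative assumptions, not taken from the text) draws the state, simulates the terminal log odds ratio from its Gaussian law derived above, and confirms that the terminal belief averages to the prior:

```python
import numpy as np

# Monte Carlo sketch of the belief martingale property under pure
# Brownian learning (lambda_1 = lambda_0 = 0). Parameter values are
# illustrative assumptions.
rng = np.random.default_rng(0)
p0, rho, K, T, n = 0.6, 1.0, 3, 0.5, 400_000

l0 = np.log(p0 / (1 - p0))              # prior log odds ratio
theta = rng.random(n) < p0              # draw the state theta ~ Bernoulli(p0)

# Under P_theta, l_T is Gaussian with mean l0 + K*rho*T*(1{theta=1} - 1/2)
# and variance K*rho*T (cf. the conditional laws above with J = 0).
l_T = l0 + K * rho * T * (theta - 0.5) \
    + np.sqrt(K * rho * T) * rng.standard_normal(n)
p_T = 1.0 / (1.0 + np.exp(-l_T))        # terminal posterior belief

assert abs(p_T.mean() - p0) < 0.005     # E_p[p_T] = p_0 up to sampling error
```

Repeating the experiment for other $(K,T)$ leaves the sample mean at $p_0$, in line with the fact that Bayesian posteriors are martingales.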
\subsection{Payoff Functions}\label{app:payoffs}
Our first auxiliary result concerns the function $u(\cdot; \mu_N)$ defined in Section \ref{sec:continuous-time}.
\begin{lem} \label{lem:expectation-u}
$\delta {\cal E}^\Delta_K u(\cdot;\mu_N)(p) = \delta^{1-\frac{K}{N}} u(p;\mu_N)$
for all $\Delta > 0$, $K \in \{1,\ldots,N\}$ and $p \in (0,1]$.
\end{lem}
\proof{
We simplify notation by writing $u$ for $u(\cdot; \mu_N)$.
Consider the process $(p_t)$ of posterior beliefs in continuous time when $p_0 = p > 0$ and $K$ players use the risky arm.
By Dynkin's formula,
\begin{eqnarray*}
\mathbb{E} \left[e^{-rK\Delta/N} u(p_\Delta)\right]
& = & u(p) + \mathbb{E} \left[ \int_0^\Delta e^{-rKt/N} \left\{ {\cal L}^K u(p_t) - \frac{rK}{N} u(p_t) \right\} dt \right] \\
& = & u(p) + K\, \mathbb{E} \left[ \int_0^\Delta e^{-rKt/N} \left\{ {\cal L}^1 u(p_t) - \frac{r}{N} u(p_t) \right\} dt \right] \\
& = & u(p),
\end{eqnarray*}
where the last equality follows from the fact that ${\cal L}^1 u = r u/N$ on $(0,1]$.\footnote{
To verify this identity, note that
$$
u'(p) = -\frac{\mu_N + p}{p (1-p)} u(p), \quad u''(p) = \frac{\mu_N (\mu_N+1)}{p^2 (1-p)^2} u(p), \quad u(j(p)) = \frac{\lambda_0}{\lambda(p)} \left( \frac{\lambda_0}{\lambda_1} \right)^{\mu_N} u(p),
$$
and use the equation defining $\mu_N$.}
Thus, $\delta^{K/N} {\cal E}^\Delta_K u(p) = u(p)$.
}
We further note that ${\cal E}^\Delta_K m(p) = m(p)$ for all $K$ by the martingale property of beliefs and the linearity of $m$ in $p$.
These properties are used repeatedly in what follows.
Their first application is in the proof of uniform convergence of the discrete-time single-agent value function to its continuous-time counterpart.
Let $(\mathcal{W},\|\cdot\|)$ be the Banach space of bounded real-valued functions on $[0,1]$ equipped with the supremum norm.
Given $\Delta > 0$, and any $w \in \mathcal{W}$, define a function $T_1^\Delta w \in \mathcal{W}$ by
$$
T_1^\Delta w(p)
= \max\left\{\astrut{2.5}
(1-\delta) m(p) + \delta {\cal E}^\Delta_1 w(p),\
(1-\delta) s + \delta w(p)
\right\}.
$$
The operator $T_1^\Delta$ satisfies Blackwell's sufficient conditions for being a contraction mapping with modulus $\delta$ on $(\mathcal{W},\|\cdot\|)$: monotonicity ($v \leq w$ implies $T_1^\Delta v \leq T_1^\Delta w$) and discounting ($T_1^\Delta(w + c) = T_1^\Delta w + \delta c$ for any real number $c$).
By the contraction mapping theorem, $T_1^\Delta$ has a unique fixed point in $\mathcal{W}$; this is the value function $W_1^\Delta$ of an agent experimenting in isolation.
The corresponding continuous-time value function is $V_1^*$ as introduced in Section \ref{sec:continuous-time}.
As any discrete-time strategy is feasible in continuous time, we trivially have $W_1^\Delta \leq V_1^*$.
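For intuition, the fixed point $W_1^\Delta$ can be computed by iterating the contraction $T_1^\Delta$ on a belief grid. The sketch below specializes to conclusive good-news Poisson learning ($\rho = 0$, $\lambda_0 = 0$, $\alpha_1 = \alpha_0 = 0$), where the one-period expectation operator has a simple closed form; the parameter values are illustrative assumptions:

```python
import numpy as np

# Value iteration for the single-agent operator T_1^Delta under
# conclusive good-news Poisson learning (rho = 0, lambda_0 = 0,
# alpha_1 = alpha_0 = 0). Parameter values are illustrative assumptions.
r, s, lam1, h, Delta = 1.0, 0.5, 1.0, 1.0, 0.05
delta = np.exp(-r * Delta)
grid = np.linspace(0.0, 1.0, 501)
m = lam1 * h * grid                          # m(p): expected risky flow payoff

def E1(w):
    """One-period expectation E^Delta_1 w(p) on the grid: an arrival
    reveals theta = 1 (belief jumps to 1 permanently); no news drives
    the belief down deterministically via Bayes' rule."""
    q_cond = np.exp(-lam1 * Delta)           # P(no arrival | theta = 1)
    no_news = grid * q_cond + 1 - grid       # P(no arrival)
    p_down = grid * q_cond / no_news         # posterior after no news
    return (1 - no_news) * w[-1] + no_news * np.interp(p_down, grid, w)

def T1(w):
    return np.maximum((1 - delta) * m + delta * E1(w),   # play risky
                      (1 - delta) * s + delta * w)       # play safe

w = np.full_like(grid, s)
for _ in range(2000):
    w = T1(w)

# T_1^Delta is a delta-contraction (Blackwell); its fixed point is
# nondecreasing with W(0) = s and W(1) = m(1), since m(1) > s.
v, u = np.sin(grid), np.cos(grid)
assert np.max(np.abs(T1(v) - T1(u))) <= delta * np.max(np.abs(v - u)) + 1e-12
assert np.all(np.diff(w) >= -1e-9)
assert abs(w[0] - s) < 1e-9 and abs(w[-1] - lam1 * h) < 1e-6
```

Successive iterates contract at rate $\delta$, so the loop above converges geometrically.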
\begin{lem} \label{lem:convergence-single-agent}
$W_1^\Delta \to V_1^*$ uniformly as $\Delta \rightarrow 0$.
\end{lem}
\proof{
A lower bound for $W_1^\Delta$ is given by the payoff function $W_*^\Delta$ of a single agent who uses the cutoff $p_1^*$ in discrete time; this function is the unique fixed point in $\mathcal{W}$ of the contraction mapping $T_*^\Delta$ defined by
\[T_*^\Delta w(p)
= \left\{ \begin{array}{lll}
(1-\delta) m(p) + \delta {\cal E}^\Delta_1 w(p)
& \mbox{if} & p > p_1^*, \\
(1-\delta) s + \delta w(p)
& \mbox{if} & p \leq p_1^*.
\end{array}\right.\]
Next, choose $\breve{p} < p_1^*$, and define $p^\natural=\frac{\breve{p}+p_1^*}{2}$ and the function $v=m+Cu(\cdot;\mu_1)+\mathbbm{1}_{[0,p^\natural]}(s-m-Cu(\cdot;\mu_1))$, where the constant $C$ is chosen so that $s=m(\breve{p})+Cu(\breve{p};\mu_1)$.
Fix $\varepsilon > 0$.
As $v$ converges uniformly to $V_1^*$ as $\breve p \to p_1^*$, we can choose $\breve p$ such that $v \geq V_1^* - \varepsilon$.
It suffices now to show that there is a $\bar \Delta > 0$ such that $T_*^\Delta v \geq v$ for $\Delta < \bar \Delta$.
In fact, the monotonicity of $T_*^\Delta$ then implies $W_*^\Delta \geq v$ and hence
$V_1^* - \varepsilon \leq v \leq W_*^\Delta \leq W_1^\Delta \leq V_1^*$
for all $\Delta < \bar \Delta$.
For $p \leq p_1^*$, we have $T_*^\Delta v(p) = (1-\delta) s + \delta v (p) \geq v(p)$ for all $\Delta$, because $v\leq s$ in this range.
For $p>p_1^*$,
\begin{eqnarray*}
T_*^\Delta v(p)
& = & (1-\delta) m(p)+\delta \mathcal{E}_1^\Delta v(p)\\
& = & (1-\delta) m(p)+ \delta {\cal E}^\Delta_1\left[ m + Cu + \mathbbm{1}_{[0,p^\natural]}(s-m-Cu) \right](p) \\
& = & v(p) + \delta {\cal E}^\Delta_1\left[ \mathbbm{1}_{[0,p^\natural]}(s-m-Cu) \right](p),
\end{eqnarray*}
where the last equation uses that ${\cal E}^\Delta_1 m(p) = m(p)$ and $\delta{\cal E}^\Delta_1 u(p)=u(p)$.
In particular, $T_*^\Delta v(1) = v(1)$.
The function $s-m-Cu$ is negative on the interval $(0,\breve p)$ and positive on $(\breve p,p^\sharp)$, for some $p^\sharp>p_1^*$.
The expectation of $s - m(p_\Delta)- C u(p_\Delta)$ conditional on $p_0 = p$ and $p_\Delta \leq p^\natural$ is continuous in
$(p,\Delta) \in [p_1^*,1) \times (0,\infty)$
and converges to $s - m(p^\natural)- C u(p^\natural) > 0$ as $p \rightarrow 1$ or $\Delta \rightarrow 0$ because the conditional distribution of $p_\Delta$ becomes a Dirac measure at $p^\natural$ in either limit.
This implies existence of $\bar \Delta > 0$ such that this conditional expectation is positive for all $(p,\Delta) \in [p_1^*,1) \times (0,\bar \Delta)$.
For these $(p,\Delta)$, we thus have
$$
{\cal E}^\Delta_1\left[ \mathbbm{1}_{[0,p^\natural]}(s-m-Cu) \right](p)
\geq {\cal E}^\Delta_1\left[ \mathbbm{1}_{[p^\flat,p^\natural]}(s-m-Cu) \right](p)
\geq 0,
$$
where $p^\flat=\frac{\breve p+p^\natural}{2}$.
As a consequence, $T_*^\Delta v(p) \ge v(p)$ for all $(p,\Delta) \in (p_1^*,1) \times (0,\bar \Delta)$.
}
Next, we turn to the payoff function associated with the good state of the automaton defined in Section \ref{sec:construction}.
By the same arguments as invoked immediately before Lemma \ref{lem:convergence-single-agent}, $\overline{w}^\Delta$ is the unique fixed point in $\mathcal{W}$ of the operator $\overline{T}^\Delta$ defined by
\[\overline{T}^\Delta w(p)
= \left\{ \begin{array}{lll}
(1-\delta) m(p) + \delta {\cal E}^\Delta_N w(p)
& \mbox{if} & p > \underline{p}, \\
(1-\delta) s + \delta w(p)
& \mbox{if} & p \leq \underline{p}.
\end{array}\right.\]
\begin{lem} \label{lem:lower-bound-reward}
Let $\underline{p} > p_N^*$. Then $\overline{w}^\Delta \geq V_{N,\pr}$ for $\Delta$ sufficiently small.
\end{lem}
\proof{
Because of the monotonicity of the operator $\overline{T}^\Delta$, it suffices to show that $\overline{T}^\Delta V_{N,\pr} \geq V_{N,\pr}$ for sufficiently small $\Delta$.
Recall that for $p > \underline{p}$, $V_{N,\pr}(p) = m(p) + C u(p;\mu_N)$ where the constant $C > 0$ is chosen to ensure continuity at $\underline{p}$.
For $p \leq \underline{p}$, we use exactly the same argument as in the penultimate paragraph of the proof of Lemma \ref{lem:convergence-single-agent};
for $p > \underline{p}$, the argument is the same as in the last paragraph of that proof.
}
The next two results concern the payoff function associated with the bad state of the automaton defined in Section \ref{sec:construction}.
Fix a cutoff $\bar{p} \in (p^m,1)$ and let $K(p)=N-1$ when $p>\bar{p}$, and $K(p)=0$ otherwise.
Given $\Delta > 0$, and any bounded function $w$ on $[0,1]$, define a bounded function $\underline{T}^\Delta w$ by
$$
\underline{T}^\Delta w(p)
= \max\left\{\astrut{2.5}
(1-\delta) m(p) + \delta {\cal E}^\Delta_{K(p)+1}w(p),\
(1-\delta) s + \delta {\cal E}^\Delta_{K(p)}w(p)
\right\}.
$$
The operator $\underline{T}^\Delta$ again satisfies Blackwell's sufficient conditions for being a contraction mapping with modulus $\delta$ on $\mathcal{W}$.
Its unique fixed point in this space is the payoff function $\underline{w}^\Delta$ (introduced in Section \ref{sec:construction}) from playing a best response against $N-1$ opponents who all play risky when $p > \bar{p}$, and safe otherwise.
\begin{lem} \label{lem:upper-bound-punishment}
Let $\underline{p}\in(p_N^*,p_1^*)$.
Then there exists $p^\diamond\in[p^m,1)$ such that for all $\bar{p}\in(p^\diamond,1)$,
the inequality $\underline{w}^\Delta \leq V_{N,(\underline{p}+p_1^*)/2}$ holds for $\Delta$ sufficiently small.
\end{lem}
\proof{
Let $\tilde p = (\underline{p}+p_1^*)/2$.
For $p > \tilde p$, we have $V_{N,\tilde p}(p) = m(p) + C u(p;\mu_N)$ where the constant $C > 0$ is chosen to ensure continuity at $\tilde p$.
To simplify notation, we write $\tilde v$ instead of $V_{N,\tilde p}$ and $u$ instead of $u(\cdot;\mu_N)$.
For $x > 0$, we define
\[p^*_x=\frac{\mu_x(s-m_0)}{(\mu_x+1)(m_1-s)+\mu_x(s-m_0)}\,,\]
where $\mu_x$ is the unique positive root of
\[f(\mu;x)=
\frac{\rho}{2}\mu(\mu+1)+(\lambda_1-\lambda_0)\mu+\lambda_0\left(\frac{\lambda_0}{\lambda_1}\right)^\mu-\lambda_0-\frac{r}{x}\,;\]
existence and uniqueness of this root follow from continuity and monotonicity of $f(\cdot;x)$ together with the fact that $f(0;x) < 0$ while $f(\mu;x) \to \infty$ as $\mu \to \infty$.\footnote{\emph{Cf.}\ Lemma 6 in Cohen and Solan (2013).}
This extends our previous definitions of $\mu_N$ and $p_N^*$ to non-integer numbers.
It is immediate to verify now that $\frac{d \mu_x}{dx}<0$ and hence $\frac{d p^*_x}{d x}<0$.
Thus, there exists $\breve{x}\in (1,N)$ such that $p^*_{\breve{x}}\in (\tilde p, p_1^*)$.
Having chosen such an $\breve{x}$, we fix a belief $\breve{p}\in(\tilde p, p^*_{\breve{x}})$ and, on the open unit interval, consider the function $\breve{v}$ that solves
\[{\cal L}^1 v -\frac{r}{\breve{x}}(v-m)=0\]
subject to the conditions $\breve{v}(\breve{p})=s$ and $\breve{v}'(\breve{p})=0$.
This function has the form
\[\breve{v}(p)=m(p)+\breve{u}(p),\]
with
\[\breve{u}(p)=A(1-p)\left(\frac{1-p}{p}\right)^{\breve{\mu}}+Bp\left(\frac{p}{1-p}\right)^{\hat{\mu}}=A u(p;\breve{\mu})+Bu(1-p;\hat{\mu}).\]
Here, $\breve{\mu} = \mu_{\breve{x}}$ and $\hat{\mu}$ is the unique positive root of
\[g(\mu;x)=
\frac{\rho}{2}\mu(\mu+1)-(\lambda_1-\lambda_0)\mu+\lambda_1\left(\frac{\lambda_1}{\lambda_0}\right)^\mu-\lambda_1-\frac{r}{x}\,;\]
existence and uniqueness of this root follow along the same lines as above.
The constants of integration $A$ and $B$ are pinned down by the conditions $\breve{v}(\breve{p})=s$ and $\breve{v}'(\breve{p})=0$.
One calculates
that $B>0$ if, and only if, $\breve{p}<p^*_{\breve{x}}$, which holds by construction,
and that $A>0$ if, and only if,
\[\breve{p}<\frac{(1+\hat{\mu})(s-m_0)}{\hat{\mu}(m_1-s)+(1+\hat{\mu})(s-m_0)}\,.\]
The right-hand side of this inequality is decreasing in $\hat{\mu}$ and tends to $p^m$ as $\hat{\mu}\rightarrow\infty$; as $\breve{p} < p_1^* < p^m$, the inequality therefore holds, and $A>0$ as well.
Moreover, $A+B>0$ implies that $\breve{v}$ is strictly increasing and strictly convex on $(\breve{p},1)$; as $B > 0$, finally, $\breve{v}(p) \to \infty$ for $p \to 1$.
So there exists a belief $p^\natural\in(\breve{p},1)$ such that $\breve{v}(p^\natural)=\tilde v(p^\natural)$ and $\breve{v}>\tilde v$ on $(p^\natural,1)$.
We now show that $\breve{v}<\tilde v$ in $(\breve{p},p^\natural)$.
Indeed, if this is not the case, then $\breve{v}-\tilde v$ assumes a non-negative local maximum at some $p^\sharp \in (\breve{p},p^\natural)$.
This implies:
\noindent (i) $\breve{v}(p^\sharp)\geq\tilde v(p^\sharp)$, \emph{i.e.},
\begin{equation}\label{eq:C8_0}
A u(p^\sharp;\breve{\mu})+B u(1-p^\sharp;\hat{\mu})\geq C u(p^\sharp;\mu_N);
\end{equation}
(ii) $\breve{v}'(p^\sharp)=\tilde v'(p^\sharp)$, \emph{i.e.},
\begin{equation}\label{eq:C8_1}
-(\breve{\mu}+p^\sharp)A u(p^\sharp;\breve{\mu})+(\hat{\mu}+1-p^\sharp)B u(1-p^\sharp;\hat{\mu})=-(\mu_N+p^\sharp)C u(p^\sharp;\mu_N);
\end{equation}
and (iii) $\breve{v}''(p^\sharp)\leq\tilde v''(p^\sharp)$, \emph{i.e.},
\begin{equation}\label{eq:C8_2}
\breve{\mu}(\breve{\mu}+1)A u(p^\sharp;\breve{\mu})+\hat{\mu}(1+\hat{\mu})B u(1-p^\sharp;\hat{\mu})\leq \mu_N(\mu_N+1)C u(p^\sharp;\mu_N).
\end{equation}
Solving for $B u(1-p^\sharp;\hat{\mu})$ in \eref{eq:C8_1} and inserting the result into \eref{eq:C8_0} and \eref{eq:C8_2}, we obtain, respectively,
\[\frac{C u(p^\sharp;\mu_N)}{A u(p^\sharp;\breve{\mu})}\leq \frac{\breve{\mu}+\hat{\mu}+1}{\mu_N+\hat{\mu}+1},\]
and
\[\frac{C u(p^\sharp;\mu_N)}{A u(p^\sharp;\breve{\mu})}\geq\frac{\breve{\mu}(\breve{\mu}+1)(\hat{\mu}+1-p^\sharp)+\hat{\mu}(\hat{\mu}+1)(\breve{\mu}+p^\sharp)} {\mu_N(\mu_N+1)(\hat{\mu}+1-p^\sharp)+\hat{\mu}(\hat{\mu}+1)(\mu_N+p^\sharp)}\,.\]
This implies that
\[\frac{\breve{\mu}+\hat{\mu}+1}{\mu_N+\hat{\mu}+1}\geq \frac{\breve{\mu}(\breve{\mu}+1)(\hat{\mu}+1-p^\sharp)+\hat{\mu}(\hat{\mu}+1)(\breve{\mu}+p^\sharp)} {\mu_N(\mu_N+1)(\hat{\mu}+1-p^\sharp)+\hat{\mu}(\hat{\mu}+1)(\mu_N+p^\sharp)}\,,\]
which one shows
to be equivalent to $\breve{\mu} \leq \mu_N$.
But $\breve{x}<N$ and $\frac{d\mu_x}{dx}<0$ imply $\breve{\mu} > \mu_N$.
This is the desired contradiction.
Now let $p^\diamond=\max\{p^m,p^\natural\}$, fix $\bar{p}\in(p^\diamond,1)$ and define
\[v(p)=\left\{\begin{array}{lll}
\tilde v(p) & \mbox{if} & p>p^\natural,\\
\breve{v}(p) & \mbox{if} & \breve{p}\leq p\leq p^\natural,\\
s &\mbox{if} & p<\breve{p}.\end{array}\right.\]
By construction, $s\leq v\leq \min\{\tilde v, \breve v\}$.
This immediately implies that $(1-\delta)s+\delta v\leq v$.
We now show that $\underline{T}^\Delta v\leq v$, and hence $\underline{w}^\Delta\leq v$, for $\Delta$ sufficiently small.
First, let $p \in (\bar{p},1]$.
We have
\begin{eqnarray*}
(1-\delta)m(p)+\delta{\cal E}^\Delta_N v(p)
& \leq & (1-\delta)m(p)+\delta{\cal E}^\Delta_N\left[ m + Cu + \mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](p) \\
& = & m(p) + Cu(p)+\delta{\cal E}^\Delta_N\left[\mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](p) \\
& \leq & m(p)+Cu(p) \\
& = & v(p),
\end{eqnarray*}
for $\Delta$ small enough that ${\cal E}^\Delta_N\left[\mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](\bar{p}) \leq 0$; that this inequality holds for small $\Delta$ follows from the fact that $s<m+Cu$ on $(\tilde p,\breve{p})$.
By the same token,
\begin{eqnarray*}
(1-\delta)s+\delta{\cal E}^\Delta_{N-1} v(p)
& \leq & (1-\delta)s+\delta{\cal E}^\Delta_{N-1} (m+Cu)(p)+\delta{\cal E}^\Delta_{N-1}\left[\mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](p)\\
& = &(1-\delta)s+\delta m(p)+\delta^{\frac{1}{N}}Cu(p)+\delta{\cal E}^\Delta_{N-1}\left[\mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](p)\\
&\leq & m(p)+Cu(p)\\
&=& v(p),
\end{eqnarray*}
for $\Delta$ small enough that ${\cal E}^\Delta_{N-1}\left[\mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](\bar{p}) \leq 0$, as $Cu(p)>0$ and $s<m(p)$ for $p>p^m$.
Second, let $p\in(p^\natural,\bar{p}]$. Now, we have
\begin{eqnarray*}
(1-\delta)m(p)+\delta{\cal E}^\Delta_1 v(p)
& \leq & m(p)+\delta^{1-\frac{1}{N}} C u(p) + \delta{\cal E}^\Delta_1\left[\mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](p)\\
& \leq & m(p) + Cu(p) \\
& = & v(p),
\end{eqnarray*}
for $\Delta$ small enough that ${\cal E}^\Delta_1\left[\mathbbm{1}_{(0,\breve{p})}(s-m-Cu)\right](p^\natural) \leq 0$.
Third, let $p\in[\breve{p},p^\natural]$. In this case,
\begin{eqnarray*}
(1-\delta)m(p)+\delta{\cal E}^\Delta_1v(p)
& \leq & (1-\delta)m(p)+\delta{\cal E}^\Delta_1\breve{v}(p)\\
& = & m(p)+\delta{\cal E}^\Delta_1\breve{u}(p)\\
& = & m(p)+\breve{u}(p) + \mathbb{E}\left[\left. \int_0^\Delta e^{-rt}\left\{ \astrut{2} {\cal L}^1\breve{u}(p_t) - r \breve{u}(p_t) \right\} dt\, \right| p_0=p \right] \\
& \leq & m(p)+\breve{u}(p) + \mathbb{E}\left[\left. \int_0^\Delta e^{-rt}\left\{ \astrut{2} {\cal L}^1\breve{u}(p_t) - \frac{r}{\breve{x}} \breve{u}(p_t) \right\} dt\, \right| p_0=p \right] \\
& = & m(p)+\breve{u}(p) \\
& = & v(p),
\end{eqnarray*}
where the second equality follows from Dynkin's formula, the second inequality holds because $\breve{u}(p_t)>0$ and $\breve{x}>1$, and the third equality is a consequence of the identity
${\cal L}^1\breve{u} - r\breve{u}/\breve{x} = 0$.
Finally, let $p\in [0,\breve{p})$. By monotonicity of $m$ and $v$ (and the previous step), we see that $(1-\delta)m(p)+\delta{\cal E}^\Delta_1v(p)\leq(1-\delta)m(\breve{p})+\delta{\cal E}^\Delta_1v(\breve{p})\leq v(\breve{p})=s=v(p)$.
}
\begin{lem}\label{lem:flat-part}
There exist $\check{p} \in (p^m,1)$ and $p^\ddagger \in (p_N^*,p_1^*)$ such that $\underline{w}^\Delta(p) = s$ for all $\bar{p} \in (\check{p},1)$, $p \leq p^\ddagger$ and $\Delta > 0$.
For any $\varepsilon > 0$, moreover, there exists $\check p_\varepsilon \in (\check p,1)$ such that $\underline{w}^\Delta \leq V_1^* + \varepsilon$ for all $\bar{p} \in (\check p_\varepsilon,1)$ and $\Delta > 0$.
\end{lem}
\proof{
Consider any $\bar{p} \in (p^m,1)$ and an initial belief $p < \bar{p}$.
We obtain an upper bound on $\underline{w}^\Delta(p)$ by considering a modified problem in which (i) the player can choose a best response in continuous time and (ii) the game is stopped with continuation payoff $m_1$ as soon as the belief $\bar{p}$ is reached.
This problem can be solved in the standard way, yielding an optimal cutoff $p^\ddagger$.
As the value of the modified problem equals $s$ on $[0,p^\ddagger]$ while $\underline{w}^\Delta \geq s$ always (playing safe forever guarantees $s$), it follows that $\underline{w}^\Delta = s$ on $[0,p^\ddagger]$.
As $\bar{p}$ tends to 1, $p^\ddagger$ approaches $p_1^*$ from the left and thus eventually lies strictly between $p_N^*$ and $p_1^*$.
This proves the first statement.
The second follows from the fact that the value function of the modified problem converges uniformly to $V_1^*$ as $\bar{p} \to 1$.
}
In the case of pure Poisson learning ($\rho = 0$), we need a sharper characterization of the payoff function $\underline{w}^\Delta$ as $\Delta$ becomes small.
To this end, we define $V_{1,\pp}$ as the continuous-time counterpart to $\underline{w}^\Delta$.
The methods employed in Keller and Rady (2010)\ can be used to establish that $V_{1,\pp}$ has the following properties for $\rho = 0$.
First, there is a cutoff $p^\dagger < p^m$ such that $V_{1,\pp}=s$ on $[0,p^\dagger]$, and $V_{1,\pp}>s$ everywhere else.
Second, $V_{1,\pp}$ is continuously differentiable everywhere except at $\bar{p}$.
Third, $V_{1,\pp}$ solves the Bellman equation
$$
v(p) = \max\left\{\astrut{2.5}
m(p) + [K(p)+1] b(p,v),\
s + K(p) b(p,v)\right\},
$$
where
$$
b(p,v)
= \frac{\lambda(p)}{r} \, \left[ v(j(p)) - v(p) \right]
- \frac{\lambda_1-\lambda_0}{r} \, p(1-p) \, v'(p),
$$
and
$v'(p)$ is taken to mean the left-hand derivative of $v$.
Fourth,
$b(p,V_{1,\pp}) \geq 0$ for all $p$.
Fifth, because of smooth pasting at $p^\dagger$, the term $m(p) + b(p,V_{1,\pp}) - s$ is continuous in $p$ except at $\bar{p}$;
it has a single zero at $p^\dagger$, being positive to the right of it and negative to the left.
Finally, we note that $V_{1,\pp} = V_1^*$ and $p^\dagger = p_1^*$ for $\bar{p} = 1$.
\begin{lem} \label{lem:convergence-punish-pbar}
Let $\rho = 0$.
Then $V_{1,\pp} \rightarrow V_1^*$ uniformly as $\bar{p} \rightarrow 1$.
The convergence is monotone in the sense that $\bar{p}' > \bar{p}$ implies
$V_{1,\bar{p}'} < V_{1,\pp}$ on $\{p\!: s < V_{1,\pp}(p) < \lambda_1 h\}$.
\end{lem}
As the closed-form solutions for the functions in question make it straightforward to establish this result, we omit the proof.
A key ingredient in the analysis of the pure Poisson case is uniform convergence of $\underline{w}^\Delta$ to $V_{1,\pp}$ as $\Delta \to 0$, which we establish by means of the following result.\footnote{
To the best of our knowledge, the earliest appearance of this result in the economics literature is in Biais et al.\ (2007).
A related approach is taken in Sadzik and Stacchetti (2015).}
\begin{lem} \label{lem:convergence-fixed-point}
Let $\{T^\Delta\}_{\Delta > 0}$ be a family of contraction mappings on the Banach space $(\mathcal{W}; \|\cdot\|)$ with moduli $\{\beta^\Delta\}_{\Delta > 0}$ and associated fixed points $\{w^\Delta\}_{\Delta > 0}$.
Suppose that there is a constant $\nu > 0$ such that $1-\beta^\Delta = \nu \Delta + o(\Delta)$ as $\Delta \rightarrow 0$.
Then, a sufficient condition for $w^\Delta$ to converge in $(\mathcal{W}; \|\cdot\|)$ to the limit $v$ as $\Delta \rightarrow 0$ is that
$\|T^\Delta v - v\| = o(\Delta)$.
\end{lem}
\proof{
As
$$
\|w^\Delta - v\|
= \| T^\Delta w^\Delta - v \|
\leq \| T^\Delta w^\Delta - T^\Delta v \| + \| T^\Delta v - v \|
\leq \beta^\Delta \|w^\Delta - v\| + \|T^\Delta v - v\|,
$$
the stated conditions on $\beta^\Delta$ and $\|T^\Delta v - v\|$ imply
$$
\|w^\Delta - v\|
\leq \frac{\|T^\Delta v - v\|}{1-\beta^\Delta}
= \frac{\Delta f(\Delta)}{\nu \Delta + \Delta g(\Delta)}
= \frac{f(\Delta)}{\nu + g(\Delta)},
$$
with $\lim_{\Delta\rightarrow0} f(\Delta) = \lim_{\Delta\rightarrow0} g(\Delta) = 0$.
}
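Lemma \ref{lem:convergence-fixed-point} is easy to illustrate numerically. The following sketch is purely illustrative: the scalar contraction family and all names are hypothetical and not part of the model. It uses $T^\Delta(w) = (1-\delta)(c+\Delta) + \delta w$ with $\delta = e^{-r\Delta}$, so that $\beta^\Delta = \delta$, $\nu = r$, and $\|T^\Delta v - v\| = (1-\delta)\Delta = o(\Delta)$ at the candidate limit $v = c$; the fixed point $c+\Delta$ then converges to $v$ at exactly the rate given by the bound in the proof.

```python
import math

# Hypothetical scalar contraction family T_Delta(w) = (1-delta)*(c+Delta) + delta*w
# with delta = exp(-r*Delta); modulus beta_Delta = delta, so 1 - beta_Delta
# = r*Delta + o(Delta), i.e. nu = r in the lemma.
r, c = 1.0, 2.0

def T(Delta, w):
    delta = math.exp(-r * Delta)
    return (1.0 - delta) * (c + Delta) + delta * w

def fixed_point(Delta):
    # Iterate the contraction to (numerical) convergence.
    w = 0.0
    for _ in range(10_000):
        w = T(Delta, w)
    return w

v = c  # candidate limit: ||T_Delta v - v|| = (1-delta)*Delta = o(Delta)
for Delta in [0.1, 0.01, 0.001]:
    delta = math.exp(-r * Delta)
    gap = abs(fixed_point(Delta) - v)
    bound = abs(T(Delta, v) - v) / (1.0 - delta)  # the bound from the proof
    assert gap <= bound + 1e-9
    assert abs(bound - Delta) < 1e-9  # here the bound collapses to Delta itself
```

As $\Delta \to 0$, both the bound and the actual gap vanish linearly in $\Delta$, as the lemma predicts.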
In our application of this lemma, $\cal W$ is again the Banach space of bounded real-valued functions on the unit interval, equipped with the supremum norm.
The operator in question is $\underline{T}^\Delta$ as defined above; the corresponding moduli are $\beta^\Delta = \delta = e^{-r\Delta}$, so that $\nu = r$.
\begin{lem} \label{lem:convergence-punish-Delta}
Let $\rho = 0$.
Then $\underline{w}^\Delta \rightarrow V_{1,\pp}$ uniformly as $\Delta \rightarrow 0$.
\end{lem}
\proof{
To simplify notation, we write $v$ instead of $V_{1,\pp}$.
For $K \in \{0,1,\ldots,N\}$, a straightforward Taylor expansion of ${\cal E}^\Delta_K v$ with respect to $\Delta$ yields
\begin{equation} \label{eq:Taylor}
\lim_{\Delta \rightarrow 0} \frac{1}{\Delta} \left\| \delta\, {\cal E}^\Delta_K v - v - r [K b(\cdot,v) - v] \Delta \right\| = 0.
\end{equation}
For $p > \bar{p}$, we have $K(p)=N-1$, and \eref{eq:Taylor} implies
\begin{eqnarray*}
(1-\delta) m(p) + \delta {\cal E}^\Delta_Nv(p) & = & v(p) + r \left[m(p) + N b(p,v) - v(p)\right] \Delta + o(\Delta), \\
(1-\delta) s + \delta {\cal E}^\Delta_{N-1}v(p) & = & v(p) + r \left[s + (N-1) b(p,v) - v(p)\right] \Delta + o(\Delta).
\end{eqnarray*}
As $m(p) > s$ on $[\bar{p},1]$ and $b(p,v) \geq 0$, there exists $\xi > 0$ such that
$$m(p) + N b(p,v) - \left[s + (N-1) b(p,v)\right] > \xi,$$
on $(\bar{p},1]$.
Thus,
$\underline{T}^\Delta v(p) = (1-\delta) m(p) + \delta {\cal E}^\Delta_N v(p)$ for $\Delta$ sufficiently small,
and the fact that
$v(p) = m(p) + N b(p,v)$ now implies
$\underline{T}^\Delta v(p) = v(p) + o(\Delta)$ on $(\bar{p},1]$.
On $[0,\bar{p}]$, we have $K(p)=0$, and \eref{eq:Taylor} implies
\begin{eqnarray}
\label{eq:taylor1}
\left\|(1-\delta)m+\delta{\cal E}^\Delta_1 v-v-r[m+b(\cdot,v)-v]\Delta\right\| & = & \Delta \psi_R(\Delta), \\
\label{eq:taylor2}
\left\|(1-\delta)s+\delta{\cal E}^\Delta_0 v-v-r[s-v]\Delta\right\| & = & \Delta \psi_S(\Delta),
\end{eqnarray}
for some functions $\psi_R,\psi_S\!:(0,\infty)\to[0,\infty)$ that satisfy $\psi_R(\Delta)\rightarrow 0$ and $\psi_S(\Delta)\rightarrow 0$ as $\Delta\rightarrow 0$.
First, let $p\in (p^\dagger,\bar{p}]$.
We note that $\underline{T}^\Delta v(p)\geq (1-\delta)m(p)+\delta{\cal E}^\Delta_1 v(p)\geq v(p)-\Delta \psi_R(\Delta)$ in this range,
where the first inequality follows from the definition of $\underline{T}^\Delta$, and the second inequality is implied by \eref{eq:taylor1} and $v(p) = m(p) + b(p,v) $ for $p\in (p^\dagger,\bar{p}]$.
If the maximum in the definition of $\underline{T}^\Delta v(p)$ is achieved by the risky action, the first in the previous chain of inequalities holds as an equality, and \eref{eq:taylor1} immediately implies that $\underline{T}^\Delta v(p) = v(p) + o(\Delta)$.
If the maximum in the definition of $\underline{T}^\Delta v(p)$ is achieved by the safe action, however, we have $\underline{T}^\Delta v(p)=(1-\delta)s+\delta{\cal E}^\Delta_0 v(p)\leq v(p)+r[s-v(p)]\Delta+\Delta \psi_S(\Delta)\leq v(p)+\Delta\psi_S(\Delta)$,
where the second inequality follows from $v>s$ on $(p^\dagger,\bar{p}]$.
Thus $v(p)-\Delta\psi_R(\Delta)\leq \underline{T}^\Delta v(p)\leq v(p)+\Delta \psi_S(\Delta)$, and we can conclude that $\underline{T}^\Delta v(p) = v(p) + o(\Delta)$ in this case as well.
Now, let $p \leq p^\dagger$.
We note that $\underline{T}^\Delta v(p)\geq (1-\delta)s+\delta{\cal E}^\Delta_0 v(p)\geq v(p)-\Delta \psi_S(\Delta)$ in this range,
where the first inequality follows from the definition of $\underline{T}^\Delta$, and the second inequality is implied by \eref{eq:taylor2} and $v(p) = s $ for $p\leq p^\dagger$.
If the maximum in the definition of $\underline{T}^\Delta v(p)$ is achieved by the safe action, the first in the previous chain of inequalities holds as an equality, and \eref{eq:taylor2} immediately implies that $\underline{T}^\Delta v(p) = v(p) + o(\Delta)$.
If the maximum in the definition of $\underline{T}^\Delta v(p)$ is achieved by the risky action, however, we have $\underline{T}^\Delta v(p)=(1-\delta)m(p)+\delta{\cal E}^\Delta_1 v(p)\leq v(p)+r[m(p)+b(p,v)-v(p)]\Delta+\Delta \psi_R(\Delta)\leq v(p)+\Delta\psi_R(\Delta)$,
where the second inequality follows from $v=s\geq m(p)+b(p,v)$ on $[0,p^\dagger]$.
Thus $v(p)-\Delta\psi_S(\Delta)\leq \underline{T}^\Delta v(p)\leq v(p)+\Delta \psi_R(\Delta)$, so $\underline{T}^\Delta v(p) = v(p) + o(\Delta)$ in this case as well.
}
Our last two auxiliary results pertain to the case of pure Poisson learning.
\begin{lem}\label{lem:thresh}
Let $\rho = 0$.
There is a belief $\hat{p} \in [p_N^*,p_1^*]$ such that
$$
\lambda(\underline{p}) \left[ N V_{N,\underline{p}}(j(\underline{p})) - (N-1) V_1^*(j(\underline{p})) - s \right] - rc(\underline{p})
$$
is negative if $0 < \underline{p} < \hat{p}$, zero if $\underline{p} = \hat{p}$, and positive if $\hat{p} < \underline{p} <1$.
Moreover, $\hat{p} = p_N^*$ if, and only if, $j(p^*_N)\leq p_1^*$,
and $\hat{p} = p_1^*$ if, and only if, $\lambda_0 = 0$.
\end{lem}
\proof{
We start by noting that given the functions $V_1^*$ and $V_N^*$, the cutoffs $p_1^*$ and $p_N^*$ are uniquely determined by
\begin{equation} \label{eq:p_1^*}
\lambda(p_1^*)[V_1^*(j(p_1^*))-s] = rc(p_1^*),
\end{equation}
and
\begin{equation} \label{eq:p_N^*}
\lambda(p_N^*)[NV_N^*(j(p_N^*))-Ns] = rc(p_N^*),
\end{equation}
respectively.
Consider the differentiable function $f$ on $(0,1)$ given by
\[
f(\underline{p})=\lambda(\underline{p})[NV_{N,\underline{p}}(j(\underline{p}))-(N-1)V_1^*(j(\underline{p}))-s]-rc(\underline{p}).
\]
For $\lambda_0 = 0$, we have $j(\underline{p})=1$ and $V_{N,\underline{p}}(j(\underline{p}))=V_1^*(j(\underline{p}))=m_1$ for all $\underline{p}$, so
$f(\underline{p})=\lambda(\underline{p})[V_1^*(j(\underline{p}))-s] - rc(\underline{p})$,
which is zero at $\underline{p}=p_1^*$ by \eref{eq:p_1^*}, positive for $\underline{p}>p_1^*$, and negative for $\underline{p}<p_1^*$.
Assume $\lambda_0 > 0$.
For $0 < \underline{p} < p \leq 1$, we have $V_{N,\underline{p}}(p)=m(p)+c(\underline{p}) u(p;\mu_N)/u(\underline{p};\mu_N)$.
Moreover, we have $V_1^*(p)=s$ when $p \leq p_1^*$, and $V_1^*(p)=m(p)+Cu(p;\mu_1)$ with a constant $C>0$ otherwise.
Using the fact that
$$u(j(p);\mu)=\frac{\lambda_0}{\lambda(p)}\left(\frac{\lambda_0}{\lambda_1}\right)^\mu u(p;\mu),$$
we see that the term $\lambda(\underline{p})NV_{N,\underline{p}}(j(\underline{p}))$ is actually linear in $\underline{p}$.
When $j(\underline{p}) \leq p_1^*$, the term $-\lambda(\underline{p})(N-1)V_1^*(j(\underline{p}))$ is also linear in $\underline{p}$;
when $j(\underline{p}) > p_1^*$, the nonlinear part of this term simplifies to
$-(N-1) C \lambda_0^{\mu_1+1} u(\underline{p};\mu_1)/\lambda_1^{\mu_1}$.
This shows that $f$ is concave, and strictly concave on the interval of all $\underline{p}$ for which $j(\underline{p}) > p_1^*$.
As $\lim_{\underline{p} \rightarrow 1} f(\underline{p}) > 0$, this in turn implies that $f$ has at most one root in the open unit interval; if so, $f$ assumes negative values to the left of the root, and positive values to the right.
As $V_{N,p_1^*}(j(p_1^*))>V_1^*(j(p_1^*))$, moreover, we have
$
f(p_1^*) > \lambda(p_1^*)[V_1^*(j(p_1^*))-s]-rc(p_1^*) = 0
$
by \eref{eq:p_1^*}.
Any root of $f$ must thus lie in $[0,p_1^*)$.
If $j(p_N^*) \leq p_1^*$, then $V_1^*(j(p_N^*)) = s$ and
$
f(p_N^*)=\lambda(p_N^*)[NV_N^*(j(p_N^*))-Ns]-rc(p_N^*) = 0
$
by \eref{eq:p_N^*}.
If $j(p_N^*) > p_1^*$, then $V_1^*(j(p_N^*)) > s$ and $f(p_N^*) < 0$, so $f$ has a root in $(p_N^*,p_1^*)$.
}
The following result is used in the proof of Proposition \ref{prop:efficiency-region-Poisson}.
\begin{lem} \label{lem:comparison-mu1-muN}
Let $\rho = 0$.
Then $\mu_1 (\mu_1+1) > N \mu_N (\mu_N+1)$.
\end{lem}
\proof{
We change variables to $\beta =\lambda_0/\lambda_1$ and $x=r/\lambda_1$, so that $\mu_N$ and $\mu_1$ are implicitly defined as the positive solutions of the equations
\begin{eqnarray*}
\frac{x}{N} + \beta - (1-\beta) \mu_N & = & \beta^{\mu_N+1}, \\
x + \beta - (1-\beta) \mu_1 & = & \beta^{\mu_1+1}.
\end{eqnarray*}
Fixing $\beta \in [0,1)$ and considering $\mu_N$ and $\mu_1$ as functions of $x \in (0,\infty)$, we obtain
\begin{eqnarray*}
\mu_N' & = & \frac{N^{-1}}{1-\beta+\beta^{\mu_N+1}\ln\beta} \ \ = \ \ \frac{N^{-1}}{1-\beta+\left[\divn{x}{N} + \beta - (1-\beta) \mu_N\right] \ln\beta} \,, \\
\mu_1' & = & \frac{1}{1-\beta+\beta^{\mu_1+1}\ln\beta} \ \ = \ \ \frac{1}{1-\beta+\left[x + \beta - (1-\beta) \mu_1\right] \ln\beta} \,.
\end{eqnarray*}
(All denominators are positive because $1-\beta+\beta^{\mu+1}\ln\beta \geq 1-\beta+\beta \ln\beta > 0$ for all $\mu \geq 0$.)
Let $d = \mu_1 (\mu_1+1) - N \mu_N (\mu_N+1)$.
As $\lim_{x \rightarrow 0} \mu_N = \lim_{x \rightarrow 0} \mu_1 = 0$, we see that $\lim_{x \rightarrow 0} d = 0$ as well.
It is thus enough to show that $d' > 0$ at any $x > 0$.
This is the case if, and only if, $(2\mu_1+1)\mu_1' > N (2\mu_N+1) \mu_N'$, that is,
$$
(2\mu_1+1) \left\{1-\beta+\left[\divn{x}{N} + \beta - (1-\beta) \mu_N\right]\ln\beta\right\} > (2\mu_N+1) \left\{1-\beta+\left[x + \beta - (1-\beta) \mu_1\right]\ln\beta\right\}.
$$
This inequality reduces to
$$
(\mu_1-\mu_N) \left\{ 2(1-\beta) + \left[\divn{2x}{N} + 1 + \beta\right]\ln\beta\right\} > (2\mu_N+1) \left[x-\divn{x}{N}\right] \ln\beta.
$$
It is straightforward to show that $\mu_1 > \mu_N + \frac{1}{1-\beta}\left[x-\divn{x}{N}\right]$.
So $d' > 0$ if
$$
2(1-\beta) + \left[\divn{2x}{N} + 1 + \beta\right]\ln\beta > (2\mu_N+1) (1-\beta) \ln\beta,
$$
which simplifies to $1-\beta+\left[\divn{x}{N} + \beta - (1-\beta) \mu_N\right] \ln\beta > 0$ -- an inequality that we have already established.
}
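The inequality can also be spot-checked numerically. The sketch below is illustrative only (the parameter grid is arbitrary, and the solver is ours, not the authors'); it solves the two implicit equations from the proof by bisection, using the fact established above that $1-\beta+\beta^{\mu+1}\ln\beta > 0$, so the defining function is strictly increasing in $\mu$.

```python
# mu_N and mu_1 are the positive roots of
#   x/K + beta - (1-beta)*mu = beta**(mu+1)
# for K = N and K = 1, respectively (see the proof above).

def mu_root(c, beta):
    # Positive root of g(mu) = beta**(mu+1) - beta + (1-beta)*mu - c.
    # g(0) = -c < 0 and g'(mu) = 1 - beta + beta**(mu+1)*ln(beta) > 0
    # for beta in [0,1), so g is increasing and bisection applies.
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if beta ** (mid + 1.0) - beta + (1.0 - beta) * mid - c < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative grid: the lemma asserts mu_1*(mu_1+1) > N*mu_N*(mu_N+1).
for beta in [0.0, 0.3, 0.9]:
    for x in [0.1, 1.0, 10.0]:
        for N in [2, 5]:
            mu_N = mu_root(x / N, beta)
            mu_1 = mu_root(x, beta)
            assert mu_1 * (mu_1 + 1) > N * mu_N * (mu_N + 1)
```

For $\beta = 0$ the roots are simply $\mu_1 = x$ and $\mu_N = x/N$, so the inequality reduces to $x+1 > x/N+1$.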
\section{Proofs} \label{app:proofs}
\subsection{Main Results (Theorem 1 and Propositions \ref{prop:comparison_sym_MPE}--\ref{prop:efficiency-region-Poisson})}
\proofof{Theorem}{thm}{
For $\rho > 0$, this result is an immediate consequence of inequalities \eref{eq:payoffs},
the fact that $\liminf_{\Delta \rightarrow 0} \Winf_{\rm \, PBE} \geq V_1^*$ and $\Wsup_{\rm PBE} \leq V_N^*$,
and Proposition \ref{prop:limit-Brownian}.
For $\rho = 0$, the result follows from inequalities \eref{eq:payoffs},
the fact that $\liminf_{\Delta \rightarrow 0} \Winf_{\rm \, PBE} \geq V_1^*$,
and Propositions \ref{prop:thresh} and \ref{prop:limit-Poisson}.
}
\proofof{Proposition}{prop:comparison_sym_MPE}{
Arguing as in Keller and Rady (2010), one establishes that in the unique symmetric MPE of the continuous-time game, all experimentation stops at the belief $\tilde{p}_N$ implicitly defined by
$rc(\tilde{p}_N) = \lambda(\tilde{p}_N)[\tilde{V}_N(j(\tilde{p}_N))-s]$,
where $\tilde{V}_N$ is the players' common equilibrium payoff function.
The equilibrium construction along the lines of Keller and Rady (2010)\ further implies that $V_{N,\tilde{p}_N}(j(\tilde{p}_N)) > \tilde{V}_N(j(\tilde{p}_N)) > V_1^*(j(\tilde{p}_N))$,
so that
$NV_{N,\tilde{p}_N}(j(\tilde{p}_N))-(N-1)V_1^*(j(\tilde{p}_N)) > \tilde{V}_N(j(\tilde{p}_N))$,
and hence $\hat{p} < \tilde{p}_N$ by Lemma \ref{lem:thresh}.
}
\proofof{Proposition}{prop:comp-statics-N}{
We only need to consider the case that $\hat{p}>p_N^*$.
Recall the defining equation for $\hat{p}$ from Lemma \ref{lem:thresh},
\[
\lambda(\hat{p} )NV_{N,\hat{p}}(j(\hat{p} ))-\lambda (\hat{p} )s-rc(\hat{p} )=(N-1) \lambda (\hat{p} )V_1^*(j(\hat{p} )).
\]
We make use of the closed-form expression for $V_{N,\hat{p}}$ to rewrite its left-hand side as
\[
N\lambda (\hat{p})\lambda(j(\hat{p}))h+Nc(\hat{p})[\lambda_0-\mu_N(\lambda_1-\lambda_0)]-\lambda(\hat{p})s.
\]
Similarly, by noting that $\hat{p}>p_N^*$ implies $j(\hat{p})> j(p_N^*)>p_1^*$, we can make use of the closed-form expression for $V_1^*$ to rewrite the right-hand side as
\[
(N-1)\lambda (\hat{p})\lambda(j(\hat{p}))h+(N-1) c(p_1^*)\frac{u(\hat{p};\mu_1)}{u(p_1^*;\mu_1)}[r+\lambda_0-\mu_1(\lambda_1-\lambda_0)].
\]
Combining, we have
\[
\frac{\lambda (\hat{p})\lambda(j(\hat{p}))h+Nc(\hat{p})[\lambda_0-\mu_N(\lambda_1-\lambda_0)]-\lambda(\hat{p})s}{(N-1) [r+\lambda_0-\mu_1(\lambda_1-\lambda_0)] c(p_1^*)}=\frac{u(\hat{p};\mu_1)}{u(p_1^*;\mu_1)}.
\]
It is convenient to change variables to
\[
\beta=\frac{\lambda_0}{\lambda_1} \quad \mbox{and} \quad y=\frac{\lambda_1}{\lambda_0} \, \frac{\lambda_1 h-s}{s-\lambda_0 h} \, \frac{\hat{p}}{1-\hat{p}}\,.
\]
The implicit definitions of $\mu_1$ and $\mu_N$ imply
\[
N=\frac{\beta^{1+\mu_1}-\beta+\mu_1(1-\beta)}{\beta^{1+\mu_N}-\beta+\mu_N(1-\beta)}\,,
\]
allowing us to rewrite the defining equation for $\hat{p}$ as the equation $F(y,\mu_N) = 0$ with
\begin{eqnarray*}
F(y,\mu)
& = & 1 - y + [\beta(1+\mu)y-\mu]\,\frac{1-\beta}{\beta}\,\frac{\beta^{1+\mu_1}-\beta+\mu_1(1-\beta)}{(\mu_1-\mu)(1-\beta)+\beta^{1+\mu_1}-\beta^{1+\mu}} \\
& & \mbox{} - \frac{\mu_1^{\mu_1}}{(1+\mu_1)^{1+\mu_1}} \, y^{-\mu_1}.
\end{eqnarray*}
As $y$ is a strictly increasing function of $\hat{p}$, we know from Lemma \ref{lem:thresh} that $F(\cdot,\mu_N)$ admits a unique root, and that it is strictly increasing in a neighborhood of this root.
A straightforward computation shows that
\[
\frac{\partial F(y,\mu_N)}{\partial \mu}
= \frac{1-\beta}{\beta}\,\frac{\beta^{1+\mu_1}-\beta+\mu_1(1-\beta)}{((\mu_1-\mu_N)(1-\beta)+\beta^{1+\mu_1}-\beta^{1+\mu_N})^2}\ \zeta(y,\mu_N),
\]
with
\[
\zeta(y,\mu) = \beta(1-\beta)(1+\mu_1)y - (1-\beta)\mu_1 + (1-\beta y)(\beta^{1+\mu}-\beta^{1+\mu_1}) + \beta^{1+\mu}\,(\beta(1+\mu)y-\mu)\ln \beta.
\]
As $p_N^* < \hat{p} < p_1^*$, we have
\[
\frac{\mu_N}{1+\mu_N} < \beta y < \frac{\mu_1}{1+\mu_1}\,,
\]
which implies
\[
\zeta(y,\mu_1) = (\beta(1+\mu_1)y-\mu_1)\,(1-\beta+\beta^{1+\mu_1}\ln \beta) < 0,
\]
and
\[
\frac{\partial \zeta(y,\mu)}{\partial \mu} = \beta^{1+\mu} [\beta(1+\mu)y-\mu] (\ln \beta)^2 > 0,
\]
for all $\mu \in [\mu_N,\mu_1]$.
This establishes $\zeta(y,\mu_N) < 0$.
By the implicit function theorem, therefore, $y$ is increasing in $\mu_N$.
Recalling from Keller and Rady (2010)\ that $\mu_N$ is decreasing in $N$, we have thus shown that $y$ (and hence $\hat{p}$) is decreasing in $N$.
}
\proofof{Proposition}{prop:efficiency-region-Poisson}{
There is nothing to show for $\lambda_0 = 0$.
Using the same change of variables as in the proof of Lemma \ref{lem:comparison-mu1-muN}, we therefore fix $\beta \in (0,1)$ and define
$$
q = \beta \cdot \frac{1+\mu_N^{-1}}{1+\mu_1^{-1}}\,,
$$
so that $j(p_N^*) \leq p_1^*$ if, and only if, $q \geq 1$.
As $\lim_{x \rightarrow \infty} \mu_N = \lim_{x \rightarrow \infty} \mu_1 = \infty$,
we have $\lim_{x \rightarrow \infty} q = \beta < 1$.
As $\lim_{x \rightarrow 0} \mu_N = \lim_{x \rightarrow 0} \mu_1 = 0$, moreover,
$$\lim_{x \rightarrow 0} q
= \beta \lim_{x \rightarrow 0} \frac{\mu_1}{\mu_N}
= \beta \lim_{x \rightarrow 0} \frac{\mu_1'}{\mu_N'}
= \beta N$$
by l'H\^{o}pital's rule.
Finally, $q'$ is easily seen to have the same sign as
$$
- \mu_1 (\mu_1+1) (1-\beta+\beta^{\mu_1+1}\ln\beta) + N \mu_N (\mu_N+1) (1-\beta+\beta^{\mu_N+1}\ln\beta).
$$
As $\beta^{\mu_1+1}\ln\beta > \beta^{\mu_N+1}\ln\beta$, Lemma \ref{lem:comparison-mu1-muN} implies that $q$ decreases strictly in $x$.
This in turn implies that $q < 1$ at all $x \in (0,\infty)$ when $\beta N \leq 1$, which proves the first part of the proposition.
Otherwise, there exists a unique $x^* \in (0,\infty)$ at which $q = 1$.
The second part of the proposition thus holds with $(\lambda_1^*,\lambda_0^*) = (r/x^*,\beta r/x^*)$.
It is straightforward to see that $x^*$ varies continuously with $\beta$ and that $\lim_{\beta \rightarrow 1/N} x^* = 0$.
So it remains to show that $x^*$ remains bounded as $\beta \rightarrow 1$.
Rewriting the defining equation for $x^*$ as
$$
1+\frac{1}{(1-\beta)\mu_1(x^*(\beta),\beta)}=\frac{1}{(1-\beta)\mu_N(x^*(\beta),\beta)}\,,
$$
we see that $(1-\beta)\mu_N(x^*(\beta),\beta)$ must stay bounded as $\beta \rightarrow 1$.
By the defining equation for $\mu_N$, $x^*(\beta)$ must then also stay bounded.
}
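The structure of this proof lends itself to a numerical illustration. In the hypothetical sketch below (parameter values are illustrative, and the bisection solver is ours), $q$ runs from $\beta N$ at $x \to 0$ down to $\beta$ at $x \to \infty$ and crosses 1 exactly once when $\beta N > 1$.

```python
# q = beta*(1 + 1/mu_N)/(1 + 1/mu_1) as in the proof; j(p_N^*) <= p_1^*
# iff q >= 1. Illustrative parameters: beta = 0.4, N = 5, so beta*N = 2 > 1.

def mu_root(c, beta):
    # Positive root of beta**(mu+1) - beta + (1-beta)*mu - c = 0
    # (increasing in mu, so bisection applies).
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if beta ** (mid + 1.0) - beta + (1.0 - beta) * mid - c < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def q(x, beta, N):
    mu_N, mu_1 = mu_root(x / N, beta), mu_root(x, beta)
    return beta * (1.0 + 1.0 / mu_N) / (1.0 + 1.0 / mu_1)

beta, N = 0.4, 5
xs = [0.01, 0.1, 1.0, 10.0, 100.0]
vals = [q(x, beta, N) for x in xs]
assert all(a > b for a, b in zip(vals, vals[1:]))  # q strictly decreasing in x
assert vals[0] > 1.0 > vals[-1]                    # q crosses 1 exactly once
```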
\subsection{Learning with a Brownian Component (Propositions \ref{prop:SSE-Brownian}--\ref{prop:limit-Brownian})}
The proof of Proposition \ref{prop:SSE-Brownian} rests on a sequence of lemmas that prove incentive compatibility of the proposed strategies on various subintervals of $[0,1]$.
When no assumption on the signal-to-noise ratio $\rho$ is stated, the respective result holds irrespective of whether $\rho > 0$ or $\rho = 0$.
In view of Lemmas
\ref{lem:upper-bound-punishment} and \ref{lem:flat-part},
we take $\underline{p}$ and $\bar{p}$ such that
\begin{equation} \label{eq:thresholds-Brownian}
p_N^* < \underline{p} < p^\ddagger < p_1^* < p^m < \max\{p^\diamond,\check{p}\} < \bar{p} < 1.
\end{equation}
The first two lemmas deal with the safe action ($\kappa = 0$) on the interval $[0,\bar{p}]$.
\begin{lem}\label{lem:k=0;0-pddagger}
For all $p \leq p^\ddagger$,
$$
(1-\delta) s + \delta \overline{w}^\Delta(p) \geq (1-\delta) m(p) + \delta {\cal E}^\Delta_1\underline{w}^\Delta(p).
$$
\end{lem}
\proof{
As $\overline{w}^\Delta(p) \geq s = \underline{w}^\Delta(p)$ for $p \leq p^\ddagger$,
we have
$(1-\delta) s + \delta \overline{w}^\Delta(p) \geq s$
whereas $s \geq (1-\delta) m(p) + \delta {\cal E}^\Delta_1\underline{w}^\Delta(p)$ by the functional equation for $\underline{w}^\Delta$.
}
\begin{lem}\label{lem:k=0;pddagger-pp}
There exists $\Delta_{(p^\ddagger,\bar{p}]} > 0$ such that
$$
(1-\delta) s + \delta \overline{w}^\Delta(p) \geq (1-\delta) m(p) + \delta {\cal E}^\Delta_1\underline{w}^\Delta(p),
$$
for all $p \in (p^\ddagger,\bar{p}]$ and $\Delta < \Delta_{(p^\ddagger,\bar{p}]}$.
\end{lem}
\proof{
By Lemmas \ref{lem:lower-bound-reward} and \ref{lem:upper-bound-punishment}, there exist $\nu > 0$ and $\Delta_0 > 0$ such that
$\overline{w}^\Delta(p) - \underline{w}^\Delta(p) \geq \nu$ for all $p \in [p^\ddagger, \bar{p}]$ and $\Delta < \Delta_0$.
Further, there is a $\Delta_1 \in (0,\Delta_0]$ such that
$|{\cal E}^\Delta_1 \underline{w}^\Delta(p) - \underline{w}^\Delta(p)| \leq \frac{\nu}{2}$ for all $p \in [p^\ddagger, \bar{p}]$ and $\Delta < \Delta_1$.
For these $p$ and $\Delta$, we thus have
$$
(1-\delta) s + \delta \overline{w}^\Delta(p) - \left[ (1-\delta) m(p) + \delta {\cal E}^\Delta_1\underline{w}^\Delta(p) \right]
\geq (1-\delta) [s-m(p)] + \delta \, \frac{\nu}{2}.
$$
Finally, there is a $\Delta_{(p^\ddagger,\bar{p}]} \in (0,\Delta_1]$ such that the right-hand side of this inequality is positive for all $p \in (p^\ddagger,\bar{p}]$ and $\Delta < \Delta_{(p^\ddagger,\bar{p}]}$.
}
We establish incentive compatibility of the risky action ($\kappa = 1$) to the immediate right of $\underline{p}$ by means of the following result.
\begin{lem}\label{lem:Gaussian-variable}
Let $X$ be a Gaussian random variable with mean $m$ and variance $V$.
\begin{enumerate}
\item For all $\eta > 0$,
\[
\mathbb{P}\!\left[X - m > \eta\right] < \frac{V}{\eta^2}\,.
\]
\item There exists $\overline{V} \in (0,1)$ such that for all $V < \overline{V}$,
\[
\mathbb{P}\!\left[ V^{\frac{3}{4}} \le X - m \le V^{\frac{1}{4}}\right] \ge \frac{1}{2}-V^{\frac{1}{4}}\,.
\]
\end{enumerate}
\end{lem}
\proof{
The first statement is an immediate consequence of Chebyshev's inequality. The proof of the second relies on inequality (13.48) of Johnson et al.\ (1994) for the standard normal cumulative distribution function:
\[
\frac{1}{2}\left[1+(1-e^{-x^2/2})^{\frac{1}{2}}\right] \le \Phi(x) \le \frac{1}{2}\left[1+(1-e^{-x^2})^{\frac{1}{2}}\right].
\]
Letting $\Phi^{V}$ denote the cdf of the Gaussian distribution with variance $V$ (and mean 0),
and using the above upper and lower bounds, we have
\begin{equation*}
\frac{\frac{1}{2}+\Phi^{V}(V^{\frac{3}{4}})-\Phi^{V}(V^{\frac{1}{4}})}{ \sqrt[4]{V }}
\le \frac{1-\sqrt{1-e^{-\frac{1}{2 \sqrt{V}}}}+\sqrt{1-e^{-\sqrt{V }}}}{2 \sqrt[4]{V }}\,.
\end{equation*}
Writing $x=\sqrt{V}$ and using the fact that $1 - \sqrt{1-y} \le \sqrt{y}$ for $0 \leq y \le 1$, moreover, we have
\[
\frac{1-\sqrt{1-e^{-\frac{1}{2 x}}}+\sqrt{1-e^{-x}}}{2 \sqrt{x}}\le \frac{1}{2}\sqrt{\frac{e^{-\frac{1}{2x}}}{x}}+\frac{1}{2} \sqrt{\frac{1-e^{-x}}{x}}\rightarrow \frac{1}{2},
\]
as $x\rightarrow 0$.
Thus,
\begin{equation*}
\frac{\frac{1}{2}+\Phi^{V}(V^{\frac{3}{4}})-\Phi^{V}(V^{\frac{1}{4}})}{ \sqrt[4]{V }}\le 1,
\end{equation*}
for sufficiently small $V$, which is the second statement of the lemma.
}
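Both statements of Lemma \ref{lem:Gaussian-variable} are easy to confirm against the exact normal cdf $\Phi(x) = \frac{1}{2}[1+\erf(x/\sqrt{2})]$. The following sketch (the values of $V$ and $\eta$ are illustrative) does so via Python's \texttt{math.erf}.

```python
import math

def Phi(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# The bounds do not depend on the mean m, so take m = 0 throughout.
# Part 1: Chebyshev-type tail bound P[X - m > eta] < V / eta**2.
for V in [0.25, 1.0, 4.0]:
    for eta in [0.5, 1.0, 2.0]:
        tail = 1.0 - Phi(eta / math.sqrt(V))
        assert tail < V / eta ** 2

# Part 2: P[V**(3/4) <= X - m <= V**(1/4)] >= 1/2 - V**(1/4) for small V.
for V in [1e-4, 1e-6, 1e-8]:
    prob = Phi(V ** 0.25 / math.sqrt(V)) - Phi(V ** 0.75 / math.sqrt(V))
    assert prob >= 0.5 - V ** 0.25
```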
We apply this lemma to the log odds ratio
$\ell$ associated with the current belief $p$.
For later use, we note that
$dp/d\ell = p\,(1-p)$.
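As a quick sanity check of this change of variables (a hypothetical sketch; the helper names are ours), the inverse map $p = 1/(1+e^{-\ell})$ and the derivative $dp/d\ell = p(1-p)$ can be verified numerically.

```python
import math

def to_log_odds(p):
    # ell = log(p / (1 - p))
    return math.log(p / (1.0 - p))

def from_log_odds(ell):
    # Inverse map: p = 1 / (1 + exp(-ell))
    return 1.0 / (1.0 + math.exp(-ell))

for p in [0.1, 0.5, 0.9]:
    ell = to_log_odds(p)
    assert abs(from_log_odds(ell) - p) < 1e-12
    h = 1e-6  # central finite difference for dp/d(ell)
    deriv = (from_log_odds(ell + h) - from_log_odds(ell - h)) / (2.0 * h)
    assert abs(deriv - p * (1.0 - p)) < 1e-8
```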
\begin{lem}\label{lem:k=1;pr-pr+epsilon-Brownian}
Let $\rho > 0$. There exist $\varepsilon \in (0,p^\ddagger-\underline{p})$ and $\Delta_{(\underline{p},\underline{p}+\varepsilon]} > 0$ such that
$$
(1-\delta) m(p) + \delta {\cal E}^\Delta_{N}\overline{w}^\Delta(p) \geq (1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p),
$$
for all $p \in (\underline{p},\underline{p}+\varepsilon]$ and $\Delta < \Delta_{(\underline{p},\underline{p}+\varepsilon]}$.
\end{lem}
\proof{
Consider a belief $p_0 = p$ and the corresponding log odds ratio $\ell$.
Let $K$ players use the risky arm on the time interval $[0,\Delta)$ and consider the resulting belief $p^{(K)}_\Delta$ and the associated log odds ratio $\ell^{(K)}_\Delta$.
Let $\mathbb{P}_\theta$ denote the probability measure associated with state $\theta \in \{0,1\}$.
Expected continuation payoffs are computed by means of the measure $\mathbb{P}_p = p \mathbb{P}_1 +(1-p) \mathbb{P}_0$.
Let $J_0^\Delta$ denote the event that no lump-sum arrives by time $\Delta$.
The probability of $J_0^\Delta$ under the measure $\mathbb{P}_\theta$ is $e^{-\lambda_\theta \Delta}$.
Note that
$$
e^{-\lambda_\theta \Delta} \mathbb{P}_\theta[A \mid J_0^\Delta]
\le \mathbb{P}_\theta[A]
\le e^{-\lambda_\theta \Delta} \mathbb{P}_\theta[A \mid J_0^\Delta] + 1 - e^{-\lambda_\theta \Delta},
$$
for any event $A$.
As we have seen in Appendix \ref{app:beliefs}, conditional on $J_0^\Delta$, the random variable $\ell^{(K)}_\Delta$ is normally distributed with mean
$\ell - K \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta$ and variance $K \rho \Delta$ under $\mathbb{P}_1$,
and normally distributed with mean
$\ell - K \left(\lambda_1 - \lambda_0 + \frac{\rho}{2}\right) \Delta$ and variance $K \rho \Delta$ under $\mathbb{P}_0$.
Now choose $\varepsilon > 0$ such that $\underline{p} + \varepsilon < p^\ddagger$.
Write $\underline{\ell}$, $\ell_\varepsilon$, $\ell^\ddagger$ and $\bar{\ell}$ for the log odds ratios associated with $\underline{p}$, $\underline{p} + \varepsilon$, $p^\ddagger$ and $\bar{p}$, respectively.
Choose $\Delta_0 > 0$ such that
$$
\nu_0 = \min_{(\Delta,\ell)\in[0,\Delta_0]\times[\underline{\ell},\ell_\varepsilon]} \left[\ell^\ddagger - \ell + (N-1) \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta\right]^2 > 0.
$$
For all $p\in(\underline{p},\underline{p}+\varepsilon]$ and $\Delta \in (0,\Delta_0)$, the first part of Lemma \ref{lem:Gaussian-variable} now implies
\begin{eqnarray*}
\mathbb{P}_p\!\left[ p^{(N-1)}_\Delta > p^\ddagger \right]
& = & \mathbb{P}_p\!\left[ \ell^{(N-1)}_\Delta > \ell^\ddagger \right] \\
& \le & p \left\{ e^{-\lambda_1 \Delta} \mathbb{P}_1\!\left[ \left. \ell^{(N-1)}_\Delta > \ell^\ddagger \right| J_0^\Delta \right] + 1 - e^{-\lambda_1 \Delta} \right\} \\
& & \mbox{} + (1-p) \left\{ e^{-\lambda_0 \Delta} \mathbb{P}_0\!\left[ \left. \ell^{(N-1)}_\Delta > \ell^\ddagger \right| J_0^\Delta \right] + 1 - e^{-\lambda_0 \Delta} \right\} \\
& \le & p \left\{ \frac{e^{-\lambda_1 \Delta} (N-1) \rho \Delta}{\nu_0} + 1 - e^{-\lambda_1 \Delta} \right\} \\
& & \mbox{} + (1-p) \left\{ \frac{e^{-\lambda_0 \Delta} (N-1) \rho \Delta}{\nu_0} + 1 - e^{-\lambda_0 \Delta} \right\} \\
& \le & \frac{(N-1) \rho \Delta}{\nu_0} + 1 - e^{-\lambda_1 \Delta} \\
& \le & \left\{ \frac{(N-1) \rho}{\nu_0} + \lambda_1 \right\} \Delta.
\end{eqnarray*}
As $\underline{w}^\Delta\leq s+ (m_1-s) \mathbbm{1}_{(p^\ddagger,1]}$, moreover,
$$
{\cal E}^\Delta_{N-1}\underline{w}^\Delta(p) \leq s + (m_1-s) \, \mathbb{P}_p\!\!\left[ p^{(N-1)}_\Delta > p^\ddagger\right].
$$
So there exists $C_0 > 0$ such that
${\cal E}^\Delta_{N-1}\underline{w}^\Delta(p) \leq s + C_0 \Delta$
for all $p\in(\underline{p},\underline{p}+\varepsilon]$ and $\Delta \in (0,\Delta_0)$.
Next, define
$\nu_1 = \min_{\underline{p} \leq p \leq \bar{p}} p \, (1-p)$
and note that for $\underline{p} \leq p \leq \bar{p}$ (and thus for $\underline{\ell} \leq \ell \leq \bar{\ell}$),
$$
V_{N,\pr}(p) \geq s + \max\left\{0, V_{N,\pr}'(\underline{p}+) (p - \underline{p})\right\} \geq s + \max\left\{0, V_{N,\pr}'(\underline{p}+) \nu_1 (\ell - \underline{\ell})\right\}.
$$
By the second part of Lemma \ref{lem:Gaussian-variable}, there exists $\Delta_1 > 0$ such that $N \rho \Delta_1 < 1$ and
$$
\mathbb{P}_1\!\left[ \left. (N \rho \Delta)^{\frac{3}{4}} \le \ell^{(N)}_\Delta - \ell + N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta \le (N \rho \Delta)^{\frac{1}{4}} \right| J_0^\Delta \right]
\ge \frac{1}{2} - (N \rho \Delta)^{\frac{1}{4}},
$$
for arbitrary $\ell$ and all $\Delta \in (0,\Delta_1)$.
In particular,
\begin{eqnarray*}
\lefteqn{\mathbb{P}_p\!\left[ (N \rho \Delta)^{\frac{3}{4}} \le \ell^{(N)}_\Delta - \ell + N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta \le (N \rho \Delta)^{\frac{1}{4}} \right]} \\
& \ge & p \mathbb{P}_1\!\left[ (N \rho \Delta)^{\frac{3}{4}} \le \ell^{(N)}_\Delta - \ell + N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta \le (N \rho \Delta)^{\frac{1}{4}} \right] \\
& \ge & p e^{-\lambda_1 \Delta} \mathbb{P}_1\!\left[ \left. (N \rho \Delta)^{\frac{3}{4}} \le \ell^{(N)}_\Delta - \ell + N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta \le (N \rho \Delta)^{\frac{1}{4}} \right| J_0^\Delta \right] \\
& \ge & p e^{-\lambda_1 \Delta} \left(\frac{1}{2} - (N \rho \Delta)^{\frac{1}{4}}\right),
\end{eqnarray*}
for these $\Delta$.
Taking $\Delta_1$ smaller if necessary, we can also ensure that
$$
\underline{\ell}
< \ell - N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta + (N \rho \Delta)^{\frac{3}{4}}
< \ell - N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta + (N \rho \Delta)^{\frac{1}{4}}
< \bar{\ell},
$$
for all $\ell \in (\underline{\ell},\ell_\varepsilon]$ and all $\Delta \in (0,\Delta_1)$.
By Lemma \ref{lem:lower-bound-reward}, there exists $\Delta_2 \in (0,\Delta_1)$ such that $\overline{w}^\Delta \geq V_{N,\pr}$ for $\Delta \in (0,\Delta_2)$.
For such $\Delta$ and $p \in (\underline{p}, \underline{p}+\varepsilon]$, we now have
\begin{eqnarray*}
{\cal E}^\Delta_N \overline{w}^\Delta(p)
\!\! & \geq & \!\! s + p e^{-\lambda_1 \Delta} \left(\frac{1}{2} - (N \rho \Delta)^{\frac{1}{4}}\right) V_{N,\pr}'(\underline{p}+) \, \nu_1 \left[ \ell - N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta + (N \rho \Delta)^{\frac{3}{4}} - \underline{\ell} \right] \\
& \geq & \!\! s + \underline{p} (1 - \lambda_1 \Delta) \left(\frac{1}{2} - (N \rho \Delta)^{\frac{1}{4}}\right) V_{N,\pr}'(\underline{p}+) \, \nu_1 \left[ - N \left(\lambda_1 - \lambda_0 - \frac{\rho}{2}\right) \Delta + (N \rho \Delta)^{\frac{3}{4}} \right].
\end{eqnarray*}
This implies the existence of $\Delta_3 \in (0,\Delta_2)$ and $C_1 > 0$ such that
$$
{\cal E}^\Delta_N \overline{w}^\Delta(p) \geq s + C_1 \Delta^{\frac{3}{4}},
$$
for all $p\in(\underline{p},\underline{p}+\varepsilon]$ and $\Delta \in (0,\Delta_3)$.
For $p\in(\underline{p},\underline{p}+\varepsilon]$ and $\Delta \in (0,\min\{\Delta_0,\Delta_3\})$, finally,
\begin{eqnarray*}
\lefteqn{(1-\delta) m(p) + \delta {\cal E}^\Delta_{N}\overline{w}^\Delta(p) - \left[(1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p)\right]}\\
& \ge & (1-\delta) [m(\underline{p})-s] + \delta \left\{ C_1 \Delta^{\frac{3}{4}} - C_0 \Delta \right\} \\
& = & C_1 \Delta^{\frac{3}{4}} - \left\{r [s-m(\underline{p})] + C_0 \right\} \Delta + o(\Delta).
\end{eqnarray*}
As the term in $\Delta^{\frac{3}{4}}$ dominates as $\Delta$ becomes small, there exists
$\Delta_{(\underline{p},\underline{p}+\varepsilon]} \in (0,\min\{\Delta_0,\Delta_3\})$ such that this expression is positive for all $p \in (\underline{p},\underline{p}+\varepsilon]$ and $\Delta < \Delta_{(\underline{p},\underline{p}+\varepsilon]}$.
}
\begin{lem}\label{lem:k=1;pr+epsilon-pp}
For all $\varepsilon \in (0,p^\ddagger-\underline{p})$, there exists $\Delta_{(\underline{p}+\varepsilon,\bar{p}]} > 0$ such that
$$
(1-\delta) m(p) + \delta {\cal E}^\Delta_N\overline{w}^\Delta(p) \geq (1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p),
$$
for all $p \in (\underline{p}+\varepsilon,\bar{p}]$ and $\Delta < \Delta_{(\underline{p}+\varepsilon,\bar{p}]}$.
\end{lem}
\proof{
First, by Lemma \ref{lem:lower-bound-reward}, there exists $\Delta_0 > 0$ such that $\overline{w}^\Delta \geq V_{N,\pr}$ on the unit interval for all $\Delta < \Delta_0$.
Second, by Lemma \ref{lem:upper-bound-punishment}, there exist $\nu > 0$, $\eta > 0$ and $\Delta_1 \in (0, \Delta_0)$ such that
$V_{N,\pr}(p) - \underline{w}^\Delta(p) \geq \nu$ for all $p \in [\underline{p}+\frac{\varepsilon}{2}, \bar{p}+\eta]$ and $\Delta < \Delta_1$.
For these $p$ and $\Delta$, and by convexity of $V_{N,\pr}$, we then have
\begin{eqnarray*}
{\cal E}^\Delta_N \overline{w}^\Delta(p) - {\cal E}^\Delta_{N-1} \underline{w}^\Delta(p)
& \geq & {\cal E}^\Delta_N V_{N,\pr}(p) - {\cal E}^\Delta_{N-1} \underline{w}^\Delta(p) \\
& \geq & {\cal E}^\Delta_{N-1} V_{N,\pr}(p) - {\cal E}^\Delta_{N-1} \underline{w}^\Delta(p) \\
& \geq & \chi^\Delta(p) \nu + [1 - \chi^\Delta(p)] (s - m_1),
\end{eqnarray*}
where $\chi^\Delta(p)$ denotes the probability that the belief $p_{t+\Delta}$ lies in $[\underline{p}+\frac{\varepsilon}{2}, \bar{p}+\eta]$ given that $p_t = p$ and $N-1$ players use the risky arm for a length of time $\Delta$.
Next, there exists $\Delta_2 \in (0, \Delta_1)$ such that
$$
\chi^\Delta(p) \geq \frac{\frac{\nu}{2}+m_1-s}{\nu+m_1-s},
$$
for all $p \in (\underline{p}+\varepsilon,\bar{p}]$ and $\Delta < \Delta_2$.
For these $p$ and $\Delta$, we thus have
$$
(1-\delta) m(p) + \delta {\cal E}^\Delta_N\overline{w}^\Delta(p) - \left[ (1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p) \right]
\geq (1-\delta) [m(p)-s] + \delta \, \frac{\nu}{2}.
$$
Finally, there is a $\Delta_{(\underline{p}+\varepsilon,\bar{p}]} \in (0, \Delta_2)$ such that the right-hand side of this inequality is positive for all $p \in (\underline{p}+\varepsilon,\bar{p}]$ and $\Delta < \Delta_{(\underline{p}+\varepsilon,\bar{p}]}$.}
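As an aside, the algebra behind the bound on $\chi^\Delta(p)$ used in the preceding proof can be confirmed numerically. The following sketch uses purely illustrative values for $\nu$, $s$ and $m_1$ (with $m_1 > s$, so that $s - m_1 < 0$); none of these numbers come from the model.

```python
# Hypothetical values: nu > 0 and m1 > s, so s - m1 < 0.
nu, s, m1 = 0.4, 1.0, 1.5

# The threshold on chi used in the proof above.
chi_min = (nu / 2 + m1 - s) / (nu + m1 - s)
assert 0 < chi_min < 1

# For every chi at or above the threshold, the convex combination
# chi * nu + (1 - chi) * (s - m1) is at least nu / 2, as claimed.
for chi in [chi_min, (1 + chi_min) / 2, 1.0]:
    assert chi * nu + (1 - chi) * (s - m1) >= nu / 2 - 1e-12
```

The combination is linear and increasing in $\chi$ (its slope is $\nu + m_1 - s > 0$), so checking it at the threshold and above suffices.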
\begin{lem}\label{lem:k=1;pp-1}
There exists $\Delta_{(\bar{p},1]} > 0$ such that
$$
(1-\delta) m(p) + \delta {\cal E}^\Delta_N\overline{w}^\Delta(p) \geq (1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p),
$$
for all $p > \bar{p}$ and $\Delta < \Delta_{(\bar{p},1]}$.
\end{lem}
\proof{
By Lemmas \ref{lem:lower-bound-reward} and \ref{lem:upper-bound-punishment}, there exists $\Delta_{(\bar{p},1]} > 0$ such that $\overline{w}^\Delta \geq \underline{w}^\Delta$ for all $\Delta < \Delta_{(\bar{p},1]}$.
For such $\Delta$ and all $p > \bar{p}$, we thus have
$$
(1-\delta) m(p) + \delta {\cal E}^\Delta_N\overline{w}^\Delta(p) = \overline{w}^\Delta(p) \geq \underline{w}^\Delta(p) \geq (1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p),
$$
with the last inequality following from the functional equation for $\underline{w}^\Delta$.
}
\proofof{Proposition}{prop:SSE-Brownian}{
Given $\underline{p}$ and $\bar{p}$ as in \eref{eq:thresholds-Brownian}, choose $\varepsilon > 0$ and $\Delta_{(\underline{p},\underline{p}+\varepsilon]}$ as in Lemma \ref{lem:k=1;pr-pr+epsilon-Brownian},
and
$\Delta_{(p^\ddagger,\bar{p}]}$,
$\Delta_{(\underline{p}+\varepsilon,\bar{p}]}$
and
$\Delta_{(\bar{p},1]}$
as in Lemmas
\ref{lem:k=0;pddagger-pp},
\ref{lem:k=1;pr+epsilon-pp}
and
\ref{lem:k=1;pp-1}.
The two-state automaton is an SSE for all
$$\Delta < \min\left\{
\Delta_{(p^\ddagger,\bar{p}]},
\Delta_{(\underline{p},\underline{p}+\varepsilon]},
\Delta_{(\underline{p}+\varepsilon,\bar{p}]},
\Delta_{(\bar{p},1]}
\right\}.
$$
So the statement of the proposition holds with $p^\flat=p^\ddagger$ and $p^\sharp=\max\{\check{p},p^\diamond\}$.
}
\proofof{Proposition}{prop:limit-Brownian}{
Let $\varepsilon > 0$ be given.
First, the explicit representation for $V_{N,\underline{p}}$ in Section \ref{sec:result} and Lemma \ref{lem:flat-part} allow us to choose $\underline{p} \in (p_N^*,p^\flat)$ and $\bar{p} \in (p^\sharp,1)$ such that
$V_{N,\pr} > V_N^* - \varepsilon$
and
$\underline{w}^\Delta < V_1^* + \varepsilon$ for all $\Delta > 0$.
Second,
Lemmas \ref{lem:convergence-single-agent} and \ref{lem:lower-bound-reward}
and Proposition \ref{prop:SSE-Brownian}
imply the existence of a $\Delta^\dagger > 0$ such that for all $\Delta \in (0,\Delta^\dagger)$:
$W_1^\Delta > V_1^* - \varepsilon$,
$\overline{w}^\Delta \geq V_{N,\pr}$,
and $\overline{w}^\Delta$ and $\underline{w}^\Delta$ are SSE payoff functions of the game with period length $\Delta$.
Third, $\Wsup_{\rm PBE} \leq V_N^*$ for all $\Delta > 0$ because any discrete-time strategy profile is feasible for a planner who maximizes the players' average payoff in continuous time.
For $\Delta \in (0,\Delta^\dagger)$, we thus have
$$
V_N^* - \varepsilon < V_{N,\pr} \leq \overline{w}^\Delta \leq \Wsup_{\rm SSE} \leq \Wsup_{\rm PBE} \leq V_N^*,
$$
and
$$
V_1^* - \varepsilon < W_1^\Delta \leq \Winf_{\rm \, PBE} \leq \Winf_{\rm \, SSE} \leq \underline{w}^\Delta < V_1^* + \varepsilon,
$$
so that
$\|\Wsup_{\rm PBE}-V_N^*\|$,
$\|\Wsup_{\rm SSE}-V_N^*\|$,
$\|\Winf_{\rm \, PBE}-V_1^*\|$
and
$\|\Winf_{\rm \, SSE}-V_1^*\|$
are all smaller than $\varepsilon$, which was to be shown.
}
\subsection{Pure Poisson Learning (Propositions \ref{prop:thresh}--\ref{prop:limit-Poisson})}
\proofof{Proposition}{prop:thresh}{
For any given $\Delta > 0$, let
$\tilde{p}^\Delta$ be the infimum of the set of beliefs at which there is some PBE that gives a payoff $w_n(p) > s$ to at least one player.
Let $\tilde{p} = \liminf_{\Delta \rightarrow 0} \tilde{p}^\Delta$.
For any fixed $\varepsilon > 0$ and $\Delta > 0$, consider the problem of maximizing the players' average payoff subject to no use of the risky arm at beliefs $p \leq \tilde{p}-\varepsilon$.
Denote the corresponding value function by $\widetilde{W}^{\Delta,\varepsilon}$.
By the definition of $\tilde{p}$, there exists a $\tilde{\Delta}_\varepsilon >0$ such that for $\Delta \in (0,\tilde{\Delta}_\varepsilon)$, the function $\widetilde{W}^{\Delta,\varepsilon}$ provides an upper bound on the players' average payoff in any PBE, and so $\Wsup_{\rm PBE} \leq \widetilde{W}^{\Delta,\varepsilon}$.
The value function of the continuous-time version of this maximization problem is $V_{N,p_\varepsilon}$ with $p_\varepsilon = \max\{\tilde{p} - \varepsilon, p_N^*\}$.
As the discrete-time solution is also feasible in continuous time, we have $\widetilde{W}^{\Delta,\varepsilon} \leq V_{N,p_\varepsilon}$, and hence $\Wsup_{\rm PBE} \leq V_{N,p_\varepsilon}$ for $\Delta < \tilde{\Delta}_\varepsilon$.
Consider a sequence of such $\Delta$'s converging to 0 such that the corresponding beliefs $\tilde{p}^\Delta$ converge to $\tilde{p}$.
For each $\Delta$ in this sequence, select a belief $p^\Delta > \tilde{p}^\Delta$ with the following two properties: (i) starting from $p^\Delta$, a single failed experiment takes us below $\tilde{p}^\Delta$; (ii) given the initial belief $p^\Delta$, there exists a PBE for reaction lag $\Delta$ in which at least one player plays risky with positive probability in the first round.
Select such an equilibrium for each $\Delta$ in the sequence and let $L^\Delta$ be the number of players in this equilibrium who, at the initial belief $p^\Delta$, play risky with positive probability.
Let $L$ be an accumulation point of the sequence of $L^\Delta$'s.
After selecting a subsequence of $\Delta$'s, we can assume without loss of generality that player $n=1,\ldots,L$ plays risky with probability
$\pi_n^\Delta > 0$ at $p^\Delta$, while player $n=L+1,\ldots,N$ plays safe; we can further assume that $(\pi_n^\Delta)_{n=1}^L$ converges to a limit $(\pi_n)_{n=1}^L$ in $[0,1]^L$.
For player $n = 1,\ldots,L$ to play optimally at $p^\Delta$, it must be the case that
\begin{eqnarray*}
\lefteqn{
(1-\delta)\left[\pi_n^\Delta\lambda(p^\Delta)h+(1-\pi_n^\Delta)s\right]
+\delta \left\{
\mbox{Pr}^\Delta(\emptyset)w_{n,\emptyset}^\Delta
+ \sum_{K=1}^L\sum_{\vert I\vert =K}\mbox{Pr}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K}^\Delta(p^\Delta) w_{n,I,J}^\Delta
\right\}
} \\
& \geq &
(1-\delta)s
+\delta \left\{
\mbox{Pr}_{-n}^\Delta(\emptyset)w_{n,\emptyset}^\Delta
+ \sum_{K=1}^{L-1}\sum_{\vert I\vert =K, \, n\not\in I}\mbox{Pr}_{-n}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K}^\Delta(p^\Delta) w_{n,I,J}^\Delta
\right\}, \hspace{5em}
\end{eqnarray*}
where we write $\mbox{Pr}^\Delta(I)$ for the probability that the set of players experimenting is $I \subseteq \{1,\ldots,L\}$,
$\mbox{Pr}_{-n}^\Delta(I)$ for the probability that among the $L-1$ players in $\{1,\ldots,L\}\setminus\{n\}$ the set of players experimenting is $I$,
and $w_{n,I,J}^\Delta$ for the conditional expectation of player $n$'s continuation payoff given that exactly the players in $I$ were experimenting and had $J$ successes ($w_{n,\emptyset}^\Delta$ is player $n$'s continuation payoff if no one was experimenting).
As $\mbox{Pr}^\Delta(\emptyset) = (1-\pi_n^\Delta) \mbox{Pr}_{-n}^\Delta(\emptyset) \leq \mbox{Pr}_{-n}^\Delta(\emptyset)$, the inequality continues to hold when we replace $w_{n,\emptyset}^\Delta$ by its lower bound $s$.
After subtracting $(1-\delta)s$ from both sides, we then have
\begin{eqnarray*}
\lefteqn{
(1-\delta)\pi_n^\Delta \left[\lambda(p^\Delta)h-s\right]
+\delta \left\{
\mbox{Pr}^\Delta(\emptyset) s
+ \sum_{K=1}^L\sum_{\vert I\vert =K}\mbox{Pr}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K}^\Delta(p^\Delta) w_{n,I,J}^\Delta
\right\}
}\\
& \geq &
\delta \left\{
\mbox{Pr}_{-n}^\Delta(\emptyset) s
+ \sum_{K=1}^{L-1}\sum_{\vert I\vert =K, \, n\not\in I}\mbox{Pr}_{-n}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K}^\Delta(p^\Delta) w_{n,I,J}^\Delta
\right\}. \hspace{5em}
\end{eqnarray*}
Summing up these inequalities over $n=1,\ldots,L$ and writing $\bar{\pi}^\Delta=\frac{1}{L}\sum_{n=1}^L \pi_n^\Delta$ yields
\begin{eqnarray*}
\lefteqn{
(1-\delta)L\bar{\pi}^\Delta \left[\lambda(p^\Delta)h-s\right]
+\delta \left\{
\mbox{Pr}^\Delta(\emptyset) L s
+ \sum_{K=1}^L\sum_{\vert I\vert =K}\mbox{Pr}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K}^\Delta(p^\Delta) \sum_{n=1}^L w_{n,I,J}^\Delta
\right\}
}\\
& \geq &
\delta \left\{
\sum_{n=1}^L \mbox{Pr}_{-n}^\Delta(\emptyset) s
+ \sum_{n=1}^L \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I}\mbox{Pr}_{-n}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K}^\Delta(p^\Delta) w_{n,I,J}^\Delta
\right\}. \hspace{5em}
\end{eqnarray*}
By construction, $w_{n,I,0}^\Delta = s$ whenever $I \neq \emptyset$.
For $\vert I \vert = K > 0$ and $J > 0$, moreover, we have $w_{n,I,J}^\Delta \geq W_1^\Delta(B_{J,K}^\Delta(p^\Delta))$ for \emph{all} players $n=1,\ldots,N$, and hence
\begin{eqnarray*}
\sum_{n=1}^L w_{n,I,J}^\Delta
& \leq & N \Wsup_{\rm PBE}(B_{J,K}^\Delta(p^\Delta)) - (N-L) W_1^\Delta(B_{J,K}^\Delta(p^\Delta)) \\
& \leq & N V_{N,p_\varepsilon}(B_{J,K}^\Delta(p^\Delta)) - (N-L) W_1^\Delta(B_{J,K}^\Delta(p^\Delta)).
\end{eqnarray*}
So, for the preceding inequality to hold, it is necessary that
\begin{eqnarray*}
\lefteqn{
(1-\delta)L\bar{\pi}^\Delta \left[\lambda(p^\Delta)h-s\right]
+\delta \left\{ \astrut{4.5}
\mbox{Pr}^\Delta(\emptyset) L s + \sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) \Lambda_{0,K}^\Delta(p^\Delta) L s
\right.}\\
& & \left.
+ \sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) \sum_{J=1}^\infty \Lambda_{J,K}^\Delta(p^\Delta) \left[ N V_{N,p_\varepsilon}(B_{J,K}^\Delta(p^\Delta)) - (N-L) W_1^\Delta(B_{J,K}^\Delta(p^\Delta)) \right]
\right\} \\
& \geq & \astrut{6}
\delta \left\{ \astrut{4.5}
\sum_{n=1}^L \mbox{Pr}_{-n}^\Delta(\emptyset) s
+ \sum_{n=1}^L \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) \Lambda_{0,K}^\Delta(p^\Delta) s
\right. \\
& & \left. \hspace{1.75em}
+ \sum_{n=1}^L \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) \sum_{J=1}^\infty \Lambda_{J,K}^\Delta(p^\Delta) W_1^\Delta(B_{J,K}^\Delta(p^\Delta))
\right\}.
\end{eqnarray*}
As
$$
\mbox{Pr}^\Delta(\emptyset) + \sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) = 1
\quad \mbox{and} \quad
\sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) K = L \bar{\pi}^\Delta,
$$
we have the first-order expansions
\begin{eqnarray*}
\lefteqn{
\mbox{Pr}^\Delta(\emptyset) + \sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) \Lambda_{0,K}^\Delta(p^\Delta)
} \\
& = & \mbox{Pr}^\Delta(\emptyset) + \sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) \left( 1 - K \lambda(p^\Delta) \Delta \right) + o(\Delta) \\
& = & \astrut{3} 1 - L \bar{\pi}^\Delta \lambda(p^\Delta) \Delta + o(\Delta),
\end{eqnarray*}
and
$$
\sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) \Lambda_{1,K}^\Delta(p^\Delta)
= \sum_{K=1}^L \sum_{\vert I\vert =K} \mbox{Pr}^\Delta(I) K \lambda(p^\Delta) \Delta + o(\Delta)
= L \bar{\pi}^\Delta \lambda(p^\Delta) \Delta + o(\Delta),
$$
so, by uniform convergence $W_1^\Delta \to V_1^*$ (Lemma \ref{lem:convergence-single-agent}), the left-hand side of the last inequality expands as
$$
Ls + L \left\{ \astrut{3} r \bar{\pi} \left[\lambda(\tilde{p})h-s\right] - r s
+ \bar{\pi} \lambda(\tilde{p}) \left[N V_{N,p_\varepsilon}(j(\tilde{p})) - (N\!-\!L) V_1^*(j(\tilde{p})) - L s\right] \right\} \Delta
+ o(\Delta),
$$
with $\bar{\pi} = \lim_{\Delta \rightarrow 0} \bar{\pi}^\Delta$.
In the same way, the identities
$$\mbox{Pr}_{-n}^\Delta(\emptyset) + \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) = 1
\quad \mbox{and} \quad
\sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) K = L \bar{\pi}^\Delta - \pi_n^\Delta
$$
imply
$$
\sum_{n=1}^L \mbox{Pr}_{-n}^\Delta(\emptyset) + \sum_{n=1}^L \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) \Lambda_{0,K}^\Delta(p^\Delta)
= \astrut{3} L - L (L-1) \bar{\pi}^\Delta \lambda(p^\Delta) \Delta + o(\Delta),
$$
and
$$
\sum_{n=1}^L \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) \Lambda_{1,K}^\Delta(p^\Delta)
= L (L-1) \bar{\pi}^\Delta \lambda(p^\Delta) \Delta + o(\Delta),
$$
and so the right-hand side of the inequality expands as
$$
Ls + L \left\{ \astrut{2.5} - r s + (L-1) \bar{\pi} \lambda(\tilde{p}) \left[ V_1^*(j(\tilde{p})) - s \right] \right\} \Delta
+ o(\Delta).
$$
Comparing terms of order $\Delta$, dividing by $L$ and letting $\varepsilon \rightarrow 0$, we obtain
$$
\bar{\pi} \left\{ \astrut{2.5} \lambda(\tilde{p}) \left[ \astrut{2} N V_{N,\breve{p}}(j(\tilde{p})) - (N\!-\!1) V_1^*(j(\tilde{p})) - s \right] - r c(\tilde{p}) \right\} \geq 0.
$$
By Lemma \ref{lem:thresh}, this means $\tilde{p} \geq \hat{p}$ whenever $\bar{\pi} > 0$.
For the case that $\bar{\pi}=0$, we write the optimality condition for player $n \in \{1,\ldots, L\}$ as
\begin{eqnarray*}
\lefteqn{
(1-\delta) \lambda(p^\Delta)h
+\delta \left\{\sum_{K=0}^{L-1}\sum_{\vert I\vert =K,\, n\not\in I}\mbox{Pr}_{-n}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K+1}^\Delta(p^\Delta) w_{n,I\dot{\cup}\{n\},J}^\Delta\right\}
} \\
& \geq &
(1-\delta)s
+\delta \left\{
\mbox{Pr}_{-n}^\Delta(\emptyset)w_{n,\emptyset}^\Delta
+ \sum_{K=1}^{L-1}\sum_{\vert I\vert =K, \, n\not\in I}\mbox{Pr}_{-n}^\Delta(I)\sum_{J=0}^\infty \Lambda_{J,K}^\Delta(p^\Delta) w_{n,I,J}^\Delta
\right\}. \hspace{5em}
\end{eqnarray*}
As above, $w_{n,\emptyset}^\Delta \geq s$, and $w_{n,I,0}^\Delta = s$ whenever $I \neq \emptyset$.
For $\vert I \vert = K > 0$ and $J > 0$, moreover, we have
$w_{n,I,J}^\Delta \geq W_1^\Delta(B_{J,K}^\Delta(p^\Delta))$,
$w_{n,I\dot{\cup}\{n\},J}^\Delta \geq W_1^\Delta(B_{J,K+1}^\Delta(p^\Delta))$ and
$w_{n,I\dot{\cup}\{n\},J}^\Delta \leq N V_{N,p_\varepsilon}(B_{J,K+1}^\Delta(p^\Delta)) - (N-1) W_1^\Delta(B_{J,K+1}^\Delta(p^\Delta))$.
So, for the optimality condition to hold, it is necessary that
\begin{eqnarray*}
\lefteqn{
(1-\delta) \lambda(p^\Delta)h
+\delta \left\{ \astrut{4.5}
\sum_{K=0}^{L-1} \sum_{\vert I\vert =K,\, n\not\in I}\mbox{Pr}_{-n}^\Delta(I) \Lambda_{0,K+1}^\Delta(p^\Delta) s
\right.}\\
\lefteqn{
\left.
+ \sum_{K=0}^{L-1} \sum_{\vert I\vert =K,\, n\not\in I}\mbox{Pr}_{-n}^\Delta(I) \sum_{J=1}^\infty \Lambda_{J,K+1}^\Delta(p^\Delta) \left[ N V_{N,p_\varepsilon}(B_{J,K+1}^\Delta(p^\Delta)) - (N\!-\!1) W_1^\Delta(B_{J,K+1}^\Delta(p^\Delta)) \right]
\right\}
} \\
& \geq & \astrut{6}
(1-\delta)s
+\delta \left\{ \astrut{4.5}
\mbox{Pr}_{-n}^\Delta(\emptyset) s
+ \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) \Lambda_{0,K}^\Delta(p^\Delta) s
\right. \\
& & \left. \hspace{6em}
+ \sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I)
\sum_{J=1}^\infty \Lambda_{J,K}^\Delta(p^\Delta) W_1^\Delta(B_{J,K}^\Delta(p^\Delta))
\right\}. \hspace{5em}
\end{eqnarray*}
Now,
$$
\sum_{K=1}^{L-1} \sum_{\vert I\vert =K, \, n\not\in I} \mbox{Pr}_{-n}^\Delta(I) K = L \bar{\pi}^\Delta - \pi_n^\Delta \rightarrow 0,
$$
as $\Delta$ vanishes.
Therefore, the left-hand side of the above inequality expands as
$$
s + \left\{ \astrut{3} r \left[\lambda(\tilde{p})h-s\right]
+ \lambda(\tilde{p}) \left[N V_{N,p_\varepsilon}(j(\tilde{p})) - (N\!-\!1) V_1^*(j(\tilde{p})) - s\right] \right\} \Delta
+ o(\Delta),
$$
and the right-hand side as $s + o(\Delta)$.
Comparing terms of order $\Delta$, letting $\varepsilon \rightarrow 0$ and using Lemma \ref{lem:thresh} once more, we again obtain $\tilde{p} \geq \hat{p}$.
The statement about the range of experimentation now follows immediately from the fact that
for $\Delta < \tilde{\Delta}_\varepsilon$, we have $\Wsup_{\rm PBE} \leq V_{N,p_\varepsilon}$,
and hence $\Wsup_{\rm PBE} = V_{N,p_\varepsilon} = s$ on $[0,\tilde{p} - \varepsilon] \supseteq [0,\hat{p} - \varepsilon]$.
The statement about the supremum of equilibrium payoffs follows from
the inequality $\Wsup_{\rm PBE} \leq V_{N,p_\varepsilon}$ for $\Delta < \tilde{\Delta}_\varepsilon$,
convergence $V_{N,p_\varepsilon} \rightarrow V_{N,\tilde{p}}$ as $\varepsilon \rightarrow 0$,
and the inequality $V_{N,\tilde{p}} \leq V_{N,\phat}$.
}
We now turn to the proof of Proposition \ref{prop:SSE-Poisson}.
The only difference from the case with a Brownian component is the proof of incentive compatibility immediately to the right of $\underline{p}$.
In view of Lemmas
\ref{lem:thresh}, \ref{lem:upper-bound-punishment} and \ref{lem:flat-part},
we consider $\underline{p}$ and $\bar{p}$ such that
\begin{equation} \label{eq:thresholds-Poisson}
\hat{p} < \underline{p} < p^\ddagger < p_1^* < p^m < \max\{p^\diamond,\check{p}\} < \bar{p} < 1.
\end{equation}
\begin{lem}\label{lem:k=1;pr-pr+epsilon-Poisson}
Let $\rho = 0$ and $\lambda_0 > 0$.
There exists $p^\sharp \in (\max\{p^\diamond,\check{p}\},1)$ such that for all $\bar{p} \in (p^\sharp,1)$, there exist $\varepsilon \in (0,p^\ddagger-\underline{p})$ and $\Delta_{(\underline{p},\underline{p}+\varepsilon]} > 0$ such that
$$
(1-\delta) m(p) + \delta {\cal E}^\Delta_{N}\overline{w}^\Delta(p) \geq (1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p),
$$
for all $p \in (\underline{p},\underline{p}+\varepsilon]$ and $\Delta < \Delta_{(\underline{p},\underline{p}+\varepsilon]}$.
\end{lem}
\proof{
By Lemma \ref{lem:lower-bound-reward}, there exists $\Delta_0 > 0$ such that $\overline{w}^\Delta \geq V_{N,\pr}$ for $\Delta \in (0,\Delta_0)$.
By Lemma \ref{lem:thresh},
\[
\lambda(p)[NV_{N,p}(j(p))-(N-1)V_1^*(j(p))-s]-rc(p) > 0
\]
on $[\underline{p},1]$.
As $V_{N,p}(j(p)) \leq V_{N,\pr}(j(p))$ for $p \geq \underline{p}$, this implies
\[
\lambda(p)[NV_{N,\pr}(j(p))-(N-1)V_1^*(j(p))-s]-rc(p) > 0
\]
on $[\underline{p},1]$.
By Lemma \ref{lem:convergence-punish-pbar}, there exists a belief $p^\sharp > \max\{p^\diamond,\check{p}\}$ such that for all $\bar{p} > p^\sharp$,
$$
\lambda(p)[NV_{N,\pr}(j(p))-(N-1)V_{1,\pp}(j(p))-s]-rc(p) > 0
$$
on $[\underline{p},1]$.
Fix a $\bar{p} \in (p^\sharp,1)$, define
$$
\nu = \min_{p \in [\underline{p},1]} \left\{ \lambda(p)[NV_{N,\pr}(j(p))-(N-1)V_{1,\pp}(j(p))-s]-rc(p) \right\} > 0,
$$
and choose $\varepsilon > 0$ such that $\underline{p} + \varepsilon < p^\ddagger$ and
$$
(N \lambda(\underline{p} + \varepsilon) + r)\left[V_{N,\pr}(\underline{p} + \varepsilon) - s\right] < \nu/3.
$$
In the remainder of the proof, we write $p^K_J$ for the posterior belief starting from $p$ when $K$ players use the risky arm and $J$ lump-sums arrive within the length of time $\Delta$.
For $p \in (\underline{p},\underline{p}+\varepsilon]$ and $\Delta \in (0,\Delta_0)$,
\begin{eqnarray*}
\lefteqn{(1-\delta) m(p) + \delta {\cal E}^\Delta_N \overline{w}^\Delta(p)} \\
& \geq & (1-\delta) m(p) + \delta {\cal E}^\Delta_N V_{N,\pr}(p) \\
& = & r\Delta\, m(p)
+ (1-r\Delta) \left\{
N\lambda(p)\Delta\, V_{N,\pr}(p^N_1)
+ (1-N\lambda(p)\Delta)\, V_{N,\pr}(p^N_0)
\right\}
+O(\Delta^2) \\
& = & V_{N,\pr}(p^N_0)
+ \left\{
r m(p)
+ N \lambda(p) V_{N,\pr}(p^N_1)
- (N \lambda(p) + r) V_{N,\pr}(p^N_0)
\right\} \Delta
+ O(\Delta^2),
\end{eqnarray*}
while
\begin{eqnarray*}
\lefteqn{(1-\delta) s + \delta {\cal E}^\Delta_{N-1} \underline{w}^\Delta(p)} \\
& = & r\Delta\, s
+ (1-r\Delta) \left\{
(N-1)\lambda(p)\Delta\, \underline{w}^\Delta(p^{N-1}_1)
+ [1-(N-1)\lambda(p)\Delta]\, \underline{w}^\Delta(p^{N-1}_0)
\right\}
+O(\Delta^2) \\
& = & \underline{w}^\Delta(p^{N-1}_0)
+ \left\{
rs
+ (N-1) \lambda(p) \underline{w}^\Delta(p^{N-1}_1)
- [(N-1) \lambda(p) + r] \underline{w}^\Delta(p^{N-1}_0)
\right\} \Delta + O(\Delta^2).
\end{eqnarray*}
As $V_{N,\pr}(p^N_0) \geq s = \underline{w}^\Delta(p^{N-1}_0)$,
the difference
$(1-\delta) m(p) + \delta {\cal E}^\Delta_{N}\overline{w}^\Delta(p) - \left[ (1-\delta) s + \delta {\cal E}^\Delta_{N-1}\underline{w}^\Delta(p) \right]$
is no smaller than $\Delta$ times
$$
\lambda(p) \left[ N V_{N,\pr}(p^N_1) - (N-1) \underline{w}^\Delta(p^{N-1}_1) - s \right]
- r c(p)
- (N \lambda(p) + r) \left[ V_{N,\pr}(p^N_0) - s \right],
$$
plus terms of order $\Delta^2$ and higher.
Let $\xi=\frac{\nu}{6(N-1)\lambda_1}$.
By Lemma \ref{lem:convergence-punish-Delta} as well as Lipschitz continuity of $V_{N,\pr}$ and $V_{1,\pp}$, there exists $\Delta_1 \in (0,\Delta_0)$ such that
$\|\underline{w}^\Delta - V_{1,\pp}\|$,
$\max_{\underline{p} \leq p \leq p^\ddagger} |V_{N,\pr}(p^N_1) - V_{N,\pr}(j(p))|$
and
$\max_{\underline{p} \leq p \leq p^\ddagger} |V_{1,\pp}(p^{N-1}_1) - V_{1,\pp}(j(p))|$
are all smaller than $\xi$ when $\Delta < \Delta_1$.
For such $\Delta$ and $p \in (\underline{p},p^\ddagger]$, we thus have
$V_{N,\pr}(p^N_1) > V_{N,\pr}(j(p)) - \xi$
and
$\underline{w}^\Delta(p^{N-1}_1) < V_{1,\pp}(j(p)) + 2 \xi$,
so that the expression displayed above is larger than
$\nu - 2 (N-1) \lambda(p) \xi - \nu/3 > \nu/3$.
This implies existence of a $\Delta_{(\underline{p},\underline{p}+\varepsilon]} \in (0,\Delta_1)$ as in the statement of the lemma.
}
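As a numerical sanity check of the final estimate in the preceding proof: with $\xi = \frac{\nu}{6(N-1)\lambda_1}$, we have $2(N-1)\lambda(p)\xi \leq \nu/3$ whenever $\lambda(p) \leq \lambda_1$, so the expression is at least $\nu - \nu/3 - \nu/3 = \nu/3$. The values below are purely illustrative.

```python
# Illustrative (hypothetical) parameter values.
nu, N, lam1 = 0.6, 4, 2.0
xi = nu / (6 * (N - 1) * lam1)

# For any lambda(p) <= lambda1, the bound nu - 2*(N-1)*lambda(p)*xi - nu/3
# is at least nu/3, as used at the end of the proof.
for lam_p in [0.5, 1.0, 1.5, lam1]:
    bound = nu - 2 * (N - 1) * lam_p * xi - nu / 3
    assert bound >= nu / 3 - 1e-12
```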
\proofof{Proposition}{prop:SSE-Poisson}{
Given $\underline{p}$ as in \eref{eq:thresholds-Poisson}, take $p^\sharp$ as in Lemma \ref{lem:k=1;pr-pr+epsilon-Poisson} and fix $\bar{p} > p^\sharp$.
Choose $\varepsilon > 0$ and $\Delta_{(\underline{p},\underline{p}+\varepsilon]}$ as in Lemma \ref{lem:k=1;pr-pr+epsilon-Poisson},
and
$\Delta_{(p^\ddagger,\bar{p}]}$,
$\Delta_{(\underline{p}+\varepsilon,\bar{p}]}$
and
$\Delta_{(\bar{p},1]}$
as in Lemmas
\ref{lem:k=0;pddagger-pp},
\ref{lem:k=1;pr+epsilon-pp}
and
\ref{lem:k=1;pp-1}.
The two-state automaton is an SSE for all
$$\Delta < \min\left\{
\Delta_{(p^\ddagger,\bar{p}]},
\Delta_{(\underline{p},\underline{p}+\varepsilon]},
\Delta_{(\underline{p}+\varepsilon,\bar{p}]},
\Delta_{(\bar{p},1]}
\right\}.
$$
So the statement of the proposition holds with $p^\flat=p^\ddagger$ and the chosen $p^\sharp$.
}
For the proof of Proposition \ref{prop:SSE-exponential}, we modify notation slightly, writing $\Lambda$ for the probability that, conditional on $\theta=1$, a player has at least one success on his own risky arm in any given round, and $g$ for the corresponding expected payoff per unit of time.\footnote{\textit{I.e.}, $\Lambda = 1- e^{-\lambda_1\Delta}$ and $g = m_1$.}
Consider an SSE played at a given prior $p$, with associated payoff $W$.
If $K \ge 1$ players unsuccessfully choose the risky arm, the belief jumps down to a posterior denoted $p_K$.
Note that an SSE allows the continuation play to depend on the identity of these players.
Taking the expectation over all possible combinations of $K$ players who experiment, however,
we can associate with each posterior $p_K$, $K \ge 1$, an expected continuation payoff $W_K$.
If $K=0$, so that no player experiments, the belief does not evolve, but there is no reason that the continuation strategies (and so the payoff) should remain the same.
We denote the corresponding payoff by $W_0$.
In addition, we write $\pi \in [0,1]$ for the probability with which each player experiments at $p$, and
$q_K$ for the probability that at least one player has a success, given $p$, when $K$ of them experiment.
The players' common payoff must then satisfy the following optimality equation:
\begin{eqnarray*}
W
& = &\max \left\{(1-\delta)p_0 g+\delta \sum_{K=0}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}[q_{K+1} g+(1-q_{K+1})W_{K+1}]\right.,\\
&& \hspace{-2em} \left.(1-\delta)s+\delta \sum_{K=1}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}(q_{K} g+(1-q_{K})W_K)+\delta(1-\pi)^{N-1}W_0\right\}.
\end{eqnarray*}
The first term corresponds to the payoff from playing risky, the second from playing safe.
As it turns out, it is more convenient to work with
odds ratios
$$\omega=\frac{p}{1-p} \quad \mbox{and} \quad \omega_K=\frac{p_K}{1-p_K},$$
which we also refer to as ``beliefs.'' Note that
\[
p_K=\frac{p\,(1-\Lambda)^{\!K}}{p\,(1-\Lambda)^{\!K}+1-p}
\]
implies that $\omega_K=(1-\Lambda)^{\!K} \omega.$ Note also that
\[
1-q_K=p\,(1-\Lambda)^{\!K}+1-p=(1-p)(1+\omega_K),\quad q_K=p-(1-p)\omega_K=(1-p)(\omega-\omega_K).
\]
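These identities are elementary Bayesian updating in odds-ratio form; a quick numerical check, with illustrative values of $p$ and $\Lambda$, confirms them:

```python
# Illustrative (hypothetical) values: prior p and per-round success prob. Lambda.
p, Lam = 0.6, 0.25
omega = p / (1 - p)

for K in range(6):
    # Posterior after K unsuccessful experiments (Bayes' rule).
    p_K = p * (1 - Lam) ** K / (p * (1 - Lam) ** K + 1 - p)
    omega_K = p_K / (1 - p_K)
    assert abs(omega_K - (1 - Lam) ** K * omega) < 1e-12

    # Probability that at least one of K experimenting players has a success.
    q_K = 1 - (p * (1 - Lam) ** K + 1 - p)
    assert abs((1 - q_K) - (1 - p) * (1 + omega_K)) < 1e-12
    assert abs(q_K - (1 - p) * (omega - omega_K)) < 1e-12
```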
We
define
$$
m=\frac{s}{g-s},
\quad
\upsilon=\frac{W-s}{(1-p)(g-s)},
\quad
\upsilon_K=\frac{W_K-s}{(1-p_K)(g-s)}.
$$
Note that $\upsilon \ge 0$ in any equilibrium, as $s$ is a lower bound on the value.
Simple computations now give
\begin{eqnarray*}
\upsilon &=&\max \left\{ \omega-(1-\delta)m+\delta \sum_{K=0}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}(\upsilon_{K+1}-\omega_{K+1})\right.,\\
&&\hspace{2.75em}\left.\delta \omega+\delta \sum_{K=0}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}(\upsilon_K-\omega_K)\right\}.
\end{eqnarray*}
It is also useful to introduce $w=\upsilon-\omega$ and $w_K=\upsilon_K-\omega_K$.
We then obtain
\begin{eqnarray}\label{webern}
w&=&\max \left\{-(1-\delta)m+\delta \sum_{K=0}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}w_{K+1}\right.,\notag \\
&&\hspace{2.75em}\left.-(1-\delta)\omega+\delta\sum_{K=0}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}w_K\right\}.
\end{eqnarray}
We define
\[
\omega^*=\frac{m}{1+\frac{\delta}{1-\delta}\Lambda}\,.
\]
This is the odds ratio corresponding to the single-agent cutoff $p_1^\Delta$, \textit{i.e.}, $\omega^*=p_1^\Delta/(1-p_1^\Delta)$. Note that $p_1^\Delta>p_1^*$ for $\Delta>0$.
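The relation $p_1^\Delta > p_1^*$ can be seen from the formula for $\omega^*$: assuming the standard discretization $\delta = e^{-r\Delta}$ (and $\Lambda = 1-e^{-\lambda_1\Delta}$ as above), the factor $\frac{\delta}{1-\delta}\Lambda$ lies strictly below its continuous-time limit $\lambda_1/r$ for every $\Delta > 0$. A numerical sketch with hypothetical parameter values:

```python
import math

# Hypothetical parameter values; delta = exp(-r*Delta), Lambda = 1 - exp(-lambda1*Delta).
r, lam1, m = 1.0, 2.0, 1.5
omega_star_cont = m / (1 + lam1 / r)   # continuous-time cutoff in odds-ratio form

def omega_star(Delta):
    delta = math.exp(-r * Delta)
    Lam = 1 - math.exp(-lam1 * Delta)
    return m / (1 + delta / (1 - delta) * Lam)

# omega*(Delta) exceeds the continuous-time value (i.e., p_1^Delta > p_1^*) ...
for Delta in [1.0, 0.1, 0.01]:
    assert omega_star(Delta) > omega_star_cont
# ... and converges to it as Delta -> 0.
assert abs(omega_star(1e-6) - omega_star_cont) < 1e-4
```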
As stated in Section \ref{sec:construction-Poisson}, no PBE involves experimentation below $p_1^\Delta$ or, in terms of odds ratios, $\omega^*$.
For all beliefs $\omega < \omega^*$, therefore, any equilibrium has $w=-\omega$, or $\upsilon=0$, for each player.
\vskip8truept
\proofof{Proposition}{prop:SSE-exponential}{
Following terminology from repeated games, we say that we can \emph{enforce} action $\pi\in\{0,1\}$ at belief $\omega$ if we can construct an SSE for the prior belief $\omega$ in which players prefer to choose $\pi$ in the first round rather than deviate unilaterally.
Our first step is to derive sufficient conditions for enforcement of $\pi\in\{0,1\}$. The conditions to enforce these actions are intertwined, and must be derived simultaneously.
\emph{Enforcing $\pi=0$ at $\omega$.} To enforce $\pi=0$ at $\omega$, it suffices that one round of using the safe arm followed by the best equilibrium payoff at $\omega$ exceeds the payoff from one round of using the risky arm followed by the resulting continuation payoff at belief $\omega_1$ (as only the deviating player will have experimented). See below for the precise condition.
\emph{Enforcing $\pi=1$ at $\omega$.} If a player deviates to $\pi=0$, we jump to $w_{N-1}$ rather than $w_N$ in case all experiments fail.
Assume that at $\omega_{N-1}$ we can enforce $\pi=0$. As explained above,
this implies that at $\omega_{N-1}$, a player's continuation payoff can be pushed down to what he would get by unilaterally deviating to experimentation, which is at most $-(1-\delta)m+\delta w_N$ where $w_N$ is the highest possible continuation payoff at belief $\omega_N$.
To enforce $\pi=1$ at $\omega$, it then suffices that
\[
w=-(1-\delta)m+\delta w_N \ge -(1-\delta)\omega+\delta(-(1-\delta)m+\delta w_N),
\]
with the same continuation payoff $w_N$ on the left-hand side of the inequality.
The inequality simplifies to
\[
\delta w_N \ge (1-\delta)m-\omega;
\]
by the formula for $w$, this is equivalent to $w \ge -\omega$, \textit{i.e.}, $\upsilon \ge 0$.
Given that
\[
\upsilon
=\omega-(1-\delta)m+\delta(\upsilon_N-\omega_N)
=(1-\delta(1-\Lambda)^N) \omega-(1-\delta)m+\delta \upsilon_N,
\]
to show that $\upsilon \ge 0$, it thus suffices that
\[
\omega \ge \frac{m}{1+\frac{\delta}{1-\delta}(1-(1-\Lambda)^N)}=\tilde{\omega},
\]
and that $\upsilon_N \ge 0$, which is necessarily the case if $\upsilon_N$ is an equilibrium payoff.
Note that $(1-\Lambda)^N\tilde{\omega} \le \omega^*$, so that $\omega_N \ge \omega^*$ implies $\omega \ge \tilde{\omega}$.
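The claim $(1-\Lambda)^N\tilde{\omega} \le \omega^*$ reduces, after clearing denominators, to $(1-\Lambda)^N(1+\delta\Lambda) \le 1$, which holds because $(1-\Lambda)^N \le 1-\Lambda$ and $(1-\Lambda)(1+\delta\Lambda) \le 1$. A quick numerical check over a grid of hypothetical parameter values:

```python
# Grid of hypothetical parameter values; m cancels from the comparison,
# so it is fixed at 1 without loss.
for delta in [0.5, 0.9, 0.99]:
    for Lam in [0.01, 0.1, 0.5, 0.9]:
        for N in [1, 2, 3, 10]:
            m = 1.0
            omega_star = m / (1 + delta / (1 - delta) * Lam)
            omega_tilde = m / (1 + delta / (1 - delta) * (1 - (1 - Lam) ** N))
            assert (1 - Lam) ** N * omega_tilde <= omega_star + 1e-12
```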
In summary, to enforce $\pi=1$ at $\omega$, it suffices that $\omega_N \ge \omega^*$ and $\pi=0$ be enforceable at $\omega_{N-1}$.
\emph{Enforcing $\pi=0$ at $\omega$ (continued).}
Suppose we can enforce it at $\omega_1,\omega_2,\ldots,\omega_{N-1}$, and that $\omega_N \ge \omega^*$. Note that $\pi=1$ is then enforceable at $\omega$ from our previous argument, given our hypothesis that $\pi=0$ is enforceable at $\omega_{N-1}$.
It then suffices that
\[
-(1-\delta)\omega+\delta (-(1-\delta)m+\delta w_N) \ge -(1-\delta^N)m+\delta^N w_N,
\]
where again it suffices that this holds for the highest value of $w_N$. To understand this expression, consider a player who deviates by experimenting. Then the following period the belief is down one step, and if $\pi=0$ is enforceable at $\omega_1$, it means that his continuation payoff there can be chosen to be no larger than what he can secure at that point by deviating and experimenting again, etc. The right-hand side is then obtained as the payoff from $N$ consecutive unilateral deviations to experimentation (in fact, we have picked an upper bound, as the continuation payoff after this string of deviations need not be the maximum $w_N$). The left-hand side is the payoff from playing safe one period before setting $\pi=1$ and getting the maximum payoff $w_N$, a continuation strategy that is sequentially rational given that $\pi=1$ is enforceable at $\omega$ by our hypothesis that $\pi=0$ is enforceable at $\omega_{N-1}$.
Plugging in the definition of $\upsilon_N$, this inequality simplifies to
\[
(\delta^2-\delta^N)\upsilon_N \ge (\delta^2-\delta^N)(\omega_N-m)+(1-\delta)(\omega-m),
\]
which is always satisfied for beliefs $\omega\leq m$, \textit{i.e.}, below the myopic cutoff $\omega^m$ (which coincides with the normalized payoff $m$).
To summarize, if $\pi=0$ can be enforced at the $N-1$ consecutive beliefs $\omega_1,\ldots,\omega_{N-1}$, with $\omega_N \ge \omega^*$ and $\omega\le \omega^m$,
then both $\pi=0$ and $\pi=1$ can be enforced at $\omega$.
By induction, this implies that if we can find an interval of beliefs $[\omega_N,\omega)$ with $\omega_N \ge \omega^*$ for which $\pi=0$ can be enforced, then $\pi=0,1$ can be enforced at all beliefs $\omega' \in (\omega,\omega^m)$.
Our second step is to establish that such an interval of beliefs exists. This step itself consists of three parts. First, we derive a ``simple'' equilibrium, namely a symmetric Markov equilibrium. Second, building on this equilibrium, we show that we can enforce $\pi=1$ at sufficiently (finitely) many consecutive beliefs; third, we show that this can be used to enforce $\pi=0$ as well.
It will be useful to distinguish beliefs according to whether they belong to the intervals $[\omega^*,(1+\lambda_1 \Delta)\omega^*)$, $[(1+\lambda_1 \Delta)\omega^*,(1+2\lambda_1 \Delta)\omega^*)$, \ldots
For $\tau \in \N$, let $I_{\tau+1}=[(1+\tau \lambda_1 \Delta)\omega^*,(1+(\tau+1)\lambda_1 \Delta)\omega^*)$. For fixed $\Delta$, every $\omega \ge \omega^*$ can be uniquely mapped into a pair $(x,\tau) \in [0,1) \times \N$ such that $\omega=(1+\lambda_1(x+\tau)\Delta)\omega^*$, and we alternatively denote beliefs by such a pair. Note also that, for small enough $\Delta>0$, one unsuccessful experiment takes a belief that belongs to the interval $I_{\tau+1}$ to (within $O(\Delta^2)$ of) the interval $I_{\tau}$. (Recall that $\Lambda=\lambda_1 \Delta +O(\Delta^2)$.)
Let us start by deriving a symmetric Markov equilibrium. Because the equilibrium is Markovian, $\upsilon_0=\upsilon$ in our notation; that is, the continuation payoff when nobody experiments is equal to the payoff itself.
Rewriting the equations, using the risky arm gives the payoff\footnote{To pull out the terms involving the belief $\omega$ from the sum appearing in the definition of $\upsilon$, use the fact that $ \sum_{K=0}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}(1-\Lambda)^K=(1-\pi \Lambda)^N/(1-\pi \Lambda)$.}
\[
\upsilon=\omega-(1-\delta)m-\delta (1-\Lambda)(1-\pi \Lambda)^{N-1}\omega+\delta \sum_{K=0}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}\upsilon_{K+1},\]
while using the safe arm yields
\[
\upsilon=\delta (1-(1-\pi \Lambda)^{N-1})\omega+\delta(1-\pi)^{N-1}\upsilon+\delta\sum_{K=1}^{N-1}\binom{N-1}{K}\pi^K(1-\pi)^{N-1-K}\upsilon_K.
\]
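Purely as a numerical sanity check, outside the formal argument, the binomial identity used in the footnote above can be verified directly; the values of $N$, $\pi$ and $\Lambda$ below are illustrative only:

```python
from math import comb

# Check the footnote identity:
#   sum_{K=0}^{N-1} C(N-1,K) pi^K (1-pi)^(N-1-K) (1-Lam)^K
#       = (1 - pi*Lam)^N / (1 - pi*Lam)
# (both sides equal (1 - pi*Lam)^(N-1) by the binomial theorem).
N, pi, Lam = 5, 0.3, 0.2  # illustrative values, not from the model
lhs = sum(comb(N - 1, K) * pi**K * (1 - pi)**(N - 1 - K) * (1 - Lam)**K
          for K in range(N))
rhs = (1 - pi * Lam)**N / (1 - pi * Lam)
assert abs(lhs - rhs) < 1e-12
```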
In the Markov equilibrium we derive, players are indifferent between both actions, and so their payoffs are the same. Given any belief $\omega$ or corresponding pair $(\tau,x)$, we conjecture an equilibrium in which $\pi=a(\tau,x) \Delta^2+O(\Delta^3)$, $\upsilon=b(\tau,x) \Delta^2+O(\Delta^3)$, for some functions $a,b$ of the pair $(\tau,x)$ only. Using the fact that $\Lambda=\lambda_1 \Delta +O(\Delta^2),1-\delta=r \Delta +O(\Delta^2)$, we replace this in the two payoff expressions, and take Taylor expansions to get, respectively,
\[
0=\left( rb(\tau,x)+\frac{\lambda_1 m}{\lambda_1+r}(N-1)a(\tau,x) \right)\Delta^3+O(\Delta^4),
\]
and
\[
0=\left[b(\tau,x)-r m \lambda_1(\tau+x)\right]\Delta^2+O(\Delta^3).
\]
We then solve for $a(\tau,x)$, $b(\tau,x)$, to get
\[
\pi_-=\frac{r(\lambda_1+r)(x+\tau)}{N-1}\Delta^2+O(\Delta^3),
\]
with corresponding value
\[
\upsilon_-=\lambda_1 m r(x+\tau)\Delta^2+O(\Delta^3).
\]
This being an induction on $K$, it must be verified that the expansion indeed holds at the lowest interval, $I_1$, and this verification is immediate.\footnote{
Note that this solution is actually continuous at the interval endpoints.
It is not the only solution to these equations; as mentioned in the text, there are intervals of beliefs for which multiple symmetric Markov equilibria exist in discrete time. It is easy to construct such equilibria in which $\pi=1$ and the initial belief is in (a subinterval of) $I_1$.}
We now turn to the second step and argue that we can find $N-1$
consecutive beliefs at which $\pi=1$ can be enforced. We then verify that incentives can be provided to do so, assuming that $\upsilon_-$ are the continuation values used by the players whether a player deviates or not from $\pi=1$. Assume that $N-1$ players choose $\pi=1$. Consider the remaining one. His incentive constraint to choose $\pi=1$ is
\begin{equation}\label{zelenka}
-(1-\delta)m+\delta \upsilon_N-\delta (1-\Lambda)^N \omega \ge -(1-\delta)\omega-\delta(1-\Lambda)^{N-1}\omega+\delta \upsilon_{N-1},
\end{equation}
where $\upsilon_N,\upsilon_{N-1}$ are given by $\upsilon_-$ at $\omega_N$, $\omega_{N-1}$. The interpretation of both sides is as before: the payoff from abiding by the candidate equilibrium action vs.\ the payoff from deviating.
Fixing $\omega$ and the corresponding pair $(\tau,x)$, and assuming that $\tau \ge N-1$,\footnote{Considering $\tau<N-1$ would lead to $\upsilon_N=0$, so that the explicit formula for $\upsilon_-$ would not apply at $\omega_N$. Computations are then easier, and the result would hold as well.} we insert our formula for $\upsilon_-$, as well as $\Lambda=\lambda_1 \Delta+O(\Delta^2),1-\delta=r \Delta+O(\Delta^2)$. This gives
\[
\tau \ge (N-1)\left(2+\frac{\lambda_1}{\lambda_1+r}\right)-x.
\]
Hence, given any integer $N' \in \mathbb{N}$, $N'>3(N-1)$, there exists $\bar{\Delta}>0$ such that for every $\Delta\in (0,\bar{\Delta})$, $\pi=1$ is an equilibrium action at all beliefs $\omega=\omega^*(1+\tau\Delta)$, for $\tau =3(N-1),\ldots,N'$ (we pick the factor 3 because $\lambda_1/(\lambda_1+r) <1$).
Fix $N-1$ consecutive beliefs such that they all belong to intervals $I_\tau$ with $\tau \ge 3(N-1)$ (say, $\tau \le {4N}$), and fix $\Delta$ for which the previous result holds, \textit{i.e.}, $\pi=1$ can be enforced at all these beliefs. We now turn to the third step, showing how $\pi=0$ can be enforced as well for these beliefs.
Suppose that players choose $\pi=0$. As a continuation payoff, we can use the payoff from playing $\pi=1$ in the following round, as we have seen that this action can be enforced at such a belief. This gives
\[
\delta \omega+\delta(-(1-\delta)m-\delta(1-\Lambda)^N\omega+\delta \upsilon_-(\omega_N)).
\]
(Note that the discounted continuation payoff is the left-hand side of \eqref{zelenka}.)
By deviating from $\pi=0$, a player gets at most
\[
\omega+\left(-(1-\delta)m-\delta(1-\Lambda)\omega+\delta\upsilon_-(\omega_1)\right).
\]
Again inserting our formula for $\upsilon_-$, this reduces to
\[
\frac{mr(N-1)\lambda_1}{\lambda_1+r}\Delta \ge 0.
\]
Hence we can also enforce $\pi=0$ at all these beliefs. We can thus apply our induction argument: there exists $\bar{\Delta}>0$ such that, for all $\Delta \in (0,\bar{\Delta})$, both $\pi=0,1$ can be enforced at all beliefs $\omega \in ( \omega^*(1+4N\Delta),\omega^m)$.
Note that we have not established that, for such a belief $\omega$, $\pi=1$ is enforced with a continuation in which $\pi=1$ is being played in the next round (at belief $\omega_N>\omega^*(1+4N\Delta)$). However, if $\pi=1$ can be enforced at belief $\omega$, it can be enforced when the continuation payoff at $\omega_N$ is highest possible; in turn, this means that, as $\pi=1$ can be enforced at $\omega_N$, this continuation payoff is at least as large as the payoff from playing $\pi=1$ at $\omega_N$ as well. By induction, this implies that the highest equilibrium payoff at $\omega$ is at least as large as the one obtained by playing $\pi=1$ at all intermediate beliefs in $(\omega^*(1+4N\Delta),\omega)$ (followed by, say, the worst equilibrium payoff once beliefs below this range are reached).
Similarly, we have not argued that, at belief $\omega$, $\pi=0$ is enforced by a continuation equilibrium in which, if a player deviates and experiments unilaterally, his continuation payoff at $\omega_1$ is what he gets if he keeps on experimenting alone. However, because $\pi=0$ can be enforced at $\omega_1$, the lowest equilibrium payoff that can be used after a unilateral deviation at $\omega$ must be at least as low as what the player can get at $\omega_1$ from deviating unilaterally to risky again. By induction, this implies that the lowest equilibrium payoff at belief $\omega$ is at least as low as the one obtained if a player experiments alone for all beliefs in the range $(\omega^*(1+4N\Delta),\omega)$ (followed by, say, the highest equilibrium payoff once beliefs below this interval are reached).
Note that, as $\Delta \rightarrow 0$, these bounds converge (uniformly in $\Delta$) to the cooperative solution (restricted to no experimentation at and below $\omega=\omega^*$) and the single-agent payoff, respectively, which was to be shown. (This is immediate given that these values correspond to precisely the cooperative payoff (with $N$ or $1$ player) for a cutoff that is within a distance of order $\Delta$ of the cutoff $\omega^*$, with a continuation payoff at that cutoff which is itself within $\Delta$ times a constant of the safe payoff.)
This also immediately implies (as for the case $\lambda_0>0$) that for fixed $\omega>\omega^m$, both $\pi=0,1$ can be enforced at all beliefs in $[\omega^m,\omega]$ for all $\Delta<\bar{\Delta}$, for some $\bar{\Delta}>0$: the gain from a deviation is of order $\Delta$, yet the difference in continuation payoffs (selecting as a continuation payoff a value close to the maximum if no player unilaterally defects, and close to the minimum if one does) is bounded away from 0, even as $\Delta \rightarrow 0$.\footnote{This follows by contradiction. Suppose that for some $\Delta \in (0,\bar{\Delta})$, there is $\hat{\omega} \in [\omega^m,\omega]$ for which either $\pi=0$ or 1 cannot be enforced. Consider the infimum over such beliefs. Continuation payoffs can then be picked as desired, which is a contradiction as it shows that at this presumed infimum belief $\pi=0,1$ can in fact be enforced.} Hence, all conclusions extend: fix $\omega \in (\omega^*,\infty)$; for every $\varepsilon>0$, there exists $\bar{\Delta}>0$ such that for all $\Delta<\bar{\Delta}$, the best SSE payoff starting at belief $\omega$ is at least as much as the payoff from all players choosing $\pi=1$ at all beliefs in $(\omega^*+\varepsilon,\omega)$ (using $s$ as a lower bound on the continuation once the belief $\omega^*+\varepsilon$ is reached); and the worst SSE payoff starting at belief $\omega$ is no more than the payoff from a player whose opponents choose $\pi=1$ if, and only if, $\omega \in (\omega^*,\omega^*+\varepsilon)$, and 0 otherwise.
The first part of the proposition follows immediately, picking arbitrary $\underline{p} \in (p_1^*,p^m)$ and $\bar{p} \in (p^m,1)$. The second part follows from the fact that (i) $p_1^*<p_1^\Delta$, as noted, and (ii) for any $p \in [p_1^\Delta,\underline{p}]$, player $i$'s payoff in any equilibrium is weakly lower than his best-reply payoff against $\kappa(p)=1$ for all $p \in [p_1^*,\underline{p}]$, as easily follows from \eqref{webern}, the optimality equation for $w$.\footnote{Consider the possibly random sequence of beliefs visited in an equilibrium. At each belief, a flow loss of either $-(1-\delta)m$ or $-(1-\delta)\omega$ is incurred. Note that the first loss is independent of the number of other players' experimenting, while the second is necessarily lower when at each round all other players experiment.}
}
\proofof{Proposition}{prop:limit-Poisson}{
For $\lambda_0 > 0$, the proof is the same as that of Proposition \ref{prop:limit-Brownian}, except for the fact that it deals with $V_{N,\phat}$ rather than $V_N^*$ and relies on Propositions \ref{prop:thresh}--\ref{prop:SSE-Poisson} rather than Proposition \ref{prop:SSE-Brownian}.
For $\lambda_0 = 0$, the proof of Proposition \ref{prop:SSE-exponential} establishes that there exists a natural number $M$ such that, given $\underline{p}$ as stated, we can take $\bar{\Delta}$ to be $(\underline{p}-p_1^*)/M$.
Equivalently, $p_1^*+M\bar{\Delta}=\underline{p}$.
Hence, Proposition \ref{prop:SSE-exponential} can be restated as saying that, for some $\bar{\Delta}>0$, and all $\Delta \in (0,\bar{\Delta})$, there exists $p_\Delta \in (p_1^*,p_1^*+M\Delta)$ such that the two conclusions of the proposition hold with $\underline{p}=p_\Delta$.
Fixing the prior, let $\overline{w}^\Delta,\underline{w}^\Delta$ denote the payoffs in the first and second SSE from the proposition, respectively.\footnote{
Hence, to be precise, these payoffs are only defined on those beliefs that can be reached given the prior and the equilibrium strategies.}
Given that $\underline{p} \rightarrow p_1^*$ and $\overline{w}^\Delta(p) \rightarrow s,\underline{w}^\Delta(p) \rightarrow s$ for all $p \in (p_1^*,p_\Delta)$ as $\Delta \to 0$, it follows that we can pick $\Delta^\dagger \in (0,\bar{\Delta})$ such that for all $\Delta \in (0,\Delta^\dagger)$,
$\Wsup_{\rm PBE} \leq V_{N,\phat} + \varepsilon$,
$\overline{w}^\Delta \geq V_{N,\pr}-\varepsilon$,
$\|W_1^\Delta-V_1^*\| < \varepsilon$
and $\|\underline{w}^\Delta-V_{1,\pp}\| < \frac{\varepsilon}{2}$.
The obvious inequalities follow as in the proof of Proposition \ref{prop:limit-Brownian} with the subtraction of an additional $\varepsilon$ from the left-hand side of the first one; and the conclusion follows as before, using $2\varepsilon$ as an upper bound.
}
\AppendixOut
\setstretch{1.10}
\newcommand{\AJ}[6]{\noindent \hangindent = 3em \hangafter 1 {#1}
(#2): ``#3," {\it #4}, {\bf #5}, #6.}
\newcommand{\ajo}[6]{\AJ{\sc #1}{#2}{#3}{#4}{#5}{#6}}
\newcommand{\ajt}[7]{\AJ{{\sc #1} and {\sc #2}}{#3}{#4}{#5}{#6}{#7}}
\newcommand{\AB}[6]{\noindent \hangindent = 3em \hangafter 1 {#1}
(#2): ``#3," in {\it #4}, #5. #6.}
\newcommand{\abo}[7]{\AB{\sc #1}{#2}{#3}{#4}{#5}{#6}{#7}}
\newcommand{\abt}[8]{\AB{{\sc #1} and {\sc #2}}{#3}{#4}{#5}{#6}{#7}{#8}}
\newcommand{\BK}[4]{\noindent \hangindent = 3em \hangafter 1 {#1}
(#2): {\it #3}. #4.}
\newcommand{\bko}[4]{\BK{\sc #1}{#2}{#3}{#4}}
\newcommand{\bkt}[5]{\BK{{\sc #1} and {\sc #2}}{#3}{#4}{#5}}
\newcommand{\WP}[4]{\noindent \hangindent = 3em \hangafter 1 {#1}
(#2): ``#3," #4.}
\newcommand{\wpo}[4]{\WP{\sc #1}{#2}{#3}{#4}}
\newcommand{\wpt}[5]{\WP{{\sc #1} and {\sc #2}}{#3}{#4}{#5}}
\newpage
\section*{References}
\ajo{Abreu, D.} {1986}
{Extremal Equilibria of Oligopolistic Supergames}
{Journal of Economic Theory} {39} {195--225}
\ajo{Abreu, D., D.\ Pearce and E.\ Stacchetti} {1986}
{Optimal Cartel Equilibria with Imperfect Monitoring}
{Journal of Economic Theory} {39} {251--269}
\ajo{Abreu, D., D.\ Pearce and E.\ Stacchetti} {1993}
{Renegotiation and Symmetry in Repeated Games}
{Journal of Economic Theory} {60} {217--240}
\ajt{Bergin, J.}{W.B.\ MacLeod}{1993}
{Continuous Time Repeated Games}
{International Economic Review}{34}{21--37}
\ajt{Besanko, D.}{J.\ Wu} {2013}
{The Impact of Market Structure and Learning on the Tradeoff between R\&D Competition and Cooperation}
{Journal of Industrial Economics} {61} {166--201}
\ajt{Besanko, D., Tong, J.}{J.\ Wu} {2018}
{Subsidizing Research Programs with `If' and `When' Uncertainty in the Face of Severe Informational Constraints}
{RAND Journal of Economics} {49} {285--310}
\ajt{Biais, B., T.\ Mariotti, G.\ Plantin}{J.-C.\ Rochet} {2007}
{Dynamic Security Design: Convergence to Continuous Time and Asset Pricing Implications}
{Review of Economic Studies} {74} {345--390}
\ajt{Bolton, P.}{C.\ Harris} {1999}
{Strategic Experimentation}
{Econometrica} {67} {349--374}
\abt{Bolton, P.}{C.\ Harris} {2000}
{Strategic Experimentation: the Undiscounted Case}
{Incentives, Organizations and Public Economics -- Papers in Honour of Sir James Mirrlees}
{P.J.\ Hammond and G.D.\ Myles (Eds.)}
{Oxford: Oxford University Press, pp.\ 53--68}
\ajt{Bonatti, A.}{J. H\"{o}rner} {2011}
{Collaborating}
{American Economic Review} {101} {632--663}
\ajt{Cohen, A.}{E.\ Solan} {2013}
{Bandit Problems with L\'{e}vy Payoff Processes}
{Mathematics of Operations Research} {38} {92--107}
\ajt{Cronshaw, M.B.}{D.G.\ Luenberger} {1994}
{Strongly Symmetric Subgame Perfect Equilibria in Infinitely Repeated Games with Perfect Monitoring and Discounting}
{Games and Economic Behavior} {6} {220--237}
\bkt{Dixit, A.K.}{R.S.\ Pindyck} {1994}
{Investment under Uncertainty}
{Princeton: Princeton University Press}
\wpo{Dong, M.}{2018}
{Strategic Experimentation with Asymmetric Information}
{Working paper, Pennsylvania State University}
\ajo{Dutta, P.K.} {1995}
{A Folk Theorem for Stochastic Games}
{Journal of Economic Theory} {66} {1--32}
\ajo{Fudenberg, D. and D.K.\ Levine} {2009}
{Repeated Games with Frequent Signals}
{Quarterly Journal of Economics} {124} {233--265}
\ajo{Fudenberg, D., D.K.\ Levine and S.\ Takahashi} {2007}
{Perfect Public Equilibrium when Players Are Patient}
{Games and Economic Behavior} {61} {27--49}
\ajt{Heidhues, P., S.\ Rady}{P.\ Strack} {2015}
{Strategic Experimentation with Private Payoffs}
{Journal of Economic Theory} {159} {531--551}
\wpt{Hoelzemann, J.}{N.\ Klein}{2020}
{Bandits in the Lab}
{Working paper, University of Toronto and Universit\'{e} de Montr\'{e}al}
\wpt{H\"{o}rner, J., N.\ Klein}{S.\ Rady}{2014}
{Strongly Symmetric Equilibria in Bandit Games}
{Cowles Foundation Discussion Paper No.\ 1956}
\ajo{H\"{o}rner, J., T.\ Sugaya, S.\ Takahashi and N.\ Vieille} {2011}
{Recursive Methods in Discounted Stochastic Games: An Algorithm for $\delta \rightarrow 1$ and a Folk Theorem}
{Econometrica} {79} {1277--1318}
\ajt{H\"{o}rner, J.}{L.\ Samuelson} {2013}
{Incentives for Experimenting Agents}
{RAND Journal of Economics} {44} {632--663}
\bkt{Johnson, N.L., S.\ Kotz}{N.\ Balakrishnan} {1994}
{Continuous Univariate Distributions: Volume 1 {\rm (second edition)}}
{New York: Wiley}
\ajt{Keller, G.}{S.\ Rady} {2010}
{Strategic Experimentation with Poisson Bandits}
{Theoretical Economics} {5} {275--311}
\ajt{Keller, G.}{S.\ Rady} {2015}
{Breakdowns}
{Theoretical Economics} {10} {175--202}
\ajt{Keller, G.}{S.\ Rady} {2020}
{Undiscounted Bandit Games}
{Games and Economic Behavior} {124} {43--61}
\ajt{Keller, G., S.\ Rady}{M.\ Cripps} {2005}
{Strategic Experimentation with Exponential Bandits}
{Econometrica} {73} {39--68}
\ajt{Klein, N.}{S.\ Rady} {2011}
{Negatively Correlated Bandits}
{Review of Economic Studies} {78} {693--732}
\bkt{Mertens, J.F., Sorin, S.}{S.\ Zamir}{2015}
{Repeated Games {\rm (Econometric Society Monographs, Vol.~55)}}{Cambridge: Cambridge University Press}
\ajo{M\"{u}ller, H.M.} {2000}
{Asymptotic Efficiency in Dynamic Principal-Agent Problems}
{Journal of Economic Theory} {39} {251--269}
\bkt{Peskir, G.}{A.\ Shiryaev} {2006}
{Optimal Stopping and Free-Boundary Problems}
{Basel: Birkh\"{a}user Verlag}
\ajt{Rosenberg, D., E.\ Solan}{N.\ Vieille} {2007}
{Social Learning in One-Arm Bandit Problems}
{Econometrica} {75} {1591--1611}
\ajt{Sadzik, T.}{E.\ Stacchetti} {2015}
{Agency Models with Frequent Actions}
{Econometrica} {83} {193--237}
\ajt{Simon, L.K.}{M.B.\ Stinchcombe}{1995}
{Equilibrium Refinement for Infinite Normal-Form Games}
{Econometrica}{63}{1421--1443}
\ajo{Thomas, C.D.}{2021}
{Strategic Experimentation with Congestion}
{American Economic Journal: Microeconomics} {13} {1--82}
\end{document}
\section{Introduction}
As the feature space gradually expands, it becomes more and more difficult to make sense of it \citep{R1}, and irrelevant or misleading features degrade the performance of the classifier. Feature selection addresses these problems: it improves the performance of the classifier by removing irrelevant and misleading features from the original feature set \citep{guyon2003an}.
Feature selection reduces the dimensionality of the feature space, which lowers the computational cost, improves the performance of the classifier, and restrains overfitting. Feature selection methods are designed to select features that allow the classifier to achieve better performance while reducing the demand on computing resources \citep{R3}.
The filter method and the wrapper method are the two main feature selection approaches \citep{R4}. The former ranks individual features and selects the best part of them as the final feature subset; mathematical measures of the relationship between each feature and the label assign each feature an evaluation value by which the features are ranked \citep{R5}. The wrapper method instead ranks feature subsets: candidate subsets are generated by the method, an evaluation value obtained from the classifier is used to rank them, and the best subset is taken as the final one \citep{R6}.
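The two paradigms can be sketched in a few lines of Python; the feature scores and the stand-in classifier below are purely illustrative:

```python
# Filter method: score each feature independently, keep the top-ranked ones.
scores = {"f1": 0.80, "f2": 0.05, "f3": 0.42}   # e.g., MI values (invented)
filter_subset = sorted(scores, key=scores.get, reverse=True)[:2]
print(filter_subset)  # ['f1', 'f3']

# Wrapper method: evaluate whole candidate subsets with a classifier and
# keep the best-scoring subset. A trained classifier is mocked here by a
# score that rewards informative features but penalizes subset size.
def subset_score(subset):
    return sum(scores[f] for f in subset) - 0.1 * len(subset)

candidates = [["f1"], ["f1", "f3"], ["f1", "f2", "f3"]]
wrapper_subset = max(candidates, key=subset_score)
print(wrapper_subset)  # ['f1', 'f3']
```

In real wrapper methods the mock score is replaced by the accuracy of a classifier trained on each candidate subset, which is what makes them expensive.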
Metaheuristic algorithms are designed for optimization problems \citep{R7}. A deterministic algorithm can obtain the optimal solution to an optimization problem, while a metaheuristic algorithm relies on an intuitive or empirical construction that yields a feasible solution at an acceptable cost; the degree to which that feasible solution deviates from the optimal one may not be predictable in advance \citep{R8}.
Feature selection is essentially an NP-hard problem, which can be tackled by metaheuristic algorithms \citep{Yusta2009Different}. Metaheuristic algorithms rely on a combination of local and global search to find an optimal solution in a large solution space. The search proceeds iteratively toward the optimal solution, and the parameter settings of the search process have a significant impact: both an advanced algorithm and suitable parameters are needed to reach a favorable solution.
As mentioned above, a sophisticated design is necessary for a metaheuristic algorithm to balance local and global search. This design trades relative complexity for the effectiveness of the algorithm, and each task requires finding its own suitable parameters. Swarm intelligence algorithms are gradually becoming the main implementation of metaheuristic algorithms \citep{R9,Yusta2009Different,R91}.
The main contributions of this paper are summarized as follows:
\begin{enumerate}[1.]
\item Use one-hot encoding to process categorical features and perform feature selection directly from the processed high-dimensional feature space.
\item Propose a novel approach to exploit the outcomes of filter method.
\item Propose a concise method for feature selection.
\end{enumerate}
The rest of this paper is organized as follows: \autoref{S2} reviews the related works. \autoref{S3} introduces two powerful tools used in this paper. \autoref{S4} details the proposed feature selection method. \autoref{S5} discusses the experiments and results. \autoref{S6} concludes this paper.
\section{Related works}
\label{S2}
The features in a dataset are not independent. In this context, it is essential for feature selection to consider the interaction between features, which \cite{R6} explored. In that paper, the authors investigated the strengths and weaknesses of the wrapper method and provided some improved designs; the wrapper method is designed to find the optimal feature subset. Performance was evaluated on several datasets, and the experimental results indicate that the proposed algorithm achieves an improvement in accuracy.
As feature selection moved from the edge of the stage to its center, \cite{guyon2003an} provided an introduction to the field, discussing the definition of the objective function, feature construction, feature ranking, multivariate feature selection, methods for evaluating feature validity, and efficient search methods for better feature subsets. Datasets in many areas have thousands of features, which makes feature selection especially useful for two purposes: improving the performance of the classifier and speeding it up.
\cite{R10} developed a novel feature selection method to overcome the limitations of MIFS \citep{R11}, MIFS-U \citep{R12}, and mRMR \citep{R13}. The model, known as NMIFS (normalized mutual information feature selection), is designed to optimize the measure of the relationship between features and labels. NMIFS is a filter method independent of any machine learning model. The purpose of normalization is to reduce the bias of mutual information toward multivalued features and restrict its value to the interval [0,1]. NMIFS does not require user-defined parameters such as $\beta$ in MIFS and MIFS-U, and it performs better than MIFS, MIFS-U, and mRMR on multiple artificial datasets and benchmark problems. In addition, the authors combine NMIFS with a genetic algorithm and propose GAMIFS, which uses NMIFS to initialize a better starting point and as part of a mutation operator. During mutation, features with high mutual information values have a higher probability of being selected, which speeds up the convergence of the genetic algorithm.
Rough sets were proven feasible for feature selection \citep{R16}, and particle swarm optimization is an excellent metaheuristic algorithm \citep{R17}. \cite{R18} proposed an algorithm based on both: a number of particles fly through the feature space, interacting with one another, to find the best feature subset. The proposed algorithm was evaluated on UCI datasets \citep{R19}, and the results were compared with a GA-based approach and other deterministic rough set reduction algorithms, showing that the proposed algorithm produces better performance.
An intrusion detection system (IDS) is an important security device, and a robust one needs to be both high-performing and fast. \cite{R20} proposed a mutual information-based feature selection method for IDS that uses LSSVM to construct the classifier. The KDD Cup 99 dataset was used for evaluation, and the results indicate that the proposed method achieves a high level of accuracy, especially for remote-to-login (R2L) and user-to-root (U2R) attacks.
The salp swarm algorithm (SSA) is an excellent optimization algorithm designed for continuous problems \citep{R21}. \cite{R22} came up with a novel approach that combines SSA with chaos theory (CSAA). Simulation results showed that chaos theory significantly improves the convergence speed of the algorithm. The experiments reveal the potential of CSAA for feature selection: it can select fewer features while achieving higher classification accuracy.
Water wave optimization (WWO) is a nature-inspired metaheuristic algorithm developed by \citep{R23}. WWO simulates the refraction, propagation, and fragmentation of water waves to find the global optimum of an optimization problem. A new feature selection algorithm combining rough set theory (RST) with a binary version of WWO was proposed by \citep{R24}. Several datasets were used to evaluate the proposed algorithm, and the results were compared to several advanced metaheuristic algorithms, demonstrating its efficiency for feature selection.
Finally, the pigeon-inspired optimizer (PIO) is an advanced bionic algorithm proposed by \citep{R25}. \cite{R26} proposed two binary schemes that adapt PIO to the feature selection problem and applied it to intrusion detection systems. Three popular datasets, KDD Cup 99, NSL-KDD, and UNSW-NB15, were used to test the algorithm. The proposed algorithm surpasses six state-of-the-art feature selection algorithms in terms of F-score and other metrics. Furthermore, Cosine\_PIO selects 7 features from KDD Cup 99, 5 from NSL-KDD, and 5 from UNSW-NB15; achieving excellent performance with so few features is remarkable.
In the field of feature selection, classifiers aside, wrapper methods are gaining popularity with the development of computer hardware, and various metaheuristic and hybrid feature selection methods have been proposed. However, these algorithms share some shortcomings: they are hard to understand and learn, their parameters are difficult to determine, and their computational cost is large.
To solve the above problems, this paper proposes a concise method for feature selection via normalized frequencies (\textbf{NFFS}). The method performs feature selection at a much lower computational cost while maintaining high performance. Its greatest strength is its very simple logic, which makes it easy to apply.
\section{One-hot encoding and mutual information}
\label{S3}
This section presents two tools used in this paper. Processing categorical features is an indispensable preprocessing step in machine learning, which is performed by One-hot encoding in this paper. Mutual information is an excellent filter method that can capture both the linear and nonlinear dependencies between feature and label.
\subsection{One-hot encoding}
One-hot encoding encodes categorical features as a one-hot numeric array \citep{R241}, projecting them into a high-dimensional feature space. It allows distances among features to be computed more reasonably, which is important for many classifiers.
One-hot encoding uses $N$ status registers to encode $N$ states. Each register has its own register bit, and only one of them is valid at any given time. One-hot encoding is easier to understand with an example: suppose there is a dataset of a household item with seven samples. Length, Width, and Color describe each sample, but Color is not a numerical feature, so it needs to be transformed. Ordinal encoding is a common approach for categorical features \citep{R19}, by which a feature is converted to ordinal integers. The contrast between one-hot encoding and ordinal encoding is illustrated in \autoref{F1}.
\begin{figure}
\includegraphics[width=1\textwidth]{F1}
\caption{Comparison of one-hot encoding and ordinal encoding.}
\label{F1}
\end{figure}
As \autoref{F1} shows, one-hot encoding transforms the feature `Color' into four features (Color\_A, Color\_B, Color\_C, Color\_D). The non-numeric characters A, B, C, and D indicate different colors. From the middle table in \autoref{F1}, it can be noted that each sample takes the value 1 in one of the four color features and 0 in the other three. As can also be seen from \autoref{F1}, one-hot encoding expands the dimensionality of the dataset compared to ordinal encoding. We likewise use \_ as a separator when processing the datasets later in this paper.
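A minimal sketch of the transformation in \autoref{F1}; the color values are illustrative:

```python
# One-hot encode the categorical feature "Color" with "_" as separator.
colors = ["A", "B", "C", "D", "A", "B", "C"]      # seven samples
categories = sorted(set(colors))                   # the N = 4 states
new_names = ["Color_" + c for c in categories]     # Color_A ... Color_D
# Each sample becomes a 0/1 vector with exactly one register bit set:
onehot = [[1 if c == cat else 0 for cat in categories] for c in colors]
assert all(sum(row) == 1 for row in onehot)        # one valid bit per sample

# Ordinal encoding, for comparison, maps each color to a single integer:
ordinal = [categories.index(c) for c in colors]    # [0, 1, 2, 3, 0, 1, 2]
```

In practice a library routine such as pandas' `get_dummies` (with its `prefix_sep` argument) or scikit-learn's `OneHotEncoder` would be used instead of hand-rolled loops.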
\subsection{Mutual information}
Mutual information can be applied to evaluate the dependency between random variables \citep{R251}. Let $X$ (feature) and $Y$ (label) be two discrete random variables. The mutual information (\textbf{MI} value) between $X$ and $Y$ can be calculated by \autoref{E1}.
\begin{equation}
\label{E1}
\large
I(X;Y)=\sum_{x\in X}^{}\;\sum_{y\in Y}^{}p(x,y)\log\frac{p(x,y)}{p(x)p(y)}
\end{equation}
where $I(X;Y)$ is the mutual information, $p(x,y)$ is the joint probability density function, and $p(x)$ and $p(y)$ are the marginal density functions of $X$ and $Y$, respectively. From the equation, we know that when $X$ and $Y$ are independent of each other, their MI value is 0; otherwise it is greater than 0.
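For discrete variables, \autoref{E1} can be computed directly from empirical frequencies. A self-contained sketch (the toy feature and label values are invented, and a base-2 logarithm is chosen so MI is measured in bits):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) as in Equation (1), estimated from empirical frequencies."""
    n = len(xs)
    p_xy = Counter(zip(xs, ys))          # joint counts
    p_x, p_y = Counter(xs), Counter(ys)  # marginal counts
    return sum((c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
               for (x, y), c in p_xy.items())

labels = [0, 0, 1, 1]
print(mutual_information([0, 1, 0, 1], labels))  # 0.0 (independent of labels)
print(mutual_information([0, 0, 1, 1], labels))  # 1.0 (determines the label)
```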
\section{NFFS}
\label{S4}
This section details the proposed feature selection method in two phases. Phase I of NFFS is described in \autoref{S4.1} and phase II of NFFS is located in \autoref{S4.2}. Phase II further processes the information provided by phase I and finally finds the best feature subset. \autoref{T1} lists some abbreviations appeared in this paper, which allow the paper more concise and clear.
NFFS is different from common feature selection methods in the following two points:
\begin{enumerate}
\item NFFS selects features from the feature space of the preprocessed dataset, rather than from the raw feature space. The features selected by NFFS will not contain any categorical features.
\item All steps of NFFS use the preprocessed dataset; only the preprocessing step touches the raw dataset. The dimensionality of the preprocessed dataset will be greater than that of the raw dataset.
\end{enumerate}
\begin{table}
\centering
\caption{Abbreviations used in this paper and their meanings.}
\label{T1}
\begin {tabular}{cccc}
\toprule
WV1 & AFS1 & WV2 & AFS2 \\ \midrule
\begin {tabular} [c]{@{}c@{}}Weight vector\\ obtained \\ in phase I of NFFS\end {tabular} & \begin {tabular}[c]{@{}c@{}}Alternative feature subsets\\ generated\\ in phase I of NFFS\end {tabular} & \begin {tabular}[c]{@{}c@{}}Weight vector\\ obtained \\ in phase II of NFFS\end {tabular} & \begin {tabular}[c]{@{}c@{}}Alternative feature subsets\\ generated\\ in phase II of NFFS\end {tabular} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{F2}
\caption{NFFS for feature selection.}
\label{F2}
\end{figure}
\subsection{Phase I of NFFS}
\label{S4.1}
The purpose of phase I of NFFS is to generate a batch of feature subsets, a portion of which yield slightly higher fitness values. The mechanism of this phase is shown on the left side of \autoref{F2}, and the steps are as follows:
\textbf{1. Measure MI values}: Mutual information is used here only to evaluate the MI value of each feature. The preprocessed dataset is fed to the mutual information module, which outputs a non-negative floating-point number for each feature. The larger the number, the stronger the relationship between the feature and the label, and vice versa.
\textbf{2. Find the threshold}: In the above step we obtained the MI value of each feature in the dataset. A histogram is used to analyze the distribution of these MI values. The distribution for the NSL-KDD dataset (a dataset used in the experiments) is shown in \autoref{F4}, from which it can be noticed that the majority of features obtain tiny MI values. A threshold is introduced to filter out these tiny MI values, since they do not need to be processed in the next step.
Finding the threshold requires first analyzing the MI values of the features, and the histogram is a handy tool for this. It is important to note that the threshold is selected by analyzing the histogram rather than being a self-defined parameter.
\textbf{3. Obtain WV1}: A formula is used to convert the MI values of features into feature weights, defined as in \autoref{E2}.
\begin{equation}
\label{E2}
\large
\mathrm{WV1}_i=\left\{\begin{array}{lc}\left(V_i-V_t\right)\frac{0.4}{V_{max}-V_t}+0.5&V_i>V_t\\0.5&V_i\leq V_t\end{array}\right.
\end{equation}
where $i$ indexes the items of a vector (ranging over the features), the vector WV1 (weight vector obtained in phase I of NFFS) holds the weights of the features, and $V_i$ denotes the MI value of the $i$-th feature. $V_{max}$ denotes the maximum MI value in the vector $V$, while $V_t$ is the threshold from the previous step. The constant 0.4 represents the upper bound for the increase of the weights, which is intended to keep the weights in a suitable range for the next step. The other constant, 0.5, denotes the base weight that every feature holds.
It is clear from \autoref{E2} and \autoref{F4} that a large number of features have a weight of 0.5, but this is not a problem: phase I of NFFS is not intended to produce an excellent result on its own.
\textbf{4. Generate AFS1 by \emph{probability}}: Once WV1 is obtained, feature subsets can be generated. The generation process uses \autoref{E3}.
\begin{equation}
\label{E3}
\large
\mathrm{AFS1}_L^i=\mathrm{sgn}(\mathrm{WV1}_i-\mathrm{rand}_i)=\left\{\begin{array}{lr}1&\mathrm{WV1}_i-\mathrm{rand}_i>0\\0&\mathrm{WV1}_i-\mathrm{rand}_i\leq0\end{array}\right.
\end{equation}
where $L$ indexes the generated feature subsets and $i$ indexes the items of a vector. $\mathrm{AFS1}_L^i$ denotes the $i$-th feature of the $L$-th feature subset. AFS1$_L$, WV1 and rand are all vectors with the same dimension as the number of features in the dataset. AFS1$_L$ is a mask that represents a feature subset. Each item in rand is a uniform random number in the range $[0,1]$. In the vector AFS1$_L$, `1' means the feature is selected, while `0' means the feature is not selected.
Applying \autoref{E3}, the $L$ feature subsets constitute AFS1 (alternative feature subsets generated in phase I of NFFS). From \autoref{E3}, it can be seen that features with higher MI values have a higher probability of being selected; as a result, the introduction of mutual information gives NFFS a better starting point.
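The probabilistic generation step can be sketched as follows (illustrative NumPy code; the weight vector is made up). A feature is selected whenever its weight exceeds a fresh uniform random draw, so a weight of 0.9 means roughly a 90\% selection rate:

```python
import numpy as np

def generate_afs1(wv1, L, seed=0):
    """Draw L random masks: feature i is selected iff WV1_i > rand_i."""
    rng = np.random.default_rng(seed)
    wv1 = np.asarray(wv1)
    rand = rng.random((L, wv1.size))       # uniform random numbers in [0, 1)
    return (wv1 - rand > 0).astype(int)    # 1 = selected, 0 = not selected

afs1 = generate_afs1([0.9, 0.5, 0.5, 0.5], L=1000)
print(afs1.shape)            # (1000, 4)
print(afs1[:, 0].mean())     # ~0.9: the high-weight feature is chosen most often
```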
\textbf{5. Evaluate feature subsets}: Each feature subset in AFS1 is evaluated using a classifier to obtain a fitness value. Specifically, the evaluation process consists of the following three steps: first, prepare the training and testing datasets according to AFS1$_L$; next, train the classifier with the training dataset; finally, evaluate the trained classifier on the testing dataset, and the result is the fitness value of AFS1$_L$.
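The three evaluation steps can be sketched as follows (an illustrative implementation assuming scikit-learn; the paper's actual classifier also includes standardization and PCA, omitted here for brevity, and the F-score serves as the fitness value):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def fitness(mask, X_train, y_train, X_test, y_test):
    """Fitness of one subset mask: keep the selected columns, train,
    and score the F-score on the testing data."""
    cols = np.flatnonzero(mask)                     # indices where mask == 1
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_train[:, cols], y_train)
    return f1_score(y_test, clf.predict(X_test[:, cols]))
```

The returned value plays the role of the fitness value of AFS1$_L$ in the text above.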
As \autoref{E2} demonstrates, we use the outcomes of mutual information directly, rather than as a filter method to rank features. Phase I of NFFS is now complete; the feature subsets in AFS1 and their fitness values are the raw materials for phase II of NFFS.
\autoref{Phase I of NFFS} shows the overall procedure for phase I of NFFS. As the algorithm shows, there are two nested while loops. The outer loop iterates over the generated feature subsets, while the inner loop determines which features are selected in each feature subset.
We also list the shapes of some of the variables from \autoref{Phase I of NFFS} in \autoref{Variables}, taking the NSL-KDD dataset as an example. \autoref{Variables} also contains some of the variables in \autoref{Phase II of NFFS}.
\begin{algorithm}
\caption{Phase I of NFFS.}
\label{Phase I of NFFS}
\KwIn{$L$}
\KwResult{AFS1; fitness values of AFS1}
Use mutual information to score each feature.\\
Select an appropriate threshold value.\\
Use \autoref{E2} to get WV1.\\
\While{$L>0$}{
\While{$i>0$}{Get $AFS1_L^i$ by \autoref{E3}.\\$i=i-1$}
$L=L-1$}
Evaluate the feature subsets inside AFS1.\\
\Return AFS1; fitness values of AFS1
\end{algorithm}
\begin{table}[]
\centering
\caption{Shapes of some variables in \autoref{Phase I of NFFS} and \autoref{Phase II of NFFS}.}\label{Variables}
\begin{tabular}{ccccccc}
\toprule
\textbf{value} & \textit{WV1} & \textit{$WV2$} & \textit{AFS1} & \textit{$AFS2$} & \textit{\begin{tabular}[c]{@{}c@{}}fitness values\\ of AFS1\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}}fitness values\\ of $AFS2$\end{tabular}} \\ \midrule
\textbf{shape} & 1x122 & 1x122 & $L$x122 & $O$x122 & $L$x1 & $O$x1 \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Phase II of NFFS}
\label{S4.2}
A new weight vector (WV2) is obtained by utilizing the raw materials provided by phase I of NFFS, which is the key to finding the optimal feature subset. The mechanism of this phase is shown on the right side of \autoref{F2}, and the steps are as follows:
\textbf{1. Count frequencies}: First, sort the feature subsets in AFS1 by their fitness values. The $M$ feature subsets with the highest fitness values constitute AFS1$_{top}$, the $N$ $(M+N<L)$ feature subsets with the lowest fitness values constitute AFS1$_{bottom}$, and the remaining feature subsets with ordinary fitness values are not involved in the counting. Next, count how many times each feature is selected in AFS1$_{top}$ and AFS1$_{bottom}$, respectively. The counting results constitute the vectors $\overrightarrow{F_{top}}$ and $\overrightarrow{F_{bottom}}$, respectively. The dimensions of these two vectors equal the number of features in the dataset. The $i$-th item in $\overrightarrow{F_{top}}$ represents the total number of occurrences of the $i$-th feature of the dataset in AFS1$_{top}$, and the $i$-th item in $\overrightarrow{F_{bottom}}$ represents the total number of occurrences of the $i$-th feature in AFS1$_{bottom}$.
\textbf{2. Obtain WV2}: WV2 can be derived from $\overrightarrow{F_{top}}$ and $\overrightarrow{F_{bottom}}$ by \autoref{E4}.
\begin{equation}
\label{E4}
\large
\mathrm{WV2}=\frac{\overrightarrow{F_{top}}}{\left\|\overrightarrow{F_{top}}\right\|}-\frac{\overrightarrow{F_{bottom}}}{\left\|\overrightarrow{F_{bottom}}\right\|}
\end{equation}
where $\left\|\overrightarrow{F_{...}}\right\|$ denotes the Euclidean norm of a vector, whose purpose is to normalize the vector in order to obtain normalized frequencies. The normalized frequencies have the same effect as weights. The logic of the equation is that if a feature appears often in AFS1$_{top}$ but rarely in AFS1$_{bottom}$, then it receives a high weight. Note that the form of \autoref{E4} is `$vector = vector - vector$'. This concise formula is the heart of NFFS; the `normalized frequencies' in the title of the paper refer to \autoref{E4}.
From the above step, we know that the items in the vectors $\overrightarrow{F_{top}}$ and $\overrightarrow{F_{bottom}}$ are the frequencies of the features. It makes sense to compare a frequency in $\overrightarrow{F_{top}}$ with another frequency in $\overrightarrow{F_{top}}$, but it is meaningless to compare a frequency in $\overrightarrow{F_{top}}$ with a frequency in $\overrightarrow{F_{bottom}}$. This is why normalization is needed.
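The normalized-frequency formula can be sketched as follows (illustrative NumPy code; the count vectors are made up). Each count vector is divided by its Euclidean norm before the subtraction, so the two sides become comparable:

```python
import numpy as np

def wv2_from_counts(f_top, f_bottom):
    """WV2 = F_top / ||F_top|| - F_bottom / ||F_bottom||."""
    f_top = np.asarray(f_top, dtype=float)
    f_bottom = np.asarray(f_bottom, dtype=float)
    return f_top / np.linalg.norm(f_top) - f_bottom / np.linalg.norm(f_bottom)

# Feature 0 is frequent among the top subsets and rare among the bottom
# ones, so it receives the largest (positive) weight; feature 2 the opposite.
print(wv2_from_counts([40, 20, 10], [5, 20, 30]))
```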
\textbf{3. Generate AFS2 by \emph{sorting}}: Once WV2 is obtained, it is time to generate feature subsets again; the generation process is much simpler than in phase I of NFFS.
The first generated feature subset consists of the single feature with the highest weight in WV2, the second consists of the two features with the highest weights, the third consists of the three features with the highest weights, and so on. In total, $O$ ($O<$ number of features) feature subsets are generated. These feature subsets constitute AFS2.
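The sorting-based generation can be sketched as follows (illustrative NumPy code; the weight vector is made up). The masks are nested: the $k$-th mask selects the $k$ highest-weighted features:

```python
import numpy as np

def generate_afs2(wv2, O):
    """Build O nested masks from the weight vector WV2, best features first."""
    order = np.argsort(wv2)[::-1]             # feature indices, highest weight first
    masks = np.zeros((O, len(wv2)), dtype=int)
    for k in range(O):
        masks[k, order[:k + 1]] = 1           # k-th mask: top (k+1) features
    return masks

wv2 = np.array([0.3, -0.2, 0.7, 0.1])
print(generate_afs2(wv2, O=3))  # rows: [0,0,1,0], [1,0,1,0], [1,0,1,1]
```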
\textbf{4. Evaluate feature subsets and get the result}: Each feature subset in AFS2 is provided to the classifier for evaluation, and the feature subset with the highest fitness value is selected as the output of NFFS. \autoref{Phase II of NFFS} shows the overall procedure for phase II of NFFS. NFFS only needs to evaluate $L+O$ feature subsets to obtain the result, which is a great advantage in speed compared to heuristic algorithms, and the result is excellent.
\begin{algorithm}
\caption{Phase II of NFFS}\label{Phase II of NFFS}
\KwIn{$M, N, O$}
\KwResult{Best feature subset.}
Sort feature subsets in AFS1 by their fitness values.\\
Get AFS1$_{top}$ and AFS1$_{bottom}$ from AFS1.\\
Get $\overrightarrow{F_{top}}$ from AFS1$_{top}$.\\
Get $\overrightarrow{F_{bottom}}$ from AFS1$_{bottom}$.\\
Use \autoref{E4} to get WV2.\\
\While{$O>0$}{AFS2$_O$ = the $O$ features with the highest weights in WV2.\\ \tcp*[h]{AFS2$_O$ represents the $O$-th feature subset in AFS2.}\\$O$=$O$-1}
Evaluate the feature subsets inside AFS2.\\
Sort feature subsets in AFS2 by their fitness values.\\
\Return Feature subset that get the best fitness value in AFS2.
\end{algorithm}
\section{Experiments and results}
\label{S5}
In this section, we introduce the datasets used for the experiments and discuss data preprocessing. We describe the evaluation indicators and the classifier used in this paper, and the fitness function is also illustrated. In \autoref{S5.4}, we implement the proposed NFFS. In \autoref{S5.5}, we describe the results and compare the proposed method with several state-of-the-art feature selection methods.
\subsection{Dataset}
\label{S5.1}
The NSL-KDD dataset \citep{R27} and the UNSW-NB15 dataset \citep{R27} are used to evaluate the proposed feature selection method. These two datasets are authoritative real-world datasets in the intrusion detection domain. Both datasets are provided with ready-made training and testing partitions, so it is not necessary to prepare them by sampling from an unsegmented dataset.
The NSL-KDD dataset uses 41 features to represent a record, and each record is either normal or an attack. This dataset is a refined version of the KDD Cup 99 dataset \citep{R261} and adds an item to represent the difficulty of classifying correctly. The UNSW-NB15 dataset consists of 42 features, a multiclass label and a binary label; here we only use the binary label. There are no duplicate records in these two datasets.
\begin{table}
\centering
\caption{Type of features in NSL-KDD dataset.}
\label{TK}
\begin{tabular}{ll}
\toprule
\multicolumn{1}{c} {Types of features} & \multicolumn{1}{c}{Features} \\ \midrule
Binary & [ f7, f12, f14, f15, f21, f22 ] \\
Categorical & [ f2, f3, f4 ] \\
Numeric & \multicolumn{1}{p{0.8\columnwidth}}{[ f1, f5, f6, f8, f9, f10, f11, f13, f16, f17, f18, f19, f20, f23, f24, f25, f26, f27, f28, f29, f30, f31, f32, f33, f34, f35, f36, f37, f38, f39, f40, f41 ]} \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Type of features in UNSW-NB15 dataset.}
\label{TU}
\begin{tabular}{ll}
\toprule
\multicolumn{1}{c} {Types of features} & \multicolumn{1}{c}{Features} \\ \midrule
Binary & [ f37, f42 ] \\
Categorical & [ f2, f3, f4 ] \\
Numeric & \multicolumn{1}{p{0.8\columnwidth}}{[ f1, f5, f6, f7, f8, f9, f10, f11, f12, f13, f14, f15, f16, f17, f18, f19, f20, f21, f22, f23, f24, f25, f26, f27, f28, f29, f30, f31, f32, f33, f34, f35, f36, f38, f39, f40, f41 ]} \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Summary quantitative information of the NSL-KDD and UNSW-NB15 datasets.}
\label{TS}
\begin{tabular}{cccc}
\toprule
Dataset & Partition & Positive (Ratio) & Negative (Ratio) \\ \midrule
\multirow{2}{*}{NSL-KDD} & Training dataset & 58630 (46.5\%) & 67343 (53.5\%) \\
& Testing dataset & 12833 (56.9\%) & 9711 (43.1\%) \\
\multirow{2}{*}{UNSW-NB15} & Training dataset & 119341 (68.1\%) & 56000 (31.9\%) \\
& Testing dataset & 45332 (55.1\%) & 37000 (44.9\%) \\ \bottomrule
\end{tabular}%
\end{table}
The features of the NSL-KDD dataset are presented in \autoref{TK} in a manner that is more friendly to machine learning \citep{R31}, where `f$n$' represents column $n$ in the dataset file. \autoref{TU} shows the features of the UNSW-NB15 dataset in the same style. As shown in \autoref{TK} and \autoref{TU}, both datasets contain three categorical features. \autoref{TS} shows the class distribution for NSL-KDD and UNSW-NB15. As the table shows, both datasets are relatively balanced.
\subsection{Data preprocessing}
\label{S5.2}
The first step in the experiments is to process the non-numerical marks in the datasets. Preprocessing consists of two items: converting categorical features into numeric features by one-hot encoding, and converting symbolic labels into binary labels. In the experiments, `0' denotes the normal class while `1' denotes the attack class, regardless of the specific type of attack. After one-hot encoding, the 41 features of NSL-KDD are expanded to 122.
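The preprocessing described above can be sketched with pandas (an illustrative toy example; the column values below are made up and much smaller than the real datasets):

```python
import pandas as pd

df = pd.DataFrame({
    "f2": ["tcp", "udp", "tcp", "icmp"],            # a categorical feature
    "f5": [10, 3, 7, 1],                            # a numeric feature
    "label": ["normal", "attack", "attack", "normal"],
})
X = pd.get_dummies(df[["f2", "f5"]], columns=["f2"])   # one-hot encode f2
y = (df["label"] != "normal").astype(int)              # 0 = normal, 1 = attack
print(list(X.columns))   # ['f5', 'f2_icmp', 'f2_tcp', 'f2_udp']
print(y.tolist())        # [0, 1, 1, 0]
```

Each category becomes its own column (e.g. f2\_tcp), which is exactly the encoded feature space that NFFS selects from.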
\subsection{Classifier related}
\label{S5.3}
\subsubsection{Classifier}
In this paper, a random forest \citep{R32} with PCA \citep{R33} is used as the classifier, and PCA is regarded as a part of the classifier rather than an independent processing step \citep{R331}. All feature subsets are evaluated by this classifier. PCA plays an important role as part of the classifier: the data are processed by PCA before being fed to the random forest.
PCA can project data into an orthogonal feature space, which removes redundant linear relationships from the original feature space. PCA makes the classifier more robust and at the same time further decreases the computational cost.
Since PCA is a scale-sensitive method, it needs the help of standardization. It is important to note that the CART-based random forest does not need standardization; standardization is only for PCA here. This classifier is also used to evaluate the feature selection methods from the state-of-the-art related works.
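A minimal sketch of such a classifier with scikit-learn (the 0.93 explained-variance ratio is the value used in the experiments later; the number of trees and the random seed are illustrative):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Standardization feeds PCA, PCA feeds the random forest; the three stages
# together act as one classifier when evaluating feature subsets.
clf = make_pipeline(
    StandardScaler(),                 # standardization is only needed by PCA
    PCA(n_components=0.93),           # keep 93% of the explained variance
    RandomForestClassifier(n_estimators=100, random_state=7),
)
```

Wrapping the three stages in one pipeline matches the paper's view of PCA as part of the classifier rather than a separate preprocessing step.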
\subsubsection{Evaluation indicators}
Different learning tasks in machine learning require different indicators; the evaluation indicators in this paper adopt generic nomenclature rather than nomenclature exclusive to a specific task. Accuracy, recall, precision, F-score and AUC are used to evaluate feature subsets in this paper. All indicators used can be calculated from the confusion matrix \citep{R34}, shown in \autoref{T3}, where positive means the attack class and negative means the normal class. The four parameters in the table represent the numbers of records that match certain conditions:
\begin{table}
\centering
\caption{Binary confusion matrix.}
\label{T3}
\begin{tabular}{lcc}
\toprule
& Predicted normal & Predicted attack \\ \midrule
Actual normal & TN & FP \\
Actual attack & FN & TP \\ \bottomrule
\end{tabular}
\end{table}
\textbf{TP (True Positive)}: Attack and predicted to be attack.
\textbf{TN (True Negative)}: Normal and predicted to be normal.
\textbf{FP (False Positive)}: Normal and predicted to be attack.
\textbf{FN (False Negative)}: Attack and predicted to be normal.
The evaluation indicators used in experiments are described as follows:
\textbf{1. Accuracy}: Measures how many records are correctly classified, as in \autoref{E5}.
\begin{equation}
\label{E5}
\large
accuracy=\frac{TP+TN}{TP+TN+FP+FN}
\end{equation}
\textbf{2. Recall}: Measures how many attacks could be discovered as in \autoref{E6}.
\begin{equation}
\label{E6}
\large
recall=\frac{TP}{TP+FN}
\end{equation}
\textbf{3. Precision}: Measures how many attacks classified as attack are really attack as in \autoref{E7}.
\begin{equation}
\label{E7}
\large
precision=\frac{TP}{TP+FP}
\end{equation}
\textbf{4. F-score}: A weighted average of the precision and recall as in \autoref{E8}.
\begin{equation}
\label{E8}
\large
F\text{-}score=2\times\frac{precision\times recall}{precision+recall}
\end{equation}
\textbf{5. AUC (Area Under the Receiver Operating Characteristic curve)}: Simply put, it measures the potential of the model \citep{R35}.
Recall and precision are a pair of mutually constraining evaluation indicators: a classifier can only achieve a high F-score if it obtains high values on both. All evaluation indicators except AUC are in the range $[0,1]$. The maximum value of AUC is 1, while its practical minimum is 0.5: if a classifier makes random decisions, its AUC will be 0.5.
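The four formula-based indicators above can be computed directly from the confusion-matrix counts (a minimal sketch; the counts are made up):

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, recall, precision and F-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)              # how many attacks were discovered
    precision = tp / (tp + fp)           # how many alarms were real attacks
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f_score

# accuracy 0.85, recall 0.8, precision ~0.889, F-score ~0.842
print(metrics(tp=80, tn=90, fp=10, fn=20))
```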
\subsubsection{Fitness function}
In the search for the optimal feature subset, we use only the F-score to constitute the fitness function, without considering the number of features in the feature subset. This is based on two considerations: the F-score is a comprehensive indicator; and NFFS is not a wrapper method, so there is no comparison of independent feature subsets. Note that the fitness function is used to search for the optimal feature subset, while the evaluation indicators are used to evaluate the final result.
\subsection{Perform feature selection}
\label{S5.4}
Scikit-learn is a Python module for machine learning, and the experiments in this paper were completed with Scikit-learn \citep{R39}. The quality of a feature subset is judged by its fitness value, but the fitness value from the classifier is highly dependent on the parameters used to train the classifier. To solve this problem, we adopt the strategy of testing a feature subset with multiple groups of parameters, and the average value is used as the fitness value of the feature subset.
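The averaging strategy can be sketched as follows (a minimal sketch; `evaluate` and the parameter groups are placeholders, not the paper's actual settings):

```python
import numpy as np

def robust_fitness(mask, param_groups, evaluate):
    """Average the fitness of one subset mask over several classifier
    parameter groups, so no single parameter choice dominates."""
    return float(np.mean([evaluate(mask, params) for params in param_groups]))

# Toy usage: evaluate simply reads a value out of the parameter dict.
print(robust_fitness([1, 0], [{"a": 1}, {"a": 3}], lambda m, p: p["a"]))  # 2.0
```

In practice, `evaluate` would train and test the random-forest classifier with the given parameters and return the F-score.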
\subsubsection{Phase I of NFFS}
\label{S5.4.1}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{F4}
\caption{Distribution for MI values of features in NSL-KDD dataset.}
\label{F4}
\end{figure}
\autoref{F4} shows the MI values of the features in the NSL-KDD dataset, from which it can be seen that the majority of features hold tiny MI values. The threshold taken for the NSL-KDD dataset is 0.05, indicated by a red vertical line in \autoref{F4}. As shown in the figure, a large number of features hold a weight of 0.5 after calculation by \autoref{E2}. However, the final optimal feature subset shows that the majority of these features are eliminated as well.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{F5}
\caption{Distribution for fitness values of feature subsets in AFS1.}
\label{F5}
\end{figure}
After obtaining WV1 via \autoref{E2}, \autoref{E3} was used to generate 180 feature subsets, which constituted AFS1. We also wanted to get a glimpse of the power of PCA. \autoref{F5} shows the performance of the feature subsets in AFS1. As can be observed in the figure, the classifier with PCA shifts the bars significantly to the right, which means the fitness values of the feature subsets are significantly improved by PCA. The reason is that PCA removes linearly correlated redundant information from the input data, which allows a more robust classifier. Note that this is just a test of the significance of PCA; elsewhere, PCA always accompanies the classifier.
\subsubsection{Phase II of NFFS}
\label{S5.4.2}
In the experiments, the 45 feature subsets with the highest fitness values in AFS1 were selected to constitute AFS1$_{top}$, and the 45 feature subsets with the lowest fitness values were selected to constitute AFS1$_{bottom}$. After counting frequencies, WV2 was calculated by \autoref{E4} and used to generate 70 feature subsets constituting AFS2.
\autoref{F6} shows the performance of the feature subsets in AFS1 and AFS2. As the figure shows, the feature subsets in AFS2 hold distinct advantages, which suggests that phase II of NFFS plays a significant role. It can also be seen that the feature subsets in AFS2 perform more stably. Phase II of NFFS is based on phase I but is far superior to it.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{F6}
\caption{Distribution for fitness values of feature subsets in AFS1 and AFS2.}
\label{F6}
\end{figure}
\subsection{Result}
\label{S5.5}
\begin{table}
\centering
\caption{Results of several feature selection methods applied to the NSL-KDD dataset.}
\label{T4}
\begin{tabular}{llcp{0.36\columnwidth}}
\toprule
\multicolumn{1}{c}{Reference} & \multicolumn{1}{c}{Method} & NF & \multicolumn{1}{c}{Selected feature subset} \\ \midrule
\citep{enache2015anomaly} & BAT & 89(18) &[ f1, f2, f3, f8, f9, f13, f14, f18, f19, f20, f26, f28, f32, f33, f34, f38, f39, f40 ] \\
\citep{ambusaidi2016building} & LSSVM & 97(18) &[ f3, f4, f5, f6, f12, f23, f25, f26, f28, f29, f30, f33, f34, f35, f36, f37, f38, f39 ] \\
\citep{moustafa2015a} & Hybrid Association Rules & 13(11) &[ f2, f5, f6, f7, f12, f16, f23, f28, f31, f36, f37 ] \\
\citep{aljawarneh2017anomaly-based} & IG & 87(8) & [ f3, f4, f5, f6, f29, f30, f33, f34 ] \\
\citep{tama2019tse-ids:} & PSO & 118(37) &[ f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12, f13, f14, f15, f17, f18, f20, f21, f22, f23, f24, f25, f26, f27, f28, f29, f31, f32, f33, f34, f35, f36, f37, f38, f39, f40, f41 ] \\
\citep{R26} & Sigmoid\_PIO & 98(18) &[ f1, f3, f4, f5, f6, f8, f10, f11, f12, f13, f14, f15, f17, f18, f27, f32, f36, f39, f41 ] \\
\citep{R26} & Cosine\_PIO & 7(5) & [ f2, f6, f10, f22, f27 ] \\ Proposed method & NFFS & 34(11) &[ f2\_icmp, f3\_IRC, f3\_aol, f3\_auth, f3\_csnet\_ns, f3\_ctf, f3\_daytime, f3\_discard, f3\_ecr\_i, f3\_http\_8001, f3\_imap4, f3\_login, f3\_name, f3\_netbios\_ssn, f3\_pop\_2, f3\_pop\_3, f3\_rje, f3\_supdup, f3\_telnet, f3\_urh\_i, f4\_OTH, f4\_RSTO, f4\_RSTOS0, f4\_RSTR, f4\_S1, f4\_SF, f6, f7, f9, f10, f21, f30, f40, f41 ]\\ \bottomrule
\end{tabular}
\end{table}
The related works used for comparison with NFFS and the features they selected from NSL-KDD are summarized in \autoref{T4}, where the number outside the parentheses in the third column (NF) denotes the number of encoded features (encoded by one-hot encoding), while the number inside the parentheses denotes the number of original features. It can be noticed that the feature subset selected by NFFS is reported in the encoded format, because NFFS selects features directly from the encoded feature space. Regarding the format of the features selected by NFFS: for example, f$2$\_tcp indicates the category `tcp' in the 2nd feature (communication protocol).
In order to fairly compare the quality of the feature subsets found by each feature selection method, we need to train a separate classifier for each feature subset. For each feature subset reported in \autoref{T4}, we searched for the best parameters to build a classifier for it. During the search for the optimal parameters, the explained-variance ratio of PCA was always set to 0.93. Thirty different random seeds were used to perform 30 runs to obtain the means and standard deviations that compose the results of each method. The 30 random seeds used are the integers from 7 to 36; the seeds start from 7 simply because we believe that 7 represents luck. The fixed random seeds allow the experimental results to be accurately reproduced.
Each feature subset in \autoref{T4} was fed to the customized classifier for evaluation, and the evaluation results are presented in \autoref{T5}. Based on the results shown in \autoref{T5}, NFFS obtains the best score in terms of accuracy, precision, recall, F-score and AUC.
\begin{table}[]
\centering
\caption{The performances of the feature subsets in \autoref{T4}.}
\label{T5}
\begin{tabular}{lccccc}
\toprule
Methods & Precision±std & Recall±std & Accuracy±std & F-score±std & AUC±std \\ \midrule
BAT & 0.962±0.011 & 0.704±0.029 & 0.815±0.016 & 0.812±0.019 & 0.838±0.014 \\
LSSVM & 0.904±0.002 & 0.842±0.019 & 0.859±0.010 & 0.871±0.011 & 0.921±0.007 \\
Hybrid Association Rules& 0.956±0.009 & 0.636±0.020 & 0.776±0.011 & 0.763±0.015 & 0.802±0.012 \\
IG & 0.898±0.002 & 0.781±0.032 & 0.825±0.017 & 0.835±0.018 & 0.927±0.005 \\
PSO & 0.948±0.021 & 0.687±0.035 & 0.800±0.021 & 0.796±0.025 & 0.819±0.020 \\
Sigmoid\_PIO & 0.921±0.001 & 0.702±0.011 & 0.796±0.006 & 0.797±0.007 & 0.880±0.022 \\
Cosine\_PIO & 0.926±0.003 & 0.821±0.040 & 0.861±0.022 & 0.870±0.023 & 0.910±0.009 \\
NFFS & 0.963±0.005 & 0.852±0.028 & 0.897±0.015 & 0.904±0.016 & 0.938±0.001 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{F7}
\caption{Performances of feature selection methods in \autoref{T4}.}
\label{F7}
\end{figure}
F-score and AUC are relatively more comprehensive and pertinent indicators in machine learning, and \autoref{F7} visualizes these two indicators from \autoref{T5}, where each bar represents the mean and standard deviation of a method. As shown in \autoref{F7}, NFFS achieves the highest F-score and AUC. For the F-score, NFFS gains an absolute victory without any doubt. For AUC, NFFS not only achieves the highest score but also holds the smallest standard deviation.
\begin{table}
\centering
\caption{Results of several feature selection methods applied to the UNSW-NB15 dataset.}
\label{Results of UNSW}
\begin{tabular}{llcp{0.36\columnwidth}}
\toprule
\multicolumn{1}{c}{Reference} & \multicolumn{1}{c}{Method} & NF & \multicolumn{1}{c}{Selected feature subset} \\ \midrule
\citep{moustafa2015a} & Hybrid Association Rules & 21(11) &[ f4, f10, f11, f18, f20, f23, f25, f32, f35, f40, f41 ] \\
\citep{tama2019tse-ids:} & PSO & 41(19) &[ f3, f4, f7, f8, f10, f11, f16, f20, f22, f24, f26, f28, f30, f32, f34, f35, f36, f41, f42 ] \\
\citep{tama2019tse-ids:} & Rule-Based & 157(13) &[ f2, f3, f7, f8, f10, f15, f17, f31, f33, f34, f35, f36, f41 ] \\
Proposed method & NFFS & 38(14) &[ f7, f10, f11, f14, f19, f27, f28, f29, f34, f35, f36, f2\_argus, f2\_bbn-rcc, f2\_br-sat-mon, f2\_cbt, f2\_crudp, f2\_dcn, f2\_ddp, f2\_eigrp, f2\_hmp, f2\_ipcv, f2\_leaf-2, f2\_netblt, f2\_ospf, f2\_ptp, f2\_scps, f2\_snp, f2\_st2, f2\_vines, f3\_pop3, f3\_ssl, f4\_CLO, f4\_CON, f4\_no ]\\ \bottomrule
\end{tabular}
\end{table}
UNSW-NB15 is another dataset used to evaluate NFFS. \autoref{Results of UNSW} reports the feature subsets selected from the UNSW-NB15 dataset; its format is the same as that of \autoref{T4}. \autoref{POU} shows the results of the related works and NFFS; its format is the same as that of \autoref{T5}, and the data therein are also derived from 30 runs. As the table shows, NFFS achieves the best result among the compared methods in terms of all five indicators.
\begin{table}[]
\centering
\caption{The performances of the feature subsets in \autoref{Results of UNSW}.}
\label{POU}
\begin{tabular}{lccccc}
\toprule
Methods & Precision±std & Recall±std & Accuracy±std & F-score±std & AUC±std \\ \midrule
Hybrid Association Rules & 0.762±0.001 & 0.968±0.001 & 0.816±0.001 & 0.853±0.001 & 0.937±0.001 \\
PSO & 0.768±0.003 & 0.983±0.004 & 0.827±0.003 & 0.862±0.002 & 0.960±0.002 \\
Rule-Based & 0.797±0.002 & 0.973±0.003 & 0.849±0.002 & 0.876±0.002 & 0.962±0.002 \\
NFFS & 0.818±0.004 & 0.985±0.001 & 0.871±0.003 &0.893±0.002 & 0.977±0.002 \\ \bottomrule
\end{tabular}
\end{table}
\section{Conclusion}
\label{S6}
In this work, a concise feature selection method is proposed to overcome the shortcomings of metaheuristic algorithms. The proposed method implements feature selection in two phases, based on the following ideas: (Phase I) if the filter method considers a feature to be of higher importance, then the feature is selected with higher probability during the generation of feature subsets; (Phase II) if a feature is often contained in the feature subsets that perform well, while rarely being contained in the feature subsets that perform poorly, then the feature is beneficial to the classifier; the opposite means that the feature is harmful to the classifier.
In order to provide sufficient feature subsets with diverse performances for phase II, we generated the feature subsets by probability in phase I, which made the generated feature subsets better than randomly generated ones. We used experiments to illustrate why the two phases use different strategies to generate feature subsets. The experimental results indicate that the proposed method outperforms several methods from state-of-the-art related works in terms of precision, recall, accuracy, F-score and AUC.
Future research can investigate the use of clustering to optimize the efficiency of encoding, and the application of the proposed method to semi-supervised learning.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
The search for a full theory of quantum gravity is a major open problem in modern physics. The difficulty of finding such a theory has even raised conceptual questions on the need for the quantization of gravity \cite{eppley1977necessity, penrose1996gravity, dyson2014graviton, baym2009two, mari2016experiments, belenchia2018quantum, rydving2021gedankenexperiments, carney2021newton}. One major challenge is the lack of experimental evidence. However, in recent years experimental proposals to test the quantization of gravity have become an active research field. On the one hand, quantum gravity phenomenology offers alternative models that can be probed through astrophysical observations \cite{amelino1998tests, hossenfelder2017experimental} and table-top experiments \cite{marshall2003towards, pikovski2012probing, bekenstein2012tabletop, bassi2017gravitational}. On the other hand, the entanglement between two gravitating systems can provide indirect signatures of quantized gravity \cite{bose2017spin, marletto2017gravitationally}. In a similar spirit to the latter, a recent paper by Carney, M\"{u}ller and Taylor~\cite{Carney2021} showed that interactions between an atom and a massive system can hint at the quantum nature of gravity. They showed that the coupling results in periodic collapses and revivals of the interferometric visibility of the atomic interferometer. The authors show that under specific conditions, such as Markovianity and a time-independent Hamiltonian, this behavior implies entanglement between the atom and the harmonic oscillator through gravity, and hence the quantum nature of gravity.
Here we study classical models for the loss and revival of visibility. Our analysis shows that this signature can be reproduced if the atom evolves according to a random unitary channel, without being coupled to another quantum system~\cite{audenaert2008random}. As the central quantity in the atomic interferometer is the phase accumulated during the time evolution, this idea is motivated by a previous work~\cite{armata2016quantum} in which the optical phase originating from an optomechanical interaction was found to have a classical origin, such that classical models can explain supposed quantum behaviour in other proposals, such as in Ref.~\cite{marshall2003towards}. For the case where both the atom and the oscillator are described fully quantum mechanically, we study the non-classicality for a thermal harmonic oscillator, showing that it vanishes for low coupling even if the system is in the ground state. Therefore, such experiments with very low coupling strengths and at finite temperature always allow for a classical description, unless it is explicitly invalidated experimentally.
\section{The Original Quantum Model}\label{sec:level1}
The setup described in Ref.~\cite{Carney2021} consists of an atom localized in one of two positions, interacting with a quantum harmonic oscillator. The position degree of freedom of the atom can therefore be represented as a qubit.
The atom interacts gravitationally with the harmonic oscillator according to the Hamiltonian
\begin{equation}\label{eq:qHam}
\hat{H}=\omega\hat{a}^{\dagger}\hat{a}+g(\hat{a}+\hat{a}^{\dagger})\hat{\sigma}_z.
\end{equation}
Here $\hat{a}$ and $\hat{a}^{\dagger}$ are annihilation and creation operators for the mechanical oscillator. They are related to the position and momentum operators of the mechanical oscillator via $\hat{a}=\sqrt{m\omega/2}\hat{X}+i/\sqrt{2m\omega}\hat{P}$, where $m$ and $\omega$ are the mass and frequency of the oscillator, respectively. The operator $\hat{\sigma}_z$ acts on the atom, defined as
\begin{equation}
\hat{\sigma}_z=|1\rangle\langle1|-|0\rangle\langle0|,
\end{equation}
where $|0\rangle$ and $|1\rangle$ are two states of the position degree of freedom of the atom.
The coupling strength $g$ depends on the gravitational force between the two systems (i.e. the masses of the two systems and the distance between them).
Note that throughout this article we have set $\hbar=1$.
The mechanical oscillator is initially in a thermal state at temperature $T$, described by
\begin{equation}
\rho_{\mathrm{th}}=\frac{1}{\pi\tilde{n}}\int d^2\alpha\; e^{-\frac{|\alpha|^2}{\tilde{n}}} \ketbra{\alpha}{\alpha}, \label{eq:thermal}
\end{equation}
where $\tilde{n}=(\exp(\omega/k_{\mathrm{B}}T)-1)^{-1}$ is the thermal phonon number and $\ket{\alpha}$ is a coherent state. The atom is initially in the state $|0\rangle$.
The experimental proposal in \cite{Carney2021} is to then perform interferometry on the atom.
This consists of the following steps. The Hadamard gate is applied to the atom, resulting in the transformation
\begin{subequations}\label{eq:hadamard}
\begin{align}
&|0\rangle\rightarrow \ket{+}=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle),\\
&|1\rangle\rightarrow\ket{-}=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle).
\end{align}
\end{subequations}
Then the atom-oscillator system evolves according to the Hamiltonian Eq.~\eqref{eq:qHam} for time $t$, described by the unitary operator
\begin{equation}
\hat{U}_q(t)=e^{-i\hat{H}t}.
\end{equation}
To describe the system in a thermal state, we first calculate the evolution for an arbitrary coherent state $\ket{\alpha}$.
After evolving for a time $t$ the combined atom-oscillator state is given by (up to a global phase),
\begin{align}
\hat{U}_q(t)&\ket{+}\ket{\alpha} \nonumber \\
&= \frac{1}{\sqrt{2}}\left(e^{i\theta(t)} \ket{0}\ket{\alpha_{+}(t)} + e^{-i\theta(t)}\ket{1}\ket{\alpha_{-}(t)}\right)
\end{align}
where
\begin{subequations}
\begin{align}
\alpha_{\pm}(t) &= \alpha e^{-i\omega t}\pm \frac{g}{\omega}\left(1-e^{-i\omega t}\right), \\
\theta(t) &= \frac{g}{\omega}\text{Im}(\alpha(1-e^{-i\omega t})).
\end{align}
\end{subequations}
Therefore, if the oscillator begins in a thermal state, the combined system will evolve under $\hat{U}_{q}(t)$ to the state
\begin{align}
&\hat{U}_q(t)\left(\ketbra{+}{+}\otimes \rho_{\text{th}}\right)U^{\dagger}_{q}(t) \nonumber \\
&\;= \frac{1}{\pi\tilde{n}} \int e^{-\frac{|\alpha|^2}{\tilde{n}}}\frac{1}{2}\left(e^{i\theta(t)} \ket{0}\ket{\alpha_{+}(t)} + e^{-i\theta(t)}\ket{1}\ket{\alpha_{-}(t)}\right) \nonumber \\
&\qquad\qquad \times \left(e^{-i\theta(t)} \bra{0}\bra{\alpha_{+}(t)} + e^{i\theta(t)}\bra{1}\bra{\alpha_{-}(t)}\right)d^{2}\alpha. \label{eq:evolve}
\end{align}
After this evolution a phase shift $\varphi$ is applied to the atom state $|1\rangle$, which is realised by the unitary operator
\begin{equation}
\hat{U}_{\varphi}=e^{i\varphi}|1\rangle\langle1|+|0\rangle\langle0|.
\end{equation}
Then another Hadamard gate Eq.~\eqref{eq:hadamard} is applied to the atom. Finally the position of the atom is measured. The probability of the atom to be in the state $|0\rangle$ is
\begin{equation}\label{eq:probQ}
P_{q,\varphi}=\frac{1}{2}+\frac{1}{2}e^{-16\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\frac{\omega t}{2}}\cos\varphi.
\end{equation}
The interference visibility is defined as
\begin{equation}
V=\frac{\max_{\varphi}P_{q,\varphi}-\min_{\varphi}P_{q,\varphi}}{\max_{\varphi}P_{q,\varphi}+\min_{\varphi}P_{q,\varphi}}.
\end{equation}
After inserting in Eq.~\eqref{eq:probQ} we get
\begin{equation}\label{eq:qV}
V=e^{-16\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\frac{\omega t}{2}}.
\end{equation}
The visibility decays to its minimum value at $t=\pi/\omega$, when the atom is maximally entangled with the harmonic oscillator, Eq.~\eqref{eq:evolve}. The visibility then returns to its maximum value $1$ at $t=2\pi/\omega$, when the atom is fully disentangled from the harmonic oscillator. This pattern repeats with period $2\pi/\omega$. These are referred to as the periodic collapse and revival of the interference visibility, which in the fully quantum mechanical picture can be attributed to the entanglement between the two systems.
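The thermal average behind Eq.~\eqref{eq:qV} can be checked numerically. The following Python sketch (not part of the original analysis; parameter values are arbitrary examples) performs a Monte-Carlo average of the atom coherence $e^{2i\theta(t)}\mel{\alpha_{-}(t)}{}{\alpha_{+}(t)}$ over the thermal distribution of the initial coherent amplitude $\alpha$, and compares it with the closed form:

```python
import numpy as np

# Monte-Carlo check of the quantum visibility (example parameters, hbar = 1).
rng = np.random.default_rng(0)
g, w, nbar, t = 0.1, 1.0, 2.0, 1.3
lam = g / w
u = 1.0 - np.exp(-1j * w * t)

# alpha ~ complex Gaussian with <|alpha|^2> = nbar (thermal P-function)
n_samp = 200_000
alpha = (rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp)) * np.sqrt(nbar / 2)

a_plus = alpha * np.exp(-1j * w * t) + lam * u
a_minus = alpha * np.exp(-1j * w * t) - lam * u
theta = lam * np.imag(alpha * u)

# coherent-state overlap <a_minus|a_plus>
overlap = np.exp(-0.5 * np.abs(a_plus)**2 - 0.5 * np.abs(a_minus)**2
                 + np.conj(a_minus) * a_plus)
V_mc = np.abs(np.mean(np.exp(2j * theta) * overlap))
V_exact = np.exp(-16 * lam**2 * (nbar + 0.5) * np.sin(w * t / 2)**2)
```

The sampled value agrees with the closed form to Monte-Carlo accuracy.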
The periodic appearance and disappearance of entanglement can be clearly seen by examining the explicit form of the unitary time evolution operator. For this purpose, we generalise the $\hat{\sigma}_z$ operator in Eq.~\eqref{eq:qHam} to any operator $\hat{O}$ that commutes with $\hat{a}$ and $\hat{a}^{\dagger}$,
\begin{equation}\label{eq:qGen}
\hat{H}_O = \omega\hat{a}^{\dagger}\hat{a}+g(\hat{a}+\hat{a}^{\dagger})\hat{O}.
\end{equation}
The corresponding unitary time evolution operator can be expressed as
\begin{align}\label{eq:UO}
&\hat{U}_O(t)=e^{-i\hat{H}_Ot}\\
&=e^{i\frac{g^2}{\omega^2}(\omega t-\sin\omega t)\hat{O}^2}e^{-i\omega t\hat{a}^{\dagger}\hat{a}}e^{-\frac{g}{\omega}((e^{i\omega t}-1)\hat{a}^{\dagger}-(e^{-i\omega t}-1)\hat{a})\hat{O}}.\nonumber
\end{align}
The last exponential factor is a displacement operator of the oscillator, conditioned on the state of the atom.
At times $t=2n\pi/\omega$ with $n\in\mathbb{Z}$, the last two factors in Eq.~\eqref{eq:UO} reduce to the identity, meaning that at these times $\hat{U}_O(t)$ is independent of $\hat{a}$ and $\hat{a}^{\dagger}$, and the atom and the oscillator decouple. The first factor is usually associated with a nonlinear geometric phase gate on the mode described by $\hat{O}$. In Ref.~\cite{armata2016quantum}, a similar interferometric setup was considered, but with coherent states of light. In that case the nonlinear factor in Eq.~\eqref{eq:UO} caused an additional loss of visibility. Since the model considered here only involves interferometry with a single qubit, there is no such additional loss: because $\hat{\sigma}_z^2$ is the identity operator, the nonlinear factor contributes only a global phase.
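As a consistency check of Eq.~\eqref{eq:UO}, one can compare the factorised form against direct exponentiation of the Hamiltonian in a truncated Fock space. The sketch below (an illustration, not part of the paper's derivation) uses $\hat{O}=\hat{\sigma}_z$ and arbitrarily chosen parameters; the Fock-space cutoff is a numerical approximation, so only low-lying matrix elements are compared:

```python
import numpy as np
from scipy.linalg import expm

# Compare U = exp(-iHt) with the factorised form of Eq. (UO) for O = sigma_z,
# in a truncated oscillator Hilbert space (qubit tensor Fock space).
Nf = 60                                   # Fock cutoff (numerical choice)
w, g, t = 1.0, 0.2, 1.7
a = np.diag(np.sqrt(np.arange(1, Nf)), 1)  # truncated annihilation operator
ad = a.conj().T
sz = np.diag([1.0, -1.0])
H = w * np.kron(np.eye(2), ad @ a) + g * np.kron(sz, a + ad)
U = expm(-1j * H * t)

lam = g / w
phase = np.exp(1j * lam**2 * (w * t - np.sin(w * t)))   # sigma_z^2 = identity
rot = expm(-1j * w * t * np.kron(np.eye(2), ad @ a))
disp = expm(-(g / w) * np.kron(sz, (np.exp(1j * w * t) - 1) * ad
                                   - (np.exp(-1j * w * t) - 1) * a))
U_fact = phase * rot @ disp

# compare on low-lying Fock states, where the truncation error is negligible
K = 10
idx = np.concatenate([np.arange(K), Nf + np.arange(K)])
err = np.max(np.abs((U - U_fact)[np.ix_(idx, idx)]))
```

For the small displacement amplitudes involved here, the two expressions agree to numerical precision on the retained block.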
\section{Semi-classical Approaches}
In this section we present several semi-classical models which reproduce the same periodic collapse and revival pattern as seen in the fully quantum mechanical case \eqref{eq:qV}. In these models, the only quantum element in the setup is the atom, which is modelled as a two-level system. We start with a general formalism where the atom is subject to a random unitary channel, then we explicitly consider three examples.
\subsection{General Formalism}
In our semi-classical models we perform the same atom interferometry experiment but we replace $\hat{U}_q(t)$ with a random unitary channel. To be specific, the atom is prepared in the $|+\rangle$ state. It then evolves under a random unitary channel whose effect on the atomic state is described by
\begin{equation}\label{eq:scState}
\rho(t)=\langle \hat{U}_{sc}(t)|+\rangle\langle+|\hat{U}_{sc}^{\dagger}(t) \rangle_c,
\end{equation}
where $\hat{U}_{sc}(t)$ is a phase-shift unitary operator,
\begin{equation}\label{eq:scU}
\hat{U}_{sc}(t)=e^{i\phi(t)}|1\rangle\langle1|+|0\rangle\langle0|,
\end{equation}
$\phi(t)$ is a real-valued random variable at each time, and $\langle \cdot \rangle_c$ refers to taking the average over the classical probability distribution of random variables. The phase shift $\phi(t)$ can, for instance, be generated via the Hamiltonian of the atom,
\begin{equation}
\hat{H}_{sc}=G(t)\hat{\sigma}_z,\label{eq:HamGen}
\end{equation}
where
\begin{equation}
G(t)=-\frac{1}{2}\frac{d\phi(t)}{dt}.\label{eq:gT}
\end{equation}
The atomic state can be explicitly written as
\begin{align}
\rho(t)&=\frac{1}{2}|1\rangle\langle 1|+\frac{1}{2}|0\rangle\langle 0|+\frac{1}{2}\langle e^{i\phi(t)} \rangle_c|1\rangle\langle0|\nonumber\\
&+\frac{1}{2}\langle e^{-i\phi(t)} \rangle_c|0\rangle\langle1|.
\end{align}
To finish the interferometry, as in the quantum case, a phase shift $\varphi$ is applied to $|1\rangle$, then the Hadamard gate Eq.~\eqref{eq:hadamard} acts on the atom, and finally we measure the atom position. The interference visibility is found to be
\begin{equation}
V=|\langle e^{i\phi(t)} \rangle_c|.
\end{equation}
The condition for reproducing the quantum collapse and revival of the visibility is therefore
\begin{equation}\label{eq:condition}
|\langle e^{i\phi(t)} \rangle_c| = e^{-16\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\frac{\omega t}{2}}.
\end{equation}
Any random unitary channel, i.e., classical probability distribution of $\phi(t)$, which can satisfy the condition Eq.~\eqref{eq:condition}, will reproduce the same visibility as a function of time as governed by the quantum interaction Hamiltonian Eq.~\eqref{eq:qHam}.
This will be true if the classical uncertainty associated with $\phi(t)$ vanishes with period $2\pi/\omega$, but remains at intermediate times.
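The statement that the visibility of the full interferometric sequence equals $|\langle e^{i\phi(t)}\rangle_c|$ can be illustrated with a short simulation. In the sketch below (an illustration with an arbitrary example ensemble, a zero-mean Gaussian phase, not tied to any particular semi-classical model) the measurement probability is averaged over the random phase and the visibility is extracted by scanning $\varphi$:

```python
import numpy as np

# Semi-classical interferometer: Hadamard, random phase phi, controlled
# phase varphi, Hadamard. P0(varphi) = <(1 + cos(phi + varphi))/2>_c,
# so the visibility equals |<e^{i phi}>_c|.
rng = np.random.default_rng(3)
phi = rng.normal(0.0, 0.7, 400_000)        # example classical phase ensemble
r = np.mean(np.exp(1j * phi))              # characteristic function at k = 1

varphi = np.linspace(0.0, 2 * np.pi, 401)
P = np.array([0.5 * (1.0 + np.cos(phi + v)).mean() for v in varphi])
V = (P.max() - P.min()) / (P.max() + P.min())
```

The scanned visibility $V$ matches $|\langle e^{i\phi}\rangle_c|$, which for this Gaussian ensemble equals $e^{-\sigma^2/2}$.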
We will now give several examples of such a $\phi(t)$.
\subsection{Example semi-classical model 1} \label{sec:ex1}
The first semi-classical model is based on a semi-classical mean-field approximation to the quantum Hamiltonian Eq.~\eqref{eq:qHam}. We assume that the atom is a two-state quantum system, while the mechanical oscillator is classical. The atom evolves according to the Hamiltonian
\begin{equation}\label{eq:sc1h}
\hat{H}_{sc1}=\sqrt{2}g x(t)\hat{\sigma}_z,
\end{equation}
where $x(t)$ is the dimensionless position of the mechanical oscillator, which is related to the physical displacement $X$ of the oscillator via the `zero-point length', $x=X\sqrt{m\omega}$. The mechanical oscillator is assumed to only see the mean-field effect of the atom, i.e., the force applied from the atom onto the mechanical oscillator is $F=-\sqrt{2m\omega}g\langle \hat{\sigma}_z \rangle$. As the atom is in the state $(|0\rangle+|1\rangle)/\sqrt{2}$ before the interaction with the mechanical oscillator starts, and the interaction Eq.~\eqref{eq:sc1h} only induces a phase difference between the two basis states $|0\rangle$ and $|1\rangle$, $F=0$ holds throughout the time evolution. Therefore we can write the time evolution of the dimensionless mechanical oscillator position as
\begin{equation}
x(t)=x_0\cos\omega t+p_0\sin\omega t,
\end{equation}
where $x_0$ is the initial value of dimensionless position and $p_0\sqrt{m\omega}$ is the initial value of the momentum. The energy of the classical oscillator is therefore
\begin{equation}
E(x_{0},p_{0}) = \frac{\omega}{2}(x_{0}^{2}+p_{0}^{2}).
\end{equation}
The Hamiltonian \eqref{eq:sc1h} induces evolution according to the unitary
\begin{equation}
\hat{U}_{sc1}(t) = \exp\left(-i \sqrt{2} g \int_{0}^{t} d\tau\, x(\tau)\, \hat{\sigma}_{z}\right).
\end{equation}
This is (up to a global phase) of the form \eqref{eq:scU}, where
\begin{align}\label{eq:sc1Phase}
\phi(t)&=-2\sqrt{2}g\int_0^t d\tau x(\tau)\nonumber\\
&=-2\sqrt{2}\frac{g}{\omega}(x_{0}\sin\omega t+p_{0}(1-\cos\omega t)).
\end{align}
If we assume the classical harmonic oscillator is in a thermal state at inverse temperature $\beta$, then the energy distribution is given by a Boltzmann distribution,
\begin{equation}
p(E(x_{0},p_{0})) = \frac{\beta\omega}{2\pi}e^{-\frac{\beta\omega}{2}(x_{0}^{2}+p_{0}^{2})}.
\end{equation}
Thus we see the variables $x_{0}$ and $p_{0}$ have a normal distribution with standard deviation $\sqrt{1/\beta\omega}$.
Therefore, the interferometric visibility is
\begin{align}\label{eq:sc1V}
\left|\langle e^{i\phi(t)} \rangle_c \right| &= \left|\int dx_{0}dp_{0}\, p(E(x_{0},p_{0}))\, e^{i\phi(t)}\right| \nonumber \\
&=e^{-16n_c\frac{g^2}{\omega^2}\sin^2\frac{\omega t}{2}},
\end{align}
where we defined the classical phonon number as $n_c=1/\beta\omega$.
This is of the same form as Eq.~\eqref{eq:condition}, except that the factor $\tilde{n}+1/2$ in the exponential is replaced by $n_c$. Recall that the quantum thermal phonon number is $\tilde{n}=1/(\exp(\omega/k_{\mathrm{B}}T)-1)$. For high temperature, $k_{\mathrm{B}}T\gg \omega$, we have $\tilde{n}+1/2 \approx n_c$, and the visibility Eq.~\eqref{eq:sc1V} is thus indistinguishable from the quantum visibility Eq.~\eqref{eq:qV}. At low temperature, the difference between $\tilde{n}+1/2$ and $n_c$ is significant. However, the quantum visibility can still be reproduced if the classical oscillator begins at a higher temperature, such that the standard deviations of $x_0$ and $p_0$ are proportional to $\sqrt{\tilde{n}+1/2}$ instead of the thermal width $\sqrt{n_c}$.
We can explain the collapse and revival of the interferometric visibility in this semi-classical model as follows. The phase shift in this example, Eq.~\eqref{eq:sc1Phase}, is proportional to the time integral of the position of the classical oscillator from time $t=0$. The uncertainty in the initial position and momentum of the oscillator translates into uncertainty in the phase shift, which leads to a reduction in visibility. However, since the motion of the mechanical oscillator is periodic, the integral of the position vanishes with certainty every mechanical period $t=2\pi/\omega$, implying that the phase shift at these times is certainly zero, and the visibility therefore periodically revives.
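A direct Monte-Carlo average over the classical thermal ensemble confirms Eq.~\eqref{eq:sc1V}; the following sketch (illustration only, with arbitrarily chosen example parameters) samples $x_0$ and $p_0$ from the Boltzmann distribution:

```python
import numpy as np

# Monte-Carlo check of the mean-field model: |<e^{i phi(t)}>_c| should equal
# exp(-16 n_c (g/w)^2 sin^2(wt/2)) for a classical thermal oscillator.
rng = np.random.default_rng(1)
g, w, n_c, t = 0.1, 1.0, 2.0, 1.3
x0 = rng.normal(0.0, np.sqrt(n_c), 500_000)
p0 = rng.normal(0.0, np.sqrt(n_c), 500_000)
phi = -2 * np.sqrt(2) * (g / w) * (x0 * np.sin(w * t) + p0 * (1 - np.cos(w * t)))
V_mc = np.abs(np.mean(np.exp(1j * phi)))
V_exact = np.exp(-16 * n_c * (g / w)**2 * np.sin(w * t / 2)**2)
```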
Note that this semi-classical model based on the mean-field interaction with a classical oscillator shares the same idea as the optomechanical example in Ref.~\cite{armata2016quantum}, and a similar argument has recently been made~\cite{hosten2021testing} against the proposal in Ref.~\cite{Carney2021}. In the next subsections we describe other semi-classical models that do not correspond to the interaction of the atom with a classical thermal mechanical oscillator, but which nevertheless reproduce the same interference visibility as Eq.~\eqref{eq:qV}.
\subsection{Example semi-classical model 2}
The second semi-classical model assumes an interaction Hamiltonian
\begin{equation}\label{eq:Hsc2}
\hat{H}_{sc2}=\sqrt{2}g\tilde{x}_0\cos\left(\frac{\omega t}{2}\right)\hat{\sigma}_z,
\end{equation}
where $\tilde{x}_0$ is a Gaussian random variable with mean $0$ and standard deviation $\sqrt{\tilde{n}+1/2}$. The corresponding phase modulation $\phi(t)$ in Eq.~\eqref{eq:scU} is
\begin{equation}\label{eq:sc2Phase}
\phi(t)=-4\sqrt{2}\frac{g}{\omega}\sin\left(\frac{\omega t}{2}\right)\tilde{x}_0.
\end{equation}
At times $t=2n\pi/\omega$ with $n\in \mathbb{Z}$, $\phi(t)=0$; the randomness associated with $\tilde{x}_0$ thus disappears, leading to a full revival of the interference visibility. It is straightforward to show that the visibility Eq.~\eqref{eq:qV} is recovered.
This example might be considered a special case of the first, with $p_{0}=0$. In such a case there is no uncertainty in the initial momentum of the classical oscillator. As a result, the integral of the position \eqref{eq:sc1Phase} is periodically zero with half the period, therefore in order to match \eqref{eq:qV} in this example, the classical oscillator must have half the frequency, as seen in \eqref{eq:Hsc2}.
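The claim that Eq.~\eqref{eq:qV} is recovered can be verified directly by sampling $\tilde{x}_0$; the sketch below is an illustration with arbitrarily chosen example parameters:

```python
import numpy as np

# Monte-Carlo check for semi-classical model 2: a single Gaussian random
# amplitude with standard deviation sqrt(nbar + 1/2) reproduces Eq. (qV).
rng = np.random.default_rng(2)
g, w, nbar, t = 0.1, 1.0, 2.0, 1.3
x0 = rng.normal(0.0, np.sqrt(nbar + 0.5), 500_000)
phi = -4 * np.sqrt(2) * (g / w) * np.sin(w * t / 2) * x0
V_mc = np.abs(np.mean(np.exp(1j * phi)))
V_exact = np.exp(-16 * (g / w)**2 * (nbar + 0.5) * np.sin(w * t / 2)**2)
```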
The phase modulation Eq.~\eqref{eq:sc2Phase} is the product of one random variable and a periodic time-dependent function. In comparison, in the first semi-classical model, Eq.~\eqref{eq:sc1Phase} contains two random variables, each one multiplied by a periodic time-dependent function. It is possible to construct more semi-classical models, by summing up larger numbers of terms, each term made of the product between a random variable and a periodic time-dependent function. In the next subsection, we will describe a systematic way of constructing semi-classical models based on the characteristic function of classical random variables.
\subsection{Characteristic function method and example semi-classical model 3}
We can construct semi-classical models directly from Eq.~\eqref{eq:scU}. The condition for reproducing the quantum visibility, Eq.~\eqref{eq:condition}, is related to the characteristic function of a random variable. At each time $t$, $\phi(t)$ is a random variable. Its characteristic function is
\begin{equation}
\Psi_{\phi(t)}(k)=\langle e^{ik\phi(t)} \rangle_c.
\end{equation}
The condition Eq.~\eqref{eq:condition} is therefore the requirement
\begin{equation}\label{eq:char}
|\Psi_{\phi(t)}(k=1)|=e^{-16\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\frac{\omega t}{2}}.
\end{equation}
There are an infinite number of $\Psi_{\phi(t)}(k)$ (and therefore $\phi(t)$), that satisfy Eq.~\eqref{eq:char}. As an example, we choose
\begin{equation}
\Psi_{\phi(t)}(k)=e^{-16k^2\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\frac{\omega t}{2}}.
\label{eq:charFun}
\end{equation}
This can be the characteristic function of a single Gaussian random variable, or the sum of several independent Gaussian random variables. For the former case, $\phi(t)$ is a Gaussian random variable with mean $0$ and time-dependent variance
\begin{equation}
\sigma^2=32\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\left(\frac{\omega t}{2}\right).
\end{equation}
Note that the semi-classical example 2 considered in the previous subsection is included in this situation. For the latter case, we apply this characteristic function method to explicitly construct another semi-classical model, example 3. Specifically, we split the characteristic function Eq.~\eqref{eq:charFun} into the product of two exponentials, each corresponding to the characteristic function of a Gaussian random variable,
\begin{align}
\Psi_{\phi(t)}(k)&=e^{-16k^2\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\frac{\omega t}{2}\sin^2\omega t}\nonumber\\
&\times e^{-16k^2\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\frac{\omega t}{2}\cos^2\omega t}.
\end{align}
Thus $\phi(t)=v_1+v_2$ is the sum of two independent zero-mean Gaussian random variables $v_1$ and $v_2$, with variance
\begin{subequations}
\begin{align}
&\sigma^2_{v_1}=32\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\left(\frac{\omega t}{2}\right)\sin^2(\omega t),\\
&\sigma^2_{v_2}=32\frac{g^2}{\omega^2}(\tilde{n}+\frac{1}{2})\sin^2\left(\frac{\omega t}{2}\right)\cos^2(\omega t).
\end{align}
\end{subequations}
These can be realised by choosing
\begin{subequations}
\begin{align}
&v_1=4\sqrt{2}\frac{g}{\omega}\sin\left(\frac{\omega t}{2}\right)\sin(\omega t)x_1,\\
&v_2=4\sqrt{2}\frac{g}{\omega}\sin\left(\frac{\omega t}{2}\right)\cos(\omega t)x_2,
\end{align}
\end{subequations}
where $x_1$ and $x_2$ are two independent Gaussian random variables with mean $0$ and standard deviation $\sqrt{\tilde{n}+1/2}$. By making use of Eqs.~\eqref{eq:HamGen} and \eqref{eq:gT}, we get the Hamiltonian
\begin{align}
\hat{H}_{sc3}=&-\frac{1}{\sqrt{2}}g\Big[\left(3\sin\frac{3\omega t}{2}-\sin\frac{\omega t}{2}\right)x_1\nonumber\\
&+\left(3\cos\frac{3\omega t}{2}-\cos\frac{\omega t}{2}\right)x_2\Big]\hat{\sigma}_z.
\end{align}
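One can verify that integrating $G(t)=-\frac{1}{2}\frac{d\phi}{dt}$ from $\hat{H}_{sc3}$ recovers $\phi(t)=v_1+v_2$. The sketch below (an illustration; the representative values $x_1=x_2=1$ are an arbitrary choice) checks this trigonometric identity numerically:

```python
import numpy as np

# Consistency check for model 3: numerically integrate the sigma_z
# coefficient G(t) of H_sc3 (with x1 = x2 = 1) and compare phi = -2 int G
# with the closed form phi(t) = v1 + v2 at the final time.
g, w = 0.1, 1.0
tau = np.linspace(0.0, 2.5, 200_001)
G = -(g / np.sqrt(2)) * ((3 * np.sin(3 * w * tau / 2) - np.sin(w * tau / 2))
                         + (3 * np.cos(3 * w * tau / 2) - np.cos(w * tau / 2)))
dtau = tau[1] - tau[0]
phi_int = -2.0 * np.sum(0.5 * (G[1:] + G[:-1])) * dtau   # trapezoidal rule
t = tau[-1]
phi_direct = (4 * np.sqrt(2) * (g / w) * np.sin(w * t / 2)
              * (np.sin(w * t) + np.cos(w * t)))
```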
\section{Non-classicality}
So far our approach has been to treat the problem in a semi-classical picture where only the atom position is a quantum mechanical degree of freedom and it is subject to a random unitary channel. In this section we look at non-classicality of the system when it is treated quantum mechanically as a whole (as described in section~\ref{sec:level1}). To analyze if entanglement can be inferred, we calculate the Wigner function negativity of the oscillator state when it interacts with the atomic interferometer. The negativity of the Wigner function is a measure of the non-classicality of a quantum state \cite{Kenfack2004}, and quantifies the extent to which the corresponding Wigner function has negative values.
The Wigner function of a quantum state $\ket{\psi}$ is defined as
\begin{equation}
W(q,p) = \frac{1}{2\pi \hbar} \int dx \braket{q-\frac{1}{2}x}{\psi}\braket{\psi}{q+\frac{1}{2}x}e^{\frac{ipx}{\hbar}}.
\end{equation}
The negativity of a Wigner function is defined as
\begin{align}
\delta(W) &= \int dq \, dp\, \left(|W(q,p)| - W(q,p)\right) \nonumber \\
&= \int dq \, dp\, |W(q,p)| - 1. \label{eq:wigneg}
\end{align}
The scheme in \cite{Carney2021} relies on enhancing the sensitivity for detecting entanglement (by witnessing the decline and revival of the interference visibility) through increased temperature of the oscillator. To the extent that the state \eqref{eq:evolve} is entangled, the state of the oscillator should be in a superposition and hence non-classical.
The Wigner negativity was used in \cite{Kleckner2008} to study whether an oscillator achieved non-classical states if it is located in one arm of a single photon interferometer. There it was found that the negativity decreased as the initial temperature increased. We do a similar calculation here.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{wig_neg_plot.pdf}
\caption{Wigner negativity for the oscillator in the state \eqref{eq:evolve} at a half mechanical period $t=\pi/\omega$, after decoupling from the atom. The Wigner function is given by \eqref{eq:wig}. The negativity increases for larger interaction strength $\lambda$ but decreases with increasing initial oscillator temperature.}
\label{fig:wigneg}
\end{figure}
To study the state of the oscillator directly, we consider it at a half mechanical period $t=\pi/\omega$ where the entanglement is largest, and we decouple it from the atom by projecting the atom onto the state $\ketbra{+}{+}$, so as to not destroy the oscillator superposition.
The Wigner function of the oscillator in this state can be written in dimensionless quadrature operators as
\begin{equation}
W_{\rho_{\mathrm{th}}}(Q,P)= W_{+}(Q,P)+W_{-}(Q,P)+W_{\text{int}}(Q,P),\label{eq:wig}
\end{equation}
where
\begin{subequations}
\begin{align}
&W_{\pm}(Q,P) = \frac{1}{N} \exp\left(\frac{1}{2\tilde{n}+1}\left(-2P^{2}-\left(\sqrt{2}Q\pm\sqrt{8}\lambda\right)^{2}\right)\right), \\
&W_{\text{int}}(Q,P) = \frac{2}{N}\exp\left(\frac{1}{2\tilde{n}+1}\left(-2P^{2}-2Q^{2}\right)\right)\cos\left(8P\lambda\right),
\end{align}
\end{subequations}
where $\lambda = g/\omega$ and where
\begin{equation}
N = \pi(2\tilde{n}+1)\left(1+e^{-8\lambda^{2}(2\tilde{n}+1)}\right).
\end{equation}
We see that when $\lambda=0$, the Wigner function is a Gaussian centred at the origin in phase space corresponding to the initial thermal state.
The integral \eqref{eq:wigneg} must be performed numerically, and we show the results in Fig.~\ref{fig:wigneg}. We see that the negativity increases with the coupling strength $\lambda$; when the interaction strength is zero, the oscillator remains in a thermal state and hence is classical. For larger $\lambda$, the oscillator state contains more coherence at a half mechanical period and hence larger negativity. However, the negativity decreases with the initial oscillator temperature, so achieving large negativity at high temperature requires a larger $\lambda$ to introduce enough coherence to compensate for the thermal noise.
This can be seen directly by rewriting \eqref{eq:wig} more compactly as
\begin{align}
&W_{\rho_{\mathrm{th}}}(Q,P)\label{eq:wigcompact} \\
&\quad= \frac{2}{N}\exp\left(\frac{-2P^{2}-2Q^{2}}{2\tilde{n}+1}\right) \nonumber \\
&\qquad\times\left(\exp\left(\frac{-8\lambda^{2}}{2\tilde{n}+1}\right)\cosh\left(\frac{8Q\lambda}{2\tilde{n}+1}\right)+\cos(8P\lambda)\right),\nonumber
\end{align}
in which the negativity of the Wigner function is caused solely by the cosine term. This term produces more negativity for larger $\lambda$. Decreasing $\tilde{n}$ also increases the negativity, as it suppresses the exp-cosh term. The Wigner function is only negative for $Q$ sufficiently close to zero, with the troughs occurring at $P=(2n+1)\pi/(8\lambda)$ for $n\in \mathbb{Z}$.
In other words, the evolved state \eqref{eq:evolve} of the oscillator becomes more classical as the initial temperature increases.
This is to be expected from our first semi-classical model (see section~\ref{sec:ex1}) which identically reproduces the same visibility decline and revival as the fully quantum mechanical model in the case of high temperatures, and differs only at lower temperatures. Unless the oscillator is very close to its ground state, the non-classicality is vanishingly small, especially for very low coupling strengths. Since $\lambda \ll 1$ in Ref. \cite{Carney2021}, one would need to operate at and independently verify the ground state of the oscillator to infer entanglement generation.
Let us look more closely at the negativity in the small coupling regime $\lambda \ll 1$. Let us assume that the system is in the ground state, $\tilde{n}=0$, since as we saw earlier, the negativity decreased with increasing temperature.
From \eqref{eq:wigcompact} one can use the triangle inequality to obtain
\begin{align}
|W_{\rho_{\mathrm{th}}}(Q,P)| &\leq \frac{2}{N}\exp\left(-2P^{2}-2Q^{2}\right) \\
&\quad\times\left(\exp\left(-8\lambda^{2}\right)\cosh\left(8Q\lambda\right)+1\right).\nonumber
\end{align}
Therefore, after integrating we have
\begin{equation}
\delta(W_{\rho_{\text{th}}}) \leq \tanh\left(4\lambda^{2}\right).
\end{equation}
For $\lambda \ll 1$, we can approximate this as
\begin{equation}
\delta(W_{\rho_{\text{th}}}) \lesssim 4\lambda^{2}.
\end{equation}
Thus we see that for small $\lambda$ the negativity of the oscillator state produced in the experiment \eqref{eq:evolve} becomes vanishingly small, even when the system begins in the ground state. For finite temperature, it effectively vanishes.
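Both the bound and the behaviour of the negativity can be confirmed by numerical integration of Eq.~\eqref{eq:wigcompact} on a phase-space grid. The sketch below is an illustration (the coupling value and grid resolution are arbitrary numerical choices):

```python
import numpy as np

# Numerical Wigner negativity for the ground-state oscillator (nbar = 0),
# checked against the analytic bound delta <= tanh(4 lam^2).
lam, nbar = 0.5, 0.0
q = np.linspace(-5.0, 5.0, 1001)
Q, P = np.meshgrid(q, q)
s = 2 * nbar + 1
N = np.pi * s * (1 + np.exp(-8 * lam**2 * s))
W = (2 / N) * np.exp((-2 * P**2 - 2 * Q**2) / s) * (
        np.exp(-8 * lam**2 / s) * np.cosh(8 * Q * lam / s) + np.cos(8 * P * lam))
dA = (q[1] - q[0])**2
norm = np.sum(W) * dA                  # Wigner function normalisation, ~1
delta = np.sum(np.abs(W)) * dA - 1.0   # Wigner negativity, Eq. (wigneg)
bound = np.tanh(4 * lam**2)
```

The computed negativity is strictly positive for nonzero coupling and lies below the analytic bound.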
\section{Discussion}
The authors of \cite{Carney2021} support their claim that the collapse and revival of the interferometric visibility is a true signature of entanglement by proving a theorem that shows that if no quantum entanglement is generated, the visibility cannot revive. The theorem rests on some assumptions, and since we claim to be providing semi-classical models which do not generate any entanglement but nevertheless display the same collapse and revival signature, we ought to discuss how our models contradict the theorem proved in \cite{Carney2021}.
Here we quote the theorem in full.
\begin{theorem}\label{thm:cmt}
Let $L$ be a channel on $H_{A}\otimes H_{B}$ where $H_{A}$ is a two-state system and $H_{B}$ is arbitrary. Assume that:
\begin{enumerate}
\item The channel $L$ generates time evolution, in a manner consistent with the time-translation invariance, thus obeying a semigroup composition law $L_{t\rightarrow t^{\prime\prime}} = L_{t\rightarrow t^{\prime}}L_{t^{\prime}\rightarrow t^{\prime\prime}}$ for all $t\leq t^{\prime}\leq t^{\prime\prime}$.
\item The two-level subsystem $H_{A}$ has its populations preserved under the time evolution, $\sigma_{z}(t) = \sigma_{z}(0)$.
\item $L$ is a separable channel: all of its Kraus operators are simple products. In particular, this means that any initial separable (non-entangled) state evolves to a separable state: $\rho(t) = L_{t}[\rho(0)]$ is separable for all separable initial states $\rho(0)$.
\end{enumerate}
Then the visibility $V(t)$ is a monotonic function of time.
\end{theorem}
In our semi-classical models, the quantum channel is the random unitary channel given by Eqs.~\eqref{eq:scState} and \eqref{eq:scU}. Since it commutes with $\sigma_{z}$ it clearly satisfies assumption 2, and by appending an arbitrary Hilbert space $H_{B}$ we see that $L \otimes \mathbb{I}$ satisfies assumption 3 as well. Thus the conflict with this theorem must lie in the first assumption. Indeed, the proof of Theorem 1 in Ref.~\cite{Carney2021} relies on the form of the Lindblad master equation, where the Lindblad operators are time independent. The necessary and sufficient condition for the existence of such a Lindblad master equation is the divisibility of the dynamics (the first condition of theorem~\ref{thm:cmt}). The additional assumption of time-translation invariance implies that the Lindblad operators are time independent, and this is used in the proof. This is equivalent to requiring that the quantum channel satisfies a one-parameter semigroup composition law $L_{t_{1}}L_{t_{2}} = L_{t_{1}+t_{2}}$.
Our semi-classical models are not divisible, thus they do not correspond to a Lindblad master equation for the atom.
However, our first semi-classical model is in fact time translation invariant.
Although the Hamiltonians in our semi-classical models are explicitly time dependent, which indicates that the Hamiltonians themselves are not time translation invariant, the corresponding random unitary channel, after averaging over the classical randomness, can nevertheless be time translation invariant.
It is straightforward to check that our first semi-classical model is time translation invariant by showing that $L_{0\rightarrow \tau}=L_{t\rightarrow t+\tau}$ for all $t$.
The existence of our semi-classical models demonstrates that the conditions of this theorem are very restrictive, as many simple semi-classical models reproduce the decline and revival of interferometric visibility.
\section{Conclusions}
Experiments to probe the quantum nature of gravity have become promising research directions in recent years. While some proposals aim to test specific models \cite{marshall2003towards, pikovski2012probing, bekenstein2012tabletop, bassi2017gravitational}, others focus on indirect signatures of the quantization of gravity through gravitational generation of entanglement \cite{bose2017spin, marletto2017gravitationally, pedernales2021enhancing, Carney2021}. The proposed signature in Ref. \cite{Carney2021} is the loss and revival of visibility in an atomic interferometer, under specific assumptions on the dynamics. Here we show that this signature appears also in simple semi-classical models, thus such a signature by itself cannot indicate entanglement between the systems. According to the Ehrenfest theorem, the average behavior of a harmonic oscillator can be classically described. It is thus important to explore the classical picture to test whether specific signatures can reflect quantum behavior.
The semi-classical models we have discussed are reasonably general and applicable to other systems and states. In fact, a random unitary channel represents the time evolution of a quantum system under the influence of classical systems containing classical uncertainties~\cite{audenaert2008random}. As the collapse and revival of the interference visibility can be reproduced by the atom subject to random unitary channels, they cannot be considered good signatures of entanglement generation.
Our results do not contradict the claims of Ref.~\cite{Carney2021}, since the models we present do not satisfy the conditions on the dynamics under which entanglement can be inferred. But our findings highlight that such conditions are violated in many semi-classical scenarios. This indicates that the conditions imposed on the dynamics in Ref.~\cite{Carney2021} are very restrictive: they exclude simple and reasonable classical dynamics, and thus leave little room to certify the inference of entanglement generation unless there is supplementary evidence that the conditions are satisfied. For the very low coupling strengths envisioned in the experiment, the non-classicality vanishes unless the oscillator is nearly exactly in its ground state.
It therefore remains a significant challenge to verify the quantum nature of the interaction in such an experimental scenario.
\emph{Note added} -- We were made aware of Ref.~\cite{hosten2021testing} after completing the manuscript. Ref.~\cite{hosten2021testing} is closely related to Sec.~\ref{sec:ex1}, showing that collapses and revivals can also be explained using a semi-classical mean-field model.
\acknowledgements
MSK and YM thank EPSRC for financial support (EP/R044082/1, EP/P510257/1). IP and TG acknowledge support by the Swedish Research Council under
grant no. 2019-05615. IP also acknowledges support by the European Research Council under grant no. 742104 and The Branco Weiss Fellowship -- Society in Science.
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction}
Underwater robot picking uses robots to automatically capture sea creatures such as holothurians, echinus, scallops, or starfish in open-sea farms, where underwater object detection is the key technology for locating the creatures. Until now, the datasets used in this community have been released by the Underwater Robot Professional Contest (URPC$\protect\footnote{Underwater Robot Professional Contest: {\bf http://en.cnurpc.org}.}$), beginning in 2017, of which URPC2017 and URPC2018 are most often used for research. Unfortunately, as the information listed in Table \ref{Info} shows, the URPC series datasets do not provide the annotation files of their test sets and cannot be downloaded after the contest.
Therefore, researchers \cite{2020arXiv200511552C,2019arXiv191103029L} first have to divide the training data into new training and testing subsets, and then train both their proposed method and the other \emph{SOTA} methods on this split. On the one hand, retraining the other methods significantly increases the workload. On the other hand, different researchers divide the data in different ways,
\begin{table}[t]
\renewcommand\tabcolsep{3.5pt}
\caption{Information about all the collected datasets. * denotes that the test set's annotations are not available. \emph{3} in Class means three types of creatures are labeled, \emph{i.e.,} holothurian, echinus, and scallop. \emph{4} means four types of creatures are labeled (starfish added). Retention represents the proportion of images retained after similar images have been removed.}
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Dataset&Train&Test&Class&Retention&Year \\
\hline
URPC2017&17,655&985*&3&15\%&2017 \\
\hline
URPC2018&2,901&800*&4&99\%&2018 \\
\hline
URPC2019&4,757&1,029*&4&86\%&2019 \\
\hline
URPC2020$_{ZJ}$&5,543&2,000*&4&82\%&2020 \\
\hline
URPC2020$_{DL}$&6,575&2,400*&4&80\%&2020 \\
\hline
UDD&1,827&400&3&84\%&2020 \\
\hline
\end{tabular}
\label{Info}
\end{table}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=1\linewidth]{example.pdf}
\end{center}
\caption{Examples in DUO, which show a variety of scenarios in underwater environments.}
\label{exam}
\end{figure*}
so there is no unified benchmark for comparing the performance of different algorithms.
In terms of image content, there are a large number of similar or duplicate images in the URPC datasets. URPC2017 retains only 15\% of its images after similar images are removed, far less than the other datasets. Thus a detector trained on URPC2017 easily overfits and cannot reflect real performance.
For the other URPC datasets, each later one also includes images from the former, \emph{e.g.}, URPC2019 adds 2,000 new images compared to URPC2018; URPC2020$_{ZJ}$ adds 800 new images compared to URPC2019; and URPC2020$_{DL}$ adds 1,000 new images compared to URPC2020$_{ZJ}$. It is worth mentioning that the annotations of all these datasets are incomplete: some datasets lack starfish labels, and erroneous or missing labels are easy to find. \cite{DBLP:conf/iclr/ZhangBHRV17} pointed out that although CNN models have a strong fitting ability for any dataset, the existence of dirty data significantly weakens their robustness.
Therefore, a reasonable dataset (containing a small number of similar images as well as an accurate annotation) and a corresponding recognized benchmark are urgently needed to promote community development.
To address these issues, we introduce a dataset called Detecting Underwater Objects (DUO) by collecting and re-annotating all the available underwater datasets. It contains 7,782 underwater images after deleting overly similar images and has a more accurate annotation with four types of classes (\emph{i.e.,} holothurian, echinus, scallop, and starfish).
Besides, based on the MMDetection$\protect\footnote{MMDetection is an open source object detection toolbox based on PyTorch. {\bf https://github.com/open-mmlab/mmdetection}}$ \cite{chen2019mmdetection} framework, we also provide a \emph{SOTA} detector benchmark containing both efficiency and accuracy indicators, providing a reference for academic research and industrial applications. It is worth noting that a JETSON AGX XAVIER$\protect\footnote{JETSON AGX XAVIER is an embedded development board produced by NVIDIA which could be deployed in an underwater robot. Please refer to {\bf https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit} for more information.}$ was used to assess all the detectors in the efficiency test in order to simulate a robot-embedded environment. DUO will be released at https://github.com/chongweiliu soon.
In summary, the contributions of this paper can be listed as follows.
$\bullet$ By collecting and re-annotating all relevant datasets, we introduce a dataset called DUO with more reasonable annotations as well as a variety of underwater scenes.
$\bullet$ We provide a corresponding benchmark of \emph{SOTA} detectors on DUO including efficiency and accuracy indicators which could be a reference for both academic research and industrial applications.
\pagestyle{empty}
\section{Background}
In 2017, underwater object detection for open-sea farming was first proposed in the target recognition track of the Underwater Robot Picking Contest 2017$\protect\footnote{From 2020, the name has been changed to Underwater Robot Professional Contest, which is also short for URPC.}$ (URPC2017), which aims to promote the theory, technology, and industry of underwater agile robots and to fill the gap in underwater robotic grasping. The competition sets up a target recognition track, a fixed-point grasping track, and an autonomous grasping track. The target recognition track concentrates on finding {\bf high accuracy and efficiency} algorithms that could be used in an underwater robot for automatic grasping.
The datasets we used to generate the DUO are listed below. The detailed information has been shown in Table \ref{Info}.
{\bf URPC2017}: It contains 17,655 images for training and 985 images for testing and the resolution of all the images is 720$\times$405. All the images are taken from 6 videos at an interval of 10 frames. However, all the videos were filmed in an artificial simulated environment and pictures from the same video look almost identical.
{\bf URPC2018}: It contains 2,901 images for training and 800 images for testing and the resolutions of the images are 586$\times$480, 704$\times$576, 720$\times$405, and 1,920$\times$1,080. The test set's annotations are not available. Besides, some images were also collected from an artificial underwater environment.
{\bf URPC2019}: It contains 4,757 images for training and 1,029 images for testing and the highest resolution of the images is 3,840$\times$2,160, captured by a GoPro camera. The test set's annotations are also not available and it contains images from the former contests.
{\bf URPC2020$_{ZJ}$}: From 2020, the URPC will be held twice a year. It was held first in Zhanjiang, China, in April and then in Dalian, China, in August. URPC2020$_{ZJ}$ means the dataset released in the first URPC2020 and URPC2020$_{DL}$ means the dataset released in the second URPC2020. This dataset contains 5,543 images for training and 2,000 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available.
{\bf URPC2020$_{DL}$}: This dataset contains 6,575 images for training and 2,400 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available.
{\bf UDD \cite{2020arXiv200301446W}}: This dataset contains 1,827 images for training and 400 images for testing and the highest resolution of the images is 3,840$\times$2,160. All the images are captured by a diver and a robot in a real open-sea farm.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{pie.pdf}
\end{center}
\caption{The proportion distribution of the objects in DUO.}
\label{pie}
\end{figure}
\begin{figure*}
\centering
\subfigure[]{\includegraphics[width=3.45in]{imagesize.pdf}}
\subfigure[]{\includegraphics[width=3.45in]{numInstance.pdf}}
\caption{(a) The distribution of instance sizes for DUO; (b) The number of categories per image.}
\label{sum}
\end{figure*}
\section{Proposed Dataset}
\subsection{Image Deduplicating}
As we explained in Section 1, there are a large number of similar or repeated images in the series of URPC datasets. Therefore, it is important to delete duplicate or overly similar images and keep a variety of underwater scenarios when we merge these datasets together. Here we employ the Perceptual Hash algorithm (PHash) to remove those images. PHash has the special property that the hash value is dependent on the image content, and it remains approximately the same if the content is not significantly modified. Thus we can easily distinguish different scenarios and delete duplicate images within one scenario.
After deduplicating, we obtain 7,782 images (6,671 images for training; 1,111 for testing). The retention rate of the new dataset is 95\%, which means that there are only a few similar images in the new dataset. Figure \ref{exam} shows that our dataset also retains various underwater scenes.
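The deduplication step can be sketched in a few lines. This is a hedged illustration: DUO uses the DCT-based PHash, while the snippet below implements the simpler average-hash variant on toy grayscale arrays (no image library assumed). Near-duplicate frames are detected by a small Hamming distance between their hashes.

```python
# Sketch of perceptual-hash deduplication with the average-hash variant.
# Images are toy 8x8 grayscale arrays (lists of rows of intensities in [0, 255]).

def average_hash(pixels):
    """64-bit hash: each bit records whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def deduplicate(images, threshold=5):
    """Keep one representative per group of near-duplicate hashes."""
    kept, hashes = [], []
    for name, px in images:
        h = average_hash(px)
        if all(hamming(h, h0) > threshold for h0 in hashes):
            kept.append(name)
            hashes.append(h)
    return kept

# Toy example: two nearly identical bright frames and one very different frame.
bright = [[200] * 8 for _ in range(8)]
bright2 = [[198] * 8 for _ in range(8)]
checker = [[255 if (i + j) % 2 else 0 for j in range(8)] for i in range(8)]
print(deduplicate([("f1", bright), ("f2", bright2), ("f3", checker)]))  # -> ['f1', 'f3']
```

The real PHash replaces the plain mean with a low-frequency DCT comparison, which makes the hash robust to small brightness and compression changes while still separating distinct scenes.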
\subsection{Image Re-annotation}
Due to the small size of the objects and the blurry underwater environment, there are always missing or wrong labels in the existing annotation files. In addition, some test sets' annotation files are not available and some datasets lack starfish annotations. To address these issues, we adopt the following process, which combines a CNN model and manual annotation, to re-annotate the images. Specifically, we first train a detector (\emph{i.e.,} GFL \cite{li2020generalized}) with the originally labeled images. The trained detector then predicts on all 7,782 images. We treat these predictions as the ground truth and use them to train the GFL again. The final GFL predictions are called {\bf the coarse annotation}. Next, we apply manual correction to obtain the final annotation, called {\bf the fine annotation}. Notably, we adopt the COCO \cite{Belongie2014} annotation format as the final format.
\subsection{Dataset Statistics}
{\bf The proportion of classes}: The total number of objects is 74,515. There are 7,887 holothurians, 50,156 echinus, 1,924 scallops, and 14,548 starfish. Figure \ref{pie} shows the proportion of each creature, where echinus accounts for 67.3\% of the total. The data distribution shows an obvious long tail because different seafoods have different economic value and are therefore farmed in different quantities.
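The reported shares follow directly from the instance counts above; a minimal check (counts are those stated in the text):

```python
# Class counts for DUO as stated above; shares in percent, rounded to one decimal.
counts = {"holothurian": 7887, "echinus": 50156, "scallop": 1924, "starfish": 14548}
total = sum(counts.values())
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total, shares["echinus"], shares["scallop"])  # -> 74515 67.3 2.6
```

The 26:1 ratio between the largest and smallest class is what makes the long-tail problem severe for this dataset.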
{\bf The distribution of instance sizes}: Figure \ref{sum}(a) shows the instance size distribution of DUO. \emph{Percent of image size} represents the ratio of object area to image area, and \emph{Percent of instance} represents the ratio of the corresponding number of objects to the total number of objects. Because the creatures are small and the images are high-resolution, the vast majority of objects occupy only 0.3\% to 1.5\% of the image area.
{\bf The instance number per image}: Figure \ref{sum}(b) illustrates the number of instances per image for DUO. \emph{Number of instances} represents the number of objects one image has, and \emph{Percentage of images} represents the ratio of the corresponding number of images to the total number of images. Most images contain between 5 and 15 instances, with an average of 9.57 instances per image.
{\bf Summary}:
In general, smaller objects are harder to detect. In PASCAL VOC \cite{Everingham2007The} or COCO \cite{Belongie2014}, roughly 50\% of all objects occupy no more than 10\% of the image itself, and the rest are evenly spread from 10\% to 100\%.
In terms of instances per image, COCO contains 7.7 instances per image and VOC contains 3. In comparison, DUO has 9.57 instances per image, and most instances occupy less than 1.5\% of the image area.
Therefore, DUO consists almost exclusively of massive small instances and exhibits a long-tail distribution at the same time, which makes it a promising testbed for designing detectors that handle massive small objects while remaining efficient enough for underwater robot picking.
\section{Benchmark}
Because the aim of underwater object detection for robot picking is to find {\bf high accuracy and efficiency} algorithms, we consider both accuracy and efficiency evaluations in the benchmark, as shown in Table \ref{ben}.
\subsection{Evaluation Metrics}
Here we adopt the standard COCO metrics (mean average precision, \emph{i.e.,} mAP) for the accuracy evaluation and also provide the mAP of each class due to the long-tail distribution.
{\bf AP} -- mAP at IoU=0.50:0.05:0.95.
{\bf AP$_{50}$} -- mAP at IoU=0.50.
{\bf AP$_{75}$} -- mAP at IoU=0.75.
{\bf AP$_{S}$} -- {\bf AP} for small objects of area smaller than 32$^{2}$.
{\bf AP$_{M}$} -- {\bf AP} for objects of area between 32$^{2}$ and 96$^{2}$.
{\bf AP$_{L}$} -- {\bf AP} for large objects of area bigger than 96$^{2}$.
{\bf AP$_{Ho}$} -- {\bf AP} in holothurian.
{\bf AP$_{Ec}$} -- {\bf AP} in echinus.
{\bf AP$_{Sc}$} -- {\bf AP} in scallop.
{\bf AP$_{St}$} -- {\bf AP} in starfish.
For the efficiency evaluation, we provide three metrics:
{\bf Param.} -- The parameters of a detector.
{\bf FLOPs} -- The number of floating-point operations a detector performs in a single forward pass.
{\bf FPS} -- Frames per second.
Notably, {\bf FLOPs} is calculated under the 512$\times$512 input image size and {\bf FPS} is tested on a JETSON AGX XAVIER under MODE$\_$30W$\_$ALL.
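All the AP variants above threshold detections on intersection-over-union. A minimal IoU for axis-aligned boxes (a sketch of the standard definition, not MMDetection's implementation) is:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive at AP_50 only if its IoU >= 0.5:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping boxes -> 1/3, missed at AP_50
```

The stricter thresholds (up to 0.95 in steps of 0.05 for the main AP) reward precise localization, which is particularly demanding for the tiny instances that dominate DUO.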
\subsection{Standard Training Configuration}
We follow a widely used open-source toolbox, \emph{i.e.,} MMDetection (V2.5.0), to produce our benchmark. During training, the standard configurations are as follows:
$\bullet$ We initialize the backbone models (\emph{e.g.,} ResNet50) with pre-trained parameters on ImageNet \cite{Deng2009ImageNet}.
$\bullet$ We resize each image into 512 $\times$ 512 pixels both in training and testing. Each image is flipped horizontally with 0.5 probability during training.
$\bullet$ We normalize RGB channels by subtracting 123.675, 116.28, 103.53 and dividing by 58.395, 57.12, 57.375, respectively.
$\bullet$ The SGD method is adopted to optimize the model. The initial learning rate is 0.005 on a single GTX 1080Ti with a batch size of 4, and is decreased by a factor of 10 at the 8th and 11th epochs. WarmUp \cite{2019arXiv190307071L} is also employed in the first 500 iterations. There are 12 training epochs in total.
$\bullet$ Testing time augmentation (\emph{i.e.,} flipping test or multi-scale testing) is not employed.
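Two of the numerical settings above can be restated as plain functions (a hedged toy restatement of the configuration, not MMDetection code; the mean/std values are the ImageNet statistics quoted above):

```python
MEAN = (123.675, 116.28, 103.53)   # per-channel RGB means quoted above
STD = (58.395, 57.12, 57.375)      # per-channel standard deviations

def normalize_pixel(rgb):
    """Normalize one RGB pixel as described in the training configuration."""
    return tuple((c - m) / s for c, m, s in zip(rgb, MEAN, STD))

def lr_at_epoch(epoch, base_lr=0.005, decay_epochs=(8, 11), gamma=0.1):
    """Step schedule: start at 0.005, multiply by 0.1 at the 8th and 11th epochs."""
    lr = base_lr
    for e in decay_epochs:
        if epoch >= e:
            lr *= gamma
    return lr

print(normalize_pixel(MEAN))  # -> (0.0, 0.0, 0.0)
print(lr_at_epoch(1), lr_at_epoch(12))
```

Keeping these settings identical across all twelve detectors is what makes the accuracy columns of the benchmark directly comparable.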
\subsection{Benchmark Analysis}
Table \ref{ben} shows the benchmark for the \emph{SOTA} methods. Multi- and one-stage detectors with three kinds of backbones (\emph{i.e.,} ResNet18, 50, and 101) give a comprehensive assessment on DUO. We also deploy all the methods to the AGX to assess their efficiency.
In general, the multi-stage (Cascade R-CNN) detectors have high accuracy and low efficiency, while the one-stage (RetinaNet) detectors have low accuracy and high efficiency. However, due to recent studies \cite{zhang2019bridging} on the allocation of more reasonable positive and negative samples in training, one-stage detectors (ATSS or GFL) can achieve both high accuracy and high efficiency.
\begin{table*}[htbp]
\renewcommand\tabcolsep{3.0pt}
\begin{center}
\caption{Benchmark of \emph{SOTA} detectors (single-model and single-scale results) on DUO. FPS is measured on the same machine with a JETSON AGX XAVIER under the same MMDetection framework, using a batch size of 1 whenever possible. R: ResNet.}
\label{ben}
\begin{tabular}{|l|l|c|c|c|ccc|ccc|cccc|}
\hline
Method&Backbone&Param.&FLOPs&FPS&AP&AP$_{50}$&AP$_{75}$&AP$_{S}$&AP$_{M}$&AP$_{L}$&AP$_{Ho}$&AP$_{Ec}$&AP$_{Sc}$&AP$_{St}$ \\
\hline
\emph{multi-stage:} &&&&&&&&&&&&&& \\
\multirow{3}{*}{Faster R-CNN \cite{Ren2015Faster}}
&R-18&28.14M&49.75G&5.7&50.1&72.6&57.8&42.9&51.9&48.7&49.1&60.1&31.6&59.7\\
&R-50&41.14M&63.26G&4.7&54.8&75.9&63.1&53.0&56.2&53.8&55.5&62.4&38.7&62.5\\
&R-101&60.13M&82.74G&3.7&53.8&75.4&61.6&39.0&55.2&52.8&54.3&62.0&38.5&60.4\\
\hline
\multirow{3}{*}{Cascade R-CNN \cite{Cai_2019}}
&R-18&55.93M&77.54G&3.4&52.7&73.4&60.3&\bf 49.0&54.7&50.9&51.4&62.3&34.9&62.3\\
&R-50&68.94M&91.06G&3.0&55.6&75.5&63.8&44.9&57.4&54.4&56.8&63.6&38.7&63.5\\
&R-101&87.93M&110.53G&2.6&56.0&76.1&63.6&51.2&57.5&54.7&56.2&63.9&41.3&62.6\\
\hline
\multirow{3}{*}{Grid R-CNN \cite{lu2019grid}}
&R-18&51.24M&163.15G&3.9&51.9&72.1&59.2&40.4&54.2&50.1&50.7&61.8&33.3&61.9\\
&R-50&64.24M&176.67G&3.4&55.9&75.8&64.3&40.9&57.5&54.8&56.7&62.9&39.5&64.4\\
&R-101&83.24M&196.14G&2.8&55.6&75.6&62.9&45.6&57.1&54.5&55.5&62.9&41.0&62.9\\
\hline
\multirow{3}{*}{RepPoints \cite{yang2019reppoints}}
&R-18&20.11M&\bf 35.60G&5.6&51.7&76.9&57.8&43.8&54.0&49.7&50.8&63.3&33.6&59.2\\
&R-50&36.60M&48.54G&4.8&56.0&80.2&63.1&40.8&58.5&53.7&56.7&65.7&39.3&62.3\\
&R-101&55.60M&68.02G&3.8&55.4&79.0&62.6&42.2&57.3&53.9&56.0&65.8&39.0&60.9\\
\hline
\hline
\emph{one-stage:} &&&&&&&&&&&&&& \\
\multirow{3}{*}{RetinaNet \cite{Lin2017Focal}}
&R-18&19.68M&39.68G&7.1&44.7&66.3&50.7&29.3&47.6&42.5&46.9&54.2&23.9&53.8\\
&R-50&36.17M&52.62G&5.9&49.3&70.3&55.4&36.5&51.9&47.6&54.4&56.6&27.8&58.3\\
&R-101&55.16M&72.10G&4.5&50.4&71.7&57.3&34.6&52.8&49.0&54.6&57.0&33.7&56.3\\
\hline
\multirow{3}{*}{FreeAnchor \cite{2019arXiv190902466Z}}
&R-18&19.68M&39.68G&6.8&49.0&71.9&55.3&38.6&51.7&46.7&47.2&62.8&28.6&57.6\\
&R-50&36.17M&52.62G&5.8&54.4&76.6&62.5&38.1&55.7&53.4&55.3&65.2&35.3&61.8\\
&R-101&55.16M&72.10G&4.4&54.6&76.9&62.9&36.5&56.5&52.9&54.0&65.1&38.4&60.7\\
\hline
\multirow{3}{*}{FoveaBox \cite{DBLP:journals/corr/abs-1904-03797}}
&R-18&21.20M&44.75G&6.7&51.6&74.9&57.4&40.0&53.6&49.8&51.0&61.9&34.6&59.1\\
&R-50&37.69M&57.69G&5.5&55.3&77.8&62.3&44.7&57.4&53.4&57.9&64.2&36.4&62.8\\
&R-101&56.68M&77.16G&4.2&54.7&77.3&62.3&37.7&57.1&52.4&55.3&63.6&38.9&60.8\\
\hline
\multirow{3}{*}{PAA \cite{2020arXiv200708103K}}
&R-18&\bf 18.94M&38.84G&3.0&52.6&75.3&58.8&41.3&55.1&50.2&49.9&64.6&35.6&60.5\\
&R-50&31.89M&51.55G&2.9&56.8&79.0&63.8&38.9&58.9&54.9&56.5&66.9&39.9&64.0\\
&R-101&50.89M&71.03G&2.4&56.5&78.5&63.7&40.9&58.7&54.5&55.8&66.5&42.0&61.6\\
\hline
\multirow{3}{*}{FSAF \cite{zhu2019feature}}
&R-18&19.53M&38.88G&\bf 7.4&49.6&74.3&55.1&43.4&51.8&47.5&45.5&63.5&30.3&58.9\\
&R-50&36.02M&51.82G&6.0&54.9&79.3&62.1&46.2&56.7&53.3&53.7&66.4&36.8&62.5\\
&R-101&55.01M&55.01G&4.5&54.6&78.7&61.9&46.0&57.1&52.2&53.0&66.3&38.2&61.1\\
\hline
\multirow{3}{*}{FCOS \cite{DBLP:journals/corr/abs-1904-01355}}
&R-18&\bf 18.94M&38.84G&6.5&48.4&72.8&53.7&30.7&50.9&46.3&46.5&61.5&29.1&56.6\\
&R-50&31.84M&50.34G&5.4&53.0&77.1&59.9&39.7&55.6&50.5&52.3&64.5&35.2&60.0\\
&R-101&50.78M&69.81G&4.2&53.2&77.3&60.1&43.4&55.4&51.2&51.7&64.1&38.5&58.5\\
\hline
\multirow{3}{*}{ATSS \cite{zhang2019bridging}}
&R-18&\bf 18.94M&38.84G&6.0&54.0&76.5&60.9&44.1&56.6&51.4&52.6&65.5&35.8&61.9\\
&R-50&31.89M&51.55G&5.2&58.2&\bf 80.1&66.5&43.9&60.6&55.9&\bf 58.6&67.6&41.8&64.6\\
&R-101&50.89M&71.03G&3.8&57.6&79.4&65.3&46.5&60.3&55.0&57.7&67.2&42.6&62.9\\
\hline
\multirow{3}{*}{GFL \cite{li2020generalized}}
&R-18&19.09M&39.63G&6.3&54.4&75.5&61.9&35.0&57.1&51.8&51.8&66.9&36.5&62.5\\
&R-50&32.04M&52.35G&5.5&\bf 58.6&79.3&\bf 66.7&46.5&\bf 61.6&55.6&\bf 58.6&\bf 69.1&41.3&\bf 65.3\\
&R-101&51.03M&71.82G&4.1&58.3&79.3&65.5&45.1&60.5&\bf 56.3&57.0&\bf 69.1&\bf 43.0&64.0\\
\hline
\end{tabular}
\end{center}
\end{table*}
In terms of accuracy, the difference between the multi- and one-stage methods in AP is not obvious, and AP$_{S}$ is always the lowest of the three size-based APs for every method. Among the class-wise APs, AP$_{Sc}$ lags significantly behind the other three classes because scallop has the smallest number of instances. In terms of efficiency, large parameter counts and FLOPs result in low FPS on the AGX, with a maximum of only 7.4 FPS, which is hardly deployable on an underwater robot. Finally, we also find that ResNet101 brings no significant improvement over ResNet50, which suggests that a very deep network may not be helpful for detecting small creatures in underwater scenarios.
Consequently, designing detectors with both high accuracy and high efficiency remains the main direction in this field, and there is still large room to improve the performance.
To achieve this goal, a shallow backbone with strong multi-scale feature fusion could be proposed to extract discriminative features of small-scale aquatic organisms; a specially designed training strategy may overcome DUO's long-tail distribution, such as a more reasonable positive/negative sample allocation mechanism or a class-balanced image allocation strategy within a training batch.
\section{Conclusion}
In this paper, we introduce a dataset (DUO) and a corresponding benchmark to fill gaps in the community. DUO contains a variety of underwater scenes and more reasonable annotations. The benchmark includes efficiency and accuracy indicators to conduct a comprehensive evaluation of \emph{SOTA} detectors. The two contributions could serve as a reference for academic research and industrial applications, as well as promote community development.
\bibliographystyle{IEEEbib}
\section{Introduction}
\subsection{Motivating questions and nonsensical pictures}
To introduce the problems discussed in this paper, consider some
pictures. Suppose that ${\bf{a}} = (a_1, \ldots, a_d)$
is a vector with positive entries, $I = \left[0, \sum a_i \right)$ is an interval,
$\sigma$ is a permutation on $d$ symbols, and $\mathcal{T}=\mathcal{T}_{\sigma}({\bf{a}}): I \to I$ is the
interval exchange obtained by cutting up $I$ into segments of lengths
$a_i$ and permuting them according to $\sigma$. A fruitful technique
for studying the dynamical properties of $\mathcal{T}$ is to consider it as
the return map to a transverse segment along the vertical foliation in
a translation surface, i.e. a union of polygons with edges glued
pairwise by translations. See Figure \ref{figure: Masur} for an example with one polygon;
note that the interval exchange determines the
horizontal coordinates of vertices, but there are many
possible choices of the vertical coordinates.
\begin{figure}[htp]
\input{masur.pstex_t}
\caption{Masur's construction: an
interval exchange embedded in a one-polygon translation surface.}\name{figure: Masur}
\end{figure}
Given a translation surface $q$ with a transversal, one may deform it by
applying the horocycle flow, i.e. deforming the polygon with the linear map
\eq{eq: horocycle matrix}{
h_s = \left( \begin{matrix} 1 & s \\ 0 & 1 \end{matrix} \right).
}
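Concretely, applying $h_s$ shears each holonomy vector horizontally at a rate given by its height:
\[
h_s \left( \begin{matrix} x \\ y \end{matrix} \right)
= \left( \begin{matrix} x + sy \\ y \end{matrix} \right),
\]
so horizontal coordinates vary affinely in $s$, with slopes given by the vertical coordinates.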
The return map to a transversal in $h_sq$ depends on $s$, so we
get a one-parameter family $\mathcal{T}_s$ of interval exchange
transformations (Figure \ref{figure: horocycle polygon}). For
sufficiently small $s$, one has $\mathcal{T}_s = \mathcal{T}_\sigma({\bf{a}}(s)),$ where
${\bf{a}}(s) = {\bf{a}} + s\mathbf{b}$ is a line segment, whose derivative $\mathbf{b} = (b_1,
\ldots, b_d)$ is determined
by the {\em heights} of the vertices of the polygon. We will consider
an inverse problem: given a line segment ${\bf{a}}(s) = {\bf{a}}+s\mathbf{b}$, does there exist a
translation surface $q$ such that for all sufficiently small $s$,
$\mathcal{T}_\sigma({\bf{a}}(s))$ is the return map along vertical leaves to a
transverse segment in $h_sq$? Attempting to interpret this question with pictures,
we see that some choices of $\mathbf{b}$ lead to a translation
surface while others lead to nonsensical pictures -- see Figure
\ref{figure: nonsense}. The solution to this problem is given by
Theorem \ref{thm: sullivan2}.
\begin{figure}[htp]
\includegraphics{relcone2_pspdftex.eps}
\caption{The horocycle action gives a linear one-parameter family of interval
exchanges.} \name{figure: horocycle polygon}
\end{figure}
\begin{figure}[htp]
\centerline{\hfill \includegraphics[scale=0.8]{nonsense1_pspdftex.eps} \hfill
\includegraphics[scale=0.8]{nonsense2_pspdftex.eps} \hfill}
\caption{The choice $\mathbf{b} = (2, 1, -1, -2)$
(left) gives a translation surface but
what about $\mathbf{b}=(0, 3, -1, -2)$?} \name{figure: nonsense}
\end{figure}
Now consider a translation surface $q$ with two
singularities. We may consider the operation of moving one singularity
horizontally with respect to the other. That is, at time $s$, the line
segments joining one singularity to the other are made longer by $s$,
while line segments joining a singularity to itself are unchanged. For
small values of $s$, one obtains a new translation surface $q_s$ by
examining the picture. But for large values of $s$, some of the
segments in the figure cross each other and it is not clear whether
the operation defined above gives rise to a well-defined surface. Our
Theorem \ref{thm: real rel main} shows that the operation of moving the zeroes is
well-defined for all values of $s$, provided one rules out the obvious
obstruction that two singularities connected by a horizontal segment
collide.
\begin{figure}[htp] \name{figure: real rel}
\centerline{\hfill \includegraphics[scale=0.45]{rel1_pspdftex.eps} \hfill
\includegraphics[scale=0.45]{rel2_pspdftex.eps} \hfill \includegraphics[scale=0.45]{rel3_pspdftex.eps} \hfill}
\caption{The singularity $\circ$ is moved to the right with
respect to $\bullet$, and the
picture becomes nonsensical.}
\end{figure}
\subsection{Main geometrical result}
Let $S$ be a compact oriented surface of genus $g \geq 2$ and $\Sigma
\subset S$ a finite subset. A {\em
translation surface structure} on $(S,\Sigma)$ is an atlas of charts into the
plane, whose domains cover
$S \smallsetminus \Sigma$, and such that the
transition maps are translations. Such structures
arise naturally in complex
analysis and in the study of interval exchange transformations and
polygonal billiards and have been the subject of intensive research,
see the recent surveys \cite{MT, Zorich survey}.
Several geometric structures on the plane can
be pulled back to $S \smallsetminus \Sigma$ via the atlas, among them the foliations of the
plane by horizontal and vertical lines. We call the resulting
oriented foliations of $S \smallsetminus \Sigma$ the {\em horizontal and vertical
foliation}
respectively. Each can be completed to a singular foliation on $S$,
with a pronged {\em singularity} at each point of $\Sigma$. Label the
points of $\Sigma$ by $\xi_1, \ldots, \xi_k$ and fix natural
numbers $r_1, \ldots, r_k $. We say that
the translation surface is {\em of
type $\mathbf{r} = (r_1, \ldots, r_k)$}
if the horizontal and vertical foliations have a $2(r_j+1)$-pronged
singularity at each $\xi_j$.
By pulling back $dy$ (resp. $dx$) from the plane, the
horizontal (vertical) foliation arising from a translation
surface structure is equipped with a {\em transverse measure},
i.e. a
family of measures on each arc transverse to the foliation which is
invariant under holonomy along leaves. We will call an oriented singular foliation
on $S$, with singularities in
$\Sigma$, which is equipped with a transverse
measure a {\em measured foliation on $(S, \Sigma)$}. We caution the reader that we
deviate from the convention adopted in several
papers on this subject, by considering the number and orders of
singularities as
part of the structure of a measured foliation; we call these the {\em
type} of the foliation. In other words, we do not consider two measured
foliations which differ by a Whitehead move to be the same.
Integrating the transverse measures gives rise to two
well-defined cohomology classes in the relative cohomology group
$H^1(S, \Sigma; {\mathbb{R}})$. That is we obtain a map
$$\mathrm{hol} : \left\{ \mathrm{translation\ surfaces\ on\ }
(S,\Sigma) \right \} \to \left(H^1(S, \Sigma; {\mathbb{R}})\right)^2.
$$
This map is a local homeomorphism and serves to give coordinate
charts to the set of translation surfaces (see \S\ref{subsection: strata} for
more details), but it is not globally injective. For example,
precomposing with a homeomorphism which acts trivially on homology may
change a marked translation surface structure but does not change its
image under hol; see \cite{McMullen American J} for more examples.
On the other hand it is not hard to see that the pair of horizontal
and vertical measured foliations associated to a translation surface
uniquely determine it, and hence the question arises of reconstructing
the translation surface from just the cohomological data recorded by
hol. Our main theorems give results in this direction.
To state them we define the set of {\em (relative) cycles carried
by ${\mathcal{F}}$}, denoted $H_+^{\mathcal{F}}$, to be the image in $H_1(S,
\Sigma; {\mathbb{R}})$ of all (possibly atomic) transverse measures on ${\mathcal{F}}$
(see \S\ref{subsection: cohomology}).
\begin{thm}
\name{thm: sullivan1}
Suppose ${\mathcal{F}}$ is a measured foliation on $(S, \Sigma)$, and $\mathbf{b} \in H^1(S, \Sigma;
{\mathbb{R}})$. Then the following are equivalent:
\begin{enumerate}
\item
There is a measured foliation ${\mathcal{G}}$ on $(S, \Sigma)$, everywhere
transverse to ${\mathcal{F}}$ and of the same type, such that ${\mathcal{G}}$ represents $\mathbf{b}$.
\item
Possibly after replacing $\mathbf{b}$ with $-\mathbf{b}$, for any $\delta \in H_+^{{\mathcal{F}}}$, $\mathbf{b}\cdot \delta >0$.
\end{enumerate}
\end{thm}
After proving Theorem \ref{thm: sullivan1} we learned from F. Bonahon
that it has a long and interesting history. Similar result were proved
by
Thurston \cite{Thurston stretch maps} in the context of train tracks
and measured laminations, and by Sullivan \cite{Sullivan} in a very
general context involving foliations. Bonahon neglected to mention his
own contribution \cite{Bonahon}. Our result is a `relative
version' in that we control the type of the foliation,
and need to be careful with the singularities. This explains why our
definition of $H_+^{\mathcal{F}}$ includes the relative cycles carried by
critical leaves of ${\mathcal{F}}$. The
proof we present here is
close to the
one given by Thurston.
The arguments proving Theorem \ref{thm: sullivan1} imply the following
stronger statement (see {\mathcal S} \ref{section: prelims} for detailed
definitions):
\begin{thm}\name{thm: hol homeo}
Given a topological singular foliation ${\mathcal{F}}$ on $(S, \Sigma)$, let $\til {\mathcal{H}}
({\mathcal{F}})$ denote the set of marked translation surfaces whose vertical
foliation is topologically equivalent to ${\mathcal{F}}$. Let $\mathbb{A}({\mathcal{F}}) \subset
H^1(S, \Sigma; {\mathbb{R}})$ denote the set of cohomology classes corresponding
to (non-atomic) transverse measures on ${\mathcal{F}}$. Let $\mathbb{B}({\mathcal{F}}) \subset
H^1(S, \Sigma; {\mathbb{R}})$ denote the set of cohomology classes that pair
positively with all elements of $H^{\mathcal{F}}_+ \subset H_1(S, \Sigma)$. Then
$$
\mathrm{hol} : \til {\mathcal{H}} ({\mathcal{F}}) \to \mathbb{A}({\mathcal{F}}) \times \mathbb{B}({\mathcal{F}})
$$
is a homeomorphism.
\end{thm}
\subsection{Applications}
We present two applications of Theorem \ref{thm: sullivan1}. The first
concerns the generic properties of interval exchange transformations.
Let $\sigma$ be a permutation on
$d$ symbols and let ${\mathbb{R}}^d_+$ be the vectors ${\bf{a}} = (a_1, \ldots, a_d)$
for which $a_i>0, \, i=1, \ldots, d.$ The pair ${\bf{a}}, \sigma$ determines
an {\em interval exchange} $\mathcal{T}_\sigma({\bf{a}})$ by subdividing the
interval $I_{\bf{a}}=\left[ 0, \sum a_i \right)$ into
$d$ subintervals of lengths $a_i$, which are
permuted according to $\sigma$. In 1982
Masur \cite{Masur-Keane} and Veech \cite{Veech-zippers} confirmed a
conjecture of Keane, proving (assuming that $\sigma$ is irreducible) that almost
every ${\bf{a}}$, with respect to Lebesgue measure on ${\mathbb{R}}^d_+$, is {\em uniquely
ergodic}, i.e. the only invariant measure for $\mathcal{T}_\sigma({\bf{a}})$ is
Lebesgue measure. On the other hand Masur and Smillie
\cite{MS} showed that the set of non-uniquely ergodic interval exchanges
is large in the sense of Hausdorff dimension.
A basic problem is to understand the finer structure of the set of
non-uniquely ergodic interval exchanges.
Specifically, motivated by analogous developments in diophantine
approximations, we will ask: For which curves
$\ell \subset {\mathbb{R}}^d_+$ is the non-uniquely ergodic set of zero
measure, with respect to the natural measure on the curve?
Which properties of a measure $\mu$ on ${\mathbb{R}}^d_+$
guarantee that $\mu$-a.e. ${\bf{a}}$ is uniquely
ergodic?
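For concreteness, $\mathcal{T}_\sigma({\bf{a}})$ is easy to implement. The sketch below is an illustration only (it is not used elsewhere in the paper); the lengths are an arbitrary choice and the permutation is $\sigma(i) = d+1-i$ as in Theorem \ref{thm: mahler curve}. It also checks numerically the elementary fact that the inverse map is itself an interval exchange, namely $\mathcal{T}_{\sigma^{-1}}({\bf{a}}')$ with $a'_j = a_{\sigma^{-1}(j)}$.

```python
from itertools import accumulate
import bisect

def iet(sigma, a):
    """Interval exchange T_sigma(a) on [0, sum(a)): sigma is a 0-based
    permutation (subinterval i is sent to position sigma[i]), a its lengths."""
    d = len(a)
    inv = [0] * d
    for i, s in enumerate(sigma):
        inv[s] = i                                   # inv = sigma^{-1}
    x = [0.0] + list(accumulate(a))                  # right endpoints x_i
    xp = [0.0] + list(accumulate(a[inv[j]] for j in range(d)))  # x'_i
    def T(t):
        i = bisect.bisect_right(x, t) - 1            # t lies in the i-th subinterval
        return t - x[i] + xp[sigma[i]]
    return T

d = 4
sigma = [d - 1 - i for i in range(d)]                # sigma(i) = d + 1 - i
a = [0.3, 0.1, 0.45, 0.15]                           # arbitrary positive lengths
T = iet(sigma, a)

inv = [0] * d
for i, s in enumerate(sigma):
    inv[s] = i
T_back = iet(inv, [a[inv[j]] for j in range(d)])     # candidate inverse exchange
samples = [0.05, 0.2, 0.35, 0.5, 0.65, 0.8, 0.9, 0.99]
```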
\ignore{
In a celebrated paper \cite{KMS}, Kerckhoff, Masur and Smillie
solved an important special case of this problem and introduced a
powerful dynamical technique. Given a rational angled polygon
$\mathcal{P}$, they
considered the interval exchange $\mathcal{T}_{\mathcal{P}}(\theta)$ which is
obtained from the
billiard flow in $\mathcal{P}$ in initial direction $\theta$ as the
return map to a subset of $\mathcal{P}$, and
proved that for every $\mathcal{P}$, for almost every
$\theta$ (with respect to Lebesgue
measure on $[0, 2\pi]$), $\mathcal{T}_\mathcal{P}(\theta)$ is uniquely
ergodic. The result was later extended by Veech \cite{Veech-fractals},
who showed that Lebesgue measure
can be replaced by any measure on $[0, 2\pi]$ satisfying a certain
decay condition, for
example the coin-tossing measure on Cantor's
middle thirds set.
The above results correspond to the
statement that for a certain $\sigma$ (depending on $\mathcal{P}$),
$\mu$-a.e. ${\bf{a}} \in {\mathbb{R}}^d_+$ is uniquely ergodic for
certain measures $\mu$ supported on line segments in ${\mathbb{R}}^d_+$.
}
In this paper we obtain several results in this direction, involving
three ingredients: a curve in ${\mathbb{R}}^d_+$; a
measure supported on the curve; and a dynamical property of interval
exchanges. The goal will be to
understand the dynamical properties of points in the support of the
measure. We state these results, and some open questions in this
direction, in \S\ref{section: results on iets}. To illustrate them we state
a special case,
which may be thought of as an interval exchange analogue of a
famous
result of Sprindzhuk (Mahler's conjecture, see e.g.
\cite[\S4]{dimasurvey}):
\begin{thm}
\name{thm: mahler curve}
For $d \geq 2,$ let
\eq{eq: mahler curve}{
{\bf{a}}(x) = \left(x, x^2, \ldots, x^d\right)
}
and let
$\sigma(i)= d+1-i$. Then ${\bf{a}}(x)$ is uniquely ergodic
for Lebesgue a.e. $x>0$.
\end{thm}
Identifying two translation structures which differ by a
precomposition with an orientation preserving homeomorphism of $S$
which fixes each point of $\Sigma$, we obtain the {\em stratum
${\mathcal{H}}(\mathbf{r})$ of
translation surfaces of type $\mathbf{r}$}.
There is an action of $G = \operatorname{SL}(2,{\mathbb{R}})$ on ${\mathcal{H}}(\mathbf{r})$, and its
restriction to the subgroup $\{h_s\}$ as in \equ{eq: horocycle matrix}
is called the {\em horocycle flow}.
To prove our results on unique ergodicity we employ the strategy,
introduced in
\cite{KMS}, of lifting interval exchanges to translation
surfaces, and studying the
dynamics of the $G$-action on ${\mathcal{H}}(\mathbf{r})$. Specifically we use
quantitative
nondivergence estimates \cite{with Yair} for the horocycle flow. Theorem \ref{thm: sullivan1} is used
to characterize the lines in ${\mathbb{R}}^d_+$ which may be lifted to horocycle
paths.
\medskip
The second application concerns an operation of `moving singularities
with respect to each other' which has been discussed in the literature
under various names (cf. \cite[\S9.6]{Zorich survey}) and which we
now define. Let
$\til {\mathcal{H}}(\mathbf{r})$ be the {\em stratum of marked translation
surfaces of type $\mathbf{r}$}, i.e. two translation surface structures are equivalent
if they differ by precomposition by a
homeomorphism of $S$ which fixes $\Sigma$ and is isotopic to the
identity rel $\Sigma$. Integrating transverse measures as above induces
a well-defined map $\til {\mathcal{H}}(\mathbf{r}) \to H^1(S, \Sigma; {\mathbb{R}}^2)$ which
can be used to endow $\til {\mathcal{H}}(\mathbf{r})$ (resp. ${\mathcal{H}}(\mathbf{r})$)
with the structure of an affine manifold (resp. orbifold), such that
the natural map $\til {\mathcal{H}}(\mathbf{r}) \to
{\mathcal{H}}(\mathbf{r})$ is an orbifold cover. We describe foliations
on $\til {\mathcal{H}}(\mathbf{r})$ which descend to well-defined foliations on
${\mathcal{H}}(\mathbf{r})$. The two summands in the splitting
\eq{eq: splitting}{
H^1(S, \Sigma; {\mathbb{R}}^2) \cong H^1(S, \Sigma; {\mathbb{R}}) \oplus H^1(S, \Sigma;
{\mathbb{R}})}
induce two foliations on $\til {\mathcal{H}}(\mathbf{r})$, which we call the {\em
real foliation} and {\em imaginary foliation} respectively. Also, considering the
exact sequence in cohomology
\eq{eq: defn Res}{
H^0(S;{\mathbb{R}}^2) \to H^0(\Sigma ; {\mathbb{R}}^2) \to H^1(S, \Sigma ; {\mathbb{R}}^2)
\stackrel{\mathrm{Res}}{\to} H^1(S; {\mathbb{R}}^2) \to \{0\},
}
we obtain a natural subspace $\ker \mathrm{Res} \subset H^1(S, \Sigma;
{\mathbb{R}}^2)$, consisting of the cohomology classes which
vanish on the subspace of `absolute periods' $H_1(S) \subset
H_1(S, \Sigma).$
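Since $S$ is connected and $\Sigma$ is nonempty, the first map in \equ{eq: defn Res} is injective, so the sequence yields a dimension count for this subspace (a routine verification, recorded here for convenience):

```latex
\[
\ker \mathrm{Res} \;\cong\; H^0(\Sigma;{\mathbb{R}}^2)/H^0(S;{\mathbb{R}}^2),
\qquad
\dim \ker \mathrm{Res} \;=\; 2k - 2 \;=\; 2(k-1).
\]
```

In particular each of the two coordinate directions in \equ{eq: splitting} contributes a subspace of dimension $k-1$.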
The foliation induced on $\til {\mathcal{H}}(\mathbf{r})$ is called the
{\em REL foliation} or {\em kernel foliation.} Finally, intersecting
the leaves of the real
foliation with those of the REL foliation yields the {\em
real REL foliation}.
It has leaves of dimension $k-1$ (where $k=|\Sigma|$). Two nearby translation
surfaces $q$ and $q'$ are in the same plaque if the holonomies of all
closed curves are the same on $q$ and $q'$, and if the holonomies of
curves joining {\em distinct} singularities differ only in their
horizontal component.
Intuitively, $q'$ is obtained from $q$ by fixing one singularity as a
reference point and moving the other singularities
horizontally. Understanding this foliation is
important for the study of the dynamics of the horocycle flow. It was
studied in some restricted settings in \cite{EMM, CW}, where it was
called {\em Horiz}.
The leaves of the kernel foliation, and hence the real REL foliation,
are equipped with a natural translation structure, modeled on the
vector space $\ker \, \mathrm{Res} \cong H^0(\Sigma; {\mathbb{R}})/H^0(S; {\mathbb{R}})$. One
sees easily that the leaf of $q$ is incomplete if, when moving along
the leaf, a saddle connection
on $q$ is made to have length zero, i.e. if `singularities
collide'. Using Theorems \ref{thm: sullivan1} and \ref{thm: hol homeo} we
show in Theorem \ref{thm: real rel main} that this is the only
obstruction to completeness of leaves. This implies that on a
large set, the leaves of real REL are the orbits of an action. More
precisely, let ${\mathcal Q}$ be the set of translation surfaces
with no horizontal saddle connections, in a finite cover $\hat{{\mathcal{H}}}$ of
${\mathcal{H}}({\mathbf{r}})$ (we take a finite cover to make ${\mathcal{H}}({\mathbf{r}})$ into a manifold). This is a set
of full measure which is invariant under the group $B$ of upper
triangular matrices in $G$. We show that it coincides with the set of
complete real REL leaves. Let $F$ denote the group $B \ltimes
{\mathbb{R}}^{k-1}$, where $B$ acts on ${\mathbb{R}}^{k-1}$
via
$$\left( \begin{matrix} a & b \\ 0 & 1/a \end{matrix} \right) \cdot \vec{v} = a\vec{v}.$$
We prove:
\begin{thm}
\name{thm: real rel action}
The group $F$ acts on ${\mathcal Q}$ continuously and affinely, preserving the
natural measure, and leaving invariant the subset of translation surfaces
of area one. The action of $B$ is the same as that obtained by restricting the
$G$-action, and the ${\mathbb{R}}^{k-1}$-action is transitive on each real REL leaf in
${\mathcal Q}$.
\end{thm}
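The semidirect product structure of $F$ can be sanity-checked numerically: the map sending $g \in B$ to its upper-left entry is multiplicative, so with the usual convention $(g_1, v_1)(g_2, v_2) = (g_1 g_2, v_1 + g_1 \cdot v_2)$ the multiplication is associative. The following sketch (an illustration only; the matrices and vectors are arbitrary choices, with $k = 3$) verifies this.

```python
# Sketch: the B-action on R^{k-1} used to define F = B x| R^{k-1}.
# g = [[a, b], [0, 1/a]] acts by v -> a v; the upper-left entry of a
# product of upper triangular matrices is the product of the entries,
# so this is a homomorphism B -> GL(k-1, R).

def mat_mul(g, h):
    return [[sum(g[i][m] * h[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def upper(a, b):                      # an element of B
    return [[a, b], [0.0, 1.0 / a]]

def act(g, v):                        # the action in the displayed formula
    return [g[0][0] * vi for vi in v]

def F_mul(x, y):                      # (g1, v1)(g2, v2) = (g1 g2, v1 + g1.v2)
    (g1, v1), (g2, v2) = x, y
    return (mat_mul(g1, g2), [p + q for p, q in zip(v1, act(g1, v2))])

g1, g2, g3 = upper(2.0, 1.0), upper(0.5, 3.0), upper(3.0, 0.2)
x, y, z = (g1, [1.0, 2.0]), (g2, [0.5, -1.0]), (g3, [2.0, 0.0])
lhs = F_mul(F_mul(x, y), z)           # (xy)z
rhs = F_mul(x, F_mul(y, z))           # x(yz)
```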
Note that while the $F$-action is continuous, ${\mathcal Q}$ is not
complete: it is the complement in $\hat{{\mathcal{H}}}$ of a dense
countable union of proper affine submanifolds with boundary. Also note
that the leaves of the real foliation or the kernel foliation
are not orbits of a group action on $\hat{{\mathcal{H}}}$ --- but see
\cite{EMM} for a related discussion of pseudo-group-actions.
\ignore{
The proof of Theorem \ref{thm: real rel action} relies on another
geometrical result of independent interest (Theorem \ref{thm: fiber
singleton}). It asserts that for a measured foliation ${\mathcal{F}}$ and
a cohomology class $\mathbf{b} \in H^1(S, \Sigma; {\mathbb{R}})$, there is at most one
${\mathbf q} \in \til {\mathcal{H}}$ whose horizontal foliation is ${\mathcal{F}}$ and whose
vertical foliation represents $\mathbf{b}$.
\subsection{Mahler's question}
A vector ${\mathbf{x}} \in {\mathbb{R}}^d$ is {\em very well approximable} if
for some $\varepsilon>0$ there are infinitely many ${\mathbf p} \in {\mathbb{Z}}^d, q
\in {\mathbb{N}}$ satisfying $\|q{\mathbf{x}} - {\mathbf p} \| < q^{-(1/d + \varepsilon)}.$ It is
a classical fact that almost every (with respect to Lebesgue measure) ${\mathbf{x}}
\in {\mathbb{R}}^d$ is not very well approximable, but that the set of very
well approximable vectors is large in the sense of Hausdorff
dimension. A famous conjecture of
Mahler from the 1930's is that for almost every (with respect to
Lebesgue measure on the real line) $x \in {\mathbb{R}}$, the vector
\eq{eq: mahler curve}{
\left(
x, x^2, \ldots, x^d
\right)
}
is not very well approximable. The conjecture was settled
by Sprindzhuk in the 1960's and spawned many additional questions and
results of a similar nature. A general formulation of the
problem is to describe measures $\mu$ on ${\mathbb{R}}^d$ for which almost
every ${\mathbf{x}}$ is not very well approximable. A powerful technique
involving dynamics on homogeneous spaces was introduced by Kleinbock
and Margulis \cite{KM}, see \cite{dimasurvey} for
a survey and \cite{KLW} for recent developments and open
questions.
In this paper we discuss an analogous problem concerning
interval exchange transformations and dynamics on strata of
translation surfaces. Let $\sigma$ be a permutation on
$d$ symbols and let ${\mathbb{R}}^d_+$ be the vectors ${\bf{a}} = (a_1, \ldots, a_d)$
for which $a_i>0, \, i=1, \ldots, d.$ The pair ${\bf{a}}, \sigma$ determines
an {\em interval exchange} $\mathcal{T}_\sigma({\bf{a}})$ by subdividing the
interval $I_{\bf{a}}=\left[ 0, \sum a_i \right)$ into
$d$ subintervals of lengths $a_i$, which are
permuted according to $\sigma$. We are interested in
the dynamical properties of $\mathcal{T}_\sigma({\bf{a}})$. We will assume
throughout this paper that
$\sigma$ is irreducible and admissible in Veech's sense, see
\S\ref{subsection: admissible} below; otherwise the question
is reduced to studying an interval exchange on
fewer intervals. In answer to a conjecture of Keane, it was proved by
Masur \cite{Masur-Keane} and Veech \cite{Veech-zippers} that almost
every ${\bf{a}}$ (with respect to Lebesgue measure on ${\mathbb{R}}^d_+$) is {\em uniquely
ergodic}, i.e. the only invariant measure for $\mathcal{T}_\sigma({\bf{a}})$ is
Lebesgue measure. On the other hand Masur and Smillie
\cite{MS} showed that the set of non-uniquely ergodic interval exchanges
is large in the sense of Hausdorff dimension. In this paper we
consider the problem of describing
measures $\mu$ on ${\mathbb{R}}^d_+$ such that $\mu$-a.e. ${\bf{a}}$ is uniquely
ergodic.
In a celebrated paper \cite{KMS}, Kerckhoff, Masur and Smillie
solved an important special case of this problem and introduced a
powerful dynamical technique. Given a rational angled polygon
$\mathcal{P}$, they
considered the interval exchange $\mathcal{T}_{\mathcal{P}}(\theta)$ which is
obtained from the
billiard flow in $\mathcal{P}$ in initial direction $\theta$ as the
return map to a subset of $\mathcal{P}$, and
proved that for every $\mathcal{P}$, for almost every
$\theta$ (with respect to Lebesgue
measure on $[0, 2\pi]$), $\mathcal{T}_\mathcal{P}(\theta)$ is uniquely
ergodic. The result was later extended by Veech \cite{Veech-fractals},
who showed that Lebesgue measure
can be replaced by any measure on $[0, 2\pi]$ satisfying a certain
decay condition, for
example the coin-tossing measure on Cantor's
middle thirds set.
The above results correspond to the
statement that for a certain $\sigma$ (depending on $\mathcal{P}$),
$\mu$-a.e. ${\bf{a}} \in {\mathbb{R}}^d_+$ is uniquely ergodic for
certain measures $\mu$ supported on line segments in ${\mathbb{R}}^d_+$.
\subsection{Statement of results}
We begin with some definitions.
For each
${\bf{a}} \in {\mathbb{R}}^d_+$
the interval exchange transformation $\mathcal{T}_{\sigma}({\bf{a}})$ is defined on
$I=I_{\bf{a}}$
by setting
$x_0=x'_0=0$ and for $i=1,
\ldots, d$,
\eq{eq: disc points2}{
x_i=x_i({\bf{a}})
= \sum_{j=1}^i a_j, \ \ \
x'_i =x'_i({\bf{a}})
= \sum_{j=1}^i a_{\sigma^{-1}(j)}
}
then for every $x \in I_i=[x_{i-1}, x_i)$ we have
\eq{eq: defn iet}{
\mathcal{T}(x)=\mathcal{T}_{\sigma}({\bf{a}})(x)=x-x_{i-1}+x'_{\sigma(i)-1}
}
Given $\mathbf{b} \in {\mathbb{R}}^d$, we use \equ{eq: disc points2} to define
\eq{eq: defn of y(b)}{
y_i = x_i(\mathbf{b}), \ \ y'_i=x'_i(\mathbf{b})
}
and
\eq{eq: defn L}{L=L_{{\bf{a}},\mathbf{b}}: I \to {\mathbb{R}}, \ \ \
L(x) = y_{i} - y'_{\sigma(i)} \
\mathrm{for} \ x\in I_i.}
If there are $i,j \in \{1, \ldots , d\}$ (not necessarily distinct)
and $m>0$ such that $\mathcal{T}^m(x_i)=x_j$ we will say that $(i,j,m)$ is a {\em
connection} for $\mathcal{T}$.
We denote the set of invariant
probability measures for $\mathcal{T}$ by $\MM_{\bf{a}}$, and
the set of connections by ${\mathcal L}_{\bf{a}}$.
\begin{Def}
We say that $({\bf{a}}, \mathbf{b}) \in {\mathbb{R}}^d_+ \times {\mathbb{R}}^d$ is {\em positive} if
\eq{eq: positivity}{\int L \, d\mu >0
\
\ \mathrm{for \ any \ } \mu \in \MM_{{\bf{a}}}
}
and
\eq{eq: connections positive}{\sum_{n=0}^{m-1}L(\mathcal{T}^nx_i) > y_i-y_j \ \
\ \mathrm{for \ any \ } (i,j,m) \in {\mathcal L}_{{\bf{a}}}.
}
\end{Def}
Let $B(x,r)$ denote the interval $(x-r, x+r)$ in ${\mathbb{R}}$.
We say that a finite regular Borel measure $\mu$ on ${\mathbb{R}}$ is {\em
decaying and Federer} if there are positive
$C, \alpha, D$ such that
for every $x \in
{\mathrm{supp}} \, \mu$ and every $0<
\varepsilon, r<1$,
\eq{eq: decaying and federer}{
\mu \left(B(x,\varepsilon r) \right) \leq C\varepsilon^\alpha \mu\left(B(x,r) \right)
\ \ \ \mathrm{and} \ \ \ \mu \left( B(x, 3r) \right) \leq D\mu\left(B(x,r) \right).
}
It is not hard to show that Lebesgue measure, and the
coin-tossing measure on Cantor's middle thirds set, are both decaying
and Federer. More constructions of such measures are given in
\cite{Veech-fractals, bad}.
Let $\dim$ denote Hausdorff dimension, and for $x\in{\mathrm{supp}}\,\mu$ let
$$
\underline{d}_\mu(x) = \liminf_{r\to 0}\frac{\log
\mu\big(B(x,r)\big)}{\log r}.
$$
Now let
\eq{eq: defn epsn}{
\varepsilon_n({\bf{a}}) = \min \left \{\left|\mathcal{T}^k(x_i) - \mathcal{T}^{\ell}(x_j)\right| :
0\leq k, \ell \leq n, 1 \leq i ,j \leq d-1, (i,k) \neq (j,\ell)
\right\},
}
where $\mathcal{T} = \mathcal{T}_\sigma({\bf{a}})$.
We say that ${\bf{a}}$ is
{\em of recurrence type} if $\limsup n\varepsilon_n({\bf{a}})
>0$
and
{\em of bounded type} if $\liminf
n\varepsilon_n({\bf{a}})>0$.
It is known by work of Masur, Boshernitzan, Veech and Cheung that
if ${\bf{a}}$ is of recurrence type then it is uniquely ergodic, but that
the converse does not hold.
We have:
\begin{thm}[Lines]
\name{cor: inheritance, lines}
Suppose $({\bf{a}}, \mathbf{b})$ is positive. Then there is $\varepsilon_0>0$ such
that the following hold for ${\bf{a}}(s) = {\bf{a}}+s\mathbf{b}$ and
for every decaying and Federer measure $\mu$ with ${\mathrm{supp}} \, \mu
\subset (-\varepsilon_0, \varepsilon_0)$:
\begin{itemize}
\item[(a)]
For $\mu$-almost every $s$, ${\bf{a}}(s)$ is of
recurrence type.
\item[(b)]
$\dim \, \left \{s \in {\mathrm{supp}} \, \mu : {\bf{a}}(s) \mathrm{ \ is \ of \ bounded \
type} \right \} \geq \inf_{x \in {\mathrm{supp}} \, \mu} \underline{d}_{\mu}(x).$
\item[(c)]
$\dim \left \{s \in (-\varepsilon_0, \varepsilon_0) : {\bf{a}}(s) \mathrm{ \ is \ not \ of \
recurrence \ type } \right \} \leq 1/2.$
\end{itemize}
\end{thm}
In fact we will prove a quantitative strengthening of (a), see \S\ref{section: nondivergence}.
\begin{thm}[Curves]
\name{cor: ue on curves}
Let $I$ be an interval, let $\mu$ be a decaying and Federer measure on
$I$, and let $\beta: I \to
{\mathbb{R}}^d_+$ be a $C^2$ curve, such that for $\mu$-a.e. $s \in I$,
$(\beta(s), \beta'(s))$ is positive. Then for $\mu$-a.e. $s \in I$,
$\beta(s)$ is of recurrence type.
\end{thm}
\subsection{The dynamical method: horocycle flow and the lifting problem}
Our approach is the one
employed in \cite{KMS}.
There is a well-known `lifting procedure' (see \cite{ZK,
Veech-zippers, Masur-Keane}) which associates to $\sigma$
a stratum ${\mathcal{H}}$ of translation surfaces, such that for any ${\bf{a}} \in
{\mathbb{R}}^d_+$ there is a {\em lift} $q \in {\mathcal{H}}$, such that $\mathcal{T}_\sigma({\bf{a}})$
is the return
map to a transversal for the flow along vertical leaves for $q$. It is
crucial for our purposes that in this lifting procedure $q$ is not
uniquely determined by ${\bf{a}}$; indeed ${\bf{a}}$ determines the vertical
measured foliation but there is additional freedom in choosing the
horizontal measured foliation.
There is an
action of $G=\operatorname{SL}(2, {\mathbb{R}})$ on ${\mathcal{H}}$. The restriction to the subgroup
$\{g_t\}$ (resp. $\{h_s\}$) of diagonal (resp. upper triangular
unipotent) matrices is called the {\em geodesic} (resp. {\em
horocycle}) flow. Masur \cite{Masur Duke} proved that if ${\bf{a}}$ is not
uniquely ergodic then $\{g_tq: t>0\}$ is divergent in ${\mathcal{H}}$. Thus
given a curve $\beta(s)$ in ${\mathbb{R}}^d_+$, our goal is to construct a curve
$q_s$ in ${\mathcal{H}}$ such that $q_s$ is a lift of $\beta(s)$ and $\{g_tq_s :
t>0\}$ is not divergent for a.e. $s$. To rule out divergence, we fix a
large compact $K \subset {\mathcal{H}}$ and show that for all large $t$ and most
$s$, $g_tq_s \in K$. That is we require a nondivergence estimate for the curve $s \mapsto
g_tq_s$, asserting that it spends most of its time in a
large compact set independent of $t$. Such nondivergence estimates
were developed for the horocycle flow in \cite{with Yair}; to prove the
theorem it suffices to show that $g_tq_s= h_sq'$ is a horocycle, or
more generally, can be well approximated by horocycles.
This leads to the question of which directions tangent to ${\mathbb{R}}^d_+$ are
projections of horocycles. More precisely, for each ${\bf{a}}\in
{\mathbb{R}}^d_+$, thinking of ${\mathbb{R}}^d$ as the tangent space to ${\mathbb{R}}^d_+$ at ${\bf{a}}$,
let ${\mathcal{C}}_{{\bf{a}}}$ be the set of $\mathbf{b} \in {\mathbb{R}}^d$ for which there
exists a lift $q = q({\bf{a}}, \mathbf{b})$ in ${\mathcal{H}}$
such that $h_sq$ is a lift of ${\bf{a}}+s\mathbf{b}$ for all sufficiently small
$s$. It is easy to see that $q({\bf{a}}, \mathbf{b})$, if it exists, is uniquely
determined by ${\bf{a}}$ and $\mathbf{b}$. Indeed, the requirement that $q$ is a
lift of ${\bf{a}}$ fixes the vertical
transverse measure for $q$, and since under the horocycle flow, the
horizontal transverse measure is the derivative (w.r.t. $s$) of the
vertical transverse measure for $h_sq$, one finds that $\mathbf{b} =
\frac{d}{ds}\left( {\bf{a}} + s\mathbf{b} \right)$ determines the horizontal transverse
measure. It remains to characterize the ${\bf{a}}$ and $\mathbf{b}$ for which such a
$q$ exists; our main geometrical results, Theorems \ref{thm:
sullivan1} and \ref{thm: sullivan2}, show that
\eq{eq: defn cone}{
{\mathcal{C}}_{{\bf{a}}} = \{\mathbf{b} \in {\mathbb{R}}^d: ({\bf{a}}, \mathbf{b}) \mathrm{\ is \ positive} \}.
}
This is an open convex cone which is never empty, and is typically a
half-space. Moreover the
set of positive pairs is open in the (trivial) tangent bundle
${\mathbb{R}}^d_+ \times {\mathbb{R}}^d$.
After proving Theorem \ref{thm: sullivan1} we learned from F. Bonahon
that it has a long and interesting history. Similar results were proved
by
Thurston \cite{Thurston stretch maps} in the context of train tracks
and measured laminations, and by Sullivan \cite{Sullivan} in a very
general context involving foliations. Bonahon neglected to mention his
own contribution \cite{Bonahon}. Our result is a `relative
version' in that we discuss strata of translation surfaces
and need to be careful with the singularities. This accounts for our
condition \equ{eq: connections positive} which is absent from the
previous variants of the result.
The proof we present here is
very close to the
one given by Thurston.
\subsection{Open problems}
The
developments in the theory of diophantine approximations originating
with Mahler's conjecture motivated the following.
\begin{conj}[Cf. \cite{cambridge}]
\name{conj: main}
\begin{enumerate}
\item (Lines)
If ${\bf{a}}$ is uniquely ergodic and $\ell$ is a line in
${\mathbb{R}}^d_+$ passing through ${\bf{a}}$ then there is neighborhood $\mathcal{U}$
of ${\bf{a}}$ such that almost every ${\bf{a}}' \in \ell \cap \mathcal{U}$
(with respect to Lebesgue measure on $\ell$) is
uniquely ergodic.
\item (Curves)
If ${\bf{a}}(s)$ is an analytic curve in ${\mathbb{R}}^d_+$ whose image is not
contained in a proper affine subspace, then for a.e. $s$ (with
respect to Lebesgue measure on the real line), ${\bf{a}}(s)$ is
uniquely ergodic.
\end{enumerate}
\end{conj}
These conjectures are interval exchange analogues of results of
\cite{KM, dima - GAFA}.
The methods of this paper rely on properties of the horocycle
flow, and as such are insufficient for proving Conjecture \ref{conj:
main}(1). Namely, any line
$\ell$ tangent to the subspace ${\mathrm {REL}}$ (see \S\ref{section: REL}) can never be
lifted to a
horocycle path. This motivates a special case of Conjecture \ref{conj:
main}.
\begin{conj}
\name{conj: REL}
Let ${\bf{a}} \in {\mathbb{R}}^d_+$ be uniquely ergodic.
Then there is a neighborhood $\mathcal{U}$
of $0$ in the
subspace ${\mathrm {REL}}$ so that ${\bf{a}} + \mathcal{U} \subset {\mathbb{R}}^d_+$ and
${\bf{a}}+\mathbf{b}$ is uniquely ergodic for almost every $\mathbf{b} \in \mathcal{U}$.
\end{conj}
}
\subsection{Organization of the paper}
We present the proof of Theorem \ref{thm:
sullivan1} in \S3 and of Theorem \ref{thm: hol homeo} in
\S\ref{section: dev homeo}. We interpret these theorems in the language of
interval exchanges in \S5. This interpretation furnishes a link
between line segments in the space of interval exchanges, and
horocycle paths in a corresponding stratum of translation surfaces: it
turns out that the line segments which may be lifted to horocycle
paths form a cone in the tangent space to interval exchange space, and
this cone can be explicitly described in terms of a bilinear form
studied by Veech.
We begin \S6 with a brief discussion of Mahler's question in
diophantine approximation, and the question it motivates for interval
exchanges. We then state in detail our results for
generic properties of interval exchanges. The proofs of these results occupy
\S7--\S10. Nondivergence results for horocycles make it possible to
analyze precisely the properties of interval exchanges along a
line segment, in the cone of directions described in \S\ref{section: REL}. To obtain
information about curves we approximate them by line segments, and
this requires the quantitative nondivergence results obtained in
\cite{with Yair}.
In \S11 we prove our results concerning real REL. These sections
may be read independently of \S6--\S10. We conclude with a discussion
which connects real REL with some of the objects encountered in
\S7--\S10.
\subsection{Acknowledgements}
We thank John Smillie for many valuable discussions.
We thank Francis Bonahon for pointing out the connection between our
Theorem \ref{thm: sullivan1} and previous work of Sullivan and
Thurston. The
authors were supported by
BSF grant 2004149, ISF grant 584/04 and NSF grant DMS-0504019.
\section{Preliminaries}\name{section: prelims}
In this section we recall some standard facts and set our
notation. For more information we refer the reader to
\cite{MT, Zorich survey} and the references therein.
\subsection{Strata of translation surfaces}\name{subsection: strata}
Let $S, \, \Sigma = (\xi_1, \ldots, \xi_k), \, \mathbf{r}=(r_1,
\ldots, r_k)$ be as in
the introduction. A {\em translation
structure} (resp., a {\em marked translation structure}) of type
$\mathbf{r}$ on $(S, \Sigma)$
is an equivalence class of $(U_{\alpha}, \varphi_{\alpha})$, where:
\begin{itemize}
\item
$(U_{\alpha}, \varphi_{\alpha})$ is an atlas of charts for $S\smallsetminus
\Sigma$;
\item
the transition functions $\varphi_{\beta} \circ \varphi^{-1}_{\alpha}$
are of the form
$${\mathbb{R}}^2 \ni \vec{x} \mapsto \vec{x}+c_{\alpha, \beta};$$
\item
around each $\xi_j \in
\Sigma$ the charts glue together to form a cone point with cone angle
$2\pi(r_j+1)$.
\end{itemize}
By definition $(U_{\alpha}, \varphi_{\alpha}), \ (U'_{\beta},
\varphi'_{\beta})$ are equivalent if there is an orientation preserving
homeomorphism $h: S\to S$ (for a marked structure, isotopic to the
identity via an isotopy fixing $\Sigma$), fixing all points of $\Sigma$, such that
$(U_{\alpha}, \varphi_{\alpha})$
is compatible with $\left(h(U'_{\beta}), \varphi'_{\beta} \circ h^{-1}\right).$
Thus the equivalence class ${\mathbf q}$ of a marked translation surface is a subset
of that of the corresponding translation surface $q$, and we will say
that ${\mathbf q}$ is obtained from $q$ by {\em specifying a marking}, $q$
from ${\mathbf q}$ by {\em forgetting the marking}, and
write $q =\pi({\mathbf q})$.
Note that our convention is that singularities are labelled.
Pulling back $dx$ and $dy$ from the coordinate charts we obtain two
well-defined closed 1-forms, which we can integrate along any path
$\alpha$ on $S$. If $\alpha$ is a cycle or has endpoints in $\Sigma$
(a relative cycle), then the result, which we denote by
$${\mathrm{hol}}(\alpha, {\mathbf q}) = \left(\begin{matrix} x(\alpha, {\mathbf q}) \\
y(\alpha, {\mathbf q}) \end{matrix} \right) \in {\mathbb{R}}^2,$$
depends only on the homology class of $\alpha$ in $H_1(S, \Sigma)$. We
let ${\mathrm{hol}}({\mathbf q}) = {\mathrm{hol}}(\cdot, {\mathbf q})$ be the corresponding element of $H^1(S, \Sigma;
{\mathbb{R}}^2)$, with coordinates $x({\mathbf q}), y({\mathbf q})$ in $H^1(S, \Sigma; {\mathbb{R}})$.
A {\em saddle connection} for $q$ is a straight segment which connects
singularities and does not contain singularities in its interior.
The set of all (marked) translation surfaces on $(S,
\Sigma)$ of type $\mathbf{r}$ is called the {\em stratum of
(marked) translation surfaces of type $\mathbf{r}$} and is denoted by
${\mathcal{H}}(\mathbf{r})$ (resp. $\til {\mathcal{H}}(\mathbf{r})$). We have suppressed
the dependence on $\Sigma$ from the notation since for a given type
$\mathbf{r}$ there is an
isomorphism between the corresponding set of translation surfaces on
$(S, \Sigma)$ and on $(S, \Sigma')$ for any other finite subset
$\Sigma' = \left(\xi'_1, \ldots, \xi'_k\right)$.
The map ${\mathrm{hol}}: \til {\mathcal{H}} \to H^1(S, \Sigma; {\mathbb{R}}^2)$ just defined gives
local charts for $\til {\mathcal{H}}$, endowing it (resp. ${\mathcal{H}}$) with the
structure of an affine manifold (resp. orbifold).
To see how this works, fix a triangulation $\tau$ of $S$ with vertices
in $\Sigma$. Then ${\mathrm{hol}}({\mathbf q})$ associates a vector in the plane to each
oriented edge in $\tau$, and hence associates an oriented Euclidean
triangle to each oriented triangle of $\tau$. If all the orientations
are consistent, then a translation structure with the same holonomy as
${\mathbf q}$ can be realized explicitly by gluing the Euclidean triangles to
each other. Let $\til {\mathcal{H}}_\tau$ be the set of all translation
structures obtained in this way (we say that $\tau$ is {\em realized
geometrically} in such a structure). Then the restriction ${\mathrm{hol}}:
\til {\mathcal{H}}_{\tau} \to H^1(S, \Sigma;
{\mathbb{R}}^2)$ is injective and maps onto an open subset. Conversely every
${\mathbf q}$ admits some geometric triangulation (e.g. a Delaunay
triangulation as in \cite{MS}) and hence $\til {\mathcal{H}}$ is covered
by the $\til {\mathcal{H}}_{\tau}$, and so these provide an atlas for a linear
manifold structure on $\til {\mathcal{H}}$. We should remark that a topology on
$\til {\mathcal{H}}$ can be defined independently of this, by considering nearly
isometric comparison maps between different translation structures,
and that this topology is the same as that induced by the charts of
hol.
Let $\Mod(S, \Sigma)$ denote the mapping class group, i.e. the
orientation preserving homeomorphisms of $S$ fixing $\Sigma$
pointwise, up to an isotopy fixing $\Sigma$.
The map hol is $\Mod(S, \Sigma)$-equivariant. This means that for
any $\varphi \in \Mod(S, \Sigma)$, ${\mathrm{hol}}({\mathbf q} \circ \varphi) =
\varphi_* {\mathrm{hol}}({\mathbf q})$, which is nothing more than the linearity of the
holonomy map with respect to its first argument.
One can show that the $\Mod(S, \Sigma)$-action on $\til {\mathcal{H}}$
is properly discontinuous. Thus ${\mathcal{H}} = \til {\mathcal{H}} /\Mod(S, \Sigma)$ is a
linear orbifold and $\pi: \til {\mathcal{H}} \to {\mathcal{H}}$ is an orbifold covering
map.
Since $\Mod(S, \Sigma)$ contains a finite index torsion-free subgroup
(see e.g. \cite[Chap. 1]{Ivanov}), there is a finite cover
$\hat{{\mathcal{H}}} \to {\mathcal{H}}$ such that $\hat{{\mathcal{H}}}$ is a manifold, and we have
\eq{eq: dimension of HH}{
\dim {\mathcal{H}} = \dim \til {\mathcal{H}} = \dim \hat{{\mathcal{H}}}= \dim H^1(S, \Sigma; {\mathbb{R}}^2)= 2(2g+k-1).
}
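The last equality in \equ{eq: dimension of HH} is the standard count coming from the long exact sequence of the pair $(S, \Sigma)$; we record the computation for convenience:

```latex
% Exact sequence of the pair (S, Sigma) with real coefficients;
% exactness at the ends uses H^0(S, \Sigma) = 0 and H^1(\Sigma) = 0.
\[
0 \to H^0(S;{\mathbb{R}}) \to H^0(\Sigma;{\mathbb{R}}) \to H^1(S,\Sigma;{\mathbb{R}})
  \to H^1(S;{\mathbb{R}}) \to 0.
\]
% The alternating sum of dimensions 1 - k + \dim - 2g vanishes, so
\[
\dim H^1(S,\Sigma;{\mathbb{R}}) = 2g + k - 1,
\]
% and {\mathbb{R}}^2 coefficients double this, giving 2(2g + k - 1).
```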
The Poincar\'e--Hopf index theorem implies that
\eq{eq: Gauss Bonnet}{\sum r_j = 2g-2.}
There is an action of $G = \operatorname{SL}(2,{\mathbb{R}})$ on ${\mathcal{H}}$ and on $\til {\mathcal{H}}$ by post-composition on each
chart in an atlas. The projection $\pi : \til {\mathcal{H}} \to {\mathcal{H}}$ is $G$-equivariant.
The $G$-action is linear in the homology coordinates, namely, given a
marked translation surface structure ${\mathbf q}$ and $\gamma \in
H_1(S, \Sigma)$, and given $g \in G$, we have
\eq{eq: G action}{
{\mathrm{hol}}(\gamma, g{\mathbf q}) = g \cdot {\mathrm{hol}}(\gamma, {\mathbf q}),
}
where on the right hand side, $g$ acts on ${\mathbb{R}}^2$ by matrix
multiplication.
We will write
\[
g_t = \left(\begin{array}{cc} e^{t/2} & 0 \\ 0 & e^{-t/2}
\end{array}
\right), \ \ \ \ \
r_{\theta}=
\left(\begin{array}{cc}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta
\end{array}
\right).
\]
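Taking $h_s = \left(\begin{smallmatrix} 1 & s \\ 0 & 1 \end{smallmatrix}\right)$ for the upper triangular unipotent subgroup (cf. \equ{eq: horocycle matrix}), the standard renormalization identity $g_t h_s g_{-t} = h_{s e^t}$ and the shearing of holonomy vectors in \equ{eq: G action} can be checked directly. The sketch below (an illustration only, with arbitrary sample values) does so.

```python
import math

def mul(g, h):
    """2x2 matrix product."""
    return tuple(tuple(sum(g[i][m] * h[m][j] for m in range(2))
                       for j in range(2)) for i in range(2))

def g_t(t):                      # diagonal (geodesic) matrices
    return ((math.exp(t / 2), 0.0), (0.0, math.exp(-t / 2)))

def h_s(s):                      # upper triangular unipotent (horocycle) matrices
    return ((1.0, s), (0.0, 1.0))

def act(g, v):                   # action on a holonomy vector hol(gamma, q) in R^2
    return (g[0][0] * v[0] + g[0][1] * v[1],
            g[1][0] * v[0] + g[1][1] * v[1])

t, s = 0.7, 0.3
conj = mul(mul(g_t(t), h_s(s)), g_t(-t))   # g_t h_s g_{-t}
renorm = h_s(s * math.exp(t))              # h_{s e^t}
sheared = act(h_s(s), (2.0, 5.0))          # x-component becomes x + s*y
```

In particular $h_s$ adds $s$ times the vertical component to the horizontal one, which is the shearing used in the lifting arguments below.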
\ignore{
\combarak{We may or may not need the discussion of the entire moduli space
below, depending on what we can say
about `nondivergence along REL'.}
For a fixed genus $g$, the different strata of genus $g$ glue together
to form the {\em moduli space of translation surfaces of genus
$g$}. This space, which we denote by $\Omega_g$, is
sometimes referred to as the {\em bundle of holomorphic $1$-forms over
moduli space.} It is also a non-compact orbifold in which the strata
are locally open sub-orbifolds. The $G$-action extends continuously to
an action on $\Omega_g$. The precise definition of the orbifold
topology on $\Omega_g$ will not play a role in this paper.
}
\subsection{Interval exchange transformations}
\name{subsection: iets}
Suppose $\sigma$ is a permutation on $d$ symbols.
For each
$${\bf{a}} \in {\mathbb{R}}^d_+ = \left\{(a_1, \ldots, a_d) \in {\mathbb{R}}^d :
\forall i, \, a_i>0 \right \}$$
we have an interval exchange transformation $\mathcal{T}_{\sigma}({\bf{a}})$ defined by
dividing the interval
$$I=I_{\bf{a}} = \left[0, \sum a_i \right)$$
into subintervals of lengths $a_i$ and permuting them
according to $\sigma.$ It is
customary to take these intervals as closed on the left and open on
the right, so that the resulting map has $d-1$ discontinuities and is
right-continuous.
More precisely, set
$x_0=x'_0=0$ and for $i=1,
\ldots, d$,
\eq{eq: disc points2}{
x_i=x_i({\bf{a}})
= \sum_{j=1}^i a_j, \ \ \
x'_i =x'_i({\bf{a}})
= \sum_{j=1}^i a_{\sigma^{-1}(j)} = \sum_{\sigma(k)\leq i} a_k;
}
then for every $x \in I_i=[x_{i-1}, x_i)$ we have
\eq{eq: defn iet}{
\mathcal{T}(x)=\mathcal{T}_{\sigma}({\bf{a}})(x)=x-x_{i-1}+x'_{\sigma(i)-1} = x-x_i+x'_{\sigma(i)}.
}
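In computational terms, \equ{eq: disc points2} and \equ{eq: defn iet}
translate directly into code. The following is a minimal sketch (the
function names are ours, not from the text):

```python
from itertools import accumulate

def iet(sigma, a):
    """The interval exchange T_sigma(a) on [0, sum(a)).

    sigma: dict sending i to sigma(i) for i = 1, ..., d; a: lengths (a_1, ..., a_d).
    """
    d = len(a)
    x = [0.0] + list(accumulate(a))            # the points x_i of eq: disc points2
    inv = {sigma[i]: i for i in range(1, d + 1)}
    # x'_i: partial sums of the permuted lengths a_{sigma^{-1}(j)}
    xp = [0.0] + list(accumulate(a[inv[j] - 1] for j in range(1, d + 1)))

    def T(t):
        # locate the interval I_i = [x_{i-1}, x_i) containing t
        i = next(i for i in range(1, d + 1) if t < x[i])
        return t - x[i - 1] + xp[sigma[i] - 1]  # eq: defn iet
    return T

# sigma = (2 1): exchanging two intervals of lengths 0.6 and 0.4
T = iet({1: 2, 2: 1}, [0.6, 0.4])
# T sends [0, 0.6) to [0.4, 1.0) and [0.6, 1.0) to [0, 0.4)
```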
In particular, if (following Veech \cite{Veech-measures}) we let $Q$ be
the alternating bilinear form
given by
\eq{eq: defn Q1}{
Q({\mathbf{e}}_i, {\mathbf{e}}_j) = \left\{\begin{matrix}1 && i>j, \, \sigma(i)<\sigma(j) \\ -1
&& i<j, \, \sigma(i) >\sigma(j) \\ 0 && \mathrm{otherwise}
\end{matrix} \right.
}
where ${\mathbf{e}}_1, \ldots, {\mathbf{e}}_d$ is the standard basis of ${\mathbb{R}}^d$,
then
$$\mathcal{T}(x) - x= Q({\bf{a}}, {\mathbf{e}}_i) \ \ \ \mathrm{for} \ x \in I_i.
$$
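This identity is easy to verify numerically. The sketch below (with
names of our own choosing) compares the displacement $\mathcal{T}(x) - x$ on
each $I_i$ with $Q({\bf{a}}, {\mathbf{e}}_i) = \sum_j a_j Q({\mathbf{e}}_j, {\mathbf{e}}_i)$:

```python
from itertools import accumulate

def Q(sigma, i, j):
    # the alternating form of eq: defn Q1 evaluated on (e_i, e_j)
    if i > j and sigma[i] < sigma[j]:
        return 1
    if i < j and sigma[i] > sigma[j]:
        return -1
    return 0

def displacement(sigma, a, i):
    # T(x) - x on the interval I_i, read off from eq: defn iet
    d = len(a)
    x = [0.0] + list(accumulate(a))
    inv = {sigma[k]: k for k in range(1, d + 1)}
    xp = [0.0] + list(accumulate(a[inv[j] - 1] for j in range(1, d + 1)))
    return xp[sigma[i] - 1] - x[i - 1]

sigma = {1: 3, 2: 1, 3: 2}
a = [0.5, 0.3, 0.2]
q_vals = [sum(a[j - 1] * Q(sigma, j, i) for j in range(1, 4)) for i in (1, 2, 3)]
disps = [displacement(sigma, a, i) for i in (1, 2, 3)]
# the two lists agree entry by entry, illustrating T(x) - x = Q(a, e_i)
```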
An interval exchange $\mathcal{T}: I \to I$ is said to be {\em minimal} if
there are no proper closed $\mathcal{T}$-invariant subsets of $I$.
We say
that $\mathcal{T}$ is {\em uniquely ergodic} if the only invariant measure for
$\mathcal{T}$, up to scaling, is Lebesgue measure.
We will say that ${\bf{a}} \in {\mathbb{R}}^d_+$ is minimal or uniquely ergodic if
$\mathcal{T}_{\sigma}({\bf{a}})$ is.
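For instance, when $d = 2$ and $\sigma = (2\ 1)$, the map
$\mathcal{T}_\sigma({\bf{a}})$ is the rotation $x \mapsto x + a_2 \mod (a_1+a_2)$,
which is minimal and uniquely ergodic precisely when $a_2/(a_1+a_2)$
is irrational. A numerical sketch of unique ergodicity (Birkhoff
averages converging to Lebesgue measure; the constants are our own):

```python
import math

# The 2-IET with sigma = (2 1) and a = (1 - rho, rho) is rotation by rho.
rho = (math.sqrt(5) - 1) / 2          # golden rotation number, irrational
T = lambda x: (x + rho) % 1.0

# Unique ergodicity forces orbit averages toward Lebesgue measure:
n, hits, x = 20000, 0, 0.1
for _ in range(n):
    hits += (x < 0.3)
    x = T(x)
# hits / n approximates 0.3, the Lebesgue measure of [0, 0.3)
```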
Below we will assume that
$\sigma$ is {\em irreducible}, i.e. there is no $k<d$ such that
$\sigma$ leaves the subset $\{1, \ldots,
k\}$ invariant, and {\em admissible} (in Veech's sense), see
\S\ref{subsection: transversal}.
For the questions about interval exchanges which we will
study, these hypotheses on $\sigma$ entail no loss of generality.
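Irreducibility as defined above is straightforward to test; the
following sketch (with a function name of our own choosing) checks
whether some $k < d$ has $\sigma(\{1, \ldots, k\}) = \{1, \ldots, k\}$:

```python
def is_irreducible(sigma):
    """True if no k < d has sigma({1, ..., k}) = {1, ..., k}."""
    d = len(sigma)
    # sigma fixes {1, ..., k} exactly when max(sigma(1), ..., sigma(k)) == k
    return all(max(sigma[i] for i in range(1, k + 1)) > k
               for k in range(1, d))

# (3 1 2) is irreducible; (2 1 3) leaves {1, 2} invariant and is not
```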
It will be helpful to consider a more general class of maps which we
call {\em generalized interval exchanges.} Suppose $J$ is a finite
union of intervals. A generalized interval exchange $\mathcal{T}: J \to J$ is
an orientation preserving piecewise isometry of $J$, i.e. it is a map
obtained by subdividing $J$ into finitely many subintervals and
re-arranging them to obtain $J$. These maps are not often considered because
studying their dynamics easily reduces to studying interval
exchanges. However, they will arise naturally in our setup.
\subsection{Measured foliations, transversals, and interval exchange
induced by a translation surface}\name{subsection: transversal}
Given a surface $S$ and a finite $\Sigma \subset S$, a
{\em singular foliation (on $S$ with singularities in $\Sigma$)} is a
foliation ${\mathcal{F}}$ on $S \smallsetminus \Sigma$ such that for any $z \in \Sigma$ there
is $k=k_z \geq 3$ such that ${\mathcal{F}}$ extends to form a $k$-pronged
singularity at $z$. A singular foliation ${\mathcal{F}}$ is orientable if there
is a continuous choice of a direction on each leaf. If ${\mathcal{F}}$
is orientable then $k_z$ is even for all
$z$. Leaves which meet the singularities are called {\em critical.}
A {\em transverse measure} on a singular foliation ${\mathcal{F}}$ is a family
of measures defined on arcs transverse to the
foliation and invariant under restriction to subsets and isotopy
through transverse arcs. A {\em measured foliation} is a singular
foliation equipped with a transverse measure, which we further require
to have no atoms and to have full support (no open transverse arc has measure zero). We will only consider
orientable singular foliations which
can be equipped with a transverse measure. This implies that the
surface $S$ is decomposed into finitely many domains on each of which
the foliation is either minimal (any ray is dense) or periodic (any
leaf is periodic). A periodic component is also known as a {\em
cylinder}. These components are separated by saddle
connections.
Given a flat surface
structure ${\mathbf q}$ on $S$, pulling back via charts the vertical and
horizontal foliations on ${\mathbb{R}}^2$ give oriented singular foliations on
$S$ called the vertical and horizontal foliations,
respectively. Transverse measures are defined by integrating the
pullbacks of $dx$ and $dy$, i.e. they correspond to the holonomies
$x({\mathbf q})$ and $y({\mathbf q})$. Conversely, given two oriented everywhere
transverse measured foliations
on $S$ with singularities in $\Sigma$, one obtains an atlas of charts
as in \S\ref{subsection: strata} by integrating the measured
foliation. I.e., for each $z \in S \smallsetminus \Sigma$, a local coordinate
system is obtained by taking a simply connected neighborhood $U
\subset S \smallsetminus \Sigma$ of $z$ and defining the two coordinates of
$\varphi(w) \in {\mathbb{R}}^2$ to be the integral of the measured foliations
along some path connecting $z$ to $w$ (where the orientation of the
foliations is used to determine the sign of the integral). One can
verify that this procedure produces an atlas with the required
properties.
We will often risk confusion by using the
symbol ${\mathcal{F}}$ to denote both a measured foliation and the
corresponding singular foliation supporting it. A singular
foliation is called {\em minimal} if any noncritical leaf is dense,
and {\em uniquely ergodic} if there is a unique (up to scaling)
transverse measure on $S$ which is supported on noncritical
leaves. Where confusion is unavoidable we say that
$q$ is minimal or uniquely ergodic if its vertical foliation is.
At a singular point $p \in \Sigma$ with $k$ prongs, a small
neighborhood of $p$ divides into $k$ foliated disks, glued along
leaves of ${\mathcal{F}}$, which we call {\em foliated half-disks}. A foliated
half-disk is either contained in a single periodic component or in a
minimal component.
Now let ${\mathcal{F}}$ be a singular foliation on a surface $S$ with
singularities in $\Sigma$. We will consider three kinds of
transversals to ${\mathcal{F}}$.
\begin{itemize}
\item
We define a {\em transverse system} to be an injective map
$\gamma: J \to S$ where $J$ is a finite union of intervals $J_i$, the
restriction of $\gamma$ to each interval $J_i$ is a smooth embedding,
the image
of the interior of $J$ intersects every non-critical leaf of ${\mathcal{F}}$
transversally, and does not intersect $\Sigma$.
\item
We define a {\em
judicious curve} to be a transverse system $\gamma: J \to S$ with $J$
connected, such that $\gamma$ begins and ends at
singularities, and the interior of $\gamma$ intersects all leaves
including critical ones.
\item
We say a transverse system $\gamma$ is {\em special} if all its
components are of the following types (see Figure \ref{figure:
special}):
\begin{itemize}
\item For every foliated half-disk $D$ of a singularity $p \in
\Sigma$ which is contained in a minimal component, there is a
component of $\gamma$ whose interior intersects $D$ and which terminates
at $p$. This component of $\gamma$ meets $\Sigma$ at only one
endpoint.
\item
For every periodic
component (cylinder) $P$ of ${\mathcal{F}}$, $\gamma$ contains one arc crossing $P$
and joining two singularities on opposite sides of $P$.
\end{itemize}
\end{itemize}
\begin{figure}[htp] \name{figure: special}
\center{\includegraphics{special2.ps}}
\caption{A {\em special} transverse system cuts across periodic components
and into minimal components.}
\end{figure}
Note that since every non-critical leaf in a minimal component is
dense, the non-cylinder edges of a special transverse system can be
made as short as we like, without destroying the property that they
intersect every non-critical leaf.
In each of these cases, we can
parametrize points of $\gamma$ using the transverse measure, and
consider the first return map to $\gamma$ when moving up along
vertical leaves. When $\gamma$ is a judicious curve, this
is an interval exchange transformation which we denote by $\mathcal{T}({\mathcal{F}}, \gamma)$,
or by $\mathcal{T}(q, \gamma)$ when ${\mathcal{F}}$ is the vertical foliation of
$q$. Then there is a unique choice of $\sigma$ and ${\bf{a}}$ with $\mathcal{T}({\mathcal{F}},
\gamma) = \mathcal{T}_{\sigma}({\bf{a}})$, and with $\sigma$ an irreducible
admissible permutation. The corresponding number of intervals is
\eq{eq: for dim count}{
d=2g+|\Sigma|-1;
}
note that $d = \dim H^1(S, \Sigma; {\mathbb{R}}) = \frac12 \dim {\mathcal{H}}$ if $q \in
{\mathcal{H}}$.
The return
map to a transverse system is a generalized interval exchange. We
denote it also by $\mathcal{T}({\mathcal{F}}, \gamma)$. In each of the
above cases, any non-critical leaf returns to $\gamma$ infinitely many
times.
If $\gamma$ is a transversal to ${\mathcal{F}}$ then
$\mathcal{T}({\mathcal{F}}, \gamma)$ completely determines the transverse measure on
${\mathcal{F}}.$ In particular the vertical foliation of $q$ is
uniquely ergodic (minimal) if and only if $\mathcal{T}(q, \gamma)$ is for some
(any) transverse system $\gamma$.
There is an inverse construction which associates with an irreducible
permutation $\sigma$ a surface $S$ of genus $g$, and a $k$-tuple ${\mathbf{r}}$
satisfying \equ{eq: Gauss Bonnet}, such that the
following holds. For any ${\bf{a}} \in {\mathbb{R}}^d_+$ there is a translation
surface structure $q \in {\mathcal{H}}({\mathbf{r}})$, and a transversal $\gamma$ on
$S$ such that $\mathcal{T}_\sigma({\bf{a}}) = \mathcal{T}(q, \gamma).$ Variants of this
construction can be found in \cite{ZK, Masur-Keane,
Veech-zippers}. Veech's admissibility condition amounts to requiring
that there is no transverse arc on $S$ for which $\mathcal{T}(q, \gamma)$
has fewer discontinuities.
Fixing $\sigma$, we say that a flat structure $q$ on $S$ is {\em a
lift} of ${\bf{a}}$ if there is a
judicious curve $\gamma$ on $S$ such that $\mathcal{T}(q,\gamma) =
\mathcal{T}_\sigma({\bf{a}})$. It is known that for any $\sigma$, there is a stratum
${\mathcal{H}}$ such that all lifts of all ${\bf{a}}$ lie in ${\mathcal{H}}$. We call it the
{\em stratum corresponding to $\sigma$.}
\ignore{
\subsection{Constructing a translation surface from an interval
exchange}\name{subsection: admissible}
There is an inverse construction which associates with an irreducible, admissible
permutation $\sigma$ a surface $S$ of genus $g$, and a $k$-tuple $r_1,
\ldots, r_k$ satisfying \equ{eq: Gauss Bonnet}, such that the
following holds. For any ${\bf{a}} \in {\mathbb{R}}^d_+$ there is a translation
surface structure $q \in {\mathcal{H}}(r_1, \ldots, r_k)$, and a transversal $\gamma$ on
$S$ such that $\mathcal{T}_\sigma({\bf{a}}) = \mathcal{T}(q, \gamma).$ Variants of this
construction can be found in \cite{ZK, Masur-Keane,
Veech-zippers}. Masur's construction will be convenient for us in the
sequel, and we recall it now.
Set
\eq{eq: Masur choice}{b_i = \sigma(i) - i,}
and in analogy with \equ{eq: disc points2}, set $y_0=y'_0=0,$ and for
$i=1, \ldots, d$,
\eq{eq: disc points1}{
y_i=y_i(\mathbf{b})= \sum_{j=1}^i b_j, \ \ \
y'_i =y'_i(\mathbf{b}) = \sum_{j=1}^i b_{\sigma^{-1}(j)} =
\sum_{\sigma(k)\leq i} b_k.
}
Then it is immediate from the irreducibility of $\sigma$ that $y_i>0$
and $y'_i<0$ for $i=1, \ldots, d-1$.
Set $x_i=x_i({\bf{a}}),\
x'_i = x'_i({\bf{a}})$ as in \equ{eq: disc points2},
and
\eq{eq: points in plane}{
P_i = (x_i, y_i), \ \ P'_i = (x'_i, y'_i).}
By the above the $P_i$ (resp. $P'_i$) lie above (resp. below) the
x-axis, and $P_0=P'_0, P_d=P'_d.$
Let $f$ (resp. $g$) be the piecewise linear functions
whose graph is the curve passing through all the points $P_i$
(resp. $P'_i$), and let
\eq{eq: defn mathcalP}{
\mathcal{P} = \left\{(x,y)
: x \in I,\, g(x) \leq y \leq f(x) \right
\}.
}
Then $\mathcal{P}$ is the closure of
a connected domain with polygonal boundary, see Figure \ref{figure: Masur}.
We may identify $(x,f(x))$ in the
upper boundary to $(\mathcal{T} x,g(\mathcal{T} x))$ in the lower
boundary to obtain a translation surface $q$. These
identifications induce an equivalence relation on
$\{P_0, \ldots, P_d, P'_0, \ldots, P'_d\}$, and one may compute
\combarak{the computation is in the tex file if we want to say more}
the total angle around each equivalence class. By definition, $\sigma$ is
{\em admissible in Veech's sense} if the total angle around each is more than $2\pi$,
so that each $P_i$ and $P'_j$ is identified with a singularity.
We have explicitly constructed an atlas of charts for a flat structure
$q$
on a surface of genus $g$, where $g$ is computed via \equ{eq: Gauss
Bonnet} from the angles around singularities.
\ignore{
To compute these
equivalence classes and the
total angle around each, suppose for $i \in \{1, \ldots, d-1\}$ that we are given a
downward-pointing vector based at $P_i$. If we rotate
this vector through an angle of $2\pi$, using boundary identifications so
that the vector is always pointing into the interior of $\mathcal{P}$,
we obtain a downward-pointing vector based at $P_{\til \sigma(i)}$,
where $\til \sigma : \{1,
\ldots, d-1\} \to \{1, \ldots, d-1\}$ is an auxiliary permutation
defined by
$$\til
\sigma(i) = \left\{ \begin{matrix}
\sigma^{-1}(\sigma(1)-1) && \sigma(i+1) =1 \\
\sigma^{-1}(d) && \sigma(i+1) = \sigma(d)+1 \\
\sigma^{-1}(\sigma(i+1)-1) &&
\mathrm{otherwise}
\end{matrix} \right.
$$
(note this differs slightly from the auxiliary permutation defined in
\cite{Veech-zippers}). The singularities are the equivalence classes
of $P_i$ under the relation generated by $i \sim \til \sigma(i)$ and
$0 \sim \sigma^{-1}(1)-1, d \sim \sigma^{-1}(d).$ The total angle
around a singularity is $2\pi k$ where $k$ is the length of the
corresponding cycle for $\til \sigma$. We will say that
$\sigma$ is {\em admissible} if all cycles for $\til
\sigma$ contain at least two elements, i.e., $r_j$ as in \S 2.2
satisfy $r_j\geq 1$ for
all $j$.
Thus $q_0 \in {\mathcal{H}} = {\mathcal{H}}(\mathbf{r})$ where $\mathbf{r}=(r_1, \ldots,
r_k)$.
, and in particular, by \equ{eq: Gauss Bonnet} we have
constructed a topological surface $S$ of genus at least 2.
}
Let $\gamma$ be a path passing from left to right through the
interior of
$\mathcal{P}$.
By construction $\gamma$ is judicious for $q$, and
$\mathcal{T}(q, \gamma) = \mathcal{T}_{\sigma}({\bf{a}}_0)$.
\begin{figure}[htp] \name{figure: Masur}
\input{masur.pstex_t}
\caption{Masur's construction: a flat surface constructed from an
interval exchange via a polygon.}
\end{figure}
Fixing $\sigma$, we say that a flat structure $q$ on $S$ is {\em a lift} of ${\bf{a}}$ if there is a
judicious curve $\gamma$ on $S$ such that $\mathcal{T}(q,\gamma) =
\mathcal{T}_\sigma({\bf{a}})$. It is known that for any $\sigma$, there is a stratum
${\mathcal{H}}$ such that all lifts of all ${\bf{a}}$ lie in ${\mathcal{H}}$. We call it the
{\em stratum corresponding to $\sigma$.}
\ignore{
Clearly $\mathcal{P}$, with the edge identifications described above,
gives a cellular decomposition of $S$ containing the points of $\Sigma$
as vertices (0-cells), which we will continue to denote by $\mathcal{P}$. In
particular there is a natural identification of $H^1(S, \Sigma; {\mathbb{R}})$ with
$H^1(\mathcal{P}, \Sigma; {\mathbb{R}})$. It is easy to see (and also follows
via \equ{eq: for dim count} by a dimension count) that the edges of $\mathcal{P}$ are
linearly independent cycles in $H_1(\mathcal{P}, \Sigma; {\mathbb{R}})$. Thus
there is no distinction between $H^1({\mathcal P}, \Sigma; {\mathbb{R}})$ and $C^1({\mathcal P},
\Sigma; {\mathbb{R}})$ (the space of 1-cochains).
}
We have shown that
there is a lift for any ${\bf{a}}$. It will be important for us that there
are many possible lifts; indeed in Masur's construction above, there
are many possible choices of $\mathbf{b}$ which would have worked just as well
as \equ{eq: Masur choice}. The set of all such $\mathbf{b}$'s will be
identified in
\S\ref{section: pairs}.
}
\subsection{Decomposition associated with a transverse system}
\name{subsection: decompositions}
Suppose ${\mathcal{F}}$ is an oriented singular foliation on $(S, \Sigma)$ and
$\gamma: J \to S$ is a transverse system to
${\mathcal{F}}$. There is an associated
cellular decomposition $\mathcal{B}=\mathcal{B}(\gamma)$ of $(S, \Sigma)$ defined as
follows. Let $\mathcal{T} = \mathcal{T}({\mathcal{F}}, \gamma)$ be the generalized interval
exchange corresponding to $\gamma$.
The 2-cells in $\mathcal{B}$ correspond to the intervals of continuity
of $\mathcal{T}$. For each such interval $I$, the corresponding cell consists
of the union of interiors of leaf intervals beginning at $I$ and
ending at $\mathcal{T}(I)$. It fibers over $I$ and hence has the
structure of an open topological rectangle. The boundary of a 2-cell
lies in $\gamma$ and in certain segments of leaves, and the union of
these forms the 1-skeleton. The 0-skeleton consists of points of
$\Sigma$, endpoints of $\gamma$, and points of discontinuity of $\mathcal{T}$
and $\mathcal{T}^{-1}$. Edges of the 1-skeleton lying on $\gamma$ will be
called {\em transverse edges} and edges lying on ${\mathcal{F}}$ will be called {\em
leaf edges}. Leaf edges inherit an orientation from ${\mathcal{F}}$ and
transverse edges inherit the transverse orientation induced by ${\mathcal{F}}$.
Note that opposite boundaries of a 2-cell could come from the same
points in $S$: a
particular example occurs for a special transverse system, where if
there is a transverse edge crossing a cylinder, that cylinder is
obtained as a single 2-cell with its bottom and top edges
identified. Such a 2-cell is called a {\em cylinder cell}.
It is helpful to consider a {\em spine} for $\mathcal{B}$, denoted
$\chi = \chi(\gamma)$, composed of the 1-skeleton of $\mathcal{B}$
together with one leaf $\ell_R$ for every rectangle $R$, traversing
$R$ from bottom to top. The spine is closely related to Thurston's
train tracks; indeed, if we delete from $\chi$ the singular points
$\Sigma$ and the leaf edges that meet them, and collapse each element
of the transversal $\gamma$ to a point, we obtain a train track that
`carries ${\mathcal{F}}$' in Thurston's sense. But note that keeping the deleted
edges allows us to keep track of information relative to $\Sigma$, and
in particular, to keep track of the saddle connections in ${\mathcal{F}}$.
\ignore{
Write $\gamma_1,
\ldots, \gamma_k$ for the intervals of continuity of $\mathcal{T}({\mathcal{F}}, \gamma)$; these
are by definition connected subarcs which are relatively open in
$\gamma$. Let $S_i$ be the union of pieces of leaves starting in the
interior of $\gamma_i$ and going along ${\mathcal{F}}$ in the positive direction until the next
meeting with $\gamma$. The interior of each $S_i$ does not contain any
point of $\Sigma$ since $\mathcal{T}({\mathcal{F}}, \gamma_i)$ is continuous, hence is
homeomorphic to a disk.
The 2-cells of $\mathcal{B}(\gamma)$ are the closures of the $S_i$'s. The
1-cells are either pieces of leaves of ${\mathcal{F}}$, or segments in $\gamma$
which are on the boundary of the $S_i$'s. We call the former {\em leaf
edges} and the latter {\em transverse edges}. We orient the leaf edges
according to the orientation of ${\mathcal{F}}$, and the transverse edges so
they always cross the foliation from left to right. We include 0-cells so that
a 1-cell cannot
contain a nontrivial segment in both ${\mathcal{F}}$ and $\gamma$.
We will need to modify $\mathcal{B}(\gamma)$ when ${\mathcal{F}}$ has periodic
leaves. In this case, the periodic leaves form finitely many annuli,
each of which is a union of 2-cells of $\mathcal{B}(\gamma)$. In each such
annulus we modify $\mathcal{B}(\gamma)$ so that the annulus contains exactly
one cell which goes fully around the annulus, that is,
is bounded by two periodic leaf edges and one transverse edge bounding
the cell from both sides \combarak{here we need a figure}. We
call such a cell a {\em cylinder cell}
and the transverse edge running inside it, a {\em
cylinder transverse edge}. Note that by construction,
a periodic leaf is contained in the interior of a cylinder cell if and
only if it intersects $\gamma$ exactly once.
The 0-cells include all points of $\Sigma$, hence any cohomology class
$\beta \in
H^1(S, \Sigma; {\mathbb{R}})$ can be represented by a closed 1-cochain on
the 1-skeleton $\mathcal{B}_1 \subset \mathcal{B}(\gamma)$; i.e. a function
$\hat{\beta}$ defined on $\mathcal{B}_1$ which
vanishes on boundaries of 2-cells. Such a representative is
not canonical, since any two may differ by a 1-coboundary.
\begin{prop}
\name{prop: cell decomposition}
Let $S$ be a surface and let ${\mathcal{F}}$ be a measured foliation on $S$ with
singularities in $\Sigma$. Let $\gamma_0$ be a transverse
system and let $\hat{\beta}$ be a 1-cocycle on $\mathcal{B}(\gamma_0)$
which is positive on leaf edges and vanishes on transverse
edges, except possibly on cylinder transverse edges.
Then there is a flat surface structure $q_0$ on $S$ for which ${\mathcal{F}}$ is
the vertical measured foliation, and whose horizontal measured
foliation represents the same
element in $H^1(S, \Sigma; {\mathbb{R}})$ as
$\hat{\beta}$.
\end{prop}
\begin{proof}
Let $\hat{\alpha}$ be the 1-cocycle represented by the measured
foliation ${\mathcal{F}}$, so that $\hat{\alpha}$ vanishes on leaf edges. To any edge
$e \in \mathcal{B}(\gamma)$ we assign coordinates $(\hat{\alpha}(e),
\hat{\beta}(e))$.
Let $R$ be a cell of $\mathcal{B}(\gamma_0)$. It has 2
pairs of opposite sides: a pair of {\em horizontal sides} made of
transverse edges going in opposite
directions, and a pair of {\em vertical sides} made of
leaf edges going in opposite directions. The assumption on
$\hat{\beta}$ ensures that vertical sides are
assigned vertical vectors, and the orientation of these vectors
respects the orientation on $\mathcal{B}(\gamma)$. If $R$ is not a cylinder
cell, the assumption on $\hat{\beta}$ also ensures that horizontal
sides are assigned horizontal vectors respecting the orientation.
Since $\hat{\alpha},
\hat{\beta}$ are cocycles, opposite sides have the same length. That is
$R$ has
the geometry of a Euclidean rectangle. Similarly, if $R$ is a cylinder
cell, it has the geometry of a Euclidean parallelogram with
vertical sides.
By linearity of $\hat{\alpha}, \hat{\beta}$, the rectangles and
parallelograms
can be glued to each other consistently. This produces an explicit
atlas of charts for $q_0$ with the advertised properties.
\end{proof}
}
\combarak{This was not precise enough. I am guessing that $Q({\bf{a}}, \mathbf{b})$
is the intersection of ${\bf{a}}$ and $\mathbf{b}$ when ${\bf{a}}$ is thought of as an
element of $H_1(S, \Sigma)$ and $\mathbf{b}$, as an element of $H^1(S, \Sigma)
\cong H_1(S \smallsetminus \Sigma)$. Is this right?}
\combarak{A remark on the relation of $\mathcal{B}(\gamma)$ to train tracks
belongs here.}
\subsection{Transverse cocycles, homology and
cohomology}\name{subsection: cohomology}
We now describe cycles supported on a foliation ${\mathcal{F}}$ and their dual
cocycles.
We will see that a transverse measure $\mu$ on ${\mathcal{F}}$ defines an
element $[c_\mu] \in H_1(S, \Sigma)$, expressed concretely as a cycle
$c_\mu$ in the spine $\chi(\gamma)$ of a transverse
system. Poincar\'e duality identifies $H_1(S, \Sigma)$ with $H^1(S \smallsetminus
\Sigma)$, and the dual $[d_\mu]$ of $[c_\mu]$ is represented by the
cochain corresponding to integrating the measure $\mu$.
If $\mu$ has no atoms, then in fact we obtain $[c'_\mu] \in H_1(S \smallsetminus
\Sigma)$, and its dual $[d'_mu]$ lies in $H^1(S, \Sigma)$. The natural
maps $H_1(S \smallsetminus \Sigma) \to H_1(S, \Sigma)$ and $H^1(S, \Sigma) \to
H^1(S \smallsetminus \Sigma)$ take $[c'_\mu]$ to $[c_\mu]$ and $[d'_\mu]$ to
$[d_\mu]$ respectively.
We will now describe these constructions in more detail.
Let $\gamma$ be a transverse system and $\chi(\gamma)$ the spine of
its associated complex $\mathcal{B}(\gamma)$ as above. Given $\mu$ we define a
1-chain on $\chi$ as follows. For each rectangle $R$ whose bottom side
is an interval $\kappa$ in $\gamma$, set $\mu(R) = \mu({\rm int} \,
\kappa)$ (using the interior is important here because of possible
atoms in the boundary). For each leaf edge $f$ of $\mathcal{B}$, set
$\mu(\{f\})$ to be the transverse measure of $\mu$ across $f$ (which
is 0 unless the leaf $f$ is an atom of $\mu$). The 1-chain
$x = \sum_R \mu(R) \ell_R + \sum_f \mu(\{ f\}) f$ may not be a
cycle, but we note that invariance of $\mu$ implies that, on each
component of $\gamma$, the sum of measures taken with sign (ingoing
vs. outgoing) is 0, so that $\partial x$ restricted to each component
is null-homologous. Hence by `coning off' $\partial x$ in each
component of $\gamma$ we can obtain a cycle of the form:
\eq{eq: explicit form cycle}{
c_\mu = \sum_{R \ \mathrm{rectangle}} \mu(R) \ell_R + \sum_{f
\ \mathrm{leaf \ edge \ of \ } \mathcal{B}} \mu(\{f\}) f + z,
}
where $z$ is a 1-chain supported in $\gamma$ such that $\partial z =
-\partial x$. Invariance and additivity of $\mu$ imply that the
homology class in $H_1(S, \Sigma)$ is independent of the choice of
$\gamma$.
The cochain $d_\mu$ is constructed as follows: in any product
neighborhood $U$ for ${\mathcal{F}}$ in $S \smallsetminus \Sigma$, integration of $\mu$
gives a map $U \to {\mathbb{R}}$, constant along leaves (but discontinuous at
atomic leaves). On any oriented path in $U$ with endpoints off the
atoms, the value of the cochain is obtained by mapping endpoints to
${\mathbb{R}}$ and subtracting. Via subdivision this extends to a cochain
defined on 1-chains whose boundary misses atomic leaves. This cochain
is a cocycle via additivity and invariance of the measures, and
suffices to give a cohomology class (or one may extend it to all
1-chains by a suitable chain-homotopy perturbing vertices on atomic
leaves slightly).
In the case with no atoms, we note that the expression for $c_\mu$ has
no terms of the form $\mu(\{f\}) f$, and hence we get a cycle in $S
\smallsetminus \Sigma$. The definition of the cochain extends in that case to
neighborhoods of singular points, and evaluates consistently on
relative 1-chains, giving a class in $H^1(S, \Sigma)$.
A cycle corresponding to a transverse measure will be called a {\em
(relative) cycle carried by ${\mathcal{F}}$}. The set of all (relative) cycles
carried by ${\mathcal{F}}$ is a convex cone in $H_1(S, \Sigma; {\mathbb{R}})$ which we
denote by $H^{\mathcal{F}}_+$. Since we allow atomic measures, we can think of
(positively oriented) saddle connections or closed leaves in ${\mathcal{F}}$ as
elements of $H^{\mathcal{F}}_+$. Another way of constructing cycles carried by
${\mathcal{F}}$ is the Schwartzman asymptotic cycle construction
\cite{Schwartzman}.
These are projectivized limits of long loops which are mostly in ${\mathcal{F}}$
but may be
closed by short segments transverse to ${\mathcal{F}}$.
It is easy to see that $H^{{\mathcal{F}}}_+ \cap H_1(S)$ is the convex cone
over the asymptotic cycles, or equivalently the image of the
non-atomic transverse measures, and that $H^{{\mathcal{F}}}_+$ is the convex
cone over asymptotic cycles and positive saddle connections in ${\mathcal{F}}$.
Generically (when ${\mathcal{F}}$ is uniquely ergodic and contains no
saddle connections) $H^{{\mathcal{F}}}_+$ is one-dimensional
and is spanned by the so-called {\em asymptotic cycle} of
${\mathcal{F}}$.
\ignore{
A {\em transverse cocycle} to a
singular foliation ${\mathcal{F}}$ is
a function $\beta$ assigning a number $\beta(\gamma) \in {\mathbb{R}}$ to any
arc transverse to ${\mathcal{F}}$, which is {\em invariant}
($\beta(\gamma)=\beta(\gamma')$ when $\gamma$ and $\gamma'$
are isotopic via an isotopy through transverse arcs which moves points
along leaves), and {\em strongly finitely additive} ($\beta(\gamma \cup
\gamma') = \beta(\gamma)+\beta(\gamma')$ if $\gamma, \gamma'$ have
disjoint interiors). Note that the strong finite additivity condition is
formulated so that it rules out `atoms'.
With a transverse cocycle $\beta$ on an oriented singular foliation
${\mathcal{F}}$ one can associate a cohomology class in $H^1(S, \Sigma;
{\mathbb{R}})$. Although this construction is well-known we will describe it in
detail. We will define a cover of $S$ by small open sets
$U$, and for each
$U$ define a 1-cochain $\hat{\beta}_U$ supported on $U$ (i.e. a map giving
values to each 1-chain with image in $U$). Then for each path $\delta :
[0,1] \to S$ representing a 1-chain in $H_1(S, \Sigma)$, we will define
$\beta(\delta)$ as $\sum_1^n \hat{\beta}_{U_i}(\delta_i)$, where the
image of $\delta_i$ is contained in $U_i$ and $\delta$ is the
concatenation of the $\delta_i$.
The $U$ and $\hat{\beta}_U$ are defined as follows. If $x \in
S\smallsetminus\Sigma$, we take a neighborhood $U$ of $x$ such that ${\mathcal{F}}|_U$ has
a product structure and any two points of $U$ which are not in the
same plaque can be joined by an arc
in $U$ which is everywhere transverse to ${\mathcal{F}}$. Then for $\delta: [0,1] \to U$,
if $\delta(0)$ and $\delta(1)$ are in the same plaque we set
$\hat\beta_U(\delta)=0$, and otherwise, we let $\til \delta$ be an
everywhere transverse arc from $\delta(0)$
to $\delta(1)$,
and put $\hat{\beta}_U(\delta) = \pm
\beta(\til \delta)$, where the sign is positive if and only if $\til \delta$
crosses leaves from left to right. If $x \in \Sigma$, take a
neighborhood $U$ of $x$ so that any
point in $U$ can be joined to $x$ by a transverse arc or an arc in $U$
contained in a leaf, and so that there is a
branched cover $U \to {\mathbb{R}}^2$, branched at $x$, such that ${\mathcal{F}}|_U$ is
the pullback of a product foliation on ${\mathbb{R}}^2$. Then for $\delta: [0,1] \to
U$, we let $\til \delta_1, \til \delta_2$ be
arcs in $U$ from $\delta(0)$ to $x$ and from $x$ to $\delta(1)$ everywhere
transverse to ${\mathcal{F}}$, or contained in ${\mathcal{F}}$, and let $\hat{\beta}_U(\delta) = \pm \beta(\til
\delta_1) \pm \beta(\til \delta_2)$, with signs chosen as before. One
can check (using finite additivity) that this definition does not
depend on the choices and (using invariance) that $\beta(\delta)$
only depends on the homology class of $\delta$.
Let $\gamma$ be a transverse system for ${\mathcal{F}}$. Subdivide
$\gamma$ into finitely many sub-arcs $I$ on which $\mathcal{T}({\mathcal{F}}, \gamma)$
is continuous, and for each $I$ choose one segment $\ell_I$ starting
at $I$, ending at $\gamma$, contained in a leaf, with the leaf's
orientation, and with no points of $\gamma$ in its
interior. Collapsing each connected component of $\gamma$ to a point
we get a 1-dimensional cell
complex which is a deformation retract \combarak{NOT!} of $S \smallsetminus
\Sigma$. The 1-skeleton of this cell
complex is sometimes called the {\em ribbon graph} or {\em train
track} associated to $S, {\mathcal{F}}, \gamma$. It is a graph
embedded in $S$ and endowed with a cyclic ordering of the edges at
each vertex.
The chain
\eq{eq: concrete cycle}{
\sum_I \beta(I) \ell_I}
can be shown (using strong finite additivity) to be independent of
the choice of the $I$'s and closed, and (using invariance) that the
corresponding homology class is independent of the transverse system
$\gamma$. Note in particular that if $\gamma$ is a judicious curve, we
only have to specify the value of
$\beta$ on the maximal segments of continuity for $\mathcal{T}({\mathcal{F}}, \gamma)$, and that
any choice of such values is possible.
Given a train track $\tau$ carrying ${\mathcal{F}}$ and a transverse
cocycle $\beta$ on ${\mathcal{F}}$ we can identify $\beta$ with real-valued
weights on the branches of $\tau$ which satisfy the switch condition.
For instance, if we are given a judicious curve $\gamma$ for ${\mathcal{F}}$ we
can use it to define a train track $\tau$, with one switch and $d$ branches,
which carries ${\mathcal{F}}$: we simply collapse $\gamma$ to form the switch
and form one branch for each domain of continuity of $\mathcal{T}({\mathcal{F}},
\gamma)$, following ${\mathcal{F}}$ along in the positive direction until its
next intersection with $\gamma$. Note that $\tau$ is filling since
$\gamma$ intersects every leaf of ${\mathcal{F}}$, and is oriented since ${\mathcal{F}}$
is. A judicious system will similarly give rise to a more general
train track.
We can use this to associate with $\gamma$ and $\beta$ a homology
class ${\mathbf{x}} = {\mathbf{x}}({\mathcal{F}}, \beta, \gamma)$ in $H_1(S \smallsetminus \Sigma ;{\mathbb{R}}).$
This is just the formal sum of the
oriented branches of $\tau$, weighted according to $\beta$; it is
closed because of the switch condition.
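Explicitly, the switch condition at a switch $v$ of $\tau$ is the balance
equation
$$
\sum_{b \ \mathrm{incoming \ at} \ v} \beta(b) \ = \sum_{b \ \mathrm{outgoing \ at} \ v} \beta(b),
$$
which is precisely the condition that the weighted sum of oriented
branches has zero boundary at $v$.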
\combarak{Please check the following paragraph. Does it make sense?
And does it matter whether
$\gamma$ is connected, i.e. does it have to be `judicious'?}.
By Poincar\'e duality, we can also think
of $\beta$ as an element of the relative cohomology group
$H^1(S, \Sigma; {\mathbb{R}})$; indeed the intersection pairing $\iota:H_1(S \smallsetminus
\Sigma; {\mathbb{R}}) \times H_1(S,\Sigma; {\mathbb{R}}) \to {\mathbb{R}}$ is a nondegenerate bilinear form, so
can be used to identify $H_1(S \smallsetminus \Sigma; {\mathbb{R}})$ with $H^1(S,
\Sigma; {\mathbb{R}})$. Explicitly, if $\gamma$ is a judicious curve, $\beta$
is as in
\equ{eq: concrete cycle}, and $\delta$ is an oriented path which is
either closed or joins points
of $\Sigma$, then
\eq{eq: explicit cocycle}{
\beta(\delta) = \sum_I \beta(I) \iota(\delta, \ell_I).
}
\combarak{NOT! $\ell_I$ need not be an element in $H_1(S \smallsetminus \Sigma)$
if not closed.}
If $\alpha$ is a cycle in $H_1(S, \Sigma; {\mathbb{R}})$ and
$\beta$ is a transverse cocycle we write $\alpha \cdot \beta$ for the
value of $\beta$ on $\alpha$, and if ${\mathcal{F}}$ is a measured foliation we
write $[{\mathcal{F}}]$ for the cohomology class of ${\mathcal{F}}$.
Let $\gamma$ be judicious for ${\mathcal{F}}$.
It is easily checked \combarak{Here we need the dual triangulation to
a train track -- but I think that can be omitted} that the cycles
$\ell_I \in H_1(S \smallsetminus \Sigma; {\mathbb{R}})$ described above are linearly
independent.
It is instructive to compare transverse cocycles to transverse measures.
Like a transverse cocycle, a transverse measure is also
invariant under holonomy along leaves and is finitely
additive. Moreover it is uniquely determined by its values on a finite
number of arcs which intersect every leaf, so one may think of a
transverse measure as a special kind of transverse cocycle taking only
non-negative values. Note however that the additivity
requirement for measures is slightly
different, and this allows transverse measures to be atomic. Thus we
consider a Dirac measure on a saddle connection in ${\mathcal{F}}$ to be a
transverse measure.
Now given an oriented singular foliation ${\mathcal{F}}$, a {\em transverse
measure} is a function assigning a non-negative number $\mu(\gamma)$ to any
transverse arc $\gamma$, which is invariant and {\em finitely
additive} (i.e. $\mu(\gamma \cup \gamma') = \mu(\gamma)+\mu(\gamma')$
if $\gamma, \gamma'$ are
disjoint). There is a standard procedure for completing $\mu$ to a
measure defined on the Borel subsets of any transverse arc. Note that
the finite additivity condition does not rule out `atoms'.
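For example, if $\lambda$ is a saddle connection or closed leaf
contained in ${\mathcal{F}}$, then
$$
\mu(\gamma) = \# \left( \gamma \cap \lambda \right)
$$
defines a transverse measure which, on any transverse arc, is a finite
sum of Dirac masses at the points of $\gamma \cap \lambda$.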
As we saw, transverse cocycles give rise to cohomology
classes. If a measured foliation ${\mathcal{F}}$ is non-atomic then it satisfies
the strong finite additivity condition and hence gives rise to a cohomology
class, which we will denote by $[{\mathcal{F}}]$. \combarak{Do we need to assume
non-atomic?} Moreover, given ${\mathcal{F}}$,
considered only as a singular foliation, with any
transverse measure $\mu$ on ${\mathcal{F}}$ we can also associate a homology class
in $H_1(S, \Sigma; {\mathbb{R}})$. Once again we will risk offending our readers
by describing this well-known construction in detail.
Let $\gamma: J \to S$ be a transverse system to ${\mathcal{F}}$, let $\mathcal{T} =
\mathcal{T}({\mathcal{F}}, \gamma)$, and let $\mathcal{B}=
\mathcal{B}(\gamma)$ be the corresponding cell complex as in \S
\ref{subsection: decompositions}.
For each interval $J_i$ in $J$, we fix a point $x_i \in
\gamma(J_i)$. Now suppose a rectangle $R$ in $\mathcal{B}$ has bottom (resp. top)
transverse boundary components in $J_1, J_2$. Let $\ell_R$ be the path
from $x_1$ along $\gamma(J_1)$ to a point $x$ on the bottom edge of
$R$, then along ${\mathcal{F}}$ to $\mathcal{T}(x)$, and then along $\gamma(J_2)$ to
$x_2$. By invariance, the atomic part of $\mu$ must be supported on
saddle connections; for each saddle connection $f$ contained in ${\mathcal{F}}$,
we represent it by a concatenation $\ell_f$ of leaf edges in $\mathcal{B}$.
Set
\eq{eq: explicit form cycle}{
\alpha=\alpha_\mu = \sum_{R \mathrm{\ a \ rectangle}} \mu(R) \ell_R
\ + \sum_{f \
\mathrm{saddle \ connection}}
\mu(\{f\}) \ell_f,}
where $\mu(R)$ is the width of $R$ as measured by $\mu$.
Note that the paths $\ell_R$ may not begin or end at points of
$\Sigma$. However one can check (using finite additivity)
that $\alpha$ is actually an element of $H_1(S, \Sigma; {\mathbb{R}})$, and
(using invariance) that it does not depend on the choice of $\gamma$.
A cycle corresponding to a transverse
measure will be called a {\em cycle carried by ${\mathcal{F}}$.}
The set of all cycles carried by ${\mathcal{F}}$ is a convex cone in $H_1(S,
\Sigma; {\mathbb{R}})$ which we denote by $H^{{\mathcal{F}}}_+$. Since we allow atomic
measures, we can think of (positively oriented) saddle connections or
closed leaves in
${\mathcal{F}}$ as elements of $H^{{\mathcal{F}}}_+$. Another way of constructing cycles
carried by ${\mathcal{F}}$ is the
Schwartzman asymptotic cycle construction \cite{Schwartzman}.
These are projectivized limits of long loops which are mostly in ${\mathcal{F}}$
but may be
closed by short segments transverse to ${\mathcal{F}}$.
It is easy to see that $H^{{\mathcal{F}}}_+ \cap H_1(S)$ is the convex cone
over the asymptotic cycles, and that $H^{{\mathcal{F}}}_+$ is the convex
cone over asymptotic cycles and positive saddle connections in ${\mathcal{F}}$.
Generically (when ${\mathcal{F}}$ is uniquely ergodic and contains no
saddle connections) $H^{{\mathcal{F}}}_+$ is one-dimensional
and is spanned by the so-called {\em asymptotic cycle} of
${\mathcal{F}}$.
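Informally, choosing a point $x$ on a leaf of ${\mathcal{F}}$ and writing
$c_T(x)$ for the loop obtained by closing up the leaf segment of length
$T$ (in some fixed metric) issuing from $x$ by a short transverse arc,
the asymptotic cycle is the limit
$$
\lim_{T \to \infty} \frac{1}{T} \left[ c_T(x) \right] \in H_1(S; {\mathbb{R}}),
$$
which exists for almost every $x$ with respect to an ergodic transverse
measure.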
}
\ignore{
Given a transverse system $\gamma$ for ${\mathcal{F}}$, we will be interested in
cellular decompositions of $S$ with vertices in either the image of
$\gamma$ or $\Sigma$, and whose edges are either in ${\mathcal{F}}$ or in the
image of $\gamma$. We will call such a cellular decomposition {\em
adapted} to ${\mathcal{F}}$ and $\gamma$. Its edges have a natural orientation:
those which are parts of leaves are oriented by the orientation of
${\mathcal{F}}$, and those which are in the image of $\gamma$ can be oriented by
requiring that they cross leaves of ${\mathcal{F}}$ from left to right.
\begin{prop} Suppose $\gamma$ is a transverse system for ${\mathcal{F}}$, and
we are given a cellular decomposition of $S$ which is adapted to ${\mathcal{F}}$
and $\gamma$. Suppose $\beta$ is a transverse cocycle on $S$ such that
for each edge $\alpha$ in the
decomposition, $\alpha \cdot \beta>0$. Then there is a measured
foliation ${\mathcal{G}}$ on $S$, everywhere transverse to ${\mathcal{F}}$, with singularities in $\Sigma$, such that
$\beta = [{\mathcal{G}}]$.
\end{prop}
To
formulate it, suppose
${\mathcal{F}}$ is a singular foliation on $S$ with singularities in
$\Sigma$. For $z \in \Sigma$ suppose the singularity at $z$ is
$k_z$-pronged (so $k_z \geq 4$ is an even integer). Let $\tau$ be a filling
oriented train track carrying ${\mathcal{F}}$. Thus the complementary region
containing $z \in \Sigma$ is bounded by $k_z$
branches of $\tau$. An {\em augmentation} is a smooth oriented embedded graph
$\hat{\tau}$ containing $\tau$ and containing in addition
oriented branches connecting each vertex of each complementary region
to a distinguished point inside the complementary region. Note that
these additional branches need not be leaves of ${\mathcal{F}}$. In
particular the vertices
of $\hat{\tau}$ are the union of $\Sigma$ and the switches of $\tau$. Note that
$\hat{\tau}$ defines a triangulation of $S$ --- each side of each
branch $b$ of $\tau$ determines a triangle with one vertex in $\Sigma$ and
two vertices on the endpoints of $b$.
\begin{prop}
\name{prop: building a foliation}
Suppose ${\mathcal{F}}, \Sigma, \tau, \hat{\tau}$ are as above. Suppose also that
there is a function $\beta$ which assigns to each
edge $b$ of $\hat{\tau}$ a number $\beta(b)>0$ such
that each triangle of the triangulation has sides
$s_1, s_2, s_3$ satisfying
\eq{eq: condition on sides}{
\beta(s_1)+\beta(s_2) = \beta(s_3),}
where the vertex between $s_1$ and $s_2$ is in $\Sigma$.
Then there is an orientable singular foliation ${\mathcal{G}}$ on $S$,
transverse to ${\mathcal{F}}$, with singularities in
$\Sigma$ and such that the singularity at $z$ is $k_z$-pronged, on
which $\beta$ defines a transverse measure on ${\mathcal{G}}$.
\end{prop}
\begin{proof}
Let
$\Delta$ be a triangle in the triangulation with sides $s_1, s_2,
s_3$, taken with the positive orientation, so that $s_i \cdot \beta
>0$ for each $i$. Since $\beta$ is a cocycle we have $\sum \varepsilon_i s_i
\cdot \beta =0$, where
$\varepsilon_i = \pm 1$ according as the orientation of $s_i$ agrees with the
orientation induced by $\Delta$. So after a change of indices we have
$s_1 \cdot \beta + s_2 \cdot \beta = s_3 \cdot \beta$. Now one can define a
measured foliation ${\mathcal{G}}={\mathcal{G}}_{\Delta}$ on $\Delta$, without
singularities, which is transverse to ${\mathcal{F}}$ on the interior of
$\Delta$,
such that all the sides of $\Delta$ are perpendicular to leaves of
${\mathcal{G}}$, and the
measure of side $s_i$ is $s_i \cdot \beta$. Gluing together all the
${\mathcal{G}}_{\Delta}$ gives a measured foliation ${\mathcal{G}}$ on $S$ with $\beta =
[{\mathcal{G}}]$.
\end{proof}
We now suppose ${\mathcal{F}}$ is equipped with a transverse cocycle, which we
denote by $\hat{{\mathcal{F}}}$, and
explain how to use $\gamma$ to associate with it a class in the
relative cohomology group $H^1(S, \Sigma; {\mathbb{R}})$. Let $c_i$ be the
system of curves above. Collapsing $\gamma$ to a point, we obtain a
bouquet which is homotopy equivalent to $S \smallsetminus \Sigma$. Let $\Delta$ be the
graph dual to these curves, so that each edge in $\Delta$ intersects
exactly one of the $c_i$'s once. We specify an orientation on the
edges in $\Delta$ by requiring that $\delta_i$ always crosses $c_i$
from left to right \combarak{Does it make a difference which way we
choose?}. The $\delta_i$ generate $H_1(S,
\Sigma; {\mathbb{R}})$, and if we assign to $\delta_i$ the number which the
transverse cocycle assigns to $c_i$, one can check that we obtain a
well-defined element of $H^1(S, \Sigma; {\mathbb{R}})$. We denote this element
by ${\mathbf{y}} = {\mathbf{y}}(\hat{{\mathcal{F}}}, \gamma)$. Note that the $y_i$ will in general
have different signs.
\combarak{Is this OK? Why is this well-defined?}
\subsection{Decomposition associated with a transverse system}
\name{subsection: decompositions}
Suppose $S$ is a surface with an oriented singular foliation ${\mathcal{F}}$,
with singularities in $\Sigma$, and $\gamma$ is a transverse system to
${\mathcal{F}}$. There is an associated
cellular decomposition $\mathcal{B}(\gamma)$ of $S$ defined as follows. Write $\gamma_1,
\ldots, \gamma_k$ for the intervals of continuity of $\mathcal{T}({\mathcal{F}}, \gamma)$; these
are by definition connected subarcs which are relatively open in
$\gamma$. Let $S_i$ be the union of pieces of leaves starting in the
interior of $\gamma_i$ and going along ${\mathcal{F}}$ in the positive direction until the next
meeting with $\gamma$. The interior of each $S_i$ does not contain any
point of $\Sigma$ since $\mathcal{T}({\mathcal{F}}, \gamma_i)$ is continuous; hence each $S_i$ is
homeomorphic to a disk.
The 2-cells of $\mathcal{B}(\gamma)$ are the closures of the $S_i$'s. The
1-cells are either pieces of leaves of ${\mathcal{F}}$, or segments in $\gamma$
which are on the boundary of the $S_i$'s. We call the former {\em leaf
edges} and the latter {\em transverse edges}. We orient the leaf edges
according to the orientation of ${\mathcal{F}}$, and the transverse edges so
they always cross the foliation from left to right. We include 0-cells so that
a 1-cell cannot
contain a nontrivial segment in both ${\mathcal{F}}$ and $\gamma$.
We will need to modify $\mathcal{B}(\gamma)$ when ${\mathcal{F}}$ has periodic
leaves. In this case, the periodic leaves form finitely many annuli,
each of which is a union of 2-cells of $\mathcal{B}(\gamma)$. In each such
annulus we modify $\mathcal{B}(\gamma)$ so that the annulus contains exactly
one cell which goes fully around the annulus, that is,
is bounded by two periodic leaf edges and one transverse edge bounding
the cell from both sides \combarak{here we need a figure}. We
call such a cell a {\em cylinder cell}
and the transverse edge running inside it, a {\em
cylinder transverse edge}. Note that by construction,
a periodic leaf is contained in the interior of a cylinder cell if and
only if it intersects $\gamma$ exactly once.
The 0-cells include all points of $\Sigma$, hence any cohomology class
$\beta \in
H^1(S, \Sigma; {\mathbb{R}})$ can be represented by a closed 1-cochain on
the 1-skeleton $\mathcal{B}_1 \subset \mathcal{B}(\gamma)$; i.e. a function
$\hat{\beta}$ defined on $\mathcal{B}_1$ which
vanishes on boundaries of 2-cells. Such a representative is
not canonical, since any two may differ by a 1-coboundary.
\begin{prop}
\name{prop: cell decomposition}
Let $S$ be a surface and let ${\mathcal{F}}$ be a measured foliation on $S$ with
singularities in $\Sigma$. Let $\gamma_0$ be a transverse
system and let $\hat{\beta}$ be a 1-cocycle on $\mathcal{B}(\gamma_0)$
which is positive on leaf edges and vanishes on transverse
edges, except possibly on cylinder transverse edges.
Then there is a flat surface structure $q_0$ on $S$ for which ${\mathcal{F}}$ is
the vertical measured foliation, and whose horizontal measured
foliation represents the same
element in $H^1(S, \Sigma; {\mathbb{R}})$ as
$\hat{\beta}$.
\end{prop}
\begin{proof}
Let $\hat{\alpha}$ be the 1-cocycle represented by the measured
foliation ${\mathcal{F}}$, so that $\hat{\alpha}$ vanishes on leaf edges. To any edge
$e \in \mathcal{B}(\gamma)$ we assign coordinates $(\hat{\alpha}(e),
\hat{\beta}(e))$.
Let $R$ be a cell of $\mathcal{B}(\gamma_0)$. It has 2
pairs of opposite sides: a pair of {\em horizontal sides} made of
transverse edges going in opposite
directions, and a pair of {\em vertical sides} made of
leaf edges going in opposite directions. The assumption on
$\hat{\beta}$ ensures that vertical sides are
assigned vertical vectors, and the orientation of these vectors
respects the orientation on $\mathcal{B}(\gamma)$. If $R$ is not a cylinder
cell, the assumption on $\hat{\beta}$ also ensures that horizontal
sides are assigned horizontal vectors respecting the orientation.
Since $\hat{\alpha},
\hat{\beta}$ are cocycles, opposite sides have the same length. That is
$R$ has
the geometry of a Euclidean rectangle. Similarly, if $R$ is a cylinder
cell, it has the geometry of a Euclidean parallelogram with
vertical sides.
By linearity of $\hat{\alpha}, \hat{\beta}$, the rectangles and
parallelograms
can be glued to each other consistently. This produces an explicit
atlas of charts for $q_0$ with the advertised properties.
\end{proof}
}
\subsection{Intersection pairing} \name{subsection: intersection}
Via Poincar\'e duality, the canonical pairing on $H^1(S, \Sigma) \times H_1(S, \Sigma)$ becomes
the intersection pairing on $H_1(S \smallsetminus \Sigma) \times H_1(S,
\Sigma)$. In the former case we denote this pairing by $(d, c) \mapsto
d(c)$, and in the latter, by $(c,c') \mapsto c \cdot c'$. Suppose
${\mathcal{F}}$ and ${\mathcal{G}}$ are two mutually transverse oriented singular
foliations, with transverse measures $\mu$ and $\nu$ respectively. If
we allow $\mu$ but not $\nu$ to have atoms, then $[c_\mu] \in H_1(S,
\Sigma)$ and $[c_\nu] \in H_1(S \smallsetminus \Sigma)$ so we have the
intersection pairing
\eq{eq: foliation pairing}{
c_\nu \cdot c_\mu = d_{\nu}(c_\mu) = \int_S \nu \times \mu.
}
In other words we integrate the transverse measure of ${\mathcal{G}}$ along the
leaves of ${\mathcal{F}}$, and then integrate against the transverse measure of
${\mathcal{F}}$. (The sign of $\nu \times \mu$ should be chosen so that it is
positive when the orientation of ${\mathcal{G}}$ agrees with the transverse
orientation of ${\mathcal{F}}$). We can see this explicitly by choosing a
transversal $\gamma$ for ${\mathcal{F}}$ lying in the leaves of ${\mathcal{G}}$. Then the
cochain representing $d_\nu$ is 0 along $\gamma$, and using the form
\equ{eq: explicit form cycle} we have
\eq{eq: pair c d}{
d_\nu(c_\mu) = \sum_R \mu(R) \int_{\ell_R} \nu \, + \sum_f \mu(\{f\})
\int_f \nu.
}
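To illustrate \equ{eq: foliation pairing} in the simplest,
singularity-free case, let $S = {\mathbb{R}}^2/{\mathbb{Z}}^2$, let ${\mathcal{F}}, {\mathcal{G}}$ be the
vertical and horizontal foliations, and let $\mu = |dx|$, $\nu =
|dy|$. Then $c_\mu$ and $c_\nu$ are the standard generators of
$H_1(S; {\mathbb{Z}})$, and with orientations chosen compatibly with the sign
convention above,
$$
c_\nu \cdot c_\mu = \int_S dy \, dx = 1.
$$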
\subsubsection{Judicious case} Now suppose $\gamma$ is a judicious
transversal. In this case the pairing of $H^1(S, \Sigma)$ and $H_1(S,
\Sigma)$ has a concrete form which we will use in \S\ref{section:
pairs}.
There is a cell decomposition $\mathcal{D}$ of $S$ that is dual to
$\mathcal{B}$, defined as follows. Because $\gamma$ intersects all leaves, and
terminates at $\Sigma$ on both ends, each rectangle $R$ of $\mathcal{B}$ has
exactly one point of $\Sigma$ on each of its leaf edges. Connect these
two points by a transverse arc in $R$ and let $\mathcal{D}^1$ be the
union of these arcs. $\mathcal{D}^1$ cuts $S$ into a disk
$\mathcal{D}^2$, bisected by $\gamma$. Indeed, upward flow from
$\gamma$ encounters $\mathcal{D}^1$ in a sequence of edges which is
the {\em upper boundary} of the disk, and downward flow encounters the
{\em lower boundary} which goes through the edges of $\mathcal{D}^1$
in a permuted order, in fact exactly the permutation $\sigma$ of the
interval exchange $\mathcal{T}({\mathcal{F}}, \gamma)$.
A class in $H^1(S, \Sigma)$ is determined by its values on the
(oriented) edges of $\mathcal{D}^1$, and in fact this gives a basis,
which we can label by the intervals of continuity of $\mathcal{T}({\mathcal{F}},
\gamma)$ (the condition that the sum is 0 around the boundary of the
disk is satisfied automatically). The Poincar\'e dual basis for $H_1(S
\smallsetminus \Sigma)$ is given by the loops $\hat{\ell}_R$ obtained by joining
the endpoints of $\ell_R$ along $\gamma$.
The pairing restricted to non-negative homology is computed by the
form $Q$ of \equ{eq: defn Q1}. Ordering the rectangles $R_1, \ldots,
R_d$ according to their bottom arcs along $\gamma$ and writing
$\hat{\ell}_i = \hat{\ell}_{R_i}$, we note that $\hat{\ell}_i$ and
$\hat{\ell}_j$ have nonzero intersection number precisely when the
order of $i$ and $j$ is reversed by $\sigma$
(i.e. $(i-j)(\sigma(i)-\sigma(j)) < 0$), and in particular (accounting
for sign),
\eq{eq:Q intersection}{
\hat{\ell}_i \cdot \hat{\ell}_j = Q({\mathbf{e}}_i, {\mathbf{e}}_j).
}
In other words, $Q$ is the intersection pairing on $H_1(S \smallsetminus \Sigma)
\times H_1(S \smallsetminus \Sigma)$ with the given basis (note that this form is
degenerate, as the map $H_1(S \smallsetminus \Sigma) \to H_1(S, \Sigma)$ has a
kernel).
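To illustrate, take $d = 2$, so that $\sigma$ is the transposition
exchanging $1$ and $2$ and the surface is a torus. The loops
$\hat{\ell}_1$ and $\hat{\ell}_2$ cross exactly once, so in the basis
${\mathbf{e}}_1, {\mathbf{e}}_2$ the matrix of $Q$ is
$$
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},
$$
up to the overall sign fixed by \equ{eq: defn Q1}.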
\section{The lifting problem}
In this section we prove Theorem \ref{thm: sullivan1}.
\begin{proof}[Proof of Theorem \ref{thm: sullivan1}]
We first explain the easy direction (1) $\Longrightarrow$ (2). Since
${\mathcal{F}}$ is everywhere transverse to ${\mathcal{G}}$, and the surface is connected,
reversing the orientation of ${\mathcal{G}}$ if necessary we can assume
that positively oriented paths in leaves of ${\mathcal{F}}$ always cross ${\mathcal{G}}$
from left to right. Therefore, using \equ{eq: foliation pairing} and
\equ{eq: pair c d}, we find that
$\mathbf{b}(\delta) > 0$ for any $\delta \in H^{{\mathcal{F}}}_+$.
Before proving the converse we indicate the idea of proof. We will
consider a sequence of finer and finer cell decompositions associated
to a shrinking sequence of special transverse systems. As the
transversals shrink, the associated train tracks split, each one being
carried by the previous one. We examine the weight that a
representative of $\mathbf{b}$ places on the vertical leaves in the cells of
these decompositions (roughly speaking the branches of the associated
train tracks). If any of these remain non-positive for all time, then
a limiting argument produces an invariant measure on ${\mathcal{F}}$ which has
non-positive pairing with $\mathbf{b}$, a contradiction. Hence eventually all
cells have positive `heights' with respect to $\mathbf{b}$, and can be
geometrically realized as rectangles. The proof is made complicated by
the need to keep track of the singularities, and in particular by the
appearance of cylinder cells in the decomposition.
Fix a
special transverse system $\gamma$ for ${\mathcal{F}}$ (see \S\ref{subsection:
transversal}), and let $\mathcal{B} = \mathcal{B}(\gamma)$ be the corresponding cell
decomposition as in \S\ref{subsection: decompositions}. Given a path
$\alpha$ on $S$ which is contained in a
leaf of ${\mathcal{F}}$ and begins and ends in transverse edges of $\mathcal{B}$, we
will say that $\alpha$
is {\em parallel to saddle connections} if there is a continuous
family of arcs $\alpha_s$ contained in leaves, where $\alpha_0 =
\alpha$, $\alpha_1$ is a union of saddle connections in ${\mathcal{F}}$, and the
endpoints of each $\alpha_s$ are in $\gamma$.
We claim
that for any $N$ there is a special transverse system
$\gamma_N \subset \gamma$ such that the following holds: for any
leaf edge $e$ of $\mathcal{B}_N = \mathcal{B}(\gamma_N)$, either $e$ is parallel to
saddle connections or, when moving along $e$ from bottom to top, we
return to $\gamma \smallsetminus \gamma_N$ at least $N$ times before returning to
$\gamma_N$.
Indeed, if the claim is false then for some $N$ and any special
$\gamma' \subset \gamma$ there is a leaf
edge $e'$ in $\mathcal{B}_N$ which is not parallel to saddle connections,
and starts and ends at points $x$, $y$ in $\gamma'$, making at
most $N$ crossings with $\gamma \smallsetminus \gamma'$.
Now take $\gamma'_j$ to be shorter
and shorter special transverse subsystems of $\gamma$, denote
the corresponding edge by $e_j$ and the points by $x_j, \,
y_j$. Passing to
a subsequence, we find that $x_j,y_j$ converge to points in $\Sigma$
and $e_j$ converges to a concatentation of at most $N$ saddle
connections joining these points. In particular, for large enough $j$,
$e_j$ is parallel to saddle connections, a contradiction proving the claim.
Since $\gamma$ is special, each periodic component of ${\mathcal{F}}$ consists
of one 2-cell of $\mathcal{B}$, called a {\em cylinder cell}, with its top and
bottom boundaries identified along an edge traversing the component,
which we call a {\em cylinder transverse edge}. The same holds for
$\mathcal{B}_N$, and our construction ensures that $\mathcal{B}$ and $\mathcal{B}_N$ contain
the same cylinder cells and the same cylinder transverse edges.
Let $\beta$ be a singular (relative) 1-cocycle in $(S, \Sigma)$
representing $\mathbf{b}$. We claim that we may choose $\beta$ so that it
vanishes on non-cylinder transverse edges. Indeed, each such edge
meets $\Sigma$ only at one endpoint, so it can be
deformation-retracted to $\Sigma$, and pulling back a cocycle via this
retraction gives $\beta$. Note that $\beta$ assigns a well-defined
value to the periodic leaf edges, namely the value of $\mathbf{b}$ on the
corresponding loops.
In general $\beta$ may assign non-positive heights to rectangles, and
thus it does not assign any reasonable geometry to $\mathcal{B}$. We now claim
that there is a positive $N$ such that for any leaf edge $e \in
\mathcal{B}_N$,
$$\beta(e)>0.$$
Since a saddle connection in ${\mathcal{F}}$ represents an element of
$H^{{\mathcal{F}}}_+$, our assumption implies that
$\beta(e)>0$ for any leaf edge of $\mathcal{B}_N$ which is a saddle
connection. Moreover by construction, if $e$ is parallel to saddle
connections, then $\beta(e) = \sum \beta(e_i)$ for
saddle connections $e_i$, so again $\beta(e)>0$. Now suppose by contradiction
that for any $N$ we can find a leaf edge $e_N$ in $\mathcal{B}_N$ which is not
parallel to saddle connections and such that
$C_N = \left|e_N \cap \gamma \right| \geq N$
and
$$\limsup_{N \to \infty} \frac{\beta(e_N)}{C_N} \leq 0.$$
Passing to a subsequence we define
a measure $\mu$
on $\gamma$ as a weak-* limit of the measures
$$\nu_N(I) = \frac{|e_N \cap I|}{C_N}, \ \
\mathrm{where \ } I \subset \gamma \mathrm{ \ is \ an \ interval;}$$
it is invariant under the
return map to $\gamma$ and thus defines a transverse measure on ${\mathcal{F}}$
representing a class $[c_\mu] \in H_+^{\mathcal{F}}$. Moreover by construction it
has no atoms and gives measure zero to the cylinder cells. We will evaluate
$\beta(c_\mu)$.
For each rectangle $R$ in $\mathcal{B}$, let $\theta_R \subset
\gamma$ be the transverse arc on the bottom of $R$ and $\ell_R$ a leaf
segment going through $R$ from bottom to top, as in \S
\ref{subsection: decompositions}.
Since $e_N$ is not parallel to saddle connections, its intersection
with each $R$ is a union of arcs parallel to $\ell_R$. Since $\beta$
gives all such arcs the same values $\beta(\ell_R)$, we have
$$
\beta(e_N)=\beta\left(
\sum_R |e_N \cap \theta_R| \ell_R
\right) = \sum_R |e_N \cap \theta_R| \beta (\ell_R).
$$
By \equ{eq: explicit form cycle}, since $\mu$ has no atoms we can
write
$$
c_\mu = \sum_R \mu(\theta_R) \ell_R +z
$$
where $z \subset \gamma$. Since $\beta$ vanishes along $\gamma$ we
have
$$
\mathbf{b}(c_\mu) = \sum_R \mu(\theta_R) \beta(\ell_R) = \sum_R \lim_N
\frac{|e_N \cap \theta_R|}{C_N} \beta(\ell_R) = \lim_N
\frac{\beta(e_N)}{C_N} \leq 0.
$$
This contradicts the hypothesis, proving the claim.
The claim implies that the
topological rectangles in $\mathcal{B}_N$ can be given a compatible Euclidean
structure, using the transverse measure of ${\mathcal{F}}$ and $\beta$ to measure respectively
the horizontal and vertical components of all relevant edges. Note
that all non-cylinder cells become metric rectangles, and the cylinder
cells become metric parallelograms. Thus we have constructed a
translation surface structure on $(S, \Sigma)$ whose horizontal foliation ${\mathcal{G}}$
represents $\beta$, as required.
\end{proof}
\ignore{
Now suppose there are $s$ saddle connections in ${\mathcal{F}}$, and let
$\lambda$ be one of them. By assumption $\beta(\lambda)>0$. Moreover
$\lambda$ crosses $\gamma$ a finite number of times, so there is $R$
(independent of $N$) so that for
each edge $e$ in $\mathcal{B}_N$ which is
part of a saddle connection, $|\hat{\beta}_N(e)| \leq R$. Take $N$
large enough so that for every edge $e$ in $\mathcal{B}_N$
which is not part of a saddle connection,
$\hat{\beta}_N(e) > 2sR.$
For each $\lambda$ we will adjust $\hat{\beta}_N$ by adding a
coboundary, as follows. By Lemma \ref{lem: many crossings} there are
edges $\lambda_1, \lambda_2$ in $\mathcal{B}_N$ such that $\lambda =
\lambda_1+\lambda_2$, and the $\lambda_j$ have one vertex in $\Sigma$ and one
in some interval $\gamma' \subset \gamma_N$. Moreover $\gamma'$ does
not intersect any other saddle connection. If both
$a = \hat{\beta}_N(\lambda_1), b=\hat{\beta}_N(\lambda_2)$ are positive we do not change
$\hat{\beta}_N$, and if say $a<0$ we define a function $h$
on the vertices of $\mathcal{B}_N$ by $h(v)= (b-a)/2$ for all vertices
$v \in \gamma'$ and $h(v)=0$
for all other vertices. Adding the corresponding
1-coboundary $\delta h $ to $\hat{\beta}_N$ we obtain a cohomologous
cocycle whose value on each $\lambda_j$ is
$\hat{\beta}_N(\lambda)/2>0,$ which vanishes on transverse edges, and
whose value on all leaf edges has changed by at most $2R$.
Continuing in this fashion $s$ times,
we replace $\hat{\beta}_N$ with a cohomologous cocycle
$\hat{\beta}_0$ which satisfies the
conditions of Proposition
\ref{prop: cell decomposition}. We may take
${\mathcal{G}}$ to be the horizontal measured foliation of $q_0$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem: many crossings}]
First we remark that a connected component of $\gamma$ may be open, closed, or
neither. We only assume that it has non-empty interior. When we say a
subsystem is open we mean open as a subset of $\gamma$.
Separate $\gamma$ into minimal and periodic components;
that is $\gamma = \gamma_m \cup \gamma_p$ where the interior of
$\gamma_p$ consists of periodic leaves, the interior of
$\gamma_m$
consists of leaves whose closure is minimal, and the boundaries of
$\gamma_p$ and $\gamma_m$ intersect ${\mathcal{F}}$ in saddle connections.
We remove from $\gamma$ a finite set of
points so that any $x \in \gamma$ whose forward trajectory hits a
singularity and is not on a saddle connection visits $\gamma$ at least
$N$ times before reaching the
singularity, and so that each vertical saddle connection intersects $\gamma$
at most once. This ensures that (i) holds and that (iii) holds for all
points $x$ on critical leaves in $\gamma$, such that the ray from $x$
does not return to $\gamma_N$ before reaching $\Sigma$. To guarantee (ii), consider each
connected component $\bar{\gamma}$ of $\gamma$ which intersects two or
more vertical saddle
connections: if an interval between saddle connections is minimal we
remove a subinterval from it to disconnect $\bar{\gamma}$.
Since the
interval is minimal, the remaining part of $\bar{\gamma}$ still
intersects every leaf. If the
interval is periodic we disconnect $\bar{\gamma}$ at some interior
point $x\in \gamma_p$ and move one of the connected components up or
down at $x$.
Now note that the maximal length of a piece
of leaf which starts and ends
at $\gamma_m$ and has no points of $\gamma_m$ in its interior is
bounded. Thus there is $T$ such that for any $x \in \gamma_m$, the piece
of leaf $\ell_T(x)$ of length
$T$ issuing from $x$ intersects $\gamma_m$ at least $N$ times. Since there are no
periodic leaves in $\gamma_m$, there is
$\varepsilon>0$ such that for any $x \in \gamma_m$,
$${\mathrm{dist}} \left (x, \gamma_m \cap \ell_T(x) \smallsetminus \{x\} \right) > \varepsilon.
$$
To satisfy (iii) take subintervals in $\gamma_m$ of length
$\varepsilon$.
\end{proof}
We say that $\gamma$ is a {\em finite system} on $S, \Sigma$ if it is a
finite union of simple arcs with endpoints in $\Sigma$ and
non-intersecting interiors.
Modifying the above we obtain a stronger conclusion:
\begin{thm}
\name{thm: atlas}
Let $S, \Sigma, {\mathcal{F}}, \beta$ be as in Theorem \ref{thm:
sullivan1}, let $\alpha \in H^{\mathcal{F}}_+$, and let $\til {\mathcal{H}}$ be the
corresponding stratum of marked flat surfaces. Then there is an open set
$\mathcal{U}$ in $H^1(S, \Sigma;
{\mathbb{R}}^2) = \left(H^1(S, \Sigma; {\mathbb{R}}) \right) ^2$ containing $(\alpha,
\beta)$, a finite system $\gamma$ on $S,\Sigma$, and a map ${\mathbf q}:
\mathcal{U} \to \til {\mathcal{H}}$ such that the
following hold:
\begin{itemize}
\item
$\alpha$ and $\beta$ represent the vertical and horizontal foliations
of ${\mathbf q} (\alpha, \beta)$ respectively.
\item
For any $(\alpha, \beta) \in \mathcal{U}$, $\gamma$ is a transverse
system for the vertical foliation of ${\mathbf q}(\alpha, \beta)$.
\item
There is a 1-cocycle $\hat{\beta}$ on the 1-skeleton of $\mathcal{B}(\gamma)$
which represents $\beta$ and satisfies (i) and (ii) of ...
\end{itemize}
\end{thm}
\begin{lem}\name{lem: new construction}
A `special' transverse system is one that has one s.c. across each
cylinder, and one short segment from each singularity along each
(transversal) prong which goes into a minimal component. The segment
is short enough so that it does not cross any saddle connection in
${\mathcal{F}}$. Given a special transverse system $\gamma$,
take the decomposition $\mathcal{B}(\gamma)$ obtained by moving along ${\mathcal{F}}$ from each
component of the transverse system back to itself. This has a
structure of a decomposition into topological rectangles, with leaf
edges containing all the saddle connections, and with each cylinder
filling up one rectangle.
\end{lem}
\begin{lem}\name{lem: special}
Given a special transverse system $\gamma$, and $N$, there is a
special transverse system $\gamma_N \subset \gamma$ such that each
rectangle of $\mathcal{B}(\gamma_N)$ which does not
have a saddle connection on its boundary, goes through $\gamma$ at
least $N$ times before returning to $\gamma_N$.
\end{lem}
\begin{proof}
By contradiction: we get shorter and shorter segments for $\gamma_N$ at
each prong, and for each $N$ a segment along the foliation starting at
some component $I_j$ and returning to another component $I_k$ before
crossing $\gamma$ $N$ times. Passing to a subsequence we may assume
$I_j, I_k$ are constant. Taking a limit we obtain a saddle connection
joining the singularities at the endpoints of $I_j, I_k$, a contradiction.
\end{proof}
}
\section{The homeomorphism theorem}\name{section: dev homeo}
We now prove Theorem \ref{thm: hol homeo}, which states that
$$
{\mathrm{hol}}: \til {\mathcal{H}}({\mathcal{F}}) \to \mathbb{A}({\mathcal{F}}) \times \mathbb{B}({\mathcal{F}})
$$
is a homeomorphism, where $\til {\mathcal{H}}({\mathcal{F}})$ is the set of marked
translation surface structures with vertical foliation topologically
equivalent to ${\mathcal{F}}$, $\mathbb{A}({\mathcal{F}}) \subset H^1(S, \Sigma)$ is the
set of Poincar\'e duals of asymptotic cycles of ${\mathcal{F}}$, and $\mathbb{B}({\mathcal{F}})
\subset H^1(S, \Sigma)$ is the set of $\mathbf{b}$ such that $\mathbf{b}(\alpha)>0$
for all $\alpha \in H^{{\mathcal{F}}}_+$.
\begin{proof}
The fact that ${\mathrm{hol}}$ maps $\til {\mathcal{H}}({\mathcal{F}})$ to $\mathbb{A}({\mathcal{F}}) \times
\mathbb{B}({\mathcal{F}})$ is an immediate consequence of the definitions, and of the
easy direction of Theorem \ref{thm: sullivan1}. That it is continuous
is also clear from definitions. That ${\mathrm{hol}}|_{\til {\mathcal{H}}({\mathcal{F}})}$ maps onto
$\mathbb{A}({\mathcal{F}}) \times \mathbb{B}({\mathcal{F}})$ is the hard direction of Theorem
\ref{thm: sullivan1}. Injectivity is a consequence of the following:
\begin{lem}\name{lem: fiber singleton}
Let $\til {\mathcal{H}}$ be a stratum of marked translation surfaces of type
$(S, \Sigma)$. Fix a singular measured foliation on $(S, \Sigma)$, and
let $\mathbf{b} \in H^1(S, \Sigma; {\mathbb{R}})$. Then there is at most one ${\mathbf q} \in
\til {\mathcal{H}}$ with vertical foliation ${\mathcal{F}}$ and horizontal foliation
representing $\mathbf{b}$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem: fiber singleton}]
Suppose that ${\mathbf q}_1$ and ${\mathbf q}_2$ are two marked translation surfaces,
such that the vertical measured foliation of both is ${\mathcal{F}}$, and the
horizontal measured foliations ${\mathcal{G}}_1, {\mathcal{G}}_2$ both represent $\mathbf{b}$. We
need to show that ${\mathbf q}_1={\mathbf q}_2$. Let $\gamma$ be a special transverse
system to ${\mathcal{F}}$ (as in \S\ref{subsection: transversal} and the proof of Theorem \ref{thm:
sullivan1}). Recall that the non-cylinder edges of $\gamma$ can be
made as small as we like. Since ${\mathcal{F}}$ and ${\mathcal{G}}_1$ are transverse, we
may take each non-cylinder segment of $\gamma$ to be contained in
leaves of ${\mathcal{G}}_1$.
In a sufficiently small neighborhood $U$ of any $p \in \Sigma$, we may
perform an isotopy of ${\mathbf q}_2$, preserving the leaves of ${\mathcal{F}}$, so as
to make ${\mathcal{G}}_2$ coincide with ${\mathcal{G}}_1$. This follows from the fact that
in ${\mathbb{R}}^2$, the leaves of any foliation transverse to the vertical
foliation can be expressed as graphs over the horizontal
direction. Having done this, we may choose the non-cylinder segments
of $\gamma$ to be contained in such neighborhoods, and hence
simultaneously in leaves of ${\mathcal{G}}_1$ and ${\mathcal{G}}_2$.
\ignore{
it has one cylinder edge going across
each cylinder of periodic leaves for ${\mathcal{F}}$, and its other segments
start at singularities and go into minimal components of ${\mathcal{F}}$ without
crossing saddle connections in ${\mathcal{F}}$. Make $\gamma$ smaller by
deleting segments, so it has only one segment going into each minimal
component. Since ${\mathcal{F}}$ and ${\mathcal{G}}_1$ are
transverse, we may take $\gamma$ to be contained in leaves of
${\mathcal{G}}_1$.
We claim that, after making $\gamma$ smaller if necessary, we may
precompose ${\mathbf q}_2$ with an isotopy rel
$\Sigma$, in order to assume that $\gamma$ is also contained in leaves
of ${\mathcal{G}}_2$, i.e. that ${\mathcal{G}}_1$ and ${\mathcal{G}}_2$ coincide along
$\gamma$. Indeed, if $\alpha$ is a sufficiently short arc in $\gamma$
going from a singularity $x$ into a minimal set $M$, we may isotope
along ${\mathcal{F}}$ in $M$
until the leaf of ${\mathcal{G}}_2$ starting at $x$ reaches $\alpha$. Since $M$
contains no other segments of $\gamma$ we may perform this operation
independently on each minimal component. Now suppose $\alpha$ is a
cylinder edge in $\gamma$, so that $\alpha$ connects two
singularities on opposite sides of a cylinder $P$. In particular
$\int_\alpha {\mathcal{G}}_1 = \mathbf{b}(\alpha) = \int_\alpha {\mathcal{G}}_2$, so we may
perform an isotopy inside $P$ until $\alpha$ is also contained in a
leaf of ${\mathcal{G}}_2$, as claimed.
}
Now consider the cell decomposition $\mathcal{B}=\mathcal{B}(\gamma)$, and let $\beta_i =
[{\mathcal{G}}_i]$ be the 1-cocycle on $\mathcal{B}$ obtained by integrating
${\mathcal{G}}_i$. For a transverse non-cylinder edge $e$ we have $\beta_1(e)=\beta_2(e)=0$
since $e$ is a leaf for both foliations. If $e$ is contained in a leaf
of ${\mathcal{F}}$, we may join its endpoints to $\Sigma$ by paths $d$ and $f$
along $\gamma$. Then $\delta = d+ e+ f$ represents an element of
$H_1(S, \Sigma)$ so that $\beta_1(\delta) =
\beta_2(\delta)=\mathbf{b}(\delta)$. Since $\beta_i(d)= \beta_i(f)=0$ we have
$\beta_i(e) = \mathbf{b}(\delta)$. For a cylinder edge $e$, its endpoints are
already on $\Sigma$ so $\beta_1(e)=\beta_2(e)$. We have shown that
$\beta_1 = \beta_2$ on all edges of $\mathcal{B}$.
Recall from the proof of Theorem \ref{thm: sullivan1} that ${\mathbf q}_i$ may
be obtained explicitly by giving each cell the structure of a
Euclidean rectangle or parallelogram (the latter for cylinder cells)
as determined by ${\mathcal{F}}$ and $\beta_i$ on the edges. Therefore ${\mathbf q}_1 =
{\mathbf q}_2$.
\end{proof}
Finally we need to show that the inverse of ${\mathrm{hol}}$ is continuous. This is
an elaboration of the well-known fact that ${\mathrm{hol}}$ is a local
homeomorphism, which we can see as follows. Let ${\mathbf q} \in \til {\mathcal{H}}$,
and consider a geometric triangulation $\tau$ of ${\mathbf q}$ with vertices
in $\Sigma$ (e.g. a Delaunay triangulation \cite{MS}). The shape of
each triangle is uniquely and continuously determined by the hol image
of each of its edges. Hence if we choose a neighborhood $\mathcal{U}$ of ${\mathbf q}$
small enough so that none of the triangles becomes degenerate, we have
a homeomorphism ${\mathrm{hol}}: \mathcal{U} \to \mathcal{V}$ where $\mathcal{V}
= {\mathrm{hol}}(\mathcal{U}) \subset H^1(S,
\Sigma; {\mathbb{R}}^2)$.
If for ${\mathbf q}' \in \mathcal{U}$, the first coordinate $x ({\mathbf q}')$ of ${\mathrm{hol}}({\mathbf q}')$
lies in $\mathbb{A}({\mathcal{F}})$, then we claim that ${\mathbf q}' \in \til {\mathcal{H}}({\mathcal{F}})$. This
is because the vertical foliation is determined by the weights that
$x({\mathbf q}')$ assigns to edges of the triangulation. By Lemma \ref{lem:
fiber singleton},
${\mathbf q}'$ is the unique preimage of ${\mathrm{hol}}({\mathbf q}')$ in $\til
{\mathcal{H}}({\mathcal{F}})$. Hence $\left({\mathrm{hol}}|_{\mathcal{U}}\right)^{-1}$ and $\left({\mathrm{hol}}|_{\til {\mathcal{H}}({\mathcal{F}})}\right)^{-1}$
coincide on their overlap, so continuity of one implies continuity of
the other.
\end{proof}
\section{Positive pairs}\name{section: pairs}
We now reformulate Theorem \ref{thm: sullivan1} in the language of
interval exchanges, and derive several useful consequences. For this
we need some more definitions.
Let the notation be as in \S\ref{subsection: iets}, so that $\sigma$
is an irreducible and
admissible permutation on $d$ elements.
The tangent space $T{\mathbb{R}}^d_+$ has a natural product structure $T{\mathbb{R}}^d_+={\mathbb{R}}^d_+
\times {\mathbb{R}}^d$ and a corresponding affine structure.
Given ${\bf{a}} \in {\mathbb{R}}^d_+, \, \mathbf{b} \in {\mathbb{R}}^d$, we can think of $({\bf{a}}, \mathbf{b})$ as
an element of $T{\mathbb{R}}^d_+$. We will be using the same symbols ${\bf{a}},
\mathbf{b}$ which were previously used to denote cohomology classes; the
reason for this will become clear momentarily. Let $\mathcal{T} =
\mathcal{T}_{\sigma}({\bf{a}})$ be the interval exchange associated with $\sigma$
and ${\bf{a}}$.
For $\mathbf{b} \in
{\mathbb{R}}^d$, in analogy with \equ{eq: disc points2},
define
$y_i(\mathbf{b}), \, y'_i(\mathbf{b})$ via \eq{eq: disc points1}
{
y_i=y_i(\mathbf{b})= \sum_{j=1}^i b_j, \ \ \
y'_i =y'_i(\mathbf{b}) = \sum_{j=1}^i b_{\sigma^{-1}(j)} =
\sum_{\sigma(k)\leq i} b_k.
}
In the case of Masur's construction (Figure \ref{figure: Masur}), the $y_i$ are the
heights of the points in the upper boundary of the polygon, and the
$y'_j$ those in the lower.
Consider the following step functions $f,g, L: I \to {\mathbb{R}}$, depending on
${\bf{a}}$ and $\mathbf{b}$:
\eq{eq: defn L}{
\begin{split}
f(x) & = y_i \ \ \ \ \ \ \ \ \mathrm{for}\ x \in I_i = [x_{i-1}, x_i) \\
g(x) & = y'_i \ \ \ \ \ \ \ \ \mathrm{for} \ x \in I'_i = [x'_{i-1}, x'_i) \\
L(x) & = f(x) - g(\mathcal{T}(x)).
\end{split}
}
Note that for $Q$ as in \equ{eq: defn Q1} and $x \in I_i$ we have
\eq{eq: relation L Q}{
L(x) = Q({\mathbf{e}}_i, \mathbf{b}) .
}
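These definitions are directly computable. The following sketch (our own $0$-indexed helper names, not part of the paper's formalism) implements $\mathcal{T}_\sigma({\bf{a}})$ and the step functions of \equ{eq: defn L}; for $d=2$ and $\sigma = (2\,1)$ the exchange is the rotation $t \mapsto t + a_2$ modulo $a_1+a_2$, and $L$ takes the two values $-b_2$ on $I_1$ and $b_1$ on $I_2$.

```python
def iet(sigma, a):
    """The interval exchange T_sigma(a) on [0, sum(a)).

    sigma is the permutation as a 1-indexed list (sigma[i-1] = sigma(i));
    a lists the subinterval lengths.  T translates I_i = [x_{i-1}, x_i)
    onto the sigma(i)-th interval of the permuted partition.
    """
    d = len(a)
    x = [sum(a[:i]) for i in range(d + 1)]              # x_0, ..., x_d
    a_perm = [a[sigma.index(j + 1)] for j in range(d)]  # a'_j = a_{sigma^{-1}(j)}
    xp = [sum(a_perm[:i]) for i in range(d + 1)]        # x'_0, ..., x'_d

    def T(t):
        i = next(k for k in range(d) if x[k] <= t < x[k + 1])
        return t - x[i] + xp[sigma[i] - 1]

    return T, x, xp

def L_function(sigma, a, b):
    """The step function L(x) = f(x) - g(T(x)) of (eq: defn L)."""
    T, x, xp = iet(sigma, a)
    d = len(a)
    y = [sum(b[:i + 1]) for i in range(d)]                          # y_i(b)
    yp = [sum(b[sigma.index(j + 1)] for j in range(i + 1))
          for i in range(d)]                                        # y'_i(b)
    f = lambda t: y[next(k for k in range(d) if x[k] <= t < x[k + 1])]
    g = lambda t: yp[next(k for k in range(d) if xp[k] <= t < xp[k + 1])]
    return lambda t: f(t) - g(T(t))

# d = 2, sigma = (2 1): the exchange is rotation by a_2 on [0, 1).
T, x, xp = iet([2, 1], [0.7, 0.3])
L = L_function([2, 1], [0.7, 0.3], [0.2, 0.5])  # L = -b_2 on I_1, b_1 on I_2
```

In particular $L$ is constant on each $I_i$, which is the content of \equ{eq: relation L Q}.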
If there are $i,j \in \{0, \ldots , d-1\}$
(not necessarily distinct)
and $m>0$ such that $\mathcal{T}^m(x_i)=x_j$ we will say that $(i,j,m)$ is a {\em
connection} for $\mathcal{T}$.
We denote the set of invariant non-atomic
probability measures for $\mathcal{T}$ by $\MM_{\bf{a}}$, and
the set of connections by ${\mathcal L}_{\bf{a}}$.
\begin{Def}
We say that $({\bf{a}}, \mathbf{b}) \in {\mathbb{R}}^d_+ \times {\mathbb{R}}^d$ is {\em a positive pair} if
\eq{eq: positivity}{\int L \, d\mu >0
\
\ \mathrm{for \ any \ } \mu \in \MM_{{\bf{a}}}
}
and
\eq{eq: connections positive}{\sum_{n=0}^{m-1}L(\mathcal{T}^nx_i) > y_i-y_j \ \
\ \mathrm{for \ any \ } (i,j,m) \in {\mathcal L}_{{\bf{a}}}.
}
\end{Def}
As explained in \S\ref{subsection: transversal}, following \cite{ZK}
one can construct a surface $S$ with a finite subset $\Sigma$,
a foliation ${\mathcal{F}}$ on $(S, \Sigma)$, and a judicious curve $\gamma$ on
$S$ such that $\mathcal{T}_\sigma({\bf{a}}) = \mathcal{T}({\mathcal{F}}, \gamma)$ (we identify
$\gamma$ with $I$ via the transverse measure).
Moreover, as in \S\ref{subsection: intersection} there is a complex
$\mathcal{D}$ with a single 2-cell $\mathcal{D}^2$ containing $\gamma$
as a properly embedded arc, and whose boundary is divided by $\partial
\, \gamma$ into two arcs, each of which surjects onto the 1-skeleton of
$\mathcal{D}$. The upper arc is divided by $\Sigma$ into $d$ oriented
segments $K_1, \ldots, K_d$, the images of the segments $I_i$ under the flow
along ${\mathcal{F}}$. The vector $\mathbf{b}$ can be interpreted as a class in $H^1(S,
\Sigma)$ by assigning $b_i$ to the segment $K_i$. The Poincar\'e dual
of $\mathbf{b}$ in $H_1(S \smallsetminus \Sigma)$ is written $\beta = \sum b_i \hat{\ell}_i$,
where $\hat{\ell}_i$ are as in \S\ref{subsection: intersection}.
Using these we show:
\begin{prop}
\name{prop: positive interpretation}
$({\bf{a}}, \mathbf{b})$ is a positive pair if and only if $\mathbf{b} (\alpha) >0 $
for any $\alpha \in H^{{\mathcal{F}}}_+$.
\end{prop}
\begin{proof}
We prove the implication $\implies$, the converse being similar.
It suffices to consider the cases where
$\alpha \in H^{\mathcal{F}}_+$ corresponds to $\mu \in \MM_{\bf{a}}$ or to a
positively oriented saddle connection in ${\mathcal{F}}$, because a general
element in $H^{\mathcal{F}}_+$ is a convex combination of these.
In the first case, define $a'_k = \mu(I_k)$ and ${\bf{a}}' = \sum a'_k
{\mathbf{e}}_k$. The corresponding homology class in $H_1(S \smallsetminus \Sigma)$ is
$\alpha = \sum a'_k \hat{\ell}_k$. Hence, as we saw in \equ{eq:Q
intersection},
$$
\mathbf{b}(\alpha) = \beta \cdot \alpha = Q({\bf{a}}', \mathbf{b}) = \sum a'_k Q({\mathbf{e}}_k, \mathbf{b})
\stackrel{\equ{eq: relation L Q}}{=} \sum \mu(I_k) L|_{I_k} = \int L
\, d\mu >0.
$$
\ignore{
In the first case, since Poincar\'e
duality is given by the intersection pairing, which is recorded by
$Q$, we have
\[
\beta \cdot \alpha_\mu
= Q\left(\alpha_\mu, \mathbf{b}\right) \stackrel{\equ{eq: explicit form
cycle}}{=} Q\left(\sum_k \mu(I_k) \ell_k\right)
= \sum \mu(I_k) Q({\mathbf{e}}_k, \mathbf{b}) \stackrel{\equ{eq: relation L Q}}{=}
\int L \, d\mu >0.
\]
In the second case, given a connection $(i,j,m)$ for $\mathcal{T}$ we form an
explicit path in $\mathcal{B}(\gamma)$ representing it. The path has the form
$e + \sum_{n=1}^{m-1} \ell_{\mathcal{T}^n(x_{i})} + f$, where $\ell_z = \ell_k$
if $z \in I_k$, $e$ goes from the singularity $\xi_{i}$ `above' $x_{i}$ to
$\gamma$, and $f$ goes from $\gamma$ to $\xi_{j}$. Also let $e_1$ be the
path from $\xi_{i}$ to $\gamma$ on the side of the rectangle above
$I_{i}$; then $e+e_1 = \ell_{i}$.
We denote by $\iota: H_1(S, \Sigma; {\mathbb{R}}) \times H_1(S \smallsetminus \Sigma; {\mathbb{R}})
\to {\mathbb{R}} $ the intersection pairing.
By \equ{eq: relation L Q} we have $\iota(\ell_k, \mathbf{b})
= L|_{I_k}$. Perturbing the paths
representing $\ell_k, f$ and $e_1$, we see that $e_1$ only crosses the
paths $\ell_k$ for $k>i$, in the negative direction. Therefore
$$\iota(e_1, \mathbf{b}) = -\sum_{k> i} b_k =
B+ y_{i}, \ \ \mathrm{where} \ B = -\sum b_k.$$
Similarly $\iota(f, \mathbf{b}) = B+
y_j$. Therefore
\[
\begin{split}
\beta \cdot \alpha
& =
\iota \left(e + \sum_{n=1}^{m-1}
\ell_{\mathcal{T}^n(x_{i})} + f, \mathbf{b}\right) = \iota \left( -e_1 + \sum_{n=0}^{m-1}
\ell_{\mathcal{T}^n(x_{i})} + f, \mathbf{b}\right) \\
\\ & = -y_{i}+ \sum_{n=0}^{m-1} L(\mathcal{T}^nx_i)+y_{j} \stackrel{\equ{eq: connections positive}}{>}0.
\end{split}
\]
}
Now consider the second case. Given a connection $(i,j,m)$ for $\mathcal{T}$,
the corresponding saddle connection $\alpha$ meets the disk
$\mathcal{D}^2$ in a union of leaf segments $\eta_1, \ldots, \eta_m$
where each $\eta_n$ is the leaf segment in $\mathcal{D}^2$
intersecting the interval $\gamma = I$ in the point $\mathcal{T}^n(x_i)$. This
point lies in some interval $I_r$ and some interval $I'_s$. If we let
$\hat{\eta}_n$ be a line segment connecting the left endpoint of
$I'_s$ to the left endpoint of $I_r$ then the chain $\sum
\hat{\eta}_n$ is homologous to $\sum \eta_n$ (note that the first
endpoint of $\eta_1$ and the last endpoint of $\eta_m$ do not
change). Now we apply our cocycle $\mathbf{b}$ to each $\hat{\eta}_n$ to
obtain
$$
\mathbf{b}(\hat{\eta}_n) = y_r - y'_s = f(\mathcal{T}^n(x_i)) - g(\mathcal{T}^n(x_i)).
$$
Summing, we get
\[
\begin{split}
\mathbf{b}(\alpha) & = \mathbf{b} \left (\sum \eta_n \right) = \mathbf{b} \left (\sum
\hat{\eta}_n \right)\\ & = \sum_{n=1}^m \left( f(\mathcal{T}^n(x_i)) -
g(\mathcal{T}^n(x_i)) \right) \\
& = -f(x_i) + f(\mathcal{T}^m(x_i)) + \sum_{n=0}^{m-1} \left( f(\mathcal{T}^n(x_i)) -
g(\mathcal{T}^{n+1}(x_i)) \right) \\
& = -y_i+y_j + \sum_{n=0}^{m-1} L(\mathcal{T}^n(x_i)) >0.
\end{split}
\]
\end{proof}
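The final chain of equalities above is a pure telescoping identity: it uses only $L(x) = f(x) - g(\mathcal{T}(x))$, and so holds for every point and every $m$, not only along a connection. A quick numerical sanity check for the two-interval (rotation) case, with helper names and a test vector $\mathbf{b}$ of our own choosing:

```python
def iterate(T, t, n):
    """The n-fold composition T^n applied to t."""
    for _ in range(n):
        t = T(t)
    return t

def telescoping_gap(alpha, t0, m, b=(0.3, 0.8)):
    """|LHS - RHS| of the identity
       sum_{n=1}^m [f(T^n t) - g(T^n t)]
         = -f(t) + f(T^m t) + sum_{n=0}^{m-1} L(T^n t)
    for the 2-interval exchange (rotation by alpha), where L = f - g o T."""
    T = lambda t: (t + alpha) % 1.0
    x1 = 1.0 - alpha                                  # discontinuity of T
    f = lambda t: b[0] if t < x1 else b[0] + b[1]     # f = y_i on I_i
    g = lambda t: b[1] if t < alpha else b[1] + b[0]  # g = y'_i on I'_i
    L = lambda t: f(t) - g(T(t))
    lhs = sum(f(iterate(T, t0, n)) - g(iterate(T, t0, n))
              for n in range(1, m + 1))
    rhs = (-f(t0) + f(iterate(T, t0, m))
           + sum(L(iterate(T, t0, n)) for n in range(m)))
    return abs(lhs - rhs)
```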
Now we can state the interval exchange version of Theorem \ref{thm: sullivan1}:
\begin{thm}
\name{thm: sullivan2}
Let $\til {\mathcal{H}}$ be the stratum of marked translation surfaces corresponding to
$\sigma$. Then for any positive pair $({\bf{a}}_0, \mathbf{b}_0)$ there is a
neighborhood $\mathcal{U}$ of $({\bf{a}}_0, \mathbf{b}_0)$ in $T{\mathbb{R}}^d_+$, and a map
${\mathbf q}: \mathcal{U} \to \til {\mathcal{H}}$ such that the following hold:
\begin{itemize}
\item[(i)]
${\mathbf q}$ is an affine map and a local homeomorphism.
\item[(ii)]
For any $({\bf{a}}, \mathbf{b}) \in \mathcal{U}$, ${\mathbf q}({\bf{a}},\mathbf{b})$ is a lift of ${\bf{a}}$.
\item[(iii)]
Any $({\bf{a}}, \mathbf{b})$ in $\mathcal{U}$ is positive.
\item[(iv)]
Suppose $({\bf{a}}, \mathbf{b}) \in \mathcal{U}$ and $\varepsilon_0>0$ is small enough so
that $({\bf{a}}+s\mathbf{b}, \mathbf{b}) \in \mathcal{U}$ for $|s| \leq \varepsilon_0$. Then $h_s {\mathbf q}({\bf{a}}, \mathbf{b})
= {\mathbf q}({\bf{a}}+s\mathbf{b}, \mathbf{b})$ for $|s | \leq \varepsilon_0$.
\end{itemize}
\end{thm}
\begin{proof}
Above and in \S\ref{subsection: intersection} we identified ${\mathbb{R}}^d$
with $H^1(S, \Sigma; {\mathbb{R}})$, obtaining an injective affine map
$$ T{\mathbb{R}}^d_+ \cong {\mathbb{R}}^d_+ \times {\mathbb{R}}^d \to H^1(S, \Sigma;
{\mathbb{R}})^2 \cong H^1(S, \Sigma; {\mathbb{R}}^2)$$
and we henceforth identify $T{\mathbb{R}}^d_+$ with its image.
Given a positive pair $({\bf{a}}_0, \mathbf{b}_0)$, as discussed at the end of
\S\ref{subsection: transversal} we obtain a surface $(S, \Sigma)$
together with a measured foliation ${\mathcal{F}} = {\mathcal{F}}({\bf{a}}_0)$ and a judicious
transversal $\gamma_0$, so that the return map $\mathcal{T}({\mathcal{F}}, \gamma_0)$ is
equal to $\mathcal{T}_\sigma({\bf{a}}_0)$, where $I$ parametrizes $\gamma_0$ via the
transverse measure.
The positivity condition, together with Proposition \ref{prop:
positive interpretation} and Theorem
\ref{thm: sullivan1} give us a translation surface structure ${\mathbf q} =
{\mathbf q}({\bf{a}}_0, \mathbf{b}_0)$ whose vertical foliation is ${\mathcal{F}}$ and whose image
under ${\mathrm{hol}}$ is $({\bf{a}}_0, \mathbf{b}_0)$.
Let $\tau$ be a geometric triangulation on ${\mathbf q}$. Assume for the
moment that $\tau$ has no vertical edges, and in particular that each
edge for $\tau$ is transverse to ${\mathcal{F}}$. Now consider ${\bf{a}}$ very close
to ${\bf{a}}_0$. The map $\mathcal{T}_\sigma({\bf{a}})$ is close to $\mathcal{T}_\sigma({\bf{a}}_0)$ and
hence induces a foliation ${\mathcal{F}}({\bf{a}})$ whose leaves are nearly parallel
to those of ${\mathcal{F}}({\bf{a}}_0)$. More explicitly, ${\mathcal{F}}({\bf{a}})$ is obtained by
modifying ${\mathcal{F}}({\bf{a}}_0)$ slightly in a small neighborhood of $\gamma$ so
that it remains transverse to both $\gamma$ and $\tau$, and so that
the return map becomes $\mathcal{T}_\sigma({\bf{a}})$. This can be done if ${\bf{a}}$ is
in a sufficiently small neighborhood of ${\bf{a}}_0$.
Now if $({\bf{a}}, \mathbf{b})$ is sufficiently close to $({\bf{a}}_0, \mathbf{b}_0)$, the pair
$({\bf{a}}, \mathbf{b})$ assigns to edges of $\tau$ vectors which retain the
orientation induced by $({\bf{a}}_0, \mathbf{b}_0)$. Hence we obtain a new geometric
triangulation, for which ${\mathcal{F}}({\bf{a}})$ is transverse to the edges, has
transverse measure agreeing with ${\bf{a}}$ on the edges, and hence is still
realized by the vertical foliation. Moreover $\gamma_0$ is still
transverse to the new foliation and the return map is the correct
one. That is what we wanted to show.
Returning to the case where $\tau$ is allowed to have vertical edges:
note that at most one edge in a triangle can be vertical. Hence, if we
remove the vertical edges we are left with a decomposition whose cells
are Euclidean triangles and quadrilaterals, and whose edges are
transverse to ${\mathcal{F}}$. The above argument applies equally well to this
decomposition.
\medskip
We have shown that in a neighborhood of $({\bf{a}}_0, \mathbf{b}_0)$, the map ${\mathbf q}$
maps to $\til {\mathcal{H}}_\tau$, and is a local inverse for ${\mathrm{hol}}$. Hence it is
affine and a local homeomorphism, establishing (i). Part (ii) is by
definition part of our construction. Part (iii) follows from the
implication (1) $\implies$ (2) in Theorem \ref{thm: sullivan1}. In
verifying (iv) we use \equ{eq: G action}.
\end{proof}
The following useful observation follows immediately:
\begin{cor}
\name{cor: realizable open}
The set of positive pairs is open.
\end{cor}
We also record the following corollary of our construction, which
follows immediately from the description we gave of the variation of the
triangles in the geometric triangulation $\tau$:
\begin{cor}\name{cor: same gamma}
Suppose $\sigma, \til {\mathcal{H}}$, a positive pair $({\bf{a}}_0, \mathbf{b}_0)$, and ${\mathbf q}:
\mathcal{U} \to \til {\mathcal{H}}$ are as in Theorem \ref{thm: sullivan2}. The
structures ${\mathbf q}({\bf{a}}, \mathbf{b})$ can be chosen in their isotopy class so that
the following holds:
A single curve $\gamma$ is a judicious transversal for the vertical
foliation of ${\mathbf q}({\bf{a}}, \mathbf{b})$ for all $({\bf{a}}, \mathbf{b}) \in \mathcal{U}$; the
flat structures vary continuously with $({\bf{a}}, \mathbf{b})$, meaning that the
charts in the atlas, modulo translation, vary continuously; the return
map to $\gamma$ satisfies $\mathcal{T}({\mathbf q}, \gamma) = \mathcal{T}_\sigma({\bf{a}})$. In
particular, for any ${\mathbf q}' = {\mathbf q}({\bf{a}}, \mathbf{b})$, there is $\varepsilon>0$ such that
for $|s|< \varepsilon$ and ${\bf{a}}(s) = {\bf{a}}+s\mathbf{b}$, we have
$$
\mathcal{T}_\sigma({\bf{a}}(s)) = \mathcal{T}(h_s {\mathbf q}', \gamma).
$$
\end{cor}
\ignore{
\begin{remark}
Let $Q$ be as in \equ{eq: defn Q1}. Note that $L(x) = Q({\mathbf{e}}_i, \mathbf{b})$
for $x \in I_i$, and that if $\lambda$ is Lebesgue measure on $I$, then
\eq{eq: defn Q}{
Q({\bf{a}}, \mathbf{b}) = \sum a_i Q({\mathbf{e}}_i, \mathbf{b}) = \sum a_i L|_{I_i}
= \int L \, d\lambda
}
is the area of the polygon $\mathcal P$ defined in \equ{eq: defn
mathcalP}. Thus requirement \equ{eq: positivity} can be interpreted as
saying that the area of a surface with horizontal (resp. vertical)
measured foliation determined by ${\bf{a}}$ (resp. $\mathbf{b}$) should be positive
with respect to any transverse measure for ${\bf{a}}$.
Note also that if we fix ${\mathcal{F}}$ and $\gamma$ then ${\mathbb{R}}^d$ is identified
with $H_1(S \smallsetminus \Sigma; {\mathbb{R}})$ via \equ{eq:
concrete cycle}. Under this identification, the standard basis vectors
${\cal{E}}_i$ correspond to cycles whose images in $H_1(S ; {\mathbb{R}})$ are
non-trivial, and examining
\equ{eq: defn Q1} one recognizes the intersection pairing on $H_1(S ;
{\mathbb{R}})$. Thus \equ{eq: positivity} can also be interpreted as saying
that the classes represented by ${\bf{a}}, \mathbf{b}$ should have positive
intersection. However, \equ{eq: connections positive} admits no
interpretation on $S$; it is
\end{remark}
}
\section{Mahler's question for interval exchanges and its generalizations}
\name{section: results on iets}
A vector ${\mathbf{x}} \in {\mathbb{R}}^d$ is {\em very well approximable} if
for some $\varepsilon>0$ there are infinitely many ${\mathbf p} \in {\mathbb{Z}}^d, q
\in {\mathbb{N}}$ satisfying $\|q{\mathbf{x}} - {\mathbf p} \| < q^{-(1/d + \varepsilon)}.$ It is
a classical fact that almost every (with respect to Lebesgue measure) ${\mathbf{x}}
\in {\mathbb{R}}^d$ is not very well approximable, but that the set of very
well approximable vectors is large in the sense of Hausdorff
dimension.
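For $d = 1$ the condition reads $\|qx - p\| < q^{-(1+\varepsilon)}$. As a hedged numerical illustration (not part of the argument): for the golden ratio, a badly approximable number, the inequality with $\varepsilon = 1/2$ has only trivial small-$q$ solutions, whereas being very well approximable would require infinitely many. The helper name below is ours.

```python
import math

def dist_to_int(t):
    """Distance from t to the nearest integer, i.e. min_p |t - p|."""
    return abs(t - round(t))

phi = (1 + math.sqrt(5)) / 2      # golden ratio, badly approximable
eps = 0.5
# Solutions q of  ||q*phi|| < q^{-(1+eps)}  with q < 10^4.
solutions = [q for q in range(1, 10 ** 4)
             if dist_to_int(q * phi) < q ** -(1 + eps)]
```

Only $q = 1, 2, 3$ qualify; since $q\,\|q\varphi\| \geq 0.38$ for every $q$, no solutions can appear for larger $q$ either, so $\varphi$ is not very well approximable.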
Mahler conjectured in the 1930s that for almost every (with respect to
Lebesgue measure on the real line) $x \in {\mathbb{R}}$, the vector ${\bf{a}}(x)$ as in
\equ{eq: mahler curve},
is not very well approximable. This famous conjecture was settled
by Sprindzhuk in the 1960s and spawned many additional questions of a
similar nature. A general formulation of the
problem is to describe measures $\mu$ on ${\mathbb{R}}^d$ for which almost
every ${\mathbf{x}}$ is not very well approximable. See \cite{dimasurvey} for
a survey.
In this section we apply Theorems \ref{thm: sullivan1} and \ref{thm:
sullivan2} to analogous problems concerning
interval exchange transformations.
Fix a permutation $\sigma$ on $d$ symbols which is irreducible and
admissible. In answer to a conjecture of Keane, it was proved by
Masur \cite{Masur-Keane} and Veech \cite{Veech-zippers} that almost
every ${\bf{a}}$ (with respect to Lebesgue measure on ${\mathbb{R}}^d_+$) is uniquely
ergodic. On the other hand Masur and Smillie
\cite{MS} showed that the set of non-uniquely ergodic interval exchanges
is large in the sense of Hausdorff dimension.
In this paper we consider the problem of describing
measures $\mu$ on ${\mathbb{R}}^d_+$ such that $\mu$-a.e. ${\bf{a}}$ is uniquely
ergodic. In a celebrated paper \cite{KMS}, it was shown that for
certain $\sigma$ and certain line
segments $\ell \subset {\mathbb{R}}^d_+$ (arising from a problem in billiards on
polygons), for $\mu$-almost every ${\bf{a}} \in \ell$,
$\mathcal{T}_\sigma({\bf{a}})$ is uniquely
ergodic, where $\mu$ denotes Lebesgue measure on $\ell$. This was
later abstracted in
\cite{Veech-fractals}, where the same result was shown to hold for a
general class of measures in place of $\mu$. Our strategy is strongly
influenced by these papers.
Before stating our results we introduce more terminology. Let $B(x,r)$
denote the interval $(x-r, x+r)$ in ${\mathbb{R}}$.
We say that a finite regular Borel measure $\mu$ on ${\mathbb{R}}$ is {\em
decaying and Federer} if there are positive
$C, \alpha, D$ such that
for every $x \in
{\mathrm{supp}} \, \mu$ and every $0<
\varepsilon, r<1$,
\eq{eq: decaying and federer}{
\mu \left(B(x,\varepsilon r) \right) \leq C\varepsilon^\alpha \mu\left(B(x,r) \right)
\ \ \ \mathrm{and} \ \ \ \mu \left( B(x, 3r) \right) \leq D\mu\left(B(x,r) \right).
}
It is not hard to show that Lebesgue measure, and the
coin-tossing measure on Cantor's middle thirds set, are both decaying
and Federer. More constructions of such measures are given in
\cite{Veech-fractals, bad}.
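For the coin-tossing measure this can be spot-checked numerically: its CDF is the Cantor function, computable digit by digit, and at the support point $x = 0$ the inequalities of \equ{eq: decaying and federer} hold with, for instance, $C = 2$, $\alpha = \log 2/\log 3$, $D = 8$. A sketch (helper names ours; a spot check over sample radii, not a proof):

```python
def cantor_cdf(t):
    """CDF of the coin-tossing measure on the middle-thirds Cantor set."""
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    value, weight = 0.0, 0.5
    for _ in range(60):              # 60 ternary digits exceed float64 precision
        d = int(t * 3)
        t = t * 3 - d
        if d == 1:                   # t fell into a removed middle third,
            return value + weight    # where the CDF is constant
        if d == 2:
            value += weight
        weight /= 2
    return value

ALPHA = 0.6309297535714574           # log 2 / log 3

def ball_mass(x, r):
    """mu(B(x, r)) for the coin-tossing measure."""
    return cantor_cdf(x + r) - cantor_cdf(x - r)

# Spot-check the decaying (C = 2, alpha = log2/log3) and Federer (D = 8)
# inequalities at the support point x = 0, over a grid of radii.
checks = []
for k in range(1, 12):
    r = 0.9 ** k / 3
    for j in range(1, 10):
        eps = j / 10
        checks.append(ball_mass(0, eps * r) <= 2 * eps ** ALPHA * ball_mass(0, r))
    checks.append(ball_mass(0, 3 * r) <= 8 * ball_mass(0, r))
```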
Let $\dim$ denote Hausdorff dimension, and for $x\in{\mathrm{supp}}\,\mu$ let
$$
\underline{d}_\mu(x) = \liminf_{r\to 0}\frac{\log
\mu\big(B(x,r)\big)}{\log r}.
$$
Now let
\eq{eq: defn epsn}{
\varepsilon_n({\bf{a}}) = \min \left \{\left|\mathcal{T}^k(x_i) - \mathcal{T}^{\ell}(x_j)\right| :
0\leq k, \ell \leq n, 1 \leq i ,j \leq d-1, (i,k) \neq (j,\ell)
\right\},
}
where $\mathcal{T} = \mathcal{T}_\sigma({\bf{a}})$.
We say that ${\bf{a}}$ is
{\em of recurrence type} if $\limsup n\varepsilon_n({\bf{a}})
>0$
and
{\em of bounded type} if $\liminf
n\varepsilon_n({\bf{a}})>0$.
It is known by work of Masur, Boshernitzan, Veech and Cheung that
if ${\bf{a}}$ is of recurrence type then it is uniquely ergodic, but that
the converse does not hold -- see \S\ref{section: saddles} below for more details.
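These definitions are easy to experiment with. For $d = 2$ (a circle rotation) $\varepsilon_n$ is a minimum over pairs of orbit points of the single discontinuity; for the golden-mean rotation the quantity $n\varepsilon_n({\bf{a}})$ stays bounded away from zero, as bounded type predicts for a rotation number with bounded partial quotients, while a rational rotation has a periodic orbit and $\varepsilon_n({\bf{a}}) = 0$ eventually. A brute-force sketch (helper name ours, illustration only):

```python
import math

def orbit_min_gap(alpha, x1, n):
    """eps_n(a) for the 2-interval exchange: the minimum of
    |T^k(x1) - T^l(x1)| over 0 <= k < l <= n, where T is rotation by alpha."""
    pts = [(x1 + k * alpha) % 1.0 for k in range(n + 1)]
    return min(abs(p - q) for i, p in enumerate(pts) for q in pts[i + 1:])

alpha = (math.sqrt(5) - 1) / 2               # golden-mean rotation number
n = 200
eps_n = orbit_min_gap(alpha, 1 - alpha, n)   # 1 - alpha is the discontinuity
```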
We have:
\begin{thm}[Lines]
\name{cor: inheritance, lines}
Suppose $({\bf{a}}, \mathbf{b})$ is a positive pair. Then there is $\varepsilon_0>0$ such
that the following hold for ${\bf{a}}(s) = {\bf{a}}+s\mathbf{b}$ and
for every decaying and Federer measure $\mu$ with ${\mathrm{supp}} \, \mu
\subset (-\varepsilon_0, \varepsilon_0)$:
\begin{itemize}
\item[(a)]
For $\mu$-almost every $s$, ${\bf{a}}(s)$ is of
recurrence type.
\item[(b)]
$\dim \, \left \{s \in {\mathrm{supp}} \, \mu : {\bf{a}}(s) \mathrm{ \ is \ of \ bounded \
type} \right \} \geq \inf_{x \in {\mathrm{supp}} \, \mu} \underline{d}_{\mu}(x).$
\item[(c)]
$\dim \left \{s \in (-\varepsilon_0, \varepsilon_0) : {\bf{a}}(s) \mathrm{ \ is \ not \ of \
recurrence \ type } \right \} \leq 1/2.$
\end{itemize}
\end{thm}
\begin{thm}[Curves]
\name{cor: ue on curves}
Let $I$ be an interval, let $\mu$ be a decaying and Federer measure on
$I$, and let $\beta: I \to
{\mathbb{R}}^d_+$ be a $C^2$ curve, such that for $\mu$-a.e. $s \in I$,
$(\beta(s), \beta'(s))$ is positive. Then for $\mu$-a.e. $s \in I$,
$\beta(s)$ is of recurrence type.
\end{thm}
Following some preliminary work, we will prove Theorem \ref{cor:
inheritance, lines} in \S\ref{section: inheritance lines} and Theorem
\ref{cor: ue on curves} in \S\ref{section: mahler curves}. In
\S\ref{section: nondivergence} we
will prove a strengthening of Theorem \ref{cor: inheritance,
lines}(a).
\section{Saddle connections, compactness criteria}
\name{section: saddles}
A link between the $G$-action and unique ergodicity questions was made
in the following fundamental result.
\begin{lem}[{Masur \cite{Masur Duke}}]\name{lem: Masur}
If $q \in {\mathcal{H}}$ is not uniquely ergodic then the trajectory
$\{g_tq: t \geq 0\}$ is {\em divergent}, i.e. for any compact
$K \subset {\mathcal{H}}$ there is $t_0$ such that for all $t
\geq t_0$, $g_tq \notin K.$
\end{lem}
Masur's result is in fact stronger as it provides divergence in
the moduli space of quadratic differentials. The converse statement is
not true; see \cite{CM}. It is known (see \cite[Prop. 3.12]{Vorobets}
and \cite[\S 2]{Veech-fractals})
that ${\bf{a}}$ is of recurrence (resp. bounded) type
if and only if the forward geodesic trajectory of any of its lifts
returns infinitely often to (resp. stays in) some compact subset of
${\mathcal{H}}$. It follows using Lemma \ref{lem: Masur} that if $\mathcal{T}$ is
of recurrence type then it is uniquely ergodic. In this section we
will prove a quantitative version of these results, linking the
behavior of $G$-orbits to the size of the quantity $n \varepsilon_n({\bf{a}})$.
We denote the set
of all saddle connections for a marked translation surface ${\mathbf q}$ by
${\mathcal L}_{\mathbf q}$. There is a natural
identification of ${\mathcal L}_{\mathbf q}$ with ${\mathcal L}_{g{\mathbf q}}$ for any $g \in G$. We define
$$\phi(q) = \min \left\{ \ell(\alpha,{\mathbf q} ) : \alpha \in {\mathcal L}_{\mathbf q}
\right\},$$
where ${\mathbf q} \in \pi^{-1}(q)$ and
$\ell(\alpha, {\mathbf q}) = \max \{|x(\alpha, {\mathbf q}) |,
|y(\alpha, {\mathbf q}) | \}.$ Let
${\mathcal{H}}_1$ be the area-one sublocus in ${\mathcal{H}}$, i.e. the set of $q \in {\mathcal{H}}$
for which the total area of the surface is one.
A standard compactness criterion for each stratum asserts that a set
$X\subset {\mathcal{H}}_1$ is compact if and only if
$$\inf_{q \in X} \phi(q) >0.$$
Thus, for each $\varepsilon>0$,
$$K_{\varepsilon} = \left\{q \in {\mathcal{H}}_1: \phi(q) \geq \varepsilon \right\}
$$
is compact, and $\{K_\varepsilon\}_{\varepsilon >0}$ form an exhaustion of
${\mathcal{H}}_1$. We have:
\begin{prop}
\name{lem: relating lengths and disconts}
Suppose $\gamma$ is judicious for $q$. Then there
are positive $\kappa, c_1, c_2, n_0$ such that for $\mathcal{T} = \mathcal{T}(q,
\gamma)$ we have
\begin{itemize}
\item If $n\ge n_0$, $\zeta\ge n\varepsilon_n(\mathcal{T})$, and $e^{t/2}=n\sqrt{2c_2/\zeta}$, then
\eq{eq: first one}{
\phi(g_tq) \le \kappa \sqrt\zeta.
}
\item If $n = \lfloor \kappa e^{t/2}\rfloor$, then
\eq{eq: second one}{
n\varepsilon_n(\mathcal{T}) \leq \kappa \phi(g_{t}q).
}
\end{itemize}
Moreover,
$\kappa, c_1, c_2, n_0$ may be taken to be
uniform for $q$ ranging over a compact subset of ${\mathcal{H}}$ and $\gamma$
ranging over smooth curves of uniformly bounded length, with return
times to the curve bounded above and below.
\end{prop}
\begin{proof}
We first claim that
\eq{eq: defn epsn alt}{
\varepsilon_n({\bf{a}}) = \min \left\{\left|x_i - \mathcal{T}^r
x_j\right|: 1 \leq i,j \leq d-1, |r| \leq n, (j,r) \neq (i,0)\right\}.
}
Indeed, if the minimum
in \equ{eq: defn epsn} is equal to $|\mathcal{T}^kx_i - \mathcal{T}^\ell x_j|$ with
$\ell \geq k \geq 1$, then the interval between
$\mathcal{T}^kx_i$ and $\mathcal{T}^\ell x_j$ does not contain discontinuities for
$\mathcal{T}^{-k}$ (if it did the
minimum could be made smaller). This implies that $\mathcal{T}^{-k}$ acts as
an isometry on this interval so that $|x_i - \mathcal{T}^{\ell - k}x_j| = \varepsilon_n(\mathcal{T})$.
Similarly, if the minimum in \equ{eq: defn epsn alt} is obtained for
$i,j$ and $r=-k\in[-n,0]$ then the interval between $x_i$ and
$\mathcal{T}^{-k} x_j$ has no discontinuities of $\mathcal{T}^k$, so that the same
value is also obtained for $|\mathcal{T}^k x_i - x_j|$. Hence the minimum in
\equ{eq: defn epsn alt} equals the minimum in \equ{eq: defn epsn}.
Let ${\mathbf q} \in \pi^{-1}(q)$ and let $n_0\ge 1$.
Suppose the return times to
$\gamma$ along the vertical foliation
are bounded below and above by $c_1$ and $c_2$,
respectively.
Making $c_2$ larger we can also assume that the total
variation in the vertical direction along $\gamma$ is no more than
$c_2$.
Write $n \varepsilon_n(\mathcal{T}) = n|x_i - \mathcal{T}^rx_j| \le\zeta$ and let $t$ be as in
\equ{eq: first one}.
Let $\sigma_i$ and $\sigma_j$ be the singularities of ${\mathbf q}$ lying
vertically above $x_i$ and $x_j$.
Let $\alpha$ be the path moving vertically from $\sigma_j$ along the
forward trajectory of $x_j$
until $\mathcal{T}^rx_j$, then along $\gamma$ to $x_i$, and vertically up to $\sigma_i$.
Then $|x({\mathbf q},\alpha)| = \varepsilon_n(\mathcal{T}) \le
\zeta/n$ and $|y( {\mathbf q},\alpha)| \leq c_2r + c_2 \leq 2nc_2$ for $n
\geq n_0$. Therefore, since $e^{t/2} = n\sqrt{2c_2/\zeta}$, we have
\[
\begin{split}
|x(g_t{\mathbf q},\alpha)| & = e^{t/2} |x({\mathbf q},\alpha)| \leq \sqrt{2c_2\zeta}, \\
|y(g_t{\mathbf q},\alpha)| & = e^{-t/2} |y( {\mathbf q},\alpha)| \leq \sqrt{2c_2\zeta},
\end{split}
\]
so $\ell(\alpha, g_{t}{\mathbf q}) \leq \kappa \sqrt\zeta$, where $\kappa = \sqrt{2c_2}.$
A shortest representative for $\alpha$ with respect to $g_{t}{\mathbf q}$
is a concatenation $\bar\alpha$ of saddle connections. Since $\alpha$ travels
monotonically along both horizontal and vertical foliations of ${\mathbf q}$,
a Gauss-Bonnet argument tells us that $\bar\alpha$ does the same, so that
the coordinates of its saddle connections have consistent signs.
Hence the same bound holds for each of those saddle
connections, giving \equ{eq: first one}.
\medskip
Now we establish \equ{eq: second one}.
Let $\alpha$ be a saddle connection minimizing
$\ell(\cdot,g_t{\mathbf q})$, and write
$x_t = x(g_t{\mathbf q},\alpha)$ and
$y_t = y(g_t{\mathbf q},\alpha)$. Without loss of generality (reversing the
orientation of $\alpha$ if necessary)
we may assume that $x_t \ge 0$.
Minimality means
$$\phi = \phi(g_{t}q) = \max (x_t,|y_t|).$$
In ${\mathbf q}$, the coordinates of $\alpha$ satisfy
$$
x_0 = e^{-t/2}x_t \le e^{-t/2}\phi
$$
and
$$
|y_0| = e^{t/2}|y_t| \le e^{t/2}\phi.
$$
\begin{figure}[htp]
\includegraphics{vstrip}
\caption{The vertical strip ${\mathcal{U}}$ minus rays $R_\sigma$ immerses
isometrically in $S$.}
\name{figure: vstrip}
\end{figure}
Let ${\mathcal{U}}$ be the strip $[0,x_0]\times{\mathbb{R}}$ in ${\mathbb{R}}^2$, and let $v\subset
{\mathcal{U}}$ be the line segment connecting $v_-=(0,0)$ to
$v_+=(x_0,y_0)$. A neighborhood of $v$ in ${\mathcal{U}}$ embeds in $S$ by a local
isometry that preserves horizontal and vertical directions. We can
extend this to an isometric immersion $\psi:{\mathcal{U}}' \to
S$, where ${\mathcal{U}}'$ has the following form:
There is a discrete set $\hhat\Sigma \subset {\mathcal{U}} \setminus \operatorname{int}(v)$, and for
each $\sigma=(x,y)\in\hhat \Sigma$ a vertical ray $R_\sigma$ of the
form $\{x\}\times(y,\infty)$ (``upward pointing'') or
$\{x\}\times(-\infty,y)$ (``downward pointing''), so that the rays are
pairwise disjoint, disjoint from $v$, and ${\mathcal{U}}' = {\mathcal{U}} \setminus
\bigcup_{\sigma\in\hhat\Sigma} R_\sigma$ (see Figure \ref{figure: vstrip}).
The map $\psi$ takes
$\hhat\Sigma$ to $\Sigma$, and it is defined by extending the
embedding at each $p\in v$ maximally along the vertical line through
$p$ (in both directions) until the image encounters a singularity in $\Sigma$.
(We include $v_-$ and $v_+$ in $\hhat\Sigma$, and for these two points
delete both an upward and a downward pointing ray.)
Let $\hhat\gamma$ be the preimage $\psi^{-1}(\gamma)$. This is a union
of arcs properly embedded in ${\mathcal{U}}'$, and transverse to the vertical
foliation in ${\mathbb{R}}^2$. By definition of $c_1$ and $c_2$, each vertical
line in ${\mathcal{U}}'$ is cut by $\hhat\gamma$ into segments of length at
least $c_1$ and at most $c_2$. Moreover the total vertical extent of
each component of $\hhat\gamma$ is at most $c_2$.
Let $\gamma_1$ be the component of $\hhat\gamma$ that meets the
downward ray based at $v_+$ at the highest point $\hat r$. The other
endpoint $\hat p$ of $\gamma_1$ lies on some other ray $R_\sigma$.
The width of $\gamma_1$ is at most $x_0$, so the image points $r =
\psi(\hat r) $ and $p = \psi(\hat p)$ satisfy $|p-r|\le x_0$, with
respect to the induced transverse measure on $\gamma$. We now check
that $p$ and $r$ are images of discontinuity points of $\mathcal{T}$, by
controlled powers of $\mathcal{T}$.
By choice of $\gamma_1$, the upward leaf emanating from $r$ encounters
the singularity $\psi(v_+)$ before it returns to $\gamma$, and hence
$r$ itself is a discontinuity point $x_i$.
For $p$, let us write $\sigma = (x,y)$ and $p=(x,y')$. Suppose first
that $y_0\ge 0$. There are now two cases. If $R_\sigma$ lies above $v$ (and hence
is upward pointing), then $y'\ge y \ge 0$, and moreover (since the
vertical variation of $\gamma_1$ is bounded) $y' \le y_0+c_2$.
The segment of
$R_\sigma$ between $\sigma $ and $p$ is cut by $\hhat \gamma$
(incident from the right) into at
most $(y'-y)/c_1$ pieces, and this implies that there is some $k\ge 0$
bounded by
$$
k \le \frac{y_0+c_2}{c_1} \le \frac{e^{t/2}\phi + c_2}{c_1}
$$
such that $p= \mathcal{T}^k x_j$ for some discontinuity $x_j$.
If $R_\sigma$ lies below $v$ and is downward pointing, then $y' \le
y \le y_0$ and $y' \ge y_0-2c_2$, so that by the same logic there is $k\ge
0$ with
$$
k \le \frac{2c_2}{c_1}
$$
such that $\mathcal{T}^k p = x_j$ for some discontinuity $x_j$.
Hence in either case we have
\eq{eq: x0 bound}{
|\mathcal{T}^m x_j - x_i| \le x_0 \le e^{-t/2}\phi
}
where $-2c_2/c_1 \le m \le (y_0+c_2)/c_1$.
If $y_0 <0$ there is a similar analysis, yielding the bound
\equ{eq: x0 bound} where now $(y_0-2c_2)/c_1 \le m \le c_2/c_1$.
Noting that $\phi < 1$ by
area considerations,
if we take $n = \lfloor
\kappa e^{t/2}\rfloor$, where $\kappa = 1+(1+2c_2)/c_1$, then we
guarantee $|m| \le n$, and hence get
$$
n\varepsilon_n(\mathcal{T}) \le \kappa e^{t/2} \, e^{-t/2}\phi = \kappa\phi.
$$
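For completeness, here is a sketch of why these choices force $|m| \le n$ (using $\phi < 1$, $t \geq 0$, and $\lfloor x \rfloor \geq x - 1$). In all of the cases above, $|m| \leq (|y_0|+2c_2)/c_1$, while $|y_0| \leq e^{t/2}\phi < e^{t/2}$; hence
$$
|m| \;\leq\; \frac{e^{t/2}+2c_2}{c_1} \;\leq\; \frac{(1+2c_2)\,e^{t/2}}{c_1}
\;=\; (\kappa-1)\,e^{t/2} \;\leq\; \kappa e^{t/2} - 1 \;\leq\; \lfloor \kappa e^{t/2} \rfloor \;=\; n,
$$
where the middle inequalities use $e^{t/2} \geq 1$.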
\medskip
\end{proof}
\section{Mahler's question for lines}\name{section:
inheritance lines}
In this section we will derive Theorem \ref{cor: inheritance, lines}
from Theorem \ref{thm: sullivan2} and earlier results
of \cite{KMS, Masur Duke, with Dima, bad}.
We will need the following:
\begin{prop}
\name{lem: horocycles and circles}
For any $|\theta| < \pi/2$, there is a bounded subset $\Omega \subset G$
such that for any $t \geq 0$, and any $q \in {\mathcal{H}}$, there is $w \in
\Omega$ such that $g_t h_{-\tan \theta} q = w g_t r_{\theta} q.$
\end{prop}
\begin{proof}
Let
$$x = \left(\begin{matrix}1/\cos \theta & 0 \\ -\sin \theta & \cos
\theta \end{matrix} \right) \in G.
$$
Then $g_t x g_{-t}$ converges in
$G$ as $t \to \infty$, and we set $\Omega = \{g_txg_{-t} : t \geq
0\}$.
Since $xr_\theta = h_{-\tan \theta}$ and
$g_t h_{-\tan \theta} q = g_t x
r_\theta q = g_t x
g_{-t} g_t r_\theta q$, the claim follows.
\end{proof}
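Concretely, assuming the standard parametrizations $g_t=\operatorname{diag}(e^{t/2},e^{-t/2})$, $h_s=\left(\begin{smallmatrix}1&s\\0&1\end{smallmatrix}\right)$ and $r_\theta=\left(\begin{smallmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{smallmatrix}\right)$ (conventions consistent with the identity $g_th_sg_{-t}=h_{e^ts}$ used elsewhere in this paper), the identities in the proof can be checked directly:
$$
x\,r_\theta=\begin{pmatrix}1/\cos \theta & 0 \\ -\sin \theta & \cos\theta \end{pmatrix}
\begin{pmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
= \begin{pmatrix}1 & -\tan\theta \\ 0 & 1 \end{pmatrix}=h_{-\tan\theta},
\qquad
g_t x\, g_{-t}=\begin{pmatrix}1/\cos \theta & 0 \\ -e^{-t}\sin \theta & \cos\theta \end{pmatrix},
$$
and the latter converges to $\operatorname{diag}(1/\cos\theta,\, \cos\theta)$ as $t \to \infty$, so $\Omega$ is indeed bounded.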
\begin{proof}[Proof of Theorem \ref{cor: inheritance, lines}]
Let $({\bf{a}}, \mathbf{b})$ be positive, let ${\mathcal{U}}$ be a neighborhood
of $({\bf{a}}, \mathbf{b})$ in ${\mathbb{R}}^d_+ \times {\mathbb{R}}^d$, let ${\mathbf q}: {\mathcal{U}} \to {\mathcal{H}}$ as in
Theorem \ref{thm: sullivan2}, let $q = \pi \circ {\mathbf q}$ where $\pi: \til
{\mathcal{H}} \to {\mathcal{H}}$ is the natural projection, and let $\varepsilon_0>0$ so that
${\bf{a}}(s)={\bf{a}}+s\mathbf{b} \in {\mathcal{U}}$ for all $s \in (-\varepsilon_0, \varepsilon_0)$. Making $\varepsilon_0$
smaller if necessary, let $\gamma$
be a judicious curve for $q$ such that $\mathcal{T}_\sigma({\bf{a}}(s)) = \mathcal{T}(h_sq,
\gamma)$ for all $s \in (-\varepsilon_0, \varepsilon_0)$. By Theorem \ref{thm:
sullivan2} and Proposition \ref{lem: relating
lengths and disconts}, ${\bf{a}}(s)$ is of recurrence (resp. bounded) type
if and only if there is a compact subset $K \subset {\mathcal{H}}$ such that
$\{t >0 : g_th_sq \in K\}$ is unbounded (resp., is equal to $(0,
\infty)$). The main result of \cite{KMS} is that for any $q$, for
Lebesgue-a.e. $\theta \in (-\pi, \pi)$ there is a compact $K
\subset
{\mathcal{H}}$ such that $\{t>0: g_tr_{\theta}q \in K\}$ is unbounded. Thus (a)
(with $\mu$ equal to Lebesgue measure)
follows via Proposition
\ref{lem: horocycles and circles}. For a general measure $\mu$, the
statement will follow from
Corollary \ref{cor: strengthening} below.
Similarly (b) follows from \cite{with Dima}
for $\mu$ equal to
Lebesgue measure, and from \cite{bad} for a general decaying Federer
measure, and (c) follows from \cite{Masur Duke}.
\end{proof}
\section{Quantitative nondivergence for horocycles}
\name{section: nondivergence}
In this section we will recall a quantitative nondivergence result for
the horocycle flow, which
is a variant of results in \cite{with Yair}, and will be crucial for
us. The theorem was stated
without proof in \cite[Prop. 8.3]{bad}. At the end of the section we
will use it to obtain a strengthening
of Theorem \ref{cor: inheritance, lines}(a).
Given positive constants $C, \alpha, D$, we say that a regular finite Borel measure $\mu$ on ${\mathbb{R}}$ is {\em
$(C, \alpha)$-decaying and $D$-Federer} if \equ{eq: decaying and
federer} holds for all $x \in {\mathrm{supp}} \, \mu$ and all $0 < \varepsilon, r <1.$
For an interval
$J=B(x,r)$ and $c>0$ we write $cJ = B(x, cr)$. Let ${\mathcal{H}}_1$ and $K_\varepsilon$
be as in \S\ref{section: saddles}, and let $\til {\mathcal{H}}_1 = \pi^{-1}({\mathcal{H}}_1)$.
\begin{thm}
\name{thm: nondivergence, general}
Given a stratum ${\mathcal{H}}$ of translation surfaces\footnote{The
result is also valid (with identical proof) in the more general setup
of quadratic differentials.}, there are positive
constants $C_1, \lambda, \rho_0$, such that for any
$(C,\alpha)$-decaying and $D$-Federer measure $\mu$ on
an interval $B\subset {\mathbb{R}}$, the following holds. Suppose $J \subset {\mathbb{R}}$
is an interval with $3J
\subset B$,
$0<\rho
\leq \rho_0, \, {\mathbf q}
\in \til{{\mathcal{H}}}_1$, and suppose
\eq{eq: condn on J}{\forall \delta \in {\mathcal L}_{{\mathbf q}}, \ \sup_{s \in J}\,
\ell(\delta, h_s{\mathbf q}) \geq \rho.}
Then for any $0<\varepsilon<\rho$:
\begin{equation}
\label{eq - precise yet long}
\mu\left( \left\{s \in J: h_s \pi({\mathbf q}) \notin K_{\varepsilon} \right\} \right) \leq
C' \left(\frac{\varepsilon}{\rho}\right)^{\lambda \alpha}\mu(J),
\end{equation}
where $C' = C_1 2^\alpha C D$.
\end{thm}
\begin{proof}
The proof is similar
to that of \cite[Thm.\ 6.10]{with Yair}, but with the assumption that
$\mu$ is Federer
substituting for condition (36) of that paper.
To avoid repetition we give the proof
making reference to \cite{with Yair} when necessary.
Let $\lambda, \rho_0, C_1$ substitute for $\gamma, \rho_0, C$ as in
\cite[Proof of Thm. 6.3]{with Yair}.
For an interval $J \subset {\mathbb{R}}$, let $|J|$ denote its length.
For a function $f : {\mathbb{R}} \to {\mathbb{R}}_+$ and $\varepsilon>0$, let
$$J_{f, \varepsilon} = \{x \in J: f(x)<\varepsilon\} \ \ \mathrm{and } \ \|f\|_J =
\sup_{x \in J} f(x).$$
For $\delta \in {\mathcal L}_{\mathbf q}$ let $\ell_{\delta}$ be the function
$\ell_{\delta}(s) = \ell(\delta, h_s{\mathbf q})$.
Suppose, for ${\mathbf q} \in \til {\mathcal{H}}, \, \delta \in {\mathcal L}_{{\mathbf q}}$ and an interval
$J$, that $\|\ell_{\delta} \|_{J} \geq \rho.$ An elementary computation
(see \cite[Lemma
4.4]{with Yair}) shows that $J_{\varepsilon} = J_{\ell_{\delta}, \varepsilon}$ is a
subinterval of $J$ and
\begin{equation}
\label{eq: ratio of lengths}
|J_{\varepsilon}| \leq \frac{2 \varepsilon}{\rho} |J|.
\end{equation}
Suppose that $\mu$ is
$(C,\alpha)$-decaying and $D$-Federer, and ${\mathrm{supp}} \, \mu \cap
J_{\varepsilon} \neq \varnothing$.
Let $x \in {\mathrm{supp}} \, \mu \cap J_{\varepsilon}$.
Note that $J_{\varepsilon} \subset B(x, |J_{\varepsilon}|)$ and $B(x, |J|) \subset
3J.$
One has
\[
\begin{split}
\mu(J_{\varepsilon}) & \leq \mu(B(x, |J_{\varepsilon}|)) \\
& \stackrel{\mathrm{decay, \ } \equ{eq: ratio of
lengths}}{\leq} C \left(\frac{2
\varepsilon}{\rho} \right)^{\alpha} \mu(B(x, |J|)) \\
& \leq 2^{\alpha} C \left(\frac{\varepsilon}{\rho} \right)^{\alpha} \mu(3J)
\leq C'' \left(\frac{\varepsilon}{\rho} \right)^{\alpha} \mu(J),
\end{split}
\]
where $C'' = 2^\alpha CD$.
This shows that if $J$ is an interval, ${\mathbf q} \in \til {\mathcal{H}}$, and $\delta \in
\mathcal{L}_{{\mathbf q}}$ is such that $\|\ell_{\delta}\|_J \geq \rho$, then for
any $0<\varepsilon <\rho$,
\begin{equation*}
\frac{\mu(J_{\ell_{\delta}, \varepsilon})}{\mu(J)} \leq C''
\left( \frac{\varepsilon}{\rho} \right)^{\alpha}.
\end{equation*}
Now to obtain \equ{eq - precise yet long}, define $F(x)=C''x^{\alpha}$
and repeat the proof of \cite[Theorem
6.3]{with Yair}, but using $\mu$ instead of Lebesgue measure on ${\mathbb{R}}$
and using
\cite[Prop. 3.4]{with Yair} in place of \cite[Prop. 3.2]{with
Yair}.
\end{proof}
\begin{cor}
\name{cor: return to compacts}
For any stratum ${\mathcal{H}}$ of translation surfaces and any $C, \alpha, D$
there is a compact $K \subset {\mathcal{H}}_1$ such that for any $q \in {\mathcal{H}}_1$, any
unbounded $\mathcal{T} \subset {\mathbb{R}}_+$ and
any $(C, \alpha)$-decaying and $D$-Federer measure $\mu$ on an interval
$J \subset {\mathbb{R}}$, for $\mu$-a.e. $s \in J$ there is a sequence $t_n \to
\infty, \, t_n \in \mathcal{T}$ such that $g_{t_n}h_sq \in K$.
\end{cor}
\begin{proof}
Given $C, \alpha, D$, let $\lambda, \rho_0, C'$ be as in Theorem
\ref{thm: nondivergence,
general}. Let $\varepsilon$ be small enough so that
$$C' \left (\frac{\varepsilon}{\rho_0} \right)^{\lambda \alpha} <1,
$$
and let $K=K_{\varepsilon}$. Suppose to the contrary that for some
$(C,\alpha)$-decaying and $D$-Federer measure $\mu$ on some interval
$J_0$ we have
$$\mu(A) > 0, \ \ \mathrm{where} \ A=\{s \in J_0 :
\exists t_0 \, \forall t \in \mathcal{T} \cap (t_0, \infty), \, g_th_sq \notin K\}.$$
Then there is $A_0 \subset A$ and $t_0>0$ such that $\mu(A_0)>0$ and
\eq{eq: property A0}{
s \in A_0, \ t \in \mathcal{T} \cap (t_0, \infty) \ \implies \ g_th_sq \notin K.}
By a general density theorem, see e.g. \cite[Cor. 2.14]{Mattila},
there is an interval $J$ with $3J \subset J_0$ such that
\eq{eq: from density}{
\frac{\mu(A_0 \cap J)}{\mu(J)} > C' \left (\frac{\varepsilon}{\rho_0} \right)^{\lambda
\alpha}.}
We claim that by taking $t > t_0$ sufficiently large
we can assume that for all $\delta \in {\mathcal L}_{{\mathbf q}}$ there is $s \in J$
such that
$\ell(\delta, g_th_s{\mathbf q})
\geq \rho_0.$
This will guarantee that \equ{eq: condn on J} holds for the horocycle
$s \mapsto g_th_sq = h_{e^ts}g_tq$, and conclude the proof since \equ{eq:
property A0} and \equ{eq: from
density} contradict \equ{eq - precise yet long} (with $g_{t}q$ in
place of $q$).
It remains to prove the claim. Let $\zeta = \phi(q)$
so that for any $\delta \in {\mathcal L}_{{\mathbf q}}$,
$$\ell(\delta, {\mathbf q}) = \max \left
\{|x(\delta, {\mathbf q})|, |y(\delta, {\mathbf q})| \right\}
\geq \zeta.$$
If $|x(\delta, {\mathbf q} )| \geq \zeta$, then
$ |x(\delta, g_t{\mathbf q})| = e^{t/2}
|x(\delta, {\mathbf q})| \geq \zeta e^{t/2},$ and if
$|y(\delta, {\mathbf q})| \geq \zeta$ then the function
$$s \mapsto x(\delta, g_t h_s{\mathbf q}) = e^{t/2}\left(x(\delta,
{\mathbf q})+sy(\delta, {\mathbf q}) \right)$$
has slope $|e^{t/2}y(\delta, {\mathbf q})| \geq e^{t/2} \zeta$, hence
$\sup_{s \in J}
|x(\delta, g_t h_s {\mathbf q})| \geq \zeta e^{t/2}
\frac{|J|}{2}.$
Thus the claim holds when $\zeta e^{t/2} \geq \max \left\{\rho_0,
\frac{2\rho_0}{|J|} \right \}.$
\end{proof}
This yields a strengthening of Theorem \ref{cor:
inheritance, lines}(a).
\begin{cor}
\name{cor: strengthening}
Suppose $({\bf{a}}, \mathbf{b})$ is positive, and write ${\bf{a}}(s) = {\bf{a}}+s\mathbf{b}$. There is
$\varepsilon_0>0$ such that
given $C, \alpha, D$ there is $\zeta>0$ such that if $\mu$ is $(C,
\alpha)$-decaying and $D$-Federer, and ${\mathrm{supp}} \, \mu \subset (-\varepsilon_0,
\varepsilon_0)$, then for $\mu$-almost every $s$, $\limsup_{n \to \infty} n
\varepsilon_n({\bf{a}}(s)) \geq \zeta.$
\end{cor}
\begin{proof}
Repeat the proof of Theorem \ref{cor: inheritance, lines}, using
Corollary \ref{cor: return to
compacts} and Proposition
\ref{lem: relating lengths and disconts}
instead of \cite{KMS}.
\end{proof}
\section{Mahler's question for curves}\name{section: mahler curves}
In this section we prove Theorems \ref{thm:
mahler curve} and \ref{cor: ue on curves} by deriving
them from a
stronger statement.
\begin{thm}
\name{thm: curves}
Let $J \subset {\mathbb{R}}$ be a compact interval, let $\beta: J \to {\mathbb{R}}^d_+$ be a
$C^2$ curve, let $\mu$ be a decaying Federer measure on
$J$, and suppose that for every $s_1, s_2 \in J$, $(\beta(s_1),
\beta'(s_2))$ is a positive pair. Then there
is $\zeta>0$ such that for $\mu$-a.e. $s \in J, $ $
\limsup_{n \to \infty} n\varepsilon_n(\beta(s)) \geq
\zeta$.
\end{thm}
\begin{proof}[Derivation of Theorem \ref{cor: ue on curves} from
Theorem \ref{thm: curves}]
If Theorem \ref{cor: ue on curves} is false then there is $A \subset I$
with $\mu(A)>0$ such that for all $s \in A$, $\beta(s)$ is not of
recurrence type but
$(\beta(s), \beta'(s))$ is positive. Let $s_0 \in A \cap {\mathrm{supp}} \, \mu$
so that $\mu(A \cap J) > 0$ for any open interval $J$
containing $s_0$. Since the set
of positive pairs is open (Corollary \ref{cor: realizable open}), there is
an open $J$ containing $s_0$ such that $(\beta(s_1), \beta'(s_2))$ is
positive for every $s_1, s_2 \in J$, so Theorem \ref{thm: curves}
implies that $\beta(s)$ is of recurrence type for almost every $s \in
J$, a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: mahler curve}]
Let ${\bf{a}}(x)$ be as in \equ{eq: mahler curve} and let $\|\cdot \|_1$ be
the 1-norm on ${\mathbb{R}}^d$. Since unique ergodicity is unaffected by
dilations, it is
enough to verify the conditions of Theorem \ref{cor: ue on curves} for
the permutation $\sigma(i)=d+1-i$, a
decaying Federer measure $\mu$, and for
$$\beta(s) = \frac{{\bf{a}}(s)}{\|{\bf{a}}(s)\|_1} = \frac{1}{s+\cdots +s^d}
\left(s, s^2, \ldots, s^d\right).$$
For any connection $(i,j,m)$ the
set
$\left \{{\bf{a}} \in \Delta: (i,j,m) \in {\mathcal L}_{\bf{a}} \right\}$
is a proper affine subspace of ${\mathbb{R}}^d_+$ transversal to $\{(x_1,
\ldots, x_d): \sum x_i=1\}$, and since $\beta(s)$ is
analytic and not contained in any such affine subspace, the set
$\{s \in I : \beta(s) \mathrm{\ has \ connections}\}$
is countable, so $\beta(s)$ is without connections for
$\mu$-a.e. $s$.
Letting $R=R(s) = s+\cdots+s^d$, we have
$$\beta'(s) = \frac{1}{R^2} \left(\gamma_1(s), \ldots, \gamma_d(s)
\right), \ \ \ \mathrm{where} \ \ \gamma_i(s) = \sum_{\ell = i}^{i+d-1}
\left(2i-\ell-1 \right) s^{\ell}.
$$
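As a quick check, the formula for $\gamma_i$ follows from the quotient rule: writing $\beta_i(s) = s^i/R(s)$,
$$
R^2\beta_i'(s) \;=\; i s^{i-1} R(s) - s^i R'(s)
\;=\; \sum_{k=1}^d (i-k)\, s^{i+k-1}
\;=\; \sum_{\ell=i}^{i+d-1} (2i-\ell-1)\, s^\ell,
$$
where the last step is the substitution $\ell = i+k-1$.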
Then for
$j=1, \ldots, d-1$ and $\ell =1, \ldots, j+d-1$, setting $a= \max\{1,
\ell+1-d\},\, b=\min\{j, \ell\}$ we find:
\[
\begin{split}
R^2y_j & = \sum_{i=1}^j \gamma_i(s) =\sum_{\ell=1}^{j+d-1} \left(\sum_{i=a}^b (2i-\ell-1) \right)
s^\ell \\
& = \sum_{\ell =1}^{j+d-1}\left[ b(b+1)-a(a-1) -
(b-a+1)(\ell+1)\right] s^{\ell}.
\end{split}
\]
Considering separately the three cases $1 \leq \ell \leq j,\ j < \ell \leq
d, \ d<\ell$, one sees that in every
case $y_j < 0$. Our choice of $\sigma$ ensures that for $j = 1,
\ldots, d-1$,
$y'_j = -y_{d-j}>0 $. This implies via \equ{eq: defn L} that $L<0$ on
$I$, thus
for all $s$ for which $\beta(s)$ is without
connections, $\left(\beta(s), -\beta'(s)\right)$ is positive. Define
$\hat{\beta}(s)=\beta(-s)$, so that $\left(\hat{\beta}(s), \hat{\beta}'(s)\right)
= \left(\beta(-s), -\beta'(-s)\right)$ is positive for a.e. $s<0$. Thus Theorem
\ref{cor: ue on curves} applies to $\hat{\beta}$, proving the claim.
\end{proof}
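For the reader's convenience, here is a sketch of the three-case sign analysis in the proof above (with $a, b$ as defined there). Since
$$
\sum_{i=a}^{b}(2i-\ell-1) \;=\; (b-a+1)(a+b) - (b-a+1)(\ell+1) \;=\; (b-a+1)(a+b-\ell-1),
$$
the coefficient of $s^\ell$ in $R^2 y_j$ equals
\begin{itemize}
\item $\ell \cdot (1+\ell-\ell-1) = 0$ when $1 \leq \ell \leq j$ (here $a=1$, $b=\ell$);
\item $j(j-\ell) < 0$ when $j < \ell \leq d$ (here $a=1$, $b=j$);
\item $(j+d-\ell)(j-d) < 0$ when $d < \ell \leq j+d-1$ (here $a=\ell+1-d$, $b=j$).
\end{itemize}
Since the coefficients with $\ell > j$ are strictly negative, indeed $y_j < 0$ for all $s > 0$.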
\begin{proof}[Proof of Theorem \ref{thm: curves}]
Let ${\mathcal{H}}$ be the stratum corresponding to $\sigma$. For a
$(C, \alpha)$-decaying and $D$-Federer measure $\mu$, let $C', \rho_0, \lambda$ be the
constants as in Theorem \ref{thm: nondivergence, general}. Choose
$\varepsilon>0$ small enough so that
\eq{eq: choice of epsilon}{
B = C' \left(\frac{\varepsilon}{\rho_0}
\right)^{\lambda \alpha} <1,}
and let $K=K_{\varepsilon}.$
By making $J$ smaller if necessary, we can assume that for all $s_1,
s_2 \in J$, there is a translation surface $q(s_1,s_2) = \pi\circ {\mathbf q}(\beta(s_1),
\beta'(s_2))$ corresponding to the positive pair $(\beta(s_1),
\beta'(s_2))$ via Theorem \ref{thm: sullivan2}. That is,
$Q=\{q(s_1, s_2): s_i \in J\}$ is a bounded subset of ${\mathcal{H}}$ and
$q(s_1, s_2)$ depends continuously on $s_1, s_2$. By appealing to
Corollary \ref{cor: same gamma}, we can also assume that there is a
fixed curve $\gamma$ so that $\mathcal{T}_\sigma(\beta(s_1)) = \mathcal{T}(q(s_1,
s_2), \gamma),$ for $s_1, s_2 \in J$. Define $q(s) = q(s,s)$.
By rescaling we may assume with no loss of
generality that $q(s) \in {\mathcal{H}}_1$ for all $s$, and by
making $K$ larger let us assume that $Q \subset K.$
By continuity,
the return times to $\gamma$ along vertical leaves for
$q(s_1, s_2)$
are uniformly bounded from above and below, and the length of $\gamma$
with respect to the flat structure given by $q(s_1, s_2)$ is uniformly
bounded.
We claim that there is $C_1$,
depending only on $Q$, such that for any interval $J_0 \subset {\mathbb{R}}$
with $0 \in J_0$,
any $t>0$, any $q \in Q$, any ${\mathbf q} \in
\pi^{-1}(q)$ and any $\delta \in
{\mathcal L}_{{\mathbf q}}$, we have
$$\sup_{s \in J_0} \ell(\delta, g_t h_{s} {\mathbf q}) \geq C_1 |J_0| e^{t/2}.$$
Here $|J_0|$ is the length of $J_0$.
Indeed, let
$\theta = \inf\{\phi(q): q \in Q\},
$
which is a positive number since $Q$ is bounded.
Let $C_1 = \min\left\{\frac{\theta}{|J_0|},
\frac{\theta}{2} \right\}$, and let $q \in Q, \, {\mathbf q} \in \pi^{-1}(q),
\, \delta \in {\mathcal L}_{{\mathbf q}}$. Then
$\max \left
\{|x(\delta, {\mathbf q})|, |y(\delta, {\mathbf q})| \right\} \geq \theta.$
Suppose
first that $|x(\delta, {\mathbf q} )| \geq \theta$, then
$$\sup_{s \in J_0}\ell(\delta, g_th_s{\mathbf q}) \geq |x(\delta, g_t{\mathbf q})| = e^{t/2}
|x(\delta, {\mathbf q})| \geq \theta e^{t/2} \geq C_1 |J_0| e^{t/2}.$$
Now if $|y(\delta, {\mathbf q})| \geq \theta$ then the function
$$s \mapsto x(\delta, g_t h_s{\mathbf q}) = e^{t/2}\left(x(\delta,
{\mathbf q})+sy(\delta, {\mathbf q}) \right)$$
has slope $|e^{t/2}y(\delta, {\mathbf q})| \geq e^{t/2} \theta$, hence
$$\sup_{s \in J_0} \ell(\delta, g_th_s{\mathbf q}) \geq \sup_{s \in J_0}
|x(\delta, g_t h_s {\mathbf q})| \geq e^{t/2}
\theta |J_0|/2 \geq C_1 |J_0| e^{t/2}.$$
This proves the claim.
\medskip
For each $s_0, s \in J$ let ${\bf{a}}^{(s_0)}(s) =
\beta(s_0)+\beta'(s_0)(s-s_0)$ be the linear approximation to $\beta$
at $s_0$.
Using the fact that $\beta$ is a $C^2$-map, there is $\til C$ such
that
\eq{eq: small distance}{\max_{s \in J_0} \| \beta(s) - {\bf{a}}^{(s_0)}(s)
\|
< \til C | J_0|^2
}
whenever $J_0 \subset
J$ is a subinterval
centered at $s_0$.
Let $\kappa$ and $c_2$ be as in Proposition \ref{lem: relating
lengths and disconts}, let $\varepsilon$ be as chosen in \equ{eq: choice of
epsilon}, let
$$
C_2 = \frac{\rho_0}{2C_1}, \ \ \
\zeta_1 <\left(\frac{\varepsilon}{\kappa} \right)^2 \ \ \ \mathrm{and} \ \
\zeta = \zeta_1 \cdot \frac{c_2}{d^2\til C}.
$$
If the theorem is false then $\mu(A)>0$, where
$$A = \{s \in J: \limsup_{n \to \infty} n\varepsilon_n(\beta(s)) < \zeta \}.$$
Moreover there is $N$ and
$A_0 \subset A$ such that $\mu(A_0)>0$ and
\eq{eq: uniformly small}{
n \geq N, \, s \in A_0 \ \implies \ n\varepsilon_n(\beta(s)) < \zeta.
}
Using
\cite[Cor. 2.14]{Mattila} let $s_0 \in A_0$ be a density point, so that
for any sufficiently small interval $J_0$ centered at $s_0$ we have
\eq{eq: density argument}{
\mu(A_0 \cap J_0) >B \mu(J_0),
}
where $B$ is as in \equ{eq: choice of epsilon}.
For $t>0$ we will write
$$c(t) = C_2 e^{-t/2} \ \ \ \mathrm{and}\ \ \ J_{t} =
B(s_0,c(t)).$$
Let ${\bf{a}}(s) = {\bf{a}}^{(s_0)}(s)$ and let $\til q(s)=h_{s-s_0}q(s_0)$, which is the surface
$\pi \circ {\mathbf q} ({\bf{a}}(s), \beta'(s_0))$.
The trajectory $s
\mapsto g_t \til q(s) = g_t h_{s-s_0}q(s_0) = h_{e^ts} g_th_{-s_0}
q(s_0)$ is a horocycle path, which, by the claim and the choice of
$C_2$, satisfies \equ{eq: condn
on J} with $\rho = \rho_0$ and $J=J_t$ for all $t>0$. Therefore
\eq{eq: measure estimate 2}{
\mu \left\{s \in J_t : g_t \til q(s) \notin K \right \} \leq B \mu (J_t).
}
Now for a large $t>0$ to be specified below, let
\eq{eq: defn n2}{
n_1 =
\sqrt{\frac{\zeta_1}{2c_2}}e^{t/2} \ \ \ \mathrm{and} \ \ n_2 = \frac{2c_2}{d^2
\til C} n_1.
}
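Note, for use in the computation at the end of the proof, that by \equ{eq: defn n2} and the definition of $\zeta$,
$$
\zeta_1 \,\frac{n_2}{2n_1} \;=\; \zeta_1\,\frac{c_2}{d^2 \til C} \;=\; \zeta.
$$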
By making $\til C$ larger we can assume that $n_2 < n_1$.
For large enough $t$ we will have $n_2 > n_0$ (as in Proposition
\ref{lem: relating lengths and disconts}) and $n_2 > N$ (as in
\equ{eq: uniformly small}).
We now claim
\eq{eq: remains to prove}{
s \in J_t, \ n_2 \varepsilon_{n_2} (\beta(s))
< \zeta \ \implies \ n_1\varepsilon_{n_1}({\bf{a}}(s)) < \zeta_1.
}
Assuming this, note that by \equ{eq:
first one}, \equ{eq: defn n2} and the choice of $\zeta_1$, if $n_1
\varepsilon_{n_1}({\bf{a}}(s)) < \zeta_1$ then $g_t \til q(s)
\notin K$. Combining \equ{eq: uniformly small} and \equ{eq: remains to
prove} we see that $A_0 \cap J_t \subset \{s \in J_t: g_t \til q(s)
\notin K\}$ for all large enough $t$. Combining this with \equ{eq:
density argument} we find a contradiction to \equ{eq: measure estimate
2}.
\medskip
It remains to prove \equ{eq: remains to prove}. Let $r \leq n_2$,
let $\mathcal{T}= \mathcal{T}_\sigma({\bf{a}}(s)), \ {\mathcal {S}}= \mathcal{T}_\sigma(\beta(s))$, let $x_i,
x_j$ be discontinuities of $\mathcal{T}$ and let $x'_i, x'_j$ be the
corresponding discontinuities of ${\mathcal {S}}$. By choice of $\til C$ we have
$$\| \beta(s) - {\bf{a}}(s)\| < \til C e^{-t},
$$
where $\| \cdot \|$ is the max norm on ${\mathbb{R}}^d$.
Suppose first
that $x_i$ and $x'_i$ have the same itinerary
under $\mathcal{T}$ and ${\mathcal {S}}$ until the $r$th iteration; i.e. $\mathcal{T}^kx_i$ is
in the $\ell$th interval of continuity of $\mathcal{T}$ if and only if ${\mathcal {S}}^k
x'_i$ is in the $\ell$th interval of continuity of ${\mathcal {S}}$ for $k \leq
r$. Then one sees from \equ{eq: disc points2} and \equ{eq: defn iet} that
$$|\mathcal{T}^r x_i - {\mathcal {S}}^r x'_i| \leq \sum_0^r d^2 \| \beta(s) - {\bf{a}}(s)\|.$$
Therefore, using \equ{eq: defn n2},
\[
\begin{split}
\left| x_i - \mathcal{T}^rx_j - (x'_i - {\mathcal {S}}^r x'_j) \right | & \leq
\left| x_i - x'_i \right | + \left| \mathcal{T}^rx_j - {\mathcal {S}}^r x'_j \right | \\
& \leq 2 \sum_1^r d^2 \| \beta(s) - {\bf{a}}(s) \| \\
& \leq 2 d^2 n_2 \til C e^{-t} = \frac{\zeta_1}{2n_1}.
\end{split}
\]
If $n_1\varepsilon_{n_1}({\bf{a}}(s)) \geq \zeta_1$ then by the triangle inequality
we find
that $|x'_i - {\mathcal {S}}^rx'_j| \geq \zeta_1/2n_1>0$. In particular this shows
that for all $i$, $x_i$ and $x'_i$ do have the same itinerary under
$\mathcal{T}$ and ${\mathcal {S}}$, and moreover
\[
\begin{split}
\varepsilon_{n_2}( \beta(s)) & \geq \varepsilon_{n_2}({\bf{a}}(s)) - \frac{\zeta_1}{2n_1} \\
& \geq \varepsilon_{n_1}({\bf{a}}(s)) - \frac{\zeta_1}{2n_1} \\
& > \frac{\zeta_1}{2n_1} = \zeta_1 \frac{n_2}{2n_1} \cdot \frac{1}{n_2} = \frac{\zeta}{n_2},
\end{split}
\]
as required.
\end{proof}
\ignore{
\section{The fiber of the developing map}\name{section: fiber singleton}
It is
well-known that the map ${\mathrm {dev}}: \til {\mathcal{H}} \to H^1(S, \Sigma;
{\mathbb{R}}^2)$ is not injective. For example, precomposing with a
homeomorphism which acts trivially on homology may change a marked
translation surface structure but does not change its image in $H^1(S, \Sigma;
{\mathbb{R}}^2)$; see \cite{McMullen American J} for more examples. However, we have
the following useful statement:
\begin{thm}
\name{thm: fiber singleton}
Let $\til {\mathcal{H}}$ be a stratum of marked translation surfaces of type
$(S, \Sigma)$. Fix a singular foliation ${\mathcal{F}}$ on $(S, \Sigma)$, and let $\mathbf{b} \in
H^1(S, \Sigma; {\mathbb{R}})$. Then there is at most one ${\mathbf q} \in \til {\mathcal{H}}$ with
horizontal foliation ${\mathcal{F}}$, and vertical foliation representing $\mathbf{b}$.
\end{thm}
\begin{proof}
Suppose that ${\mathbf q}_1$ and ${\mathbf q}_2$ are two marked translation surfaces,
such that the horizontal measured foliation of both is ${\mathcal{F}}$, and the
vertical measured foliations ${\mathcal{G}}_1, {\mathcal{G}}_2$ both represent $\mathbf{b}$. We
need to show that ${\mathbf q}_1={\mathbf q}_2$. Let $\gamma$ be a special transverse
system to ${\mathcal{F}}$; recall that it has one cylinder edge going across
each cylinder of periodic leaves for ${\mathcal{F}}$, and its other segments
start at singularities and go into minimal components of ${\mathcal{F}}$ without
crossing saddle connections in ${\mathcal{F}}$. Make $\gamma$ smaller by
deleting segments, so it has only one segment going into each minimal
component. Since ${\mathcal{F}}$ and ${\mathcal{G}}_1$ are
transverse, we may take $\gamma$ to be contained in leaves of
${\mathcal{G}}_1$.
We claim that, after making $\gamma$ smaller if necessary, we may
precompose ${\mathbf q}_2$ with an isotopy rel
$\Sigma$, in order to assume that $\gamma$ is also contained in leaves
of ${\mathcal{G}}_2$, i.e. that ${\mathcal{G}}_1$ and ${\mathcal{G}}_2$ coincide along
$\gamma$. Indeed, if $\alpha$ is a sufficiently short arc in $\gamma$
going from a singularity $x$ into a minimal set $M$, we may isotope
along ${\mathcal{F}}$ in $M$
until the leaf of ${\mathcal{G}}_2$ starting at $x$ reaches $\alpha$. Since $M$
contains no other segments of $\gamma$ we may perform this operation
independently on each minimal component. Now suppose $\alpha$ is a
cylinder edge in $\gamma$, so that $\alpha$ connects two
singularities on opposite sides of a cylinder $P$. In particular
$\int_\alpha {\mathcal{G}}_1 = \mathbf{b}(\alpha) = \int_\alpha {\mathcal{G}}_2$, so we may
perform an isotopy inside $P$ until $\alpha$ is also contained in a
leaf of ${\mathcal{G}}_2$, as claimed.
Now consider the cell decomposition $\mathcal{B}=\mathcal{B}(\gamma)$, and let $\beta_i =
[{\mathcal{G}}_i]$ be the 1-cocycle on $\mathcal{B}$ obtained by integrating
${\mathcal{G}}_i$. For a transverse edge $e$ we have $\beta_1(e)=\beta_2(e)=0$
since $e$ is a leaf for both foliations. If $e$ is contained in a leaf
of ${\mathcal{F}}$, we may take transverse edges $d$ and $f$ so that the path
$\delta$ starting along $d$ and continuing with $e$ and then $f$
connects two singularities
($d$ or $f$ may be absent if $e$ begins or ends in $\Sigma$).
That is, $\delta$ represents an
element of $H_1(S, \Sigma). $ Then, since
$\beta_i(d)=\beta_i(f)=0$ we have
$\beta_i(e) = \beta_i(\delta) = \mathbf{b}(\delta).$
This implies that $\beta_1 = \beta_2$ on $\mathcal{B}$. Recall that
${\mathbf q}_i$ may be obtained explicitly by gluing together the rectangles
of $\mathcal{B}$, where the geometry of the rectangles is given by ${\mathcal{F}}$ and
$\beta_i$. Therefore ${\mathbf q}_1 = {\mathbf q}_2$.
\end{proof}
}
\section{Real REL}
This section contains our results concerning the real REL
foliation, whose definition we now recall. Let ${\mathbf q}$ be a
marked translation surface of type ${\mathbf{r}}$, where $k$ (the number of
singularities) is at least 2. Recall that ${\mathbf q}$ determines a
cohomology class ${\mathrm{hol}}({\mathbf q})$ in $H^1(S, \Sigma ; {\mathbb{R}}^2)$, where for a relative
cycle $\gamma \in H_1(S, \Sigma),$ the value ${\mathrm{hol}}({\mathbf q})$ takes on
$\gamma$ is ${\mathrm{hol}}(\gamma, {\mathbf q})$. Also recall that there are open sets
$\til {\mathcal{H}}_\tau$ in $\til {\mathcal{H}}({\mathbf{r}})$, corresponding to a given triangulation of
$(S, \Sigma)$, such that the map hol restricted to $\til {\mathcal{H}}_\tau$ endows
$\til{\mathcal{H}}$ with a linear manifold structure. Now recall the map Res as in
\equ{eq: defn Res}, let $V_1$ be the first summand in the splitting \equ{eq:
splitting}, and let $W = V_1 \cap \ker \,\operatorname{Res}, $ so that $\dim W =
k-1,$ where $k = |\Sigma|$. The REL foliation is modeled on $\ker \,
\operatorname{Res}$, the real foliation is modeled on $V_1$, and the real REL
foliation is modeled on $W$.
That is, a ball $\mathcal{U} \subset \til {\mathcal{H}}_\tau$ provides a product
neighborhood for these foliations, where ${\mathbf q}, {\mathbf q}_1 \in \mathcal{U}$
belong to the same plaque for REL, real, or real REL, if
${\mathrm{hol}}({\mathbf q})-{\mathrm{hol}}({\mathbf q}_1)$ belongs respectively to $\ker \operatorname{Res}, V_1,$ or
$W$.
Recall that ${\mathcal{H}}$ is an orbifold and the orbifold cover $\pi: \til {\mathcal{H}}
\to {\mathcal{H}}$ is defined by taking a quotient by the $\Mod(S,
\Sigma)$-action.
Since hol is equivariant with respect to the action of the group $\Mod(S,
\Sigma)$ on $\til {\mathcal{H}}$ and $H^1(S, \Sigma; {\mathbb{R}}^2)$, and (by naturality
of the splitting \equ{eq:
splitting} and the sequence \equ{eq: defn Res}) the subspaces $V_1, W,$
and $\ker \, \operatorname{Res}$ are $\Mod(S, \Sigma)$-invariant, the foliations
defined by these subspaces on $\til {\mathcal{H}}$ descend naturally to ${\mathcal{H}}$.
More precisely, leaves in $\til {\mathcal{H}}$ descend to `orbifold leaves' on
${\mathcal{H}}$, i.e. leaves in $\til {\mathcal{H}}$ map to immersed sub-orbifolds in
${\mathcal{H}}$. In order to avoid dealing with orbifold foliations, we
pass to a finite cover $\hat{{\mathcal{H}}}
\to {\mathcal{H}}$ which is a manifold, as explained in \S\ref{subsection: strata}.
\begin{prop}
\name{prop: independent of marking}
The REL and real REL
leaves on both $\til {\mathcal{H}}$ and $\hat {\mathcal{H}}$ have a well-defined translation structure.
\end{prop}
\begin{proof}
A translation structure on a leaf $\mathcal{L}$
amounts to saying that there is a fixed vector space $V$ and an
atlas of charts on $\mathcal{L}$ taking values in $V$, so that transition
maps are translations. We take $V = \ker \, \operatorname{Res}$ for the REL leaves,
and $V=W$ for the real REL leaves. For each
${\mathbf q}_0 \in \til {\mathcal{H}}$, the atlas is obtained by taking
the chart ${\mathbf q} \mapsto {\mathrm{hol}}({\mathbf q})-{\mathrm{hol}}({\mathbf q}_0)$. By the definition of
the corresponding foliations, these are homeomorphisms in a
sufficiently small neighborhood of ${\mathbf q}_0$, and the fact that
transition maps are translations is immediate. In order to check that
this descends to $\hat {\mathcal{H}}$, let $\Lambda \subset \Mod (S, \Sigma)$ be
the finite-index torsion free subgroup so that $\hat {\mathcal{H}} = \til {\mathcal{H}}
/\Lambda$, and let $\varphi \in \Lambda$.
We need to show that if $\hat{{\mathbf q}} = {\mathbf q} \circ \varphi, \ \hat{{\mathbf q}}_0 =
{\mathbf q}_0 \circ \varphi,$ then ${\mathrm{hol}}({\mathbf q})- {\mathrm{hol}}({\mathbf q}_0) = {\mathrm{hol}}(\hat{{\mathbf q}}) -
{\mathrm{hol}}(\hat{{\mathbf q}}_0)$. Since ${\mathrm{hol}}$ is $\Mod(S, \Sigma)$-equivariant,
this amounts to checking that $\varphi$ acts trivially on $\ker \,
\operatorname{Res}$.
Invoking \equ{eq: defn Res}, this follows from our convention that any $\varphi \in \Mod(S,
\Sigma)$ fixes each point of $\Sigma$, so acts trivially on
$H^0(\Sigma)$.
\end{proof}
It is an interesting question to understand the geometry of individual
leaves. For the REL foliation this is a challenging problem, but for
the real REL foliation, our main theorems give a complete answer.
Given a
marked translation surface ${\mathbf q}$ we say that a saddle connection $\delta$ for
${\mathbf q}$ is {\em horizontal} if $y({\mathbf q}, \delta)=0$. Note that for a generic
${\mathbf q}$ there are no horizontal saddle connections, and in any stratum
there is a uniform upper bound on their number.
\begin{thm}\name{thm: real rel main}
Let ${\mathbf q} \in \til {\mathcal{H}}$ and let $\mathcal{V} \subset W$ be a subset such that for
any $\mathbf{c} \in \mathcal{V}$ there is a
path $\left\{\mathbf{c}_t\right\}_{t \in [0,1]}$ in $\mathcal{V}$ from 0 to $\mathbf{c}$
such that for any horizontal saddle connection $\delta$
for ${\mathbf q}$ and any $t \in [0,1]$, ${\mathrm{hol}}({\mathbf q}, \delta) + \mathbf{c}_t (\delta)
\neq 0$.
Then there is a continuous map $\psi: \mathcal{V} \to \til {\mathcal{H}}$ such
that for any $\mathbf{c} \in \mathcal{V}$,
\eq{eq: desired}{{\mathrm{hol}}(\psi(\mathbf{c})) = {\mathrm{hol}}({\mathbf q}) + (\mathbf{c},0),}
and the horizontal foliation of $\psi(\mathbf{c})$ is the same as that of
${\mathbf q}$. Moreover the image of $\psi$ is contained in the real REL leaf of
${\mathbf q}$.
\end{thm}
\begin{proof}
Let ${\mathcal{F}}$ and ${\mathcal{G}}$ be the vertical and horizontal foliations of
${\mathbf q}$ respectively. We will apply Theorem \ref{thm: hol homeo}, reversing the roles
of the horizontal and vertical foliations of ${\mathbf q}$; that is we use
${\mathcal{G}}$ in place of ${\mathcal{F}}$. To do this, we will check that the map which
sends $\mathbf{c} \in \mathcal{V}$ to
$(x({\mathbf q})+\mathbf{c}, y({\mathbf q})) \in H^1(S, \Sigma)^2
$
has its image in
$\mathbb{B}({\mathcal{G}}) \times \mathbb{A}({\mathcal{G}})$ (notation as in Theorem \ref{thm: hol
homeo}), and thus
$$\psi(\mathbf{c}) = {\mathrm{hol}}^{-1}(x({\mathbf q})+\mathbf{c}, y({\mathbf q}))$$
is continuous and satisfies \equ{eq: desired}.
Clearly $y({\mathbf q})$, the cohomology class represented by ${\mathcal{G}}$, is in
$\mathbb{A}({\mathcal{G}})$. To check that $x({\mathbf q})+\mathbf{c} \in \mathbb{B}({\mathcal{G}})$, we need to show
that for any element $\delta \in H^{\mathcal{G}}_+$, $x({\mathbf q}, \delta) +
\mathbf{c}(\delta)>0$. To see this, we treat separately the cases when
$\delta$ is a horizontal saddle connection, and when
$\delta$ is represented by a foliation cycle, i.e. an
element of $H_1(S, \Sigma)$ which is in the image of $H_1(S)$ (and
belongs to the convex cone spanned by the asymptotic cycles).
If $\delta$ is a foliation cycle, since
$\mathbf{c} \in W \subset \ker \, \mathrm{Res}$, the easy direction in
Theorem \ref{thm: sullivan1} implies
$$
x({\mathbf q}, \delta) + \mathbf{c}(\delta) = x({\mathbf q}, \delta)>0.
$$
If $\delta$ is represented by a saddle connection, let $\{\mathbf{c}_t\}$ be
a path from 0 to $\mathbf{c}$ in $\mathcal{V}$. Again, by the easy direction
in Theorem \ref{thm: sullivan1}, $x({\mathbf q}, \delta)>0$, and the function
$x({\mathbf q}, \delta)+\mathbf{c}_t(\delta)$ is a continuous function of $t$, which
does not vanish by hypothesis. This implies again that $x({\mathbf q}, \delta)
+\mathbf{c}(\delta)>0$.
To check that $\psi(\mathbf{c})$ is contained in the real
REL leaf of ${\mathbf q}$, it suffices to note that according to \equ{eq:
desired}, in any local chart provided
by hol, $\psi(\mathbf{c}_t)$ moves along plaques of the foliation.
\end{proof}
Since the leaves are modelled on $W$, taking $\mathcal{V}=W$ we obtain:
\begin{cor}
If ${\mathbf q}$ has no horizontal saddle connections, then $\psi: W \to \til {\mathcal{H}}$
is a homeomorphism onto the real REL leaf of ${\mathbf q}$.
\end{cor}
Moreover the above maps are compatible with the transverse
structure for the real REL foliation. Namely, let $\psi_{\mathbf q}$ be the
map in Theorem \ref{thm: real rel main}. We have:
\begin{cor}\name{cor: foliation compatible}
For any ${\mathbf q}_0 \in \til {\mathcal{H}}$ there is a neighborhood
$\mathcal{V}$ of 0 in $W$, and a submanifold $\mathcal{U}' \subset
\til {\mathcal{H}}$ everywhere transverse to real-REL leaves and containing
${\mathbf q}_0$ such that:
\begin{enumerate}
\item
For any ${\mathbf q} \in \mathcal{U}'$, $\psi_{{\mathbf q}}$ is defined on $\mathcal{V}$.
\item
The map $\mathcal{U}' \times \mathcal{V} \to \til {\mathcal{H}}$ defined by $({\mathbf q}, \mathbf{c})$
\mapsto \psi_{{\mathbf q}}(\mathbf{c})$ is a homeomorphism onto its image, which is
a neighborhood $\mathcal{U}$ of ${\mathbf q}_0$.
\item
Each plaque of the real REL foliation in $\mathcal{U}$ is of the form
$\psi_{{\mathbf q}} (\mathcal{V})$.
\end{enumerate}
\end{cor}
\begin{proof}
Let $\mathcal{U}_0$ be a bounded neighborhood of ${\mathbf q}_0$ on which hol is
a local homeomorphism and let $\mathcal{U}' \subset \mathcal{U}_0$ be
any submanifold everywhere transverse to the real REL leaves and of
complementary dimension. For example we can take $\mathcal{U}'$ to be
the pre-image under hol of a small ball around ${\mathrm{hol}}({\mathbf q}_0)$ in an
affine subspace of $H^1(S, \Sigma; {\mathbb{R}}^2)$ which is complementary to
$W$.
Since $\mathcal{U}_0$ is bounded there is a
uniform lower bound $r$ on the lengths of saddle connections for ${\mathbf q} \in
\mathcal{U}_0$. If we let $\mathcal{V}_0$ be the ball of radius $r/2$
around 0 in
$W$, then the conditions of Theorem \ref{thm: real rel main} are
satisfied for $\mathcal{V}_0$ and any ${\mathbf q} \in
\mathcal{U}'$. Thus (1) holds for any $\mathcal{V} \subset
\mathcal{V}_0$. Taking for $\mathcal{V}$ a small ball around 0 we can assume that
$\psi_{{\mathbf q}}(\mathcal{V}) \subset \mathcal{U}_0$ for any
${\mathbf q}$. From \equ{eq: desired} and the choice of $\mathcal{U}'$ it
follows that the image of $\{\psi_{{\mathbf q}}(\mathbf{c}): {\mathbf q} \in \mathcal{U}', \mathbf{c} \in
\mathcal{V}\}$ under ${\mathrm{hol}}$ is a neighborhood of ${\mathrm{hol}}({\mathbf q}_0)$ in $H^1(S, \Sigma;
{\mathbb{R}}^2)$. Assertions (2) and (3) now follow from the fact that
${\mathrm{hol}}|_{\mathcal{U}_0}$ is a homeomorphism.
\end{proof}
\ignore{
Suppose $\mathcal{K} \times \mathcal{V} \subset
\mathcal{U} \times \mathcal{V}$ is compact. Since ${\mathrm{hol}}: \til {\mathcal{H}} \to
H^1(S, \Sigma; {\mathbb{R}}^2)$ is continuous, the image ${\mathrm{hol}}(\mathcal{K})$ is
compact, so there are triangulations
$\tau_1, \ldots, \tau_k$ of $(S, \Sigma)$ such that
$${\mathrm{hol}}(\mathcal{U}_0 \times \mathcal{V}_0) \subset \bigcup_1^k {\mathrm{hol}}
(\til {\mathcal{H}}_{\tau_i}).$$
Since ${\mathrm{hol}}|_{\til {\mathcal{H}}_{\tau_i}}$ is a homeomorphism for each $i$, we
can choose
${\mathbf q}_1$ is in the leaf of ${\mathbf q}$. For this, we
repeat the above proof to construct, for any $s \in [0,1]$, a
translation surface ${\mathbf q}(s)$ such that ${\mathrm{hol}}({\mathbf q}(s))-{\mathrm{hol}}({\mathbf q})=s\mathbf{c}$,
and such that the horizontal foliation of each ${\mathbf q}(s)$ is ${\mathcal{G}}$. For
each $s_0$ there is a neighborhood of ${\mathbf q}(s_0)$ in $\til {\mathcal{H}}$ in
which the real REL leaf is well-defined, and in particular there is
an open interval $I_{s_0}$ containing $s_0$ so that for $s \in [0,1] \cap
I_{s_0}$, there is ${\mathbf q}'(s)$ in the same leaf as ${\mathbf q}(s_0)$ such
that ${\mathrm{hol}}({\mathbf q}'(s)) - {\mathrm{hol}}({\mathbf q}(s_0)) = (s-s_0)\mathbf{c}$. By Lemma \ref{lem: fiber
singleton}, we must
have ${\mathbf q}(s) = {\mathbf q}'(s)$. Taking a finite subcover from the cover
$\{I_s\}$ of $[0,1]$ we see
that along the entire path, the surfaces ${\mathbf q}(s)$ belong to the same
real REL leaf, as required.
}
\begin{proof}[Proof of Theorem \ref{thm: real rel action}]
Let $\hat{{\mathcal{H}}}, \, \til {\mathcal{H}}$ be as above, let $\pi: \til {\mathcal{H}} \to \hat{{\mathcal{H}}}$ be the
projection, let $\til Q$ denote the subset of translation surfaces in $\til{{\mathcal{H}}}$
without horizontal saddle connections, and let $Q =
\pi(\til Q)$. Note that $Q$ and $\til Q$ are $B$-invariant, where $B$
is the subgroup of upper triangular matrices in $G$, acting on $\hat{{\mathcal{H}}}, \,
\til {\mathcal{H}}$ in the usual way. We will extend the $B$-action to an action
of $F = B \ltimes W$.
The action of $W$ is defined as follows. For each ${\mathbf q} \in \til Q$,
the conditions of Theorem
\ref{thm: real rel main} are vacuously satisfied for $\mathcal{V} =
W$, and we define $\mathbf{c} {\mathbf q} =
\psi_{{\mathbf q}}(\mathbf{c})$.
We first prove the group action law
\eq{eq: group law}{
(\mathbf{c}_1+\mathbf{c}_2){\mathbf q} = \mathbf{c}_1(\mathbf{c}_2
{\mathbf q})
} for the action of $W$. This follows from \equ{eq: desired}, associativity of
addition in $H^1(S, \Sigma; {\mathbb{R}}^2)$, and the uniqueness in Lemma
\ref{lem: fiber singleton}.
Thus
we have defined an
action of $W$ on $\til Q$, and by Proposition \ref{prop: independent of
marking} this descends to a well-defined action on $Q$.
To see that the action map $W \times
\til Q \to \til Q
$
is continuous,
we take $w_n \to w$ in $W$, ${\mathbf q}^{(n)} \to
{\mathbf q}$ in $\til Q$, and need to show that
\eq{eq: induction}{
w_n {\mathbf q}^{(n)} \to w{\mathbf q}.
}
Corollary \ref{cor:
foliation compatible} implies that for any $t$ there are
neighborhoods $\mathcal{U}$ of $(tw){\mathbf q}$ and $\mathcal{V}$ of 0 in $W$
such that the map
$$
\mathcal{U} \times \mathcal{V} \to \til {\mathcal{H}}, \ \ ({\mathbf q}, \mathbf{c}) \mapsto
\mathbf{c} {\mathbf q}
$$
is continuous. By compactness we can find
$\mathcal{U}_i, i=1, \ldots, k$, a partition $0 = t_0 < \cdots <
t_k = 1$, and a fixed open $\mathcal{V} \subset W$ such that
\begin{itemize}
\item
$\{(tw){\mathbf q} : t \in [t_{i-1}, t_i]\} \subset \mathcal{U}_i$.
\item
$(t_i -t_{i-1})w \in \mathcal{V}$.
\end{itemize}
It now follows by induction on $i$ that
$t_i w_n {\mathbf q}^{(n)} \to (t_iw) {\mathbf q}$ for each $i$, and putting $i=k$ we
get \equ{eq: induction}.
The action is
affine and measure preserving since in the
local charts given by $H^1(S, \Sigma; {\mathbb{R}}^2)$, it is defined by vector
addition. Since the area of a surface can be computed in terms of its
absolute periods alone, this action preserves the subset of area-one
surfaces. A simple
calculation using \equ{eq: G action} shows that for ${\mathbf q} \in \til Q, \,
\mathbf{c} \in W$ and
$$g = \left(\begin{matrix} a & b \\ 0 & a^{-1} \end{matrix} \right) \in
B,
$$
we have
$g \mathbf{c} {\mathbf q} = (a \mathbf{c}) g {\mathbf q}$. That is, the actions of $B$ and $W$
respect the commutation relation defining $F$, so that we have defined
an $F$-action on $\til Q$. To check continuity of the action, let $f_n
\to f$ in $F$ and ${\mathbf q}_n \to {\mathbf q} \in \til Q$. Since $F$ is a
semi-direct product we can write $f_n = w_n b_n$, where $w_n \to w$
in $W$ and $b_n \to b$ in $B$. Since the $B$-action is
continuous, $b_n {\mathbf q}_n \to b{\mathbf q}$, and since (as verified above) the
$W$-action on $\til Q$ is continuous,
$$f_n {\mathbf q}_n = w_n (b_n {\mathbf q}_n) \to
w (b {\mathbf q}) = f{\mathbf q}.$$
\end{proof}
\section{Cones of good directions}
\name{section: REL}
Suppose $\sigma$ is an irreducible and admissible permutation, and let
\eq{eq: defn cone}{
{\mathcal{C}}_{{\bf{a}}} = \{\mathbf{b} \in {\mathbb{R}}^d: ({\bf{a}}, \mathbf{b}) \mathrm{\ is \ positive } \}
\subset T_{{\bf{a}}} {\mathbb{R}}^d_+.
}
As we showed in Corollary \ref{cor: same gamma}, this is the set of
good directions in the tangent space at ${\bf{a}}$,
i.e. the directions of lines which may be lifted to horocycles.
In this section we will relate the cones ${\mathcal{C}}_{{\bf{a}}}$
with the bilinear form $Q$ as in \equ{eq: defn Q1}, and show there are
`universally good' directions for $\sigma$, i.e. specify
certain $\mathbf{b}$ such that $\mathbf{b} \in {\mathcal{C}}_{\bf{a}}$ for
all ${\bf{a}}$ which are without connections. We will also find `universally
bad' directions, i.e. directions which do not belong to ${\mathcal{C}}_{{\bf{a}}}$ for
any ${\bf{a}}$; these will be seen to be related to real REL.
Set
${\mathcal{C}}_{{\bf{a}}}^+ = \{\mathbf{b} \in {\mathbb{R}}^d: \mathrm{\equ{eq: positivity} \ holds} \}$, so
that ${\mathcal{C}}_{{\bf{a}}} \subset {\mathcal{C}}_{{\bf{a}}}^+$ for all ${\bf{a}}$, and ${\mathcal{C}}_{{\bf{a}}}^+ = {\mathcal{C}}_{{\bf{a}}}$ when
${\bf{a}}$ has no connections. Now let
\eq{defn of C}{
{\mathcal{C}} = \left\{\mathbf{b} \in {\mathbb{R}}^d: \forall i, \, Q({\mathbf{e}}_i, \mathbf{b})>0 \right \}.
}
We have:
\begin{prop}
\name{prop: characterization of cone}
${\mathcal{C}}$ is a nonempty open convex cone, and
${\mathcal{C}} = \bigcap_{{\bf{a}}} {\mathcal{C}}_{{\bf{a}}}^+.$
\end{prop}
\begin{proof}
It is clear that ${\mathcal{C}}$ is open and convex. It follows from
\equ{eq: relation L Q} that
$${\mathcal{C}}
= \left \{ \mathbf{b} \in {\mathbb{R}}^d: \forall x \in
I, \, \forall {\bf{a}} \in {\mathbb{R}}^d_+, \, L_{{\bf{a}},\mathbf{b}}(x)>0 \right \}
.
$$
The irreducibility of
$\sigma$ implies that $\mathbf{b}_0=(b_1, \ldots, b_d)$ defined by
$b_i = \sigma(i) - i$ (as in \cite{Masur-Keane})
satisfies $y_i(\mathbf{b}_0) > 0 > y'_i(\mathbf{b}_0)$ for all $i$, so
by \equ{eq: defn L}, $L_{{\bf{a}}, \mathbf{b}_0}$
is everywhere positive irrespective of ${\bf{a}}$.
This shows that $\mathbf{b}_0$ belongs to ${\mathcal{C}}$, and moreover that ${\mathcal{C}}$ is
contained in ${\mathcal{C}}^+_{{\bf{a}}}$ for any ${\bf{a}}$.
For the inclusion $\bigcap_{{\bf{a}}} {\mathcal{C}}^+_{{\bf{a}}} \subset {\mathcal{C}}$,
suppose $\mathbf{b} \notin {\mathcal{C}}_{{\bf{a}}}^+$,
so that for some $\mu \in \MM_{{\bf{a}}}$ we have $\int L \, d\mu \leq 0.$
Writing ${\bf{a}}' = (a'_1, \ldots, a'_d)$, where $a'_j = \mu(I_j)$, we have
$$
Q({\bf{a}}', \mathbf{b}) =\sum a'_i Q({\mathbf{e}}_i, \mathbf{b}) =\int L \, d\mu \leq 0,$$
so that $\mathbf{b}
\notin {\mathcal{C}}.$
\end{proof}
Note that in the course of the proof of Theorem \ref{thm: mahler
curve}, we actually showed that
${\bf{a}}'(s) \in -{\mathcal{C}}$ for all $s>0$. Indeed, given a curve $\{\alpha(s)\}
\subset {\mathbb{R}}^d_+$, the easiest way to show that $\alpha(s)$ is uniquely
ergodic for a.e. $s$ is to show that $\alpha'(s) \in \pm {\mathcal{C}}$
for a.e. $s$.
\medskip
Let $\mathbf{R}$ denote the null-space of $Q$, that is
$$\mathbf{R} = \{\mathbf{b} \in {\mathbb{R}}^d : Q(\cdot, \mathbf{b}) \equiv 0\}
.$$
\begin{prop}
\name{prop: description REL}
$\mathbf{R} = {\mathbb{R}}^d \smallsetminus \bigcup_{{\bf{a}}} \pm {\mathcal{C}}_{{\bf{a}}}.$
\end{prop}
\begin{proof}
By \equ{eq: relation L Q}, $\mathbf{R} = \{\mathbf{b} \in {\mathbb{R}}^d: L_{{\bf{a}},\mathbf{b}} (x)
\equiv 0\},$ so that
containment $\subset$ is clear. Now suppose $\mathbf{b} \notin \mathbf{R}$, that
is $Q({\mathbf{e}}_i, \mathbf{b}) \neq 0$ for some $i$, and by continuity there is an open
subset $\mathcal{U}$ of ${\mathbb{R}}^d_+$ such that $Q({\bf{a}}, \mathbf{b}) \neq 0$ for ${\bf{a}}
\in \mathcal{U}$. Now taking ${\bf{a}} \in \mathcal{U}$ which is uniquely
ergodic and without connections we have $\mathbf{b} \in \pm {\mathcal{C}}_{{\bf{a}}}.$
\end{proof}
Consider the map $({\bf{a}}, \mathbf{b}) \mapsto {\mathbf q}({\bf{a}}, \mathbf{b})$
defined in Theorem \ref{thm: sullivan2}. It is easy to see that the
image of an open subset of ${\mathbb{R}}^d_+ \times \{\mathbf{b}\}$ is a plaque for the real foliation.
Additionally, recalling that $Q({\bf{a}}, \mathbf{b})$ records the intersection
pairing on $H_1(S, \Sigma) \times H_1(S \smallsetminus \Sigma)$, and that the
intersection pairing gives the duality $H_1(S\smallsetminus \Sigma) \cong H^1(S,
\Sigma)$, one finds that the image of $\mathbf{R} \times \{\mathbf{b}\}$ is a
plaque for the real REL foliation. That is,
Proposition \ref{prop: description REL} says that the tangent
directions in $T{\mathbb{R}}^d_+$ which can never be realized as horocycle
directions, are precisely the directions in the real REL leaf.
\combarak{Check that you agree. Along the way I had to use the
symmetry of $Q$ one more time, i.e. $\mathbf{R} = \{{\bf{a}}: Q({\bf{a}}, \cdot)
\equiv 0\}$ and I am not too happy about it. This is related to my
question at the end of \S2.4}
\ignore{
\section{Stable foliation for geodesics, horocycles, and real REL}
This section contains some observations regarding the objects we have
considered, and some open questions.
Recall that the action of $\{g_t\}$ on ${\mathcal{H}}$ is called the {\em
geodesic flow}. Dynamically, the real foliation defined via
\equ{eq: splitting} may be regarded as the unstable
foliation for this flow (see \cite{Veech geodesic flow}). This
foliation contains two sub-foliations
UNFINISHED
Let $\mathbf{R}$ denote the null-space of $Q$, that is
$$\mathbf{R} = \left\{\mathbf{b} \in {\mathbb{R}}^d : Q(\cdot, \mathbf{b}) \equiv 0 \right\}
.$$
\begin{prop}
\name{prop: description REL}
$\mathbf{R} = {\mathbb{R}}^d \smallsetminus \bigcup_{{\bf{a}}} \pm {\mathcal{C}}_{{\bf{a}}}.$
\end{prop}
\begin{proof}
We have $\mathbf{R} = \{\mathbf{b} \in {\mathbb{R}}^d: L_{{\bf{a}},\mathbf{b}} (x) \equiv 0\}$
(as before the condition for $L_{{\bf{a}}, \mathbf{b}} \equiv 0$ does not depend on
${\bf{a}}$), so that
containment $\subset$ is clear. Now suppose $\mathbf{b} \notin \mathbf{R}$, that
is $Q({\mathbf{e}}_i, \mathbf{b}) \neq 0$ for some $i$, and by continuity there is an open
subset $\mathcal{U}$ of ${\mathbb{R}}^d_+$ such that $Q({\bf{a}}, \mathbf{b}) \neq 0$ for ${\bf{a}}
\in \mathcal{U}$. Now taking ${\bf{a}} \in \mathcal{U}$ which is uniquely
ergodic and without connections we have $\mathbf{b} \in \pm {\mathcal{C}}_{{\bf{a}}}.$
\end{proof}
}
\section{Introduction}
Majumdar\cite{maj47} and Papapetrou\cite{papa47} studied charged
dusts in the framework of general relativity, and such systems were
later studied by many others (see Ray et al.\cite{remlz03} for
detailed references). Charged fluid spheres have been studied by
Bekenstein\cite{bek71}, Bonnor\cite{bon80}, Zhang et
al.\cite{zhang82}, etc. Zhang et al.\cite{zhang82} indirectly
verified that the structure of a star made of a degenerate
relativistic Fermi gas is significantly affected by the electric
charge only when the charge density is close to the mass density.
We take trapped protons to be the charge carriers in the star; in
our formulation the effect of the charge does not depend on its
sign. The energy density of the electrostatic field {\it adds} to
the total energy density of the system, which in turn increases
the total mass of the system. The modified
Tolman-Oppenheimer-Volkoff (TOV) equation now has extra terms due
to the presence of the Maxwell-Einstein stress tensor. We solve
the modified TOV equation for a polytropic EOS, assuming that the
charge density follows the matter density.
The detailed relations are given in Ray et al.\cite{remlz03}. We
use the modified TOV equation to study the effect of charge on a
model-independent polytropic EOS. We assume that the charge
density is proportional to the energy density, $\rho_{ch}=f \times
\epsilon$, where $\epsilon=\rho c^2$ is measured in MeV/fm$^3$.
The polytropic EOS is given by $P=\kappa \rho^{1+1/n}$, where $n$
is the polytropic index; for our present choice of EOS we take
$n=1.5$ and $\kappa=0.05~{\rm fm}^{8/3}$.
\begin{figure}[ht]
\centerline{\epsfxsize=5cm\epsfbox{m-rhoc.eps}} \caption{Central
density against mass for different values of the factor
$f$.\label{fig:poly-e-m}}
\end{figure}
In Fig.(\ref{fig:poly-e-m}), we plot the mass as a function of the
central density for different values of the charge fraction
$f$. For the charge fraction $f=0.0001$, we do not see any
departure of the stellar structure from that of the chargeless
case. This value of $f$ is {\it critical}, because any increase
beyond it has an enormous effect on the structure. The
increase of the maximum mass of the star is very much non-linear,
as can be seen from Fig.(\ref{fig:poly-e-m}).
\begin{figure}[htbp]
\centerline{\psfig{figure=q-m-poly-n15.eps,width=5cm}}
\caption{\label{fig:poly-q-m}The variation of the charge with
mass for different $f$.}
\end{figure}
The $Q\times M$ diagram of Fig.(\ref{fig:poly-q-m}) shows the
mass of the stars against their surface charge. We have made the
charge density proportional to the energy density, so it is
expected that the charge, which is a volume integral of the
charge density, varies in the same way as the mass, which is
also a volume integral over the mass density. The slope of the
curves comes from the different charge fractions. If we consider
the maximum allowed charge, estimated from the condition ($U
\simeq \sqrt{8\pi P} < \sqrt{8\pi\epsilon}$) for $\frac{dP}{dr}$
to be negative (Eq.~12 of Ray et al.\cite{remlz03}), we see that
the curve for the maximum charge in Fig.(\ref{fig:poly-q-m}) has
a slope of 1:1 (in a charge scale of 10$^{20}$
Coulomb\cite{remlz03}). This scale can easily be understood if we
write the charge as
$Q=\sqrt{G}M_\odot\frac{M}{M_\odot}\simeq10^{20}\frac{M}{M_\odot}{\rm
Coulomb}.$ This charge Q is the charge at the surface of the
star where the pressure and also $\frac{dP}{dr}$ are zero.
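As a consistency check (a back-of-the-envelope CGS--SI conversion, not taken from the reference), the prefactor indeed has the quoted order of magnitude:
$$\sqrt{G}\,M_\odot \approx \sqrt{6.67\times 10^{-8}\ \mathrm{cgs}}
\times 1.99\times 10^{33}\ {\rm g}
\approx 5.1\times 10^{29}\ \mathrm{esu}
\approx 1.7\times 10^{20}\ {\rm Coulomb},$$
using $1\,{\rm Coulomb} \approx 3\times 10^{9}\,\mathrm{esu}$.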
The total mass of the star increases with increasing charge
because the electric energy density {\it adds on} to the mass energy
density. This change in the mass is small for smaller charge
fractions, and goes up to 12 times the chargeless value for the
maximum allowed charge fraction $f=0.0011$. The most effective
term in Eq.~12 of Ray et al.\cite{remlz03} is the factor
($\rm M_{tot}+4\pi r^3P^*$), where $P^*= P-\frac{U^2}{8\pi}$ is
the effective pressure of the system: the charge reduces the
outward fluid pressure, which opposes the inward gravitational
pull. With the increase of charge, $P^*$ decreases, and hence the
negative (gravitational) part of Eq.~12 of Ray et
al.\cite{remlz03} decreases. With this softening of the pressure
gradient, the system allows a larger radius for the star, until it
reaches the surface where the pressure (and $\frac{dP}{dr}$) goes
to zero. Since $\frac{U^2}{8\pi}$ cannot be much larger than the
pressure if $\frac{dP}{dr}$ is to remain negative, we have a limit
on the charge, which comes from the relativistic effects of the
gravitational force and not only from the repulsive Coulomb part.
In our study, we have shown that a high-density system like a
neutron star can hold a huge charge, of the order of 10$^{20}$
Coulomb, considering the global balance of forces. With the
increase of charge, the maximum mass of the star recedes to a
lower-density regime. The stellar mass also increases rapidly
near the critical limit of the maximum charge the system can
hold. The radius increases accordingly, while the M/R ratio
still increases with charge. The increase in mass is
primarily brought in by the softening of the pressure gradient due
to the presence of a Coulombian term coupled with the
Gravitational matter part. Another intrinsic increase in the mass
term comes through the addition of the electric energy density to
the mass density of the system. The stability of these charged
stars is, however, ruled out by considering the forces acting on
individual charged particles: they face an enormous radial
repulsive force and leave the star in a very short time. This
creates an imbalance of forces and the gravitational force
overwhelms the repulsive Coulomb and fluid pressure forces and
the star collapses to a charged black hole. Finally, these
charged stars are expected to be very short-lived, forming an
intermediate state between a supernova collapse and charged black
holes\cite{remlz03}.
\section{Introduction}
\label{sec:intro}
An airborne hyperspectral imaging sensor is capable of simultaneously acquiring the same spatial scene in many contiguous, narrow (0.01 - 0.02 $\mu$m) spectral wavelength (color) bands \cite{Shaw02, manolakis2003hyperspectral, manolakis_lockwood_cooley_2016, 974715, 7564440, ZHANG20153102, 6555921, PLAZA2009S110, 6509473, bookJocekynToaPPEAR, DallaMura2011, prasad2011optical, chanussotDec2009}. When all the spectral bands are stacked together, the result is a hyperspectral image (HSI) whose cross-section is a function of the spatial coordinates and whose depth is a function of wavelength. Hence, an HSI is a 3-D data cube having two spatial dimensions and one spectral dimension. Thanks to this narrow acquisition, an HSI can have hundreds to thousands of contiguous spectral bands. This very high level of spectral detail gives a better capability to see the unseen.
\\
Each band of the HSI corresponds to an image of the surface covered by the field of view of the hyperspectral sensor, whereas each ``pixel'' in the HSI is a $p$-dimensional vector, $\mathbf{x}\in\mathbb{R}^p$ ($p$ stands for the total number of spectral bands), consisting of a spectrum characterizing the materials within the pixel. The ``spectral signature'' of $\mathbf{x}$ (also known as the ``reflectance spectrum'') shows the fraction of incident energy, typically sunlight, that is reflected by a material from the surface of interest as a function of the wavelength of the energy \cite{JoanaThesis}.
\\
The HSI usually contains both pure and mixed pixels \cite{manolakis2003hyperspectral, Manolakis09, 6048900, 6504505, 5658102, 6351978}. A pure pixel contains only one single material, whereas a mixed pixel contains multiple materials, with its spectral signature representing the aggregate of all the materials in the corresponding spatial location. The latter situation often arises because hyperspectral images are collected hundreds to thousands of meters away from an object so that the object becomes smaller than the size of a pixel. Other scenarios might involve, for example, a military target hidden under foliage or covered with camouflage material.
\\
\\
With the rich information afforded by the high spectral dimensionality, hyperspectral imagery has found many applications in various fields, such as astronomy, agriculture \cite{Patel2001, Datt2003}, mineralogy \cite{Lehmann2001}, military \cite{manolakis2002detection, Stein02, 4939406}, and in particular, target detection \cite{Shaw02, manolakis2003hyperspectral, Manolakis14, Manolakis09, manolakis2002detection, 7739987, 7165577, 8069001, Frontera14, 6378408}. Usually, detection is cast as a binary hypothesis test that chooses between the following competing null and alternative hypotheses: target absent ($H_0$), that is, the test pixel $\mathbf{x}$ consists only of background; and target present ($H_1$), where $\mathbf{x}$ may be either fully or partially occupied by the target material.
\\
It is well known that the signal model for hyperspectral test pixels is fundamentally different from the additive model used in radar and communications applications \cite{manolakis_lockwood_cooley_2016, Manolakis09}. We can regard each test pixel $\mathbf{x}$ as being made up of $\mathbf{x} = \alpha \, \mathbf{t}$ + $(1-\alpha) \, \mathbf{b}$, where $0 \leq \alpha \leq 1$ designates the target fill-fraction, $\mathbf{t}$ is the spectrum of the target, and $\mathbf{b}$ is the spectrum of the background. This model is known as the replacement signal model: when a target is present in a given HSI, it replaces (that is, removes) an equal part of the background \cite{manolakis_lockwood_cooley_2016}. For notational convenience, sensor noise has been incorporated into the target and background spectra; that is, the vectors $\mathbf{t}$ and $\mathbf{b}$ include noise \cite{manolakis_lockwood_cooley_2016}.
\\
In particular, when $\alpha=0$, the pixel $\mathbf{x}$ is fully occupied by the background material (that is, the target is not present). When $\alpha = 1$, the pixel $\mathbf{x}$ is fully occupied by the target material and is usually referred to as the full or resolved target pixel. Whereas when $0<\alpha<1$, the pixel $\mathbf{x}$ is partially occupied by the target material and is usually referred to as the subpixel or unresolved target \cite{Manolakis09}.
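As a concrete illustration of the three regimes above, the following sketch synthesizes pixels under the replacement model; the spectra are random stand-ins, not real reflectance data:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 200                          # number of spectral bands (illustrative)
t = rng.uniform(0.2, 0.9, p)     # hypothetical target reflectance spectrum
b = rng.uniform(0.0, 0.4, p)     # hypothetical background spectrum

def replacement_pixel(t, b, alpha):
    """Replacement signal model: the target occupies a fraction alpha
    of the pixel and removes an equal part of the background."""
    assert 0.0 <= alpha <= 1.0
    return alpha * t + (1.0 - alpha) * b

x_bg   = replacement_pixel(t, b, 0.0)   # H0: background only
x_sub  = replacement_pixel(t, b, 0.3)   # subpixel (unresolved) target
x_full = replacement_pixel(t, b, 1.0)   # full (resolved) target pixel
```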
\\
\\
Prior target information can often be provided to the user. In real-world hyperspectral imagery, this prior information is usually not related to the target's spatial properties (e.g. size, shape, texture), which are typically not at our disposal, but to its spectral signature. The latter usually hinges on the nature of the given HSI, where the spectra of the targets of interest have already been measured by some laboratories or with some hand-held spectrometers.
\\
Different Gaussian-based target detectors (e.g. Matched Filter \cite{Manolakis00, Nasrabadi08}, Normalized Matched Filter \cite{kraut1999cfar}, and Kelly detector \cite{Kelly86}) have been developed. In these classical detectors, the target of interest to detect is known, that is, its spectral signature is fully provided to the user. However, these detectors present several limitations in real-world hyperspectral imagery. The task of understanding and solving these limitations presents significant challenges for hyperspectral target detection.
\begin{itemize}
\item{\bf Challenge one:} One of the major drawbacks of the aforementioned classical target detectors is that they depend on the unknown covariance matrix (of the background surrounding the test pixel), whose entries have to be carefully estimated, especially in large dimensions \cite{LEDOIT2004365, LedoitHoney, AhmadCamsap2017}, to ensure success under different environments \cite{5606730, 6884641, 6894189}. However, estimating large covariance matrices is a longstanding problem in many applications and has attracted increased attention over several decades.
When the spectral dimension is large compared to the sample size (which is the usual case), the traditional covariance estimators suffer from large estimation errors unless some covariance regularization methods are considered \cite{LEDOIT2004365, LedoitHoney, AhmadCamsap2017}. This implies that the largest or smallest estimated coefficients in the matrix tend to take on extreme values not because this is ``the truth'', but because they contain an extreme amount of error \cite{LEDOIT2004365, LedoitHoney}. This is one of the main reasons why the classical target detectors usually behave poorly in detecting the targets of interest in a given HSI.
\\
In addition, there is always an explicit assumption (specifically, Gaussian) on the statistical distribution characteristics of the observed data. For instance, most materials are treated as Lambertian because their bidirectional reflectance distribution function characterizations are usually not available, but the actual reflection is likely to have both a diffuse component and a specular component. The latter component would result in gross corruption of the data. In addition, spectra from multiple materials are usually assumed to interact according to a linear mixing model; nonlinear mixing effects are not represented and will contribute another source of noise.
\\
\item{\bf Challenge two:} The classical target detectors that depend on the target to detect, $\mathbf{t}$, use only a single reference spectrum for the target of interest. This may be inadequate since, in real-world hyperspectral imagery, various effects that produce variability in the material spectra (e.g. atmospheric conditions, sensor noise, material composition, and scene geometry) are inevitable \cite{803418, 1000320}.
For instance, target signatures are typically measured in laboratories or in the field with handheld spectrometers that are at most a few inches from the target surface. Hyperspectral images, however, are collected at huge distances away from the target and have significant atmospheric effects present.
\end{itemize}
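The covariance problem raised in Challenge one is easy to reproduce numerically: when the number of bands exceeds the number of background samples, the sample covariance is singular, and a simple shrinkage toward a scaled identity restores invertibility. The shrinkage coefficient below is hand-picked for illustration, not optimally tuned:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 150, 50                    # spectral bands >> available background samples
X = rng.standard_normal((n, p))   # toy background pixels (one per row)

S = np.cov(X, rowvar=False)       # sample covariance, p x p
# With n < p the sample covariance is rank-deficient (singular):
rank_S = np.linalg.matrix_rank(S)         # at most n - 1

# A simple shrinkage estimator toward a scaled identity restores full rank
rho = 0.1
S_shrunk = (1 - rho) * S + rho * np.trace(S) / p * np.eye(p)
rank_shrunk = np.linalg.matrix_rank(S_shrunk)
```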
Recent years have witnessed a growing interest in the notion of sparsity as a way to model signals. The basic assumption of this model is that natural signals can be represented as a ``sparse'' linear combination of atom signals taken from a dictionary. In this regard, two main issues need to be addressed: 1) how to represent a signal in the sparsest way, for a given dictionary, and 2) how to construct an accurate dictionary in order to successfully represent the signal.
\\
Recently, a signal classification technique via sparse representation was developed for the application of face recognition \cite{4483511}. It has been observed that aligned faces of the same subject with varying lighting conditions approximately lie in a low-dimensional subspace \cite{1177153}. Hence, a test face image can be sparsely represented by atom signals from all classes. This representation approach has also been exploited in several other signal classification problems such as iris recognition \cite{5339067}, tumor classification \cite{1177153asf}, and hyperspectral imagery unmixing \cite{doi818245, 6555921, 6200362}.
\\
In this context, Chen {\it et al.} \cite{Chen11b} were inspired by the work in \cite{4483511} and developed an approach for sparse representation of hyperspectral test pixels. In particular, each test pixel $\mathbf{x} \in\mathbb{R}^p$ in a given HSI, be it target or background, is assumed to lie in a low-dimensional subspace of the $p$-dimensional spectral-measurement space. Hence, it can be represented by very few atom signals taken from dictionaries, and the recovered sparse representation can be used directly for detection. For example, if a test pixel $\mathbf{x}$ contains the target (that is, $\mathbf{x} = \alpha \, \mathbf{t} + (1-\alpha)\, \mathbf{b}$, with $0<\alpha \leq 1$), then it can be sparsely represented by atom signals taken from the target dictionary (denoted as $\mathbf{A}_t$); whereas, if $\mathbf{x}$ is only a background pixel (that is, it does not contain the target, i.e., $\alpha=0$), then it can be sparsely represented by atom signals taken from the background dictionary (denoted as $\mathbf{A}_b$).
Very recently, Zhang {\it et al.} \cite{Zhang15} have extended the work done by Chen {\it et al.} in \cite{Chen11b} by combining the ideas of binary hypothesis and sparse representation together, obtaining a more complete and realistic sparsity model than in \cite{Chen11b}. More precisely, Zhang {\it et al.} \cite{Zhang15} have assumed that if the test pixel $\mathbf{x}$ belongs to hypothesis $H_0$ (that is, the target is absent), it will be modeled by $\mathbf{A}_b$ only; otherwise, it will be modeled by the union of $\mathbf{A}_b$ and $\mathbf{A}_t$. This in fact yields a competition between the two hypotheses corresponding to the different pixel class labels.
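The competition between the two hypotheses can be sketched with a toy example; for simplicity, plain least squares is used below in place of a true sparse-recovery solver, and both dictionaries are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
p, Nb, Nt = 100, 12, 4
A_b = rng.random((p, Nb))               # toy background dictionary
A_t = rng.random((p, Nt))               # toy target dictionary
A_u = np.hstack([A_b, A_t])             # union dictionary used under H1

def residual(A, x):
    """Norm of the least-squares residual of x against the columns of A
    (a stand-in for a true sparse-recovery solver)."""
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.linalg.norm(x - A @ coef)

# A pixel containing target energy is fitted much better by the union dictionary
x = 0.5 * A_t[:, 0] + 0.5 * A_b[:, 0]
detector_output = residual(A_b, x) - residual(A_u, x)  # large => target present
```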
\\
\\
These sparse representation methods \cite{Chen11b, Zhang15} are independent of the unknown covariance matrix, behave well in large dimensions, are distribution-free, and are invariant to atmospheric effects. More precisely, they can alleviate the spectral variability caused by atmospheric effects, and can also better deal with a greater range of noise phenomena. The main drawback of these approaches is that they usually lack a sufficiently universal dictionary, especially for the background $\mathbf{A}_b$; some form of in-scene adaptation would be desirable. The background dictionary $\mathbf{A}_b$ is usually constructed using an adaptive scheme (that is, a local method) based on a dual concentric window centered on the test pixel, with an inner window region (IWR) centered within an outer window region (OWR), and only the pixels in the OWR constitute the samples for $\mathbf{A}_b$. Clearly, the dimension of the IWR is very important and has a strong impact on the target detection performance, since it aims to enclose the targets of interest to be detected. It should be set larger than or equal to the size of all the desired targets of interest in the corresponding HSI, so as to exclude the target pixels from erroneously appearing in $\mathbf{A}_b$. However, information about the target size in the image is usually not at our disposal. It is also very unwieldy to set this size parameter when the target could be of irregular shape (e.g. searching for lost plane parts of a missing aircraft). Another tricky situation is when there are multiple targets in close proximity in the image (e.g. military vehicles in long convoy formation).
\\
\\
Hence, an important third challenge appears:
\begin{itemize}
\item {\bf Challenge three:} The construction of $\mathbf{A}_b$ for the sparse representation methods is a very challenging problem since a contamination of it by the target pixels can potentially affect the target detection performance.
\end{itemize}
$\\$
In this chapter, we handle all the aforementioned challenges by making very few specific assumptions about the background or target \cite{8462257, 8677268}. Based on a modification of the recently developed Robust Principal Component Analysis (RPCA) \cite{Candes11}, our method decomposes an input HSI into a background HSI (denoted by $\mathbf{L}$) and a sparse target HSI (denoted by $\mathbf{E}$) that contains the targets of interest.
\\
While we do not need to make assumptions about the size, shape, or number of the targets, our method is subject to certain generic constraints that make only mild assumptions on the background and the target. These constraints are similar to those used in RPCA \cite{Candes11, NIPS2009_3704}, including:
\begin{enumerate}
\item the background is not too heavily cluttered with many different materials with multiple spectra, so that the background signals should span a low-dimensional subspace, a property that can be expressed as the low rank condition of a suitably formulated matrix \cite{ChenYu, Zhang15b, 7322257, 8260545, 8126244, Ahmad2017a, 8126244, 8260545, 7312998};
\item the total image area of all the target(s) should be small relative to the whole image (i.e. spatially sparse), e.g., several hundred pixels in a million-pixel image, though there is no restriction on the target shape or the proximity between targets.
\end{enumerate}
Our method also assumes that the target spectra are available to the user and that the atmospheric influence can be accounted for by the target dictionary $\mathbf{A}_t$. This pre-learned target dictionary $\mathbf{A}_t$ is used to cast the general RPCA into a more specific form: we further factorize the sparse component $\mathbf{E}$ from RPCA into the product of $\mathbf{A}_t$ and a sparse activation matrix $\mathbf{C}$ \cite{8462257}. This modification is essential to disambiguate the true targets from the background.
\begin{figure}[!tbp]
\centering
\minipage{0.84\textwidth}
\includegraphics[width=\linewidth]{target_scheme.png}
\endminipage
\caption{Sparse target HSI: our novel target detector.}\label{fig:image1}
\end{figure}
$\\$
After decomposing a given HSI into the sum of a low-rank HSI and a sparse HSI containing only the targets, with the background suppressed, the detector is simply defined by the sparse HSI: the targets are detected at its non-zero entries. Hence, a novel target detector is developed, which is simply a sparse HSI generated automatically from the original HSI, containing only the targets with the background suppressed (see Fig. \ref{fig:image1}).
\\
The main advantages of our proposed detector are the following: 1) it is independent of the unknown covariance matrix; 2) it behaves well in large dimensions; 3) it is distribution-free; 4) it is invariant to atmospheric effects via the use of a target dictionary $\mathbf{A}_t$; and 5) it does not require a background dictionary to be constructed.
\\
\\
This chapter is structured along the following lines. First comes an overview of some related works in section \ref{sec:RelatedWorks}. In Section \ref{sec:Maincontribution}, the proposed decomposition model as well as our novel target detector are briefly outlined. Section \ref{sec:experimentss} presents real experiments to gauge the effectiveness of the proposed detector for hyperspectral target detection. This chapter ends with a summary of the work and some directions for future work.
\\
\\
{\em Summary of Main Notations:} Throughout this chapter, we depict vectors in lowercase boldface letters and matrices in uppercase boldface letters. The notations $(.)^T$ and $\mathrm{Tr}(.)$ stand for the transpose and trace of a matrix, respectively. In addition, $\mathrm{rank}(.)$ is the rank of a matrix. A variety of norms on matrices will be used. For a matrix $\mathbf{M}$, $[\mathbf{M}]_{:,j}$ denotes its $j$th column. The matrix $l_{2,0}$ and $l_{2,1}$ norms are defined by $\left\Vert\mathbf{M}\right\Vert_{2,0} = \# \left\{ j \, : \, \left\Vert\left[\mathbf{M}\right]_{:,j}\right\Vert_2 \, \not= \, 0\right\}$ and $\left\Vert\mathbf{M}\right\Vert_{2,1} = \sum_{j} \left\Vert\left[\mathbf{M}\right]_{:,j}\right\Vert_2$, respectively. The Frobenius norm and the nuclear norm (the sum of singular values of a matrix) are denoted by $\left\Vert\mathbf{M}\right\Vert_F$ and $\left\Vert\mathbf{M}\right\Vert_* = \mathrm{Tr}\left(\left(\mathbf{M}^T \,\mathbf{M}\right)^{1/2}\right)$, respectively.
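These matrix norms are straightforward to compute numerically; a minimal sketch on a small example:

```python
import numpy as np

M = np.array([[3.0, 0.0, 1.0],
              [4.0, 0.0, 1.0]])

col_norms = np.linalg.norm(M, axis=0)   # Euclidean norm of each column
l20 = np.count_nonzero(col_norms)       # l_{2,0}: number of non-zero columns
l21 = col_norms.sum()                   # l_{2,1}: sum of column norms
fro = np.linalg.norm(M, 'fro')          # Frobenius norm
nuc = np.linalg.norm(M, 'nuc')          # nuclear norm: sum of singular values
```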
\section{Related works}
\label{sec:RelatedWorks}
Whatever the real application may be, the general RPCA model needs to be subject to further assumptions for successfully distinguishing the true targets from the background.
Besides the generic RPCA and our proposed modification discussed in section \ref{sec:intro}, there have been other modifications of RPCA.
For example, a generalized model of RPCA, named Low-Rank Representation (LRR) \cite{6180173}, allows the use of a subspace basis as a dictionary or just uses self-representation to obtain the LRR. The major drawback of LRR is that the incorporated dictionary has to be constructed from the background and must be free of target samples. This challenge is similar to the aforementioned background dictionary $\mathbf{A}_b$ construction problem. If we use the self-representation form of LRR, the presence of a target in the input image may bring about only a small increase in rank and thus be retained in the background \cite{8677268}.
\\
\\
In the earliest models using a low-rank matrix to represent the background \cite{Candes11, NIPS2009_3704, SPCPCandes}, no prior knowledge of the target was considered. In some applications such as speech enhancement and hyperspectral imagery, we may expect some prior information about the target of interest, which can be provided to the user. Thus, incorporating this information about the target into the separation scheme of the general RPCA model should allow us to potentially improve the target extraction performance. For example, Chen and Ellis \cite{6701883} and Sun and Qin \cite{7740039} proposed speech enhancement systems that exploit knowledge about the likely form of the targeted speech. This was accomplished by factorizing the sparse component from RPCA into the product of a dictionary of target speech templates and a sparse activation matrix. The proposed methods in \cite{6701883} and \cite{7740039} typically differ in how the fixed target dictionary of speech spectral templates is constructed.
Our proposed model in section \ref{sec:Maincontribution} is closely related to \cite{6701883} and \cite{7740039}. In real-world hyperspectral imagery, the prior target information is usually not related to the target's spatial properties (e.g. size, shape, and texture), which are typically not at our disposal, but to its spectral signature. The latter usually hinges on the nature of the given HSI, where the spectra of the targets of interest present have already been measured by some laboratories or with some handheld spectrometers.
\\
In addition, by using physical models and the MODTRAN atmospheric modeling program \cite{Berk89}, a number of samples for a specific target can be generated under various atmospheric conditions.
\section{Main Contribution}
\label{sec:Maincontribution}
Consider an HSI of size $h \times w \times p$, where $h$ and $w$ are the height and width of the image scene, respectively, and $p$ is the number of spectral bands. Our proposed modification of RPCA is mainly based on the following steps:
\begin{enumerate}
\item Let us consider that the given HSI contains $q$ pixels $\{ \mathbf{x}_i\}_{i\in[1,\,q]}$ of the form:
\begin{align*}
\mathbf{x}_i = \alpha_i \, \mathbf{t}_i + (1 - \alpha_i) \, \mathbf{b}_i, ~~~~0<\alpha_i \leq 1\,,
\end{align*}
where $\mathbf{t}_i$ represents the known target that replaces a fraction $\alpha_i$ of the background $\mathbf{b}_i$ (i.e. at the same spatial location). The remaining ($e-q$) pixels in the given HSI, with $e = h \times w$, are thus only background ($\alpha=0$).
$\\$
\item We assume that all $\{\mathbf{t}_i\}_{i\in[1,\,q]}$ consist of similar materials; thus they should be represented by linear combinations of $N_t$ common target samples $\{\mathbf{a}^t_{j}\}_{j \in [1, \, N_t]}$, where $\mathbf{a}_j^t \in\mathbb{R}^p$ (the superscript $t$ is for target), but weighted with different sets of coefficients $\{\beta_{i,j}\}_{j\in[1, N_t]}$. Thus, each of the $q$ targets is represented as:
\begin{align*}
\mathbf{x}_i = \alpha_i \sum\limits_{j=1}^{N_t} \Big(\beta_{i,j} \, \mathbf{a}^t_{j} \Big) + (1 - \alpha_i) \, \mathbf{b}_i \hspace{0.5cm} i \in [1,q]\, .
\end{align*}
\item We rearrange the given HSI into a two-dimensional matrix $\mathbf{D}\in \mathbb{R}^{e \times p}$, with $e = h \times w$ (by lexicographically ordering the columns). This matrix $\mathbf{D}$ can be decomposed into a low-rank matrix $\mathbf{L}_0$ representing the pure background, a sparse matrix capturing any spatially small signal residing in the known target subspace, and a noise matrix $\mathbf{N}_0$. More precisely, the model is:
\begin{align*}\label{eq:mod2}
\small
\mathbf{D} &= \mathbf{L}_0 + (\mathbf{A}_t \, \mathbf{C}_0)^T + \mathbf{N}_0
\end{align*}
where $(\mathbf{A}_t\mathbf{C}_0)^T$ is the sparse target matrix, ideally with $q$ non-zero rows representing $\alpha_i \{\mathbf{t}^T_i\}_{i\in[1,q]}$ , with target dictionary $\mathbf{A}_t \in \mathbb{R}^{p \times N_t}$ having columns representing target samples $\{\mathbf{a}^t_{j}\}_{j \in [1, N_t]}$, and a coefficient matrix $\mathbf{C}_0\in\mathbb{R}^{N_t \times e}$ that should be a sparse column matrix, again ideally containing $q$ non-zero columns each representing $\alpha_i [\beta_{i,1}, \, \cdots, \, \beta_{i, N_t}]^T$, $i \in [1, q]$. $\mathbf{N}_0$ is assumed to be independent and identically distributed Gaussian noise with zero mean and unknown standard deviation.
$\\$
\item After reshaping $\mathbf{L}_0$, $(\mathbf{A}_t\, \mathbf{C}_0)^T$ and $\mathbf{N}_0$ back to a cube of size $h \times w \times p$, we call these entities the ``low rank background HSI'', ``sparse target HSI'', and ``noise HSI'', respectively.
\end{enumerate}
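The rearrangement in step 3 is a simple reshape; the sketch below (with toy dimensions) shows the cube-to-matrix round trip assumed by the model, one pixel spectrum per row of $\mathbf{D}$:

```python
import numpy as np

h, w, p = 4, 5, 30                    # toy HSI dimensions
hsi = np.random.default_rng(3).random((h, w, p))

# Rearrange the cube into D in R^{e x p}, e = h*w: one pixel spectrum per row
e = h * w
D = hsi.reshape(e, p)

# The model D = L0 + (A_t C0)^T + N0 separates background, targets, and noise;
# reshaping any of the three terms back to (h, w, p) yields the corresponding
# "background HSI", "sparse target HSI", or "noise HSI"
cube_again = D.reshape(h, w, p)       # round trip back to a cube
```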
In order to recover the low rank matrix $\mathbf{L}_0$ and sparse target matrix $(\mathbf{A}_t\mathbf{C}_0)^T$, we consider the following minimization problem:
\vspace{-0.8mm}
\begin{equation}
\label{eq:ert5}
\underset{\mathbf{L}, \mathbf{C}} {\mathrm{min}} \, \Bigl\{\tau \, \mathrm{rank}(\mathbf{L})+ \lambda \, \left\Vert\mathbf{C}\right\Vert_{2,0} + \left\Vert\mathbf{D} - \mathbf{L} - (\mathbf{A}_t\mathbf{C})^T\right\Vert_F^2 \Bigr\}\,,
\end{equation}
where $\tau$ controls the rank of $\mathbf{L}$, and $\lambda$ the sparsity level in $\mathbf{C}$.
\subsection{Recovering a low rank background matrix and a sparse target matrix by convex optimization}
We relax the rank term and the $||.||_{2,0}$ term to their convex proxies. More precisely, we use the nuclear norm $||\mathbf{L}||_*$ as a surrogate for the $\mathrm{rank}(\mathbf{L})$ term, and the $l_{2,1}$ norm for the $l_{2,0}$ norm\footnote{A natural suggestion could be that the rank of $\mathbf{L}$ usually has a physical meaning (e.g., the number of endmembers in the background), and thus, why not minimize the latter two terms in eq. \eqref{eq:convex_model} with the constraint that the rank of $\mathbf{L}$ should not be larger than a fixed value $d$? That is,
\begin{center}
$\underset{\mathbf{L}, \mathbf{C}} {\mathrm{min}} \, \Bigl\{\lambda \,||\mathbf{C}||_{2,1} + ||\mathbf{D} - \mathbf{L} - (\mathbf{A}_t \mathbf{C})^T||_F^2 \Bigr\}\,,
~~s.t.~~ \mathrm{rank}(\mathbf{L}) \leq d$.
\end{center}
In our opinion, assuming that the number of endmembers in the background is known exactly would be a strong assumption, and our work would be less general as a result. One can assume $d$ to be some upper bound, in which case the suggested formulation is a possible one. However, solving such a problem (with a hard constraint that the rank should not exceed some bound) is in general an NP-hard problem, unless there happens to be some special form in the objective which allows for a tractable solution. Thus, we adopt the soft-constraint form with the nuclear norm as a proxy for the rank of $\mathbf{L}$; this approximation is commonly done in the field and is found empirically to give good solutions in many problems.}.
\\
\\
We now need to solve the following convex minimization problem:
\vspace{-0.8mm}
\begin{align}{\label{eq:convex_model}}
\small
\underset{\mathbf{L}, \mathbf{C}} {\mathrm{min}} \, \Bigl\{\tau \,\left\Vert\mathbf{L}\right\Vert_*+ \lambda \,\left\Vert\mathbf{C}\right\Vert_{2,1} + \left\Vert\mathbf{D} - \mathbf{L} - (\mathbf{A}_t \mathbf{C})^T\right\Vert_F^2 \Bigr\}\,.
\end{align}
Problem \eqref{eq:convex_model} is solved via an alternating minimization of two subproblems. Specifically, at each iteration $k$:
\begin{subequations}\label{eq:sub}
\footnotesize
\begin{alignat}{2}
\label{eq:sub1a}
\mathbf{L}^{(k)} &= \underset{\mathbf{L}} {\mathrm{argmin}} \, \Bigl\{\left\Vert\mathbf{L} - \left(\mathbf{D} - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T\right)\right\Vert_F^2 + \tau \,\left\Vert\mathbf{L}\right\Vert_* \, \Bigr\}\,, \\
\label{eq:sub2b}
\mathbf{C}^{(k)} &= \underset{\mathbf{C}} {\mathrm{argmin}} \, \Bigl\{\left\Vert\left(\mathbf{D} - \mathbf{L}^{(k)}\right)^T - \mathbf{A}_t \, \mathbf{C}\right\Vert_F^2 + \lambda \,\left\Vert\mathbf{C}\right\Vert_{2,1} \, \Bigr\}.
\end{alignat}
\end{subequations}
The minimization sub-problems \eqref{eq:sub1a} and \eqref{eq:sub2b} are convex, and each can be solved optimally.
\\
\\
{\bf Solving sub-problem \eqref{eq:sub1a}:}
We solve subproblem \eqref{eq:sub1a} via the Singular Value Thresholding operator \cite{SVT2010}. We assume that $\left(\mathbf{D} - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T \right)$ has rank $r$.
According to Theorem 2.1 in \cite{SVT2010}, subproblem \eqref{eq:sub1a} admits the following closed-form solution:
\begin{center}
\noindent \fbox{\parbox{7cm}{%
$~~~~~~~~~~~~\mathbf{L}^{(k)} = D_{\tau/2}\left(\mathbf{D} - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T \right)$\vspace{0.3cm}\\
$~~~~~~~~~~~~~~~~~~~~= \mathbf{U}^{(k)} \, D_{\tau/2} \left( \mathbf{S}^{(k)} \right) \, \mathbf{V}^{(k)T}$\vspace{0.3cm}\\
$~~~~~~~~~~~~~~~~~~~~= \mathbf{U}^{(k)} \, \diag \left(\left\{ \left(s_t^{(k)} - \frac{\tau}{2}\right)_+ \right\} \right) \, \mathbf{V}^{(k)T}$
}}
\end{center}
where $\mathbf{S}^{(k)} = \diag \left( \left\{s_t^{(k)}\right\}_{1 \leq t \leq r}\right)$, and $D_{\tau/2}(.)$ is the singular value shrinkage operator.
\\
The matrices $\mathbf{U}^{(k)} \in \mathbb{R}^{e \times r}$, $\mathbf{S}^{(k)} \in \mathbb{R}^{r \times r}$ and $\mathbf{V}^{(k)} \in \mathbb{R}^{p \times r}$ are generated by the singular value decomposition of $\left(\mathbf{D} - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T \right)$.
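This shrinkage step can be sketched in a few lines; the code below is a generic implementation of $D_{\tau/2}(\cdot)$ applied to a random stand-in for $\mathbf{D} - (\mathbf{A}_t \mathbf{C}^{(k-1)})^T$, not tied to any particular data:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding operator D_{tau/2}: shrink the singular
    values of M by tau/2 and zero out those that fall below the threshold."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau / 2.0, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Illustrative L-update: L_k = D_{tau/2}(D - (A_t C_{k-1})^T)
rng = np.random.default_rng(4)
R = rng.standard_normal((20, 10))     # stand-in for D - (A_t C)^T
L = svt(R, tau=5.0)
```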
\begin{proof}
Since the function $\Bigl\{\left\Vert\mathbf{L} - \left(\mathbf{D} - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T\right)\right\Vert_F^2 + \tau \,\left\Vert\mathbf{L}\right\Vert_* \, \Bigr\}$ is strictly convex, it is easy to see that there exists a unique minimizer, and we thus need to prove that it is equal to $D_{\tau/2}\left(\mathbf{D} - \left(\mathbf{A}_t\, \mathbf{C}^{(k-1)}\right)^T \right)$. Note that, to explain how the aforementioned closed-form solution is obtained, we reproduce in detail the proof steps given in \cite{SVT2010}.
\\
To do this, let us first find the derivative of subproblem \eqref{eq:sub1a} w.r.t. $\mathbf{L}$ and set it to zero. We obtain:
\begin{equation}
\label{eq:r45t}
\left(\mathbf{D} - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T \right) - \hat{\mathbf{L}} = \frac{\tau}{2} \, \partial \left\Vert\hat{\mathbf{L}}\right\Vert_*
\end{equation}
Set $\hat{\mathbf{L}} = D_{\tau/2}\left(\mathbf{D} - \left(\mathbf{A}_t \,\mathbf{C}^{(k-1)}\right)^T \right)$ for short. In order to show that $\hat{\mathbf{L}}$ obeys \eqref{eq:r45t}, suppose the singular value decomposition (SVD) of $\left(\mathbf{D} - \left(\mathbf{A}_t \,\mathbf{C}^{(k-1)}\right)^T \right)$ is given by:
\begin{equation*}
\left(\mathbf{D} - \left(\mathbf{A}_t \,\mathbf{C}^{(k-1)}\right)^T \right) = \mathbf{U}_0 \, \mathbf{S}_0\,\mathbf{V}_0^T + \mathbf{U}_1\,\mathbf{S}_1\,\mathbf{V}_1^T \, ,
\end{equation*}
where $\mathbf{U}_0$, $\mathbf{V}_0$ (resp. $\mathbf{U}_1$, $\mathbf{V}_1$) are the singular vectors associated with singular values larger than $\tau/2$ (resp. smaller than or equal to $\tau/2$).
With these notations, we have:
\begin{equation*}
\hat{\mathbf{L}} = D_{\tau/2}\left(\mathbf{U}_0 \,\mathbf{S}_0 \,\mathbf{V}_0^T \right) = \left(\mathbf{U}_0 \, \left(\mathbf{S}_0 - \frac{\tau}{2}\, \mathbf{I}\right)\, \mathbf{V}_0^T \right)
\end{equation*}
Thus, if we return back to equation \eqref{eq:r45t}, we obtain:
\begin{align*}
\mathbf{U}_0 \, \mathbf{S}_0\,\mathbf{V}_0^T + \mathbf{U}_1\,\mathbf{S}_1\,\mathbf{V}_1^T - \mathbf{U}_0\, \left(\mathbf{S}_0 - \frac{\tau}{2}\, \mathbf{I}\right)\,\mathbf{V}_0^T = \frac{\tau}{2} \, \partial \left\Vert\hat{\mathbf{L}}\right\Vert_* \, ,
\\
\Rightarrow \mathbf{U}_1\, \mathbf{S}_1\,\mathbf{V}_1^T + \mathbf{U}_0 \,\frac{\tau}{2}\,\mathbf{V}_0^T = \frac{\tau}{2} \, \partial \left\Vert\hat{\mathbf{L}}\right\Vert_* \, ,
\\
\Rightarrow \left(\mathbf{U}_0\, \mathbf{V}_0^T + \mathbf{W}\right) = \partial \left\Vert\hat{\mathbf{L}}\right\Vert_* \, ,
\end{align*}
where $\mathbf{W} = \displaystyle \frac{2}{\tau} \, \mathbf{U}_1\, \mathbf{S}_1\, \mathbf{V}_1^T$. \\
Let $\mathbf{U}_L\, \mathbf{S}_L\, \mathbf{V}_L^T$ be the SVD of $\hat{\mathbf{L}}$; it is known \cite{4797640, Lewis2003, WATSON199233} that
\begin{equation*}
\partial \left\Vert\hat{\mathbf{L}}\right\Vert_* = \left\{ \mathbf{U}_L\, \mathbf{V}_L^T + \mathbf{W} \, : \, \mathbf{W}\in\mathbb{R}^{e \times p}, \, \mathbf{U}_L^T\, \mathbf{W} = \mathbf{0}, \, \mathbf{W}\, \mathbf{V}_L = \mathbf{0}, \, \left\Vert\mathbf{W}\right\Vert_2 \leq 1\right\} \, .
\end{equation*}
Hence, $\left(\mathbf{D} - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T \right) - \hat{\mathbf{L}} = \displaystyle\frac{\tau}{2} \, \partial \left\Vert\hat{\mathbf{L}}\right\Vert_*$; indeed, $\mathbf{W} = \frac{2}{\tau}\,\mathbf{U}_1\,\mathbf{S}_1\,\mathbf{V}_1^T$ satisfies $\mathbf{U}_0^T\,\mathbf{W} = \mathbf{0}$, $\mathbf{W}\,\mathbf{V}_0 = \mathbf{0}$ and $\left\Vert\mathbf{W}\right\Vert_2 \leq 1$ (the singular values in $\mathbf{S}_1$ are at most $\tau/2$), which concludes the proof.
\end{proof}
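The thresholding operator $D_{\tau/2}$ used above admits a direct numerical implementation. The following NumPy sketch (our own illustration; the function name is ours, not from the chapter) applies the shrinkage to the singular values exactly as in the proof:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding D_tau(M): shrink every singular
    value of M by tau and drop those that become non-positive."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

With this notation, the minimizer of subproblem \eqref{eq:sub1a} is `svt(M, tau / 2)` applied to $\mathbf{M} = \mathbf{D} - (\mathbf{A}_t\,\mathbf{C}^{(k-1)})^T$; note the threshold $\tau/2$, since the data-fidelity term here is not scaled by $1/2$.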
$\\$
{\bf Solving sub-problem \eqref{eq:sub2b}:}
\eqref{eq:sub2b} can be solved by various methods, among which we adopt the Alternating Direction Method of Multipliers (ADMM) \cite{Boyd:2011:DOS:2185815.2185816}.
More precisely, we introduce an auxiliary variable $\mathbf{F}$ into sub-problem \eqref{eq:sub2b} and recast it into the following form:
\begin{equation}
\label{eq:problemm}
\left(\mathbf{C}^{(k)}, \mathbf{F}^{(k)}\right) = \underset{s.t.~~ \mathbf{C} = \mathbf{F}} {\mathrm{argmin}} \, \left\{\left\Vert \left(\mathbf{D} - \mathbf{L}^{(k)}\right)^T - \mathbf{A}_t \, \mathbf{C}\right\Vert_F^2 + \lambda \, \left\Vert\mathbf{F}\right\Vert_{2,1}\right\}
\end{equation}
Problem \eqref{eq:problemm} is then solved as follows (scaled form of ADMM):
\begin{subequations}\label{eq:subbb}
\small
\begin{alignat}{2}
\label{eq:sub11a}
\mathbf{C}^{(k)} &= \underset{\mathbf{C}} {\mathrm{argmin}} \, \left\{\left\Vert \left(\mathbf{D} - \mathbf{L}^{(k)}\right)^T - \mathbf{A}_t \,\mathbf{C}\right\Vert_F^2 + \frac{\rho^{(k-1)}}{2}\, \left\Vert\mathbf{C} - \mathbf{F}^{(k-1)} + \frac{1}{\rho^{(k-1)}}\, \mathbf{Z}^{(k-1)}\right\Vert_F^2 \, \right\}\,, \\
\label{eq:sub22b}
\mathbf{F}^{(k)} &= \underset{\mathbf{F}} {\mathrm{argmin}} \, \left\{\lambda \, \left\Vert\mathbf{F}\right\Vert_{2,1} + \frac{\rho^{(k-1)}}{2}\,\left\Vert \mathbf{C}^{(k)} - \mathbf{F} + \frac{1}{\rho^{(k-1)}} \, \mathbf{Z}^{(k-1)}\right\Vert_F^2\right\}\,, \\
\label{eq:sub33c}
\mathbf{Z}^{(k)} &= \mathbf{Z}^{(k-1)} + \rho^{(k-1)} \,\left(\mathbf{C}^{(k)} - \mathbf{F}^{(k)}\right)\,,
\end{alignat}
\end{subequations}
where $\mathbf{Z} \in \mathbb{R}^{N_t \times e}$ is the Lagrangian multiplier matrix, and $\rho$ is a positive scalar.
\\
\\
{\em Solving sub-problem \eqref{eq:sub11a}}:
\begin{align*}
-2\, \mathbf{A}_t^T \left(\left(\mathbf{D} - \mathbf{L}^{(k)}\right)^T - \mathbf{A}_t\, \mathbf{C} \right) + \rho^{(k-1)} \, \left( \mathbf{C} - \mathbf{F}^{(k-1)} + \frac{1}{\rho^{(k-1)}} \, \mathbf{Z}^{(k-1)}\right) = \mathbf{0} \, ,
\\
\Rightarrow \left(2\, \mathbf{A}_t^T \, \mathbf{A}_t + \rho^{(k-1)}\, \mathbf{I} \right) \, \mathbf{C} = \rho^{(k-1)} \, \mathbf{F}^{(k-1)} - \mathbf{Z}^{(k-1)} + 2\, \mathbf{A}_t^T \, \left(\mathbf{D} - \mathbf{L}^{(k)}\right)^T \, .
\end{align*}
This implies:
\[
\boxed{\mathbf{C}^{(k)} = \left( 2\, \mathbf{A}_t^T \, \mathbf{A}_t + \rho^{(k-1)} \, \mathbf{I}\right)^{-1} \, \left(\rho^{(k-1)}\, \mathbf{F}^{(k-1)} - \mathbf{Z}^{(k-1)} + 2\, \mathbf{A}_t^T \, \left(\mathbf{D} - \mathbf{L}^{(k)}\right)^T \right)} \, .
\]
\\
{\em Solving sub-problem \eqref{eq:sub22b}}:
\\
According to Lemma 3.3 in \cite{doi:10.1137/080730421} and Lemma 4.1 in \cite{6180173}, subproblem \eqref{eq:sub22b} admits the following closed form solution:
\[
\boxed{[\mathbf{F}]_{:, j}^{(k)} = \max \left( \left\Vert\left[\mathbf{C}\right]_{:,j}^{(k)} + \frac{1}{\rho^{(k-1)}} \, \left[\mathbf{Z}\right]_{:,j}^{(k-1)}\right\Vert_2 - \frac{\lambda}{\rho^{(k-1)}}, \, 0 \right) \, \left( \displaystyle \frac{\left[\mathbf{C}\right]_{:,j}^{(k)} +
{\frac{1}{\rho^{(k-1)}}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)}}{\left\Vert \left[\mathbf{C}\right]_{:,j}^{(k)} + \frac{1}{\rho^{(k-1)}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)}\right\Vert_2}\right)
}
\]
\\
\begin{proof}
At the $j^{th}$ column, subproblem \eqref{eq:sub22b} reads:
\begin{equation*}
\left[\mathbf{F}\right]_{:,j}^{(k)} = \underset{ \left[\mathbf{F}\right]_{:,j}} {\mathrm{argmin}} \, \left\{\lambda \, \left\Vert \left[\mathbf{F}\right]_{:,j}\right\Vert_2 + \frac{\rho^{(k-1)}}{2}\,\left\Vert\left[\mathbf{C}\right]_{:,j}^{(k)} - \left[\mathbf{F}\right]_{:,j} + \frac{1}{\rho^{(k-1)}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)}\right\Vert_2^2 \, \right\} \, .
\end{equation*}
By taking the derivative w.r.t. $[\mathbf{F}]_{:,j}$ and setting it to zero, we obtain:
\begin{eqnarray}
\label{eq:huyrz}
-\rho^{(k-1)} \, \left( \left[\mathbf{C}\right]_{:,j}^{(k)} - \left[\mathbf{F}\right]_{:,j} + \frac{1}{\rho^{(k-1)}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)} \right) + \frac{\lambda \, \left[\mathbf{F}\right]_{:,j}}{\left\Vert\left[\mathbf{F}\right]_{:,j}\right\Vert_2} = \mathbf{0}\, \nonumber
\\
\Rightarrow \left[\mathbf{C}\right]_{:,j}^{(k)} + \frac{1}{\rho^{(k-1)}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)} = \left[\mathbf{F}\right]_{:,j} + \frac{\lambda \, \left[\mathbf{F}\right]_{:,j}}{\rho^{(k-1)}\, \left\Vert\left[\mathbf{F}\right]_{:,j}\right\Vert_2} \, .
\end{eqnarray}
By computing the $l_2$ norm of both sides of \eqref{eq:huyrz}, we obtain:
\begin{equation}
\label{eq:bbcv}
\left\Vert \left[\mathbf{C}\right]_{:,j}^{(k)} + \frac{1}{\rho^{(k-1)}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)}\right\Vert_2 = \left\Vert\left[\mathbf{F}\right]_{:,j}\right\Vert_2 + \frac{\lambda}{\rho^{(k-1)}} \, .
\end{equation}
From equation \eqref{eq:huyrz} and equation \eqref{eq:bbcv}, we have:
\begin{equation}
\label{eq:wwwe}
\displaystyle \frac{\left[\mathbf{C}\right]_{:,j}^{(k)} + \displaystyle\frac{1}{\rho^{(k-1)}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)}}{ \left\Vert\left[\mathbf{C}\right]_{:,j}^{(k)} + \displaystyle\frac{1}{\rho^{(k-1)}}\, \left[\mathbf{Z}\right]_{:,j}^{(k-1)}\right\Vert_2} = \frac{\left[\mathbf{F}\right]_{:,j}}{\left\Vert\left[\mathbf{F}\right]_{:,j}\right\Vert_2} \, .
\end{equation}
Consider that:
\begin{equation}
\label{eq:heqrtqr}
[\mathbf{F}]_{:,j}= \left\Vert[\mathbf{F}]_{:,j}\right\Vert_2 \times \displaystyle \frac{[\mathbf{F}]_{:,j}}{ \left\Vert[\mathbf{F}]_{:,j}\right\Vert_2} \, .
\end{equation}
By replacing $ \left\Vert[\mathbf{F}]_{:,j}\right\Vert_2$ from \eqref{eq:bbcv} into \eqref{eq:heqrtqr}, and $\displaystyle\frac{[\mathbf{F}]_{:,j}}{ \left\Vert[\mathbf{F}]_{:,j}\right\Vert_2}$ from \eqref{eq:wwwe} into \eqref{eq:heqrtqr}, we conclude the proof.
\end{proof}
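In code, the boxed update for $\mathbf{F}$ is a column-wise group soft-thresholding. A minimal NumPy sketch (our own naming, not from the chapter):

```python
import numpy as np

def group_soft_threshold(B, kappa):
    """Map each column b of B to max(||b||_2 - kappa, 0) * b / ||b||_2;
    columns whose l2 norm is at most kappa are set to zero."""
    norms = np.linalg.norm(B, axis=0)
    scale = np.maximum(norms - kappa, 0.0) / np.maximum(norms, 1e-12)
    return B * scale
```

The update then reads `F = group_soft_threshold(C + Z / rho, lam / rho)`, matching the closed form derived above.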
\subsection{Some initializations and convergence criterion}
We initialize $\mathbf{L}^{(0)} = \mathbf{C}^{(0)} = \mathbf{F}^{(0)} = \mathbf{Z}^{(0)} = \boldsymbol{0}$, $\rho^{(0)} = 10^{-4}$ and update $\rho^{(k)} = 1.1 \, \rho^{(k-1)}$. The convergence criterion for sub-problem \eqref{eq:sub2b} is $ \left\Vert\mathbf{C}^{(k)} - \mathbf{F}^{(k)} \right\Vert_F^2 \leq 10^{-6}$.
\\
\\
For Problem \eqref{eq:convex_model}, we stop the iteration when the following convergence criterion is satisfied:
\begin{align*}
\frac{ \left\Vert\mathbf{L}^{(k)} - \mathbf{L}^{(k-1)} \right\Vert_F}{ \left\Vert\mathbf{D}\right\Vert_F} \leq \epsilon~~~~~\text{and} ~~~~~
\frac{ \left\Vert\left(\mathbf{A}_t \, \mathbf{C}^{(k)}\right)^T - \left(\mathbf{A}_t \, \mathbf{C}^{(k-1)}\right)^T\right\Vert_F}{ \left\Vert\mathbf{D}\right\Vert_F} \leq \epsilon
\end{align*}
where $\epsilon>0$ is a precision tolerance parameter. We set $\epsilon = 10^{-4}$.
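For concreteness, the whole inner ADMM solver for sub-problem \eqref{eq:sub2b}, with the initializations and stopping rule just described, can be sketched as follows (an illustrative NumPy implementation under our own naming; it is a sketch, not the authoritative code of the chapter):

```python
import numpy as np

def solve_C(Y, A, lam, rho=1e-4, rho_growth=1.1, tol=1e-6, max_iter=500):
    """ADMM (scaled form) for  min_C ||Y - A C||_F^2 + lam ||C||_{2,1},
    where Y stands for (D - L^{(k)})^T, via the splitting C = F."""
    n_t = A.shape[1]
    C = np.zeros((n_t, Y.shape[1]))
    F = np.zeros_like(C)
    Z = np.zeros_like(C)
    G = 2.0 * A.T @ A          # constant part of the C-update system
    b = 2.0 * A.T @ Y
    for _ in range(max_iter):
        # C-update: solve (2 A^T A + rho I) C = rho F - Z + 2 A^T Y
        C = np.linalg.solve(G + rho * np.eye(n_t), rho * F - Z + b)
        # F-update: column-wise group soft-thresholding of C + Z / rho
        B = C + Z / rho
        norms = np.linalg.norm(B, axis=0)
        F = B * (np.maximum(norms - lam / rho, 0.0) / np.maximum(norms, 1e-12))
        # dual ascent, stopping test, and penalty growth
        Z = Z + rho * (C - F)
        if np.sum((C - F) ** 2) <= tol:
            break
        rho *= rho_growth
    return C
```

Each iteration costs one linear solve against an $N_t \times N_t$ system plus column norms, so the inner loop stays cheap when the dictionary $\mathbf{A}_t$ has few atoms.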
\subsection{Our novel target detector: $(\mathbf{A}_t\mathbf{C})^T$}
\label{sec:first_dete_strat}
We use $(\mathbf{A}_t \mathbf{C})^T$ directly for the detection. Note that for this detector, we require as few false alarms as possible to be deposited in the target image, but we do not need the target fraction to be entirely removed from the background (that is, a very weak target separation can suffice). As long as enough of the target fractions are moved to the target image, so that a non-zero support is detected at the corresponding pixel locations, this will be adequate for our detection scheme. From this standpoint, we should choose a relatively large $\lambda$ value so that the target image is very sparse with zero or few false alarms, and only the signals that reside in the target subspace specified by $\mathbf{A}_t$ are deposited there.
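Operationally, the detection map can then be read off the sparse component by thresholding its per-pixel spectral energy; a hypothetical NumPy helper (the threshold `eps` is our own illustrative choice):

```python
import numpy as np

def detection_map(target_hsi, eps=1e-8):
    """Flag pixels of the sparse target image (A_t C)^T whose spectral
    l2 energy is non-negligible; eps is an illustrative tolerance."""
    return np.linalg.norm(target_hsi, axis=-1) > eps
```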
\section{Experiments and Analysis}
\label{sec:experimentss}
To obtain the same scene as in Fig. 8 in \cite{Swayze10245}, we have concatenated two sectors labeled ``f970619t01p02\_r02\_sc03.a.rf'' and ``f970619t01p02\_r02\_sc04.a.rfl'' from the online Cuprite data \cite{CupriteHSIOnline}. We shall call the resulting HSI the ``Cuprite HSI'' (see Fig. \ref{fig:cupriteHSI}). The Cuprite HSI covers a mining district area that is well understood mineralogically \cite{Swayze10245, JGRE:JGRE1642}. It contains well-exposed zones of advanced argillic alteration, consisting principally of kaolinite, alunite, and hydrothermal silica. It was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) on June 23, 1995, at local noon and under high visibility conditions, by a NASA ER-2 aircraft flying at an altitude of 20 km. It is a $1024 \times 614$ image and consists of 224 contiguous spectral bands (each about 0.01 $\mu$m wide), with wavelengths ranging from 0.4046 to 2.4573 $\mu$m. Prior to analysis of the Cuprite HSI, the spectral bands 1-4, 104-113, and 148-167 are removed due to water absorption in those bands. As a result, a total of 186 bands are used.
\begin{figure}[!tbp]
\centering
\minipage{2.5\textwidth}
\hspace{-1.6cm}\includegraphics[width=0.5\linewidth]{ggh_new-eps-converted-to.pdf}
\endminipage
\caption{The Cuprite HSI of size $1024 \times 614 \times 186$. We exhibit the mean power in dB over the 186 spectral bands.}\label{fig:cupriteHSI}
\end{figure}
\begin{figure}[!tbp]
\minipage{0.50\textwidth}
\includegraphics[width=\linewidth]{alunite_samples_new2-eps-converted-to.pdf}
\endminipage\hfill
\minipage{0.50\textwidth}
\includegraphics[width=\linewidth]{original_image_alunite_new_cube.png}
\endminipage\hfill
\centering
\minipage{0.50\textwidth}
\includegraphics[width=\linewidth]{original_image_alunite_new-eps-converted-to.pdf}
\endminipage
\caption{A $100 \times 100 \times 186$ ``Alunite HSI'' generated by 72 pure alunite samples picked from the Cuprite HSI (72 pixels from the solid red ellipses in Fig. \ref{fig:cupriteHSI}). For the third image, we exhibit the mean power in dB over the 186 spectral bands.}\label{fig:image2}
\end{figure}
\begin{figure}[!]
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{alunite_new-eps-converted-to.pdf}
\begin{center}
\bf (a)
\end{center}
\endminipage\hfill
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{kaolinite-eps-converted-to.pdf}
\begin{center}
\bf (b)
\end{center}
\endminipage
\caption{Three-point band depth images for both {\bf(a)} alunite and {\bf(b)} kaolinite.}\label{fig:image3}
\end{figure}
$\\$
$\\$
By referring to Figure 8 in \cite{Swayze10245}, we picked 72 pure alunite pixels from the Cuprite HSI (the 72 pixels located inside the solid red ellipses in Fig. \ref{fig:cupriteHSI}) and generated a $100 \times 100 \times 186$ HSI zone formed by these pixels. We shall call this small HSI zone the ``Alunite HSI'' (see Fig. \ref{fig:image2}); it will be used for the target evaluations later. We incorporate in this zone seven target blocks (each of size $6\times 3$), placed in a long convoy formation and all formed by the same target $\mathbf{t}$ that we picked from the Cuprite HSI, which will constitute our target of interest to be detected. The target $\mathbf{t}$ replaces a fraction $\alpha\in[0.01, \, 1]$ of the background (all blocks have the same $\alpha$ value); specifically, the following values of $\alpha$ are considered: 0.01, 0.02, 0.05, 0.1, 0.3, 0.5, 0.8, and 1.
\\
In the experiments, two kinds of targets $\mathbf{t}$ are considered:
\begin{enumerate}
\item `$\mathbf{t}$' that represents the buddingtonite target,
\item `$\mathbf{t}$' that represents the kaolinite target.
\end{enumerate}
More precisely, our detector $\left(\mathbf{A}_t\mathbf{C}\right)^T$ is evaluated on two target detection scenarios:
\begin{itemize}
\item{\bf Evaluation on an easy target (buddingtonite target):} It has been noted by Swayze et al. \cite{Swayze10245} that the ammonia in the tectosilicate mineral buddingtonite has a distinct N-H combination absorption at 2.12 $\mu$m, a position similar to that of the cellulose absorption in dried vegetation, from which it can be distinguished by its narrower band width and asymmetry. Hence, the buddingtonite mineral can be considered an ``easy target''
because it does not look like any other mineral with its distinct 2.12 $\mu$m absorption (that is, it is easily recognized by its unique 2.12 $\mu$m absorption band).
\\
In the experiments, we consider the ``buddingtonite'' pixel at location (731, 469) in the Cuprite HSI (the center of the dash-dotted yellow circle in Fig. \ref{fig:cupriteHSI}) as the buddingtonite target $\mathbf{t}$ to be incorporated in the Alunite HSI for $\alpha\in[0.01, \, 1]$.
$\\$
\item{\bf Evaluation on a challenging target (kaolinite target)}\footnote{We thank Dr. Gregg A. Swayze from the United States Geological Survey (USGS), who suggested that we evaluate our model \eqref{eq:convex_model} on the distinction between the alunite and kaolinite minerals.}{\bf :} The paradigm in military hyperspectral imagery applications seems to center on finding the target while ignoring all the rest. Sometimes, that rest is important, especially if the target is well matched to the surroundings. It has been shown by Swayze et al. \cite{Swayze10245} that the alunite and kaolinite minerals have overlapping spectral features, and thus discrimination between these two minerals is very challenging \cite{doi:10.1029/2002JE001847, Swayze10245}.
\\
In the experiments, we consider the ``kaolinite'' pixel at location (672, 572) in the Cuprite HSI (the center of the dotted blue circle in Fig. \ref{fig:cupriteHSI}) as the kaolinite target $\mathbf{t}$ to be incorporated in the Alunite HSI for $\alpha\in[0.01, \, 1]$.
\\
Fig. \ref{fig:image3}(a) exhibits a three-point band depth image for our alunite background, showing the locations where an absorption feature centered near 2.17 $\mu$m is expressed in spectra of surface materials. Fig. \ref{fig:image3}(b) exhibits a three-point band depth image for our kaolinite target, showing the locations where an absorption feature centered near 2.2 $\mu$m is expressed in spectra of surface materials. As we can observe, there is only a subtle difference between the alunite and kaolinite three-point band depth images, showing that a successful spectral distinction between these two minerals is a very challenging task\footnote{We have been inspired by Fig. 8D-E in \cite{doi:10.1029/2002JE001847} to provide a close example of it in this chapter, as shown in Fig. \ref{fig:image3}.} \cite{doi:10.1029/2002JE001847}.
\end{itemize}
\subsection{Construction of the target dictionary $\mathbf{A}_t$}
\label{sec:dict_const}
\begin{figure}[!tbp]
\begin{center}
\large \bf When $\mathbf{A}_t$ is constructed from background samples
\end{center}
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha1_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha1_poordictionary_buddingtonite.png}
\endminipage\hfill
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha1_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha1_poordictionary_kaolinite.png}
\endminipage
\caption{Evaluation of our detector $(\mathbf{A}_t\mathbf{C})^T$ for detecting the buddingtonite and kaolinite targets (for $\alpha=1$) from the Alunite HSI when $\mathbf{A}_t$ is constructed from some background pixels acquired from the Alunite HSI.} \label{fig:poor_dictionary}
\end{figure}
An important problem that requires very careful attention is the construction of an appropriate dictionary $\mathbf{A}_t$ in order to capture the target well and distinguish it from the background. If $\mathbf{A}_t$ does not represent the target of interest well, our model in \eqref{eq:convex_model} may fail to discriminate the targets from the background. For example, Fig. \ref{fig:poor_dictionary} shows the detection results of our detector $(\mathbf{A}_t\mathbf{C})^T$ when $\mathbf{A}_t$ is constructed from some of the background pixels in the Alunite HSI. We can clearly observe that our detector is not able to capture the targets, mainly because of the poorly constructed dictionary $\mathbf{A}_t$.
\\
In reality, the target present in the HSI can be heavily affected by atmospheric conditions, sensor noise, material composition, and scene geometry, which may produce large variations in the target spectra. In view of these real effects, it is very difficult to model the target dictionary $\mathbf{A}_t$ well. This raises the question of ``{\em how should these effects be dealt with?}''.
\\
Some scenarios for modelling the target dictionary have been suggested in the literature. For example, by using physical models and the MODTRAN atmospheric modeling program \cite{Berk89}, target spectral signatures can be generated under various atmospheric conditions. For simplicity, we handle this problem in this work by exploiting target samples available in some online spectral libraries. More precisely, $\mathbf{A}_t$ can be constructed via the United States Geological Survey (USGS - Reston) spectral library \cite{Clark93}. Alternatively, one can use the Advanced Spaceborne Thermal Emission and Reflection (ASTER) spectral library \cite{Baldridge09}, which includes data from the USGS spectral library, the Johns Hopkins University (JHU) spectral library, and the Jet Propulsion Laboratory (JPL) spectral library.
\\
\\
Three buddingtonite samples are available in the ASTER spectral library and will be used to construct the dictionary $\mathbf{A}_t$ for the detection of our buddingtonite target (see Fig. \ref{fig:dictionaries_kao_bud} (first column)); whereas six kaolinite samples are available in the USGS spectral library and will be used to construct $\mathbf{A}_t$ for the detection of our kaolinite target (see Fig. \ref{fig:dictionaries_kao_bud} (second column)).
\\
\\
In practice, it is usually difficult to find, for a specific given target, a sufficient number of available samples in the online spectral libraries. Hence, the dictionary $\mathbf{A}_t$ may still not be sufficiently selective and accurate. This is the main reason why problem \eqref{eq:convex_model} may fail to capture the targets well from the background.
\begin{figure}[!tbp]
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{dictionary_buddingtonite-eps-converted-to.pdf}
\endminipage\hfill
\minipage{0.52\textwidth}
\includegraphics[width=\linewidth]{dictionary_kaolinite-eps-converted-to.pdf}
\endminipage
\caption{Both target dictionaries for the detection of buddingtonite and kaolinite.}\label{fig:dictionaries_kao_bud}
\end{figure}
\newpage
\subsection{Target Detection Evaluation}
We now aim to qualitatively evaluate the target detection performance of our detector $(\mathbf{A}_t\mathbf{C})^T$ on both the buddingtonite and kaolinite target detection scenarios when $\mathbf{A}_t$ is constructed from target samples available in the online spectral libraries (from Fig. \ref{fig:dictionaries_kao_bud}). As can be seen from Fig. \ref{fig:detection_evaluations}, our detector is able to detect the buddingtonite targets with no false alarms except when $\alpha \leq 0.1$, where a lot of false alarms appear.
\\
For the detection of kaolinite, it was difficult to have a clean detection (that is, without false alarms) especially for $0.1<\alpha\leq1$. This is to be expected since the kaolinite target is well matched to the alunite background (that is, kaolinite and alunite have overlapping spectral features), and hence, discrimination between them is very challenging.
\\
\\
It is interesting to note (results are omitted in this chapter) that if we consider $\mathbf{A}_t = \mathbf{t}$ (that is, we are searching for the exact signature $\mathbf{t}$ in the Alunite HSI), both the buddingtonite and even the kaolinite targets can be detected with no false alarms for $0.1<\alpha\leq1$. When $\alpha\leq0.1$, a lot of false alarms appear, but the detection performance for both the buddingtonite and kaolinite targets remains better than that in Fig. \ref{fig:detection_evaluations}.
\begin{figure}[!tbp]
\begin{center}
\large \bf When $\mathbf{A}_t$ is constructed from target samples
\end{center}
\begin{center}
\large$\bf \boldsymbol{\alpha} = 1$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha1_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha1_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha1_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha1_onlinedictionary_kaolinite.png}
\endminipage
\begin{center}
\large$\bf \boldsymbol{\alpha} = 0.8$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha08_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha08_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha08_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha08_onlinedictionary_kaolinite.png}
\endminipage
\end{figure}
\begin{figure}[!tbp]
\begin{center}
\large$\bf \boldsymbol{\alpha} = 0.5$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha05_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha05_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha05_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha05_onlinedictionary_kaolinite.png}
\endminipage
\begin{center}
\large$\bf \boldsymbol{\alpha} = 0.3$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha03_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha03_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha03_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha03_onlinedictionary_kaolinite.png}
\endminipage
\end{figure}
\begin{figure}[!tbp]
\begin{center}
\large$\bf \boldsymbol{\alpha} = 0.1$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha01_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha01_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha01_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha01_onlinedictionary_kaolinite.png}
\endminipage
\begin{center}
\large$\bf \boldsymbol{\alpha} = 0.05$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha005_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha005_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha005_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha005_onlinedictionary_kaolinite.png}
\endminipage
\end{figure}
\begin{figure}[!tbp]
\begin{center}
\large$\bf \boldsymbol{\alpha} = 0.02$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha002_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha002_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha002_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha002_onlinedictionary_kaolinite.png}
\endminipage
\begin{center}
\large$\bf \boldsymbol{\alpha} = 0.01$
\end{center}
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha001_buddingtonite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha001_onlinedictionary_buddingtonite_cube_new.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{originalHSI_alpha001_kaolinite_cube.png}
\endminipage\hfill
\minipage{0.54\textwidth}
\includegraphics[width=\linewidth]{Ourdetector_alpha001_onlinedictionary_kaolinite.png}
\endminipage
\caption{Evaluation of our detector $(\mathbf{A}_t\mathbf{C})^T$ for detecting the buddingtonite and kaolinite targets (for $\alpha \in [0.01, \, 1]$) when $\mathbf{A}_t$ is constructed from target samples in the online spectral libraries.}\label{fig:detection_evaluations}
\end{figure}
\newpage
\section{Conclusion and Future Work}
In this chapter, the well-known Robust Principal Component Analysis (RPCA) is exploited for target detection in hyperspectral imagery. Under assumptions similar to those used in RPCA, a given hyperspectral image (HSI) has been decomposed into the sum of a low rank background HSI (denoted by $\mathbf{L}$) and a sparse target HSI (denoted by $\mathbf{E}$) that contains only the targets (with the background suppressed) \cite{8677268}. In order to alleviate the inadequacy of RPCA in distinguishing the true targets from the background, we have incorporated into the RPCA imaging the prior target information that can often be provided to the user. In this regard, we have constructed a pre-learned target dictionary $\mathbf{A}_t$, and thus the given HSI is decomposed as the sum of a low rank background HSI $\mathbf{L}$ and a sparse target HSI denoted by $\left(\mathbf{A}_t\mathbf{C}\right)^T$, where $\mathbf{C}$ is a sparse activation matrix.
\\
In this work, the sparse component $\left(\mathbf{A}_t\mathbf{C}\right)^T$ was the only object of interest, and thus it was used directly for the detection. More precisely, the targets are deemed to be present at the non-zero entries of the sparse target HSI. Hence, a novel target detector is developed, which is simply a sparse HSI generated automatically from the original HSI, but containing only the targets of interest with the background suppressed.
\\
The detector is evaluated in real experiments, and the results demonstrate its effectiveness for hyperspectral target detection, especially in detecting targets that have overlapping spectral features with the background.
\\
\\
The $l_1$-norm regularizer, a continuous and convex surrogate, has been studied extensively in the literature \cite{Tibshirani96, 10.2307/3448465} and has been applied successfully to many applications, including signal/image processing, biomedical informatics, and computer vision \cite{doi:10.1093/bioinformatics/btg308, 4483511, Beck:2009:FIS:1658360.1658364, 4799134, Ye:2012:SMB:2408736.2408739}. Although $l_1$-norm based sparse learning formulations have achieved great success, they have been shown to be suboptimal in many cases \cite{CandAs2008, Zhanggg, zhang2013}, since the $l_1$ norm is still too far from the ideal $l_0$ norm. To address this issue, many non-convex regularizers, interpolated between the $l_0$ norm and the $l_1$ norm, have been proposed to better approximate
the $l_0$ norm. They include the $l_q$ norm ($0<q<1$) \cite{Fourc1624}, the Smoothly Clipped Absolute Deviation \cite{Fan01}, the Log-Sum Penalty \cite{Candes08}, the Minimax Concave Penalty \cite{zhang2010aa}, the Geman Penalty \cite{392335, 5153291}, and the Capped-$l_1$ penalty \cite{Zhanggg, zhang2013, Gong12}.
\\
In this regard, starting from problem \eqref{eq:convex_model}, it will be interesting to use proxies other than the $l_{2,1}$ norm, closer to $l_{2,0}$, in order to possibly alleviate the $l_{2,1}$ artifact as well as the manual selection problem of both $\tau$ and $\lambda$.
However, although non-convex regularizers (penalties) are appealing in sparse learning, it remains a major challenge to solve the corresponding non-convex optimization problems.
\begin{acknowledgement}
The authors would like to greatly thank Dr.~Gregg A. Swayze from the United States Geological Survey (USGS) for his time in providing helpful remarks about the Cuprite data and especially about the buddingtonite, alunite and kaolinite minerals.
\end{acknowledgement}
\bibliographystyle{IEEEbib}
\section{Introduction}
Throughout this article, we fix a differential field $k$ of characteristic zero. Let $E$ be a differential field extension of $k$ and let $'$ denote the derivation on $E$. We say that $E$ is a \textit{liouvillian extension} of $k$ if $E=k(t_1,\cdots,t_n)$ and there is a tower of differential fields
\begin{equation*}k=k_0\subset k_1\subset\cdots\subset k_n=E\end{equation*}
such that for each $i$, $k_i=k_{i-1}(t_i)$ and either $t'_i\in k_{i-1}$ or $t'_i/t_i\in k_{i-1}$ or $t_i$ is
algebraic over $k_{i-1}$. If $E=k(t_1,\cdots,t_n)$ is a liouvillian extension of $k$ such that $t'_i\in k_{i-1}$ for each $i$ then we call $E$ an \textit{iterated antiderivative extension} of $k$. A solution of a differential equation over $k$ is said to be liouvillian over $k$ if the solution belongs to some liouvillian extension of $k$.
Let $\mathcal{P}$ be a polynomial in $n+1$ variables over $k$. We are concerned with the liouvillian solutions of the differential equation \begin{equation}\label{firstdiffereqn}\mathcal{P}(y,y',\cdots, y^{(n)})=0.\end{equation} We prove in theorem \ref{solutiondense} that \begin{itemize}
\item[] if $E$ is a liouvillian extension of $k$ and $K$ is a differential field intermediate to $E$ and $k$ then $K=k\langle u_1,\cdots,u_l\rangle$ where for each $i$, the element $u_i$ satisfies a linear differential equation over $k\langle u_1,\cdots, u_{i-1}\rangle$. Moreover if $E$ is an iterated antiderivative extension of $k$ having the same field of constants as $k$ then each $u_i$ can be chosen so that $u'_i\in k(u_1,\cdots, u_{i-1})$, that is, $K$ is also an iterated antiderivative extension of $k$. \end{itemize}
Our result regarding iterated antiderivative extensions generalises the main result of \cite{Sri2010} to differential fields $k$ having a non algebraically closed field of constants. The main ingredient used in the proof of our theorem is lemma \ref{ldesol}, using which we also obtain the following interesting results concerning solutions of non linear differential equations.
\begin{itemize}
\item[A.] In remark \ref{order2}, we show that if $E$ is an iterated antiderivative extension of $k$ having the same field of constants and if $y\in E$ and $y\notin k$ satisfies a differential equation $y'=\mathcal{P}(y)$, where $\mathcal{P}$ is a polynomial in one variable over $k$, then the degree of $\mathcal{P}$ must be less than or equal to $2$. \\
\item[B.] Let $C$ be an algebraically closed field of characteristic zero with the trivial derivation. In proposition \ref{M-B}, we prove that for any rational function $\mathcal{R}$ in one variable over $C$, the differential equation $y'=\mathcal{R}(y)$ has a non constant liouvillian solution $y$ if and only if $1/\mathcal{R}(y)$ is of the form $\partial z/\partial y$ or $(1/az)(\partial z/ \partial y)$ for some $z\in C(y)$ and for some non zero element $a\in C$. This result generalises and provides a new proof for a result of Singer (see \cite{Sin1975}, corollary 2). We also prove that
for any polynomial $\mathcal{P}$ in one variable over $C$ such that the degree of $\mathcal{P}$ is greater than or equal to $3$ and that $\mathcal{P}$ has no repeated roots, the differential equation $(y')^2=\mathcal{P}(y)$ has no non constant liouvillian solution over $C$. This result appears as proposition \ref{hyperellipticcurves} and it generalises an observation made by Rosenlicht \cite{M.Ros} concerning non constant liouvillian solutions of the elliptic equation $(y')^2=y^3+ay+b$ over complex numbers with a non zero discriminant. \\
\item[C.] Using theorem \ref{solutiondense}, one can construct a family of differential equations with only algebraic solutions: Let $\a_2,\a_3,\cdots,\a_n\in k$ be such that $x'\neq \a_2$ and $x'\neq \a_3$ for any $x\in k$. Let $\overline{k}$ be an algebraic closure of $k$ and let $E$ be a liouvillian extension of $\overline{k}$ with $C_E=C_{\overline{k}}$. We prove in proposition \ref{algsolutions} that if there is an element $y\in E$ such that $$y'=\a_ny^n+\cdots+\a_3y^3+\a_2y^2$$ then $y\in \overline{k}$.
\end{itemize}
In a future publication, the author hopes to develop the techniques in this paper further to provide an algorithm to solve the following problem, which appears as ``Problem 7'' in \cite{Sin1990}: Give a procedure to decide if a polynomial first order differential equation $\mathcal{P}(y,y')=0$, over the ordinary differential field $\mathbb{C}(x)$ with the usual derivation $d/dx$, has an elementary solution and to find one if it does.
{\bf Preliminaries and Notations.}
A \textsl{derivation} of the field $k$, denoted by $'$, is an additive endomorphism of $k$ that satisfies the Leibniz law $(xy)'=x'y+xy'$ for every $x,y\in k$. A field equipped
with a derivation map is called a \textsl{differential field}. For any $y\in k$, we will denote the first and second derivatives of $y$ by $y'$ and $y''$ respectively and for $n\geq 3$, the $n$th derivative of $y$ will be denoted by $y^{(n)}$. The set of \textsl{constants} $C_E$ of a differential field $E$ is the kernel of the endomorphism $'$ and it can be seen that the set of constants is a differential subfield of $E$. Let $E$ and $k$ be differential fields. We say that $E$ is a \textsl{differential field extension} of $k$ if $E$ is a field extension of $k$ and the restriction of the derivation of $E$ to $k$ coincides with the derivation of $k$. Whenever we write ``let $k\subset E$ be differential fields'', we mean that $E$ is a differential field extension of $k$, and we write $y\in E-k$ to mean that $y\in E$ and $y\notin k$. The transcendence degree of a field extension $E$ of $k$ will be denoted by tr.d$(E|k)$. If $E$ is a field (respectively a differential field), $M$ is a subfield (respectively a differential subfield) of $E$ and $K$ is a subset of $E$ then the smallest subfield (respectively the smallest differential subfield) of $E$ containing both $M$ and $K$ will be denoted by $M(K)$ (respectively by $M\langle K\rangle$). It is easy to see that the field $M(K)$ is a differential field if both $M$ and $K$ are differential subfields of a differential field. It is well known that every derivation of $k$ can be uniquely extended to a derivation of any algebraic extension of $k$ and in particular, to any algebraic closure $\overline{k}$ of $k$. Thus if $E$ is a differential field extension of $k$ and tr.d$(E|k)=1$ then for any $y\in E$ transcendental over $k$ the derivation $\partial/\partial y$ on $k(y)$ uniquely extends to a derivation of $E$.
We refer the reader to \cite{Kap}, \cite{Mag} and \cite{Put-Sin} for the basic theory of differential fields. For the reader's convenience and for easy reference, we record a basic result concerning liouvillian extensions in the following theorem.
\begin{theorem}\label{construction} Let $k$ be a differential field, $E$ be a field extension of $k$ and $w\in E$ be transcendental over $k$. For an element $\a\in k$, if there is no element $x\in k$ such that $x'=\a$, then there is a unique derivation on $k(w)$ such that $w'=\a$ and $C_{k(w)}=C_k$. Similarly, if there is no element $x\in k$ with $x'=l\a x$ for some positive integer $l$, then there is a unique derivation on $k(w)$ such that $w'=\a w$ and $C_{k(w)}=C_k$. Finally, if $E$ is a differential field extension of $k$ having an element $u$ algebraic over $k$ such that $u'\in k$ then there is an element $\b\in k$ such that $u'=\b'$. \end{theorem}
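As a simple illustration of the first statement (the example is ours): let $k=\C(x)$ with the derivation $d/dx$ and take $\a=1/x$. No element $f\in\C(x)$ satisfies $f'=1/x$, since the derivative of a rational function has zero residue at every pole. Theorem \ref{construction} therefore yields a unique derivation on $\C(x)(w)$ with $w'=1/x$ and $C_{\C(x)(w)}=\C$; here $w$ plays the role of $\log x$ and $\C(x)(w)$ is an iterated antiderivative extension of $\C(x)$.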
\section{Role of First Order Equations}
In this section we prove our main result. The proof is based on the theory of linearly disjoint fields and in particular, we will heavily rely on the following lemma.
\begin{lemma}\label{ldesol}
Let $k\subset E$ be differential fields and let $K$ and $M$ be differential fields intermediate to $E$ and $k$ such that $K$ and $M$ are linearly disjoint over $k$ as fields. Suppose that there is an element $y\in M(K)-M$ such that $y'=fy+g$ for some $f,g\in M$. Then there exist a monic linear differential polynomial $L(Y)$ over $k$ of degree $\geq 1$ and an element $u\in K-k$ such that $L(u)=0$.
\end{lemma}
\begin{proof}
Let $\mathcal{B}:=\{e_\a\ |\ \a\in J\}$ be a basis of the $k-$vector space $M$. Since $M$ and $K$ are linearly disjoint, $\mathcal{B}$ is also a basis of $M(K)=K(M)$ as a $K-$vector space. There exist elements $\a_{1},\cdots,\a_r,$ $\a_{r+1},\cdots,\a_t\in J$ such that
\begin{equation}
y=\sum^r_{j=1}u_je_{\a_j},
\end{equation}
where $u_j\in K$ and for $i=1,\cdots, r,$
\begin{equation}
e'_{\a_i}=\sum^t_{p=1}n_{pi}e_{\a_p},\quad fe_{\a_i}=\sum^t_{p=1}l_{pi}e_{\a_p},\quad g=\sum^t_{p=1}m_pe_{\a_p},
\end{equation}
where each $n_{pi}, l_{pi}$ and $m_p$ belongs to $k$.
Then \begin{align*}
y'&=\sum^r_{j=1}u'_je_{\a_j}+\sum^r_{j=1}u_j\sum^t_{p=1}n_{pj}e_{\a_p}\\
&=\sum^r_{j=1}u'_je_{\a_j}+\sum^t_{p=1}\left(\sum^r_{j=1}n_{pj}u_j\right)e_{\a_p}\\
&=\sum^r_{p=1}\left(u'_p+\sum^r_{j=1}n_{pj}u_j\right)e_{\a_p}+\sum^t_{p=r+1}\left(\sum^r_{j=1}n_{pj}u_j\right)e_{\a_p}
\end{align*}
and $fy=\sum^r_{j=1}u_jfe_{\a_j}$ $=\sum^t_{p=1}\left(\sum^r_{j=1}l_{pj}u_j\right)e_{\a_p}$.
Now, from the equation $y'=fy+g$, we obtain for $p=1,\cdots,r,$ that
\begin{equation}
u'_p+\sum^r_{j=1}n_{pj}u_j=\sum^r_{j=1}l_{pj}u_j+m_p.
\end{equation}
Consider the $k-$vector space $\mathcal{K}:=\operatorname{span}_k\{1,u_1,\cdots,u_r\}$. From the above equation, it is clear that $\mathcal{K}$ is a (finite dimensional) differential $k-$vector space and that $k\subset \mathcal{K}\subset K$. Then for every $u\in \mathcal{K}-k$, there exists a non negative integer $n$ such that $u=u^{(0)}, u^{(1)},\cdots,u^{(n)}$ are linearly independent over $k$ and that $u^{(n+1)}=\sum^n_{i=0}a_iu^{(i)}$, for some $a_i\in k$. Hence the lemma is proved.
\end{proof}
\begin{theorem}\label{solutiondense}
Let $E$ be a liouvillian extension field of $k$ and let $K$ be a differential field intermediate to $E$ and $k$. Then \begin{itemize} \item [I.]$K=k\langle u_1,\cdots,u_l\rangle$ where for each $i$, $u_i$ satisfies a linear homogeneous differential equation over $k\langle u_1,\cdots, u_{i-1}\rangle$.\\ \item[II.] If $E=k(t_1,\cdots,t_n)$ is an iterated antiderivative extension of $k$ with $C_E=C_k$ then $K$ is an iterated antiderivative extension of $k$. That is, $K=k(v_1,\cdots,v_m)$, where $v'_i\in k(v_1,\cdots,v_{i-1})$.\\ \item[III.] If $k$ is an algebraically closed field then there is an element $u\in K-k$ such that $u'=au+b$ for some $a,b\in k$.\\ \item[IV.] Finally, if $k$ is an algebraically closed field and $c'=0$ for all $c\in k$, then there is an element $z\in K-k$ such that either $z'=1$ or $z'=az$ for some element $a\in k$. \end{itemize}
\end{theorem}
\begin{proof}
We will use an induction on tr.d$(E|k)$ to prove item I.
For any differential field $k^*$ intermediate to $E$ and $k$, we observe that $E$ is a liouvillian extension of $k^*$ and therefore for differential fields $E\supset K\supset k^*$ such that tr.d$(E|k^*)<$tr.d$(E|k)$, we shall assume that item I of our theorem holds.
If $u\in K$ is algebraic over $k$ then the vector space dimension $[k(u):k]=m$ for some positive integer $m$. Since $k(u)$ is a differential field, the set $\{u, u',u'',\cdots, u^{(m)}\}$ is linearly dependent and therefore $u$ satisfies a linear differential equation over $k$. Since
$k$ is of characteristic zero, there exists an element $u_1\in K$ such that $k(u_1)$ is the algebraic closure of $k$ in $K$. Replacing $k$ with $k(u_1)$, if necessary, we shall assume that $k$ is algebraically closed in $K$ and that tr.d$(K|k)\geq 1$.
Let $E=k(t_1,\cdots,t_n)$, $k_0:=k$ and choose the largest positive integer $m$ so that $k_{m-1}$ and $K$ are linearly disjoint over $k$. Since $k$ is algebraically closed in $K$ and the characteristic of $k$ is zero, the field $k_{m-1}$ is algebraically closed in $k_{m-1}(K)$. Now our choice of $m$ guarantees that $t_m$ is not algebraic over $k_{m-1}$ and that $t_m$ is algebraic over $k_{m-1}(K)$. Let $\mathcal{P}(X)=\sum^n_{i=0}a_{i}X^i$ be the monic irreducible polynomial of $t_m$ over $k_{m-1}(K)$ and let $l$ be the smallest integer such that $a_{n-l}\notin k_{m-1}$. Expanding the equation $(\sum^n_{i=0}a_{i}t^i_m)'=0$ and comparing it with the equation $\sum^n_{i=0}a_{i}t^i_m=0$, we obtain
$$a'_{n-l}= \begin{cases} -(n-(l-1))ra_{n-(l-1)} &\ \mbox{if }\ t'_m=r\in k_{m-1} \\
lra_{n-l} &\ \mbox{if }\ t'_m/t_m=r\in k_{m-1}. \end{cases} $$
Thus there is an element $a_{n-l}\in k_{m-1}(K)-k_{m-1}$ such that either $a'_{n-l} \in k_{m-1}$ or $a'_{n-l}/a_{n-l}\in k_{m-1}$. Now we apply lemma \ref{ldesol} and obtain an element $u_2\in K-k$ such that $L(u_2)=0$, where $L(Y)=0 $ is a linear differential equation over $k$ of order $\geq 1$. Since $k$ is algebraically closed in $K$, such an element $u_2$ must be transcendental over $k$ and we now prove item I by setting $k^*=k\langle u_2\rangle$ and invoking the induction hypothesis.
Let $E$ be an iterated antiderivative extension of $k$ with $C_E=C_k$. We shall choose elements $t_1,\cdots,t_n\in E$ so that $E=k(t_1,\cdots,t_n)$ and that $t_1,\cdots,t_n$ are algebraically independent over $k$ (see \cite{Sri2010}, theorem 2.1). Let $L(Y)=0$ be a linear homogeneous differential equation over $k$ of smallest positive degree $n$ such that $L(u)=0$ for some $u\in K-k$. Let $u\in k_{i}-k_{i-1}$ and observe that the ring $k_{i-1}[t_i]$ is a differential ring with no non trivial differential ideals (see \cite{Put-Sin}, example 1.18). Consider the differential $k-$vector space $W:=\operatorname{span}_k\{u,u',\cdots,u^{(n-1)}\}$ and choose a nonzero element $g\in k_{i-1}[t_i]$ so that $gu^{(j)}\in k_{i-1}[t_i]$ for all $j$. Then the set $I=\{h\in k_{i-1}[t_i]\ |\ hW\subset k_{i-1}[t_i]\}$ is a non zero differential ideal of $k_{i-1}[t_i]$ and therefore $I=k_{i-1}[t_i]$. Thus $1\cdot u=u\in k_{i-1}[t_i]$ and we write $u=a_pt^p_i+\cdots+a_0$, where $a_p\neq 0$ and for each $j$, $a_j\in k_{i-1}$. Then $L(a_pt^p_{i}+\cdots+a_0)=0$ together with the fact that $t_i$ is transcendental over $k_{i-1}$ implies $L(a_p)=0$ and thus we have found a non zero solution $a_p$ of $L(Y)=0$ in $k_{i-1}$. Repeating this argument, one can then show that there is a non zero element $d\in k$ such that $L(d)=0$. This means that $L(Y)=L_{n-1}(L_1(Y))$, where $L_1(Y)=Y'-(d'/d)Y$ and $L_{n-1}(Y)=0$ is a linear homogeneous differential equation over $k$ of order $n-1$. Since $L_1(u)\in K$, from our choice of $u$ and $n$, we obtain that $L_1(u)\in k$. Let $v=u/d$ and observe that $v\in K-k$ and that $v'\in k$. Since $u\in k_i-k_{i-1}$, we know that $t_i$ is algebraic over $k_{i-1}(v)$ and since $C_E=C_k$, it follows from theorem \ref{construction} that $t_i\in k_{i-1}(v)$. Thus $k(v)\subset K\subset E=k(v)(t_1,\cdots,t_{i-1},t_{i+1},\cdots,t_n)$ and an induction on the transcendence degree, as in the proof of I, will prove II.
Suppose that the differential field $k$ is an algebraically closed field. Let $L(Y)=0$ be a linear homogeneous differential equation over $k$ of smallest positive degree $n$ such that $L(u)=0$ for some $u\in K-k$. Then $L(Y)=0$ admits a liouvillian solution and since $k$ is algebraically closed, the field of constants $C_k$ is algebraically closed as well. It follows from \cite{Sin1981}, theorem $2.4$ that $L(Y)=L_{n-1}(L_1(Y))$, where $L_{n-1}(Y)=0$ and $L_1(Y)=0$ are linear homogeneous differential equations over $k$ of degrees $n-1$ and $1$ respectively. Thus from the choice of $L$, we must have $L_1(u)\in k$.
Suppose that $k$ is an algebraically closed field with the trivial derivation. Assume that there is no $z\in K-k$ such that $z'=0$. We know from III that there is an element $u\in K-k$ such that $L(u)=0$, where $L(Y)=Y'-bY-c$ for some $b,c\in k$. If $b=0$ then let $z=u/c$ and observe that $z'=1$. Therefore we shall assume that $b\neq 0$. We will now show that there is an element $z\in k(u)-k$ such that $z'=mb z$ for some integer $m>0$. Suppose that there is no such integer $m$. Then we shall consider the field $E=k(u)(X)$, where $X$ is transcendental over $k(u)$ and define a derivation on $E$ with $X'=bX$. Then from theorem \ref{construction} we obtain that $C_E=C_{k(u)}$. But
$$\left(\frac{u}{X}\right)'=\left(\frac{-c}{bX}\right)'$$
and therefore $(bu+c)(1/bX)\in C_{k(u)}\subset k(u)$. Thus we obtain $X\in k(u)$, which is absurd. Hence the theorem is proved. \end{proof}
\begin{remark}\label{order2}
Let $E$ be an iterated antiderivative extension of $k$ with $C_E=C_k$. Then $E=k(t_1,\cdots,t_n)$, where $t_1,\cdots,t_n$ are algebraically independent over $k$ and furthermore, the differential field $E$ remains a purely transcendental extension over any of its differential subfields containing $k$ (see \cite{Sri2010}, theorems 2.1 \& 2.2). Suppose that $y\in E-k$ and that $y'=\mathcal{P}(y)$ for some polynomial $\mathcal{P}$ in one variable over the differential field $k$. We will now show that deg $\mathcal{P}\leq 2$. The field $k(y)$ is a differential field intermediate to $E$ and $k$ and therefore, by theorem \ref{solutiondense}, there is an element $z\in k(y)-k$ such that $z'\in k$. Moreover, the element $y$ is algebraic over the differential subfield $k(z)$. Then since $E$ is a purely transcendental extension of $k(z)$, we obtain that $y\in k(z)$. Thus $k(y)=k(z)$ and from L{\"u}roth's theorem, we know that there are elements $a,b,c,d\in k$ such that $ad-bc\neq 0$ and that $$z=\frac{ay+b}{cy+d}.$$
Let $\mathcal{P}(y)= a_ny^n+a_{n-1}y^{n-1}+\cdots+a_0$, $a_n\neq 0$ and $z'=r\in k$ and observe that \begin{equation*}(a'y+b'+a\mathcal{P}(y))(cy+d)-(c'y+d'+c\mathcal{P}(y))(ay+b)=r(cy+d)^2.\end{equation*}
Since $a_n\neq 0$ and $ad-bc\neq 0$, comparing the coefficients of $y^n$ in the above equation, we obtain that $n\leq 2$. On the other hand, the differential equation $Y'=-rY^2$ has a solution in $E-k$, namely, $1/z$ and thus the bound $n\leq 2$ is sharp.
\end{remark}
\section{First Order Non Linear Differential Equations}
Let $C$ be an algebraically closed field of characteristic zero and view $C$ as a differential field with the trivial derivation $c'=0$ for all $c\in C$. Consider the field $C(X)$ of rational functions in one variable $X$. We are interested in the non constant liouvillian solutions of the following differential equations \begin{enumerate} \item[(i)] $y'=\mathcal{R}(y)$, where $\mathcal{R}(X)\in C(X)$ is a non zero element and \item[(ii)] $(y')^2=\mathcal{P}(y),$ where $\mathcal{P}(X)\in C[X]$ and $\mathcal{P}(X)$ has no repeated roots in $C$. \end{enumerate}
Suppose that there is a non constant element $y$ in some differential field extension of $C$ satisfying either (i) or (ii). Then $C(y,y')$ must be a differential field, the element $y$ must be transcendental over $C$ and for any $z\in C(y)$, we have
\begin{equation}\label{two derivations} z'=y' \frac{\partial z}{\partial y}. \end{equation}
In the next proposition we will generalise a result of Singer (see \cite{Sin1975}, corollary 2) concerning elementary solutions of first order differential equations.
\begin{proposition}\label{M-B}
Let $C$ be an algebraically closed field of characteristic zero with the trivial derivation and let $\mathcal{R}(X)\in C(X)$ be a non zero element. The equation $Y'=\mathcal{R}(Y)$ has a non constant solution $y$ which is liouvillian over $C$ if and only if there is an element $z\in C(y)$ such that
$$\dfrac{1}{\mathcal{R}(y)}\quad\text{is of the form}\quad \frac{\partial z}{\partial y} \ \ \text{or}\ \
\dfrac{\frac{\partial z}{\partial y}}{a z}$$
for some non zero element $a\in C$.
\end{proposition}
\begin{proof}
Let $y$ be a non constant liouvillian solution of $Y'=\mathcal{R} (Y)$. From equation \ref{two derivations}, we observe that if $z\in C(y)$ and $z'=0$ then $\partial z/\partial y=0$ and since the field of constants of $C(y)$ with the derivation $\partial/\partial y$ equals $C$, we obtain that $z\in C$. Since $y$ is liouvillian, the differential field $M=C(y)$ is contained in some liouvillian extension field of $C$ and therefore, applying theorem \ref{solutiondense}, we obtain an element $z\in C(y)-C$ such that either $z'=1$ or $z'=az$ for some (non zero) $a\in C$. Now it follows immediately from equation \ref{two derivations} that $1/\mathcal{R}(y)$ has the desired form. Conversely, suppose that $y'=\mathcal{R}(y)$ and that $1/\mathcal{R}(y)$ equals $\partial z/\partial y$ or $(1/az)(\partial z/ \partial y)$ for some $z\in C(y)$ and for some non zero element $a\in C$. Then since $z'= y'(\partial z/\partial y)$, we obtain $z'=1$ or $z'=az$. From the fact that $C(y)\supset C(z)$, we see that $y$ is algebraic over the liouvillian extension $C(z)$ of $C$ and thus $C(y)$ is a liouvillian extension of $C$. \end{proof}
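A concrete instance of the proposition (our example, not taken from \cite{Sin1975}): for $\mathcal{R}(Y)=Y^2$ we have $$\frac{1}{\mathcal{R}(y)}=\frac{1}{y^2}=\frac{\partial}{\partial y}\left(-\frac{1}{y}\right),$$ which is of the first form with $z=-1/y$. Indeed, if $z'=1$ then $y=-1/z$ is a non constant liouvillian solution: $y'=z'/z^2=1/z^2=y^2$.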
\begin{proposition}\label{hyperellipticcurves}
Let $C$ be an algebraically closed field of characteristic zero with the trivial derivation. Let $\mathcal{P}(X)\in C[X]$ be a polynomial of degree $\geq 3$ with no repeated roots. Then the differential equation $(Y')^2=\mathcal{P}(Y)$ has no non constant liouvillian solution over $C$. In particular, the elliptic function $y$ such that $(y')^2=y^3+ay+b$, where $\frac{a^3}{27}+\frac{b^2}{4}\neq 0$, is not liouvillian.
\end{proposition}
\begin{proof}
Suppose that there is a non constant liouvillian solution $y$ satisfying the equation $(Y')^2=\mathcal{P}(Y)$. Since $C$ is algebraically closed, such an element $y$ must be transcendental over $C$. Applying theorem \ref{solutiondense} to the differential field $C(y,y')$, we obtain that there is an element $z\in C(y,y')-C$ such that either $z'=1$ or $z'=az$ for some $a\in C$. We will first show that there is no $z\in C(y,y')-C$ with $z'=az$ for any $a\in C$.
For ease of notation, let $P=\mathcal{P}(y)$ and observe that $P$ is not a square in $C[y]$ and thus $y'\notin C(y)$ and therefore $y'$ lies in a quadratic extension of $C(y)$.
Write $z=A+By'$, where $A,B\in C(y)$ and $B\neq 0$. Then, taking derivatives, we obtain $A'+B'y'+By''=z'$ and using equation \ref{two derivations}, we obtain $$y' \frac{\partial A}{\partial y} + P\frac{\partial B}{\partial y}+ \frac{B}{2} \frac{\partial P}{\partial y}=z'.$$
If there is a non zero element $z\in C(y,y')$ such that $z'=az$ then, by comparing coefficients of the above equation, we obtain \begin{align}&\frac{\partial A}{\partial y}=aB\label{first eqn}\qquad\text{and}\\ &P\frac{\partial B}{\partial y}+ \frac{B}{2} \frac{\partial P}{\partial y}=aA\label{second eqn}.\end{align}
Multiplying the equation \ref{second eqn} by $2B$ and using \ref{first eqn}, we obtain $\partial (B^2 P)/\partial y=\partial A^2/\partial y. $
Thus there is a non zero constant $c\in C$ such that \begin{equation}\label{hypeleqn}B^2P=A^2+c.\end{equation} Write $A=A_1/A_2$ and $B=B_1/B_2$, where $A_1,A_2,B_1,B_2$ are polynomials in $C[y]$ such that $A_1,A_2$ are relatively prime, $B_1,B_2$ are relatively prime and $A_2,B_2$ are monic. Then since $P$ has no square factors, it follows from the equation $$B^2_1A^2_2P=A^2_1B^2_2+cA^2_2B^2_2$$ that $A_2=B_2$. The equation \ref{first eqn}, together with our assumption that $A_2$ and $B_2$ are monic, forces $A_2=B_2=1$ and we obtain deg $A $= 1+deg $B$. Now from equation \ref{hypeleqn}, we have $$2\ \text{deg}\ A -2+\ \text{deg}\ P=2\ \text{deg} \ A$$ and thus we obtain deg $ P=2$, which contradicts our assumption.
Now let us suppose that $z\in C(y,y')-C$ is such that $z'=1$. Then $\partial A/\partial y=0$, which implies $A\in C$ and we have $(\partial B/\partial y)P+ (B/2) (\partial P/\partial y)=1$. Thus \begin{equation}\label{elliptic} \frac{\partial (B^2P)}{\partial y}=2B.\end{equation}
It is clear from the above equation that $B$ cannot be a polynomial and that $B$ cannot have a pole of order $1$. On the other hand if $c\in C$ is a pole of $B$ of order $m\geq 1$ then we shall write $B=R+\sum^m_{i=1}\b_i/(y-c)^i$, where $R\in C(y)$ and $c$ is not a pole of $R$. Since $P$ has no repeated roots, we conclude that $B^2P$ has a pole at $c$ of order $\geq 2m-1$ and thus $c$ is a pole of $\partial (B^2P)/\partial y$ of order $\geq 2m$. This contradicts equation \ref{elliptic}.
\end{proof}
Consider the polynomial ring $C[Y,X]$ in two variables. For constants $a\in C-\{0\}$ and $b\in C$, define a derivation $D$ of $C[Y,X]$ by setting $D(Y)=X$ and $D(X)=a/2$. Then $D(X^2-aY-b)=0$ and therefore the ideal $I=\langle X^2-(aY+b)\rangle$ is a differential ideal as well as a prime ideal. Thus the factor ring $C[Y,X]/I$ is a differential ring as well as a domain. Extend the derivation to the field of fractions $E$ of $C[Y,X]/I$. Now, in the differential field $E$, we have elements $x,y$ such that $y'=x$, $x'=a/2$ and $(y')^2=ay+b$. Let $z=2x/a$ and note that $z'=1$. Since tr.d$(E|C)=1$, the field $E$ is algebraic over $C(z)$ and thus $E$ is liouvillian over $C$. Thus the equation $(Y')^2=aY+b$ admits non constant liouvillian solutions. For the polynomial $f(X)=1-X^2\in \mathbb{C}[X]$, where $\mathbb{C}$ is the field of complex numbers, we have $(\frac{d}{dx}(\sin x))^2=f(\sin x)$ and thus the assumption deg $\mathcal{P}\geq 3$ is necessary for proposition \ref{hyperellipticcurves} to hold.
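The solution in this construction can also be exhibited explicitly (a direct check, supplied by us): if $z'=1$ then $$y=\frac{a}{4}z^2-\frac{b}{a}$$ satisfies $y'=\frac{a}{2}z$ and hence $$(y')^2=\frac{a^2}{4}z^2=a\left(y+\frac{b}{a}\right)=ay+b.$$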
In the next proposition, we shall construct differential equations whose liouvillian solutions, from liouvillian extensions having the same field of constants, are all algebraic over the ground field.
\begin{proposition}\label{algsolutions}
Let $\overline{k}$ be an algebraic closure of $k$ and let $E$ be a liouvillian extension of $\overline{k}$ with $C_E=C_{\overline{k}}$. Let $$F(Y,Y')=Y'-\a_nY^n-\cdots-\a_{2}Y^{2}-\a_1Y,$$
where $\a_1,\a_2,$ $\cdots,\a_n\in k$ and suppose that there is an element $y\in E-\overline{k}$ such that $F(y,y')=0$.
\begin{itemize}
\item [I.] If there is an element $\gamma\in \overline{k}$ such that $\gamma'=\a_1 \gamma$ then there is no $z\in \overline{k}(y)-\overline{k}$ such that $z'/z\in \overline{k}$.
\item[II.] If $\a_1=0$ and $x'\neq \a_2$ for all $x\in k$ then there is an element $w\in \overline{k}(y)-\overline{k}$ and an element $v\in \overline{k}$ such that $w'=\a_2$ and that $v'=\a_3$.
\end{itemize}
\end{proposition}
\begin{proof}
Suppose that there is an element $z\in\overline{k}(y)-\overline{k}$ such that $z'=\a z$ for some $\a\in \overline{k}$. Write $z=P/Q$ for some relatively prime polynomials $P=\sum^{s}_{i=0}a_iy^i$ and $Q=\sum^t_{j=0}b_jy^j$ in $\overline{k}[y]$ with $b_t=1$. Taking derivative, we obtain $\a PQ=P'Q-PQ'.$ Replacing $z$ and $\a$ by $1/z$ and $-\a$, if necessary, we shall assume that $b_0\neq 0$. Let $r$ be the smallest non negative integer such that $a_r\neq 0$. Then we have \begin{align}&\a(a_ry^{r}+ \cdots ) (b_0+\cdots)= ((a'_r+ra_r\a_1)y^r+\cdots)(b_0+\cdots)\\&-(b'_0+\cdots)(a_ry^r+\cdots) \notag
\end{align}
Now we compare the coefficients of $y^r$ and obtain that $$a'_rb_0-b'_0a_r+r\a_1a_rb_0=\a a_rb_0$$ and thus $\left(b_0 z/(\gamma^r a_r)\right)'=0.$ Since $C_E=C_{\overline{k}}$, this constant lies in $\overline{k}$ and hence $z\in\overline{k}$, a contradiction. Thus item I is proved.
Let us assume that $\a_1=0$ and that $x'\neq \a_2$ for all $x\in k$. From theorem \ref{solutiondense}, we obtain an element $z\in \overline{k}(y)-\overline{k}$ with $z'=\a z+\b$ for some $\a,\b\in \overline{k}$. We claim that there is an element $z_1\in \overline{k}(y)-\overline{k}$ with $z'_1\in \overline{k}.$ Write $z=P/Q$ for some relatively prime polynomials $P=\sum^{s}_{i=0}a_iy^i$ and $Q=\sum^t_{j=0}b_jy^j$ in $\overline{k}[y]$ with $b_t=1$. Taking derivative, we obtain \begin{equation}\label{Abel} \a PQ+\b Q^2=P'Q-PQ'.\end{equation}
Comparing the constant terms of the above equation, we have $\a a_0b_0+\b b^2_0=a'_0b_0-a_0b'_0$. If $b_0\neq 0$ then $(a_0/b_0)'=\a(a_0/b_0)+\b$ and thus $[z-(a_0/b_0)]'=\a[z-(a_0/b_0)]$. This contradicts item I of the proposition. Thus $b_0=0$ and we choose the smallest integer $m$ such that $b_m\neq 0$ and observe that $a_0\neq 0$. From equation \ref{Abel}, we have
\begin{align}\label{inhomo}&\a(b_my^{m}+ b_{m+1}y^{m+1}+\cdots ) (a_0+a_1y+\cdots)+\b(b^2_my^{2m}+ \\ & 2b_mb_{m+1}y^{2m+1}+\cdots ) = (a'_0+a'_1y+\cdots)(b_my^m+\cdots)
-(b'_my^m +\notag\\ &(mb_m\a_2+b'_{m+1})y^{m+1}+\cdots)(a_0+a_1y+\cdots) \notag
\end{align}
Compare the coefficients of $y^m$ and obtain $\a a_0b_m=a'_0b_m-a_0b'_m$. Therefore $(a_0/b_m)'=\a (a_0/b_m)$ and since $z'=\a z+\b$, it follows that
\begin{equation} \label{antipresent}\left(\frac{b_mz}{a_0}\right)'= \frac{\b b_m}{a_0}.\end{equation}
Taking $z_1=b_mz/a_0$, we prove our claim. Thus, in the above calculations, we shall suppose that $\a=0$. Then we have \begin{align}\label{antieqn}& \b(b^2_my^{2m}+ 2b_mb_{m+1}y^{2m+1}+\cdots )=(a'_0+ a'_1y+\cdots)(b_my^m+ b_{m+1}y^{m+1}\\ &+\cdots)- (a_0+a_1 y+\cdots) (b'_my^m+(mb_m\a_2+b'_{m+1})y^{m+1}+\cdots) \notag
\end{align}
Equating the coefficients of $y^m$, we obtain $a'_0b_m-b'_ma_0=0$ and thus for some non zero $c\in C_{\overline{k}}$, we have $ca_0=b_m$. Note that if $m\geq 2$ then comparing the coefficient of $y^{m+1}$, we obtain \begin{equation}\label{inhomo2}a'_0b_{m+1}-a_0b'_{m+1}+a'_1b_m-a_1b'_m=mb_m\a_2a_0\end{equation} and substituting $b_m=ca_0$, we obtain that $[(b_{m+1}+ca_1)/(mca_0)]'=\a_2$. Since $(b_{m+1}+ca_1)/(mca_0)\in \overline{k}$, we shall apply theorem \ref{construction} and obtain a contradiction to our assumption on $\a_2$. Therefore $m=1$ and from equation \ref{antieqn}, we have $f'=\a_2+c\b$, where $f=(b_{2}+ca_1)/(ca_0)$. Then $(f-cz)'=\a_2$, where $f-cz\in \overline{k}(y)$ and $f-cz\notin \overline{k}$. This proves the first part of item II.
Now, in all the above calculations, we shall assume that $\a=0$ and $\b=\a_2$. Then $f'=(1+c)\a_2$ and note that if $c\neq -1$ then $(f/(1+c))'=\a_2$ and again we obtain a contradiction to our assumption on $\a_2$. Thus $c=-1$. Now, $f'=0$ and consequently we have $a_1-b_2=c_1a_0$ for some constant $c_1\in C_{\overline{k}}$. We compare the coefficients of $y^3$ and obtain \begin{align}a'_2b_1-a_2b'_1+a'_1b_2-a_1b'_2+a'_0b_3-a_0b'_3= 2b_1b_2\a_2+2a_0b_2\a_2+a_0b_1\a_3\notag.\end{align} Substituting $b_1=-a_0$ and $b_2=a_1-c_1a_0$ in the above equation, we obtain
\begin{align}-a'_2a_0+a_2a'_0-c_1a'_1a_0+c_1a_1a'_0+a'_0b_3-a_0b'_3= -a^2_0\a_3\notag.\end{align}
Then it is easy to see that $v'=\a_3$ for $v=(a_2+c_1a_1+b_3)/a_0\in \overline{k}$. \end{proof}
\subsection*{Final Comments} One can slightly extend the proposition \ref{algsolutions} to include differential equations of the form $y'=\a_ny^n+\cdots+\a_3y^3+\a_2y^2+\a_1y$, when $\gamma'/\gamma=\a_1$ for some non zero $\gamma\in k$ and $x'\neq \gamma \a_2$ and $x'\neq \gamma^{2}\a_3$ for any $x\in k$. The proof that there are no elements in $E-\overline{k}$ satisfying the differential equation follows immediately from proposition \ref{algsolutions} once we observe that $(y/\gamma)'=$ $\gamma^{n-1}\a_n(y/\gamma)^n+\cdots+\gamma^2\a_3(y/\gamma)^3+\gamma\a_2(y/\gamma)^2$. Let $\mathbb{C}(x)$ be the field of rational functions in one variable over the field of complex numbers with the usual derivation $'=d/dx$. It is evident that $Y=-x$ is a liouvillian solution of
the differential equation \begin{equation}\label{abeleqn}Y'= \frac{1}{x^3} Y^3+\frac{1}{x^2} Y^2+\frac{1}{x} Y.\end{equation}
Since $x'/x=1/x$, we see that $y$ is a solution of equation \ref{abeleqn} if and only if $y/x$ is a solution of $Y'=(1/x)(Y^3+Y^2)$. Now it follows from proposition \ref{algsolutions} that equation \ref{abeleqn} has no solution in $E-\overline{\mathbb{C}(x)}$ for any liouvillian extension $E$ of $\overline{\mathbb{C}(x)}$ with $C_E=C_{\overline{\mathbb{C}(x)}}$.
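Indeed, the algebraic solution can be checked directly: substituting $Y=-x$ into the right hand side of equation \ref{abeleqn} gives $$\frac{1}{x^3}(-x)^3+\frac{1}{x^2}(-x)^2+\frac{1}{x}(-x)=-1+1-1=-1=(-x)'.$$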
\bibliographystyle{elsarticle-num}
{\section{Introduction}}
In conventional thermodynamics, phase transition is an important subject -- it explains how a substance undergoes a change from one phase to another. For black holes, the investigation of phase transitions was started long ago by Davies \cite{Davies:1978mf} and was taken up later in a more rigorous way by Hawking and Page \cite{Hawking:1982dh}. Over the years, many different ways have been proposed to characterize and explore the different phases of a black hole. Recently two approaches have attracted a lot of attention. In one approach, one mainly looks for the divergence of the specific heat and the inverse of the isothermal compressibility \cite{Banerjee1}--\cite{Lala:2012jp}. Here the black hole mass is treated as internal energy and the cosmological constant ($\Lambda$), if it appears, is taken as a constant number. In the other approach, which is confined to AdS black holes, the mass is treated as enthalpy and $\Lambda$ is regarded as a pressure term \cite{Dolan:2010ha,Dolan:2011xt}.
An interesting feature of the second approach is that the thermodynamical variables of the black hole satisfy a van der Waals like equation of state and the critical exponents are identical to those of the standard van der Waals system. This was first shown for the charged $AdS$ black hole \cite{Kubiznak:2012wp} and then for higher dimensional charged and rotating black holes \cite{Gunasekaran:2012dq}. Later a similar study was done for the charged topological black hole \cite{Zhao:2013oza}, quantum corrected black hole \cite{Ma:2014vxa}, dilaton black hole \cite{Dehghani:2014caa,xo}, Gauss-Bonnet black hole \cite{Cai:2013qga,Zou:2014mha} and also for Lovelock gravity \cite{Dolan:2014vba}-\cite{Xu:2014tja}, quasitopological gravity \cite{Mann} and conformal gravity \cite{Xu:2014kwa}. This extended phase space approach is also found to be quite successful in equilibrium state space geometry \cite{Hendi:2015hoa,Hendi:2016njy}.
A remarkable feature of these studies \cite{Kubiznak:2012wp}-\cite{xo} and similar other studies \cite{Zou:2013owa}-\cite{Azreg-Ainou:2014twa} (for a review, see \cite{Altamirano:2014tva,Kubiznak:2016qmn}) is that the values of the critical exponents are independent of the metric if one restricts oneself to any one of the two approaches. Thus there is a clear signature of universality, with the universality class being characterized by the approach one chooses to use. It is only natural to ask: what makes them universal?
Very recently we provided a method of exploring the critical exponents in the context of the first approach with a minimal amount of information about the metric \cite{Mandal:2016anc}. Here the black hole satisfies the usual first law of thermodynamics, its mass is interpreted as internal energy and $\Lambda$ is treated as a pure constant having no significant role to play. We demonstrated how it is possible to extract the critical exponents with just a single assumption -- a phase transition point exists. It turned out that this assumption makes the whole analysis very easy and transparent. In this paper our goal is the same but this time our study is addressed to the second approach. Here, we want to treat the black hole mass as enthalpy so that $\Lambda$ can be treated as an intensive variable (i.e., a pressure-like term).
Reassuringly, with this single assumption we have been able to calculate the critical exponents in an elegant manner. This satisfactorily complements our previous work \cite{Mandal:2016anc}.
Let us now mention what we achieve here. The present analysis clearly shows that if there is a van der Waals type phase transition for a set of black holes, the values of the critical exponents must be the same for that set, and these values can be calculated without any further assumption about the spacetime. Although this result is intuitively expected, it has not been shown explicitly earlier. As already mentioned, before calculating the critical exponents researchers devote significant effort to finding a spacetime which exhibits a liquid--vapour type phase transition. Our work shows that once such a spacetime is found, an explicit calculation of the critical exponents for that spacetime is not really necessary. In this sense, the present paper nicely complements the existing literature.
Before going into the main analysis, let us mention the essential inputs that will be used to achieve our goal. Since the general structures of the Smarr formula and of the first law of thermodynamics are universal, they are taken to be valid for all black holes. In addition, we assume that there is a van der Waals-like critical point about which two phases of the black hole exist. These simple ingredients turn out to be adequate for our purpose.
{\it Notations}: $C_V$ and $K_T$ denote the specific heat at constant volume ($V$) and the isothermal compressibility at constant temperature ($T$), respectively. $P$ stands for the pressure of the system. The critical values of the pressure, temperature and volume are denoted by $P_c$, $T_c$ and $V_c$. The order parameter, the difference between the volumes of the two phases, is denoted by $\eta$.
\vskip 2mm
\noindent
{\section{Critical Phenomena: a unified picture}}
The critical exponents ($\alpha, \beta, \gamma, \delta$) for a van der Waals thermodynamic system are defined as \cite{book}
\begin{eqnarray}
&&C_V \sim |t|^{-\alpha}~;
\nonumber
\\
&& \eta \sim v_l-v_s\sim |t|^{\beta}~;
\nonumber
\\
&& K_T = -\frac{1}{v}\Big(\frac{\partial v}{\partial P}\Big)_T \sim |t|^{-\gamma}~;
\nonumber
\\
&&P-P_c \sim |v-v_c|^{\delta}~;
\label{exponent}
\end{eqnarray}
where $t=T/T_c-1$. Here $v$ stands for the specific volume, which for a gaseous system is defined as the volume per molecule of the system \cite{book}. Usually, every quantity is calculated in terms of the specific volume. The subscripts $l$ and $s$ refer to the two phases of the system; e.g., in the case of ordinary thermodynamics, $l$ and $s$ may be the vapour and liquid phases.
The critical point is a point of inflection, characterized by the two conditions $(\partial P/\partial v)_T = 0 = (\partial^2P/\partial v^2)_T$, where $P$ is expressed as a function of the temperature and the specific volume.
For the AdS black hole case, it has been observed that the specific volume is a function of $r_+$, the location of the event horizon (e.g. see \cite{Kubiznak:2012wp} for the AdS-RN black hole; for other cases, references can be found in \cite{Altamirano:2014tva,Kubiznak:2016qmn}). On the other hand, as we shall see later, the thermodynamic volume $V$, in general, is also a function of $r_+$ only {\footnote{In general, the thermodynamic volume ($V$) is a function of $r_+$, the angular momentum and the charges of the black hole \cite{Cvetic:2010jb}. We shall explain later that the whole analysis of the van der Waals-like phase transition is done for fixed values of the angular momentum and charges, so, in principle, we can take $V$ as a function of $r_+$ only.}}. Therefore $P$ can also be expressed as a function of $T$ and $V$. Now, to find the critical exponent $\alpha$ one needs the entropy. This is obtained by taking the derivative of the Helmholtz free energy (expressed in terms of $T$ and $r_+$) with respect to $T$, keeping $r_+$ constant. For the other exponents, $P=P(T,v)$ has to be expanded around the critical point. One can verify that the expansion of $P(T,v)$ has the same form as that of $P(T,V)$, with the variable $(v/v_c)-1$ playing the same role as $V/V_c - 1$. Hence, to find the critical exponents, we can equally well work with $V$ instead of $v$. So in our analysis, at the critical point, we impose the conditions
\begin{equation}
\frac{\partial P}{\partial V}=0 = \frac{\partial^2P}{\partial V^2}~.
\end{equation}
Here, in taking the derivatives, $T$ is kept constant and $P$ is expressed as a function of $V$ and $T$. The same conditions were also adopted in the original work \cite{Kubiznak:2012wp} (see footnote $5$ of that paper).
We shall use these conditions together with the basic definitions to find the values of the critical exponents for an AdS black hole.
To start with, let us consider two general results which are universally established: one is the general form of the Smarr formula and the other is the first law of thermodynamics.
The Smarr formula for a $D$-dimensional AdS black hole, in general, can be taken as
\begin{equation}
M=f_1(D)TS-f_2(D)PV + f_3(D)XY~,
\label{Smarr}
\end{equation}
where $M$, $T$ and $S$ are the mass, temperature and entropy of the horizon, while $P$ and $V$ are the thermodynamic pressure and volume, respectively. The values of the unknown functions $f_1, f_2, f_3$ depend only on the spacetime dimension. For example, in $D=4$ one has $f_1=2$, $f_2=2$ and $f_3=1$. We shall see that their explicit forms are not required for our main purpose. In the above equation, $X$ stands for the electric potential $\Phi$ or the angular velocity $\Omega$ or both, whereas $Y$ refers to the charge $Q$ or the angular momentum $J$ or both. The corresponding first law of thermodynamics is given by
\begin{equation}
dM = TdS+XdY+VdP~.
\label{law1}
\end{equation}
Note that in the above expression $M$ is not the energy and the present form is not identical to the standard form of the first law\footnote{Discussions of the ``non-standard'' first law of thermodynamics and Smarr formula in the presence of $\Lambda$ can be found in \cite{Dolan:2010ha}, \cite{Dolan:2011xt}, \cite{Caldarelli:1999xj}-\cite{Dolan:2011jm}.}. But it can be expressed in the standard form:
\begin{equation}
dE=TdS+XdY-PdV
\end{equation}
if one identifies the energy of the black hole as
\begin{equation}
E=M-PV~.
\label{energy}
\end{equation}
Thus $M$ is the enthalpy of the black hole. Using (\ref{Smarr}), the Helmholtz free energy ($F$) of the system is found to be
\begin{equation}
F=E-TS=(f_1-1)TS - (f_2+1)PV+f_3XY~.
\label{Hfree}
\end{equation}
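For transparency, here is the short intermediate computation (our addition) behind this expression, obtained by substituting the Smarr formula (\ref{Smarr}) and the energy (\ref{energy}):

```latex
\begin{eqnarray}
F &=& E-TS = M-PV-TS
\nonumber
\\
&=& \big(f_1TS-f_2PV+f_3XY\big)-PV-TS
\nonumber
\\
&=& (f_1-1)TS-(f_2+1)PV+f_3XY~.
\nonumber
\end{eqnarray}
```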
Now, for a black hole, if one keeps $Y$ constant then the mass ($M$) has to be a function of the pressure ($P$) and the location of the horizon ($r_+$); i.e. $M=M(r_+,P)$. The reason is as follows. The location of the horizon is defined by the vanishing of the metric coefficient, and in general this coefficient is a function of $M$, $Y$ and $P$ (as $P=-\Lambda/8\pi$) for an AdS black hole. Therefore $r_+=r_+(M,Y,P)$, and hence for a fixed value of $Y$, $M$ is a function of $r_+$ and $P$. On the other hand, we have $V=V(r_+)$ and in general $X=X(r_+)$. Here it must be pointed out that, in general, $V$ depends not only on $r_+$ but also on the angular momentum or the charges of the black hole; i.e. on $Y$ (for example, see \cite{Cvetic:2010jb}). But since the analysis is done for a fixed value of $Y$, without any loss of generality we can take $V$ as a function of $r_+$ only. Of course, if there is any deviation from this, it has to be treated separately. The first law of thermodynamics (\ref{law1}) shows that $S$ can be expressed as a function of the black hole parameters; i.e. $S = S(M,Y,P)$. So by the above argument we have $S=S(r_+,P)$. Therefore the Smarr formula (\ref{Smarr}) shows that, for a fixed value of $Y$, one finds the solution for the pressure from the equation
\begin{equation}
P = \frac{f_1TS(r_+,P)}{f_2V(r_+)}-\frac{M(r_+,P)}{f_2V(r_+)}+\frac{f_3X(r_+)Y}{f_2V(r_+)}~,
\label{P}
\end{equation}
which has to be a function of both the temperature and the horizon radius; i.e. $P=P(T,r_+)$. Hence the free energy (\ref{Hfree}) as well as $S$ are in general functions of both $T$ and $r_+$. Since the entropy can be calculated from the relation $S=-(\partial F/\partial T)_V$ ($\equiv$ $-(\partial F/\partial T)_{r_+}$ as $V=V(r_+)$), the evaluated value can, in general, be a function of both the temperature and the horizon radius. Now we find from (\ref{Hfree})
\begin{eqnarray}
-S=\Big(\frac{\partial F}{\partial T}\Big)_{r_+} &=& (f_1-1)S + (f_1-1) T \Big(\frac{\partial S}{\partial T}\Big)_{r_+}
\nonumber
\\
&-&(f_2+1)V(r_+)\Big(\frac{\partial P}{\partial T}\Big)_{r_+}~.
\label{PF}
\end{eqnarray}
This gives,
\begin{eqnarray}
0 &=& f_1S + (f_1-1) T \Big(\frac{\partial S}{\partial T}\Big)_{r_+}
\nonumber
\\
&-&(f_2+1)V(r_+)\Big(\frac{\partial P}{\partial T}\Big)_{r_+}~.
\label{PF1}
\end{eqnarray}
For a large class of black holes, it is found that the pressure is a linear function of the temperature, i.e.
\begin{eqnarray}
P=p_0(r_+)+Tp_1(r_+)~.
\label{PTR}
\end{eqnarray}
We take this as an input for the AdS black holes, because it has been observed that those which exhibit van der Waals-like behaviour have the above feature when $P$ is expressed as a function of $T$ and $r_+$. The same also happens for the van der Waals equation of state itself (e.g. see \cite{book,Kubiznak:2012wp}). Another way to see it is as follows. In the later discussion we shall see that near the critical point $P$, to leading order, is a linear function of $t=(T/T_c)-1$. Since all the critical exponents are determined by the analysis very near the critical point, there is no loss of generality in taking an expression like (\ref{PTR}).
This, when substituted in (\ref{PF1}), gives
\begin{eqnarray}
f_1S+ (f_1-1) T \Big(\frac{\partial S}{\partial T}\Big)_{r_+}-f_1a(r_+)=0
\end{eqnarray}
where $a(r_+)=\frac{1}{f_1}(f_2+1)V(r_+)p_1(r_+)$. The solution of the above first order differential equation is
\begin{eqnarray}
S=a(r_+)+C(r_+)T^{-\frac{f_1}{f_1-1}}
\end{eqnarray}
where $C(r_+)$ is an integration constant. Since $f_1/(f_1-1)$ is always positive for $D\geq 4$, the last term on the R.H.S. diverges as $T\rightarrow 0$. So, to be consistent with the third law of thermodynamics, $C(r_+)$ must be zero. This leads to $(\partial S/\partial T)_V =(\partial S/\partial T)_{r_+}= 0$;
i.e. the specific heat at constant volume is $C_V = T(\partial S/\partial T)_{V} = 0$, and hence by (\ref{exponent}) one finds the value of the critical exponent $\alpha=0$.
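For completeness, a short intermediate step (our addition) showing how the quoted solution follows from the linear first-order equation by separation of variables:

```latex
% Separating variables in  f_1 S + (f_1-1) T (dS/dT) - f_1 a(r_+) = 0 :
\begin{eqnarray}
(f_1-1)\,T\,\frac{dS}{dT} &=& -f_1\big(S-a(r_+)\big)~;
\nonumber
\\
\frac{dS}{S-a(r_+)} &=& -\frac{f_1}{f_1-1}\,\frac{dT}{T}~;
\nonumber
\\
\ln\big(S-a(r_+)\big) &=& -\frac{f_1}{f_1-1}\,\ln T + {\textrm{const}}~.
\nonumber
\end{eqnarray}
```

Exponentiating the last line reproduces $S=a(r_+)+C(r_+)T^{-f_1/(f_1-1)}$, with $C(r_+)$ the $r_+$-dependent integration constant.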
Next we shall find the other critical exponents. Remember that the pressure here is in general a function of both $T$ and $r_+$. Since the thermodynamic volume $V$ is a function of $r_+$ only, we take $P$ as a function of $T$ and $V$ for our purpose. Let us expand $P(T,V)$ around the critical values $T_c$ and $V_c$:
\begin{eqnarray}
P = &&P_c + \Big[\Big(\frac{\partial P}{\partial T}\Big)_{V}\Big]_c (T-T_c)
\nonumber
\\
&&+ \frac{1}{2!} \Big[\Big(\frac{\partial^2 P}{\partial T^2}\Big)_{V}\Big]_c (T-T_c)^2
\nonumber
\\
&&+ \Big[\Big(\frac{\partial^2P}{\partial T\partial V}\Big)\Big]_c(T-T_c)(V-V_c)
\nonumber
\\
&&+\frac{1}{3!}\Big[\Big(\frac{\partial^3P}{\partial V^3}\Big)_T\Big]_c (V-V_c)^3+\dots~.
\end{eqnarray}
In the above expression, we have used the fact that at the critical point $(\partial P/\partial V)_c=0=(\partial^2P/\partial V^2)_c$. Defining two new variables $t=T/T_c-1$ and $\omega = V/V_c-1$, the above expression can be written as\begin{equation}
P=P_c+Rt+Bt\omega+D\omega^3+Kt^2;
\label{Pexpansion}
\end{equation}
where $R, B, D, K$, etc. are constants computed from the derivatives at the critical point. Here we have ignored the higher order terms since they are subleading near the critical point.
Now, for the usual van der Waals system below the critical point, there is a portion of the isotherm between the vapour and liquid phases where the two coexist; this portion is either unstable or metastable, and therefore this exact form of the isotherm is not observed experimentally. Hence the ends of the vapour and liquid branches are usually connected by a straight line, parallel to the volume axis, to match the isotherm obtained for a substance by direct experiment. One then obtains two portions of the van der Waals isotherm, and the straight line is drawn in such a way that the two enclosed areas are equal, so that $\oint VdP = 0$. This is known as Maxwell's area law. Since we are treating the AdS black holes in analogy with the van der Waals system, this law can be used here as well \cite{Wei:2014qwa,Belhaj:2014eha}. In addition, keeping the analogy with the volumes of the vapour and liquid phases, we denote by $\omega_l$ and $\omega_s$ the large and small volumes of the black hole in the two different phases. In between these two we have a mixture of both.
Since we are interested in the isotherms, to use Maxwell's area law we find $dP = (Bt+3D\omega^2)d\omega$ for constant $t$, which yields
\begin{equation}
\int_{\omega_l}^{\omega_s}\omega(Bt+3D\omega^2)d\omega + \int_{\omega_l}^{\omega_s}(Bt+3D\omega^2)d\omega =0~.
\label{omega}
\end{equation}
Next note that for the usual system, the end point of the vapour branch and the starting point of the liquid branch have the same pressure. Similarly, here also the pressure does not change, i.e. $P_l=P_s$, and then (\ref{Pexpansion}) implies
\begin{equation}
Bt(\omega_l-\omega_s) + D(\omega_l^3-\omega_s^3)=0~.
\label{first}
\end{equation}
So the second integral of (\ref{omega}) vanishes. Therefore it reduces to
\begin{equation}
Bt(\omega_l^2-\omega_s^2)+\frac{3D}{2}(\omega_l^4-\omega_s^4) = 0~.
\label{second}
\end{equation}
One can now easily find the non-trivial solutions of the above two equations. These are $\omega_l =(-Bt/D)^{1/2}$ and $\omega_s=-(-Bt/D)^{1/2}$. Therefore we find
\begin{equation}
\eta \sim V_l-V_s = (\omega_l-\omega_s)V_c\sim |t|^{1/2}~,
\end{equation}
which yields $\beta=1/2$.
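For the reader's convenience, here is the short intermediate derivation (our addition) behind the quoted solution, assuming the two phases are distinct ($\omega_l\neq\omega_s$):

```latex
% Factoring the area-law condition (\ref{second}):
\begin{eqnarray}
\big(\omega_l^2-\omega_s^2\big)\Big[Bt+\frac{3D}{2}\big(\omega_l^2+\omega_s^2\big)\Big] &=& 0~;
\nonumber
\\
% the symmetric branch \omega_s=-\omega_l satisfies this identically;
% substituting it in the equal-pressure condition (\ref{first}):
2Bt\,\omega_l+2D\,\omega_l^3 &=& 0
\quad\Rightarrow\quad
\omega_l=\Big(-\frac{Bt}{D}\Big)^{1/2},\quad \omega_s=-\omega_l~.
\nonumber
\end{eqnarray}
```

Reality of $\omega_{l,s}$ below the critical point requires $-Bt/D>0$ there, which holds for the relevant signs of $B$ and $D$.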
To find $\gamma$ we need to calculate $K_T$, given in (\ref{exponent}). So we first find $(\partial P/\partial V)_T$ from (\ref{Pexpansion}). Up to first (leading) order it is given by
\begin{equation}
\Big(\frac{\partial P}{\partial V}\Big)_T \simeq \frac{B}{V_c}t~,
\end{equation}
where we used $\partial \omega/\partial V = 1/V_c$. Therefore the value of $K_T$ near the critical point is
\begin{equation}
K_T \simeq \frac{1}{Bt}\sim t^{-1}~.
\end{equation}
This implies $\gamma=1$. Next for $T=T_c$, the expression (\ref{Pexpansion}) for pressure yields
\begin{equation}
P-P_c\sim \omega^3\sim (V-V_c)^3~;
\end{equation}
i.e. the value of the critical exponent is $\delta=3$.
It must be pointed out that the derived critical exponents satisfy the following scaling laws:
\begin{eqnarray}
&&\alpha+2\beta+\gamma=2;
\nonumber
\\
&&\gamma=\beta(\delta-1)~.
\label{scaling}
\end{eqnarray}
As is well known, these scaling laws are universal in nature. Their thermodynamic analysis can be found in \cite{Stanley1}.
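As a trivial numerical sanity check (our addition, not part of the original analysis), the derived values $\alpha=0$, $\beta=1/2$, $\gamma=1$, $\delta=3$ indeed satisfy both scaling laws:

```python
# Check the derived critical exponents against the two scaling laws quoted above.
alpha, beta, gamma, delta = 0.0, 0.5, 1.0, 3.0

assert alpha + 2 * beta + gamma == 2.0   # first scaling (Rushbrooke) law
assert gamma == beta * (delta - 1)       # second scaling (Widom) law
print("both scaling laws hold")
```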
Note that in our analysis we consider the thermodynamic volume as a function of the horizon radius only. In general, this is not always true. For example, when the black hole has some hair, $V$ is not just the volume inside the black hole but also involves the integral of the scalar potential, and hence it depends on both $r_+$ and the charges corresponding to the hair (for instance, see \cite{Cvetic:2010jb,Caceres:2015vsa}). Also, for some black hole solutions, the entropy may be a function not only of $r_+$ and $T$ but also of other parameters of the spacetime. There is a hairy case \cite{Hennigar:2015wxa} in which this happens even though the volume is a function of $r_+$ alone. One might then think that the present general approach incorporates only those cases which are consistent with the {\it no hair theorem}. In this regard, remember that in our analysis $Y$ is kept constant; i.e., we work in the canonical ensemble. Then any hairy charge should not be varied either, and hence $V$ will be a function of $r_+$ alone. In this situation such solutions behave as a van der Waals system. Of course, for an analysis in the grand canonical ensemble one needs to be careful and must treat them separately.
There is another example \cite{Sadeghi:2016dvc}, in which $S=S(r_+,T)$, corresponding to a solution with a logarithmic correction to the entropy. This case can also be handled within the present approach.
\vskip 2mm
\noindent
{\section{Conclusions}}
Using the tools of thermodynamics, phase transitions of completely dissimilar systems (chemical, magnetic, hydrodynamic, etc.) have been studied thoroughly. Black hole phase transition is a relatively new observation which deserves careful study. Though the first order phase transition from non-extremal to extremal black holes has been known for some time, there are various other types of phase transition. A new type of phase transition has been found recently where the cosmological constant ($\Lambda$) is treated as a dynamical variable (instead of a constant), equivalent to the pressure of a hydrodynamic system \cite{Kubiznak:2012wp}. In this interpretation, the phase transitions of different black holes have been shown to be quite analogous to those of the van der Waals system. Interestingly, the critical exponents found from different metrics are the same. This naturally deserves some explanation.
It is well known that the critical exponents of quite different systems can be the same. But for black holes, this point is not well appreciated. In our previous work \cite{Mandal:2016anc} we showed that, starting from a few very general assumptions about the black hole spacetime, the critical exponents can be calculated. In that work we did not treat $\Lambda$ as a dynamical variable. The present paper is a continuation of that work, with a different interpretation of $\Lambda$. Here we showed that the single assumption of the existence of a liquid--vapour type phase transition for a spacetime determines the values of the critical exponents. No other assumption regarding the variables or the dimension of the spacetime is necessary.
Till now, the values of the critical exponents have been obtained first by finding a suitable metric which exhibits this type of phase transition, and then by explicit calculation using the definitions of the exponents. Though expected, it remained unexplained why these values are the same for different $AdS$ black holes. In this paper, we give a completely satisfactory answer to this question. Thus our present work nicely complements the existing literature in this field.
It may be mentioned that the analysis shows that the values of the critical exponents are universal in nature and independent of the spacetime dimension. In the literature, different critical exponents, usually known as ``nonstandard'' critical exponents, have very recently been found \cite{Mann} for black hole solutions in higher curvature gravity theories. These do not arise from an analysis around the standard critical point, but rather around an {\it isolated} one. In that respect our analysis differs from theirs. Of course, it would be very worthwhile to look at such non-standard phase transitions.
\vskip 4mm
{\section*{Acknowledgments}}
We thank the anonymous referees for pointing out some important issues which helped to improve the earlier version. We also thank Juan F. Pedraza for making several interesting and valuable comments on the first version of our paper.
The research of one of the authors (BRM) is supported by a START-UP RESEARCH GRANT (No. SG/PHY/P/BRM/01) from Indian Institute of Technology
Guwahati, India.
\section{Introduction}
Low Surface Brightness (LSB) galaxies are the most unevolved class of galaxies
in our nearby Universe (Impey \& Bothun \cite{ImpeyBothun1997}). They are
optically dim with diffuse stellar disks (Auld et al. \cite{auld.etal.2006}),
massive HI gas disks (O'Neil et al. \cite{oneil.etal.2004};
Matthews, van Driel, Monnier-Ragaigne \cite{matthews.etal.2001}) but have low star formation rates
compared to regular spiral galaxies (McGaugh \cite{McGaugh.1994}). They are halo dominated galaxies
(de Blok \& McGaugh \cite{deblok.etal.1996}; Kuzio de Naray, McGaugh \& de Blok \cite{KuziodeNaray.etal.2008};
Coccato et al. \cite{Coccato.etal.2008}); this may account for the weak spiral arms and small bar perturbations
observed in these galaxies (Mihos, de Blok \& McGaugh \cite{Mihos.etal.1997};
Mayer \& Wadsley \cite{Mayer.Wadsley.2004}). Although the most commonly observed LSB galaxies are
the dwarf LSB galaxies (Sabatini et al. \cite{Sabatini.etal.2003}), a significant fraction of
LSB galaxies are
large spirals having prominent bulges (Beijersbergen, de Blok \& van der Hulst \cite{Beijersbergen.etal.1999}).
These giant LSB (GLSB) galaxies have extended LSB disks that are poor in star formation
and dust (Rahman et al. \cite{Rahman.etal.2007}; Hinz et al. \cite{Hinz.etal.2007}).
The bulge dominated GLSB galaxies often show
AGN activity (Schombert \cite{Schombert.1998}; Das et al. \cite{Das.etal.2009}).
Even though the optical properties of LSB galaxies have been investigated in great depth, not much
is known about their molecular gas content. This is important because knowledge of the cold gas distribution in
LSB galaxies will help us understand star formation processes in these galaxies. Surveys of LSB galaxies
show that they have fairly massive HI disks that may be more than twice the size of the optical disk
(de Blok et al. \cite{deblok.etal.1996}; Pickering et al. \cite{pickering.etal.1997};
Das et al. \cite{Das.etal.2007}). In this paper we examine the molecular gas distribution in a GLSB galaxy
and see how it relates to the overall star formation in its disk. Molecular gas has been detected in only
a handful of LSB galaxies (O'Neil, Hofner \& Schinnerer \cite{oneil.etal.2000};
Matthews \& Gao \cite{matthews.gao.2001}; O'Neil, Schinnerer \& Hofner \cite{oneil.etal.2003};
Matthews et al. \cite{matthews.etal.2005}; Das et al. \cite{Das.etal.2006}). In most cases the galaxies
were large spirals with extended optically dim disks. The low detection rate of
molecular emission from LSB galaxies is probably due to several factors related to the poor star formation
rate in these galaxies (e.g.~de Blok \& van der Hulst \cite{deblok.vanderHulst.1998}); factors such as
the lower dust content, lower metallicity and the lower surface density of cold, neutral gas in these galaxies.
All of these properties lead to a slower rate of gas cooling and molecule formation. For the few galaxies
where molecular gas has been detected, not much is known about the gas extent and distribution. Such
information is important if we want to understand star formation and disk evolution in LSB galaxies.
To investigate the molecular gas and star formation in GLSB galaxies we studied the CO distribution in
a galaxy where molecular gas has been detected, F568-6 or Malin~2 as it is widely known
(Das et al. \cite{Das.etal.2006}). It is a nearly face-on GLSB galaxy at a distance of 201~Mpc.
It has a prominent bulge and a very extended LSB disk. Its parameters are summarised in Table~1.
There are several localized star forming regions distributed over its inner disk.
Its metallicity is about one third of the solar value, which is
relatively high for an LSB galaxy (McGaugh \cite{McGaugh.1994}).
The CO observations of Malin~2 were conducted using the HERA instrument mounted on the
30~m IRAM telescope. Our
main aim was to examine the molecular gas distribution; determine its extent,
total gas mass and surface density.
\section{Observations}
During March 2007 we observed the CO(2--1) line in Malin~2 with the HERA beam array
(Schuster et al. \cite{Schuster.etal.2004}) on the IRAM 30m
telescope at a frequency of 220.372~GHz. We specifically used this array as it has a
wide field of view and good sensitivity. HERA is made of 9 receivers
in a $3\times3$ array spaced by $24^{\prime\prime}$ on the sky. The backend used was the
Wideband Line Multiple Autocorrelator (WILMA). The total bandwidth was 930~MHz; it was made up of
465 channels of 2~MHz each. The typical system temperatures $T_{sys}$ were in the range 200-250~K for 8
receivers; one receiver had a systematically higher $T_{sys}$ in the range 350-450~K. The
mean FWHM of each of the nine beams is $11.7^{\prime\prime}$. As our main goal was to detect
the CO emission line from the
disk of Malin~2, we kept the array fixed on the sky in the standard pointed mode with the central
beam pointed to the galactic nucleus; the field was tracked to get deep integrations at fixed
positions in the disk. This enabled us to achieve good sensitivity
but at the cost of undersampling the source. We obtained deep
observations at nine positions of the galaxy with a total on source integration time of
11 hours (Figure~1). The observations were frequency switched. The intrinsic velocity resolution
was 2.7~km~s$^{-1}$; the data was then smoothed using a Hanning squared function.
We reduced the data using the CLASS software of the GILDAS package \footnote{\url{http://www.iram.fr/IRAMFR/GILDAS}}
by fitting a first order baseline to all spectra within a window going
from -400 to +400\,km\,s$^{-1}$ about the galaxy's systemic velocity of 13830~km~s$^{-1}$. This
window was the same for all nine spectra. The noise
level is not the same for all receivers; the lowest is 0.7~mK
and the highest is 1.6~mK in the 10.9~km~s$^{-1}$ channels. The noise is also not uniform across the band.
Hence we computed the baseline and the noise in the $-400$ to $+400$~km~s$^{-1}$ window using emission-free channels only.
All the details are given in Table~2. The conversion factor from K to Jy is 8.6~Jy~K$^{-1}$.
To convert from the antenna temperature
scale $T_A^*$ to the main beam temperature $T_{mb}$ we multiplied $T_A^*$ by the factor
$F_{eff}/B_{eff}~=~1.67$ where $F_{eff}$ is the forward beam efficiency and $B_{eff}$ is the
main beam efficiency.
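As a practical sketch of the temperature and flux conversions just described (the helper names are ours; the numerical factors are the ones quoted above):

```python
def ta_to_tmb(t_a_star, feff_over_beff=1.67):
    """Antenna temperature T_A* (K) -> main-beam temperature T_mb (K),
    using the quoted F_eff/B_eff = 1.67."""
    return t_a_star * feff_over_beff

def ta_to_jy(t_a_star, jy_per_k=8.6):
    """Antenna temperature T_A* (K) -> flux density (Jy),
    with the quoted 8.6 Jy/K conversion factor."""
    return t_a_star * jy_per_k

# e.g. a 2 mK line peak on the T_A* scale:
print(ta_to_tmb(0.002))  # ~3.3e-3 K on the T_mb scale
print(ta_to_jy(0.002))   # ~1.7e-2 Jy
```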
\begin{table}
\caption[]{Galaxy Parameters - Malin~2}
\label{table1}
$$
\begin{array}{p{0.35\linewidth}ll}
\hline
\noalign{\smallskip}
Parameter & Value & Reference \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Other Names & F568-6 & NED \\
Distance & 201~Mpc & NED \\
Heliocentric Velocity & 13830~km~s^{-1} & NED \\
Position (epoch 2000) & 10^{h}39^{m}52^{s}.5, +20^{\circ}50^{\prime}49^{\prime\prime} & 2MASS \\
Size ($D_{25}^{\prime}$) & 1.67 & (a) \\
Position angle & 70^{\circ} & NED \\
Inclination & 38^{\circ} & (a) \\
HI Linewidth ($W_{20,corr}$) & 637~km~s^{-1} & (b) (c) \\
HI Mass & 4.2\times10^{10}~M_{\odot} & (b) \\
Dynamical Mass & 2.5\times10^{12}~M_{\odot} & (b) \\
(R~$<107$~Kpc) & & \\
Disk Central Brightness & 22.1~mag~arcsec^{-2} & (b) (d) \\
Disk Scale Length & 18.8~Kpc & (b) \\
Total R Band Magnitude & -23.6~mag & (b) \\
\noalign{\smallskip}
\hline
\end{array}
$$
\begin{list}{}{}
\item[$^{\mathrm{a}}$] Matthews et al. (2001)
\item[$^{\mathrm{b}}$] Pickering et al. (1997)
\item[$^{\mathrm{c}}$] Corrected for inclination
\item[$^{\mathrm{d}}$] Not corrected for extinction
\end{list}
\end{table}
\section{Results}
We have detected CO(2-1) emission from several positions in the disk as well
as from the center of Malin~2. In
the following paragraphs we present our results and discuss their implications.\\
\noindent
{\bf 1.~CO(2-1) Detections~:~}Figure~2 shows the CO(2-1) emission spectra observed from
nine locations across Malin~2. The offsets from the galaxy center
are indicated in each box. CO(2-1) emission was detected from four out of nine positions
at line intensities above $3\sigma$. The line at (-24, 24) is a hint of emission rather
than a sure detection. At (0, 0) the line is broad but is a $3\sigma$ detection.
We estimated the line parameters (flux, width and central velocity)
by fitting a gaussian to each spectrum. We determined the noise level for each of the nine
positions. The results are listed in Table~2 and the best fit gaussians are overlaid on the
spectra in Figure~2.
\noindent
{\bf 2.~Comparison with previous CO observations~:~}
Molecular gas has been detected earlier in this galaxy by Das et al.
(2006) with the IRAM 30m telescope and the BIMA array. However, due to a
correlator setup problem the BIMA map was incomplete, it was covering a
velocity range corresponding to CO(1--0) line emission from the east side
of the galaxy only. In addition, the 30m telescope was pointed toward two
directions only at 7$''$ and $\sim$35$''$ east of the nucleus (see
Fig.\,1). Although the source was observed simultaneously at 115\,GHz and
230\,GHz with the 30m telescope, only CO(1--0) emission was detected;
the CO(2--1) line was not detected from either positions.
Our present observations cover the CO emission over a wider
velocity and spatial extent than these previous observations and
hence give a better idea of the distribution of molecular gas in the inner
disk of Malin 2. These
observations are also the first detections of the CO(2--1) line in Malin 2.
Our detection lies below the detection limit of the older observations of
Das et al. (2006). Their CO(2--1) emission spectrum had a noise (rms)
of 2.6~mK whereas our line detections have a peak of 2 to 3~mK (Figure~2).
Near the galaxy center, where there is overlap with the previous CO
detection, the CO line
velocities are in the same direction and are similar in shape and in width.
Close to the center of the galaxy the CO(1--0) line as detected by
Das et al. (2006) is broad ($\sim$200\,km\,s$^{-1}$) and asymmetric with the
red side being slightly more prominent than the blue side.
The CO(2-1) line detected at the center of the galaxy by our new HERA
observations is $243\pm76~km~s^{-1}$ wide, with its centroid
offset by $77\pm34~km~s^{-1}$ relative to the systemic velocity given in Table~1.
Investigating the cause of this offset would require interferometric observations.
As the beams and pointing directions are different (23$''$ for the CO(1--0) line and
11$''$ for the CO(2--1) line) the fluxes cannot be directly compared. But if the molecular
gas were uniformly distributed, we would expect a line ratio on the temperature scale within the
range of values found by Braine \& Combes (1992) with the same telescope,
i.e., 0.4 to 1.2. We checked this for the HERA central beam, whose
pointing direction is the closest to the central pointing of Das et al. (2006). A rough comparison
of the line intensities in $K~km~s^{-1}$ gives
$I_{\rm CO(2-1)}/I_{\rm CO(1-0)}$~=~0.43/1.01~=~0.43
(where the CO(2--1) line intensity from the HERA central pointing
is 0.43 and the CO(1--0) intensity from IRAM is 1.01). This value is at the lower end of the line ratio range
for most galaxies but given the uncertainties (e.g.~gas distribution), the value shows that the CO(2--1)
line intensity we measure is consistent with the previous CO(1--0) detection.
\noindent
{\bf 3.~Comparison with HI observations~:~}
High sensitivity HI VLA observations of Malin 2 were presented by Pickering et al. (1997). These reveal the
distribution of the HI gas and its kinematics at a spatial resolution of $\sim$19$''$. The HI disk
is lop-sided, the HI emission being significantly more prominent and extending farther out on
the western side of the nucleus than on the eastern side. The central region is essentially devoid
of HI gas. Although we have only a partial view of the molecular disk due to our under-sampled data,
its properties resemble those of the HI.
Based on the HI velocity field, the centroid velocities of the HI line profiles at the
positions (24,0), (-24,0) and (-24,24), respectively +90, -150 and -110 km\,s$^{-1}$ relative to the
systemic velocity, are in excellent agreement with those of the molecular gas (Tab. 2, col {\it e}).
Hence the HI and molecular gas appear to share a common velocity asymmetry when considering the
positions (24,0) and (-24,0).
The higher spatial and spectral resolutions in CO allow us to better constrain
the velocity dispersion of the gaseous component. The CO profile with the
narrowest line width is at (-24,0). Given the spectral response, this leads to
a deconvolved velocity dispersion of 13 km\,s$^{-1}$. It is the best constraint
that we can get for the velocity dispersion of the gas $\sigma_{gas}$ in the disk of
Malin 2 given the facts that {\it a)} it is obtained in a direction close to
the major axis where beam smearing effects due to the rotation of the disk are minimum
and {\it b)} we do not observe the blue-shifted wing or
secondary velocity component that Pickering et al. discovered and had to blank
to determine and analyze the rotation curve. These authors interpreted this
secondary component as high velocity clouds, gas infalling into
the disk southwest of the nucleus. It is best seen $\sim$20'' southwest of the nucleus
where it is blue-shifted by as much as $\sim$100\,km\,s$^{-1}$. Given the correspondence
with HII regions in that direction and the fact that there is a relatively moderate HI column
density there, they suggested that this infall induces star formation
(Wyder et al. \cite{Wyder.etal.2009}).
We do not detect CO emission at (-24,-24) which is only $\sim$6'' further away in the southwest.
It could be that the disk component in that region is depleted not only in HI but also
in molecular gas. Spatially fully sampled CO observations would be required to confirm
this hypothesis.
In Table 2, the maximum CO(2--1) line velocities are reached at the offsets (24,0) and (-24,0), which is
consistent with the
position angle being close to horizontal, $\sim70^{\circ}$ (2MASS images; Jarrett et al. \cite{Jarrett.etal.2003}).
We note a velocity asymmetry between the emission at (24,0) and that at (-24,0), which could be due to infalling
gas or a lopsided potential. We can estimate the rotational velocity from these lines.
Assuming an inclination of 38$^{\circ}$ (Matthews et al. \cite{matthews.etal.2001})
it is of the order of $\sim 150/\sin(i)\,km\,s^{-1}$, which implies a dynamical mass within
the 24$''$ (i.e. 23\,kpc) radius of $\sim3\times10^{11}~M_{\odot}$.
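As a cross-check, this dynamical-mass estimate can be reproduced with a short script. The inputs (line-of-sight velocity $\sim$150 km\,s$^{-1}$ at the ($\pm$24,0) offsets, $i=38^{\circ}$, $R=23$ kpc) are the values quoted in the text, and the calculation assumes simple circular rotation.

```python
import math

# Rough dynamical-mass estimate for Malin 2 from the quoted numbers:
# line-of-sight velocity ~150 km/s at the (+/-24,0) offsets, i = 38 deg,
# R = 23 kpc.  Assumes purely circular rotation.
G = 4.30e-6                     # gravitational constant [kpc (km/s)^2 / Msun]
v_los, i_deg, R_kpc = 150.0, 38.0, 23.0

v_rot = v_los / math.sin(math.radians(i_deg))   # deprojected rotation velocity
M_dyn = v_rot**2 * R_kpc / G                    # enclosed mass, v^2 R / G

print(f"v_rot ~ {v_rot:.0f} km/s, M_dyn ~ {M_dyn:.1e} Msun")  # ~3e11 Msun
```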
The broad line at the center has a width close to twice the velocities at
(24,0) and (-24,0) offsets. This could be due to the rotation curve turning point
being inside the central beam, i.e., inside a radius of 6$''$ (6\,kpc).
This central region which is essentially devoid of HI is where there is the strongest
emission in CO; higher angular resolution CO observations are required to determine the
rotation curve in that region. Should the curve continue to decrease steadily down to
the nucleus, this would point to a model where the gas in the central bulge region
is kinematically hot, e.g., because of gas infall feeding the AGN.
\noindent
{\bf 4.~Molecular Gas Mass~:~}
We derived the molecular gas surface densities for each line detection and the results are summarised in Table~2.
The column densities are derived from the line fluxes using
$N(H_{2})~=~X~(1/0.9)~\int~T_{mb}[CO(2-1)]~dv$, where I(2--1)/I(1--0)~=~0.9
(Braine \& Combes \cite{Braine.Combes.1992}) and the CO-to-$H_{2}$
conversion factor is given by $X=2\times 10^{20}~cm^{-2}~(K~km~s^{-1})^{-1}$
for cool virialized molecular clouds (Dickman et al. \cite{Dickman.etal.1986}). The
value of $X$ is somewhat uncertain for LSB disks, given their low metallicities and our poor
knowledge of the ISM conditions in these galaxies. However, for a first estimate we
used this value to determine the surface densities of molecular gas $\Sigma_{mol}$
in Malin~2 (Col. f Table~2). Assuming the eight directions observed in the disk form a
distribution of surface brightness values which is representative for the disk of
Malin~2 up to a radius of $40^{\prime\prime }$, the mean CO(2--1) brightness
is $\sim0.15~K~km~s^{-1}$.
The central region has a beam averaged surface density that is 3 times higher than
the mean brightness of the disk. The lower limit for $M_{mol}$ is $\sim~4.9\times10^{8}M_{\odot}~$ and is
given by the sum of the four detections in column~{\it f} of Table~2 multiplied by the beam solid angle.
To determine the upper limit for $M_{mol}$, we used the noise
(see Table~2, column~{\it a}) to derive upper limits for the molecular gas surface densities for the regions
where there were no detections including (-24,24) where there is a hint of emission. The upper limit to the
total mass of molecular gas within $R~<~40^{\prime\prime}$
is thus $M_{mol}\sim~8.3\times10^{8}~M_{\odot}$.
If the molecular disk does not extend beyond $40^{\prime\prime}$, then the molecular gas mass
corresponds to $\sim1.2 - 2$\% of the HI gas mass in the galaxy. This is similar to that observed in normal spiral
galaxies that have molecular gas masses typically a few percent of the total HI mass.
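For concreteness, the surface-density conversion above can be sketched as follows. The constants are standard CGS values, and the example intensity is the (+24,0) detection from Table 2; the result agrees with the tabulated value within its quoted uncertainty.

```python
# Sketch of the Sigma_mol conversion of Sect. 4: N(H2) = X (1/0.9) I_CO(2-1)
# and Sigma_mol = 2 m_p N(H2).  Physical constants are standard CGS values.
m_p   = 1.673e-24       # proton mass [g]
MSUN  = 1.989e33        # solar mass [g]
PC_CM = 3.086e18        # cm per parsec
X     = 2.0e20          # CO-to-H2 factor [cm^-2 (K km/s)^-1]

def sigma_mol(I_co21):
    """I_co21 in K km/s (T_mb scale) -> Sigma_mol in Msun/pc^2."""
    N_H2 = X * I_co21 / 0.9            # cm^-2, using I(2-1)/I(1-0) = 0.9
    return 2.0 * m_p * N_H2 * PC_CM**2 / MSUN

# (+24,0) detection, I = 0.37 K km/s: gives ~1.3, consistent with the
# tabulated 1.2 +/- 0.2 Msun/pc^2
print(f"{sigma_mol(0.37):.2f} Msun/pc^2")
```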
\noindent
{\bf 5.~Star Formation threshold~:~}
The molecular gas disk in Malin~2 is extended and relatively massive with a significant amount of
molecular gas but its star formation activity is low compared to normal spirals
(Wyder et al. \cite{Wyder.etal.2009}). For galactic disks, star formation appears to be controlled by the
onset of gravitational instabilities (Kennicutt \cite{Kennicutt.1989}). A simple single-fluid
Toomre disk stability model predicts threshold densities in agreement with the observations.
This critical density is given by
$\Sigma_{crit}~=~\alpha\kappa\sigma_{gas}/(3.36\,G)$
where $\kappa$ is the epicyclic frequency and $\sigma_{gas}$ the velocity dispersion of the gas.
From a sample of spiral galaxies Kennicutt found $\alpha$ $\simeq$ 0.69.
In his study, to evaluate $\alpha$,
Kennicutt assumed that all the galaxies in his sample have approximately the same velocity dispersion,
$\sim6~km~s^{-1}$, and that it is independent of the radial distance within a galaxy. This is a good
approximation given what is known for the galaxies of his sample. Hence the prediction is that,
to have disk galaxies such as the LSBs immune to star formation, the gas surface density should
be below this critical density.
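A back-of-the-envelope sketch of the criterion is given below. It assumes, for illustration only, a flat rotation curve (for which $\kappa=\sqrt{2}\,V/R$); the analyses discussed here derive $\kappa$ from the measured rotation curves, so the number printed is merely indicative of the order of magnitude.

```python
import math

# Illustrative Toomre-style threshold: Sigma_crit = alpha*kappa*sigma/(3.36 G).
# Assumes a flat rotation curve, for which kappa = sqrt(2) V / R; the actual
# analysis uses kappa from the measured rotation curve, so this is indicative.
G = 4.30e-3          # [pc (km/s)^2 / Msun] -> Sigma_crit comes out in Msun/pc^2

def sigma_crit(V_kms, R_kpc, sigma_gas=13.0, alpha=0.69):
    kappa = math.sqrt(2.0) * V_kms / (R_kpc * 1.0e3)   # epicyclic freq [km/s/pc]
    return alpha * kappa * sigma_gas / (3.36 * G)

# e.g. V ~ 240 km/s at R ~ 23 kpc with sigma_gas = 13 km/s
print(f"{sigma_crit(240.0, 23.0):.1f} Msun/pc^2")
```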
Pickering et al. (\cite{pickering.etal.1997}) determined rotation curves for 4 GLSBs, which allowed
them to determine the epicyclic
frequency necessary to obtain the radial profiles of $\Sigma_{crit}$. They show that, assuming that
$\alpha = 0.69$ is valid for these LSB galaxies and that $\sigma_{gas}=10~km~s^{-1}$, the critical
density is, in some regions, close to or lower than the observed HI surface density, despite the fact
that there is no conspicuous star formation activity there. This led them to postulate
that $\sigma_{gas}$ must be larger than the assumed value.
In the specific case of Malin 2, Pickering et al. find that the
HI surface density never exceeds the critical density except at the location where HI is
the brightest (this direction is located halfway between the positions (-24,24) and (-24,0)).
When considering the 9 directions listed in Table 2, only (0,0) coincides with an H$\alpha$
source. Using the present CO observations we can examine two questions:
{\it 1)} does the gas surface density remain below the critical density
when the contribution from the molecular gas component is taken into account
as Kennicutt did in his analysis? {\it 2)} is the $\sigma_{gas} \le 10~km~s^{-1}$ used to
determine $\Sigma_{crit}$ adequate in the case of giant LSBs such as Malin 2?
Column {\it g} in Table 2 gives the HI surface brightnesses. Comparing
these with the molecular surface densities (col. {\it f}), the molecular gas surface
densities appear to amount to typically 1/2 to 1/3 of the HI surface densities,
with the exception of the position (-24,0), where HI largely dominates. Column {\it h} gives
the total gas surface density and in col. {\it i} this quantity relative to the critical
density. By using $\alpha = 0.69$, for internal consistency, the values in col. {\it h} and {\it i}
are determined using the same hypothesis as those adopted by Kennicutt for the conversion
factor X and the term Z which accounts for the elements heavier than H.
For $\sigma_{gas}$ we adopted the value of $13~km~s^{-1}$
that we measured with good S/N at (-24,0) (Sect. 3.3). Better constrained than the $10~km~s^{-1}$
adopted by Pickering et al., this value is still conservative given the fact that
it is the lowest velocity dispersion (col. {\it d}) amongst the 5 directions where CO
is detected. Formally, since all the values in col. {\it i} are smaller than unity, it could be
concluded that the disk of Malin 2 is sufficiently stable to prevent star formation. However,
the directions (-24,0) and (-24,24) have surface densities close to the threshold, preventing
us from considering this a firm result. Also,
the observed kinematics in the western and north-western parts of Malin 2 appears complex because
of high velocity clouds (HVCs). It could be that the star formation activity seen in Malin 2
is more related to the interaction of these HVCs with the disk than to the growth of density
fluctuations originating from instabilities in the self-gravitating rotating disk. A fully sampled
image of Malin 2 and a high signal-to-noise ratio to disentangle the different velocity components
and measure the line widths are necessary to better understand the properties of Malin 2.
\begin{figure}
\centering
\includegraphics[width=8cm,angle=-90]{F5686_pointing.3.ps}
\caption{The figure shows the R band image of the galaxy Malin~2 with
the HERA footprint overlaid as nine black circles. The red circles correspond to
the previous IRAM observations at 115GHz (Das et al. 2006). Each circle has a
diameter equal to the FWHM of the corresponding beam.}
\label{Fig1}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=8cm,angle=-90]{allF568.ps}
\caption{The panel shows the CO(2-1) spectra obtained with the nine HERA receivers.
The offsets in arcsecond from the galaxy center are marked on the
top left hand corner of each figure. CO(2-1) emission was detected from five
locations. The gaussian fits for these detections are overlaid on the spectra.
The x axes are the velocity offsets from the galaxy systemic velocity ($v_{\rm hel}=13,830~km~s^{-1}$)
and the y axes are the measured antenna temperatures ($T_A^*$) in $mK$.}
\label{Fig2}%
\end{figure*}
\begin{table*}
\caption[]{CO(2-1) fluxes, kinematical parameters, molecular, HI and total gas surface densities.}
\vspace{-3mm}
\label{table1}
$$
\begin{array}{ccccrrrrcccc}
\hline
\noalign{\smallskip}
\multicolumn{2}{c}{Offset}
& rms~noise
& {\Delta~I_{CO}}
& \multicolumn{2}{c}{\Delta~V_{FWHM}}
& \multicolumn{2}{c}{<V>}
& \Sigma_{\rm mol}
& \Sigma_{\rm HI}
& \multicolumn{1}{c}{\Sigma_{\rm gas}}
&{\frac{\Sigma_{\rm gas}}{\Sigma_{\rm crit}}}\\
\multicolumn{2}{c}{(arcsec)}
& (mK)
& (K~km~s^{-1})
& \multicolumn{2}{c} {(km~s^{-1})}
& \multicolumn{2}{c}{(km~s^{-1})}
& (M_{\odot}pc^{-2})
& (M_{\odot}pc^{-2})
& (M_{\odot}pc^{-2})
& \\
\multicolumn{2}{c}{a}
&b
&c
&\multicolumn{2}{c}{d}
&\multicolumn{2}{c}{e}
&f
&g
&h
&i\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
+24 & +24 & 0.7 & <~0.12 &\multicolumn{2}{c}{0.39} & \multicolumn{2}{c}{....} & <~0.4 & 2.1 & <~3.0 & ... \\
+24 & +00 & 0.9 & 0.37\pm0.07 & 82~\pm&20 & 91~\pm&~~7 & 1.2\pm0.2 & 2.5 & 4.8 & 0.7 \\
+24 & -24 & 0.8 & <~0.13 & \multicolumn{2}{c}{0.45} & \multicolumn{2}{c}{....} & <~0.5 & 1.8 & <~2.8 & ... \\
+00 & +24 & 0.9 & 0.28\pm0.05 & 46~\pm&10 & 14~\pm&~~4 & 0.9\pm0.2 & 2.7 & 4.6 & 0.7 \\
+00 & +00 & 0.8 & 0.45\pm0.11 & 243~\pm&76 & 77~\pm&34 & 1.5\pm0.4 & 0.9 & 3.4 & \\
+00 & -24 & 0.9 & <~0.15 & \multicolumn{2}{c}{0.50} & \multicolumn{2}{c}{....} & <~0.5 & 2.2 & <~3.3 & ... \\
-24 & +24 & 1.6 & \sim0.35\pm0.13 & 79~\pm&41 & -102~\pm&14 & \sim1.2\pm0.4 & 2.2 & \sim4.4 & \sim0.9 \\
-24 & +00 & 0.9 & 0.20\pm0.04 & 33~\pm&~7 & -142~\pm&~~4& 0.7\pm0.1 & 4.0 & 5.7 & 0.9 \\
-24 & -24 & 0.7 & <~0.12 & \multicolumn{2}{c}{0.39} & \multicolumn{2}{c}{....} & <~0.4 & 1.8 & <~2.7 & ... \\
\noalign{\smallskip}
\hline
\end{array}
$$
\begin{list}{}{}
\item[$^{\mathrm{a}}$] $\Delta\alpha$ and $\Delta\delta$ offsets relative to the position of the nucleus $\alpha,\delta$ (J2000): $10^{h}39^{m}52^{s}.5, +20^{\circ}50^{\prime}49^{\prime\prime}$
\item[$^{\mathrm{b}}$] at 10.9 $km~s^{-1}$ velocity resolution.
\item[$^{\mathrm{c}}$] $\Delta I_{\rm{CO(2-1)}}=\int T_{\rm MB} dv$ with $T_{\rm MB}=1.67 T_A^*$. For the non-detections,
we take $T_A^*$=noise and assume $\Delta V=100~km~s^{-1}$.
\item[$^{\mathrm{d}}$] fitted line widths, not deconvolved by the spectral response.
\item[$^{\mathrm{e}}$] fitted, relative to the systemic velocity 13830$~km~s^{-1}$.
\item[$^{\mathrm{f}}$] $\Sigma_{\rm mol}~=~2m_{p}N(H_{2})$. Values for $X=2.0\times10^{20}~cm^{-2}~(K~km~s^{-1})^{-1}$ (cf. Sect. 3.4).
\item[$^{\mathrm{g}}$] from Fig. 3 of Pickering et al. (\cite{pickering.etal.1997}).
\item[$^{\mathrm{h}}$] $\Sigma_{\rm gas} = (1+Z)~(\Sigma_{\rm HI}+\frac{X}{2.0\times10^{20}}\Sigma_{\rm mol})~\cos(i)$,
with $Z=0.45$, $X=2.8\times10^{20}~cm^{-2}~(K~km~s^{-1})^{-1}$ and $i=38^{\circ}$.
Values in brackets assume $\Sigma_{\rm mol} \ll \Sigma_{\rm HI}$.
\item[$^{\mathrm{i}}$] using $\Sigma_{\rm crit}$ from Fig. 10 of Pickering et al. (\cite{pickering.etal.1997}),
re-scaled for $\sigma_{gas}=13~km~s^{-1}$
\end{list}
\end{table*}
\section{Conclusions}
\begin{enumerate}
\item We report the first detection of CO(2-1) line emission from the disk of a giant
LSB galaxy. Our observations reveal the presence of extended molecular gas in the disk of the LSB galaxy
Malin~2. The radial extent of the molecular disk is at least 34~kpc and its mean line brightness
$\sim 0.15 K~km~s^{-1}$ (on the $T_{mb}$ temperature scale). With an angular
resolution of $11.7^{\prime\prime}$ this is $\sim 2/3$ the beam-averaged integrated line intensity
in the central region.
\item When we compared the molecular and HI gas masses of Malin~2, we found that the
molecular gas fraction is $\sim$1.2~-~2\% of the HI gas mass.
At radii $\sim~$15 kpc in the directions where CO is detected, the molecular gas
surface densities are typically 30 to 50\% of the HI surface densities.
\item We estimated the total surface density of the neutral gas and found that it is always
below the critical threshold density for gravitational instability.
Hence the disk of Malin 2 is overall stable against large scale star formation.
\end{enumerate}
\begin{acknowledgements}
We are grateful to the IRAM staff at Pico Veleta for excellent support at the
telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
The authors would also like to thank Alice Quillen for providing the R-band images of Malin~2.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated
by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the
National Aeronautics and Space Administration.
\end{acknowledgements}
\section{Introduction}
\subsection{Background and Motivation}
Binary authorship analysis aims at recovering the developer’s identity information from executable binary files. It is critical in various software engineering applications, especially security-related scenarios, such as copyright infringement detection, cybercrime tracking, and malicious software (malware) forensics, where source code is rarely available. In this paper, we formulate the problem as a practical \emph{binary authorship verification task}. As shown in Figure \ref{intro_task}, given an anonymous binary sample, our goal is to determine whether it was developed by a specific programmer of the candidate set. Each candidate programmer has a small set of binaries that can characterize his programming style. The difficulty of the task is that the actual developer of the questioned binary may well not belong to the candidate set but come from the wild, which is in line with the realistic experiences of professional binary analysts \cite{marquis2015big}.
Binary authorship verification is nontrivial as it faces two main challenges:
\textbf{(a)} Binary samples of the same programmer may implement related or completely different functions. Prior researchers manually designed programming stylistic features to establish unified programmer templates. This heavily relies on expert experience and cannot capture specific programming behaviors that have not appeared in the known samples.
Deep learning can perform automatic feature extraction and has strong generalization capabilities. It has achieved promising results in many software engineering applications \cite{wang2020detecting} \cite{tian2020evaluating} \cite{wang2021mulcode} \cite{bui2021infercode}. However, the collected samples of candidate authors are usually very limited, making it challenging to train the deep neural network with a large number of parameters.
\textbf{(b)} Modern software development is usually completed through cooperation, even including malware production; for example, APT attackers usually cooperate to launch staged events. The collaborators' code snippets may introduce noise when identifying the major contributor, and it is even more challenging to implement organization-level authorship verification due to the mixture of different programmers' styles.
\begin{figure}[!t]
\centering
\includegraphics[width=6.6cm]{intro_task3_crop.pdf}
\caption{Binary authorship verification task}
\label{intro_task}
\vspace*{-1.2\baselineskip}
\end{figure}
\subsection{Limitation of Prior Art}
Most previous binary authorship analysis approaches try to solve the problem with traditional supervised machine learning algorithms \cite{rosenblum2011wrote} \cite{alrabaee2014oba2} \cite{meng2016fine} \cite{alrabaee2018leveraging} \cite{caliskancoding}. They extract the known programmers' stylistic features from training examples and identify the developer of the anonymous binaries in the test stage. Table \ref{qualitative_sota_table} lists the state-of-the-art approaches. Rosenblum et al. \cite{rosenblum2011wrote} extracted bytecode n-grams, instruction idioms, and CFG graphlets to create author-style templates. BinAuthor \cite{alrabaee2018leveraging} constructed the authors' choices set to recognize their programming habits. Caliskan-Islam et al. \cite{caliskancoding} extracted hybrid features from disassembly instructions and decompiled code to model the author's style that survives compilation.
The above approaches require manual feature engineering, which relies on domain knowledge and has been shown to be dataset-specific. To automatically extract the programmer's style characteristics, Bineye \cite{alrabaee2019bineye} proposed a deep learning-based authorship attribution method, which uses three convolutional neural networks (CNNs) on gray-scale images, opcode sequences, and function invocations. Although it shows performance improvements, deep neural networks rely on a sufficient number of annotated samples to fit their large-scale parameters, which is usually unrealistic in real-world binary forensics scenarios.
Furthermore, one common dilemma of most existing approaches is the inability to handle developers who do not belong to the candidate set. Caliskan-Islam \emph{et al.} \cite{caliskancoding} decided to accept or reject the results according to the classifier's confidence score. Rosenblum \emph{et al.} \cite{rosenblum2011wrote} calculated the stylistic similarities between anonymous binary pairs based on a distance metric for authorship clustering. However, their performance is not ideal when dealing with a large candidate set and insufficient samples. In this paper, we consider the real-life experience of professional binary analysts and formulate the \emph{binary authorship verification task}. Our goal is to determine whether a binary was developed by a candidate author with limited samples, based on the preserved programming style.
\begin{table}
\caption{State-of-the-art binary authorship analysis approaches}
\label{qualitative_sota_table}
\centering
\begin{tabular}{m{84pt}<{\centering}|m{37pt}<{\centering}|m{94pt}<{\centering}}
\toprule
Approach & Feature engineering & Handling unknown programmers from the wild \\
\midrule
Rosenblum \emph{et al.} \cite{rosenblum2011wrote} & Manual & \ding{51} (by stylistic similarity) \\
Oba2 \cite{alrabaee2014oba2} & Manual & \ding{55} \\
Meng \emph{et al.} \cite{meng2016fine} & Manual & \ding{55} \\
BinAuthor \cite{alrabaee2018leveraging} & Manual & \ding{55} \\
Caliskan-Islam \emph{et al.} \cite{caliskancoding} & Manual & \ding{51} (by confidence score) \\
Bineye \cite{alrabaee2019bineye} & Automatic & \ding{55} \\
\bottomrule
\end{tabular}
\vspace*{-1.2\baselineskip}
\end{table}
\subsection{Proposed Method}
In this paper, we propose \textbf{BinMLM}, a \underline{bin}ary authorship verification framework with flow-aware \underline{m}ixture-of-shared \underline{l}anguage \underline{m}odel. It addresses the above challenges as follows:
Specific programming languages have strict syntax restrictions. Therefore, developers' personal habits only account for a small percentage of the program, while most parts are overlapping general patterns. We interpret programmers' personal style as the selection and combination of \emph{universal programming patterns}, such as the preferences for specific programming paradigms, branch and loop forms, organization structures of user-defined functions, and exception handling ways. These stylistic preferences are implicit in the instruction execution sequence.
We train RNN language models with powerful sequence modeling capabilities on the disassembly code to automatically characterize the styles of candidate programmers, avoiding explicit feature engineering. To recover the high-level stylistic patterns from the binaries, we extract consecutive opcode traces as the essential units of the language model and preserve the flow information in the basic block bi-grams of CFG.
We introduce a \emph{mixture-of-shared} architecture with multiple encoders shared among all developers to model the universal programming patterns from different views. Each programmer is assigned a gate layer to learn the specific combination weights of shared representations, reflecting their preferences of multiple generic patterns. Compared to training a separate language model for each author, this layer sharing mechanism can alleviate the insufficient data problem by effectively utilizing the small-scale training samples.
We utilize an \emph{optimization pipeline} to reduce the noise of collaborators and further isolate the small proportion of unique styles from generic programming patterns. We first train the overall parameters jointly. Specifically, we randomly select an author's mini-batches each time and optimize the shared encoders and the corresponding author-specific layers. After that, we fix the parameters of shared layers and separately fine-tune each author-specific decoder with specially designed regularization terms. Joint training enables the model to learn the general programming patterns better. Author-specific fine-tuning facilitates the model to explore how to decode personal styles accurately, improving the robustness of major author characterization in multi-programmer collaboration scenarios.
\subsection{Key Contributions}
We summarize our major contributions as follows:
\begin{itemize}
\item We formulate a practical \emph{binary authorship verification} task, which considers that the developers of anonymous binaries may not belong to the candidate set but from the wild. Our task setting is in line with the working process of software forensic experts, and we provide an effective solution for them to perform automatic analysis.
\item We creatively characterize the authors' programming styles by training the RNN language model on flow-aware instruction execution traces, which can avoid manual feature engineering.
\item We design a novel \emph{mixture-of-shared} language model and an effective \emph{optimization pipeline}. With multiple shared encoders and author-specific gate layers, we can fully utilize limited samples and model the developers' combination preferences of general programming patterns. Furthermore, our \emph{optimization pipeline} can eliminate additional noise and accurately distill developers' unique stylistic characteristics.
\item
We conduct extensive experiments to evaluate BinMLM on datasets with different numbers of developers and samples. Results show that it can surpass baselines built on the state-of-the-art feature set by a large margin (AUC = 0.83 $\sim$ 0.94, 4.73\% $\sim$ 19.46\% improvement). Moreover, BinMLM remains robust in multi-programmer collaboration scenarios and can perform practical organization-level verification on a real-world APT malware dataset.
\end{itemize}
\section{Problem Definition}
In this section, we formalize the binary authorship verification task as follows:
We first construct a candidate set $P$ = ($p_1$, $p_2$, ..., $p_n$) of $n$ known programmers, and each programmer $p_i$ has several previously collected binary samples $B_i$ = ($b_{i_1}$, $b_{i_2}$, ..., $b_{i_m}$). Then, for an authorship verification pair $\langle p_i, b_j \rangle$, we aim to determine whether the anonymous binary $b_j$ was developed by the candidate programmer $p_i$. Note that we assume $b_j$ is developed by a single programmer or has a major contributor in multi-programmer collaboration scenarios, and that the actual developer of $b_j$ may belong to $P$ or may come from the wild.
Previous binary authorship analysis methods directly adopt a closed-world classification setting, which assumes that the anonymous binaries to be analyzed are all developed by known programmers from the candidate set. This is inconsistent with real-world software forensic scenarios, and our proposed binary authorship verification task handles the problem from a more realistic and comprehensive perspective.
\section{Preliminaries}
\subsection{RNN Language Model}
Language models have shown strong text style characterization abilities in previous literature \cite{bagnall2015author} \cite{ge2016authorship} \cite{ouyang2020gated}. For an input sentence $S$ = ($u_1$, $u_2$,..., $u_l$), a language model quantitatively describes the joint probability $p(S)$ as:
\begin{equation}
p(S)=\prod_{t=1}^{l} p\left(u_{t} \mid u_{1:t-1}\right)
\end{equation}
$u_t$ denotes the statistical unit of the language model. In this paper, we train the language model on the disassembly instruction sequence to characterize developers' programming styles. We use a recurrent neural network (RNN) to encode $u_{1:t-1}$ \cite{mikolov2010recurrent}. For an input sequence, we first transform each token $u_t$ into a vector $\mathbf{e}_t$ through an embedding layer, then feed the sequence into the RNN layer to model the context representation. At each time-step $t$ of the RNN, the hidden state $\mathbf{h}_t$ is updated as follows:
\vspace*{-0.3\baselineskip}
\begin{equation}
\mathbf{h}_t=RNN\left(\mathbf{h}_{t-1}, \mathbf{e}_{t-1}\right)
\end{equation}
After that, $\mathbf{h}_t$ is fed into a fully-connected layer with \emph{softmax} function, which acts as a decoder to estimate the probability distribution of the next unit:
\vspace*{-0.3\baselineskip}
\begin{equation}
y\left(u_{t}\right)=f\left(\mathbf{W} \cdot \mathbf{h}_t+\mathbf{b}\right)
\end{equation}
\vspace*{-0.8\baselineskip}
\begin{equation}
p\left({u_t} \mid u_{1:t-1}\right)=\frac{e^{ \left(y\left(u_{t}\right)\right)}}{\sum_{k=1}^{|V|} e^{ \left(y\left(u_{k}\right)\right)}}
\end{equation}
$f$ is the nonlinear activation function, and $|V|$ is the vocabulary size of the statistical units. The loss function of the language model is defined as the cross-entropy between the predicted distribution and the ground-truth:
\vspace*{-0.3\baselineskip}
\begin{equation}
L(S)=-\sum_{t=1}^{l} u_t \log \left(p\left({u_t} \mid u_{1:t-1}\right)\right)
\vspace*{-0.3\baselineskip}
\end{equation}
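The decoding and loss computation of Eqs. (3)--(5) can be sketched in a few lines of numpy; the toy vocabulary size, the random hidden states standing in for RNN outputs, and the identity activation are illustrative assumptions only.

```python
import numpy as np

# Toy sketch of Eqs. (3)-(5): decode the next-unit distribution from an RNN
# hidden state and accumulate the cross-entropy loss.  Sizes are arbitrary.
rng = np.random.default_rng(0)
V, H = 6, 8                              # vocabulary size |V|, hidden size
W, b = rng.normal(size=(V, H)), np.zeros(V)

def next_unit_probs(h):
    y = W @ h + b                        # Eq. (3), with f taken as identity
    e = np.exp(y - y.max())              # numerically stable softmax, Eq. (4)
    return e / e.sum()

def lm_loss(h_seq, targets):             # Eq. (5): negative log-likelihood
    return -sum(np.log(next_unit_probs(h)[t]) for h, t in zip(h_seq, targets))

h_seq = rng.normal(size=(4, H))          # hidden states of a length-4 trace
print(lm_loss(h_seq, [1, 3, 0, 2]))      # per-sequence training loss
```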
\begin{figure}[!t]
\centering
\includegraphics[width=6cm]{mmoe_intro_crop.pdf}
\caption{Mixture-of-Shared language model architecture}
\label{mos_arch}
\vspace*{-1.2\baselineskip}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=13cm]{update_workflow_mixture_crop.pdf}
\caption{Binary authorship verification workflow of BinMLM}
\label{sys_overview}
\vspace*{-1.2\baselineskip}
\end{figure*}
\subsection{Mixture-of-Shared Network Architecture}
To conduct the binary authorship verification task formalized in Section \uppercase\expandafter{\romannumeral2}, we expect to train a language model for each candidate author on his binary samples. Each author's programming style will lead to differences in the learned probability $p(S)$, and a binary has a relatively high probability on its actual developer's language model. Since the number of candidate programmers can be large, setting up an independent network for each author would limit the model's scalability. In addition, the syntax rules of programming languages are much stricter than those of natural language. Developers use a fixed set of keywords and APIs, and the assembly instructions compiled from high-level source code have even more limited operation types. Therefore, developers' personal styles mainly come from the selection and combination of universal programming patterns.
Based on these considerations, we exploit a mixture-of-shared network to train the language models of candidate programmers simultaneously. Figure \ref{mos_arch} shows the network architecture. We set up multiple RNN encoders shared among all programmers to map the processed samples into different subspaces, modeling the generic programming patterns from different perspectives. For programmer $p_i$, the shared encoder $j$ will generate the corresponding hidden representation $\mathbf{h}_{j}^{(i)}$.
To extract a developer's specific preference for the combination of common programming patterns, we build a separate gate layer for each programmer to learn the importance weights of the shared representations generated by the different RNN encoders. Specifically, the gate weights $\mathbf{g}_{i}$ of programmer $p_i$ are calculated by a feed-forward layer as follows:
\vspace*{-0.3\baselineskip}
\begin{equation}
\mathbf{g}_{i}=\sigma \left(\mathbf{W}_{g_i} \cdot \mathbf{E}_{i}+\mathbf{b}_{g_i}\right)
\end{equation}
$\mathbf{E}_{i}$ denotes the embedded vectors of the input sequence. We compute the context representation $\mathbf{r}_{i}$ as the sum of the $\mathbf{h}_{j}^{(i)}$ weighted element-wise by their corresponding gate weights:
\vspace*{-0.3\baselineskip}
\begin{equation}
\mathbf{r}_{i}=\sum_{j=1}^{s} \mathbf{g}_{i,j} \cdot \mathbf{h}_{j}^{(i)}
\end{equation}
$s$ is the number of shared RNN encoders. $\mathbf{r}_{i}$ is finally fed into the corresponding author-specific linear decoder to generate predictions of subsequent units.
Our model is inspired by the text style characterization approaches proposed by Bagnall \cite{bagnall2015author} and Ouyang \emph{et al.} \cite{ouyang2020gated}, and by the multi-task learning network designed by Ma \emph{et al.} \cite{ma2018modeling}. Such an architecture allows all programmers' samples to optimize the shared layers, which in turn optimizes each specific decoder, improving the generalization ability of low-resource programmers' models. Moreover, the multiple shared encoders and author-specific gates can flexibly combine the subspaces of universal programming patterns, accurately characterizing each programmer's personal style.
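The gating mechanism of Eqs. (6)--(7) can be illustrated with a minimal numpy sketch; the encoders are stubbed with random tensors, and the sigmoid activation and the pooled input embedding are simplifying assumptions — only the mixing logic is the point.

```python
import numpy as np

# Minimal sketch of Eqs. (6)-(7): one author's gate mixes the outputs of the
# s shared encoders.  Encoders are stubbed with random tensors; the sigmoid
# activation and the pooled input embedding are simplifying assumptions.
rng = np.random.default_rng(1)
s, H, D = 3, 8, 5                       # shared encoders, hidden dim, input dim

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

Wg, bg = rng.normal(size=(s, D)), np.zeros(s)    # author-specific gate layer
E_i = rng.normal(size=D)                         # pooled input embedding E_i
h = rng.normal(size=(s, H))                      # h_j^{(i)} from the s encoders

g = sigmoid(Wg @ E_i + bg)                       # Eq. (6): gate weights
r = (g[:, None] * h).sum(axis=0)                 # Eq. (7): mixed representation
print(r.shape)                                   # context vector fed to decoder
```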
\section{Proposed Method}
\subsection{Overview}
The general idea of our proposed binary authorship verification framework BinMLM is to characterize the programmer's unique style contained in binary files with language models, and to simultaneously learn the specific language models of a large number of candidate programmers. To this end, BinMLM involves four major modules (shown in Figure \ref{sys_overview}).
\begin{itemize}
\item Binary pre-processing module (\emph{Module 1}): We disassemble the input binary file and extract the CFG structure. Then we extract the instruction traces with control flow semantics and use opcode n-grams as the essential units of the subsequent programming style characterization process. This module is a fundamental component shared by the following modules.
\item External pre-training module (\emph{Module 2}): We construct a large-scale external binary code corpus without authorship annotations and pre-process it by \emph{Module 1}. Then we pre-train a general language model on these wild binary files to initialize the subsequent models.
\item Candidate programmer style characterization module (\emph{Module 3}): We introduce a mixture-of-shared architecture with multiple shared encoders and author-specific gate layers and decoders to train the language models of all candidate programmers. We first jointly optimize the overall parameters by binary samples of all programmers, then freeze the shared layer and individually fine-tune the separate decoders with additional regularization items.
\item Authorship verification module (\emph{Module 4}): For a verification pair $\langle p_i, b_j \rangle$, we pre-process binary $b_j$ and evaluate its loss value $l_i^{(j)}$ on the trained language model of programmer $p_i$. Then we use the loss array $\mathbf{L}^{(j)}$ of $b_j$ on all candidate programmers' language models to normalize $l_i^{(j)}$ and obtain the final verification score.
\end{itemize}
\subsection{Binary File Pre-processing}
\begin{figure*}[!t]
\centering
\includegraphics[width=12cm]{binary_opcode_crop2.pdf}
\caption{Binary file pre-processing module}
\label{binary_process}
\vspace*{-1.2\baselineskip}
\end{figure*}
Figure \ref{binary_process} shows the details of our binary file pre-processing module. The major components include processing the disassembly code and building language models on the collected binary samples.
\subsubsection{Process disassembly code}
A binary to be analyzed is represented as a discrete byte stream, and we use a disassembling tool to extract its corresponding CFG structure. The nodes of the CFG, called basic blocks, act as the smallest units of sequential instruction execution. The edges represent control flow transfers between basic blocks, which may be caused by \emph{if-else} branches, loop structures, or jumps across blocks.
To preserve the flow transfer semantics of the CFG, we first connect the endpoint basic blocks of all edges as basic block bi-grams. Then we extract the instruction traces within the two basic blocks sequentially as the sequence $I$ = ($i_1$, $i_2$, ..., $i_l$), where $l$ is the number of instructions. An assembly instruction is composed of an opcode and zero or more operands. The opcode specifies the operation to be conducted. The operands specify registers, immediate literals, or memory locations, which can differ under various compilation environments. To improve the versatility of our framework and reduce the scale of the language model's vocabulary, we keep only the opcodes in the instruction sequence and update the sequence to $O$ = ($o_1$, $o_2$, ..., $o_l$), where $o_i$ represents the opcode at position $i$.
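The extraction described above can be sketched as follows, assuming a toy representation in which the disassembler output is a dict of per-block \texttt{(opcode, operands)} tuples; the real pipeline obtains these from the disassembler:

```python
def opcode_sequences(blocks, edges):
    """For each CFG edge (u, v) -- a basic-block bi-gram -- concatenate the
    instruction traces of the two blocks and keep only the opcodes.

    blocks : dict mapping block id -> list of (opcode, operands) tuples
    edges  : list of (src, dst) block-id pairs
    """
    seqs = []
    for u, v in edges:
        trace = blocks[u] + blocks[v]           # I = (i_1, ..., i_l)
        seqs.append([op for op, _ in trace])    # O = (o_1, ..., o_l)
    return seqs

blocks = {0: [("MOV", "eax, 1"), ("CMP", "eax, ebx")],
          1: [("JLE", "0x40"), ("CALL", "max")]}
print(opcode_sequences(blocks, [(0, 1)]))
# [['MOV', 'CMP', 'JLE', 'CALL']]
```

Dropping the operands discards compiler- and environment-dependent details (register allocation, addresses) while keeping the operation order.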
We explore how to better characterize the developer's programming style in binaries by comparing the disassembly code with the original high-level source code. Figure \ref{source_assembly} shows a source code snippet and part of its corresponding assembly instruction traces. We can see that a short statement in the source code, such as $maxAns=max(maxAns, pre)$, is transformed into a relatively long assembly instruction sequence, whose corresponding opcode sequence is $MOV$, ..., $CALL$, $MOV$, implementing register loading, parameter reading, and function invocation. Therefore, consecutive short opcode traces can potentially reflect the programming patterns of source code statements. Since the number of assembly instructions compiled from each statement is uncertain, we slide a window of fixed size $n$ over the original opcode sequence to obtain $G$ = ($g_1$, $g_2$, ..., $g_l$), where $g_i$ denotes an opcode n-gram representing a small contiguous sequence of operations composed of $n$ instructions.
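Sliding the window over an opcode sequence is then straightforward; note that the number of n-grams is $l-n+1$ rather than $l$, a boundary detail elided in the notation above:

```python
def opcode_ngrams(opcodes, n):
    """Slide a window of size n over an opcode sequence O to obtain the
    n-gram units G used by the language model."""
    if len(opcodes) < n:
        return []
    return [tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1)]

print(opcode_ngrams(["MOV", "ADD", "CMP", "JNE"], 2))
# [('MOV', 'ADD'), ('ADD', 'CMP'), ('CMP', 'JNE')]
```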
\begin{figure}[!t]
\centering
\includegraphics[width=9cm]{sourcecode_binary_crop.pdf}
\caption{Source code statements and counterpart assembly instructions}
\label{source_assembly}
\vspace*{-1.2\baselineskip}
\end{figure}
\subsubsection{Build language model for binaries}
We train RNN language models on the extracted opcode n-gram sequences of binaries to characterize the developer's programming style. The RNN encoding layer extracts the context representation of the previous opcode n-gram units, and the linear decoding layer estimates the probability distribution of the next unit. In our experiments, we found that setting the opcode n-gram unit multiple hops ahead as the prediction target of the language model generates more distinguishable author-specific styles, because increasing the task difficulty encourages the model to dig out more robust programming patterns from the training samples.
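Constructing training pairs with a multi-hop target can be sketched as follows (the exact context fed to the RNN is an assumption; here we use the full prefix, and \texttt{hops=1} recovers the standard next-unit language model):

```python
def lm_pairs(units, hops):
    """Build (context, target) training pairs where the target is the unit
    `hops` positions ahead of the current one."""
    return [(units[:i + 1], units[i + hops])
            for i in range(len(units) - hops)]

pairs = lm_pairs(["g1", "g2", "g3", "g4", "g5"], hops=3)
print(pairs[0])
# (['g1'], 'g4')
```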
\subsection{External Language Model Pre-training}
The collected binaries of candidate programmers are insufficient for training RNN language models with a large number of parameters. Note that traditional text language models are usually trained on millions of tokens, but our collected samples may only contain a few thousand opcode n-gram units. To tackle this problem, we first pre-train a common RNN language model on the opcode n-gram sequences extracted from a large-scale disassembled external binary corpus. The pre-trained model, with an opcode n-gram embedding layer, an LSTM encoding layer, and a linear decoding layer, contains the general programming features of these wild binary files. It will be used to initialize the parameters of candidate programmers' language models.
\subsection{Candidate Programmer Style Characterization}
\subsubsection{Mixture-of-shared language model}
We use the collected binaries to train an author-specific language model for each candidate programmer. As defined in section \uppercase\expandafter{\romannumeral2}, there are $n$ programmers in the candidate set $P$ = ($p_1$, $p_2$, ..., $p_n$). Each programmer $p_i$ has a set of binary samples $B_i$ = ($b_{i_1}$, $b_{i_2}$, ... , $b_{i_m}$). We process each binary in $B_i$ as control-flow aware opcode n-gram sequences, and each sequence is represented as $G_i$ = ($g_{i_1}$, $g_{i_2}$,..., $g_{i_l}$). The processed sequences of the programmers' binary samples are grouped into input mini-batches to train the corresponding language models.
We utilize the mixture-of-shared architecture elaborated in section \uppercase\expandafter{\romannumeral3} to train the binary code language models of all candidate programmers. We first deploy a shared opcode n-gram embedding layer to convert the input sequence $G_i$ into a vector sequence $\mathbf{E_i}$ = ($\mathbf{e}_{i_1}$, $\mathbf{e}_{i_2}$, \ldots, $\mathbf{e}_{i_l}$). Then we use multiple shared RNN encoders with the same structure to encode common programming patterns from different views. The $j$-th shared RNN encodes the serialized context information into hidden state vectors $\mathbf{H_i}$ = ($\mathbf{h}_{i_1}^{(j)}$, $\mathbf{h}_{i_2}^{(j)}$, \ldots, $\mathbf{h}_{i_l}^{(j)}$). Next, for each developer, we set up a separate gate layer that produces a weighted aggregation of the hidden states generated by the multiple RNNs, as in Equation 7. It can learn the developers' combination preferences over universal programming patterns and form their personal styles on this basis. Finally, we feed the mixed context representation $\mathbf{R_i}$ = ($\mathbf{r}_{i_1}$, $\mathbf{r}_{i_2}$, \ldots, $\mathbf{r}_{i_l}$) into $p_i$'s independent linear decoder with the \emph{softmax} function to estimate the subsequent units. The author-specific decoder is trained on the developer's own samples.
We initialize the parameters of all shared layers and author-specific decoders by the language model pre-trained on the external binary code corpus. The loss $\operatorname{L}_{p_i}$ of programmer $p_i$'s language model is the average cross-entropy loss over the mini-batches constructed from their binary samples. The overall loss of our mixture-of-shared language model is the weighted sum of all programmers' losses, where $w_i$ denotes the weight of $\operatorname{L}_{p_i}$:
\vspace*{-0.3\baselineskip}
\begin{equation}
\operatorname{L}_{lm} = \sum_{i=1}^{n} w_i \cdot \operatorname{L}_{p_i}
\end{equation}
\subsubsection{Optimization pipeline}
The optimization pipeline of the mixture-of-shared language model is divided into a joint training phase and a fine-tuning phase to learn generic programming patterns and author-specific stylistic features. During the joint training phase, we randomly select a programmer's mini-batches as the input of each iteration, optimizing the shared layers and the corresponding specific gate layers and linear decoders. Then in the fine-tuning phase, we freeze the optimized shared embedding layer and LSTM encoders, as well as the gate layers, and individually fine-tune the decoders of each programmer.
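The two-phase pipeline boils down to toggling which parameter groups receive gradients; below is a minimal sketch with boolean trainable flags (in an actual PyTorch implementation this would correspond to setting \texttt{requires\_grad} on the parameter groups; the group names are hypothetical):

```python
def set_phase(params, phase):
    """Toggle trainable flags: the joint phase updates everything; the
    fine-tuning phase freezes the shared layers and gates and updates only
    the author-specific decoders."""
    for name in params:
        params[name] = True if phase == "joint" else name.startswith("decoder")
    return params

params = {"embedding": True, "encoders": True,
          "gate_p1": True, "decoder_p1": True,
          "gate_p2": True, "decoder_p2": True}
set_phase(params, "fine-tune")
print(params["encoders"], params["decoder_p1"])
# False True
```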
According to our analysis, the programmer's distinguishable unique styles only account for a small percentage of the binary code, while most parts are common programming patterns. Our optimization pipeline can effectively exploit the limited training samples of each candidate to distill the author-specific stylistic features and eliminate additional noise.
\subsubsection{Regularization of specific-decoders}
Extracting opcode n-grams causes the vocabulary of the statistical units to grow significantly as $n$ increases, thereby enlarging the sample space of the language model. In the fine-tuning stage, each programmer's decoder is only optimized on its owner's small set of binary samples. To limit the parameter space of the author-specific language models, we add regularization terms between pairs of decoders to encourage their parameters to be more similar and prevent overfitting on the inadequate training data.
Specifically, when fine-tuning the linear decoder of programmer $p_i$, we calculate the $l_1$ distance between its parameters $\mathbf{W}_{p_i}$, $\mathbf{b}_{p_i}$ and the parameters of the other programmers' decoders as the regularization loss. We average the regularization losses of all programmers as $\operatorname{L_{reg}}$ and add it to the overall language model loss $\operatorname{L_{lm}}$, where $\lambda$ is the weight coefficient of $\operatorname{L_{reg}}$:
\vspace*{-0.3\baselineskip}
\begin{equation}
\operatorname{L_{reg}}=\sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n}\left\|\mathbf{W}_{p_i}-\mathbf{W}_{p_j}\right\|_{1}+\left\|\mathbf{b}_{p_i}-\mathbf{b}_{p_j}\right\|_{1}
\end{equation}
\vspace*{-0.3\baselineskip}
\begin{equation}
\operatorname{L}_{total} = \operatorname{L}_{lm} + \lambda \cdot \operatorname{L_{reg}}
\vspace*{-0.3\baselineskip}
\end{equation}
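The pairwise $l_1$ regularization can be written out directly as a sanity check; representing each decoder as flat parameter lists is a simplification of the real weight matrices:

```python
def l1_reg(decoders):
    """Pairwise l1 distance between all author-specific decoder parameters.
    decoders: list of (W, b) where W and b are flat lists of floats."""
    total = 0.0
    n = len(decoders)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            Wi, bi = decoders[i]
            Wj, bj = decoders[j]
            total += sum(abs(a - b) for a, b in zip(Wi, Wj))
            total += sum(abs(a - b) for a, b in zip(bi, bj))
    return total

print(l1_reg([([1.0, 2.0], [0.0]), ([0.0, 0.0], [1.0])]))
# 8.0
```

Each unordered pair is counted twice by the double sum, matching the equation above; the weight $\lambda$ then scales this term before it is added to $\operatorname{L_{lm}}$.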
\subsection{Binary Authorship Verification}
In the binary authorship verification phase, for a verification pair $\langle p_i, b_j \rangle$, our goal is to determine whether the anonymous binary sample $b_j$ was developed by programmer $p_i$. We first process $b_j$ in the same way as the binary files of the training phase: disassemble it into a CFG structure and extract the opcode n-gram sequences of the basic block bi-grams. Then we feed the processed sequences into the trained mixture-of-shared language models to obtain the corresponding loss array $\mathbf{L}^{(j)}$ = [${l_1}^{(j)}$, ${l_2}^{(j)}$, ..., ${l_i}^{(j)}$, ..., ${l_n}^{(j)}$] of $b_j$ on the binary code language models of all programmers in the candidate set $P$. We jointly determine the verification result based on the loss value ${l_i}^{(j)}$ of $b_j$ on $p_i$'s language model and the loss array $\mathbf{L}^{(j)}$. If ${l_i}^{(j)}$ is relatively small, it is more likely that $b_j$ was developed by programmer $p_i$.
To be more specific, we calculate the average value $\operatorname{Avg}(\mathbf{L}^{(j)})$ of $\mathbf{L}^{(j)}$ and its variance $\operatorname{Var}(\mathbf{L}^{(j)})$ to normalize ${l_i}^{(j)}$ and obtain the verification score $s(i, j)$ of pair $\langle p_i, b_j \rangle$:
\vspace*{-0.3\baselineskip}
\begin{equation}
s(i, j)=\frac{{l_i}^{(j)}-\operatorname{Avg}(\mathbf{L}^{(j)})}{\operatorname{Var}(\mathbf{L}^{(j)})}
\vspace*{-0.3\baselineskip}
\end{equation}
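The normalization above is straightforward to sketch; using the population variance here is an assumption about the exact estimator:

```python
def verification_score(losses, i):
    """Normalize programmer p_i's loss by the statistics of the loss array
    L^(j) over all candidate models; lower scores mean the sample is more
    likely authored by p_i."""
    n = len(losses)
    avg = sum(losses) / n
    var = sum((l - avg) ** 2 for l in losses) / n
    return (losses[i] - avg) / var

scores = [verification_score([2.0, 4.0, 6.0], i) for i in range(3)]
print(scores)
# [-0.75, 0.0, 0.75]
```

Normalizing against all candidates makes the score threshold-free and comparable across test binaries of very different sizes and loss scales.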
\begin{table*}
\caption{Main results on different datasets}
\begin{threeparttable}
\label{baseline_res}
\centering
\begin{tabular}{p{40pt}<{\centering}p{125pt}<{\centering}|p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}}
\toprule
\multirow{2}{*}{Approach} & \multirow{2}{*}{Setting} & \multicolumn{2}{c}{GCJ-C (N = 50, m = 5)} & \multicolumn{2}{c}{GCJ-C++ (N = 100, m = 5)} & \multicolumn{2}{c}{Codeforces (N = 100, m = 20)} \\
& & AUC-ROC & AP & AUC-ROC & AP & AUC-ROC & AP \\
\midrule
\multirow{4}{*}{\texttt{Sim-Base}} & with full features & 0.8448 & \underline{0.8656} & \underline{0.7820} & 0.7950 & 0.7154 & 0.7209 \\
& w/o CFG n-grams & \underline{0.8526} & 0.8526 & 0.7497 & 0.7596 & \underline{0.7159} & \underline{0.7267} \\
& with opcode n-grams & 0.7587 & 0.7915 & 0.7718 & 0.7882 & 0.6132 & 0.6186 \\
& with opcode n-grams + CFG n-grams & 0.7801 & 0.7979 & 0.7801 & \underline{0.7979} & 0.6239 & 0.6264 \\
\midrule
\multirow{2}{*}{\texttt{Con-Base}} & with feature selection & 0.7702 & 0.7719 & 0.6011 & 0.6014 & 0.6184 & 0.6155 \\
& w/o feature selection & 0.7623 & 0.6249 & 0.6120 & 0.6153 & 0.7613 & 0.6209 \\
\midrule
\multicolumn{2}{c|}{\textbf{BinMLM}} & \textbf{0.8929} & \textbf{0.9135} & \textbf{0.8414} & \textbf{0.8588} & \textbf{0.8552} & \textbf{0.8640} \\
\multicolumn{2}{c|}{$\varDelta$ to the best results of baselines} & + 4.73\% & + 5.53\% & + 7.59\% & + 7.63\% & + 19.46\% & + 18.89\% \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[*] GCJ is the abbreviation of Google Code Jam dataset. $N$ denotes the number of candidate programmers, $m$ is the number of support samples. The underlines indicate the best results of baselines. Bold indicates the overall best results.
\end{tablenotes}
\end{threeparttable}
\label{main_result_table}
\end{table*}
\section{Evaluation}
In this section, we conduct extensive experiments to evaluate our proposed binary authorship verification framework, BinMLM. First, we describe the dataset and evaluation metrics of our experiments (section \uppercase\expandafter{\romannumeral5}.A). Next, we compare BinMLM with baselines built on the state-of-the-art feature set (section \uppercase\expandafter{\romannumeral5}.B.1) and evaluate BinMLM on datasets of different numbers of candidate programmers (section \uppercase\expandafter{\romannumeral5}.B.2) and different numbers of collected samples (section \uppercase\expandafter{\romannumeral5}.B.3). Then we evaluate how the designed core components contribute to performance improvements (section \uppercase\expandafter{\romannumeral5}.C). Furthermore, we construct synthetic datasets with different noise proportions to evaluate BinMLM in multi-programmer collaboration scenarios (section \uppercase\expandafter{\romannumeral5}.D). Finally, we explore the organization-level authorship verification ability of BinMLM on a real-world APT malware dataset (section \uppercase\expandafter{\romannumeral5}.E).
\begin{figure*}[!t]
\centering
\includegraphics[width=14cm]{updated2_multi_subplots_authornum_linechart.pdf}
\caption{Results under different numbers of programmers}
\label{diff_programmers}
\vspace*{-1.2\baselineskip}
\end{figure*}
\subsection{Experiment Setup}
\subsubsection{Datasets}
We use the following data sources for evaluation:
\textbf{(a)} Google Code Jam (GCJ): GCJ is an annual programming competition organized by Google \cite{gcj}. We collect participants' solutions written in C and C++ language from 2008 to 2020.
\textbf{(b)} Codeforces: We collect open submissions written in C++ from the competitive programming contest hosted by the online judge website Codeforces \cite{codeforces}.
\textbf{(c)} APT-Malware: We collect malware samples of ten real-world APT groups from open source threat intelligence reports to evaluate BinMLM in security-related authorship verification scenarios.
For each data source, we construct the corresponding dataset according to the following rules. We randomly select $N$ programmers to construct the candidate set $P$. Each programmer has a small set of $m$ binaries as training samples with known annotations. In the verification phase, we sample anonymous binaries to construct authorship verification pairs. For a positive pair, we randomly select an unseen binary sample of the candidate programmer. For negative pairs, half of the test samples are developed by the other $N-1$ programmers in the candidate set, and the rest by randomly sampled developers who do not belong to the set. The ratio of positive to negative verification pairs is 1:1.
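The pair-sampling rules above can be sketched as follows; alternating the negative source per candidate is one simple way to keep half of the negatives in-set under the stated 1:1 ratio (the data layout and names are hypothetical):

```python
import random

def build_pairs(candidates, outsiders, seed=0):
    """Balanced verification pairs <p_i, b_j>: each positive uses an unseen
    binary of p_i; negatives alternate between other in-set candidates and
    developers outside the set."""
    rng = random.Random(seed)
    names = list(candidates)
    pairs = []
    for k, p in enumerate(names):
        pairs.append((p, rng.choice(candidates[p]), 1))      # positive
        if k % 2 == 0:  # half of the negatives: other in-set candidates
            other = rng.choice([q for q in names if q != p])
            neg = rng.choice(candidates[other])
        else:           # the other half: developers outside the set
            neg = rng.choice(outsiders)
        pairs.append((p, neg, 0))
    return pairs
```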
\subsubsection{Evaluation metrics}
We use two complementary metrics to evaluate BinMLM: AUC-ROC (Area under ROC curve) and AP (average precision). These two indicators can evaluate the performance of the binary authorship verification approaches without selecting specific thresholds.
\subsubsection{Implementation Details}
We implement our prototype with the PyTorch framework and use the disassembler \texttt{radare2} \cite{radare2} to extract the CFG and the instruction sequences in the corresponding basic blocks. The parameters of our neural network are optimized by Adam \cite{kingma2014adam} with a learning rate of 1e-2, and we truncate the gradient of the language model after 20 time steps. The hidden dimension of the embedding layer and the LSTM encoder is set to 64. The number of shared encoders is set to 5. The weight of the regularization loss is set to 1e-4. Under this hyperparameter setting, BinMLM achieves the best results on our development sets.
\begin{figure*}[!t]
\centering
\includegraphics[width=14cm]{updated2_multi_subplots_samplernum_linechart.pdf}
\caption{Results under different numbers of binary samples}
\label{diff_samples}
\vspace*{-1.2\baselineskip}
\end{figure*}
\subsection{Main results on different datasets}
\subsubsection{Comparison with Baselines}
Among the six existing binary authorship attribution approaches listed in table \ref{qualitative_sota_table}, only \cite{rosenblum2011wrote} and \cite{caliskancoding} can deal with programmers from the wild, by stylistic similarity and classifier confidence score, respectively. We build our baselines on these two strategies and evaluate them on the binary authorship verification task. We use the state-of-the-art feature set proposed by Caliskan-Islam \emph{et al.} \cite{caliskancoding}. Specifically, they extracted:
\textbf{(a)} Instruction traces of disassembly code, including instruction uni-grams, bi-grams, and tri-grams in a single line of assembly, and 6-grams spanning two consecutive assembly lines.
\textbf{(b)} Uni-grams and bi-grams of basic blocks extracted from the CFG of the binary sample.
\textbf{(c)} Abstract syntax tree (AST) node features extracted from the decompiled code.
Considering the time cost and the unstable success rate of real-world binary decompilation, we use the first two types of features above, which can be easily constructed from disassembly code and were shown to be important according to the feature selection results of the original paper.
We construct two groups of baselines based on the feature set. The first (\texttt{Sim-Base}) is built on similarity measures. For a known programmer $p_i$, we construct the feature vector of each collected sample and take the average as the programmer's stylistic representation. For an anonymous sample $b_j$, we build the feature vector in the same way and calculate the verification scores of the pair $\langle p_i, b_j \rangle$ by the cosine similarity between the corresponding vectors. We also use our proposed flow-aware opcode n-grams to construct variant feature sets.
The second group of baselines (\texttt{Con-Base}) is implemented based on the experiment designed by Caliskan-Islam \emph{et al.} \cite{caliskancoding} for the open-world scenario, in which the author of the test binary sample may not belong to the known programmer set. It exploits the confidence score of the classifier to decide whether to accept or reject the result. For a verification pair $\langle p_i, b_j \rangle$, we take the percentage of trees in the random forest that voted for $p_i$ when classifying $b_j$ as the confidence score, and then use the normalized margin between the highest and second-highest confidences as the final verification score. Our implementation is consistent with the original paper.
Table \ref{main_result_table} shows the binary authorship verification performance of \texttt{Sim-Base}, \texttt{Con-Base}, and BinMLM. For comprehensive comparisons, we evaluate \texttt{Sim-Base} with different feature combinations and \texttt{Con-Base} with and without feature selection. Overall, \texttt{Sim-Base} performs better than \texttt{Con-Base}, and BinMLM significantly surpasses both with 4.73\% $\sim$ 19.46\% improvement.
\subsubsection{Performance under different numbers of programmers}
In this section, we evaluate the performance of BinMLM with different candidate set sizes. Increasing the number of candidate programmers tests BinMLM's ability to characterize diverse developers' styles. Figure \ref{diff_programmers} compares BinMLM with the two baselines \texttt{Con-Base} and \texttt{Sim-Base}. We gradually increase the number of candidates from 50 to 500 on the GCJ-C, GCJ-C++, and Codeforces datasets.
It can be seen that when the number of candidate programmers increases, the performance of \texttt{Con-Base} implemented by the classifier confidence score decreases significantly on GCJ-C++ and Codeforces datasets, while BinMLM and \texttt{Sim-Base} remain relatively stable. When the number of programmers is in the range of 300 to 500, the AUC-ROC of \texttt{Con-Base} may be lower than 0.6, indicating a poor authorship verification ability. The AUC-ROC of \texttt{Sim-Base} ranges from 0.7231 to 0.8821. BinMLM significantly outperforms the two baselines under each setting. When the candidate set size is 500, the AUC-ROC values of BinMLM on the three datasets reach 0.9098, 0.8610, and 0.8279, respectively. This proves that our proposed author-specific programming style characterization method has strong versatility and can effectively perform the binary authorship verification for large-scale candidate programmers.
\subsubsection{Performance under different numbers of binary samples}
In real-world software forensics scenarios, especially security-related applications, the collected binaries are often insufficient, and annotating the ground truth is highly dependent on expert knowledge. Limited samples cannot adequately reflect the author's programming behavior when implementing different functions. We evaluate BinMLM and the baselines with different numbers of support samples per candidate author. Similar to \cite{abuhamad2018large}, we merge the GCJ solutions with the same and complex participant ID from different years to increase the sample scale for comprehensive evaluations. Figure \ref{diff_samples} shows the comparison results. With the increase of the sample size, the performance of BinMLM improves steadily, because the shared encoders and author-specific layers of the mixture-of-shared architecture can be trained better, and each developer's style can be extracted more accurately. However, more samples do not benefit \texttt{Con-Base} and \texttt{Sim-Base} as much, because the diversified functions of each author's programs may confuse \texttt{Con-Base}'s classifier and make the similarity between the candidate author's profile and the test sample features unstable.
When the number of samples is very small, BinMLM still has significant advantages over \texttt{Con-Base} and \texttt{Sim-Base}. With only five support binaries, the AUC-ROC values of BinMLM on the three datasets are 0.8929, 0.8341, and 0.7578, respectively. On the Codeforces dataset with the more challenging setting of 100 authors and five support samples, the accuracy of BinMLM drops slightly, but still surpasses \texttt{Con-Base} and \texttt{Sim-Base} by a large margin. This shows that BinMLM can accurately extract the programming styles of candidate authors from very limited binary samples.
\begin{table}
\caption{Results of different opcode n-grams and different hops.}
\label{opcode_ngram_table}
\centering
\begin{tabular}{p{100pt}<{\centering}|p{32pt}<{\centering}p{32pt}<{\centering}p{32pt}<{\centering}}
\toprule
Setting & GCJ-C & GCJ-C++ & Codeforces \\
\midrule
Opcode 1-gram & 0.8216 & 0.6935 & 0.7857 \\
Opcode 2-gram & 0.8592 & 0.7529 & 0.8290\\
Opcode 3-gram & 0.8671 & 0.8053 & 0.8519 \\
Opcode 4-gram & 0.8816 & 0.8235 & 0.8561 \\
\midrule
Predict target units of 1-hop & 0.8826 & 0.8204 & 0.8429 \\
Predict target units of 2-hops & 0.8869 & 0.8285 & 0.8514 \\
\midrule
\textbf{BinMLM} (5-gram, 3-hops)& \textbf{0.8929} & \textbf{0.8414} & \textbf{0.8552}\\
\bottomrule
\end{tabular}
\vspace*{-1.4\baselineskip}
\end{table}
\begin{table*}
\caption{Effectiveness of each component in BinMLM framework}
\label{ablation_studies}
\centering
\begin{tabular}{p{120pt}<{\centering}|p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}p{40pt}<{\centering}}
\toprule
\multirow{2}{*}{Approach} & \multicolumn{2}{c}{GCJ-C} & \multicolumn{2}{c}{GCJ-C++} & \multicolumn{2}{c}{Codeforces} \\
& AUC-ROC & AP & AUC-ROC & AP & AUC-ROC & AP \\
\midrule
\emph{MoS + OPT + REG} (\textbf{BinMLM}) & \textbf{0.8929} & \textbf{0.9135} & \textbf{0.8414} & \textbf{0.8588} & \textbf{0.8552} & \textbf{0.8640} \\
\emph{MoS + OPT} & 0.8632 & 0.8856 & 0.7682 & 0.7670 & 0.8451 & 0.8523 \\
\emph{MoS} & 0.8703 & 0.8901 & 0.7682 & 0.7666 & 0.8421 & 0.8495 \\
\emph{Single-encoder} & 0.8578 & 0.8809 & 0.7528 & 0.7483 & 0.8380 & 0.8453 \\
\emph{Naive} & 0.8492 & 0.8760 & 0.7357 & 0.7397 & 0.8273 & 0.8309 \\
\bottomrule
\end{tabular}
\vspace*{-1.0\baselineskip}
\end{table*}
\begin{table*}
\caption{Results on multi-programmer collaboration datasets}
\label{multiauthor_res_table}
\centering
\begin{tabular}{p{55pt}<{\centering}|p{38pt}<{\centering}p{38pt}<{\centering}p{38pt}<{\centering}|p{38pt}<{\centering}p{38pt}<{\centering}p{38pt}<{\centering}|p{38pt}<{\centering}p{38pt}<{\centering}p{38pt}<{\centering}}
\toprule
\multirow{2}{*}{Major Proportion} & \multicolumn{3}{c|}{GCJ-C (N = 50, m = 8)} & \multicolumn{3}{c|}{GCJ-C++ (N = 100, m = 10)} & \multicolumn{3}{c}{Codeforces (N = 100, m = 30)} \\
& \texttt{Con-Base} & \texttt{Sim-Base} & BinMLM & \texttt{Con-Base} & \texttt{Sim-Base} & BinMLM & \texttt{Con-Base} & \texttt{Sim-Base} & BinMLM \\
\midrule
1.0 & 0.8100 & 0.8569 & \textbf{0.9201} & 0.6416 & 0.7779 & \textbf{0.8652} & 0.6442 & 0.7360 & \textbf{0.8612} \\
0.9 & 0.7786 & 0.8497 & \textbf{0.9020} & 0.6347 & 0.7778 & \textbf{0.8419} & 0.6241 & 0.6347 & \textbf{0.8376} \\
0.8 & 0.7769 & 0.8621 & \textbf{0.9088} & 0.6063 & 0.7642 & \textbf{0.8344} & 0.6083 & 0.7090 & \textbf{0.8310} \\
0.7 & 0.7677 & 0.8264 & \textbf{0.9049} & 0.6013 & 0.7628 & \textbf{0.8210} & 0.6170 & 0.6938 & \textbf{0.8324} \\
0.6 & 0.7455 & 0.7949 & \textbf{0.8736} & 0.6051 & 0.7549 & \textbf{0.8346} & 0.5972 & 0.6873 & \textbf{0.8048} \\
\bottomrule
\end{tabular}
\vspace*{-1.4\baselineskip}
\end{table*}
\subsection{Ablation studies of BinMLM}
We design three groups of ablation studies to evaluate the effects of BinMLM's core components. The dataset settings of our ablation studies are the same as section V.B.1.
The first group of ablation studies compares opcode n-grams with different values of $n$. The results in table \ref{opcode_ngram_table} show that performance is positively correlated with the value of $n$ in the range of 1 to 5. When $n$ exceeds 5, we still observe a slight improvement, but we do not increase it further due to limited time and computing resources.
The second group of ablation studies evaluates the performance when predicting subsequent units of different hops. As shown in table \ref{opcode_ngram_table}, predicting target units more hops ahead can improve the verification performance. We finally set $n$ to 5 and the number of hops to 3.
The third group of ablation studies evaluates the contribution of each part of our model. We set up five variant models for comparison: \textbf{(a)} \emph{MoS (mixture-of-shared architecture) + OPT + REG} (original BinMLM): The \emph{MoS} architecture of BinMLM contains multiple shared encoders to extract common programming patterns from different views, and a separate gate layer for each programmer to learn the combination weights of the shared representations. We train \emph{MoS} with the optimization pipeline combining joint training and author-specific fine-tuning. Then we add the regularization loss in the fine-tuning stage to encourage the parameters of the specific linear decoders to be more similar and prevent overfitting.
\textbf{(b)} \emph{MoS + OPT}: Remove the regularization loss in the fine-tuning stage.
\textbf{(c)} \emph{MoS}: Remove the optimization pipeline of BinMLM.
\textbf{(d)} \emph{Single-encoder}: Set up a single RNN encoder shared among all programmers.
\textbf{(e)} \emph{Naive} architecture: Train a separate RNN language model for each programmer.
Table \ref{ablation_studies} shows the performance of BinMLM and its variant models on the three datasets. The results prove that the mixture-of-shared architecture, optimization pipeline, and specially designed regularization loss significantly improve BinMLM on the binary authorship verification task.
\subsection{Robustness on multi-programmer collaboration datasets}
Modern software is usually developed cooperatively. Code snippets of collaborators introduce noise when characterizing the major developer's style. We construct synthetic datasets to evaluate BinMLM in multi-programmer collaboration scenarios. Gong \emph{et al.} \cite{gong2021code} conducted an empirical study on open source software projects and found that for most program files, about 80\% of the code lines are contributed by a single programmer. We therefore assume each program is completed by a major contributor and several other developers. For a verification pair $\langle p_i, b_j \rangle$, we randomly select two other programmers as collaborators and mix their program fragments into the sample $b_j$. We adjust the proportion of the collaborators' fragments to control the difficulty of the task, ensuring that the proportion of the program developed by the original author of $b_j$ remains above 60\%.
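One way to synthesize such mixed samples is sketched below; it operates on extracted unit sequences rather than raw program fragments, a simplification of the construction described above:

```python
import random

def mix_collaborators(major_units, collab_units, major_prop, seed=0):
    """Replace a fraction (1 - major_prop) of the major author's units with
    fragments from collaborators, keeping the major contributor's share at
    or above the given proportion (>= 0.6 in our setting)."""
    assert major_prop >= 0.6
    rng = random.Random(seed)
    n_replace = round(len(major_units) * (1 - major_prop))
    mixed = list(major_units)
    for idx in rng.sample(range(len(mixed)), n_replace):
        mixed[idx] = rng.choice(collab_units)
    return mixed

sample = mix_collaborators(["M"] * 10, ["C1", "C2"], major_prop=0.8)
print(sample.count("M"))
# 8
```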
Table \ref{multiauthor_res_table} shows the results on the multi-programmer collaborative datasets. As the proportion of collaborators' programs increases, modeling the style of the major developer becomes harder, and the performance of all three approaches decreases correspondingly. But under each setting, BinMLM still significantly outperforms the baselines based on similarity metrics and classifier confidence scores. The AUC-ROC of BinMLM remains above 0.8, and the decline ranges from 0.0306 to 0.0564. This proves that our programmer style characterization method, based on mixture-of-shared language models and the specific optimization pipeline, is more robust when dealing with the mixed programming styles of collaborators.
\subsection{Case study in a real-world scenario}
In this section, we provide a case study to demonstrate the effectiveness of BinMLM in real-world scenarios. Binary analysts care about the common coding habits of authors in a specific organization, but existing work mainly conducts manual analysis of individual cases \cite{marquis2015big} \cite{alrabaee2018leveraging}.
We perform systematic experiments on the authorship verification of malware samples produced by specific APT (Advanced Persistent Threat) attack groups. APT attacks target particular companies or nations for political or commercial motivations. The malware samples adopted in the attacks are likely to contain identity information of the hacker team.
We collect malware from public threat intelligence sources and construct the APT-Malware dataset. The malware comes from ten well-known APT groups: \emph{Equation Group}, \emph{Gorgon Group}, \emph{DarkHotel}, \emph{Energetic Bear}, \emph{Winnti}, \emph{APT 10}, \emph{APT 21}, \emph{APT 28}, \emph{APT 29}, and \emph{APT 30}. Compared with the GCJ and Codeforces datasets, the binary samples of the APT-Malware dataset are much more complicated, and the label granularity is coarser, reflecting that the samples were developed by multiple collaborators within specific organizations.
To perform staged attacks, malware belonging to the same APT group may implement very different functions, and some samples are intentionally obfuscated to prevent detection by anti-virus products, which makes it more difficult to locate the attacker's identity. We randomly select five support samples for each attack group to train the APT-specific language models. Considering data imbalance in real-world scenarios, where the proportion of negative verification pairs should be relatively larger, we set the ratio of the negative pairs to positive pairs in the range of 1 to 8. Half of the test samples in negative pairs are from the benign developers of the GCJ dataset.
Figure \ref{apt_malware_res} shows the authorship verification performance of BinMLM on the APT-Malware dataset under different negative to positive ratios. We can see that the AUC-ROC value of BinMLM is relatively stable and remains in the range of 0.7736 to 0.8172, while as the proportion of negative verification pairs increases, the AP value gradually decreases since it is more sensitive to the skewed datasets. When the ratio of the positive and negative pairs is 1:5, the AUC-ROC and AP of BinMLM are 0.8172 and 0.7617, respectively. Overall, BinMLM can extract programming style information related to identity characteristics from APT groups' malware samples and achieve organization-level authorship verification with decent performance, providing valuable auxiliary evidence for identifying the organization behind the APT event.
\begin{figure}[!t]
\centering
\includegraphics[width=7.7cm]{doubley_bar_aptmalware_negratio.pdf}
\caption{Results on the APT-Malware dataset}
\label{apt_malware_res}
\vspace*{-1.4\baselineskip}
\end{figure}
\section{Related Work}
\subsection{Source code authorship analysis}
Source code contains rich style characteristics which can be very helpful in software forensics, such as plagiarism detection and copyright investigation \cite{yang2017authorship} \cite{burrows2014comparing} \cite{caliskan2015anonymizing} \cite{alsulami2017source} \cite{kang2019assessing} \cite{bogomolov2021authorship} \cite{gong2021code} \cite{wang20203}. For source code de-anonymization, Caliskan-Islam \emph{et al.} \cite{caliskan2015anonymizing} constructed a dynamic code stylometry feature set including lexical, layout, and syntactic features. Abuhamad \emph{et al.} \cite{abuhamad2018large} selected code n-grams with high TF-IDF scores and fed them into an RNN network to extract deep representations. Alsulami \emph{et al.} \cite{alsulami2017source} converted the AST into multiple subtrees and applied a hierarchical bi-LSTM to extract syntax features. Bogomolov \emph{et al.} \cite{bogomolov2021authorship} designed two language-agnostic models that work with path-based code fragment representations. Some researchers have performed authorship identification on the Android platform \cite{kalgutkar2018android} \cite{kalgutkar2018android2} \cite{wang20203}. Wang \cite{wang20203} divided packages of Android applications into modules and built the author's coding habit fingerprint from the main module.
\subsection{Binary authorship analysis}
Authorship analysis targeting binaries is significant because analysts cannot obtain source code in many real-world scenarios \cite{marquis2015big} \cite{rosenblum2011wrote} \cite{alrabaee2014oba2} \cite{alrabaee2018leveraging} \cite{alrabaee2019bineye}. Rosenblum \emph{et al.} \cite{rosenblum2011wrote} built author stylistic templates on features like instruction idioms and CFG graphlets. Alrabaee \emph{et al.} \cite{alrabaee2018leveraging} first filtered out irrelevant compiler functions, then characterized the author's habits contained in user-defined functions through programming choices. Caliskan-Islam \emph{et al.} \cite{caliskancoding} extracted multi-source style features from disassembly code and the AST of decompiled code. Meng \emph{et al.} \cite{meng2016fine} conducted an empirical analysis on open source software and concluded that binary authorship attribution should be performed at the basic block granularity. The manual feature engineering processes of the above approaches are dataset-dependent and rely on domain knowledge. BinEye \cite{alrabaee2019bineye} proposed a deep learning-based binary authorship attribution model, which applies three CNNs to binary gray-scale images, API call sequences, and opcode sequences.
\subsection{Text authorship analysis}
Text authorship analysis has a longer research history \cite{bagnall2015author} \cite{ouyang2020gated} \cite{kalgutkar2019code} \cite{ding2017learning} \cite{bevendorff2019bias}. Our task setting is inspired by the open-world text stylometry problems defined by Stolerman \emph{et al.} \cite{stolerman2013classify} and the shared PAN series of competitions \cite{pannetwork}. For text authorship verification, Bagnall \cite{bagnall2015author} designed a char-level language model with a single encoder and independent softmax groups to model authors' writing styles, which won first place in the PAN-2015 competition. Ouyang \emph{et al.} \cite{ouyang2020gated} found that a POS (Part of Speech)-level language model can better characterize authors' syntactic styles, and designed a gated unit in the decoding stage to integrate common writing patterns and specific styles.
\section{Conclusion}
In this paper, we formulate a practical binary authorship verification task, which can handle binary samples from unknown programmers and accurately reflects the real-life experience of software forensic experts. We implement a binary authorship verification framework, BinMLM, which trains author-specific language models on flow-aware opcode n-gram sequences to automatically characterize developers' programming styles. We exploit a mixture-of-shared architecture to make full use of the limited training samples and model the developer's combined preference for multiple universal programming patterns. Through an effective optimization pipeline, BinMLM can separate the programmer's unique style from a large portion of general patterns. Extensive experiments show that BinMLM outperforms baselines built on the state-of-the-art feature set by a large margin and remains robust in multi-author collaboration scenarios and organization-level verification on real-world APT malware datasets.
\section*{Acknowledgment}
The authors would like to thank the anonymous reviewers for their insightful comments. This work was supported by the National Natural Science Foundation of China under Grant U1736218. The corresponding author is Yongzheng Zhang.
\small
\bibliographystyle{IEEEtran}
\section{Introduction}
Face detection is a long-standing problem in computer
vision with extensive applications including face alignment, face analysis, and face recognition. Starting from the pioneering work of Viola-Jones~\cite{DBLP:journals/ijcv/ViolaJ04}, face detection has made great progress, and the performance on several well-known datasets has improved consistently, even tending to saturate. Further improving face detection performance has thus become a challenging issue. In our opinion, there remains room for improvement in two aspects: (a) \emph{recall efficiency}: the number of false positives needs to be reduced at high recall rates; (b) \emph{location accuracy}: the accuracy of bounding box locations needs to be improved. These two problems are elaborated as follows.
\begin{figure}[t]
\centering
\subfigure[Effect on Class Imbalance]{
\label{fig:sc}
\includegraphics[width=0.45\linewidth]{sample_change.jpg}}
\subfigure[Recall Efficiency]{
\label{fig:prc}
\includegraphics[width=0.45\linewidth]{pr_comp.jpg}}
\subfigure[Adjusted Anchor]{
\label{fig:aa}
\includegraphics[width=0.45\linewidth]{adjusted_anchor_crop.jpg}}
\subfigure[Location Accuracy]{
\label{fig:apc}
\includegraphics[width=0.45\linewidth]{ap_change.jpg}}
\vspace{-3.0mm}
\caption{The effects of STC and STR on recall efficiency and location accuracy. (a) The STC and STR increase the positives/negatives ratio by about $38$ and $3$ times respectively, (b) which improve the precision by about $20\%$ at high recall rates. (c) The STR provides better initialization for the subsequent regressor, (d) which produces more accurate locations, {\em i.e.}, as the IoU threshold increases, the AP gap gradually increases.}
\vspace{-4.0mm}
\label{fig:effect-stc-str}
\end{figure}
On the one hand, the average precision (AP) of current face detection algorithms is already very high, but the precision is not high enough at high recall rates. For example, as shown in Figure \ref{fig:prc} for RetinaNet~\cite{DBLP:conf/iccv/LinPRK17}, the precision is only about $50\%$ (half of the detections are false positives) when the recall rate is $90\%$, which we define as \emph{low recall efficiency}. Reflected in the shape of the Precision-Recall curve, it extends far enough to the right but is not steep enough. The reason is that existing algorithms pay more attention to pursuing a high recall rate while ignoring the problem of excessive false positives. Consider anchor-based face detectors, which detect faces by classifying and regressing a series of preset anchors generated by regularly tiling a collection of boxes with different scales and aspect ratios. To detect tiny faces, {\em e.g.}, less than $16\times16$ pixels, it is necessary to tile plenty of small anchors over the image. This improves the recall rate yet causes an extreme class imbalance problem, which is the culprit behind excessive false positives. To address this issue, researchers have proposed several solutions. R-CNN-like detectors~\cite{DBLP:conf/iccv/Girshick15,DBLP:journals/pami/RenHG017} address the class imbalance with a two-stage cascade and sampling heuristics. As for single-shot detectors, RetinaNet proposes the focal loss to focus training on a sparse set of hard examples and down-weight the loss assigned to well-classified examples. RefineDet~\cite{DBLP:journals/corr/abs-1711-06897} addresses this issue by using a preset threshold to filter out negative anchors. However, RetinaNet takes all samples into account, which still leads to quite a few false positives, while RefineDet, although it filters out a large number of simple negative samples, uses hard negative mining in both steps and does not make full use of the negative samples. Thus, the recall efficiency of both can be improved.
On the other hand, the location accuracy in the face detection task is gradually attracting the attention of researchers. Although current evaluation criteria of most face detection datasets~\cite{fddbTech,DBLP:conf/cvpr/YangLLT16} do not focus on the location accuracy, the WIDER Face Challenge\footnote{http://wider-challenge.org} adopts MS COCO~\cite{DBLP:conf/eccv/LinMBHPRDZ14} evaluation criterion, which puts more emphasis on bounding box location accuracy. To visualize this issue, we use different IoU thresholds to evaluate our trained face detector based on RetinaNet on the WIDER FACE dataset. As shown in Figure \ref{fig:apc}, as the IoU threshold increases, the AP drops dramatically, indicating that the accuracy of the bounding box location needs to be improved. To this end, Gidaris et al.~\cite{DBLP:conf/iccv/GidarisK15} propose iterative regression during inference to improve the accuracy. Cascade R-CNN~\cite{DBLP:journals/corr/abs-1712-00726} addresses this issue by cascading R-CNN with different IoU thresholds. RefineDet~\cite{DBLP:journals/corr/abs-1711-06897} applies two-step regression to single-shot detector. However, blindly adding multi-step regression to the specific task ({\em i.e.}, face detection) is often counterproductive.
In this paper, we investigate the effects of two-step classification and regression on different levels of detection layers and propose a novel face detection framework, named the Selective Refinement Network (SRN), which selectively applies two-step classification and regression to specific levels of detection layers. The network structure of SRN is shown in Figure \ref{fig:framework}; it consists of two key modules, named the Selective Two-step Classification (STC) module and the Selective Two-step Regression (STR) module. Specifically, the STC is applied to filter out most simple negative samples (illustrated in Figure \ref{fig:sc}) from the low levels of detection layers, which contain $88.9\%$ of the samples. As shown in Figure \ref{fig:prc}, RetinaNet with STC improves the recall efficiency to a certain extent. On the other hand, the design of STR draws on the cascade idea to coarsely adjust the locations and sizes of anchors (illustrated in Figure \ref{fig:aa}) from the high levels of detection layers to provide better initialization for the subsequent regressor. In addition, we design a Receptive Field Enhancement (RFE) module to provide more diverse receptive fields to better capture extreme-pose faces. Extensive experiments have been conducted on the AFW, PASCAL face, FDDB, and WIDER FACE benchmarks, on which we set new state-of-the-art performance.
In summary, we make the following main contributions to face detection studies:
\begin{itemize}
\item We present a STC module to filter out most simple negative samples from low level layers to reduce the classification search space.
\item We design a STR module to coarsely adjust the locations and sizes of anchors from high level layers to provide better initialization for the subsequent regressor.
\item We introduce a RFE module to provide more diverse receptive fields for detecting extreme-pose faces.
\item We achieve state-of-the-art results on AFW, PASCAL face, FDDB, and WIDER FACE datasets.
\end{itemize}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\linewidth]{framework.jpg}
\caption{Network structure of SRN. It consists of STC, STR, and RFE. STC uses the first-step classifier to filter out most simple negative anchors from low level detection layers to reduce the search space for the second-step classifier. STR applies the first-step regressor to coarsely adjust the locations and sizes of anchors from high level detection layers to provide better initialization for the second-step regressor. RFE provides more diverse receptive fields to better capture extreme-pose faces.
}
\label{fig:framework}
\end{figure*}
\section{Related Work}
Face detection has been a challenging research field since its emergence in the 1990s. Viola and Jones pioneered the use of Haar features and AdaBoost to train a face detector with promising accuracy and efficiency~\cite{DBLP:journals/ijcv/ViolaJ04}, which inspired several different approaches afterwards~\cite{DBLP:journals/pami/LiaoJL16,DBLP:journals/ijcv/BrubakerWSMR08,DBLP:conf/iccv/PhamC07}. Apart from those, another important line of work is the Deformable Part Model (DPM)~\cite{DBLP:conf/eccv/MathiasBPG14,DBLP:conf/cvpr/YanLWL14,DBLP:conf/cvpr/ZhuR12}.
Recently, face detection has been dominated by the CNN-based methods.
CascadeCNN~\cite{DBLP:conf/cvpr/LiLSBH15} improves detection accuracy by training a series of interleaved CNN models, and follow-up work~\cite{DBLP:conf/cvpr/QinYLH16} proposes to jointly train the cascaded CNNs to realize end-to-end optimization. MTCNN~\cite{DBLP:journals/spl/ZhangZLQ16} proposes a joint face detection and alignment method using multi-task cascaded CNNs. Faceness~\cite{DBLP:conf/iccv/YangLLT15} formulates face detection as scoring facial part responses to detect faces under severe occlusion. UnitBox~\cite{DBLP:conf/mm/YuJWCH16} introduces an IoU loss for bounding box prediction. EMO~\cite{zhu2018seeing} proposes an Expected Max Overlapping score to evaluate the quality of anchor matching. SAFD~\cite{hao2017scale} develops a scale proposal stage which automatically normalizes face sizes prior to detection. S$^{2}$AP~\cite{song2018beyond} pays attention to specific scales in the image pyramid and valid locations in each scale layer. PCN~\cite{shi2018real} proposes a cascade-style structure to rotate faces in a coarse-to-fine manner. Recent work~\cite{bai2018finding} designs a novel network to directly generate a clear super-resolution face from a blurry small one.
Additionally, face detection has inherited some achievements from generic object detectors, such as Faster R-CNN~\cite{DBLP:journals/pami/RenHG017}, SSD~\cite{DBLP:conf/eccv/LiuAESRFB16}, FPN~\cite{DBLP:conf/cvpr/LinDGHHB17} and RetinaNet~\cite{DBLP:conf/iccv/LinPRK17}.
Face R-CNN~\cite{wang2017face} combines Faster R-CNN with hard negative mining and achieves promising results.
Face R-FCN~\cite{wang2017detecting} applies R-FCN in face detection and makes according improvements.
The face detection model for finding tiny faces~\cite{DBLP:conf/cvpr/HuR17} trains separate detectors for different scales. S$^{3}$FD~\cite{DBLP:conf/iccv/abs-1708-05237} presents multiple strategies onto SSD to compensate for the matching problem of small faces. SSH~\cite{DBLP:conf/iccv/NajibiSCD17} models the context information by large filters on each prediction module. PyramidBox~\cite{tang2018pyramidbox} utilizes contextual information with improved SSD network structure.
FAN~\cite{wang2017fan} proposes an anchor-level attention into RetinaNet to detect the occluded faces. In this paper, inspired by the multi-step classification and regression in RefineDet~\cite{DBLP:journals/corr/abs-1711-06897} and the focal loss in RetinaNet, we develop a state-of-the-art face detector.
\section{Selective Refinement Network}
\subsection{Network Structure}
The overall framework of SRN is shown in Figure \ref{fig:framework}; we describe each component as follows.
{\flushleft \textbf{Backbone.} }
We adopt ResNet-50~\cite{DBLP:conf/cvpr/HeZRS16} with a 6-level feature pyramid structure as the backbone network of SRN. The feature maps extracted from the four residual blocks are denoted as C2, C3, C4, and C5, respectively, while C6 and C7 are obtained by applying two simple $3\times3$ down-sampling convolution layers after C5. The lateral structure between the bottom-up and the top-down pathways is the same as in~\cite{DBLP:conf/cvpr/LinDGHHB17}. P2, P3, P4, and P5 are the feature maps extracted from the lateral connections, with the same spatial sizes as C2, C3, C4, and C5, respectively, while P6 and P7 are obtained by applying two $3\times3$ down-sampling convolution layers after P5.
{\flushleft \textbf{Dedicated Modules.} }
The STC module selects C2, C3, C4, P2, P3, and P4 to perform two-step classification, while the STR module selects C5, C6, C7, P5, P6, and P7 to conduct two-step regression. The RFE module is responsible for enriching the receptive field of features that are used to predict the classification and location of objects.
{\flushleft \textbf{Anchor Design.} }
At each pyramid level, we use two specific scales of anchors ({\em i.e.}, $2S$ and $2\sqrt{2}S$, where $S$ represents the total stride size of the pyramid level) and one aspect ratio ({\em i.e.}, $1.25$). In total, there are $A=2$ anchors per level, and they cover a scale range of $8$--$362$ pixels across levels with respect to the network's input image.
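To make the anchor design concrete, the per-level anchor sizes implied by this rule can be computed as below. This is an illustrative sketch, not the authors' code; the stride values (4 for P2 doubling up to 128 for P7) are an assumption based on a standard P2--P7 pyramid, and they reproduce the stated $8$--$362$ pixel range:

```python
import math

def anchor_scales(num_levels=6, first_stride=4):
    """Per-level anchor sizes (in input-image pixels) for a 6-level
    pyramid P2-P7: two scales per level, 2*S and 2*sqrt(2)*S, where S
    is the level's total stride."""
    scales = []
    for level in range(num_levels):
        stride = first_stride * (2 ** level)   # P2 stride 4, ..., P7 stride 128
        scales.append((2 * stride, 2 * math.sqrt(2) * stride))
    return scales

sizes = anchor_scales()
# Smallest anchor: 2*4 = 8 px on P2; largest: 2*sqrt(2)*128 ~ 362 px on P7.
```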
{\flushleft \textbf{Loss Function.} }
We append a hybrid loss at the end of the deep architecture, which leverages the merits of the focal loss and the smooth L$_1$ loss to drive the model to focus on hard training examples and learn better regression results.
\subsection{Selective Two-Step Classification}
Introduced in RefineDet~\cite{DBLP:journals/corr/abs-1711-06897}, the two-step classification is a kind of cascade classification implemented through a two-step network architecture, in which the first step filters out most simple negative anchors using a preset negative threshold $\theta=0.99$ to reduce the search space for the subsequent step. For anchor-based face detectors, it is necessary to tile plenty of small anchors over the image to detect small faces, which causes an extreme class imbalance between the positive and negative samples. For example, in the SRN structure with a $1024\times1024$ input resolution, if we tile $2$ anchors at each anchor point, the total number of samples reaches $300k$, of which only a few dozen or fewer are positive. To reduce the search space of the classifier and suppress false positives, two-step classification is essential.
However, it is unnecessary to perform two-step classification on all pyramid levels. The anchors tiled on the three higher levels ({\em i.e.}, P5, P6, and P7) account for only $11.1\%$ of all anchors, and their associated features are much more adequate, so the classification task is relatively easy on these levels. It is thus dispensable to apply two-step classification there, and doing so would only increase the computation cost. In contrast, the three lower pyramid levels ({\em i.e.}, P2, P3, and P4) contain the vast majority of samples ($88.9\%$) and lack adequate features. Two-step classification is therefore urgently needed on these low pyramid levels to alleviate the class imbalance problem and reduce the search space for the subsequent classifier.
Therefore, our STC module selects C2, C3, C4, P2, P3, and P4 to perform two-step classification. As the statistics in Figure \ref{fig:sc} show, the STC increases the positive/negative sample ratio by approximately $38$ times, from around $1$:$15441$ to $1$:$404$. In addition, we use the focal loss in both steps to make full use of the samples. Unlike RefineDet~\cite{DBLP:journals/corr/abs-1711-06897}, SRN shares the same classification module between the two steps, since they perform the same task of distinguishing faces from the background. The experimental results of applying two-step classification on each pyramid level are shown in Table \ref{tab:stc_per_level}. Consistent with our analysis, two-step classification on the three lower pyramid levels helps to improve performance, while on the three higher pyramid levels it is ineffective.
The loss function for STC consists of two parts, {\em i.e.}, the loss in the first step and the second step. For the first step, we calculate the focal loss for those samples selected to perform two-step classification. And for the second step, we just focus on those samples that remain after the first step filtering. With these definitions, we define the loss function as:
\begin{equation}
\begin{aligned}
{\cal L}_\text{STC} (\{p_i\},\{q_i\})=\frac{1}{N_{\text{s}_1}} \sum_{i\in \Omega}{\cal L}_{\text{FL}}(p_i,l_i^\ast) \\
+ \frac{1}{N_{\text{s}_2}} \sum_{i\in \Phi}{\cal L}_{\text{FL}}(q_i, l_i^\ast),
\end{aligned}
\label{1}
\end{equation}
where $i$ is the index of an anchor in a mini-batch, $p_i$ and $q_i$ are the predicted confidences of anchor $i$
being a face in the first and second steps, $l_i^\ast$ is the ground truth class label of anchor $i$, $N_{\text{s}_1}$ and $N_{\text{s}_2}$ are the numbers of positive anchors in the first and second steps, $\Omega$ represents the collection of samples selected for two-step classification, and $\Phi$ represents the set of samples that remain after the first-step filtering. The binary classification loss ${\cal L}_{\text{FL}}$ is the sigmoid focal loss over two classes (face {\em vs.} background).
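For reference, the per-anchor term ${\cal L}_{\text{FL}}$ can be sketched in plain Python as below. The hyper-parameters $\alpha=0.25$ and $\gamma=2$ are the RetinaNet defaults, assumed here since the paper does not state its exact values:

```python
import math

def sigmoid_focal_loss(logit, label, alpha=0.25, gamma=2.0):
    """Binary sigmoid focal loss for one anchor (label: 1 = face,
    0 = background). Easy, well-classified examples are down-weighted."""
    p = 1.0 / (1.0 + math.exp(-logit))      # predicted face probability
    p_t = p if label == 1 else 1.0 - p      # probability of the true class
    alpha_t = alpha if label == 1 else 1.0 - alpha
    # (1 - p_t)^gamma shrinks the loss of easy examples toward zero
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

An easy negative (a confidently rejected background anchor) contributes almost nothing, while a hard false positive keeps a large loss, which is exactly the behavior the STC relies on.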
\subsection{Selective Two-Step Regression}
In the detection task, making the locations of bounding boxes more accurate has always been a challenging problem. Current one-stage methods rely on one-step regression based on various feature layers, which is inaccurate in some challenging scenarios, {\em e.g.}, under the MS COCO-style evaluation standard. In recent years, using a cascade structure \cite{DBLP:journals/corr/abs-1711-06897,DBLP:journals/corr/abs-1712-00726} to conduct multi-step regression has proven an effective way to improve the accuracy of detection bounding boxes.
However, blindly adding multi-step regression to a specific task ({\em i.e.}, face detection) is often counterproductive. Experimental results (see Table \ref{tab:str_per_level}) indicate that applying two-step regression on the three lower pyramid levels impairs performance. The reasons behind this phenomenon are twofold: 1) the three lower pyramid levels are associated with plenty of small anchors to detect small faces, which are characterized by very coarse feature representations, so it is difficult for these small anchors to perform two-step regression; 2) in the training phase, if we let the network pay too much attention to the difficult regression task on the low pyramid levels, it causes a larger regression loss and hinders the more important classification task.
Based on the above analyses, we selectively perform two-step regression on the three higher pyramid levels. The motivation behind this design is to sufficiently utilize the detailed features of large faces on the three higher pyramid levels to regress more accurate bounding box locations, and to let the three lower pyramid levels pay more attention to the classification task. This divide-and-conquer strategy makes the whole framework more efficient.
The loss function of STR also consists of two parts, which is shown as below:
\begin{equation}
\begin{aligned}
{\cal L}_\text{STR}(\{x_i\},\{t_i\})=\sum_{i\in \Psi}[l_i^\ast=1]{\cal L}_{\text{r}}(x_i, g_i^\ast) \\
+ \sum_{i\in \Phi}[l_i^\ast=1]{\cal L}_{\text{r}}(t_i, g_i^\ast),
\end{aligned}
\label{2}
\end{equation}
where $g_i^\ast$ is the ground truth location and size of anchor $i$, $x_i$ is the refined coordinates of anchor $i$ in the first step, $t_i$ is the coordinates of the bounding box in the second step, $\Psi$ represents the collection of samples selected for two-step regression, and $l_i^\ast$ and $\Phi$ are the same as defined in STC. Similar to Faster R-CNN \cite{DBLP:journals/pami/RenHG017}, we use the smooth L$_1$ loss as the regression loss ${\cal L}_{\text{r}}$. The Iverson bracket indicator function $[l_i^\ast=1]$ outputs $1$ when the condition is true, {\em i.e.}, $l_i^\ast=1$ (the anchor is not negative), and $0$ otherwise. Hence $[l_i^\ast=1]{\cal L}_{\text{r}}$ indicates that the regression loss is ignored for negative anchors.
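A minimal sketch of the smooth L$_1$ term and the positives-only summation above (illustrative only; the residual encoding and $\beta=1$ follow the common Faster R-CNN convention, which the paper does not spell out):

```python
def smooth_l1(x, beta=1.0):
    """Smooth L1 (Huber) loss on one coordinate residual, as used for
    the regression loss L_r."""
    ax = abs(x)
    if ax < beta:
        return 0.5 * ax * ax / beta   # quadratic near zero: smooth gradients
    return ax - 0.5 * beta            # linear for large residuals: robust

def str_regression_loss(residuals, labels):
    """Sum smooth-L1 over positive anchors only, mirroring the Iverson
    bracket [l_i* = 1]: negative anchors contribute nothing."""
    return sum(
        sum(smooth_l1(r) for r in res)
        for res, label in zip(residuals, labels) if label == 1
    )
```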
\subsection{Receptive Field Enhancement}
At present, most detection networks utilize ResNet or VGGNet as the basic feature extraction module, and both possess square receptive fields. The singleness of the receptive field affects the detection of objects with different aspect ratios. This issue may seem unimportant in the face detection task, because the aspect ratio of face annotations is about $1$:$1$ in many datasets. Nevertheless, statistics show that a considerable portion of faces in the WIDER FACE training set have an aspect ratio of more than $2$ or less than $0.5$. Consequently, there is a mismatch between the receptive field of the network and the aspect ratio of faces.
To address this issue, we propose a module named Receptive Field Enhancement (RFE) to diversify the receptive field of features before predicting classes and locations. In particular, the RFE module replaces the middle two convolution layers in the class subnet and the box subnet of RetinaNet. The structure of RFE is shown in Figure \ref{fig:rfe}. Our RFE module adopts a four-branch structure, which is inspired by the Inception block \cite{DBLP:conf/cvpr/SzegedyLJSRAEVR15}. To be specific, first, we use a $1\times1$ convolution layer to decrease the channel number to one quarter of the previous layer. Second, we use $1\times k$ and $k\times 1$ ($k=3$ and $5$) convolution layers to provide rectangular receptive fields. Through another $1\times1$ convolution layer, the feature maps from the four branches are concatenated together. Additionally, we apply a shortcut path to retain the original receptive field from the previous layer.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\linewidth]{RFE_structure.jpg}
\caption{Structure of RFE module.}
\label{fig:rfe}
\end{figure}
\section{Training and Inference}
{\flushleft \textbf{Training Dataset.} }
All the models are trained on the training set of the WIDER FACE dataset~\cite{DBLP:conf/cvpr/YangLLT16}. It consists of $393,703$ annotated face bounding boxes in $32,203$ images with variations in pose, scale, facial expression, occlusion, and lighting condition. The dataset is split into the training ($40\%$), validation ($10\%$) and testing ($50\%$) sets, and defines three levels of difficulty: Easy, Medium, Hard, based on the detection rate of EdgeBox~\cite{DBLP:conf/eccv/ZitnickD14}.
{\flushleft \textbf{Data Augmentation.} }
To prevent over-fitting and construct a robust model, several data augmentation strategies are used to adapt to face variations, described as follows.
\begin{itemize}
\item[1)] Applying some photometric distortions introduced in previous work~\cite{DBLP:journals/corr/Howard13} to the training images.
\item[2)] Expanding the images with a random factor in the interval $[1,2]$ by the zero-padding operation.
\item[3)] Cropping two square patches and randomly selecting one for training. One patch is with the size of the image's shorter side and the other one is with the size determined by multiplying a random number in the interval $[0.5,1.0]$ by the image's shorter side.
\item[4)] Flipping the selected patch randomly and resizing it to $1024\times1024$ to get the final training sample.
\end{itemize}
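The patch sizes in augmentation steps 2)--3) can be sketched as follows (an illustrative helper, not the authors' code; the function name is hypothetical):

```python
import random

def crop_sizes(img_w, img_h):
    """Sizes of the two candidate square patches: one equal to the image's
    shorter side, one a random fraction in [0.5, 1.0] of the shorter side.
    One of the two is then randomly selected for training."""
    short = min(img_w, img_h)
    return short, int(random.uniform(0.5, 1.0) * short)
```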
{\noindent \textbf{Anchor Matching.} }
During the training phase, anchors need to be divided into positive and negative samples. Specifically, anchors are assigned to ground-truth face boxes using an intersection-over-union (IoU) threshold of $\theta_{p}$; and to background if their IoU is in $[0, \theta_{n})$. If an anchor is unassigned, which may happen with overlap in $[\theta_{n}, \theta_{p})$, it is ignored during training. Empirically, we set $\theta_{n}=0.3$ and $\theta_{p}=0.7$ for the first step, and $\theta_{n}=0.4$ and $\theta_{p}=0.5$ for the second step.
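The matching rule above amounts to a simple per-anchor IoU test. A minimal sketch (boxes as $(x_1, y_1, x_2, y_2)$ tuples; the helper names are ours):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def assign_label(anchor, gt_boxes, theta_n=0.3, theta_p=0.7):
    """First-step matching rule: positive above theta_p, negative below
    theta_n, ignored in between (the second step uses 0.4 / 0.5)."""
    best = max((iou(anchor, g) for g in gt_boxes), default=0.0)
    if best >= theta_p:
        return 1      # positive sample
    if best < theta_n:
        return 0      # negative (background) sample
    return -1         # ignored during training
```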
{\flushleft \textbf{Optimization.} }
The loss function for SRN is just the sum of the STC loss and the STR loss, {\em i.e.}, ${\cal L} = {\cal L}_\text{STC} + {\cal L}_\text{STR}$. The backbone network is initialized by the pretrained ResNet-50 model~\cite{DBLP:journals/ijcv/RussakovskyDSKS15} and all the parameters in the newly added convolution layers are initialized by the ``xavier" method. We fine-tune the SRN model using SGD with $0.9$ momentum, $0.0001$ weight decay, and batch size $32$. We set the learning rate to $10^{-2}$ for the first $100$ epochs, and decay it to $10^{-3}$ and $10^{-4}$ for another $20$ and $10$ epochs, respectively. We implement SRN using the PyTorch library~\cite{paszke2017pytorch}.
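The step learning-rate schedule described above is, in sketch form:

```python
def learning_rate(epoch):
    """Step schedule from the paper: 1e-2 for the first 100 epochs,
    then 1e-3 for the next 20, then 1e-4 for the final 10."""
    if epoch < 100:
        return 1e-2
    if epoch < 120:
        return 1e-3
    return 1e-4
```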
{\flushleft \textbf{Inference.} }
In the inference phase, the STC first filters out the regularly tiled anchors on the selected pyramid levels whose negative confidence scores are larger than the threshold $\theta=0.99$, and then the STR adjusts the locations and sizes of the selected anchors. After that, the second step takes over these refined anchors and outputs the top $2000$ high-confidence detections. Finally, we apply non-maximum suppression (NMS) with a Jaccard overlap of $0.5$ to generate the top $750$ high-confidence detections per image as the final results.
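The final NMS step is standard greedy suppression; a self-contained sketch (not the authors' implementation) with the paper's $0.5$ overlap and top-$750$ cap as defaults:

```python
def _iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5, top_k=750):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop the
    remaining boxes that overlap it by more than iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order and len(keep) < top_k:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if _iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```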
\begin{figure*}[t]
\centering
\subfigure[AFW]{
\label{fig:AFW}
\includegraphics[width=0.325\linewidth]{AFW.jpg}}
\subfigure[PASCAL face]{
\label{fig:PASCAL}
\includegraphics[width=0.325\linewidth]{PASCAL.jpg}}
\subfigure[FDDB]{
\label{fig:FDDB}
\includegraphics[width=0.325\linewidth]{FDDB.jpg}}
\caption{Evaluation on the common face detection datasets.}
\label{fig:evaluation}
\end{figure*}
\section{Experiments}
We first analyze the proposed method in detail to verify the effectiveness of our contributions. Then we evaluate the final model on the common face detection benchmark datasets, including AFW \cite{DBLP:conf/cvpr/ZhuR12}, PASCAL Face \cite{DBLP:journals/ivc/YanZLL14}, FDDB \cite{fddbTech}, and WIDER FACE \cite{DBLP:conf/cvpr/YangLLT16}.
\subsection{Model Analysis}
We conduct a set of ablation experiments on the WIDER FACE dataset to analyze our model in detail. For a fair comparison, we use the same parameter settings for all the experiments, except for specified changes to the components. All models are trained on the WIDER FACE training set and evaluated on the validation set.
{\flushleft \textbf{Ablation Setting.} }
To better understand SRN, we ablate each component one after another to examine how each proposed component affects the final performance. Firstly, we replace the proposed RFE with the ordinary prediction head of \cite{DBLP:conf/iccv/LinPRK17}. Secondly, we ablate the STR or STC module to verify their effectiveness. The results of these ablation experiments are listed in Table \ref{tab:ablation}, and some promising conclusions can be drawn as follows.
\begin{table}[h]
\centering
\setlength{\tabcolsep}{9.0pt}
\caption{Effectiveness of various designs on the AP performance.}
\footnotesize{
\begin{tabular}{c|ccccc}
\toprule[1.5pt]
\multicolumn{1}{c|}{Component} & \multicolumn{5}{c}{SRN}\\
\hline
STC & & \Checkmark & & \Checkmark & \Checkmark \\
STR & & & \Checkmark & \Checkmark & \Checkmark \\
RFE & & & & & \Checkmark \\
\hline
{\em Easy} subset & 95.1 & 95.3 & 95.9 & 96.1 &\textbf{96.4}\\
{\em Medium} subset & 93.9 & 94.4 & 94.8 & 95.0 &\textbf{95.3}\\
{\em Hard} subset & 88.0 & 89.4 & 88.8 & 90.1 &\textbf{90.2}\\
\bottomrule[1.5pt]
\end{tabular}}
\label{tab:ablation}
\end{table}
{\flushleft \textbf{Selective Two-step Classification.} }
Experimental results of applying two-step classification to each pyramid level are shown in Table \ref{tab:stc_per_level}, indicating that applying two-step classification to the low pyramid levels improves the performance, especially on tiny faces. Therefore, the STC module selectively applies the two-step classification on the low pyramid levels ({\em i.e.}, P2, P3, and P4), since these levels are associated with lots of small anchors, which are the main source of false positives. As shown in Table \ref{tab:ablation}, we find that after using the STC module, the AP scores of the detector are improved from $95.1\%$, $93.9\%$ and $88.0\%$ to $95.3\%$, $94.4\%$ and $89.4\%$ on the Easy, Medium and Hard subsets, respectively. In order to verify whether the improvements benefit from reducing the false positives, we count the number of false positives under different recall rates. As listed in Table \ref{tab:fp_num}, our STC effectively reduces the false positives across different recall rates, demonstrating the effectiveness of the STC module.
\vspace{-1.5mm}
\begin{table}[h]
\centering
\setlength{\tabcolsep}{3pt}
\caption{AP performance of the two-step classification applied to each pyramid level.}
\setlength{\tabcolsep}{5.2pt}
\begin{tabular}{c|c|cccccc}
\toprule[1.5pt]
STC & B & P2 & P3 & P4 & P5 & P6 & P7 \\
\hline
{\em Easy} & 95.1 & \bf 95.2 & \bf 95.2 & \bf 95.2 & 95.0 & 95.1 & 95.0 \\
{\em Medium} & 93.9 & \bf 94.2 & \bf 94.3 & \bf 94.1 & 93.9 & 93.7 & 93.9 \\
{\em Hard} & 88.0 & \bf 88.9 & \bf 88.7 & \bf 88.5 & 87.8 & 88.0 & 87.7 \\
\bottomrule[1.5pt]
\end{tabular}
\vspace{-5mm}
\label{tab:stc_per_level}
\end{table}
\begin{table}[h]
\centering
\setlength{\tabcolsep}{3pt}
\caption{Number of false positives at different recall rates.}
\setlength{\tabcolsep}{2.0pt}
\begin{tabular}{c|cccccc}
\toprule[1.5pt]
Recall ($\%$) & 10 & 30 & 50 & 80 & 90 & 95 \\
\hline
$\#$ FP of RetinaNet & 3 & 24 & 126 & 2801 & 27644 & 466534\\
$\#$ FP of SRN (STC only) & 1 & 20 & 101 & 2124 & 13163 & 103586\\
\bottomrule[1.5pt]
\end{tabular}
\vspace{-1.5mm}
\label{tab:fp_num}
\end{table}
\begin{figure*}[h]
\centering
\subfigure[Val: Easy]{
\label{fig:ve}
\includegraphics[width=0.325\linewidth]{ve.jpg}}
\subfigure[Val: Medium]{
\label{fig:vm}
\includegraphics[width=0.325\linewidth]{vm.jpg}}
\subfigure[Val: Hard]{
\label{fig:vh}
\includegraphics[width=0.325\linewidth]{vh.jpg}}
\subfigure[Test: Easy]{
\label{fig:te}
\includegraphics[width=0.325\linewidth]{te.jpg}}
\subfigure[Test: Medium]{
\label{fig:tm}
\includegraphics[width=0.325\linewidth]{tm.jpg}}
\subfigure[Test: Hard]{
\label{fig:th}
\includegraphics[width=0.325\linewidth]{th.jpg}}
\caption{Precision-recall curves on WIDER FACE validation and testing subsets.}
\label{fig:wider-face-ap}
\end{figure*}
{\flushleft \textbf{Selective Two-step Regression.} }
We only add the STR module to our baseline detector to verify its effectiveness. As shown in Table \ref{tab:ablation}, it produces much better results than the baseline, with $0.8\%$, $0.9\%$ and $0.8\%$ AP improvements on the Easy, Medium, and Hard subsets. Experimental results of applying two-step regression to each pyramid level (see Table \ref{tab:str_per_level}) confirm our previous analysis. Inspired by the detection evaluation metric of MS COCO, we use $4$ IoU thresholds \{0.5, 0.6, 0.7, 0.8\} to compute the AP, so as to show that the STR module produces more accurate localization. As shown in Table \ref{tab:aps}, the STR module produces consistently more accurate detection results than the baseline method. The AP gap between the two methods widens as the IoU threshold increases, which indicates that the STR module is important for producing more accurate detections. In addition, coupled with the STC module, the performance is further improved to $96.1\%$, $95.0\%$ and $90.1\%$ on the Easy, Medium and Hard subsets, respectively.
\vspace{-1.5mm}
\begin{table}[h]
\centering
\setlength{\tabcolsep}{3pt}
\caption{AP performance of the two-step regression applied to each pyramid level.}
\setlength{\tabcolsep}{5.2pt}
\begin{tabular}{c|c|cccccc}
\toprule[1.5pt]
STR & B & P2 & P3 & P4 & P5 & P6 & P7 \\
\hline
{\em Easy} & 95.1 & 94.8 & 94.3 & 94.8 & \bf 95.4 & \bf 95.7 & \bf 95.6 \\
{\em Medium} & 93.9 & 93.4 & 93.7 & 93.9 & \bf 94.2 & \bf 94.4 & \bf 94.6 \\
{\em Hard} & 88.0 & 87.5 & 87.7 & 87.0 & \bf 88.2 & \bf 88.2 & \bf 88.4 \\
\bottomrule[1.5pt]
\end{tabular}
\vspace{-5mm}
\label{tab:str_per_level}
\end{table}
\begin{table}[h]
\centering
\caption{AP at different IoU thresholds on the WIDER FACE Hard subset.}
\setlength{\tabcolsep}{10.5pt}
\begin{tabular}{c|ccccc}
\toprule[1.5pt]
IoU & 0.5 & 0.6 & 0.7 & 0.8 \\
\hline
{RetinaNet} & 88.1 & 76.4 & 57.8 & 28.5\\
{SRN (STR only)} & 88.8 & 83.4 & 66.5 & 38.2\\
\bottomrule[1.5pt]
\end{tabular}
\vspace{-0.5mm}
\label{tab:aps}
\end{table}
{\flushleft \textbf{Receptive Field Enhancement.} }
The RFE is used to diversify the receptive fields of the detection layers in order to capture faces with extreme poses. Comparing the detection results between the fourth and fifth columns in Table \ref{tab:ablation}, we notice that RFE consistently improves the AP scores on the different subsets, {\em i.e.}, by $0.3\%$, $0.3\%$, and $0.1\%$ AP on the Easy, Medium, and Hard categories. These improvements can be mainly attributed to the diverse receptive fields, which are useful for capturing faces in various poses and thus improve detection accuracy.
\subsection{Evaluation on Benchmark}
{\flushleft \textbf{AFW Dataset.} }
It consists of $205$ images with $473$ labeled faces. The images in the dataset contain cluttered backgrounds with large variations in both face viewpoint and appearance. We compare SRN against seven state-of-the-art methods and three commercial face detectors ({\em i.e.}, Face.com, Face++ and Picasa). As shown in Figure \ref{fig:AFW}, SRN outperforms these state-of-the-art methods with the top AP score ($99.87\%$).
{\flushleft \textbf{PASCAL Face Dataset.} }
It has $1,335$ labeled faces in $851$ images with large face appearance and pose variations. We present the precision-recall curves of the proposed SRN method, six state-of-the-art methods, and three commercial face detectors ({\em i.e.}, SkyBiometry, Face++ and Picasa) in Figure \ref{fig:PASCAL}. SRN achieves state-of-the-art results, improving the AP score by $4.99\%$ over the second-best method STN \cite{DBLP:conf/eccv/ChenHW016}.
{\flushleft \textbf{FDDB Dataset.} }
It contains $5,171$ faces annotated in $2,845$ images with a wide range of difficulties, such as occlusions, difficult poses, and low image resolutions. We evaluate the proposed SRN detector on the FDDB dataset and compare it with several state-of-the-art methods. As shown in Figure \ref{fig:FDDB}, our SRN sets a new state-of-the-art performance, {\em i.e.}, a $98.8\%$ true positive rate when the number of false positives is equal to $1000$. These results indicate that SRN is robust to the varying scales, large appearance changes, heavy occlusions, and severe blur degradations that are prevalent when detecting faces in unconstrained real-life scenarios.
{\flushleft \textbf{WIDER FACE Dataset.} }
We compare SRN with eighteen state-of-the-art face detection methods on both the validation and testing sets. To obtain the evaluation results on the testing set, we submitted the detection results of SRN to the authors for evaluation. As shown in Figure \ref{fig:wider-face-ap}, SRN performs favourably against the state-of-the-art in terms of average precision (AP) across the three subsets, especially on the Hard subset, which contains a large number of small faces. Specifically, it produces the best AP scores in all subsets of both the validation and testing sets, {\em i.e.}, $96.4\%$ (Easy), $95.3\%$ (Medium) and $90.2\%$ (Hard) for the validation set, and $95.9\%$ (Easy), $94.9\%$ (Medium) and $89.7\%$ (Hard) for the testing set, surpassing all other approaches and demonstrating the superiority of the proposed detector.
\section{Conclusion}
In this paper, we have presented SRN, a novel single-shot face detector, which consists of two key modules, {\em i.e.}, the STC and the STR. The STC uses the first-step classifier to filter out most of the simple negative anchors from the low-level detection layers to reduce the search space for the second-step classifier, so as to reduce false positives. The STR applies the first-step regressor to coarsely adjust the locations and sizes of anchors from the high-level detection layers to provide better initialization for the second-step regressor, in order to improve the localization accuracy of the bounding boxes. Moreover, the RFE is introduced to provide diverse receptive fields to better capture faces in extreme poses. Extensive experiments on the AFW, PASCAL face, FDDB and WIDER FACE datasets demonstrate that SRN achieves state-of-the-art detection performance.
\clearpage
\small
\bibliographystyle{aaai}
\section{Detailed investigations of event pairs showing marginal evidence of lensing}
\label{sec:appendix}
\begin{figure*}[tbh]
\includegraphics[width=0.8\linewidth]{Plots/gw170104_gw170814.pdf}
\caption{Marginalized 2D and 1D posterior distributions of the parameters that are included in the consistency test, for the event pair GW170104 (blue) and GW170814 (red). Here, ${m_1^z, m_2^z}$ are the redshifted component masses, ${a_1, a_2}$ are the dimensionless spin magnitudes, ${\theta_{a1}, \theta_{a2}}$ are the polar angles of the spin orientations (with respect to the orbital angular momentum), ${\alpha, \sin \delta}$ denote the sky location, and $\theta_{J_N}$ is the orientation of the total angular momentum of the binary (with respect to the line of sight). The solid (dashed) contours correspond to the $90\%$ ($50\%$) confidence levels of the 2D distributions. The inset plot shows the marginalized posterior distributions of the sky localization parameters for these events. Overall, the posteriors have some level of overlap, resulting in a considerable Bayes factor of ${\mc{B}_\textsc{u}^\textsc{l}} \sim {198}$ supporting the lensing hypothesis, purely based on parameter consistency. However, galaxy lenses are unlikely to produce a time delay of 7 months between the images, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}}\sim 4 \times 10^{-3}$ based on time delay considerations.}
\label{fig:corner_170104_170814}
\end{figure*}
Here we present additional investigations of the event pairs that show marginal evidence of multiply-imaged lensing in the analysis presented in Sec.~\ref{sec:multipleimages}, providing a qualitative explanation of the Bayes factors presented in that section in terms of the overlap of the estimated posteriors from these event pairs. Figure~\ref{fig:corner_170104_170814} presents the 2D and 1D marginalized posterior distributions of the parameters that are included in the consistency test, for the event pair GW170104-GW170814. The posteriors have appreciable levels of overlap in many parameters, thus resulting in a considerable Bayes factor of ${\mc{B}_\textsc{u}^\textsc{l}} \sim {198}$ supporting the lensing hypothesis, purely based on parameter consistency. However, galaxy lenses are unlikely to produce a time delay of 7 months between the images~\citep{Haris:2018vmn}, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}}\sim 4 \times 10^{-3}$ based on time delay considerations.
Figure~\ref{fig:corner_150914_170809} shows similar plots for the event pair GW150914-GW170809. Although the marginalized 1D posteriors have some level of overlap in many parameters, the 2D posteriors show good separation in several parameters, e.g., in $\mathcal{M}^z - \chi_{\rm eff}$. The resulting Bayes factor supporting the lensing hypothesis, based on parameter consistency, is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {29}$. However, galaxy lenses are unlikely to produce a time delay of 23 months between the images, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}} \sim 10^{-4}$ based on time delay considerations. Figure~\ref{fig:corner_170809_170814} shows similar plots for the event pair GW170809-GW170814. Here also, the 2D posteriors of several parameters (e.g., $\mathcal{M}^z - \chi_{\rm eff}$) show poor overlap, suggesting that the full multidimensional posteriors do not have significant overlap. The resultant Bayes factor for parameter consistency is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {1.2}$, even though the time delay between these events is consistent with galaxy lenses, producing a Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}} \sim {3.3}$ based on time delay.
\begin{figure*}[tbh]
\includegraphics[width=0.8\linewidth]{Plots/gw150914_gw170809.pdf}
\caption{Same as Fig.~\ref{fig:corner_170104_170814}, except that the figure corresponds to the GW150914 (blue), GW170809 (red) event pair. The inset plot shows the marginalized posterior distributions of the redshifted chirp mass $\mathcal{M}^z$ and effective spin $\chi_{\rm eff}$ for these events. The marginalized 1D posteriors have some level of overlap in many parameters; however, the 2D posteriors show good separation in several parameters, e.g., in $\mathcal{M}^z - \chi_{\rm eff}$. The resulting Bayes factor supporting the lensing hypothesis, based on parameter consistency, is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {29}$. However, galaxy lenses are unlikely to produce a time delay of 23 months between the images, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}} \sim 10^{-4}$ based on time delay considerations.}
\label{fig:corner_150914_170809}
\end{figure*}
\begin{figure*}[tbh]
\includegraphics[width=0.8\linewidth]{Plots/gw170809_gw170814.pdf}
\caption{Same as Fig.~\ref{fig:corner_170104_170814}, except that the figure corresponds to the GW170809 (blue), GW170814 (red) event pair. Marginalized 1D posteriors have some levels of overlap in many parameters; however 2D posteriors show good separation in many parameters, e.g., in $\mathcal{M}^z - \chi_{\rm eff}$. The resulting Bayes factor supporting the lensing hypothesis, based on parameter consistency is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {1.2}$.}
\label{fig:corner_170809_170814}
\end{figure*}
\section{Introduction}
\label{sec:intro}
\input{intro.tex}
\section{No evidence of lensing magnification}
\label{sec:magnification}
\input{lensingmag.tex}
\section{No evidence of multiple images}
\label{sec:multipleimages}
\input{multiimage.tex}
\section{No evidence of wave optics effects}
\label{sec:waveoptics}
\input{waveoptics.tex}
\section{Outlook}
\label{sec:outlook}
\input{outlook.tex}
\bigskip
\paragraph{Acknowledgments:} We thank the LIGO Scientific Collaboration and Virgo Collaboration for providing the data of binary black hole observations during the first two observation runs of Advanced LIGO and Virgo. PA's research was supported by the Science and Engineering Research Board, India through a Ramanujan Fellowship, by the Max Planck Society through a Max Planck Partner Group at ICTS-TIFR, and by the Canadian Institute for Advanced Research through the CIFAR Azrieli Global Scholars program. SK acknowledges support from national post doctoral fellowship (PDF/2016/001294) by Scientific and Engineering Research Board, India. OAH is supported by the Hong Kong Ph.D. Fellowship Scheme (HKPFS) issued by the Research Grants Council (RGC) of Hong Kong. The work described in this paper was partially supported by grants from the Research Grants Council of the Hong Kong (Project No. CUHK 14310816, CUHK 24304317 and CUHK 14306218) and the Direct Grant for Research from the Research Committee of the Chinese University of Hong Kong. KKYN acknowledges support of the National Science Foundation, and the LIGO Laboratory. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement PHY-0757058. Computations were performed at the ICTS cluster Alice and the LIGO Hanford cluster. KH, SK, AKM and PA thank Tejaswi Venumadhav, B. Sathyaprakash, Jolien Creighton, Xiaoshu Liu, Ignacio Magana Hernandez and Chad Hanna for useful discussions. OAH, KKYN and TGFL also acknowledge useful input from Peter~T.~H.~Pang, Rico K.~L.~Lo.
\bibliographystyle{apj}
\section{Introduction}
Automatic speaker verification (ASV) has several applications, such as voice biometrics for commercial applications, speaker detection in surveillance, and speaker diarization. A speaker is enrolled using a sample utterance(s), and the task of ASV is to detect whether the target speaker is present in a given test utterance or not. Several challenges have been organized over the years for benchmarking and advancing speaker verification technology, such as the NIST Speaker Recognition Evaluation (SRE) challenge 2019 \cite{Sadjadi19plan}, the VoxCeleb Speaker Recognition Challenge (VoxSRC) \cite{nagrani2017voxceleb}, and the VOiCES challenge \cite{Nandwana2019}.
The major challenges in speaker verification include language mismatch during testing, short-duration audio, and the presence of noise/reverberation in the speech data.
The state-of-the-art systems in speaker verification use a model to extract embeddings of fixed dimension from utterances of variable duration. The earlier approaches, based on the unsupervised Gaussian mixture model (GMM) i-vector extractor \cite{dehak2010front}, have recently been replaced with neural embedding extractors \cite{snyder2016deep,snyder2018x}, which are trained on large supervised speaker classification tasks. These fixed-dimensional embeddings are pre-processed with a length normalization \cite{garcia2011analysis} technique, followed by a probabilistic linear discriminant analysis (PLDA) based backend modeling approach \cite{kenny2010bayesian}.
In our previous work, we explored a discriminative neural PLDA (NPLDA) approach \cite{ramoji2020pairwise} to backend modeling, where a discriminative similarity function was used. The learnable parameters of the NPLDA model were optimized using an approximation of the minimum detection cost function (DCF). This model also showed good improvements in our SRE evaluations and the VOiCES from a Distance challenge \cite{ramoji2020leap, ramoji2020nplda}. In this paper, we extend this work to propose a joint modeling framework that optimizes both the front-end x-vector embedding model and the backend NPLDA model in a single end-to-end (E2E) neural framework. The proposed model is initialized with the pre-trained x-vector time delay neural network (TDNN). The E2E model is then fully trained on pairs of speech utterances, starting directly from the mel-frequency cepstral coefficient (MFCC) features. The advantage of this method is that both the embedding extractor and the final score computation are optimized on pairs of utterances and with a speaker verification metric. With experiments on the NIST SRE 2018 and 2019 datasets, we show that the proposed NPLDA E2E model improves significantly over the baseline system using x-vectors and generative PLDA modeling.
\section{Related Prior Work}
The common approaches for scoring in speaker verification systems include support vector machines (SVMs) \cite{campbell2006support} and probabilistic linear discriminant analysis (PLDA) \cite{kenny2010bayesian}. Some efforts on pairwise generative and discriminative modeling are discussed in \cite{cumani2013pairwise,cumani2014large,cumani2014generative}. A discriminative version of PLDA with logistic regression and support vector machine (SVM) kernels has also been explored in~\cite{burget2011discriminatively}. In that work, the authors use the functional form of the generative model and pool all the parameters to be trained into a single long vector. These parameters are then discriminatively trained using the SVM loss function with pairs of input vectors. The discriminative PLDA (DPLDA) is, however, prone to over-fitting on the training speakers, which leads to degradation on unseen speakers in SRE evaluations~\cite{villalba2020state}. The regularization of the embedding extractor network using Gaussian backend scoring has been investigated in \cite{Ferrer2019}.
Other recent developments in this direction include efforts to use an approximate DCF metric for text-dependent speaker verification \cite{Mingote2019}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth,trim={1.5cm 0.3cm 1.5cm 0.5cm},clip]{e2e_is2020_std.pdf}
\caption{End-to-End x-vector NPLDA architecture for Speaker Verification.}
\label{fig:e2e}
\end{figure*}
Recently, some end-to-end approaches for speaker verification have been examined. For example, in~\cite{rohdin2018end}, the PLDA scoring which is done with the i-vector extraction has been jointly derived using a deep neural network architecture and the entire model is trained using a binary cross entropy training criterion. In \cite{wan2018generalized}, a generalized end to end loss by minimizing the centroid means of within speaker distances while maximizing across speaker distances was proposed. In another E2E effort, the use of triplet loss has been explored \cite{Zhang2017}. However, in spite of these efforts, most state of the art systems use a generative PLDA backend model with x-vectors and similar neural network embeddings.
\section{Background}
\subsection{Generative Gaussian PLDA (GPLDA) }
The PLDA model on the processed x-vector embedding $\boldsymbol{\eta} _r$ (after centering, LDA transformation and unit length normalization) is given by
\begin{equation}
\boldsymbol{\eta} _r = \Phi \boldsymbol{\omega} + \boldsymbol{\epsilon}_r
\end{equation}
where $\boldsymbol{\omega}$ is the latent speaker factor with a Gaussian prior of $\mathcal{N}(\textbf{0},\textbf{I})$, $\Phi$ characterizes the speaker sub-space matrix, and $\boldsymbol{\epsilon}_r$ is the residual assumed to have distribution $\mathcal{N}(\textbf{0},\boldsymbol{\Sigma})$.
For scoring, a pair of embeddings, $\boldsymbol{\eta}_e$ from the enrollment recording and $\boldsymbol{\eta}_t$ from the test recording are used with the PLDA model to compute the log-likelihood ratio score given by
\begin{equation}\label{eq:plda_scoring}
s(\boldsymbol{\eta}_e, \boldsymbol{\eta}_t) = \boldsymbol{\eta}_e^{^{\intercal}} \boldsymbol{Q}\boldsymbol{\eta}_e + \boldsymbol{\eta}_t^{^{\intercal}} \boldsymbol{Q}\boldsymbol{\eta}_t + 2\boldsymbol{\eta}_e^{^{\intercal}} \boldsymbol{P}\boldsymbol{\eta}_t + \text{const}
\end{equation}
where,
\begin{eqnarray}
\boldsymbol{Q} & = & \boldsymbol{\Sigma} _{tot} ^{-1} - (\boldsymbol{\Sigma} _{tot} - \boldsymbol{\Sigma} _{ac} \boldsymbol{\Sigma} _{tot}^{-1} \boldsymbol{\Sigma} _{ac})^{-1} \\
\boldsymbol{P} & = & \boldsymbol{\Sigma} _{tot} ^{-1} \boldsymbol{\Sigma} _{ac} (\boldsymbol{\Sigma} _{tot} - \boldsymbol{\Sigma} _{ac} \boldsymbol{\Sigma} _{tot}^{-1} \boldsymbol{\Sigma} _{ac})^{-1}
\end{eqnarray}
with $\boldsymbol{\Sigma} _{tot} = \Phi \Phi ^T + \boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma} _{ac} = \Phi \Phi ^T$.
In the Kaldi implementation of PLDA, a diagonalizing transformation that simultaneously diagonalizes the within- and between-speaker covariances is computed, which reduces $\boldsymbol{P}$ and $\boldsymbol{Q}$ to diagonal matrices.
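The scoring of Eq.~\ref{eq:plda_scoring} can be sketched in a few lines of NumPy. This is an illustrative implementation built directly from the definitions of $\boldsymbol{P}$, $\boldsymbol{Q}$, $\boldsymbol{\Sigma}_{tot}$, and $\boldsymbol{\Sigma}_{ac}$ above, without the diagonalizing transformation used in Kaldi and omitting the additive constant.

```python
# Hedged NumPy sketch of the pairwise PLDA log-likelihood-ratio score:
# s = eta_e' Q eta_e + eta_t' Q eta_t + 2 eta_e' P eta_t (+ const),
# with Q and P built from the speaker subspace Phi and residual Sigma.
import numpy as np

def plda_score(eta_e, eta_t, Phi, Sigma):
    Sigma_ac = Phi @ Phi.T                 # across-class covariance
    Sigma_tot = Sigma_ac + Sigma           # total covariance
    inv_tot = np.linalg.inv(Sigma_tot)
    M = np.linalg.inv(Sigma_tot - Sigma_ac @ inv_tot @ Sigma_ac)
    Q = inv_tot - M
    P = inv_tot @ Sigma_ac @ M
    return eta_e @ Q @ eta_e + eta_t @ Q @ eta_t + 2 * eta_e @ P @ eta_t
```

With $\Phi = \boldsymbol{\Sigma} = \boldsymbol{I}$, aligned enrollment/test embeddings score higher than orthogonal ones, as expected for a same-speaker likelihood ratio.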
\pagebreak
\subsection{NPLDA}\label{sec:PldaNet}
In the discriminative NPLDA approach \cite{ramoji2020nplda}, we implement the pre-processing steps as neural layers: the LDA as the first affine layer, the unit-length normalization as a non-linear activation, and the PLDA centering and diagonalization as another affine transformation. The final PLDA pair-wise scoring given in Eq.~\ref{eq:plda_scoring} is implemented as a quadratic layer in Fig.~\ref{fig:e2e}. Thus, the NPLDA implements the pre-processing of the x-vectors and the PLDA scoring as a neural backend.
\subsubsection{Cost Function}\label{sec:costfuncs}
To train the NPLDA model for the task of speaker verification, we sample pairs of x-vectors representing the target (same speaker) and non-target (different speakers) hypotheses. The normalized detection cost function (DCF) \cite{van2007introduction} for a detection threshold $\theta$ is defined as:
\begin{align}\label{eq:det_cost}
C_{Norm}(\beta,\theta) = P_{Miss}(\theta) + \beta P_{FA}(\theta)
\end{align}
where $\beta$ is an application based weight defined as
\begin{align}
\beta = \frac{C_{FA} (1-P_{target})}{C_{Miss}P_{target}}
\end{align}
where $C_{Miss}$ and $C_{FA}$ are the costs assigned to miss and false alarms, and $P_{target}$ is the prior probability of a target trial.
$P_{Miss}$ and $P_{FA}$ are the probability of miss and false alarms respectively, and are computed by applying a detection threshold of $\theta$ to the log-likelihood ratios.
A differentiable approximation of the normalized detection cost was proposed in \cite{ramoji2020nplda, Mingote2019}.
\begin{align}
P_{Miss}^{\text{(soft)}}(\theta) &= \frac{\sum_{i=1}^{N} t_i \left[1-{\sigma}(\alpha(s_i-\theta))\right]}{\sum_{i=1}^{N} t_i} \\
P_{FA}^{\text{(soft)}}(\theta) &= \frac{\sum_{i=1}^{N} (1-t_i) {\sigma}(\alpha(s_i - \theta))}{\sum_{i=1}^{N} (1-t_i)}
\end{align}
\noindent Here, $i$ is the trial index, $s_i$ is the system score and $t_i$ denotes the ground truth label for trial $i$, and $\sigma$ denotes the sigmoid function. $N$ is the total number of trials in the minibatch over which the cost is computed. By choosing a large enough value for the warping factor $\alpha$, the approximation can be made arbitrarily close to the actual detection cost function for a wide range of thresholds.
The minimum detection cost (minDCF) is achieved at a threshold where the DCF is minimized.
\begin{align}
\text{minDCF} = \underset{\theta}{\min} \,\,C_{Norm}(\beta, \theta)
\end{align}
The threshold $\theta$ is included in the set of learnable parameters of the neural network. This way, the network learns to minimize the minDCF as a function of all the parameters through backpropagation.
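A minimal NumPy sketch of the soft detection cost defined by the equations above (illustrative only; the actual model makes $\theta$ a learnable parameter and backpropagates through the sigmoids):

```python
# Differentiable DCF approximation: sigmoids replace the hard threshold
# comparisons, and a large warping factor alpha tightens the approximation.
import numpy as np

def soft_dcf(scores, labels, theta, beta=1.0, alpha=20.0):
    """scores: trial scores; labels: 1 for target, 0 for non-target trials."""
    s = 1.0 / (1.0 + np.exp(-alpha * (scores - theta)))  # sigma(alpha(s_i - theta))
    t = labels.astype(float)
    p_miss = np.sum(t * (1.0 - s)) / np.sum(t)           # soft P_Miss
    p_fa = np.sum((1.0 - t) * s) / np.sum(1.0 - t)       # soft P_FA
    return p_miss + beta * p_fa                          # C_Norm(beta, theta)
```

Here `beta` plays the role of $\beta = C_{FA}(1-P_{target})/(C_{Miss}P_{target})$; well-separated target/non-target scores drive the cost toward zero, and fully swapped scores drive it toward $1+\beta$.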
\section{End-to-end modeling}
The model we explore is a concatenated version of two parameter tied x-vector extractors (TDNN networks~\cite{snyder2019speaker}) with the NPLDA model (Fig.~\ref{fig:e2e}). \footnote{The implementation of this model can be found in \url{https://github.com/iiscleap/E2E-NPLDA}}
The end-to-end model processes the mel-frequency cepstral coefficients (MFCCs) of a pair of utterances to output a score. The MFCC features are passed through nine time delay neural network (TDNN) layers followed by a statistics pooling layer. The statistics pooling layer is followed by a fully connected layer with a unit-length normalization non-linearity. This is followed by a linear layer and a quadratic layer, which operates on the enrollment and test embeddings to output a score. The parameters of the TDNN and the affine layers of the enrollment and test sides of the E2E model are tied, which makes the model symmetric.
\subsection{GPU memory considerations}\label{ssec:gpu}
We can estimate the memory required for a single iteration (batch update) of training as the sum of the memory required to store the network parameters, the gradients, and the forward and backward components of each batch. In this end-to-end network, each training batch of $N$ trials can contain up to $2N$ unique utterances, assuming there are no repetitions. For simplicity, let us assume each of the utterances corresponds to $T$ frames. We denote $k_i$ to be the dimension of the input to the $i^\text{th}$ TDNN layer, with a TDNN context of $c_i$ frames. The total memory required can then be estimated as
$2NT \sum_{i} k_i c_i \times 16$ bytes. The GPU memory is thus limited by the total number of frames that go into the TDNN, which is captured by the factor $2NT$. A large batch size of $2048$, as was used in \cite{ramoji2020leap}, is infeasible for the end-to-end model (it would result in a GPU memory load of about $240$\,GB). Hence, we resort to a sampling strategy to reduce the GPU memory requirements.
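The estimate above can be wrapped in a small back-of-the-envelope helper. The layer widths and contexts one would pass in are illustrative placeholders here, not the actual E-TDNN configuration.

```python
# Back-of-the-envelope memory estimate for one E2E training batch:
# 2 * N * T * sum_i(k_i * c_i) * 16 bytes, converted to GB.
def e2e_batch_memory_gb(n_trials, frames_per_utt, layer_dims, layer_contexts):
    total_frames = 2 * n_trials * frames_per_utt          # 2NT
    per_frame = sum(k * c for k, c in zip(layer_dims, layer_contexts))
    return total_frames * per_frame * 16 / 1e9
```

Plugging in a large batch of 2048 trials with a few thousand frames per utterance quickly yields hundreds of GB, which motivates the small-batch sampling strategy below.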
\subsection{Sampling of Trials}
\label{ssec:sampling}
In this current work, in order to avoid memory explosion in the x-vector extraction stage of the E2E model, we propose to use a small number of utterances ($64$) in a batch, with about $20$ sec.\ of audio in each utterance. These $64$ utterances are drawn from $m$ speakers, where $m$ ranges from $3-8$. The utterances are split randomly into two halves for each speaker to form the enrollment and test sides of the trials. The MFCC features of the enrollment and test utterances are transformed to utterance embeddings $\eta _e$ and $\eta _t$ (as shown in Fig.~\ref{fig:e2e}). Each pair of enrollment and test utterances is labeled according to whether the trial belongs to the target class (same speaker) or the non-target class (different speakers). In this way, while the number of utterances is small, the number of trials used in the batch is $1024$. Using the label information and the cost function defined in Eq.~\ref{eq:det_cost}, the gradients are back-propagated to update the entire E2E model parameters.
This algorithm is applied separately to the male and female partitions of each training dataset to ensure the trials are gender and domain matched. All the $64$ utterances used in a batch come from the same gender and same dataset (to avoid cross gender, cross language trials). The algorithm is repeated multiple times with different number of speakers ($m$), for the male and female partitions of every dataset. Finally, all the training batches are pooled together and randomized.
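The batch construction described above can be sketched as follows. The function and variable names are our own, and details such as handling splits that do not divide evenly are simplified; this is not the released implementation.

```python
# Sketch of the E2E batch sampler: draw 64 utterances from m speakers,
# split each speaker's utterances into enrollment/test halves, and form
# all enrollment-test pairs (32 x 32 = 1024 trials when m divides 64).
import random

def sample_batch(spk2utts, m, utts_per_batch=64, seed=0):
    """spk2utts: dict mapping speaker id -> list of utterance ids."""
    rng = random.Random(seed)
    speakers = rng.sample(sorted(spk2utts), m)
    per_spk = utts_per_batch // m
    enroll, test = [], []
    for spk in speakers:
        utts = rng.sample(spk2utts[spk], per_spk)
        half = per_spk // 2
        enroll += [(spk, u) for u in utts[:half]]
        test += [(spk, u) for u in utts[half:]]
    # label = 1 for target (same speaker), 0 for non-target
    return [(e, t, int(e[0] == t[0])) for e in enroll for t in test]
```

For example, with $m=4$ this yields $32$ enrollment and $32$ test utterances, i.e. $1024$ trials of which $256$ are targets.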
In contrast, the trial sampling algorithm used in our previous work on NPLDA \cite{ramoji2020nplda, ramoji2020leap} was much simpler. For each gender of each dataset, we sample an enrollment utterance from a randomly sampled speaker, and sample another utterance from either the same speaker or a different speaker to get a target or a non-target trial. This was done without any repetition of utterances, to ensure that each utterance appears once per sampling epoch. This procedure was repeated numerous times for multiple datasets and for both genders to obtain the required number of trials. All the trials were then pooled together, shuffled and split into batches of $1024$ or $2048$ trials.
It is worth noting that the batch statistics of the two sampling methods differ significantly. A batch of trials in the previous sampling method (Algo. 1) can contain trials from multiple datasets and genders, whereas in the modified sampling method, which we will refer to as Algo. 2, all the trials in a batch are from a particular gender of a particular dataset.
\section{Experiments and Results}
This work extends our work in \cite{ramoji2020leap}. The x-vector model is trained using the extended time-delay neural network (E-TDNN) architecture described in \cite{snyder2019speaker}. This uses 10 layers of TDNNs followed by a statistics pooling layer. Once the network is trained, x-vectors of 512 dimensions are extracted from the affine component of layer 12 in the E-TDNN architecture. By combining the VoxCeleb 1\&2 datasets \cite{nagrani2017voxceleb} with Switchboard, Mixer 6, SRE04-10, and the SRE16 and SRE18 evaluation sets, we obtained $2.2$M recordings from $13539$ speakers. The datasets were augmented with a 5-fold augmentation strategy similar to the previous models. In order to reduce the weight given to the VoxCeleb speakers (out-of-domain compared to conversational telephone speech (CTS)), we also subsampled the VoxCeleb augmented portion to include only $1.2$M utterances. The x-vector model is trained on $30$-dimensional MFCC features computed using a $30$-channel mel-scale filter bank spanning the frequency range $200$--$3500$ Hz, mean-normalized over a sliding window of up to 3 seconds, with $13539$-dimensional targets, using the Kaldi toolkit. More information about the model can be found in \cite{ramoji2020leap}.
The various backend PLDA models are trained on the SRE18 evaluation dataset. The evaluation datasets used include the SRE18 development and the SRE19 evaluation datasets. We perform several experiments under various conditions. The primary baseline to benchmark our systems is the Gaussian PLDA backend implementation in the Kaldi toolkit (GPLDA). The Kaldi implementation models the average x-vector embedding of each training speaker. The x-vectors are centered, reduced to 170 dimensions using LDA, and then unit-length normalized.
\pagebreak
In the traditional x-vector system, the statistics pooling layer computes the mean and standard deviation of the final TDNN layer outputs. These two statistics are then concatenated into a fixed-dimensional embedding.
We also perform experiments where we use variance instead of the standard deviation and compare the performance.
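The two pooling variants differ only in whether the square root is taken. A minimal pure-Python sketch, assuming population (biased) statistics over the frame axis:

```python
from math import sqrt

def stats_pool(frames, use_variance=False):
    """Collapse a length-T list of D-dimensional frame vectors into a
    single 2*D vector: the per-dimension mean concatenated with either
    the standard deviation (default) or the variance."""
    T, D = len(frames), len(frames[0])
    mu = [sum(f[d] for f in frames) / T for d in range(D)]
    var = [sum((f[d] - mu[d]) ** 2 for f in frames) / T for d in range(D)]
    second = var if use_variance else [sqrt(v) for v in var]
    return mu + second
```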
In the following sections, we study the influence of reduced training duration and provide a performance comparison of the sampling methods (Algo. 1 vs. Algo. 2). We then compare the performance of Gaussian PLDA (GPLDA), Neural PLDA (NPLDA), and the proposed end-to-end approach (E2E). The PLDA backend training dataset used is the SRE18 evaluation dataset. We report our results on the SRE18 development set and the SRE19 evaluation dataset using two cost metrics: the equal error rate (EER) and the minimum DCF ($C_{Min}$), which are the primary cost metrics for the SRE19 evaluations.
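For reference, the EER can be approximated by sweeping a decision threshold over the scores. The sketch below is our own simplified version (not the NIST scoring tool); it returns the average of the miss and false-alarm rates at their closest crossing:

```python
def equal_error_rate(target_scores, nontarget_scores):
    """At each candidate threshold, the miss rate is the fraction of
    target scores below it and the false-alarm rate is the fraction of
    non-target scores at or above it.  Return the average of the two
    rates where they are closest to equal."""
    best_gap, best_eer = None, None
    for t in sorted(set(target_scores) | set(nontarget_scores)):
        miss = sum(s < t for s in target_scores) / len(target_scores)
        fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        if best_gap is None or abs(miss - fa) < best_gap:
            best_gap, best_eer = abs(miss - fa), (miss + fa) / 2
    return best_eer
```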
\subsection{Influence of training utterance duration}\label{ssec:exp:dur}
As discussed in Section \ref{ssec:sampling}, due to GPU memory considerations and ease of implementation, we create a modified dataset by splitting longer utterances into 20-second chunks (2000 frames) after voice activity detection (VAD) and mean normalization. We compare the performances of the models on the modified dataset and the original one. The results are reported in Table \ref{tab:utt}. We observe that the performances of the systems are quite comparable. This allows us to proceed with these conditions in the implementation of the end-to-end model. All subsequently reported models use 20-second chunks for PLDA training.
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}c|c|c|c|c|c@{}}
\toprule
\multirow{2}{*}{Model} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Duration of \\ utterance \end{tabular}} & \multicolumn{2}{c|}{SRE18 Dev} & \multicolumn{2}{c}{SRE19 Eval} \\ \cmidrule(l){3-6} & & EER (\%) & $C_{Min}$ & EER (\%)& $C_{Min}$ \\ \midrule
GPLDA (G1) & Full & 6.43 & 0.417 & 6.18 & 0.512 \\
GPLDA (G2) & 20 secs & 5.96 &0.436 & 5.80 & 0.518\\
\midrule
NPLDA (N1) & Full & 5.33 & 0.389 & 5.10 & 0.443 \\
NPLDA (N2) & 20 secs & 5.57 & 0.359 & 5.32 & 0.432 \\ \bottomrule
\end{tabular}}
\vspace{0.25cm}
\caption{Performance comparison of training utterance durations (full utterances vs. 20-second segments) for the GPLDA and NPLDA \cite{ramoji2020leap} models}
\vspace{-0.5cm}
\label{tab:utt}
\end{table}
\subsection{Comparison of sampling algorithms with NPLDA}
The way the training trials are generated is crucial to how the model trains and performs. The performance comparison of the two sampling techniques, with PLDA models trained on the SRE18 evaluation dataset, can be seen in Table \ref{tab:sampling}. Although the nature of the batch-wise trials has changed significantly in the proposed new sampling method (Algo. 2), in terms of the number of speakers in each batch and gender-matched batches, we see that its performance is comparable to that of our previous sampling method (Algo. 1).
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{\begin{tabular}{@{}c|c|c|c|c|c@{}}
\toprule
\multirow{2}{*}{Model} &\multirow{2}{*}{Sampling} & \multicolumn{2}{c|}{SRE18 Dev} & \multicolumn{2}{c}{SRE19 Eval} \\ \cmidrule(l){3-6}
& & EER (\%) & $C_{Min}$ & EER (\%) & $C_{Min}$ \\ \midrule
NPLDA (N2) & Algo. 1 & 5.57 & 0.359 & 5.32 & 0.432 \\
NPLDA (N3) & Algo. 2 & 5.23 & 0.338 & 5.73 & 0.439 \\
\bottomrule
\end{tabular}}
\vspace{0.25cm}
\caption{Performance comparison of the NPLDA \cite{ramoji2020leap} model trained with the previous sampling method (Algo. 1) and the proposed new sampling method (Algo. 2)}
\label{tab:sampling}
\end{table}
\subsection{End-to-End (E2E)}
Using the proposed sampling method, we generate batches of 1024 trials using 64 utterances per batch. Both the NPLDA and E2E models were trained with this batch size. We use the Adam optimizer for backpropagation learning. The performances of these models are reported in Table \ref{tab:e2e}. The NPLDA model is initialized with the GPLDA model. The initialization details of the models, along with the pooling functions, are reported in the table. We compare performances using two different second-order statistics (StdDev or Var). We observe significant improvements of NPLDA over the GPLDA system and, subsequently, of the E2E system over NPLDA. Comparing E2E and GPLDA with standard deviation as the pooling function, we observe relative improvements of about $23$\% and $22$\% on the SRE18 development and SRE19 evaluation sets, respectively, in terms of the $C_{Min}$ metric. The relative improvements between E2E and GPLDA with Var as the pooling function are about $33$\% and $20$\% for the SRE18 development and SRE19 evaluation sets, respectively, for the $C_{Min}$ metric. Though the cost function in the neural network aims to minimize the detection cost function (DCF), we also see improvements in the EER metric using the proposed approach. These results show that joint E2E training with a single neural pipeline and optimization improves speaker recognition performance.
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}l|c|c|c|c|c|c@{}}
\toprule
\multirow{2}{*}{Model} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Pooling \\ function \end{tabular}} & \multirow{2}{*}{Init.} & \multicolumn{2}{c|}{SRE18 Dev} & \multicolumn{2}{c}{SRE19 Eval} \\ \cmidrule(l){4-7}
& \multicolumn{1}{l|}{} & & EER (\%) & $C_{Min}$ & EER (\%) & $C_{Min}$ \\ \midrule
GPLDA (G2) & StdDev & - & 5.96 & 0.436 & 5.80 & 0.518 \\
GPLDA (G3) & Var & - & 7.23 & 0.459 & 6.33 & 0.560 \\
NPLDA (N2) & StdDev & G2 & 5.57 & 0.359 & 5.32 & 0.432 \\
NPLDA (N4) & Var & G3 & 6.05 & 0.377 & 5.91 & 0.465 \\
E2E (E1) & StdDev & N2 & \textbf{5.36} & 0.337 & \textbf{5.31} & \textbf{0.405} \\
E2E (E2) & Var & N4 & 5.60 & \textbf{0.307} & 5.43 & 0.446 \\ \bottomrule
\end{tabular}}
\vspace{0.25cm}
\caption{Performance comparison between the GPLDA, NPLDA and E2E models using standard deviation and variance as the secondary pooling functions. The model used to initialize each network is denoted in the third column}
\label{tab:e2e}
\vspace{-0.5cm}
\end{table}
\section{Summary and Conclusions}
This paper explores a step in the direction of a neural end-to-end (E2E) approach for speaker verification tasks. It is an extension of our work on a discriminative neural PLDA (NPLDA) backend. The proposed model is a single end-to-end pipeline that is optimized with a verification cost function and maps acoustic features such as MFCCs directly to a likelihood ratio score. We discuss the factors that were key in implementing the E2E model: modifying the duration of the training utterances and developing a new sampling technique to generate training trials. The model shows considerable improvements over the generative Gaussian PLDA and the NPLDA models on the NIST SRE 2018 and 2019 datasets. One drawback of the proposed method is the requirement to initialize the E2E model with the pre-trained weights of an x-vector network.
Future work in this direction could include investigating better sampling algorithms such as curriculum learning \cite{ranjan2017curriculum}, different loss functions, and improved architectures for the embedding extractor using attention and other sequence models such as LSTMs.
\bibliographystyle{IEEEtran}
\section{Introduction}
\noindent This is the first of three papers that develop and use structures which are counted by a ``parabolic'' generalization of the Catalan numbers. Apart from some motivating remarks, it can be read by anyone interested in tableaux. It is self-contained, except for a few references to its tableau precursors \cite{Wi2} and \cite{PW1}. Fix $n \geq 1$ and set $[n-1] := \{1,2,...,n-1\}$. Choose a subset $R \subseteq [n-1]$ and set $r := |R|.$ The section on $R$-Catalan numbers can be understood as soon as a few definitions have been read. Our ``rightmost clump deleting'' chains of sets defined early in Section 4 became Exercise 2.202 in Stanley's list \cite{Sta} of interpretations of the Catalan numbers.
Consider the ordered partitions of the set $[n]$ with $r+1$ blocks of fixed sizes that are determined by using $R$ to specify ``dividers''. These ordered partitions can be viewed as being the ``inverses'' of multipermutations whose $r+1$ multiplicities are determined by $R$. Setting $J := [n-1] \backslash R$, these multipermutations depict the minimum length coset representatives forming $W^J$ for the quotient of $S_n$ by the parabolic subgroup $W_J$. We refer to the standard forms of the ordered partitions as ``$R$-permutations''. When $R = [n-1]$, the $R$-permutations are just the permutations of $[n]$. The number of 312-avoiding permutations of $[n]$ is the $n^{th}$ Catalan number. In 2012 we generalized the notion of 312-pattern avoidance for permutations to that of ``$R$-312-avoidance'' for $R$-permutations. Here we define the ``parabolic $R$-Catalan number'' to be the number of $R$-312-avoiding $R$-permutations.
Let $N \geq 1$ and fix a partition $\lambda$ of $N$. The shape of $\lambda$ has $N$ boxes; assume that it has at most $n$ rows. Let $\mathcal{T}_\lambda$ denote the set of semistandard Young tableaux of shape $\lambda$ with values from $[n]$. The content weight monomial $x^{\Theta(T)}$ of a tableau $T$ in $\mathcal{T}_\lambda$ is formed from the census $\Theta(T)$ of the values from $[n]$ that appear in $T$. The Schur function in $x_1, ..., x_n$ indexed by $\lambda$ can be expressed as the sum over $T$ in $\mathcal{T}_\lambda$ of the content weight monomials $x^{\Theta(T)}$. Let $R_\lambda \subseteq [n-1]$ be the set of column lengths in the shape $\lambda$ that are less than $n$. The type A Demazure characters (key polynomials) in $x_1, ..., x_n$ can be indexed by pairs $(\lambda,\pi)$, where $\lambda$ is a partition as above and $\pi$ is an $R_\lambda$-permutation. We refer to these as ``Demazure polynomials''. The Demazure polynomial indexed by $(\lambda,\pi)$ can be expressed as the sum of the monomials $x^{\Theta(T)}$ over a set $\mathcal{D}_\lambda(\pi)$ of ``Demazure tableaux'' of shape $\lambda$ for the $R$-permutation $\pi$.
Regarding $\mathcal{T}_\lambda$ as a poset via componentwise comparison, it can be seen that the principal order ideals $[T]$ in $\mathcal{T}_\lambda$ form convex polytopes in $\mathbb{Z}^N$. The set $\mathcal{D}_\lambda(\pi)$ can be seen to be a certain subset of the ideal $[Y_\lambda(\pi)]$, where the tableau $Y_\lambda(\pi)$ is the ``key'' of $\pi$. It is natural to ask for which $R$-permutations $\pi$ one has $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. Our first main result is: If $\pi$ is an $R_\lambda$-312-avoiding $R_\lambda$-permutation, then the tableau set $\mathcal{D}_\lambda(\pi)$ is all of the principal ideal $[Y_\lambda(\pi)]$ (and hence is convex in $\mathbb{Z}^N$). Our second main result is conversely: If $\mathcal{D}_\lambda(\pi)$ forms a convex polytope in $\mathbb{Z}^N$ (this includes the principal ideals $[Y_\lambda(\pi)]$), then the $R_\lambda$-permutation $\pi$ is $R_\lambda$-312-avoiding. So we can say exactly when one has $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. Our earlier papers \cite{Wi2} and \cite{PW1} gave the first tractable descriptions of the Demazure tableau sets $\mathcal{D}_\lambda(\pi)$. Those results provide the means to prove the main results here.
Demazure characters arose in 1974 when Demazure introduced certain $B$-modules while studying singularities of Schubert varieties in the $G/B$ flag manifolds. Flagged Schur functions arose in 1982 when Lascoux and Sch{\"u}tzenberger were studying Schubert polynomials for the flag manifold $GL(n)/B$. Like the Demazure polynomials, the flagged Schur functions in $x_1, ..., x_n$ can be expressed as sums of the weight monomial $x^{\Theta(T)}$ over certain subsets of $\mathcal{T}_\lambda$. Reiner and Shimozono \cite{RS} and then Postnikov and Stanley \cite{PS} described coincidences between the Demazure polynomials and the flagged Schur functions. Beginning in 2011, our original motivation for this project was to better understand their results. In the second paper \cite{PW3} in this series, we deepen their results: Rather than obtaining coincidences at the level of polynomials, we employ the main results of this paper to obtain the coincidences at the underlying level of the tableau sets that are used to describe the polynomials. Fact \ref{fact320.3}, Proposition \ref{prop320.2}, and Theorem \ref{theorem340} are also needed in \cite{PW3}. In Section 8 we indicate why our characterization of convexity for the sets $\mathcal{D}_\lambda(\pi)$ may be of interest in algebraic geometry and representation theory.
Each of the two main themes of this series of papers is at least as interesting to us as is any one of the stated results in and of itself. One of these themes is that the structures used in the three papers are counted by numbers that can be regarded as being ``parabolic'' generalizations of the Catalan numbers. In these three papers these structures are respectively introduced to study convexity, to establish the coincidences, and to solve a problem concerning the ``nonpermutable'' property of Gessel and Viennot. It turned out that by 2014, Godbole, Goyt, Herdan, and Pudwell had independently introduced \cite{GGHP} a general notion of pattern avoidance for ordered partitions that includes our notion of $R$-312-avoidance for $R$-permutations. Apparently their motivations for developing their definition were purely enumerative. Chen, Dai, and Zhou obtained \cite{CDZ} further enumerative results. As a result of the work of these two groups, two sequences were added to the OEIS. As is described in our last section, one of those is formed from the counts considered here for a sequence of particular cases. The other of those is formed by summing the counts considered here for all cases. In our series of papers, the parabolic Catalan count arises ``in nature'' in a number of interrelated ways. In this first paper this quantity counts ``gapless $R$-tuples'', ``$R$-rightmost clump deleting chains'', and convex Demazure tableau sets. The parabolic Catalan number further counts roughly another dozen structures in our two subsequent papers. After the first version \cite{PW2} of this paper was initially distributed, we learned of a different (but related) kind of parabolic generalization of the Catalan numbers due to M{\"u}hle and Williams. This is described at the end of this paper.
The other main theme of this series of papers is the ubiquity of some of the structures that are counted by the parabolic Catalan numbers. The gapless $R$-tuples arise as the images of the $R$-312-avoiding $R$-permutations under the $R$-ranking map in this paper and as the minimum members of the equivalence classes for the indexing $n$-tuples of a generalization of the flagged Schur functions in our second paper. Moreover, the $R$-gapless condition provides half of the solution to the nonpermutability problem considered in our third paper \cite{PW4}. Since the gapless $R$-tuples and the structures equivalent to them are enumerated by a parabolic generalization of Catalan numbers, it would not be surprising if they were to arise in further contexts.
The material in this paper first appeared as one-third of the overly long manuscript \cite{PW2}. The second paper \cite{PW3} in this series presents most of the remaining material from \cite{PW2}. Section 11 of \cite{PW3} describes the projecting and lifting processes that relate the notions of 312-avoidance and of $R$-312-avoidance.
Definitions are presented in Sections 2 and 3. In Section 4 we reformulate the $R$-312-avoiding $R$-permutations as $R$-rightmost clump deleting chains and as gapless $R$-tuples. To prepare for the proofs of our two main results, in Section 5 we associate certain tableaux to these structures. Our main results are presented in Sections 6 and 7. Section 8 indicates why convexity for the sets $\mathcal{D}_\lambda(\pi)$ may be of further interest, and Section 9 contains remarks on enumeration.
\section{General definitions and definitions of $\mathbf{\emph{R}}$-tuples}
In posets we use interval notation to denote principal ideals and convex sets. For example, in $\mathbb{Z}$ one has $(i, k] = \{i+1, i+2, ... , k\}$. Given an element $x$ of a poset $P$, we denote the principal ideal $\{ y \in P : y \leq x \}$ by $[x]$. When $P = \{1 < 2 < 3 < ... \}$, we write $[1,k]$ as $[k]$. If $Q$ is a set of integers with $q$ elements, for $d \in [q]$ let $rank^d(Q)$ be the $d^{th}$ largest element of $Q$. We write $\max(Q) := rank^1(Q)$ and $\min(Q) := rank^q(Q)$. A set $\mathcal{D} \subseteq \mathbb{Z}^N$ for some $N \geq 1$ is a \textit{convex polytope} if it is the solution set for a finite system of linear inequalities.
Fix $n \geq 1$ throughout the paper. Except for $\zeta$, various lower case Greek letters indicate various kinds of $n$-tuples of non-negative integers. Their entries are denoted with the same letter. An $nn$-\textit{tuple} $\nu$ consists of $n$ \emph{entries} $\nu_i \in [n]$ that are indexed by \emph{indices} $i \in [1,n]$. An $nn$-tuple $\phi$ is a \textit{flag} if $\phi_1 \leq \ldots \leq \phi_n$. An \emph{upper tuple} is an $nn$-tuple $\upsilon$ such that $\upsilon_i \geq i$ for $i \in [n]$. The upper flags are the sequences of the $y$-coordinates for the above-diagonal Catalan lattice paths from $(0, 0)$ to $(n, n)$. A \emph{permutation} is an $nn$-tuple that has distinct entries. Let $S_n$ denote the set of permutations. A permutation $\pi$ is $312$-\textit{avoiding} if there do not exist indices $1 \leq a < b < c \leq n$ such that $\pi_a > \pi_b < \pi_c$ and $\pi_a > \pi_c$. (This is equivalent to its inverse being 231-avoiding.) Let $S_n^{312}$ denote the set of 312-avoiding permutations. By Exercises 116 and 24 of \cite{Sta}, these permutations and the upper flags are counted by the Catalan number $C_n := \frac{1}{n+1}\binom{2n}{n}$.
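As a concrete check of these counts, 312-avoidance and the Catalan numbers can be tested by brute force (an illustrative sketch, feasible only for small $n$):

```python
from itertools import permutations
from math import comb

def is_312_avoiding(pi):
    """True when no indices a < b < c satisfy pi_a > pi_c > pi_b,
    i.e. when pi contains no occurrence of the pattern 312."""
    n = len(pi)
    return not any(pi[a] > pi[c] > pi[b]
                   for a in range(n)
                   for b in range(a + 1, n)
                   for c in range(b + 1, n))

def catalan(n):
    """C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)
```

For instance, exactly $C_4 = 14$ of the $24$ permutations of $[4]$ are 312-avoiding.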
Fix $R \subseteq [n-1]$ through the end of Section 7. Denote the elements of $R$ by $q_1 < \ldots < q_r$ for some $r \geq 0$. Set $q_0 := 0$ and $q_{r+1} := n$. We use the $q_h$ for $h \in [r+1]$ to specify the locations of $r+1$ ``dividers'' within $nn$-tuples: Let $\nu$ be an $nn$-tuple. On the graph of $\nu$ in the first quadrant draw vertical lines at $x = q_h + \epsilon$ for $h \in [r+1]$ and some small $\epsilon > 0$. In Figure 7.1 we have $n = 9$ and $R = \{ 2, 3, 5, 7 \}$. These $r+1$ lines indicate the right ends of the $r+1$ \emph{carrels} $(q_{h-1}, q_h]$ \emph{of $\nu$} for $h \in [r+1]$. An \emph{$R$-tuple} is an $nn$-tuple that has been equipped with these $r+1$ dividers. Fix an $R$-tuple $\nu$; we portray it by $(\nu_1, ... , \nu_{q_1} ; \nu_{q_1+1}, ... , \nu_{q_2}; ... ; \nu_{q_r+1}, ... , \nu_n)$. Let $U_R(n)$ denote the set of upper $R$-tuples. Let $UF_R(n)$ denote the subset of $U_R(n)$ consisting of upper flags. Fix $h \in [r+1]$. The $h^{th}$ carrel has $p_h := q_h - q_{h-1}$ indices. The $h^{th}$ \emph{cohort} of $\nu$ is the multiset of entries of $\nu$ on the $h^{th}$ carrel.
An \emph{$R$-increasing tuple} is an $R$-tuple $\alpha$ such that $\alpha_{q_{h-1}+1} < ... < \alpha_{q_h}$ for $h \in [r+1]$. Let $UI_R(n)$ denote the subset of $U_R(n)$ consisting of $R$-increasing upper tuples. Consult Table 2.1 for an example and a nonexample. Boldface entries indicate failures. It can be seen that $|UI_R(n)| = n! / \prod_{h=1}^{r+1} p_h! =: \binom{n}{R}$. An $R$-\textit{permutation} is a permutation that is $R$-increasing when viewed as an $R$-tuple. Let $S_n^R$ denote the set of $R$-permutations. Note that $| S_n^R| = \binom{n}{R}$. We refer to the cases $R = \emptyset$ and $R = [n-1]$ as the \emph{trivial} and \emph{full cases} respectively. Here $| S_n^\emptyset | = 1$ and $| S_n^{[n-1]} | = n!$ respectively. An $R$-permutation $\pi$ is $R$-$312$-\textit{containing} if there exists $h \in [r-1]$ and indices $1 \leq a \leq q_h < b \leq q_{h+1} < c \leq n$ such that $\pi_a > \pi_b < \pi_c$ and $\pi_a > \pi_c$. An $R$-permutation is $R$-$312$-\textit{avoiding} if it is not $R$-$312$-containing. (This is equivalent to the corresponding multipermutation being 231-avoiding.) Let $S_n^{R\text{-}312}$ denote the set of $R$-312-avoiding permutations. We define the \emph{$R$-parabolic Catalan number} $C_n^R$ by $C_n^R := |S_n^{R\text{-}312}|$.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{lccc}
\underline{Type of $R$-tuple} & \underline{Set} & \underline{Example} & \underline{Nonexample} \\ \\
$R$-increasing upper tuple & $\alpha \in UI_R(n)$ & $(2,6,7;4,5,7,8,9;9)$ & $(3,5,\textbf{5};6,\textbf{4},7,8,9;9)$ \\ \\
$R$-312-avoiding permutation & $\pi \in S_n^{R\text{-}312}$ & $(2,3,6;1,4,5,8,9;7)$ & $(2,4,\textbf{6};1,\textbf{3},7,8,9;\textbf{5})$ \\ \\
Gapless $R$-tuple & $\gamma \in UG_R(n)$ & $(2,4,6;4,5,6,7,9;9)$ & $(2,4,6;\textbf{4},\textbf{6},7,8,9;9)$ \\ \\
\end{tabular}
\caption*{Table 2.1. (Non-)Examples of R-tuples for $n = 9$ and $R = \{3,8\}$.}
\end{center}
\end{figure}
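The $R$-312 condition above restricts the three indices to straddle two consecutive dividers; it can be checked directly (a sketch with 0-based indices, intended for small $n$):

```python
def is_R_312_avoiding(pi, R, n):
    """True unless some h gives 1-based indices a <= q_h < b <= q_{h+1}
    < c <= n with pi_a > pi_c > pi_b (an R-312 pattern)."""
    qs = sorted(R)
    for h in range(len(qs) - 1):
        qh, qh1 = qs[h], qs[h + 1]
        if any(pi[a] > pi[c] > pi[b]
               for a in range(qh)
               for b in range(qh, qh1)
               for c in range(qh1, n)):
            return False
    return True
```

On the rows of Table 2.1 this accepts $(2,3,6;1,4,5,8,9;7)$ and rejects $(2,4,6;1,3,7,8,9;5)$; in the full case $R = [n-1]$ it reduces to ordinary 312-avoidance.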
Next we consider $R$-increasing tuples with the following property: Whenever there is a descent across a divider between carrels, then no ``gaps'' can occur until the increasing entries in the new carrel ``catch up''. So we define a \emph{gapless $R$-tuple} to be an $R$-increasing upper tuple $\gamma$ such that whenever there exists $h \in [r]$ with $\gamma_{q_h} > \gamma_{q_h+1}$, then $s := \gamma_{q_h} - \gamma_{q_h+1} + 1 \leq p_{h+1}$ and the first $s$ entries of the $(h+1)^{st}$ carrel $(q_h, q_{h+1} ]$ are $\gamma_{q_h}-s+1, \gamma_{q_h}-s+2, ... , \gamma_{q_h}$. The failure in Table 2.1 occurs because the absence of the element $5 \in [9]$ from the second carrel creates a gap. Let $UG_R(n) \subseteq UI_R(n)$ denote the set of gapless $R$-tuples. Note that a gapless $\gamma$ has $\gamma_{q_1} \leq \gamma_{q_2} \leq ... \leq \gamma_{q_r} \leq \gamma_{q_{r+1}}$. So in the full $R = [n-1]$ case, each gapless $R$-tuple is a flag. Hence $UG_{[n-1]}(n) = UF_{[n-1]}(n)$.
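The gapless condition can likewise be tested directly. This sketch (1-based values stored in a 0-indexed tuple) also verifies the upper and $R$-increasing requirements that the definition presumes:

```python
def is_gapless(gamma, R, n):
    """Test whether gamma is a gapless R-tuple: upper, R-increasing,
    and after any descent across a divider the next carrel starts by
    'catching up' with s consecutive values ending at gamma_{q_h}."""
    qs = sorted(R) + [n]
    if any(gamma[i] < i + 1 for i in range(n)):            # upper
        return False
    q_prev = 0
    for q in qs:                                           # R-increasing
        if any(gamma[i] >= gamma[i + 1] for i in range(q_prev, q - 1)):
            return False
        q_prev = q
    for h in range(len(qs) - 1):                           # gapless
        q = qs[h]
        if gamma[q - 1] > gamma[q]:
            s = gamma[q - 1] - gamma[q] + 1
            if s > qs[h + 1] - q:                          # s <= p_{h+1}
                return False
            if list(gamma[q:q + s]) != list(range(gamma[q - 1] - s + 1,
                                                  gamma[q - 1] + 1)):
                return False
    return True
```

This accepts the gapless example of Table 2.1 and rejects its nonexample, where the missing value $5$ creates a gap.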
An $R$-\textit{chain} $B$ is a sequence of sets $\emptyset =: B_0 \subset B_1 \subset \ldots \subset B_r \subset B_{r+1} := [n]$ such that $|B_h| = q_h$ for $h \in [r]$. A bijection from $R$-permutations $\pi$ to $R$-chains $B$ is given by $B_h := \{\pi_1, \pi_2, \ldots, \pi_{q_h}\}$ for $h \in [r]$. We indicate it by $\pi \mapsto B$. The $R$-chains for the two $R$-permutations appearing in Table 2.1 are $\emptyset \subset \{2, 3, 6\} \subset \{ 1,2,3,4,5,6,8,9 \} \subset [9]$ and $\emptyset \subset \{2, 4, 6\} \subset \{1,2,3,4,6,7,8,9\} \subset [9]$. Fix an $R$-permutation $\pi$ and let $B$ be the corresponding $R$-chain. For $h \in [r+1]$, the set $B_h$ is the union of the first $h$ cohorts of $\pi$. Note that $R$-chains $B$ (and hence $R$-permutations $\pi$) are equivalent to the $\binom{n}{R}$ objects that could be called ``ordered $R$-partitions of $[n]$''; these arise as the sequences $(B_1 \backslash B_0, B_2\backslash B_1, \ldots, B_{r+1}\backslash B_r)$ of $r+1$ disjoint nonempty subsets of sizes $p_1, p_2, \ldots, p_{r+1}$. Now create an $R$-tuple $\Psi_R(\pi) =: \psi$ as follows: For $h \in [r+1]$ specify the entries in its $h^{th}$ carrel by $\psi_i := \text{rank}^{q_h-i+1}(B_h)$ for $i \in (q_{h-1},q_h]$. For a model, imagine there are $n$ discus throwers grouped into $r+1$ heats of $p_h$ throwers for $h \in [r+1]$. Each thrower gets one throw, the throw distances are elements of $[n]$, and there are no ties. After the $h^{th}$ heat has been completed, the $p_h$ longest throws overall so far are announced in ascending order. See Table 2.2. We call $\psi$ the \emph{rank $R$-tuple of $\pi$}. As well as being $R$-increasing, it can be seen that $\psi$ is upper: So $\psi \in UI_R(n)$.
\vspace{.125in}
\begin{figure}[h!]
\begin{center}
\begin{tabular}{lccc}
\underline{Name} & \underline{From/To} & \underline{Input} & \underline{Image} \\ \\
Rank $R$-tuple & $\Psi_R: S_n^R \rightarrow UI_R(n)$ & $(2,4,6;1,5,7,8,9;3)$ & $(2,4,6;5,6,7,8,9;9)$ \\ \\
Undoes $\Psi_R|_{S_n^{R\text{-}312}}$ & $\Pi_R: UG_R(n) \rightarrow S_n^{R\text{-}312}$ & $(2,4,6;4,5,6,7,9;9)$ & $(2,4,6;1,3,5,7,9;8)$ \\ \\
\end{tabular}
\caption*{Table 2.2. Examples for maps of $R$-tuples for $n = 9$ and $R = \{3, 8 \}$.}
\end{center}
\end{figure}
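The discus-thrower description of the rank map $\Psi_R$ translates into a few lines (a sketch; tuples hold 1-based values):

```python
def rank_tuple(pi, R, n):
    """Rank R-tuple Psi_R(pi): after the h-th carrel ('heat'), append
    the p_h largest values seen so far, in ascending order."""
    qs = sorted(R) + [n]
    psi, seen, q_prev = [], [], 0
    for q in qs:
        seen = sorted(seen + list(pi[q_prev:q]))
        psi += seen[q_prev - q:]     # the last q - q_prev = p_h values
        q_prev = q
    return tuple(psi)
```

This reproduces the first row of Table 2.2, and sending both $(2,4,6;1,5,7,8,9;3)$ and $(2,4,6;3,5,7,8,9;1)$ to the same image exhibits the non-injectivity of $\Psi_R$.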
The map $\Psi_R$ is not injective; for example it maps another $R$-permutation $(2,4,6;3,5,7,8,9;1)$ to the same image as in Table 2.2. In Proposition \ref{prop320.2}(ii) it will be seen that the restriction of $\Psi_R$ to $S_n^{R\text{-}312}$ is a bijection to $UG_R(n)$ whose inverse is the following map $\Pi_R$: Let $\gamma \in UG_R(n)$. See Table 2.2. Define an $R$-tuple $\Pi_R(\gamma) =: \pi$ by: Initialize $\pi_i := \gamma_i$ for $i \in (0,q_1]$. Let $h \in [r]$. If $\gamma_{q_h} \geq \gamma_{q_h+1}$, set $s:= \gamma_{q_h} - \gamma_{q_h+1} + 1$. Otherwise set $s := 0$. For $i$ in the right side $(q_h + s, q_{h+1}]$ of the $(h+1)^{st}$ carrel, set $\pi_i := \gamma_i$. For $i$ in the left side $(q_h, q_h + s]$, set $d := q_h + s - i + 1$ and $\pi_i := rank^d( \hspace{1mm} [\gamma_{q_h}] \hspace{1mm} \backslash \hspace{1mm} \{ \pi_1, ... , \pi_{q_h} \} \hspace{1mm} )$. In words: working from right to left, fill in the left side by finding the largest element of $[\gamma_{q_h}]$ not used by $\pi$ so far, then the next largest, and so on. In Table 2.2 when $h=1$ the elements $5,3,1$ are found and placed into the $6^{th}, 5^{th}$, and $4^{th}$ positions. (Since $\gamma$ is a gapless $R$-tuple, when $s \geq 1$ we have $\gamma_{q_h + s} = \gamma_{q_h}$. Since `gapless' includes the upper property, here we have $\gamma_{q_h +s} \geq q_h + s$. Hence $| \hspace{1mm} [\gamma_{q_h}] \hspace{1mm} \backslash \hspace{1mm} \{ \pi_1, ... , \pi_{q_h} \} \hspace{1mm} | \geq s$, and so there are enough elements available to define these left side $\pi_i$.) Since $\gamma_{q_h} \leq \gamma_{q_{h+1}}$, it can inductively be seen that $\max\{ \pi_1, ... , \pi_{q_h} \} = \gamma_{q_h}$.
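A sketch of $\Pi_R$ (1-based values; note that we take $s = \gamma_{q_h} - \gamma_{q_h+1} + 1$ also when the two entries across a divider are equal, which the worked example of Table 2.2 requires -- with $s = 0$ there, the last entry would repeat the already-used value $9$):

```python
def pi_from_gapless(gamma, R, n):
    """The map Pi_R: recover an R-312-avoiding R-permutation from a
    gapless R-tuple gamma."""
    qs = sorted(R) + [n]
    pi = list(gamma[:qs[0]])
    for h in range(len(qs) - 1):
        q, q_next = qs[h], qs[h + 1]
        # s > 0 whenever gamma does not strictly increase at the divider
        s = max(gamma[q - 1] - gamma[q] + 1, 0)
        pi += [None] * (q_next - q)
        for i in range(q + s, q_next):      # right side copies gamma
            pi[i] = gamma[i]
        used = set(pi)
        avail = [v for v in range(gamma[q - 1], 0, -1) if v not in used]
        for k, i in enumerate(range(q + s - 1, q - 1, -1)):
            pi[i] = avail[k]                # largest unused, right to left
    return tuple(pi)
```

On the gapless input of Table 2.2 this returns the tabulated image $(2,4,6;1,3,5,7,9;8)$.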
When we restrict our attention to the full $R = [n-1]$ case, we will suppress most prefixes and subscripts of `$R$'. Two examples of this are: an $[n-1]$-chain becomes a \emph{chain}, and one has $UF(n) = UG(n)$.
\section{Shapes, tableaux, connections to Lie theory}
A \emph{partition} is an $n$-tuple $\lambda \in \mathbb{Z}^n$ such that $\lambda_1 \geq \ldots \geq \lambda_n \geq 0$. Fix such a $\lambda$ for the rest of the paper. We say it is \textit{strict} if $\lambda_1 > \ldots > \lambda_n$. The \textit{shape} of $\lambda$, also denoted $\lambda$, consists of $n$ left justified rows with $\lambda_1, \ldots, \lambda_n$ boxes. We denote its column lengths by $\zeta_1 \geq \ldots \geq \zeta_{\lambda_1}$. The column length $n$ is called the \emph{trivial} column length. Since the columns are more important than the rows, the boxes of $\lambda$ are transpose-indexed by pairs $(j,i)$ such that $1 \leq j \leq \lambda_1$ and $1 \leq i \leq \zeta_j$. Sometimes for boundary purposes we refer to a $0^{th}$ \emph{latent column} of boxes, which is a prepended $0^{th}$ column of trivial length. If $\lambda = 0$, its shape is the \textit{empty shape} $\emptyset$. Define $R_\lambda \subseteq [n-1]$ to be the set of distinct non-trivial column lengths of $\lambda$. Note that $\lambda$ is strict if and only if $R_\lambda = [n-1]$, i.e. $R_\lambda$ is full. Set $|\lambda| := \lambda_1 + \ldots + \lambda_n$.
A \textit{(semistandard) tableau of shape $\lambda$} is a filling of $\lambda$ with values from $[n]$ that strictly increase from north to south and weakly increase from west to east. The example tableau below for $n = 12$ has shape $\lambda = (7^5, 5^4, 2^2)$. Here $R_\lambda = \{5, 9, 11 \}$. Let $\mathcal{T}_\lambda$ denote the set of tableaux $T$ of shape $\lambda$. Under entrywise comparison $\leq$, this set $\mathcal{T}_\lambda$ becomes a poset that is the distributive lattice $L(\lambda, n)$ introduced by Stanley. The principal ideals $[T]$ in $\mathcal{T}_\lambda$ are clearly convex polytopes in $\mathbb{Z}^{|\lambda|}$. Fix $T \in \mathcal{T}_\lambda$. For $j \in [\lambda_1]$, we denote the one column ``subtableau'' on the boxes in the $j^{th}$ column by $T_j$. Here for $i \in [\zeta_j]$ the tableau value in the $i^{th}$ row is denoted $T_j(i)$. The set of values in $T_j$ is denoted $B(T_j)$. Columns $T_j$ of trivial length must be \emph{inert}, that is $B(T_j) = [n]$. The $0^{th}$ \textit{latent column} $T_0$ is an inert column that is sometimes implicitly prepended to the tableau $T$ at hand: We ask readers to refer to its values as needed to fulfill definitions or to finish constructions. We say a tableau $Y$ of shape $\lambda$ is a $\lambda$-\textit{key} if $B(Y_l) \supseteq B(Y_j)$ for $1 \leq l \leq j \leq \lambda_1$. The example tableau below is a $\lambda$-key. The empty shape has one tableau on it, the \textit{null tableau}. Fix a set $Q \subseteq [n]$ with $|Q| =: q \geq 0$. The \textit{column} $Y(Q)$ is the tableau on the shape for the partition $(1^q, 0^{n-q})$ whose values form the set $Q$. Then for $d \in [q]$, the value in the $(q+1-d)^{th}$ row of $Y(Q)$ is $rank^d(Q)$.
\begin{figure}[h!]
\begin{center}
\ytableausetup{boxsize = 1.5em}
$$
\begin{ytableau}
1 & 1 & 1 & 1 & 1 & 1 & 1\\
2 & 2 & 3 & 3 & 3 & 4 & 4\\
3 & 3 & 4 & 4 & 4 & 6 & 6\\
4 & 4 & 5 & 5 & 5 & 7 & 7\\
5 & 5 & 6 & 6 & 6 & 10 & 10\\
6 & 6 & 7 & 7 & 7\\
7 & 7 & 8 & 8 & 8\\
8 & 8 & 9 & 9 & 9\\
9 & 9 & 10 & 10 & 10\\
10 & 10 \\
12 & 12
\end{ytableau}$$
\end{center}
\end{figure}
The most important values in a tableau of shape $\lambda$ occur at the ends of its rows. Using the latent column when needed, these $n$ values from $[n]$ are gathered into an $R_\lambda$-tuple as follows: Let $T \in \mathcal{T}_\lambda$. We define the \textit{$\lambda$-row end list} $\omega$ of $T$ to be the $R_\lambda$-tuple given by $\omega_i := T_{\lambda_i}(i)$ for $i \in [n]$. Note that for $h \in [r+1]$, down the $h^{th}$ ``cliff'' from the right in the shape of $\lambda$ one has $\lambda_i = \lambda_{i^\prime}$ for $i, i^\prime \in (q_{h-1}, q_{h} ]$. In the example take $h = 2$. Then $q_2 = 9$ and $q_1 = 5$. Here $\lambda_i = 5 = \lambda_{i'}$ for $i, i' \in (5,9]$. Reading off the values of $T$ down that cliff produces the $h^{th}$ cohort of $\omega$. Here this cohort of $\omega$ is $\{7, 8, 9, 10\}$. These values are increasing. So $\omega \in UI_{R_\lambda}(n)$.
For $h \in [r]$, the columns of length $q_h$ in the shape $\lambda$ have indices $j$ such that $j \in (\lambda_{q_{h+1}}, \lambda_{q_h}]$. When $h=2$ we have $j \in (\lambda_{11}, \lambda_9] = (2,5]$ for columns of length $q_2 = 9$. A bijection from $R$-chains $B$ to $\lambda$-keys $Y$ is obtained by juxtaposing from left to right $\lambda_n$ inert columns and $\lambda_{q_h}-\lambda_{q_{h+1}}$ copies of $Y(B_h)$ for $r \geq h \geq 1$. We indicate it by $B \mapsto Y$. For $h = 2$ here there are $\lambda_9 - \lambda_{11} = 5-2 = 3$ copies of $Y(B_2)$ with $B_2 = (0, 10] \backslash \{2\}$. Unfortunately we need to have the indices $h$ of the column lengths $q_h$ decreasing from west to east while the column indices $j$ increase from west to east. Hence the elements of $B_{h+1}$ form the column $Y_j$ for $j = \lambda_{q_{h+1}}$ while the elements of $B_h$ form $Y_{j+1}$. A bijection from $R_\lambda$-permutations $\pi$ to $\lambda$-keys $Y$ is obtained by following $\pi \mapsto B$ with $B \mapsto Y$. The image of an $R_\lambda$-permutation $\pi$ is called the \emph{$\lambda$-key of $\pi$}; it is denoted $Y_\lambda(\pi)$. The example tableau is the $\lambda$-key of $\pi = (1,4,6,7,10; 3,5,8,9; 2, 12; 11)$. It is easy to see that the $\lambda$-row end list of the $\lambda$-key of $\pi$ is the rank $R_\lambda$-tuple $\Psi_{R_\lambda}(\pi) =: \psi$ of $\pi$: Here $\psi_i = Y_{\lambda_i}(i)$ for $i \in [n]$.
Let $\alpha \in UI_{R_\lambda}(n)$. Define $\mathcal{Z}_\lambda(\alpha)$ to be the subset of tableaux $T \in \mathcal{T}_\lambda$ with $\lambda$-row end list $\alpha$. To see that $\mathcal{Z}_\lambda(\alpha) \neq \emptyset$, for $i \in [n]$ take $T_j(i) := i$ for $j \in [1, \lambda_i)$ and $T_{\lambda_i}(i) := \alpha_i$. This subset is closed under the join operation for the lattice $\mathcal{T}_\lambda$. We define the \emph{$\lambda$-row end max tableau $M_\lambda(\alpha)$ for $\alpha$} to be the unique maximal element of $\mathcal{Z}_\lambda(\alpha)$. The example tableau is an $M_\lambda(\alpha)$.
When we are considering tableaux of shape $\lambda$, much of the data used will be in the form of $R_\lambda$-tuples. Many of the notions used will be definitions from Section 2 that are being applied with $R := R_\lambda$. The structure of each proof will depend only upon $R_\lambda$ and not upon how many times a column length is repeated: If $\lambda^\prime, \lambda^{\prime\prime} \in \Lambda_n^+$ are such that $R_{\lambda^\prime} = R_{\lambda^{\prime\prime}}$, then the development for $\lambda^{\prime\prime}$ will in essence be the same as for $\lambda^\prime$. To emphasize the original independent entity $\lambda$ and to reduce clutter, from now on rather than writing `$R$' or `$R_\lambda$' we will replace `$R$' by `$\lambda$' in subscripts and in prefixes. Above we would have written $\omega \in UI_\lambda(n)$ instead of having written $\omega \in UI_{R_\lambda}(n)$ (and instead of having written $\omega \in UI_R(n)$ after setting $R := R_\lambda$). When $\lambda$ is a strict partition, we omit most `$\lambda$-' prefixes and subscripts since $R_\lambda = [n-1]$.
To connect to Lie theory, fix $R \subseteq [n-1]$ and set $J := [n-1] \backslash R$. The $R$-permutations are the one-rowed forms of the ``inverses'' of the minimum length representatives collected in $W^J$ for the cosets in $W /W_J$, where $W$ is the Weyl group of type $A_{n-1}$ and $W_J$ is its parabolic subgroup $\langle s_i: i \in J \rangle$. A partition $\lambda$ is strict exactly when the weight it depicts for $GL(n)$ is strongly dominant. If we take the set $R$ to be $R_\lambda$, then the restriction of the partial order $\leq$ on $\mathcal{T}_\lambda$ to the $\lambda$-keys depicts the Bruhat order on that $W^J$. Further details appear in Sections 2, 3, and the appendix of \cite{PW1}.
\section{Rightmost clump deleting chains, gapless $\mathbf{\emph{R}}$-tuples}
We show that if the domain of the simple-minded global bijection $\pi \mapsto B$ is restricted to $S_n^{R\text{-}312} \subseteq S_n^R$, then a bijection to a certain set of chains results. And while it appears to be difficult to characterize the image $\Psi_R(S_n^R) \subseteq UI_R(n)$ of the $R$-rank map for general $R$, we show that restricting $\Psi_R$ to $S_n^{R\text{-}312}$ produces a bijection to the set $UG_R(n)$ of gapless $R$-tuples.
Given a set of integers, a \emph{clump} of it is a maximal subset of consecutive integers. After decomposing a set into its clumps, we index the clumps in the increasing order of their elements. For example, the set $\{ 2,3,5,6,7,10,13,14 \}$ is the union $L_1 \hspace{.5mm} \cup \hspace{.5mm} L_2 \hspace{.5mm} \cup \hspace{.5mm} L_3 \hspace{.5mm} \cup \hspace{.5mm} L_4$, where $L_1 := \{ 2,3 \}, L_2 := \{ 5,6,7 \},$ $L_3 := \{ 10 \}, L_4 := \{ 13,14 \}$.
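The clump decomposition can be computed by a greedy left-to-right pass. The following Python sketch is illustrative only; the helper name \texttt{clumps} is ours, not notation from this paper.

```python
def clumps(s):
    """Decompose a set of integers into its clumps: maximal runs of
    consecutive integers, indexed in increasing order."""
    out = []
    for x in sorted(s):
        if out and x == out[-1][-1] + 1:
            out[-1].append(x)   # x extends the current clump
        else:
            out.append([x])     # x starts a new clump
    return out
```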
For the first part of this section we temporarily work in the context of the full $R = [n-1]$ case. A chain $B$ is \textit{rightmost clump deleting} if for $h \in [n-1]$ the element deleted from each $B_{h+1}$ to produce $B_h$ is chosen from the rightmost clump of $B_{h+1}$. More formally: It is rightmost clump deleting if for $h \in [n-1]$ one has $B_{h} = B_{h+1} \backslash \{ b \}$ only when $[b, m] \subseteq B_{h+1}$, where $m := max (B_{h+1})$. For $n = 3$ there are five rightmost clump deleting chains, whose sets $B_3 \supset B_2 \supset B_1$ are displayed from the top in three rows:
\begin{figure}[h!]
\begin{center}
\setlength\tabcolsep{.1cm}
\begin{tabular}{ccccc}
1& &2& &\cancel{3}\\
&1& &\cancel{2}& \\
& &\cancel{1}& & \\
\end{tabular}\hspace{10mm}
\begin{tabular}{ccccc}
1& &2& &\cancel{3}\\
&\cancel{1}& &2& \\
& &\cancel{2}& & \\
\end{tabular}\hspace{10mm}
\begin{tabular}{ccccc}
1& &\cancel{2}& &3\\
&1& &\cancel{3}& \\
& &\cancel{1}& & \\
\end{tabular}\hspace{10mm}
\begin{tabular}{ccccc}
\cancel{1}& &2& &3\\
&2& &\cancel{3}& \\
& &\cancel{2}& & \\
\end{tabular}\hspace{10mm}
\begin{tabular}{ccccc}
\cancel{1}& &2& &3\\
&\cancel{2}& &3& \\
& &\cancel{3}& & \\
\end{tabular}
\end{center}
\end{figure}
\noindent To form the corresponding $\pi$, record the deleted elements from bottom to top. Note that the 312-containing permutation $(3;1;2)$ does not occur. Its triangular display of $B_3 \supset B_2 \supset B_1$ deletes the `1' from the ``left'' clump in the second row.
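Under the correspondence $\pi \longleftrightarrow B$ with $B_h = \{\pi_1, ... , \pi_h\}$, the rightmost clump deleting condition can be tested directly from the one-rowed form of $\pi$. The following Python sketch (function names ours) checks the defining condition $[b, \max(B_{h+1})] \subseteq B_{h+1}$ for the deleted element $b = \pi_{h+1}$, and counts the chains by brute force; by Proposition \ref{prop320.1} below the counts are the Catalan numbers.

```python
from itertools import permutations

def is_rightmost_clump_deleting(pi):
    """Via the chain B_h = {pi_1,...,pi_h}: each deleted element
    b = pi_{h+1} must lie in the rightmost clump of B_{h+1}, i.e.
    [b, max(B_{h+1})] must be contained in B_{h+1}."""
    n = len(pi)
    for h in range(1, n):
        B = set(pi[:h + 1])          # this is B_{h+1}
        b = pi[h]                    # the element deleted to form B_h
        if not all(x in B for x in range(b, max(B) + 1)):
            return False
    return True

def count_chains(n):
    return sum(is_rightmost_clump_deleting(p)
               for p in permutations(range(1, n + 1)))
```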
After Part (0) restates the definition of this concept, we present four reformulations of it:
\begin{fact}\label{fact320.1}Let $B$ be a chain. Set $\{ b_{h+1} \} := B_{h+1} \backslash B_h$ for $h \in [n-1]$. Set $m_h := \max (B_h)$ for $h \in [n]$. The following conditions are equivalent to this chain being rightmost clump deleting:
\noindent(0) For $h \in [n-1]$, one has $[b_{h+1}, m_{h+1}] \subseteq B_{h+1}$.
\noindent(i) For $h \in [n-1]$, one has $[b_{h+1}, m_h] \subseteq B_{h+1}$.
\noindent(ii) For $h \in [n-1]$, one has $(b_{h+1}, m_h) \subset B_h$.
\noindent(iii) For $h \in [n-1]$: If $b_{h+1} < m_h$, then $b_{h+1} = \max([m_h] \backslash B_h)$.
\noindent(iii$^\prime$) For $h \in [n-1]$, one has $b_{h+1} = \max([m_{h+1}] \backslash B_h)$. \end{fact}
The following characterization is related to Part (ii) of the preceding fact via the correspondence $\pi \longleftrightarrow B$:
\begin{fact}\label{fact320.2}A permutation $\pi$ is 312-avoiding if and only if for every $h \in [n-1]$ we have \\ $(\pi_{h+1}, \max\{\pi_1, ... , \pi_{h}\}) \subset \{ \pi_1, ... , \pi_{h} \}$. \end{fact}
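This interval condition is easy to test mechanically against the brute-force definition of 312-avoidance. The Python sketch below (function names ours) is illustrative only; the exhaustive cross-check mirrors the statement of the fact.

```python
from itertools import permutations

def avoids_312(pi):
    """Brute force: no indices a < b < c with pi_b < pi_c < pi_a."""
    n = len(pi)
    return not any(pi[b] < pi[c] < pi[a]
                   for a in range(n)
                   for b in range(a + 1, n)
                   for c in range(b + 1, n))

def interval_condition(pi):
    """For every h, the open interval (pi_{h+1}, max{pi_1..pi_h})
    must be contained in {pi_1, ..., pi_h}."""
    for h in range(1, len(pi)):
        prefix = set(pi[:h])
        lo, hi = pi[h], max(prefix)
        if not all(x in prefix for x in range(lo + 1, hi)):
            return False
    return True
```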
Since the following result will be generalized by Proposition \ref{prop320.2}, we do not prove it here. Part (i) is Exercise 2.202 of \cite{Sta}.
\begin{prop}\label{prop320.1}For the full $R = [n-1]$ case we have:
\noindent (i) The restriction of the global bijection $\pi \mapsto B$ from $S_n$ to $S_n^{312}$ is a bijection to the set of rightmost clump deleting chains. Hence there are $C_n$ rightmost clump deleting chains.
\noindent (ii) The restriction of the rank tuple map $\Psi$ from $S_n$ to $S_n^{312}$ is a bijection to $UF(n)$ whose inverse is $\Pi$. \end{prop}
\noindent Here when $R = [n-1]$, the map $\Pi : UF(n) \longrightarrow S_n^{312}$ has a simple description. It was introduced in \cite{PS} for Theorem 14.1. Given an upper flag $\phi$, recursively construct $\Pi(\phi) =: \pi$ as follows: Start with $\pi_1 := \phi_1$. For $i \in [n-1]$, choose $\pi_{i+1}$ to be the maximum element of $[\phi_{i+1}] \backslash \{ \pi_1, ... , \pi_{i} \}$.
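This recursive construction of $\Pi$ is short enough to sketch in Python (illustrative only; the function name and 0-indexed list conventions are ours). Note that applying the rank tuple map to the output recovers the input flag.

```python
def Pi(phi):
    """Given an upper flag phi (weakly increasing with phi_i >= i),
    build the 312-avoiding permutation pi: pi_1 = phi_1, and pi_{i+1}
    is the largest element of [phi_{i+1}] not yet used."""
    pi = [phi[0]]
    for i in range(1, len(phi)):
        pi.append(max(x for x in range(1, phi[i] + 1) if x not in pi))
    return pi
```

For instance, the running maxima of \texttt{Pi([2, 4, 4, 4])} reproduce the flag $(2,4,4,4)$.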
Now fix $R \subseteq [n-1]$. Let $B$ be an $R$-chain. More generally, we say $B$ is \textit{$R$-rightmost clump deleting} if this condition holds for each $h \in [r]$: Let $B_{h+1} =: L_1 \cup L_2 \cup ... \cup L_f$ decompose $B_{h+1}$ into clumps for some $f \geq 1$. We require $L_e \cup L_{e+1} \cup ... \cup L_f \supseteq B_{h+1} \backslash B_{h} \supseteq L_{e+1} \cup ... \cup L_f$ for some $e \in [f]$. This condition requires the set $B_{h+1} \backslash B_h$ of new elements that augment the set $B_h$ of old elements to consist of entirely new clumps $L_{e+1}, L_{e+2}, ... , L_f$, plus some further new elements that combine with some old elements to form the next ``lower'' clump $L_e$ in $B_{h+1}$. When $n = 14$ and $R = \{3, 5, 10 \}$, an example of an $R$-rightmost clump deleting chain is given by $\emptyset \subset \{ 1 \text{-} 2, 6 \} \subset \{ 1\text{-}2, 5\text{-}6, 8 \} \subset \{ 1\text{-}2, 4\text{-}5\text{-}6\text{-}7\text{-}8, 10, 13\text{-}14 \}$ $\subset \{1\text{-}2\text{-}3\text{-}...\text{-}13\text{-}14\}$. Here are some reformulations of the notion of $R$-rightmost clump deleting:
\begin{fact}\label{fact320.3}Let $B$ be an $R$-chain. For $h \in [r]$, set $b_{h+1} := \min (B_{h+1} \backslash B_{h} )$ and $m_h := \max (B_h)$. The following conditions are equivalent to this chain being $R$-rightmost clump deleting:
\noindent (i) For $h \in [r]$, one has $[b_{h+1}, m_{h}] \subseteq B_{h+1}$.
\noindent (ii) For $h \in [r]$, one has $(b_{h+1}, m_{h}) \subset B_{h+1}$.
\noindent (iii) For $h \in [r]$, let $s$ be the number of elements of $B_{h+1} \backslash B_{h}$ that are less than $m_{h}$. These must be the $s$ largest elements of $[m_{h}] \backslash B_{h}$. \end{fact}
\noindent Part (iii) will again be used in \cite{PW3} for projecting and lifting 312-avoidance.
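Condition (i) of the fact above is the easiest to test by machine. The Python sketch below (names ours) takes the nonempty members $B_1 \subset ... \subset B_{r+1}$ of an $R$-chain as a list of sets; the first test reproduces the $n = 14$ example above.

```python
def is_R_rightmost_clump_deleting(chain):
    """chain = [B_1, ..., B_{r+1}] as sets with B_1 proper and
    B_{r+1} = [n].  Tests condition (i) of the fact:
    [min(B_{h+1} \\ B_h), max(B_h)] must lie inside B_{h+1}."""
    for Bh, Bnext in zip(chain, chain[1:]):
        b = min(Bnext - Bh)          # b_{h+1}
        m = max(Bh)                  # m_h
        if not all(x in Bnext for x in range(b, m + 1)):
            return False
    return True
```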
The following characterization is related to Part (ii) of the preceding fact via the correspondence $\pi \longleftrightarrow B$:
\begin{fact}\label{fact320.4}An $R$-permutation $\pi$ is $R$-312-avoiding if and only if for every $h \in [r]$ one has \\ $( \min\{\pi_{q_{h}+1}, ... , \pi_{q_{h+1}} \} , \max \{\pi_1, ... , \pi_{q_{h}}\} ) \subset \{ \pi_1, ... , \pi_{q_{h+1}} \}$. \end{fact}
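The cohort-by-cohort interval condition of this fact can likewise be tested directly. The following Python sketch is illustrative only (names and 0-indexed conventions ours); the tests use the example $R$-permutation $\pi = (1,4,6,7,10; 3,5,8,9; 2, 12; 11)$ from Section 3 and the $R$-312-containing permutation $(4;1,2;3)$ that appears later.

```python
def is_R_312_avoiding(pi, R):
    """pi: an R-permutation as a list; R: the set {q_1, ..., q_r}.
    Tests, for each h, that the open interval from the minimum of
    cohort h+1 to the maximum of the first q_h entries is contained
    in the first q_{h+1} entries."""
    q = sorted(R) + [len(pi)]            # q_1 < ... < q_r, q_{r+1} = n
    for h in range(len(q) - 1):
        lo = min(pi[q[h]:q[h + 1]])      # min of cohort h+1
        hi = max(pi[:q[h]])              # max of first q_h entries
        allowed = set(pi[:q[h + 1]])
        if not all(x in allowed for x in range(lo + 1, hi)):
            return False
    return True
```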
Is it possible to characterize the rank $R$-tuple $\Psi_R(\pi) =: \psi$ of an arbitrary $R$-permutation $\pi$? An \emph{$R$-flag} is an $R$-increasing upper tuple $\varepsilon$ such that $\varepsilon_{q_{h+1} +1 - u} \geq \varepsilon_{q_{h} +1 - u}$ for $h \in [r]$ and $u \in [\min\{ p_{h+1}, p_{h}\}]$. It can be seen that $\psi$ is necessarily an $R$-flag. But the three conditions required so far (upper, $R$-increasing, $R$-flag) are not sufficient: When $n = 4$ and $R = \{ 1, 3 \}$, the $R$-flag $(3;2,4;4)$ cannot arise as the rank $R$-tuple of an $R$-permutation. In contrast to the upper flag characterization in the full case, it might not be possible to develop a simply stated sufficient condition for an $R$-tuple to be the rank $R$-tuple $\Psi_R(\pi)$ of a general $R$-permutation $\pi$. But it can be seen that the rank $R$-tuple $\psi$ of an $R$-312-avoiding permutation $\pi$ is necessarily a gapless $R$-tuple, since a failure of `gapless' for $\psi$ leads to the containment of an $R$-312 pattern. Building upon the observation that $UG(n) = UF(n)$ in the full case, this seems to indicate that the notion of ``gapless $R$-tuple'' is the correct generalization of the notion of ``flag'' from $[n-1]$-tuples to $R$-tuples. (It can be seen directly that a gapless $R$-tuple is necessarily an $R$-flag.)
Two bijections lie at the heart of this work; the second one will again be used in \cite{PW3} to prove Theorem 9.1.
\begin{prop}\label{prop320.2}For general $R \subseteq [n-1]$ we have:
\noindent (i) The restriction of the global bijection $\pi \mapsto B$ from $S_n^R$ to $S_n^{R\text{-}312}$ is a bijection to the set of $R$-rightmost clump deleting chains.
\noindent (ii) The restriction of the rank $R$-tuple map $\Psi_R$ from $S_n^R$ to $S_n^{R\text{-}312}$ is a bijection to $UG_R(n)$ whose inverse is $\Pi_R$. \end{prop}
\begin{proof}Setting $b_{h+1} = \min\{\pi_{q_{h}+1}, ... , \pi_{q_{h+1}} \}$ and $m_{h} = \max \{\pi_1, ... , \pi_{q_{h}}\}$, use Fact \ref{fact320.4}, the $\pi \mapsto B$ bijection, and Fact \ref{fact320.3}(ii) to confirm (i). As noted above, the restriction of $\Psi_R$ to $S_n^{R\text{-}312}$ gives a map to $UG_R(n)$. Let $\gamma \in UG_R(n)$ and construct $\Pi_R(\gamma) =: \pi$. Let $h \in [r]$. Recall that $\max\{ \pi_1, ... , \pi_{q_h} \} = \gamma_{q_h}$. Since $\gamma$ is $R$-increasing it can be seen that the $\pi_i$ are distinct. So $\pi$ is an $R$-permutation. Let $s \geq 0$ be the number of entries of $\{ \pi_{q_{h}+1} , ... , \pi_{q_{h+1}} \}$ that are less than $\gamma_{q_{h}}$. These are the $s$ largest elements of $[\gamma_{q_{h}}] \backslash \{ \pi_1, ... , \pi_{q_{h}} \}$. If in the hypothesis of Fact \ref{fact320.3} we take $B_h := \{\pi_1, ... , \pi_{q_h} \}$, we have $m_h = \gamma_{q_h}$. So the chain $B$ corresponding to $\pi$ satisfies Fact \ref{fact320.3}(iii). Since Fact \ref{fact320.3}(ii) is the same as the characterization of an $R$-312-avoiding permutation in Fact \ref{fact320.4}, we see that $\pi$ is $R$-312-avoiding. It can be seen that $\Psi_R[\Pi_R(\gamma)] = \gamma$, and so $\Psi_R$ is surjective from $S_n^{R\text{-}312}$ to $UG_R(n)$. For the injectivity of $\Psi_R$, now let $\pi$ denote an arbitrary $R$-312-avoiding permutation. Form $\Psi_R(\pi)$, which is a gapless $R$-tuple. Using Facts \ref{fact320.4} and \ref{fact320.3}, it can be seen that $\Pi_R[\Psi_R(\pi)] = \pi$. Hence $\Psi_R$ is injective. \end{proof}
\section{Row end max tableaux, gapless (312-avoiding) keys}
We study the $\lambda$-row end max tableaux of gapless $\lambda$-tuples. We also form the $\lambda$-keys of the $R$-312-avoiding permutations and introduce ``gapless'' $\lambda$-keys. We show that these three sets of tableaux coincide.
Let $\alpha \in UI_\lambda(n)$. The values of the $\lambda$-row end max tableau $M_{\lambda}(\alpha) =: M$ can be determined as follows: For $h \in [r]$ and $j \in (\lambda_{q_{h+1}}, \lambda_{q_h}]$, first set $M_j(i) = \alpha_i$ for $i \in (q_{h-1}, q_h]$. When $h > 1$, from east to west among columns and south to north within a column, also set $M_j(i) := \min\{ M_j(i+1)-1, M_{j+1}(i) \}$ for $i \in (0, q_{h-1}]$. Finally, set $M_j(i) := i$ for $j \in (0, \lambda_n]$ and $i \in (0,n]$. (When $\zeta_j = \zeta_{j+1}$, this process yields $M_j = M_{j+1}$.) The example tableau in Section 3 is $M_\lambda(\alpha)$ for $\alpha = (1,4,6,7,10;7,8,9,10;10,12; 12)$. There we have $s = 4$ and $s = 1$ respectively for $h =1$ and $h=2$:
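The east-to-west, south-to-north filling procedure just described can be sketched in Python. This is illustrative only: the function name is ours, the partition \texttt{lam} is given as a list of the $n$ row lengths (zeros allowed), \texttt{alpha} is given 0-indexed, and the result maps each column index $j$ to its list of values read north to south. The tests reproduce the columns of the example tableau of Section 3.

```python
def row_end_max_tableau(lam, alpha):
    """Sketch of the construction of M_lambda(alpha)."""
    n = len(lam)
    ncols = lam[0]
    zeta = {j: sum(1 for li in lam if li >= j) for j in range(1, ncols + 1)}
    M = {}
    for j in range(ncols, 0, -1):                 # fill east to west
        zj = zeta[j]                              # this column's length q_h
        if zj == n:                               # full columns are inert
            M[j] = list(range(1, n + 1))
            continue
        col = [0] * zj
        top = sum(1 for li in lam if li > lam[zj - 1])   # = q_{h-1}
        for i in range(top, zj):                  # bottom cohort: row ends
            col[i] = alpha[i]
        for i in range(top - 1, -1, -1):          # upper rows, south to north
            col[i] = min(col[i + 1] - 1, M[j + 1][i])
        M[j] = col
    return M
```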
\begin{lem}\label{lemma340.1}Let $\gamma$ be a gapless $\lambda$-tuple. The $\lambda$-row end max tableau $M_{\lambda}(\gamma) =: M$ is a key. For $h \in [r]$ and $j := \lambda_{q_{h+1}}$, the $s \geq 0$ elements in $B(M_{j}) \backslash B(M_{j+1})$ that are less than $M_{j+1}(q_{h}) = \gamma_{q_{h}}$ are the $s$ largest elements of $[\gamma_{q_{h}}] \backslash B(M_{j+1})$. \end{lem}
\begin{proof} Let $h \in [r]$ and set $j := \lambda_{q_{h+1}}$. We claim $B(M_{j+1}) \subseteq B(M_j)$. If $M_j(q_{h} + 1) = \gamma_{q_{h}+1} > \gamma_{q_h} = M_{j+1}(q_{h})$, then $M_j(i) = M_{j+1}(i)$ for $i \in (0, q_{h}]$ and the claim holds. Otherwise $\gamma_{q_{h} + 1} \leq \gamma_{q_{h}}$. The gapless condition on $\gamma$ implies that if we start at $(j, q_{h}+1)$ and move south, the successive values in $M_j$ increment by 1 until some lower box has the value $\gamma_{q_{h}}$. Let $i \in (q_{h}, q_{h+1}]$ be the index such that $M_j(i) = \gamma_{q_{h}}$. Now moving north from $(j,i)$, the values in $M_j$ decrement by 1 either all of the way to the top of the column, or until there is a row index $k \in (0, q_{h})$ such that $M_{j+1}(k) < M_j(k+1)-1$. In the former case set $k := 0$. In the example we have $k=1$ and $k=0$ respectively for $h=1$ and $h=2$. If $k > 0$ we have $M_j(x) = M_{j+1}(x)$ for $x \in (0,k]$. Now use $M_j(k+1) \leq M_{j+1}( k+1)$ to see that the values $M_{j + 1}(k+1), M_{j+1}( k+2), ... , M_{j+1}( q_{h})$ each appear in the interval of values $[ M_j(k+1), M_j(i) ]$. Thus $B(M_{j+1}) \subseteq B(M_j)$. Using the parenthetical remark made before the lemma's statement, we see that $M$ is a key. There are $q_{h+1} - i$ elements in $B(M_j) \backslash B(M_{j+1})$ that are larger than $M_{j+1}( q_{h}) = \gamma_{q_{h}}$. So $s := (q_{h+1} - q_{h}) - (q_{h+1} - i) \geq 0$ is the number of values in $B(M_j) \backslash B(M_{j+1})$ that are less than $\gamma_{q_{h}}$. These $s$ values are the complement in $[ M_j(k+1), M_j(i) ]$ of the set $\{ \hspace{1mm} M_{j+1}(x) : x \in [k+1, q_{h}] \hspace{1mm} \}$, where $M_j(i) = M_{j+1}(q_{h}) = \gamma_{q_{h}}$. \end{proof}
We now introduce a tableau analog to the notion of ``$R$-rightmost clump deleting chain''. A $\lambda$-key $Y$ is \textit{gapless} if the condition below is satisfied for $h \in [r-1]$: Let $b$ be the smallest value in a column of length $q_{h+1}$ that does not appear in a column of length $q_{h}$. For $j \in (\lambda_{q_{h +2}}, \lambda_{q_{h+1}}]$, let $i \in (0, q_{h+1}]$ be the shared row index for the occurrences of $b = Y_j(i)$. Let $m$ be the bottom (largest) value in the columns of length $q_{h}$. If $b > m$ there are no requirements. Otherwise: For $j \in (\lambda_{q_{h +2}}, \lambda_{q_{h+1}}]$, let $k \in (i, q_{h+1}]$ be the shared row index for the occurrences of $m = Y_j(k)$. For $j \in (\lambda_{q_{h + 2}}, \lambda_{q_{h+1}}]$ one must have $Y_j(i+1) = b+1, Y_j(i+2) = b+2, ... , Y_j(k-1) = m-1$ holding between $Y_j(i) = b$ and $Y_j(k) = m$. (Hence necessarily $m - b = k - i$.) The tableau shown above is a gapless $\lambda$-key.
Given a partition $\lambda$ with $R_\lambda =: R$, our next result considers three sets of $R$-tuples and three sets of tableaux of shape $\lambda$:
\noindent (a) The set $\mathcal{A}_R$ of $R$-312-avoiding permutations and the set $\mathcal{P}_\lambda$ of their $\lambda$-keys.
\noindent (b) The set $\mathcal{B}_R$ of $R$-rightmost clump deleting chains and the set $\mathcal{Q}_\lambda$ of gapless $\lambda$-keys.
\noindent (c) The set $\mathcal{C}_R$ of gapless $R$-tuples and the set $\mathcal{R}_\lambda$ of their $\lambda$-row end max tableaux.
\newpage
\begin{thm}\label{theorem340}Let $\lambda$ be a partition and set $R := R_\lambda$.
\noindent (i) The three sets of tableaux coincide: $\mathcal{P}_\lambda = \mathcal{Q}_\lambda = \mathcal{R}_\lambda$.
\noindent (ii) An $R$-permutation is $R$-312-avoiding if and only if its $\lambda$-key is gapless.
\noindent (iii) If an $R$-permutation is $R$-312-avoiding, then the $\lambda$-row end max tableau of its rank $R$-tuple is its $\lambda$-key.
\noindent The restriction of the global bijection $B \mapsto Y$ from all $R$-chains to $R$-rightmost clump deleting chains is a bijection from $\mathcal{B}_R$ to $\mathcal{Q}_\lambda$. The process of constructing the $\lambda$-row end max tableau is a bijection from $\mathcal{C}_R$ to $\mathcal{R}_\lambda$. The bijection $\pi \mapsto B$ from $\mathcal{A}_R$ to $\mathcal{B}_R$ induces a map from $\mathcal{P}_\lambda$ to $\mathcal{Q}_\lambda$ that is the identity. The bijection $\Psi_R$ from $\mathcal{A}_R$ to $\mathcal{C}_R$ induces a map from $\mathcal{P}_\lambda$ to $\mathcal{R}_\lambda$ that is the identity.\end{thm}
\noindent Part (iii) will again be used in \cite{PW3} to prove Theorem 9.1; there it will also be needed for the discussion in Section 12. In the full case when $\lambda$ is strict and $R = [n-1]$, the converse of Part (iii) holds: If the row end max tableau of the rank tuple of a permutation is the key of the permutation, then the permutation is 312-avoiding. For a counterexample to this converse for general $\lambda$, choose $n = 4, \lambda = (2,1,1,0)$, and $\pi = (4;1,2;3)$. Then $Y_\lambda(\pi) = M_\lambda(\psi)$ with $\pi \notin S_n^{R\text{-}312}$. The bijection from $\mathcal{C}_R$ to $\mathcal{R}_\lambda$ and the equality $\mathcal{Q}_\lambda = \mathcal{R}_\lambda$ imply that an $R$-tuple is $R$-gapless if and only if it arises as the $\lambda$-row end list of a gapless $\lambda$-key.
\begin{proof}
\noindent For the first of the four map statements, use the $B \mapsto Y$ bijection to relate Fact \ref{fact320.3}(i) to the definition of gapless $\lambda$-key. The map in the second map statement is surjective by definition and is also obviously injective. Use the construction of the bijection $\pi \mapsto B$ and the first map statement to confirm the equality $\mathcal{P}_\lambda = \mathcal{Q}_\lambda$ and the third map statement. Part (ii) follows.
To prove Part (iii), let $\pi \in S_n^{R\text{-}312}$. Create the $R$-chain $B$ corresponding to $\pi$ and then its $\lambda$-key $Y := Y_\lambda(\pi)$. Set $\gamma := \Psi_R(\pi)$ and then $M := M_\lambda(\gamma)$. Clearly $B(Y_{\lambda_v}) = B_1 = \{ \gamma_1, ... , \gamma_v \} = B(M_{\lambda_v})$ for $v := q_1$. Proceed by induction on $h \in [r]$: For $v := q_h$ assume $B(Y_{\lambda_v}) = B(M_{\lambda_v})$. So $\max[B(Y_{\lambda_v})] = Y_{\lambda_v}(v) = M_{\lambda_v}(v) = \gamma_v$. Rename the example $\alpha$ before Lemma \ref{lemma340.1} as $\gamma$. Viewing that tableau as $M_\lambda( \gamma ) =: M$, for $h=2$ we have $M_5(9) = \gamma_9 = 10$. Set $v^\prime := q_{h+1}$. Let $s_Y$ be the number of values in $B(Y_{\lambda_{v^\prime}}) \backslash B(Y_{\lambda_v})$ that are less than $\gamma_v$. Viewing the example tableau as $Y$, for $h=2$ we have $s_Y = 1$. Since $\gamma_v \in B(Y_{\lambda_v})$, the number of values in $B(Y_{\lambda_{v^\prime}}) \backslash B(Y_{\lambda_v})$ that exceed $\gamma_v$ is $p_{h+1} - s_Y$. These values are the entries in $\{ \pi_{v+1} , ... , \pi_{v^\prime} \}$ that exceed $\gamma_v$. So from $\gamma := \Psi_R(\pi)$ and the description of $M_\lambda(\gamma)$ it can be seen that these values are exactly the values in $B(M_{\lambda_{v^\prime}}) \backslash B(M_{\lambda_v})$ that exceed $\gamma_v$. Let $s_M$ be the number of values in $B(M_{\lambda_{v^\prime}}) \backslash B(M_{\lambda_v})$ that are less than $\gamma_v$. Since $M$ is a key by Lemma \ref{lemma340.1} and $\gamma_v \in B(M_{\lambda_v})$, we have $s_M = p_{h+1} - (p_{h+1}-s_Y) = s_Y =: s$. From Proposition \ref{prop320.2}(i) we know that $B$ is $R$-rightmost clump deleting. By Fact \ref{fact320.3}(iii) applied to $B$ and Lemma \ref{lemma340.1} applied to $\gamma$, we see that for both $Y$ and for $M$ the ``new'' values that are less than $\gamma_v$ are the $s$ largest elements of $[\gamma_v] \backslash B(Y_{\lambda_v}) = [\gamma_v] \backslash B(M_{\lambda_v})$. 
Hence $Y_{\lambda_{v^\prime}} = M_{\lambda_{v^\prime}}$. Since we only need to consider the rightmost columns of each length when showing that two $\lambda$-keys are equal, we have $Y = M$. The equality $\mathcal{P}_\lambda = \mathcal{R}_\lambda$ and the final map statement follow. \end{proof}
\begin{cor}When $\lambda$ is strict, there are $C_n$ gapless $\lambda$-keys. \end{cor}
\section{Sufficient condition for Demazure convexity}
Fix a $\lambda$-permutation $\pi$. We define the set $\mathcal{D}_\lambda(\pi)$ of Demazure tableaux. We show that if $\pi$ is $\lambda$-312-avoiding, then the tableau set $\mathcal{D}_\lambda(\pi)$ is the principal ideal $[Y_\lambda(\pi)]$.
First we need to specify how to form the \emph{scanning tableau} $S(T)$ for a given $T \in \mathcal{T}_\lambda$. See page 394 of \cite{Wi2} for an example of this method. Given a sequence $x_1, x_2, ...$, its \emph{earliest weakly increasing subsequence (EWIS)} is $x_{i_1}, x_{i_2}, ...$, where $i_1 = 1$ and for $u > 1$ the index $i_u$ is the smallest index exceeding $i_{u-1}$ that satisfies $x_{i_u} \geq x_{i_{u-1}}$. Let $T \in \mathcal{T}_\lambda$. Draw the shape $\lambda$ and fill its boxes as follows to produce $S(T)$: Form the sequence of the bottom values of the columns of $T$ from left to right. Find the EWIS of this sequence, and mark each box that contributes its value to this EWIS. The sequence of locations of the marked boxes for a given EWIS is its \emph{scanning path}. Place the final value of this EWIS in the lowest available location in the leftmost available column of $S(T)$. This procedure can be repeated as if the marked boxes are no longer part of $T$, since it can be seen that the unmarked locations form the shape of some partition. Ignoring the marked boxes, repeat this procedure to fill in the next-lower value of $S(T)$ in its first column. Once all of the scanning paths originating in the first column have been found, every location in $T$ has been marked and the first column of $S(T)$ has been created. For $j> 1$, to fill in the $j^{th}$ column of $S(T)$: Ignore the leftmost $(j-1)$ columns of $T$, remove all of the earlier marks from the other columns, and repeat the above procedure. The scanning path originating at a location $(l,k) \in \lambda$ is denoted $\mathcal{P}(T;l,k)$. It was shown in \cite{Wi2} that $S(T)$ is the ``right key'' of Lascoux and Sch\"{u}tzenberger for $T$, which was denoted $R(T)$ there.
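The scanning method can be sketched in Python as follows. This is illustrative only (names ours): a tableau is a dict mapping each column index $j = 1, ..., \lambda_1$ to its list of values read north to south, and marked boxes are simply removed from a working copy. The second test checks the fixed-point property $S(Y) = Y$ on the example key of Section 3.

```python
def ewis(seq):
    """0-indexed positions of the earliest weakly increasing
    subsequence: greedily keep every entry that is >= the entry
    most recently kept."""
    idx = [0]
    for i in range(1, len(seq)):
        if seq[i] >= seq[idx[-1]]:
            idx.append(i)
    return idx

def scanning_tableau(T):
    """Sketch of S(T).  Column j of S(T) is filled bottom to top by
    the final values of the scanning paths originating in column j."""
    ncols = max(T)
    S = {}
    for j in range(1, ncols + 1):
        cols = {l: list(T[l]) for l in range(j, ncols + 1)}  # working copy
        found = []
        for _ in range(len(T[j])):          # one path per box in column j
            last = None
            for l in range(j, ncols + 1):   # greedy EWIS over bottom values
                if cols[l] and (last is None or cols[l][-1] >= last):
                    last = cols[l].pop()    # mark (remove) this box
            found.append(last)              # final value of this path
        S[j] = found[::-1]                  # paths fill column j bottom-up
    return S
```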
As in \cite{PW1}, we now use the $\lambda$-key $Y_\lambda(\pi)$ of $\pi$ to define the set of \emph{Demazure tableaux}: $\mathcal{D}_\lambda(\pi) :=$ \\ $\{ T \in \mathcal{T}_\lambda : S(T) \leq Y_\lambda(\pi) \}$. We list some basic facts concerning keys, scanning tableaux, and sets of Demazure tableaux. Since it has long been known that $R(T)$ is a key for any $T \in \mathcal{T}_\lambda$, having $S(T) = R(T)$ gives Part (i). Part (ii) is easy to deduce from the specification of the scanning method. The remaining parts follow in succession from Part (ii) and the bijection $\pi \mapsto Y$.
\begin{fact}\label{fact420}Let $T \in \mathcal{T}_\lambda$ and let $Y \in \mathcal{T}_\lambda$ be a key.
\noindent (i) $S(T)$ is a key and hence $S(T) \in \mathcal{T}_\lambda$.
\noindent (ii) $T \leq S(T)$ and $S(Y) = Y$.
\noindent (iii) $Y_\lambda(\pi) \in \mathcal{D}_\lambda(\pi)$ and $\mathcal{D}_\lambda(\pi) \subseteq [Y_\lambda(\pi)]$.
\noindent (iv) The unique maximal element of $\mathcal{D}_\lambda(\pi)$ is $Y_\lambda(\pi)$.
\noindent (v) The Demazure sets $\mathcal{D}_\lambda(\sigma)$ of tableaux are nonempty subsets of $\mathcal{T}_\lambda$ that are precisely indexed by the $\sigma \in S_n^\lambda$. \end{fact}
For $U \in \mathcal{T}_\lambda$, define $m(U)$ to be the maximum value in $U$. (Define $m(U) := 1$ if $U$ is the null tableau.) Let $T \in \mathcal{T}_\lambda$. Let $(l,k) \in \lambda$. As in Section 4 of \cite{PW1}, define $U^{(l,k)}$ to be the tableau formed from $T$ by finding and removing the scanning paths that begin at $(l,\zeta_l)$ through $(l, k+1)$, and then removing the $1^{st}$ through $l^{th}$ columns of $T$. (If $l = \lambda_1$, then $U^{(l,k)}$ is the null tableau for any $k \in [\zeta_{\lambda_1}]$.) Set $S := S(T)$. Lemma 4.1 of \cite{PW1} states that $S_l(k) = \text{max} \{ T_l(k), m(U^{(l,k)}) \}$.
To reduce clutter in the proofs we write $Y_\lambda(\pi) =: Y$ and $S(T) =: S$.
\begin{prop}\label{prop420.1}Let $\pi \in S^\lambda_n$ and $T \in \mathcal{T}_\lambda$ be such that $T \leq Y_\lambda(\pi)$. If there exists $(l,k) \in \lambda$ such that $Y_l(k) < m(U^{(l,k)})$, then $\pi$ is $\lambda$-312-containing. \end{prop}
\begin{proof}Reading the columns from right to left and then each column from bottom to top, let $(l,k)$ be the first location in $\lambda$ such that $m(U^{(l,k)}) > Y_l(k)$. In the rightmost column we have $m(U^{(\lambda_1,i)}) = 1$ for all $i \in [\zeta_{\lambda_1}]$. Thus $m(U^{(\lambda_1,i)}) \leq Y_{\lambda_1}(i)$ for all $i \in [\zeta_{\lambda_1}]$. So we must have $l \in [1, \lambda_1)$. There exists $j > l$ and $i \leq k$ such that $m(U^{(l,k)}) = T_j(i)$. Since $T \leq Y$, so far we have $Y_l(k) < T_j(i) \leq Y_j(i)$. Note that since $Y$ is a key we have $k < \zeta_l$. Then for $k < f \leq \zeta_l$ we have $m(U^{(l,f)}) \leq Y_l(f)$. So $T \leq Y$ implies that $S_l(f) \leq Y_l(f)$ for $k < f \leq \zeta_l$.
Assume for the sake of contradiction that $\pi$ is $\lambda$-312-avoiding. Theorem \ref{theorem340}(ii) says that its $\lambda$-key $Y$ is gapless. If the value $Y_l(k)$ does not appear in $Y_j$, then the columns that contain $Y_l(k)$ must also contain $[Y_l(k), Y_j(i)]$: Otherwise, the rightmost column that contains $Y_l(k)$ has index $\lambda_{q_{h+1}}$ for some $h \in [r-1]$ and there exists some $u \in [Y_l(k), Y_j(i)]$ such that $u \notin Y_{\lambda_{q_{h+1}}}$. Then $Y$ would not satisfy the definition of gapless $\lambda$-key, since for this $h+1$ in that definition one has $b \leq u$ and $u \leq m$. If the value $Y_l(k)$ does appear in $Y_j$, it appears to the north of $Y_j(i)$ there. Then $i \leq k$ implies that some value $Y_l(f) < Y_j(i)$ with $f < k$ does not appear in $Y_j$. As above, the columns that contain the value $Y_l(f) < Y_l(k)$ must also contain $[Y_l(f), Y_j(i)]$. In either case $Y_l$ must contain $[Y_l(k), Y_j(i)]$. This includes $T_j(i)$.
Now let $f > k$ be such that $Y_l(f) = T_j(i)$. Then we have $S_l(f) > S_l(k) = \text{max} \{T_l(k),m(U^{(l,k)}) \}$ $\geq T_j(i) = Y_l(f)$. This is our desired contradiction. \end{proof}
As in Section 5 of \cite{PW1}: When $m(U^{(l,k)}) > Y_l(k)$, define the set $A_\lambda(T,\pi;l,k) := \emptyset$. Otherwise, define $A_\lambda(T,\pi;l,k) := [ k , \text{min} \{ Y_l(k), T_l(k+1) -1, T_{l+1}(k) \} ] $. (Refer to the fictitious bounding values $T_l(\zeta_l + 1) := n+1$ and $T_{l+1}(k) := n$ when the location $(l+1,k)$ lies outside of $\lambda$.)
\begin{thm}\label{theorem420}Let $\lambda$ be a partition and $\pi$ be a $\lambda$-permutation. If $\pi$ is $\lambda$-312-avoiding, then $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. \end{thm}
\begin{proof}The easy containment $\mathcal{D}_\lambda(\pi) \subseteq [Y_\lambda(\pi)]$ is Fact \ref{fact420}(iii). Conversely, let $T \leq Y$ and $(l,k) \in \lambda$. The contrapositive of Proposition \ref{prop420.1} gives $A_\lambda(T,\pi;l,k) = [ k , \text{min} \{ Y_l(k), T_l(k+1) - 1, T_{l+1}(k) \} ]$. Since $T \leq Y$, we see that $T_l(k) \in A_\lambda(T,\pi;l,k)$ for all $(l,k) \in \lambda$. Theorem 5.1 of \cite{PW1} says that $T \in \mathcal{D}_\lambda(\pi)$. \end{proof}
\noindent This result is used in \cite{PW3} to prove Theorem 9.1(ii).
\section{Necessary condition for Demazure convexity}
Continue to fix a $\lambda$-permutation $\pi$. We show that $\pi$ must be $\lambda$-312-avoiding for the set of Demazure tableaux $\mathcal{D}_\lambda(\pi)$ to be a convex polytope in $\mathbb{Z}^{|\lambda|}$. We do so by showing that if $\pi$ is $\lambda$-312-containing, then $\mathcal{D}_\lambda(\pi)$ does not contain a particular semistandard tableau that lies on the line segment defined by two particular keys that are in $\mathcal{D}_\lambda(\pi)$.
\begin{thm}\label{theorem520}Let $\lambda$ be a partition and let $\pi$ be a $\lambda$-permutation. If $\mathcal{D}_\lambda(\pi)$ is convex in $\mathbb{Z}^{|\lambda|}$, then $\pi$ is $\lambda$-312-avoiding. \end{thm}
\noindent This result is used in \cite{PW3} to prove Theorem 9.1(iii) and Theorem 10.3.
\begin{proof}For the contrapositive, assume that $\pi$ is $\lambda$-312-containing. Here $r := |R_\lambda| \geq 2$. There exists $1 \leq g < h \leq r$ and some $a \leq q_g < b \leq q_h < c$ such that $\pi_b < \pi_c < \pi_a$. Among such patterns, we specify one that is optimal for our purposes. Figure 7.1 charts the following choices for $\pi = (4,8;9;2,3;1,5;6,7)$ in the first quadrant. Choose $h$ to be minimal. So $b \in (q_{h-1}, q_h]$. Then choose $b$ so that $\pi_b$ is maximal. Then choose $a$ so that $\pi_a$ is minimal. Then choose $g$ to be minimal. So $a \in (q_{g-1} , q_g]$. Then choose any $c$ so that $\pi_c$ completes the $\lambda$-312-containing condition.
These choices have led to the following two prohibitions; see the rectangular regions in Figure 7.1:
\noindent (i) By the minimality of $h$ and the maximality of $\pi_b$, there does not exist $e \in (q_g, q_h]$ such that $\pi_b < \pi_e < \pi_c$.
\noindent (ii) By the minimality of $\pi_a$, there does not exist $e \in [q_{h-1}]$ such that $\pi_c < \pi_e < \pi_a$.
\noindent If there exists $e \in [q_g]$ such that $\pi_b < \pi_e < \pi_c$, choose $d \in [q_g]$ such that $\pi_d$ is maximal with respect to this condition; otherwise set $d = b$. So $\pi_b \leq \pi_d$ with $d \leq b$. We have also ruled out:
\noindent (iii) By the maximality of $\pi_d$, there does not exist $e \in [q_g]$ such that $\pi_d < \pi_e < \pi_c$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1]{Figure11point1}\label{fig71}
\caption*{Figure 7.1. Prohibited regions (i), (ii), and (iii) for $\pi = (4,8;9;2,3;1,5;6,7)$.}
\end{center}
\end{figure}
Set $Y := Y_\lambda(\pi)$. Now let $\chi$ be the permutation resulting from swapping the entry $\pi_b$ with the entry $\pi_d$ in $\pi$; so $\chi_b := \pi_d, \chi_d := \pi_b$, and $\chi_e := \pi_e$ when $e \notin \{ b, d \}$. (If $d = b$, then $\chi = \pi$ with $\chi_b = \pi_b = \chi_d = \pi_d$.) Let $\bar{\chi}$ be the $\lambda$-permutation produced from $\chi$ by sorting each cohort into increasing order. Set $X := Y_\lambda(\bar{\chi})$. Let $j$ denote the column index of the rightmost column with length $q_h$; so the value $\chi_b = \pi_d$ appears precisely in the $1^{st}$ through $j^{th}$ columns of $X$. Let $f \leq h$ be such that $d \in (q_{f-1}, q_f]$, and let $k \geq j$ denote the column index of the rightmost column with length $q_f$. The swap producing $\chi$ from $\pi$ replaces $\pi_d = \chi_b$ in the $(j+1)^{st}$ through $k^{th}$ columns of $Y$ with $\chi_d = \pi_b$ to produce $X$. (The values in these columns may need to be re-sorted to meet the semistandard criteria.) So $\chi_d \leq \pi_d$ implies $X \leq Y$ via a column-wise argument.
Forming the union of the prohibited rectangles for (i), (ii), and (iii), we see that there does not exist $e \in [q_{h-1}]$ such that $\pi_d = \chi_b < \pi_e < \pi_a$. Thus we obtain:
\noindent (iv) For $l > j$, the $l^{th}$ column of $X$ does not contain any values from $[\chi_b, \pi_a)$.
\noindent Let $(j,i)$ denote the location of the $\chi_b$ in the $j^{th}$ column of $X$ (and hence $Y$). So $Y_j(i) = \pi_d$. By (iv) and the semistandard conditions, we have $X_{j+1}(u) = \pi_a$ for some $u \leq i$. By (i) and (iii) we can see that $X_j(i+1) > \pi_c$. Let $m$ denote the column index of the rightmost column of $\lambda$ with length $q_g$. This is the rightmost column of $X$ that contains $\pi_a$. Let $\mu \subseteq \lambda$ be the set of locations of the $\pi_a$'s in the $(j+1)^{st}$ through $m^{th}$ columns of $X$; note that $(j+1, u) \in \mu$. Let $\omega$ be the permutation obtained by swapping $\chi_a = \pi_a$ with $\chi_b = \pi_d$ in $\chi$; so $\omega_a := \chi_b = \pi_d$, $\omega_b := \chi_a = \pi_a$, $\omega_d := \chi_d = \pi_b$, and $\omega_e := \pi_e$ when $e \notin \{ d, a, b \}$. Let $\bar{\omega}$ be the $\lambda$-permutation produced from $\omega$ by sorting each cohort into increasing order. Set $W := Y_\lambda(\bar{\omega})$. By (iv), obtaining $\omega$ from $\chi$ is equivalent to replacing the $\pi_a$ at each location of $\mu$ in $X$ with $\chi_b$ (and leaving the rest of $X$ unchanged) to obtain $W$. So $\chi_b < \pi_a$ implies $W < X$.
Let $T$ be the result of replacing the $\pi_a$ at each location of $\mu$ in $X$ with $\pi_c$ (and leaving the rest unchanged). So $T < X \leq Y$. See the conceptual Figure 7.2 for $X$ and $T$; the shaded boxes form $\mu$. In particular $T_{j+1}(u) = \pi_c$. This $T$ is not necessarily a key; we need to confirm that it is semistandard. For every $(q,p) \notin \mu$ we have $W_q(p) = T_q(p) = X_q(p)$. By (iv), there are no values in $X$ in any column to the right of the $j^{th}$ column from $[\pi_c, \pi_a)$. The region $\mu$ is contained in these columns. Hence we only need to check semistandardness when moving from the $j^{th}$ column to $\mu$ in the $(j+1)^{st}$ column. Here $u \leq i$ implies $T_j(u) \leq T_j(i) = \pi_d < \pi_c = T_{j+1}(u)$. So $T \in \mathcal{T}_\lambda$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1]{Figure11point2Phase3}
\caption*{Figure 7.2. Values of $X$ (respectively $T$) are in upper left (lower right) corners.}
\end{center}
\end{figure}
Now we consider the scanning tableau $S(T) =: S$ of $T$: Since $(j, i+1) \notin \mu$, we have $T_j(i+1) = X_j(i+1)$. Since $X_j(i+1) > \pi_c = T_{j+1}(u)$, the location $(j+1,u)$ is not in a scanning path $\mathcal{P}(T;j,i^\prime)$ for any $i^\prime > i$. Since $T_j(i) = \chi_b = \pi_d < \pi_c$, the location $(j+1,v)$ is in $\mathcal{P}(T;j,i)$ for some $v \in [u,i]$. By the semistandard column condition one has $T_{j+1}(v) \geq T_{j+1}(u) = \pi_c$. Thus $S_j(i) \geq \pi_c > \chi_b = \pi_d = Y_j(i)$. Hence $S(T) \nleq Y$, and so $T \notin \mathcal{D}_\lambda(\pi)$. Since $T \in [Y]$, we have $\mathcal{D}_\lambda(\pi) \neq [Y]$.
In $\mathbb{R}^{|\lambda|}$, consider the line segment $U(t) = W + t(X-W)$, where $0 \leq t \leq 1$. Here $U(0) = W$ and $U(1) = X$. The value of $t$ only affects the values at the locations in $\mu$. Let $x := \frac{\pi_c - \chi_b}{\pi_a - \chi_b}$. Since $\chi_b < \pi_c < \pi_a$, we have $0 < x < 1$. The values in $\mu$ in $U(x)$ are $\chi_b + \frac{\pi_c - \chi_b}{\pi_a-\chi_b}(\pi_a-\chi_b) = \pi_c$. Hence $U(x) = T$. Since $X$ and $W$ are keys, by Fact \ref{fact420}(ii) we have $S(X) = X$ and $S(W) = W$. Then $W < X \leq Y$ implies $W \in \mathcal{D}_\lambda(\pi)$ and $X \in \mathcal{D}_\lambda(\pi)$. Thus $U(0), U(1) \in \mathcal{D}_\lambda(\pi)$ but $U(x) \notin \mathcal{D}_\lambda(\pi)$. If a set $\mathcal{E}$ is a convex polytope in $\mathbb{Z}^N$ and $U(t)$ is a line segment with $U(0), U(1) \in \mathcal{E}$, then $U(t) \in \mathcal{E}$ for any $0 < t < 1$ such that $U(t) \in \mathbb{Z}^N$. Since $0 < x < 1$ and $U(x) = T \in \mathbb{Z}^{|\lambda|}$ with $U(x) \notin \mathcal{D}_\lambda(\pi)$, we see that $\mathcal{D}_\lambda(\pi)$ is not a convex polytope in $\mathbb{Z}^{|\lambda|}$. \end{proof}
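The convex-combination step at the end of the proof can be illustrated numerically; the values below are hypothetical (not taken from Figure 7.1), subject only to $\chi_b < \pi_c < \pi_a$.

```python
# Numerical illustration (hypothetical values, not taken from Figure 7.1)
# of the interpolation step U(x) = T: at every location of mu, W holds
# chi_b and X holds pi_a, and U(x) lands exactly on the integer pi_c.
chi_b, pi_c, pi_a = 2, 5, 8            # any values with chi_b < pi_c < pi_a

x = (pi_c - chi_b) / (pi_a - chi_b)    # the parameter on the segment
assert 0 < x < 1

value_on_mu = chi_b + x * (pi_a - chi_b)   # U(t) = W + t*(X - W) on mu
assert value_on_mu == pi_c
```

Since $W$ and $X$ agree off $\mu$, this is the only coordinate at which the segment $U(t)$ actually moves.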
When one first encounters the notion of a Demazure polynomial, given Facts \ref{fact420}(iii)(iv) it is natural to ask when $\mathcal{D}_\lambda(\pi)$ is simply all of the ideal $[Y_\lambda(\pi)]$. Since principal ideals in $\mathcal{T}_\lambda$ are convex polytopes in $\mathbb{Z}^{|\lambda|}$, we can answer this question while combining Theorems \ref{theorem420} and \ref{theorem520}:
\begin{cor}\label{cor520}Let $\pi \in S_n^\lambda$. The set $\mathcal{D}_\lambda(\pi)$ of Demazure tableaux of shape $\lambda$ is a convex polytope in $\mathbb{Z}^{|\lambda|}$ if and only if $\pi$ is $\lambda$-312-avoiding if and only if $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. \end{cor}
\noindent When $\lambda$ is the strict partition $(n,n-1, ..., 2,1)$, this convexity result appeared as Theorem 3.9.1 in \cite{Wi1}.
\section{Potential applications of convexity}
In addition to providing the core content needed to prove the main results of \cite{PW3}, our convexity results might later be useful in some geometric or representation theory contexts. Our re-indexing of the $R$-312-avoiding phenomenon with gapless $R$-tuples could also be useful. Fix $R \subseteq [n-1]$; inside $G := GL_n(\mathbb{C})$ this determines a parabolic subgroup $P := P_R$. If $R = [n-1]$ then $P$ is the Borel subgroup $B$ of $G$. Fix $\pi \in S_n^R$; this specifies a Schubert variety $X(\pi)$ of the flag manifold $G \slash P$.
Pattern avoidance properties for $\pi$ have been related to geometric properties for $X(\pi)$: If $\pi \in S_n$ is 3412-avoiding and 4231-avoiding, then the variety $X(\pi) \subseteq G \slash B$ is smooth by Theorem 13.2.2.1 of \cite{LR}. Since a 312-avoiding $\pi$ satisfies these conditions, its variety $X(\pi)$ is smooth. Postnikov and Stanley \cite{PS} noted that Lakshmibai called these the ``Kempf'' varieties. It could be interesting to extend the direct definition of the notion of Kempf variety from $G \slash B$ to all $G \slash P$, in contrast to using the indirect definition for $G \slash P$ given in \cite{HL}.
Berenstein and Zelevinsky \cite{BZ} emphasized the value of using the points in convex integral polytopes to describe the weights-with-multiplicities of representations. Fix a partition $\lambda$ of some $N \geq 1$ such that $R_\lambda = R$. Rather than using the tableaux in $\mathcal{T}_\lambda$ to describe the irreducible polynomial character of $G$ with highest weight $\lambda$ (Schur function of shape $\lambda$), the corresponding Gelfand-Zetlin patterns (which have top row $\lambda$) can be used. These form an integral polytope in $\mathbb{Z}^{n \choose 2}$ that is convex. In Corollary 15.2 of \cite{PS}, Postnikov and Stanley formed convex polytopes from certain subsets of the GZ patterns with top row $\lambda$; these had been considered by Kogan. They summed the weights assigned to the points in these polytopes to obtain the Demazure polynomials $d_\lambda(\pi;x)$ that are indexed by the 312-avoiding permutations. The convex integral polytope viewpoint was used there to describe the degree of the associated embedded Schubert variety $X(\pi)$ in the full flag manifold $G \slash B$.
Now assume that $\lambda$ is strict. Here $R = [n-1]$ and the $R$-312-avoiding permutations are the 312-avoiding permutations. The referee of \cite{PW3} informed us that Kiritchenko, Smirnov, and Timorin generalized Corollary 15.2 of \cite{PS} to express \cite{KST} the polynomial $d_\lambda(\pi;x)$ for any $\pi \in S_n^\lambda$ as a sum over the points in certain faces of the GZ polytope for $\lambda$ that are determined by $\pi$. Only one face is used exactly when $\pi$ is 312-avoiding. At a glance it may appear that their Theorem 1.2 implies that the set of points used from the GZ integral polytope for $d_\lambda(\pi;x)$ is convex exactly when $\pi$ is 312-avoiding. So that referee encouraged us to remark upon the parallel 312-avoiding phenomena of convexity in $\mathbb{Z}^N$ for the tableau set $\mathcal{D}_\lambda(\pi)$ and of convexity in $\mathbb{Z}^{n \choose 2}$ for the set of points in these faces. But we soon saw that when $\lambda$ is small it is possible for the union of faces used for $d_\lambda(\pi;x)$ to be convex even when $\pi$ is not 312-avoiding. See Section 12 of \cite{PW3} for a counterexample. To obtain convexity, one must replace $\lambda$ by $m\lambda$ for some $m \geq 2$. In contrast, our Corollary \ref{cor520} holds for all $\lambda$.
Postnikov and Stanley remarked that the convex polytope of GZ patterns in the 312-avoiding case was used by Kogan and Miller to study the toric degeneration formed by Gonciulea and Lakshmibai for a Kempf variety. It would be interesting to see if the convexity characterization of the $R$-312-avoiding Demazure tableau sets $\mathcal{D}_\lambda(\pi)$ found here is related to some nice geometric properties for the corresponding Schubert varieties $X(\pi)$ in $G \slash P$. For any $R$-permutation $\pi$ the Demazure tableaux are well suited to studying the associated Schubert variety from the Pl{\"u}cker relations viewpoint, as was illustrated by Lax's re-proof \cite{Lax} of the standard monomial basis that used the scanning method of \cite{Wi2}.
\section{Parabolic Catalan counts}
The section (or paper) cited at the beginning of each item in the following statement points to the definition of the concept:
\begin{thm}\label{theorem18.1}Let $R \subseteq [n-1]$. Write the elements of $R$ as $q_1 < q_2 < ... < q_r$. Set $q_0 := 0$ and $q_{r+1} := n$. Let $\lambda$ be a partition $\lambda_1 \geq \lambda_2 \geq ... \geq \lambda_n \geq 0$ whose shape has the distinct column lengths $q_r, q_{r-1}, ... , q_1$. Set $p_h := q_h - q_{h-1}$ for $1 \leq h \leq r+1$. The number $C_n^R =: C_n^\lambda$ of $R$-312-avoiding permutations is equal to the number of:
\noindent (i) \cite{GGHP}: ordered partitions of $[n]$ into blocks of sizes $p_h$ for $1 \leq h \leq r+1$ that avoid the pattern 312, and $R$-$\sigma$-avoiding permutations for $\sigma \in \{ 123, 132, 213, 231, 321 \}$.
\noindent (ii) Section 2: multipermutations of the multiset $\{ 1^{p_1}, 2^{p_2}, ... , (r+1)^{p_{r+1}} \}$ that avoid the pattern 231.
\noindent (iii) Section 2: gapless $R$-tuples $\gamma \in UG_R(n)$.
\noindent (iv) Here only: $r$-tuples $(\mu^{(1)}, ... , \mu^{(r)})$ of shapes such that $\mu^{(h)}$ is contained in a $p_h \times (n-q_h)$ rectangle for $1 \leq h \leq r$ and for $1 \leq h \leq r-1$ the length of the first row in $\mu^{(h)}$ does not exceed the length of the $p_{h+1}^{st}$ (last) row of $\mu^{(h+1)}$ plus the number of times that (possibly zero) last row length occurs in $\mu^{(h+1)}$.
\noindent (v) Sections 4 and 5: $R$-rightmost clump deleting chains and gapless $\lambda$-keys.
\noindent (vi) Section 6: sets of Demazure tableaux of shape $\lambda$ that are convex in $\mathbb{Z}^{|\lambda|}$.
\end{thm}
\begin{proof}Part (i) first restates our $C_n^R$ definition with the terminology of \cite{GGHP}; for the second claim see the discussion below. The equivalence for (ii) was noted in Section 2. Use Proposition \ref{prop320.2}(ii) to confirm (iii). For (iv), destrictify the gapless $R$-tuples within each carrel. Use Proposition \ref{prop320.2}(i) and Theorem \ref{theorem340} to confirm (v). Part (vi) follows from Corollary \ref{cor520} and Fact \ref{fact420}(v).\end{proof}
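As an independent sanity check on parts (i) and (ii), the counts can be reproduced by brute force for small $n$. The following Python sketch (our own; all function names are hypothetical) reads the $R$-312-containing condition as the existence of positions $a < b < c$ lying in three successively later cohorts with $\pi_b < \pi_c < \pi_a$, which is equivalent to the condition recalled in the proof of the convexity theorem.

```python
# Brute-force cross-check (small n only) of counts (i) and (ii).
from itertools import permutations

def cohort_cuts(n, R):
    return [0] + sorted(R) + [n]

def count_R_312_avoiding(n, R):
    cuts = cohort_cuts(n, R)
    blk = lambda pos: next(i for i in range(len(cuts) - 1)
                           if cuts[i] <= pos < cuts[i + 1])
    count = 0
    for p in permutations(range(1, n + 1)):
        # an R-permutation increases within each cohort
        if any(list(p[cuts[i]:cuts[i + 1]]) != sorted(p[cuts[i]:cuts[i + 1]])
               for i in range(len(cuts) - 1)):
            continue
        hit = any(p[b] < p[c] < p[a]
                  for a in range(n) for b in range(a + 1, n)
                  for c in range(b + 1, n)
                  if blk(a) < blk(b) < blk(c))
        count += not hit
    return count

def count_231_avoiding_multipermutations(n, R):
    cuts = cohort_cuts(n, R)
    multiset = [h + 1 for h in range(len(cuts) - 1)
                for _ in range(cuts[h + 1] - cuts[h])]
    count = 0
    for m in set(permutations(multiset)):
        # a 231 pattern needs three distinct values m_k < m_i < m_j
        hit = any(m[k] < m[i] < m[j]
                  for i in range(n) for j in range(i + 1, n)
                  for k in range(j + 1, n))
        count += not hit
    return count
```

Both counters return 6 for $n = 4$, $R = \{2\}$ and 43 for $n = 6$, $R = \{2,4\}$, in agreement with the $C_{2m}^R$ sequence discussed below.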
To use the Online Encyclopedia of Integer Sequences \cite{Slo} to determine if the counts $C_n^R$ had been studied, we had to form sequences. One way to form a sequence of such counts is to take $n := 2m$ for $m \geq 1$ and $R_m := \{ 2, 4, 6, ... , 2m-2 \}$. Then the $C_{2m}^R$ sequence starts with 1, 6, 43, 352, 3114, ... ; this beginning appeared in the OEIS in Pudwell's A220097. Also for $n \geq 1$ define the \emph{total parabolic Catalan number $C_n^\Sigma$} to be $\sum C_n^R$, sum over $R \subseteq [n-1]$. This sequence starts with 1, 3, 12, 56, 284, ... ; with a `1' prepended, this beginning appeared in Sloane's A226316. These ``hits'' led us to the papers \cite{GGHP} and \cite{CDZ}.
Let $R$ be as in the theorem. Let $2 \leq t \leq r+1$. Fix a permutation $\sigma \in S_t$. Apparently motivated by generalization in its own right, with new enumeration results as a goal, Godbole, Goyt, Herdan and Pudwell defined \cite{GGHP} the notion of an ordered partition of $[n]$ with block sizes $b_1, b_2, ... , b_{r+1}$ that avoids the pattern $\sigma$. That paper appears to have been the first to consider a notion of pattern avoidance for ordered partitions that can be used to produce our $R$-312-avoiding permutations: Take $b_1 := q_1$, $b_2 := q_2 - q_1$, ... , $b_{r+1} := n - q_r$, $t := 3$, and $\sigma := (3;1;2)$. Their Theorem 4.1 implies that the number of such ordered partitions that avoid $\sigma$ is equal to the number of such ordered partitions that avoid each of the other five permutations for $t = 3$. This can be used to confirm that the $C_{2m}^R$ sequence defined above is indeed Sequence A220097 of the OEIS (which is described as avoiding the pattern 123). Chen, Dai, and Zhou gave generating functions \cite{CDZ} in Theorem 3.1 and Corollary 2.3 for the $C_{2m}^R$ for $R = \{ 2, 4, 6, ... , 2m-2 \}$ for $m \geq 0$ and for the $C_n^\Sigma$ for $n \geq 0$. The latter result implies that the sequence A226316 indeed describes the sequence $C_n^\Sigma$ for $n \geq 0$. Karen Collins and the second author of this paper have recently deduced that $C_n^\Sigma = \sum_{0 \leq k \leq [n/2]} (-1)^k \binom{n-k}{k} 2^{n-k-1} C_{n-k}$.
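The closed form for $C_n^\Sigma$ just quoted can be checked against a direct enumeration over all $R \subseteq [n-1]$ for small $n$; the sketch below is ours and is not taken from \cite{GGHP} or \cite{CDZ}.

```python
# Check the alternating binomial-Catalan closed form for C_n^Sigma against
# a direct enumeration of R-312-avoiding permutations over all R.
from itertools import combinations, permutations
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def closed_form(n):
    # C_n^Sigma = sum_k (-1)^k C(n-k, k) 2^(n-k-1) Cat(n-k), 0 <= k <= n//2
    return sum((-1) ** k * comb(n - k, k) * 2 ** (n - k - 1) * catalan(n - k)
               for k in range(n // 2 + 1))

def brute_force(n):
    total = 0
    for r in range(n):                         # R is any subset of [n-1]
        for R in combinations(range(1, n), r):
            cuts = [0, *R, n]
            blk = lambda pos: next(i for i in range(len(cuts) - 1)
                                   if cuts[i] <= pos < cuts[i + 1])
            for p in permutations(range(1, n + 1)):
                if any(list(p[cuts[i]:cuts[i + 1]]) != sorted(p[cuts[i]:cuts[i + 1]])
                       for i in range(len(cuts) - 1)):
                    continue                   # not an R-permutation
                total += not any(p[b] < p[c] < p[a]
                                 for a in range(n) for b in range(a + 1, n)
                                 for c in range(b + 1, n)
                                 if blk(a) < blk(b) < blk(c))
    return total
```

For $n = 1, \dots, 5$ the closed form returns 1, 3, 12, 56, 284, matching the sequence above.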
How can the $C_n^\Sigma$ total counts be modeled? Gathering the $R$-312-avoiding permutations or the gapless $R$-tuples from Theorem \ref{theorem18.1}(ii) for this purpose would require retaining their ``semicolon dividers''. Some other objects model $C_n^\Sigma$ more elegantly. We omit definitions for some of the concepts in the next statement. We also suspend our convention concerning the omission of the prefix `$[n-1]$-': Before, a `rightmost clump deleting' chain deleted one element at each stage. Now this unadorned term describes a chain that deletes any number of elements in any number of stages, provided that they constitute entire clumps of the largest elements still present plus possibly a subset from the rightmost of the other clumps. When $n = 3$ one has $C_n^\Sigma = 12$. Five of these chains were displayed in Section 6. A sixth is \cancel{1} \cancel{2} \cancel{3}. Here are the other six, plus one such chain for $n = 17$:
\begin{figure}[h!]
\begin{center}
\setlength\tabcolsep{.1cm}
\begin{tabular}{ccccc}
1& &2& &\cancel{3}\\
&\cancel{1}& &\cancel{2}
\end{tabular}\hspace{7mm}
\begin{tabular}{ccccc}
1& &\cancel{2}& &3\\
&\cancel{1}& &\cancel{3}
\end{tabular}\hspace{7mm}
\begin{tabular}{ccccc}
\cancel{1}& &2& &3\\
&\cancel{2}& &\cancel{3}
\end{tabular}\hspace{7mm}
\begin{tabular}{ccccc}
1& &\cancel{2}& &\cancel{3}\\
& & \cancel{1}& &
\end{tabular}\hspace{7mm}
\begin{tabular}{ccccc}
\cancel{1}& &2& &\cancel{3}\\
& & \cancel{2}
\end{tabular}\hspace{7mm}
\begin{tabular}{ccccc}
\cancel{1}& &\cancel{2}& &{3}\\
& &\cancel{3}
\end{tabular}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\setlength\tabcolsep{.3cm}
\begin{tabular}{ccccccccccccccccc}
1 & 2 & \cancel{3} & 4 & 5 & \cancel{6} & 7 & 8 & 9 & 10 & 11 & \cancel{12} & 13 & 14 & \cancel{15} & 16 & 17 \\
& & 1 & 2 & 4 & 5 & 7 & \cancel{8} & 9 & \cancel{10} & 11 & \cancel{13} & \cancel{14} & \cancel{16} & \cancel{17} & & \\
& & & & & 1 & 2 & \cancel{4} & 5 & \cancel{7} & \cancel{9} & \cancel{11} & & & & & \\
& & & & & & & \cancel{1} & \cancel{2} & \cancel{5} & & & & & & &
\end{tabular}
\end{center}
\end{figure}
\vspace{.1pc}\begin{cor}\label{cor18.2} The total parabolic Catalan number $C_n^\Sigma$ is the number of:
\noindent (i) ordered partitions of $\{1, 2, ... , n \}$ that avoid the pattern 312.
\noindent (ii) rightmost clump deleting chains for $[n]$, and gapless keys whose columns have distinct lengths less than $n$.
\noindent (iii) Schubert varieties in all of the flag manifolds $SL(n) / P_J$ for $J \subseteq [n-1]$ such that their ``associated'' Demazure tableaux form convex sets as in Section 7. \end{cor}
\noindent Part (iii) highlights the fact that the convexity result of Corollary \ref{cor520} depends only upon the information from the indexing $R$-permutation for the Schubert variety, and not upon any further information from the partition $\lambda$. In addition to their count $op_n[(3;1;2)] = C_n^\Sigma$, the authors of \cite{GGHP} and \cite{CDZ} also considered the number $op_{n,k}(\sigma)$ of such $\sigma$-avoiding ordered partitions with $k$ blocks. The models above can be adapted to require the presence of exactly $k$ blocks, albeit of unspecified sizes.
\vspace{.5pc}\noindent \textbf{Added Note.} We learned of the paper \cite{MW} after posting \cite{PW2} on the arXiv. As at the end of Section 3, let $R$ and $J$ be such that $R \cup J = [n-1]$ and $R \cap J = \emptyset$. It could be interesting to compare the definition for what we would call an `$R$-231-avoiding' $R$-permutation (as in \cite{GGHP}) to M{\"u}hle's and Williams' definition of a `$J$-231-avoiding' $R$-permutation in Definition 5 of \cite{MW}. There they impose an additional condition $w_i = w_k + 1$ upon the pattern to be avoided. For their Theorems 21 and 24, this condition enables them to extend the notions of ``non-crossing partition'' and of ``non-nesting partition'' to the parabolic quotient $S_n / W_J$ context of $R$-permutations to produce sets of objects that are equinumerous with their $J$-231-avoiding $R$-permutations. Their Theorem 7 states that this extra condition is superfluous when $J = \emptyset$. In this case their notions of $J$-non-crossing partition and of $J$-non-nesting partition specialize to the set partition Catalan number models that appeared as Exercises 159 and 164 of \cite{Sta}. So if it is agreed that their reasonably stated generalizations of the notions of non-crossing and non-nesting partitions are the most appropriate generalizations that can be formulated for the $S_n / W_J$ context, then the mutual cardinality of their three sets of objects indexed by $J$ and $n$ becomes a competitor to our $C_n^R$ count for the name ``$R$-parabolic Catalan number''. This development has made the obvious metaproblem more interesting: Now not only must one determine whether each of the 214 Catalan models compiled in \cite{Sta} is ``close enough'' to a pattern avoiding permutation interpretation to lead to a successful $R$-parabolic generalization, one must also determine which parabolic generalization applies.
\vspace{1pc}\noindent \textbf{Acknowledgments.} We thank Keith Schneider, Joe Seaborn, and David Raps for some helpful conversations, and we are also indebted to David Raps for some help with preparing this paper. We thank the referee for suggesting some improvements in the exposition.
\section{Introduction}
\label{sec:introduction}
There are currently many efforts towards demonstrating fundamental quantum effects such as superposition and entanglement in macroscopic systems \cite{Monroe,Brune,Agarwal,Arndt,Sorensen,Friedman,Julsgaard,Esteve,Riedel,OConnell,Lvovsky,Bruno,Palomaki,Vlastakis,
Arndt-Hornberger,Hon-Wai}. One relevant class of quantum states is that of so-called cat states, i.e. superposition states involving two components that are very different in some physical observable, such as position, phase or spin. Here we propose a method for creating such large superpositions in energy. This is relevant in the context of testing proposed quantum-gravity related energy decoherence \cite{Milburn,Gambini,Blencowe}.
Our method relies on the uniform Kerr-type interaction that can be generated between atoms by weak dressing with a Rydberg state \cite{Johnson,Henkel-Thesis,Pfau-Rydberg}. This can be used to generate cat states similarly to the optical proposal of Ref. \cite{Yurke-Stoler}. Using an optical clock state in Strontium as one of the two atomic basis states makes it possible to create large and long-lived energy superposition states. The superposition can be verified by observing a characteristic revival. We analyze the effects of relevant imperfections including higher-order nonlinearities, spatial inhomogeneity of the interaction, decay from the Rydberg state, atomic motion in the optical lattice, collective many-body decoherence triggered by black-body induced transitions, molecular formation, and diminishing Rydberg level separation for increasing principal quantum number. Our scheme significantly improves the precision of energy decoherence detection.
\begin{figure}[h]
\scalebox{0.50}{\includegraphics*[viewport=50 75 510 485]{Scheme.pdf}}
\centering
\caption{ (Color online) Proposed scheme for creation of large energy superposition. (a) Level scheme in Strontium. The pseudo-spin states are the singlet ground state $|g\rangle$ and a long lived excited triplet state $|e\rangle$. An off-resonant laser field $(\Omega_{r})$ dresses the excited state with the Rydberg level $|r\rangle$. This creates a Kerr-type interaction between the atoms in the excited state. The resonant laser field $(\Omega_{e})$ is applied for population rotation. (b-d) The evolution of the Husimi distribution of the collective spin state on the Bloch sphere. Application of the Kerr-type interaction splits the initial coherent spin state (CSS) (b) into a superposition of two CSS at opposite poles of the Bloch sphere (c). Applying a $\pi /2$ rotation along the $x$ axis following the cat creation process results in a superposition of all atoms being in the ground or excited state. } \label{Scheme}
\end{figure}
Previous related, but distinct, work includes Ref. \cite{Simon-Jaksch}, which briefly discussed the creation of energy superposition states in Strontium Bose-Einstein condensates based on collisional interactions. Ref. \cite{Ghobadi} proposed the creation of energy superposition states of light, and Ref. \cite{ion-Cat} reported the realization of a 14-ion GHZ state, with 24 eV energy separation, but without mentioning the energy superposition aspect. The present proposal promises much greater sensitivity to energy decoherence thanks to a much longer lifetime (compared to Ref. \cite{Ghobadi}) and to both increased size and longer lifetime (compared to Ref. \cite{ion-Cat}). Related work involving Rydberg states includes Refs. \cite{Saffman-Molmer,Opatrny-Molmer}, which performed detailed studies of the creation of moderate-size cat states using Rydberg blockade. The number of atoms is limited to of order ten in those schemes due to competing requirements for the presence and absence of blockade between different Rydberg transitions in the same ensemble. Those schemes also do not use metastable optical clock states, resulting in only small differences in energy between the two components. Ref. \cite{Mukherjee} briefly discussed the creation of moderate-size (15-atom) GHZ-type states in Strontium atom chains, without mentioning the energy superposition aspect. Ref. \cite{Mukherjee} uses attractive Rydberg interactions, but not the uniform Kerr-type interaction used in the present work. The number of atoms in Ref. \cite{Mukherjee} is limited by unwanted transitions to other nearby many-body states \cite{Mukherjee-Thesis}.
The paper is organized as follows. We begin with a description of our scheme in Sec.~\ref{sec:Scheme}. In Sec.~\ref{sec:Imperfection} and \ref{Decoherence} we quantify the effects of the main imperfections and decoherence sources on the fidelity of the final cat state. In Sec.~\ref{Cat Size} we find an estimate for the size of cat states that can be realized with high fidelity. We then show that our scheme is experimentally realizable in Sec.~\ref{Realization}, followed by a detailed discussion in Sec.~\ref{More imperfection}, demonstrating that the effects of atomic motion, molecular formation, collective many-body decoherence, level mixing and black-body radiation induced decoherence can be suppressed. We conclude the paper in Sec.~\ref{Energy-decoherence} with a discussion of the application of energy superposition states for the detection of energy decoherence.
\section{Scheme}
\label{sec:Scheme}
We now describe our proposal in more detail. In an ensemble of $N$ ultra-cold Strontium atoms trapped in a 3D optical lattice \cite{3D Lattice}, one can consider a two-level system consisting of the singlet ground state $|g\rangle$ and a long-lived excited triplet state $|e\rangle$, which are separated in energy by $1.8$ eV. An interaction between the atoms can be induced by dressing the clock state with a strongly interacting Rydberg level \cite{Johnson,Henkel-Thesis,Pfau-Rydberg} as shown in the level scheme of Fig.~\ref{Scheme}. This induces a light shift (LS) on the atoms which depends on the Rydberg blockade.
\subsection{Kerr-type Rydberg Dressed Interaction}
\label{Ker-type}
When the entire ensemble is inside the blockade radius, the dressing laser couples the state with no Rydberg excitation $|\psi_1\rangle=\otimes_{i}\left|\phi_{i}\right\rangle $ (where $\phi\in\{e,g\}$) to a state where only one of the atoms in the $|e\rangle$ level gets excited to the Rydberg level $|\psi_2\rangle=\sum_{i}\left|\phi_{1}...r_{i}...\phi_{N}\right\rangle$ with an enhanced Rabi frequency $\sqrt{N_{e}}\Omega_{r}$ \cite{kuzmich}, where $N_e$ is the number of atoms in the excited state.
During the Rydberg dressing process, the Hamiltonian can be diagonalized at each instant
\begin{equation}
D \equiv UHU^\dagger=\left(\begin{array}{cc}E_{-} & 0\\0 & E_{+}\end{array}\right),
\end{equation}
where $E_{\pm}=\frac{\Delta}{2}(1 \pm \sqrt{1+\frac{N_{e}\Omega_{r}^{2}}{\Delta^2}})$ and
\begin{equation}
U=\left(\begin{array}{cc}\cos(\theta/2) & -\sin(\theta/2)\\ \sin(\theta/2) & \cos(\theta/2)\end{array}\right)
\end{equation}
with $\theta=\tan^{-1}(\frac{\sqrt{N_e}\Omega_r}{\Delta})$. The Schr\"{o}dinger equation expressed in the dressed state basis $|\varphi\rangle=U|\psi\rangle$ is
\begin{equation}
i\frac{\partial}{\partial t} \left(\begin{array}{cc} |\varphi_{-} \rangle \\ |\varphi_{+}\rangle \end{array}\right)=\left(\begin{array}{cc}E_{-} & -i\dot{\theta}/2 \\ i\dot{\theta}/2 & E_{+}\end{array}\right) \left(\begin{array}{cc} |\varphi_{-} \rangle \\ |\varphi_{+}\rangle \end{array}\right).
\end{equation}
To avoid the scattering of population from the ground dressed state to the excited dressed state, the coupling term $\dot{\theta}=\frac{\sqrt{N_e} \Delta \dot{\Omega}_r - \sqrt{N_e}\Omega_r \dot{\Delta}}{N_e \Omega_r^2+\Delta^2}$ should be small compared to $E_+$ (see the realization Sec.~\ref{Realization} for examples).
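As a numerical illustration of this adiabaticity criterion, the following sketch ramps $\Omega_r$ linearly at fixed $\Delta$ and verifies that $|\dot{\theta}|$ stays two orders of magnitude below $E_+$; all parameter values are hypothetical.

```python
# Adiabaticity check (sketch, hypothetical linear ramp of the dressing
# laser at constant detuning): the coupling theta_dot stays far below E_+.
import numpy as np

Delta = 2 * np.pi * 1.0            # fixed detuning (arbitrary units)
N_e, T = 100, 50.0                 # excited-atom number and ramp duration
t = np.linspace(0.0, T, 2001)
Omega = 2 * np.pi * 0.01 * t / T   # Omega_r ramped linearly from zero
Omega_dot = 2 * np.pi * 0.01 / T   # constant ramp rate, Delta_dot = 0

theta_dot = np.sqrt(N_e) * Delta * Omega_dot / (N_e * Omega ** 2 + Delta ** 2)
E_plus = Delta / 2 * (1 + np.sqrt(1 + N_e * Omega ** 2 / Delta ** 2))
assert np.all(np.abs(theta_dot) < 0.01 * E_plus)   # coupling << gap
```

Slower ramps (larger $T$) suppress $\dot{\theta}$ further while leaving $E_+$ unchanged.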
Focusing on the ground dressed state, the effective light shift of the system is
\begin{equation}
E_-=\frac{\Delta}{2}(1 - \sqrt{1+\frac{N_{e}\Omega_{r}^{2}}{\Delta^2}}).
\label{light-shift Eq}
\end{equation}
Within the weak dressing regime $(\frac{\sqrt{N_e}\Omega_r}{\Delta}\ll 1)$ one can Taylor expand the light shift to
\begin{equation}
E_-=\frac{\Delta}{2}[1 - (1+\frac{1}{2} \frac{N_e \Omega_r^2}{\Delta^2} - \frac{1}{8} \frac{N_e^2 \Omega_r^4}{\Delta^4} +O(\frac{N_e \Omega_r^2}{\Delta^2})^3 ) ],\label{Expansion Eq}
\end{equation}
which can be simplified to $E_- \approx (N_e^2-\frac{N_e}{w^2})\frac{\chi_0}{2}$, with $w=\frac{\Omega_r}{2\Delta}$ and $\chi_0=2w^4\Delta$. Therefore adiabatic weak dressing of atoms to the Rydberg level imposes an effective Kerr-type Hamiltonian
\begin{equation}
H= (\hat{N_e}^2-\frac{\hat{N_e}}{w^2})\frac{\chi_0}{2}\label{Kerr-type-Eq}
\end{equation}
on the atoms within the blockade radius. The effects of higher order terms in the Taylor expansion are discussed in Sec.~\ref{Higher Order Non-linearities} and Fig.~\ref{wVsN}.
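The two approximations made here, $E_\pm$ as the exact eigenvalues of the collective two-level Hamiltonian and the quartic weak-dressing expansion of $E_-$, can be checked numerically; the parameters below are hypothetical and chosen so that $\sqrt{N_e}\,\Omega_r/\Delta \lesssim 0.32$.

```python
# Numerical check (sketch with hypothetical parameters) that E_± are the
# eigenvalues of the collective two-level Hamiltonian and that E_- is well
# approximated by the Kerr-type form in the weak-dressing regime.
import numpy as np

Delta, Omega_r = 1.0, 0.1            # sqrt(N_e)*Omega_r/Delta << 1 below
w = Omega_r / (2 * Delta)
chi0 = 2 * w ** 4 * Delta

for N_e in range(1, 11):
    g = np.sqrt(N_e) * Omega_r / 2   # collective coupling sqrt(N_e)*Omega_r/2
    H = np.array([[0.0, g], [g, Delta]])
    E_minus = np.linalg.eigvalsh(H)[0]
    exact = Delta / 2 * (1 - np.sqrt(1 + N_e * Omega_r ** 2 / Delta ** 2))
    assert abs(E_minus - exact) < 1e-12
    kerr = (N_e ** 2 - N_e / w ** 2) * chi0 / 2
    x = N_e * Omega_r ** 2 / Delta ** 2
    # the first neglected term of the expansion is of order Delta*x^3/32
    assert abs(exact - kerr) < Delta * x ** 3 / 10
```

The residual error indeed scales as the cube of the small parameter $N_e \Omega_r^2/\Delta^2$, as stated above.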
\subsection{Generation of Cat State on the Equator of the Bloch Sphere}
The two levels $|g_{i}\rangle$ and $|e_{i}\rangle$ for each atom are equivalent to a spin $1/2$ system with Pauli matrices $\sigma_{x}^{(i)}=(|g_i\rangle\langle e_i|+|e_i\rangle \langle g_i|)/2$, $\sigma_{y}^{(i)}=i(|g_i\rangle\langle e_i|-|e_i\rangle \langle g_i|)/2$ and $\sigma_{z}^{(i)}=(|e_i\rangle\langle e_i|-|g_i\rangle \langle g_i|)/2$ acting on the atom at site $i$. We define collective spin operators $S_{l}=\sum_{i=1}^{N}\sigma_{l}^{(i)}$. A coherent spin state (CSS) is defined as a direct product of single spin states \cite{CSS}
\begin{equation}
|\theta,\phi \rangle=\otimes _{i=1}^{N}[\cos(\theta/2)|g \rangle_{i}+\sin(\theta/2) e^{-i \phi} |e \rangle_{i}],
\end{equation}
where all the spins are pointing in the same direction, and $\phi$ and $\theta$ are the angles on the (collective) Bloch sphere. The CSS can also be represented as \cite{CSS}
\begin{equation}
|\eta \rangle= |\theta,\phi \rangle=(1+|\eta|^{2})^{-N/2} \sum_{N_{e}=0} ^{N} \eta^{N_{e}} \sqrt{C(N,N_e)}|N;N_{e}\rangle,
\end{equation}
where $\eta=\tan(\theta/2)e^{-i\phi}$,
$C(N,N_e)\equiv\left(\begin{array}{c} N\\ N_{e} \end{array}\right)$ and $|N;N_{e} \rangle=\frac{1}{\sqrt{C(N,N_e)}}\sum_{i_1<i_2<...<i_{N_e}}^{N}|g_{1}...e_{i_1}...e_{i_{N_e}}...g_{N}\rangle$ is the Dicke state of $N_{e}$ excited atoms, where $|N;N_{e} \rangle$ is an alternative representation of the $|J \, M\rangle$ basis with $N=2J$ and $N_e=J+M$.
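The equality of the product form and the binomial Dicke expansion of a CSS can be verified directly for small $N$; the sketch below uses the half-angle convention consistent with $\eta=\tan(\theta/2)e^{-i\phi}$.

```python
# Small numerical check (sketch) that the binomial Dicke expansion of the
# CSS reproduces the product state, here for N = 3 and arbitrary angles.
import numpy as np
from itertools import product

N, theta, phi = 3, 0.7, 0.3
eta = np.tan(theta / 2) * np.exp(-1j * phi)

amp_g = np.cos(theta / 2)                       # single-atom amplitudes
amp_e = np.sin(theta / 2) * np.exp(-1j * phi)

norm = (1 + abs(eta) ** 2) ** (-N / 2)
for bits in product((0, 1), repeat=N):          # 0 = |g>, 1 = |e>
    prod_amp = np.prod([amp_e if b else amp_g for b in bits])
    N_e = sum(bits)
    # each basis ket with N_e excitations carries norm * eta^N_e
    assert abs(prod_amp - norm * eta ** N_e) < 1e-12
```

The binomial factor $\sqrt{C(N,N_e)}$ cancels against the normalization of the Dicke state, so every basis ket with $N_e$ excitations carries the same amplitude.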
Let us now discuss the time evolution of an initial CSS $|\eta\rangle$ under the Kerr-type interaction of Eq.~(\ref{Kerr-type-Eq}). The state evolves as
\begin{equation}
|\psi(t) \rangle= (1+|\eta|^{2})^{-N/2} \sum_{N_{e}=0} ^{N} \eta^{N_{e}} e^{-i H t} \sqrt{C(N,N_e)}|N;N_{e}\rangle.
\end{equation}
At the ``cat creation'' time $\tau_{c}=\frac{\pi}{ \chi_{0}}$ the linear term of Eq.~(\ref{Kerr-type-Eq}) creates a phase rotation, which changes the state to $|\eta' \rangle=|e^{i\frac{\chi_0 \tau_c}{2w^2}}\eta \rangle$. The quadratic term produces coefficients of $(1)$ and $(-i)$ for even and odd $N_{e}$'s respectively. The state can then be rewritten as a superposition of two CSS, namely
\begin{equation}
|\psi(\tau_{c}) \rangle=\frac{1}{\sqrt{2}} (e^{i \frac{\pi}{4}}|\eta' \rangle+e^{-i \frac{\pi}{4}}|-\eta' \rangle)
\end{equation}
in analogy with Ref. \cite{Yurke-Stoler}. Continuing the interaction for another $\tau_{c}$, one can observe the revival of the initial CSS. This revival can be used as proof for the successful creation of a quantum superposition at $\tau_c$, since a statistical mixture of CSS at $\tau_c$ would evolve into another mixture of separate peaks \cite{Hon-Wai,Dalvit}.
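The splitting into two CSS and the subsequent revival can be checked numerically. The sketch below (plain Python, our own variable names) keeps only the quadratic Kerr phase $e^{-i\chi_0 N_e^2 t/2}$; the linear term merely rotates the result on the equator, so the two components appear at $\eta=\pm 1$ rather than $\pm i$. At $\tau_c$ each component CSS carries exactly half the population, and at $2\tau_c$ the state revives as a single (rotated) CSS.

```python
import cmath, math

N = 20

def css(eta):
    """Dicke-basis coefficients of a CSS |eta>."""
    norm = (1 + abs(eta) ** 2) ** (-N / 2)
    return [norm * eta ** ne * math.sqrt(math.comb(N, ne)) for ne in range(N + 1)]

def overlap(a, b):
    return abs(sum(x.conjugate() * y for x, y in zip(a, b))) ** 2

psi0 = css(1.0)  # CSS after the pi/2 pulse
# quadratic Kerr phase exp(-i chi0 Ne^2 t / 2) at t = tau_c = pi/chi0 and 2 tau_c
cat = [x * cmath.exp(-1j * math.pi * ne ** 2 / 2) for ne, x in enumerate(psi0)]
revival = [x * cmath.exp(-1j * math.pi * ne ** 2) for ne, x in enumerate(psi0)]

p_plus = overlap(cat, css(1.0))    # weight on first CSS component: 1/2
p_minus = overlap(cat, css(-1.0))  # weight on antipodal component: 1/2
p_rev = overlap(revival, css(-1.0))  # full revival of a rotated CSS: 1
```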
\subsection{Creating the Energy Cat}
\label{Steps Towards the Energy Cat}
To create an energy superposition state we thus have to apply the following steps. Starting from the collective ground state $|g \rangle ^{\otimes N}$, we apply a $\pi/2$ pulse on the $|e\rangle-|g\rangle$ transition, which results in the maximum eigenstate of the $S_{x}$ operator, $|\eta=1 \rangle=(\frac{|e \rangle +| g \rangle}{\sqrt{2}})^{\otimes N}$, as shown in Fig.~\ref{Scheme}(b). Since the atoms are confined to the ground states of optical lattice traps, the position-dependent phase factors associated with laser excitation of the clock state are constant over the course of the experiment and can be absorbed into the definition of the atomic basis states (a detailed discussion can be found in Sec.~\ref{atomic motion}). We now apply the Kerr-type interaction. The large coefficient of the linear term in the Hamiltonian leads to a rotation of the created cat state on the equator of the Bloch sphere. With accurate interaction timing, the state can be chosen to be a superposition of two CSS pointing in opposite directions along the $y$ axis of the Bloch sphere,
$| \psi(\tau_{c})\rangle=\frac{1}{\sqrt{2}}(e^{i \frac{\pi}{4}} |\eta=i \rangle+ e^{-i \frac{\pi}{4}} |\eta=-i\rangle)$,
see Fig.~\ref{Scheme}(c) and inset (a) of Fig.~\ref{wVsN}. For example, a timing precision of $\delta \tau_c=\frac{2w^2}{5 \pi\sqrt{N}}\tau_c$ results in an adequate phase uncertainty of $\delta\phi=\frac{1}{5 \sqrt{N}}$ (examples can be found in the realization Sec.~\ref{Realization}). Applying another $\frac{\pi}{2}$ pulse on the created cat state results in $\frac{|e \rangle ^{\otimes N} +| g \rangle^{\otimes N}}{\sqrt{2}}$, which is a superposition of all the atoms being in the ground and excited states, as shown in Fig.~\ref{Scheme}(d). The created state is a superposition of two components with very different energies. To verify the creation of the energy cat state one needs to rotate the state back to the equator and detect the revival of the initial CSS under the Kerr-type interaction, see also the inset of Fig.~\ref{wVsN}(b).
\section{Imperfections}
\label{sec:Imperfection}
In this section we quantify the effects of the most important imperfections with direct impact on the achievable cat size. Other sources of imperfections, which can be made to have relatively benign effects on our scheme, are discussed in Sec.~\ref{More imperfection}.
\subsection{Higher Order Non-linearities}
\label{Higher Order Non-linearities}
\begin{figure}
\centering
\scalebox{0.43}{\includegraphics*[viewport=30 210 580 500]{wVsN.pdf}}
\caption{ (Color online) Effect of higher than second order nonlinearities (from the higher orders of Eq.~(\ref{Expansion Eq})) on the fidelity of the cat state. The weak dressing parameter $(w=\frac{\Omega_{r}}{2 \Delta})$ has to be reduced for larger atom numbers $N$ in order to keep a fixed fidelity $F_{nl}$ ($F_{nl}=0.7$ (green), $0.8$ (red), $0.9$ (blue) from top to bottom). The inset shows the Husimi Q function for an $N=100$ cat state (a) with $F_{nl}=0.9$ (corresponding to the black cross in the main figure), as well as the corresponding revival (b). The approximate revival of the initial CSS at the time $t=2 \tau_c$ proves the existence of a quantum superposition at $t=\tau_c$.} \label{wVsN}
\end{figure}
First, we only considered the linear and quadratic terms in $N_e$ in our Hamiltonian, which is accurate for very weak dressing. Applying stronger dressing fields yields a stronger interaction, but also increases the importance of higher order terms in Eq.~(\ref{Expansion Eq}). To quantify the effects of these higher orders, we calculate the fidelity of the cat state $(|\psi'(\tau_{c}) \rangle)$ generated based on Eq.~(\ref{light-shift Eq}) with respect to the closest ideal cat state,
\begin{equation}
F_{nl}=\max_{\theta,\phi,\alpha, \tau_c}|\langle \psi'(\tau_{c}) | \frac{1}{\sqrt{2}}(|\theta,\phi \rangle+e^{i \alpha}|\pi-\theta,\phi+\pi \rangle)|^{2}.
\end{equation}
Fig.~\ref{wVsN} shows that the weak dressing parameter $w=\frac{\Omega_r}{2\Delta}$ has to be reduced for larger atom numbers in order to achieve a desired fidelity.
\subsection{Effects of Interaction Inhomogeneities}
\label{sec:Inhomogeneities}
We also considered a uniform blockade over the entire medium, leading to a homogeneous interaction. In practice the interaction is not perfectly homogeneous. One can apply fourth order perturbation theory to find the interaction of the entire weakly dressed system as \cite{Pohl1,Pohl2}
\begin{equation}
\hat{H}=\sum_{i<j} \chi(r_{ij}) \hat{\sigma}^{i}_{ee} \hat{\sigma}^{j}_{ee}-\frac{\Omega^2}{4\Delta}\hat{N}_e.
\end{equation}
The many-body interaction is the sum of binary interactions
\begin{equation}
\chi(r_{ij})=\chi_{0} \frac{R_{b}^{6}}{r_{ij}^{6}+R_{b}^{6}},
\end{equation}
where $R_{b}=|\frac{C_{6}}{2\Delta}|^{1/6}$ is the blockade radius in the weak dressing regime. This binary interaction has a plateau-type nature, see Fig.~\ref{Inhomogeneity Fig}(a). The inhomogeneity of the interaction introduces a coupling to non-symmetric states, since the Hamiltonian no longer commutes with the total spin operator ($[S^2,H]\neq 0$). We evaluate the fidelity of a cat state created by the realistic non-uniform interaction with respect to the ideal cat state. Writing the pair interactions $\chi(r_{ij})$ in terms of small fluctuations $\epsilon_{ij}$ around a mean value $\chi_{m}$,
we decompose the Hamiltonian into a sum of two commuting terms, $\hat{V}_{H}=\sum\limits_{i<j} \chi_m \hat{\sigma}^{i}_{ee} \hat{\sigma}^{j}_{ee}-\frac{\Omega^2}{4\Delta}\hat{N}_e=\chi_{m} (\frac{\hat{N}_e^2-\hat{N}_e}{2})-\frac{\chi_0}{2w^2}\hat{N}_e \approx \frac{\chi_{m}}{2}\hat{N}_e^2-\frac{\chi_0}{2w^2}\hat{N}_e$ and $\hat{V}_{IH}=\sum\limits_{i<j} \epsilon_{ij} \hat{\sigma}^{i}_{ee} \hat{\sigma}^{j}_{ee}$, corresponding to the homogeneous and inhomogeneous parts respectively. While the homogeneous part leads to an ideal cat state, the inhomogeneous part reduces the fidelity by a factor $F_{IH}=|\langle \eta=1|e^{-i\hat{V}_{IH}\tau_c}|\eta=1\rangle|^2$, where $|\eta=1 \rangle=(\frac{|e\rangle+|g\rangle}{\sqrt{2}})^N$ is the initial CSS.
Taylor expanding the inhomogeneous part of the evolution operator one obtains an estimate for the fidelity as explained in Appendix \ref{supp}.
Fig.~\ref{Inhomogeneity Fig}(b) shows the resulting infidelity as a function of cat size for constant blockade radius.
\begin{figure}
\centering
\scalebox{0.34}{\includegraphics*[viewport=0 80 800 420]{Inhomogeneity3.pdf}}
\caption{ (Color online) Effect of interaction inhomogeneity. (a) Plateau-type interaction between each pair of atoms dressed to the Rydberg state. The interaction is uniform for separations up to roughly the blockade radius. (b) Infidelity caused by interaction inhomogeneity as a function of cat size $(N)$, for a constant blockade radius. The non-linear fidelity is set to $F_{nl}=0.9$, the blockade radius $R_{b}=3.6\,\mu$m is created by Rydberg dressing to $n=80$, and the atoms are considered to be in a cubic trap with space diagonal $D$ and lattice spacing of $200\,$nm.} \label{Inhomogeneity Fig}
\end{figure}
\section{Decoherence}
\label{Decoherence}
The main source of decoherence in our system is depopulation of the Rydberg level which also determines the lifetime of the dressed state $(\tau_{\tilde{e}} \approx \tau_r w^{-2})$. In this section we identify different Rydberg decay channels and discuss their effects on the fidelity of the cat state. Loss due to collisions is reduced by the use of an optical lattice trap with a single atom per site. Ref. \cite{Lattice-Lifetime} implemented a Strontium optical clock using a blue-detuned lattice (trap laser wavelength 390 nm) with a collision-limited lifetime of $100$s, demonstrating that loss due to the trap laser can be made negligible. Other sources of decoherence including blackbody radiation induced transitions, collective many-body decoherence and molecular formation will be discussed in Sec.~\ref{More imperfection}.
\subsection{Rydberg Decay Channels}
\label{A coefficient}
As discussed above, depopulation of the Rydberg level is the dominant decoherence channel and determines the lifetime of the dressed state $(\tau_{\tilde{e}} \approx \tau_r w^{-2})$. The Rydberg state depopulation rate can be calculated as the sum of spontaneous transition probabilities to the lower states (given by Einstein A-coefficients) \cite{A-coefficient1,A-coefficient2,A-coefficient3}
\begin{equation}\label{Life-time}
\tau_{r}^{-1}=\sum\limits_{f}{A_{if}}=\frac{2e^{2}}{3\epsilon_{0}c^{3}h} \sum\limits_{E_f<E_i} \omega_{if}^{3}\, |\langle i|\vec{r}|f\rangle|^{2},
\end{equation}
where $\omega_{if}=\frac{E_i-E_f}{\hbar}$ is the transition frequency
and $\langle i|\vec{r}|f\rangle$ is the dipole matrix element
between initial and final states (see Appendix~\ref{dipole matrix elements}).
The summation is only over the states $|f\rangle$ with lower energies compared to the initial state.
Using a cryogenic environment \cite{Cryogenic}, black-body radiation induced transitions are negligible, see Sec.~\ref{BBR} for detailed discussion.
Considering the dressing to $5sns \,{}^3S_{1}$ in our proposal, the possible destinations of dipole transitions are limited to $^3P_{0,1,2}$, due to the selection rules. Around 55\% of the transferred population will be trapped within the long-lived $^3P_2$ states, which we refer to as qubit loss. Around 35\% of the population is transferred to $^3P_1$ states, which mainly decay to the ground state $|g \rangle =5s^2\, ^1S_0$ within a short time
(e.g.\ $\tau_{5s5p\, ^3P_1}=23\,\mu$s \cite{3P0lifetime}), which we refer to as de-excitation. The remaining 10\% of the population is transferred to $^3P_0$ states. Half of this population ($5\%$ of the total) contributes to qubit loss, bringing the total loss to 60\%, while the other half (also $5\%$ of the total) is transferred back to the excited state, which effectively causes dephasing of $|\tilde{e}\rangle$ because the photon that is emitted in the process contains which-path information about the qubit state.
\subsection{Effects of Rydberg Decoherence on the Cat State}
The three decoherence types discussed in the previous sub-section have different effects on the cat state. Loss and de-excitation completely destroy the cat state if they occur, while dephasing is both unlikely and relatively benign. We now explain these statements in more detail.
The majority (60\%) of the dressed state's decay is loss to non-qubit states, $|\tilde{e} \rangle \Rightarrow \delta |\tilde{e} \rangle |0\rangle_p +\sqrt{1-\delta^2} |l \rangle | 1 \rangle_p $, where $\delta^2=e^{-0.6 \gamma_{\tilde{e}} \tau_c}$ and $|1 \rangle_p$ represents the emitted photon. A further 35\% of the dressed state's decay is de-excitation, $|\tilde{e} \rangle \Rightarrow \delta |\tilde{e} \rangle |0\rangle_p +\sqrt{1-\delta^2} |g\rangle | 1 \rangle_p $, where $\delta^2=e^{-0.35 \gamma_{\tilde{e}} \tau_c}$.
Decay of a single dressed state atom transforms an atomic symmetric Dicke state $|N;N_e \rangle$ into a combination of the original state $|N;N_e \rangle$, a symmetric Dicke state $|N;N_e-1 \rangle$ with one fewer excitation, and $N$ different other Dicke states $(|N-1;N_e-1 \rangle _{\tilde{i}} |l\rangle_i)$ in which the $i$-th atom is transferred to a non-qubit state (the qubit is lost), but which are still symmetric Dicke states for the remaining atoms. The resulting state is $\sqrt{P_0} |N;N_e \rangle |0\rangle_P + \sqrt{P_{de} N_e} |N;N_e-1 \rangle |1\rangle_P + \sqrt{\frac{P_l N_e}{N}}\sum\limits_{i=1}^N |N-1;N_e-1 \rangle _{\tilde{i}} |l\rangle_i |1\rangle_P $ where $P_{k}=\lambda_k e^{-\lambda_k}$ is the probability of losing/de-exciting $(k=l/de)$ an atom over the cat creation time, with $\lambda_{k}= \gamma_{(k)} \frac{N}{2} \tau_c$ (note that $ N_e \sim \frac{N}{2}$ since the cat creation happens on the equator of Bloch sphere) and $P_0=1-P_l-P_{de}$. Here we focus on the regime where the probability of a single atom decaying is sufficiently small that the probability of two atoms decaying can be ignored.
Tracing over the lost qubit and the photonic state one obtains the density matrix $\rho_c=P_0 \rho_0 + \frac{P_l }{N}\sum\limits_{i=1}^N\rho^i_l+ P_{de} \rho_{de}$, where $\rho_0$ and $\rho_{de}$ are in the symmetric subspace with total spin $(J=\frac{N}{2})$, while the $\rho^i_l$ are in $N$ different symmetric subspaces with total spin $(J=\frac{N-1}{2})$. The $\rho_0$ component corresponds to the ideal cat state. All the other components have very small fidelity with ideal cat states, primarily because the decay happens at a random point in time, which leads to dephasing. For example, de-excitation of an atom at $(t_{de}\in[0,\tau_c])$, leads to
\begin{eqnarray}\label{expansion}
&&| \psi^{de}_c(t_{de}) \rangle= 2^{-N/2}\sum\limits_{N_{e}=1} ^{N} \sqrt{C(N,N_e)} \\ \nonumber
&& e^{-iE_{(N_e-1)}(\tau_c-t_{de})} \sqrt{N_e} e^{-iE_{(N_e)}t_{de}}|N;N_{e}-1\rangle,
\end{eqnarray}
where $E_{(N_e-1)}$ represents the dressed state energy of $(N_e-1)$ excited atoms, see Eq.~(\ref{Kerr-type-Eq}).
Inserting the expressions for $E_{N_e}$ and $E_{N_e-1}$, one sees that de-excitation adds a linear term $(iN_e \chi_0 t_{de})$ to the phase. This creates a rotation around the $z$ axis on the Bloch sphere. The uncertainty in the time of decay $t_{de}$ therefore dephases the cat state, resulting in the formation of a ring on the equator of the Bloch sphere, which has a small overlap with the ideal cat state. The fidelity of the resulting density matrix compared to an ideal cat state in the same subspace (which corresponds to the case where de-excitation happens at $t_{de}=0$) can be written as $F_{de}=\frac{1}{\tau_{c}} \intop^{\tau_{c}} _{0} |\langle \psi^{de}_c(t_{de}) | \psi^{de}_c(t_{de}=0) \rangle|^2 dt_{de}$. When the size of the cat state is increased from $N=10$ to $N=160$, the fidelity of the generated cat in the de-excited subspace is reduced from $F_{de}=0.2$ to $F_{de}=0.045$. The fidelity in each of the $N$ subspaces where one atom was lost can be calculated in a similar way, yielding equivalent results. The total fidelity in the presence of Rydberg decoherence is then $F_{dc}=P_0+P_lF_l+P_{de}F_{de} \approx P_0$.
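The dephasing-ring argument can be illustrated numerically. Up to an $N_e$-independent global phase, the overlap integrand reduces to $|\sum_{N_e} w_{N_e} e^{-i\chi_0 N_e t_{de}}|^2$ with weights $w_{N_e}\propto C(N,N_e)\,N_e$ from the $\sqrt{N_e}$ de-excitation amplitude. The sketch below (our own simplified model, with $\chi_0$ set to 1 so that $\tau_c=\pi$) confirms that the fidelity in the de-excited subspace drops as the cat grows.

```python
import cmath, math

def f_de(N, steps=2000):
    """Average overlap of the de-excited state with its t_de = 0 version.

    Weights ~ C(N, Ne) * Ne; only the Ne-dependent phase
    exp(-i * chi0 * Ne * t_de) survives (chi0 = 1, tau_c = pi).
    """
    w = [math.comb(N, ne) * ne for ne in range(1, N + 1)]
    z = sum(w)
    w = [x / z for x in w]
    acc = 0.0
    for k in range(steps):  # midpoint rule on [0, pi]
        t = math.pi * (k + 0.5) / steps
        s = sum(x * cmath.exp(-1j * ne * t) for ne, x in zip(range(1, N + 1), w))
        acc += abs(s) ** 2 / steps
    return acc

f10, f160 = f_de(10), f_de(160)  # fidelity shrinks with cat size
```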
About 5\% of the Rydberg decoherence events transfer the atom back to the excited state, which acts as dephasing (modeled by a Lindblad operator $|\tilde{e} \rangle \langle \tilde{e}|$). The dephasing operator commutes with the Hamiltonian for cat state creation. Its effect can therefore be studied by having it act on the final cat state. For example, it can cause a sign flip of $|e\rangle$ for the first atom, resulting in the state $\frac{1}{\sqrt{2}}[(\frac{|e \rangle+i|g \rangle}{\sqrt{2}})(\frac{|e \rangle-i|g \rangle}{\sqrt{2}})^{\otimes(N-1)}+(\frac{|e \rangle-i|g \rangle}{\sqrt{2}})(\frac{|e \rangle+i|g \rangle}{\sqrt{2}})^{\otimes (N-1)}]$. Applying the $\pi/2$ rotation results in a new energy cat $\frac{|g\rangle |e\rangle^{\otimes(N-1)}+|e\rangle|g\rangle^{\otimes(N-1)}}{\sqrt2}$, which is clearly still a large superposition in energy. So the effect of dephasing errors is relatively benign.
Moreover, given the small relative rate of dephasing compared to loss and de-excitation,
the probability of having a sign flip over the cat creation time for the case with decoherence fidelity of $F_{dc}=0.8$ (considered in Fig.~\ref{Nvsn}) will only be 1$\%$.
In conclusion, the fidelity of the cat state is, to a good approximation, equal to the probability of not losing or de-exciting any qubits over the cat creation time, $F_{dc} =P_0=e^{-0.95 \frac{N}{2}\gamma_{\tilde{e}}\tau_c}$.
\section{Estimate of Realizable Cat Size}
\label{Cat Size}
Taking into account the mentioned imperfections, Fig.~\ref{Nvsn} shows the achievable cat size as a function of the principal number $n$. Up to $n \sim 80$, the size increases with $n$.
Higher $n$ leads to a stronger interaction, hence allowing weaker dressing, and to smaller loss, favoring the creation of larger cats. However, for $n \sim 80$ the diminishing spacing between neighboring Rydberg levels (which scales like $n^{-3}$) limits the detuning and hence the interaction strength, since $\chi_0=2w^4 \Delta$ and $w$ has to be kept small, see Fig.~\ref{wVsN}. As a consequence,
larger cat states cannot be achieved at higher principal numbers.
Here we justify the behavior of Fig.~\ref{Nvsn} in a more detailed scaling argument. For a constant fidelity the maximum achievable cat size $N$ at each principal number $n$ is limited by Rydberg decay, $F_{dc}=e^{-\lambda }$ where $\lambda = 0.95 \frac{N}{2} \tau_{c} \gamma_{\tilde{e}} $. Let us analyze how $\lambda$ scales with $N$ and $n$.
The Rydberg decay rate scales as
$\gamma_{|\tilde{e}\rangle} \varpropto w^{2} n^{-3}$.
In order to have a constant non-linearity fidelity of $F_{nl}=0.8$, the dressing strength $w$ has to scale like $N^{-0.84}$, see Fig.~\ref{wVsN}.
The cat creation time $\tau_c=\frac{\pi}{\chi_0} \varpropto w^{-4}\Delta^{-1}$ scales differently before and after the transition point $n \sim 80$. Before the transition point the scaling of $\Delta$ can be obtained by noting that the trap size is a fraction of the blockade radius, $\Delta=\frac{C_{6}}{2R_{b}^{6}} \varpropto \frac{n^{11}}{N^{2}}$, where the exact value of the fraction coefficient is determined by $F_{IH}$, see Fig.~\ref{Inhomogeneity Fig}. Therefore we conclude that $\lambda \varpropto \frac{N^{4.7}}{n^{14}}$, which states that before the transition point larger cat states are realizable by dressing to higher principal numbers, $N \varpropto n^{3}$ for constant fidelity. However, after the transition point the small level spacing imposes a limit on the detuning, $\Delta \varpropto n^{-3}$. Therefore after the transition point $\lambda \varpropto N^{2.7}$, which is independent of $n$. This prevents the realization of larger cat states at higher principal numbers.
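The exponent bookkeeping behind this argument can be made explicit:
\begin{align*}
\lambda &\propto N\,\tau_c\,\gamma_{\tilde e}\propto N\,\bigl(w^{-4}\Delta^{-1}\bigr)\bigl(w^{2}n^{-3}\bigr)=N\,w^{-2}\Delta^{-1}n^{-3},\qquad w\propto N^{-0.84},\\
\text{before the transition:}&\quad \Delta\propto n^{11}N^{-2}\ \Rightarrow\ \lambda\propto N^{1+1.68+2}\,n^{-11-3}=N^{4.7}\,n^{-14},\\
\text{after the transition:}&\quad \Delta\propto n^{-3}\ \Rightarrow\ \lambda\propto N^{1+1.68}\,n^{3-3}=N^{2.7}\,n^{0},
\end{align*}
so that holding $\lambda$ (and hence $F_{dc}$) fixed gives $N\propto n^{14/4.7}\approx n^{3}$ before the transition and an $n$-independent bound afterwards.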
One sees that superposition states of over 100 atoms are achievable with good fidelity. In Fig.~\ref{Nvsn} the interaction inhomogeneity is tuned to create less than $1\%$ infidelity. Dressing to an $S$ orbital is desirable due to its isotropic interaction in the presence of trap fields. In Fig.~\ref{Nvsn}, after the transition point in $n$ the detuning is chosen such that 90\% of the Rydberg component of the dressed state is $5sns \,^{3}S_{1}$. Note that without a cryogenic environment the maximum achievable cat size in Fig.~\ref{Nvsn} would be reduced from 165 to 120 atoms, see Sec.~\ref{BBR}.
\begin{figure}
\centering
\scalebox{0.34}{\includegraphics*[viewport=0 55 680 458]{Nvsn4.pdf}}
\caption{ (Color online) Maximum achievable cat size as a function of the principal number $n$ of the Rydberg state. Rydberg state decay is adjusted to cause 20\% infidelity. The interaction inhomogeneity is set to create less than 1\% infidelity, see Fig.~\ref{Inhomogeneity Fig}, and the higher-order nonlinearities are set to create 10\% (red circle), 20\% (purple plus) and 30\% (blue square) infidelity, see Fig.~\ref{wVsN}. The inset shows the required cat creation time as a function of $n$ for the case where the higher-order nonlinearities cause 10\% infidelity.} \label{Nvsn}
\end{figure}
\section{Experimental Realization}
\label{Realization}
Experimental implementation of our scheme seems feasible. Rydberg excitations in Strontium have been realized over a wide range up to $n=500$
\cite{Sr-Rydberg 1,Sr-Rydberg 2,Sr-Rydberg 3,Sr-Rydberg 4}.
Rydberg dressing of two atoms has been used to create Bell-state entangled atoms \cite{Dressing-realization1}. Recently Rydberg dressing of up to 200 atoms in an optical-lattice has been reported \cite{Dressing-realization2}, where the collective interaction was probed using interferometric techniques. Ref. \cite{Dressing-realization2} also identified a collective many-body decay process, which is however not a limiting factor for our scheme, as discussed in Sec.~\ref{collective decoherence}.
The Rydberg state $5sns \, ^{3}S_{1}$ is accessible from the $5s5p \, ^{3}P_{0}$ level with a $317\,$nm laser field. The required Rydberg transition Rabi frequency $\Omega_r/2\pi$ (up to 15 MHz) can be obtained with the tunable single-frequency solid state laser of Ref.~\cite{Dressing-laser}. The relatively large detuning values ($4\,$MHz$\,<\Delta/2\pi<340\,$MHz in Fig.~\ref{Nvsn}) make the interaction stable against Doppler shifts.
Fulfilling the adiabaticity condition discussed in Sec.~\ref{Ker-type} is not difficult.
In a highly adiabatic example, $\frac{\dot{\theta}}{E_+}=0.01$, the dressing laser can be switched from zero to $\frac{\Omega_r}{2\pi}= 15$ MHz over 18 ns (for $\frac{\Delta}{2\pi}=270$ MHz and 165 atoms). For this example, 99.991\% of the population returns to the ground state at the end of dressing, so adiabaticity is almost perfect. This adiabatic switching time of 18 ns is many orders of magnitude shorter than the related cat creation time of 1.4 ms.
Adequate interaction timing precision is also required to align the created cat on the equator of Bloch sphere as explained in Sec.~\ref{Steps Towards the Energy Cat}.
For the 165-atom cat state mentioned above, a timing precision of order $\delta \tau_c=\frac{2w^2\,\delta\phi}{\chi_0}=\frac{4\Delta}{5\sqrt{N}\Omega_r ^2} \approx 7.5$ ns is required for a phase precision of order $\delta\phi=\frac{1}{5\sqrt{N}}=\pi/150$.
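The order of magnitude of this timing window can be reproduced with the formula above; the sketch below uses the representative parameters of the adiabaticity example ($\Delta/2\pi = 270$ MHz, $\Omega_r/2\pi = 15$ MHz, $N=165$, our assumed working point), and recovers the nanosecond scale rather than any exact quoted value, since the precise numbers depend on the chosen working point.

```python
import math

# Assumed representative parameters from the adiabaticity example above
N = 165
delta = 2 * math.pi * 270e6      # detuning Delta (rad/s)
omega_r = 2 * math.pi * 15e6     # Rydberg Rabi frequency (rad/s)

# delta_tau_c = 4 Delta / (5 sqrt(N) Omega_r^2): required timing precision
dtau = 4 * delta / (5 * math.sqrt(N) * omega_r ** 2)
dphi = 1 / (5 * math.sqrt(N))    # corresponding phase precision (rad)
```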
The Husimi Q function can be reconstructed based on tomography, i.e. counting atomic populations after appropriate rotations on the Bloch sphere. Modern fluorescence methods can count atom numbers in the required range with single-atom accuracy \cite{Dressing-realization2,detection}.
\section{Other sources of imperfection}
\label{More imperfection}
\subsection{Effects of Atomic Motion in the Optical Lattice}
\label{atomic motion}
Laser manipulation of the atomic state leads to phases that depend on the atomic position. Atomic motion could therefore lead to decoherence. To suppress this effect, in the present proposal the atoms are confined to the ground states of the optical lattice traps. As a consequence, all position-dependent phase factors are constant over the course of the experiment and can be absorbed into the definition of the excited states. We now explain these points in more detail. Let us consider the $j^{th}$ atom, and let us assume that it is initially in the ground state (zero-phonon state) of its optical lattice site. We will denote the corresponding state $|g\rangle_j |0\rangle_j$. Applying the part of the Hamiltonian that is due to the laser to this
state gives $(\Omega_e(t)\, e^{ik\hat{x}_j}|e\rangle_j\langle g|)\,|g\rangle_j|0\rangle_j=\Omega_e(t)|e\rangle_je^{ik\hat{x}_j}|0\rangle_j$.
We can rewrite
the position operator $\hat{x}_j$ as the sum of the constant position of the $j^{th}$ site of the
trap $(x_{0j})$ plus a relative position operator $\hat{\xi}_j=s(\hat{a}_j^{\dagger}+\hat{a}_j)$, where $s=\sqrt{\frac{\hbar}{2m\omega_{tr}}}$ is the spread of the ground state wave function, $\omega_{tr}$ is the trap frequency and $(\hat{a}_j,\hat{a} _j^{\dagger})$ are the phononic annihilation-creation operators of the $j$th atom. In the Lamb-Dicke regime $(\eta=\frac{ks}{\sqrt{2}}\ll1)$ one can expand the exponential to get
\begin{equation}
e^{ik\hat{x}_j}=e^{ikx_{0j}}e^{ik\hat{\xi}_j}=e^{ikx_{0j}}(1+i\eta(\hat{a}_j+\hat{a}_j^{\dagger} )+O(\eta^2)).
\end{equation}
The phase factor $e^{ikx_{0j}}$ is constant over the course of the experiment and can be absorbed into the definition of the atomic basis states by defining $|e'\rangle_j \equiv e^{ikx_{0j}}|e\rangle_j$. The Hamiltonian describing the laser excitation can now be written
in the ordered basis $\{|g,0\rangle_j,\, |e',0\rangle_j,\, |e',1\rangle_j\}$ as
\begin{equation}
H_j=\left(\begin{array}{ccc}
0 & \Omega_{e} & \eta\Omega_{e}\\
\Omega_{e} & 0 & 0\\
\eta\Omega_{e} & 0 & \omega_{tr}
\end{array}\right).
\end{equation}
Starting from the spin and motional ground state $|g,0\rangle_j$, the probability of
populating the state $ |e',1\rangle_j$, corresponding to the creation of a phonon, will be negligible if $\Omega_e \eta \ll \omega_{tr}$. With the parameters that we considered in our proposal ($\Omega_e\sim 1$ kHz, $\eta=0.1, \frac{\omega_{tr}}{2\pi} \sim 400$ kHz) \cite{3D Lattice} the population of $ |e',1\rangle_j$ will be eight orders of magnitude smaller than the population in the motional ground state.
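This suppression is a two-line estimate: in lowest-order perturbation theory the phonon-creating population is of order $(\eta\Omega_e/\omega_{tr})^2$. The sketch below (treating $\Omega_e$ and $\omega_{tr}$ consistently as linear frequencies, our assumption) reproduces the eight-orders-of-magnitude figure quoted above.

```python
import math

eta = 0.1            # Lamb-Dicke parameter
omega_e = 1e3        # qubit Rabi frequency (Hz), value from the text
omega_tr = 400e3     # trap frequency (Hz)

# perturbative estimate of the population in |e',1>_j (one phonon created)
p_phonon = (eta * omega_e / omega_tr) ** 2   # ~6e-8, i.e. eight orders below 1
```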
\subsection{Effects of High Density}
\label{high density}
The relatively small lattice spacing of order $200\,$nm might raise concerns about molecule formation and level mixing. At high atomic densities there is a potential loss channel via Rydberg molecule formation \cite{Pfau-Molecule}. Molecule formation only occurs when the attractive potential due to Rydberg electron-neutral atom scattering moves the two binding atoms to a very small separation (of order 2 nm), where the binding energy of the molecules can ionize the Rydberg electron and form a Sr$_2^{+}$ molecule \cite{Ott-Molecule}. Without such mass transport, stepwise decay or ionization of the Rydberg atom is ruled out by the discreteness of the Rydberg spectrum, as has been discussed and experimentally tested in Ref.~\cite{Pfau-Molecule}: even at high densities the small molecular binding energy of nearby atoms is orders of magnitude smaller than the spacing to the closest Rydberg levels for all principal numbers. The occurrence of ion pair formation is also highly unlikely in this system \cite{Ott-Molecule}. We propose that confining the atoms in an optical lattice can prevent the described mass transport and completely close the molecule formation loss channel. High atomic density can also lead to strong level mixing at short distances \cite{level mixing,level mixing2}. However, the experiment of Ref.~\cite{level mixing 3} shows that the plateau-type interaction can persist in the presence of strong level mixing because most molecular resonances are only weakly coupled to the Rydberg excitation laser.
\subsection{Effects of Blackbody Radiation}
\label{BBR}
Blackbody radiation (BBR) could reduce the lifetime by transferring the Rydberg state population to neighboring Rydberg levels (with both higher and lower principal numbers $n$) as illustrated in Fig.~\ref{BBR-figure}a. The BBR-induced transition probability is given by the Einstein B-coefficient $\Gamma_{BBR}=\sum\limits_{f} B_{if}=\sum\limits_{f} \frac{A_{if}}{e^{\frac{\hbar \omega_{if}}{k_B T}}-1}$ \cite{A-coefficient1,A-coefficient2,A-coefficient3}, where $T$ is the environment temperature, $k_B$ is the Boltzmann constant and both $\omega_{if}$ and $A_{if}$ are defined in Sec.~\ref{A coefficient}.
At the environment temperatures of 300K, 95K \citep{Cryogenic1} and 3K \citep{Cryogenic2}, including the BBR-induced transitions increases the total decoherence rate $\Gamma_{\tilde{e}}$ by 120\%, 40\% and 1\% (see Fig.~\ref{BBR-figure}b) for $n \approx 80$, which results in maximum achievable cat sizes of 120, 150 and 165 atoms respectively (considering $F_{nl}=0.7, \, F_{dc}=0.8$).
Note that cryogenic environments with 95K and 1K were used in a Strontium lattice clock experiment \citep{Cryogenic1} and in a cavity QED experiment with Rydberg atoms \cite{Cryogenic2} respectively.
BBR could also disturb the Ramsey-type interferometry used for detecting energy decoherence by producing an AC stark shift; this effect is quantified in section \ref{Energy-decoherence}. Furthermore, BBR-induced decoherence could be inhomogeneous due to temperature inhomogeneities in the environment. This would introduce unwanted coupling to non-symmetric Dicke states in the cat creation process. The use of a cryogenic environment significantly suppresses these effects as well.
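The size of the cryogenic suppression can be sketched from the thermal occupation factor $1/(e^{\hbar\omega_{if}/k_B T}-1)$ that enters the B-coefficient. The example below assumes a representative transition frequency of 50 GHz to a neighboring Rydberg level near $n\approx 80$ (our assumption, not a value from the text) and shows that cooling from 300 K to 3 K suppresses the stimulated rate by roughly two orders of magnitude, consistent with the trend in Fig.~\ref{BBR-figure}.

```python
import math

h = 6.626e-34    # Planck constant (J s)
kb = 1.381e-23   # Boltzmann constant (J/K)
f = 50e9         # assumed transition frequency to a neighboring level (Hz)

def n_bar(temp):
    """Thermal photon occupation 1/(exp(h f / kB T) - 1) at temperature temp (K)."""
    return 1 / math.expm1(h * f / (kb * temp))

suppression = n_bar(300) / n_bar(3)   # roughly two orders of magnitude
```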
\begin{figure}
\scalebox{0.34}{\includegraphics*[viewport=1 100 800 480]{BBR2.pdf}}
\caption{ (Color online) Depopulation of Strontium Rydberg levels due to blackbody radiation (BBR) induced transitions. a) BBR-induced transition rates (Einstein-B coefficients) from $5s80s\,^3S_1$ to the neighboring $5snp\, ^3P_2$ (dark blue), $5snp\, ^3P_1$ (light blue), $5snp\, ^3P_0$ (blue) levels. The sum of these transition rates gives the total BBR-induced depopulation rate $\Gamma_{BBR}$. The inset is a 20 times enlarged view. b) Rydberg depopulation rates due to spontaneous decay ($\Gamma_s$ shown in blue diamond) and BBR-induced transitions ($\Gamma_{BBR}$) at environment temperatures of 300K (red circle), 95K \citep{Cryogenic1} (purple circle), and 3K \citep{Cryogenic2} (green circle) as a function of the principal number. The use of a cryogenic environment significantly suppresses the unwanted effects of BBR.} \label{BBR-figure}
\end{figure}
\subsection{Effects of Collective Many-body Decoherence}
\label{collective decoherence}
BBR-induced transitions to neighboring Rydberg levels (see Fig.~\ref{BBR-figure}a) can also lead to collective many-body decoherence \cite{LineBroadening, Dressing-realization2}. The interaction between the target $nS$ Rydberg level and some of the populated neighboring $n'P$ levels is of a strong long-range dipole-dipole type due to the formation of F\"{o}rster resonances. This strong interaction causes an anomalous broadening \cite{LineBroadening}. The mentioned decoherence process only starts after the first BBR-induced transition occurs.
However, the weak dressing strength and small ensemble size $(N<200)$ in our scheme make the probability of populating the target Rydberg state, and consequently the neighboring Rydberg levels, very small. For example, at environment temperatures of 300K, 95K and 3K and for dressing to $n\approx 80$, the probabilities of not populating the strongly interacting neighboring Rydberg levels over the cat creation time, for cat sizes of 120, 150 and 165 atoms, are $P_{BBR}(0)=\exp(-\frac{N}{2}w^2 \Gamma_{BBR} \tau_c)=$ 98.63\%, 99.26\% and 99.96\% respectively. It has been observed in the realization of many-particle Rydberg dressing \cite{Dressing-realization2} that when the transition probability is low enough (of the order of $P_{BBR}(0)\geq 82\%$, as can be calculated from the information provided in Ref.~\cite{Dressing-realization2}) the many-body decoherence effects are negligible and the decoherence rate is dominated by the Rydberg depopulation rate (see Sec.~\ref{Decoherence}).
\section{Testing Energy Decoherence}
\label{Energy-decoherence}
In the context of modifications of quantum physics, decoherence in the energy basis is quite a natural possibility to consider \cite{Milburn,Gambini,Blencowe}. It is usually introduced as an additional term in the time evolution for the density matrix that is quadratic in the Hamiltonian, $\frac{d\rho}{dt}=\frac{i}{\hbar}[H,\rho]-\frac{\sigma}{\hbar^2}[H,[H,\rho]]$, which leads to a decay of the off-diagonal terms of the density matrix in the energy basis according to $\rho_{nm}(t)=\rho_{nm}(0)e^{-i\omega_{nm}t}e^{-\gamma_E t}$ \cite{Gambini}, where $ \gamma_E=\sigma \omega_{nm}^2$. Here $\omega_{nm}$ is the frequency associated with the energy difference of the two components, and $\sigma$ can be interpreted as a timescale on which time is effectively discretized, e.g. related to quantum gravity effects. It is of interest to establish experimental bounds on the size of $\sigma$, which could in principle be as small as the Planck time ($10^{-43}$ s).
The corresponding decoherence rate for the energy cat in this proposal would be $\gamma_{E}=\sigma (\frac{N\Delta E}{\hbar})^{2}$, where $\Delta E$ is the energy difference between the ground and excited state of each qubit, and $N$ is the cat size. To detect the energy decoherence one prepares the energy cat state, followed by a waiting period. To observe the decoherence effect, one detects the Ramsey fringes for the revival. The visibility of the Ramsey interference is also sensitive to other decoherence sources, where in the absence of dressing laser the dominant ones are the trap loss rate $\Gamma$, which reduces the visibility by a factor $\exp(-N \Gamma t)$, and phase diffusion that is explained below.
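To put a scale on $\gamma_E$, the sketch below evaluates it for $\sigma$ at the Planck time and a 165-atom cat, taking $\Delta E/\hbar$ from the 698 nm Sr clock transition (the wavelength is our assumption for the qubit energy splitting). Even with the $N^2$ enhancement the resulting decoherence time is years, which illustrates why such an experiment bounds $\sigma$ from above rather than resolving Planck-scale discreteness directly.

```python
import math

t_planck = 5.39e-44            # Planck time (s), lower bound considered for sigma
c = 3.0e8                      # speed of light (m/s)
lam = 698e-9                   # assumed Sr 1S0-3P0 clock wavelength (m)
omega = 2 * math.pi * c / lam  # DeltaE / hbar per atom (rad/s)
N = 165

gamma_e = t_planck * (N * omega) ** 2   # energy-decoherence rate (1/s)
t_dec = 1 / gamma_e                     # corresponding decoherence time (s)
```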
The large energy difference of the cat state increases the sensitivity of the Ramsey interferometry that we use for the detection of energy decoherence. Therefore, it is important to consider the effect of fluctuations in the detuning between the laser and the atomic transition. Let us first note that the cat state is more sensitive to multi-particle (correlated) than to single-particle (uncorrelated) noise, which results in a phase diffusion affecting the visibility of the Ramsey fringes by factors of $e^{-N^2\delta_c^2t^2}$ and $e^{-N \delta_{uc}^2t^2}$, respectively \cite{phase-diffusion}. Comparing the two cases, correlated fluctuations must be stabilized $\sqrt{N}$ times better than uncorrelated ones. The most important source of noise in our system is the fluctuation of the laser frequency. A probe laser linewidth as narrow as 26 mHz \cite{laser1} has been achieved in optical atomic clock experiments, and there are proposals for much smaller linewidths \cite{laser2,laser3} with recent experimental progress \cite{laser4},
justifying our example of a 10 mHz linewidth below.
Other sources of multi-particle and single-particle noise have been well studied in the context of Strontium atomic clocks \cite{atomic-clocks-noise1,atomic-clocks-noise2} and are comparatively negligible. Here we address a few of them in our scheme.
One noise source is the intensity fluctuation of the trap field; however, using the magic wavelength makes the atomic transition frequency independent of the trap laser intensity. Considering the variation of the Stark shifts due to the trap laser as a function of frequency at the magic wavelength \cite{Blue-detuned}, the relative scalar light shifts can be kept within 0.1 mHz uncertainty by applying a trap laser with a 1 MHz linewidth. In addition to the scalar light shift, the inhomogeneous polarization of trap fields in 3D optical lattices can result in an inhomogeneous tensor light shift \cite{W. Happer}; however, the use of the bosonic isotope $^{88}$Sr with zero magnetic moment cancels the tensor light shift \cite{T. Akatsuka} in our scheme.
Environmental temperature fluctuations ($\delta T$) also lead to atomic frequency fluctuations proportional to $T^3 \delta T$, due to the BBR-induced light shift \cite{atomic-clocks-noise1}. This is another reason why a cryogenic environment is advantageous. For example, controlling an environment temperature of $95$K \cite{Cryogenic1} to within $\delta T=1$K keeps the BBR-induced frequency noise below $1$ mHz.
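To illustrate the size of this effect (a rough sketch; the nominal BBR shift magnitude of $\sim 2$ Hz at 300 K for the Sr clock transition is an assumed input here, not a value taken from this work), the noise induced by $\delta T$ follows from differentiating the $T^4$ BBR shift:

```python
def bbr_noise_hz(T, dT, c300=2.0):
    """Frequency noise |d nu/dT| * dT for a BBR shift nu(T) = -c300*(T/300)^4 Hz.
    c300 ~ 2 Hz is an assumed nominal magnitude for the Sr clock transition."""
    return 4.0 * c300 * (T / 300.0) ** 3 * dT / 300.0

noise = bbr_noise_hz(T=95.0, dT=1.0)
assert noise < 1e-3                        # below 1 mHz, as stated in the text
assert bbr_noise_hz(300.0, 1.0) > noise    # room temperature is much worse
```

The $T^3$ scaling of the derivative is what makes the cryogenic environment so effective: the same $\delta T = 1$ K produces roughly $30$ times more frequency noise at 300 K than at 95 K.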
A conservative estimate of the experimentally measurable energy decoherence rate can be obtained by considering the case where the energy decoherence dominates all other decoherence sources during the waiting period. Increasing the cat size $N$ is helpful because it allows one to enhance the relative size of the energy decoherence contribution. For example, choosing $t \propto N^{-1}$ keeps the loss and phase diffusion contributions fixed, while the energy decoherence still increases proportionally to $N$.
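The scaling argument can be made concrete with a small numerical sketch (all rates below are placeholder values, not the experimental parameters): with $t = t_0/N$, the loss and phase-diffusion factors are invariant under $N \to 2N$, while the energy-decoherence exponent doubles:

```python
import math

def visibility_factors(N, t, Gamma=0.01, delta_c=1e-3, sigma_rate=1e-6):
    """Illustrative visibility factors (placeholder parameter values):
    trap loss exp(-N Gamma t), correlated phase diffusion exp(-N^2 dc^2 t^2),
    and energy decoherence exp(-gamma_E t) with gamma_E = sigma_rate * N^2."""
    return (math.exp(-N * Gamma * t),
            math.exp(-(N * delta_c * t) ** 2),
            math.exp(-sigma_rate * N**2 * t))

t0 = 1.0
loss_a, diff_a, edec_a = visibility_factors(100, t0 / 100)
loss_b, diff_b, edec_b = visibility_factors(200, t0 / 200)

# Loss and phase diffusion are unchanged under N -> 2N with t = t0/N ...
assert math.isclose(loss_a, loss_b) and math.isclose(diff_a, diff_b)
# ... while the energy-decoherence exponent doubles (grows linearly with N).
assert math.isclose(math.log(edec_b), 2 * math.log(edec_a))
```

This is the quantitative content of choosing $t \propto N^{-1}$: the only exponent that keeps growing with $N$ is the one proportional to $\gamma_E \propto N^2$.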
Using a cat state with $N=165$ atoms (see Fig. 4), which corresponds to $N \Delta E=300$ eV, assuming a laser linewidth of $10$ mHz (see above), and considering a trap loss rate of $\Gamma=10$ mHz \cite{Pfau-Molecule}, the minimum detectable discretization time scale $\sigma$ is of order $10^{-34}$ s. This would improve the measurement precision by 4 and 11 orders of magnitude compared to what is possible based on Refs.~\cite{ion-Cat} and \cite{Ghobadi}, respectively.
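A minimal back-of-envelope version of this estimate, under the simplifying assumption that the detection threshold is set by $\gamma_E \approx N\Gamma$ (the optimization over waiting times in the text is more conservative), reads:

```python
HBAR_EV_S = 6.582119569e-16   # reduced Planck constant, eV*s

N_dE = 300.0                  # N * DeltaE in eV, for the N = 165 cat
Gamma = 0.01                  # trap loss rate, s^-1 (10 mHz)
N = 165

# Energy decoherence rate gamma_E = sigma * (N*DeltaE/hbar)^2; the smallest
# detectable sigma is roughly where gamma_E matches the competing rate N*Gamma.
competing_rate = N * Gamma                           # s^-1
sigma_min = competing_rate / (N_dE / HBAR_EV_S) ** 2

# Order of magnitude only; still many orders above the Planck time (~1e-43 s).
assert 1e-37 < sigma_min < 1e-33
```

Depending on how the detection threshold is modeled, this crude estimate lands within one to two orders of magnitude of the quoted $10^{-34}$ s.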
\section{Introduction}
The last few years have witnessed a renewed interest in hadronic physics,
originating in part from many new findings in hadron spectroscopy, the most
conspicuous being the narrow pentaquark states reported by more than ten
independent experimental groups~\cite{discovery,posCLAS,SAPHIR,posex,STAR,posexheavy,newposex}.
The narrow state predicted by a chiral soliton model~\cite{DPP97} provided the initial motivation
for the search of the
$\Theta^+$ pentaquark and its possible partners. After the reports of
null results started to accumulate~\cite{nullex,negCLAS,nullexheavy} the initial optimism
declined, and the experimental situation remains ambiguous to the
present. The increase in statistics led to some recent new claims for
positive evidence~\cite{newposex}, while the null
result~\cite{negCLAS} by CLAS is especially significant because it
contradicts their earlier positive result~\cite{posCLAS}, suggesting
that at least in their case the original claim was an artifact of
low statistics. All this experimental activity spurred a great amount of
theoretical work in all kinds of models for hadrons and a renewed
interest in soliton models. There is considerable uncertainty in
model calculations that could be reduced with more
experimental input, such as the
spin and parity of the reported exotic states~\cite{qn} or the possible
existence of spin-flavor partners~\cite{STAR,Capstick:2003iq}. Lattice QCD calculations
are constantly improving, but the situation also remains
ambiguous~\cite{Liu:2005yc,latt32}, in part due to the extrapolations to
light quark masses and the difficulty of disentangling scattering states from
bound states in a finite volume. Given the difficulties still faced by
first principle QCD calculations, the $1/N_c$
expansion~\cite{'tHooft:1973jz} of QCD, where $N_c$ is the number of
colors, provides a very useful analytical tool (for a recent account
see~\cite{Goity:2005fj}). It is based on the fundamental theory of the
strong interactions, and relates the chiral soliton model to the
more intuitive quark model
picture~\cite{Manohar:1984ys,Jenkins:2004tm,Kopeliovich:2006na,JW},
where the pentaquarks correspond to states with quark content $\bar q q^4$.
In the large $N_c$ limit, QCD has a contracted spin-flavor
symmetry $SU(2F)_c$~\cite{Largenspinflavor,DJM} in the baryon sector, also known as ${\cal K}$-symmetry,
which constrains the baryon mass spectrum and couplings.
The breaking of the spin-flavor
symmetry can be studied systematically in a $1/N_c$ expansion.
This approach has been applied to the ground-state $[\mathbf{56},0^+]$
baryons \cite{DJM,largenrefs,largen,heavybaryons}, and to their orbital
excitations~\cite{PY1,Goity,SU3,symml2,Cohen:2003jk,PiSc,Matagne:2004pm}.
The large $N_c$ expansion has also been applied to hybrid baryons \cite{CPY99} and more recently to
exotic baryons containing
both quarks and antiquarks~\cite{Jenkins:2004vb,CoLe,MW,PiSc2}.
In this paper we assume the existence of these exotic states, and investigate
their properties in the case that they have positive parity. A subset
of these states was considered in the $1/N_c$ expansion in Ref.~\cite{Jenkins:2004vb}.
Negative parity exotic states have been studied in~\cite{StWi,MW,PiSc2}.
A brief report of some results presented here has been given in~\cite{Pirjol:2006hg}.
We start by constructing the color singlet $\bar s q^{N_c+1}$ states by
coupling the spin-flavor, orbital and color degrees of freedom, all constrained by Fermi statistics.
The light exotic states we obtain are members of the ${\cal K}=\frac{1}{2}$
and ${\cal K}=\frac{3}{2}$ irreducible representations of the contracted spin-flavor
symmetry. This extends the analysis of \cite{Jenkins:2004vb}, which considered only the
first irreducible representation.
Similar states with one heavy antiquark exist, which can be labelled in the large $N_c$
limit by the conserved quantum number ${\cal K}_\ell$ associated with the light degrees of
freedom. In our case of the positive parity pentaquarks, there is only one tower with
${\cal K}_\ell = 1$, containing all nonstrange states for which the isospin $I$ and spin of the
light degrees of freedom $J_\ell$ satisfy $|I-J_\ell | \leq {\cal K}_\ell \leq I+J_\ell $.
An important difference with respect to the treatment of the strong decay
amplitudes done in Ref.~\cite{Jenkins:2004vb} is that we will keep the orbital
angular momentum explicit in the transition operator, which is required to get the
correct $N_c$ scaling of the relevant couplings.
Although the existence of pentaquarks is not yet established or completely
ruled out by experiments, one thing that seems fairly well established
is that if they exist they must be narrow, with a width of $\Gamma \le 1~{\rm MeV}$; otherwise
they would have been observed long
ago~\cite{bounds}. Explanations for this unusually narrow width vary.
Cancellations between coupling constants have been invoked in the context of the original
chiral soliton model \cite{DPP97}. This cancellation
has been argued to be exact in the large $N_c$ limit \cite{Praszalowicz:2003tc}.
However, a detailed comparison of different versions of the chiral soliton model
suggests that there is only one coupling constant to leading order and this cancellation
cannot take place \cite{Walliser:2005pi}. Recently, it has been argued \cite{CoHoLe}
that heavy pentaquark states $\bar Q q^4$
should be narrow in the combined $1/m_{Q}$ and $1/N_c$ limit. Unfortunately, the
experimental situation for the heavy pentaquark states $\bar Q q^4$ is also
inconclusive. The charmed pentaquark initially reported by the H1 Collaboration \cite{posexheavy}
has not yet been confirmed \cite{nullexheavy,Lee:2005pn}.
We agree with the conclusions of Refs.~\cite{Walliser:2005pi,Jenkins:2004vb} about
the existence of a single operator mediating $\Theta \to NK$ transitions at leading order,
but we find an overall suppression factor of $1/\sqrt{N_c}$. This gives the
$N_c$ scaling of the strong widths $\Gamma(\Theta \to NK) \sim O(1/N_c)$ for the
positive parity pentaquarks. The corresponding pion widths among different
$\Theta$ states scale like
$\Gamma(\Theta \to \Theta' \pi) \sim O(N_c^0)$ if the transition $\Theta \to \Theta'$
is allowed by phase space. Any states for which the pion modes are not allowed, which
include the lowest lying pentaquark, are thus predicted to be narrow
in the large $N_c$ limit.
The paper is organized as follows: in Sec.~2 we construct the complete set of the
positive parity pentaquarks and give their tower structure in the large $N_c$
limit. In Sec.~3 we discuss the strong couplings of the light pentaquarks in the language of
quark operators. Sec.~4 derives the large $N_c$
predictions for the ratios of strong couplings from a consistency condition for
$\pi + \Theta \to K + B$ scattering.
In Sec.~5 we discuss the heavy pentaquarks in the combined large $N_c$ and heavy quark
symmetry limit. Finally, Sec.~6 summarizes our conclusions.
\section{Constructing the states}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=4in]{positive5q.eps}
{\caption{\label{fig1}Schematic representation of the mass spectrum of the
positive parity pentaquarks. In the flavor symmetric large $N_c$ limit, all
these states are degenerate, organized into two irreducible representations of the
contracted symmetry, labelled by ${\cal K}=1/2, 3/2$. The splittings within
each tower are of order $\sim 1/N_c$. }}
\end{center}
\end{figure}
We start by discussing the construction of the exotic states, using the
language of the constituent quark model in the large $N_c$ limit.
The quantum numbers of a $\bar q q^{N_c+1}$
hadron are constrained by the fact that the $N_c+1$ quarks have to be in the
fundamental representation of the color $SU(N_c)$ group. Fermi symmetry
requires their spin-flavor-orbital wave function to be in the mixed symmetric
representation ${\cal MS}_{N_c+1} = [N_c,1,0,\cdots]$, where $[n_1,n_2,\cdots]$ gives the
number of boxes in the first, second, etc.\ rows of the corresponding Young tableau. This spin-flavor-orbital wavefunction can be decomposed into products of
irreps of $SU(2F)\otimes O(3)$, i.e. spin-flavor wavefunctions with
increasingly higher orbital angular momentum
\begin{eqnarray}
{\cal MS}_{N_c+1} \to (MS_{N_c+1}\,,L=0) \oplus (S_{N_c+1}\,, L=1) \oplus \cdots
\end{eqnarray}
The first term corresponds to negative parity exotic states.
Their properties have been considered in the $1/N_c$ expansion in
Refs.~\cite{MW,PiSc2}. The second term corresponds to states with a symmetric $SU(2F)$
spin-flavor wave function for the $q^{N_c+1}$ system, carrying one unit
of orbital angular momentum $L=1$. After adding in
the antiquark, this produces positive parity exotics. A subset of these
states was studied using the $1/N_c$ expansion in Ref.~\cite{Jenkins:2004vb}. We reconsider
them here, including all the states dictated by the contracted
$SU(6)_c$ symmetry.
Adopting a Hartree description, one can think of the $q^{N_c+1}$ system
as consisting of $N_c$ quarks in $s$-wave orbitals, plus one
excited quark in a $p$-wave orbital. We write this schematically as
\begin{eqnarray}\label{Hartree}
\Theta = \bar q q^{N_c} q^* \ ,
\end{eqnarray}
with $q^*$ denoting the orbitally excited quark. The spin-flavor of the
excited quark is correlated with that of the $ q^{N_c} $ system
such that the total system is in a symmetric representation of $SU(2F)$,
with $F$ the number of light quark flavors.
For $F=3$
the minimal set of these states contains two irreducible representations
of the contracted $SU(6)_{c}$ symmetry, with ${\cal K}=1/2$ and ${\cal K}=3/2$.
The first few states in each of these representations are \cite{PiSc2} (see Fig.~\ref{fig1})
\begin{eqnarray}\label{K12}
{\cal K} =\frac12: && \mathbf{\overline{10}}_\frac12\,,\quad
\mathbf{27}_{\frac12, \frac32}\,,\quad \mathbf{35}_{\frac32, \frac52}\,,\cdots \ , \\
\label{K32}
{\cal K} =\frac32: && \mathbf{\overline{10}}_\frac32\,,\quad
\mathbf{27}_{\frac12, \frac32, \frac52}\,,\quad
\mathbf{35}_{\frac12, \frac32, \frac52, \frac72}\,,\cdots\,.
\end{eqnarray}
We use the ${\cal K}$ label to denote an entire $SU(6)_{c}$ representation
by the $SU(4)_c$ multiplets containing the states
with maximal strangeness, sitting at the top of the corresponding
weight diagrams of $SU(3)$.
For the case considered in (\ref{K12}), (\ref{K32}) these are the strangeness ${\cal S}=+1$
states with quark content $\bar s q^{N_c+1}$. We recall here
that an irreducible representation of $SU(4)_c$
(tower with fixed strangeness) is labelled by ${\cal K}=0,1/2,1,\dots$ and contains all states
with spin $J$ and isospin $I$ satisfying $|I-J| \leq {\cal K} \leq I+J$.
The first set of ${\cal K}=1/2$ states has been considered in Ref.~\cite{Jenkins:2004vb}.
The treatment adopted here can
describe both towers.
In this paper we will also consider the ${\cal K}=3/2$ tower in detail.
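The multiplet contents of the two towers in Eqs.~(\ref{K12}) and (\ref{K32}) follow directly from the rule $|I-J| \leq {\cal K} \leq I+J$. A short enumeration (a sketch, taking the top-of-weight-diagram isospins $I=0,1,2$ for $\overline{\mathbf{10}}$, $\mathbf{27}$, $\mathbf{35}$) reproduces them:

```python
from fractions import Fraction as F

def tower_spins(K, I, two_J_max=9):
    """Hadron spins J (half-integer) allowed at isospin I in the K-tower:
    |I - J| <= K <= I + J."""
    return [F(n, 2) for n in range(1, two_J_max + 1, 2)
            if abs(I - F(n, 2)) <= K <= I + F(n, 2)]

# K = 1/2 tower: 10bar_{1/2}, 27_{1/2,3/2}, 35_{3/2,5/2}
assert tower_spins(F(1, 2), 0) == [F(1, 2)]
assert tower_spins(F(1, 2), 1) == [F(1, 2), F(3, 2)]
assert tower_spins(F(1, 2), 2) == [F(3, 2), F(5, 2)]

# K = 3/2 tower: 10bar_{3/2}, 27_{1/2,3/2,5/2}, 35_{1/2,3/2,5/2,7/2}
assert tower_spins(F(3, 2), 0) == [F(3, 2)]
assert tower_spins(F(3, 2), 1) == [F(1, 2), F(3, 2), F(5, 2)]
assert tower_spins(F(3, 2), 2) == [F(1, 2), F(3, 2), F(5, 2), F(7, 2)]
```

The same function with integer $J_\ell$ values reproduces the ${\cal K}_\ell = 1$ tower of the heavy pentaquarks quoted earlier.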
As the antiquark mass $m_Q$ is increased, the two towers become
closer in mass, split only by effects of order
$O(1/m_Q)$ as a consequence of heavy quark symmetry \cite{IsWi}. This corresponds to the charmed or bottom exotic states $\bar Q q^{N_c+1}$,
with $Q = c,b$. A more appropriate description for these states is given \cite{PiSc2} in terms of
one tower for the light degrees of freedom with ${\cal K}_\ell=1$
\begin{eqnarray}\label{set1}
{\cal K}_\ell =1: && \mathbf{\overline{6}}_1\,,\quad
\mathbf{15}_{0,1,2}\,,\quad \mathbf{15'}_{1,2,3}\,,\cdots
\end{eqnarray}
where the subscript denotes the spin of the light degrees of freedom $J_\ell$.
Each of these multiplets corresponds to a heavy quark spin doublet, with
hadron spin $J = J_\ell \pm 1/2$, except for the singlets with $J_\ell = 0$.
The properties of
these states are studied below in Sec.~\ref{secheavy}.
\begin{figure}
\begin{center}
\begin{picture}(300,100)(0,0)
\put(0,80){$|[L,(S_{\bar q}, S_q)S]JI\rangle$}
\put(0,60){quark model states}
\put(200,80){$|[(L,S_{\bar q}){\cal K}, S_q]JI\rangle$}
\put(220,60){tower states}
\put(100,20){$|[(L,S_q)J_\ell,S_{\bar q}]JI\rangle$}
\put(90,00){heavy pentaquarks}
\put(110,80){\vector(1,0){70}}
\put(125,60){Eq.~(\ref{recoupl})}
\put(50,50){\vector(3,-1){50}}
\put(20,35){Eq.~(\ref{recoupl2})}
\put(240,50){\vector(-3,-1){50}}
\end{picture}
{\caption{\label{fig2}The three possible couplings of the angular momenta in a
pentaquark state with orbital excitation. The connection between the
basis states is given by recoupling relations, given in the text. }}
\end{center}
\end{figure}
Next we discuss the relation between the different coupling schemes
when constructing the pentaquark states $ |\Theta; JI \rangle $ in terms of spin and
orbital states. They are obtained by combining the system of $N_c+1$ light quarks
in a spin-flavor symmetric state $|S_q = I\rangle$ with the
orbital angular momentum $|L=1\rangle$ and the antiquark $|S_{\bar q} = \frac12\rangle$
\begin{eqnarray}
|\Theta; JI \rangle \in |q^{N_c+1}; S_q = I\rangle \otimes
|L=1\rangle \otimes |\bar q; S_{\bar q}=\frac12 \rangle \ .
\end{eqnarray}
The total hadron spin $\vec J$ is thus given by
\begin{eqnarray}
\vec J = \vec S_q + \vec S_{\bar q} + \vec L \ .
\end{eqnarray}
The three angular momenta $\vec S_q , \vec S_{\bar q}, \vec L$ can be
coupled in several ways, which give different pentaquark states (see Fig.~\ref{fig2}).
The large $N_c$ QCD states are obtained by coupling these three vectors
in the order $ \vec L + \vec S_{\bar q} = \vec {\cal K}$,
$\vec {\cal K} + \vec S_q = \vec J$, with ${\cal K}$ taking the two possible values ${\cal K}=1/2, 3/2$.
These states will be denoted $|[(L,S_{\bar q}){\cal K}, S_q]JI;m\alpha\rangle$,
with $I=S_q$, and can be identified in the large $N_c$ limit with the
two towers
corresponding to ${\cal K}=1/2, 3/2$.
Another possible choice for the pentaquark states involves
coupling first $ \vec S_{\bar q}+ \vec S_q =\vec S$,
with $\vec S$ the total spin of the quark-antiquark system.
In a second step, the spin $\vec S$ is coupled with the orbital angular momentum $\vec L$
as $\vec L + \vec S = \vec J$, with $\vec J$ the total spin of the hadron.
We will denote these
states as $|[L,(S_{\bar q}, S_q)S]JI; m\alpha\rangle$; they are
the most convenient choice for quark model computations of matrix elements. Note that
this coupling scheme is arbitrary, since $S$ is not a good quantum number. On the other
hand, ${\cal K}$ is the quantum number that labels the physical states
in the well-defined large $N_c$ limit of QCD.
The connection between the tower states and the quark model states is
given by the usual recoupling formula for 3 angular momenta
\begin{eqnarray}\label{recoupl}
&& |[(L,S_{\bar q}){\cal K}, S_q]JI;m\alpha\rangle
= (-)^{I+1/2+L+J} \\
&& \hspace{2cm} \times \sum_{S=I\pm 1/2} \sqrt{[S] [{\cal K}]}
\left\{
\begin{array}{ccc}
I & \frac12 & S \\
L & J & {\cal K} \\
\end{array}
\right\}
|[L,(S_{\bar q}, S_q)S]JI; m\alpha\rangle \ , \nonumber
\end{eqnarray}
where $[S]=2 S+1$, etc.
This recoupling relation fixes the mixing matrix of the pentaquark states in the
large $N_c$ limit. This is analogous to a result found for the $\mathbf{70}^-$ orbitally
excited baryons,
for which the
mixing angles are determined in the large $N_c$ limit, up to configuration mixing
effects \cite{PY1}.
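As a consistency check of this identification (a sketch using SymPy's Wigner $6j$ symbols), one can verify that the recoupling coefficients of Eq.~(\ref{recoupl}) form an orthogonal mixing matrix, so the two tower states are orthonormal combinations of the two quark model states:

```python
from sympy import Rational, sqrt, simplify, eye, Matrix
from sympy.physics.wigner import wigner_6j

half = Rational(1, 2)

def recoupling(I, L, J):
    """Mixing matrix c[K][S] between tower states (rows, K = 1/2, 3/2) and
    quark-model states (columns, S = I -/+ 1/2), from the 6j recoupling."""
    Ks = [k for k in (half, Rational(3, 2)) if abs(L - half) <= k <= L + half]
    Ss = [s for s in (I - half, I + half) if s >= 0]
    return Matrix([[(-1) ** int(I + half + L + J)
                    * sqrt((2 * S + 1) * (2 * K + 1))
                    * wigner_6j(I, half, S, L, J, K)
                    for S in Ss] for K in Ks])

# For I = 1, L = 1, J = 3/2 both the K = 1/2 and K = 3/2 towers contain a
# state, and the 2x2 mixing matrix is orthogonal (the towers do not mix).
M = recoupling(1, 1, Rational(3, 2))
assert simplify(M * M.T) == eye(2)
```

The orthogonality is just the standard $6j$ orthogonality relation in disguise, which is why the mixing angles are fixed in the large $N_c$ limit.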
Finally, another possible choice for the pentaquark states combines
first the light quark spin with $\vec L$ into the angular momentum of
the light degrees of freedom $\vec J_\ell = \vec L + \vec S_q $ , which is then coupled
with the antiquark spin to the total spin of the baryon $\vec J = \vec J_\ell + \vec S_{\bar q}$.
The corresponding states will
be denoted as
$|[(L, S_q)J_\ell, S_{\bar q}]JI; m\alpha\rangle $ and are appropriate in the
heavy antiquark limit, when the spin of the light degrees of freedom is
a conserved quantum number. A detailed discussion of these states is
given in Sec.~5.
\section{Light pentaquarks}
We start by discussing the mass spectrum of the positive
parity states. The formalism is very similar to that
used for the $L=2$ baryons in the $\mathbf{56}^+$ \cite{symml2} and $p-$wave
orbitally excited charm baryons \cite{charm}.
The mass operator is a linear combination of the most general
isoscalar parity even operators constructed from the
orbital angular momentum $L^i$ and the generators of the $SU(6)_q \otimes SU(6)_{\bar q}$ spin-flavor algebra \cite{Jenkins:2004vb}, the quark operators $S_q, G_q^{ia}, T_q^a$
and the antiquark operators $S_{\bar q}, G_{\bar q}^{ia}, T_{\bar q}^a$.
For simplicity we restrict our consideration here to the ${\cal S}=+1$ pentaquarks
with quark content $\bar s q^{N_c+1} $
and assume isospin symmetry.
To leading order in the $1/N_c$ expansion, the mass operator
acting on these states reads
\begin{eqnarray}
\hat M = c_0 N_c {\mathbf{1}} + c_1 L^i S_{\bar q}^i
+ O(1/N_c) \ ,
\end{eqnarray}
where $c_0$ and $c_1$ are unknown constants.
Spin-flavor symmetry is broken at leading order in $1/N_c$ by only
one operator, describing the spin-orbit interaction of the antiquark.
This operator is diagonal in the ${\cal K}$ basis given by Eq.~(\ref{recoupl})
and is responsible for the $O(N_c^0)$ splitting of the two towers with
${\cal K}=1/2$ and $3/2$. The situation is analogous to the one we have for the non-strange
members of the $\mathbf{70}^-$
excited baryons, where three irreducible representations of the ${\cal K}$-symmetry
have different masses because there are three
operators in the expansion of the mass operator up to $O(N_c^0)$ \cite{PiSc}.
The states at the top of the
weight diagram of a given $SU(3)$ representation (with maximal strangeness)
can decay into a nonstrange ground state baryon and a kaon $\Theta \to BK$.
These transitions are mediated by the strangeness changing axial current and
can be parameterized in terms of an operator $Y^{i\alpha}$ defined as
\begin{eqnarray}
\langle B| \bar s \gamma^i \gamma_5 q^\alpha |\Theta \rangle =
(Y^{i\alpha})_{B\Theta} \ ,
\end{eqnarray}
with $B= N,\Delta, \dots$.
The operator $Y^{i\alpha}$ can be expanded in $1/N_c$ as
$Y^{i\alpha} = Y^{i\alpha}_0 + Y^{i\alpha}_1/N_c + \dots$, where the leading
term scales like $O(N_c^0)$, as will
be shown below in Sec.~\ref{Ncounting}.
At leading order in $1/N_c$, the explicit representation of $Y^{i\alpha}_0$ in
the quark operator
expansion gives only one operator
\begin{eqnarray}\label{Axdef}
Y^{i\alpha} = g_0 \bar s \xi^i q^{*\alpha} + O(1/N_c) \ ,
\end{eqnarray}
where $\alpha = \pm 1/2$ denotes the flavor of the orbitally excited
light quark $q^*=u,d$ and $g_0$ is an unknown constant that stands for
the reduced matrix element of the QCD operator. In addition to the
$SU(4)_q \otimes O(3)$ generators, we now
need another basic building block: an isoscalar vector
operator acting on the orbital degrees of freedom, which
we denote $\xi^i$.
In the following we compute the $\Theta \to N,\Delta$
matrix elements of the axial current operator, Eq.~(\ref{Axdef}), and show that they
can be expressed in terms of a few reduced matrix elements whose expressions
are already known for arbitrary $N_c$.
The matrix elements of the operator in Eq.~(\ref{Axdef}) take the simplest form
when expressed
using the quark model states on the right-hand side of Eq.~(\ref{recoupl}).
They can be computed straightforwardly with the result
\begin{eqnarray}\label{Ax}
&& \langle I'm'\alpha' | \bar s \xi^i q^{*\beta} |[L,(\frac12, I)S]JI;m\alpha
\rangle = \\
&& \qquad \frac{1}{\sqrt{N_c+1} }
\langle 0 |\!| \xi |\!| L\rangle \delta_{L,1}
\delta_{S,I'} \sqrt{\frac{[J]}{[I']}}
t(I',I)
\left(
\begin{array}{cc|c}
J & 1 & I' \\
m & i & m' \\
\end{array}
\right)
\left(
\begin{array}{cc|c}
I & \frac12 & I' \\
\alpha & \beta & \alpha' \\
\end{array}
\right) \ ,
\nonumber
\end{eqnarray}
where $t(I',I)$ is the reduced matrix element of the ${\bar s} q^\beta$
operator on spin-flavor symmetric states of the $q^{N_c+1}$ system, defined as
\begin{eqnarray}\label{tdef}
\langle I'm'\alpha' |{\bar s} q^\beta | SI;m\alpha\rangle
= t(I',I)\delta_{SI'}\delta_{m'm}
\left(
\begin{array}{cc|c}
I & \frac12 & I' \\
\alpha & \beta & \alpha' \\
\end{array}
\right) \ .
\end{eqnarray}
The reduced matrix elements $t(I',I)$ for arbitrary $N_c$ can be obtained easily
using the occupation number formalism as described in Ref.~\cite{Pirjol:2006jp}.
Explicit results for $t(I',I)$ for all pentaquarks with quantum numbers of interest
are tabulated in Ref.~\cite{Pirjol:2006jp}.
For completeness, we reproduce here the expressions needed in the following.
\begin{eqnarray}\label{4t}
&& t(\frac12,0) = \frac12\sqrt{N_c+1}\,,\hspace{1.6cm}
t(\frac12,1) = \frac{\sqrt3}{2}\sqrt{N_c+5}\,,\\
&& t(\frac32,1) = \frac12 \sqrt{\frac32} \sqrt{N_c-1}\,,\qquad
t(\frac32,2) =\frac12 \sqrt{\frac52} \sqrt{N_c+7} \,.\nonumber
\end{eqnarray}
The explicit suppression factor of $O(1/\sqrt{N_c})$ in Eq.~(\ref{Ax}) arises because
we have to annihilate the excited quark $q^*$ carrying one unit of angular momentum.
The detailed derivation of this factor is given below in Section 4.
This suppression factor is absent in the case of negative parity pentaquarks
where the $N_c+1$ quarks are all in $s-$wave orbitals, and the axial current
can annihilate any of them. The reduced matrix element in Eq.~(\ref{tdef})
scales like $t(I',I) \sim N_c^{1/2}$ \cite{Pirjol:2006jp}, which implies that the
$\Theta \to B$ matrix element of the axial current scales like
$\langle Y^{i\alpha}\rangle \sim N_c^0$. This means that the $\Theta \to BK$
partial widths of these
exotic states are suppressed as $1/N_c$ in the large $N_c$ limit.
Finally, the matrix elements of the axial current on the
tower (physical) pentaquark states are obtained by substituting
Eq.~(\ref{Ax}) into the recoupling relation
Eq.~(\ref{recoupl}). Because of the
Kronecker symbol $\delta_{S,I'}$, only one of the two
terms on the right-hand side of Eq.~(\ref{recoupl}) gives a nonvanishing
contribution.
The result is given by
\begin{eqnarray}\label{Kgeneral}
&& \langle I'm'\alpha' | Y^{i\beta}_0 |[(L,\frac12){\cal K}, S_q]JI;m\alpha
\rangle \equiv g_0 T(I',IJ{\cal K})
\left(
\begin{array}{cc|c}
J & 1 & I' \\
m & i & m' \\
\end{array}
\right)
\left(
\begin{array}{cc|c}
I & \frac12 & I' \\
\alpha & \beta & \alpha' \\
\end{array}
\right)
\\
&& \qquad
= {\bar g}_0
t(I',I) \sqrt{[{\cal K}][J]}
\left\{
\begin{array}{ccc}
I & \frac12 & I' \\
1 & J & {\cal K} \\
\end{array}
\right\}
\left(
\begin{array}{cc|c}
J & 1 & I' \\
m & i & m' \\
\end{array}
\right)
\left(
\begin{array}{cc|c}
I & \frac12 & I' \\
\alpha & \beta & \alpha' \\
\end{array}
\right) \ ,
\nonumber
\end{eqnarray}
with ${\bar g}_0$ an overall constant of order $N_c^0$, in which we have also absorbed the
unknown orbital overlap matrix element.
We list in Tables \ref{K12table} and \ref{K32table} the reduced axial
matrix elements $T(I',IJ{\cal K})$ following from Eq.~(\ref{Kgeneral}), corresponding to
the two towers with ${\cal K}=1/2$ and ${\cal K}=3/2$, respectively.
These tables also show the ratios of the
$p-$wave $\Theta \to BK$ partial widths of these states. They are obtained, as
usual, by summing over final states and averaging over initial states
\begin{eqnarray}
\Gamma_{p\rm-wave} &=& g_0^2
\frac{[I']^2}{[J][I]}
|T(I',IJ{\cal K})|^2 |\vec p|^3 \ .
\end{eqnarray}
The predictions for the ratios of the decay amplitudes for the
${\cal K}=1/2$ states agree with those of Ref.~\cite{Jenkins:2004vb}
after taking $N_c=3$. The results for arbitrary $N_c$ and the ratios for the
${\cal K}=3/2$ states are new.
\begin{table}
\caption{\label{K12table} Large $N_c$ predictions for the
$p$-wave strong decay amplitudes
of the positive parity light pentaquarks in the ${\cal K}=1/2$ tower.
The last column shows the ratios of the partial $p-$wave rates, normalized to the
$\overline{\mathbf{10}}_{1/2} \to NK$ width, and
with the phase space factor $|\vec p|^3$ removed.}
\begin{center}
\begin{tabular}{rccc}
\hline\hline
Decay & $T(I',IJ\frac12)$ & $\frac{1}{p^3}\Gamma_{N_c=3}^{\rm ({\it p}-wave)}$ & $\frac{1}{p^3}\Gamma_{N_c \to \infty}^{\rm ({\it p}-wave)}$\\
\hline \hline
$\overline{\mathbf{10}}_{1/2} \to NK$ & $\frac{1}{\sqrt{N_c+1}}t(\frac12, 0)$
& $1$ & $1$\\
\hline
${\mathbf{27}}_{1/2} \to NK$ & $\frac{1}{3\sqrt{N_c+1}} t(\frac12, 1)$ & $\frac29$ & $\frac19$\\
$\to \Delta K$ & $- \frac{2}{3\sqrt{N_c+1}} t(\frac32, 1)$ & $\frac49$ & $\frac89$\\
${\mathbf{27}}_{3/2} \to NK$ & $-\frac{2\sqrt2}{3\sqrt{N_c+1}} t(\frac12, 1)$ & $\frac89$ & $\frac49$\\
$\to \Delta K$ & $\frac{\sqrt5}{3\sqrt{N_c+1}} t(\frac32, 1)$ & $\frac5{18}$ & $\frac59$ \\
\hline
${\mathbf{35}}_{3/2} \to \Delta K$ & $\frac{1}{\sqrt{5(N_c+1)}} t(\frac32, 2)$ & $\frac12$ & $\frac15$\\
${\mathbf{35}}_{5/2} \to \Delta K$ & $-\frac{2}{\sqrt{5(N_c+1)}} t(\frac32, 2)$ & $\frac{4}{3}$ & $\frac{8}{15}$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{\label{K32table} Large $N_c$ predictions for the
$p$-wave strong decay amplitudes
of the positive parity light pentaquarks in the ${\cal K}=3/2$ tower.
The last column shows the ratios of the partial $p-$wave rates, normalized to the
$\overline{\mathbf{10}}_{3/2} \to NK$ width,
with the phase space factor $|\vec p|^3$ removed.}
\begin{center}
\begin{tabular}{rccc}
\hline \hline
Decay & $T(I',IJ\frac32)$ & $\frac{1}{p^3}\Gamma^{\rm ({\it p}-wave)}_{N_c=3}$ &
$\frac{1}{p^3}\Gamma_{N_c \to \infty}^{\rm ({\it p}-wave)}$ \\
\hline \hline
$\overline{\mathbf{10}}_{3/2} \to NK$ & $-\frac{\sqrt2}{\sqrt{N_c+1}} t(\frac12, 0)$ & $1$ & $1$\\
\hline
${\mathbf{27}}_{1/2} \to NK$ & $-\frac{2\sqrt2}{3\sqrt{N_c+1}} t(\frac12, 1)$ & $\frac{16}{9}$ & $\frac{8}{9}$\\
$\to \Delta K$ & $- \frac{1}{3\sqrt{2(N_c+1)}} t(\frac32, 1)$ & $\frac{1}{18}$ & $\frac{1}{9}$\\
${\mathbf{27}}_{3/2} \to NK$ & $\frac{\sqrt{10}}{3\sqrt{N_c+1}} t(\frac12, 1)$ & $\frac{10}{9}$ & $\frac{5}{9}$ \\
$\to \Delta K$ & $\frac{2}{3\sqrt{N_c+1}} t(\frac32, 1)$ & $\frac29$ & $\frac{4}{9}$\\
${\mathbf{27}}_{5/2} \to \Delta K$ & $-\sqrt{\frac{3}{2(N_c+1)}} t(\frac32, 1)$ & $\frac12$ & $1$ \\
\hline
${\mathbf{35}}_{1/2} \to \Delta K$ & $\frac{1}{\sqrt{2(N_c+1)}} t(\frac32, 2)$ & $\frac52$ & $1$\\
${\mathbf{35}}_{3/2} \to \Delta K$ & $-\frac{2}{\sqrt{5(N_c+1)}} t(\frac32, 2)$ & 2 & $\frac{4}{5}$ \\
${\mathbf{35}}_{5/2} \to \Delta K$ & $\frac12\sqrt{\frac{14}{5(N_c+1)}} t(\frac32, 2)$ & $\frac76$ & $\frac{7}{15}$\\
\hline \hline
\end{tabular}
\end{center}
\end{table}
In the large $N_c$ limit the ratios of strong decay widths satisfy sum rules.
These sum rules express the equality of the total widths of all the tower states in
each partial wave, and are a consequence of the contracted $SU(4)_c$ symmetry, which
relates all the tower states in the large $N_c$ limit. They are given by
\begin{eqnarray}\label{sr1}
\Gamma(\Theta_{R_J} \rightarrow N K ) + \Gamma(\Theta_{R_J} \rightarrow \Delta K )
&=& \Gamma(\Theta_{\mathbf{\overline{10}}_{1/2}} \rightarrow N K ) \ ,
\end{eqnarray}
where $R_J = {\mathbf{27}}_{1/2}, {\mathbf{27}}_{3/2}, {\mathbf{27}}_{5/2},
{\mathbf{35}}_{1/2}, \dots $. These sum rules can be checked explicitly using the results
listed in the last column of Tables~\ref{K12table} and~\ref{K32table}.
These sum rules are not apparent in the results of Ref.~\cite{Jenkins:2004vb}, which correspond to
the case of finite $N_c=3$, for which the contracted symmetry is broken.
Walliser and Weigel \cite{Walliser:2005pi} discussed the pentaquark strong width in
the chiral soliton model. They
found that only one operator contributes to the $\Theta \to NK$ coupling at leading
order in the $1/N_c$ expansion and gave the prediction
$\Gamma (\Theta_{\mathbf{27}_{3/2}})/ \Gamma(\Theta_{\mathbf{\overline{10}}_{1/2}}) =
4/9$, which is in agreement with our model independent result in Table~\ref{K12table}.
\section{Large $N_c$ power counting and consistency conditions}
\label{Ncounting}
The results of the preceding section on the $\Theta \to BK$ strong couplings
take a particularly simple form at leading order in the $1/N_c$ expansion.
This follows in a model-independent way from a consistency condition
satisfied by the matrix elements of $Y^{i\alpha}$, similar to a consistency
condition constraining kaon couplings to ordinary baryons \cite{DJM}.
We start by deriving in some detail the $Y^{i\alpha} \sim N_c^0$ scaling of the
leading term,
which follows from the special structure of the exotic states with positive
parity considered in this work.
In particular, we show that taking into account the nonzero orbital angular momentum
$L=1$ of these states is crucial in order to obtain the correct $N_c$ scaling.
As explained in Sec.~2, the exotic state can be written schematically as
$\Theta = {\bar q} q^{N_c} q^* $, where the
$q^*$ quark carries one unit of angular momentum. The $N_c+1$ quarks are in a
completely symmetric spin-flavor wave function and a completely antisymmetric
color-orbital wave function. This state can be described as a linear combination of
terms with given occupation numbers for one-particle states
\begin{eqnarray}\label{occupno}
|q^{N_c} q^* \rangle = \sum_{\{n_i\}} c_{\{n_i\}} | \{ n_1, n_2, n_3, n_4 \}
\rangle \otimes |[n_s^1, n_s^2, \cdots , n_s^{N_c} ; n_p^1, n_p^2, \cdots , n_p^{N_c} ] \rangle \ .
\end{eqnarray}
The first factor denotes the occupation numbers of the four spin-flavor one-quark
states $\{ u_\uparrow, u_\downarrow, d_\uparrow, d_\downarrow \}$ as defined in
Ref.~\cite{Pirjol:2006jp}. The second factor denotes the occupation numbers of the $2N_c$
possible orbital-color one-quark states. These are $s-$ and $p-$wave orbitals,
times the $N_c$ possible color states $\phi_s(x) \otimes |i\rangle$ and $\phi_p(x) \otimes
|i\rangle$, respectively. $\{n_i\}$ denotes the set of all occupation numbers.
We consider in this paper only states with one quark in a $p-$wave orbital, and
denote the color-orbital wave function of such states as $[ N_c ; 1]$.
The axial current is given by the operator in Eq.~(\ref{Axdef})
\begin{eqnarray}
Y_0^{i\alpha} = g_0 \bar s \xi^i q^{*\alpha} \ ,
\end{eqnarray}
where $q^*$ annihilates the spin-flavor state of
the orbitally excited quark and $\xi^i$ acts on the orbital
wave function of that state.
When acting on the state, Eq.~(\ref{occupno}), this operator annihilates one quark in a $p-$wave
orbital. Taking for definiteness $q^* = u_\uparrow^*$, the matrix element of the axial
current reduces to evaluating expressions of the form
\begin{eqnarray}\label{20}
u^{*}_\uparrow |q^{N_c} q^* \rangle &=&
\sum_{\{n_i\}} c_{\{n_i\}} u_\uparrow^{*}|\{ n_1, n_2, n_3, n_4 \} \rangle
\otimes |[N_c ; 1 ]\rangle \ .
\end{eqnarray}
The action of the annihilation operator $u_\uparrow^*$ on the symmetric spin-flavor
state can be computed using the methods of Ref.~\cite{Pirjol:2006jp}. One subtle point
is that this operator can annihilate only the excited quark, not the other $N_c$
$s-$wave quarks.
The spin-flavor state of the excited quark can be made explicit with the help of the
identity
\begin{eqnarray}
&& \{ n_1, n_2, n_3, n_4 \} =
\sqrt{\frac{n_1}{N_c+1}} (u_\uparrow^{*\dagger}) \{n_1-1, n_2, n_3, n_4 \} +
\sqrt{\frac{n_2}{N_c+1}} (u_\downarrow^{*\dagger}) \{n_1, n_2-1, n_3, n_4 \} \nonumber \\
&& \qquad \qquad +
\sqrt{\frac{n_3}{N_c+1}} (d_\uparrow^{*\dagger}) \{n_1, n_2, n_3-1, n_4 \} +
\sqrt{\frac{n_4}{N_c+1}} (d_\downarrow^{*\dagger}) \{n_1, n_2, n_3, n_4-1 \} \ ,
\end{eqnarray}
where $ n_1+n_2+n_3+n_4 = N_c+1$. Using this identity, Eq.~(\ref{20}) can be
evaluated explicitly with the result
\begin{eqnarray}
u^{*}_\uparrow |q^{N_c} q^* \rangle &=&
\sum_{\{n_i\}} c_{\{n_i\}} \sqrt{\frac{n_1}{N_c+1}}|\{ n_1-1, n_2, n_3, n_4 \} \rangle
\otimes| [N_c ; 0 ]\rangle
\end{eqnarray}
and similarly for other spin-flavor states of the excited quark.
These relations generalize the relations given in Ref.~\cite{Pirjol:2006jp} for the action of
one-body operators in the occupation number formalism to the case of more complicated
orbital wave functions.
Note the additional suppression factor $1/\sqrt{N_c+1}$, which would not be present if
all $N_c+1$ quarks were in $s-$wave orbitals. Together with Eq.~(\ref{Ax}) given in
the previous section,
this completes the proof of the $N_c$ scaling of the $\Theta \to B$ matrix elements of the
axial current.
This scaling implies that the
decay amplitude $A(\Theta \to BK)$ scales like $N_c^{-1/2}$, which in turn
predicts that the corresponding strong decay widths are parametrically
suppressed by $1/N_c$. This suppression may be obscured in the total
widths of the $\Theta$ states by two possible mechanisms. First, the pion modes
$\Theta \to \Theta' \pi$, whenever allowed by phase space,
have widths of order $O(N_c^0)$. Second, mixing of the exotic states with radially
excited nucleon states, such as the Roper resonance, could enhance the $N_c$ scaling
of the decay amplitude as $A(\Theta \to \pi N) \sim O(N_c^0)$.
Neither of these mechanisms applies to the lowest-lying pentaquark state(s), for which
the $1/N_c$ expansion thus offers another possible explanation for their
small widths.
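Schematically, and using the standard large $N_c$ scaling $f_K \sim \sqrt{N_c}$ of the
kaon decay constant, the chain of scalings established above can be summarized as

```latex
\begin{eqnarray}
\langle B| Y^{i\alpha} |\Theta\rangle \sim N_c^0 \ , \qquad
A(\Theta \to BK) \sim \frac{1}{f_K}\, \langle B| Y^{i\alpha} |\Theta\rangle \sim N_c^{-1/2}
\ , \qquad
\Gamma(\Theta \to BK) \sim |A|^2 \sim \frac{1}{N_c} \ . \nonumber
\end{eqnarray}
```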
Accounting explicitly for the $L=1$ orbital momentum of these states
is crucial for obtaining the $Y^{i\alpha}\sim N_c^0$ scaling.
This can be contrasted with the approach of Ref.~\cite{Jenkins:2004vb}, where the orbital
angular momentum is not explicit. Instead, the angular momentum $L=1$ is coupled
with the antiquark spin $S_{\bar q}$ to a fixed value ${\cal K}=
L+S_{\bar q} = 1/2$, and ${\cal K}$ is effectively identified with $S_{\bar q}=\frac12$.
The $\xi^i$ operator acting on the orbital part does not appear in any of the operators
describing physical quantities, such as masses, axial currents, etc.
In this approach,
the axial current operator mediating the $\Theta \to B $ transition is identified with
\cite{Jenkins:2004vb}
\begin{eqnarray}
Y_{({\rm no} \ \xi )}^{i\alpha} = g_0 \bar s \sigma^i q^\alpha + O(1/N_c)
\end{eqnarray}
and its matrix elements scale like $N_c^{1/2}$ \cite{Pirjol:2006jp}.
We next derive the leading behaviour of $Y^{i\alpha}$ in hadronic
language.
The matrix elements of the leading
order piece $Y^{i\alpha}_0$ satisfy a consistency condition from
$\pi^a + \Theta \to K^\alpha + B $ scattering and can be obtained in
a model independent way in the large $N_c$ limit. The pion couplings
to ordinary baryons and pentaquarks are parametrized by
\begin{eqnarray}
\langle B'| \bar q \gamma^i \gamma_5 \tau^a q |B \rangle &=&
N_c (X^{ia})_{B'B} , \\
\langle \Theta'| \bar q \gamma^i \gamma_5 \tau^a q |\Theta \rangle &=&
N_c (Z^{ia})_{\Theta'\Theta} .
\end{eqnarray}
These operators have a $1/N_c$ expansion of the form
$X^{ia} = X_0^{ia} + X_1^{ia}/N_c + \dots $ ,
and similarly for $Z^{ia}$, where the leading order terms $X_0^{ia}$
and $Z_0^{ia}$ scale as $O(N_c^0)$.
After including the meson decay constants, the overall scaling of the direct and crossed
diagrams separately
is $O(N_c^0)$.
The calculation of the scattering amplitude at the quark level gives a
$1/\sqrt{N_c}$ scaling for the $\pi^a + \Theta \to K^\alpha + B $ amplitude.
The $O(N_c^0)$ contributions of the direct and crossed diagrams must therefore
cancel, which leads to the consistency condition
\begin{eqnarray}\label{cc}
Y_0^{j\alpha} Z_0^{ia} - X_0^{ia} Y_0^{j\alpha} = 0 \ .
\end{eqnarray}
The derivation is similar to the one given for the consistency condition of
meson couplings to ordinary and hybrid baryons
in Ref.~\cite{CPY99}.
The leading order matrix element $\langle X^{ia}_{0} \rangle$ is given by the model-independent
expression \cite{Largenspinflavor,DJM}
\begin{eqnarray}
\langle X_0^{ia}\rangle = g_X
(-)^{J+I'+{\cal K}+1} \sqrt{[I][J]}
\left\{
\begin{array}{ccc}
I' & 1 & I \\
J & {\cal K} & J' \\
\end{array}
\right\}
\left(
\begin{array}{cc|c}
J & 1 & J' \\
J_3 & i & J'_3 \\
\end{array}
\right)
\left(
\begin{array}{cc|c}
I& 1 & I' \\
I_3 & a & I'_3 \\
\end{array}
\right)
\end{eqnarray}
and similarly for $\langle Z^{ia}_0 \rangle$ with $g_Z$ instead of $g_X$.
The $\Theta \to K^\alpha + B $ vertex is parametrized by
\begin{eqnarray}\label{Ksol}
\langle I'I'_3; J'J'_3; {\cal K}' |Y_0^{i\alpha}|II_3; JJ_3; {\cal K} \rangle &=&
\sqrt{[I][J]} {\cal Y}_0(I'J'{\cal K}';IJ{\cal K})
\left(
\begin{array}{cc|c}
J & 1 & J' \\
J_3 & i & J'_3 \\
\end{array}
\right)
\left(
\begin{array}{cc|c}
I & \frac12 & I' \\
I_3 & \alpha & I'_3 \\
\end{array}
\right) \ .
\end{eqnarray}
For ${\cal K}'=0$
this expression is equivalent to Eq.~(\ref{Kgeneral}), with the identification
$g_0 T(I',IJ{\cal K}) = \sqrt{[I][J]} {\cal Y}_0(I'I'0;IJ{\cal K})$.
Taking the matrix elements of Eq.~(\ref{cc}) between $B(I'J'{\cal K}')$
and $\Theta(IJ{\cal K})$,
and projecting onto channels with total spin $H$ and isospin $T$ in the $s$-channel
we obtain
\begin{eqnarray}
&& \sum_{{\bar I},{\bar J}} (-)^{-{\bar I}-\frac{1}{2}} [{\bar I}][{\bar J}]
\left\{
\begin{array}{ccc}
1 & I' & {\bar I} \\
{\cal K}' & {\bar J} & J'
\end{array}
\right\}
\left\{
\begin{array}{ccc}
1 & {\bar J} & J' \\
1 & H & J
\end{array}
\right\}
\left\{
\begin{array}{ccc}
\frac{1}{2} & {\bar I} & I \\
1 & T & I'
\end{array}
\right\}
{\cal Y}_0({\bar I}{\bar J}{\cal K}';IJ{\cal K})
\nonumber \\
&& \qquad =
(-)^{H+{\cal K}+{\cal K}'-I'-J} \delta(J'1H) \delta(I'\frac12 T) \frac{g_Z}{g_X}
\left\{
\begin{array}{ccc}
1 & T & I \\
{\cal K} & J & H
\end{array}
\right\}
{\cal Y}_0(I'J'{\cal K}';TH{\cal K}) \ ,
\end{eqnarray}
where $\delta(J'1H)=1$ if $|J'-1|\le H \le J'+1$ and zero otherwise, etc.
The most general solution of this equation implies $g_X=g_Z$, and depends on two arbitrary
couplings $c_y$ with $y=1/2, 3/2$
\begin{eqnarray} \label{y0sol}
{\cal Y}_0(I'J'{\cal K}';IJ{\cal K}) &=&
\sum_{y=1/2,3/2} c_y
\left\{
\begin{array}{ccc}
\frac12 & 1 & y \\
I & J & {\cal K} \\
I' & J' & {\cal K}' \\
\end{array}
\right\} \ ,
\end{eqnarray}
up to an arbitrary phase $(-1)^{2nI+2mJ}$ with $n,m$ integers. For decays to
nonstrange baryons ${\cal K}'=0$, and this equation gives the asymptotic form
for $T(I',IJ{\cal K})$ in the large $N_c$ limit
\begin{equation}
\lim_{N_c \to \infty} T(I',IJ{\cal K})
\propto (-)^{1+I+I'+{\cal K}}
\sqrt{\frac{[I][J]}{[I'][{\cal K}]}}
\left\{
\begin{array}{ccc}
\frac12 & 1 & {\cal K} \\
J & I & I' \\
\end{array}
\right\} \ .
\end{equation}
This agrees with the large $N_c$ limit of the
reduced matrix element obtained by the quark operator calculation in Sec.~3.
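As a numerical cross-check of this ${\cal K}'=0$ reduction, one can verify that the $9j$
symbol of Eq.~(\ref{y0sol}) collapses onto the $6j$ symbol of the asymptotic form above,
using the standard reduction of a $9j$ symbol with one vanishing argument. The following
sketch (assuming \texttt{sympy}; the quantum numbers are purely illustrative) tests the
phase-free squared relation $[{\cal K}][I']\,(9j)^2 = (6j)^2$:

```python
# Check that the 9j symbol {1/2 1 K; I J K; I' I' 0} is proportional to
# the 6j symbol {1/2 1 K; J I I'}, as used in the K'=0 reduction.
from sympy import Rational
from sympy.physics.wigner import wigner_6j, wigner_9j

half = Rational(1, 2)

def check_reduction(I, J, K, Ip):
    # Phase-free form of the standard identity for a 9j symbol with one
    # vanishing argument: [K][I'] (9j)^2 = (6j)^2.
    w9 = float(wigner_9j(half, 1, K, I, J, K, Ip, Ip, 0, prec=64))
    w6 = float(wigner_6j(half, 1, K, J, I, Ip))
    return abs(float((2*K + 1)*(2*Ip + 1)) * w9**2 - w6**2) < 1e-10

# e.g. a tower state with (I, J, K) = (1, 1/2, 1/2) decaying to N (I' = 1/2),
# and one with (I, J, K) = (1, 3/2, 3/2) decaying to Delta (I' = 3/2)
assert check_reduction(1, half, half, half)
assert check_reduction(1, Rational(3, 2), Rational(3, 2), Rational(3, 2))
```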
\section{Heavy pentaquarks}
\label{secheavy}
Taking the antiquark
to be a heavy quark $Q=c,b$, the quantum numbers of the $\bar Q q^{N_c+1}$
states are simply related to those of the $N_c+1$ quarks, as was discussed earlier in Sec.~2.
These states belong to one
large $N_c$ tower with ${\cal K}_\ell=1$ and are shown in Eq.~(\ref{set1}).
We pause here to compare these states with
the positive parity heavy pentaquarks
considered in \cite{Jenkins:2004vb}. The light quarks in those states belong
to a ${\cal K}_\ell=0$ tower and
include the $SU(3)$ representations
\begin{eqnarray}
\label{set0}
{\cal K}_{\ell}=0 & &:\quad \mathbf{\overline{6}}_0\,, \mathbf{15}_{1}\,,
\mathbf{15'}_{2}\,, \cdots
\end{eqnarray}
where the subscript denotes the spin of the light degrees of freedom $J_\ell$.
Each of these multiplets corresponds to a heavy quark spin doublet, with
hadron spin $J = J_\ell \pm 1/2$, except for the singlets with $J_\ell = 0$.
Note that they are different from the states constructed here in
Eq.~(\ref{set1}), which in addition to the more complex mass spectrum
also have very different strong couplings, as will be seen below.
The heavy pentaquark states require a different recoupling of the three
angular momenta $S_q, S_{\bar q}, L$. Neither $S_q$ nor $L$ is separately a good
quantum number for a heavy pentaquark; only their sum, the
angular momentum of the light degrees of freedom $J_\ell =
S_q + L$, is. The states
with good $J_\ell$ are expressed in terms of the quark model states
by a recoupling relation analogous to Eq.~(\ref{recoupl})
\begin{eqnarray}\label{recoupl2}
&& |[(L,J_{q})J_\ell, S_{\bar q}]JI;m\alpha\rangle
= (-)^{I+1/2+L+J} \\
&& \hspace{2cm} \times \sum_{S=I\pm 1/2} \sqrt{[S][J_\ell]}
\left\{
\begin{array}{ccc}
I & \frac12 & S \\
J & L & J_\ell \\
\end{array}
\right\}
|[L,(J_{q}, S_{\bar q})S]JI; m\alpha\rangle \ . \nonumber
\end{eqnarray}
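The recoupling coefficients in Eq.~(\ref{recoupl2}) satisfy the expected unitarity
condition $\sum_S [S][J_\ell]\,\{\cdots\}^2 = 1$, which follows from the orthogonality
relation of the $6j$ symbols. A short numerical check (a sketch assuming \texttt{sympy};
the quantum numbers are illustrative):

```python
# Unitarity check for the 6j recoupling coefficients of Eq. (recoupl2):
# sum over S = I -/+ 1/2 of [S][J_l] {I 1/2 S; J L J_l}^2 must equal 1.
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_6j

half = Rational(1, 2)

def recoupling_norm(I, J, L, Jl):
    total = 0
    for S in {abs(I - half), I + half}:   # a set, to avoid double counting at I = 0
        total += (2*S + 1)*(2*Jl + 1)*wigner_6j(I, half, S, J, L, Jl)**2
    return simplify(total)

# e.g. I = J_q = 1, L = 1, and two choices of (J, J_l)
assert recoupling_norm(1, half, 1, 1) == 1
assert recoupling_norm(1, Rational(3, 2), 1, 2) == 1
```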
A similar recoupling relation can be written which expresses the
heavy pentaquark states $|[(L,J_{q})J_\ell, S_{\bar q}]JI;m\alpha\rangle$
in terms of tower states $|[(L,S_{\bar q}){\cal K}, J_{q}]JI;m\alpha\rangle$,
appropriate for the light pentaquark states. Such a relation makes
explicit the correspondence between light and heavy pentaquarks,
as the mass of ${\bar Q}$ is gradually increased. We do not write this
relation explicitly, but just mention one of its implications: any given
heavy pentaquark state is related to states in both ${\cal K}=1/2$ and $3/2$ towers.
Any treatment which neglects one of these towers will therefore
be difficult to reconcile with a quark model picture.
A generic positive parity heavy pentaquark state $\Theta_{\bar Q}$ can decay
strongly into the four channels
\begin{eqnarray}\label{NHQ}
&& \Theta_{\bar Q} \to N H_{\bar Q}, N H_{\bar Q}^* \ , \\
\label{DelHQ}
&& \Theta_{\bar Q} \to \Delta H_{\bar Q}, \Delta H_{\bar Q}^* \ ,
\end{eqnarray}
where $H_{\bar Q}$ is a pseudoscalar $J^P = 0^-$ heavy
meson with quark content $\bar Q q$, and $H_{\bar Q}^*$ is its heavy quark
spin partner with $J^P = 1^-$. Heavy quark symmetry gives relations among
the amplitudes for these decays \cite{IsWi}. However, these relations alone
are in general not sufficient to predict the ratios of the decay widths of the two
modes in Eq.~(\ref{NHQ}), and of the two modes in Eq.~(\ref{DelHQ}).
On the other hand, large $N_c$ relates the $NH_{\bar Q}$ and $\Delta H_{\bar Q}$
modes. We will show that in the {\em combined} large $N_c$ and heavy
quark limits it is also possible to make predictions for the ratios of the
$N H_{\bar Q}$ and $N H^*_{\bar Q}$ modes.
We consider first the large $N_c$
predictions for the amplitude ratios into $N H_{\bar Q}$ and $\Delta H_{\bar Q}$.
These decays are mediated by the heavy-light axial current, which at leading
order in $1/N_c$ is given by a single operator in the quark operator expansion,
analogous to that mediating kaon decay
\begin{eqnarray}
\langle B| \bar Q \gamma^i \gamma_5 q^\beta |\Theta_{\bar Q} \rangle =
g_Q ( \bar Q \xi^i q^\beta)_{B\Theta} + O(1/N_c) \ ,
\end{eqnarray}
with $B= N,\Delta, \dots$ and $g_Q$ an unknown constant.
The matrix elements of this operator can be parameterized in terms of a reduced matrix
element, defined as
\begin{eqnarray}
\langle I'm'\alpha' | \bar Q \xi^i q^\beta |[(L,S_q)J_\ell, \frac12 ]JI;m\alpha
\rangle = T(I', IJJ_\ell)
\left(
\begin{array}{cc|c}
J & 1 & I' \\
m & i & m' \\
\end{array}
\right)
\left(
\begin{array}{cc|c}
I & \frac12 & I' \\
\alpha & \beta & \alpha' \\
\end{array}
\right) \ .
\end{eqnarray}
At leading order in $1/N_c$, the amplitudes $T(I', IJJ_\ell)$
can be expressed in terms of the amplitudes $t(I',I)$ given in Eq.~(\ref{4t}).
To show this, recall that the matrix elements of the axial current
for the transitions between the
quark model states and the ground state baryons were given in
Eq.~(\ref{Ax}). The corresponding matrix elements taken on the physical heavy pentaquark
states are found by
substituting this result into the recoupling relation Eq.~(\ref{recoupl2}).
We find
\begin{eqnarray}\label{KgeneralQ}
&& \langle I'm'\alpha' | g_Q \bar Q \xi^i q^\beta |[(L,S_q)J_\ell, \frac12 ]JI;m\alpha
\rangle = \\
&& \qquad
\bar g_Q
\frac{1}{\sqrt{N_c+1}}t(I',I) \sqrt{[J_\ell][J]}
\left\{
\begin{array}{ccc}
I & \frac12 & I' \\
J & 1 & J_\ell \\
\end{array}
\right\}
\left(
\begin{array}{cc|c}
J & 1 & I' \\
m & i & m' \\
\end{array}
\right)
\left(
\begin{array}{cc|c}
I & \frac12 & I' \\
\alpha & \beta & \alpha' \\
\end{array}
\right) \ , \nonumber
\end{eqnarray}
where the unknown orbital overlap matrix element has also been absorbed into the
unknown $O(N_c^0)$ constant $\bar g_Q$.
We summarize in Table~\ref{h5q} the
reduced matrix elements $T(I', IJJ_\ell)$ of the leading order operator in the expansion
of the heavy-light axial current, for
heavy pentaquarks with positive parity. We denote the pentaquark states as
$\Theta_{{\bar Q}J_\ell}^{(R)}(J)$,
with $R=\overline{\mathbf 6}, {\mathbf {15}}, {\mathbf {15}'}$ the $SU(3)$ representation
to which they belong. These results give predictions for the ratios of the partial widths of
$p$-wave strong decays $\Theta_{{\bar Q}J_\ell}^{(R)}(J) \to N H_{\bar Q}, \Delta H_{\bar Q}$.
In the case of heavy pentaquarks there is a second sum rule
\begin{eqnarray}\label{sr2}
\sum_{J_\ell}\Gamma(\Theta_{{\bar Q} J_\ell}^R(J) \rightarrow N H_{\bar Q})
&=& \Gamma(\Theta_{{\bar Q} 1}^{\overline{\mathbf 6}}(J) \rightarrow N H_{\bar Q}) \ ,
\end{eqnarray}
in addition to the one
already discussed in Eq.~(\ref{sr1}).
Both hold in the large $N_c$ limit and can be checked explicitly using the results in
the last column of Table~\ref{h5q}.
\begin{table}
\caption{\label{h5q} Large $N_c$ predictions for the
$p$-wave heavy pentaquark decay amplitudes
$\Theta_{{\bar Q}J_\ell}^{(R)} \to N H_{\bar Q}$ and
$\Theta_{{\bar Q}J_\ell}^{(R)} \to \Delta H_{\bar Q}$.
In the last column we show the ratios of the partial $p$-wave rates,
with the phase space factor $p^3$ removed.}
\begin{center}
\begin{tabular}{rccc}
\hline \hline
Decay & $T(I',IJJ_\ell)$ & $\frac{1}{p^3}\Gamma^{\rm ({\it p}-wave)}_{N_c=3}$ & $\frac{1}{p^3}\Gamma_{N_c \to \infty}^{\rm ({\it p}-wave)}$\\
\hline \hline
$\Theta_ {\bar Q 1}^{(\overline{6})}(\frac12) \to N H_{\bar Q}$ & $ \frac{1}{\sqrt{N_c+1}} t(\frac12,0)$ & 1 & 1 \\
$\Theta_ {\bar Q 1}^{(\overline{6})}(\frac32) \to N H_{\bar Q}$ & $-\sqrt{\frac{2}{N_c+1}} t(\frac12,0)$ & 1 & 1 \\
\hline
$\Theta_ {\bar Q 0}^{({15})}(\frac12) \to N H_{\bar Q}$ & $ \frac{1}{\sqrt{3(N_c+1)}}t(\frac12,1)$ & $\frac23$ & $\frac13$ \\
$\to \Delta H_{\bar Q}$ & $-\frac{1}{\sqrt{3(N_c+1)}}t(\frac32,1)$ & $\frac13$ & $\frac23$ \\
$\Theta_ {\bar Q 1}^{({15})}(\frac12) \to N H_{\bar Q}$ & $-\sqrt{\frac{2}{3(N_c+1)}}t(\frac12,1)$ & $\frac43$ & $\frac23$ \\
$\to \Delta H_{\bar Q}$ & $-\frac{1}{\sqrt{6(N_c+1)}}t(\frac32,1)$ & $\frac16$ & $\frac13$ \\
$\Theta_ {\bar Q 1}^{({15})}(\frac32) \to N H_{\bar Q}$ & $-\frac{1}{\sqrt{3(N_c+1)}}t(\frac12,1)$ & $\frac13$ & $\frac16$ \\
$\to \Delta H_{\bar Q}$ & $ \sqrt{\frac{5}{6(N_c+1)}}t(\frac32,1)$ & $\frac{5}{12}$ & $\frac56$ \\
$\Theta_ {\bar Q 2}^{({15})}(\frac32) \to N H_{\bar Q}$ & $ \sqrt{\frac{5}{3(N_c+1)}}t(\frac12,1)$ & $\frac{5}{3}$ & $\frac56$ \\
$\to \Delta H_{\bar Q}$ & $ \frac{1}{\sqrt{6(N_c+1)}}t(\frac32,1)$ & $\frac{1}{12}$ & $\frac16$ \\
$\Theta_ {\bar Q 2}^{({15})}(\frac52) \to \Delta H_{\bar Q}$ & $-\sqrt{\frac{3}{2(N_c+1)}} t(\frac32,1)$ & $\frac12$ & $1$ \\
\hline
$\Theta_ {\bar Q 1}^{({15'})}(\frac12) \to \Delta H_{\bar Q}$ & $\frac{1}{\sqrt{2(N_c+1)}} t(\frac32,2)$ & $\frac52$ & $1$ \\
$\Theta_ {\bar Q 1}^{({15'})}(\frac32) \to \Delta H_{\bar Q}$ & $\frac{1}{\sqrt{10(N_c+1)}} t(\frac32,2)$ & $\frac14$ & $\frac1{10}$ \\
$ \Theta_ {\bar Q 2}^{({15'})}(\frac32) \to \Delta H_{\bar Q}$ &
$-\frac{3}{\sqrt{10(N_c+1)}} t(\frac32,2)$ & $\frac94$ & $\frac9{10}$\\
$\Theta_ {\bar Q 2}^{({15'})}(\frac52) \to \Delta H_{\bar Q}$ &
$-\frac{1}{\sqrt{10(N_c+1)}} t(\frac32,2)$ & $\frac16$ & $\frac1{15}$\\
$\Theta_ {\bar Q 3}^{({15'})}(\frac52) \to \Delta H_{\bar Q}$ &
$\sqrt{\frac{7}{5(N_c+1)}} t(\frac32,2)$ & $\frac73$ & $\frac{14}{15}$\\
\hline \hline
\end{tabular}
\end{center}
\end{table}
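Both sum rules can be verified by transcribing the $N_c \to \infty$ column of
Table~\ref{h5q} into a short script (the entries below are copied from the table;
the grouping is a sketch of the bookkeeping):

```python
# Check of the sum rules using the N_c -> infinity column of Table h5q.
# Keys: (SU(3) rep, J_l, J, final baryon); values: Gamma / p^3.
from fractions import Fraction as F
from collections import defaultdict

rates = {
    ("6bar", 1, F(1, 2), "N"):     F(1),
    ("6bar", 1, F(3, 2), "N"):     F(1),
    ("15",   0, F(1, 2), "N"):     F(1, 3),
    ("15",   0, F(1, 2), "Delta"): F(2, 3),
    ("15",   1, F(1, 2), "N"):     F(2, 3),
    ("15",   1, F(1, 2), "Delta"): F(1, 3),
    ("15",   1, F(3, 2), "N"):     F(1, 6),
    ("15",   1, F(3, 2), "Delta"): F(5, 6),
    ("15",   2, F(3, 2), "N"):     F(5, 6),
    ("15",   2, F(3, 2), "Delta"): F(1, 6),
    ("15",   2, F(5, 2), "Delta"): F(1),
    ("15p",  1, F(1, 2), "Delta"): F(1),
    ("15p",  1, F(3, 2), "Delta"): F(1, 10),
    ("15p",  2, F(3, 2), "Delta"): F(9, 10),
    ("15p",  2, F(5, 2), "Delta"): F(1, 15),
    ("15p",  3, F(5, 2), "Delta"): F(14, 15),
}

# Summing over J_l at fixed (rep, J, baryon), every group adds up to the
# corresponding 6bar width (= 1); this contains sum rule (sr2).
groups = defaultdict(F)
for (rep, Jl, J, baryon), rate in rates.items():
    groups[(rep, J, baryon)] += rate
assert all(total == 1 for total in groups.values())

# Sum rule (sr2) explicitly: 15-plet N H modes vs the 6bar width.
for J in (F(1, 2), F(3, 2)):
    assert groups[("15", J, "N")] == rates[("6bar", 1, J, "N")]
```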
Finally, we will also use heavy quark symmetry to make predictions for modes
containing a $H_{\bar Q}^*$ heavy meson in the final state. To that end,
we first describe the
heavy quark symmetry relations and then we combine them with the large $N_c$
predictions. For simplicity, we restrict ourselves to the $N H_{\bar Q}^{(*)}$ modes
in a $p$-wave.
The specification of the final state in
$\Theta_{{\bar Q}J_\ell} \to [N H^*_{\bar Q}]_{p\rm -wave}$
must include, in addition to the total angular momentum ${\vec J} = {\vec S}_N +
{\vec J}_{H^*_Q} + {\vec L}$, also the partial sum of two of the three angular
momenta. Here ${\vec S}_N$ denotes the spin of the nucleon, ${\vec J}_{H^*_Q}$
the spin of the vector heavy meson, and $ {\vec L}$ the orbital angular momentum.
The heavy quark symmetry relations take a simple form when this partial sum is chosen as
${\vec J}_N = {\vec S}_N + {\vec L}$. The decay amplitude for
$\Theta_{\bar Q}(IJ J_\ell) \to [N H^{(*)}_{\bar Q}(J'J'_\ell)]_{J_N}$
is given by \cite{IsWi}
\begin{eqnarray}
A(\Theta_{\bar Q}(IJJ_\ell) \to [N H^{(*)}_{\bar Q}(J'J'_\ell)]_{J_N}) =
\sqrt{[J_\ell][J]}
\left\{
\begin{array}{ccc}
J_\ell & J'_\ell & J_N \\
J' & J & \frac12 \\
\end{array}
\right\} F_{J_\ell J'_\ell J_N}^I \ ,
\end{eqnarray}
where, as usual, $J_\ell$ and $J'_\ell = 1/2$ denote the spins of
the light degrees of
freedom in the initial and final heavy hadrons. The $F_{J_\ell J'_\ell J_N}^I$
are reduced matrix elements, which in general also depend on $S_N$; this
dependence is omitted here for simplicity.
The predictions from these relations are shown in explicit
form in Table~\ref{HQStable}, from which one can read off the number of independent
hadronic amplitudes parameterizing each mode. The $\Theta_ {\bar Q 0}(\frac12)$ decays are parameterized
in terms of one reduced amplitude $f_0$, the decays of the $\Theta_ {\bar Q 1}(\frac12,\frac32)$
depend on two independent amplitudes $f_{1,2}$, and the decays of the $\Theta_ {\bar Q 2}(\frac32,\frac52)$
contain one independent amplitude $f_3$. From this counting it follows that heavy quark symmetry
does not relate, in general, all modes with pseudoscalar and vector heavy mesons.
\begin{table}
\begin{center}
\caption{\label{HQStable} Heavy quark symmetry predictions
for the reduced decay amplitudes
$\Theta_{{\bar Q}J_\ell} \to [N H_{\bar Q}^{(*)}]_{p\rm -wave}$ for final states
with given $\vec J_N = \vec S_N + \vec L$.
The zero entries denote
amplitudes forbidden in the heavy quark limit and suppressed
as $\sim O(\Lambda_{QCD}/m_Q)$. }
\begin{tabular}{ccc|ccc}
\hline \hline
Decay & $J_N=1/2$ & $J_N = 3/2$ & Decay & $J_N=1/2$ & $J_N = 3/2$ \\
\hline \hline
$\Theta_{{\bar Q}0}(\frac12) \to NH_{\bar Q}$ & $-\frac{1}{\sqrt{2}} f_0$ & $-$ &
$\Theta_{{\bar Q}0}(\frac12)\to NH^*_{\bar Q}$ & $ \frac{1}{\sqrt2}f_0$ & $0$ \\
\hline
$\Theta_{{\bar Q}1}(\frac12) \to NH_{\bar Q}$ & $\sqrt{\frac{3}{2}} f_1$ & $-$ &
$\Theta_{{\bar Q}1}(\frac12)\to NH^*_{\bar Q}$ & $\frac{1}{\sqrt6}f_1$ & $- \sqrt{\frac{2}{3}}f_2$ \\
$\Theta_{{\bar Q}1}(\frac32) \to NH_{\bar Q}$ & $-$ & $- \sqrt{\frac32} f_2$ &
$\Theta_{{\bar Q}1}(\frac32)\to NH^*_{\bar Q}$ & $- \frac{2}{\sqrt{3}} f_1$ & $\sqrt{\frac56}f_2$ \\
\hline
$\Theta_{{\bar Q}2}(\frac32) \to NH_{\bar Q}$ & $-$ & $\sqrt{\frac52} f_3$ &
$\Theta_{{\bar Q}2}(\frac32)\to NH^*_{\bar Q}$ & $0$ & ${\frac{1}{\sqrt2}}f_3$ \\
$\Theta_{{\bar Q}2}(\frac52) \to NH_{\bar Q}$ & $-$ & $-$ &
$\Theta_{{\bar Q}2}(\frac52)\to NH^*_{\bar Q}$ & $-$ & $-\sqrt{2}f_3$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
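The entries of Table~\ref{HQStable} follow directly from the $6j$ expression for the
decay amplitude given above. As an illustration, the following sketch (assuming
\texttt{sympy}) reproduces a few of them, in units of the corresponding reduced
amplitudes $f_i$:

```python
# Reproduce sample entries of Table HQStable from the heavy quark symmetry
# formula  A = sqrt([J_l][J]) {J_l J_l' J_N; J' J 1/2} F.
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_6j

half = Rational(1, 2)

def tri(a, b, c):
    # Angular momentum triangle condition (the sum must also be an integer).
    return abs(a - b) <= c <= a + b and int(2*(a + b + c)) % 2 == 0

def amp(Jl, J, Jlp, Jp, JN):
    # Amplitude in units of F^I_{Jl Jl' JN}; zero if any triad fails.
    triads = [(Jl, Jlp, JN), (Jl, J, half), (Jp, Jlp, half), (Jp, J, JN)]
    if not all(tri(*t) for t in triads):
        return 0
    return sqrt((2*Jl + 1)*(2*J + 1)) * wigner_6j(Jl, Jlp, JN, Jp, J, half)

# First row of the table: Theta_{Q0}(1/2) -> N H (J'=0) and N H* (J'=1)
assert simplify(amp(0, half, half, 0, half) + 1/sqrt(2)) == 0   # -f0/sqrt(2)
assert simplify(amp(0, half, half, 1, half) - 1/sqrt(2)) == 0   # +f0/sqrt(2)
assert amp(0, half, half, 1, Rational(3, 2)) == 0               # forbidden
# Theta_{Q1}(1/2) -> N H and Theta_{Q1}(3/2) -> N H
assert simplify(amp(1, half, half, 0, half) - sqrt(Rational(3, 2))) == 0
assert simplify(amp(1, Rational(3, 2), half, 0, Rational(3, 2)) + sqrt(Rational(3, 2))) == 0
```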
Such a relation becomes possible, however, in the large $N_c$ limit, where all modes with
a pseudoscalar heavy meson in the final state, $N H_{\bar Q}$, are related. In the language of
the reduced amplitudes in Table~\ref{HQStable}, this amounts to a relation among the
amplitudes $f_{0-3}^I$. These
predictions can be obtained by comparing the amplitudes
listed in Tables~\ref{h5q} and~\ref{HQStable}.
We find
\begin{eqnarray}
&& f_0^{I=1} \equiv F^{I=1}_{0\frac12\frac12} = -\sqrt{\frac{N_c+5}{N_c+1}} \ , \\
&& f_1^{I=0} \equiv F^{I=0}_{1\frac12\frac12} = {\frac{1}{\sqrt3}} \,, \quad
f_2^{I=0} \equiv F^{I=0}_{1\frac12\frac32} = \sqrt{\frac{2}{3}} \ , \\
&& f_1^{I=1} \equiv F^{I=1}_{1\frac12\frac12} = -\sqrt{\frac{2}{3}} \sqrt{\frac{N_c+5}{N_c+1}} \,, \quad
f_2^{I=1} \equiv F^{I=1}_{1\frac12\frac32} = \frac{1}{\sqrt3} \sqrt{\frac{N_c+5}{N_c+1}} \ , \\
&& f_3^{I=1} \equiv F^{I=1}_{2\frac12\frac32} = \sqrt{\frac{N_c+5}{N_c+1}} \ .
\end{eqnarray}
The corresponding predictions for the partial decay rates are shown in Table~\ref{h5qvsJM}.
For comparison, we also show in this table
the results found in Ref.~\cite{Jenkins:2004vb} for the ${\cal K}_\ell=0$ states.
\begin{table}
\begin{center}
\caption{\label{h5qvsJM} Combined large $N_c$ and heavy quark symmetry predictions
for the ratios of strong decay rates for heavy pentaquark decays
$\Theta_{\bar Q} \to N H_{\bar Q}^{(*)}$.
In the last line we show for comparison
also the corresponding predictions for the pentaquark states with positive parity
considered in Ref.~\cite{Jenkins:2004vb}. }
\begin{center}
\begin{tabular}{ccc}
\hline \hline
& \\[-0.12in]
{$I=0$} & $\Theta_{\bar Q} (\frac12) \to (N H_{\bar Q}) : (N H_{\bar Q}^*)$
& $\Theta_{\bar Q} (\frac32)\to (N H_{\bar Q}) : (N H_{\bar Q}^*) $ \\[0.08in]
\hline \hline
& \\[-0.12in]
${\cal K}_{\ell}=1$ & $\frac{1}{2} : \frac{3}{2} \ \ \ (J_\ell = 1)$
& $\frac{1}{2} : \frac{3}{2} \ \ \ (J_\ell = 1)$ \\[0.08in]
\hline
& \\[-0.12in]
${\cal K}_{\ell}=0$ & 1 : 3 \hspace{0.1cm} $(J_\ell = 0)$ & - \\[0.08in]
\hline \hline
\end{tabular}
\end{center}
\vspace{.8cm}
\begin{tabular}{ccc}
\hline \hline
& & \\[-0.12in]
{$I=1$} & $\Theta_{\bar Q} (\frac12)\to (N H_{\bar Q}) : (N H_{\bar Q}^*)$
& $\Theta_{\bar Q} (\frac32)\to (N H_{\bar Q}) : (N H_{\bar Q}^*) $ \\[0.08in]
\hline \hline
& & \\[-0.12in]
${\cal K}_{\ell}=1$ & $\frac16 : \frac12 \ \ \ (J_\ell = 0)$
& $\frac1{12} : \frac7{12} \ \ \ (J_\ell = 1)$ \\[0.08in]
& $\frac13 : \frac13 \ \ \ (J_\ell = 1)$
& $\frac5{12} : \frac14 \ \ \ (J_\ell = 2)$ \\[0.08in]
\hline
& & \\[-0.12in]
${\cal K}_{\ell}=0 $
& $1$ : $11$ \hspace{0.1cm} $(J_\ell = 1)$
& $4$ : $8$ \hspace{0.1cm} $(J_\ell = 1)$ \\[0.08in]
\hline \hline
\end{tabular}
\end{center}
\end{table}
\newpage
\section{Conclusions}
In this paper we studied the complete set of light and heavy pentaquark
states with positive parity at leading order in the $1/N_c$ expansion.
We discussed the structure of the mass spectrum and the strong decays
of these states. Both are strongly constrained by the contracted
spin-flavor $SU(4)_c$ symmetry emerging in the large $N_c$ limit,
leading to mass degeneracies and sum rules for their decay widths.
The exotic
states which are
composed of only light quarks $(\bar s q^{N_c+1})$
belong to two irreducible representations (towers) of this symmetry, with
${\cal K}=\frac{1}{2}$ (containing a $J^P=\frac12^+$ isosinglet), and
${\cal K}=\frac{3}{2}$ (containing a $J^P=\frac32^+$ isosinglet), respectively.
The strong transitions between any members of a pair of towers are related by the
contracted symmetry. The states with ${\cal K}=\frac{1}{2}$ are identical to those
considered in the large $N_c$ expansion in Ref.~\cite{Jenkins:2004vb}, and we
find complete agreement
with their $N_c=3$ predictions for these states. The more general results for arbitrary $N_c$
and the ${\cal K}=\frac{3}{2}$ tower
states are new.
Taking the antiquark to be a heavy quark, the two irreducible representations
with ${\cal K}=\frac12,\frac32$ are split only by $O(1/m_{Q})$ hyperfine interactions.
In the heavy quark limit they become degenerate and the spin of the light
degrees of freedom is a conserved quantum number. When this is combined with the
large $N_c$ limit a new good quantum number emerges: ${\cal K}_{\ell}$, the tower
label for the light quarks.
We find that the heavy pentaquarks with positive parity belong to one tower with
${\cal K}_\ell = 1$.
These are different from the heavy exotic states considered in
Ref.~\cite{Jenkins:2004vb}, which belong to ${\cal K}_\ell = 0$.
Both sets of states are legitimate from the point of view of the large $N_c$ symmetry,
although explicit realizations of these states are more natural in different models:
the ${\cal K}_\ell = 0$ states are obtained in a Skyrme model picture, while the
${\cal K}_\ell = 1$ states considered here appear naturally in the constituent quark
model picture. The predictions for the strong decays of the two sets of states differ, as
shown in Table~\ref{h5qvsJM}, and can be used to discriminate between them.
There is an important difference between our treatment of the transition operators
and that given in Ref.~\cite{Jenkins:2004vb}, due to the fact that we
keep the orbital angular momentum explicit.
In Sec.~\ref{Ncounting} we show, by explicit computation of the $\Theta \to B$
axial current matrix element in the quark model with $N_c$ colors, that
the strong width of the lowest-lying positive parity exotic state scales like $1/N_c$.
This provides a natural explanation for the existence of narrow exotic states
in the large $N_c$ limit.
\vspace{0.3cm}\noindent
{\em Acknowledgments:}
The work of D.P. was supported by the DOE under cooperative research
agreement DOE-FC02-94ER40818 and by the NSF under grant PHY-9970781.
The work of C.S. was supported in part by Fundaci\'on Antorchas, Argentina
and Fundaci\'on S\'eneca, Murcia, Spain.